ROBOTICS

(As per New Syllabus of Leading Universities in India)

Dr. S. Ramachandran, M.E., Ph.D., Dr. S. Benjamin Lazarus, M.E., Ph.D.,


Prof & Head – Mechanical
Engineering
The Kavery Engineering College
Salem – 636453

Ms. P. Vijayalakshmi
Asst. Professor – Mech.
Sri Ramanujar Engineering College
Chennai.

AIR WALK PUBLICATIONS


(Near All India Radio)
80, Karneeshwarar Koil Street,
Mylapore, Chennai – 600 004.
Ph.: 2466 1909, 94440 81904
Email: aishram2000@gmail.com,
airwalk800@gmail.com
www.airwalkpublications.com
First Edition: 2017

© All Rights Reserved by the Publisher

This book or part thereof should not be reproduced in any form


without the written permission of the publisher.

ISBN : 978-93-84893-69-9

Books will be door delivered after payment into AIR WALK PUBLICATIONS
A/c No. 801630110000031 (IFSC: BKID0008016), Bank of India,
Santhome branch, Mylapore, Chennai – 4

(or)

S. Ramachandran, A/c No. 482894441 (IFSC: IDIB000S201), Indian Bank,
Sathyabama University Branch, Chennai − 600 119.

Printed by:

Typeset by: Akshayaa DTP, Chennai − 600 089, Ph: 9551908934.


ROBOTICS-BPUT − ODISHA

B.Tech (Mechanical Engineering) Syllabus for Admission


Batch 2015 − 2016 − V Syllabus − 7th Semester
ROBOTICS
(PROFESSIONAL ELECTIVE)
MODULE – I
1. Fundamentals of Robotics: Evolution of robots and robotics, Definition of
industrial robot, Laws of Robotics, Classification, Robot Anatomy, Work
volume and work envelope, Human arm characteristics, Design and control
issues, Manipulation and control, Resolution; accuracy and repeatability, Robot
configuration, Economic and social issues, Present and future application.
2. Mathematical modeling of a robot: Mapping between frames, Description
of objects in space, Transformation of vectors. Direct Kinematic model:
Mechanical Structure and notations, Description of links and joints, Kinematic
modeling of the manipulator, Denavit-Hartenberg Notation, Kinematic
relationship between adjacent links, Manipulator Transformation matrix.
MODULE – II
3. Inverse Kinematics: Manipulator workspace, Solvability of inverse kinematic
model, Manipulator Jacobian, Jacobian inverse, Jacobian singularity, Static
analysis.
4. Dynamic modeling: Lagrangian mechanics, 2D- Dynamic model,
Lagrange-Euler formulation, Newton-Euler formulation.
5. Robot Sensors: Internal and external sensors, force sensors, Thermocouples,
Performance characteristic of a robot.
MODULE – III
6. Robot Actuators: Hydraulic and pneumatic actuators, Electrical actuators,
Brushless permanent magnet DC motor, Servomotor, Stepper motor, Micro
actuator, Micro gripper, Micro motor, Drive selection.
7. Trajectory Planning: Definition and planning tasks, Joint space planning,
Cartesian space planning.
8. Applications of Robotics: Capabilities of robots, Material handling, Machine
loading and unloading, Robot assembly, Inspection, Welding, Obstacle
avoidance.
ME464 Robotics and Automation – KERALA
Syllabus: Definition, Co-ordinate Systems, Work Envelope, types and
classification, Robot drive systems, End Effectors, Grippers, Sensors and
machine vision, Robot kinematics and robot programming, Application of
robots in machining.
Module I: Definition − Co-ordinate Systems, Work Envelope, types and
classification − Specifications − Pitch, Yaw, Roll, Joint Notations, Speed of
Motion, Pay Load − Basic robot motions − Point to point control, Continuous
path control. Robot Parts and Their Functions − Need for Robots − Different
Applications.
Module II: Robot drive systems: Pneumatic Drives − Hydraulic Drives −
Mechanical Drives − Electrical Drives − D.C. Servo Motors, Stepper Motor,
A.C. Servo Motors − Salient Features, Applications and Comparison of all
these Drives.
Module III: End Effectors − Grippers − Mechanical Grippers, Pneumatic and
Hydraulic Grippers, Magnetic Grippers, Vacuum Grippers; Two Fingered and
Three Fingered Grippers; Internal Grippers and External Grippers; Selection
and Design Consideration
Module IV: Sensors and machine vision: Requirements of a sensor, Principles
and Applications of the following types of sensors − Position Sensors (Piezo
Electric Sensor, LVDT, Resolvers, Optical Encoders), Range Sensors
(Triangulation Principle, Structured Lighting Approach, Laser Range Meters).
Module V: Proximity Sensors (Inductive, Capacitive, and Ultrasonic), Touch
Sensors, (Binary Sensors, Analog Sensors), Wrist Sensors, Compliance Sensors,
Slip Sensors. Camera, Frame Grabber, Sensing and Digitizing Image Data –
Signal Conversion, Image Storage, Lighting Techniques.
Robot kinematics and robot programming: Forward Kinematics, Inverse
Kinematics and Differences; Forward Kinematics and Reverse Kinematics of
Manipulators with Two Degrees of Freedom (In 2 Dimensional) − Deviations
and Problems.
Module VI: Teach Pendant Programming, Lead through programming, Robot
programming Languages − VAL Programming − Motion Commands, Sensor
Commands, End effector commands, and Simple programs. Industrial
Applications: Application of robots in machining, welding, assembly, and
material handling.
*********
Dr. A.P.J ABDUL KALAM TECHNICAL UNIVERSITY −
LUCKNOW UTTAR PRADESH
NME-044: AUTOMATION AND ROBOTICS
UNIT – I AUTOMATION: Definition, Advantages, goals, types, need, laws
and principles of Automation. Elements of Automation. Fluid power and its
elements, application of fluid power, Pneumatics vs. Hydraulics, benefit and
limitations of pneumatics and hydraulics systems, Role of Robotics in Industrial
Automation.
UNIT – II Manufacturing Automation: Classification and type of automatic
transfer machines; Automation in part handling and feeding, Analysis of
automated flow lines, design of single model, multimodel and mixed model
production lines. Programmable Manufacturing Automation CNC machine
tools, Machining centers, Programmable robots, Robot time estimation in
manufacturing operations.
UNIT – III ROBOTICS: Definition, Classification of Robots − Geometric
classification and Control classification, Laws of Robotics, Robot Components,
Coordinate Systems, Power Source. Robot anatomy, configuration of robots,
joint notation schemes, work volume, manipulator kinematics, position
representation, forward and reverse transformations, homogeneous
transformations in robot kinematics, D-H notations, kinematics equations,
introduction to robot arm dynamics.
UNIT – IV ROBOT DRIVES AND POWER TRANSMISSION SYSTEMS:
Robot drive mechanisms: Hydraulic / Electric / Pneumatics, servo & stepper
motor drives, Mechanical transmission method: Gear transmission, Belt drives,
Rollers, chains, Links, Linear-to-Rotary motion conversion, Rotary-to-Linear
motion conversion, Rack and Pinion drives, Lead screws, Ball Bearings.
ROBOT END EFFECTORS Classification of End effectors − active and
passive grippers, Tools as end effectors, Drive system for grippers. Mechanical,
vacuum and magnetic grippers. Gripper force analysis and gripper design.
UNIT – V ROBOT SIMULATION: Methods of robot programming,
Simulation concept, Off-line programming, advantages of offline programming.
ROBOT APPLICATIONS Robot applications in manufacturing − Material
transfer and machine loading/unloading, Processing operations like Welding &
painting, Assembly operations, Inspection automation, Limitation of usage of
robots in processing operation. Robot cell design and control, Robot cell
layouts − Multiple robots & Machine interference.
*********
ROBOTICS
JNTU – ANDHRA & TELANGANA
III YEAR – II SEMESTER
UNIT – I INTRODUCTION: Automation and Robotics, CAD/CAM and
Robotics − An over view of Robotics − present and future applications −
classification by coordinate system and control system.
UNIT – II COMPONENTS OF THE INDUSTRIAL ROBOTICS: Function
line diagram representation of robot arms, common types of arms. Components,
Architecture, number of degrees of freedom − Requirements and challenges
of end effectors, determination of the end effectors, comparison of Electric,
Hydraulic and Pneumatic types of locomotion devices.
UNIT – III MOTION ANALYSIS: Homogeneous transformations as
applicable to rotation and translation − problems.
MANIPULATOR KINEMATICS: Specifications of matrices, D-H notation
joint coordinates and world coordinates Forward and inverse kinematics −
problems.
UNIT – IV Differential transformation and manipulators, Jacobians − problems
Dynamics: Lagrange − Euler and Newton − Euler formulations − Problems.
UNIT – V General considerations in path description and generation.
Trajectory planning and avoidance of obstacles, path planning, Skew motion,
joint integrated motion − straight line motion − Robot programming, languages
and software packages − description of paths with a robot programming
language.
UNIT – VI ROBOT ACTUATORS AND FEED BACK COMPONENTS:
Actuators: Pneumatic, Hydraulic actuators, electric & stepper motors.
Feedback components: position sensors − potentiometers, resolvers, encoders
− Velocity sensors.
ROBOT APPLICATIONS IN MANUFACTURING: Material Transfer −
Material handling, loading and unloading − Processing − spot and continuous
arc welding & spray painting − Assembly and Inspection.

*********
CONTENTS

CHAPTER – 1: FUNDAMENTALS OF ROBOT 1.1 – 1.40

1.1. Introduction 1.1

1.2. Industrial robot 1.3

1.3. Robot 1.3

1.4. Laws of robotics 1.4

1.4.1. History of robotics 1.4

1.5. Robot anatomy 1.6

1.5.1. Degrees of freedom 1.7

1.5.2. Robot motions 1.7

1.5.3. Robot Joints 1.9

1.6. Co-ordinate system 1.12

1.6.1. Polar Co-ordinate system 1.13

1.6.2. Cylindrical co-ordinate system 1.14

1.6.3. Cartesian co-ordinate system 1.16

1.6.4. Jointed arm co-ordinate system 1.17

1.7. Work envelope 1.19

1.7.1. Cartesian co-ordinate work envelope 1.19

1.7.2. Cylindrical co-ordinate work envelope 1.20

1.7.3. Polar co-ordinate work envelope 1.21

1.7.4. Jointed arm work envelope 1.22


C2 Robotics – www.airwalkpublications.com

1.8. Types of robot 1.22

1.8.1. Types of industrial robot 1.23

1.8.2. Based on physical configuration 1.23

1.8.3. Based on control system 1.23

1.8.4. Based on movement 1.23

1.8.5. Based on types of drive 1.24

1.8.6. Based on sensory systems 1.24

1.8.7. Degrees of freedom 1.24

1.8.8. Based on Application 1.24

1.8.9. Based on path control 1.24

1.9. Robot specification 1.25

1.9.1. Spatial resolution 1.26

1.9.2. Accuracy 1.26

1.9.3. Repeatability 1.27

1.9.4. Compliance 1.28

1.9.5. Three degree of freedom wrist assembly 1.28

1.9.6. Joint notation scheme 1.29

1.9.7. Speed of motion 1.30


1.9.8. Pay load 1.32
1.10. Robot parts and their functions 1.32

1.10.1. Power source 1.33


1.10.2. Controller 1.34

1.10.3. Manipulator 1.34

1.10.4. End effector 1.35

1.10.5. Actuator 1.36

1.10.6. Sensors 1.36

1.11. Benefits of robot 1.36

1.12. Need for robot 1.37

1.13. Manufacturing applications of robot 1.37

1.13.1. Material handling 1.37

1.13.2. Machine loading / unloading 1.38

1.13.3. Spray painting 1.38

1.13.4. Welding 1.38

1.13.5. Machining 1.38

1.13.6. Assembly 1.39

1.13.7. Inspection 1.39

1.14. Non manufacturing robotic applications 1.39

1.14.1. Hazardous environment 1.39

1.14.2. Medical 1.40

1.14.3. Distribution 1.40

1.14.4. Others 1.40

1.15. The future of robotics 1.40



CHAPTER – 2: ROBOT DRIVE SYSTEMS AND END EFFECTORS 2.1 – 2.48

2.1. Introduction 2.1

2.2. Actuators 2.1

2.3. Factors considered for selecting drive system 2.2

2.4. Types of actuators or drives 2.2

2.4.1. Pneumatic power drives 2.3

2.4.2. Hydraulic drives 2.5

2.4.3. Electrical drives 2.8

2.4.4. Types of electrical drives 2.9

2.5. DC Servomotor 2.10

2.6. Types of D.C. motors 2.12

2.6.1. Permanent magnet D.C motor 2.12

2.6.2. Brushless permanent magnet D.C motors 2.14

2.7. A.C. motors 2.15

2.7.1. Comparison between A.C motor and D.C motor 2.17

2.8. Stepper motor 2.17

2.8.1. Variable reluctance stepper motor 2.18

2.8.2. Permanent magnet stepper motor 2.20

2.8.3. Hybrid stepper motor 2.21

2.9. Selection of motors 2.22

2.10. Comparison of pneumatic, hydraulic and electrical drives 2.23

2.11. End-effectors 2.24



2.12. Grippers 2.25

2.13. Classification of grippers 2.26

2.14. Drive system for grippers 2.26

2.15. Mechanical grippers 2.27

2.15.1. Mechanical gripper mechanism 2.28

2.15.2. Types of mechanical gripper 2.29

2.15.3. Mechanical gripper with 3 fingers 2.34

2.15.4. Multifingered gripper 2.35

2.15.5. Internal gripper 2.36

2.15.6. External grippers 2.37

2.16. Magnetic gripper 2.38

2.16.1. Electromagnetic gripper 2.38

2.16.2. Permanent magnetic gripper 2.40

2.17. Vacuum grippers 2.41

2.18. Adhesive grippers 2.42

2.19. Hooks, scoops and other miscellaneous devices 2.43

2.20. Selection and design considerations of gripper 2.45

CHAPTER – 3: SENSORS AND MACHINE VISION 3.1 – 3.74

3.1. Sensors 3.1

3.2. Requirements of sensors 3.2

3.3. Classification of sensors 3.5

3.4. Position sensors 3.7



3.4.1. Encoder 3.7

3.4.2. Linear Variable Differential Transformer (LVDT) 3.12

3.4.3. Resolver 3.16

3.4.4. Potentiometer 3.17

3.4.5. Pneumatic position sensor 3.19

3.4.6. Optical encoder 3.20

3.5. Velocity sensor 3.21

3.5.1. Tachometer 3.21

3.5.2. Hall-Effect sensor 3.22

3.6. Acceleration sensors 3.23

3.7. Force sensors 3.24

3.7.1. Strain gauge 3.25

3.7.2. Piezoelectric sensor 3.26

3.7.3. Microswitches 3.27

3.8. External sensors 3.28

3.8.1. Contact type 3.28

3.8.2. Non Contact type 3.35

3.9. Acquisition of images 3.48

3.9.1. Vidicon camera (analog camera) 3.49

3.9.2. Digital camera 3.50

3.10. Machine vision 3.51



3.11. Sensing and digitizing function in machine vision 3.53

3.11.1. Imaging devices 3.53

3.11.2. Lighting techniques 3.55

3.11.3. Analog-to-digital conversion 3.58

3.11.4. Image storage 3.62

3.12. Image processing and analysis 3.62

3.12.1. Image data reduction 3.63

3.12.2. Segmentation 3.64

3.12.3. Feature extraction 3.68

3.12.4. Object recognition 3.69

3.13. Other algorithms 3.71

3.14. Robotic applications 3.71

3.14.1. Inspection 3.72

3.14.2. Identification 3.73

3.14.3. Visual servoing and navigation 3.73

3.14.4. Bin picking 3.74

CHAPTER – 4: ROBOT KINEMATICS 4.1 – 4.60

4.1. Introduction 4.1

4.2. Forward kinematics and reverse (inverse) kinematics 4.3

4.3. Forward kinematics of manipulators with 2 DOF in 2D 4.5

4.4. Reverse kinematics of manipulators with 2 DOF in 2D 4.6



4.5. Forward kinematics of manipulators with 3 DOF in 2D 4.7

4.6. Forward and reverse transformation of manipulator with 4 DOF in 3-D 4.11

4.7. Homogeneous transformations 4.13

4.7.1. Translation matrix 4.14

4.7.2. Rotational matrix 4.16

4.8. Jacobians 4.31

4.8.1. Differential relationship 4.31

4.9. Singularities 4.33

4.10. Static forces in manipulators 4.35

4.11. Jacobians in the force domain 4.37

4.12. Manipulator dynamics 4.38

4.12.1. Newton-Euler formulation of equations of motion 4.39

4.12.2. Newton’s equation in simple format 4.40

4.12.3. Euler’s equation in simple format 4.41

4.12.4. The force and torque acting on a link 4.41

4.13. Lagrangian formulation of manipulator dynamics 4.42

4.14. Manipulator kinematics 4.43

4.14.1. Link description 4.43

4.14.2. Link connection 4.45

4.14.3. First and last links in the chain 4.46



4.15. Link parameters – Denavit-Hartenberg notation 4.46

4.15.1. Convention for affixing frames to links 4.46

4.15.2. Intermediate links in the chain 4.46

4.15.3. First and last links in the chain 4.47

4.15.4. Summary of the link parameters in terms of the 4.47


link frames

4.15.5. Summary of link-frame attachment procedure 4.48

4.16. Manipulator kinematics 4.52

4.16.1. Derivation of link transformations 4.52

4.16.2. Concatenating link transformation 4.55

4.17. The PUMA 560 4.55

CHAPTER – 5: IMPLEMENTATION AND ROBOT ECONOMICS 5.1 – 5.48

5.1. Rail guided vehicle (RGV) 5.1


5.2. Automated guided vehicle system (AGVS) 5.5
5.2.1. Components of AGV 5.6
5.2.2. Advantages of using an AGV 5.7
5.2.3. Applications of AGV 5.8
5.3. Vehicle guidance technologies 5.15
5.4. Steering control 5.18
5.4.1. Path decision 5.19
5.5. Vehicle management and safety 5.20
5.6. Implementation of robots in industries 5.23

5.7. Safety considerations for robot operations 5.29


5.7.1. Installation precautions and workplace design considerations 5.31
5.7.2. Safety monitoring 5.34
5.7.3. Other safety precautions 5.36
5.8. Economic analysis of robots 5.36
5.8.1. Methods of economic analysis 5.39
Interest Tables 5.43

CHAPTER – 6: ROBOT PROGRAMMING 6.1 – 6.32

6.1. Introduction 6.1

6.2. Methods of robot programming 6.1

6.2.1. Leadthrough programming 6.2

6.2.2. Textual or computer like programming 6.5

6.2.3. Off-line programming 6.6

6.3. Defining a robot program 6.7

6.4. Method of defining position in space 6.8

6.5. Motion interpolation 6.10

6.6. Basic programming commands in workcell control (wait, signal and delay commands) 6.15

6.7. Branching 6.17

6.8. Robot programming languages / textual programming 6.17

6.8.1. First generation languages 6.18

6.8.2. Second generation languages 6.18



6.8.3. Future generation languages 6.19

6.9. Structure of robot language 6.20

6.9.1. Operating system 6.21

6.9.2. Elements and functions of a robot language 6.22

6.10. VAL programming 6.23

6.10.1. Robot locations 6.23

6.10.2. Motion commands 6.24

6.10.3. End effector commands 6.27

6.10.4. Sensor and interlock commands 6.29

CHAPTER – 7: TRAJECTORY GENERATION 7.1 – 7.10

7.1. Introduction 7.1


7.2. Joint-space schemes 7.2
7.2.1. Cubic polynomials 7.2
7.2.2. Cubic polynomials for a path with via points 7.7
7.3. Path generation at run time 7.9
7.4. Description of paths with a robot programming language 7.9
7.5. Collision-free path planning 7.10

CHAPTER – 8: MANIPULATOR MECHANISM DESIGN 8.1 – 8.16

8.1. Introduction 8.1

8.2. Basing the design on task requirements 8.2

8.2.1. Number of degrees of freedom 8.2

8.2.2. Workspace 8.3



8.2.3. Load capacity 8.3

8.2.4. Speed 8.3

8.2.5. Repeatability and accuracy 8.3

8.3. Kinematic configuration 8.3

8.3.1. Cartesian 8.4

8.3.2. Articulated 8.5

8.3.3. Spherical 8.6

8.3.4. Cylindrical 8.6

8.4. Wrists 8.7

8.5. Actuation schemes 8.9

8.5.1. Actuator location 8.9

8.5.2. Reduction and transmission systems 8.10

8.6. Stiffness and deflections 8.12

8.7. Actuators 8.12

8.8. Position sensing 8.14

8.9. Force sensing 8.15

Short Questions and Answers SQA 1 – SQA 44

Index I.1 – I.4


CHAPTER – 1

FUNDAMENTALS OF ROBOT

Robot − Definition − Robot Anatomy − Co-ordinate systems, Work Envelope,
Types and classification − Specifications − Pitch, Yaw, Roll, Joint Notations,
Speed of motion, Pay load − Robot Parts and their functions − Need for
Robots − Different applications.

1.1. INTRODUCTION:

Today’s changes in every aspect of life and global activity are not
independent of one another. The field of robotics has its origin in science
fiction.

Today, robots are highly automated mechanical manipulators controlled by
computers. Let us begin this chapter with the fundamentals of robotics and
industrial automation.

Robotics:

Robotics is a form of industrial automation.

Robotics is the science of designing and building robots suitable for


real-life applications in automated manufacturing and non manufacturing
environment.

Industrial automation:

In the industrial context, automation is the technology concerned with
the use of mechanical, electronic and computer-based systems in the operation
and control of production.

The three basic classifications of industrial automation are:

(i) Fixed automation


(ii) Programmable automation
(iii) Flexible automation.

(i) Fixed automation:

When the volume of production is very high, fixed automation is
implemented.

Eg: Mainly finds its application in the automobile industry, where the product
needs to be transferred to a number of workstations.

(ii) Programmable automation:

When the volume of production is low, programmable automation is
implemented.

In this automation, the instructions are specified by a ‘program’.

In this automation process, the program is read into the production
equipment, and the equipment performs the series of operations for the
particular product.

(iii) Flexible automation:

It is a flexible manufacturing system, which is nothing but a
computer-integrated manufacturing system.

A flexible automation system consists of a series of workstations that are
interconnected by a material handling and storage system. “Of the three types
of automation, robotics coincides most closely with programmable
automation”.

1.2. INDUSTRIAL ROBOT:

An industrial robot is a reprogrammable multifunctional manipulator
designed to move materials, parts, tools or special devices through variable
programmed motions for the performance of a variety of tasks.

An industrial robot is a general purpose, programmable machine which
possesses certain human-like characteristics.

1.3. ROBOT:

The term ‘robot’ was derived from the English translation of a fantasy
play written in Czechoslovakia around 1920.

‘Robota’ is the Czech word for forced labour − a slave or mechanical
worker that would help its master.

A robot carries out tasks otherwise done by a human being.

A robot may do assembly work where some sort of intelligence or
decision-making capability is expected.

Various Definitions of Robot:

The Robotics Industries Association, in November 1979, defined a robot as
“a re-programmable multifunctional manipulator designed to move material,
parts, tools or specialized devices through various programmed motions for the
performance of a variety of tasks”.
This definition indicates that a robot is a manipulator that is
re-programmable and multifunctional.
The reprogrammability has got its meaning only when a computer or a
microprocessor is interfaced with it.
It can perform various activities, sometimes it can use end effectors to
move raw materials for further processing.
Webster’s defined robot as “an automatic device that performs functions
ordinarily ascribed to human beings”.

1.4. LAWS OF ROBOTICS:

Law 1:

A robot may not injure a human being, or, through inaction, allow a
human to be harmed.
Law 2:

A robot must obey orders given by humans except when they conflict
with the first law.
Law 3:

A robot must protect its own existence unless that conflicts with the first
or second law.
1.4.1. History of Robotics:

Table 1.1 summarizes the historical developments in the technology
of robotics.

Table 1.1: Historical Development

Year Inventor Development

1700’s J. de Vaucanson • Machine dolls that played music.
1801 J. Jacquard • Programmable machine for weaving threads for cloth.
1805 H. Maillardet • Mechanical doll capable of drawing pictures.
1946 G.C. Devol • Developed a controller device that could record electrical signals magnetically and play them back.
1951 Goertz & Bergsland • Development of teleoperators.
1952 Massachusetts Institute of Technology • Prototype of numerical control machine.
1954 C.W. Kenward • Got patent for robot design.
1954 G.C. Devol • “Programmed article transfer”.
1959 Planet Corporation • First commercial robot based on limit switches and cams.
1960 G.C. Devol • Hydraulic drive robot.
1961 General Motors • Unimate robot for die-casting machine.
1966 Trallfa • Built and installed a spray painting robot.
1968 Stanford Research Institute (SRI) • A mobile robot with sensors, including a vision camera and touch sensors, that could move about the floor.
1971 Stanford University • A small electrically powered robot arm.
1974 ASEA • Electric drive IRb6 robot.
1974 Cincinnati Milacron • T3 robot with computer control.
1975 Olivetti • Robot for assembly operation.
1978 General Motors / Unimation • Programmable Universal Machine for Assembly (PUMA), built by Unimation from a General Motors study.
1979 Yamanashi University, Japan • SCARA type robot for assembly.
1980 Rhode Island University • Bin-picking robot.
1981 Carnegie-Mellon University • A direct drive robot.
1989 MIT • Genghis, a walking robot.
1995 SRI, IBM, MIT • A surgical robot.
2000 Honda • A humanoid robot walking like a human being.
2005 Cornell University • A self-replicating robot; fish robot for navigation.

1.5. ROBOT ANATOMY:

A system is an integration of parts or subsystems.

A robot is a system as it combines many sub-systems that interact among


themselves as well as with the environment in which the robot works.

A robot anatomy is concerned with the physical construction of the body,


arm, and wrist of the machine.

The basic anatomy of robot is shown in the Figure 1.1.

Fig. 1.1: Robot Anatomy

A robot has many components which include:

1. A base-fixed or mobile.

2. A manipulator arm with several degrees of freedom (DOF).

3. An end-effector or gripper holding a part.

4. Drives or actuators causing the manipulator arm or end-effector to


move in a space.

5. Controller with hardware and software support for giving commands


to the drives.

6. Sensors to feed back the information for subsequent actions of the


arm or gripper as well as to interact with the environment in which
robot is working.

7. Interfaces connecting the robotic subsystems to the external world.

Explanation:

The body is attached to the base and the arm assembly is attached to
the body.

At the end of the arm is the wrist.

The wrist consists of a number of components that allow it to be oriented


in a variety of positions.

The body, arm, and wrist assembly is sometimes called the manipulator.

Attached to the robot’s wrist is the hand. The technical name of the hand
is “end effector”.

The arm and body joints of the manipulator are used to position the end
effector, and the wrist joints of the manipulator are used to orient the end
effector.

1.5.1. Degrees of freedom:

The individual joint motions associated with these two categories are
sometimes referred to as “degrees of freedom”.

A typical industrial robot is equipped with 4 to 6 degrees of freedom.

1.5.2. Robot Motions:

Industrial robots are designed to perform productive work. The work is


accomplished by enabling the robot to move its body, arm and wrist through
a series of motions.

Generally, robot motion is described by the LERT classification system,



where,

L → Linear motion

E → Extension motion

R → Rotational motion

T → Twisting motion.

1. Linear Motion:
Linear motion is obtained when one part slides along the outside of
another part, as in a rack and pinion system.

Fig. 1.2

2. Extension Motion:
Extension motion is obtained where one part of the system comes out
from the other part of the same system.

Fig. 1.3

3. Rotational Motion:

Rotational motion is obtained when one part of the system moves in a
circular path about a pivot point rather than about its own centre.

Fig. 1.4

4. Twisting Motion:
Twisting motion is obtained when a part of the system rotates about
its own centre, twisting and untwisting.
Eg: the neck of the human body.

Fig. 1.5

1.5.3. Robot Joints:

The robot’s motions are accomplished by means of powered joints.

The joints used in the design of industrial robots typically involve a


relative motion of the adjoining links that is either linear or rotational.

The common four types of joints are:

1. Linear (L)

2. Rotational (R)

3. Twisting (T)

4. Revolving (V)

Linear Joints:
Linear Joint involves a sliding or translational motion of the connecting
links.

This motion can be achieved in a number of ways like a piston, a


telescoping mechanism, and relative motion along a linear track or rail.

Linear Joint is also known as prismatic joint or sliding joint.

The example of Linear Joint is shown in Fig. 1.6.

Fig. 1.6: Linear Joint (L)

Rotational Joints (R):

The three Rotating Joints are

1. Rotational (R)

2. Twisting (T)

3. Revolving (V)

In Rotational Joint (R), the axis of rotation is perpendicular to the axes


of the two connecting links.

The Example of Rotational joint is shown in Fig. 1.7.

Fig. 1.7: Rotational Joint (R)

Twisting Joint (T):

Twisting is the second type of rotating joint.

In this joint, a twisting motion occurs between the input and output links.

The axis of rotation of the twisting joint is parallel to the axes of the
two connecting links.

(i.e.) Parallel to both links.

The example of Twisting Joint is shown in Fig. 1.8.

Fig. 1.8: Twisting Joint



Revolving Joint:

Revolving Joint is the third type of rotating Joint.

In this joint, the input link is parallel to the axis of rotation and output
link is perpendicular to the axis of rotation.

(i.e.) Output link revolves about input link.

The example of Revolving Joint is shown in Fig. 1.9.

Fig. 1.9: Revolving Joint
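The four joint symbols above (L, R, T, V) lend themselves to a small lookup
table, so a joint-notation string can be expanded into plain descriptions. A
minimal sketch in Python; the `describe_chain` helper and the example notation
“TRL” are illustrative, not from the book:

```python
# Sketch: encode the four basic joint types and expand a hypothetical
# joint-notation string such as "TRL" into readable descriptions.
JOINT_TYPES = {
    "L": "linear (prismatic/sliding) joint",
    "R": "rotational joint (axis perpendicular to both connecting links)",
    "T": "twisting joint (axis parallel to both connecting links)",
    "V": "revolving joint (output link revolves about the input link)",
}

def describe_chain(notation):
    """Return a description for each joint symbol in the chain."""
    for symbol in notation:
        if symbol not in JOINT_TYPES:
            raise ValueError(f"unknown joint symbol: {symbol}")
    return [f"{s}: {JOINT_TYPES[s]}" for s in notation]

for line in describe_chain("TRL"):
    print(line)
```

Such a check catches typos in configuration strings before they reach any
kinematic bookkeeping.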

1.6. CO-ORDINATE SYSTEM:

Industrial robots are available in a wide variety of sizes, shapes and
physical configurations.
There are some major co-ordinate systems based on which robots are
generally specified.
The common robot co-ordinate designs are:
1. Polar co-ordinate system.

2. Cylindrical co-ordinate system.



3. Cartesian co-ordinate system.


4. Jointed − arm configuration or co-ordinate system.

1.6.1. Polar Co-ordinate system:

In this system, the robot has one linear and two angular motions.

1. The Linear motion corresponds to a radial in and out


translation (1)
2. The one angular motion corresponds to a base rotation about
vertical axis (2)
3. The second angular motion is the one that rotates about an axis
perpendicular to the vertical through the base (3)

★ Polar co-ordinates are also referred to as spherical co-ordinates


★ The polar configuration is illustrated in Fig. 1.10.

Fig. 1.10: Polar Co-ordinate



Advantages:

1. Simpler and smaller in design.

2. Easily applicable for commercial purpose.

3. Less space is enough for its installation.

4. High capability.

Applications:

1. Used for machine loading and unloading operation.

2. Wafer-etching application in the electronics industry.

3. Forging.

4. Injection moulding.
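The three polar motions fix the wrist-end position in space. The conversion
from polar (spherical) joint values to a cartesian position can be sketched as
below, assuming the elevation angle is measured from the horizontal plane
(conventions vary between robots, so the function and its arguments are
illustrative):

```python
import math

def polar_to_cartesian(r, theta, phi):
    """Wrist-end position of a polar (spherical) arm.

    r     : radial in-and-out extension of the arm
    theta : base rotation about the vertical axis (rad)
    phi   : elevation about the horizontal axis, measured from the
            horizontal plane (rad)
    """
    x = r * math.cos(phi) * math.cos(theta)
    y = r * math.cos(phi) * math.sin(theta)
    z = r * math.sin(phi)
    return x, y, z

# Arm extended 2 m, base turned 90 degrees, no elevation:
print(polar_to_cartesian(2.0, math.pi / 2, 0.0))
```

With the base turned a quarter circle and no elevation, the wrist ends up 2 m
along the y-axis, as expected.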

1.6.2. Cylindrical co-ordinate system:

★ In this system, there are two linear motions and one rotational motion.

★ Work envelope is cylindrical.

★ The two linear motions consist of the vertical up-and-down motion of the
column (1) and the in-and-out sliding motion of the arm (2).

★ The vertical column is capable of rotating.

★ The manipulator is capable of reaching any point in a cylindrical volume
of space.

★ The cylindrical coordinate illustration is shown in Fig. 1.11.

Fig. 1.11

Advantages:

★ Good accuracy, High capability.


★ Large work envelope, High load carrying capacity.
★ Suitable for pick and place operation.
★ High accuracy and rigidity.
Disadvantages:

★ The robot cannot rotate through a complete circle; its reach is limited
to the space bounded between two cylinders.

Applications:

★ Used in material handling (mainly for pick and place operation).


★ Used in machine loading and unloading.
★ Used in conveyor parallel transfer.
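The cylindrical arrangement maps directly onto cartesian position: base
rotation and radial reach fix x and y, and the column height fixes z. A
minimal sketch (function name and numerical values are illustrative):

```python
import math

def cylindrical_to_cartesian(r, theta, z):
    """Wrist-end position of a cylindrical arm.

    r     : radial in-and-out reach of the sliding arm
    theta : rotation of the vertical column (rad)
    z     : height of the arm along the column
    """
    return r * math.cos(theta), r * math.sin(theta), z

# Arm reaching 1.5 m, column turned half a circle, 0.8 m up the column:
print(cylindrical_to_cartesian(1.5, math.pi, 0.8))
```

The vertical co-ordinate passes through untouched, which is exactly why the
work envelope of this configuration is a cylinder.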

1.6.3. Cartesian co-ordinate system:

★ In this co-ordinate system, three linear motions x, y, z exist.

★ X-co-ordinate axis represents left and right motion.

★ Y-co-ordinate axis represents forward and backward motion.

★ Z-co-ordinate axis represents up and down motions.

★ Motion in any co-ordinate is independent of other two motions.


★ The manipulator can reach any point within a rectangular (cuboid)
volume of space.

★ Robots with the cartesian co-ordinate system are called rectilinear
or gantry robots.

★ Cartesian co-ordinate illustration is shown in Fig. 1.12.


★ The DOF (Degree of Freedom) is 3 since it has 3 motions.

Fig. 1.12: Cartesian Co-ordinate System



Advantages:

★ It has rigid structure because of box frame.

★ It has minimum error.

★ Simple controls.

★ Good accuracy and repeatability.

★ Easy to program and easy to operate.


Disadvantages:

★ Restriction in movement.

★ More floor space is needed for its operation.


Applications:

★ Used for inspection

★ Used to obtain good surface finishing.

★ Find its application in assembly of parts.


1.6.4. Jointed arm co-ordinate system:

★ The jointed arm co-ordinate system is also called the revolute or
anthropomorphic configuration.

★ Its design corresponds to that of the human arm, having a shoulder,
an elbow and a wrist.

★ The link of the arm mounted on the base joint can rotate about the
z-axis.

★ The shoulder can rotate about a horizontal axis.

★ The elbow can also rotate about a horizontal axis, whose location in
space depends on the positions of the base and shoulder joints.

★ The work envelope is roughly spherical (a revolute work envelope).


★ The jointed arm system illustration is shown in Figure 1.13.

Fig. 1.13: Jointed Arm Co-ordinate System

Advantages:

★ Has a large work envelope.

★ It offers flexible reach.
Disadvantages:

★ System is very complex.


★ Require skilled labour for its operation.
★ Accuracy is poor.
★ Controlling the rotation about the base is difficult.
★ Complex programming.

Applications:

★ Spraying, painting, welding.

★ Automated Assembly.
1.7. WORK ENVELOPE:

★ The volume of space surrounding the robot that the manipulator can
reach is called the work envelope.

★ The work volume is the term that refers to the space with in which
the robot can manipulate its wrist end.

★ The work envelope is determined by the following physical


specification of the robot:

1. Robot’s physical configuration.


2. The size of the body, arm and wrist components.

1.7.1. Cartesian co-ordinate work envelope:

Fig. 1.14 shows the work envelope of the rectangular (cartesian) co-ordinate robot.

Fig. 1.14: Work Envelope of Rectangular Area



Uses: The rectangular co-ordinate robot is very rigid and suitable for pick
and place in hot environments such as furnaces.

It is also a suitable manipulator for overhead operations as it covers a


large work area.

1.7.2. Cylindrical co-ordinate work envelope:

Fig. 1.15: Cylindrical Co-ordinate Work Envelope

Fig. 1.15 shows the work envelope of cylindrical co-ordinate robot.

The plan view indicates the robot arm pivoted at the center of the base
which can form a portion of a circle by the action of swing.

Thus the work envelope of cylindrical co-ordinate robot is a portion of


a cylinder.
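The size of such an envelope can be estimated numerically. The sketch below computes the volume of the partial hollow cylinder swept by the arm; the dimensions and swing angle are illustrative assumptions, not values from the text:

```python
import math

def cylindrical_work_volume(r_min, r_max, height, swing_deg):
    """Volume of the partial hollow cylinder swept by a cylindrical robot.

    r_min, r_max : minimum and maximum horizontal reach of the arm (m)
    height       : vertical travel along the column (m)
    swing_deg    : base rotation in degrees (< 360, since the robot
                   cannot rotate through a complete circle)
    """
    annulus_area = math.pi * (r_max ** 2 - r_min ** 2)   # full ring area
    return (swing_deg / 360.0) * annulus_area * height

# Assumed figures: 0.3-1.0 m reach, 0.8 m vertical stroke, 270 deg swing
v = cylindrical_work_volume(0.3, 1.0, 0.8, 270.0)
print(round(v, 3))   # about 1.715 cubic metres
```

The inner radius r_min accounts for the region close to the column that the wrist end cannot reach.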

Uses:

Cylindrical co-ordinate robot is suitable for handling parts in machine


tools or other manufacturing equipment.

It can pick-up objects from the floor on which the robot is mounted.

1.7.3. Polar co-ordinate work envelope:

Fig. 1.16: Spherical Co-ordinate


Work Envelope

Fig. 1.16 shows the polar co-ordinate work envelope.

The plan view indicates a swing of the robot’s arm as it is rotated around
its base.

The work envelope of the extension arm of a spherical co-ordinate robot
is the volume swept between two partial spheres.

Uses:

Spherical or polar co-ordinate robots are most suitable for transferring


parts on machine tools.

They are suitable for picking components from the floor.

They are extensively used in flexible manufacturing system.



1.7.4. Jointed arm work envelope:

Fig. 1.17 shows the jointed arm co-ordinate work envelope.

Fig. 1.17: Jointed Arm Work Envelope

The plan view is the same as that of the cylindrical co-ordinate robot.

Uses:

The jointed arm robot is flexible and versatile as it can easily reach up
and down and can also swing back.

Joints are rotary joints.

1.8. TYPES OF ROBOT:

The common types of robot are:

(i) Industrial Robot.

(ii) Laboratory Robot.

(iii) Explorer Robot.

(iv) Hobbyist Robot.



(v) Class Room Robot.

(vi) Educational Robot.

(vii) Tele-robots.

1.8.1. Types of Industrial Robot:

(i) Sequence Robot.

(ii) Playback Robot.

(iii) Intelligent Robot.

(iv) Repeating Robot.

1.8.2. Based on physical configuration:

(i) Cartesian co-ordinate configuration.

(ii) Cylindrical co-ordinate configuration.

(iii) Polar co-ordinate configuration.

(iv) Jointed arm configuration.

1.8.3. Based on control system:

(i) Point to point robots.

(ii) Straight line robots.

(iii) Continuous robot.

1.8.4. Based on movement:

(i) Fixed robot.

(ii) Mobile robot.

(iii) Walking robot.



1.8.5. Based on Types of Drive:

(i) Pneumatic drive.

(ii) Electric drive.

(iii) Hydraulic drive.

1.8.6. Based on Sensory systems:

(i) Intelligent robot.

(ii) Vision robot.

(iii) Simple and blind robot.

1.8.7. Degrees of freedom:

(i) Single degree of freedom.

(ii) Two degree of freedom.

(iii) Three degree of freedom.

(iv) Six degree of freedom.

1.8.8. Based on Application:

(i) Manufacturing.

(ii) Handling.

(iii) Testing.

1.8.9. Based on path control:

(i) Stop-to-stop.

(ii) Point-to-point.

(iii) Controlled path.

(iv) Continuous.

A typical classification system for robots is based on the skill of
operation required in various manufacturing applications.

They are,

1. Low accuracy contouring (For spray painting, spot welding, etc.)

2. Low accuracy point-to-point (Loading, unloading from heat treatment


furnaces, die casting machine, etc.)

3. Moderate accuracy contouring (arc welding, deburring etc.)

4. Moderate accuracy point-to-point (Forging, loading/unloading machine


tools, part orientation, etc.)

5. Close tolerance and assembly application.

1.9. ROBOT SPECIFICATION:

The common robot specifications are given as below:

1. Spatial resolution.

2. Accuracy.

3. Repeatability.

4. Compliance.

5. Pitch.

6. Yaw.

7. Roll.

8. Joint Notation.

9. Speed of motion.

10. Pay Load.



1.9.1. Spatial Resolution:

The spatial resolution of a robot is the smallest increment of movement


into which the robot can divide its work volume. The spatial resolution depends
on two factors.

1. Control resolution.

2. Mechanical inaccuracies.

1. Control Resolution:

Control resolution is determined by the robot’s position control system


and its feedback measurement system.

Control resolution is the controller’s ability to divide the total range
of movement for the particular joint into individual increments that can
be addressed in the controller.

Number of increments = 2^n, where n is the number of bits in the control
memory devoted to that joint.
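The relation above can be turned into a small calculation. A sketch in Python follows; the joint range and bit count are assumed for illustration, and note that some texts divide the range by 2^n − 1 intervals rather than 2^n:

```python
def control_resolution(joint_range, n_bits):
    """Smallest addressable increment of a joint.

    An n-bit control memory can address 2**n increments across the
    joint's full range of movement (some texts use 2**n - 1 intervals).
    """
    increments = 2 ** n_bits          # number of addressable increments
    return joint_range / increments, increments

# Assumed example: a 1000 mm linear joint with a 10-bit controller
res, incs = control_resolution(1000.0, 10)
print(incs, round(res, 4))   # 1024 increments of about 0.9766 mm each
```

Adding one more bit to the control memory halves the control resolution, which is why controller word length matters for precision.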

2. Mechanical Inaccuracy:

Mechanical inaccuracy comes from elastic deflection in structural


members, gear backlash, stretching of pulley cords, leakage of hydraulic fluids
and other imperfections in the mechanical system.

These inaccuracies tend to be worse for larger robots simply because the
errors are magnified by the larger components.

1.9.2. Accuracy:

Accuracy refers to a robot’s ability to position its wrist end at a desired


target point within the work volume.

The accuracy of a robot can be defined in terms of spatial resolution


because the ability to achieve a given target point depends on how closely
the robot can define the control increments for each of its joint motions.

Ignoring for the moment the mechanical inaccuracies that reduce the
robot’s accuracy, we can initially define accuracy under this worst-case
assumption as one-half of the control resolution.

Fig. 1.18 shows how mechanical inaccuracies affect the ability to reach
the target position.

Fig. 1.18: Mechanical Inaccuracy

1.9.3. Repeatability:

Repeatability is concerned with the robot’s ability to position its
wrist or an end effector at a point in space to which it had previously
been taught to move.

Repeatability refers to the robot’s ability to return to the programmed


point when commanded to do so.

Repeatability errors form a random variable and constitute a statistical


distribution.
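One common textbook model ties these definitions together: accuracy is taken as one-half the control resolution plus three standard deviations of the mechanical errors, and repeatability as plus or minus three standard deviations. A sketch with assumed illustrative numbers:

```python
def accuracy(control_resolution, sigma):
    """Worst-case accuracy: one-half the control resolution plus three
    standard deviations of the (assumed random) mechanical errors."""
    return control_resolution / 2.0 + 3.0 * sigma

def repeatability(sigma):
    """Repeatability band: +/- three standard deviations of the random
    positioning errors."""
    return 3.0 * sigma

# Assumed figures: 0.5 mm control resolution, 0.05 mm error std deviation
print(accuracy(0.5, 0.05))    # 0.4 mm accuracy
print(repeatability(0.05))    # +/- 0.15 mm repeatability
```

The model makes the distinction concrete: repeatability depends only on the random errors, while accuracy also depends on how finely the controller can address the joint.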

1.9.4. Compliance:

The compliance of the robot manipulator refers to the displacement of


the wrist end in response to a force or torque exerted against it.

Compliance is important because it reduces the robot’s precision of


movement under load.

If the robot is handling a heavy load, weight of the load will cause the
robot arm to deflect.

If the robot is pressing a tool against a workpart, the reaction force of


the part may cause deflection of the manipulator.

Robot manipulator compliance is a directional feature.

The compliance of the robot arm will be greater in certain directions


than in other directions because of the mechanical construction of the arm.
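This directional behaviour can be modelled by giving each direction its own stiffness; the deflection under load is then the force divided by the stiffness. The stiffness and load values below are assumptions for illustration:

```python
def deflection(force, stiffness):
    """Wrist-end displacement under a static load: compliance in action.
    displacement = force / stiffness."""
    return force / stiffness

# Assumed stiffnesses (N/m): the arm is stiffer along its axis than
# across it, so compliance is a directional feature.
k_along, k_across = 40000.0, 8000.0
load = 100.0   # N, e.g. the weight of a handled part

print(deflection(load, k_along))    # 0.0025 m along the arm
print(deflection(load, k_across))   # 0.0125 m across it (more compliant)
```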

1.9.5. Three degree of freedom wrist assembly:

To establish the orientation of the object, we can define three degrees


of freedom for the robot’s wrist as shown in Fig. 1.19. The following is one
possible configuration for a three-DOF wrist assembly:

1. Roll: This DOF can be accomplished by a T-type joint to rotate the


object about the arm axis.

2. Pitch: This involves the up-and-down rotation of the object, typically


by means of a type R joint.

3. Yaw: This involves right-to-left rotation of the object, also


accomplished typically using an R-type joint.

Fig. 1.19
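Each of the three wrist rotations above can be represented by an elementary rotation matrix, and composing them gives the orientation of the wrist. A minimal sketch in plain Python; the assignment of roll, pitch and yaw to the x, y and z axes is one common convention, not prescribed by the text:

```python
import math

def rot(axis, angle):
    """Elementary 3x3 rotation matrix about a principal axis.
    Convention assumed here: roll about x, pitch about y, yaw about z."""
    c, s = math.cos(angle), math.sin(angle)
    if axis == 'x':
        return [[1, 0, 0], [0, c, -s], [0, s, c]]
    if axis == 'y':
        return [[c, 0, s], [0, 1, 0], [-s, 0, c]]
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]   # 'z'

def matmul(a, b):
    """Product of two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Wrist orientation for assumed roll, pitch and yaw angles (radians)
R = matmul(rot('z', 0.1), matmul(rot('y', 0.2), rot('x', 0.3)))
```

The composed matrix R is itself a rotation, so the three wrist joints together can orient the object arbitrarily about the tool point.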

1.9.6. Joint Notation scheme:

★ The physical configuration of the robot manipulator can be described
by means of a joint notation scheme.

★ This notation scheme uses the joint symbols L, R, T and V.

★ The joint notation scheme permits the designation of more or fewer
than the three joints typical of the basic configurations.

★ Joint notation scheme can also be used to explore other possibilities


for configuring robots, beyond the common four types LVRT.

The basic notation scheme is given in Table 1.2.

Table 1.2: Joint Notation Scheme

Robot co-ordinate Joint Notation

1. Polar co-ordinate TRL

2. Cylindrical co-ordinate TLL, LTL, LVL

3. Cartesian co-ordinate robot LLL

4. Jointed arm configuration TRR, VVR

Generally the notation starts with the joint closest to the base and
proceeds to the mounting plate for the end effector.

We can use the letter symbols for the four joint types (i.e., L, R, T and
V) to define a joint notation system for the robot manipulator. In this notation
system, the manipulator is described by the joint types that make up the
body-and-arm assembly, followed by the joint symbols that make up the wrist.
For example, the notation TLR: TR represents a 5-d.o.f. manipulator whose
body-and-arm is made up of a twisting joint (joint 1), a linear joint (joint 2)
and a rotational joint (joint 3). The wrist consists of two joints: a twisting joint (joint
4) and a rotational joint (joint 5). A colon separates the body-and-arm notation
from the wrist notation.
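Because the notation is just a string with a colon separator, it can be processed mechanically. A small sketch that splits a notation such as TLR:TR and counts the degrees of freedom:

```python
def parse_notation(notation):
    """Split a joint-notation string such as 'TLR:TR' into body-and-arm
    joints, wrist joints and the total degrees of freedom."""
    body, _, wrist = notation.partition(':')
    return list(body), list(wrist), len(body) + len(wrist)

body, wrist, dof = parse_notation('TLR:TR')
print(body, wrist, dof)   # ['T', 'L', 'R'] ['T', 'R'] 5
```

The same function handles notations without a wrist part (e.g. LLL for a cartesian body-and-arm alone), since the wrist portion is then simply empty.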

1.9.7. Speed of Motion:

The speed capabilities of current industrial robot range up to a maximum


of about 1.7 m/s.

Generally the speed of motion is measured at the wrist.

High speed can be obtained by a large robot with the arm extended to its
maximum distance from the vertical axis.

Hydraulic robots are faster than the electric drive robots.



The factors by which the speed of the robot determined are:

1. The accuracy with which the wrist must be positioned.

2. The weight of the object that is being manipulated.

3. The distance to be moved.

There is always an inverse relationship between the accuracy and the
speed of the robot’s motions.

As the accuracy is increased, the robot needs more time to reduce the
location errors in its various joints to achieve the desired final position.

A heavier object means greater inertia and momentum, and the robot must
be operated more slowly to deal safely with these factors.

Fig. 1.19: Time / distance Vs Speed of Motion

Fig. 1.19 shows the motion of robot with respect to time.



Due to acceleration and deceleration effects, a robot is capable of
travelling one long distance in less time than a sequence of short
distances whose sum equals the long distance.

1.9.8. Pay load:

The size, configuration, construction and drive systems are determined


on the basis of load carrying capacity or payload of the robot.

The load carrying capacity is specified under the condition of robot’s


arm in its weakest position.

For the polar, cylindrical or jointed-arm configurations, the robot arm is at


maximum extension.

The common payload carrying capacity of industrial robots ranges from
0.45 kg for small robots to 450 kg for very large robots.

Example:

If the rated load capacity of a given robot were 3 kg and the end
effector weighed 1 kg, then the net weight-carrying capacity of the
robot would be only 2 kg.
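The example amounts to a one-line calculation, using the same figures as above:

```python
def net_payload(rated_capacity, end_effector_weight):
    """Usable load = rated capacity minus the end effector weight,
    which the robot must carry as well (all in kg)."""
    return rated_capacity - end_effector_weight

print(net_payload(3.0, 1.0))   # 2.0 kg, as in the example above
```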

1.10. ROBOT PARTS AND THEIR FUNCTIONS:

Before knowing the parts and their functions, the working of a robot
needs to be understood.

1. As per the application, the operator starts the cycle.

2. A signal is sent to the robot controller through external feedback.

3. On the basis of the command, the controller sends signal to


manipulator.

Once the signal is received by the manipulator, the operation of the
robot will start.

A robot has six major components, they are as follows.

1. Power source

2. Controller

3. Manipulator

4. End effector

5. Actuator

6. Sensors.

Fig. 1.20: Parts of a Robot

1.10.1. Power source:

Power source is the unit that supplies the power to the controller and
the manipulator.

Most modern robots are driven by brushless AC servo motors, though many
industrial robots use hydraulic or pneumatic drives to power the
manipulator.

A detailed explanation of these drives is given in Chapter 2.

Pneumatic drive:

Pneumatic power can be readily adapted to actuate a piston and give
linear movement.

Hydraulic drive:

Hydraulic drives are more controllable than pneumatic drives.

They can provide more power than electric drives.

Electric drive:

It is operated either by stepper motor, DC servos or by AC servos.

1.10.2. Controller:

The controller is the robot’s brain; the entire movement of the robot is
governed by it.

The controller handles programs, data algorithms, logic analysis and
various other processing activities.

1.10.3. Manipulator:

The robot manipulator comprises the arm, body and wrist.

Arm:

Arms are used to move and position the parts or tools within the work
cell.

Wrist:

Orientation of the tools and parts are made by wrist.

A robot manipulator is created from a sequence of link and joint
combinations.

The arm-and-body section of the manipulator is based on one of four
configurations:

1. Polar

2. Cartesian

3. Cylindrical

4. Jointed arm

1.10.4. End effector:

The end-effector is mounted on the wrist and it enables the robot to


perform various tasks.

The common end effectors are:

1. Tools

2. Gripper.

Tools:

At certain times, the end effector will itself act as the tool.

Typical tools are spot-welding guns, spray-painting nozzles, rotating
spindles for grinding, etc.

Grippers:

Grippers are used to hold the object and place it at the needed location.
The various types of grippers are:

1. Mechanical gripper.

2. Magnetic gripper.

3. Pneumatic and hydraulic gripper.

4. 2 fingered, 3 fingered gripper, etc.



1.10.5. Actuator:

Actuators are used for converting the hydraulic, electrical or pneumatic


energy into mechanical energy.

The special applications of actuators are lifting, clamping, opening,
closing, mixing, bending, buckling, etc.

Actuators perform the function just opposite to pumps.

1.10.6. Sensors:

A sensor is a device which converts a physical phenomenon into an
electrical signal.

Sensors provide both internal as well as external feedback in robots.

Internal feedback → Temperature, pressure can be checked.

External feedback → Environmental feedback can be analysed.

A sensor is an element which produces a signal relating to the quantity
being measured.
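As a concrete illustration of such an element, the sketch below maps a raw analog-to-digital reading from a joint potentiometer to a joint angle. The resolution and travel figures are assumptions for illustration only:

```python
def adc_to_angle(reading, adc_bits=10, angle_range=300.0):
    """Map a raw ADC count from a joint potentiometer to a joint angle.

    reading     : integer count, 0 .. 2**adc_bits - 1
    angle_range : assumed mechanical travel of the pot in degrees
    """
    full_scale = 2 ** adc_bits - 1       # largest possible count
    return (reading / full_scale) * angle_range

print(adc_to_angle(0))                 # 0.0 degrees
print(adc_to_angle(1023))              # 300.0 degrees (full travel)
print(round(adc_to_angle(512), 2))     # about 150.15 degrees (mid travel)
```

Such a scaling is exactly the internal feedback a controller needs to compare the actual joint position with the commanded one.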

1.11. BENEFITS OF ROBOT:

1. Increased accuracy.

2. Increased applications.

3. Increased productivity.

4. Reduced labour charges.

5. Reduces scrap and wastage.



1.12. NEED FOR ROBOT:

★ In the initial stages, the major applications of robots were in
unpleasant and hazardous tasks.

★ Robots have found wide application in repetitive and monotonous jobs
where consistency and product quality are of primary importance.

★ Usually robots are suitable for automated tasks which require little
sensing capability.

★ The need for robot is emerging in the field of Flexible Manufacturing


System (FMS).

FMS:

It is the field where the flexibility of the cell and consistency of the
products are combined.
FMS works at various levels and replaces hard automation technology,
comprising transfer machines as well as stand-alone automated machines.
FMS is very helpful for batch manufacturing.
In FMS, robots and automated vehicle systems are extensively employed.
In an FMS module, a robot may be employed to load and unload parts or
tools through a single computer.

1.13. MANUFACTURING APPLICATIONS OF ROBOT:

The major applications of robots in the manufacturing industry are:

1.13.1. Material handling:

1. Bottle loading.
2. Parts handling.
3. Transfer of components / tools
4. Depalletizing / Palletizing.
5. Transporting components.

1.13.2. Machine loading / unloading:

1. Loading parts to CNC machine tools.

2. Loading a punch press.

3. Loading a die casting machine.

4. Loading electron beam welding.

5. Loading / orientating parts to transfer machine.

6. Loading parts on the test machine.

1.13.3. Spray painting:

1. Painting of trucks / automobiles.

2. Painting of agricultural equipment.

3. Painting of appliance components.

1.13.4. Welding:

1. Spot Welding

2. Arc welding.

3. Seam welding of variable width.

1.13.5. Machining:

1. Drilling

2. Welding

3. Forging

4. Cutting

5. Sanding

6. Grinding

7. Deburring.

1.13.6. Assembly:

1. Mating components.
2. Riveting small assemblies.

1.13.7. Inspection:

1. In process measuring and quality control.


2. Searching for missing parts.

1.14. NON MANUFACTURING ROBOTIC APPLICATIONS:

The common non manufacturing robotic applications are:

1.14.1. Hazardous Environment:

(i) Mining:

1. Exploration.
2. Search and rescue.
3. Tunneling for main roadways.
4. Operation in short passages.

(ii) Service:

1. Fire Fighting.

2. Underground cleaning.

(iii) Nuclear:

★ Maintenance of atomic reactors.


(iv) Space:

★ Used in space vehicles.


(v) Under sea:

1. Oil / mineral exploration.

2. Salvage operation.

1.14.2. Medical:

1. Surgery

2. Non-invasive / invasive diagnostics.

3. Rehabilitation engineering for the handicapped.

1.14.3. Distribution:

1. Warehousing.

2. Retailing.

1.14.4. Others:

1. Agricultural purpose.

2. Hobby / household purpose.

3. Military applications of robots may be in both manufacturing and
non-manufacturing areas.

1.15. THE FUTURE OF ROBOTICS:

The trends in the future robotics are in the development of

1. Robotic vehicle.

2. Space robotics.

3. Humanoid and walking robots.

4. Personal and service robots.

5. Robots for biological applications.

6. Robots for medical applications.

7. Sensor-integrated intelligent robots for health care − sometimes
called network robots.

*********
CHAPTER – 2

ROBOT DRIVE SYSTEMS


AND END EFFECTORS

Pneumatic Drives − Hydraulic Drives − Mechanical Drives − Electrical


Drives − D.C Servo Motors, stepper motors, A.C servo motors − Salient
features, Applications and comparison of all these drives, End-effectors −
Grippers − Mechanical Grippers, Vacuum grippers, Two fingered and three
fingered grippers, internal grippers and external grippers, selection and
design consideration. Brushless permanent magnet DC motor

2.1. INTRODUCTION:

The robot arm can be made to produce a desired motion of the payload if
actuator modules are fitted to provide power drives to the system. This
chapter deals with the types of actuators along with the end effectors.

2.2. ACTUATORS:

Actuators are the muscles of the robot. If we consider the links and
joints as the skeleton of the robot, the actuators act as the muscles.

They move or rotate the links to change the configuration of robots.



The actuator must have enough power to accelerate and decelerate the
links and to carry the loads and it should be light, economical, accurate,
responsive, reliable, and easy to maintain.

2.3. FACTORS CONSIDERED FOR SELECTING DRIVE SYSTEM:

1. Low inertia.

2. High power to weight ratio.

3. Possibility of overload and delivery of impulse torques.

4. Capacity to develop high accelerations.

5. Wide velocity range.

6. High positioning accuracy.

7. Good trajectory tracking and positioning accuracy.

8. Should operate in high degrees of freedom.

9. Should have capacity to withstand high pressure.

10. Maintenance of gravitational and acceleration force.

2.4. TYPES OF ACTUATORS OR DRIVES:

There are many types of actuators available, certain types are as follows.

1. Pneumatic actuators.

2. Hydraulic actuators.

3. Electric motors.

(i) AC servomotor.

(ii) DC servomotor.

(iii) Stepper motor.

(iv) Direct drive electric motors.



2.4.1. Pneumatic Power Drives:


A pneumatic system will employ a linear actuator, i.e. a double-acting
cushioned cylinder.
Working:
Pneumatic power drive systems use compressed air to move the robot
arm.
Air is compressed by an air compressor and then the compressed air is
directed through Filter, Regulator and Lubricator (FRL) units to the hose pipes
and then to the pneumatic cylinders through the directional control valve.

Fig. 2.1: Pneumatic Power Supply Drive

For stable supply, air compressor usually pumps air into a storage tank
and from there, it passes through FRL units to the pneumatic cylinder.

Figure 2.1 clearly illustrates the schematic sketch of pneumatic power


drive.

As the air enters into the cylinder via the directional control valve, the
piston moves on its outward stroke and when the air is diverted to enter into
the other end of the cylinder, the piston makes the return stroke.

The return air is exhausted into the atmosphere.

Pneumatic control valves can be operated by levers, rollers or
solenoids, and can also be pilot operated.

Solenoid controlled valves are most common and they can be operated
by micro switches which energize the solenoids.
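The force such a cylinder can exert follows directly from the pressure multiplied by the piston area; on the return stroke, the rod reduces the effective area. A sketch with assumed dimensions and line pressure (illustrative values, not from the text):

```python
import math

def cylinder_forces(bore_m, rod_m, pressure_pa):
    """Theoretical forces of a double-acting pneumatic cylinder.

    The outstroke acts on the full piston area; the return stroke acts
    on the annulus left after subtracting the rod cross-section.
    """
    piston_area = math.pi * (bore_m / 2) ** 2
    annulus_area = piston_area - math.pi * (rod_m / 2) ** 2
    return pressure_pa * piston_area, pressure_pa * annulus_area

# Assumed example: 50 mm bore, 20 mm rod, 6 bar line pressure
extend, retract = cylinder_forces(0.050, 0.020, 6e5)
print(round(extend), round(retract))   # about 1178 N out, 990 N back
```

In practice friction and the compressibility of air reduce these theoretical figures, which is one reason pneumatics suit light loads.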

Advantages:

1. It is cheapest form of all actuators.

2. Components are readily available and compressed air normally is an


already existing facility in factories.

3. Compressed air can be stored and conveyed easily over long distances.

4. They have few moving parts making them inherently reliable and
reducing maintenance costs.

5. Compressed air is clean, explosion proof and insensitive to


temperature fluctuations, thus useful to many applications.

6. No mechanical transmission is usually required.

7. The systems are usually compact.

8. Control is simple, e.g. mechanical stops are often used.



9. Individual components can be easily interconnected.

10. They have a very quick action and response time thus allowing for
fast work cycles.

Disadvantages:

1. Pneumatics are not suitable for moving heavy loads or for precise
control due to the compressibility of air. This compressibility
necessitates the application of more force than would normally be
necessary to ensure that the actuator is firmly in position against
its stop under loading conditions.

2. If moisture penetrates the units and ferrous metals have been used,
then damage to individual components will result.

3. If mechanical stops are used, then resetting the system will become
slow.

2.4.2. Hydraulic Drives:

★ In a hydraulic system, the electric motor pumps fluid (oil) from a


reserve tank to hydraulic actuators which are in general, double acting
piston − cylinder assemblies.

★ Fluid at a higher pressure passes through control valves before its


entry into the linear actuators.

★ Rotary actuators, comprising hydraulic motors which rotate
continuously, may also be employed.

★ Thus in a hydraulic system, both linear and rotational motions are


possible.

Fig. 2.2: Hydraulic Drives

Figure 2.2 shows the schematic diagram of hydraulic power supply.

Working:

★ Fluid is pumped from the tank and filtered.

★ It then passes through a check valve, accumulator, solenoid controlled


spring centered direction control valve to the cylinders used for
extension of the arm, swing of the shoulder or rotation of the wrist.

★ The circuit contains a pilot operated relief valve so that excess
fluid is returned to the tank.

★ The filter separates out any foreign particles that may wear off the
hydraulic system elements. It also filters the dirt that may be present.

★ The accumulator helps the system to send additional fluid to the
cylinder when there is a sudden demand for fluid. It also acts as a
shock absorber.

★ The pilot operated relief valve maintains the system pressure constant.
When the system pressure increases, it allows the fluid to pass through
the central bore of the spool to open a pilot spool and facilitates the
fluid to return to the tank.

★ It eliminates noise and vibration by smoothing the pulsations of the
system pressure and holding the system pressure at the preset value.

★ The check valve allows the hydraulic fluid to flow in only one
direction and restricts the fluid to flow in the reverse direction. The
check valve also helps to maintain system pressure.

★ The direction control valve allows the fluid to enter into the valve
from the pump and then to either the rod end or the blind (head) end
of the cylinder by moving the spool to the right or to the left.

★ A hydraulic power source is generally used for increased payload. It


may be used in hazardous, volatile and explosive environments like
a spray painting booth.

Advantages:

1. High efficiency and high power-to-size ratio.

2. Complete and accurate control over speed, position and direction of
the actuators is possible.

3. They generally have a greater load carrying capacity than electric and
pneumatic actuators.

4. No mechanical linkage is required, i.e. a direct drive is obtained with


mechanical simplicity.

5. Self-lubricating and non-corrosive.



6. Due to the presence of an accumulator, which acts as a ‘storage’


device, the system can meet sudden demands in power.

7. Hydraulic robots are more capable of withstanding shock loads than


electric robots.

Disadvantages:

1. Leakage can occur causing a loss in performance and general


contamination of the work area. There is also a high fire risk.

2. The power pack can be noisy, typically about 70 dB or louder if not


protected by acoustic muffler.

3. Servo-control of hydraulic system is complex and is not as widely


understood as electric servo-control.

4. For smaller robots, hydraulic power is usually not economically
feasible as the cost of hydraulic components does not decrease in
proportion to size.

5. Changes in temperature alter the viscosity of the hydraulic fluid.
Sometimes, at low temperatures, the fluid viscosity increases, possibly
causing sluggish movement of the arm.

2.4.3. Electrical Drives:

★ Electric drives are those in which an electric motor drives the robot
links through some mechanical transmission, e.g. gears.

★ In the early years of industrial robots, hydraulic robots were the
most common, but recent improvements in electric motor design have meant
that most new robots are of all-electric construction.

Advantages:

1. Widespread availability of power supply.

2. The basic drive element in an electric motor is usually lighter than


that for fluid power.

3. High power conversion efficiency.

4. No pollution of working environment.

5. The accuracy and repeatability of electric power driven robots are


normally better than the fluid power robots in relation to cost.

6. Being relatively quiet and clean, they are very acceptable


environmentally.

7. They are easily maintained and repaired.

8. Structural components can be light weight.

9. Drive system is well suited to electronic control.

Disadvantages:

1. Electrically driven robots often require the incorporation of some sort


of mechanical transmission system.

2. Due to the increased complexity of the transmission system, additional


cost is incurred for their procurement and maintenance.

3. Electric motors are not completely safe; therefore they cannot be
used in explosive atmospheres.

2.4.4. Types of Electrical Drives:

The common types of electrical motors are:

(i) Servomotor.

(ii) Stepper motor



2.5. DC SERVOMOTOR:

★ The first commercial electrically driven industrial robot was


introduced in 1974.
★ Traditionally, robots have employed electrically driven D.C
(direct-current) motors, not only because powerful versions are
available, but also because they are easily controllable with
relatively simple electronics.

Fig. 2.3: D.C Servo Motor

★ An electrical actuator (motor) has a stationary part called the
stator and a rotating part called the rotor, with an air gap between
them.

★ In a D.C machine the field windings are on the stator and armature
windings are on the rotor.

★ In an induction motor, the stator carries a 3-phase winding which
draws a current and sets up a rotating flux pattern with alternate
north-south poles in the air gap, rotating at the synchronous speed
corresponding to the frequency of the supply and the number of poles
of the motor.

★ The motor runs at speeds below the synchronous speed.


★ The rotating field induces current in the short-circuited rotor windings
or short-circuited conducting bars located at the slots.

★ The stator and rotor fields interact and produce the torque.
★ The rotor is mounted on the shaft. In a D.C motor, brushes and a
commutator are used.
The force on a current carrying conductor is given by,

F = BLI

The torque on one armature conductor is,

T=F⋅r

= BLI ⋅ r.

★ The total torque on the armature, for Za armature conductors
connected in series, is given by

Ttotal = (φ P Ia Za) / π

where,

F → Force on the conductor
B → Magnetic flux density under a pole
I → Current in the conductor
L → Length of the conductor
φ → Flux per pole
Ia → Current in the armature conductors
2P → Number of magnetic poles
Za → Number of armature conductors
r → Radius at which the conductors lie

In the above equation:

★ Torque T is directly proportional to the armature current Ia and the
magnetic flux φ.

★ For a given machine, P and Za are fixed, so the torque is controlled
through Ia and φ.
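The torque expression can be evaluated directly. The figures below are illustrative, not from the text; note that P here denotes half the number of magnetic poles, following the 2P convention above:

```python
import math

def armature_torque(flux_per_pole, pole_pairs, armature_current, conductors):
    """Total armature torque T = (phi * P * Ia * Za) / pi for Za
    conductors in series on a machine with 2P magnetic poles."""
    return (flux_per_pole * pole_pairs * armature_current * conductors) / math.pi

# Assumed machine: 0.02 Wb per pole, 4 poles (P = 2), 10 A, 100 conductors
T = armature_torque(0.02, 2, 10.0, 100)
print(round(T, 2))   # about 12.73 N-m
```

Doubling the armature current doubles the torque, which is why D.C servo drives control torque through Ia.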

Advantages:

★ Some large robots utilise field current controlled D.C motors, i.e.
motors in which the torque is controlled by manipulating the current to
the field coils. These motors allow high power output at high speed and
give a good power-to-weight ratio.

2.6. TYPES OF D.C. MOTORS:

The common type of D.C (Direct Current) motors are:

(i) Permanent Magnet (PM) D.C motor.

(ii) Brushless permanent Magnet D.C motors.

2.6.1. Permanent Magnet D.C Motor:

★ In the permanent magnet motor, no field coils are used and the
magnets themselves produce the field.

★ In certain permanent magnet motors, coils are wound on the magnets
so that they can be re-magnetised if their strength fades.

★ The field flux being constant, the torque of these motors is directly
proportional to the armature current.

★ The common two types of permanent magnet D.C motor are shown
in Figure 2.4 and 2.5 which are cylindrical and disk type respectively.

Fig. 2.4: Cylindrical Motor Fig. 2.5: Disk Type Motor

★ The cylindrical motor operates in the same manner as described
earlier, except that no field coils exist.

★ The disk motor has a large-diameter, short-length armature of
non-magnetic material.

★ Normally the cylindrical motor is used in industrial robots.

Advantages:

★ Excitation power supplies for the field coils are not required.

★ Reliability is improved as there are no field coils to fail.

★ No power loss from field coils and hence more efficiency.

★ Cooling during operation is improved.



2.6.2. Brushless permanent Magnet D.C Motors:

★ A major problem with D.C motor is that they need commutator and
brushes in order to reverse the current through each armature coil.

★ The brushes make sliding contact with the commutator; as a
consequence, sparks jump between the two and both suffer wear.

★ In order to avoid such problems, brushless motors have been designed.

★ Brushless motors consist of a sequence of stator coils and a
permanent magnet rotor.

★ A current carrying conductor in a magnetic field experiences a force.

★ With the conventional D.C motor, the magnet is fixed and the current
carrying conductor is made to move; in the brushless motor this
arrangement is reversed.

Fig. 2.6: Brushless Permanent Magnet D.C Motor



★ Brushless permanent D.C motor is shown in Figure 2.6.

★ The current to the stator coils is electronically switched by transistors
in sequence round the coils, the switching being controlled by the
position of the rotor.

Advantages:

★ Reduced rotor inertia.

★ They are lighter in weight.

★ More durable.

★ Motors are less expensive.

★ The absence of brush reduces the maintenance cost.

★ They have a better heat dissipation, heat being more easily lost from
the stator than rotor.

★ Available with small dimensions with compact power.


Disadvantage:

★ Control system for brushless motors is relatively more expensive.


2.7. A.C. MOTORS:

★ Electric A.C motors are similar to D.C motors, except that the rotor
is a permanent magnet, the stator houses the windings, and all
commutators and brushes are eliminated.

The A.C motors are mainly of two types:

1. Single phase A.C motor.

2. Poly phase A.C motor.



★ These motors may be induction or synchronous type.


★ In robotics, A.C motors are not preferred due to complexity of speed
control.

★ A sketch of A.C synchronous motor is shown in Fig. 2.7.

Fig. 2.7: A.C. Motor

★ The field windings are placed on the rotor and armature windings are
on the stator.

★ The field poles are cylindrical or non-salient type.

★ The field windings are excited through slip rings from a D.C source.

★ The machine runs at fixed speed called synchronous speed


corresponding to the frequency and number of poles.

The rotational speed of the field is given by,

Ns = 60 f ⁄ (No. of pole pairs)

where,

Ns → rotational speed of field

f → frequency.

The torque is given by

T = Tmax sin δ

where,

Tmax → Maximum torque rated

δ → Load angle.
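The two relations above can be evaluated directly. A small sketch; the 50 Hz supply, pole count, Tmax and load angle are assumed example values:

```python
import math

def synchronous_speed_rpm(frequency_hz, pole_pairs):
    """Ns = 60 f / (number of pole pairs)."""
    return 60.0 * frequency_hz / pole_pairs

def synchronous_torque(t_max, load_angle_deg):
    """T = Tmax * sin(delta), delta being the load angle."""
    return t_max * math.sin(math.radians(load_angle_deg))

print(synchronous_speed_rpm(50, 2))          # 1500.0 rpm for a 4-pole motor on 50 Hz
print(round(synchronous_torque(20, 30), 1))  # 10.0 N·m at delta = 30°
```

The speed is locked to the supply frequency, which is why speed control of an A.C motor requires varying the frequency rather than the voltage.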

2.7.1. Comparison between A.C motor and D.C motor:

★ A.C motors have the advantage over D.C motors of being cheaper,
more reliable and maintenance free.

★ Speed control is generally more complex than with D.C motors and
hence, a speed controlled D.C drive generally works out cheaper than
a speed controlled A.C drive.

2.8. STEPPER MOTOR:

★ Stepper motors are versatile, long-lasting, simple motors that can be
used in many applications.

★ In many applications, stepper motors are used without feedback. This


is because unless a step is missed, a stepper motor steps a known
angle each time it is moved. The angular position is known hence
feedback is not needed.

★ These motors are used in some robots at the smaller and medium end
of the industrial range.

★ The stepper motor converts D.C voltage pulse train into a proportional
rotation of the shaft.

★ The rotation takes place in a discrete way, and hence it is suitable


for digital controlled system.

★ The speed of the stepper motor can be varied by changing the pulse
train input rate.
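Because each pulse advances the shaft by one fixed step, the displacement and speed follow directly from the pulse count and pulse rate. A minimal sketch; the 1.8° step angle is the popular hybrid-stepper value quoted later in this section, and the pulse figures are assumed:

```python
def rotation_deg(step_angle_deg, pulses):
    """Total angular displacement = step angle x number of pulses."""
    return step_angle_deg * pulses

def speed_rpm(step_angle_deg, pulse_rate_hz):
    """Shaft speed in rpm from the pulse train input rate."""
    return step_angle_deg * pulse_rate_hz * 60.0 / 360.0

print(rotation_deg(1.8, 200))  # 360.0 -> 200 pulses give one full revolution
print(speed_rpm(1.8, 1000))    # 300.0 rpm at 1000 pulses per second
```

This is why open-loop position control works: as long as no step is missed, counting pulses is equivalent to measuring the shaft angle.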

The common three different types of stepper motors are:

1. Variable reluctance stepper motor.

2. Permanent magnet stepper motor.

3. Hybrid stepper motor.

2.8.1. Variable reluctance stepper motor:

★ ‘Magnetic reluctance’ or ‘Reluctance’ is the analog of electrical


resistance.

★ Just as current flows only in a closed loop, the magnetic flux in this
stepper motor exists only around a closed path, although this path
may be more varied than that of the current.

★ This stepper motor will have a soft iron multi-toothed rotor with
wound stator.

★ The step angle depends on the number of teeth on the rotor, stator
and the winding configuration and the excitation.

★ Figure 2.8 shows a variable reluctance stepper motor with a 6-tooth
rotor and an 8-tooth stator.

Fig. 2.8: Variable Reluctance Stepper Motor

★ If phase A of stator is activated alone, two diametrically opposite


rotor teeth align themselves with phase A teeth of stator.

★ The next adjacent set of rotor teeth is 15° out of step in the
clockwise direction with respect to the stator teeth.

★ Activation of the phase B winding would cause the rotor to rotate
through a further angle of 15° (i.e., 60° rotor tooth pitch − 45° stator
tooth pitch) in the counter-clockwise direction, aligning the adjacent
pair of diametrically opposite rotor teeth.

★ Clockwise rotation of the motor will take place if the excitation
sequence is reversed.

★ When the stator and rotor teeth are aligned, the reluctance is
minimized and the rotor rests at this position.
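The 15° step quoted above follows from the tooth counts in Fig. 2.8. As a check, the standard step-angle relation for this geometry (the formula is assumed; it is not stated in the text) gives:

```python
def vr_step_angle(stator_teeth, rotor_teeth):
    """Step angle = 360 * |Ns - Nr| / (Ns * Nr) for a variable
    reluctance stepper with Ns stator teeth and Nr rotor teeth."""
    return 360.0 * abs(stator_teeth - rotor_teeth) / (stator_teeth * rotor_teeth)

print(vr_step_angle(8, 6))  # 15.0 degrees, matching the text
```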

2.8.2. Permanent Magnet stepper motor:

★ The basic method of operation of a permanent magnet type is similar


to variable reluctance type.

Fig. 2.9: Permanent Magnet Stepper Motor

★ Figure 2.9 has two coils A and B each of them producing four poles
but displaced from each other by half a pole pitch.

★ The rotor is of permanent magnet construction and has 4 poles.

★ Each pole is wound with a field winding, the coils on opposite pairs
of poles being in series.

★ Current is supplied from D.C source to the winding through switches.

★ When the motor is at rest, the poles of the permanent magnet rotor
are held between the residual poles of the stator.

★ In this position, the rotor is locked unless a turning force is applied.



★ If the coils are energised and in the first pulse the magnetic polarity
of the poles of coil A is reversed, then the rotor will experience a
torque and will rotate counter-clockwise.

★ Permanent magnet stepper motors give a larger step angle say


45° − 125°.

2.8.3. Hybrid stepper motor:

★ Hybrid stepper motor is a combination of variable reluctance type and


permanent magnet type.

★ The stator may have 8 salient poles energized by two-phase
windings.

★ The rotor is a cylindrical magnet axially magnetized.

★ The step angle varies from 0.9° to 5°. The popular step angle is 1.8°.

★ The motor speed depends on the rate at which pulses are applied.

★ The direction of rotation depends upon the order in which the coils
are energized in the first instance.

★ The angular displacement depends on the total number of pulses.

Fig. 2.10: Hybrid Stepper Motor



★ Figure 2.10 shows the simplified hybrid stepper motor with only two
coils and a rotor with three lobes (teeth).

2.9. SELECTION OF MOTORS:

★ In any application, it is necessary to check whether the available
actuators are suitable.

★ Certain parameters like positioning accuracy, reliability, speed of


operation, cost and other factors may be considered.

★ Electric motors are inherently clean and capable of high precision if


operated properly.

★ Electric motors must have individual controls capable of controlling


their power.

★ In large robots, this requires switching of 10 to 50 amperes at 20 to


100 volts.

★ Current switching must be done rapidly; otherwise there is a large
power dissipation in the switching circuit that will cause it to heat
excessively.

★ Small electrical motors use simple switching circuits and are easy to
control with low power circuits.

★ Stepper motors are especially simple for open-loop operation.

★ Electric motors are preferred at power levels under about 1.5 kW
unless there is a danger due to possible ignition of explosive materials.

★ At ranges between 1 − 5 kW, the availability of a robot in a particular
coordinate system with specific characteristics or at lower cost may
determine the decision.

★ Reliability of all types of robots made by reputable manufacturers is
sufficiently good.

★ The above mentioned are the common factors considered for selection
of motor.

2.10. COMPARISON OF PNEUMATIC, HYDRAULIC AND ELECTRICAL DRIVES:

Table 2.1 gives a clear summary of the characteristics of pneumatic,
hydraulic and electrical drives.

Table 2.1: Comparison of all the drives

Applications:
• Pneumatic – Good for on-off applications and for pick and place.
• Hydraulic – Good for large robots and heavy pay loads.
• Electrical – Good for all sizes of robots.

Stiffness:
• Pneumatic – Very low stiffness; inaccurate response.
• Hydraulic – Reasonably stiff.
• Electrical – Low stiffness; stiffness dependent on speed reduction.

Working fluid:
• Pneumatic – Air, nitrogen or combustion products.
• Hydraulic – High quality oil base with additives.
• Electrical – Electricity is the most basic "working fluid".

Power-to-weight ratio:
• Pneumatic – Lowest power-to-weight ratio.
• Hydraulic – High power-to-weight ratio.
• Electrical – Depends on the type of motor used.

Environment:
• Pneumatic – Noisy system, but no leaks and no sparks.
• Hydraulic – May leak, so not fit for clean environments; can be
  expensive and noisy.
• Electrical – Not noisy; can be spark free, good for explosive
  environments.

Supporting equipment:
• Pneumatic – Requires air pressure, filter etc.
• Hydraulic – Requires pump, reservoir, hoses etc.
• Electrical – Motor needs a braking device when not powered, otherwise
  the arm will fall.

Maintenance:
• Pneumatic – Does not require much maintenance.
• Hydraulic – Requires maintenance.
• Electrical – Low maintenance cost.

Reduction gears:
• Pneumatic – No reduction gear is needed.
• Hydraulic – No reduction gear is needed.
• Electrical – Needs reduction gears (increased backlash, cost, weight
  etc.); reduction gears also reduce the inertia reflected on the motor.

Control and accuracy:
• Pneumatic – Limited accuracy; mainly on-off control.
• Hydraulic – Stiff system, high accuracy, better response.
• Electrical – Better control, good for high precision robots; reliable
  components.

Component availability:
• Pneumatic – Poor availability of servo components.
• Hydraulic – Excellent aircraft and industrial component availability.
• Electrical – Availability is very good.

2.11. END-EFFECTORS:

★ An end effector is a device that attaches to the wrist of the robot


arm and enables the general purpose robot to perform a specific task.

★ Most production machines require special purpose fixtures and tools


designed for a particular operation, and a robot is no exception.

★ The end effector is part of that special purpose tooling for a robot.

★ Most robot manufacturers have special engineering groups whose


function is to design end effectors.

★ The end effectors are also called the grippers. There are various types
of end-effectors to perform the different work functions.

★ An end-effector of any robot can be designed to have several fingers,


joints and degrees of freedom.

★ The general end-effectors can be grouped according to the type of


grasping as follows:

1. Mechanical fingers.

2. Special tools.

3. Universal fingers.

★ Mechanical fingers are used to perform some special tasks. Gripping


by mechanical type fingers is less versatile and less dextrous than
holding by universal fingers as the grippers with mechanical fingers
have fewer number of joints and lesser flexibility.

2.12. GRIPPERS:

★ Grippers are end effectors used to grasp and hold objects.

★ The objects are generally work parts that are to be moved by the
robot.

★ These part-handling applications include machine loading and


unloading, picking parts from a conveyor and arranging parts into a
pallet.

★ There are grippers to hold tools like welding gun or spray painting
gun to perform a specific task. The robot hand may hold a deburring
tool.

★ The grippers of the robot may be specialized devices like RCC


(Remote Centre Compliance) to insert an external mating component
into an internal member.

2.13. CLASSIFICATION OF GRIPPERS:

1. Mechanical grippers.
2. Magnetic grippers.
3. Vacuum grippers.
4. Adhesive grippers.
5. Hooks, scoops and other miscellaneous devices.

2.14. DRIVE SYSTEM FOR GRIPPERS:

In a typical robot, the common types of gripper drive systems used are:


1. Hydraulic.
2. Electric.
3. Pneumatic.
Hydraulic Drive grippers:

★ Hydraulic drives used in robot gripping systems are usually
electro-hydraulic drive systems.

★ A common hydraulic drive system consists of actuators, control valves


and power units.

★ There are 3 kinds of actuators in the system: Piston cylinder, Swing


motor and Hydraulic motor.

★ To achieve positional control using electric signals, electrohydraulic


conversion drives are used.

Electric Drive grippers:

★ In an electric drive gripper system, there are two types of actuators


− D.C motors and stepper motors.

★ In general, each motor needs an appropriate reduction gear system to
provide the proper output force or torque.

★ A servo power amplifier is needed in an electric system to provide


complete actuation system.

Pneumatic Drive grippers:

★ The pneumatic drive system has the advantage of being less expensive
than other methods, which is the main reason for it being used in
most industrial robots.

★ The other merit of the pneumatic system is the low-degree of stiffness


of the air-drive system.

★ This feature of the pneumatic system is used to achieve compliant


grasping which is necessary for one of the most important functions
of grippers in order to grasp objects with delicate surfaces carefully.

2.15. MECHANICAL GRIPPERS:

★ A mechanical gripper is an end-effector that uses mechanical fingers


actuated by a mechanism to grip an object.

★ The fingers of the gripper will actually make contact with the object.

★ The fingers may be attached to the mechanism or integral part of the


mechanism.

★ The use of interchangeable fingers allows for wear.

★ Different sets of fingers for use with the same gripper mechanism can
be designed to accommodate different part models.

Figure 2.11 shows the mechanical gripper with interchangeable fingers.

Fig. 2.11: Mechanical Gripper


with Interchangeable fingers

2.15.1. Mechanical gripper mechanism:

★ There are many ways to classify the gripper actuating mechanism.

★ One common way is to classify grippers by the movement of their
fingers.

★ In this classification, the grippers can actuate the opening and closing
of the fingers by the following two motions.

1. Pivoting movement.

2. Linear or translation movement.



Pivoting movement:

★ In pivoting movement, the fingers rotate about fixed pivot points on
the gripper to close and open.

★ This motion is usually accomplished by some kind of linkage


mechanism.

Linear movement or translation movement:

★ In linear movement, the fingers open and close by moving parallel
to each other.

★ This is accomplished by means of guide rails so that each finger base


slides along a guide rail during actuation.

★ The translational finger movement might also be accomplished by


means of a linkage which would maintain the fingers in parallel
orientation to each other during actuation.

2.15.2. Types of mechanical gripper:

1. Two fingered gripper.

2. Three fingered gripper.

3. Multifingered gripper.

4. Internal gripper.

5. External gripper.

★ The above mentioned are the common types of mechanical gripper.


Let us discuss all the types one by one.

2.15.2.1. Two fingered gripper with operating mechanism:

★ In most applications, two fingers are sufficient to hold the work piece.
The two fingered gripper uses both pivoting or swinging gripper
mechanism and translation mechanism.

Pivoting or swinging gripper mechanism:

★ This is the most popular mechanical gripper for industrial robots.


★ It is designed for limited shapes of objects, especially cylindrical
work pieces.

★ If the actuator produces linear movement, then pneumatic piston
cylinders are used. This device also contains a pair of slider-crank
mechanisms.

(i) Slider Crank mechanism:

Fig. 2.12: Gripper with Slider Crank Mechanism

★ Figure 2.12 shows the gripper with slider-crank mechanism. When
piston 1 is pushed by pneumatic pressure to the right, the crank
elements 2 and 3 rotate counter-clockwise about fulcrum F1 and
clockwise about fulcrum F2 respectively, when θ < 180°.

★ These rotations produce the grasping action at the extended ends of
the crank elements 2 and 3.

★ The releasing action will be obtained by moving the piston to the


left.

★ An angle θ ranging from 160° to 170° is used for the releasing action.
(ii) Swing block mechanism:

Fig. 2.13: Swing Block Gripper Mechanism

★ Figure 2.13 illustrates the swinging gripper mechanism that uses


piston-cylinder.

★ The sliding rod 1, actuated by the pneumatic piston, transmits motion
by way of the two symmetrically arranged swing-block linkages
1-2-3-4 and 1-2-3’-4’.

★ These linkages 1-2-3-4 and 1-2-3’-4’ are used to grasp or release the
object by means of the subsequent swinging motions of links 4 and
4’ about their pivots F1 and F2.

(iii) Gripper with rotary actuator:

Fig. 2.14: Rotary Actuator Gripper

★ An example of rotary actuator gripper is shown in Figure 2.14.

★ In this design, the actuator is placed at the cross point of the two
fingers.

★ Each finger is connected to the rotor and to the housing of the
actuator respectively.

★ The actuator movement directly produces grasping and releasing


actions.

(iv) Cam actuated gripper:

★ The cam actuated gripper includes a variety of possible designs, one


of which is shown in Fig. 2.15.

Fig. 2.15: Cam Actuated Gripper

★ The cam and follower arrangement, often using a spring-loaded
follower, can provide the opening and closing action of the gripper.

★ The advantage of this arrangement is that the spring action
accommodates different sized objects.
(v) Screw type actuation gripper:

Fig. 2.16: Screw type Actuation Gripper

★ Figure 2.16 indicates a screw type actuation gripper. In this method
the screw is turned by a motor, usually through a speed reduction
mechanism. As the screw rotates, the threaded block moves, causing
the opening and closing of the fingers depending on the direction of
rotation of the screw.

Translation gripper mechanism:

Fig. 2.17: Translation Gripper with Cylinder Piston

★ The translational mechanism is widely used in industrial robots.
Fig. 2.17 shows a simple translation gripper with cylinder and piston.

★ The finger motion corresponds to the piston movement without any
connecting mechanism between them.

★ The disadvantage is that it is difficult to design the desired size of


the gripper, because here the actuator size decides the gripper size.

2.15.3. Mechanical gripper with 3 fingers:

★ The increase in number of fingers and degrees of freedom will greatly


aid the versatility of grippers.

★ The reason for using the three-fingered gripper is its capacity to
grasp the object at three points, enabling both a tighter grip and
the holding of spherical objects of different sizes while keeping the
centre of the object at a specified position.

Fig. 2.18: Mechanical Gripper with Three Fingers

★ The mechanical gripper with three fingers is operated with a
three-point chuck mechanism as shown in Figure 2.18.

★ Each finger motion is performed using ball-screw mechanism.


★ Electric motor output is transmitted to the screws attached to each
finger through bevel gear trains which rotate the screws.

★ When each screw is rotated clockwise or anticlockwise, the
translational motion of each finger is produced, which results in
the grasping-releasing action.

★ The common work piece shape is circular.


2.15.4. Multifingered gripper:

★ A multifingered gripper can achieve manipulation of object with six


degrees of freedom or more by actuating six or more joints.

★ A multiple gripper system is one that has a single robot arm but two
or more grippers as end-of-arm tooling, which can be used
interchangeably in the manufacturing process cell.

★ Multiple fingered gripper enables effective simultaneous execution of


more than two different jobs.

Fig. 2.19: Multi Fingered Gripper

2.15.5. Internal gripper:

Fig. 2.20: Internal Gripper



★ The internal gripping system grips the internal surface of an object
with open fingers.

★ This type of mounting allows the pads to fit into the inside diameter
of the part that it must lift. The pads are pressed against the inside
walls of the part.

★ Frictional force is developed which helps lift the object securely.


2.15.6. External grippers:

Fig. 2.21: External Gripper

Figure 2.21 shows the external gripper.

★ The external gripper grips the exterior surface of the object.

★ It may contain two, three or multiple fingers. The exterior surface
is held tightly or pressed in order to pick and place the object.

2.16. MAGNETIC GRIPPER:

★ Generally magnetic grippers are used extensively on ferrous material.


★ The residual magnetism remaining in the workpiece may cause
problem.

★ The magnetic attraction will tend to penetrate beyond the top sheet
in the stack resulting in the possibility that more than a single sheet
will be lifted by magnet.

Advantages:

1. Variations in part size can be tolerated.

2. Pickup times are very fast

3. They have ability to handle metal parts with holes.

4. Only one surface is needed for gripping.

Disadvantages:

1. It is difficult to pick-up one sheet at a time from a stack.

2. Possibility of change in characteristic of work-piece.

Types of Magnetic grippers:

The common types of magnetic grippers are:

(i) Electromagnetic grippers.

(ii) Permanent magnet gripper.

2.16.1. Electromagnetic gripper:

★ Electromagnetic grippers are easy to control but it needs a source of


D.C power and an appropriate controller.

★ When the part is to be released, the control unit reverses the polarity
at a reduced power level before switching off electromagnet.

★ This will cancel the residual magnetism in the workpiece ensuring a


positive release of the part.

★ The attractive force, P of an electromagnet is found from Maxwell’s


equation,

P = (IN)² ⁄ [25 Ac (Ra + Rm)]

where,

P → Attractive force

IN → Number of amp-turns of coil

Ac → Area of contact with magnet

Ra , Rm → Reluctance of magnetic path of air and metal respectively

Figure 2.22 shows the electromagnetic gripper.

Fig. 2.22: Electromagnetic Gripper
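The attractive force expression can be evaluated numerically. A minimal sketch following the form of the equation given above; all input values are assumed illustrative figures in whatever consistent units the text's constant implies:

```python
def electromagnet_force(amp_turns, contact_area, rel_air, rel_metal):
    """Attractive force P = (I*N)^2 / (25 * Ac * (Ra + Rm)),
    following the form of Maxwell's equation given in the text."""
    return amp_turns ** 2 / (25.0 * contact_area * (rel_air + rel_metal))

# Assumed example values for IN, Ac, Ra and Rm:
print(electromagnet_force(amp_turns=500, contact_area=4.0,
                          rel_air=2.0, rel_metal=3.0))  # 500.0
```

Since the force falls as the total reluctance rises, any air gap between the magnet face and the part sharply reduces the holding force.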



2.16.2. Permanent magnetic gripper:

★ Permanent magnets are often considered for handling tasks in


hazardous environments requiring explosion proof apparatus.

Fig. 2.23: Permanent Magnetic Gripper

★ Permanent magnet do not require external power.


★ When the part is to be released at the end of the handling cycle, in
the case of permanent magnet grippers, some means of separating the
part from the magnet must be provided.

★ The device which separates the part is called a stripper or stripping
device, which is shown in Fig. 2.23.

★ Its function is to mechanically detach the part from the magnet.


Advantages:

★ It does not need an external source of power and can work in
hazardous environments.

★ It reduces the fear of sparks which might cause ignition.



2.17. VACUUM GRIPPERS:

★ Vacuum grippers, also called suction cups, can be used for handling
certain types of objects.

★ Vacuum grippers can handle only flat, smooth, clean objects, so that
a satisfactory vacuum can be formed between the object and the
suction cup.

★ The vacuum gripper’s suction cup is made of elastic material such as


rubber or soft plastic.

★ Some means of removing the air between the cup and the part surface
is needed to create the vacuum; the vacuum pump and the venturi are
the common devices used.

★ The Vacuum pump is piston operated and powered by an electric


motor. It is capable of creating a relatively high vacuum.

★ The Venturi is driven by “shop air pressure”. Its initial cost is less
than that of a vacuum pump and it is relatively reliable because of
its simplicity.

Fig. 2.24: Venturi device for suction cup



★ Figure 2.24 shows the suction cup with venturi device. The lift
capacity of the suction cup depends on the

(i) Area of the cup

(ii) Negative pressure.

The relationship is given by,

F = K P Ac = K Ac (Pa − Pres)

where,

F → Lift force

P → Negative pressure

Ac → Total effective area of the suction cup

K → Co-efficient depending on atmospheric pressure

Pa → Atmospheric pressure

Pres → Residual pressure

Negative pressure is the pressure difference between the inside and


outside of the vacuum cup.
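The lift relation can be checked with representative numbers. A small sketch; the cup area, residual pressure and the value of K below are assumed example figures:

```python
def suction_lift_force(k, cup_area, p_atm, p_residual):
    """Lift force F = K * Ac * (Pa - Pres); the bracketed term is the
    negative pressure between the inside and outside of the cup."""
    return k * cup_area * (p_atm - p_residual)

# Assumed example: K = 1, one cup of 0.005 m^2, Pa = 101300 Pa, Pres = 40000 Pa
print(suction_lift_force(1.0, 0.005, 101300.0, 40000.0))  # 306.5 N
```

Pulling a harder vacuum (lower Pres) or using a larger or additional cup raises the lift force proportionally.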

2.18. ADHESIVE GRIPPERS:

★ In the adhesive gripper, an adhesive substance is used for grasping
the object.

★ These grippers can handle fabrics and other light weight materials.

★ The requirement is that the object can be gripped on one side only
and that other forms of grasping, such as vacuum or magnetic, are
not appropriate.

Limitations:

★ The main limitation of the adhesive gripper is that gripping reliability
is reduced with every successive operation.

★ To overcome this limitation, the adhesive material is loaded in the
form of a continuous ribbon in a feeding mechanism.

2.19. HOOKS, SCOOPS, OTHER MISCELLANEOUS DEVICES:

★ Hooks can be used as the end effector to handle containers and to


load or unload parts hanging from overhead conveyors.

★ The item or object to be handled by a hook must have some sort of
handle, so that the hook can hold it.

SCOOPS:

★ Ladles and scoops are also used to handle certain materials which
are in liquid or powder form.

Limitations:

★ The amount of material being scooped by the robot is difficult to


control.

Others:

★ Other types of grippers include inflatable devices, in which an
inflatable bladder is expanded to grasp the object.

★ This inflatable bladder is fabricated out of elastic material like rubber,


which makes it appropriate for gripping fragile objects.

Fig. 2.25 and Fig. 2.26 show an expanding bladder for gripping an
internal surface.

Fig. 2.25: Bladder Fully Expanded

Fig. 2.26: Bladder Inside the Container



2.20. SELECTION AND DESIGN CONSIDERATIONS OF GRIPPER:

★ Gripper tools are used for spot welding, arc welding, rotating spindle
operation, and other processing applications.

★ The main consideration in selecting a gripper is its grasping power.

The following list is based on Engelberger’s gripper selection factors:

1. The part surface to be grasped must be reachable (i.e.) it must not


be enclosed within a chuck or other holding fixture.

2. Variation in size must be accounted for, and how this might influence
the accuracy of locating the part (i.e.) there might be a problem in
placing a rough casting or forging into a chuck for machining
operations.

3. The gripper design must accommodate the change in size that occurs
between part loading and unloading (i.e.) part size is reduced in
machining and forging operations.

4. The problem of scratching and distorting the part during gripping


should be considered if the part is fragile or has delicate surface.

5. If there is a choice between two different dimensions on a part, then
the larger dimension should be selected for grasping. Holding the part
by its larger surface will give better control and stability.

6. Fingers must be designed to conform to the part shape by using
self-aligning fingers. The choice of self-aligning fingers ensures that
each finger makes contact with the part in more than one place. This
will give better part control and stability.

7. Use of self-replaceable fingers will allow for wear and also for
interchangeability for different part models.

The factors considered for the selection of grippers are given below in
a precise manner.

1. Selection of gripper for part handling:

1. Weight and size.

2. Shape.

3. Change in shape during processing.

4. Tolerances on the part size.

5. Surface condition, protection of delicate surfaces.

2. Selection of gripper for actuation method:

1. Mechanical grasping.

2. Vacuum cup.

3. Magnet.

4. Adhesive, scoops etc.

3. Selection of power and signal transmission:

1. Pneumatic

2. Electrical.

3. Hydraulic.

4. Mechanical.

4. Selection of mechanical gripper:

1. Weight of the object.

2. Method of holding.

3. Co-efficient of friction between fingers and object.

4. Speed and acceleration during motion cycle.



5. Selection of positioning problems:

1. Length of fingers.

2. Inherent accuracy and repeatability of robot.

3. Tolerance on the part size.

6. Selection of grippers on service conditions:

1. Number of actuation during life time of gripper.

2. Repeatability of wear components.

3. Maintenance and serviceability.

7. Selection of grippers on operating environment:

1. Heat and temperature.

2. Humidity, moisture, dirt, chemicals.

8. Selection of gripper on temperature protection:

1. Heat shield.

2. Long fingers.

3. Forced cooling.

4. Use of heat-resistant materials.

9. Selection of gripper on fabrication materials:

1. Strength, rigidity, durability.

2. Fatigue strength.

3. Cost and ease of fabrication.

4. Friction properties of finger surfaces.

5. Compatibility with the operating environment.



10. Others:

1. Use of interchangeable finger.

2. Design standards.

3. Mounting connections and interfacing with robot.

4. Lead time for design and fabrication.

5. Spare parts, maintenance and service.

6. Tryout of the gripper in production.

*********
CHAPTER – 3

SENSORS AND
MACHINE VISION

Requirements of a sensor, principle and applications of the following types


of sensors − Position sensors − Piezo Electric Sensor, LVDT, Resolvers,
Optical encoders, Pneumatic position sensors, Range sensors Triangulation
principles, structured – Lighting Approach − Time of flight, Range finders,
Laser range meters, Touch Sensors, binary sensors, Analog sensors, Wrist
sensors, Compliance sensors, slip sensors, Camera, Frame grabber, Sensing
and Digitizing. Image Data, signal conversion, Image storage, Lighting
Techniques, Image processing and Analysis − Data Reduction, Segmentation,
Feature Extraction, Object Recognition, other Algorithms, Applications −
Inspection, Identification, visual serving and Navigation.

3.1. SENSORS:

In robotics, sensors are used for both internal feedback control and
external interaction with the outside environment.

A sensor is a transducer used to make a measurement of a physical


variable.

Any sensor needs calibration before it can be used as a reliable
measuring device. Calibration is the procedure by which the relationship
between the measured variable and the converted output signal is
established.
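As a minimal illustration of such a calibration, a two-point linear fit can map a raw sensor reading to the measured variable. All numbers below are hypothetical; real calibrations often use many points and a least-squares fit.

```python
# Hypothetical two-point linear calibration: map a raw sensor output
# (e.g. volts) to the physical variable (e.g. mm), assuming linearity.
def calibrate(raw_lo, phys_lo, raw_hi, phys_hi):
    """Return a function converting a raw reading to physical units."""
    slope = (phys_hi - phys_lo) / (raw_hi - raw_lo)
    return lambda raw: phys_lo + slope * (raw - raw_lo)

# Example: 0.5 V corresponds to 0 mm, 4.5 V to 100 mm.
to_mm = calibrate(0.5, 0.0, 4.5, 100.0)
print(to_mm(2.5))  # mid-scale reading -> 50.0 mm
```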

If an industrial robot is to perform the same type of task repeatedly
without constant human supervision, it should be equipped with sensors.

The feedback from various sensors is analyzed through digital
computers and related software.

Sensors can also be used as peripheral devices for the robot.

Most industrial applications require sensors to operate with other
pieces of equipment in the work cell.

In this chapter, Requirements of sensor, types of sensors and machine


vision are discussed in detail.

3.2. REQUIREMENTS OF SENSORS:

In order to choose an appropriate sensor for any need, a number of
characteristics must be considered.

These characteristics will determine the performance, economy, ease of
application and applicability of the sensor.

In many situations, different types of sensors are available for the
same purpose.

Here are certain characteristics to be examined before choosing a sensor.

1. Accuracy:

Accuracy is defined as how close the output of the sensor is to the


expected value.

Accuracy of the measurement should be as high as possible.



The output of the sensing device should properly reflect the input
quantity being measured or sensed.

2. Reliability:

Reliability is the ratio of the number of times a system operates
properly to the number of times it is tried.

The sensor should possess a high reliability.

It should not be subjected to frequent failures during the operation.

3. Sensitivity:

Sensitivity is the ratio of a change in output in response to a change in


input.

Highly sensitive sensors will show large fluctuations in output as a result


of fluctuations in input including noise.

The sensitivity should be as high as possible.

4. Linearity:

Linearity represents the relationship between the input variation and the
output variation.

This means that in a sensor with linear output, a change in input at
any level within the range will produce the same change in the output.

The sensor device should exhibit the same sensitivity over its entire
operating range.

5. Range:

Range refers to the minimum and maximum change in input signal to


which the sensor can respond.

The sensor should have wide operating range and should be accurate
and precise over the entire range.

6. Resolution:

Resolution is the minimum step size within the range of measurement


of the sensor.

In a wire wound potentiometer, it will be equal to the resistance of one


turn of the wire.

In a digital device with n bits, the resolution will be,

Resolution = Full range ⁄ 2^n
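This formula can be checked with a short sketch; the 12-bit encoder over a 360-degree range below is only an example:

```python
# Resolution of an n-bit digital sensor: full range divided by 2^n.
def resolution(full_range, n_bits):
    return full_range / (2 ** n_bits)

# Example: a 12-bit encoder measuring over a 360-degree range.
print(resolution(360.0, 12))  # -> 0.087890625 degrees per count
```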

7. Response time:

Response time is the time that a sensor’s output requires to reach a


certain percentage of the total change.

It is also defined as the time required to observe the change in output


as a result of change in input.

It is usually expressed as a percentage of total change, such as 95%.

8. Repeatability:

Repeatability is a measure of how varied the different outputs are relative


to each other.

Repeatability is generally random and cannot be easily compensated.

Repeatability is a range which can include all results around the nominal
value.

9. Calibration:

The sensor should be easy to calibrate.

The time and work required to accomplish the calibration procedure


should be minimum.

The sensor should not require frequent re-calibration.



10. Cost and ease of operation:

The cost to purchase, install and operate the sensor should be as low as
possible.

Under any circumstance, the installation and operation of the device
should not require a specially trained or skilled operator.

3.3. CLASSIFICATION OF SENSORS:

The function of robot sensors may be divided into two principal


categories:

1. Internal state.

2. External state.

Internal state sensor:

These sensors deal with the detection of variables such as arm joint
position, which are used in robot control.

Internal sensors, as the name suggests, are used to measure the
internal state of a robot.

It measures position, velocity, acceleration etc.

External state sensors:

These sensors deal with the detection of variables such as range,


proximity and touch.

These types of sensors are used to sense the robot's environment.

External state sensors may be further classified as

(i) Contact sensors.

(ii) Non-contact sensors.



Contact sensors:
Contact sensors respond to physical contact; examples are touch, slip
and torque sensors.
Non-contact sensors:

Fig. 3.1: Classification of sensors



Non-contact sensors rely on the response of a detector to variations
in acoustic or electromagnetic radiation.

The most prominent examples of non-contact sensors are range,


proximity and visual properties of the object.

3.4. POSITION SENSORS:

Position sensors measure the position of each joint i.e. the joint angle
of a robot.

The position sensors are used to measure displacements both rotary and
linear movements.

In many cases, such as encoders, the position information may also be
used to calculate the velocity.

3.4.1. Encoder:

Encoder is a digital optical device that converts motion into a sequence


of digital pulses.

By counting a single bit or by decoding a set of bits, the pulses can be


converted to relative or absolute measurement.

The types of encoder are given below.



3.4.1.1. Incremental Linear Encoder:

Fig. 3.2: Incremental Linear Encoder

Working:

Figure 3.2 shows the incremental linear encoder, which has a
transparent scale with an opaque grating.

The thickness of the grating lines and the gap between them are made
equal, of the order of microns.

On one side, the scale is provided with a light source and a condenser
lens.

On the other side, there are light sensitive cells.


The resistance of the cells decreases whenever a beam of light falls on
them.

A pulse is generated each time a beam of light is intersected by the


opaque line. This pulse is fed to the controller, which updates a counter.
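The counting scheme above can be sketched as follows; the 20-micron pitch and the pulse counts are assumed values, not taken from any particular encoder:

```python
# Sketch of how a controller converts incremental-encoder pulses to
# displacement: each pulse advances a counter by one grating pitch.
# The 20-micron pitch below is illustrative, not from a datasheet.
PITCH_UM = 20.0          # grating pitch in microns (assumed)

def position_um(pulse_count, direction=+1):
    """Displacement in microns after counting pulses in one direction."""
    return direction * pulse_count * PITCH_UM

print(position_um(250))   # 250 pulses -> 5000.0 microns (5 mm)
```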

3.4.1.2. Absolute Linear Encoder:

Fig. 3.3: Absolute Linear Encoder

The principle of absolute linear encoder is same as that of the


incremental linear encoder.

The difference is that it gives an absolute value of the distance covered


at any time.

Thus there is less chance of missing pulses at high speed. In this
type, the output is digital.

Figure 3.3 shows the scale, which is marked with a sequence of opaque
and transparent strips.

On this scale, an opaque block represents one and a transparent block
represents zero.

The left-most column shows the binary number 00000, and the next
column shows the binary number 00001.

3.4.1.3. Incremental Rotary Encoder:

Fig. 3.4: Incremental Rotary Encoder

Incremental rotary encoder is similar to that of the linear incremental


encoder with the difference that the gratings are now on a circular disc as
shown in Figure 3.4.

The width of transparent spaces is equal to 20 microns.

The two sets of grating lines on two different circles detect the direction
of rotation, and it also enhances the accuracy of the sensor.

The other circle which has only single grating mark is used for the
measurement of the full circle.

3.4.1.4. Absolute Rotary Encoder:

Table 3.1: Gray code & Binary code samples

Decimal    Binary code    Gray code
   0          0000          0000
   1          0001          0001
   2          0010          0011
   3          0011          0010
   4          0100          0110

In the absolute rotary encoder the circular disk is divided into a number
of circular strips and each strip has definite segments as shown in Figure 3.5.

This sensor directly gives the digital output.



The encoder is directly mounted on the motor shaft or with some gearing
to enhance the accuracy of measurement.

To avoid noise in this encoder, a Gray code is sometimes used.

A gray code allows only one of the binary bits in a code sequence to
change between radial lines.

It avoids the changes in binary output of the absolute encoder when the
encoder oscillates between points.

Table 3.1 gives sample codes. The basic schematic of a rotary encoder
is given in Figure 3.6.

Fig. 3.6: Rotary Encoder
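The single-bit-change property of the Gray code can be illustrated with the standard binary-to-Gray conversion; this is a sketch, not code from any particular encoder controller:

```python
# Binary <-> Gray code conversion as used by absolute encoders:
# adjacent Gray codes differ in exactly one bit.
def to_gray(n):
    return n ^ (n >> 1)

def from_gray(g):
    # Undo the XOR cascade bit by bit.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

for d in range(5):
    print(d, format(to_gray(d), '04b'))
# 0 -> 0000, 1 -> 0001, 2 -> 0011, 3 -> 0010, 4 -> 0110
```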

3.4.2. Linear Variable Differential Transformer (LVDT):

A linear variable differential transformer is the most widely used
displacement transducer.

It is actually a transformer whose core moves along with the distance


being measured and that outputs a variable analog voltage as a result of this
displacement.

A transformer is a device which converts electrical energy into the
same form of energy, but changes the voltage-current ratio.

Apart from the losses, the total input energy to the device is the same
as the total output energy.

A transformer can increase or decrease voltage based on the number of


turns in its coils, the corresponding current changes inversely with voltage.

The electrical energy into one coil creates a flux which induces a voltage
in the second coil proportional to the ratio of the number of turns in the
windings.
Figure 3.7 shows the LVDT construction; it consists of a primary coil,
two secondary coils and a movable core.

Fig. 3.7: LVDT Construction



The primary is excited with an a.c. source. When the core is in its
exact central location, the amplitude of the voltage induced in
secondary 1 will be the same as that in secondary 2.

The secondaries are connected so that their outputs cancel in phase,
and the output voltage will be zero at this point.

Figure 3.8 shows the nature of output voltage as the core is moved to
the left or to the right.

The figure shows that the magnitude of the output voltage is a linear
function of core position, and the phase is determined by the side of
the null position on which the core is located.

However the induction of voltage in the secondary is a function of the


strength of the flux.

Fig. 3.8: LVDT Output Voltage Vs Core Position

If no iron core is present, the flux lines can disperse reducing the strength
of the magnetic field.

In the presence of an iron core, the flux lines are gathered inward,
increasing the strength of the field and thus the induced voltage.

Figure 3.9 shows another sketch of the LVDT.

Advantages:

★ It provides a very high accuracy.

★ It has no friction (i.e.) no sliding contact.

★ Consumes very low amount of power.

★ These transducers can tolerate a high degree of vibration and shock.


Disadvantages:

★ Cannot be operated at high temperatures.

★ It operates only on a.c. signals.

★ For d.c. supply, a separate demodulator network is needed.



★ They are sensitive to magnetic field.


★ Can be operated only for large displacements.
3.4.3. Resolver:

Resolvers are very similar to LVDT in principle, but they are used to
measure an angular motion.

A resolver is also a transformer, where the primary coil is connected
to the rotating shaft and carries an alternating current through slip
rings, as shown in Fig. 3.10.

Fig. 3.10: Schematic of Resolver

Resolvers give an analog signal as their output. They consist of a
rotating shaft (rotor) and a stationary housing (stator) as shown in
Figure 3.11.

Fig. 3.11: Resolver Arrangement



These signals must be converted into digital form through an
analog-to-digital converter before being fed to the computer.

Modern resolvers are available in a brushless form that employs a
transformer to couple the rotor signals from the stator to the rotor.

The primary windings of this transformer will be on the stator, and the
secondary on the rotor.

Other resolvers use the more traditional brushes or slip rings to couple
the signal into the rotor windings.

Most resolvers are specified to work over 2 V to 40 V RMS and at
frequencies from 400 Hz to 110 kHz.

Angular accuracies range from 5 arc − minutes to 0.5 arc − minutes.
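Although the text does not detail the conversion, a resolver's two stator outputs are proportional to the sine and cosine of the shaft angle, and a resolver-to-digital converter essentially recovers the angle from them. A simplified sketch of that recovery:

```python
import math

# A resolver's two stator windings output signals proportional to
# sin(theta) and cos(theta); the shaft angle is recovered with atan2
# (the job normally done by a resolver-to-digital converter).
def shaft_angle(v_sin, v_cos):
    """Shaft angle in degrees, in [0, 360)."""
    return math.degrees(math.atan2(v_sin, v_cos)) % 360.0

print(shaft_angle(math.sin(math.radians(30)), math.cos(math.radians(30))))
# -> 30.0 (approximately)
```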

Advantages:

★ It is reliable.
★ It is accurate.
★ It is robust.
3.4.4. Potentiometer:

Potentiometers are analog devices whose output voltage is proportional


to the position of the wiper.
A potentiometer converts position information into a variable voltage
through a resistor.

As the wiper on the resistor moves due to a change in position, the
proportion of the resistance before or after the point of contact with
the wiper, compared with the total resistance, varies.
Potentiometer are of two types
1. Rotary

2. Linear
3.18 Robotics – www.airwalkpublications.com

These two potentiometers measure rotary and linear motion respectively.
The schematic illustration of rotary and linear potentiometers is shown
in Fig. 3.12(a), (b).

where,

VR → Reference voltage.

V0 → Measured voltage.

L, x, R, a, θ → Physical parameters

Potentiometers are either wire-wound, or a thin-film deposit of
resistive material on a surface.

The major benefit of the thin-film potentiometer is that its output is
continuous and thus less noisy.

When the voltage is applied across the resistive element, the output
voltage between the wiper and the ground is proportional to the ratio of the
resistance on the side of the wiper to the total resistance of the resistive
element, which essentially gives the position of the wiper.

The function of potentiometer can be represented by the function

V0 = K θ

where,

V0 → Output voltage

K → Voltage constant of the potentiometer in volts/radian

θ → position of the pot in radian or mm.
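Inverting V0 = Kθ to recover the joint angle is then straightforward; the constant K below is an assumed calibration value, not from any datasheet:

```python
# Reading a joint angle from a potentiometer using V0 = K * theta.
K = 0.05   # volts per degree (assumed calibration constant)

def joint_angle(v_out):
    """Joint angle in degrees for a measured output voltage."""
    return v_out / K

print(joint_angle(2.5))  # 2.5 V -> 50.0 degrees
```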

Potentiometers are generally used as internal feedback sensors in order


to report the position of joints and links.

A potentiometer may be used alone or together with other sensors such
as encoders.

In that case, the encoder reports the current position of joints and
links, whereas the potentiometer reports the start-up positions.

As a result, the combination of the sensors allows for a minimal input


requirement but maximum accuracy.

Advantages:

★ Potentiometer is relatively inexpensive.

★ Easy to apply.
Disadvantages:

★ They are temperature sensitive, and this affects their accuracy.

★ The wiper contact is a limiting factor; it is subject to wear and
produces electrical noise.

3.4.5. Pneumatic Position Sensor:

Compressed air is used for the operation of pneumatic sensor.

These sensors are used to find the displacement or proximity of the


object.

In this sensor, low pressure air is allowed to pass through a port in front
of the sensor.

In the absence of any close-by object, this air escapes and, in doing
so, reduces the pressure in the sensor output port.

If there is any close-by object, the air cannot easily escape and as a
result, the pressure increases in the sensor output port.

The output pressure from the sensor thus depends on the proximity of
the object.

Figure 3.13 shows the operation of pneumatic sensor.

3.4.6. Optical Encoder:

Optical encoders are either opaque wheels with material removed to
form clear areas (by drilling or cutting), or clear materials such as
glass with printed opaque areas.

Many encoder wheels are also etched, such that they either reflect the
light or do not reflect it.

In the reflective type, the light source and the pick-up sensors are
both on the same side of the wheel.

These sensors are of two types

1. Incremental optical encoder.

2. Absolute encoder.

The operation of these encoders is explained in section 3.4.1.

Optical encoders come in both linear and rotary versions; except for
their type of motion, they are exactly the same and work under the
same principle.

Advantages:

★ They are simple in construction.

★ They are versatile.


3.5. VELOCITY SENSOR:

Velocity sensors measure the velocity or speed either by taking
consecutive position measurements at known intervals and computing the
time rate of change of the position values, or by finding it directly
based on different principles.
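The first approach, differencing consecutive position samples taken at a known interval, can be sketched as:

```python
# Velocity estimated as a finite difference of two position samples
# taken at a known interval dt, as described above.
def velocity(pos_prev, pos_now, dt):
    return (pos_now - pos_prev) / dt

# Joint moved from 10.0 to 10.5 degrees in 0.01 s:
print(velocity(10.0, 10.5, 0.01))  # -> 50.0 degrees/second
```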

3.5.1. Tachometer:

A tachometer directly gives the velocity at any instant of time,
without much computational load.

This measures the speed of rotation of an element.

There are various types of tachometers in use, but a simple design is
based on Fleming's rule, which states that "the voltage produced is
proportional to the rate of change of flux linkage".

A conductor (basically a coil) is attached to the rotating element which


rotates in a magnetic field (stator).

As the speed of the shaft increases, the voltage produced at the coil
terminals also increases.

Figure 3.14 shows the schematic of tachometer.

Magnet is placed on the rotating shaft and a coil on the stator.

The voltage produced is proportional to the speed of rotation of the shaft.

This information is digitised using an analog to digital converter and


passed on to the computer.

3.5.2. Hall-Effect Sensor:

The Hall-effect sensor can also be used as a velocity-measuring sensor.

If a flat piece of conductor material called the Hall chip has a
potential difference applied across its two opposite faces as shown in
Fig. 3.15, the voltage across the perpendicular faces is zero. If a
magnetic field is imposed at right angles to the conductor, a voltage
is generated on the two other perpendicular faces.

Fig. 3.15: Schematic of Hall-effect Sensor

The higher the field value, the higher is the voltage level.


If a ring magnet is provided, the voltage produced is proportional to
the speed of rotation of the magnet.
3.6. ACCELERATION SENSORS:

The acceleration sensors find the accelerations as the time rate of change
of velocities obtained from velocity sensors or calculated from the position
information.
But this is not an efficient way to calculate the acceleration because this
will put a heavy computational load on the computer and that can hamper the
speed of operation of the system.
Another way to measure the acceleration is to measure the force which
is the result of mass times acceleration.

Forces are measured using a strain gauge, for which the formula is,

F = ∆RAE ⁄ RC

where

F → Force

∆R → Change in resistance of the strain gauge

A → Area

E → Elastic modulus of the strain gauge material

R → Original resistance of the gauge

C → Deformation constant of the strain gauge.

The acceleration a is the force divided by the mass m of the
accelerating object:

a = ∆RAE ⁄ RCm
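A direct evaluation of this relation might look as follows; all gauge constants here are purely illustrative, not taken from any real gauge:

```python
# Acceleration from a strain-gauge force reading, a = (dR*A*E)/(R*C*m),
# using the symbols defined above. Values below are purely illustrative.
def acceleration(dR, A, E, R, C, m):
    return (dR * A * E) / (R * C * m)

print(acceleration(dR=0.2, A=1e-6, E=2e11, R=100.0, C=2.0, m=0.5))
# -> approximately 400.0 (units depend on the gauge constants)
```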

Disadvantages:

★ Differentiation is generally not desirable, as any noise in the
measured data will be amplified.

★ To overcome this, integrators are used, which tend to suppress the
noise.

3.7. FORCE SENSORS:

A force sensor is one in which a weight applied to the scale pan
causes a displacement. The displacement is then a measure of the
force.

The types of force sensors are:

1. Strain gauge.

2. Piezoelectric switches.

3. Microswitches.

3.7.1. Strain gauge:

The principle of this sensor is that the elongation of a conductor increases


its resistance.

Typical resistances for strain gauges are 50 − 100 Ω. The increase in
resistance is due to

★ Increase in the length of the conductor.


★ Decrease in the area of the conductor.
Strain gauges are made of electrical conductors, usually of wire or foil,
etched on a base material as shown in Fig. 3.16.

These strain gauges are used to measure the strain. The strains cause
a change in the resistance of the strain gauges, which is measured by
connecting them to a Wheatstone bridge circuit as one of the four
resistances R1 , R2 , R3 , R4 as shown in Fig. 3.17.

The Wheatstone bridge is a cheap and accurate method of measuring strain.

A drawback is its sensitivity to temperature change.

To enhance the output voltage and to cancel the resistance changes due
to temperature change, two strain gauges are used.

3.7.2. Piezoelectric Sensor:

A piezoelectric material uses a phenomenon known as piezoelectric


effect.

This effect explains that when asymmetrical, elastic crystals are deformed
by a force, an electrical potential will be developed within the distorted crystal
lattice as shown in Figure 3.18.

This effect is reversible.

If a potential is applied between the surfaces of the crystal, its
physical dimensions will change.

The magnitude and polarity of the induced charges are proportional to


the magnitude and direction of the applied forces.

Piezoelectric materials include quartz, tourmaline, Rochelle salt and
others.

The range of forces that are measured using Piezoelectric sensors are
from 1 to 20 kN.

These sensors can be used to measure instantaneous change in force


(dynamic force).

3.7.3. Microswitches:

Microswitches are extremely simple, and they are common in robotics.

They are used to cut off the electrical current through a conductor.
They can be used for safety purposes, for determining contact, for
sending signals based on displacements, and for other uses.

Advantage:
★ They are Robust.
★ Simple in design.
★ Inexpensive.
★ Simple in operation.
3.8. EXTERNAL SENSORS:
External sensors are used to learn about the robot’s environment,
especially the objects being manipulated.
External sensors can be divided into the following categories.
★ Contact type.
★ Non-Contact type.
3.8.1. Contact type:

3.8.1.1. Limit Switches:

A limit switch is constructed like an ordinary light switch used in
homes and offices.
It has the same on/off characteristics.
They work using a pressure-sensitive mechanical arm as shown in
Figure 3.19.

When an object applies pressure on the mechanical arm, the switch is


energised.
An object might have an attached magnet that causes a contact to rise
and close when the object passes over the arm as shown in Fig. 3.20.

Fig. 3.20: Operation when object pass over the arm

The pull-up keeps the signal at +V until the switch closes, sending
the signal to ground.
Limit switches can be either normally open (NO) or normally closed
(NC) and may have multi-poles.
Advantage:
★ Limit switches are used in robots to detect the extreme positions of
the motions; when a link reaches its extreme position, they switch
off the corresponding actuator and thus safeguard against any
possible damage to the mechanical structure of the robot arm.
Disadvantages:
★ Subject to mechanical failure.
★ Their mean time between failure is low compared to non-contact
sensors.

★ Speed of operation is low.



3.8.1.2. Slip Sensors:

A slip sensor is used for sensing both the direction and the magnitude
of slip.


A slip sensor is shown in Figure 3.21.

It consists of a free-moving dimpled ball which deflects a thin rod
mounted on the axis of a conductive disk.
A number of electrical contacts are evenly spaced under the disk.
Ball rotation resulting from an object slipping past the ball causes the
rod and disk to vibrate at a frequency which is proportional to the speed of
the ball.
The direction of ball rotation determines which of the contacts touch the
disk as it vibrates, pulsing the corresponding electrical circuits and thus
providing signals that can be analyzed to determine the average direction of
the slip.

3.8.1.3. Torque Sensor:

Torque sensors are primarily used for measuring the reaction forces
developed at the interface between mechanical assemblies.

The principal approaches for doing this are joint and wrist sensing.

Joint Sensor:

A joint sensor measures the cartesian components of force and torque


acting on a robot joint and adds them vectorially. For a joint driven by a DC
motor, sensing is done simply by measuring the armature current.

Wrist Sensor:

The purpose of a force sensing wrist is to provide information about


three components of force (Fx , Fy , Fz) and three moments (Mx , My , Mz)
being applied at the end of the arm.

Based on sensory information and calculations carried out by the robot
controller, the forces and moments are resolved into their six
components.

The robot controller can obtain the exact amount of forces and moments
being applied at the wrist which can be used for a number of applications.

A robot equipped with a force-sensing wrist plus the proper computing
capacity could be programmed to accomplish these kinds of operations.

Most wrist force sensors function as transducers for transforming forces


and moments exerted at the hand into measurable deflection or displacements
at the wrist.

Figure 3.22 shows an example of a wrist sensor which uses eight pairs
of semiconductor strain gauges mounted on four deflection bars, with
one gauge on each side of a deflection bar.

Since the eight pairs of strain gauges are oriented normal to the X, Y
and Z axes of the force coordinate frame, the three components of
force and moment can be determined by properly adding and subtracting
the voltages.

Advantages:

★ They are small.


★ Sensitive.
★ Light in weight.
★ Relatively compact in design.
3.8.1.4. Touch Sensors:

Touch sensors are used in robotics to obtain information associated with


contact between a manipulator hand and object in the work space.

Touch sensors are used for,

1. Object location and recognition.

2. To control the force exerted by a manipulator on a given object.



Touch sensors are classified into two types:

(i) Binary sensor.

(ii) Analog sensor.

3.8.1.4.1. Binary Sensor:

Binary sensors are basically switches which respond to the presence or


absence of an object.

The switches used are usually microswitches.

Figure 3.23 shows the simple arrangement of the binary sensors.

It is constructed in such a way that, a switch is placed on the inner


surface of each finger of a manipulator hand.

This type of sensing is useful for determining if a part is present between


the fingers.

When moving the hand over an object and subsequently making contact
with its surface, it is also possible to center the hand over the
object for grasping and manipulation.

Multiple binary touch sensors can be used on the inside surface of each
finger to provide further tactile information.

It can also be mounted on the external surface of a manipulator hand


to provide control signals useful for guiding the hand throughout the work
cell.

Advantages:

★ Simple in construction.

★ Easy to operate.
3.8.1.4.2. Analog Sensor:

Analog sensors give an output signal proportional to the local force.

Figure 3.24 shows a simple analog touch sensor.



The simplest form of these devices consists of a spring-loaded rod,
which is mechanically linked to a rotating shaft in such a way that
the displacement of the rod due to a lateral force results in a
proportional rotation of the shaft.
The rotation is then measured continuously using a potentiometer or
digitally using a code wheel.
The spring constant yields the force corresponding to a given
displacement.

3.8.2. Non Contact type:

Non contact sensors rely on the response of a detector to variations in


an acoustic or an electromagnetic radiation.
The common non-contact sensors are:
1. Range sensor.

2. Proximity sensor.

3. Ultrasonic sensor.

4. Laser sensor.

3.8.2.1. Range Sensor:

A range sensor measures the distance from a reference point to an
object in the field of operation of the sensor.

Range sensors are used for robot navigation and obstacle avoidance,
where interest lies in estimating the distance to the closest objects, to more
detailed applications in which the location and general shape characteristics of
objects in the work cell of a robot are desired.

The common range sensors are:

1. Triangulation

2. Structured lighting approach.

3. Time-of-flight range finders.



1. Triangulation:

Triangulation is one of the simplest methods for measuring range.


Figure 3.25 shows a narrow beam of light which is swept over the
surface.

The sweeping motion is in the plane defined by the line from the
object to the detector and the line from the detector to the source.
If a detector is focused on a small portion of the surface then, when
the detector sees the light spot, its distance D to the illuminated
portion of the surface can be calculated from the geometry of
Figure 3.25, since the angle of the source with the baseline and the
distance B between the source and the detector are known.
The above method yields a point measurement. If the source-detector
arrangement is moved in a fixed plane, then it is possible to obtain a
set of points whose distances from the detector are known.
These distances are then easily transformed into three-dimensional
co-ordinates by keeping track of the location and orientation of the
detector as objects are scanned.
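Under one common triangulation geometry, with the detector axis perpendicular to the baseline, the range follows directly from the baseline B and the source angle. This is a sketch under that assumption; the exact formula depends on the setup in Fig. 3.25:

```python
import math

# Triangulation sketch: baseline B between source and detector, source
# beam at angle theta to the baseline, detector axis perpendicular to
# the baseline. Under this assumed geometry, D = B * tan(theta).
def range_triangulation(B, theta_deg):
    return B * math.tan(math.radians(theta_deg))

print(range_triangulation(0.1, 60.0))  # 0.1 m baseline -> ~0.173 m
```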

2. Structured Lighting Approach:


This approach consists of projecting a light pattern onto a set of objects
and using the distortion of the pattern to calculate the range.
Figure 3.26 shows the structured lighting approach. Specific range
values are computed by first calibrating the system. Figure 3.26(b)
shows the top view of the arrangement in Figure 3.26(a).

In this arrangement, the light source and camera are placed at same
height, and the sheet of light is perpendicular to the line joining the origin of
the light sheet and the center of the camera lens.

The vertical plane containing this line is called as reference plane.

The reference plane is perpendicular to the sheet of light, and any vertical
flat surface that intersects the sheet will produce a vertical strip of light, in
which every point will have the same perpendicular distance to the reference
plane.

The need for this arrangement is to position the camera so that every
such vertical strip also appears vertical in image plane.

The calibration procedure consists of measuring the distance B between
the light source and the lens center, and then determining the angles
αc and α0.

Once these quantities are known, it follows from elementary geometry
(Fig. 3.26(b)) that

d = λ tan θ .... (1)

where,

λ → Focal length of lens.

θ = αc − α0 .... (2)

For an M-column digital image, the distance increment dk between the
columns is given by

dk = kd ⁄ (M ⁄ 2) = 2kd ⁄ M .... (3)

for 0 ≤ k ≤ M ⁄ 2. The angle αk made by the projection of an arbitrary
stripe is easily obtained from

αk = αc − θ′k .... (4)



where,

tan θ′k = (d − dk) ⁄ λ .... (5)

By using eqn. (3),

θ′k = tan⁻¹ [ d (M − 2k) ⁄ Mλ ] .... (6)

where 0 ≤ k ≤ M ⁄ 2.

For the remaining values of k, we have

αk = αc + θ″k .... (7)

where,

θ″k = tan⁻¹ [ d (2k − M) ⁄ Mλ ] .... (8)

for M ⁄ 2 ≤ k ≤ (M − 1).

From Figure 3.26(b), the perpendicular distance Dk between an
arbitrary light stripe and the reference plane is given by,

Dk = B tan αk .... (9)

where,

0 ≤ K ≤ M − 1, as αK is given either in eqn. (4) or (7). From geometry


we obtain

 Dc 
αC = tan− 1   .... (10)
 B 

In order to determine α0, we move the surface close to the reference


plane until its light stripe is imaged at y = 0 on the image plane.

We then measure D0, and

        α0 = tan⁻¹ (D0 ⁄ B)                        .... (11)

This completes the calibration procedure.

Once the calibration is completed, the distance associated with every
column in the image is computed using eqn. (9).
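The calibration relations above can be sketched in a short program. The numerical values below (baseline B, focal length λ, angles αc and α0, image width M) are hypothetical, chosen only to illustrate eqns. (1)–(9):

```python
import math

# Hypothetical calibration values, for illustration only
B = 1.0                       # baseline: light source to lens centre, m
lam = 0.05                    # focal length of the lens
alpha_c = math.radians(35.0)  # angle to the reference stripe
alpha_0 = math.radians(20.0)  # angle found during calibration
M = 512                       # number of image columns

theta = alpha_c - alpha_0     # eqn (2)
d = lam * math.tan(theta)     # eqn (1)

def column_range(k):
    """Perpendicular distance D_k of the stripe imaged in column k
    from the reference plane, using eqns (4)-(9)."""
    if k <= M / 2:
        alpha_k = alpha_c - math.atan(d * (M - 2 * k) / (M * lam))  # (4)-(6)
    else:
        alpha_k = alpha_c + math.atan(d * (2 * k - M) / (M * lam))  # (7)-(8)
    return B * math.tan(alpha_k)                                    # eqn (9)
```

As a check, the centre column (k = M⁄2) sees the reference stripe itself, so its range reduces to B tan αc, and column 0 reduces to B tan α0.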

3. Time of flight range finders: (Laser Range finder)

Time-of-flight ranging consists of sending a signal from a transmitter
that bounces back from an object and is received by a receiver.

The distance between the object and the sensor is half the distance
traveled by the signal, which can be calculated by measuring the time of
flight of the signal and knowing its speed of travel.

A time-of-flight range finder uses a laser to determine the range by
measuring the time it takes for an emitted pulse of light to return
coaxially (along the same path) from the reflecting surface.

The distance of the surface is given by the simple relationship D = CT⁄2,

where,

        T → Pulse transit time

        C → Speed of light.
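As a quick numerical sketch of the D = CT⁄2 relationship (not tied to any particular range-finder hardware):

```python
C = 3.0e8  # speed of light, m/s

def tof_distance(t):
    """Range D = C*T/2 for a round-trip pulse transit time T in seconds."""
    return C * t / 2.0
```

For example, a pulse returning after 20 ns corresponds to a range of 3 m.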

A pulsed-laser system produces a two-dimensional array with values
proportional to distance.

The 2-D scan is accomplished by deflecting the laser light via a rotating
mirror.

The working range of this device is on the order of 1 to 4 m, with an
accuracy of ± 0.25 cm.

An alternative is to use a continuous-beam laser and measure the delay
(phase shift) between the outgoing and returning beams.

This concept is shown in Figure 3.27.

A beam of laser light of wavelength λ is split into two beams.

★ One, called the reference beam, travels the distance L.

★ The other travels to the object at distance D.

The total distance travelled by the reflected beam is
D′ = L + 2D.

Limitations:

★ The time measurement must be very fast to be accurate.

★ For small distance measurements, the wavelength of the signal must be
very small.

3.8.2.2. Proximity Sensors:

Proximity sensors have a binary output which indicates the presence of
an object within a specified distance interval.

Proximity sensors are used in robotics for near-field work in connection
with object grasping or avoidance.

The common proximity sensors are:

1. Inductive sensor.

2. Hall-effect sensor.

3. Capacitive sensor.

4. Ultrasonic sensor.

5. Optical proximity sensor.

3.8.2.2.1. Inductive Sensor:

Sensors based on a change of inductance due to the presence of a
metallic object are among the most widely used industrial proximity
sensors.

The principle of operation of the inductive sensor is shown in Figure 3.28.



The inductive sensor basically consists of a wound coil located next to
a permanent magnet packaged in a simple, rugged housing.

Bringing the sensor into close proximity to a ferromagnetic material
causes a change in the position of the flux lines of the permanent magnet,
as shown in Fig. 3.28(b).

Under static conditions, there is no movement of the flux lines and no
current is induced in the coil.

As a ferromagnetic object enters or leaves the field of the magnet, the
resulting change in the flux lines induces a current pulse whose amplitude
and shape are proportional to the rate of change of flux.

Since the sensor requires motion to produce an output waveform, one
approach for generating a binary signal is to integrate the waveform.

The binary output remains low as long as the integral value remains
below a specified threshold, and then switches to high when the threshold
is exceeded.

3.8.2.2.2. Hall-effect Sensor:

The Hall effect relates the voltage between two points in a conducting
or semiconducting material to a magnetic field across the material.

The explanation of the Hall-effect sensor is given in section 3.5.2.

3.8.2.2.3. Capacitive Sensor:

Inductive and Hall-effect sensors detect only ferromagnetic materials,
whereas the capacitive sensor is capable of detecting all solid and liquid
materials.

The basic components of a capacitive sensor are shown in Figure 3.29.

The sensing element is a capacitor composed of a sensitive electrode
and a reference electrode.

A cavity of dry air is usually placed behind the capacitive element to
provide isolation.

The rest of the sensor consists of electronic circuitry which can be
included as an integral part of the unit.

There are a number of electronic approaches for detecting proximity
based on a change in capacitance.

One simple method includes the capacitor as part of an oscillator
circuit designed so that the oscillator starts only when the capacitance of
the sensor exceeds a predefined threshold value.

The start of oscillation is then translated into an output voltage which
indicates the presence of an object.

This method provides a binary output whose triggering sensitivity
depends on the threshold value.

3.8.2.2.4. Ultrasonic Sensor:

An ultrasonic range finder transmits an ultrasonic pulse over a short
time period; since the speed of sound is known for a specified medium, a
simple calculation involving the time interval between the outgoing pulse
and the return echo yields an estimate of the distance to the reflecting
surface.

Figure 3.30 shows the typical ultrasonic transducer used for proximity
sensing.
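A minimal sketch of the echo-timing calculation. The temperature-dependent speed-of-sound approximation (331.4 + 0.6T m/s in air) is a standard engineering rule of thumb, not taken from the text:

```python
def ultrasonic_range(echo_time, temp_c=20.0):
    """Distance from the interval between the outgoing pulse and the echo.
    Speed of sound in air is approximated as 331.4 + 0.6*T m/s at T deg C;
    the echo covers the round trip, hence the division by two."""
    v = 331.4 + 0.6 * temp_c
    return v * echo_time / 2.0
```

At 20 °C an echo delay of 10 ms corresponds to a reflecting surface about 1.72 m away.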

The basic element is an electroacoustic transducer, often of the
piezoelectric ceramic type.

The resin layer protects the transducer against humidity, dust and other
environmental factors.

It also acts as an acoustical impedance matcher.

The same transducer is generally used for both transmitting and
receiving, and fast damping of the acoustic energy is necessary to detect
objects at close range.

This is accomplished by providing acoustic absorbers and by decoupling
the transducer from its housing.

The housing is designed so that it produces a narrow acoustic beam for
efficient energy transfer and signal directionality.

A set of wave form is shown in the Figure 3.31.



Waveform A is the gating signal used to control transmission.

Waveform B shows the output signal as well as the resulting echo signal.

Waveform C results either upon transmission or reception.

An echo received while signal D is high produces the signal shown in
E, which is reset to low at the end of a transmission pulse in signal A.

Signal F is set high on the positive edge of a pulse in E, and is reset
to low when E is low and a pulse occurs in A.

That is, F is the output of an ultrasonic sensor operating in a binary
mode.

Disadvantage:

★ Ultrasonic sensors cannot be used with surfaces such as rubber and
foam that do not reflect sound waves in echo mode.

3.8.2.2.5. Optical Proximity Sensor:

An optical proximity sensor consists of a light source, called an emitter,
and a receiver which senses the presence or absence of light.

The receiver is usually a phototransistor, and the emitter is usually an
LED.

The combination of the two creates a light sensor and is used in many
applications, including optical encoders.

As a proximity sensor, the device is set up such that the light emitted
by the emitter is not received by the receiver unless an object is close by.

Figure 3.32 is an illustration of an optical proximity sensor.

Unless a reflective object is within range of the switch, the light is not
seen by the receiver, and therefore there is no signal.

3.9. ACQUISITION OF IMAGES:

There are two types of vision cameras:

1. Analog camera.

2. Digital camera.

Analog cameras are no longer very common, but some are still around;
they used to be standard at television stations.

Digital cameras are much more common and are mostly similar to each
other.

A video camera is a digital camera with an added video-tape recording
section.

3.9.1. Vidicon Camera (Analog Camera):

Figure 3.33 shows the scheme of a vidicon camera.

An image is formed on the glass faceplate, which has its inner surface
coated with two layers of material.

The first layer is a transparent signal electrode film deposited on the
inner surface of the faceplate.

The second is a thin layer of photosensitive material deposited over the
conducting film, consisting of small areas of high density.

Each area produces a decreasing electrical resistance in response to
increasing illumination.

A charge is generated in each small area upon illumination.

This electrical charge pattern corresponds to the image formed on the
faceplate and is a function of the light intensity over a specified time.

Once the light-sensitive charge is generated, it is read out by scanning
the photosensitive layer with an electron beam to produce a video signal;
the beam is controlled by deflection coils mounted on the tube.

The electron beam deposits enough electrons to neutralize the
accumulated charge, and the resulting flow of electrons causes a current at
the video signal electrode.

The magnitude of the signal is proportional to the light intensity and
the time for which the area is scanned.

The faceplate is scanned from left to right and top to bottom.

To avoid flickering of the image, the frame is divided into two interlaced
fields, each scanned at twice the frame rate.

The first field of each frame scans the odd lines, the second scans the
even lines.

The camera output is a continuous voltage signal for each line scanned;
it is then sampled and quantized before being stored as a series of sampled
voltages in the memory of a computer.

3.9.2. Digital Camera:

A digital camera is based on solid-state technology.

The camera contains a set of lenses used to project the area of interest
onto the image area of the camera.

The main part of the camera is a solid-state silicon wafer image area
that has hundreds of thousands of extremely small photosensitive areas,
called photosites, printed on it.

Each small area of the wafer is a pixel.

As the image is projected onto the image area, a charge is developed at
each pixel location of the wafer that is proportional to the intensity of light
at that location.

The wafer may have as many as 520,000 pixels in an area with
dimensions of a fraction of an inch.

Advantages:

★ Light weight.
★ Small size.
★ Longer life.
★ Low power consumption.
3.10. MACHINE VISION:

Machine vision is an important sensor technology with potential
application in many industrial operations.

Machine vision is concerned with the sensing of vision data and its
interpretation by a computer.

A typical vision system consists of a camera, digitizing hardware, a
digital computer, and the hardware and software necessary to interface
them.

The operation of a vision system consists of the following three
functions:

1. Sensing and digitizing image data.

2. Image processing and analysis.

3. Application.

These functions of machine vision are given in the following
sub-divisions:

1. Sensing and digitizing:

(i) Image data:
    (a) Analog camera.
    (b) Digital camera.


(ii) Signal conversion:
    (a) Sampling.
    (b) Quantization.
    (c) Encoding.

(iii) Image storage/frame grabber.

(iv) Lighting techniques:
    (a) Front light source.
    (b) Back light source.
    (c) Other miscellaneous devices.

2. Image processing and analysis:

(i) Image data reduction.

(ii) Segmentation:
    (a) Thresholding.
    (b) Region growing.
    (c) Edge detection.

(iii) Feature extraction.

(iv) Object recognition:
    (a) Template matching technique.
    (b) Structural technique.

3. Robot applications:

(i) Inspection.

(ii) Identification.

(iii) Visual servoing and navigation.



3.11. SENSING AND DIGITIZING FUNCTION IN MACHINE VISION:

Image sensing requires some type of image formation device, such as a
camera, and a digitizer which stores a video frame in the computer memory.

The sensing and digitizing functions are divided into several steps.

The initial step involves capturing the image of the scene with the vision
camera.

The image consists of relative light intensities corresponding to the
various portions of the scene.

These light intensities are continuous analog values which must be
sampled and converted into digital form.

The second step, digitizing, is achieved by an analog-to-digital (A/D)
converter.

The A/D converter is either a part of a digital video camera or the front
end of a frame grabber; the choice depends on the hardware of the system.

The third step employs a frame grabber, an image storage and
computation device which stores a given pixel array.

The frame grabber can vary in capability, from one which simply stores
an image to one with computation capability.

The stored image is then processed and analyzed by the combination of
the frame grabber and the vision controller.

3.11.1. Imaging Devices:

There are a variety of commercial imaging devices available.

Camera technologies available include the older black and white vidicon
camera and the newer, second-generation solid-state cameras.

Solid-state cameras used for robot vision include Charge-Coupled
Devices (CCD), Charge Injection Devices (CID) and silicon bipolar sensor
cameras.

3.11.1.1. Vidicon Camera:

A brief explanation of this camera is given in section 3.9.

3.11.1.2. CCD:

The other approach for obtaining a digitized image is the use of the
charge-coupled device (CCD).

In this technology, the image is projected by a video camera onto the
CCD, which detects, stores and reads out the accumulated charge generated
by the light on each portion of the image.

Light detection occurs through the absorption of light on a
photoconductive substrate (e.g. silicon).

Charges accumulate in isolated wells under positive control electrodes
due to the voltages applied to those electrodes.

Each isolated well represents one pixel and can be transferred to an
output storage register by varying the voltage on the metal control
electrodes, as shown in Figure 3.34(a, b).

Advantages:

★ Light weight.

★ Small size.

★ Longer life.

★ Less power consumption.


3.11.2. Lighting Techniques:

Choosing the proper lighting technique is important in machine vision.

Good illumination of the scene is important because of its effect on the
level of complexity of the image processing algorithms required.

Poor lighting makes the task of interpreting the scene more difficult.

Proper lighting techniques should provide high contrast and minimize
specular reflections and shadows, unless these are specifically designed into
the system.

The basic types of lighting devices used in machine vision may be
grouped into the following categories:

1. Diffuse surface devices:

Examples of diffuse surface devices are the typical fluorescent lamps
and light tables.

2. Condenser projectors:

A condenser projector transforms an expanding light source into a
condensing light source.

This is useful for imaging optics.

3. Flood or spot projectors:

Flood lights and spot lights are used to illuminate surface areas.

4. Collimators:

Collimators are used to provide a parallel beam of light on the subject.

5. Imagers:

Imagers such as slide projectors and optical enlargers form an image of
the target at the object plane.

Various illumination techniques have been developed to use these
lighting devices. The common types are listed below.

3.11.2.1. Front light source:

(i) Front illumination:

Uses:

The area is flooded so that the surface defines the features of the image.

(ii) Specular illumination (dark field):

Uses:

Used for surface defect recognition (dark background).

(iii) Specular illumination (light field):

Uses:

Used for surface defect recognition, with the camera in line with the
reflected rays (light background).

(iv) Front imager:

Uses:

1. Structured light applications.

2. Imaged light superimposed on the object surface − the light beam is
displaced as a function of thickness.

3.11.2.2. Back light source:

(i) Rear illumination (lighted field):

Function:

Uses a surface diffuser to silhouette features.

Uses:

Used in parts inspection and basic measurement.

(ii) Rear illumination (condenser):

Function:

Produces high-contrast images.

Uses:

Useful for high magnification applications.

(iii) Rear illumination (collimator):

Produces a parallel light ray source for cases where the features of the
object do not lie in the same plane.

(iv) Rear offset illumination:

Uses:

Useful to produce feature highlights when the feature is in a transparent
medium.

3.11.2.3. Other miscellaneous devices:

(i) Beam splitter:

Function:

Transmits light along the same optical axis as the sensor.

Uses:

It can illuminate objects that are difficult to view.

(ii) Split mirror:

Uses:

Similar to the beam splitter, but more efficient, with lower intensity
requirements.

(iii) Non-selective redirectors:

Uses:

The light source is redirected to provide proper illumination.

(iv) Retroreflectors:

Function:

A device that redirects incident rays back to the sensor.

Uses:

The incident angle is capable of being varied.

Provides high contrast for objects between the source and reflector.

(v) Double intensity:

Function:

A technique used to increase illumination intensity at the sensor.

Uses:

Used with transparent media and retroreflectors.
3.11.3. Analog-to-Digital Conversion:

For a camera utilizing vidicon tube technology, it is necessary to
convert the analog signal to a digital signal.

The A/D conversion process involves taking an input voltage signal and
producing an output that represents the voltage signal in the digital memory
of a computer.

A/D conversion consists of three phases:

1. Sampling.

2. Quantization.

3. Encoding.

3.11.3.1. Sampling:

A given analog signal is sampled periodically to obtain a series of
discrete-time analog signals.

An example of this process is shown in Figure 3.35.

By setting a specified sampling rate, the analog signal can be
approximated by the sampled digital output.

How well we approximate the analog signal is determined by the
sampling rate of the A/D converter.

The sampling rate should be at least twice the highest frequency in the
video signal if we wish to reconstruct that signal exactly.

3.11.3.2. Quantization:

Each sampled discrete-time voltage level is assigned to one of a finite
number of defined amplitude levels.

These amplitude levels correspond to the gray scale used in the system.

The predefined amplitude levels are characteristic of a particular A/D
converter and consist of a set of discrete voltage values. The number of
quantization levels is defined by

        Number of quantization levels = 2ⁿ

where,

        n → Number of bits of the A/D converter.

A larger number of bits enables a signal to be represented more precisely.

Problem 3.1: A continuous video voltage signal is to be converted into a
discrete signal. The range of the signal after amplification is 0 to 5 V. The
A/D converter has 8-bit capacity. Determine the number of quantization
levels, the quantization level spacing, the resolution and the quantization
error.

Solution:

(i) Number of quantization levels:

For an 8-bit capacity,

        Quantization levels = 2ⁿ = 2⁸ = 256.

(ii) The resolution:

The A/D converter resolution

        = 1 ⁄ 256 = 0.0039

        Resolution = 0.39%.

(iii) Quantization level spacing:

For the 5 V range,

        = 5 V ⁄ 2⁸

        Level spacing = 0.0195 V.

(iv) Quantization error:

        = ± ½ (0.0195 V)

        Error = ± 0.00975 V.
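The computations of Problem 3.1 can be checked directly:

```python
n_bits = 8
v_range = 5.0

levels = 2 ** n_bits            # number of quantization levels = 2**n
resolution = 1 / levels         # as a fraction of full scale (about 0.39 %)
spacing = v_range / levels      # quantization level spacing, volts
error = spacing / 2             # quantization error is +/- half a spacing
```

This gives 256 levels, a spacing of about 0.0195 V and an error of about ±0.0098 V, matching the worked answers above.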

3.11.3.3. Encoding:

The quantized amplitude levels must be changed into digital code.

This process, termed encoding, involves representing an amplitude level
by a binary digit sequence.

The ability of the encoding process to distinguish between various
amplitude levels is a function of the spacing of each quantization level.

Given the full-scale range of an analog video signal, the spacing of each
level is defined by

        Quantization level spacing = full-scale range ⁄ 2ⁿ

        Quantization error = ± ½ (quantization level spacing).
2

3.11.4. Image Storage:

Once A/D conversion is completed, the image is stored in computer
memory, typically called a frame buffer.

This frame buffer is a part of the frame grabber.

Various techniques have been developed to acquire and access digital
images.

The frame grabber is one example of a video data acquisition device
that will store a digitized picture and acquire it in 1⁄30 s.

Digital frames are typically quantized to 8 bits per pixel.

However, a 6-bit buffer is often adequate, since the average camera
system cannot produce 8 bits of noise-free data.

A combination of row and column counters is used in the frame grabber,
synchronized with the scanning of the electron beam in the camera.

Thus each position on the screen is uniquely addressed.

Such frame grabber techniques have become extremely popular and are
used frequently in vision systems.

3.12. IMAGE PROCESSING AND ANALYSIS:

To accomplish image processing and analysis, the vision system
frequently must be trained.

In training, information is obtained on prototype objects and stored as
computer models.

The information gathered during training consists of features such as
the area of the object, its perimeter length, major and minor diameters, and
similar features.

During subsequent operation of the system, feature values computed on
unknown objects viewed by the camera are compared with the computer
models to determine if a match has occurred.

These techniques include:

1. Image data reduction.

2. Segmentation.

3. Feature extraction.

4. Object recognition.

3.12.1. Image Data Reduction:

In image data reduction, the objective is to reduce the volume of data.

As a preliminary step in the data analysis, the following two schemes
have found common usage for data reduction:

1. Digital conversion.

2. Windowing.

Digital Conversion:

Digital conversion reduces the number of gray levels used by the
machine vision system.

Depending on the requirements of the application, digital conversion can
be used to reduce the number of gray levels by using fewer bits to represent
the pixel light intensity.

Four bits would reduce the number of gray levels to 16.

This kind of conversion significantly reduces the magnitude of the
image processing problem.
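A minimal sketch of gray-level reduction by dropping low-order bits (the function name is illustrative):

```python
def reduce_gray_levels(pixels, bits):
    """Map 8-bit intensities (0-255) onto 2**bits gray levels by
    dropping the least significant bits of each pixel."""
    shift = 8 - bits
    return [p >> shift for p in pixels]
```

With bits = 4, every pixel falls into one of 16 levels, as described above.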

Windowing:

Windowing involves using only a portion of the total image stored in
the frame buffer for image processing and analysis.

This portion is called the window.

A rectangular window is selected to surround the component of interest,
and only pixels within the window are analyzed.

The rationale for windowing is that proper recognition of an object
requires only certain portions of the total scene.
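Windowing amounts to cropping the stored pixel array; a sketch, with the image represented as a list of rows:

```python
def window(image, top, left, height, width):
    """Return the rectangular window of a row-major image (list of rows);
    only these pixels would then be analyzed."""
    return [row[left:left + width] for row in image[top:top + height]]
```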

3.12.2. Segmentation:

Segmentation encompasses various methods of data reduction.

In segmentation, the objective is to group areas of an image having
similar characteristics or features into distinct entities representing parts of
the image.

The important techniques of segmentation are:

1. Thresholding.

2. Region growing.

3. Edge detection.

1. Thresholding:

Thresholding is a binary conversion technique in which each pixel is
converted into a binary value, either black or white.

This is accomplished by utilizing a frequency histogram of the image
and establishing what intensity (gray level) is to be the border between
black and white.

It should be pointed out that the histogram is only one of a large
number of ways to determine a threshold for an image.
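A minimal sketch of histogram computation and binary thresholding; the threshold t is assumed to have been chosen already (e.g. at the valley of a bimodal histogram):

```python
def histogram(pixels, levels=256):
    """Frequency histogram of pixel intensities."""
    h = [0] * levels
    for p in pixels:
        h[p] += 1
    return h

def threshold(pixels, t):
    """Binary conversion: 1 (white) if intensity exceeds t, else 0 (black)."""
    return [1 if p > t else 0 for p in pixels]
```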

In some cases a single global threshold is not possible, and a local
thresholding method may be employed.

Thresholding is the most widely used technique for segmentation in
industrial vision applications.

The reasons are that it is fast and easily implemented, and that the
lighting is usually controllable in an industrial setting.

Once a threshold is established for a particular image, the next step is
to identify particular areas associated with objects within the image.

Such regions usually possess uniform pixel properties computed over
the area.

The pixel properties may be multidimensional; i.e., there may be more
than a single attribute that can be used to characterize each pixel.

2. Region growing:

Region growing is a collection of segmentation techniques in which
pixels are grouped in regions called grid elements.

Defined regions can then be examined as to whether they are
independent or can be merged with other regions by means of an analysis
of the difference in their average properties and spatial connectedness.

For example, consider the image shown in Figure 3.36. To differentiate
between the object and the background, assign 1 to any grid element
occupied by the object and 0 to background elements.

For a simple image, such as a dark blob on a light background, a runs
technique can provide useful information.

For more complex images, this technique may not provide an adequate
partition of the image into a set of meaningful regions.

Such regions might contain pixels that are connected to each other and
have similar attributes.

Procedure for the region growing technique in complex images:

1. Select a pixel that meets a criterion for inclusion in a region. In the
simplest case, this could mean: select a white pixel and assign it a value
of 1.

2. Compare the selected pixel with all adjacent pixels. Assign an
equivalent value to adjacent pixels if an attribute match occurs.

3. Move to an equivalent adjacent pixel and repeat the process until no
new pixel can be added to the region.
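The three-step procedure above can be sketched as a simple 4-connected region grower; the queue-based traversal is one common way to implement step 3:

```python
from collections import deque

def grow_region(image, seed):
    """Grow a 4-connected region from a seed pixel: collect every
    connected pixel whose value matches the seed's (the attribute match)."""
    rows, cols = len(image), len(image[0])
    value = image[seed[0]][seed[1]]          # step 1: seed meets the criterion
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        # step 2: compare with all 4-adjacent pixels
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and (nr, nc) not in region and image[nr][nc] == value:
                region.add((nr, nc))         # step 3: add and continue
                frontier.append((nr, nc))
    return region
```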

3. Edge detection:

Edge detection considers the intensity change that occurs in the pixels
at the boundary or edges of a part.

Given that a region of similar attributes has been found, but the
boundary shape is unknown, the boundary can be determined by a simple
edge-following procedure.

An example of edge detection is shown in Figure 3.37.

For a binary image, the procedure is to scan the image until a pixel
within the region is encountered.

For a pixel within the region, turn left and step; otherwise, turn right
and step.

The procedure is stopped when the boundary has been traversed and the
path has returned to the starting pixel.
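The left/right-turn rule above is the classical square-tracing algorithm; a sketch for a binary image stored as a list of rows (1 = region pixel), where the termination test (re-entering the start pixel with the initial heading) is one standard stopping criterion, an assumption here:

```python
def trace_boundary(image, max_steps=10000):
    """Square tracing: turn left on region pixels, right on background,
    until the start pixel is re-entered with the initial heading."""
    rows, cols = len(image), len(image[0])

    def inside(r, c):
        return 0 <= r < rows and 0 <= c < cols and image[r][c] == 1

    # scan the image until a pixel within the region is encountered
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if image[r][c] == 1)
    r, c = start
    dr, dc = 0, 1                      # initial heading: along the scan line
    boundary = []
    for _ in range(max_steps):
        if inside(r, c):
            if (r, c) not in boundary:
                boundary.append((r, c))
            dr, dc = -dc, dr           # within the region: turn left, step
        else:
            dr, dc = dc, -dr           # outside: turn right, step
        r, c = r + dr, c + dc
        if (r, c) == start and (dr, dc) == (0, 1):
            break
    return boundary
```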

3.12.3. Feature Extraction:

Feature extraction is used to determine the features that uniquely
characterize the object.

Some features of the object that can be used in machine vision include
area, diameter and perimeter.

A list of some features commonly used in vision applications is given
below:

1. Gray level (maximum, average, or minimum).

2. Area.

3. Perimeter length.

4. Diameter.

5. Minimum enclosing rectangle.

6. Center of gravity:

        C.Gx = (1 ⁄ n) Σ x

        C.Gy = (1 ⁄ n) Σ y

   (the sums taken over all n pixels of the object).

7. Eccentricity:

        Eccentricity = Maximum chord length A ⁄ Maximum chord length B

8. Aspect ratio:

        Length-to-width ratio.

9. Thickness:

        Thickness = (Perimeter)² ⁄ Area

        (or)

        = Diameter ⁄ Area

10. Moments:

        Mpq = Σ x^p y^q

    (summed over all object pixels (x, y)).

Feature extraction routines are available to extract feature values for
2-D cases, and they can be roughly categorized as those that deal with
area features and those that deal with boundary features.
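Several of the listed features can be computed directly from the object's pixel coordinates; a sketch (the pixel-list representation and coordinate convention are assumptions):

```python
def object_features(pixels):
    """Area and center of gravity of a binary object given the list of
    its pixel coordinates [(x, y), ...]."""
    n = len(pixels)
    area = n                                  # area = pixel count
    cg_x = sum(x for x, _ in pixels) / n      # C.Gx = (1/n) * sum of x
    cg_y = sum(y for _, y in pixels) / n      # C.Gy = (1/n) * sum of y
    return area, cg_x, cg_y

def moment(pixels, p, q):
    """Moment M_pq = sum over object pixels of x**p * y**q."""
    return sum(x ** p * y ** q for x, y in pixels)
```

Note that moment(pixels, 0, 0) is simply the area, and the centre of gravity equals (M10⁄M00, M01⁄M00).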

3.12.4. Object Recognition:

The next step in image data processing is to identify the object that the
image represents.

This identification problem is accomplished using the extracted feature
information described in the previous subsection.

Object recognition techniques used in industry are classified into two
major categories:

1. Template-matching techniques.

2. Structural techniques.

1. Template-matching techniques:

Template-matching techniques are a subset of the more general
statistical pattern recognition techniques that serve to classify objects in an
image into predetermined categories.

The basic problem in template matching is to match the object with a
stored pattern feature set defined as a model template.

The model template is obtained during the training procedure, in which
the vision system is programmed for known prototype objects.

These techniques are applicable if there is no requirement for a large
number of model templates.

The procedure is based on the use of a sufficient number of features to
minimize the frequency of errors in the classification process.

The features of the object in the image are compared to the
corresponding stored values.

These values constitute the stored template.

If a match is found, allowing for certain statistical variations in the
comparison process, then the object has been properly classified.
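A minimal sketch of feature-based template matching. The feature names, template values and tolerance below are made up, and the per-feature relative tolerance stands in for the "statistical variations" mentioned above:

```python
def matches_template(measured, template, tolerance=0.05):
    """Feature-based match: every measured feature must lie within a
    relative tolerance of the stored model-template value."""
    return all(abs(measured[name] - value) <= tolerance * abs(value)
               for name, value in template.items())

# Hypothetical model template obtained during training
template = {"area": 1200.0, "perimeter": 140.0}
```

An object whose measured area and perimeter fall within 5% of the stored values is classified as a match; otherwise it is rejected or tested against the next template.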

2. Structural Techniques:

Structural techniques of pattern recognition consider relationships
between features or edges of an object.

One such technique, known as syntactic pattern recognition, is the
most widely used structural technique.

Structural techniques differ from decision-theoretic techniques in that
the latter deal with a pattern on a quantitative basis and ignore, for the
most part, the interrelationships among object primitives.

Complete pattern recognition can be computationally time consuming.
Accordingly, it is often more appropriate to search for simpler regions or
edges within an image.

The simpler regions can then be used to extract the required features.

The majority of commercial robot vision systems make use of this
approach to the recognition of 2-D objects.

The recognition algorithms are used to identify each segmented object
in an image and assign it to a classification.

3.13. OTHER ALGORITHMS:

Vision system manufacturers have developed application software for
each individual system marketed.

These software packages are based on high-level programming
languages. Examples include:

1. Object Recognition Systems Inc.

2. RAIL (Robot Automatic Incorporated Language).

3. Automatix Inc.

3.14. ROBOTIC APPLICATIONS:

Many of the current applications of machine vision are inspection tasks
that do not involve the use of an industrial robot.

A typical application is one in which the machine vision system is
installed on a high-speed production line to accept or reject parts made on
the line.

Unacceptable parts are ejected from the line by some mechanical device
that communicates with the vision system.

Machine vision applications can be considered to have three levels of
difficulty.

These levels depend on whether the object to be viewed is controlled
in position and/or appearance.

Controlling the appearance of the object is accomplished by lighting
techniques.

The three levels of difficulty used to categorize machine vision
applications in industrial settings are:

1. The object can be controlled in both position and appearance.

2. Either the position or the appearance of the object can be controlled,
but not both.

3. Neither the position nor the appearance of the object can be
controlled.

In this section, we emphasize the use of machine vision in robotic
applications.

Robotic applications of machine vision fall into the three broad
categories listed below:

1. Inspection.

2. Identification.

3. Visual servoing and navigation.

3.14.1. Inspection:

The first category is one in which the primary function is the inspection
process.

This is carried out by the machine vision system and the robot is used
in a secondary role to support the applications.

The objectives of machine vision inspection include checking for gross surface defects, discovery of flaws in labeling, verification of the presence of components in an assembly, measuring for dimensional accuracy, and checking for the presence of holes and other features in a part.

When these kinds of inspection operations are done manually, there is a tendency for human error.

The time required in most manual inspection operations also requires that the procedures be accomplished on a sampling basis.

With machine vision, these procedures are carried out automatically,


using 100% inspection and usually much less time.

3.14.2. Identification:

Identification is the second category, in which the purpose of the machine vision system is to recognize and classify an object rather than to inspect it.

Inspection implies that the part must be either accepted or rejected.

Identification involves a recognition process in which the part itself, or


its position and/or orientation is determined.

Following this, a subsequent decision and action will be taken by the robot.

Identification applications of machine vision include part sorting, palletizing and depalletizing, and picking parts that are randomly oriented on a conveyor or in a bin.

3.14.3. Visual Servoing and Navigation:

Visual servoing and navigation control is the third category, in which the purpose of the vision system is to direct the actions of the robot based on its visual input.

An example of robot visual servoing is where the machine vision system is used to control the trajectory of the robot's end effector towards an object in the work space.

Industrial examples of this application include part positioning, retrieving and reorienting parts moving along a conveyor, assembly, bin picking, and seam tracking in continuous arc welding.

An example of navigation control would be in automatic robot path


planning and collision avoidance using visual data.

Clearly the visual data are just an important input in this type of task
and a great deal of intelligence is required in the controller to use the data
for navigation and collision avoidance.

Visual servoing tasks remain important research topics and are now viable applications of robot vision systems.

3.14.4. Bin Picking:


Bin picking involves the use of a robot to grasp and retrieve randomly
oriented parts out of a bin or similar container.

The application is complex because parts will be overlapping each other.

The vision system must first recognize a target part and its orientation
in the container and then it must direct the end effector to a position to permit
grasping and pick-up.

The difficulty is that the target part is jumbled together with many other parts, and the conditions of contrast between the target and its surroundings are far from ideal for part recognition.

*********
CHAPTER – 4

ROBOT KINEMATICS

Forward kinematics, Inverse kinematics and Difference, Forward kinematics


and Reverse kinematics of manipulators with two, three degrees of freedom
(in 2 Dimension), Four degrees of freedom (in 3 Dimension), Jacobians,
Velocity and Forces − Manipulator Dynamics.

4.1. INTRODUCTION:

A robot is a machine capable of doing work by using different physical motions produced by mechanisms. Various types of mechanisms are used for generating robotic motion. A robot mechanism is a multi-link system. We know that a combination of links is called a linkage. The linkage (joint) can be described by the way the pair of links is connected to each other.

There are two types of connections between the pair of links. Refer
Fig. 4.1(a) and (b)

★ Prismatic joint: One link slides on the other along a straight line
− Linear motion.
★ Revolute joint: Here, the pair of links rotates about a fixed axis −
like a link rotating about a hinge − Rotational motion.

Most robots are built with a combination of the above two types of joints. The two joints are shown in Fig. 4.2. Normally, each joint represents 1 degree of freedom (DOF). In Fig. 4.2, there are 6 joints − 5 revolute joints and 1 prismatic joint. So there are 6 degrees of freedom (6 DOF), i.e. 6 is the minimum number of parameters required to position the end effector. For dynamic systems, a velocity should be added to each DOF.

Refer Fig. 4.3. Joints are labelled as J1 and J2.

Links are labelled as L1 , L2 and L3.

4.2. FORWARD KINEMATICS AND REVERSE (INVERSE) KINEMATICS:
If the position and orientation of the end effector are derived from the given joint angles (θ1 , θ2 …) and link parameters (L1 , L2 …), then the scheme is called forward kinematics (Fig. 4.4).

Fig. 4.4: Forward Kinematics

On the other hand, if the joint angles (θ1 , θ2 ....) and link parameters (L1 , L2 …) of the robot are derived from the position and orientation of the end effector, then the scheme is called reverse kinematics (or) inverse kinematics (Fig. 4.5).

Fig. 4.5: Reverse (Inverse) Kinematics

Refer Fig. 4.6. The position of end effector can be represented by two
ways.

One way → using two joint angles θ1 and θ2.

It is known as ‘Joint space’.

Pj = (θ1 , θ2)

Another way → defining the end effector position


in ‘World space’ using Cartesian coordinate system.

Pw = (x, y) in 2D

Pw = (x, y, z) in 3D

Among these two ways, world space is the best way to understand the
Robot’s kinematics.

In forward kinematics, the world space Pw (x, y) is found out by using


joint space Pj (θ1 , θ2).

In reverse kinematics, the joint space Pj (θ1 , θ2), is found out by using
world space Pw (x, y).

4.3. FORWARD KINEMATICS OF MANIPULATORS WITH 2 DOF IN 2D:

For link 1, x1 = L1 cos θ1

y1 = L1 sin θ1

Position of Link 1, r1 (x1 , y1) = [ L1 cos θ1 , L1 sin θ1 ]

Similarly for Link 2, r2 (x, y) = [ L2 cos (θ1 + θ2) , L2 sin (θ1 + θ2) ]

i.e., r1 = [ L1 cos θ1 , L1 sin θ1 ]

r2 = [ L2 cos (θ1 + θ2) , L2 sin (θ1 + θ2) ]

Adding vectorially, we can get the coordinates x and y of the manipulator


end effector Pw in world space, as

x2 = L1 cos θ1 + L2 cos (θ1 + θ2) .... (1)

y2 = L1 sin θ1 + L2 sin (θ1 + θ2) .... (2)
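Equations (1) and (2) translate directly into code. The following sketch (with illustrative link lengths and joint angles, not taken from the text) evaluates the forward kinematics of the 2-DOF planar arm, mapping joint space Pj to world space Pw:

```python
import math

def forward_2dof(L1, L2, theta1, theta2):
    """Planar 2-DOF forward kinematics: joint space Pj -> world space Pw.
    theta2 is measured relative to link 1; angles are in radians."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

# Example: L1 = L2 = 1 and theta1 = theta2 = 90 deg.
# Link 1 ends at (0, 1); link 2 then points along -x, so Pw = (-1, 1).
x, y = forward_2dof(1.0, 1.0, math.pi / 2, math.pi / 2)
```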



4.4. REVERSE KINEMATICS OF MANIPULATORS WITH 2 DOF


IN 2D:
In this case, the joint angles are derived from the position in world space
of end effector.

We know,
cos (A + B) = cos A cos B − sin A sin B
sin (A + B) = sin A cos B + cos A sin B

Now the equations 1 and 2 can be expanded as


x2 = L1 cos θ1 + L2 cos θ1 cos θ2 − L2 sin θ1 sin θ2 .... (3)

y2 = L1 sin θ1 + L2 sin θ1 cos θ2 + L2 cos θ1 sin θ2 .... (4)

Squaring on both sides and adding the two equations, we get

cos θ2 = (x2² + y2² − L1² − L2²) / (2 L1 L2)

Substituting value of θ2 in equations (3) and (4), we can get θ1.
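Numerically, θ1 is usually extracted with the two-argument arctangent rather than by direct substitution. The sketch below uses that equivalent formulation (an assumption beyond the text, with the elbow branch taken as the positive arc-cosine) and round-trips the forward example:

```python
import math

def inverse_2dof(x, y, L1, L2):
    """Planar 2-DOF inverse kinematics (positive-theta2 branch).
    Returns (theta1, theta2) in radians for an end effector at (x, y)."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    theta2 = math.acos(c2)                      # from the cos(theta2) relation
    # Equivalent to substituting theta2 back into eqs. (3) and (4):
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2

# Recover the pose (x, y) = (-1, 1) of a unit-link arm:
t1, t2 = inverse_2dof(-1.0, 1.0, 1.0, 1.0)   # both angles come out 90 deg
```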



4.5. FORWARD KINEMATICS OF MANIPULATORS WITH 3 DOF


IN 2D:

★ In the forward kinematics, to find out the position of the end effector, we have to construct the different transformation matrices and combine them. The result is bsTee, where bs is the base frame of the robot manipulator and ee is the end effector.

★ This can be done by the use of the Denavit-Hartenberg convention.

★ Thus the compound homogeneous transformation matrix is found by


premultiplying the individual transformation matrices.

bsTee = 0TN = 0T1 ⋅ 1T2 ⋅ 2T3 ⋅ …

where ee denotes the end effector and bs the base.

L1 , L2 , L3 are the lengths of the links and θ1 , θ2 , θ3 are the respective joint angles.

The orientation of the first link, relative to the reference frame is given
by

T1 (θ1) = | cos θ1   − sin θ1   0 |
          | sin θ1     cos θ1   0 |
          |    0          0     1 |

The orientation of the second link relative to the first link is given by

T2 (θ2) = | cos θ2   − sin θ2   L1 |
          | sin θ2     cos θ2    0 |
          |    0          0      1 |

This corresponds to a rotation by an angle θ2 and translation by a


distance L1 (L1 = Length of the first link).

The orientation of the third link, relative to the second link is given by

T3 (θ3) = | cos θ3   − sin θ3   L2 |
          | sin θ3     cos θ3    0 |
          |    0          0      1 |

The position of the end effector, relative to the third link is given by

T4 = | 1   0   L3 |
     | 0   1    0 |
     | 0   0    1 |

Then, the solution by the Denavit − Hartenberg convention is

bsTee = 0T4 = T1 (θ1) T2 (θ2) T3 (θ3) T4

= | cos θ1  − sin θ1  0 | | cos θ2  − sin θ2  L1 | | cos θ3  − sin θ3  L2 | | 1  0  L3 |
  | sin θ1    cos θ1  0 | | sin θ2    cos θ2   0 | | sin θ3    cos θ3   0 | | 0  1   0 |
  |   0         0     1 | |   0         0      1 | |   0         0      1 | | 0  0   1 |

= | cos (θ1+θ2+θ3)  − sin (θ1+θ2+θ3)  L1 cos θ1 + L2 cos (θ1+θ2) + L3 cos (θ1+θ2+θ3) |
  | sin (θ1+θ2+θ3)    cos (θ1+θ2+θ3)  L1 sin θ1 + L2 sin (θ1+θ2) + L3 sin (θ1+θ2+θ3) |
  |       0                  0                               1                         |

Resulting kinematic equations are:

x = L1 cos θ1 + L2 cos (θ1 + θ2) + L3 cos (θ1 + θ2 + θ3)

y = L1 sin θ1 + L2 sin (θ1 + θ2) + L3 sin (θ1 + θ2 + θ3)

Forward kinematics:

The position and orientation of the end-effector (in world space) can be
determined from the joint angles and the link parameters by the following
equations,

x3 = L1 cos θ1 + L2 cos (θ1 + θ2) + L3 cos (θ1 + θ2 + θ3) .... (5)

y3 = L1 sin θ1 + L2 sin (θ1 + θ2) + L3 sin (θ1 + θ2 + θ3) .... (6)

φ = (θ1 + θ2 + θ3) .... (7)
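The matrix product above can be verified numerically. The sketch below multiplies the 3 × 3 link transforms and compares the last column with eqs. (5) and (6) (link lengths and angles are arbitrary example values):

```python
import math

def link_T(theta, offset):
    """Planar homogeneous transform: rotation by theta, with the previous
    link's length `offset` as a translation along x."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, offset], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul3(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

L1, L2, L3 = 2.0, 1.5, 1.0
t1, t2, t3 = 0.3, 0.4, 0.5

# T = T1(t1) . T2(t2) . T3(t3) . T4
T = matmul3(matmul3(link_T(t1, 0.0), link_T(t2, L1)),
            matmul3(link_T(t3, L2), [[1, 0, L3], [0, 1, 0], [0, 0, 1]]))

# The last column should reproduce eqs. (5) and (6)
x3 = L1*math.cos(t1) + L2*math.cos(t1+t2) + L3*math.cos(t1+t2+t3)
y3 = L1*math.sin(t1) + L2*math.sin(t1+t2) + L3*math.sin(t1+t2+t3)
```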

Reverse Kinematics:

The joint angles can also be determined from the end-effector position
(x3 , y3) and the orientation (φ), using reverse kinematics in the following way

x2 = x3 − L3 cos φ .... (8)

y2 = y3 − L3 sin φ .... (9)



From the given geometry,

x2 = L1 cos θ1 + L2 cos θ1 cos θ2 − L2 sin θ1 sin θ2 .... (10)

y2 = L1 sin θ1 + L2 sin θ1 cos θ2 + L2 cos θ1 sin θ2 .... (11)

Squaring and adding Eqs. (10) and (11),

cos θ2 = (x2² + y2² − L1² − L2²) / (2 L1 L2)   .... (12)

Substituting the value of θ2 in Eqs. (10) and (11), we obtain the value
of θ1.

(or) tan θ1 = [ y2 (L1 + L2 cos θ2) − x2 (L2 sin θ2) ] / [ x2 (L1 + L2 cos θ2) + y2 (L2 sin θ2) ]   .... (13)

Finally, the value of θ3 can be obtained using the following relation.

θ3 = φ − (θ1 + θ2) .... (14)
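The three-step reduction above (peel off link 3 using φ, solve the two-link sub-problem, then recover θ3) can be sketched as follows, with eq. (13) written in its equivalent atan2 form and the elbow branch taken as the positive arc-cosine (both of these are implementation choices, not prescribed by the text):

```python
import math

def inverse_3dof(x3, y3, phi, L1, L2, L3):
    """Planar 3-DOF inverse kinematics from the pose (x3, y3, phi)."""
    # Eqs. (8), (9): wrist point = end-effector minus link 3
    x2 = x3 - L3 * math.cos(phi)
    y2 = y3 - L3 * math.sin(phi)
    # Eq. (12)
    c2 = (x2**2 + y2**2 - L1**2 - L2**2) / (2 * L1 * L2)
    theta2 = math.acos(c2)
    # Eq. (13), in atan2 form for quadrant safety
    theta1 = math.atan2(y2, x2) - math.atan2(L2 * math.sin(theta2),
                                             L1 + L2 * math.cos(theta2))
    # Eq. (14)
    theta3 = phi - (theta1 + theta2)
    return theta1, theta2, theta3

# Round-trip check against the forward equations (5)-(7):
L1, L2, L3 = 2.0, 1.5, 1.0
t1, t2, t3 = 0.3, 0.4, 0.5   # arbitrary example angles
x3 = L1*math.cos(t1) + L2*math.cos(t1+t2) + L3*math.cos(t1+t2+t3)
y3 = L1*math.sin(t1) + L2*math.sin(t1+t2) + L3*math.sin(t1+t2+t3)
a1, a2, a3 = inverse_3dof(x3, y3, t1 + t2 + t3, L1, L2, L3)
```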

4.6. FORWARD AND REVERSE TRANSFORMATION OF MANIPULATOR WITH 4 DOF IN 3-D:
A 4-DOF manipulator in 3-D is shown in Fig. 4.10. Joint 1 rotates about
the z-axis, joint 2 rotates about an axis perpendicular to the z-axis and parallel
to y-axis, joint 3 is a linear joint and joint 4 rotates about an axis parallel to
y-axis.

Let
θ1 = Angle of rotation of joint 1 (base rotation)
θ2 = Angle of rotation of joint 2 (elevation angle)
L = Length of the linear joint 3 (a combination of L2 and L3)
θ4 = Angle of rotation of joint 4

Forward Transformation:

The position of the end-effector P (x, y, z) in world space is given by

x = cos θ1 (L cos θ2 + L4 cos θ4)

y = sin θ1 (L cos θ2 + L4 cos θ4)

z = L1 + L sin θ2 + L4 sin θ4 where L = L2 + L3
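The three position equations can be checked with a short sketch (the link values are made-up examples):

```python
import math

def forward_4dof(theta1, theta2, L, theta4, L1, L4):
    """P(x, y, z) for the 4-DOF arm: base twist theta1, elevation theta2,
    linear joint length L = L2 + L3, wrist pitch theta4."""
    r = L * math.cos(theta2) + L4 * math.cos(theta4)  # horizontal reach
    x = math.cos(theta1) * r
    y = math.sin(theta1) * r
    z = L1 + L * math.sin(theta2) + L4 * math.sin(theta4)
    return x, y, z

# Example: no base twist, level arm and wrist -> reach 2.5 along x, z = L1
x, y, z = forward_4dof(0.0, 0.0, 2.0, 0.0, L1=1.0, L4=0.5)
```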

Reverse Transformation:

If the pitch angle (θ4) and the world coordinates of the point
P (x, y, z) are given, the joint positions can be determined in the following
way:

If the coordinates of the joint 4 be (x4 , y4 , z4).

Then,

x4 = x − cos θ1 (L4 cos θ4) .... (15)

y4 = y − sin θ1 (L4 cos θ4) .... (16)

z4 = z − L4 sin θ4

Now the values of θ1 , θ2 and L can be found out by

tan θ1 = y4 / x4   .... (17)

sin θ2 = (z4 − L1) / L   .... (18)

L = [ x4² + y4² + (z4 − L1)² ]^(1/2)   .... (19)
 

4.7. HOMOGENEOUS TRANSFORMATIONS:

In the last articles, only manipulators with a few joints were analysed. When a larger number of manipulator joints is to be analysed, a single general method is needed to solve the kinematic equations; for this, homogeneous transformations are used. For understanding homogeneous transformations, a knowledge of vectors and matrices is necessary.

A point can be defined as

P = ai + bj + ck

It can be represented in homogeneous matrix form as

| x |
| y |
| z |
| s |

where a = x/s, b = y/s, c = z/s and s = scaling factor.

For example, P = 20i + 15j + 30k can be given as

| 20 |   | 10  |   | 40 |
| 15 | = | 7.5 | = | 30 |
| 30 |   | 15  |   | 60 |
|  1 |   | 0.5 |   |  2 |
The above vector form can be used to define end effector of robot
manipulator. The vector can be translated in space by means of a translation
matrix (4 × 4).

The vector can be rotated in space by means of rotation matrix (4 × 4).



4.7.1. Translation Matrix:

A vector can be translated in space by a distance

a in x direction

b in y direction

c in z direction and

it can be given as

Trans (a, b, c) = | 1  0  0  a |
                  | 0  1  0  b |
                  | 0  0  1  c |
                  | 0  0  0  1 |

Problem 4.1: Translate a vector P = 30i + 20j + 15k by a distance of 10 in x


direction, 8 in y direction and 2 in the z direction.

1 0 0 10 
0 0 8 
Trans (a, b, c) = 
1
0 0 1 2 
0 0 0 1 

The translated vector

1 0 0 10   30   40 
     
= 
0 8   20 
=  
0 1 28
  
0 0 1 2   15   17 
0 0 0 1   1   1
  
[Explanation : (1 × 30) + (0 × 20) + (0 × 15) + (10 × 1) = 40

(0 × 30) + (1 × 20) + (0 × 15) + (8 × 1) = 28

(0 × 30) + (0 × 20) + (1 × 15) + (2 × 1) = 17

(0 × 30) + (0 × 20) + (0 × 15) + (1 × 1) = 1.]
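The same multiplication can be carried out with a small helper; this sketch reproduces Problem 4.1:

```python
def trans(a, b, c):
    """4 x 4 homogeneous translation matrix Trans(a, b, c)."""
    return [[1, 0, 0, a],
            [0, 1, 0, b],
            [0, 0, 1, c],
            [0, 0, 0, 1]]

def apply(T, p):
    """Multiply a 4 x 4 matrix by a homogeneous point [x, y, z, 1]."""
    return [sum(T[i][k] * p[k] for k in range(4)) for i in range(4)]

# Problem 4.1: translate P = 30i + 20j + 15k by (10, 8, 2)
p_new = apply(trans(10, 8, 2), [30, 20, 15, 1])
# p_new == [40, 28, 17, 1]
```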



Problem 4.2: One point Puvw = (8, 6, 5)T are to be translated a distance + 7
units along OX axis, 2 units along OY axis and − 3 units along the OZ axis.
Using appropriate homogeneous matrix, determine the new points Pxyz.
Solution:
Given that dx = 7, dy = 2, dz = − 3, u = 8, v = 6, w = 5.
Pxyz = | 1  0  0  dx |   | u |
       | 0  1  0  dy |   | v |
       | 0  0  1  dz | × | w |
       | 0  0  0   1 |   | 1 |

Pxyz = | 1  0  0   7 |   | 8 |   | 15 |
       | 0  1  0   2 |   | 6 |   |  8 |
       | 0  0  1  −3 | × | 5 | = |  2 |
       | 0  0  0   1 |   | 1 |   |  1 |

New position point Pxyz = (15, 8, 2)ᵀ.
 
Problem 4.3: A robot has two links of variable length as shown in Fig 4.11.

Assuming that the origin of the global coordinate system is defined at


joint J1, determine the coordinates of the end-effector.

Solution:

Given that (x1 , y1) = (0, 0), L2 = 4, L3 = 6

Translation Matrix T = | 1  0   L2 |
                       | 0  1  −L3 |
                       | 0  0    1 |

Now  | x |       | x1 |
     | y | = T ⋅ | y1 |
     | 1 |       |  1 |

| x |   | 1  0   4 |   | 0 |   |  4 |
| y | = | 0  1  −6 | × | 0 | = | −6 |
| 1 |   | 0  0   1 |   | 1 |   |  1 |

Therefore the end-effector point is given by (4, − 6).

4.7.2. Rotational Matrix:

A vector can be rotated about each of the three axes x, y and z by an


angle θ by rotation matrix (4 × 4).
Rotation Matrix,

About x axis, Rot (x, θ) = | 1     0        0     0 |
                           | 0   cos θ   − sin θ  0 |
                           | 0   sin θ     cos θ  0 |
                           | 0     0        0     1 |

About y axis, Rot (y, θ) = |  cos θ   0   sin θ   0 |
                           |    0     1     0     0 |
                           | − sin θ  0   cos θ   0 |
                           |    0     0     0     1 |

About z axis, Rot (z, θ) = | cos θ   − sin θ   0   0 |
                           | sin θ     cos θ   0   0 |
                           |   0         0     1   0 |
                           |   0         0     0   1 |

Rot (x, θ) can be written as R (x, θ).

X = R ⋅ A,   A = R⁻¹ ⋅ X

For a rotation matrix R, the inverse is the transpose: R⁻¹ = Rᵀ.

R (x, θ) = | 1     0        0    |    R⁻¹ (x, θ) = | 1      0       0    |
           | 0   cos θ   − sin θ |                 | 0    cos θ   sin θ  |
           | 0   sin θ     cos θ |                 | 0   − sin θ  cos θ  |

R (y, θ) = |  cos θ   0   sin θ |    R⁻¹ (y, θ) = | cos θ   0  − sin θ |
           |    0     1     0   |                 |   0     1     0    |
           | − sin θ  0   cos θ |                 | sin θ   0   cos θ  |

R (z, θ) = | cos θ  − sin θ  0 |    R⁻¹ (z, θ) = |  cos θ   sin θ  0 |
           | sin θ    cos θ  0 |                 | − sin θ  cos θ  0 |
           |   0        0    1 |                 |    0       0    1 |

Problem 4.4: Rotate the vector P = 10 i + 6 j + 16 k by an angle of 90° about


x-axis.
The rotation matrix about the x-axis,

Rot (x, 90) = | 1    0         0      0 |   | 1  0    0  0 |
              | 0  cos 90   − sin 90  0 | = | 0  0  − 1  0 |
              | 0  sin 90     cos 90  0 |   | 0  1    0  0 |
              | 0    0         0      1 |   | 0  0    0  1 |

The rotated vector = | 1  0    0  0 |   | 10 |   |   10 |
                     | 0  0  − 1  0 |   |  6 |   | − 16 |
                     | 0  1    0  0 | × | 16 | = |    6 |
                     | 0  0    0  1 |   |  1 |   |    1 |

[(1 × 10) + (0 × 6) + (0 × 16) + (0 × 1) = 10 and similarly, we get − 16, 6 and 1.]
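The rotation of Problem 4.4 can be verified the same way (a sketch; `rot_x` builds the 4 × 4 rotation matrix about the x-axis):

```python
import math

def rot_x(theta):
    """4 x 4 homogeneous rotation about the x-axis, Rot(x, theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0, 0, 0],
            [0, c, -s, 0],
            [0, s, c, 0],
            [0, 0, 0, 1]]

def apply(T, p):
    return [sum(T[i][k] * p[k] for k in range(4)) for i in range(4)]

# Problem 4.4: rotate P = 10i + 6j + 16k by 90 deg about x
p_new = apply(rot_x(math.pi / 2), [10, 6, 16, 1])
# p_new is (10, -16, 6, 1) up to floating-point round-off
```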

Problem 4.5: A robot has two links of length 1.5 m each with the origin
J1 as (0, 0).

(a) Determine the coordinates of the end-effector if the joint rotations are
30° at both joints.

(b) Determine joint rotations if the end-effector is located at (2, 1).

Solution:

(a) Coordinates of the end effector point if the joint rotations are 30°
at both joints:

Given that (x1 , y1) = (0, 0), θ = 30° , L2 = 1.5 m; L3 = 1.5 m

TRR = | 1  0  L2 cos (θ) + L3 cos (α − θ) |
      | 0  1  L2 sin (θ) − L3 sin (α − θ) |
      | 0  0                1             |

L2 cos θ = 1.5 cos 30° = 1.3

L3 cos (α − θ) = 1.5 cos (30 − 30) = 1.5

L2 sin θ = 1.5 sin 30 = 0.75

L3 sin (α − θ) = 1.5 sin (0) = 0

TRR = | 1  0  2.8  |
      | 0  1  0.75 |
      | 0  0   1   |

| x |         | x1 |
| y | = TRR ⋅ | y1 |
| 1 |         |  1 |

| x |   | 1  0  2.8  |   | 0 |   | 2.8  |
| y | = | 0  1  0.75 | × | 0 | = | 0.75 |
| 1 |   | 0  0   1   |   | 1 |   |  1   |

Therefore end-effector point is given by (2.8, 0.75).

x = 2.8; y = 0.75

(b) Joint rotation if the end effector is located at (2, 1)

Given that (x3 , y3) = (2, 1), x3 = 2, y3 = 1, x1 = 0, y1 = 0

L2 = 1.5 m, L3 = 1.5 m

We know cos (θ3) = (x3² + y3² − L2² − L3²) / (2 L2 L3)

cos (θ3) = (2² + 1² − 1.5² − 1.5²) / (2 × 1.5 × 1.5) = 0.5 / 4.5 = 0.111

θ3 = cos⁻¹ (0.111) = 83.6°

tan (θ2) = [ (y3 − y1) (L2 + L3 cos (θ3)) + (x3 − x1) L3 sin (θ3) ] / [ (x3 − x1) (L2 + L3 cos (θ3)) − (y3 − y1) L3 sin (θ3) ]

Substituting the values in the equation,

tan (θ2) = [ (1 − 0) (1.5 + 1.5 cos 83.6°) + (2 − 0) × 1.5 sin 83.6° ] / [ (2 − 0) (1.5 + 1.5 cos 83.6°) − (1 − 0) × 1.5 sin 83.6° ]

tan (θ2) = (1.667 + 2.981) / (3.333 − 1.491) = 2.523

θ2 = tan⁻¹ (2.523)

Angle θ2 = 68.4°

Problem 4.6: In a TL robot, assume that the coordinate system is defined at


joint J2

(a) Determine the coordinates of the end-effector if joint J1 twists by an


angle of 40° and the variable link has a length of 2 m.

(b) Determine variable link length and angle of twist at J1 if the


end-effector is located at (1.2, 1.8).

Solution:
(a) Coordinates of the end-effector:
Given that (x2 , y2) = (0, 0); L2 = 2 m and θ = 40°

x2 = 0, y2 = 0

Translation Matrix TTL = | 1  0  L2 cos (θ) |
                         | 0  1  L2 sin (θ) |
                         | 0  0      1      |

x = x2 + L2 cos θ

y = y2 + L2 sin θ

In matrix form,  | x |   | 1  0  L2 cos θ |   | x2 |
                 | y | = | 0  1  L2 sin θ | × | y2 |
                 | 1 |   | 0  0     1     |   |  1 |

Substituting θ = 40°,

TTL = | 1  0  2 cos 40° |   | 1  0  1.532 |
      | 0  1  2 sin 40° | = | 0  1  1.286 |
      | 0  0      1     |   | 0  0    1   |

Substituting x2 = 0 and y2 = 0,

| x |   | 1  0  1.532 |   | 0 |   | 1.532 |
| y | = | 0  1  1.286 | × | 0 | = | 1.286 |
| 1 |   | 0  0    1   |   | 1 |   |   1   |

(x, y) = (1.532, 1.286)

x = 1.532, y = 1.286

(b) Variable link length

Given that x = 1.2 m, x2 = 0

y = 1.8 m, y2 = 0

L = √( (x − x2)² + (y − y2)² )

L = √( (1.2 − 0)² + (1.8 − 0)² )

L = 2.16 m

sin (θ) = (y − y2) / L = (1.8 − 0) / 2.16 = 0.8333

θ = 56.44°

Problem 4.7: The world coordinates for a robot Fig. 4.15 are x3 = 300 mm,
y3 = 400 mm, and φ = 30° and given that the links have values L1 = 350 mm,
L2 = 250 mm and L3 = 50 mm, determine the joint angles θ1 , θ2 and θ3.
Given that L1 = 350 mm, L2 = 250 mm, L3 = 50 mm.

Solution:
To find x2 and y2, using given coordinates x3 = 300 and y3 = 400,
φ = 30°,
x2 = x3 − L3 cos φ = 300 − 50 × cos 30
= 256.7 mm
y2 = y3 − L3 sin φ = 400 − 50 sin 30 = 375 mm

cos θ2 = (x2² + y2² − L1² − L2²) / (2 L1 L2)

= (256.7² + 375² − 350² − 250²) / (2 × 350 × 250) = 0.123

cos θ2 = 0.123

θ2 = cos⁻¹ (0.123) = 82.9°

θ2 = 82.9°

The angle θ1 is found by using

tan θ1 = [ y2 (L1 + L2 cos θ2) − x2 (L2 sin θ2) ] / [ x2 (L1 + L2 cos θ2) + y2 (L2 sin θ2) ]

tan θ1 = [ 375 (350 + 250 cos 82.9°) − 256.7 (250) sin 82.9° ] / [ 256.7 (350 + 250 cos 82.9°) + 375 (250) sin 82.9° ]

tan θ1 = 0.4146

θ1 = tan⁻¹ (0.4146) = 22.5°

θ1 = 22.5°

We know φ = θ1 + θ2 + θ3

So, θ3 = 30° − 82.9° − 22.5°

θ3 = − 75.4°

Problem 4.8: A point Puvw = (5, 4, 3) is attached to a rotating frame, the frame
rotates 50 degrees about the OZ axis of the reference frame. Find the
coordinates of the point relative to the reference frame after the rotation.

Given u = 5, v = 4, w = 3, θ = 50°.

Solution:

Pxyz = (Rz.50) ⋅ Puvw

 cos θ − sin θ 0 
Rot (z, θ) =  sin θ cos θ 0 
 0 0 1 

 cos 50° − sin 50° 0   u 
Pxyz =  sin 50° cos 50° 0   v 
 0 0 1   w 

 0.643 − 0.766 0   5 
Pxyz =  0.766 0.643 0   4 
 0 0 1   3 

x = (0.643 × 5) − (0.766 × 4) + 0 = 0.151 m

y = (0.766 × 5) + 0.643 × 4 + 0

= 6.402 m

z=0+0+3=3m

 0.151 
Pxyz =  6.402 
 3 
 
New position points = [0.151, 6.402, 3]

Problem 4.9: A point Pxyz = (0.151, 6.402, 3) is the coordinate with respect
to the reference coordinate system, find the corresponding point Puvw with
respect to the rotated O-U-V-W coordinate system if it has been rotated 50°
about OZ-axis.

Given: P (x, y, z) = (0.151, 6.402, 3), x = 0.151, y = 6.402, z = 3

θ = 50°

To find: Puvw

Solution: Puvw = Rot (z, 50)T ⋅ Pxyz

 cos 50° sin 50° 0   0.151 


=  − sin 50° cos 50° 0   6.402 
 
 0 0 1   3 
  

 0.643 0.766 0   0.151 


Puvw =  − 0.766 0.643 0   6.402 
 0 0 1   3 

u = (0.643 × 0.151) + (0.766 × 6.402) + 0 = 5 m

v = (− 0.766 × 0.151) + (0.643 × 6.402) + 0 = 4 m

w = 0 + 0 + 3 = 3 m

5
Coordination vector =  4 
3
 

Note: The same result is obtained as the given data of previous problem.

Problem 4.10: pxyz = (6, 5, 4)T and qxyz = (8, 7, 6)T are the coordinates with
respect to the reference coordinate system, determine the corresponding points
puvw, quvw with respect to the rotated OUVW coordinate system, if it has
rotated 30° about the OZ-axis.

Solution:

Given: θ = 30° , p (x, y, z) = 6, 5, 4

x = 6, y = 5, z=4

 cos θ − sin θ 0 
We know that, Rot (z, θ) =  sin θ cos θ 0 
 0 0 1 

puvw = Rot (z, 30)T ⋅ pxyz

 cos 30° sin 30° 0   6 


puvw =  − sin 30° cos 30° 0  ⋅  5 
 0 0 1   4 

 0.866 0.5 0  6


=  − 0.5 0.866 0  5
 
 0 0 1  4
  

u = (0.866 × 6) + (0.5 × 5) + 0 = 7.696 m

v = ( − 0.5 × 6) + (0.866 × 5) + 0 = 1.33 m

w=0+0+4=4

 7.696 
Coordination vector puvw =  1.33 
 4 
 

Case (ii): Given that

q (x, y, z) = 8, 7, 6

x=8

y=7

z=6

quvw = Rot (z, 30)T ⋅ qxyz

 cos 30 sin 30 0   x 
=  − sin 30° cos 30 0  ⋅  y 
 0 0 1   z 

 cos 30° sin 30° 0   8 
=  − sin 30° cos 30° 0  ⋅  7 
 0 0 1   6 

 0.866 0.5 0  8
=  − 0.5 0.866 0  7
 
 0 0 1  6
  
 10.428 
=  2.062 
 6 
 

Problem 4.11: q (u, v, w) is given by (5, 4, 3)T which is rotated about X-axis
of the reference frame by angle of 40°. Determine the point qxyz.

Solution:

Given: q (u, v, w) = (5, 4, 3)T , α = 40° , u = 5, v = 4, w = 3

q (x, y, z) = Rot (x, 40) ⋅ quvw

q (x, y, z) = | 1    0        0    |   | u |
              | 0  cos α   − sin α | × | v |
              | 0  sin α     cos α |   | w |

= | 1     0          0     |   | 5 |
  | 0  cos 40°   − sin 40° | × | 4 |
  | 0  sin 40°     cos 40° |   | 3 |

= | 1    0        0    |   | 5 |   |   5   |
  | 0  0.766  − 0.643  | × | 4 | = | 1.135 |
  | 0  0.643    0.766  |   | 3 |   | 4.87  |

Problem 4.12: Frame [2] is rotated with respect to frame [1] about the x-axis
by an angle of 45°. The position of the origin of frame [2] with respect to
frame [1] is D2 = [4, 5, 8]T

1. Determine the transform matrix T2, which describes frame [2] relative
to frame [1].

2. Find the description of point ‘P’ in frame [1] if P2 = [2, 5, 4]T.

Given: Rotated about x-axis by an angle (θ) = 45°

Position of the origin of frame

dx = 4

dy = 5

dz = 8

2
P2 =  5 
4
 

Solution:

To write the homogeneous transform matrix describing frame [2] with


respect to frame { 1 } ,

Frame { 2 } is rotated relative to frame { 1 } about ‘x’ axis by 45°.

1 0 0  1 0 0 
1R (θ) =  0 cos θ − sin θ  =  0 cos 45 − sin 45 
2    
 0 sin θ cos θ   0 sin 45 cos 45 
   

1 0 0 
1
R2 (θ) =  0 0.707 − 0.707 
 0 0.707 0.707 
 

Rotation and Translation matrix – Homogeneous transform matrix:

1T2 = | 1    0        0     4 |
      | 0  0.707  − 0.707   5 |
      | 0  0.707    0.707   8 |
      | 0    0        0     1 |

If P2 = | 2 |
        | 5 |
        | 4 |

then P1 = 1T2 ⋅ P2

Substituting the above values, we get

P1 = | 1    0        0     4 |   | 2 |   |   6    |
     | 0  0.707  − 0.707   5 |   | 5 |   | 5.707  |
     | 0  0.707    0.707   8 | × | 4 | = | 14.363 |
     | 0    0        0     1 |   | 1 |   |   1    |

The 3 × 1 position vector of point P in frame { 1 } in physical


coordinates is

 6 
P1 =  5.707  .
 14.363 
 

4.8. JACOBIANS:

The position and orientation of the manipulator end-effector can be evaluated in relation to the joint displacements, and the joint displacements corresponding to a given end-effector location can be obtained by solving the kinematic equation for the manipulator. This analysis permits the robotic system to place the end-effector at a specified location in space. In addition to the final location of the end-effector, the velocity at which the end-effector moves should also be found. In order to move the end-effector in a specified direction at a specified speed, it is necessary to coordinate the motions of the individual joints and to achieve coordinated motion in multiple-joint robotic systems. The end-effector position and orientation are directly related to the joint displacements. Hence, in order to coordinate joint motions, the differential relationship between the joint displacements and the end-effector location should be derived, from which the individual joint motions can be found.
4.8.1. Differential Relationship:
Ref. Fig. 4.16.

The kinematic equations relating the end-effector coordinates xe and ye


to the joint displacements θ1 and θ2 are given by

xe (θ1 , θ2) = L1 cos θ1 + L2 cos (θ1 + θ2) .... (1)

ye (θ1 , θ2) = L1 sin θ1 + L2 sin (θ1 + θ2) .... (2)

Small movements of the individual joints at the current position, and the
resultant motion of the end-effector, can be obtained by the total derivatives
of the above kinematic equations:

dxe = (∂xe / ∂θ1) dθ1 + (∂xe / ∂θ2) dθ2   .... (3)

dye = (∂ye / ∂θ1) dθ1 + (∂ye / ∂θ2) dθ2   .... (4)

where xe , ye are functions of both θ1 and θ2; hence two partial derivatives are involved in each total derivative. In vector form, the above equations can be reduced to

dxe = J ⋅ dq   .... (5)

where

dxe = | dxe |,   dq = | dθ1 |   .... (6)
      | dye |         | dθ2 |

and J is a 2 by 2 Jacobian matrix given by

J = | ∂xe/∂θ1   ∂xe/∂θ2 |   .... (7)
    | ∂ye/∂θ1   ∂ye/∂θ2 |

The matrix J comprises the partial derivatives of the functions xe and ye with respect to the joint displacements θ1 and θ2, and is called the Jacobian matrix. Since most robot mechanisms have multiple active joints, a Jacobian matrix is needed for describing the mapping of the vectorial joint motion to the vectorial end-effector motion.

For the two-dof robot arm of Figure 4.16, the components of the
Jacobian matrix are computed as

J = | − L1 sin θ1 − L2 sin (θ1 + θ2)   − L2 sin (θ1 + θ2) |   .... (8)
    |   L1 cos θ1 + L2 cos (θ1 + θ2)     L2 cos (θ1 + θ2) |

Hence the Jacobian collectively represents the sensitivities of individual


end-effector coordinates to individual joint displacements. This sensitivity
information is needed in order to coordinate the multi dof joint displacements
for generating a desired motion at the end-effector.

When the two joints of the robot arm are moving at joint velocities q̇ = (θ̇1 , θ̇2)ᵀ, let ve = (ẋe , ẏe)ᵀ be the resultant end-effector velocity vector. The Jacobian provides the relationship between the joint velocities and the resultant end-effector velocity. Dividing eq. (5) by the infinitesimal time increment dt yields

dxe / dt = J (dq / dt),   or   ve = J ⋅ q̇   .... (9)

Thus the Jacobian determines the velocity relationship between the


joints and the end-effector.
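Equation (8) can be checked against a finite-difference approximation of the forward kinematics, and eq. (9) then maps joint velocities to an end-effector velocity (a sketch with arbitrary example values):

```python
import math

def jacobian_2dof(L1, L2, t1, t2):
    """Jacobian of the planar two-link arm, eq. (8)."""
    return [[-L1*math.sin(t1) - L2*math.sin(t1+t2), -L2*math.sin(t1+t2)],
            [ L1*math.cos(t1) + L2*math.cos(t1+t2),  L2*math.cos(t1+t2)]]

L1, L2, t1, t2 = 2.0, 1.5, 0.3, 0.7
J = jacobian_2dof(L1, L2, t1, t2)

# Eq. (9): end-effector velocity for joint velocities (0.1, -0.2) rad/s
dq = [0.1, -0.2]
ve = [J[0][0]*dq[0] + J[0][1]*dq[1],
      J[1][0]*dq[0] + J[1][1]*dq[1]]

# Finite-difference check of the (1, 1) entry, d(xe)/d(theta1)
h = 1e-6
xe = lambda a, b: L1*math.cos(a) + L2*math.cos(a + b)
approx = (xe(t1 + h, t2) - xe(t1, t2)) / h   # should be close to J[0][0]
```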

4.9. SINGULARITIES:

When we have a linear transformation relating joint velocity to Cartesian


(end effector) velocity, a reasonable question to ask is: Is this matrix
invertible? That is, is it nonsingular? If the matrix is nonsingular, then we
can invert it to calculate joint velocities from given Cartesian end effector
velocities:
q̇ = J⁻¹ ⋅ ve

This is an important relationship. For example if we wish the end effector


of the robot to move with a certain velocity vector in Cartesian space, then
using above equation, we could calculate the necessary joint velocities at each
instant along the path. The real question of invertibility is: Is the Jacobian
invertible for all values of q? If not, where is it not invertible?

Most manipulators have values of q where the Jacobian becomes
singular. Such locations are called singularities of the mechanism or simply
singularities. All manipulators have singularities at the boundary of their
workspace, and most have loci of singularities inside their workspace.
Singularities are divided into two categories:
1. Workspace-boundary singularities occur when the manipulator is
fully stretched out or folded back on itself in such a way that the
end-effector is at or very near the boundary of the workspace.
2. Workspace-interior singularities occur away from the workspace
boundary; they generally are caused by a lining up of two or more
joint axes.
When a manipulator is in a singular configuration, it has lost one or
more degrees of freedom (as viewed from Cartesian space). This means that
there is some direction (or subspace) in Cartesian space along which it is
impossible to move the hand of the robot, no matter what joint velocities are
selected. It is obvious that this happens at the workspace boundary of robots.
To find the singularity of the 2-link arm, the determinant of its Jacobian should be set equal to zero; where the determinant vanishes, the Jacobian has lost full rank and is singular:

DET [J] = | L1 s2         0  | = L1 L2 s2 = 0
          | L1 c2 + L2   L2  |

where s2 = sin θ2 and c2 = cos θ2.
So, a singularity of the mechanism exists when θ2 is 0 or 180 degrees.
Physically, when θ2 = 0, the arm is stretched straight out. In this configuration,
motion of the end-effector is possible along only one Cartesian direction (the
one perpendicular to the arm). Therefore, the mechanism has lost one degree
of freedom. Likewise, when θ2 = 180, the arm is folded completely back on
itself, and motion of the hand again is possible only in one Cartesian direction
instead of two.
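The boundary singularity is easy to see numerically. The determinant of the base-frame Jacobian of eq. (8) also equals L1 L2 sin θ2, so it vanishes whenever θ2 is 0 or 180 degrees (a sketch with example link lengths):

```python
import math

def jacobian_2dof(L1, L2, t1, t2):
    """Base-frame Jacobian of the planar two-link arm, eq. (8)."""
    return [[-L1*math.sin(t1) - L2*math.sin(t1+t2), -L2*math.sin(t1+t2)],
            [ L1*math.cos(t1) + L2*math.cos(t1+t2),  L2*math.cos(t1+t2)]]

def det2(J):
    return J[0][0]*J[1][1] - J[0][1]*J[1][0]

L1, L2 = 2.0, 1.5
d_stretched = det2(jacobian_2dof(L1, L2, 0.4, 0.0))  # theta2 = 0: singular
d_generic = det2(jacobian_2dof(L1, L2, 0.4, 0.9))    # ordinary pose
# d_stretched ~ 0, while d_generic = L1 * L2 * sin(0.9)
```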

4.10. STATIC FORCES IN MANIPULATORS:

The robot with the end-effector is supporting a load. We wish to find


the joint torques to keep the system in static equilibrium.
To find static forces in a manipulator, all the joints should be locked so
that the manipulator becomes a structure. We then consider each link in this
structure and write a force-moment balance relationship in terms of the link
frames. Finally static torque acting about the joint axis can be computed in
order to keep manipulator in static equilibrium. In this way, we can solve for
the set of joint torques required to support a static load acting at the
end-effector.
The static forces and torques acting at the joints are considered when
the manipulator has its end-effector with the load.
Fi = Force exerted on link i by neighbour link i − 1,

ni = Torque exerted on link i by neighbour link i − 1.

Figure 4.17 shows the static forces and moments (excluding the gravity
force) acting on link i. Summing the forces and setting them equal to zero,
we have
4.36 Robotics – www.airwalkpublications.com

Fi − Fi + 1 = 0. .... (1)

Summing torques about the origin of frame { i }, we have

ni − ni + 1 − Pi + 1 × Fi + 1 = 0 .... (2)

Fi = Fi + 1 .... (3)

ni = ni + 1 + Pi + 1 × Fi + 1 .... (4)

In order to write these equations in terms of only forces and moments defined within their own link frames, we transform them with the rotation matrix describing frame { i + 1 } relative to frame { i }. This leads to the important result for static force “propagation” from link to link:

Fi = R Fi + 1 .... (5)

ni = R ni + 1 + Pi + 1 × Fi .... (6)

To find the joint torque required to maintain the static equilibrium, the
dot product of the joint-axis vector with the moment vector acting on the link
is computed:
Joint torque   τi = niT Ẑi ,   where Ẑi is the unit vector along joint axis i.  .... (7)

In the case that joint i is prismatic, we compute the joint actuator force
as

τi = FiT Ẑi .  .... (8)
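The link-by-link procedure above can be sketched in code. The following is an illustrative implementation (not from the text) of the inward propagation of equations (5)-(7) for a planar arm with revolute joints; the function name, the unit link lengths and the choice of a tool-frame force are assumptions made for the example:

```python
import numpy as np

def rotz(t):
    """Rotation matrix about the Z axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def static_torques(thetas, lengths, f_tip):
    """Inward static force/moment propagation for a planar revolute arm.
    f_tip is the force applied at the tip, expressed in the tool frame."""
    n_links = len(thetas)
    f = np.asarray(f_tip, float)   # force on the outermost body
    n = np.zeros(3)                # no pure moment applied at the tip
    taus = np.zeros(n_links)
    for i in range(n_links - 1, -1, -1):
        # rotation of the next frame relative to this one: identity for the
        # tool frame, Rz(theta_{i+1}) between successive link frames
        R = np.eye(3) if i == n_links - 1 else rotz(thetas[i + 1])
        P = np.array([lengths[i], 0.0, 0.0])  # origin of the next frame
        f = R @ f                             # eq (5): force propagation
        n = R @ n + np.cross(P, f)            # eq (6): moment propagation
        taus[i] = n[2]                        # eq (7): component along joint Z
    return taus

# two-link arm, force (fx, fy) = (1, 2) at the tip in the tool frame
taus = static_torques([0.3, np.pi / 2], [1.0, 1.0], [1.0, 2.0, 0.0])
# tau2 = L2*fy ; tau1 = L1*s2*fx + (L1*c2 + L2)*fy
```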

4.11. JACOBIANS IN THE FORCE DOMAIN:

Torques will exactly balance forces at the end effector in the static
situation. When forces act on a mechanism, work is done if the mechanism
moves through a displacement. Work is defined as a force acting through a
distance and is a scalar with units of energy. Work is the dot product of a
vector force or torque and a vector displacement. Thus, we have

F ⋅ dx = τ ⋅ d θ .... (1)

where F is a force vector acting at the end-effector, dx is a displacement of


the end-effector,

τ = Torque vector,

dq = [ dθ1  dθ2 ]T = dθ ,

where τ is the vector of torques at the joints and dθ is the vector of infinitesimal joint displacements.

FT dx = τT d θ .... (2)

The definition of the Jacobian is

dx = J d θ .... (3)

so we may write

FT J d θ = τT d θ .... (4)

So FT J = τT . .... (5)
Transposing both sides yields this result:

τ = JT F. .... (6)

The jacobian transpose maps Cartesian forces acting at the end effector
into equivalent joint torques.
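This mapping can be illustrated with the two-link Jacobian used in the singularity discussion above (a sketch with assumed unit link lengths; both the Jacobian and the applied force are expressed in the tool frame, so only θ2 appears):

```python
import numpy as np

def joint_torques_from_tip_force(theta2, f_tip, L1=1.0, L2=1.0):
    """tau = J^T F for a planar two-link arm, with J and the applied force
    both expressed in the tool frame."""
    s2, c2 = np.sin(theta2), np.cos(theta2)
    J = np.array([[L1 * s2,       0.0],
                  [L1 * c2 + L2,  L2 ]])
    return J.T @ np.asarray(f_tip, float)

tau = joint_torques_from_tip_force(np.pi / 2, [1.0, 2.0])
# tau[0] = L1*s2*fx + (L1*c2 + L2)*fy ,  tau[1] = L2*fy
```

Because the Jacobian loses rank at a singularity, there are directions of tip force there that require no joint torque at all: the mechanism resists them structurally.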

4.12. MANIPULATOR DYNAMICS:

The dynamic behavior of a manipulator is described in terms of the time rate of change of the robot configuration in relation to the joint torques. The equations of motion govern the dynamic response of the robot linkage to input joint torques.

Two methods are used to obtain the equations of motion: the


Newton-Euler formulation, and the Lagrangian formulation. The
Newton-Euler formulation is derived from Newton’s Second Law of Motion,
which describes dynamic systems in terms of force and momentum. The
equations incorporate all the forces and moments acting on the individual robot
links, including the coupling forces and moments between the links. The
equations obtained from the Newton-Euler method include the constraint forces
acting between adjacent links.

In the Lagrangian formulation, the system’s dynamic behavior is


described in terms of work and energy using generalized coordinates. All the
workless forces and constraint forces are automatically eliminated in this
method. The resultant equations are generally compact and provide a
closed-form expression in terms of joint torques and joint displacements.
Furthermore, the derivation is simpler and more systematic than in the
Newton-Euler method.

The robot’s equations of motion are basically a description of the


relationship between the input joint torques and the motion of the robot
linkage.
4.12.1. Newton-Euler Formulation of Equations of Motion:

Basic Dynamic Equations:


The motion of a rigid body can be resolved into the translational motion
with respect to an arbitrary point fixed to the rigid body, and the rotational
motion of the rigid body about that point. The dynamic equations of a rigid
body can also be represented by two equations: one describes the translational
motion of the centroid – Newton’s equation of motion, while the other
describes the rotational motion about the centroid – Euler’s equation of motion.

Figure 4.19 shows all the forces and moments acting on link i.

Let Vci be the linear velocity of the centroid of link i with reference to the base coordinate frame O-xyz, which is an inertial reference frame. The inertia force is then given by − mi V̇ci, where mi is the mass of the link and V̇ci is the time derivative of Vci, i.e. the acceleration. Based on D’Alembert’s principle, the equation of motion is then obtained by adding the inertia force to the static balance of forces, so that

Fi − 1, i − Fi, i + 1 + mi g − mi V̇ci = 0 ,   i = 1, … , n   .... (1)
Fi − 1, i and − Fi, i + 1 are the coupling forces applied to link i by links


i − 1 and i + 1, respectively, and g is the acceleration of gravity.

Rotational motions are described by Euler’s equations by adding “inertia


torques” to the static balance of moments.

The Newton-Euler equations for links 1 and 2 of a two-link arm are given by

F0, 1 − F1, 2 + m1 g − m1 V̇c1 = 0 ,                                .... (2)

N0, 1 − N1, 2 + r1, c1 × F1, 2 − r0, c1 × F0, 1 − I1 ω̇1 = 0        .... (3)

F1, 2 + m2 g − m2 V̇c2 = 0 ,                                        .... (4)

N1, 2 − r1, c2 × F1, 2 − I2 ω̇2 = 0                                 .... (5)

4.12.2. Newton’s Equation in Simple Format:


Consider a rigid body whose center of mass is accelerating with acceleration V̇. In such a situation, the force F acting at the center of mass and causing this acceleration is given by Newton’s equation

F = m V̇ ,                                                          .... (6)

where m is the total mass of the body and V̇ is the acceleration.
4.12.3. Euler’s Equation in Simple Format:

Figure 4.21 shows a rigid body rotating with angular velocity ω and with angular acceleration ω̇. In such a situation, the moment N, which must be acting on the body to cause this motion, is given by Euler’s equation

N = I ω̇ + ω × I ω ,                                                .... (7)

where I is the inertia tensor of the body.

4.12.4. The force and torque acting on a link:

Having computed the linear and angular accelerations of the mass center of each link, we can apply the Newton-Euler equations to compute the inertia force and torque acting at the center of mass of each link. Thus we have

F = m V̇

N = I ω̇ + ω × I ω                                                  .... (8)
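Equation (8) is straightforward to evaluate numerically. In the sketch below every numeric value (mass, inertia tensor, velocities and accelerations) is an arbitrary placeholder chosen only for illustration:

```python
import numpy as np

m = 2.0                                  # link mass [kg] (assumed)
I = np.diag([0.1, 0.2, 0.3])             # inertia tensor about the mass centre (assumed)
v_dot = np.array([1.0, 0.0, 0.0])        # linear acceleration of the mass centre
omega = np.array([1.0, 0.0, 2.0])        # angular velocity
omega_dot = np.array([0.0, 0.0, 0.5])    # angular acceleration

F = m * v_dot                                   # Newton: F = m * dV/dt
N = I @ omega_dot + np.cross(omega, I @ omega)  # Euler: N = I*dw/dt + w x (I*w)
```

The second term of N (the gyroscopic term) is nonzero whenever ω is not aligned with a principal axis of I.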
4.13. LAGRANGIAN FORMULATION OF MANIPULATOR DYNAMICS:

The Newton-Euler formulation is a “force balance” approach to dynamics, whereas the Lagrangian formulation is an “energy-based” approach; both yield the same equations of motion. The statement of Lagrangian dynamics given here is brief and specialized to the case of a serial-chain mechanical manipulator with rigid links.

The kinetic energy of the ith link, ki, can be expressed as

ki = (1/2) mi ViT Vi + (1/2) ωiT Ii ωi                             .... (9)

where the first term is the kinetic energy due to the linear velocity of the link and the second term is the kinetic energy due to the angular velocity of the link. The total kinetic energy of the manipulator is the sum of the kinetic energy in the individual links − that is,

k = ∑ (i = 1 to n) ki .                                            .... (10)

The Vi and ωi are functions of θ and θ̇.

The potential energy of the ith link, ui, can be expressed as

ui = − mi gT Pi + urefi ,                                          .... (11)

where g is the gravity vector, Pi is the vector locating the center of mass of the ith link, and urefi is a constant chosen so that the minimum value of ui is zero. The total potential energy stored in the manipulator is the sum of the potential energy in the individual links − that is,
u = ∑ (i = 1 to n) ui .                                            .... (12)

The Lagrangian dynamic formulation provides a means of deriving the


equations of motion from a scalar function called the Lagrangian, which is
defined as the difference between the kinetic and potential energy of a
mechanical system. The Lagrangian of a manipulator is

L = k − u

The equations of motion for the manipulator are then given by

d/dt (∂L/∂θ̇) − ∂L/∂θ = τ .                                        .... (13)

where τ is the vector of actuator torques. In the case of a manipulator, this


equation becomes

d/dt (∂k/∂θ̇) − ∂k/∂θ + ∂u/∂θ = τ .                                .... (14)

4.14. MANIPULATOR KINEMATICS:

Kinematics is the science of motion, treated without regard to the forces that cause it. In kinematics, one studies the position, velocity, acceleration, and all higher-order derivatives of the position variables. Hence, the study of the kinematics of manipulators refers to all the geometrical and time-based properties of the motion. The relationships between these motions and the forces and torques that cause them constitute the problem of dynamics.

4.14.1. Link Description:

A manipulator is a set of bodies connected in a chain by joints. These


bodies are called links. Joints form a connection between a neighboring pair
of links. The term lower pair is used to describe the connection between a
pair of bodies when the relative motion is characterized by two surfaces sliding
over one another. Figure 4.22 shows the six possible lower pair joints.
Most manipulators have revolute joints or have sliding joints called


prismatic joints.

The links are numbered starting from the base of the arm, called link 0. The first moving body is link 1, and so on; the last link is link n. In order to position an end-effector generally in 3-space, a minimum of six joints is required, because the description of an object in space requires six parameters − three for position and three for orientation.
Figure 4.23 shows link i − 1 and the mutually perpendicular line along
which the link length, ai − 1, is measured.

The second parameter, the link twist αi − 1, defines the relative angle between the two joint axes, measured about ai − 1.

4.14.2. Link Connection – Intermediate links in the chain:

Neighboring links have a common joint axis between them. One parameter of interconnection is the distance along this common axis from one link to the next, called the link offset. The offset at joint axis i is called di. The second parameter describes the amount of rotation about this common axis between one link and its neighbor. This is called the joint angle, θi.

Figure 4.23 shows the interconnection of link i − 1 and link i. ai − 1 is


the mutual perpendicular between the two axes of link i − 1. The link offset
di is variable if joint i is prismatic.
4.14.3. First and Last Links in the Chain:

Link length, ai, and link twist, αi, depend on joint axes i and i + 1. Link offset, di, and joint angle, θi, are well defined for joints 2 through n − 1. If joint 1 is revolute, the zero position for θ1 may be chosen arbitrarily; d1 = 0 will be our convention. Similarly, if joint 1 is prismatic, the zero position of d1 may be chosen arbitrarily; θ1 = 0 will be our convention. Exactly the same statements apply to joint n.

These conventions have been chosen so that, in a case where a quantity


could be assigned arbitrarily, a zero value is assigned so that later calculations
will be as simple as possible.

4.15. LINK PARAMETERS–DENAVIT − HARTENBERG NOTATION:

Any robot is described kinematically by four quantities for each link.


Two describe the link itself, and two describe the link’s connection to a
neighboring link. In a revolute joint, θi is called the joint variable, and the
other three quantities would be fixed link parameters. For prismatic joints,
di is the joint variable, and the other three quantities are fixed link parameters.
The definition of mechanisms by means of these quantities is called the
Denavit − Hartenberg notation.

4.15.1. Convention for Affixing Frames to links:

In order to describe the location of each link relative to its neighbors,


we define a frame attached to each link. The link frames are named by number
according to the link to which they are attached. That is, frame { i } is attached
rigidly to link i.

4.15.2. Frames for Intermediate Links in the Chain:

The Z-axis of frame { i }, called Zi, is coincident with the joint axis i.
Xi points along ai in the direction from joint i to joint i + 1.
In the case of ai = 0, Xi is normal to the plane of Zi and Zi + 1. The twist αi is measured in the right-hand sense about Xi. Figure 4.24 shows the location of frames { i − 1 } and { i } for a general manipulator.

4.15.3. First and Last links in the chain:


We attach a frame to the base of the robot, or link 0, called frame { 0 } − the reference frame. The positions of all other link frames are described in terms of this frame. Frame { 0 } is arbitrary, so it always simplifies matters to choose Z0 along axis 1 and to locate frame { 0 } so that it coincides with frame { 1 } when joint variable 1 is zero. Using these conventions, we will always have a0 = 0, α0 = 0, and d1 = 0 if joint 1 is revolute, or θ1 = 0 if joint 1 is prismatic.

4.15.4. Summary of the link parameters in terms of the link frames:


ai = Distance from Zi to Zi + 1 measured along Xi.

αi = Angle from Zi to Zi + 1 measured about Xi,

di = Distance from Xi − 1 to Xi measured along Zi, and

θi = Angle from Xi − 1 to Xi measured about Zi.


4.15.5. Summary of link-frame attachment procedure:

1. Identify the joint axes.


2. Identify the common perpendicular between them. At the point where the common perpendicular meets the ith axis, assign the link-frame origin.
3. Assign the Zi axis pointing along the ith joint axis.

4. Assign the Xi axis pointing along the common perpendicular, or, if


the axes intersect, assign Xi to be normal to the plane containing the
two axes.
5. Assign the Yi axis to complete a right hand coordinate system.

Problem 4.13: Figure 4.25(a) shows a three-link planar arm (R for revolute)
called an RRR (or 3R) mechanism.
The Denavit − Hartenberg parameters are assigned by defining the reference frame, frame { 0 }. It is fixed to the base and aligns with frame { 1 } when the first joint variable (θ1) is zero. Therefore, we position frame { 0 } as shown in Fig. 4.25(b), with Z0 aligned with the joint-1 axis (not shown). For this arm, all joint axes are oriented perpendicular to the plane of the arm. Because the arm lies in a plane with all Z-axes parallel, there are no link offsets − all di are zero. All joints are rotational, so, when they are at zero degrees, all X-axes must align.

Table 4.1: Link parameters of the three-link planar manipulator.

i αi − 1 ai − 1 di θi

1 0 0 0 θ1

2 0 L1 0 θ2

3 0 L2 0 θ3
For the frame assignments shown in Fig. 4.25(b), the corresponding link parameters are given in Table 4.1. Because the joint axes are all parallel and all the Z-axes are taken as pointing out of the paper, all αi are zero.
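With the parameters of Table 4.1, the forward kinematics of this arm can be sketched numerically (an illustration, not from the text; link lengths of 1 and the test angles are assumptions). Since all αi − 1 = 0 and all di = 0, each link transform reduces to a rotation about Z followed by a translation along X:

```python
import numpy as np

def dh_planar(a_prev, theta):
    """One link transform of the general DH form, specialised to
    alpha_{i-1} = 0 and d_i = 0 (planar arm)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s,  0.0, a_prev],
                     [s,   c,  0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def fk_3r(t1, t2, t3, L1=1.0, L2=1.0):
    """Transform of frame {3} relative to frame {0} for the RRR arm of
    Table 4.1; frame {3} sits at joint 3, not at the fingertip."""
    return dh_planar(0.0, t1) @ dh_planar(L1, t2) @ dh_planar(L2, t3)

T = fk_3r(np.pi / 2, -np.pi / 2, 0.0)
# origin of frame {3}: x = L1*cos(t1) + L2*cos(t1+t2),
#                      y = L1*sin(t1) + L2*sin(t1+t2)
```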

Problem 4.14: Figure 4.26(a) shows a robot having three degrees of freedom
and one prismatic joint. This manipulator can be called an “RPR mechanism”.

Figure 4.26(b) shows the manipulator with the prismatic joint at


minimum extension; the assignment of link frames is shown in Fig. 4.26(c).
Note that frame { 0 } and frame { 1 } are shown as exactly coincident


in this figure because the robot is drawn for the position θ1 = 0.

Rotational joints rotate about the Z-axis of the associated frame, but
prismatic joints slide along Z. In the case where joint i is prismatic, θi is a
fixed constant, and di is the variable. The link parameters are shown in
Table 4.2.

Note that θ2 is zero for this robot and that d2 is a variable. Axes 1 and
2 intersect, so a1 is zero. Angle α1 must be 90 degrees in order to rotate
Z1 so as to align with Z2 (about X1).

Table 4.2: Link parameters for the RPR manipulator

i αi − 1 ai − 1 di θi

1 0 0 0 θ1

2 90° 0 d2 0

3 0 0 L2 θ3
4.16. DIFFERENT FRAME ARRANGEMENTS:

Various possible frame assignments are shown here.

4.16.1. Derivation of link transformations:


The transformation defining frame { i } relative to frame { i − 1 } will be a function of the four link parameters. For any given robot, this transformation will be a function of only one variable, the other three parameters being fixed by the mechanical design. By defining a frame for each link, we have broken the kinematics problem into n subproblems.
In Figure 4.28 only the X and Z axes are shown for each frame, to
make the drawing clearer. Frame { R } differs from frame { i − 1 } only by a
rotation of αi − 1.

Frame { Q } differs from { R } by a translation ai − 1. Frame { P } differs


from { Q } by a rotation θi, and Frame { i } differs from { P } by a translation
di. To write the transformation that transforms vectors defined in { i } to their
description in { i − 1 },
i−1P = i−1RT RQT QPT PiT iP ,                                      .... (1)

or     i−1P = i−1iT iP ,                                           .... (2)

where  i−1iT = i−1RT RQT QPT PiT .                                 .... (3)

Considering each of these transformations, we see that equation (3) may


be written
i−1iT = RX (αi − 1) DX (ai − 1) RZ (θi) DZ (di) ,                  .... (4)

or

i−1iT = ScrewX (ai − 1 , αi − 1) ScrewZ (di , θi) ,                .... (5)
where the notation ScrewX (ai − 1 , αi − 1) stands for the combination of a translation along the X axis by a distance ai − 1 and a rotation about the same axis by an angle αi − 1. Multiplying out equation (4), we obtain the general form of i−1iT :

          [ cθi           − sθi           0           ai − 1       ]
i−1iT  =  [ sθi cαi − 1     cθi cαi − 1   − sαi − 1   − sαi − 1 di ]    .... (6)
          [ sθi sαi − 1     cθi sαi − 1     cαi − 1     cαi − 1 di ]
          [ 0               0               0           1          ]
Note: c for cos

s for sin
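Equation (6) can be wrapped in a small helper (an illustrative sketch; the function name and argument order are assumptions made here). Each call returns one link transform from the four Denavit-Hartenberg parameters:

```python
import numpy as np

def dh_transform(alpha_prev, a_prev, d, theta):
    """General link transform of equation (6): frame {i} relative to
    frame {i-1} from the DH parameters (alpha_{i-1}, a_{i-1}, d_i, theta_i)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha_prev), np.sin(alpha_prev)
    return np.array([
        [ct,      -st,      0.0,  a_prev],
        [st * ca,  ct * ca, -sa,  -sa * d],
        [st * sa,  ct * sa,  ca,   ca * d],
        [0.0,      0.0,      0.0,  1.0]])

# second link transform of the RPR arm of Problem 4.14 (Table 4.2),
# with the prismatic variable set to d2 = 0.5 for illustration
T12 = dh_transform(np.pi / 2, 0.0, 0.5, 0.0)
```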

Using the link parameters given in Table 4.2 of article 4.15 for the robot of Fig. 4.26 in Problem 4.14, the individual transformations for each link can be computed by substituting the parameters into (6). We obtain (here 01T denotes the transform describing frame { 1 } relative to frame { 0 }, cθ1 = cos θ1 and sθ1 = sin θ1):

        [ cθ1   − sθ1   0   0 ]
01T  =  [ sθ1     cθ1   0   0 ]
        [  0       0    1   0 ]
        [  0       0    0   1 ]

        [ 1   0     0     0  ]
12T  =  [ 0   0   − 1   − d2 ]
        [ 0   1     0     0  ]
        [ 0   0     0     1  ]

        [ cθ3   − sθ3   0    0 ]
23T  =  [ sθ3     cθ3   0    0 ]        .... (7)
        [  0       0    1   L2 ]
        [  0       0    0    1 ]
Once having derived these link transformations, the elements of the
fourth column of each transform will give the coordinates of the origin of the
next higher frame.
4.16.2. Concatenating Link Transformations:

Once the link frames have been defined and the corresponding link
parameters found, developing the kinematic equations is straightforward. From
the values of the link parameters, the individual link-transformation matrices
can be computed. Then, the link transformations can be multiplied together to
find the single transformation that relates frame { N } to frame { 0 } :
0NT = 01T 12T 23T … N−1NT                                          .... (8)

This transformation, 0NT, will be a function of all n joint variables.

4.17. THE PUMA 560:

The Unimation PUMA 560 (Fig. 4.29) is a robot with six degrees of freedom and all rotational joints (i.e., it is a 6R mechanism).
Fig. 4.30 shows a detail of the forearm of the robot.


Note that the frame { 0 } (not shown) is coincident with frame { 1 }
when θ1 is zero.
Table 4.3: Link parameters of the PUMA 560

i αi − 1 ai − 1 di θi

1 0 0 0 θ1

2 − 90° 0 0 θ2

3 0 a2 d3 θ3

4 − 90° a3 d4 θ4

5 90° 0 0 θ5

6 − 90° 0 0 θ6

Using equation (6), we compute each of the link transformations:

        [ cθ1   − sθ1   0   0 ]
01T  =  [ sθ1     cθ1   0   0 ]
        [  0       0    1   0 ]
        [  0       0    0   1 ]

        [   cθ2   − sθ2   0   0 ]
12T  =  [   0       0     1   0 ]
        [ − sθ2   − cθ2   0   0 ]
        [   0       0     0   1 ]

        [ cθ3   − sθ3   0   a2 ]
23T  =  [ sθ3     cθ3   0   0  ]
        [  0       0    1   d3 ]
        [  0       0    0   1  ]

        [   cθ4   − sθ4   0   a3 ]
34T  =  [   0       0     1   d4 ]
        [ − sθ4   − cθ4   0   0  ]
        [   0       0     0   1  ]

        [ cθ5   − sθ5    0    0 ]
45T  =  [ 0       0     − 1   0 ]
        [ sθ5     cθ5    0    0 ]
        [ 0       0      0    1 ]

        [   cθ6   − sθ6   0   0 ]
56T  =  [   0       0     1   0 ]        .... (9)
        [ − sθ6   − cθ6   0   0 ]
        [   0       0     0   1 ]
We now form 06T by matrix multiplication of the individual link matrices. While forming this product, we will derive some subresults that will be useful. We start by multiplying 45T and 56T; that is,

                  [ c5 c6   − c5 s6   − s5   0 ]
46T = 45T 56T  =  [  s6       c6        0    0 ]        .... (10)
                  [ s5 c6   − s5 s6     c5   0 ]
                  [  0        0         0    1 ]

where c5 stands for cθ5, s5 for sθ5, and so on. Then we have

                  [ c4 c5 c6 − s4 s6     − c4 c5 s6 − s4 c6     − c4 s5   a3 ]
36T = 34T 46T  =  [ s5 c6                − s5 s6                  c5      d4 ]        .... (11)
                  [ − s4 c5 c6 − c4 s6     s4 c5 s6 − c4 c6       s4 s5   0  ]
                  [ 0                      0                      0       1  ]

Because joints 2 and 3 are always parallel, multiplying 12T and 23T first and then applying the sum-of-angle formulas will yield a somewhat simpler final expression. This can be done whenever two rotational joints have parallel axes, and we have

                  [ c23    − s23   0     a2 c2 ]
13T = 12T 23T  =  [  0        0    1      d3   ]        .... (12)
                  [ − s23   − c23  0   − a2 s2 ]
                  [  0        0    0      1    ]

where we have used the sum-of-angle formulae

c23 = c2 c3 − s2 s3 ,
s23 = c2 s3 + s2 c3 .

Then we have

                  [ 1r11   1r12   1r13   1px ]
16T = 13T 36T  =  [ 1r21   1r22   1r23   1py ]
                  [ 1r31   1r32   1r33   1pz ]
                  [ 0      0      0      1   ]

where

1r11 = c23 ( c4 c5 c6 − s4 s6 ) − s23 s5 c6 ,
1r21 = − s4 c5 c6 − c4 s6 ,
1r31 = − s23 ( c4 c5 c6 − s4 s6 ) − c23 s5 c6 ,
1r12 = − c23 ( c4 c5 s6 + s4 c6 ) + s23 s5 s6 ,
1r22 = s4 c5 s6 − c4 c6 ,
1r32 = s23 ( c4 c5 s6 + s4 c6 ) + c23 s5 s6 ,
1r13 = − c23 c4 s5 − s23 c5 ,
1r23 = s4 s5 ,
1r33 = s23 c4 s5 − c23 c5 ,
1px = a2 c2 + a3 c23 − d4 s23 ,
1py = d3 ,
1pz = − a3 s23 − a2 s2 − d4 c23 .                                  .... (13)
Finally, we obtain the product of all six link transforms:

                  [ r11   r12   r13   px ]
06T = 01T 16T  =  [ r21   r22   r23   py ]
                  [ r31   r32   r33   pz ]
                  [ 0     0     0     1  ]

Here,

r11 = c1 [ c23 (c4 c5 c6 − s4 s6) − s23 s5 c6 ] + s1 (s4 c5 c6 + c4 s6) ,
r21 = s1 [ c23 (c4 c5 c6 − s4 s6) − s23 s5 c6 ] − c1 (s4 c5 c6 + c4 s6) ,
r31 = − s23 (c4 c5 c6 − s4 s6) − c23 s5 c6 ,
r12 = c1 [ c23 (− c4 c5 s6 − s4 c6) + s23 s5 s6 ] + s1 (c4 c6 − s4 c5 s6) ,
r22 = s1 [ c23 (− c4 c5 s6 − s4 c6) + s23 s5 s6 ] − c1 (c4 c6 − s4 c5 s6) ,
r32 = − s23 (− c4 c5 s6 − s4 c6) + c23 s5 s6 ,
r13 = − c1 (c23 c4 s5 + s23 c5) − s1 s4 s5 ,
r23 = − s1 (c23 c4 s5 + s23 c5) + c1 s4 s5 ,
r33 = s23 c4 s5 − c23 c5 ,
px = c1 [ a2 c2 + a3 c23 − d4 s23 ] − d3 s1 ,
py = s1 [ a2 c2 + a3 c23 − d4 s23 ] + d3 c1 ,
pz = − a3 s23 − a2 s2 − d4 c23 .                                   .... (14)

Equations (14) constitute the kinematics of the PUMA 560. They specify how to compute the position and orientation of frame { 6 } relative to frame { 0 } of the robot. These are the basic equations for all kinematic analysis of this manipulator.
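The position part of equations (14) depends only on the first three joints, because the last three joint axes intersect at the wrist. A numeric sketch (the link values below are placeholders chosen for illustration, not the actual PUMA 560 dimensions):

```python
import numpy as np

def puma_wrist_position(t1, t2, t3, a2, a3, d3, d4):
    """Position vector (px, py, pz) of equations (14)."""
    c1, s1 = np.cos(t1), np.sin(t1)
    c2, s2 = np.cos(t2), np.sin(t2)
    c23, s23 = np.cos(t2 + t3), np.sin(t2 + t3)
    r = a2 * c2 + a3 * c23 - d4 * s23   # reach in the plane of the arm
    return np.array([c1 * r - d3 * s1,
                     s1 * r + d3 * c1,
                     -a3 * s23 - a2 * s2 - d4 * c23])

# at the zero configuration the expressions collapse to
# px = a2 + a3, py = d3, pz = -d4
p_home = puma_wrist_position(0.0, 0.0, 0.0, a2=0.4, a3=0.05, d3=0.15, d4=0.43)
```

A useful cross-check, not shown here, is to multiply the six link matrices of equation (9) numerically and compare the fourth column of the product against this closed form.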

*********
CHAPTER – 5

IMPLEMENTATION AND
ROBOT ECONOMICS

RGV, AGV; Implementation of Robots in Industries – Various Steps; Safety


Considerations for Robot Operations – Economic Analysis of Robots.

5.1. RAIL GUIDED VEHICLE (RGV):

Rail Guided Vehicle (RGV) is a fast, flexible and easily installed material handling system with separate input/output stations, allowing multiple operations to be performed at once. There are linear and circular types of RGVs, which run safely and smoothly at high speeds. RGVs can be used to transport all types of goods in a warehouse and can be combined with an automated storage and retrieval system to complete the storage and transportation of materials in a factory.

An RGV can link multiple destinations and is a good, economical alternative to conveyors: it eliminates the complex, fixed layout of conveyors, enabling a simple and easily maintainable transportation system. An RGV is controlled by a distributed control system and can be expanded easily as the system parameters change. This flexibility cannot be obtained in a normal conveyor system.
In a system, multiple vehicles can be operated according to the transportation requirement. An RGV system consists of the transportation rail, the vehicles and a controller. The RGV rail can be installed in a linear or circular layout.

The following are the features of a RGV.

★ Efficient, automatically controlled material flow.

★ Has sorting and collection capability for the channel conveyors of an automated storage and retrieval system.

★ Minimize potential injuries for employees.

★ RGV, together with the MES (Manufacturing Execution System), can


sort the goods based on the purchase order.

★ Timely transportation of the parts, required for the production line, to


the designated point.

★ RGV system transports materials between different processes and


automatically replenishes materials according to the MES production
process.

★ Can be fitted with a conveyor, lift, robot or fork according to customers’ requirements.

★ Low noise & vibration.

★ Has faster handling speed than conveyor.


If the RGV system uses just one rail, it is called a monorail system, which typically operates from a suspended position overhead; alternatively, it can consist of a two-rail system, generally found on the plant floor.

Rail guided vehicles operate asynchronously and are driven by an


on-board electric motor, with power being supplied by an electrified rail. This
removes the necessity of stoppages owing to battery power wear-out, but it
presents a new safety hazard in the form of the electrified rail.

Routing variations are possible in these systems through a combination


of turntables, switches, and other specialised track sections. This allows
different loads to travel different routes. Rail-guided systems are generally
considered to be more versatile than conveyor systems, but less versatile than
AGVS (Automated guided vehicle system). Rail-guided systems are commonly
used in the automotive industry where overhead monorails move large
components and subassemblies in manufacturing operations.
Sorting Transfer Vehicle (STV) is a fast, flexible and easily installed


material transport system. It can be used to move loads of all sizes in a
warehouse.

STV has sorting and collecting capabilities for multiple AS/RS (Automated Storage and Retrieval System) aisle conveyor stations. It enables picking by order line and sorting by destination.

The STV track can be installed in a loop or straight line to accommodate a variety of applications, such as mixed pallet picking, cycle counting, quality inspection, load sorting and truck loading.

Benefits of STVs include fewer motors, no single point of failure, high speed, high throughput and the flexibility to expand to handle future growth.
5.2. AUTOMATED GUIDED VEHICLE SYSTEM (AGVS):

An automated guided vehicle (AGV) is a mobile robot that follows


markers or wires in the floor, or uses vision or lasers.

They are most often used in industrial applications to increase efficiency


and reduce costs by helping to automate a manufacturing facility or warehouse.

Human interference in any system introduces the possibility of human error. In many systems, there is work that is repetitive, and doing the same work with constant diligence is very difficult for us. When such work is automated by using a machine, the system’s efficiency is increased and the production line becomes much smoother. Some examples of this kind of automated machine are the pick and place robot, the automated conveyor system, the automated bottle capping unit, etc.

In most cases, these automated devices are part of the material handling system or part of an assembly unit. All these systems are stationed at one place and handle things from there. The next level of automation is to create units which can be moved from one place to another along a guided path or by self-guidance, which gives rise to the Automated Guided Vehicle System. An Automated Guided Vehicle is considered a mobile robot. Research is still going on to make AGVs work intelligently by using Artificial Intelligence (AI), nanotechnology, etc.

Historically, the first commercial AGV was a simple tow truck that followed a guide wire in the floor instead of a rail track. It was used to transport raw material/product from one station to another. It was developed by Barrett Electronics in 1953.

AGVs can tow objects behind them in trailers, to which they can autonomously attach. The trailers can be used to move raw materials or finished product. An AGV can also store objects on a bed. The objects can
be placed on a set of motorized rollers (conveyor) and then pushed off by


reversing them.

Applications of the automated guided vehicle have broadened during the late 20th century, and they are no longer restricted to industrial environments. AGVs are employed in nearly every industry, including pulp, paper, metals, newspaper and general manufacturing.

AGV systems offer many advantages over other forms of material


transport. However, the design of these systems is complex due to the
interrelated decisions that must be made and the large number of system design
alternatives that are available.

5.2.1. Components of AGV:


The essential components of AGV are,

(i) Mechanical structure

(ii) Driving and steering mechanism actuators

(iii) Servo controllers

(iv) On board computing facility

(v) Servo amplifier

(vi) Feedback components

(vii) On board power system

5.2.2. Advantages of using an AGV:

(i) AGV can be controlled and monitored by computers.

(ii) In the long run, AGVs decrease labour costs.

(iii) They are unmanned and hence can be used in extreme/hazardous


environments.

(iv) They are compatible with production and storage equipment.

(v) They can be used to transport hazardous substances.

(vi) They can reduce downtime in production.

(vii) Improvement in productivity and profit.

(viii) They can be used for continuous operations.


5.2.3. Applications of AGV:

Some of the common characteristics where an AGV can be used are:

(i) Repetitive movement of materials over a distance

(ii) Regular delivery of stable loads

(iii) Medium throughput/volume

(iv) When on-time delivery is critical and late deliveries are causing
inefficiency

(v) Operations with at least two shifts

(vi) Processes where tracking material is important

Applications that exhibit the above traits and use AGVs include:

(a) Raw Material / Work in Progress (WIP) / Product Handling system:

Fig. 5.4

Raw materials, work in progress and products all need to be transported from one place to another, which is handled by a material handling system. The AGV is a part of the material handling system and is ideal for these applications.
(b) Pallet Handling System:

Fig. 5.5

In systems like manufacturing or distribution, pallet AGVs are commonly employed. A pallet handling AGV is ideal for handling the movement of pallets in those systems.
(c) Trailer Loading System:

Fig. 5.6
Today AGVs are used to load trailers without the requirement of a special docking machine to load them. The trailers are stacked with materials by means of AGVs.

(d) Roll Handling System:

Fig. 5.7

Rolls are transported in many types of plants, such as paper mills, converters, printers, newspapers, steel producers and plastics manufacturers. AGVs are used to stack, rack and load the rolls in those plants.

(e) Container Handling System:

Fig. 5.8
AGVs are used in container yards to handle containers. The system is made up of cranes, lift trolleys and other handling machines.

Some of the common industries which use AGV are as follows:

(a) Manufacturing

(b) Paper and print

(c) Food and beverage

(d) Hospital

(e) Warehousing

(f) Automotive

(g) Theme parks

(h) Chemical

(i) Pharmaceutical

The different types of AGVs are as follows:

(a) Towing Vehicles (also called “tugger” vehicles)

Fig. 5.9
(b) AGVS Unit Load Vehicles

Fig. 5.10

(c) AGVS Pallet Trucks



(d) AGVS Fork Truck

(e) AGVS Hybrid Vehicles



(f) Light Load AGVS

Fig. 5.14

(g) AGVS Assembly Line Vehicles

Fig. 5.15

5.3. VEHICLE GUIDANCE TECHNOLOGIES:

(a) Wired:

A wire is placed in a slot cut about 1 inch below the floor surface. This wire transmits a radio signal which is picked up by a sensor on the AGV. By means of this signal, the AGV is guided along the path where the wire is installed. The sensor constantly detects the AGV's position relative to the wire from the signal it receives, and this feedback loop regulates the steering of the AGV along the path of the wire.
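The wire-following feedback loop can be sketched as a simple proportional controller. The function below is an illustrative sketch only; the antenna pair, signal units and gain value are assumptions, not a vendor API.

```python
def steering_correction(left_signal, right_signal, gain=0.8):
    """Proportional steering for a wire-guided AGV (illustrative sketch).

    Two antennas straddle the guide wire; the difference in received
    signal strength approximates the lateral offset from the wire.
    Returns a steering command: negative steers left, positive right.
    """
    # If the left antenna hears the stronger signal, the wire lies to
    # the left of centre, so the command is negative (steer left).
    error = right_signal - left_signal
    return gain * error
```

Directly over the wire, both antennas receive equal strength and the correction is zero; any drift produces a command that steers the vehicle back toward the wire.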

(b) Guide tape:

Instead of a wire, a tape is used for guidance. The tape is either coloured or magnetic, and both provide a signal path for controlling the vehicle. Vehicles guided this way are often called carts, and hence the system is called an AGC. The main advantage of this system is that the tapes are much easier to install than buried wire. To change the movement of the vehicle, a small piece of magnetic tape of reversed polarity is enough. Another advantage is that the tapes can be removed and relocated as the shop floor requires. This also makes the system well suited to high-traffic areas.

(c) Laser Target Navigation:

In this system, the AGV carries a laser transmitter and receiver on a rotating turret. The area where the vehicle is supposed to travel is fitted with reflectors on walls, poles and other fixed targets, and a map of these is provided to the system. By triangulating the reflected laser signals against the stored map, the position of the vehicle is determined, and as the vehicle moves, its position is updated on the map. Two types of laser are used for this application: (a) modulated laser and (b) pulsed laser.
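As a sketch of the triangulation step, the snippet below intersects the lines of sight to two mapped reflectors. It assumes the bearings are already in the world frame (i.e. the vehicle heading is known); real systems resolve the heading too, typically from a third reflector.

```python
import math

def triangulate(p1, b1, p2, b2):
    """Estimate AGV position from bearings to two known reflectors.

    p1, p2 -- (x, y) reflector positions taken from the stored map
    b1, b2 -- world-frame bearing angles (radians) from the AGV to
              each reflector (sketch assumes heading is known)
    The AGV lies on the line p_i - t_i * (cos b_i, sin b_i), t_i > 0;
    intersecting the two lines of sight gives its position.
    """
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    # Solve p1 - t1*d1 = p2 - t2*d2 for the range t1 (2x2 system).
    a, b = d1[0], -d2[0]
    c, d = d1[1], -d2[1]
    ex, ey = p1[0] - p2[0], p1[1] - p2[1]
    det = a * d - b * c
    if abs(det) < 1e-9:
        raise ValueError("reflector bearings are parallel")
    t1 = (ex * d - b * ey) / det
    return (p1[0] - t1 * d1[0], p1[1] - t1 * d1[1])
```

For example, with reflectors mapped at (10, 0) and (0, 10) seen at bearings 0 and 90 degrees, the intersection places the vehicle at the origin.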

(d) Inertial Gyroscopic Navigation:

(a) Gyroscope (b) Gyroscope Axis

Fig. 5.18

The AGV is guided by the principle of inertial gyroscopic navigation. A computer control directs the path of the AGV, while transponders respond to the changing directions of the vehicle and the gyroscope detects any change in heading. This system can be used in extreme conditions. It can also be used together with magnetic guides and strips, as the principle is not disturbed by magnetic interference.

(e) Natural Features (Natural Targeting) navigation:

Fig. 5.19: Laser Range Finder

When AGV guidance is provided without any installation in the workplace (no buried wire, no tapes and no fixed reflectors), the system is called natural-feature or natural-targeting navigation. One such system uses range-finder sensors such as a laser range finder; another uses a gyroscope with Monte-Carlo/Markov localization techniques. Both systems determine their position dynamically and plot the shortest permitted path to the destination. This approach is very advantageous because it is dynamic and can work around obstacles in the path to reach the destination, which reduces downtime and increases the flexibility of the system.
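The Monte-Carlo localization mentioned above can be sketched in one dimension. The wall position, noise levels and measurement model below are illustrative assumptions, but the predict/weight/resample cycle is the core of the technique.

```python
import math
import random

def monte_carlo_step(particles, moved, measured_range,
                     wall_at=10.0, motion_noise=0.1, meas_var=0.25):
    """One cycle of 1-D Monte-Carlo localization (illustrative sketch).

    particles      -- candidate x positions of the AGV along a corridor
    moved          -- odometry: distance driven since the last cycle
    measured_range -- range-finder reading to a wall at x = wall_at
    """
    # Predict: move every particle by the odometry plus process noise.
    predicted = [p + moved + random.gauss(0.0, motion_noise)
                 for p in particles]
    # Update: weight each particle by how well it explains the reading.
    weights = []
    for p in predicted:
        err = measured_range - (wall_at - p)
        weights.append(math.exp(-err * err / (2.0 * meas_var)))
    # Resample in proportion to the weights.
    return random.choices(predicted, weights=weights, k=len(particles))
```

Repeating the cycle concentrates the particle cloud around the true position, which the AGV takes as its location estimate.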
(f) Vision guidance:

With the advance of technology, vision-guided systems were developed using evidence grid technology. A camera records the features along the path of the vehicle, and the path is mapped from 360-degree images onto a 3D map. The AGV then uses probabilistic volumetric sensing (invented by Dr. Hans Moravec at Carnegie Mellon University) to navigate through the environment. It follows the trained path without any assistance, judging the features in the path and adjusting its position accordingly.
(g) Geoguidance:
In this system, the AGV recognizes its environment and establishes its location in real time. A simple example would be a forklift equipped with a geoguidance system that handles the loading and unloading of racks in a facility. The loading and unloading operations are defined only after setting parameters such as the rack locations and the boundary of the facility. The routes that the forklift may use to complete the task are infinitely modifiable.
5.4. STEERING CONTROL:
AGVs have a fully automated steering control. This is attained by three
methods as follows
(a) Differential Speed Control: This type of steering control, similar to that used in tanks, has two independent drive wheels. To turn the AGV, the two drive wheels are driven at different speeds; to move the AGV forwards or backwards, they are driven at the same speed. Though this mechanism is simple, it is avoided where smooth turning matters, because the turns it produces are not smooth enough for real-time applications.
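The differential rule can be written down directly: equal wheel speeds give straight-line motion, and a speed difference proportional to the desired turn rate steers the vehicle. A minimal sketch, with an assumed illustrative track width:

```python
def wheel_speeds(v, omega, track_width=0.5):
    """Left/right wheel speeds for a differential-drive AGV (sketch).

    v           -- desired forward speed (m/s)
    omega       -- desired turn rate (rad/s, positive turns left)
    track_width -- distance between the drive wheels (m, assumed)
    """
    left = v - omega * track_width / 2.0
    right = v + omega * track_width / 2.0
    return left, right
```

wheel_speeds(1.0, 0.0) drives straight, while wheel_speeds(0.0, 2.0) spins the AGV in place about its own axis, counter-rotating the wheels.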

(b) Steered Wheel Control: This is similar to the steering control in a car. On its own it is not easy to manoeuvre, so it is usually applied to three-wheeled vehicles. With this type of steering control the turns are very smooth, and it is commonly used in most applications. This type of steering control can also have a manual over-ride.

(c) Combination of Both: To incorporate the advantages of both systems, a combination of the two controls is used. Here, the AGV has two independent drive controls installed, so it can make smooth turns (rotating in an arc) using the steered wheel control and can move in any direction using the differential mode.

5.4.1. Path Decision:

An AGV must decide the path it will take to reach its destination. This is achieved in different ways, depending on the guidance system used.

(a) Frequency Select Mode: This is used in systems where the AGV is guided by a wire beneath the floor, which guides the AGV by transmitting a frequency. Where the path changes direction, there is a corresponding change in the frequency of the wire; the AGV detects this change in frequency and follows the new path. Installing wires under the floor is costly, so making modifications to the path is also costly.

(b) Path Select Mode: In this system, the AGV's path is programmed in advance. At run time, the AGV uses its sensors to monitor its movement and changes speed and direction as programmed. This method requires programmers to be employed to modify the path parameters.

(c) Magnetic Tape Mode: A magnetic tape laid on the surface of the floor provides the path for the AGV to follow. Strips of tape in different combinations of polarity, sequence and distance guide the AGV to change lanes, speed up, slow down and stop.
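The polarity coding can be pictured as a small lookup: pairs of magnet polarities read off the tape map to commands. The code table below is entirely hypothetical (real vendors define their own coding); it only illustrates the idea.

```python
# Hypothetical code table: each pair of polarities read from the tape
# maps to one vehicle command. Real AGV vendors define their own codes.
TAPE_CODES = {
    ("N", "S"): "speed_up",
    ("S", "N"): "slow_down",
    ("N", "N"): "change_lane",
    ("S", "S"): "stop",
}

def decode_tape(polarities):
    """Turn a run of magnet polarities into a list of AGV commands."""
    commands = []
    for i in range(0, len(polarities) - 1, 2):
        pair = (polarities[i], polarities[i + 1])
        commands.append(TAPE_CODES.get(pair, "ignore"))
    return commands
```

Reading the strip ["N", "S", "S", "S"] under this table would yield a speed-up command followed by a stop.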

5.5. VEHICLE MANAGEMENT AND SAFETY:

An AGV is a driverless vehicle, usually employed as part of an automated system such as an FMS, in which error recovery is a chief requirement. AGVs are, after all, machines, and are susceptible to malfunction or breakdown. To monitor for this, systems are put in place to govern and control the AGVs. The following are the ways in which AGVs are managed and controlled.

System Management:

A system management facility is required to keep control and track of all the AGVs. Three types of system management are used to control AGVs, as follows.

(a) Locator panel: This system shows the area in which each AGV is located. If an AGV remains in a particular area for longer than a specified time, it must be assumed that either the path is blocked or the AGV has broken down.

(b) CRT Display: This displays the location of all the AGVs in real time. It
also gives a status of the AGV, its battery voltage, unique identifier, and can
show blocked spots.

(c) Central logging: This system is used to keep track of the history of all
the AGVs. Central logging stores all the data and history of these vehicles
which can be used to give technical support or for other planning activities.

Traffic Control:

In an FMS manufacturing environment, many AGVs are needed because a variety of parts and products must be handled. In that scenario, path clashes must be avoided, so proper traffic control is a must.

Some common traffic control methods for AGVs are given here.

(a) Zone Control: In this system, each zone has a transmitter whose signal marks out the area. An AGV entering a zone receives this signal and transmits its presence back, and this presence is relayed to all other AGVs. Any AGV wanting to enter that zone is given a stop signal until the AGV already in the zone has crossed out of it. An alternative method is for each AGV to carry a small transmitter and receiver of its own, communicating its zone location directly to the other AGVs, which respond accordingly.

Though this system is simple and economical, its drawback is that if the communication in one zone fails, the whole system collapses, resulting in collision of vehicles.
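The zone reservation logic described above amounts to a mutual-exclusion rule per zone. A minimal sketch (class and method names are illustrative, not from any AGV product):

```python
class ZoneController:
    """Zone-based traffic control (sketch): at most one AGV per zone.

    An AGV must acquire a zone before entering; any other AGV asking
    for an occupied zone receives a stop signal until it is released.
    """

    def __init__(self):
        self.occupied = {}   # zone id -> AGV id currently inside

    def request_entry(self, zone, agv):
        holder = self.occupied.get(zone)
        if holder is None or holder == agv:
            self.occupied[zone] = agv
            return "go"
        return "stop"        # wait until the occupying AGV crosses out

    def leave(self, zone, agv):
        # Release the zone only if this AGV actually holds it.
        if self.occupied.get(zone) == agv:
            del self.occupied[zone]
```

A second AGV requesting an occupied zone is told to stop, and receives a go signal only after the first AGV reports that it has left the zone.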

(b) Forward Sensing Control: In this system the AGVs use collision avoidance sensors: sonic, optical and bumper sensors. Sonic sensors work much like radar, transmitting a chirp (a high-frequency signal) and receiving its echo to detect the presence of other objects or AGVs and so avoid collision. Optical sensors use an infra-red transmitter and receiver: infra-red light is transmitted into the workspace and the reflections are received back, and by analysing these signals the AGV recognizes whether there is an obstruction in the path. Bumper sensors use physical contact to sense the work environment. The major drawback of these systems is that they can only protect the AGV from the sides, and they are also hard to install and work with.
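The sonic (time-of-flight) calculation is simple: the chirp travels to the obstacle and back, so the one-way distance is half the round-trip path. A sketch, with an assumed 1.5 m safety envelope:

```python
def echo_distance(round_trip_s, speed_of_sound=343.0):
    """Distance (m) to an obstacle from a sonic sensor's echo time.

    The chirp travels to the obstacle and back, so the one-way
    distance is half the total path travelled at the speed of sound.
    """
    return speed_of_sound * round_trip_s / 2.0

def should_stop(round_trip_s, safety_margin_m=1.5):
    """Stop command if the obstacle is inside the safety envelope.
    (The 1.5 m margin is an assumed, illustrative value.)"""
    return echo_distance(round_trip_s) < safety_margin_m
```

An echo returning after 10 ms corresponds to an obstacle about 1.7 m ahead; a 5 ms echo (about 0.86 m) would trigger a stop under the assumed margin.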

(c) Combinational control: The advantages of both the control systems are
blended in this combinational control. For normal operations, zonal control is
employed and for collision control forward sensing control is used.

General safety precautions while using an AGV:

(i) AGV travel paths should be clearly marked, including turning areas.

(ii) Workers should be trained to watch out for AGVs and to keep clear
of an AGV path if a vehicle is approaching. Companies should also
provide training for contractors working in their plant.

Fig. 5.20

(iii) Weighted safety cones should be placed around a work area when
working on or near an AGV travel path.

(iv) “Virtual” bumper systems can increase productivity and system flexibility, and improve plant safety with regard to object detection/avoidance.
(v) Warning and Alarm Lights: AGVs should have warning lights,
audible-warning signals, emergency stop buttons, and non-contact
obstacle detectors. When the AGV is approaching a turn, the warning
lights function as directional signals to alert personnel in the area of
the AGV’s intention to branch right or left on the Guide path. When
the AGV goes into an alarm mode, the Alarm Lights blink to indicate
an alarm.
(vi) Audible Warning/Alarm Signals: Two distinct tones are used during
the vehicle’s operation, an acknowledge tone and an alarm tone. The
AGV emits a slow repeating acknowledge (run) tone during normal
operation. The alarm tone sounds when an alarm is active.

(vii) Emergency Stop Buttons: Emergency stop buttons are provided on each AGV. When activated, the AGV enters an emergency stop state and all motion-capable equipment becomes inactive.

(viii) Collision Avoidance System: The non-contact collision avoidance system on the AGV can utilize a number of different laser sensors mounted at the front, rear, side and upper locations of the AGV. When the AGV is travelling on the guide path, this system will detect an obstacle (such as a person) in any of the coverage locations.

5.6. IMPLEMENTATION OF ROBOTS IN INDUSTRIES:

The implementation of robotics in an organization requires engineering expertise and, more importantly, the involvement of the management. The staff involved, i.e. managers, engineers, shop floor operators and maintenance personnel, should also be consulted. Management support and production personnel's acceptance of the robotics technology are therefore critical factors in its introduction within an industry. Any robotic application must consider the total system impact.

A logical approach is necessary for the successful implementation of this technology. The following are the steps involved in the approach.

★ Making a start − Initial familiarization with the technology.

★ The plant survey − to identify potential applications.

★ Selection of an application(s).

★ Robot(s) selection for the application(s).

★ Thorough economic analysis and capital authorization.

★ Planning and engineering the installation.

★ Installation.

Initial familiarization with Robotics:

The personnel of many companies have no expertise or previous experience in robotics but feel the need for robots in their plants. So the initial task of the engineers in a company is to gain some basic knowledge and a clear understanding of current robot capabilities.

The personnel can acquire knowledge through books, technical magazines, journals and by attending courses and conferences which specifically deal with industrial applications. Study tours and 'hands-on' experience with existing commercial robots are also an invaluable (although time-consuming) way of seeing applications first hand. Nowadays, robotics is included in the training and education of engineers in many undergraduate and postgraduate courses.

During this period, it is very important that the management provides continuous, consistent support and encouragement to implement robotics.

The Plant Survey – Identifying Potential Applications:

Generally two categories of robot applications must be distinguished − one, the design of a new plant or new facility, and another, a robot project within an existing plant. The applications engineer has greater design flexibility in the first category than in the second, where he has to substitute a robot in the place of an existing human operator.

In an existing plant, a survey of the current operations conveniently identifies those operations which are susceptible to automation by robotics. Shop floor operators also have an intimate knowledge of the processes and are able to give the surveyor detailed information essential for any successful robot application. The general considerations for an industrial robot to be feasible are given here.

★ Hazardous or uncomfortable working conditions:


Identified by protective clothing (masks, helmets, safety clothing, etc.)
worn by workers or special equipment (like ventilating systems) to protect
workers.

★ Repetitive Operations:
Production of large and medium quantities of a product involves repetitive operations.

★ Handling of difficult-to-hold jobs:


Operations where the operator needs some form of mechanical assistance
(e.g. hoists and cranes) in handling work parts or tools.

★ Multishift Operation:
As compared to human labour, which has a high variable cost, a robot substitution would have a high fixed cost (which can be spread over the number of shifts) and a low variable cost. In multishift operation, therefore, the overall effect of the robot application would be to reduce the total operating cost.
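The fixed-versus-variable cost argument can be made concrete with a small model. All the money figures used below are assumed, illustrative numbers, not data from any plant:

```python
def annual_cost_manual(wage_per_shift_year, shifts):
    """Manual operation: cost is almost entirely variable with shifts."""
    return wage_per_shift_year * shifts

def annual_cost_robot(fixed_per_year, variable_per_shift_year, shifts):
    """Robot operation: a high fixed cost spread over however many
    shifts are worked, plus a small per-shift variable cost."""
    return fixed_per_year + variable_per_shift_year * shifts
```

With assumed figures of $40,000 per operator shift-year against a $60,000 annual fixed robot cost plus $5,000 per shift, the robot loses on one shift ($65,000 vs $40,000) but wins clearly on three ($75,000 vs $120,000).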

Selection of an Application(s):

Having identified a number of potential areas of robot application, the problem is to determine which potential application(s) to pursue. The important criterion is which alternative gives the best financial payback and return on investment.

The following technical criteria have been considered by the General Electric Company in choosing robot applications:

★ Operation is simple and repetitive.

★ Operation cycle time is greater than five seconds.

★ Parts can be delivered with proper position and orientation (POSE).


★ Part weight is suitable (typical upper weight limit is 1100 lb).

★ No inspection required for the operation.


★ One or two personnel can be replaced in a 24 hour period.
★ Setups and changeovers are not frequent.
Robot(s) Selection for the application(s):

For the selected application(s), the engineer must choose a suitable robot(s) from the existing commercial machines to meet the requirements.

The choice is often a very difficult decision. Expert opinion, vendor information and various other sources aid in the selection process. The appropriate combination of parameters or technical features that need to be considered for the selection of the robot includes:

★ The degrees of freedom.

★ The type of drive and control system.

★ Sensor capabilities.

★ Programming features.

★ Accuracy and precision requirements.

★ Load capacity.

The specifications of each robot model are compared to the required or desirable features, and a rating score is assigned. Based on the rating score, the best robot is selected for the application.
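The rating-score comparison can be sketched as a weighted sum over the feature list above. The feature names, weights and 0-10 scores below are hypothetical examples, not real vendor data:

```python
def rating_score(specs, weights):
    """Weighted score for one robot model.

    specs   -- feature name -> score on a common 0-10 scale
    weights -- feature name -> relative importance of that feature
    Only features present in both dictionaries contribute.
    """
    return sum(weights[f] * specs[f] for f in weights if f in specs)

def best_robot(candidates, weights):
    """Return the model name with the highest weighted rating score."""
    return max(candidates,
               key=lambda m: rating_score(candidates[m], weights))

# Hypothetical comparison of two models:
weights = {"accuracy": 3, "load_capacity": 2, "programming": 1}
models = {
    "Model-A": {"accuracy": 8, "load_capacity": 5, "programming": 7},
    "Model-B": {"accuracy": 6, "load_capacity": 9, "programming": 6},
}
# Model-A scores 3*8 + 2*5 + 1*7 = 41; Model-B scores 3*6 + 2*9 + 1*6 = 42.
```

Here Model-B would be selected, since its stronger load capacity outweighs Model-A's accuracy advantage under the chosen weights.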

Thorough Economic analysis and capital authorization:

Having established that there is a suitable robot for the selected application, a detailed economic and technical analysis must be documented to justify the proposed project to the management. This analysis is of considerable importance in most companies, as management usually decides whether to install the project on the basis of this analysis.

Costs will be incurred in modifying existing equipment to allow for the use of the robot and in providing additional equipment for feeding, orienting, transporting and inspecting parts. The economic analysis evaluates the probable financial benefits from the project. It may be carried out by different methods such as the Payback method, the Equivalent Uniform Annual Cost (EUAC) method and the Return on Investment (ROI) method.
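The three methods just named reduce to short formulas. The snippet below sketches each in its simplest form, ignoring taxes, depreciation schedules and uneven cash flows:

```python
def payback_period(investment, annual_savings):
    """Payback method: years needed to recover the initial investment."""
    return investment / annual_savings

def roi(investment, annual_savings, annual_costs=0.0):
    """Return on Investment: net annual return as a fraction of the
    initial investment."""
    return (annual_savings - annual_costs) / investment

def euac(investment, annual_operating_cost, i, n):
    """Equivalent Uniform Annual Cost: the investment spread over n
    years at interest rate i via the capital-recovery factor, plus
    the annual operating cost."""
    crf = i * (1 + i) ** n / ((1 + i) ** n - 1)
    return investment * crf + annual_operating_cost
```

For example, a $100,000 installation saving $25,000 a year pays back in 4 years; at 10% interest over 5 years with a $10,000 annual operating cost, its EUAC is about $36,380.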

The technical analysis describes the project in terms of its application features, required changes to existing equipment, new equipment to be acquired, fixtures and tooling, expected production rates, effects on labor, etc.

The management, based on the documented economic and technical analysis, decides whether or not to implement the project. If yes, a capital authorization is provided to use the funds for detailed planning and engineering work and to purchase and install the equipment for the project.

Planning and engineering the installation:

The planning and engineering of a robot installation includes:

★ Operational methods to be employed.

★ Incorporating manual or automatic inspection procedures before parts are delivered to the workstation.

★ Robot workcell design and its control.


★ Choice or design of end-effectors.
★ Design of additional tools and fixtures.
★ Sensory and programming requirements.
★ Safety considerations for the workcell.
★ Overall systems integration.

The engineer should attempt to develop a workplace that is best suited to the robot rather than to a human. Computer aided design and simulation packages are utilized for the overall design and analysis of robotic work cells. Also, total workcell productivity can be addressed by optimizing robot placement and key performance parameters.

Installation:

The installation of an industrial robot is done according to the standard practices for machine installation, in conjunction with the robot manufacturer's advice. The activities included in the installation phase are:

★ Procurement of the robot(s), other equipment and the supplies needed to install the workcell.

★ The site in the plant where the robot cell is to be located is prepared.
Also, provisions of electrical, pneumatic and other utilities for the cell
are readied.

★ Installation of conveyors and other material handling systems for delivery of parts into and out of the work cell.

★ Installation and programming of the work cell controller.


★ Installation of interlocks and sensors and their integration with the controller.

★ Fabrication of end effectors and other tooling.


★ Installation of safety systems.
★ Startup, debugging, trial production runs and fine tuning of the
workcell.

★ Training, maintenance and quality control.


Trial production runs must be done in a logical and scientific way to yield correct information. During trials it is important to check not only the operation but also the performance of the setup during a tool change. It is important that tool changes are adequately planned and documented.

It is also important that the installation engineer carefully considers potential failures and has troubleshooting procedures ready for dealing with their occurrence. This is particularly important for the safety of the engineers and operators involved in the installation, as human intervention may be required when parts get jammed in feeders or on conveyors and when scrap material has to be removed.

After the trials are completed, the complete project can be run. The performance of the system is then measured against the objectives set out at the beginning of the robot implementation, and final optimization is done to achieve satisfactory performance.

5.7. SAFETY CONSIDERATIONS FOR ROBOT OPERATIONS:

Safety of operators and people working in the area is one of the major
considerations in implementing a robot cell. In fact, the justification for the
use of robots in industries is to remove humans from potentially hazardous
work environments like heat, noise, vibrations, fumes, physical dangers due to
nature of work, toxic atmospheres, radiation, etc. This substitution of robots
for human labor has become significant ever since the Occupational Safety
and Health Act (OSHA) was enacted in 1971.

Robots, however, are very fast, can perform powerful movements, manipulate dangerous and sharp objects, and are almost silent in operation, all of which can pose hazards to the humans around them. Therefore, the safety issue arises again, due to the potential hazards posed by the robot itself.

In order to prevent accidents in human-robot interactions, it is important:

★ To identify sources of potential harm.

★ To determine which persons in the robot's vicinity may be in greatest peril.

★ To assess the types of injuries the robot may cause to these persons, and

★ To determine which factors have the greatest impact on safety.
The causes of accidents by robots can be divided into three categories

★ Engineering errors.
★ Human worker errors.
★ Poor environmental conditions.

Engineering errors include faulty electronics, loose connections across parts and controller errors (programming bugs, faulty algorithms, etc.). These errors are unpredictable.

Human errors occur due to inattention, fatigue, non-compliance with guarding procedures, inadequate training programs or incorrect robot start-up.

Environmental factors include extreme temperatures and poor sensing in difficult weather or lighting conditions. These factors can lead to an incorrect response by the robot.

The occasions when humans are in contact with, or in close proximity to, the robot, exposing them to danger, are during:

★ Programming of the robot.

★ Operation of the robot cell in human presence.

★ Maintenance of the robot.


The accidents may occur due to collision between the human and the
robot, parts dropped from the robot gripper, electrical shock, loose power
cables or hydraulic lines on the floor. These risks can be reduced by safety
measures such as proper grounding of electrical cables, covering power cables
and hydraulic lines, setting speed of arm at low level during programming and
turning off the power to the machine during maintenance.

The robot installation must be made by a qualified person and should conform to all national and local codes. Inspection, training, adjustment and repair must be carried out by trained personnel who possess the ability to perform these tasks safely.

Apart from these, more extensive measures must be taken to prevent hazards that arise during robot operation.

5.7.1. Installation Precautions and Workplace Design Considerations:

★ Ensuring proper installation environment

The robot should not be installed in any environment where:

✰ There are flammable gases or liquids.

✰ There are any acidic, alkaline or other corrosive material.

✰ There are any high-output/high-frequency transmitters, large-sized inverters or other sources of electrical noise.

✰ There is a mist.

✰ There are any grinding or machining chips or shavings.



★ Enough service space


Sufficient service space must be maintained for safe teaching,
maintenance and inspection.

★ Placing of control devices


The robot controller, teach pendant and mini pendant should be installed
outside the robot’s restricted space. Also, the placing should be such that
observing the robot’s movements and operating the robot becomes easy.

★ Positioning of gauges
All gauges (pressure gauges, oil pressure gauges and other gauges)
should be installed in an easy-to-check location.

★ Positioning of emergency stop switches


Emergency stop buttons or panic buttons should be provided where they
can be reached easily, in case of emergency, to stop the robot immediately.
These switches should be designed so that, once pressed, they cannot be released automatically or by mistake by any other person. They should be red in colour and separate from the power switch.
Deadman switch is a trigger or toggle switch device usually located on
the teach pendant which requires active pressure to be applied in order to
drive the manipulator. If the pressure is removed from the switch, the device
springs back to its neutral position which stops all robot movement.

★ Setting-up a safety fence


To exclude people from the working area, safety fences, light curtains and doors with interlock switches can be provided.

A safety fence must be set-up around the periphery of the robot workcell,
so that no one can easily enter the robot’s restricted space. This prevents
human intruders from entering the vicinity of the robot while it is operating.
The fence should be outside the farthest reach of the robot in all directions.

Fig. 5.21: Safety Fence Around the Robot Work Cell

In some cases where humans are coworkers with the robot in the
production process (like for loading and unloading parts) some form of two
position parts manipulator can be used to exchange parts between the robot
and the worker. This prevents inadvertent collision between the worker and
the robot and also, improves the production efficiency as two operations are
performed simultaneously i.e. loading and unloading of parts by the human
operator takes place while the robot processes the parts.

Fig. 5.22: Two Position Parts Manipulator

★ Proper Lighting:
Sufficient illumination should be provided for safe robot operation.

★ Placing the warning labels:


Warning labels are affixed on the safety fence at the entrance/exit or in
a position where it is easy to see.
5.7.2. Safety Monitoring:
Safety monitoring involves the use of sensors to indicate conditions or
events that are unsafe for humans as well as for the equipment in the cell.
Sensors ensure that the components are present and loaded correctly or
that the tooling is operating properly. Collision sensors placed between the
robot arm and end effector will stop the robot arm if excessive force is applied
to the tool. Also, sensors detect the presence/proximity of any intruder who
enters the robot area and the robots themselves take necessary action, through
control system.
The National Bureau of Standards has divided the sensor system into
three levels as:
Level 1 : Perimeter penetration detection around robot workstation.
Level 2 : Intruder detection within the robot work-cell.
Level 3 : Intruder detection in the immediate vicinity of the robot
i.e. safety skin.
Level 1:
Here sensors are provided to detect the crossing of the work station
boundary by any intruder without providing information regarding the location
of the intruder. This level is to alert the person who is entering the robot
work-cell and to pass the information to other levels. Interlock gates and
infrared light curtains fall under this category.
Level 2:
This level detects the presence of an intruder in the region between the work cell boundary and the robot work volume. The actual boundaries depend upon the work-station layout and safety strategies. Safety sensing systems like pressure mats and changes in electrical capacitance can be used at this level.

Level 3:

This level provides intruder detection within the robot work volume. The response of these sensors must be very fast, and they should be capable of detecting an imminent collision between robot and operator and of executing a strategy for avoiding it (such as slowing down, activating warning alarms or diverting the robot). Proximity sensors and ultrasonic vision are used here.

There is a possibility that the hazard detection sensors in a safety device can fail and go unnoticed until some emergency occurs. Hence a more sophisticated monitoring system called a 'fail-safe hazard detector' is used. This detector, apart from monitoring potential hazards in the workcell, is also capable of periodically and automatically checking the sensor subsystem to make certain that it is operating properly, thereby providing a much more reliable safety monitoring system.

5.7.3. Other Safety Precautions:

★ No one should enter the robot's restricted space while it is in operation or when the motor power is on. The emergency stop device should be activated to cut the power to the robot motor upon entry into the robot's restricted space.

★ Creation of working regulations and assuring worker adherence while performing training or maintenance inspections.

★ There should be a display on the operation panel or teach pendant indicating that the robot is in operation.

★ Under no circumstances should the interlock mechanism be removed, and repairs should not be performed outside the designated range.

★ Workers should be trained in advance for the actions to be taken when emergencies or malfunctions occur.

★ Daily and periodical inspections should be performed and records should be maintained.

5.8. ECONOMIC ANALYSIS OF ROBOTS:

The major factors justifying the use of robotic technology for manufacturing are economic. A robot manufacturing system requires considerable capital investment, and hence economic analysis is of considerable importance. An economic analysis is basically a systematic examination of a complex business activity which helps in making a decision about a capital investment.

The basic information required to perform the economic analysis of a proposed project includes the type of project, the cost of the robot installation, production cycle times, and the savings and benefits resulting from the project.

Type of Robot Installation:

Robot installations are generally of two categories:

★ Robot Installation at a new facility.

★ Robot Installation to replace a current method of operation performed manually.

In both cases, certain basic information is required to perform the economic analysis. The types of cost information necessary are discussed in the following section.

Data for Analysis:

The cost data needed is of two types:

★ Investment costs, and

★ Operating costs.
Investment Costs:

This includes the robot purchase cost, engineering costs, installation costs, special tooling costs (like end effectors, fixtures, etc.) and miscellaneous costs.

Operating Costs:

This includes the direct labor cost, indirect labor cost (supervision, setup, etc.), maintenance, and training costs.

While considering the operating costs, it is often convenient to identify the cost savings that will result from the use of the robot, rather than to separately identify the operating costs of alternative methods.
5.38 Robotics – www.airwalkpublications.com

The following table lists the investment and savings data typically
encountered in robot projects.

Table 1: Investment and Savings Data

1. Robot Cost: Basic cost of the robot, operational equipment, maintenance and test equipment included in the basic price of the robot.

2. Accessories Cost: Additional equipment, optional and required, that is purchased for the robot (includes additional hardware, recorders, testers, computers, and tools).

3. Related Expense: Should include all additional hardware costs and expenses for the application (such as conveyors, guard rails, component cabinets, interface hardware, and insurance).

4. Engineering Costs: Estimated cost of planning and design in support of project development (includes research and laboratory expense).

5. Installation Costs: Labor and materials for site preparation, floor or foundation work, utilities (air, water, electricity) and set-up costs.

6. Special Tooling Costs: Labor and materials for special tooling (end effectors), interface devices between controller and tooling, fabrication of part positioners, fixtures and tool controllers.

7. Direct Labor Savings: Net direct labor savings realized from converting to the proposed method (compares costs of direct labor, benefits, allowances, shift premiums, etc., and may include overhead costs to simplify calculations).

8. Indirect Labor Savings: Net indirect labor savings realized from converting to the proposed method (maintenance, repair, and other related labor support costs).

9. Maintenance Savings: Estimate of net maintenance savings to be realized from conversion to the proposed method (includes maintenance supplies, replacement parts, spare parts, lubricants, service contract charges, etc.).

10. Other Costs: Increased or additional costs of the proposed method over the current method for supplies, utilities, training, etc.

11. Other Savings: Savings or cost reductions of the proposed method compared to the current method (includes material savings, i.e., reduced scrap, and reduced downtime).

5.8.1. Methods of Economic Analysis:

The different methods of economic analysis for analysing investments are:

★ Payback method.

★ EUAC − Equivalent Uniform Annual Cost method.

★ ROI − Return on Investment method.

5.8.1.1. Payback Method:

The duration or time period taken for the net accumulated cash flow to equal the initial investment in the development of a robot is called the payback period. If the net annual cash flows are identical, then

Payback period (n) = Investment Cost (IC) / Net Annual Cash Flow (NACF)

If the cash flows are not equal every year, then we have

0 = − (IC) + ∑ (NACF)_i, where the sum runs from i = 1 to n

Here, i is used to identify the year. Also costs are treated as negative
values while revenues or savings are treated as positive values.
The payback period (PBP) is the traditional method of capital budgeting. It is the simplest and perhaps the most widely used quantitative method for appraising capital expenditure decisions.
For example, a firm requires an initial cash outflow of Rs. 20,000, and the annual cash inflows for 5 years are Rs. 6000, Rs. 8000, Rs. 5000, Rs. 4000 and Rs. 4000 respectively. Calculate the payback period. When we cumulate the cash flows for the first three years, Rs. 19,000 is recovered. In the fourth year a cash flow of Rs. 4000 is generated by the project, but we need to recover only Rs. 1000, so the time required for recovering Rs. 1000 will be (Rs. 1000/Rs. 4000) × 12 months = 3 months. Thus, the PBP is 3 years and 3 months (3.25 years).
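The cumulative cash-flow procedure above is easy to mechanise. The sketch below (Python; the function name is our own) uses the same convention as the worked example: the final partial year is found by linear interpolation, assuming the cash flow arrives uniformly within that year.

```python
def payback_period(investment, cash_flows):
    """Years needed for cumulative net cash inflows to recover the investment.

    The final partial year is found by linear interpolation, assuming the
    cash flow arrives uniformly within that year.
    """
    cumulative = 0.0
    for year, flow in enumerate(cash_flows, start=1):
        if cumulative + flow >= investment:
            # Fraction of this year needed to recover the remainder.
            return (year - 1) + (investment - cumulative) / flow
        cumulative += flow
    raise ValueError("investment is never recovered by the given cash flows")

# Worked example from the text: Rs. 20,000 outflow, five annual inflows.
print(payback_period(20000, [6000, 8000, 5000, 4000, 4000]))  # 3.25 years
```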
5.8.1.2. EUAC Method:

The equivalent uniform annual cost method is used to convert the total cash flows and investments into their equivalent uniform annual costs over the expected service life of the robot project.
A company may not know what effective interest rate to use in an economic analysis. In such a case, the company must first select the Minimum Attractive Rate of Return (MARR), which is used as the criterion to decide whether a project should be funded. The elements that contribute to determining the MARR include:

★ Amount, source and cost of money available.


★ Number and purpose of good projects available.

★ Perceived risk of investment opportunities.


★ Type of organisation.
Interest factors at the MARR are used to make the conversions. Then the values for each of the various investments and cash flows are summed up. If the resulting EUAC sum is greater than zero, the project qualifies, i.e. it is viable; otherwise it is considered unattractive.

Example:

Total investment cost = 1,10,000


Total operating cost = 20,000
Anticipated annual revenues = 70,000
Service life = 5 years
MARR = 30%

The initial investment cost must be converted to its equivalent uniform annual cost value using the capital recovery factor from the table of interest factors. The sum of the annual cash flows would be as follows.

EUAC = − 1,10,000 (A ⁄ P, 30%, 5) + 70,000 − 20,000

= − 1,10,000 (0.4106) + 50,000

= + 4834

Since the uniform annual cost value is positive, the robot project would
be attractive.
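As a sketch (Python; function names are our own), the capital recovery factor (A/P, i, n) = i(1 + i)^n / ((1 + i)^n − 1) and the EUAC for this example can be computed directly rather than read from the interest tables:

```python
def capital_recovery(i, n):
    """(A/P, i, n): converts a present amount into an equivalent uniform annual cost."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def euac(investment, annual_revenue, annual_cost, i, n):
    """Equivalent uniform annual worth; a positive value means the project is attractive."""
    return -investment * capital_recovery(i, n) + annual_revenue - annual_cost

# Data from the example: investment 1,10,000; revenues 70,000; costs 20,000; MARR 30%; 5 years.
print(round(capital_recovery(0.30, 5), 4))         # 0.4106, matching the i = 30% table
print(round(euac(110000, 70000, 20000, 0.30, 5)))  # about +4836 (+4834 when the rounded factor is used)
```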

5.8.1.3. Return On Investment Method:

The Return On Investment (ROI) method is used to determine the return ratio, or rate of return, based on the anticipated expenditures and revenues. The rate of return is the effective annual interest rate that makes the present worth of the investment zero. Alternatively, it is the effective annual interest rate that makes the benefits and costs equal.

Once the rate of return for an investment is known, it can be compared with the MARR to decide whether the investment is viable.

Example:

The data from the previous example is used.

The determination of the rate of return involves setting up an EUAC equation and equating it to zero.

EUAC = − 1,10,000 (A ⁄ P, i, 5) + 70,000 − 20,000 = 0

⇒ (A ⁄ P, i, 5) = 50,000 ⁄ 1,10,000 = 0.4545

Now, from the interest factor tables, for n = 5 years we have:

when i = 30%, (A ⁄ P, 30%, 5) = 0.4106

i = 35%, (A ⁄ P, 35%, 5) = 0.4505

Since the target value 0.4545 slightly exceeds (A ⁄ P, 35%, 5) = 0.4505, the rate of return lies just above 35 percent. Interpolating between the 35% and 40% table values (0.4505 and 0.4914), the rate of return is found to be approximately i = 35.5 percent.
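Because (A/P, i, n) increases monotonically with i, the rate of return can also be found numerically instead of by table interpolation; since the target 0.4545 exceeds (A/P, 35%, 5) = 0.4505, the root must lie just above 35%. A sketch (Python; names are our own) using bisection:

```python
def capital_recovery(i, n):
    """(A/P, i, n) = i(1 + i)^n / ((1 + i)^n - 1)."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def rate_of_return(investment, net_annual_benefit, n, lo=0.01, hi=1.0, tol=1e-8):
    """Solve (A/P, i, n) = net_annual_benefit / investment for i by bisection."""
    target = net_annual_benefit / investment
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if capital_recovery(mid, n) < target:
            lo = mid  # A/P too small: the rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2

# Data from the example: target (A/P, i, 5) = 50,000 / 1,10,000 = 0.4545
i = rate_of_return(110000, 50000, 5)
print(round(100 * i, 1))  # about 35.5 percent
```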

*********

APPENDIX
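All of the factors tabulated in this appendix follow from the standard compound-interest formulas, so any entry can be regenerated or spot-checked programmatically. A sketch (Python; the function name and dictionary keys are our own):

```python
def factors(i, n):
    """The seven standard annual-compounding interest factors for rate i and n years."""
    fp = (1 + i) ** n             # single-payment compound amount (F/P)
    fa = (fp - 1) / i             # uniform-series compound amount (F/A)
    ag = 1 / i - n / (fp - 1)     # arithmetic gradient to uniform series (A/G)
    return {"F/P": fp, "P/F": 1 / fp, "F/A": fa, "A/F": 1 / fa,
            "P/A": fa / fp, "A/P": fp / fa, "A/G": ag}

# Spot-check against the i = 20%, n = 5 row of the first table:
f = factors(0.20, 5)
print(round(f["F/P"], 3), round(f["A/P"], 4), round(f["A/G"], 4))  # 2.488 0.3344 1.6405
```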
Interest Table for Annual Compounding with i = 20%
n F ⁄ P, i, n P ⁄ F, i, n F ⁄ A, i, n A ⁄ F, i, n P ⁄ A, i, n A ⁄ P, i, n A ⁄ G, i, n
1 1.200 0.8333 1.000 1.0000 0.8333 1.2000 0.0000
2 1.440 0.6944 2.200 0.4545 1.5278 0.6545 0.4545
3 1.728 0.5787 3.640 0.2747 2.1065 0.4747 0.8791
4 2.074 0.4823 5.368 0.1863 2.5887 0.3863 1.2742
5 2.488 0.4019 7.442 0.1344 2.9906 0.3344 1.6405
6 2.986 0.3349 9.930 0.1007 3.3255 0.3007 1.9788
7 3.583 0.2791 12.916 0.0774 3.6046 0.2774 2.2902
8 4.300 0.2326 16.499 0.0606 3.8372 0.2606 2.5756
9 5.160 0.1938 20.799 0.0481 4.0310 0.2481 2.8364
10 6.192 0.1615 25.959 0.0385 4.1925 0.2385 3.0739
11 7.430 0.1346 32.150 0.0311 4.3271 0.2311 3.2893
12 8.916 0.1122 39.581 0.0253 4.4392 0.2253 3.4841
13 10.699 0.0935 48.497 0.0206 4.5327 0.2206 3.6597
14 12.839 0.0779 59.196 0.0169 4.6106 0.2169 3.8175
15 15.407 0.0649 72.035 0.0139 4.6755 0.2139 3.9588
16 18.488 0.0541 87.442 0.0114 4.7296 0.2114 4.0851
17 22.186 0.0451 105.931 0.0094 4.7746 0.2094 4.1976
18 26.623 0.0376 128.117 0.0078 4.8122 0.2078 4.2975
19 31.948 0.0313 154.740 0.0065 4.8435 0.2065 4.3861
20 38.338 0.0261 186.688 0.0054 4.8696 0.2054 4.4643
21 46.005 0.0217 225.026 0.0044 4.8913 0.2044 4.5334
22 55.206 0.0181 271.031 0.0037 4.9094 0.2037 4.5941
23 66.247 0.0151 326.237 0.0031 4.9245 0.2031 4.6475
24 79.497 0.0126 392.484 0.0025 4.9371 0.2025 4.6943
25 95.396 0.0105 471.981 0.0021 4.9476 0.2021 4.7352
26 114.475 0.0087 567.377 0.0018 4.9563 0.2018 4.7709
27 137.371 0.0073 681.853 0.0015 4.9636 0.2015 4.8020
28 164.845 0.0061 819.223 0.0012 4.9697 0.2012 4.8291
29 197.814 0.0051 984.068 0.0010 4.9747 0.2010 4.8527
30 237.376 0.0042 1,181.882 0.0008 4.9789 0.2008 4.8731
31 284.852 0.0035 1,419.258 0.0007 4.9824 0.2007 4.8908
32 341.822 0.0029 1,704.109 0.0006 4.9854 0.2006 4.9061
33 410.186 0.0024 2,045.931 0.0005 4.9878 0.2005 4.9194
34 492.224 0.0020 2,456.118 0.0004 4.9898 0.2004 4.9308
35 590.668 0.0017 2,948.341 0.0003 4.9915 0.2003 4.9406
36 708.802 0.0014 3,539.009 0.0003 4.9929 0.2003 4.9491
37 850.562 0.0012 4,247.811 0.0002 4.9941 0.2002 4.9564
38 1,020.675 0.0010 5,098.373 0.0002 4.9951 0.2002 4.9627
39 1,224.810 0.0008 6,119.048 0.0002 4.9959 0.2002 4.9681
40 1,469.772 0.0007 7,343.858 0.0001 4.9966 0.2001 4.9728
41 1,763.726 0.0006 8,813.629 0.0001 4.9972 0.2001 4.9767
42 2,116.471 0.0005 10,577.355 0.0001 4.9976 0.2001 4.9801
43 2,539.765 0.0004 12,693.826 0.0001 4.9980 0.2001 4.9831
44 3,047.718 0.0003 15,233.592 0.0001 4.9984 0.2001 4.9856
45 3,657.262 0.0003 18,281.310 0.0001 4.9986 0.2001 4.9877
46 4,388.714 0.0002 21,938.572 0.0000 4.9989 0.2000 4.9895
47 5,266.457 0.0002 26,327.286 0.0000 4.9991 0.2000 4.9911
48 6,319.749 0.0002 31,593.744 0.0000 4.9992 0.2000 4.9924
49 7,583.698 0.0001 37,913.492 0.0000 4.9993 0.2000 4.9935
50 9,100.438 0.0001 45,497.191 0.0000 4.9995 0.2000 4.9945

Interest Table for Annual Compounding with i = 25%


n F ⁄ P, i, n P ⁄ F, i, n F ⁄ A, i, n A ⁄ F, i, n P ⁄ A, i, n A ⁄ P, i, n A ⁄ G, i, n
1 1.250 0.8000 1.000 1.0000 0.8000 1.2500 0.0000
2 1.563 0.6400 2.250 0.4444 1.4400 0.6944 0.4444
3 1.953 0.5120 3.813 0.2623 1.9520 0.5123 0.8525
4 2.441 0.4096 5.766 0.1734 2.3616 0.4234 1.2249
5 3.052 0.3277 8.207 0.1218 2.6893 0.3718 1.5631
6 3.815 0.2621 11.259 0.0888 2.9514 0.3388 1.8683
7 4.768 0.2097 15.073 0.0663 3.1611 0.3163 2.1424
8 5.960 0.1678 19.842 0.0504 3.3289 0.3004 2.3872
9 7.451 0.1342 25.802 0.0388 3.4631 0.2888 2.6048
10 9.313 0.1074 33.253 0.0301 3.5705 0.2801 2.7971
11 11.642 0.0859 42.566 0.0235 3.6564 0.2735 2.9663
12 14.552 0.0687 54.208 0.0184 3.7251 0.2684 3.1145
13 18.190 0.0550 68.760 0.0145 3.7801 0.2645 3.2437
14 22.737 0.0440 86.949 0.0115 3.8241 0.2615 3.3559
15 28.422 0.0352 109.687 0.0091 3.8593 0.2591 3.4530
16 35.527 0.0281 138.109 0.0072 3.8874 0.2572 3.5366
17 44.409 0.0225 173.636 0.0058 3.9099 0.2558 3.6084
18 55.511 0.0180 218.045 0.0046 3.9279 0.2546 3.6698
19 69.389 0.0144 273.556 0.0037 3.9424 0.2537 3.7222
20 86.736 0.0115 342.945 0.0029 3.9539 0.2529 3.7667
21 108.420 0.0092 429.681 0.0023 3.9631 0.2523 3.8045
22 135.525 0.0074 538.101 0.0019 3.9705 0.2519 3.8365
23 169.407 0.0059 673.626 0.0015 3.9764 0.2515 3.8634
24 211.758 0.0047 843.033 0.0012 3.9811 0.2512 3.8861
25 264.698 0.0038 1,054.791 0.0009 3.9849 0.2509 3.9052
26 330.872 0.0030 1,319.489 0.0008 3.9879 0.2508 3.9212
27 413.590 0.0024 1,650.361 0.0006 3.9903 0.2506 3.9346
28 516.988 0.0019 2,063.952 0.0005 3.9923 0.2505 3.9457
29 646.235 0.0015 2,580.939 0.0004 3.9938 0.2504 3.9551
30 807.794 0.0012 3,227.174 0.0003 3.9950 0.2503 3.9628
31 1,009.742 0.0010 4,034.968 0.0002 3.9960 0.2502 3.9693
32 1,262.177 0.0008 5,044.710 0.0002 3.9968 0.2502 3.9746
33 1,577.722 0.0006 6,306.887 0.0002 3.9975 0.2502 3.9791
34 1,972.152 0.0005 7,884.609 0.0001 3.9980 0.2501 3.9828
35 2,465.190 0.0004 9,856.761 0.0001 3.9984 0.2501 3.9858
36 3,081.488 0.0003 12,321.952 0.0001 3.9987 0.2501 3.9883
37 3,851.860 0.0003 15,403.440 0.0001 3.9990 0.2501 3.9904
38 4,814.825 0.0002 19,255.299 0.0001 3.9992 0.2501 3.9921
39 6,018.531 0.0002 24,070.124 0.0000 3.9993 0.2500 3.9935
40 7,523.164 0.0001 30,088.655 0.0000 3.9995 0.2500 3.9947
41 9,403.955 0.0001 37,611.819 0.0000 3.9996 0.2500 3.9956
42 11,754.944 0.0001 47,015.774 0.0000 3.9997 0.2500 3.9964
43 14,693.679 0.0001 58,770.718 0.0000 3.9997 0.2500 3.9971
44 18,367.099 0.0001 73,464.397 0.0000 3.9998 0.2500 3.9976
45 22,958.874 0.0000 91,831.496 0.0000 3.9998 0.2500 3.9980
46 28,698.593 0.0000 114,790.370 0.0000 3.9999 0.2500 3.9984
47 35,873.241 0.0000 143,488.963 0.0000 3.9999 0.2500 3.9987
48 44,841.551 0.0000 179,362.203 0.0000 3.9999 0.2500 3.9989
49 56,051.939 0.0000 224,203.754 0.0000 3.9999 0.2500 3.9991
50 70,064.923 0.0000 280,255.693 0.0000 3.9999 0.2500 3.9993

Interest Table for Annual Compounding with i = 30%


n F ⁄ P, i, n P ⁄ F, i, n F ⁄ A, i, n A ⁄ F, i, n P ⁄ A, i, n A ⁄ P, i, n A ⁄ G, i, n
1 1.300 0.7692 1.000 1.0000 0.7692 1.3000 0.0000
2 1.690 0.5917 2.300 0.4348 1.3609 0.7348 0.4348
3 2.197 0.4552 3.990 0.2506 1.8161 0.5506 0.8271
4 2.856 0.3501 6.187 0.1616 2.1662 0.4616 1.1783
5 3.713 0.2693 9.043 0.1106 2.4356 0.4106 1.4903
6 4.827 0.2072 12.756 0.0784 2.6427 0.3784 1.7654
7 6.275 0.1594 17.583 0.0569 2.8021 0.3569 2.0063
8 8.157 0.1226 23.858 0.0419 2.9247 0.3419 2.2156
9 10.604 0.0943 32.015 0.0312 3.0190 0.3312 2.3963
10 13.786 0.0725 42.619 0.0235 3.0915 0.3235 2.5512
11 17.922 0.0558 56.405 0.0177 3.1473 0.3177 2.6833
12 23.298 0.0429 74.327 0.0135 3.1903 0.3135 2.7952
13 30.288 0.0330 97.625 0.0102 3.2233 0.3102 2.8895
14 39.374 0.0254 127.913 0.0078 3.2487 0.3078 2.9685
15 51.186 0.0195 167.286 0.0060 3.2682 0.3060 3.0344
16 66.542 0.0150 218.472 0.0046 3.2832 0.3046 3.0892
17 86.504 0.0116 285.014 0.0035 3.2948 0.3035 3.1345
18 112.455 0.0089 371.518 0.0027 3.3037 0.3027 3.1718
19 146.192 0.0068 483.973 0.0021 3.3105 0.3021 3.2025
20 190.050 0.0053 630.165 0.0016 3.3158 0.3016 3.2275
21 247.065 0.0040 820.215 0.0012 3.3198 0.3012 3.2480
22 321.184 0.0031 1,067.280 0.0009 3.3230 0.3009 3.2646
23 417.539 0.0024 1,388.464 0.0007 3.3254 0.3007 3.2781
24 542.801 0.0018 1,806.003 0.0006 3.3272 0.3006 3.2890
25 705.641 0.0014 2,348.803 0.0004 3.3286 0.3004 3.2979
26 917.333 0.0011 3,054.444 0.0003 3.3297 0.3003 3.3050
27 1,192.533 0.0008 3,971.778 0.0003 3.3305 0.3003 3.3107
28 1,550.293 0.0006 5,164.311 0.0002 3.3312 0.3002 3.3153
29 2,015.381 0.0005 6,714.604 0.0001 3.3317 0.3001 3.3189
30 2,619.996 0.0004 8,729.985 0.0001 3.3321 0.3001 3.3219
31 3,405.994 0.0003 11,349.981 0.0001 3.3324 0.3001 3.3242
32 4,427.793 0.0002 14,755.975 0.0001 3.3326 0.3001 3.3261
33 5,756.130 0.0002 19,183.768 0.0001 3.3328 0.3001 3.3276
34 7,482.970 0.0001 24,939.899 0.0000 3.3329 0.3000 3.3288
35 9,727.860 0.0001 32,422.868 0.0000 3.3330 0.3000 3.3297
36 12,646.219 0.0001 42,150.729 0.0000 3.3331 0.3000 3.3305
37 16,440.084 0.0001 54,796.947 0.0000 3.3331 0.3000 3.3311
38 21,372.109 0.0000 71,237.031 0.0000 3.3332 0.3000 3.3316
39 27,783.742 0.0000 92,609.141 0.0000 3.3332 0.3000 3.3319
40 36,118.865 0.0000 120,392.883 0.0000 3.3332 0.3000 3.3322
41 46,954.524 0.0000 156,511.748 0.0000 3.3333 0.3000 3.3325
42 61,040.882 0.0000 203,466.272 0.0000 3.3333 0.3000 3.3326
43 79,353.146 0.0000 264,507.153 0.0000 3.3333 0.3000 3.3328
44 103,159.090 0.0000 343,860.299 0.0000 3.3333 0.3000 3.3329
45 134,106.817 0.0000 447,019.389 0.0000 3.3333 0.3000 3.3330
46 174,338.862 0.0000 581,126.206 0.0000 3.3333 0.3000 3.3331
47 226,640.520 0.0000 755,465.067 0.0000 3.3333 0.3000 3.3331
48 294,632.676 0.0000 982,105.588 0.0000 3.3333 0.3000 3.3332
49 383,022.479 0.0000 1,276,738.264 0.0000 3.3333 0.3000 3.3332
50 497,929.223 0.0000 1,659,760.743 0.0000 3.3333 0.3000 3.3332

Interest Table for Annual Compounding with i = 35%


n F ⁄ P, i, n P ⁄ F, i, n F ⁄ A, i, n A ⁄ F, i, n P ⁄ A, i, n A ⁄ P, i, n A ⁄ G, i, n
1 1.350 0.7407 1.000 1.0000 0.7407 1.3500 0.0000
2 1.823 0.5487 2.350 0.4255 1.2894 0.7755 0.4255
3 2.460 0.4064 4.173 0.2397 1.6959 0.5897 0.8029
4 3.322 0.3011 6.633 0.1508 1.9969 0.5008 1.1341
5 4.484 0.2230 9.954 0.1005 2.2200 0.4505 1.4220
6 6.053 0.1652 14.438 0.0693 2.3852 0.4193 1.6698
7 8.172 0.1224 20.492 0.0488 2.5075 0.3988 1.8811
8 11.032 0.0906 28.664 0.0349 2.5982 0.3849 2.0597
9 14.894 0.0671 39.696 0.0252 2.6653 0.3752 2.2094
10 20.107 0.0497 54.590 0.0183 2.7150 0.3683 2.3338
11 27.144 0.0368 74.697 0.0134 2.7519 0.3634 2.4364
12 36.644 0.0273 101.841 0.0098 2.7792 0.3598 2.5205
13 49.470 0.0202 138.485 0.0072 2.7994 0.3572 2.5889
14 66.784 0.0150 187.954 0.0053 2.8144 0.3553 2.6443
15 90.158 0.0111 254.738 0.0039 2.8255 0.3539 2.6889
16 121.714 0.0082 344.897 0.0029 2.8337 0.3529 2.7246
17 164.314 0.0061 466.611 0.0021 2.8398 0.3521 2.7530
18 221.824 0.0045 630.925 0.0016 2.8443 0.3516 2.7756
19 299.462 0.0033 852.748 0.0012 2.8476 0.3512 2.7935
20 404.274 0.0025 1,152.210 0.0009 2.8501 0.3509 2.8075
21 545.769 0.0018 1,556.484 0.0006 2.8519 0.3506 2.8186
22 736.789 0.0014 2,102.253 0.0005 2.8533 0.3505 2.8272
23 994.665 0.0010 2,839.042 0.0004 2.8543 0.3504 2.8340
24 1,342.797 0.0007 3,833.706 0.0003 2.8550 0.3503 2.8393
25 1,812.776 0.0006 5,176.504 0.0002 2.8556 0.3502 2.8433
26 2,447.248 0.0004 6,989.280 0.0001 2.8560 0.3501 2.8465
27 3,303.785 0.0003 9,436.528 0.0001 2.8563 0.3501 2.8490
28 4,460.109 0.0002 12,740.313 0.0001 2.8565 0.3501 2.8509
29 6,021.148 0.0002 17,200.422 0.0001 2.8567 0.3501 2.8523
30 8,128.550 0.0001 23,221.570 0.0000 2.8568 0.3500 2.8535
31 10,973.542 0.0001 31,350.120 0.0000 2.8569 0.3500 2.8543
32 14,814.281 0.0001 42,323.661 0.0000 2.8569 0.3500 2.8550
33 19,999.280 0.0001 57,137.943 0.0000 2.8570 0.3500 2.8555
34 26,999.028 0.0000 77,137.223 0.0000 2.8570 0.3500 2.8559
35 36,448.688 0.0000 104,136.251 0.0000 2.8571 0.3500 2.8562
36 49,205.728 0.0000 140,584.939 0.0000 2.8571 0.3500 2.8564
37 66,427.733 0.0000 189,790.667 0.0000 2.8571 0.3500 2.8566
38 89,677.440 0.0000 256,218.400 0.0000 2.8571 0.3500 2.8567
39 121,064.544 0.0000 345,895.841 0.0000 2.8571 0.3500 2.8568
40 163,437.135 0.0000 466,960.385 0.0000 2.8571 0.3500 2.8569
41 220,640.132 0.0000 630,397.519 0.0000 2.8571 0.3500 2.8570
42 297,864.178 0.0000 851,037.651 0.0000 2.8571 0.3500 2.8570
43 402,116.640 0.0000 1,148,901.829 0.0000 2.8571 0.3500 2.8570
44 542,857.464 0.0000 1,551,018.469 0.0000 2.8571 0.3500 2.8571
45 732,857.577 0.0000 2,093,875.934 0.0000 2.8571 0.3500 2.8571
46 989,357.729 0.0000 2,826,733.511 0.0000 2.8571 0.3500 2.8571
47 1,335,632.934 0.0000 3,816,091.239 0.0000 2.8571 0.3500 2.8571
48 1,803,104.461 0.0000 5,151,724.173 0.0000 2.8571 0.3500 2.8571
49 2,434,191.022 0.0000 6,954,828.634 0.0000 2.8571 0.3500 2.8571
50 3,286,157.879 0.0000 9,389,019.656 0.0000 2.8571 0.3500 2.8571

Interest Table for Annual Compounding with i = 40%


n F ⁄ P, i, n P ⁄ F, i, n F ⁄ A, i, n A ⁄ F, i, n P ⁄ A, i, n A ⁄ P, i, n A ⁄ G, i, n
1 1.400 0.7143 1.000 1.0000 0.7143 1.4000 0.0000
2 1.960 0.5102 2.400 0.4167 1.2245 0.8167 0.4167
3 2.744 0.3644 4.360 0.2294 1.5889 0.6294 0.7798
4 3.842 0.2603 7.104 0.1408 1.8492 0.5408 1.0923
5 5.378 0.1859 10.946 0.0914 2.0352 0.4914 1.3580
6 7.530 0.1328 16.324 0.0613 2.1680 0.4613 1.5811
7 10.541 0.0949 23.853 0.0419 2.2628 0.4419 1.7664
8 14.758 0.0678 34.395 0.0291 2.3306 0.4291 1.9185
9 20.661 0.0484 49.153 0.0203 2.3790 0.4203 2.0422
10 28.925 0.0346 69.814 0.0143 2.4136 0.4143 2.1419
11 40.496 0.0247 98.739 0.0101 2.4383 0.4101 2.2215
12 56.694 0.0176 139.235 0.0072 2.4559 0.4072 2.2845
13 79.371 0.0126 195.929 0.0051 2.4685 0.4051 2.3341
14 111.120 0.0090 275.300 0.0036 2.4775 0.4036 2.3729
15 155.568 0.0064 386.420 0.0026 2.4839 0.4026 2.4030
16 217.795 0.0046 541.988 0.0018 2.4885 0.4018 2.4262
17 304.913 0.0033 759.784 0.0013 2.4918 0.4013 2.4441
18 426.879 0.0023 1,064.697 0.0009 2.4941 0.4009 2.4577
19 597.630 0.0017 1,491.576 0.0007 2.4958 0.4007 2.4682
20 836.683 0.0012 2,089.206 0.0005 2.4970 0.4005 2.4761
21 1,171.356 0.0009 2,925.889 0.0003 2.4979 0.4003 2.4821
22 1,639.898 0.0006 4,097.245 0.0002 2.4985 0.4002 2.4866
23 2,295.857 0.0004 5,737.142 0.0002 2.4989 0.4002 2.4900
24 3,214.200 0.0003 8,032.999 0.0001 2.4992 0.4001 2.4925
25 4,499.880 0.0002 11,247.199 0.0001 2.4994 0.4001 2.4944
26 6,299.831 0.0002 15,747.079 0.0001 2.4996 0.4001 2.4959
27 8,819.764 0.0001 22,046.910 0.0000 2.4997 0.4000 2.4969
28 12,347.670 0.0001 30,866.674 0.0000 2.4998 0.4000 2.4977
29 17,286.737 0.0001 43,214.343 0.0000 2.4999 0.4000 2.4983
30 24,201.432 0.0000 60,501.081 0.0000 2.4999 0.4000 2.4988
31 33,882.005 0.0000 84,702.513 0.0000 2.4999 0.4000 2.4991
32 47,434.807 0.0000 118,584.519 0.0000 2.4999 0.4000 2.4993
33 66,408.730 0.0000 166,019.326 0.0000 2.5000 0.4000 2.4995
34 92,972.223 0.0000 232,428.056 0.0000 2.5000 0.4000 2.4996
35 130,161.112 0.0000 325,400.279 0.0000 2.5000 0.4000 2.4997
36 182,225.556 0.0000 455,561.390 0.0000 2.5000 0.4000 2.4998
37 255,115.779 0.0000 637,786.947 0.0000 2.5000 0.4000 2.4999
38 357,162.090 0.0000 892,902.725 0.0000 2.5000 0.4000 2.4999
39 500,026.926 0.0000 1,250,064.815 0.0000 2.5000 0.4000 2.4999
40 700,037.697 0.0000 1,750,091.741 0.0000 2.5000 0.4000 2.4999
41 980,052.775 0.0000 2,450,129.438 0.0000 2.5000 0.4000 2.5000
42 1,372,073.885 0.0000 3,430,182.213 0.0000 2.5000 0.4000 2.5000
43 1,920,903.439 0.0000 4,802,256.099 0.0000 2.5000 0.4000 2.5000
44 2,689,264.815 0.0000 6,723,159.538 0.0000 2.5000 0.4000 2.5000
45 3,764,970.741 0.0000 9,412,424.353 0.0000 2.5000 0.4000 2.5000
46 5,270,959.038 0.0000 13,177,395.095 0.0000 2.5000 0.4000 2.5000
47 7,379,342.653 0.0000 18,448,354.132 0.0000 2.5000 0.4000 2.5000
48 10,331,079.714 0.0000 25,827,696.785 0.0000 2.5000 0.4000 2.5000
49 14,463,511.600 0.0000 36,158,776.500 0.0000 2.5000 0.4000 2.5000
50 20,248,916.240 0.0000 50,622,288.099 0.0000 2.5000 0.4000 2.5000

Interest Table for Annual Compounding with i = 50%


n F ⁄ P, i, n P ⁄ F, i, n F ⁄ A, i, n A ⁄ F, i, n P ⁄ A, i, n A ⁄ P, i, n A ⁄ G, i, n
1 1.500 0.6667 1.000 1.0000 0.6667 1.5000 0.0000
2 2.250 0.4444 2.500 0.4000 1.1111 0.9000 0.4000
3 3.375 0.2963 4.750 0.2105 1.4074 0.7105 0.7368
4 5.063 0.1975 8.125 0.1231 1.6049 0.6231 1.0154
5 7.594 0.1317 13.188 0.0758 1.7366 0.5758 1.2417
6 11.391 0.0878 20.781 0.0481 1.8244 0.5481 1.4226
7 17.086 0.0585 32.172 0.0311 1.8829 0.5311 1.5648
8 25.629 0.0390 49.258 0.0203 1.9220 0.5203 1.6752
9 38.443 0.0260 74.887 0.0134 1.9480 0.5134 1.7596
10 57.665 0.0173 113.330 0.0088 1.9653 0.5088 1.8235
11 86.498 0.0116 170.995 0.0058 1.9769 0.5058 1.8713
12 129.746 0.0077 257.493 0.0039 1.9846 0.5039 1.9068
13 194.620 0.0051 387.239 0.0026 1.9897 0.5026 1.9329
14 291.929 0.0034 581.859 0.0017 1.9931 0.5017 1.9519
15 437.894 0.0023 873.788 0.0011 1.9954 0.5011 1.9657
16 656.841 0.0015 1,311.682 0.0008 1.9970 0.5008 1.9756
17 985.261 0.0010 1,968.523 0.0005 1.9980 0.5005 1.9827
18 1,477.892 0.0007 2,953.784 0.0003 1.9986 0.5003 1.9878
19 2,216.838 0.0005 4,431.676 0.0002 1.9991 0.5002 1.9914
20 3,325.257 0.0003 6,648.513 0.0002 1.9994 0.5002 1.9940
21 4,987.885 0.0002 9,973.770 0.0001 1.9996 0.5001 1.9958
22 7,481.828 0.0001 14,961.655 0.0001 1.9997 0.5001 1.9971
23 11,222.741 0.0001 22,443.483 0.0000 1.9998 0.5000 1.9980
24 16,834.112 0.0001 33,666.224 0.0000 1.9999 0.5000 1.9986
25 25,251.168 0.0000 50,500.337 0.0000 1.9999 0.5000 1.9990
26 37,876.752 0.0000 75,751.505 0.0000 1.9999 0.5000 1.9993
27 56,815.129 0.0000 113,628.257 0.0000 2.0000 0.5000 1.9995
28 85,222.693 0.0000 170,443.386 0.0000 2.0000 0.5000 1.9997
29 127,834.039 0.0000 255,666.079 0.0000 2.0000 0.5000 1.9998
30 191,751.059 0.0000 383,500.118 0.0000 2.0000 0.5000 1.9998
31 287,626.589 0.0000 575,251.178 0.0000 2.0000 0.5000 1.9999
32 431,439.883 0.0000 862,877.767 0.0000 2.0000 0.5000 1.9999
33 647,159.825 0.0000 1,294,317.650 0.0000 2.0000 0.5000 1.9999
34 970,739.737 0.0000 1,941,477.475 0.0000 2.0000 0.5000 2.0000
35 1,456,109.606 0.0000 2,912,217.212 0.0000 2.0000 0.5000 2.0000
36 2,184,164.409 0.0000 4,368,326.818 0.0000 2.0000 0.5000 2.0000
37 3,276,246.614 0.0000 6,552,491.227 0.0000 2.0000 0.5000 2.0000
38 4,914,369.920 0.0000 9,828,737.841 0.0000 2.0000 0.5000 2.0000
39 7,371,554.881 0.0000 14,743,107.761 0.0000 2.0000 0.5000 2.0000
40 11,057,332.321 0.0000 22,114,662.642 0.0000 2.0000 0.5000 2.0000
41 16,585,998.481 0.0000 33,171,994.963 0.0000 2.0000 0.5000 2.0000
42 24,878,997.722 0.0000 49,757,993.444 0.0000 2.0000 0.5000 2.0000
43 37,318,496.583 0.0000 74,636,991.166 0.0000 2.0000 0.5000 2.0000
44 55,977,744.875 0.0000 111,955,487.750 0.0000 2.0000 0.5000 2.0000
45 83,966,617.312 0.0000 167,933,232.624 0.0000 2.0000 0.5000 2.0000
46 125,949,925.968 0.0000 251,899,849.936 0.0000 2.0000 0.5000 2.0000
47 188,924,888.952 0.0000 377,849,775.905 0.0000 2.0000 0.5000 2.0000
48 283,387,333.428 0.0000 566,774,664.857 0.0000 2.0000 0.5000 2.0000
49 425,081,000.143 0.0000 850,161,998.285 0.0000 2.0000 0.5000 2.0000
50 637,621,500.214 0.0000 1,275,242,998.428 0.0000 2.0000 0.5000 2.0000
CHAPTER – 6

ROBOT PROGRAMMING

Lead through Programming, Robot programming Languages − VAL


Programming − Motion Commands, Sensor Commands, End Effector
Commands and Simple Programs.

6.1. INTRODUCTION:

A robot must be programmed to teach it the particular motion sequence and other actions that must be performed in order to accomplish its task. A robot program can be considered as a path in space that is to be followed by the manipulator, combined with peripheral actions that support the work cycle. It can also be defined as a sequence of joint coordinate positions.

In other words, robot programming is the defining of desired motions so that the robot may perform them without human intervention.
6.2. METHODS OF ROBOT PROGRAMMING:

To program a robot, specific commands are entered into the robot’s controller memory. This is done in several ways, one of which is the manual programming method used for limited-sequence robots, i.e., robots that have short work cycles. Here, programming is done by setting limit switches and mechanical stops to control the end points of the motions. A sequencing device determines the order in which each joint is actuated to form the complete motion cycle.

After the rapid proliferation of microcomputers in industry, almost all industrial robots now have digital computers as their controllers. Here, three styles of programming are followed:

★ Leadthrough programming.

★ Textual (or) computer like programming languages.

★ Off-line programming.

6.2.1. Leadthrough Programming:

Here the manipulator is driven through the various motions needed to perform a given task, and the motions are recorded into the robot’s computer memory for subsequent playback. Based on the teach procedure, leadthrough programming can be divided into two types:

★ Powered leadthrough, and

★ Manual leadthrough.

In powered leadthrough, the user guides the robot through interaction with a teach pendant. The teach pendant is a hand-held control box that allows control of each manipulator joint or of each Cartesian degree of freedom. It consists of toggle switches, dials and buttons which the programmer activates in a coordinated fashion to move the manipulator to the required positions in the workplace. The important components of a teach pendant are illustrated in Fig. 6.1.

Powered leadthrough is the most commonly used method today and is usually limited to point-to-point motions (like spot welding and machine loading and unloading) rather than continuous movement. This is due to the difficulty of using the teach pendant to regulate complex geometric motions in space.

The manual leadthrough method, or walkthrough method, requires the operator to physically move the manipulator through the motion sequence. It is convenient for continuous-path programming, where the motion cycle involves smooth, complex curvilinear movements of the robot arm. Spray painting and continuous arc welding are examples of this type of robot application.

In this method, since the programmer has to physically grasp the robot arm and end-effector, it could be difficult to move through the motion sequence in the case of large robots. So, a special programming device which has the same joint configuration as the actual robot is substituted for it. The closely spaced motion points are recorded into the controller memory, and during playback the path is recreated by controlling the actual robot arm through the same sequence of points.

Advantages of Leadthrough Programming:

★ No special programming skills or training required.

★ Easy to learn.

Disadvantages:

★ Difficult to achieve high accuracy and straight-line movements or other geometrically defined trajectories.

★ Difficult to edit unwanted operator moves.

★ Limited programming logic ability.

★ Difficult to synchronize with other machines or equipment in the workcell.

★ Large memory requirements.

★ Robot cannot be used in production while it is programmed.



6.2.2. Textual or Computer like Programming:

This method of robot programming involves the use of a programming language similar to a computer programming language. Apart from the various capabilities of a computer programming language, the textual language also includes statements specifically designed for robot control, such as motion control and input/output.

Motion-control commands are used to direct the robot to move its manipulator to some defined position in space, while input/output commands are used to control the receipt of signals from sensors and other devices and to initiate control signals to other pieces of equipment in the workcell.

These robot languages combine off-line and on-line programming, i.e. the program is written off-line with the textual language to define the logic and sequence, while the teach pendant is used on-line to define the specific point locations in the workspace.

The advantages of textual programming over leadthrough programming include:

★ Extended program logic.

★ Enhanced sensor capabilities.

★ Improved output capabilities for controlling external equipment.

★ Computations and data processing capabilities.

★ Communication with other computer systems.



6.2.3. Off-line Programming:

Here, programs can be developed without needing to use the robot, i.e. there is no need to physically locate the point positions in the workspace as required with textual and leadthrough programming. The robot program can be prepared at a remote computer terminal and downloaded to the robot controller for execution without interrupting production. This saves the production time lost to delays in teaching the robot a new task. Programs developed off-line can be tested and evaluated using simulation techniques.

The benefits of off-line programming are:

★ Higher utilization of the robot and the equipment with which it operates, as off-line programming can be done while the robot is still in production on the preceding job.

★ The sequence of operations and robot movements can be optimized or easily improved.

★ Existing CAD data can be incorporated.

★ Enables concurrent engineering and reduces production time.

★ Programs can be easily maintained and modified.


6.3. DEFINING A ROBOT PROGRAM:

The manipulator of a robot is made up of a sequence of link and joint combinations. The links are rigid members connecting the joints. The joints, sometimes also referred to as axes, are the movable components of the robot that cause relative motion between adjacent links.

The manipulator consists of two sections

★ An arm and body, and

★ A wrist.
An end-effector is attached to the wrist, and a robot program can be
considered as the path in space through which the end effector is to be moved
by the robot. The arm and body determine the general position of the end
effector in the robot’s work space while the wrist determines its orientation.
The robot is required to move its joints (axes) through various positions in
order to follow that path.

If we consider a point in space in the robot program as a position and orientation of the end effector, there is more than one possible set of joint coordinate values that the robot can use to reach that point [Refer Fig. 6.3]. Thus the specification of a point in space does not uniquely define the joint coordinates of the robot, whereas a given set of joint coordinates defines exactly one point in space. The robot program definition can therefore be refined as 'a sequence of joint coordinate positions'. This way, the position and orientation of the end effector at each point in the path are specified simultaneously.
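This one-to-many relationship between a point in space and the joint coordinates can be sketched for a planar two-link arm. The function below is purely illustrative (unit link lengths are an assumption, and the name `two_link_ik` is not part of any robot language):

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Both joint solutions (elbow-down and elbow-up) that place the
    wrist end of a planar two-link arm at the point (x, y)."""
    # Law of cosines gives the cosine of the elbow angle
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    s2 = math.sqrt(1.0 - c2 * c2)
    solutions = []
    for sign in (+1.0, -1.0):           # the two elbow configurations
        theta2 = math.atan2(sign * s2, c2)
        theta1 = math.atan2(y, x) - math.atan2(l2 * sign * s2, l1 + l2 * c2)
        solutions.append((theta1, theta2))
    return solutions

# One Cartesian point, two distinct sets of joint coordinates:
elbow_down, elbow_up = two_link_ik(1.2, 0.5)
```

Running the forward kinematics on either solution returns the same point (1.2, 0.5), which is exactly the ambiguity described above.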

6.4. METHOD OF DEFINING POSITION IN SPACE:

The different methods by which the programmer moves the manipulator to the required positions in the workspace are:

★ Joint mode.

★ World coordinate mode (or x − y − z method)

★ Tool coordinate mode.


In manual leadthrough, the programmer simply moves the arm through
the required path to create the program whereas in the powered leadthrough
the programmer uses a teach pendant to drive the manipulator. The teach
pendant is equipped with a set of toggle switches or contact buttons to operate
each joint in either of its two directions until the end effector has been
positioned to the desired point. Successive positioning of the robot arm in this
way defines a sequence of points. This method is referred to as the joint mode.

However, this way of coordinating the individual joints with the teach
pendant can be very tedious and time consuming. Therefore, to overcome the
difficulties, two alternate methods called the world coordinate mode and tool
coordinate mode are used for controlling movement of the entire manipulator
during programming, in addition to controls for individual joints. Both these
methods make use of a cartesian coordinate system, where the programmer
can move the robot's wrist end in straight line paths. For polar, cylindrical and jointed-arm robots, the controller must solve a set of mathematical equations to convert the rotational joint motions of the robot into the cartesian coordinate system. To the programmer, the end-effector is being moved in motions that are parallel to the x, y and z axes.

The world coordinate mode allows the wrist location to be defined with
origin at some location in the body of the robot. It is illustrated in Fig. 6.4.

Fig. 6.4

In the tool coordinate mode, the alignment of the axis system is defined
relative to the orientation of the wrist face plate, to which the end effector is
attached. Here the origin is located at some point on the wrist, the xy-plane
is oriented parallel to the wrist faceplate and the z-axis is perpendicular to the
faceplate pointing in the same direction as a tool or end-effector. The
programmer can orient the tool in the desired way and then control the robot
to make linear moves in directions parallel or perpendicular to the tool. Hence,
this method could be used to provide a driving motion of the tool.
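The frame arithmetic behind such a tool-coordinate move can be sketched as follows. This is a simplified illustration, assuming the wrist orientation is expressed as roll-pitch-yaw angles; it is not the syntax of any actual controller:

```python
import math

def tool_z_move(position, roll_pitch_yaw, distance):
    """Advance the tool tip by `distance` along its own z-axis.

    The wrist orientation is given as fixed-angle roll-pitch-yaw (radians).
    A real controller performs this kind of frame arithmetic internally
    whenever the tool coordinate mode is used."""
    r, p, y = roll_pitch_yaw
    # Third column of R = Rz(yaw) * Ry(pitch) * Rx(roll), i.e. the tool z-axis
    zx = math.cos(y) * math.sin(p) * math.cos(r) + math.sin(y) * math.sin(r)
    zy = math.sin(y) * math.sin(p) * math.cos(r) - math.cos(y) * math.sin(r)
    zz = math.cos(p) * math.cos(r)
    return (position[0] + distance * zx,
            position[1] + distance * zy,
            position[2] + distance * zz)
```

With the wrist level (all angles zero) the tool z-axis coincides with the world z-axis; pitching the wrist by 90° makes the same "drive the tool forward" command move the tip along the world x-axis instead.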

The two key reasons for defining points in a program are

★ To define a working position for the end effector (like picking up a


part or to perform a spot welding operation).

★ To avoid obstacles (like machines, conveyors and other equipment).



6.5. MOTION INTERPOLATION:

Interpolation is the method of constructing new data points within the range of a discrete set of known data points.

Consider programming a two-axis servo-controlled cartesian robot with eight addressable points for each axis. Then there would be 64 addressable points which can be used in any program. The workspace is illustrated in Fig. 6.5.

An addressable point is one of the available points that can be specified in the program so that the robot can be commanded to go to that point.

For example,

A program for the robot to start in the lower left corner and traverse the perimeter of a rectangle would be as follows (Refer Fig. 6.6):

1    1, 1
2    8, 1
3    8, 8
4    1, 8
5    1, 1

Now, if a step is removed from the program, say step 3, then the robot would execute step 4 by tracing a path along the diagonal line from point (8, 1) to point (1, 8) [Refer Fig. 6.7]. This process is referred to as interpolation.

The possible methods of interpolation include

★ Joint interpolation.
★ Straight line interpolation.
★ Circular interpolation.
★ Irregular smooth motions.

In joint interpolation, for the robot to move its wrist end between two points,
the controller determines how far each joint must move. Then it actuates each
of the joints simultaneously such that all the joints start and stop at the same
time.
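This simultaneous start-and-stop behaviour amounts to scaling each joint's total travel by the same time fraction at every step. A minimal sketch (illustrative function name, two joints measured in degrees):

```python
def joint_interpolate(start, goal, steps):
    """Joint interpolation: every joint moves a constant fraction of its
    own total travel at each step, so all joints start and stop together."""
    path = []
    for i in range(steps + 1):
        f = i / steps                       # same time fraction for all joints
        path.append(tuple(s + f * (g - s) for s, g in zip(start, goal)))
    return path

# Joint 1 travels 90 degrees while joint 2 travels only 30 degrees,
# yet both complete their motion in the same three steps:
path = joint_interpolate((0.0, 0.0), (90.0, 30.0), 3)
```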

In straight line interpolation, the robot controller calculates the sequence of addressable points in space through which the robot wrist end must pass to achieve a straight line path between two points.

A cartesian robot has only linear axes, and hence in its case joint interpolation and straight line interpolation are the same (i.e., both create a straight line approximation).

This is illustrated in the program below.

Step Move Comments

1 1, 1 User specified starting point

2 2, 2 Internally generated interpolation point

3 3, 2 Internally generated interpolation point

4 4, 3 Internally generated interpolation point

5 5, 3 Internally generated interpolation point

6 6, 4 Internally generated interpolation point

7 7, 4 User specified end point


The approximation would be better with a denser grid and much larger
number of addressable points.
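The internally generated points in the table above can be reproduced by snapping the ideal straight line to the nearest grid points. This is a sketch of one plausible rounding scheme (round-half-up), not the exact algorithm of any particular controller:

```python
import math

def line_points(x0, y0, x1, y1):
    """Snap the ideal straight line to the nearest addressable grid points,
    generating one point per unit step along the longer axis."""
    pts = []
    steps = max(abs(x1 - x0), abs(y1 - y0))
    for i in range(steps + 1):
        t = i / steps
        x = x0 + t * (x1 - x0)
        y = y0 + t * (y1 - y0)
        pts.append((math.floor(x + 0.5), math.floor(y + 0.5)))  # round half up
    return pts

# Reproduces the interpolation table for a move from (1, 1) to (7, 4):
pts = line_points(1, 1, 7, 4)
```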

In joint interpolation, usually less total motion energy is required to make the move than in straight line interpolation, so the move can be made in slightly less time. For most robots, joint interpolation is the default procedure used unless the programmer specifies some other type of interpolation.

For robots with rotational joints or with a combination of rotational and linear joints, straight line interpolation produces a path that is different from joint interpolation.

Consider a robot having one rotational axis and one linear axis, with each axis having eight addressable points; this gives a grid of 64 addressable points as shown in Fig. 6.9.

Here, from the viewpoint of Euclidean geometry, the moves created during interpolation are of different lengths. In Fig. 6.9 this is shown with the move from (1, 1) to (3, 2) and the move from (1, 6) to (3, 7).

The incremental moves executed by the robot are a combination of rotational moves (along axis 1) and linear moves (along axis 2).
In circular interpolation, the programmer is required to define a circle
in the robot’s workspace. This is usually done by specifying three points that
lie along the circle. Then the controller selects a series of addressable points
that lie closest to the defined circle to construct a linear approximation of the
circle. The robot movements consist of short straight line segments which look
very much like a real circle if the grid work of addressable points is dense
enough.
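The three-point construction can be sketched numerically: first find the circle through the taught points, then emit a polygon of short segments. This is illustrative only; a real controller would additionally snap the segment endpoints to addressable points:

```python
import math

def circle_from_points(p1, p2, p3):
    """Centre and radius of the circle through three non-collinear points
    (the standard circumcentre formula)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return (ux, uy), math.hypot(x1 - ux, y1 - uy)

def approximate_circle(centre, r, segments):
    """Endpoints of short straight-line segments approximating the circle."""
    return [(centre[0] + r * math.cos(2 * math.pi * i / segments),
             centre[1] + r * math.sin(2 * math.pi * i / segments))
            for i in range(segments + 1)]

centre, r = circle_from_points((1, 0), (0, 1), (-1, 0))  # the unit circle
```

The denser the polygon (more segments), the closer the chain of straight moves looks to a true circle, just as a denser grid of addressable points improves the approximation.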
Irregular smooth motions refer to an interpolation process used in
manual lead through programming, when the movements involved typically
consist of combinations of smooth motion segments. The robot must follow
the sequence of closely spaced points that are defined during the programming
procedure. This is used in applications such as spray painting.

6.6. BASIC PROGRAMMING COMMANDS IN WORKCELL CONTROL (WAIT, SIGNAL AND DELAY COMMANDS):
Almost all industrial robots can be instructed to send signals or wait for signals during the execution of a program. These signals are often referred to as interlocks. One common form of interlock signal is used to actuate the robot's end-effector, such as a gripper, where the signal commands the gripper to open or close. These signals are usually binary, i.e. ON/OFF or high level/low level, and are implemented by using one or more dedicated lines.
Some commands are:
SIGNAL P − Instructs the robot controller to output a signal
through line P.
WAIT Q − Indicates that the robot should wait at its current
location until it receives a signal on line Q (Q is one
of the input lines)
DELAY X SEC − Indicates that the robot should wait X seconds before
proceeding to the next step.
For example, let us consider a two-axis robot with a workspace of 8 × 8 addressable points. Its task is to unload a press. The robot must remove the parts from the platen (located at 8, 8) and drop them in a tote pan (located at 1, 8). Here (Refer Fig. 6.10), the robot must move its arm around the near side of the press column (by making use of points 8, 1 and 1, 1) in order to avoid colliding with it. The point 1, 1 will be the safe starting position and the point 8, 1 will be the waiting position of the robot arm before entering the press to remove the part.

Controller ports 1 to 10 will be used as output (SIGNAL) lines and ports 11 to 20 as input (WAIT) lines. Here, output line 4 will be used to actuate the press, output line 5 to close the gripper and output line 6 to open the gripper. Indication that the press has opened (WAIT) will be through the input line 11.

Also, to cause the robot to wait for a specified amount of time to ensure
that the operation had taken place before proceeding to the next step, the
DELAY command is used. The steps given below explain the press unloading
application where the gripper is in the open position to begin with.

MOVE DESCRIPTION

1, 1 Start at home position.

8, 1 Go to wait position.

WAIT 11 Wait for press to open.

8, 8 Go to pickup point.

SIGNAL 5 Actuate gripper to close.

DELAY 1 SEC Wait for gripper to close.

8, 1 Go to safe position.

SIGNAL 4 Signal press that arm is clear.

1, 1 Go around the press column.

1, 8 Go to tote pan.

SIGNAL 6 Actuate gripper to open.

DELAY 1 SEC Wait for gripper to open.

1, 1 Go to home position.
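The sequencing logic of this program can be mimicked with a toy executor. This is purely illustrative; the command names follow the listing above, not a real controller's instruction set:

```python
def run(program, inputs):
    """Toy executor for the press-unloading sequence: MOVE tuples drive the
    arm, SIGNAL records an output line, WAIT blocks on an input line, and
    DELAY accumulates waiting time."""
    pos, outputs, t = None, [], 0.0
    for op, arg in program:
        if op == "MOVE":
            pos = arg
        elif op == "SIGNAL":
            outputs.append(arg)
        elif op == "WAIT":
            assert inputs.get(arg), f"input line {arg} never came on"
        elif op == "DELAY":
            t += arg
    return pos, outputs, t

# The press-unloading sequence from the listing above:
program = [
    ("MOVE", (1, 1)), ("MOVE", (8, 1)), ("WAIT", 11), ("MOVE", (8, 8)),
    ("SIGNAL", 5), ("DELAY", 1), ("MOVE", (8, 1)), ("SIGNAL", 4),
    ("MOVE", (1, 1)), ("MOVE", (1, 8)), ("SIGNAL", 6), ("DELAY", 1),
    ("MOVE", (1, 1)),
]
```

Running it with input line 11 held 'on' finishes at the home position (1, 1) after signalling lines 5, 4 and 6 in turn, with two seconds of accumulated delay.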

6.7. BRANCHING:

Branching is a method of dividing a program into convenient segments that can be called one or more times and executed during the program. A branch can be considered as a subroutine that can be executed either by branching to it at a particular place in the program or by testing an input signal line to branch to it.

6.8. ROBOT PROGRAMMING LANGUAGES / TEXTUAL PROGRAMMING:

Textual programming languages widely used in digital computers took over the control function in robotics. The increasing complexity of the tasks that robots were expected to perform, along with the need to embed logical decisions into the robot workcycle, stimulated the growth of these languages. Some of the textual robot languages developed over the years are:

LANGUAGE                        DEVELOPED BY
WAVE                            Stanford Artificial Intelligence Laboratory.
AL                              Stanford Artificial Intelligence Laboratory.
VAL                             Victor Scheinman; used in the PUMA robot
(Victor's Assembly Language)    series by Unimation, Inc.
AUTOPASS                        IBM Corporation.
AML                             IBM Corporation.
RAIL                            Automatix.
MCL                             McDonnell-Douglas under U.S. Air Force
(Machine Control Language)      sponsorship.
HELP                            DEA, Italy; licensed to General Electric
                                Company.

These textual robot languages provide a variety of structures and capabilities and can be grouped into three major classes:

★ First generation languages.

★ Second generation languages.

★ Future generation languages.
6.8.1. First Generation languages:

These languages use a combination of off-line programming (i.e. command statements) and teach pendant programming. Since these languages were developed largely to implement motion control, they are also called 'motion level' languages. The capabilities of first generation languages are similar to those of the advanced teach pendant methods. The features include the ability to define manipulator motions, handling elementary sensor data like ON/OFF signals, branching, input/output interfacing and opening and closing of the end-effectors.
An example of the first generation language is VAL. It is designed with
simple syntax and is capable of illustrating the robot functions very easily. It
was mainly adopted for Unimation robots.
The limitations of first generation languages include

★ The inability to specify complex arithmetic computations.

★ The incapability to use complex sensors and sensor data.

★ Limited capacity to communicate with other computers.

★ Incapability of extension for future enhancements.


6.8.2. Second Generation Languages:

Second generation languages are structured programming languages which overcome the limitations of the first generation languages. These are similar to computer programming languages and can accomplish more complex tasks.

The features include:

★ Motion control: Complex motions can be generated apart from straight line interpolation.

★ Advanced sensor capabilities: Capacity to handle analog signals in addition to binary (ON/OFF) signals and to control devices using the sensory data.

★ Limited intelligence: The robot controller can be programmed to respond to any problem or malfunction, i.e. an error recovery procedure. Limited intelligence refers to the fact that the robot cannot figure out what to do on its own beyond what it has been programmed to do.

★ Improved communication and data processing: Second generation languages have improved capability to interact with other computers and computer databases for keeping records, generating performance reports and controlling activities in the workcell.

★ Extensibility: Second generation languages can be extended by the user to handle the requirements of future sensing devices, robots and applications.

Some of the commercially available second generation languages include: AML (A Manufacturing Language), RAIL (a high level robot language based on Pascal), MCL (a modification of APT) and VAL II.

6.8.3. Future Generation Languages:

These languages are also called 'world modeling', 'model based' or 'task oriented' (object level) languages.

In this concept, the robot possesses knowledge of the three-dimensional model of its workspace, either by inputting data into its control memory or by providing it with the capacity to see the work environment and properly interpret what it sees. By this, the robot knows the desired locations without being taught each point and is capable of self-programming to perform a task based on a stated objective. Examples of such high-level objective-oriented commands would be 'TIGHTEN A NUT' or 'ASSEMBLE THE TYPEWRITER'.

Current limitations of future generation languages include:

★ Accuracy of the world model contained in the robot's memory.

★ The technology of artificial intelligence and hierarchical control systems that would permit a robot to accept an objective-oriented command and translate it into a step-by-step procedure to accomplish it.

6.9. STRUCTURE OF ROBOT LANGUAGE:


We have discussed in earlier sections that it is difficult to have close control through pendant teaching, and so textual programming is attempted, where computer instructions are given following the syntax of a certain robot language. A robot language must be able to support the programming of the robot, control the robot manipulator, interface with sensors and equipment, and support data communications with other computer systems in the company. This is illustrated in Fig. 6.11.

Fig. 6.11: Language Coordinated Components in a Robot System



6.9.1. Operating System:

This is used to perform several functions like writing, editing, or executing a program in the robot textual languages. The term operating system, in computer science, refers to the software that supports the internal operation of the computer system. It facilitates the operation of the computer by the user and increases the efficiency of the system and associated peripheral devices.
An operating system requires an interpreter or compiler for processing a robot language program. An interpreter runs every instruction of the robot program one by one (e.g. VAL), while a compiler converts the instructions into machine level code by passing through the complete program (e.g. MCL). Programs processed by a compiler result in faster execution times, whereas editing of an interpreted program is very fast.
A robot language operating system has three basic modes of operation:

★ Monitor or supervisory mode.

★ Run or execute mode.

★ Edit mode.
Monitor mode or Supervisory mode:

The purpose of the monitor mode is to control the complete operating system. This mode allows the user to carry out important tasks like:

★ Defining positions with the help of a teach pendant.

★ Entering a suitable speed for operating the robot.

★ Storing programs in the memory.

★ Transferring the stored programs to the robot controller memory.

★ Bringing back an existing program for performing other tasks like edit and run.

Edit mode:

In the edit mode, the programmer can write new programs or edit existing programs. There is an instruction set for writing and editing the programs, which can be done by three different methods:

★ Editing or deleting instructions in an existing program.

★ Writing a new series of instruction lines in a program.

★ Adding new lines in the program.
Run mode or Execute mode:

The run mode is used to execute the sequence of instructions in the program. This mode helps in checking for errors in the program. These errors are corrected with the help of language debugging methods. For example, the points defined in the program may exceed the limits of movement of the manipulator. In such a case, an error message will be displayed on the screen. To correct this error, the program is sent back to the edit mode for corrections.

6.9.2. Elements and Functions of a Robot Language:

The basic elements and functions that should be incorporated into a robot
language to enable the robot to perform various tasks are:

★ Constants, variables and other data objects.


★ Monitor commands.
★ Motion commands.
★ End effector and sensor commands.
★ Program control and subroutines.
★ Computations and operations.
★ Communications and data processing.

Some of these are discussed in the following sections with respect to VAL programming.

6.10. VAL PROGRAMMING:

VAL is a robot programming language and operating system developed by Unimation, Inc. for its PUMA series industrial robots.

It is a compact stand-alone system designed to be highly interactive, to minimize programming time and to provide as many programming aids as possible.

The motions and actions of the robot are controlled by a program created by the user and stored in RAM. These control programs are called user programs or VAL programs. Also, subroutines, which are separate programs, can be included in a VAL program. A subroutine can call other subroutines, with up to ten such levels possible.

VAL was later upgraded to VAL II.

It provides the capability to easily define the task a robot is to perform. Also, it includes the ability to respond to information from sensor systems such as machine vision, enhanced performance in terms of arm trajectory generation, and working in unpredictable situations or using reference frames.

VAL contains an easy to use program editor that allows the user to
create or to modify robot control programs. The program modification facilities
include the ability to insert new program steps, to delete old steps and to
replace all or part of existing steps.

6.10.1. Robot Locations:

Robot locations refer to the data that represents the position and
orientation of the robot tool. A ‘point’ or a ‘position’ is a cartesian reference
in the workspace. A ‘location’ is a point plus an orientation. While executing
a motion instruction, the robot tool point moves to the specified destination
point and the tool frame is oriented to the specified destination orientation.

VAL has two methods of defining robot locations:

★ Precision points, and

★ Transformations.
When a location is expressed in terms of the positions of the individual robot joints, it is called a precision point. Here, the advantage is that maximum precision can be obtained without any ambiguity regarding the robot configuration, but precision points cannot be used for other robot structures (i.e. they are robot dependent) and cannot be modified.

A transformation is a robot-independent representation of the position and orientation of the tool. The location values are expressed in terms of the cartesian coordinates and orientation angles of the robot tool relative to a reference frame fixed in the robot base.

Also, relative transformations, called compound transformations, are available in VAL. These define a location relative to other locations and are written as strings of transformation names separated by colons. The advantage is that they can be used with other robots and can be modified by shifting a position or an angle.
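Chaining transformation names in this way amounts to multiplying homogeneous transform matrices. A planar sketch (the location names PALLET and HOLE are hypothetical, chosen only to illustrate the idea):

```python
import math

def transform(x, y, theta):
    """Planar homogeneous transform (3x3): rotate by theta, translate by (x, y)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0, 0, 1]]

def compose(a, b):
    """Matrix product a . b, i.e. location b expressed relative to location a."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# PALLET is defined in the base frame; HOLE is defined relative to PALLET.
pallet = transform(10.0, 5.0, math.pi / 2)     # pallet rotated 90 degrees
hole = transform(2.0, 0.0, 0.0)                # 2 units along the pallet x-axis
base_to_hole = compose(pallet, hole)           # analogous to PALLET:HOLE
```

Shifting the pallet shifts every hole defined relative to it, which is exactly the modifiability advantage described above.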

Path Control: VAL uses straight line interpolation and joint interpolation
methods to control the path of a robot from one location to another. In straight
line interpolation the motion speed of the robot tool tip can be accurately
controlled but is slower than the corresponding joint − interpolated motions.

6.10.2. Motion Commands:

There are programming commands used to define the motion path.


Usually textual statements are used to describe the motion while the
leadthrough methods are used to define the position and orientation of the
robot during and / or at the end of the motion.

Some specific commands used in VAL II are discussed below.



HERE P1 (or) LEARN P1 : These statements are used to define the desired point P1 and record that point into memory (i.e. the set of joint positions or coordinates used by the controller to define the point). Either powered leadthrough or manual leadthrough is used to place the robot at the desired point.

MOVE P1 : Causes the robot to move by a joint-interpolated motion from its current position to a position and orientation defined by the variable name P1.

Other variants of the MOVE statement include the definition of straight line interpolation motions, incremental moves, approach and depart moves, and paths.
MOVES P1 : Causes the robot to move by straight line interpolation
to the point P1. The suffix S on the statement designates
straight line motion.
DMOVE : The prefix D designates delta, so the statement represents a delta move or incremental move. An incremental move is one where the endpoint is defined relative to the current position of the manipulator rather than to the absolute coordinate system of the robot. For instance, the statement DMOVE (4, 125) moves joint 4 from its current position by an incremental amount of 125 units.
APPRO : The approach command moves the gripper from its
current position to within a certain distance of the
pick-up point by joint interpolation. It is useful for
avoiding obstacles such as parts in a tote pan.
DEPART : This statement moves the gripper away from the point
after the pickup is made.

The approach and depart statements are used in material handling operations. Let us consider the following sequence.

APPRO P1, 40

MOVE P1

SIGNAL (to close gripper)

DEPART 40

In the above sequence, the approach command moves the gripper to a safe distance of 40 mm above the destination point. The orientation of the gripper is the same as that defined for the point P1. The MOVE command moves the tool to the point P1 and the DEPART command moves the tool away from P1 by 40 mm.

APPROS and DEPARTS carry out the same instruction but along straight
line paths.

A series of points connected together in a single move is a path in a robot program. The path can be specified as follows:

DEFINE PATH I = PATH (P1, P2, P3, P4)

The above defined path consists of the connected series of points P1,
P2, P3, P4, defined relative to the robot’s world space.

The SPEED command is used to control robot speed by defining either a relative velocity or an absolute velocity. The two ways to specify speed are:

SPEED 70

The manipulator should operate at the specified percentage (here, 70%) of the initially commanded velocity.

SPEED 10 MMPS

This statement indicates that the motion commands are to be executed at a speed of 10 mm/s.

The concept of frame definition in robot programming is conveyed through the following statement.

DEFINE FRAME 1 = FRAME (P1, P2, P3)

The position of FRAME 1 in space is defined using the three points P1,
P2 and P3. P1 would be the origin of the frame, P2 is a point along the
x-axis and P3 is a point in the xy plane.
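The construction this statement implies can be sketched with basic vector algebra (a Gram-Schmidt step on the three taught points; the helper names are illustrative, not VAL syntax):

```python
import math

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def norm(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def frame_from_points(p1, p2, p3):
    """Build an orthonormal frame the way DEFINE FRAME suggests:
    p1 is the origin, p2 lies on the x-axis, p3 lies in the xy-plane."""
    x = norm(sub(p2, p1))
    v = sub(p3, p1)
    # Remove the component of v along x to get a y-axis inside the xy-plane
    y = norm(sub(v, [dot(v, x) * xi for xi in x]))
    z = cross(x, y)            # right-handed z-axis completes the frame
    return p1, x, y, z
```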

DRIVE : This command can be used to change a single joint by a certain amount.

DRIVE 3, 65, 30

This statement changes the angle of joint 3 by driving it 65° in the positive direction at 30 percent of the monitor speed.

ALIGN : This command is used to align the tool or end effector for
subsequent moves such that its z-axis is aligned parallel to
the nearest axis of the world coordinate system.

DO : This command allows a robot to execute a program instruction. Examples: DO ALIGN, DO MOVE P1.

6.10.3. End Effector Commands:

We have discussed previously the SIGNAL command to operate the gripper. Better ways of exercising control over the end effector operation are available in second generation languages.

The basic commands are

OPEN

CLOSE

The above two commands cause the action to occur during execution of
the next motion. If the execution is to take place immediately rather than
during the next motion, the following commands are to be used.

OPENI

CLOSEI

The above statements are for a non-servoed or pneumatically operated gripper. VAL II also has provisions for controlling an electrically driven servoed hand. Here greater control can be achieved, for instance in the following ways:

OPEN 65

This statement causes the gripper to open to 65 mm during the next motion of the robot.

CLOSEI 40

This causes the gripper to close immediately to 40 mm.

If a check has to be made to determine whether the gripper has closed by an expected amount, the GRASP command can be used. For example,

GRASP 11.5, 110

The above statement closes the gripper and checks if the opening is less
than 11.5 mm. If true, the program branches to statement 110 in the program.
In case the statement number is not specified in the command, the system
displays an error message.

Some grippers have tactile and/or force sensors built into the fingers to permit the robot to sense the presence of the object and to apply a measured force during grasping.

For example,

CLOSE 4.0 LB

indicates to apply a 4-lbs gripping force against the part.

To accomplish, in a single statement, control of the servo-controlled gripper with simultaneous movement of the robot arm, the command is

MOVET P2, 80

This causes a joint interpolated motion from the current position to the
point P2 and during the motion also actuates the hand opening to 80 mm.

For straight line motion, the command is

MOVEST P2, 80

For a pneumatically operated gripper, the above commands mean 'open' if the value is greater than zero and 'close' otherwise.

6.10.4. Sensor and Interlock Commands:

The two basic commands are ‘SIGNAL’ and ‘WAIT’.

SIGNAL 4, ON

This statement would allow the signal from output port 4 to be turned on.

SIGNAL 4, OFF

would allow the signal from output port 4 to be turned off.

Also, in VAL II the command used to turn output signals ON or OFF is

SIGNAL − 2, 4

This would turn output signal 2 'off' and output signal 4 'on', i.e., the negative numbers turn the corresponding signals 'off' and the positive numbers turn the corresponding signals 'on'.

An analog output can also be controlled with the SIGNAL command.

SIGNAL 105, 5.5

This gives an output of 5.5 units (probably volts) to the device from
controller output port 105.

The WAIT command can also be used for on-off conditions in the same manner as the SIGNAL command. The WAIT command is used to verify whether the device has been turned 'on' or 'off' before permitting the program to continue. For example, it is useful in a situation where the robot needs to wait for the completion of an automatic machine cycle in a loading and unloading application.

WAIT 20, ON

would cause program execution to stop until the input signal coming to the
robot controller at port 20 is in ‘on’ condition.

WAIT SIG ( − 1, 2)

stops program execution until external (input) signal 1 is turned ‘off’ and input
signal 2 is turned ‘on’.

RESET : This turns off all external output signals. It is used in the initialization portion of a program to ensure that all external signals are in a known state ('off').

REACT : This statement is used to continuously monitor an incoming signal and to respond to a change in the signal. It illustrates the type of command used to interrupt regular execution of the program in response to some higher priority event (like when some error or safety hazard occurs in the workcell).

REACT 11, SAFETY

indicates that the input line 11 is to be continuously monitored. If the current state of the signal is 'off', the signal is monitored for a transition from 'off' to 'on' and then again to 'off'. When the change is detected, REACT transfers program control to the subroutine SAFETY.

Usually REACT completes the current motion command before interrupting. In cases where immediate action is required, the statement REACTI is used. This interrupts robot motion immediately and the transfer to the subroutine is done at once.

SAMPLE VAL PROGRAM:

PROGRAM DEMO : Program name is DEMO.

APPRO P1, 40 : Move to location 40 mm above P1 (P1 is a location to be defined).

MOVES P1 : Move along straight line to P1.

CLOSEI : Gripper to close immediately onto the object.

DEPARTS 100 : Withdraw arm 100 mm from P1 along straight line path.

APPROS BOX, 150 : Move along straight line to location 150 mm above BOX (BOX to be defined).

OPENI : Gripper to open to drop the object.

DEPART 70 : Withdraw arm 70 mm from BOX.

*********
CHAPTER – 7

TRAJECTORY GENERATION


7.1. INTRODUCTION:

Trajectory (or path) describes the desired motion of a manipulator in space. A trajectory refers to a time history of position, velocity and acceleration for each degree of freedom.

For generating the trajectory, the position (θ), velocity (θ̇) and acceleration (θ̈) are computed on digital computers at a certain rate, called the path-update rate. In typical manipulator systems, this rate lies between 60 and 2000 Hz.

The motions of a manipulator can be considered as motions of the tool frame, {T}, relative to the station frame, {S}.

When we move the manipulator from an initial position to some desired


final position − that is, to move the tool frame from its current value,

 Tinitial  , to a desired final value,  Tfinal  , the motion involves both a change
   

in orientation and a change in the position of the tool relative to the station.

To specify the motion in detail, we should include a path description that
gives a sequence of desired via points (intermediate points between the initial
and final positions). Thus, to complete the motion, the tool frame must pass
through a set of intermediate positions and orientations described by the
via points.

The path points include all the via points plus the initial and final
points.

For the motion of the manipulator to be smooth, we define a smooth
function as one that is continuous and has continuous first and second
derivatives.

Rough, jerky motions tend to cause increased wear on the mechanism
and cause vibrations by exciting resonances in the manipulator. To
guarantee smooth paths, there is a great variety of ways that paths might
be specified and planned. Any smooth function of time that passes through the
via points can be used to specify the exact path shape.

7.2. JOINT-SPACE SCHEMES:

In path generation, the path shapes (in space and in time) are described
in terms of functions of joint angles.

Each path point is usually specified in terms of a desired position and
orientation of the tool frame, { T }, relative to the station frame, { S }. Each
of these via points is “converted” into a set of desired joint angles by using
inverse kinematics. Then a smooth function is found for each of the n joints
that passes through the via points and ends at the goal point.

7.2.1. Cubic Polynomials:

Consider the problem of moving the tool from its initial position to a
final position in a certain amount of time. Inverse kinematics allows the set of
joint angles that correspond to the goal position and orientation to be
calculated. θ0 is the initial position of the joint and θf is the desired final
position of that joint. There are many smooth functions, θ(t), that could
interpolate the joint value, as shown in Fig. 7.1.

In making a single smooth motion, at least four constraints on θ(t) are
needed. Two constraints come from the initial and final values:

Initial → at t = 0, θ(0) = θ0,
Final → at t = tf, θ(tf) = θf.    .... (1)

An additional two constraints are that the function be continuous in
velocity, which in this case means that the initial and final velocities are zero:

Initially, velocity θ̇(0) = 0,
Finally, velocity θ̇(tf) = 0.    .... (2)

These four constraints can be satisfied by a polynomial of at least third


degree.

A cubic has the form

θ(t) = a0 + a1 t + a2 t² + a3 t³.    .... (3)

The joint velocity and acceleration along this path are then

θ̇(t) = a1 + 2 a2 t + 3 a3 t²,
θ̈(t) = 2 a2 + 6 a3 t.    .... (4)

Applying the four desired constraints yields four equations in four unknowns:

θ0 = a0,
θf = a0 + a1 tf + a2 tf² + a3 tf³,
θ̇(0) = 0 ∴ a1 = 0,    .... (5)
θ̇(tf) = 0 ⇒ 0 = a1 + 2 a2 tf + 3 a3 tf².

Solving these equations for the ai, we obtain

a0 = θ0,
a1 = 0,
a2 = (3/tf²)(θf − θ0),    .... (6)
a3 = −(2/tf³)(θf − θ0).

Using equation (6), we can calculate the cubic polynomial that connects
any initial joint angle position with any desired final position. This solution
is for the case when the joint starts and finishes at zero velocity.
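As a sketch of how equations (3), (4), and (6) might be implemented, the following Python fragment (the function names are illustrative, not from any particular robot library) computes the cubic coefficients for a rest-to-rest motion and evaluates position, velocity, and acceleration at any time:

```python
def cubic_coeffs(theta0, thetaf, tf):
    """Coefficients of a rest-to-rest cubic (equation 6)."""
    a0 = theta0
    a1 = 0.0
    a2 = 3.0 * (thetaf - theta0) / tf**2
    a3 = -2.0 * (thetaf - theta0) / tf**3
    return a0, a1, a2, a3

def evaluate(coeffs, t):
    """Position, velocity, and acceleration at time t (equations 3 and 4)."""
    a0, a1, a2, a3 = coeffs
    pos = a0 + a1 * t + a2 * t**2 + a3 * t**3
    vel = a1 + 2 * a2 * t + 3 * a3 * t**2
    acc = 2 * a2 + 6 * a3 * t
    return pos, vel, acc
```

For example, `cubic_coeffs(20.0, 100.0, 3.0)` gives a2 ≈ 26.67 and a3 ≈ −5.93, and evaluating at t = tf recovers the final angle with zero velocity.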

Problem 7.1: A single-link robot with a rotary joint is motionless at


θ = 20°. It is desired to move the joint in a smooth manner to θ = 100° in 3
seconds. Find the coefficients of a cubic that accomplishes this motion and
brings the manipulator to rest at the goal. Plot the position, velocity, and
acceleration of the joint as a function of time.

Using equation (6),

a0 = θ0 = 20°
a1 = 0
a2 = (3/tf²)(θf − θ0) = (3/3²)(100 − 20) = 26.67
a3 = −(2/tf³)(θf − θ0) = −(2/3³)(100 − 20) = − 5.93

Using equations (3) and (4), we obtain

θ(t) = 20 + 26.67 t² − 5.93 t³
θ̇(t) = 53.34 t − 17.79 t²
θ̈(t) = 53.34 − 35.58 t
Figure 7.2 shows the position, velocity, and acceleration functions for
this motion. Note that the velocity profile for any cubic function is a parabola
and that the acceleration profile is linear.

Angular position θ:

θ(0.6) = 28.32°
θ(1.2) = 48.16°
θ(1.8) = 71.83°
θ(2.4) = 91.64°
θ(3) = 99.92°

Angular velocity θ̇:

θ̇(0.6) = 25.6 °/s
θ̇(1.2) = 38.39 °/s
θ̇(1.8) = 38.37 °/s
θ̇(2.4) = 25.55 °/s
θ̇(3) = 0

Angular acceleration θ̈:

θ̈(0.6) = 31.9 °/s²
θ̈(1.2) = 10.64 °/s²
θ̈(1.8) = − 10.70 °/s²
θ̈(2.4) = − 32.05 °/s²
θ̈(3) = − 53.4 °/s²

Fig. 7.2: Position, velocity, and acceleration profiles for a single cubic segment that starts and ends at rest.
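The tabulated values of Problem 7.1 can be reproduced with a few lines of Python; this self-contained sketch simply evaluates the cubic and its derivatives at the sample times (all quantities in degrees, not radians):

```python
# Cubic for Problem 7.1: theta0 = 20 deg, thetaf = 100 deg, tf = 3 s.
a0, a1 = 20.0, 0.0
a2 = 3.0 * (100.0 - 20.0) / 3.0**2   # = 26.67
a3 = -2.0 * (100.0 - 20.0) / 3.0**3  # = -5.93

for t in (0.6, 1.2, 1.8, 2.4, 3.0):
    pos = a0 + a1 * t + a2 * t**2 + a3 * t**3  # degrees
    vel = a1 + 2 * a2 * t + 3 * a3 * t**2      # degrees/s
    acc = 2 * a2 + 6 * a3 * t                  # degrees/s^2
    print(f"t = {t:3.1f} s:  {pos:6.2f} deg  {vel:6.2f} deg/s  {acc:7.2f} deg/s^2")
```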

7.2.2. Cubic Polynomials for a path with via points:

In the last article, we considered motions described by a desired
duration and a final goal point. In general, we wish paths to include
intermediate via points. If the manipulator is to come to rest at each via point,
then we can use the cubic solution explained in the last problem.

Usually, we wish to pass through a via point without stopping, and hence
we should generalize the way we fit cubics to the path constraints.

As with the single goal point, each via point is usually specified in terms of a
desired position and orientation of the tool frame relative to the station frame.
Each of these via points is “converted” into a set of desired joint angles by
using inverse kinematics. Then cubics can be computed to connect the via-point
values for each joint together in a smooth way.

If the desired velocities of the joints at the via points are known, then we
can still construct cubic polynomials; now, however, the velocity constraints at
each end are not zero, but some known velocity. Equation (2) then becomes

θ̇(0) = θ̇0,
θ̇(tf) = θ̇f.    .... (7)

The four equations describing this general cubic are

θ0 = a0,
θf = a0 + a1 tf + a2 tf² + a3 tf³,
θ̇0 = a1,    .... (8)
θ̇f = a1 + 2 a2 tf + 3 a3 tf².

Solving these equations for the ai, we obtain

a0 = θ0,
a1 = θ̇0,
a2 = (3/tf²)(θf − θ0) − (2/tf) θ̇0 − (1/tf) θ̇f,    .... (9)
a3 = −(2/tf³)(θf − θ0) + (1/tf²)(θ̇f + θ̇0).

Using equation (9), the cubic polynomial can be calculated to connect
any initial and final positions with any initial and final velocities.
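Equation (9) translates directly into code. A sketch in Python (the function name is illustrative) giving the coefficients of a cubic segment with specified, possibly nonzero, boundary velocities:

```python
def cubic_coeffs_vel(theta0, thetaf, v0, vf, tf):
    """Cubic-segment coefficients for given boundary positions and
    boundary velocities (equation 9)."""
    d = thetaf - theta0
    a0 = theta0
    a1 = v0
    a2 = 3.0 * d / tf**2 - 2.0 * v0 / tf - vf / tf
    a3 = -2.0 * d / tf**3 + (vf + v0) / tf**2
    return a0, a1, a2, a3
```

A quick sanity check: evaluating θ(t) = a0 + a1 t + a2 t² + a3 t³ and its derivative at t = 0 and t = tf recovers θ0, θf, θ̇0, and θ̇f.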

In Fig. 7.3, a reasonable choice of joint velocities at the via points is
illustrated. The via points are connected with straight-line segments. If the
slope of these lines changes sign at a via point, zero velocity is chosen; if the
slope does not change sign, the average of the two slopes is chosen as the via
velocity. In this way, from the desired via points alone, the system can choose
the velocity at each point, and the path (trajectory) can be generated.
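The heuristic just described can be sketched in a few lines (Python; a simple illustration for one joint, with illustrative names):

```python
def via_velocities(times, positions):
    """Heuristically choose a joint velocity at each path point:
    zero at the two endpoints; at a via point, zero if the slopes of
    the neighbouring straight-line segments change sign, otherwise
    the average of the two slopes."""
    n = len(positions)
    vels = [0.0] * n                      # endpoints stay at rest
    for i in range(1, n - 1):
        s_prev = (positions[i] - positions[i - 1]) / (times[i] - times[i - 1])
        s_next = (positions[i + 1] - positions[i]) / (times[i + 1] - times[i])
        if s_prev * s_next > 0:           # same sign: average the slopes
            vels[i] = 0.5 * (s_prev + s_next)
        # sign change (or a flat segment): leave the via velocity at zero
    return vels
```

The chosen velocities can then be fed, pairwise, into the equation-(9) coefficients for each segment of the path.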

7.3. PATH GENERATION AT RUN TIME:

At run time, the path generator constructs the trajectory, usually in terms
of θ, θ̇, and θ̈, and feeds this information to the manipulator’s control system.
The path generator computes the trajectory at the path-update rate.
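A run-time path generator can be pictured as a fixed-rate loop. The sketch below (Python; `segment` and `send_setpoint` are hypothetical callbacks, not part of any real controller API) samples a trajectory segment at the path-update rate:

```python
UPDATE_RATE_HZ = 200          # typical path-update rates lie between 60 and 2000 Hz
DT = 1.0 / UPDATE_RATE_HZ

def run_trajectory(segment, duration, send_setpoint):
    """Evaluate segment(t) -> (position, velocity, acceleration) at the
    path-update rate and feed each sample to the control system via
    send_setpoint.  In a real controller this loop would be scheduled
    by a hardware timer rather than running free."""
    steps = int(duration / DT)
    for k in range(steps + 1):
        t = min(k * DT, duration)         # clamp the final sample to t = duration
        pos, vel, acc = segment(t)
        send_setpoint(pos, vel, acc)
```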

7.4. DESCRIPTION OF PATHS WITH A ROBOT PROGRAMMING


LANGUAGE:

Various types of paths might be specified in a robot language.

To move the manipulator in joint-space mode along
linear-parabolic-blend paths, we could say

move ARM to C with duration = 5 ∗ seconds;

To move to the same position and orientation along a straight line, we could
say

move ARM to C linearly with duration = 5 ∗ seconds;

where the keyword “linearly” denotes Cartesian straight-line motion. If
duration is not important, the user can omit this specification, and the system
will use a default velocity − that is,

move ARM to C;

A via point can be added, and we can write

move ARM to C via B;

or a whole set of via points might be specified by

move ARM to C via B, A, D;



7.5. COLLISION-FREE PATH PLANNING:

Ideally, the robot system should be instructed only what the
desired goal point of the manipulator motion is, and the system itself should
determine where and how many via points are required to reach the goal
without hitting any obstacles.

*********
CHAPTER – 8

MANIPULATOR
MECHANISM DESIGN


8.1. INTRODUCTION:

A particular manipulator design suits a particular task, and performance
varies from design to design. A robot manipulator’s performance depends on
factors such as load capacity, speed, size of workspace, repeatability, overall
manipulator size, weight, power consumption, and cost.

Elements of a robot system

1. The manipulator, including its internal or proprioceptive sensors,

2. The end-effector, or end-of-arm tooling,

3. External sensors and effectors, such as vision systems and part feeders,
and

4. The controller.

Designing a manipulator is an iterative process.



8.2. BASED ON THE DESIGN ON TASK REQUIREMENTS:

Large robots capable of handling high payloads are generally quite different
from small robots that handle delicate electronic components. The size, the
number of joints, the arrangement of the joints, and the types of actuation,
sensing, and control will vary significantly with the sort of task to be
performed.

8.2.1. Number of degrees of freedom:

The number of degrees of freedom in a manipulator should match the


number required by the task. All tasks will not require a full six degrees of
freedom.

The most common such circumstance occurs when the end-effector has
an axis of symmetry. Arc welding, spot welding, deburring, glueing, and
polishing provide examples of tasks that often employ end-effectors with at
least one axis of symmetry.

In analyzing the symmetric-tool situation, the actual manipulator need


not have more than five degrees of freedom. Quite a large percentage of
existing industrial robots are 5-DOF, in recognition of the relative prevalence
of symmetric-tool applications.

Some tasks are performed with fewer than 5 degrees of freedom. Placement
of components on circuit boards is an example of this. Circuit boards are
planar and contain parts of different heights. Positioning parts on a planar
surface requires three degrees of freedom (x, y, and θ); in order to lift and
insert the parts, a fourth motion normal to the plane (z) is added.

Parts with an axis of symmetry also reduce the required degrees of


freedom for the manipulator. For example, cylindrical parts can be picked up
and inserted independent of the orientation of the gripper with respect to the
axis of the cylinder.

8.2.2. Workspace:

In performing tasks, a manipulator has to reach a number of workpieces


or fixtures.

The overall scale of the task sets the required workspace of the
manipulator.

The intrusion of the manipulator itself is an important factor. Depending


on the kinematic design, operating a manipulator in a given application could
require more or less space around the fixtures in order to avoid collisions.

8.2.3. Load Capacity:

The load capacity of a manipulator depends upon the sizing of its


structural members, power-transmission system, and actuators.

8.2.4. Speed:

High speed manipulator offers advantages in many applications.

The process itself limits the speed like welding and spray-painting
applications.

8.2.5. Repeatability and Accuracy:

High repeatability and accuracy are expensive to achieve. High accuracy


is achieved by having good knowledge of the link and other parameters.

8.3. KINEMATIC CONFIGURATION:

Once the required number of degrees of freedom has been fixed, a


particular configuration of joints must be chosen to achieve those freedoms.

Most manipulators are designed so that the last n − 3 joints orient the
end-effector and have axes that intersect at the wrist point, and the first three
joints position this wrist point. Manipulators with this design are composed of
a positioning structure followed by an orienting structure or wrist.

Furthermore, the positioning structure is designed to be kinematically
simple, having link twists equal to 0° or ± 90° and having many of the link
lengths and offsets equal to zero.

8.3.1. Cartesian:
A Cartesian manipulator has a straightforward configuration. As shown
in Fig. 8.1, joints 1 through 3 are prismatic, mutually orthogonal, and
correspond to the X, Y, and Z Cartesian directions.

This configuration produces robots with very stiff structures. As a


consequence, very large robots can be built. These large robots, often called
gantry robots, resemble overhead gantry cranes. Gantry robots sometimes
manipulate entire automobiles or inspect entire aircraft.

Another advantage of Cartesian manipulators is that the first three joints
are decoupled, which makes them simpler to design and prevents kinematic
singularities due to the first three joints.

The disadvantage is that all of the feeders and fixtures must lie “inside”
the robot. The size of the robot’s support structure limits the size and

placement of fixtures and sensors. These limitations make retrofitting Cartesian


robots into existing workcells extremely difficult.

8.3.2. Articulated:

Figure 8.2 shows an articulated manipulator, also called a
jointed, elbow, or anthropomorphic manipulator. This manipulator has
two “shoulder” joints (one for rotation about a vertical axis and one for
elevation out of the horizontal plane), an “elbow” joint whose axis is usually
parallel to the shoulder elevation joint, and two or three wrist joints at
the end of the manipulator. Both the PUMA 560 and the Motoman L-3 fall
into this class.

Articulated robots minimize the intrusion of the manipulator structure


into the workspace, making them capable of reaching into confined spaces.
They require much less overall structure than Cartesian robots, making them
less expensive.

8.3.3 Spherical:

The spherical configuration as shown in Fig. 8.3 has many similarities


to the articulated manipulator, but with the elbow joint replaced by a prismatic
joint.

The link that moves prismatically might telescope − or even “stick out
the back” when retracted.

8.3.4. Cylindrical:

Cylindrical manipulators Fig. 8.4 consist of a prismatic joint for


translating the arm vertically, a revolute joint about a vertical axis, another
prismatic joint orthogonal to the revolute joint axis, and finally, a wrist.

8.4. WRISTS:
The wrist configurations consist of either two or three revolute joints
with orthogonal, intersecting axes. The first of the wrist joints usually forms
joint 4 of the manipulator.

A configuration of three orthogonal axes will guarantee that any


orientation can be achieved.

A three orthogonal axis wrist can be located at the end of the manipulator
in any desired orientation with no penalty. Fig. 8.5. is a schematic of one
possible design of such a wrist, which uses several sets of bevel gears to drive
the mechanism from remotely located actuators.

Some industrial robots have wrists that do not have intersecting axes.

The wrist is mounted on an articulated manipulator in such a way that
the joint-4 axis is parallel to the joint-2 and joint-3 axes, as shown in Fig. 8.6.
Likewise, a nonintersecting-axis wrist mounted on a Cartesian robot yields a
closed-form-solvable manipulator.

Typically, 5-DOF welding robots use two-axis wrists, oriented as shown
in Fig. 8.7.

8.5. ACTUATION SCHEMES:

Once the general kinematic structure of a manipulator has been chosen,
the actuation of the joints must be considered: the actuator, reduction, and
transmission systems must be designed together.

8.5.1. Actuator Location:

The natural choice of actuator location is at or near the joint it drives. If the
actuator can produce enough torque or force, its output can attach directly to
the joint. This direct-drive configuration has the advantages of simplicity in
design and superior controllability: with no transmission or reduction elements
between the actuator and the joint, the joint motions can be controlled with
the same fidelity as the actuator itself.

However, many actuators are best suited to relatively high speeds and
low torques and therefore require a speed-reduction system. Furthermore, to
avoid heavy actuators at the joints, the actuators can be located remotely from
the joints, toward the base of the manipulator, so that the overall inertia of the
manipulator is reduced considerably. This, in turn, reduces the size needed
for the actuators. To obtain these benefits, a transmission system is needed
to transfer motion from the actuator to the joint.

In a joint drive system with a remotely mounted actuator, the reduction


system could be placed either at the actuator or at the joint. Some arrangements
combine the functions of transmission and reduction. The major disadvantage
of reduction and transmission system is that they introduce additional friction
and flexibility into the mechanism. When the reduction is at the joint, the
transmission will be working at higher speeds and lower torques with lesser
flexibility.

The optimal distribution of reduction stages throughout the transmission
will depend on the flexibility of the transmission, the weight of the
reduction system, the friction associated with the reduction system, and the
ease of incorporating these components into the overall manipulator design.

8.5.2. Reduction and transmission systems:

Gears are the element most commonly used for reduction, providing large
reductions in relatively compact configurations. Gear pairs come in various
configurations for parallel shafts (spur gears), orthogonal intersecting shafts
(bevel gears), skew shafts (worm gears or crossed helical gears), and other
arrangements. Different types of gears have different load ratings, wear
characteristics, and frictional properties.

The major disadvantages of using gearing are added backlash and
friction. Backlash arises from the imperfect meshing of gears and can be
defined as the maximum angular motion of the output gear when the input
gear remains fixed. If the gear teeth are meshed tightly to eliminate backlash,
there can be excessive friction. Very precise gears and very precise mounting
minimize backlash but also increase cost.

The other reduction elements are flexible bands, cables and belts. The
flexibility of these elements is proportional to their length. Because these
systems are flexible, there must be some mechanism for preloading the loop
to ensure that the belt or cable stays engaged on the pulley. Large preloads
can add undue strain to the flexible element and introduce excessive friction.

Cables or flexible bands can be used either in a closed loop or as single


ended elements that are always kept in tension by preload.

Roller chains are similar to flexible bands but can bend around relatively
small pulleys while retaining a high stiffness. As a result of wear and high
loads on the pins connecting the links, toothed belt systems are more compact
than roller chains for certain applications.

Band, cable, belt, and chain drives have the ability to combine
transmission with reduction.

Lead screws or ball-bearing screws provide speed reduction in a compact
package (Fig. 8.9) while transforming rotary motion into linear motion. Lead
screws are very stiff and can support very large loads. Ball-bearing screws are
similar to lead screws, but instead of having the nut threads ride directly on
the screw threads, a recirculating circuit of ball bearings rolls between the two
sets of threads. Ball-bearing screws have very low friction.

8.6. STIFFNESS AND DEFLECTIONS:

The good design of most manipulators depends on the overall stiffness
of the structure and of the drive system. Stiff systems provide two main benefits:

1. The tool-frame location can be calculated by using the forward kinematics
based on the sensed joint positions. For an accurate calculation, the links must
not sag under gravity or other loads.

2. The Denavit-Hartenberg description of the linkages remains valid under
various loading conditions.

Flexibility in the structure or drive train will lead to resonances, which have
an undesirable effect on manipulator performance.

Finite-element techniques can be used to predict the stiffness (and other
properties) of realistic structural elements more accurately.
8.7. ACTUATORS:

In the early days, hydraulic cylinders or vane actuators were used to actuate
manipulators. They can produce enough force to drive joints without a
reduction system. Their speed of operation depends upon the pump and
accumulator system, usually located remotely from the manipulator.

However, a hydraulic system requires a great deal of equipment, such as
pumps, accumulators, hoses, and servo valves, and hence it is unsuitable for
some applications.
Pneumatic cylinders possess all the favorable attributes of hydraulics.
However, pneumatic actuators have proven difficult to control accurately,
because of the compressibility of air and the high friction of the seals.

Electric motors are the most popular actuators for manipulators. Although
they don’t have the power-to-weight ratio of hydraulics or pneumatics, their
controllability and ease of interfacing make them attractive for small- to
medium-sized manipulators.
Direct-current (DC) brush motors (Fig. 8.10) are easy to interface and
control. The current is conducted to the windings of the rotor via brushes,
which make contact with the revolving commutator. Brush wear and friction
can be problems.

The limiting factor on the torque output of these motors is the


overheating of the windings.

Brushless motors solve brush wear and friction problems. Here, the
windings remain stationary and the magnetic field piece rotates. A sensor on
the rotor detects the shaft angle and is then used by external electronics to
perform the commutation. Another advantage of brushless motors is that the
winding is on the outside, attached to the motor case, affording it much better
cooling.

Alternating current (AC) motors and stepper motors have been used
frequently in industrial robotics.

8.8. POSITION SENSING:

All manipulators have servo-controlled mechanisms to eliminate
the error between the sensed position of each joint and the desired position.
For this, each joint carries some sort of position-sensing device.

A position sensor is fitted directly on the shaft of the actuator. If the


drive train is stiff and has no backlash, the true joint angles can be calculated
from the actuator shaft positions. Such co-located sensor and actuator pairs
are easiest to control.

The most popular position-feedback device is the rotary optical encoder.
As the encoder shaft turns, a disk containing a pattern of fine lines interrupts
a light beam; a photodetector turns these light pulses into a binary waveform.
Typically there are two such channels, offset by a quarter cycle, producing
two square waves in quadrature.

The shaft angle is determined by counting the number of pulses, and


the direction of rotation is determined by the relative phase of the two square
waves. Additionally, encoders generally emit an index pulse at one location,
which can be used to set a home position in order to compute an absolute
angular position.
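The counting-and-direction logic can be illustrated with a small quadrature-decoding sketch (Python; it assumes ideal, glitch-free samples of the two channels, and the names are illustrative):

```python
# The two encoder channels (A, B) step through the Gray-code cycle
# 00 -> 01 -> 11 -> 10 in one direction of rotation, and in reverse
# for the other direction.
_SEQ = [(0, 0), (0, 1), (1, 1), (1, 0)]

def decode(samples):
    """Count encoder ticks from a list of (A, B) samples; the sign of
    the count gives the direction of rotation."""
    count = 0
    prev = samples[0]
    for cur in samples[1:]:
        if cur == prev:
            continue                      # no transition
        step = (_SEQ.index(cur) - _SEQ.index(prev)) % 4
        if step == 1:
            count += 1                    # one tick forward
        elif step == 3:
            count -= 1                    # one tick backward
        # step == 2 would mean a missed sample; ignored in this sketch
        prev = cur
    return count
```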

Resolvers are devices that output two analog signals: the sine of the
shaft angle and its cosine. The shaft angle is computed from the relative
magnitude of the two signals.

Resolvers are often more reliable than optical encoders, but their
resolution is lower. Typically, resolvers cannot be placed directly at the joint
without additional gearing to improve the resolution.
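Computing the shaft angle from the two resolver signals amounts to a two-argument arctangent; a one-line sketch:

```python
import math

def resolver_angle(sin_v, cos_v):
    """Recover the shaft angle from the resolver's sine and cosine
    outputs.  atan2 uses the relative magnitudes (and signs) of the
    two signals, so the angle is unambiguous over a full revolution."""
    return math.atan2(sin_v, cos_v)       # radians, in (-pi, pi]
```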

Potentiometers provide the most straightforward form of position


sensing. Connected in a bridge configuration, they produce a voltage
proportional to the shaft position. Difficulties with resolution, linearity, and
noise susceptibility limit their use.

Tachometers are sometimes used to provide an analog signal
proportional to the shaft velocity. In the absence of such velocity sensors, the
velocity feedback is derived by taking differences of sensed positions over time.
This numerical differentiation can introduce both noise and a time lag.
Despite these potential problems, most manipulators are without direct velocity
sensing.
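The derived-velocity scheme is simple differencing, often followed by a little smoothing; a sketch (Python; the filter constant is an arbitrary illustrative choice, not a recommended value):

```python
def velocity_estimate(prev_pos, cur_pos, dt):
    """Backward-difference velocity estimate from two sensed positions.
    Differencing amplifies quantization noise and lags the true
    velocity by roughly half a sample period."""
    return (cur_pos - prev_pos) / dt

def filtered_velocity(prev_filtered, raw_velocity, alpha=0.2):
    """First-order low-pass filter commonly used to tame the noise
    (alpha is a smoothing constant, 0 < alpha <= 1)."""
    return prev_filtered + alpha * (raw_velocity - prev_filtered)
```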

8.9. FORCE SENSING:

Strain gauges are used to measure the forces of contact between a
manipulator’s end-effector and the environment that it touches. Strain gauges
are of either the semiconductor or the metal-foil variety. They are bonded to a
metal structure and produce an output proportional to the strain in the metal.

There are three places where such sensors are usually placed on a
manipulator:

1. At the joint actuators. These sensors measure the torque or force


output of the actuator itself. These are useful for some control
schemes, but usually do not provide good sensing of contact between
the end effector and the environment.

2. Between the end effector and last joint of the manipulator. These
sensors are usually referred to as wrist sensors to measure the forces
and torques acting on the end effector. These sensors are capable of
measuring from three to six components of the force/torque vector
acting on the end effector.

3. At the “fingertips” of the end effector. Usually, these force sensing


fingers have built-in strain gauges to measure from one to four
components of force acting at each fingertip.

*********
ROBOTICS

Short Questions and Answers


CHAPTER – 1: FUNDAMENTALS OF ROBOT

1.1. What is Robotics?

Robotics:

Robotics is a form of industrial automation.

Robotics is the science of designing and building robots suitable for
real-life applications in automated manufacturing and non-manufacturing
environments.

1.2. What is Industrial Robot?

Industrial Robot:

An industrial robot is a reprogrammable, multifunctional manipulator
designed to move materials, parts, tools or special devices through variable
programmed motions for the performance of a variety of tasks.

An industrial robot is a general purpose, programmable machine which


possesses certain human like characteristics.

1.3. State Laws of Robotics.

Laws of Robotics:

Law 1:

A robot may not injure a human being, or, through inaction, allow a
human to be harmed.
Law 2:

A robot must obey orders given by humans except when they conflict
with the first law.

Law 3:

A robot must protect its own existence unless that conflicts with the first
or second law.
1.4. Explain Robot Motions.
Robot Motions:

Industrial robots are designed to perform productive work. The work is


accomplished by enabling the robot to move its body, arm and wrist through
a series of motions.

Generally, robot motion is given by the LERT classification system,

where,

L → Linear motion

E → Extension motion

R → Rotational motion

T → Twisting motion.

1.5. Explain Robot Coordinate System.


Industrial robots are available in a wide variety of sizes, shapes and
physical configuration.

There are some major co-ordinate system based on which robots are
generally specified.

The common design of robot co-ordinates are:

1. Polar co-ordinate system.

2. Cylindrical co-ordinate system.

3. Cartesian co-ordinate system.

4. Joined − arm configuration or co-ordinate system.



1.6. What is Work Envelope in a Robot System?

Work Envelope:

★ The volume of the space surrounding the robot manipulator is called


work envelope.

★ The work volume is the term that refers to the space within which
the robot can manipulate its wrist end.

★ The work envelope is determined by the following physical


specification of the robot:

1. Robot’s physical configuration.


2. The size of the body, arm and wrist components.

1.7. What are the types of industrial Robot?

Types of Industrial Robot:

(i) Sequence Robot.

(ii) Playback Robot.

(iii) Intelligent Robot.

(iv) Repeating Robot.

1.8. What are the types of Robots based on Physical configuration?

Based on physical configuration:

(i) Cartesian co-ordinate configuration.

(ii) Cylindrical co-ordinate configuration.

(iii) Polar co-ordinate configuration.

(iv) Joined arm configuration.



1.9. What are the types of Robots based on degrees of freedom?

Degrees of freedom:

(i) Single degree of freedom.

(ii) Two degree of freedom.

(iii) Three degree of freedom.

(iv) Six degree of freedom.

1.10. What are the common Robot Specification?

Robot Specification:

The common robot specifications are given as below:

1. Spatial resolution.

2. Accuracy.

3. Repeatability.

4. Compliance.

5. Pitch.

6. Yaw.

7. Roll.

8. Joint Notation.

9. Speed of motion.

10. Pay Load.



1.11. Explain 3 DOF wrist assembly.


Three degree of freedom wrist assembly:

To establish the orientation of the object, we can define three degrees


of freedom for the robot’s wrist as shown in Fig. The following is one
possible configuration for a three DOF, wrist assembly

1. Roll: This DOF, can be accomplished by a T-type joint to rotate the


object about the arm axis.

2. Pitch: This involves the up-and-down rotation of the object, typically


by means of a type R joint.

3. Yaw: This involves right-to-left rotation of the object, also


accomplished typically using an R-type joint.

1.12. How Robots Physical configuration is described?

Joint Notation scheme:

★ The physical configuration of the robot manipulator can be described


by means of joint notation scheme

★ This notation scheme is given by using the joints L, R, T, V.


★ The joint notation scheme permits the designation of more or fewer
joints than the three typical of the basic configurations.

★ Joint notation scheme can also be used to explore other possibilities


for configuring robots, beyond the common four types LVRT.
The basic notation scheme is given in Table .
Table : Joint Notation Scheme

Robot co-ordinate Joint Notation


1. Polar co-ordinate TRL
2. Cylindrical co-ordinate TLL, LTL, LVL
3. Cartesian co-ordinate robot LLL
4. Joined arm configuration TRR, VVR

1.13. What are the major components of Robots?


A robot has six major components, they are as follows.
1. Power source
2. Controller
3. Manipulator
4. End effector
5. Actuator
6. Sensors.

1.14. What is end effector?


End effector:

The end-effector is mounted on the wrist and it enables the robot to


perform various tasks.
The common end effectors are:
1. Tools

2. Gripper.

1.15. What is sensors?


Sensors:

A sensor is a device that converts a physical phenomenon into an
electrical signal.

Sensors provide both internal and external feedback in robots.

Internal feedback → Temperature, pressure can be checked.

External feedback → Environmental feedback can be analysed.

A sensor is an element that produces a signal relating to the
quantity being measured.

1.16. What are the benefits of Robot?


Benefits of Robot:

1. Increased accuracy.

2. Increased applications.

3. Rich in productivity.

4. Reduced labour charges.

5. Reduces scrap and wastage.

*********

CHAPTER – 2: ROBOT DRIVE SYSTEMS AND


END EFFECTORS

2.1. What are the types of actuators?

Types of Actuators or Drives:

There are many types of actuators available, certain types are as follows.

1. Pneumatic actuators.

2. Hydraulic actuators.

3. Electric motors.

(i) AC servomotor.

(ii) DC servomotor.

(iii) Stepper motor.

(iv) Direct drive electric motors.

2.2. What are the types of D.C motors?

Types of D.C Motors:

The common type of D.C (Direct Current) motors are:

(i) Permanent Magnet (PM) D.C motor.

(ii) Brushless permanent Magnet D.C motors.

2.3. What are the advantages of brushless permanent magnet D.C


motors?

Advantages:

★ Reduced rotor inertia.

★ They are lighter in weight.

★ More durable.

★ Motors are less expensive.


★ The absence of brushes reduces the maintenance cost.
★ They have better heat dissipation, heat being more easily lost from
the stator than the rotor.

★ Available in small sizes with compact power.


2.4. What is hybrid stepper motor?
Hybrid stepper motor:

★ Hybrid stepper motor is a combination of variable reluctance type and


permanent magnet type.

★ The stator may have 8 salient poles energized by two-phase
windings.

★ The rotor is a cylindrical magnet axially magnetized.


★ The step angle varies from 0.9° to 5°. The popular step angle is 1.8°.
★ The motor speed depends on the rate at which pulses are applied.
★ The direction of rotation depends upon the order of energizing the
coils in the first instance.

★ The angular displacement depends on the total number of pulses.


2.5. Classify the grippers.
Classification of Grippers:

1. Mechanical grippers.
2. Magnetic grippers.
3. Vacuum grippers.
4. Adhesive grippers.
5. Hooks, scoops and other miscellaneous devices.

2.6. What are the types of mechanical gripper?


Types of mechanical gripper:

1. Two fingered gripper.

2. Three fingered gripper.

3. Multifingered gripper.

4. Internal gripper.

5. External gripper.

2.7. What are the advantages of magnetic grippers?


Advantages:

1. Variations in part size can be tolerated.

2. Pickup times are very fast.

3. They have ability to handle metal parts with holes.

4. Only one surface is needed for gripping.

*********

CHAPTER – 3: SENSORS AND MACHINE VISION

3.1. What is the use of sensors?

Sensors:

In robotics sensors are used for both internal feedback control and
external interaction with the outside environment.
A sensor is a transducer used to make a measurement of a physical
variable.
3.2. Classify Sensors.

Classification of sensors

3.3. What are the types of encoder?

3.4. What is LVDT?

Linear Variable Differential Transformer (LVDT):


A linear variable differential transformer is the most widely used
displacement transducer.
It is actually a transformer whose core moves along with the distance
being measured and that outputs a variable analog voltage as a result of this
displacement.
3.5. What is Resolver?
Resolver:
Resolvers are very similar to LVDTs in principle, but they are used to
measure angular motion.
A resolver is also a transformer, where the primary coil is connected to
the rotating shaft and carries an alternating current through slip rings as shown
in Fig.

3.6. Explain Potentiometer

Potentiometer:

Potentiometers are analog devices whose output voltage is proportional


to the position of the wiper.

A potentiometer converts position information into a variable voltage


through a resistor.

As the wiper on the resistor moves due to a change in position, the
proportion of the resistance before or after the point of contact, compared
with the total resistance, varies.

Potentiometers are of two types:


1. Rotary

2. Linear
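The proportional relationship described above can be illustrated with a short Python sketch of an ideal linear potentiometer used as a voltage divider (the function name and the normalized wiper position are choices made here for illustration):

```python
def pot_voltage(v_supply, x):
    """Ideal linear potentiometer used as a voltage divider: the output
    voltage is proportional to the wiper position x (0.0 .. 1.0)."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("wiper position must lie between 0 and 1")
    return v_supply * x

# With a 10 V supply and the wiper a quarter of the way along the track:
print(pot_voltage(10.0, 0.25))  # → 2.5
```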

3.7. What is velocity sensor?

Velocity sensor:

Velocity sensors are used to measure velocity or speed, either by taking
consecutive position measurements at known intervals and computing the time
rate of change of the position values, or by measuring it directly based on
other principles.
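The position-differencing approach above can be sketched as follows, assuming samples taken at a fixed interval dt (an illustrative parameter name):

```python
def velocities(positions, dt):
    """Velocity estimates from consecutive position samples taken at a
    fixed, known time interval dt (backward finite difference)."""
    return [(p1 - p0) / dt for p0, p1 in zip(positions, positions[1:])]

# Joint positions sampled every 0.5 s:
print(velocities([0.0, 0.5, 1.5, 3.0], 0.5))  # → [1.0, 2.0, 3.0]
```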

3.8. What is a tachometer?

Tachometer:

A tachometer is used to directly find the velocity at any instant of time,
without much computational load.

This measures the speed of rotation of an element.



3.9. Explain Hall-Effect sensor.


Hall-Effect Sensor:

The Hall-Effect sensor can also be used as a velocity measuring sensor.

If a flat piece of conductor material, called the Hall chip, has a
potential difference applied across its two opposite faces as shown in Fig., the
voltage across the perpendicular faces is zero. If a magnetic field is imposed
at right angles to the conductor, a voltage is generated on the two other
perpendicular faces.

The higher the field value, the higher the voltage level.


If one provides a ring magnet, the voltage produced is proportional to
the speed of rotation of the magnet.
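Assuming a ring magnet that produces a fixed number of pulses per revolution (an illustrative assumption), the conversion from Hall pulses to rotational speed can be sketched as:

```python
def speed_rpm(pulse_count, pulses_per_rev, interval_s):
    """Convert Hall-sensor pulses counted over a time interval into
    rotational speed in revolutions per minute."""
    revolutions = pulse_count / pulses_per_rev
    return revolutions / interval_s * 60.0

# 200 pulses in 0.5 s from a ring magnet giving 4 pulses per revolution:
print(speed_rpm(200, 4, 0.5))  # → 6000.0
```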

3.10. What are the types of force sensors?

The types of force sensors are:

1. Strain gauge.

2. Piezoelectric switches.

3. Microswitches.

3.11. What is the use of external sensors?

External Sensors:

External sensors are used to learn about the robot’s environment,


especially the objects being manipulated.

External sensors can be divided into contact and non-contact categories.

3.12. What is the use of Torque sensor?

Torque Sensor:

Torque sensors are primarily used for measuring the reaction forces
developed at the interface between mechanical assemblies.

The principal approaches for doing this are joint and wrist sensing.

3.13. What is the use of Touch sensors?

Touch Sensors:

Touch sensors are used in robotics to obtain information associated with
contact between a manipulator hand and objects in the workspace.

Touch sensors are used for,

1. Object location and recognition.

2. To control the force exerted by a manipulator on a given object.



3.14. What are the non-contact types sensors?

Non Contact type:

Non contact sensors rely on the response of a detector to variations in


an acoustic or an electromagnetic radiation.
The common non-contact sensors are:
1. Range sensor.

2. Proximity sensor.

3. Ultrasonic sensor.

4. Laser sensor.

3.15. What is the use of proximity sensors?

Proximity Sensors:

Proximity sensors have a binary output which indicates the presence of


the object within a specified distance interval.

Proximity sensors are used in robotics for near-field work in connection


with object grasping or avoidance.

3.16. List the commonly used proximity sensors.

The common proximity sensors are:

1. Inductive Sensors.

2. Hall effect Sensor.

3. Capacitive Sensor.

4. Ultrasonic Sensor.

5. Optical proximity Sensor.



3.17. What is machine vision?


Machine Vision:

Machine vision is an important sensor technology with potential
applications in many industrial operations.
Machine vision is concerned with the sensing of vision data and its
interpretation by a computer.
The typical vision system consists of a camera, digitizing hardware, a
digital computer, and the hardware and software necessary to interface them.
The operation of a vision system consists of the following three functions.
1. Sensing and digitizing image data.
2. Image processing and analysis.
3. Application.

3.18. What are the applications of machine vision?


Robotic applications of machine vision fall into three broad categories
listed below:
1. Inspection.

2. Identification.

3. Visual servoing and navigation.

3.19. What is Bin Picking?


Bin Picking:

Bin picking involves the use of a robot to grasp and retrieve randomly
oriented parts out of a bin or similar container.
The application is complex because parts will be overlapping each
other.
The vision system must first recognize a target part and its orientation
in the container and then it must direct the end effector to a position to permit
grasping and pick-up.
*********

CHAPTER – 4: ROBOT KINEMATICS

4.1. What is Forward Kinematics?


If the position and orientation of the end effector are derived from the
given joint angles (θ1 , θ2 , …) and link parameters (L1 , L2 , …), then the
approach is called forward kinematics (Fig.).

Forward Kinematics

4.2. What is Reverse Kinematics?


If the joint angles (θ1 , θ2 , …) and link parameters (L1 , L2 , …) of the
robot are derived from the position and orientation of the end effector, then
the approach is called reverse kinematics (or) inverse kinematics (Fig.).

Reverse (Inverse) Kinematics
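For the two-link planar arm whose forward-kinematic equations appear in Q.4.7, inverse kinematics has a well-known closed form. A minimal Python sketch (the link lengths and the choice of the elbow-down branch are illustrative):

```python
import math

def forward_kinematics(t1, t2, l1, l2):
    """xe, ye of a 2-link planar arm (the equations of Q.4.7)."""
    return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
            l1 * math.sin(t1) + l2 * math.sin(t1 + t2))

def inverse_kinematics(x, y, l1, l2):
    """Closed-form inverse kinematics (elbow-down branch):
    returns (theta1, theta2) for a reachable point (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    theta2 = math.atan2(math.sqrt(max(0.0, 1.0 - c2 * c2)), c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Round trip: forward kinematics, then inverse, recovers the joint angles.
x, y = forward_kinematics(0.5, 0.7, 1.0, 1.0)
print(inverse_kinematics(x, y, 1.0, 1.0))
```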

4.3. What is the use of homogeneous transformation?


When a larger number of manipulator joints is to be analysed, a single
general method is needed to solve the kinematic equations; for this,
homogeneous transformations are used.

4.4. Show the rotation matrix about x-axis.


1 0 0 
 0 cos θ − sin θ 
 
 0 sin θ cos θ 
 
4.5. Show the rotation matrix about y-axis?
    [  cos θ    0     sin θ   ]
    [    0      1       0     ]
    [ − sin θ   0     cos θ   ]
4.6. Show the rotation matrix about z-axis?
    [ cos θ   − sin θ    0    ]
    [ sin θ     cos θ    0    ]
    [   0         0      1    ]

4.7. What is Jacobian Matrix?
The kinematic equations relating the end-effector coordinates xe and ye
to the joint displacements θ1 and θ2 are given by

xe (θ1 , θ2) = L1 cos θ1 + L2 cos (θ1 + θ2)

ye (θ1 , θ2) = L1 sin θ1 + L2 sin (θ1 + θ2)

Small movements of the individual joints at the current position, and the
resultant motion of the end-effector, can be obtained by the total derivatives
of the above kinematic equations:

    dxe = (∂xe / ∂θ1) dθ1 + (∂xe / ∂θ2) dθ2

    dye = (∂ye / ∂θ1) dθ1 + (∂ye / ∂θ2) dθ2

where xe , ye are functions of both θ1 and θ2, hence two partial derivatives
are involved in the total derivatives. In vector form, the above equations can
be reduced to

    dxe = J ⋅ dq

where

    dxe = [ dxe , dye ]^T ,   dq = [ dθ1 , dθ2 ]^T

and J is a 2 by 2 Jacobian matrix given by

        [ ∂xe / ∂θ1    ∂xe / ∂θ2 ]
    J = [                        ]
        [ ∂ye / ∂θ1    ∂ye / ∂θ2 ]
The matrix J comprises the partial derivatives of the functions
xe and ye with respect to the joint displacements θ1 and θ2. The matrix J is
called the Jacobian matrix. Since most robot mechanisms have multiple
active joints, a Jacobian matrix is needed for describing the mapping of the
vectorial joint motion to the vectorial end-effector motion.
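Differentiating the kinematic equations above gives the analytic Jacobian of the two-link arm, which can be verified against a finite-difference approximation. A minimal sketch (the link lengths are illustrative values):

```python
import math

L1, L2 = 1.0, 0.6  # illustrative link lengths

def fk(t1, t2):
    """End-effector position (xe, ye) from the kinematic equations."""
    return (L1 * math.cos(t1) + L2 * math.cos(t1 + t2),
            L1 * math.sin(t1) + L2 * math.sin(t1 + t2))

def jacobian(t1, t2):
    """Analytic partial derivatives of xe, ye w.r.t. theta1, theta2."""
    s1, c1 = math.sin(t1), math.cos(t1)
    s12, c12 = math.sin(t1 + t2), math.cos(t1 + t2)
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def jacobian_numeric(t1, t2, h=1e-6):
    """Central-difference approximation of the same Jacobian."""
    cols = []
    for i in range(2):
        dp, dm = [t1, t2], [t1, t2]
        dp[i] += h
        dm[i] -= h
        fp, fm = fk(*dp), fk(*dm)
        cols.append([(fp[0] - fm[0]) / (2 * h), (fp[1] - fm[1]) / (2 * h)])
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]

print(jacobian(0.3, 0.9))
```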

4.8. What is the use of Jacobian?


Jacobian collectively represents the sensitivities of individual
end-effector coordinates to individual joint displacements. This sensitivity
information is needed in order to coordinate the multi dof joint displacements
for generating a desired motion at the end-effector.

Let the two joints of the robot arm move at joint velocities
q̇ = (θ̇1 , θ̇2)^T, and let ve = (ẋe , ẏe)^T be the resultant end-effector velocity
vector. The Jacobian provides the relationship between the joint velocities and
the resultant end-effector velocity. Dividing equ. (5) by the infinitesimal time
increment dt yields

    dxe / dt = J (dq / dt) ,   or   ve = J ⋅ q̇

Thus the Jacobian determines the velocity relationship between the


joints and the end-effector.

4.9. What is Singularities?


Most manipulators have values of q where the Jacobian becomes
singular. Such locations are called singularities of the mechanism or simply
singularities. All manipulators have singularities at the boundary of their
workspace, and most have loci of singularities inside their workspace.
Singularities are divided into two categories:

1. Workspace-boundary singularities occur when the manipulator is


fully stretched out or folded back on itself in such a way that the
end-effector is at or very near the boundary of the workspace.

2. Workspace-interior singularities occur away from the workspace
boundary; they are generally caused by a lining up of two or more
joint axes.
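For the two-link planar arm of Q.4.7, the determinant of the Jacobian works out to L1 L2 sin θ2, so the Jacobian becomes singular exactly when θ2 = 0 or π — the arm fully stretched out or folded back, a workspace-boundary singularity. A quick numerical check (the link lengths are illustrative):

```python
import math

def det_jacobian(t1, t2, l1=1.0, l2=0.8):
    """Determinant of the 2-link planar-arm Jacobian.
    Analytically this equals l1 * l2 * sin(t2)."""
    s1, c1 = math.sin(t1), math.cos(t1)
    s12, c12 = math.sin(t1 + t2), math.cos(t1 + t2)
    j11, j12 = -l1 * s1 - l2 * s12, -l2 * s12
    j21, j22 =  l1 * c1 + l2 * c12,  l2 * c12
    return j11 * j22 - j12 * j21

print(det_jacobian(0.4, 0.0))          # arm fully stretched: singular (≈ 0)
print(det_jacobian(0.4, math.pi / 2))  # well-conditioned configuration
```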

4.10. What are the 2 methods to obtain equations of motion in


manipulator dynamics?

Two methods are used to obtain the equations of motion: the


Newton-Euler formulation, and the Lagrangian formulation. The
Newton-Euler formulation is derived from Newton’s Second Law of Motion,
which describes dynamic systems in terms of force and momentum. The
equations incorporate all the forces and moments acting on the individual robot
links, including the coupling forces and moments between the links. The
equations obtained from the Newton-Euler method include the constraint forces
acting between adjacent links.

In the Lagrangian formulation, the system’s dynamic behavior is


described in terms of work and energy using generalized coordinates. All the
workless forces and constraint forces are automatically eliminated in this
method. The resultant equations are generally compact and provide a
closed-form expression in terms of joint torques and joint displacements.
Furthermore, the derivation is simpler and more systematic than in the
Newton-Euler method.

4.11. Explain Newton’s − Euler equation.


Newton’s Equation in Simple Format:

Consider a rigid body whose center of mass is accelerating with
acceleration V̇. In such a situation, the force F acting at the center of mass
and causing this acceleration is given by Newton’s equation

    F = m V̇ ,

where m is the total mass of the body and V̇ is the acceleration.

Euler’s Equation in Simple Format:

Consider a rigid body rotating with angular velocity ω and with angular
acceleration ω̇. In such a situation, the moment N, which must be acting on
the body to cause this motion, is given by Euler’s equation

    N = I ω̇ + ω × I ω ,

where I is the inertia tensor of the body.
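Euler’s equation can be evaluated directly once the inertia tensor is known. A minimal Python sketch with plain lists (the diagonal inertia tensor and the motion values are illustrative numbers):

```python
def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def euler_moment(inertia, omega, omega_dot):
    """N = I * omega_dot + omega x (I * omega)."""
    i_wdot = mat_vec(inertia, omega_dot)
    gyroscopic = cross(omega, mat_vec(inertia, omega))
    return [i_wdot[k] + gyroscopic[k] for k in range(3)]

I = [[1.0, 0, 0], [0, 2.0, 0], [0, 0, 3.0]]  # illustrative inertia tensor
print(euler_moment(I, [1.0, 1.0, 0.0], [0.0, 0.0, 1.0]))  # → [0.0, 0.0, 4.0]
```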

4.12. Explain kinetic energy in Lagrangian formation of manipulator


dynamics.
The kinetic energy of the ith link, ki , can be expressed as

    ki = (1/2) mi Vi^T Vi + (1/2) ωi^T Ii ωi

where (1/2) m V² is the kinetic energy due to the linear velocity of the link
and (1/2) I ω² is the kinetic energy due to the angular velocity of the link.
The total kinetic energy of the manipulator is the sum of the kinetic energy
in the individual links, that is,

          n
    k =   ∑   ki .
        i = 1
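For planar links, each term above reduces to (1/2)mv² + (1/2)Iω², and the summation can be sketched as follows (all masses, inertias and velocities are illustrative numbers):

```python
def link_kinetic_energy(m, v, inertia, omega):
    """k_i = 1/2 m v^2 + 1/2 I w^2 for a planar link."""
    return 0.5 * m * v ** 2 + 0.5 * inertia * omega ** 2

def total_kinetic_energy(links):
    """k = sum of k_i over all links; each link is (m, v, I, w)."""
    return sum(link_kinetic_energy(*link) for link in links)

links = [(2.0, 1.0, 0.5, 2.0),   # link 1: m, v, I, omega
         (1.0, 2.0, 0.1, 1.0)]   # link 2
print(total_kinetic_energy(links))
```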

4.13. What is Lagrangian equation?


The Lagrangian dynamic formulation provides a means of deriving the
equations of motion from a scalar function called the Lagrangian, which is
defined as the difference between the kinetic and potential energy of a
mechanical system. The Lagrangian of a manipulator is

    L = K − U

where K is the total kinetic energy and U is the total potential energy.

4.14. What is Manipulator kinematics?


Kinematics is the science of motion without considering the forces that
cause it. In kinematics, one studies the position, velocity, acceleration, and all
higher-order derivatives of the position variables. Hence, the study of the
kinematics of manipulators refers to all the geometrical and time-based
properties of the motion.

4.15. What is Denavit − Hartenberg notation?


Any robot is described kinematically by four quantities for each link.
Two describe the link itself, and two describe the link’s connection to a
neighboring link. In a revolute joint, θi is called the joint variable, and the
other three quantities would be fixed link parameters. For prismatic joints,
di is the joint variable, and the other three quantities are fixed link parameters.
The definition of mechanisms by means of these quantities is called the
Denavit − Hartenberg notation.
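The four quantities combine into the standard 4 × 4 link transform A = Rot_z(θ)·Trans_z(d)·Trans_x(a)·Rot_x(α). A minimal sketch; chaining two such transforms for a planar two-link arm reproduces the forward-kinematic equations of Q.4.7 (link lengths and angles are illustrative):

```python
import math

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between adjacent links from the four
    Denavit-Hartenberg parameters."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ ct, -st * ca,  st * sa, a * ct],
            [ st,  ct * ca, -ct * sa, a * st],
            [0.0,       sa,       ca,      d],
            [0.0,      0.0,      0.0,    1.0]]

def mat_mul(x, y):
    """Multiply two 4x4 matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Two revolute links in a plane: d = 0, alpha = 0, a = link length.
t1, t2, l1, l2 = 0.5, 0.7, 1.0, 1.0
T = mat_mul(dh_transform(t1, 0, l1, 0), dh_transform(t2, 0, l2, 0))
print(T[0][3], T[1][3])  # end-effector x, y position
```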

*********

CHAPTER – 5: IMPLEMENTATION AND


ROBOT ECONOMICS

5.1. What is RGV?


Rail Guided Vehicle (RGV) is a fast, flexible and easily installed material
handling system, which has separate input/output stations allowing multiple
operations to be performed at once.

5.2. What are the essential features of RGV?

★ Efficient automatic controlled material flow.


★ Has the ability to sort and collect goods for the channel conveyor of
an automated storage and retrieval system.

★ Minimizes potential injuries to employees.


★ RGV, together with the MES (Manufacturing Execution System), can
sort the goods based on the purchase order.

5.3. What is AGV?


An automated guided vehicle (AGV) is a mobile robot that follows
markers or wires in the floor, or uses vision or lasers.

AGVs can tow objects behind them in trailers to which they can
autonomously attach. The trailers can be used to move raw materials or
finished products.

5.4. Mention the essential components of AGV.


The essential components of AGV are,

(i) Mechanical structure

(ii) Driving and steering mechanism actuators

(iii) Servo controllers

(iv) On board computing facility



(v) Servo amplifier

(vi) Feedback components

(vii) On board power system

5.5. What are the advantages of using an AGV?


(i) AGV can be controlled and monitored by computers.

(ii) On a long run, AGVs decrease labour costs.

(iii) They are unmanned and hence can be used in extreme/hazardous


environments.

(iv) They are compatible with production and storage equipment.

(v) They can be used to transport hazardous substances.

5.6. Write some applications of AGV?


(i) Repetitive movement of materials over a distance

(ii) Regular delivery of stable loads

(iii) Medium throughput/volume

(iv) When on-time delivery is critical and late deliveries are causing
inefficiency

(v) Operations with at least two shifts

5.7. List out the common industries which use AGV.


(a) Manufacturing

(b) Paper and print

(c) Food and beverage

(d) Hospital

(e) Warehousing

(f) Automotive

(g) Theme parks


(h) Chemical
(i) Pharmaceutical

5.8. List out the types of AGV vehicles.


(a) Towing Vehicles
(b) AGVS Unit Load Vehicles
(c) AGVS Pallet Trucks
(d) AGVS Fork Truck
(e) AGVS Hybrid Vehicles
(f) Light Load AGVS
(g) AGVS Assembly Line Vehicles

5.9. Define wired Guidance technology.


A wire is placed in a slot which is about 1 inch from the surface. This
wire transmits a radio signal which is received by the AGV. By means of
this signal, the AGV is guided along the path where the wire is installed. The
sensor constantly detects its relative position with the signal it receives from
the wire. This feedback loop regulates the steering of the AGV along the
path of the wire.
5.10. What is meant by Guide tape technology?
Instead of a wire, a tape is used for guidance. This tape is either a
coloured tape or a magnetic tape. Both have a way to send and receive signals
to control the vehicle. These vehicles are called carts and hence the system is
called AGC (Automated Guided Cart).
5.11. What is meant by Inertial Gyroscopic Navigation?
AGV is guided by the principle of inertial gyroscopic navigation. A
computer control directs the path of the AGV, the transponders respond
accordingly to changing directions of the vehicle. This system can be used in
extreme conditions.

5.12. Define vision guidance.

With the advent of technology, vision-guided systems were developed
using evidence grid technology. This technology uses a camera to record the
features along the path of the vehicle. This path is mapped with 360-degree
images on a 3D map.

5.13. Define Geoguidance.

In this system, the AGV recognizes the environment and establishes its
location in real time. A simple example of this system would be a forklift
equipped with Geoguidance system which can handle loading and unloading
of racks in the facility.

5.14. What is meant by Differential Speed Control?

This type of steering control is used in tanks. Here, there are two
independent drive wheels. To turn the AGV, the two drive wheels are
programmed to move at different speeds. To move the AGV forwards or
backwards, they are programmed to move at the same speed.
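The rule above follows standard differential-drive kinematics: with wheel speeds v_r and v_l and wheel track W (symbols chosen here for illustration), the forward speed is (v_r + v_l)/2 and the turning rate is (v_r − v_l)/W. A minimal sketch:

```python
def differential_drive(v_right, v_left, track_width):
    """Return (forward_speed, turn_rate) of a differential-drive AGV.
    Equal wheel speeds move it straight; unequal speeds turn it."""
    forward = (v_right + v_left) / 2.0
    turn_rate = (v_right - v_left) / track_width  # rad/s, +ve = left turn
    return forward, turn_rate

print(differential_drive(1.0, 1.0, 0.5))   # → (1.0, 0.0)  straight line
print(differential_drive(1.0, -1.0, 0.5))  # → (0.0, 4.0)  spin in place
```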

5.15. List out the ways to decide the path.

The ways to decide the path are: (i) Frequency select mode, (ii) Path
select mode, and (iii) Magnetic tape mode.

(i) Frequency select mode: This is used in the system where AGV is
controlled by means of wire. This wire is beneath the floor. It guides the AGV
by sending a frequency. When there is change in direction of the path, there
is corresponding change in frequency of the wire. The AGV detects this change
in frequency and makes the path change. Installing wires under the floor is
costly; hence, making modifications to the path is also costly.

(ii) Path Select Mode: In this system, the AGV path is programmed by
programmers. In real time, the AGV uses its sensors to detect its movement
and changes speed and direction as per the programming. This method
requires employing programmers to modify path parameters.

(iii) Magnetic Tape Mode: The magnetic tape is laid on the surface of the
floor. It provides the path for the AGV to follow. The strips of the tape in
different combinations of polarity, sequence, and distance, guide the AGV to
change lane and speed up or slow down, and stop.

5.16. What is vehicle system management?

To keep control and track of all the AGVs a System Management is


required. There are three types of System Management which are used to
control AGVs and they are

(a) Locator panel

(b) CRT Display

(c) Central logging

5.17. What are the methods to control the traffic?

When several AGVs operate together, path clashes must be avoided. To
achieve that, proper traffic control is a must.

Some common traffic control methods of AGV are given here

(a) Zone Control

(b) Forward Sensing Control

(c) Combinational control

5.18. What is the role of warning and alarm light?

When the AGV is approaching a turn, the warning lights function as


directional signals to alert personnel in the area of the AGV’s intention to
branch right or left on the Guide path. When the AGV goes into an alarm
mode, the Alarm Lights blink to indicate an alarm.

5.19. What is the function of Emergency Stop Button?

Emergency stop buttons are provided on each AGV. When activated,


the AGV enters an emergency stop state and all motion capable equipment
will become inactive.

5.20. Define Collision Avoidance System?

The non-contact collision avoidance system on the AGV can utilize a


number of different laser sensors mounted on the front, rear, side, and upper
locations of the AGV. When the AGV is travelling on the Guide path, this
system will detect an obstacle (such as a person) in any of the coverage
locations.

5.21. Define implementation of Robots?

The implementation of robotics in an organization requires engineering
expertise and, more importantly, the involvement of the management. The
staff involved, i.e. managers, engineers, shop-floor operators and maintenance
personnel, should also be consulted.

5.22. What are the steps involved in Implementation?

A logical approach is necessary for the successful implementation of this


technology. The following are the steps involved in the approach.

★ Making a start − Initial familiarization with the technology.

★ The plant survey − to identify potential applications.

★ Selection of an application(s).

★ Robot(s) selection for the application(s).

★ Thorough economic analysis and capital authorization.

★ Planning and engineering the installation.

★ Installation.

5.23. What are the Parameters included for the selection of robot?

★ The degrees of freedom.


★ The type of drive and control system.
★ Sensory capability.
★ Programming features.
★ Accuracy and precision requirements.
★ Load capacity.
5.24. What is meant by thorough economic analysis?

This analysis is of considerable importance in most companies as


management usually decides whether to install the project on the basis of this
analysis.
The economic analysis would evaluate the probable financial benefits
from the project.

5.25. What are the main causes of accidents?

The causes of accidents involving robots can be divided into three
categories:

★ Engineering errors.
★ Human worker errors.
★ Poor environmental conditions.

5.26. Define a Deadman switch?


Deadman switch is a trigger or toggle switch device usually located on
the teach pendant which requires active pressure to be applied in order to
drive the manipulator. If the pressure is removed from the switch, the device
springs back to its neutral position which stops all robot movement.

5.27. Define safety monitoring?


Safety monitoring involves the use of sensors to indicate conditions or
events that are unsafe for humans as well as for the equipment in the cell.
Sensors ensure that the components are present and loaded correctly or
that the tooling is operating properly.
5.28. What are the levels in the sensor system?
The National Bureau of Standards has divided the sensor system into
three levels as:
Level 1 : Perimeter penetration detection around robot workstation.
Level 2 : Intruder detection within the robot work-cell.
Level 3 : Intruder detection in the immediate vicinity of the robot
i.e. safety skin.
5.29. Define Economic analysis of Robots?
A robot manufacturing system requires considerable capital investment
and hence economic analysis is of considerable importance.
An economic analysis is basically a systematic examination of a complex
business activity which will help in making a decision about a capital investment.

5.30. What are the methods of economic analysis?


The different methods of economic analysis for analysing investments
are

★ Payback method.
★ EUAC − Equivalent Uniform Annual Cost method.
★ ROI − Return on Investment method.

5.31. What is Payback Method?


The duration or time period taken for the net accumulated cash flow to
equal the initial investment in the development of a robot is called the payback
method or payback period.
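The payback computation can be sketched by accumulating the cash flows until they cover the initial investment, interpolating within the break-even year (all figures are illustrative):

```python
def payback_period(investment, annual_cash_flows):
    """Years needed for cumulative cash flow to equal the investment;
    returns None if the flows never cover it."""
    cumulative = 0.0
    for year, flow in enumerate(annual_cash_flows, start=1):
        if cumulative + flow >= investment:
            # interpolate within the year the break-even occurs
            return year - 1 + (investment - cumulative) / flow
        cumulative += flow
    return None

# An investment of 100 (in any currency unit), recovering 40 per year:
print(payback_period(100.0, [40.0, 40.0, 40.0]))  # → 2.5
```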

5.32. Define EUAC method?


The equivalent uniform annual cost method is used to convert the total
cash flows and investments into their equivalent uniform costs over the
expected time of developing a robot.
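One common way to express the EUAC multiplies the initial investment by the capital-recovery factor i(1 + i)^n / ((1 + i)^n − 1) and adds the uniform annual operating cost. A sketch under that assumption (the rate and figures are illustrative):

```python
def euac(investment, annual_cost, interest, years):
    """Equivalent Uniform Annual Cost: the investment spread over
    `years` at rate `interest` via the capital-recovery factor, plus
    the recurring annual operating cost."""
    crf = interest * (1 + interest) ** years / ((1 + interest) ** years - 1)
    return investment * crf + annual_cost

# 1000 invested for 5 years at 10% with a 50/year running cost:
print(round(euac(1000.0, 50.0, 0.10, 5), 2))
```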

5.33. Define ROI method?


The Return On Investment (ROI) is used to determine the return ratio
or rate of return based on the anticipated expenditures and revenues. The rate
of return is the effective annual interest rate that makes the present worth of
the investment zero. Alternatively, it is the effective annual interest rate that
makes the benefits and costs equal.
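The rate that makes the present worth of the investment zero can be found numerically; a minimal bisection sketch with illustrative cash flows (it assumes the rate lies between 0 and 100%):

```python
def present_worth(rate, investment, cash_flows):
    """Net present worth: -investment + sum of discounted annual flows."""
    return -investment + sum(f / (1 + rate) ** (t + 1)
                             for t, f in enumerate(cash_flows))

def rate_of_return(investment, cash_flows, lo=0.0, hi=1.0, tol=1e-9):
    """Bisect for the rate at which the present worth becomes zero."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if present_worth(mid, investment, cash_flows) > 0.0:
            lo = mid  # still profitable: the rate can be higher
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# 100 invested, returning 60 at the end of each of two years:
print(rate_of_return(100.0, [60.0, 60.0]))  # ≈ 0.1307
```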

*********

CHAPTER – 6: ROBOT PROGRAMMING

6.1. Define robot program.


A robot program can be considered as a path in space that is to be
followed by the manipulator, combined with peripheral actions that support
the work cycle. It can be defined as a sequence of joint coordinate positions.

6.2. What are different methods of robot programming?

★ Leadthrough programming.
★ Textual (or) computer like programming languages.
★ Off-line programming.
6.3. What is leadthrough programming and how can it be distinguished?
The manipulator is driven through the various motions needed to perform
a given task, recording the motions into the robot’s computer memory for
subsequent playback. Based on the teach procedure, leadthrough programming
can be distinguished into two types:

★ Powered leadthrough, and


★ Manual leadthrough.
6.4. What is a teach pendant?
The teach pendant is a hand-held control box that allows control of each
manipulator joint or of each cartesian degree of freedom. It consists of toggle
switches, dials and buttons which the programmer activates in a coordinated
fashion to move the manipulator to the required positions in the workspace.

6.5. Explain the walkthrough programming method.


The manual leadthrough method or walkthrough method requires the
operator to physically move the manipulator through the motion sequence. It
is convenient for continuous path programming where the motion cycle
involves smooth complex curvilinear movements of the robot arm. Spray

painting and continuous arc welding are examples of this type of robot
application.

In this method since the programmer has to physically grasp the robot
arm and end-effector it could be difficult to move through the motion sequence
in the case of large robots. So, a special programming device which has the
same joint configuration as the actual robot is substituted for the actual robot.
The closely spaced motion points are recorded into the controller memory and
during playback, the path is recreated by controlling the actual robot arm
through the same sequence of points.

6.6. Mention any two advantages and disadvantages of Leadthrough


Programming.

Advantages of Leadthrough Programming:

★ No special programming skills or training required.

★ Easy to learn.

Disadvantages:

★ Difficult to achieve high accuracy and straight line movements or


other geometrically defined trajectories.

★ Difficult to edit unwanted operator moves.

6.7. What is textual programming?

This method of robot programming involves the use of a programming


language similar to computer programming.

These robot languages use offline / online methods of programming


i.e. the program is written off-line with the textual language to define the logic
and sequence while the teach pendant is used to define on-line, the specific
point locations in the workspace.

6.8. List the advantages of textual programming over lead-through


programming.

The advantages of textual programming over lead-through programming


include

★ Extended program logic.

★ Enhanced sensor capabilities.

★ Improved output capabilities for controlling external equipment.

★ Computations and data processing capabilities.

★ Communication with other computer systems.


6.9. Explain off-line programming.

Here, the programs can be developed without needing to use the robot,
i.e. there is no need to physically locate the point positions in the work space
for the robot as required with textual and leadthrough programming. The robot
program can be prepared at a remote computer terminal and downloaded to
the robot controller for execution without interrupting production. This saves
production time lost to delays in teaching the robot a new task. The programs
developed off-line can be tested and evaluated using simulation techniques.

6.10. List the benefits of off-line programming.

★ Higher utilization of the robot and the equipment with which it


operates as off-line programming can be done while the robot is still
in production on the preceding job.

★ The sequence of operations and robot movements can be optimized


or easily improved.

★ Existing CAD data can be incorporated.


★ Enables concurrent engineering and reduces production time.
★ Programs can be easily maintained and modified.

6.11. What are the different methods of defining positions in space?

The different methods by which the programmer moves the manipulator


to the required positions in the workspace are:

★ Joint mode.
★ World coordinate mode (or x − y − z method)
★ Tool coordinate mode.

6.12. What are the reasons for defining points in a robot program?

The two key reasons for defining points in a program are

★ To define a working position for the end effector (like picking up a


part or to perform a spot welding operation).

★ To avoid obstacles (like machines, conveyors and other equipment).

6.13. Illustrate an 8 × 8 robot workspace.

Consider programming a two-axis servo-controlled cartesian robot with
eight addressable points for each axis. Then there would be 64 addressable
points which can be used in any program. The workspace is illustrated in
the Fig.

6.14. What is an addressable point?

An addressable point is one of the available points that can be specified


in the program so that the robot can be commanded to go to that point.

6.15. What are the different methods of interpolation?

The possible methods of interpolation include

★ Joint interpolation.

★ Straight line interpolation.

★ Circular interpolation.

★ Irregular smooth motions.


6.16. What is straight line interpolation?

In straight line interpolation, the robot controller calculates the


sequence of addressable points in space through which the robot wrist end
must pass to achieve a straight line path between two points.
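A rough Python sketch of this idea, assuming a unit-spaced grid of addressable points (the function name and step count are mine, not from the text): positions along the ideal straight line are rounded to the nearest addressable point.

```python
def straight_line_points(start, end, steps):
    """Approximate a straight line between two grid points by
    rounding equally spaced positions to addressable points."""
    (x0, y0), (x1, y1) = start, end
    path = []
    for i in range(steps + 1):
        t = i / steps
        point = (round(x0 + t * (x1 - x0)), round(y0 + t * (y1 - y0)))
        if point not in path:        # skip duplicates caused by rounding
            path.append(point)
    return path

print(straight_line_points((0, 0), (7, 3), 7))
```

The denser the grid of addressable points, the closer this staircase of points follows the ideal straight line.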

6.17. Explain circular interpolation.

In circular interpolation, the programmer is required to define a circle


in the robot’s workspace. This is usually done by specifying three points that
lie along the circle. Then the controller selects a series of addressable points
that lie closest to the defined circle to construct a linear approximation of the
circle. The robot movements consist of short straight line segments which look
very much like a real circle if the grid work of addressable points is dense
enough.
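The three-point construction can be sketched in Python: compute the circumcircle through the three taught points, then approximate the circle by short straight segments. This is an illustrative sketch only (function names are mine); snapping the vertices to addressable points would follow the same rounding pattern as straight-line interpolation.

```python
import math

def circle_from_points(p1, p2, p3):
    """Centre and radius of the circle through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1*(y2 - y3) + x2*(y3 - y1) + x3*(y1 - y2))
    ux = ((x1**2 + y1**2)*(y2 - y3) + (x2**2 + y2**2)*(y3 - y1)
          + (x3**2 + y3**2)*(y1 - y2)) / d
    uy = ((x1**2 + y1**2)*(x3 - x2) + (x2**2 + y2**2)*(x1 - x3)
          + (x3**2 + y3**2)*(x2 - x1)) / d
    return (ux, uy), math.hypot(x1 - ux, y1 - uy)

def approximate_circle(center, radius, segments):
    """Vertices of a linear (polygonal) approximation of the circle."""
    cx, cy = center
    return [(cx + radius*math.cos(2*math.pi*k/segments),
             cy + radius*math.sin(2*math.pi*k/segments))
            for k in range(segments)]

center, r = circle_from_points((1, 0), (0, 1), (-1, 0))
print(center, r)     # three points on the unit circle: centre (0, 0), radius 1
```

With enough segments, the chain of straight moves between consecutive vertices is visually indistinguishable from a true circle.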

6.18. Write a short note on the WAIT, SIGNAL and DELAY commands.

SIGNAL P − Instructs the robot controller to output a signal


through line P.
WAIT Q − Indicates that the robot should wait at its current
location until it receives a signal on line Q (Q is one
of the input lines)
DELAY X SEC − Indicates that the robot should wait X seconds before
proceeding to the next step.
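The semantics of these three interlock commands can be mimicked with a toy Python controller. This is entirely illustrative: the class, method names and line numbers are invented, not part of any real robot language.

```python
import time

class ToyController:
    """Illustrative model of SIGNAL / WAIT / DELAY semantics."""
    def __init__(self):
        self.output_lines = {}   # output line number -> signal state
        self.input_lines = {}    # input line number -> signal state

    def signal(self, p):         # SIGNAL P: drive output line P high
        self.output_lines[p] = True

    def wait(self, q):           # WAIT Q: block until input line Q goes high
        while not self.input_lines.get(q, False):
            time.sleep(0.001)

    def delay(self, x):          # DELAY X SEC: pause X seconds
        time.sleep(x)

robot = ToyController()
robot.input_lines[5] = True      # external equipment raises input line 5
robot.signal(4)                  # SIGNAL 4
robot.wait(5)                    # WAIT 5 (returns at once here)
robot.delay(0.01)                # DELAY 0.01 SEC
print(robot.output_lines[4])     # True
```

In a real workcell, WAIT would block until the interlocked machine actually raised the input line.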
6.19. What is Branching?

Branching is a method of dividing a program into convenient segments


that can be called one or more times and can be executed during the program.
A branch can be considered as a subroutine that can be executed either by
branching to it at a particular place in the program or by testing an input
signal line to branch to it.

6.20. How can textual programming languages be distinguished?

The textual robot languages provide a variety of structures and


capabilities and can be grouped into three major classes as:

★ First generation languages.


★ Second generation languages.
★ Future generation languages.
6.21. What are the limitations of first generation languages?

The limitations of first generation languages include

★ The inability to specify complex arithmetic computations.


★ The incapability to use complex sensors and sensor data.
★ Limited capacity to communicate with other computers.
★ Incapability of extension for future enhancements.

6.22. Mention few second generation languages.

Some of the commercially available second generation languages include:


AML (A Manufacturing Language), RAIL (High level robot language based
on Pascal), MCL (Modification of APT) and VAL II.

6.23. Illustrate the structure of robot language.

We have discussed in earlier sections that it is difficult to have close


control through pendant teaching and so textual programming is attempted,
where the computer instructions are given following the syntax of a particular
robot language. A robot language must be able to support the programming of the
robot, control the robot manipulator, interface with sensors and equipment, and
support data communications with other computer systems in the company.

6.24. What are the basic modes of operation in a robot operating system?

A robot language operating system has three basic modes of operation as:

★ Monitor or supervisory mode.


★ Run or execute mode.
★ Edit mode.
6.25. Differentiate ‘position’ and ‘location’ in robotics.

Robot locations refer to the data that represents the position and
orientation of the robot tool. A ‘point’ or a ‘position’ is a cartesian reference
in the workspace. A ‘location’ is a point plus an orientation. While executing
a motion instruction, the robot tool point moves to the specified destination
point and the tool frame is oriented to the specified destination orientation.

6.26. What are the methods of defining robot locations in VAL?

VAL has got two possible methods of defining robot locations:

★ Precision points, and

★ Transformations.

6.27. Explain MOVES and DMOVE commands in VAL.

MOVES P1 : Causes the robot to move by straight line interpolation


to the point P1. The suffix S on the statement designates
straight line motion.
DMOVE : The prefix D designates delta, so the statement
represents a delta move or incremental move. An
incremental move is one where the endpoint is defined
relative to the current position of the manipulator rather
than to the absolute coordinate system of the robot. For
instance, the statement DMOVE (4, 125) moves joint 4
from its current position by an increment of 125.
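The absolute-versus-incremental distinction can be shown with a short Python sketch (the function names and joint values are hypothetical, not taken from VAL):

```python
def move_absolute(joints, joint_index, target):
    """MOVE-style command: drive one joint to an absolute value."""
    joints = list(joints)            # copy, leave the input unchanged
    joints[joint_index] = target
    return joints

def move_incremental(joints, joint_index, delta):
    """DMOVE-style command: offset one joint from its current value."""
    joints = list(joints)
    joints[joint_index] += delta
    return joints

current = [0.0, 45.0, 90.0, 10.0, 0.0, 0.0]   # six joint angles (degrees)
print(move_absolute(current, 3, 125.0))        # joint 4 goes to 125
print(move_incremental(current, 3, 125.0))     # joint 4 goes to 10 + 125 = 135
```

The incremental form is convenient when a motion is defined relative to wherever the manipulator happens to be, e.g. stepping along a row of pallet positions.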
6.28. Explain APPRO and DEPART commands in VAL.

APPRO : The approach command moves the gripper from its


current position to within a certain distance of the
pick-up point by joint interpolation. It is useful for
avoiding obstacles such as parts in a tote pan.
DEPART : This statement moves the gripper away from the point
after the pickup is made.
6.29. What are the basic end effector commands?

The basic commands are

OPEN

CLOSE

6.30. Write a short note on the REACT command.

The REACT statement is used to continuously monitor an incoming signal and


to respond to a change in the signal. It serves to illustrate the types of
commands used to interrupt regular execution of the program in response to
some higher priority event (Like when some error or safety hazard occurs in
the workcell).

REACT 11, SAFETY

indicates that the input line 11 is to be continuously monitored. If the current


state of the signal is ‘off’, the signal is monitored for a transition from ‘off’ to
‘on’ and then again ‘off’. When the change is detected, REACT transfers
program control to the subroutine SAFETY.

Usually REACT completes current motion command before interrupting.


In cases where immediate action is required, the following statement is used.

REACTI.

This interrupts robot motion immediately and transfers control to the subroutine
at once.
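A minimal Python sketch of the off-to-on transition detection that REACT performs; the polling function and the sampled signal values are invented for illustration.

```python
def monitor_transitions(samples, handler):
    """Call handler() each time the monitored line goes 'off' -> 'on'."""
    previous = False
    triggers = 0
    for state in samples:
        if state and not previous:   # rising edge detected
            handler()
            triggers += 1
        previous = state
    return triggers

events = []
n = monitor_transitions(
    [False, False, True, True, False, True],   # sampled input line 11
    lambda: events.append("SAFETY"))           # subroutine to branch to
print(n)            # 2 rising edges -> SAFETY invoked twice
```

A REACTI-style variant would additionally abort the motion in progress before invoking the handler.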

*********

CHAPTER – 7: TRAJECTORY GENERATION

7.1. What is meant by Trajectory?


Trajectory (or) path describes the desired motion of a manipulator in
space. Trajectory refers to a time history of position, velocity, and acceleration
for each degree of freedom.

7.2. Define path update rate?

For generating the trajectory, the position (θ), velocity (θ̇) and
acceleration (θ̈) are computed on digital computers at a certain rate, called
the path-update rate. In typical manipulator systems, this rate lies between
60 and 2000 Hz.

7.3. What is meant by via points and path points?


To specify the motion in detail, we should include a path description to
give a sequence of desired via points (intermediate points between the initial
and final positions). Thus for completing the motion, the tool frame must pass
through a set of intermediate positions and orientations as described by the
via points.
The path points includes all the via points plus the initial and final
points.
7.4. Define run time?
At run time, the path generator constructs the trajectory, usually in terms
of θ, θ̇ and θ̈, and feeds this information to the manipulator’s control system.
This path generator computes the trajectory at the path-update rate.
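Chapter 7 of the book uses cubic polynomials for joint-space trajectories; the following is a hedged Python sketch (function name and sample values are mine) of a path generator sampling such a rest-to-rest cubic at a fixed path-update rate.

```python
def cubic_trajectory(theta0, thetaf, tf, update_hz):
    """Sample a rest-to-rest cubic joint trajectory at the path-update rate.
    Returns a list of (position, velocity, acceleration) samples."""
    a2 = 3.0 * (thetaf - theta0) / tf**2      # cubic coefficients for
    a3 = -2.0 * (thetaf - theta0) / tf**3     # zero start/end velocity
    dt = 1.0 / update_hz
    samples = []
    for k in range(int(tf * update_hz) + 1):
        t = k * dt
        pos = theta0 + a2*t**2 + a3*t**3
        vel = 2*a2*t + 3*a3*t**2
        acc = 2*a2 + 6*a3*t
        samples.append((pos, vel, acc))
    return samples

traj = cubic_trajectory(0.0, 90.0, 3.0, 60)   # 0 -> 90 deg in 3 s at 60 Hz
print(len(traj))                               # 181 samples
print(traj[0][1], traj[-1][1])                 # start and end velocity (both ≈ 0)
```

Each sample is exactly the (θ, θ̇, θ̈) triple the answer above says is fed to the control system on every update.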

7.5. What is meant by a collision-free path?

It is important that the robot system be instructed only what the desired
goal point of the manipulator motion is, and that the system itself determine
where and how many via points are required to reach the goal without hitting
any obstacles.

*********

CHAPTER – 8: MANIPULATOR MECHANISM DESIGN

8.1. What are the elements of a robot system?

1. The manipulator, including its internal or proprioceptive sensors,

2. The end-effector, or end-of-arm tooling,

3. External sensors and effectors, such as vision systems and part feeders,
and

4. The controller.

8.2. What are workspace requirements in manipulator design?

In performing tasks, a manipulator has to reach a number of workpieces


or fixtures.

The overall scale of the task sets the required workspace of the
manipulator.

The intrusion of the manipulator itself is an important factor. Depending


on the kinematic design, operating a manipulator in a given application could
require more or less space around the fixtures in order to avoid collisions.

8.3. What are the advantages and disadvantages of Cartesian manipulators?

The advantage of Cartesian manipulators is that the first three joints are
decoupled, which makes them simpler to design and prevents kinematic
singularities due to the first three joints.

The disadvantage is that all of the feeders and fixtures must lie “inside”
the robot. The size of the robot’s support structure limits the size and
placement of fixtures and sensors. These limitations make retrofitting Cartesian
robots into existing workcells extremely difficult.

8.4. What are the benefits of articulated manipulator?

Articulated robots minimize the intrusion of the manipulator structure


into the workspace, making them capable of reaching into confined spaces.
They require much less overall structure than Cartesian robots, making them
less expensive.
8.5. What are the elements used for actuator speed reduction?

Gears, lead screws or ball-bearing screws, flexible bands, cables and belts.

8.6. What are Resolvers?

Resolvers are devices that output two analog signals: the sine of the
shaft angle and its cosine. The shaft angle is computed from the relative
magnitude of the two signals.

Resolvers are often more reliable than optical encoders, but their
resolution is lower. Typically, resolvers cannot be placed directly at the joint
without additional gearing to improve the resolution.
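The angle recovery the answer describes is a two-argument arctangent of the two signals; a small Python sketch, assuming ideal noise-free outputs (the function name is mine):

```python
import math

def resolver_angle(sin_signal, cos_signal):
    """Recover the shaft angle from resolver sine/cosine outputs.
    atan2 uses the relative magnitude *and* the signs of the two
    signals, so the full 0..360 degree range is resolved unambiguously."""
    return math.degrees(math.atan2(sin_signal, cos_signal)) % 360.0

shaft = 210.0                                  # true shaft angle (degrees)
s = math.sin(math.radians(shaft))              # simulated analog outputs
c = math.cos(math.radians(shaft))
print(round(resolver_angle(s, c), 6))          # 210.0
```

Using only the ratio sin/cos would leave a 180-degree ambiguity; the signs of the two signals remove it.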
8.7. What is the use of strain gauges in robots?

Strain gauges are designed to measure forces of contact between a


manipulator’s end effector and the environment that it touches. Strain gauges
are of the semiconductor or metal-foil variety. These strain gauges are bonded
to a metal structure and produce an output proportional to the strain in the
metal.
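The proportionality mentioned above is usually expressed through the gauge factor GF, with ΔR/R = GF · ε; a small illustrative calculation (the numeric values are assumed typical ones, not from the text):

```python
def resistance_change(r_nominal, gauge_factor, strain):
    """Resistance change of a bonded strain gauge:
    delta_R = R * GF * strain  (i.e. delta_R / R = GF * strain)."""
    return r_nominal * gauge_factor * strain

# Typical metal-foil gauge: 120 ohm, GF about 2, strain of 500 microstrain
delta_r = resistance_change(120.0, 2.0, 500e-6)
print(delta_r)    # 0.12 ohm
```

Such small resistance changes are normally read out with a Wheatstone bridge before being interpreted as contact force.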
8.8. What are the places where sensors are placed on a manipulator?

1. At the joint actuators.

2. Between the end effector and last joint of the manipulator.

3. At the “fingertips” of the end effector.

*********
INDEX

A

A.C. motors, 2.15
Acceleration sensors, 3.23
Accuracy, 1.26
Acquisition of images, 3.48
Actuation schemes, 8.9
Actuator, 1.36
Actuator location, 8.9
Actuators, 2.1, 8.12
Adhesive grippers, 2.42
Advantages of using an AGV, 5.7
Analog-to-digital conversion, 3.58
Applications of AGV, 5.8
Articulated, 8.5
Assembly, 1.39

B

Based on application, 1.24
Based on control system, 1.23
Based on movement, 1.23
Based on path control, 1.24
Based on sensory systems, 1.24
Based on types of drive, 1.24
Benefits of robot, 1.36
Bin picking, 3.74
Branching, 6.17

C

Cartesian, 8.4
Cartesian co-ordinate system, 1.16
Classification of grippers, 2.26
Classification of sensors, 3.5
Co-ordinate system, 1.12
Collision-free path planning, 7.10
Compliance, 1.28
Components of AGV, 5.6
Concatenating link transformation, 4.55
Contact type, 3.28
Controller, 1.34
Cubic polynomials, 7.2
Cylindrical, 8.6
Cylindrical co-ordinate system, 1.14

D

DC servomotor, 2.10
Defining a robot program, 6.7
Degrees of freedom, 1.7
Derivation of link transformations, 4.52
Differential relationship, 4.31
Digital camera, 3.50
Distribution, 1.40
Drive system for grippers, 2.26

E

Economic analysis of robots, 5.36
Electrical drives, 2.8
Electromagnetic gripper, 2.38
Encoder, 3.7
End effector, 1.35, 2.24
End effector commands, 6.27
Euler’s equation in simple format, 4.41
External grippers, 2.37
External sensors, 3.28

F

Feature extraction, 3.68
First generation languages, 6.18
Force sensors, 3.24, 8.15
Future generation languages, 6.19

G

Grippers, 2.25

H

Hall-effect sensor, 3.22
Hazardous environment, 1.39
History of robotics, 1.4
Hybrid stepper motor, 2.21
Hydraulic drives, 2.5

I

Identification, 3.73
Image data reduction, 3.63
Image storage, 3.62
Imaging devices, 3.53
Industrial robot, 1.3
Inspection, 1.39, 3.72
Intermediate links in the chain, 4.46
Internal gripper, 2.36

J

Jacobians, 4.31
Jacobians in the force domain, 4.37
Joint notation scheme, 1.29
Joint-space schemes, 7.2
Jointed arm co-ordinate system, 1.17
Jointed arm work envelope, 1.22

L

Laws of robotics, 1.4
Leadthrough programming, 6.2
Lighting techniques, 3.55
Link connection, 4.45
Link description, 4.43
Load capacity, 8.3

M

Machine loading / unloading, 1.38
Machine vision, 3.51
Machining, 1.38
Magnetic gripper, 2.38
Manipulator, 1.34
Manipulator dynamics, 4.38
Manipulator kinematics, 4.43, 4.52
Material handling, 1.37
Mechanical gripper mechanism, 2.28
Mechanical grippers, 2.27
Medical, 1.40
Methods of economic analysis, 5.39
Methods of robot programming, 6.1
Microswitches, 3.27
Motion commands, 6.24
Motion interpolation, 6.10
Multifingered gripper, 2.35

N

Need for robot, 1.37
Newton’s equation in simple format, 4.40
Non-contact type, 3.35
Number of degrees of freedom, 8.2

O

Object recognition, 3.69
Off-line programming, 6.6
Operating system, 6.21
Optical encoder, 3.20
Other algorithms, 3.71
Other safety precautions, 5.36

P

Path decision, 5.19
Path generation at run time, 7.9
Payload, 1.32
Permanent magnet D.C. motor, 2.12
Permanent magnet stepper motor, 2.20
Piezoelectric sensor, 3.26
Pneumatic position sensor, 3.19
Pneumatic power drives, 2.3
Polar co-ordinate system, 1.13
Polar co-ordinate work envelope, 1.21
Position sensing, 8.14
Position sensors, 3.7
Potentiometer, 3.17
Power source, 1.33

R

Rail guided vehicle (RGV), 5.1
Repeatability and accuracy, 8.3
Requirements of sensors, 3.2
Resolver, 3.16
Robot, 1.3
Robot anatomy, 1.6
Robot joints, 1.9
Robot locations, 6.23
Robot motions, 1.7
Robot specification, 1.25
Robotic applications, 3.71
Rotational matrix, 4.16

S

Safety monitoring, 5.34
Second generation languages, 6.18
Segmentation, 3.64
Selection of motors, 2.22
Sensor and interlock commands, 6.29
Sensors, 1.36, 3.1
Singularities, 4.33
Spatial resolution, 1.26
Speed, 8.3
Speed of motion, 1.30
Spherical, 8.6
Spray painting, 1.38
Static forces in manipulators, 4.35
Steering control, 5.18
Stepper motor, 2.17
Stiffness and deflections, 8.12
Strain gauge, 3.25
Structure of robot language, 6.20

T

Tachometer, 3.21
The future of robotics, 1.40
The PUMA 560, 4.55
Translation matrix, 4.14
Types of actuators or drives, 2.2
Types of D.C. motors, 2.12
Types of electrical drives, 2.9
Types of industrial robot, 1.23
Types of mechanical gripper, 2.29
Types of robot, 1.22

V

Vacuum grippers, 2.41
VAL programming, 6.23
Variable reluctance stepper motor, 2.18
Vehicle guidance technologies, 5.15
Vehicle management and safety, 5.20
Velocity sensor, 3.21
Vidicon camera (analog camera), 3.49
Visual servoing and navigation, 3.73

W

Welding, 1.38
Work envelope, 1.19
Workspace, 8.3
Wrists, 8.7

*********
