
Available online at www.sciencedirect.com
ScienceDirect
Procedia Computer Science 112 (2017) 1601–1610

International Conference on Knowledge Based and Intelligent Information and Engineering Systems, KES2017, 6-8 September 2017, Marseille, France

Calibration method for articulated industrial robots


Marwan Ali*, Milan Simic, Fadi Imad
School of Engineering, RMIT University, Melbourne, VIC 3083, Australia

Abstract

Robot calibration is a technique used to increase system positioning accuracy. In our research, we have reviewed self-calibration techniques for articulated industrial robots and proposed an original method to reduce production and maintenance down times. The proposed technique uses an inertial measurement unit (IMU) to measure robot poses, plus ultrasonic triangulation sensors, to increase processing speed and reduce the required computational power. Thanks to all of that, the robot is more responsive, i.e. we have improved speed, reliability and accuracy in determining the orientation of the manipulators. The advantage of this method, compared with vision based calibration, is that it does not need complex steps such as camera calibration, image capture, memory and corner detection. This makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. The proposed technique was applied to a Six Degrees of Freedom (6-DOF) robot and compared to existing calibration methods.
© 2017 The Authors. Published by Elsevier B.V.
Peer-review under responsibility of KES International.
Keywords: Online Calibration, IMU, Ultrasonic, Articulated Robot

1. Introduction
Industrial robots have been used for about 50 years, with their usage becoming increasingly popular in applications such as material handling, welding, assembly and dispensing, in which they have demonstrated high efficiency and speed performances. In applications that require high accuracy, such as prototyping, pre-machining and milling, poor precision has limited their use. Positioning errors are associated with manufacturing and assembly tolerances. Such errors could be significantly reduced by implementing a calibration process. Nowadays, calibration tasks use a substantial number of measurement techniques; nonetheless, the most common approach is to utilize a laser tracker interferometer that measures the position and orientation of the end effector1, 2.

* Corresponding author. Tel.: +61 450-700-333.
E-mail address: Marwan.ali@rmit.edu

1877-0509 © 2017 The Authors. Published by Elsevier B.V.


Peer-review under responsibility of KES International
10.1016/j.procs.2017.08.246

Gong et al3 used a hybrid non-contact optical sensor (a combination of a vision sensor and structured light) attached to the end-effector to implement self-calibration of a 6-DOF robot manufactured by Staubli, based on distance measurement rather than absolute position measurement. Guanglong et al4 proposed an online calibration that required a position marker, an inertial measurement unit (IMU) and hybrid sensors. However, their method required the robot to stop working during the pose measurement.
Robot calibration is more efficient with dynamic pose measurements. Meanwhile, Zhuang5 proposed a self-calibration method for parallel mechanisms, with a case study on the Stewart platform, in which he used the forward and inverse kinematics with six rotary encoders for the three objective functions of parameter identification. On the other hand, Khalil and Besnard6 installed two orthogonally mounted inclinometers as a tool to calibrate a Stewart platform. There are still some limitations of these methods. One of them is that some kinematic parameters are not independent in the error model; hence, the position and/or orientation of the tool on the platform cannot be calibrated.
In the other approach, the motion constraint approach, the mobility of the resultant system is lowered below its inherent degrees of sensing by fixing one or more passive joints or constraining partial DOF of the manipulators, so that the calibration algorithm can be performed7. In their research, Park et al8 lowered the mobility of the tool of a serial manipulator and performed self-calibration by using only the inherent joint sensors in the manipulator. Consequently, this idea was used and extended to calibrate a robot system with a hand-mounted instrumented stereo camera9. However, the position and/or orientation of the tool on the platform cannot be calibrated, and some parameter errors related to the locked passive joints may become unobservable in the calibration algorithm due to the mobility constraints.
A laser pointer could also be used for self-calibration of a robot through a line-based approach10, 11, 12. Within cost restrictions, the absolute accuracy of a robot can be improved to a large extent through the calibration of its geometric parameters using these techniques. However, these systems are not only expensive but also not user-friendly. In order to perform calibration, production has to be halted until the set-up, calibration and programming are completed. This interruption causes undesirable delays in the process cycle and inflicts losses due to the downtime. Therefore, some studies, as in references 13, 14, have proposed autonomous calibration methods which do not require an external sensor, thus eliminating the need for an elaborate, time-consuming setup. Since all industrial robots are equipped with joint sensors, these were used, along with an orientation sensor and a camera fixed on the robot, to measure the tool position and orientation.
The use of self-calibration reduces production downtime. However, the downtime needed for solving kinematic
equations for calibration and programming the kinematic parameters is still present. Hence, some studies as in
references15, 16, 17, 18 and 19 managed to eliminate the occurrence of any production downtime through implementing on-
line calibration using different algorithms and processing techniques. These techniques have proven to be efficient and accurate; yet the use of a visual camera for position tracking, with image processing algorithms running in real time, requires a huge amount of CPU processing power and is relatively slow.
Building on all of that, we introduce the use of ultrasonic triangulation for position tracking and an Inertial Measurement Unit (IMU) for orientation measurement in Six Degrees of Freedom (6-DOF) robot calibration. The
use of ultrasonic triangulation for position tracking increases processing speed and reduces demand for
computational power, therefore allowing the robot to be more responsive.
Shraga and Johann20 presented a method for measuring the relative position and orientation between two mobile robots using a dual binaural ultrasonic sensor system. Kim et al21 introduced a GPS system using ultrasonic sensors for a mobile robot to perceive its location. Ultrasonic sensors (sonars) are commonly used for range measurements, including robotics applications as in22 and 23. This paper is organized as follows: Section 2 provides the kinematic modelling for the ABB IRB120 robot, Section 3 discusses the methods of pose measurement and parameter identification in detail, and finally the discussion and conclusion of the paper are presented in Sections 4 and 5, respectively.
2. Case study - kinematic model
Many researchers have proposed suitable robot kinematic models since the 1980s, such as Hayati et al.'s models24, Veitschegger and Wu's model25, Stone and Sanderson's S-model26 and Zhuang et al.'s model27. The standard Denavit-Hartenberg (D-H) convention is the most widely used to describe robot kinematics. The error models of D-H are not continuous for robots that possess parallel joint axes. In order to avoid this singularity of the D-H convention, modified D-H modelling or the Hayati modelling convention were used. A singularity-free calibration model prevents the use of a single minimal modelling convention, which could be used to identify all possible robot parameters. The robot tool position and orientation are defined in the robot tool coordinate frame; the kinematic equation can be defined as

$$T_N^0 = T_N^0(v) = T_1^0\, T_2^1 \cdots T_N^{N-1} = \prod_{i=1}^{N} T_i^{i-1} \qquad (1)$$
where $T_i^{i-1}$ is the transformation matrix from coordinate $i-1$ to coordinate $i$, and $N$ is the number of joints (Fig. 1). $v = [v_1^T\ v_2^T \cdots v_N^T]^T$ is the robot parameter vector, and $v_i$ is the link parameter vector for joint $i$, which includes the joint errors

$$v_i = \bar v_i + \Delta v_i \qquad (2)$$

where $\bar v_i$ is the nominal value vector for joint $i$, and $\Delta v = [\Delta v_1^T\ \Delta v_2^T \cdots \Delta v_N^T]^T$ is the link parameter error vector. The exact kinematic equation is
$$K_N^0 = K_N^0(v) = K_1^0\, K_2^1 \cdots K_N^{N-1} = \prod_{i=1}^{N} K_i^{i-1} \qquad (3)$$

$$K_i^{i-1} = T_i^{i-1}(\bar v_i + \Delta v_i) = T_i^{i-1} + \Delta T_i^{i-1} \qquad (4)$$

Taking the joint variables into consideration, thus

$$K_N^0 = T_N^0 + \Delta T_N^0, \qquad T_N^0 = T_N^0(u, v) \qquad (5)$$

where $u = [u_1\ u_2 \cdots u_N]^T$ is a vector of joint variables. Thus $K_N^0$ is a function of $u$ and $v$. As a case study, the mechanical model of the ABB IRB120 robot manipulator is shown in Fig. 1, and Table 1 shows the D-H parameters of the ABB IRB 120 robot28.

Fig. 1. The D-H parameters of the ABB IRB 120 robot28.

Table 1. The D-H parameters of the ABB IRB 120 robot28

Joint   a (mm)   α (Degree)   d (mm)   θ (Degree)   Range (Degree)
1       0        -90          290      0            -165 to 165
2       270      0            0        -90          -110 to 110
3       -70      90           0        180          -90 to 70
4       0        -90          302      0            -160 to 160
5       0        90           0        0            -120 to 120
6       0        0            75       0            -400 to 400
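To illustrate how the nominal kinematic model of Eq. (1) is evaluated with the parameters of Table 1, a minimal sketch is given below. It chains standard D-H transforms in Python; the joint angles in the example are arbitrary, and the code is only an illustration of the convention, not the implementation used in this work.

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Standard Denavit-Hartenberg homogeneous transform T_i^{i-1}."""
    ca, sa = np.cos(np.radians(alpha)), np.sin(np.radians(alpha))
    ct, st = np.cos(np.radians(theta)), np.sin(np.radians(theta))
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,   1.0]])

# Nominal D-H parameters of the ABB IRB 120 from Table 1:
# (a [mm], alpha [deg], d [mm], theta offset [deg])
DH_IRB120 = [(0, -90, 290, 0), (270, 0, 0, -90), (-70, 90, 0, 180),
             (0, -90, 302, 0), (0, 90, 0, 0), (0, 0, 75, 0)]

def forward_kinematics(joint_angles_deg, dh=DH_IRB120):
    """Compose T_N^0 = T_1^0 T_2^1 ... T_N^{N-1} as in Eq. (1)."""
    T = np.eye(4)
    for q, (a, alpha, d, theta0) in zip(joint_angles_deg, dh):
        T = T @ dh_transform(a, alpha, d, theta0 + q)
    return T

# Example: tool pose at an arbitrary joint configuration (angles in degrees).
print(forward_kinematics([10, -20, 30, 0, 45, 0]))
```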

3. Framework and method


For a robotic manipulator to be calibrated, the angles of the joints are required. Meanwhile, every robot incorporates rotary sensors in its joints, thus the angles are available. In addition, the position and orientation of the end-effector are necessary to find the calibration values. While the orientation can be found using tilt sensors (accelerometer and gyroscope), finding the position of the end-effector is a tedious task. Current methods include the use of visual sensors, which consume a lot of processing power. The use of ultrasonic triangulation for positioning will reduce processing time significantly, resulting in a more responsive robot. Ultrasonic triangulation is done by attaching an ultrasonic transmitter to the robot end-effector. Additionally, three ultrasonic receivers are attached at the edges of the work base. The three receivers detect signals from the transmitter and feed them to the processor, which computes the location of the end-effector. The location and orientation of the end-effector, along with the joint angles, are used to compute the calibration values and perform calibration in real time. The reliability of the method is to be examined and compared to current visual techniques. Fig. 2 shows the schematic diagram of the proposed system, whereas Fig. 3 shows the system block diagram.

Fig. 2. Schematic diagram of the proposed system

Fig.3. Block diagram of the proposed system
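The geometry of Fig. 2 reduces to a trilateration problem: the three measured transmitter-to-receiver ranges, together with the known receiver positions, fix the end-effector location. The following is a minimal sketch of that calculation, assuming the three receivers share the same height H; the receiver layout and range values are illustrative placeholders, not the actual setup used in this work.

```python
import numpy as np

def trilaterate_planar(receivers, distances, below=True):
    """Position of the ultrasonic transmitter from three receivers that
    share the same height (as in Fig. 2) and the three measured ranges.
    Subtracting pairs of sphere equations removes the quadratic terms and
    yields a 2x2 linear system for (x, y); z then follows from one range."""
    r = np.asarray(receivers, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (r[1:, :2] - r[0, :2])
    b = (d[0]**2 - d[1:]**2
         + np.sum(r[1:, :2]**2, axis=1) - np.sum(r[0, :2]**2))
    xy = np.linalg.solve(A, b)
    dz = np.sqrt(max(d[0]**2 - np.sum((xy - r[0, :2])**2), 0.0))
    z = r[0, 2] - dz if below else r[0, 2] + dz   # tool works below the receivers
    return np.array([xy[0], xy[1], z])

# Illustrative layout: receivers at the edges of the work base, height H = 0.5 m.
receivers = [(0.0, 0.0, 0.5), (1.2, 0.0, 0.5), (0.0, 1.2, 0.5)]
distances = [0.70, 0.95, 0.90]          # ranges from ultrasonic time of flight
print(trilaterate_planar(receivers, distances))
```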

3.1 Position Calculation

The ultrasonic system presented here consists of three receivers and one transmitter. The receivers and the transmitter use an RF signal for synchronization, in order to obtain the distance data from the ultrasonic wave. Fig. 2 shows that the receivers are located at a distance D and height H from the transmitter. The transmitter sends ultrasonic wave signals to the receivers. The data acquisition system compiles the distance data and processes them using trigonometric functions. The system presented in this paper requires only a simple installation and simple trigonometric functions for the calculations. Accurate distances can be obtained by measuring the TOF (Time of Flight) of the ultrasonic wave. However, the TOF is subject to change with the timer resolution of the built-in microprocessor, the ambient temperature, or other dynamics of the surrounding environment, resulting in location errors21. In this study, a Kalman Filter (KF) is used in the estimation process to estimate the position P, i.e. the position state, from a set of noisy measurements. There are six measurements available in this estimation: three acceleration components in the tool frame from the IMU and three position components in the local frame. The direction cosine matrix $M_H$ from the tool frame to the local frame25 can be presented as

$$M_H = \begin{bmatrix} m_{Xx} & m_{Yx} & m_{Zx} \\ m_{Xy} & m_{Yy} & m_{Zy} \\ m_{Xz} & m_{Yz} & m_{Zz} \end{bmatrix} \qquad (6)$$
The acceleration of the tool in the local frame can be expressed as

$$\dot V_x = m_{Xx} A_x + m_{Yx} A_y + m_{Zx} A_z$$
$$\dot V_y = m_{Xy} A_x + m_{Yy} A_y + m_{Zy} A_z$$
$$\dot V_z = m_{Xz} A_x + m_{Yz} A_y + m_{Zz} A_z + g_l \qquad (7)$$
where $g_l$ is the magnitude of the local gravity vector and $(A_x, A_y, A_z)$ are the acceleration measurement components along each axis in the tool frame. The velocity components $(V_x, V_y, V_z)$ along each axis in the shoulder frame can be defined as

$$V_x = \dot P_x, \qquad V_y = \dot P_y, \qquad V_z = \dot P_z \qquad (8)$$
From Eqs. (7) and (8), the state $X'_k$ of the position KF is expressed as

$$X'_k = \left[ p_{x,k}, V_{x,k}, A_{x,k}, p_{y,k}, V_{y,k}, A_{y,k}, p_{z,k}, V_{z,k}, A_{z,k} \right]^T \qquad (9)$$

It involves the values of the variables at time $k$. According to Eqs. (7) and (8), the state-transition matrix $\Phi_{pos}$ is defined as27
$$\Phi_{pos} = \begin{bmatrix}
1 & \Delta t & m_{Xx}\Delta t^2/2 & 0 & 0 & m_{Yx}\Delta t^2/2 & 0 & 0 & m_{Zx}\Delta t^2/2 \\
0 & 1 & m_{Xx}\Delta t & 0 & 0 & m_{Yx}\Delta t & 0 & 0 & m_{Zx}\Delta t \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & m_{Xy}\Delta t^2/2 & 1 & \Delta t & m_{Yy}\Delta t^2/2 & 0 & 0 & m_{Zy}\Delta t^2/2 \\
0 & 0 & m_{Xy}\Delta t & 0 & 1 & m_{Yy}\Delta t & 0 & 0 & m_{Zy}\Delta t \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & m_{Xz}\Delta t^2/2 & 0 & 0 & m_{Yz}\Delta t^2/2 & 1 & \Delta t & m_{Zz}\Delta t^2/2 \\
0 & 0 & m_{Xz}\Delta t & 0 & 0 & m_{Yz}\Delta t & 0 & 1 & m_{Zz}\Delta t \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix} \qquad (10)$$

Since the acceleration measurement caused by the gravitational force along the Z-axis is parallel to the gravity vector, the system input is

$$\Gamma_{pos}\, u'_{k-1} = \left[ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ g_l \Delta t^2/2,\ g_l \Delta t,\ 0 \right]^T \qquad (11)$$
We use the acceleration to estimate the position state, so the process noise vector is

$$w'_k = \left[ 0,\ 0,\ w'_x,\ 0,\ 0,\ w'_y,\ 0,\ 0,\ w'_z \right]^T \qquad (12)$$

where $(w'_x, w'_y, w'_z)$ are the process noise components of the tool acceleration. Since the ultrasonic sensor and the IMU are calibrated and initialized, the observation matrix $H_{pos}$ for the position estimation is

$$H_{pos} = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix} \qquad (13)$$
Nine states are observable. The determined position $P_k = (p_{x,k}, p_{y,k}, p_{z,k})$ at time $k$ is the optimal value of the position of the robot tool.
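To make the structure of Eqs. (9), (10) and (13) concrete, the sketch below assembles $\Phi_{pos}$ and $H_{pos}$ for a given direction cosine matrix and sampling interval. It is a schematic illustration of the matrix layout only, assuming the state ordering of Eq. (9); the identity direction cosine matrix used in the example is a placeholder.

```python
import numpy as np

def position_kf_matrices(M, dt):
    """Assemble the 9x9 state-transition matrix of Eq. (10) and the 6x9
    observation matrix of Eq. (13) for the state ordering
    [p_x, V_x, A_x, p_y, V_y, A_y, p_z, V_z, A_z]."""
    Phi = np.eye(9)
    for i in range(3):                       # i: local-frame axis (x, y, z)
        p, v = 3 * i, 3 * i + 1              # rows for position and velocity
        Phi[p, v] = dt
        for j in range(3):                   # j: tool-frame acceleration axis
            Phi[p, 3 * j + 2] = M[i, j] * dt ** 2 / 2.0
            Phi[v, 3 * j + 2] = M[i, j] * dt
    H = np.zeros((6, 9))
    for row, col in enumerate((0, 2, 3, 5, 6, 8)):   # measured: p_*, A_* per axis
        H[row, col] = 1.0
    return Phi, H

# Identity direction cosine matrix as a placeholder, 10 ms sampling interval.
Phi_pos, H_pos = position_kf_matrices(np.eye(3), dt=0.01)
print(Phi_pos.round(4))
```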
3.2 Pose measurement
A sensor module, which uses an Inertial Measurement Unit (IMU), is utilized in this application to determine the orientation (roll, pitch and yaw) of the End-Effector (EE). The IMU sensor consists of one three-axis accelerometer, two two-axis gyroscopes and one three-axis magnetometer. Three frames are defined as follows: the body frame $x_b y_b z_b$, the sensor frame $x_s y_s z_s$ and the Earth-fixed frame $x_e y_e z_e$. The sensor frame $x_s y_s z_s$ corresponds to the axes of the three orthogonally mounted accelerometers/magnetometers. Since the sensor is rigidly attached to the robot EE, the body frame $x_b y_b z_b$ is assumed to coincide with the sensor frame $x_s y_s z_s$. The Earth-fixed frame $x_e y_e z_e$ follows the North-East-Down (NED) convention, where $x_e$ points north, $y_e$ points east and $z_e$ points down. The IMU measures its own orientation (roll, pitch and yaw). The rotation $\phi$ about the $x_e$ axis is known as roll, the rotation $\theta$ about the $y_e$ axis is known as pitch, and the rotation $\psi$ about the $z_e$ axis is the yaw. According to Euler's theorem28 of finite rotation, the conversion from Euler angles to a quaternion is


$$q = \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix} =
\begin{bmatrix}
\cos(\phi/2)\cos(\theta/2)\cos(\psi/2) + \sin(\phi/2)\sin(\theta/2)\sin(\psi/2) \\
\sin(\phi/2)\cos(\theta/2)\cos(\psi/2) - \cos(\phi/2)\sin(\theta/2)\sin(\psi/2) \\
\cos(\phi/2)\sin(\theta/2)\cos(\psi/2) + \sin(\phi/2)\cos(\theta/2)\sin(\psi/2) \\
\cos(\phi/2)\cos(\theta/2)\sin(\psi/2) - \sin(\phi/2)\sin(\theta/2)\cos(\psi/2)
\end{bmatrix} \qquad (14)$$

The four Euler parameters are constrained as [29]

$$q_0^2 + q_1^2 + q_2^2 + q_3^2 = 1 \qquad (15)$$
So the direction cosine matrix $M_s^e$ from the sensor frame to the Earth-fixed frame is represented as

$$M_s^e = \begin{bmatrix}
q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_0 q_2 + q_1 q_3) \\
2(q_1 q_2 + q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2(q_2 q_3 - q_0 q_1) \\
2(q_1 q_3 - q_0 q_2) & 2(q_0 q_1 + q_2 q_3) & q_0^2 - q_1^2 - q_2^2 + q_3^2
\end{bmatrix} \qquad (16)$$
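A minimal sketch of Eqs. (14)–(16) is given below; it converts roll, pitch and yaw to a quaternion and then to the direction cosine matrix, and checks the unit-norm constraint of Eq. (15). The angle values in the example are arbitrary.

```python
import numpy as np

def euler_to_quaternion(roll, pitch, yaw):
    """Eq. (14): roll/pitch/yaw (rad) to the unit quaternion [q0, q1, q2, q3]."""
    cr, sr = np.cos(roll / 2), np.sin(roll / 2)
    cp, sp = np.cos(pitch / 2), np.sin(pitch / 2)
    cy, sy = np.cos(yaw / 2), np.sin(yaw / 2)
    return np.array([cr * cp * cy + sr * sp * sy,
                     sr * cp * cy - cr * sp * sy,
                     cr * sp * cy + sr * cp * sy,
                     cr * cp * sy - sr * sp * cy])

def quaternion_to_dcm(q):
    """Eq. (16): direction cosine matrix from the sensor to the Earth-fixed frame."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2 * (q1 * q2 - q0 * q3), 2 * (q0 * q2 + q1 * q3)],
        [2 * (q1 * q2 + q0 * q3), q0**2 - q1**2 + q2**2 - q3**2, 2 * (q2 * q3 - q0 * q1)],
        [2 * (q1 * q3 - q0 * q2), 2 * (q0 * q1 + q2 * q3), q0**2 - q1**2 - q2**2 + q3**2]])

q = euler_to_quaternion(np.radians(10), np.radians(-5), np.radians(30))
print(np.sum(q**2))          # Eq. (15): the four Euler parameters stay on the unit sphere
print(quaternion_to_dcm(q))
```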

3.3 Kalman Filter


The state is estimated in KFs using measurements, a system model and a measurement model in two steps: prediction and update30. For a time-variant system and measurement model, the state-space dynamic equations should be considered as

$$x_k = f_k(x_{k-1}, u_{k-1}, b_{k-1}) \qquad (17)$$

$$z_k = h_k(x_k, v_k) \qquad (18)$$
where $t_k$ is the iteration time represented by the subscript $k$, $x_k$ is the state from time $t_{k-1}$ to time $t_k$, $f_k$ is the state transition function, $u_{k-1}$ is the deterministic input, $b_{k-1}$ is the process noise, $z_k$ is the measurement, $h_k$ is the measurement function, and $v_k$ is the measurement noise. The measurement and process noises can be assumed to be zero-mean Gaussian when the system is linear, so that a KF can be used. Then Eqs. (17) and (18) can be rewritten as

$$x_k = \Phi_k x_{k-1} + \Gamma_k u_{k-1} + b_{k-1} \qquad (19)$$

$$z_k = H_k x_k + v_k \qquad (20)$$
where $\Phi_k$ is the system transition matrix from time $t_{k-1}$ to time $t_k$, $\Gamma_k$ is the input matrix at time $t_k$ and $H_k$ is the measurement matrix at time $t_k$. The posterior density function (PDF) with a mean and covariance can be estimated by the KF when the system can be expressed in the form of Eqs. (19) and (20). Therefore, the prediction and update steps of the KF are as follows:
Prediction state:
$$\hat x_k^- = \Phi_k \hat x_{k-1} + \Gamma_k u_{k-1} \qquad (21)$$
Prediction covariance:
$$P_k^- = \Phi_k P_{k-1} \Phi_k^T + Q_{k-1} \qquad (22)$$
Update Kalman gain:
$$K_k = P_k^- H_k^T \left( H_k P_k^- H_k^T + R_k \right)^{-1} \qquad (23)$$
Estimated covariance:
$$\hat P_k = (I - K_k H_k) P_k^- \qquad (24)$$
Estimated state:
$$\hat x_k = \hat x_k^- + K_k (z_k - H_k \hat x_k^-) \qquad (25)$$
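The prediction and update steps of Eqs. (21)–(25) can be written compactly as two functions, as in the sketch below. The one-dimensional constant-velocity example that follows uses placeholder matrices and is only meant to show how the steps are chained, not the filter tuning used in this work.

```python
import numpy as np

def kf_predict(x, P, Phi, Gamma, u, Q):
    """Eqs. (21)-(22): propagate the state estimate and its covariance."""
    x_pred = Phi @ x + Gamma @ u
    P_pred = Phi @ P @ Phi.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Eqs. (23)-(25): Kalman gain, then corrected covariance and state."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

# One-dimensional constant-velocity example with placeholder matrices.
Phi = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_predict(x, P, Phi, np.zeros((2, 1)), np.zeros(1), 0.01 * np.eye(2))
x, P = kf_update(x, P, z=np.array([0.12]), H=H, R=0.05 * np.eye(1))
print(x)
```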
3.4 Orientation IMU

The gyroscope and the magnetometer both have noise and drift; therefore, we use a Kalman filter to compute the state $x$ of the IMU from a set of noisy and incomplete measurements27. The differential equation of the quaternion $q$ with respect to time $t$ is

$$\begin{bmatrix} \partial q_0/\partial t \\ \partial q_1/\partial t \\ \partial q_2/\partial t \\ \partial q_3/\partial t \end{bmatrix}
= \frac{1}{2}
\begin{bmatrix} q_0 & -q_1 & -q_2 & -q_3 \\ q_1 & q_0 & -q_3 & q_2 \\ q_2 & q_3 & q_0 & -q_1 \\ q_3 & -q_2 & q_1 & q_0 \end{bmatrix}
\begin{bmatrix} 0 \\ w_x \\ w_y \\ w_z \end{bmatrix} \qquad (26)$$

where $w_x$, $w_y$ and $w_z$ are the angular velocity components of the IMU along the $x_s$, $y_s$ and $z_s$ axes. Since the state $x_k$ includes the quaternion state and the angular velocities, $x_k$ has the following form:

$$x_k = \left[ q_{0,k}\ q_{1,k}\ q_{2,k}\ q_{3,k}\ w_{x,k}\ w_{y,k}\ w_{z,k} \right]^T \qquad (27)$$

where $q_{0,k}$, $q_{1,k}$, $q_{2,k}$, $q_{3,k}$, $w_{x,k}$, $w_{y,k}$ and $w_{z,k}$ are the quaternion states and the angular velocities at time $k$.

From Eq. (26), the state-transition matrix is31, 32

$$\Phi_{ori} = \begin{bmatrix}
1 & 0 & 0 & 0 & -q_{1,k}\Delta t/2 & -q_{2,k}\Delta t/2 & -q_{3,k}\Delta t/2 \\
0 & 1 & 0 & 0 &  q_{0,k}\Delta t/2 & -q_{3,k}\Delta t/2 &  q_{2,k}\Delta t/2 \\
0 & 0 & 1 & 0 &  q_{3,k}\Delta t/2 &  q_{0,k}\Delta t/2 & -q_{1,k}\Delta t/2 \\
0 & 0 & 0 & 1 & -q_{2,k}\Delta t/2 &  q_{1,k}\Delta t/2 &  q_{0,k}\Delta t/2 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix} \qquad (28)$$

where $\Delta t$ is the sampling time. Let $\Gamma_{ori}$ be the zero matrix, because there are no control inputs. We use the angular velocities to estimate the quaternion state, so the process noise vector is

$$w_k = \left[ 0\ 0\ 0\ 0\ w_x\ w_y\ w_z \right]^T \qquad (29)$$
where $w_x$, $w_y$ and $w_z$ are the process noise components of the angular velocity. Because we use calibrated gyroscopes to measure the angular velocities, the observation matrix $H$ is

$$H_{ori} = \begin{bmatrix} O_{n \times p} & I_{n \times n} \end{bmatrix} \qquad (30)$$

where $n$ is the number of angular velocity components and $p$ is the number of quaternion components. The determined quaternion $q_k$ at time $k$ should be normalized by

$$q_k = \begin{bmatrix} q_{0,k}/M & q_{1,k}/M & q_{2,k}/M & q_{3,k}/M \end{bmatrix}, \qquad M = \sqrt{q_{0,k}^2 + q_{1,k}^2 + q_{2,k}^2 + q_{3,k}^2} \qquad (31)$$
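As a concrete illustration of Eqs. (26)–(28) and (31), the sketch below assembles the orientation state-transition matrix from the current quaternion estimate and re-normalizes the quaternion after propagation. The initial attitude and body rates are arbitrary example values, not measured data.

```python
import numpy as np

def orientation_transition(q, dt):
    """Eq. (28): 7x7 state-transition matrix for the state
    [q0, q1, q2, q3, w_x, w_y, w_z], linearized about the current quaternion."""
    q0, q1, q2, q3 = q
    Phi = np.eye(7)
    # Coupling of the angular velocities into the quaternion states (Eq. 26).
    Phi[0, 4:7] = np.array([-q1, -q2, -q3]) * dt / 2.0
    Phi[1, 4:7] = np.array([ q0, -q3,  q2]) * dt / 2.0
    Phi[2, 4:7] = np.array([ q3,  q0, -q1]) * dt / 2.0
    Phi[3, 4:7] = np.array([-q2,  q1,  q0]) * dt / 2.0
    return Phi

def normalize_quaternion(q):
    """Eq. (31): divide the estimated quaternion by its norm M."""
    return np.asarray(q, dtype=float) / np.linalg.norm(q)

# One propagation step from an arbitrary attitude with a constant body rate.
x = np.array([1.0, 0.0, 0.0, 0.0, 0.1, -0.05, 0.02])    # quaternion + w (rad/s)
x = orientation_transition(x[:4], dt=0.01) @ x
x[:4] = normalize_quaternion(x[:4])
print(x)
```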

3.5 Parameter identification


Kinematic identification is used to find the kinematic model parameters of a robot manipulator, given a set of pose measurements and the corresponding joint angle readings. The objective of a kinematic identification algorithm is to minimize the differences between the computed and the measured poses. If we assume that the number of measured poses is $m$, then we have

$$\hat K = \hat K_N^0 = \left( \hat K(u_1, v), \hat K(u_2, v), \ldots, \hat K(u_m, v) \right)^T \qquad (32)$$

$$\hat T = \hat T_N^0 = \left( \hat T(u_1, v), \hat T(u_2, v), \ldots, \hat T(u_m, v) \right)^T \qquad (33)$$
where $u_i\ (i = 1, 2, \ldots, m)$ is the vector of joint variables for the $i$-th measured pose. The objective of the kinematic identification is the computation of the parameter vector $v = \bar v + \Delta v$, which is used to minimize the discrepancy between the computed and the measured poses

$$A(v, u) = B(u) \qquad (34)$$

where $A$ is the pose function of $T$ and $B(u) = \left( B(u_1), B(u_2), \ldots, B(u_m) \right)^T$ is the measured function of the joint variables $u$. Each measured pose $B(u_i)$ defines an orientation measurement $R_i$ and a position measurement $P_i$, and

$$B(u_i) = \begin{bmatrix} R_i & P_i \\ 0 & 1 \end{bmatrix} \qquad (35)$$
If the measurement system can provide orientation and position measurements, each pose can be formulated by six measurement equations

$$A(v, u) = B(u), \qquad \Delta A(v, u) = C(\Delta v, u) \qquad (36)$$

where $C$ is the discrepancy function of the pose components of $T$. Introducing the Jacobian matrix $J$,

$$C(\Delta v, u) = J \cdot \Delta v \qquad (37)$$

and then

$$C(\Delta v, u) = B(u) - A(\bar v, u) \qquad (38)$$

When using

$$b = B(u) - A(\bar v, u) \qquad (39)$$

and

$$x = \Delta v \qquad (40)$$

Eq. (37) can be rewritten as

$$J x = b \qquad (41)$$
To solve the non-linear model in Eq. (33), the least squares estimation (LSE) method33 is a fast and computationally efficient identification algorithm. However, LSE is sensitive to noise, so in this work an EKF is used to fit the data to the non-linear model. The most common method, in the case where $m$ is much larger than $n$, is non-linear least-squares34. The poses of the tool are measured by the IMU and the position sensor. Since uncertainty exists in the measurements, the extended Kalman filter (EKF) is used as an optimization algorithm, and the Jacobian matrices are used to estimate the kinematic errors of the D-H parameters from the measured pose values9. Since there are four parameters for each of the $N$ joints and four parameters for the transformation from the sensors to the tool, the total number of parameters to be considered is $4(N + 1)$. So the predicted state $\hat x$ contains the $4(N + 1)$ D-H parameters in the prediction step of the EKF. The predicted state and the covariance matrix of the predicted state $P$ are

$$\hat x_{k+1|k} = \hat x_{k|k} \qquad (42)$$

$$P_{k+1|k} = P_{k|k} + Q_k \qquad (43)$$
where $Q_k$ is the covariance matrix of the system noise at the $k$-th iteration. In the observation step of the EKF, the Jacobian matrix $J$, the measurement residual $\tilde y$, and the residual covariance $S$ are calculated as follows:

$$J_{k+1} = \left. \frac{\partial T(x)}{\partial x} \right|_{\hat x_{k+1|k}} \qquad (45)$$

$$\tilde y_{k+1} = m_{k+1} - T(\hat x_{k+1|k}) \qquad (46)$$

$$S_{k+1} = J_{k+1} P_{k+1|k} J_{k+1}^T + R_{k+1} \qquad (47)$$

where $m_k$ and $R_k$ are the measured pose values and the covariance matrix of the measurement noise at the $k$-th iteration, $(k+1|k)$ denotes an a priori estimate, and $(k+1|k+1)$ denotes an a posteriori estimate. In the update step, the state and its covariance matrix are updated by an optimal Kalman gain $K$:

$$K_{k+1} = P_{k+1|k} J_{k+1}^T S_{k+1}^{-1} \qquad (48)$$

$$\hat x_{k+1|k+1} = \hat x_{k+1|k} + K_{k+1} \tilde y_{k+1} \qquad (49)$$

$$P_{k+1|k+1} = (I - K_{k+1} J_{k+1}) P_{k+1|k} \qquad (50)$$
where I is the identity matrix. The norm value of the state vector is calculated for every iteration once the updating
procedure is completed. Note that if Q and R are set to zero, then EKF simply reduces to the Newton-Raphson
method.
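The identification recursion of Eqs. (42)–(50) can be sketched as below. The pose prediction function pose_fn and the numerical Jacobian are hypothetical placeholders standing in for the kinematic model and the analytical Jacobian of the paper; the sketch only shows how the EKF iteration is organized, under those assumptions.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f at x (stand-in for Eq. 45)."""
    f0 = f(x)
    J = np.zeros((len(f0), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

def ekf_identify(dh_error, pose_fn, measured_poses, joint_sets, Q, R, iterations=20):
    """EKF loop of Eqs. (42)-(50): the state is the vector of D-H parameter
    errors, and pose_fn(dh_error, joints) predicts the measured pose vector."""
    x = np.array(dh_error, dtype=float)           # predicted state, Eq. (42)
    P = np.eye(len(x))
    for _ in range(iterations):
        P = P + Q                                  # predicted covariance, Eq. (43)
        for m, u in zip(measured_poses, joint_sets):
            J = numerical_jacobian(lambda e: pose_fn(e, u), x)   # Eq. (45)
            y = m - pose_fn(x, u)                  # measurement residual, Eq. (46)
            S = J @ P @ J.T + R                    # residual covariance, Eq. (47)
            K = P @ J.T @ np.linalg.inv(S)         # Kalman gain, Eq. (48)
            x = x + K @ y                          # state update, Eq. (49)
            P = (np.eye(len(x)) - K @ J) @ P       # covariance update, Eq. (50)
    return x
```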
4. Discussion
In order to verify the above method, an ABB IRB120 robot was used. The self-calibration procedure includes four steps: data collection, joint pose estimation, kinematic parameter identification and calibration accuracy assessment. Under relatively simple working conditions of the manipulator, such as constant temperature and constant load, calibration can be used to identify the kinematic parameters offline. Although closed-loop control can tolerate parameter errors, it needs constant adjustments of the robot joints to reach the target point. Therefore, it is time-consuming, and the accuracy of the manipulator depends on the feedback errors. The capability of online calibration to find the position and orientation quickly after calibration can be seen as a tradeoff between online calibration and closed-loop control: in comparison, the online calibration takes less time, whereas the closed-loop control achieves higher accuracy.
5. Summary
An ultrasonic triangulation based, online autonomous calibration method for industrial robots is proposed in this paper. In this approach, an IMU and a position sensor are rigidly attached to the robot tool to estimate the robot poses automatically during the working time. After the robot poses are estimated, the kinematic identification can be carried out by an EKF. The robot kinematic parameters can then be corrected from the identification results in real time. The whole robot calibration procedure is automatic, without any manual intervention. This research investigated and proposed a simple and efficient calibration method. Compared with other existing, expensive and complex approaches, our proposed method can conduct the calibration more accurately and in a shorter execution time.

References

1. Zhuang H, Yan J, Masory O. Calibration of Stewart platforms and other parallel manipulators by minimizing inverse kinematic residuals. J
Robot Syst, 1998; 15 (7):395–405.
2. Lars Richter, Floris Ernst, Alexander Schlaefer, “Achim Schweikard Robust real-time robot–world calibration for robotized transcranial
magnetic stimulation”, Int J Med Robotics Comput Assist Surg 2011; 7: 414–422.
3. GONG Chun-he, YUAN Jing-xia, NI Jun. Nongeometric error identification and compensation for robotic system by inverse calibration [J].
International Journal of Machine Tools & Manufacture, 2000, 40(14): 2119−2137.
4. Guanglong Du, Ping Zhang, Di Li, “Online robot calibration based on hybrid sensors using Kalman Filters” Robotics and Computer-
Integrated Manufacturing, 2015; 31:91–100
5. Zhuang H. Self-calibration of parallel mechanisms with a case study on the Stewart platform. IEEE Trans Robot Autom 1997; 13(3): 387–97.
6. Khalil W, Besnard S. Self-calibration of Stewart–Gough parallel robots without extra sensors. IEEE Trans Robot Autom 1999; 15(6): 1116–21.
7. Bennett DJ, Hollerbach JM. Autonomous calibration of single-loop closed kinematic chains formed by manipulators with passive end point constraints. IEEE Trans Robot Autom 1991; 7(5): 597–606.
8. Park I, Lee B, Cho S, Hong Y. Laser-based kinematic calibration of robot manipulator using differential kinematics. IEEE/ASME Trans
Mechatron 2012; 17(6):1059–67.
9. Bennett DJ, Hollerbach JM. Autonomous robot calibration for hand-eye coordination. Int J Robot Res 1991; 10(5): 550–9.
10. LIU Y, SHEN Y T, XI X. Rapid Robot Work cell Calibration Using Line-based Approach [C]// The 4th IEEE Conf on Automation Science
and Engineering. Washington D C, 2008: 510−515.
11. ZHUANG Han-qi, ROTH Z S, WANG K. Robot calibration by mobile camera system [J]. Robotic Systems, 1994, 11(3): 155−68.
12. RENAUD P, ANDREFF N, MARQUET F, MARTINET P. Vision-based kinematic calibration of a H4 parallel mechanism [C] Proceedings
of IEEE Int. Conference Robotics and Automation. Taipei, 2003: 1191−1196.
13. Khalil W., Garcia G., Delagarde J.F. "Calibration of the geometric parameters of robots without external sensors", IEEE robotics and
automation conference1995, pp.3039-3044.
14. Khalil, W., Lemoine, Ph., Gautier, M. "Autonomous calibration of robots using planar points", ISRAM, Montpellier, 1996.
15. Lu, Tien-Fu; Lin, Grier C. I. and He, Juan R., “Neural Network Based 3D Force/Torque Sensor Calibration for Robot Applications,” Journal of Engineering Applications of Artificial Intelligence, 1997, (10)1: 87-97.
16. Guanglong Du and Ping Zhang"Online robot calibration based on vision measurement" Journal of Robotics and Computer-Integrated
Manufacturing, Volume 29 Issue 6, December, 2013
17. Giuseppe O, Antonio P, Lorenzo R, Marilena V,” Humanoid odometric localization integrating kinematic, inertial and visual information”;
Springer, Auton Robot, 2016; 40:867–879
18. LIU Yu, LIU Hong, NI Feng, XU Wen-fu, “New self-calibration approach to space robots based on hand-eye vision”, Springer, J. Cent.
South Univ. Technol, 2011; 18: 1087−1096
19. Zhu Y, Hu C,Hu J, Yang K. “Accuracy and simplicity oriented self-calibration approach for two-dimensional precision stages”. IEEE Trans
Ind Electron 2012; 60 (6):2264–72.
20. Shraga Shova, Johann Borenstein, “Measuring the Relative Position and Orientation between Two Mobile Robots With Binaural Sonar”
ANS 9th International Topical Meeting on Robotics and Remote Systems,” Seattle, Washington, March 4- 8, 2001,
21. Do E Kim, Kyung H. Hwang, Dong H Lee, Tae Y Kuc, “A Simple Ultrasonic GPS System for Indoor Mobile Robot System using Kalman
Filtering”. 2006 SICE-ICASE International Joint Conference; 89 (5): 2915 – 2918.
22. Ivan Paunović, Darko Todorović, Calibration of Ultrasonic Sensors of a Mobile Robot. Serbian Journal of Electrical Engineering. 2009; (6)
3: 427 - 437
23. James L. Crowley, “World Modeling and Position Estimation for a Mobile Robot Using Ultrasonic Ranging”, 1989 IEEE Conference on
Robotics and Automation, 1989, 3: 1574-1579.
24. Hayati S, Mirmirani M. Improving the absolute positioning accuracy of robot manipulators. J Robot Syst 1985; 2(4): 397–413.
25. Veitschegger W, Wu C. Robot accuracy analysis based on kinematics. IEEE Trans Robot Autom 1986; 2(3): 171–9.
26. Stone H, Sanderson A. A prototype arm signature identification system. In: Proceedings of the IEEE conference on robotics and automation;
April1987.p.175–82.
27. Zhuang H. Kinematic modeling, identification and compensation of robot manipulators. Florida: Florida Atlantic University; 1989
(Ph.D.dissertation).
28. Biqiang Du, Ning Xi, Erick Nieves,” Industrial Robot Calibration Using A Virtual Linear Constraint”, International Journal On Smart
Sensing And Intelligent Systems, Vol. 5, No. 4, December 2012
29. Goldstein H, Poole C, Safko J. Classical mechanics. MA: Addison-Wesley; 2002; 33–78.
30. Won S P, Melek W W, Golnaraghi F. Fastening tool tracking system using a Kalman filter and particle filter combination. Meas Sci Technol 2011; 22(12).
31. Won S P, Golnaraghi F, Melek W W. A fastening tool tracking system using an IMU and a position sensor with Kalman filters and a fuzzy
expert system. IEEE Trans Ind Electron 2009; 56 (5):1782–92.
32. Won S P, Melek W W, Golnaraghi F. A Kalman/particle filter-based position and orientation estimation method using a position sensor/inertial measurement unit (IMU) hybrid system. IEEE Trans Ind Electron 2010; 57(5): 1787–98.
33. Gatti G, Danieli G. A practical approach to compensate for geometric errors in measuring arms: application to a six-degree-of-freedom kinematic structure. Meas Sci Technol 2008; 19(1).
34. Dennis JE, Schnabel RB. Numerical methods for unconstrained optimization and nonlinear equations. New Jersey: Prentice-Hall; 1983.
35. Cui Xiang. Closed-loop control for a cable-driven parallel manipulator with joint angle feedback, In: Proceedings of the 2013 IEEE/ASME
International Conference on Advanced Intelligent Mechatronics (AIM).Wollongong, Australia; 2013: 625–630.
36. Penning Ryan S. Towards closed loop control of a continuum robotic manipulator for medical applications. In: Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Mech. Eng. Dept., Shanghai; 2011: 4822–27.
