Precise indoor localization of multiple mobile robots with adaptive sensor fusion
using odometry and vision data
Abstract: Accurate, reliable and cost-effective localization is a key feature of self-navigating
autonomous mobile robots. The position and orientation of a mobile robot, together known as its pose,
can be determined by means of a localization system. In this work we use the mobile robot system
Robotino, a practice-oriented educational and training system offered by Festo Didactic GmbH. For
the Robotinos, the pose can be determined with the laser scanner for Robotino or the Robotino®
NorthStar system (Festo Didactic GmbH). These existing systems are quite expensive, and their
localization accuracy in certain field dimensions is quite low. In this paper we present a comparatively
inexpensive and more accurate localization system that combines the strengths of odometry and
vision-based localization. The fusion of the odometry and vision-based localization data is accomplished
with an Extended Kalman Filter (EKF). The resulting localization system delivers higher accuracy and
more frequent pose updates for the mobile robots on the test field.
Keywords: Vision-based localization, Robotino, Mobile robots, (Adaptive) Sensor fusion, Extended
Kalman Filter
$$
\begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{\varphi} \end{pmatrix}
=
\begin{pmatrix}
\frac{2}{3}\cos(\varphi+\delta) & -\frac{2}{3}\cos(\varphi-\delta) & \frac{2}{3}\sin(\varphi) \\
\frac{2}{3}\sin(\varphi+\delta) & -\frac{2}{3}\sin(\varphi-\delta) & -\frac{2}{3}\cos(\varphi) \\
\frac{1}{3R} & \frac{1}{3R} & \frac{1}{3R}
\end{pmatrix}
\begin{pmatrix} q_1 \\ q_2 \\ q_3 \end{pmatrix}, \quad (1)
$$

$$
(q_1 \;\; q_2 \;\; q_3)^T = (n_1 \;\; n_2 \;\; n_3)^T \, p, \quad (2)
$$

$$
p = \frac{2\pi r_w}{g \cdot 60}, \quad (3)
$$
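To make the kinematics concrete, the following sketch evaluates Equations (1)-(3) numerically. The wheel radius r_w, gear ratio g, wheel-axis angle δ and center-to-wheel distance R are placeholder values, not taken from the paper; δ = 30° is merely plausible for a symmetric three-wheel 120° layout.

```python
import numpy as np

def wheel_speeds_from_encoders(n, r_w=0.04, g=16.0):
    """Eq. (2)-(3): convert encoder speeds n (rpm) to wheel rolling speeds
    q (m/s) via q_i = n_i * p, p = 2*pi*r_w / (g * 60).
    r_w (wheel radius) and g (gear ratio) are placeholder values."""
    p = 2.0 * np.pi * r_w / (g * 60.0)
    return np.asarray(n) * p

def pose_rates(q, phi, delta=np.deg2rad(30.0), R=0.135):
    """Eq. (1): forward kinematics of the three-wheeled omnidirectional
    robot, mapping wheel speeds q to world-frame rates (dx, dy, dphi)."""
    M = np.array([
        [2/3*np.cos(phi+delta), -2/3*np.cos(phi-delta),  2/3*np.sin(phi)],
        [2/3*np.sin(phi+delta), -2/3*np.sin(phi-delta), -2/3*np.cos(phi)],
        [1/(3*R),                1/(3*R),                1/(3*R)],
    ])
    return M @ np.asarray(q)
```

As a sanity check, equal wheel speeds q1 = q2 = q3 produce a pure rotation (no translation), which is what the symmetric wheel layout implies.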
19th IFAC World Congress
Cape Town, South Africa. August 24-29, 2014
Fig. 3. Determination of orientation using circle positions; (a) Illustration of angles; (b) Illustration of the vectors on a real Robotino image.

Fig. 4. Raw image from camera: Robotinos; docking station.

$$
\varphi =
\begin{cases}
\beta - 90^{\circ}, & \theta > 0 \\
\beta + 90^{\circ}, & \theta < 0.
\end{cases} \quad (4)
$$
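Equation (4) amounts to a simple sign rule. A minimal sketch with angles in degrees; the function name is hypothetical:

```python
def orientation_from_markers(beta_deg: float, theta_deg: float) -> float:
    """Resolve the robot heading phi (Eq. 4) from the angle beta of the line
    through the two detected circle markers; the sign of theta disambiguates
    the two possible headings of that line."""
    if theta_deg > 0:
        return beta_deg - 90.0
    return beta_deg + 90.0
```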
4. SENSOR FUSION USING EXTENDED KALMAN FILTER

The Kalman filter is one of the most widely used algorithms for sensor fusion. In this paper we employ the well-known Extended Kalman Filter (EKF) [10] for the state estimation of the nonlinear odometry model (1). Pose changes of the mobile robot can be determined from the wheel encoder signals at a very high frequency. However, this provides only relative localization and delivers very inaccurate pose information over a long maneuver. Over short time intervals, though, odometry provides very reasonable pose information. In contrast, vision-based localization delivers much more accurate pose information, but at a low frequency.

We have implemented an EKF that uses odometry localization as its state prediction method. It is therefore reasonable to use the pose information from the vision-based localization for the measurement update (a posteriori update) of the EKF. The measurement update is executed only when new position and orientation information is provided by the vision-based localization.

In order to use odometry in the prediction phase of the EKF, we use Equation (1) as the state update calculation and include the wheel encoder signals in the input of the EKF.

The Extended Kalman Filter has the following system and measurement equations [10]:

$$x_k = f(x_{k-1}, u_k, w_k), \quad (5)$$

$$y_k = g(x_k, v_k), \quad (6)$$

where $x_k$ is the state vector of the system, $u_k$ is the input vector of the filter, $w_k$ and $v_k$ are Gaussian noises, $f(x_{k-1}, u_k, w_k)$ is the nonlinear function representing the model of the system, and $g(x_k, v_k)$ is the output function. The linearization of the nonlinear state and measurement equations is performed using [11]:

$$x_k = f(\hat{x}^{+}_{k-1}, u_{k-1}, w_{k-1}) + A_{k-1}\,(x_{k-1} - \hat{x}^{+}_{k-1}) + F_{k-1}\,(w_k - w_{k-1}), \quad (7)$$

$$y_k = g(\hat{x}^{-}_{k}, v_k) + C_k\,(x_k - \hat{x}^{-}_{k}) + G_k\, v_k, \quad (8)$$

where $\hat{x}^{+}_{k-1}$ is the previous a posteriori state, $\hat{x}^{-}_{k}$ is the current a priori state, and $A$, $F$, $C$ and $G$ are the linearized system matrices (Jacobi matrices, see Appendix A).

As the input $u_k$ of the EKF we use the three wheel encoder signals and the pose information from the vision-based localization. For further information about the EKF we refer to the corresponding literature, e.g. [10].

At first, we implemented a 6-state EKF with the following state vector:

$$x_k = (dx, dy, d\varphi, x, y, \varphi) \in \mathbb{R}^6, \quad (10)$$

where $dx, dy, d\varphi$ are the predicted speeds of the Robotino and $x, y, \varphi$ is the predicted pose. In this case we calculate the Jacobi matrix $A$ as follows:

$$
A = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & A_{1,6} \\
0 & 0 & 0 & 0 & 0 & A_{2,6} \\
0 & 0 & 0 & 0 & 0 & 0 \\
\Delta t & 0 & 0 & 0 & 0 & 0 \\
0 & \Delta t & 0 & 0 & 0 & 0 \\
0 & 0 & \Delta t & 0 & 0 & 0
\end{pmatrix}, \quad (11)
$$

with the sampling time $\Delta t$ of the EKF,

$$A_{1,6} = -\tfrac{2}{3}\sin(x_6+\delta)\,q_1 + \tfrac{2}{3}\sin(x_6-\delta)\,q_2 + \tfrac{2}{3}\cos(x_6)\,q_3$$

and

$$A_{2,6} = \tfrac{2}{3}\cos(x_6+\delta)\,q_1 - \tfrac{2}{3}\cos(x_6-\delta)\,q_2 + \tfrac{2}{3}\sin(x_6)\,q_3.$$

The output matrix $C$ selects the pose information from the states of the EKF and has dimension $3 \times 6$:

$$
C = \begin{pmatrix}
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}. \quad (12)
$$

During the evaluation of this 6-state EKF, we recognized a drawback in position accuracy, which is described in the evaluation section later in this paper. To compensate for this drawback, we implemented a 9-state EKF that additionally identifies three correction factors $k_{rw1}$, $k_{rw2}$ and $k_{rw3}$. The state vector of this 9-state EKF is given by:

$$x_k = (dx, dy, d\varphi, x, y, \varphi, k_{rw1}, k_{rw2}, k_{rw3}) \in \mathbb{R}^9. \quad (13)$$
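The fusion scheme can be sketched as follows: odometry drives the a priori update every cycle, and the measurement update runs only when the vision system delivers a new pose. This is an illustrative minimal implementation under those assumptions, not the authors' code; the class and parameter names are hypothetical, and $G = I$ is assumed.

```python
import numpy as np

class FusionEKF:
    """Predict with the (linearized) odometry model; correct with vision poses."""

    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, A, F):
        # a priori update (Eqs. 5/7): propagate through the odometry model
        self.x = f(self.x)
        self.P = A @ self.P @ A.T + F @ self.Q @ F.T

    def update(self, z, C):
        # a posteriori update (Eqs. 6/8), here with G = I: executed only
        # when the vision-based localization provides a new pose z
        S = C @ self.P @ C.T + self.R
        K = self.P @ C.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - C @ self.x)
        self.P = (np.eye(len(self.x)) - K @ C) @ self.P
```

Between vision frames only `predict()` runs, so the filter still outputs a pose estimate every cycle at the odometry rate.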
Table 1. Pseudo code for the EKF [11]

    EKF Algorithm
    {Initialization}
      x_0 = {0}
      P_0^+ = {0}
      Q = {·}, R = {·}
    loop
      {prediction update – a priori update}
      linearization(A, F)
      x̂_k^- = f(x_{k-1}^+, u_k, w_k)
      P_k^- = A_{k-1} P_{k-1}^+ A_{k-1}^T + F_{k-1} Q_{k-1} F_{k-1}^T
      {measurement update – a posteriori update}
      if new pose information from vision-based localization
        linearization(C, G)
        K = P_k^- C_k^T (C_k P_k^- C_k^T + G_k R G_k^T)^{-1}
        x̂_k^+ = x̂_k^- + K (y_k - g(x̂_k^-, 0))
        P_k^+ = (I - K C) P_k^-
      else {no measurement: a posteriori = a priori}
        P_k^+ = P_k^-
        x_k^+ = x_k^-
      end if
      {calculate the output}
      y_k = C_k x_k + v_k
    end loop

In the 9-state EKF, the state update is calculated with

$$
\begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{\varphi} \end{pmatrix}_{EKF}
=
\begin{pmatrix}
\frac{2}{3}\cos(\varphi+\delta) & -\frac{2}{3}\cos(\varphi-\delta) & \frac{2}{3}\sin(\varphi) \\
\frac{2}{3}\sin(\varphi+\delta) & -\frac{2}{3}\sin(\varphi-\delta) & -\frac{2}{3}\cos(\varphi) \\
\frac{1}{3R} & \frac{1}{3R} & \frac{1}{3R}
\end{pmatrix}
\begin{pmatrix} q_1 \cdot k_{rw1} \\ q_2 \cdot k_{rw2} \\ q_3 \cdot k_{rw3} \end{pmatrix}. \quad (14)
$$

Equation (14) provides the state update calculation of the 9-state EKF together with the correction factors, which compensate a faulty rolling speed estimation by acting, for example, as a variable wheel radius (see Equations (2) and (3)) caused by an uneven floor. The correction factors are adaptive, have an effect similar to the Kalman gain, and are expected to converge to certain values.

A faulty estimation of the rolling speed of the wheels can be caused by slip, the nubby floor in the experimental hall, and the special form of the omni-wheels [12] and their mounting positions on the Robotino body frame [5]. The error propagation of odometry and the long gaps between update phases caused the increasing inaccuracy of the pose estimate in the 6-state EKF. The correction factors in the 9-state EKF, however, allow this faulty odometry pose estimate to be compensated. Consequently, we can use long update cycles in order to get more reliable pose information from the vision-based localization. Because the vision-based localization can occasionally deliver extremely faulty pose information (a very large difference to the last position), we limit the value range of the correction factors to 0.7–1.3. Otherwise, the extreme value changes may lead to an uncertain Kalman gain and correction factors $(k_{rw1}, k_{rw2}, k_{rw3})$ that make the EKF prediction false.

The system noise covariance matrix $Q$ is determined empirically; during experimental tests we defined the noise of the wheel encoders as 10 rpm:

$$Q = 10^2 \cdot I_3. \quad (15)$$

The measurement noise covariance matrix $R$ describes the inaccuracy of the vision-based localization. In our case the inaccuracy of the position determination equates to 12 mm, and the orientation inaccuracy to 0.029 rad or 1.7°:

$$
R = \begin{pmatrix}
0.012^2 & 0 & 0 \\
0 & 0.012^2 & 0 \\
0 & 0 & 0.029^2
\end{pmatrix}. \quad (16)
$$

The sampling time of the EKF defines the speed of the localization system. In our case we chose a sampling time of 40 ms, which equals the sampling time of the control system for the mobile robots in MATLAB/Simulink. Faster sampling would cause a lack of wheel encoder signal information, making the odometry calculations incorrect and the use of the EKF ineffective. The implemented EKF is shown in Table 1.

5. EVALUATION

For the evaluation of tracking, we compare different trajectories of the robot(s) driven using pure odometry, vision, and the two different EKF pose estimates.

As expected, localization using odometry alone cannot deliver very good results, so we do not evaluate pure odometry localization further here. For the evaluation of the vision-based localization method, we took simple measurements on the ground and compared them with the pose information from the vision system. As a result, we set the standard deviation of the vision-based position determination in x and y to 12 mm, and the standard deviation of the orientation determination to 1.7°. These inaccuracies are explained by the image distortion caused by the short focal length of the camera system used. This inaccuracy information, as mentioned in the last section, is used to describe the measurement noise covariance matrix. In certain situations, the vision-based localization fails for fast-moving robots because of blurred marker points. With the EKF in place, such failures at the visual level can be compensated for, making the localization system more reliable.

We have implemented two different demonstrations using the Robotinos. The first demonstration (Figure 6) is a docking maneuver of a single Robotino into its docking station. The Robotino is driven using pose information from:

- odometry localization,
- pure vision-based localization,
- 6-state EKF localization, and
- 9-state EKF localization.
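The filter configuration used in these demonstrations, i.e. the noise covariances of Equations (15)-(16) and the limiting of the correction factors to the range 0.7–1.3, can be sketched as follows. The state layout follows Equation (13); the helper function name is hypothetical.

```python
import numpy as np

# Noise covariances per Eqs. (15)-(16): encoder noise of 10 rpm,
# vision std of 12 mm in x/y and 0.029 rad in orientation.
Q = 10.0**2 * np.eye(3)
R = np.diag([0.012**2, 0.012**2, 0.029**2])

def clamp_correction_factors(x):
    """Limit the wheel-radius correction factors k_rw1..k_rw3 (states 7-9
    of the 9-state EKF, Eq. 13) to [0.7, 1.3], so that outlier vision poses
    cannot drive the factors (and hence the EKF prediction) to false values."""
    x = np.asarray(x, dtype=float).copy()
    x[6:9] = np.clip(x[6:9], 0.7, 1.3)
    return x
```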
Fig. 6. Docking drive of the Robotino; (a) different driving trajectories caused by the different localization methods; (b) signal pattern of the correction factors of the 9-state EKF.
Fig. 7. Synchronous drive of the Robotinos; Robotino trajectories driven by: (a) odometry localization; (b) vision-based localization; (c) 6-state EKF localization; (d) 9-state EKF localization.
As seen in Figure 6a, the Robotino driven by odometry pose information (green line) fails to hit its mark, with a 200 mm deviation. Using pure vision-based localization (red line), we got very reasonable results and it was possible to dock the Robotino into its docking station. However, we noticed bucking movements of the Robotino due to the low tracking frequency of the vision-based localization. The trajectories obtained using the two different EKF pose estimates are also visualized in Figure 6a.

The Robotino driven by the 6-state EKF pose information (magenta line) missed its mark by 20–30 mm, while the 9-state EKF pose information (blue line) allowed a perfect docking of the Robotino. These results show the advantage of the correction factors in the 9-state EKF. Figure 6b shows the signal pattern of the correction factors in the 9-state EKF. During the extensive tests we have seen that the rolling speed correction factors of the two front wheels (wheels #1 and #2) converged to 0.95, while the correction factor of the back wheel (wheel #3) remained at 1. These values result from the special form of the omni-wheels and their orientation in the Robotino frame. In order to achieve good results with the 9-state EKF, it is reasonable to determine the converged value of each correction factor during test drives and set these as the initial state values of the 9-state EKF.

In a second demonstration, three Robotinos drive a given rectangle in formation (synchronously). Figure 7a shows the trajectory of the Robotinos driven using odometry localization. As can be seen, error propagation is the main problem when using odometry for localization. Controlling the Robotinos using the vision-based localization method (Figure 7b) allows for very good compliance with the formation, resulting in the Robotinos driving a nearly perfect rectangular track. However, the bucking movements of the Robotinos were still observed. The same accuracy could not be achieved with the 6-state EKF (Figure 7c). In this case the Robotinos cannot follow the expected trajectories and behave the same as when driven by odometry localization: because of the missing correction factors, the error propagation from the odometry could not be corrected properly in the 6-state EKF. The Robotino trajectories driven by the 9-state EKF are shown in Figure 7d. The first round had some deviations from the expected rectangular track, which can be explained by the learning phase of the EKF, i.e. the correction factors of the 9-state EKF had not yet converged. In subsequent rounds the Robotinos followed the predetermined tracks with more accuracy. For further evaluations, we are considering more complex maneuvers for the Robotinos; to measure the ground truth, we would then need an independent localization system of corresponding accuracy in order to draw comparable conclusions.

6. CONCLUSIONS

In this paper we presented a reliable and accurate localization method for indoor mobile robots. We have shown that a well-combined use of odometry and vision-based localization methods provides very reasonable results. For this combination (sensor fusion) we employed the Extended Kalman Filter, which allows direct use of the nonlinear translation equations of odometry. The vision-based localization using a single camera system provided global pose information for all marked mobile robots, in this case Robotinos.

The localization approach presented here delivers position and orientation information for each robot at a 25 Hz tracking frequency with a maximum inaccuracy of 12 mm (position) and 1.7° (orientation) in a 4.8 m × 3.5 m field. Using additional sensors, for example a gyroscope, the localization accuracy can be improved further. In future work, we will investigate possibilities to employ the onboard cameras of the Robotinos for so-called "vision odometry", for example to properly maneuver a Robotino into a docking station beyond the capture field of the global vision-based localization. We will also investigate the possibility of using our localization system for precise positioning of objects or mobile robots at certain points.

REFERENCES

[1] P. Goel, S. I. Roumeliotis and G. S. Sukhatme, "Robust Localization Using Relative and Absolute Position Estimates," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Los Angeles, CA, USA, 1999.
[2] P. F. Alcantarilla, S. M. Oh, G. L. Mariottini, L. M. Bergasa and F. Dellaert, "Learning Visibility of Landmarks for Vision-Based Localization," in IEEE International Conference on Robotics and Automation, 2010.
[3] B. Bischoff, D. Nguyen-Tuong, F. Streichert, M. Ewert and A. Knoll, "Fusing Vision and Odometry for Accurate Indoor Robot Localization," in 12th International Conference on Control, Automation, Robotics & Vision, Guangzhou, China, 2012, pp. 347-352.
[4] Festo Didactic GmbH, "Robotino NorthStar Handbook".
[5] Festo Didactic GmbH, "Robotino Handbook". http://www.robotino.de
[6] X. Li, M. Wang and A. Zell, "Dribbling Control of Omnidirectional Soccer Robots," in IEEE International Conference on Robotics and Automation, Roma, Italy, 2007, pp. 2623-2628.
[7] Openrobotino - Robotino Wiki, Robotino Matlab programming, http://wiki.openrobotino.org/index.php?title=Matlab
[8] IDS-Imaging GmbH, UI-6250SE-C Camera, http://de.ids-imaging.com/store/ui-6250se.html
[9] MVTec Software GmbH, "HALCON 11", http://www.mvtec.com/halcon
[10] D. Simon, Optimal State Estimation, Hoboken: Wiley-Interscience, 2006.
[11] M. Norgaard, State Estimation for Nonlinear Systems - KALMTOOL, Technical University of Denmark, Kongens Lyngby, Denmark, 2002.
[12] RoboterNETZ, Omni-Wheels, http://www.rn-wissen.de/index.php/OmniWheels
Appendix A. Jacobi matrices of the 9-state EKF

$$A_{1,6} = -\tfrac{2}{3}\sin(x_6+\delta)\,q_1 x_7 + \tfrac{2}{3}\sin(x_6-\delta)\,q_2 x_8 + \tfrac{2}{3}\cos(x_6)\,q_3 x_9;$$

$$A_{1,7} = \tfrac{2}{3}\cos(x_6+\delta)\,q_1; \quad A_{1,8} = -\tfrac{2}{3}\cos(x_6-\delta)\,q_2; \quad A_{1,9} = \tfrac{2}{3}\sin(x_6)\,q_3;$$

$$A_{2,6} = \tfrac{2}{3}\cos(x_6+\delta)\,q_1 x_7 - \tfrac{2}{3}\cos(x_6-\delta)\,q_2 x_8 + \tfrac{2}{3}\sin(x_6)\,q_3 x_9;$$

$$A_{2,7} = \tfrac{2}{3}\sin(x_6+\delta)\,q_1; \quad A_{2,8} = -\tfrac{2}{3}\sin(x_6-\delta)\,q_2; \quad A_{2,9} = -\tfrac{2}{3}\cos(x_6)\,q_3;$$

$$A_{3,7} = \tfrac{1}{3R}\,q_1; \quad A_{3,8} = \tfrac{1}{3R}\,q_2; \quad A_{3,9} = \tfrac{1}{3R}\,q_3; \quad A_{4,1} = \Delta t; \quad A_{5,2} = \Delta t; \quad A_{6,3} = \Delta t;$$

$$F_{1,1} = \tfrac{2}{3}\cos(x_6+\delta)\,p\,x_7; \quad F_{1,2} = -\tfrac{2}{3}\cos(x_6-\delta)\,p\,x_8; \quad F_{1,3} = \tfrac{2}{3}\sin(x_6)\,p\,x_9;$$

$$F_{2,1} = \tfrac{2}{3}\sin(x_6+\delta)\,p\,x_7; \quad F_{2,2} = -\tfrac{2}{3}\sin(x_6-\delta)\,p\,x_8; \quad F_{2,3} = -\tfrac{2}{3}\cos(x_6)\,p\,x_9;$$

$$F_{3,1} = \tfrac{1}{3R}\,p\,x_7; \quad F_{3,2} = \tfrac{1}{3R}\,p\,x_8; \quad F_{3,3} = \tfrac{1}{3R}\,p\,x_9;$$

$$
C = \begin{pmatrix}
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0
\end{pmatrix}; \quad G = 0.
$$
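Analytic Jacobian entries of this kind can be sanity-checked against finite differences of the state-update model. A sketch for the entry A_{1,6} of the 9-state model (Eq. (14) with states x7..x9 as correction factors); δ and R are placeholder parameter values.

```python
import numpy as np

DELTA, R_BODY = np.deg2rad(30.0), 0.135  # placeholder geometry parameters

def f_rates(phi, q, k):
    """Corrected pose rates (dx, dy, dphi) of the 9-state model, Eq. (14)."""
    qc = np.asarray(q) * np.asarray(k)
    return np.array([
        2/3*np.cos(phi+DELTA)*qc[0] - 2/3*np.cos(phi-DELTA)*qc[1] + 2/3*np.sin(phi)*qc[2],
        2/3*np.sin(phi+DELTA)*qc[0] - 2/3*np.sin(phi-DELTA)*qc[1] - 2/3*np.cos(phi)*qc[2],
        (qc[0] + qc[1] + qc[2]) / (3*R_BODY),
    ])

def A16_analytic(phi, q, k):
    """Appendix entry A_{1,6}: derivative of dx w.r.t. phi (= x6),
    with the correction factors x7..x9 given as k."""
    return (-2/3*np.sin(phi+DELTA)*q[0]*k[0]
            + 2/3*np.sin(phi-DELTA)*q[1]*k[1]
            + 2/3*np.cos(phi)*q[2]*k[2])
```

Comparing `A16_analytic` with a central finite difference of the first component of `f_rates` confirms the analytic expression.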