
49274 Advanced Robotics

Project 1 – Feature
Map vs Occupancy
Grid Map Localization

Student Name        Student Number
Daniel Sayers       99127058
Joshua Gonsalves    11848759
TABLE OF CONTENTS

1 INTRODUCTION
1.1 BACKGROUND
1.2 SCOPE
1.3 PURPOSE
2 FEATURE BASED MAP LOCALISATION
2.1 THE EXTENDED KALMAN FILTER
2.1.1 Overview
2.1.2 The Algorithm
2.2 MATLAB
2.3 SIMULATIONS
2.3.1 1 Feature
2.3.2 2 Features
2.3.3 4 Features
3 OCCUPANCY GRID MAP LOCALISATION
3.1 THE PARTICLE FILTER
3.1.1 Overview
3.1.2 Algorithm
3.2 SIMULATIONS
3.2.1 Test 1
3.2.2 Test 2
4 CONCLUSION
4.1 DISCUSSION OF FINDINGS
4.2 SUMMARY OF PROBLEMS ENCOUNTERED
4.3 FUTURE WORK
5 BIBLIOGRAPHY

TABLE OF FIGURES

FIGURE 1 - TEXTURED OCCUPANCY GRIDS FOR MONOCULAR LOCALIZATION (MASON, 2011)
FIGURE 2 - SIMULATION STAGE
FIGURE 3 - KALMAN FILTER (AIMONEN, 2014)
FIGURE 4 - MATLAB INPUT FLOWCHART (EARNSHAW, 2016)
FIGURE 5 - ROBOT PATH, 1 FEATURE, 2% NOISE
FIGURE 6 - COVARIANCE, 1 FEATURE, 2% NOISE
FIGURE 7 - ROBOT PATH, 1 FEATURE, 7% NOISE
FIGURE 8 - COVARIANCE, 1 FEATURE, 7% NOISE
FIGURE 9 - ROBOT PATH, 2 FEATURES, ~20% NOISE
FIGURE 10 - COVARIANCE, 2 FEATURES, ~20% NOISE
FIGURE 11 - ROBOT PATH, 4 FEATURES, ~20% NOISE
FIGURE 12 - COVARIANCE, 4 FEATURES, ~20% NOISE
FIGURE 13 - ROBOT PATH, 4 FEATURES, MAX NOISE
FIGURE 14 - COVARIANCE, 4 FEATURES, MAX NOISE
FIGURE 15 - ROBOT PATH, 4 FEATURES, HIGH NOISE
FIGURE 16 - COVARIANCE, 4 FEATURES, HIGH NOISE
FIGURE 17 - MAP 1
FIGURE 18 - MAP 2
FIGURE 19 - MAP 1, 100 PARTICLES
FIGURE 20 - MAP 1, 1000 PARTICLES
FIGURE 21 - MAP 1, 10000 PARTICLES
FIGURE 22 - ALL X AXIS ERRORS
FIGURE 23 - ALL Y AXIS ERRORS
FIGURE 24 - ALL ORIENTATION ERRORS

1 INTRODUCTION

1.1 BACKGROUND

University of Technology Sydney (UTS) course 49274 Advanced Robotics includes a major project comparing the accuracy of feature based map localisation algorithms with that of occupancy grid based map localisation algorithms.

In this report, we discuss two kinds of maps that can be used for localisation. The first is a feature based map, for which the localisation algorithm is based on the Extended Kalman Filter (EKF). The second is an occupancy grid map, for which the localisation algorithm is based on the Particle Filter (PF).

Figure 1 - Textured Occupancy Grids for Monocular Localization (Mason, 2011)

This report describes the methods we used to compare the feature based and occupancy grid maps. It outlines the scope of the project and the steps taken to achieve our objectives, and discusses the results obtained. The report also includes a summary of problems encountered, as well as future developments we would like to make.

1.2 SCOPE

In this project, we compare and contrast the differences between:

1) Using an Extended Kalman Filter (EKF) localisation algorithm in a feature based map, and

2) Using a Particle Filter (PF) localisation algorithm in an occupancy grid based map.

To test the PF, we use the particle filter we designed in Assignment 1 in ROS. We vary the map, the map size, and the number of particles used. In each variation, we run the robot through a set path and compare the actual position of the robot with its estimated position.

To test the EKF, we use MATLAB to acquire data by simulating a 2D map containing features. We vary the number of landmarks in the map, the map size, and the sensor noise, simulate the robot along a path, and compare its actual position with its estimated position. We collect this data for each variation, tabulate it, and compare its reliability to that of the PF.

1.3 PURPOSE

The purpose of this document is to present our team’s methodology behind our execution of the
project.

Through our implementation method, which primarily focuses on visually tabulating simulation results and recording our observations, we hope to communicate our understanding of localisation and of Advanced Robotics more broadly.

Figure 2 - Simulation Stage

2 FEATURE BASED MAP LOCALISATION

2.1 THE EXTENDED KALMAN FILTER

2.1.1 Overview

The EKF algorithm is a nonlinear version of the Kalman Filter (KF). The KF has many uses and is nowadays applied primarily in vehicle guidance and robotics. The algorithm is a two-step process:

1) Predict: uses a control input to make a prediction of the next state

2) Update: uses sensor observations to compare against the predicted state

Figure 3 - Kalman Filter (Aimonen, 2014)

Mathematically, the EKF shares the same prediction and update steps as the KF, but some of the matrices that are linear in the KF algorithm become non-linear functions in the EKF. The EKF is currently considered the standard approach in nonlinear state estimation, GPS, and navigation systems.

2.1.2 The Algorithm

The EKF algorithm we studied was derived from 'Understanding Extended Kalman Filter – Part III: Extended Kalman Filter' (Huang, 2010). The EKF process model, for the transition from time $k$ to $k+1$, is:

$$x_{k+1} = f(x_k, u_k) + w_k$$

Observation model at k+1:

Prediction:

Update:
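As a minimal illustration of the structure of the prediction and update steps described above, the MATLAB sketch below implements one generic EKF iteration. The function name, the Jacobian handles `Fj` and `Hj`, and the noise matrices `Q` and `R` are generic placeholders under our own assumptions, not the exact code used in this project.

```matlab
% One generic EKF iteration (sketch, not the exact code used in this project).
% x, P   : current state estimate and covariance
% u, z   : control input and measurement for the step k -> k+1
% f, h   : motion and observation model function handles
% Fj, Hj : function handles returning the Jacobians of f and h
% Q, R   : process and measurement noise covariances
function [x, P] = ekf_step(x, P, u, z, f, h, Fj, Hj, Q, R)
    % Prediction: propagate the state and covariance through the motion model
    F      = Fj(x, u);
    x_pred = f(x, u);
    P_pred = F * P * F' + Q;

    % Update: correct the prediction using the observation
    H = Hj(x_pred);
    y = z - h(x_pred);              % innovation
    S = H * P_pred * H' + R;        % innovation covariance
    K = P_pred * H' / S;            % Kalman gain
    x = x_pred + K * y;
    P = (eye(numel(x)) - K * H) * P_pred;
end
```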

2.2 MATLAB

As discussed in consultation with our supervisor Shoudong, we used MATLAB to simulate feature based map localisation using the EKF. We did not write the MATLAB code ourselves, but sourced it from a research project that developed MATLAB code implementing EKF algorithms for feature based robot localisation. The most difficult part of this work was understanding how the code worked and modifying it to suit our objectives. The ratio of feature size to map size is 1:10000.

The simulations show two different styles of observing features: one range based and one bearing based. Graphs then show the covariance and error.

Below is the input flowchart of the code:

Figure 4 - MATLAB Input Flowchart (Earnshaw, 2016)

2.3 SIMULATIONS

2.3.1 1 Feature

In order to effectively demonstrate how accurate localisation is with just one feature, we ran two simulations: one with 2% noise and one with the noise increased by 5% (to 7%). The path used in both is the sine wave path.

Movement noise: 0.01/0.5
Turning noise: 0.5/10
Range noise: 0.1/2
Bearing noise: 1/20

Figure 5 - Robot Path, 1 Feature, 2% Noise

Figure 6 - Covariance, 1 Feature, 2% Noise

Movement noise: 0.2/0.5
Turning noise: 0.5/10
Range noise: 0.2/2
Bearing noise: 2.5/20

Figure 7 - Robot Path, 1 Feature, 7% Noise

Figure 8 - Covariance, 1 Feature, 7% Noise

Analysis

As the results above show, a small increase in noise significantly increases the covariance when there is only one feature, especially as the robot approaches the feature. We can conclude that, even with minimal noise, feature based localisation is unreliable when there is only one feature (feature to map size ratio of 1:10000).

2.3.2 2 Features

Movement noise: 0.1/0.5
Turning noise: 5/10
Range noise: 0.25/2
Bearing noise: 2.5/20

For two features, the simulation presented below uses significantly higher noise than the previous simulations.

Figure 9 - Robot Path, 2 Features, ~20% Noise

Figure 10 - Covariance, 2 Features, ~20% Noise

Analysis

Compared with the one-feature map, the two-feature map performed much better in terms of accuracy, even with heavily increased noise. We ran a few more simulations for two features with the noise increased even further, but felt they were not relevant to this report, as we had already been able to make a good comparison between the one and two feature maps.

2.3.3 4 Features

For the four-feature map, the simulation covariances were roughly half those of the two-feature map. We also used a different map with different step counts.

Movement noise: 0.1/0.5
Turning noise: 5/10
Range noise: 0.25/2
Bearing noise: 2.5/20

Figure 11 - Robot Path, 4 Features, ~20% Noise

Figure 12 - Covariance, 4 Features, ~20% Noise

Movement noise: 0.5/0.5
Turning noise: 10/10
Range noise: 2/2
Bearing noise: 20/20

Figure 13 - Robot Path, 4 Features, Max Noise

Figure 14 - Covariance, 4 Features, Max Noise

Movement noise: 0.1/0.5
Turning noise: 5/10
Range noise: 0.5/2
Bearing noise: 5/20

Figure 15 - Robot Path, 4 Features, High Noise

Figure 16 - Covariance, 4 Features, High Noise

Analysis

As the simulations above demonstrate, unless the noise of the robot's sensors is extremely high, the covariance of the robot's localisation stays quite low with four features in a map with a 1:10000 feature to map size ratio. As the robot moves further away from its last closest feature, the inaccuracy increases.

3 OCCUPANCY GRID MAP LOCALISATION

3.1 THE PARTICLE FILTER

3.1.1 Overview

The particle filter is a localisation algorithm used in many different applications; here we used it to localise a robot. A particle filter aims to solve a filtering problem, which consists of estimating the internal states of a dynamic system, in order to compute the location of the robot in the real world. To implement this kind of filter, the machine needs several resources to discover the robot's location. It requires a map of the area in which it is operating, so that it can estimate its position by comparing data obtained from laser sensors against that map, and hence it also requires the sensors themselves. In a particle filter, we use random samples to represent our belief about the robot's position and orientation. Each particle is a guess at the robot's actual position, with a weighting based on how well that particle agrees with the readings obtained. Each time the robot moves, the sensors gain new information and a new estimate is calculated, based on the previous belief as well as the new data. The belief is updated by first predicting where we now think we are, and then comparing that prediction to the map to find the final belief.

3.1.2 Algorithm

Each particle's movement is based on the equations below:

$$x_{t+1} = x_t + (d + w_d)\cos(\theta_t)$$

$$y_{t+1} = y_t + (d + w_d)\sin(\theta_t)$$

$$\theta_{t+1} = \theta_t + (\Delta\theta + w_\theta)$$

The new x and y values are calculated by adding to the old belief the distance believed to have been travelled plus noise, drawn from a Gaussian distribution, split into x and y components using cosine and sine. The orientation is found similarly: the previous belief is added to the change in orientation plus noise. This therefore gives us a somewhat random distribution of particles.
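The following MATLAB sketch shows one way this prediction step can be implemented for an array of particles; the noise standard deviations `sigma_d` and `sigma_theta` are illustrative parameters rather than the values used in our ROS implementation.

```matlab
% Propagate N particles through the motion model (sketch).
% particles is an N x 3 matrix of [x, y, theta]; d and dtheta come from odometry.
function particles = predict_particles(particles, d, dtheta, sigma_d, sigma_theta)
    N  = size(particles, 1);
    wd = sigma_d     * randn(N, 1);     % Gaussian noise on the travelled distance
    wt = sigma_theta * randn(N, 1);     % Gaussian noise on the rotation

    particles(:, 1) = particles(:, 1) + (d + wd) .* cos(particles(:, 3));
    particles(:, 2) = particles(:, 2) + (d + wd) .* sin(particles(:, 3));
    % keep theta in [-pi, pi)
    particles(:, 3) = mod(particles(:, 3) + dtheta + wt + pi, 2*pi) - pi;
end
```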

In order to calculate the measurement probability, that is, how accurate each particle is likely to be
in relation to the actual position, we use another equation.

$$\frac{1}{\sqrt{2\pi\sigma_z^2}}\, e^{-\frac{(\hat{z}_i - z_i)^2}{2\sigma_z^2}}$$

The terms that change in this equation are $z_i$ and $\hat{z}_i$; $\sigma_z$ is the standard deviation, which is held constant. These terms represent the real measurement and the predicted measurement respectively, based on the location of that particle and the map. The result of this equation is the likelihood of the particle being accurate relative to the actual position of the robot.
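As a minimal sketch of how this likelihood can be turned into particle weights, the MATLAB fragment below multiplies the Gaussian likelihood over all range measurements and then normalises. The variable names and the assumption that predicted ranges come from ray-casting each particle into the map are illustrative, not taken from our ROS code.

```matlab
% Weight each particle by the Gaussian likelihood of its predicted ranges (sketch).
% z      : M x 1 vector of real laser ranges
% z_hat  : N x M matrix of ranges predicted by ray-casting each particle into the map
% sigma_z: constant measurement standard deviation
function w = measurement_weights(z, z_hat, sigma_z)
    N = size(z_hat, 1);
    w = ones(N, 1);
    for i = 1:size(z_hat, 2)
        err = z_hat(:, i) - z(i);
        w   = w .* exp(-err.^2 / (2 * sigma_z^2)) / sqrt(2 * pi * sigma_z^2);
    end
    w = w / sum(w);    % normalise so the weights sum to 1
end
```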

Our final step involves taking the average of all of our particles to find the best estimate of the actual position of the robot. We implemented this by taking the weighted average of all the particles, resulting in an estimate around the middle of the converged particles. The equations used for this were:

$$x_{average} = \sum_{i=1}^{n} x_i \, w_i$$

$$y_{average} = \sum_{i=1}^{n} y_i \, w_i$$

$$\bar{\theta} = \operatorname{atan2}\!\left(\frac{\sum_{i=1}^{n}\sin(\theta_i)}{n},\ \frac{\sum_{i=1}^{n}\cos(\theta_i)}{n}\right)$$

where $n$ is the number of particles and $w_i$ is the weight of particle $i$.

As the total weight of all particles sums to 1, multiplying each Cartesian value by its weight and then adding these together gives the weighted average Cartesian value, which is the best estimate of the robot's pose. The angle uses a different equation because the orientation is a polar quantity: it is converted to Cartesian components to find the average and then transformed back into a polar coordinate in radians.
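A minimal MATLAB sketch of this pose-averaging step, assuming the `particles` matrix and normalised weights `w` from the previous fragments, is shown below.

```matlab
% Estimate the robot pose as the weighted average of all particles (sketch).
% particles is N x 3 ([x, y, theta]); w is an N x 1 vector of normalised weights.
function pose = estimate_pose(particles, w)
    x_avg = sum(particles(:, 1) .* w);
    y_avg = sum(particles(:, 2) .* w);

    % Circular mean of the orientations: average the unit vectors, then take atan2.
    theta_avg = atan2(mean(sin(particles(:, 3))), mean(cos(particles(:, 3))));

    pose = [x_avg, y_avg, theta_avg];
end
```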

Finally, as particles sometimes become less accurate, because their random sampling takes them further away from the actual position of the robot, they need to be reset. This is achieved in a resampling function, which removes old, no-longer-relevant particles and replaces them with particles that rejoin the converged area.
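The report does not record the exact resampling scheme we used; as one common possibility, the sketch below shows systematic (low-variance) resampling, which draws new particles in proportion to their weights.

```matlab
% Systematic (low-variance) resampling (sketch of one common scheme).
% Draws N new particles with probability proportional to their weights.
function particles = resample(particles, w)
    N        = size(particles, 1);
    edges    = cumsum(w);                       % cumulative weight distribution
    start    = rand / N;                        % single random offset
    pointers = start + (0:N-1)' / N;            % N evenly spaced pointers

    idx = zeros(N, 1);
    j   = 1;
    for i = 1:N
        while j < N && pointers(i) > edges(j)
            j = j + 1;
        end
        idx(i) = j;
    end
    particles = particles(idx, :);              % duplicate strong particles, drop weak ones
end
```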

3.2 SIMULATIONS

In order to test the accuracy of our particle filter, we modified the simulation to allow us to change the number of particles used, the size of the map, and the map itself. We decided to test our robot's accuracy using 100, 1000 and 10000 particles, across the two map types shown below, and two different sizes of each map.

Figure 17 - Map 1
Figure 18 - Map 2

3.2.1 Test 1

[Chart: Error over Time – X error, Y error, θ error]

Figure 19 - Map 1, 100 Particles

Analysis

With the smallest number of particles, the filter took a lot longer to converge, showing a greater margin of error before it was finally able to maintain a convergence on the robot. Even then, the error registered here is greater than that found in later tests. This is largely because there are fewer guesses, reducing the likelihood of an accurate guess at the robot's pose. If we were to reduce the number of particles even further, this trend would grow rapidly: as we approach one particle, the error tends toward infinity.

[Chart: Error over Time – X error, Y error, θ error]

Figure 20 - Map 1, 1000 Particles

Analysis

Using 1000 particles provides both a reasonable amount of computational effort and a fairly accurate result, with fast convergence compared to the 100-particle example. There is, however, a large spike where the orientation appears to have a large error. This is a result of the calculation method used to find the difference between the actual and estimated orientations. The orientation is measured in radians in polar form, so while one estimate believes the robot is at -π, the actual orientation is π; these are the same orientation in polar coordinates, which is why there appears to be an error of 2π. The overall accuracy of the particle filter in this case is much more stable than with 100 particles. This can be attributed to the increase in particles, which results in a more accurate weighted average, since there is a greater chance that more particles will be close to the actual position of the robot.
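This apparent 2π error is an artefact of not wrapping the angular difference. A small MATLAB sketch of the fix, which maps any angular difference into [-π, π), is shown below; it was not part of our original error calculation, which is why the spike appears in the plot.

```matlab
% Wrap an angular error into [-pi, pi) so that a difference of 2*pi becomes 0.
function e = wrap_angle_error(theta_true, theta_est)
    e = mod(theta_true - theta_est + pi, 2*pi) - pi;
end
```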

[Chart: Error over Time – X error, Y error, θ error]

Figure 21 - Map 1, 10000 Particles

Analysis

Again the trend continues: as we increase the number of particles, the trend lines become more stable and more accurate. This can be attributed to the same reasons as in the last test, where the increased number of particles provides more guesses at the actual pose of the robot, meaning there is a greater chance of more particles accurately describing the robot's pose. From this trend, we can see that as the number of particles tends toward infinity, the error tends toward 0.

3.2.2 Test 2

[Chart: Error in X Axis – maps 1 to 4 with 100, 1000 and 10000 particles each]

Figure 22 - All X Axis Errors

[Chart: Error in Y Axis – maps 1 to 4 with 100, 1000 and 10000 particles each]

Figure 23 - All Y Axis Errors

[Chart: Error in Orientation – maps 1 to 4 with 100, 1000 and 10000 particles each]

Figure 24 - All Orientation Errors

Analysis

The results above make clear how the number of particles affects performance. Across the different map types we observed similar errors, with the major influencing factor being the number of particles used for the simulation. With 100 particles the error varied between -1 and 1 with considerable uncertainty, as the algorithm continually resamples to try to find the exact position of the robot. For the 1000 and 10000 particle simulations the results were quite similar, with an average error of about 0.1 for 1000 particles and around 0.001 for 10000 particles. The extra processing time required to run the 10000 particle simulation was not reflected in the results: the robot gained accuracy, but not enough to warrant roughly a 1000% increase in time required per calculation. If this were a real robot trying to act in real time, using a particle count that is too high, such as 10000 in our simulation, would restrict its ability to act within a reasonable time period, even though it would be more confident in its approximate position. We ran one further simulation to test the theory that as the number of particles approaches infinity, the error goes to 0. For this we used 100,000 particles, requiring around 35 seconds to simulate each step. This test gave an error of around 1×10⁻⁸, which, following the trend, does suggest that the error tends to 0 as the number of particles increases.

4 CONCLUSION

4.1 DISCUSSION OF FINDINGS

From running multiple MATLAB simulations and experimenting with different map sizes and noise levels, we found that the EKF algorithm was more accurate when there were more features. As the robot approached a feature, the accuracy would increase, and as it moved further away from a feature, the accuracy would decrease.

More noise increased the inaccuracy, and the inaccuracy also grew with every step the robot took.

Note that the accuracy associated with the features was based on feature size in relation to map size. If the map were 10 metres by 10 metres, a 1x1 metre feature would give higher localisation accuracy; however, with the same 1x1 metre feature, the robot would have a much harder time estimating its position in a 1000x1000 metre map.

With the particle filter, more particles also meant better accuracy. However, too many particles required significantly more processing power, which meant the robot localised itself more slowly. This makes the particle filter the more accurate filter in theory; however, in real-world applications the immense processing power required to achieve these accuracy results in real time is simply not available with current technology.

Given certain conditions, either filter can be optimal, depending on the real-world environment in which it is placed. If a room has many features, then the EKF can achieve a more accurate result than the particle filter; however, with more sensors and particles, the particle filter can be more accurate. We have concluded from our research that the filter to be chosen depends on the situation and application in which the robot will be used.

4.2 SUMMARY OF PROBLEMS ENCOUNTERED

The main issue we encountered while testing the particle filter was the ROS environment. We found that different computers would yield different results, often quite dramatically. This was most evident on one of our laptops, where the simulation would never converge at the correct location but instead converged at a random pose. It was also evident when running the same simulation multiple times on a computer that normally yielded expected results: sometimes the simulation would converge exactly opposite to the robot, mirroring the robot's moves instead of following them.

4.3 FUTURE WORK

The comparison we made was between an EKF and a PF, both of which can be used for 2D applications. However, because MATLAB and ROS simulate different environments, it was difficult to directly compare accuracies that depend heavily on the simulations themselves. Ideally, both filters would be implemented on the same robot in the same environment, making them somewhat interchangeable and allowing similar results to be achieved. The results could then be compared with more constants in place, holding many of the uncertainties stable for each test and changing one aspect of the simulation at a time to show how it affects accuracy.

The comparison could also be improved by creating a much larger testing environment: increasing the number of maps used, testing multiple different sizes, and varying a larger range of parameters, such as the amount of noise in the system or the number of sensors used. By varying all of these aspects of each filter, we would be able to produce a much more thorough analysis of the differences, benefits and drawbacks of each filter and show which is generally more accurate. This does not mean that the results we obtained are irrelevant, as they do show how each filter performs in its own environment, from which a comparison can be made.

5 BIBLIOGRAPHY

Aimonen, P. (2014). Kalman Filter. Retrieved from Wikipedia: https://en.wikipedia.org/wiki/Kalman_filter

Earnshaw, C. (2016). Develop EKF Based Algorithms for Landmark Based Robot Localization.
University of Technology Sydney, Faculty of Engineering and Information Technology,
Sydney.

Huang, S. (2010). Understanding Extended Kalman Filter – Part III: Extended Kalman Filter. Faculty of
Engineering and Information Technology, University of Technology Sydney, ARC Centre of
Excellence for Autonomous Systems, Sydney.

Mason, J. (2011, March 29). Textured Localization. Retrieved from Duke University Computer
Science: https://users.cs.duke.edu/~parr/textured-localization/

