"Position"
Localization Cognition
Global Map
Localization and
Mapping
Belief representation
Map representation
Planning in the Cognition step requires more than just the position as input.
The movement of the robot, sensed with wheel encoders and/or heading sensors, is integrated to obtain the position.
Pros: straightforward, easy
Cons: errors are integrated → unbounded
Error types: deterministic (systematic) vs. non-deterministic (non-systematic)
Drift error: a difference in the errors of the two wheels leads to an error in the robot's angular orientation.
Over long periods of time, turn and drift errors far outweigh range
errors!
Consider moving forward on a straight line along the x axis. The error in the y-position introduced by a move of d meters will have a component of d·sin Δθ, which can be quite large as the angular error Δθ grows.
$$p = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix}, \qquad p' = p + \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta\theta \end{bmatrix}$$
Note: The error ellipse does not remain perpendicular to the direction of movement!
[Figure: growth of the odometry error ellipses, after Borenstein]
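A minimal sketch of this dead-reckoning pose integration for a differential-drive robot; the function, wheel-mismatch numbers and wheelbase below are illustrative, not from the slides:

```python
import numpy as np

def odometry_update(pose, d_sr, d_sl, wheelbase):
    """Integrate one encoder step into the pose [x, y, theta].

    d_sr, d_sl: distances travelled by the right/left wheel (m)
    wheelbase:  distance between the two wheels (m)
    """
    x, y, theta = pose
    d_s = (d_sr + d_sl) / 2.0             # travelled distance of the robot centre
    d_theta = (d_sr - d_sl) / wheelbase   # change of heading
    # displacement expressed in the world frame
    dx = d_s * np.cos(theta + d_theta / 2.0)
    dy = d_s * np.sin(theta + d_theta / 2.0)
    return np.array([x + dx, y + dy, theta + d_theta])

# Driving "straight" along x: a small heading error d_theta produces a lateral
# error of roughly d * sin(d_theta) after d metres -> unbounded drift.
pose = np.array([0.0, 0.0, 0.0])
for _ in range(100):
    pose = odometry_update(pose, d_sr=0.101, d_sl=0.100, wheelbase=0.5)  # slight wheel mismatch
print(pose)  # y and theta drift away from zero although the robot "intends" to go straight
```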
b) Continuous map with multiple hypotheses
c) Discretized map with probability distribution
d) Discretized topological map with probability distribution
[Figure: taxonomy of probabilistic state estimation — dynamic Bayesian networks over states S_t, actions A_t and observations O_t; the continuous-state branch leads to Bayesian filters and, more specifically, Kalman filters, the discrete-state branch to Markov chains, Markov localization and MDPs.]
Single-hypothesis Belief – Continuous Line-Map
Courtesy of W. Burgard
Continuous Representation
Decomposition (Discretisation)
Environment Representation
Continuous Metric → x,y,θ
Discrete Metric → metric grid
Discrete Topological → topological grid
Environment Modeling
Raw sensor data, e.g. laser range data, grayscale images
• large volume of data, low distinctiveness on the level of individual values
• makes use of all acquired information
Low-level features, e.g. lines or other geometric features
• medium volume of data, average distinctiveness
• filters out the useful information, still ambiguities
High level features, e.g. doors, a car, the Eiffel tower
• low volume of data, high distinctiveness
• filters out the useful information, few/no ambiguities, not enough information
Courtesy of S. Thrun
Map Representation: Decomposition
Topological Decomposition
Nodes represent locations; edges represent connectivity.
Sensor fusion
2D...3D
"Position"
Localization Cognition
Global Map
?
position
Position Update
(Estimation?)
Encoder Prediction of
Position matched
(e.g. odometry) observations
YES
Perception
Perception
beacons or landmarks
Observation
• Probabilistic Map Based Localization
Probabilistic, Map-Based Localization
As the robot starts to move, say from a precisely known location, it can keep track of its location using odometry.
However, after some movement the robot becomes very uncertain about its position.
⇒ Update using an observation of the environment.
The observation also leads to an estimate of the robot's position, which can then be fused with the odometric estimate to obtain the best possible update of the robot's actual position.
Given
• the position estimate p(k|k) and its covariance Σ_p(k|k) for time k,
• the current control input u(k),
• the current set of observations Z(k+1), and
• the map M(k),
compute the
• new (a posteriori) position estimate p(k+1|k+1) and
• its covariance Σ_p(k+1|k+1).
[Figure: localization cycle — predicted features and observations from the map database are matched against observations from perception; matched predictions feed the estimation step.]
1. Prediction based on previous estimate and odometry
2. Observation with on-board sensors
3. Measurement prediction based on prediction and map
4. Matching of observation and map
5. Estimation → position update (a posteriori position)
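A toy, fully worked instance of these five steps in one dimension: a robot on a line measures its range to a wall at a known map position. All numbers are invented and the measurement model is linear, so the filter reduces to a scalar Kalman update:

```python
import numpy as np

wall = 10.0                      # map: wall position (m)
x_est, P_est = 0.0, 0.01         # previous position estimate and its variance

# 1. Prediction from odometry (control input u, motion noise Q)
u, Q = 1.0, 0.05
x_pred, P_pred = x_est + u, P_est + Q

# 2./3. Observation and measurement prediction (range to the wall, sensor noise R)
z, R = 8.9, 0.1                  # sensed range
z_pred = wall - x_pred           # predicted range from the predicted position

# 4. Matching: accept the observation if the innovation is plausible
innovation = z - z_pred
S = P_pred + R                   # innovation covariance (h is linear with slope -1 here)
if innovation**2 / S < 9.0:      # simple gate: Mahalanobis distance below 3 sigma
    # 5. Estimation: Kalman update -> a posteriori position
    K = -P_pred / S              # Kalman gain (negative because dz/dx = -1)
    x_est = x_pred + K * innovation
    P_est = (1 - K * (-1)) * P_pred
print(x_est, P_est)              # estimate moves toward the observed position, variance shrinks
```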
Markov ↔ Kalman Filter Localization
During each update, the probability for each state (element) of the entire
space is updated.
Product rule: p(A ∧ B) = p(A|B)·p(B) = p(B|A)·p(A)
Bayes rule: p(A|B) = p(B|A)·p(A) / p(B)
Map from a belief state and a sensor input to a refined belief state (SEE):

$$p(l \mid i) = \frac{p(i \mid l)\, p(l)}{p(i)}$$

Map from a belief state and an action to a new belief state (ACT):

$$p(l_t \mid o_t) = \int p(l_t \mid l'_{t-1}, o_t)\, p(l'_{t-1})\, dl'_{t-1}$$

summing (integrating) over all possible ways in which the robot may have reached l,
with l_t and l'_{t-1}: positions, o_t: odometric measurement.
Markov assumption: the update only depends on the previous state and the most recent actions and perception.
t-i is used instead of t-1 because the topological distance between n' and n strongly depends on the specific topological map.
* Note that the probabilities do not sum to one; for simplicity, normalization was omitted in this example.
3. Robot moves
→ The action model makes it possible to estimate the new probability distribution based on the previous one and the motion.
4. Robot perceives second pillar
→ Based on all prior knowledge, the probability of being at pillar 2 becomes dominant.
Action update:
sum over the previous possible positions and the motion model.
Perception update:
given perception i, what is the probability of being at location l?
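A minimal grid-based sketch of these two updates, in the spirit of the pillar example; the map, motion model and sensor probabilities below are invented for illustration:

```python
import numpy as np

n = 10
belief = np.full(n, 1.0 / n)                               # uniform initial belief
pillar = np.array([0, 0, 1, 0, 0, 1, 0, 0, 1, 0], float)  # map: 1 = pillar at that cell

def act(belief, motion=(0.1, 0.8, 0.1)):
    """Action update: sum over all previous cells, weighted by the motion model
    (move one cell right, with some probability of under-/over-shooting;
    the corridor wraps around only to keep the sketch short)."""
    new = np.zeros_like(belief)
    for shift, prob in zip((0, 1, 2), motion):
        new += prob * np.roll(belief, shift)
    return new

def see(belief, senses_pillar, p_hit=0.8, p_false=0.1):
    """Perception update: multiply by the sensor likelihood p(i | l) and normalize."""
    likelihood = np.where(pillar == float(senses_pillar), p_hit, p_false)
    posterior = likelihood * belief
    return posterior / posterior.sum()

belief = see(belief, senses_pillar=True)   # robot perceives a pillar
belief = act(belief)                       # robot moves one cell
belief = see(belief, senses_pillar=True)   # perceives the next pillar -> belief sharpens
print(belief.round(2))
```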
Courtesy of W. Burgard
How to calculate p(i | l) – examples (sensor model)
Depends largely on the sensor and environment model.
E.g.: counting the number of sensor readings that lie in an occupied cell for a given pose (x, y, θ).
E.g.: learning p(i | l) from experiments → what is the probability of seeing a door if the robot stands in front of a door at a certain angle?
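A sketch of the counting idea for a range sensor over an occupancy grid; the grid, pose and beam endpoints below are invented:

```python
import numpy as np

def likelihood_from_counts(grid, endpoints):
    """Crude p(i | l): fraction of range-reading endpoints that fall in occupied
    cells of the grid, after projecting them into map coordinates for the given pose."""
    hits = 0
    for x, y in endpoints:                     # beam endpoints already in the map frame
        i, j = int(round(x)), int(round(y))
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1] and grid[i, j] == 1:
            hits += 1
    return hits / len(endpoints)

grid = np.zeros((20, 20))
grid[5, :] = 1                                 # a wall along one row of the grid
endpoints = [(5.1, k) for k in range(10)]      # readings from a pose facing that wall
print(likelihood_from_counts(grid, endpoints)) # -> 1.0 for a well-matching pose
```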
Courtesy of
W. Burgard
Markov Localization: Case Study 2 – Grid Map
Example 2: Museum (Courtesy of W. Burgard)
Laser scan 1
Weighted least-square
Motion model: u = velocity (control input), w = noise.
Kalman Filter for Mobile Robot Localization: Robot Position Prediction
In a first step, the robot's position at time step k+1 is predicted based on its old location (time step k) and its movement due to the control input u(k):

$$\hat{p}(k+1\,|\,k) = f\bigl(p(k\,|\,k),\, u(k)\bigr)$$

$$\Sigma_p(k+1\,|\,k) = \nabla_p f \cdot \Sigma_p(k\,|\,k) \cdot \nabla_p f^{T} + \nabla_u f \cdot \Sigma_u(k) \cdot \nabla_u f^{T}$$
$$\Sigma_u(k) = \begin{bmatrix} k_r\,\Delta s_r & 0 \\ 0 & k_l\,\Delta s_l \end{bmatrix}$$
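A sketch of this prediction step for a differential-drive odometry model; the Jacobians follow the standard kinematics consistent with the formula above, and k_r, k_l are illustrative noise constants:

```python
import numpy as np

def predict(pose, P, d_sr, d_sl, b, k_r=0.01, k_l=0.01):
    """EKF prediction: propagate the pose and its covariance through the odometry model f."""
    x, y, th = pose
    ds, dth = (d_sr + d_sl) / 2.0, (d_sr - d_sl) / b
    a = th + dth / 2.0
    pose_pred = np.array([x + ds * np.cos(a), y + ds * np.sin(a), th + dth])

    # Jacobian of f with respect to the pose p = (x, y, theta)
    Fp = np.array([[1, 0, -ds * np.sin(a)],
                   [0, 1,  ds * np.cos(a)],
                   [0, 0,  1]])
    # Jacobian of f with respect to the wheel increments u = (d_sr, d_sl)
    Fu = np.array([[0.5 * np.cos(a) - ds / (2 * b) * np.sin(a),
                    0.5 * np.cos(a) + ds / (2 * b) * np.sin(a)],
                   [0.5 * np.sin(a) + ds / (2 * b) * np.cos(a),
                    0.5 * np.sin(a) - ds / (2 * b) * np.cos(a)],
                   [1.0 / b, -1.0 / b]])
    # Motion noise grows with the travelled distance of each wheel
    Su = np.diag([k_r * abs(d_sr), k_l * abs(d_sl)])
    P_pred = Fp @ P @ Fp.T + Fu @ Su @ Fu.T
    return pose_pred, P_pred

pose, P = predict(np.zeros(3), np.eye(3) * 1e-4, d_sr=0.101, d_sl=0.100, b=0.5)
```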
Each observed line j is described by its parameters α_j and r_j in the sensor (robot) frame:

$$z_j(k+1) = \begin{bmatrix} \alpha_j \\ r_j \end{bmatrix}, \qquad
\Sigma_{R,j}(k+1) = \begin{bmatrix} \sigma_{\alpha\alpha} & \sigma_{\alpha r} \\ \sigma_{r\alpha} & \sigma_{rr} \end{bmatrix}_j$$
Kalman Filter for Mobile Robot Localization: Measurement Prediction
$$\hat{Z}(k+1) = \{\hat{z}_i(k+1) \mid 1 \le i \le n_i\}$$
The function h_i is essentially the coordinate transformation between the world frame and the sensor frame.
and its innovation covariance is found by applying the error propagation law:

$$\Sigma_{IN,j}(k+1) = \nabla h_j \cdot \Sigma_p(k+1\,|\,k) \cdot \nabla h_j^{T} + \Sigma_{R,j}(k+1)$$

The validity of the correspondence between measurement and prediction can e.g. be evaluated through the Mahalanobis distance:

$$v_{ij}^{T}(k+1)\,\Sigma_{IN,j}^{-1}(k+1)\,v_{ij}(k+1) \le g^2$$

with the innovation v_ij(k+1) = z_j(k+1) − ẑ_i(k+1).
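A sketch of this gating test on the innovation; the threshold g and all numbers are illustrative:

```python
import numpy as np

def matches(z, z_pred, S_innov, g=3.0):
    """Accept the pairing if the squared Mahalanobis distance of the innovation
    v = z - z_pred, with innovation covariance S_innov, stays below g**2."""
    v = z - z_pred
    d2 = v @ np.linalg.inv(S_innov) @ v
    return d2 <= g**2

# Example with an (alpha, r) line observation against a predicted line:
print(matches(np.array([0.52, 2.1]), np.array([0.50, 2.0]),
              np.diag([0.01, 0.04])))   # -> True: the observation passes the gate
```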
SLAM
The Simultaneous Localization and Mapping Problem
a and b define the learning and forgetting rate and ns and nu are the
number of matched and unobserved predictions up to time k, respectively.
An online SLAM algorithm factorizes this formula to estimate the robot state at the current time t.
[Figures: raw data and a raw 3D scan of the same scene; cells marked as to explore / on stack / already examined.]
• Andrew J. Davison, Real-Time Simultaneous Localization and Mapping with a Single Camera, ICCV 2003
• Nicholas Molton, Andrew J. Davison and Ian Reid, Locally Planar Patch Features for Real-Time Structure from Motion, BMVC 2004
(From Davison’03)
Can be real-time!
SLAM versus SFM
Repeatable Localization
Ability to estimate the location of the camera after 10 minutes of motion with the same
accuracy as was possible after 10 seconds.
Features must be stable, long-term landmarks, not transient (as in SFM).
Where:
In the SLAM approach the covariance matrix is not diagonal; that is, the uncertainty of any feature affects the position estimates of all other features and of the camera.
Prediction: a motion model is used to predict where the camera will be in the next time
step (P increases)
Observation: a new feature is measured (P decreases)
The motion model attempts to predict where the camera will be in the next time step.
Most Structure from Motion approaches did not use any motion model!
The unknown intentions, and hence unknown accelerations, are taken into account by modeling the acceleration as a zero-mean Gaussian process:
By setting the covariance matrix of n to small or large values, we define the smoothness
or rapidity of the motion we expect. In practice these values were used:
$$\begin{bmatrix} 4\ \text{m/s} \\ 6\ \text{rad/s} \end{bmatrix}$$
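A sketch of such a smooth-motion (constant-velocity) prediction, treating the unknown acceleration over one time step as zero-mean Gaussian noise on the velocity; the state layout and function below are illustrative, not Davison's exact parametrization:

```python
import numpy as np

def predict_camera(r, v, dt, sigma_v=4.0):
    """Constant-velocity prediction of camera position r and linear velocity v.
    The unknown acceleration over dt is modeled as a zero-mean Gaussian velocity
    impulse with standard deviation sigma_v (m/s); only the linear part is shown."""
    r_new = r + v * dt                        # mean prediction: keep moving at the current velocity
    v_new = v                                 # mean of the velocity stays the same
    Q_v = (sigma_v * dt) ** 2 * np.eye(3)     # process noise added to the velocity block of P
    return r_new, v_new, Q_v

r, v, Q_v = predict_camera(np.zeros(3), np.array([0.1, 0.0, 0.0]), dt=1.0 / 30.0)
```

A small sigma_v expresses an expectation of smooth motion; a large value allows rapid, jerky motion at the cost of larger predicted uncertainty.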
• At each frame, the features occurring at the previous step are searched for in the elliptic region where they are expected to be according to the motion model (Normalized Sum of Squared Differences is used for matching).
• Large ellipses mean that the feature is difficult to predict, thus the feature inside will provide more information for camera motion estimation.
• Once the features are matched, the entire state of the system is updated according to the EKF.
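A sketch of the normalized sum-of-squared-differences score used to compare a stored feature template with candidate patches inside the search ellipse (one common normalization; the arrays are placeholders):

```python
import numpy as np

def nssd(patch_a, patch_b):
    """Normalized sum of squared differences between two equally sized patches:
    each patch is mean-subtracted and scaled to unit variance, so lower scores
    mean a better match regardless of brightness and contrast changes."""
    a = (patch_a - patch_a.mean()) / (patch_a.std() + 1e-9)
    b = (patch_b - patch_b.mean()) / (patch_b.std() + 1e-9)
    return np.sum((a - b) ** 2) / a.size

# The template is compared against every candidate location inside the elliptic
# search region predicted by the motion model; the lowest score wins.
template = np.random.rand(11, 11)
candidate = template + 0.05 * np.random.rand(11, 11)
print(nssd(template, candidate))     # small value -> good match
```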
Initialization of New Features in the Database
Along this line, 100 possible feature positions are set uniformly in the range 0.5–5 m.
Each matching produces a likelihood for each hypothesis, and their probabilities are recomputed.
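A sketch of this per-hypothesis reweighting over the 100 candidate depths; the likelihood values are whatever the image matching returns at each frame, and the numbers here are invented:

```python
import numpy as np

depths = np.linspace(0.5, 5.0, 100)          # candidate depths along the feature's ray (m)
probs = np.full(100, 1.0 / 100)              # uniform prior over the hypotheses

def update_hypotheses(probs, likelihoods):
    """Bayesian reweighting: multiply each depth hypothesis by its matching
    likelihood and renormalize; repeated over frames until one depth dominates."""
    posterior = probs * likelihoods
    return posterior / posterior.sum()

# e.g. one frame in which hypotheses near 2 m explain the matched image position best
likelihoods = np.exp(-0.5 * ((depths - 2.0) / 0.3) ** 2)
probs = update_hypotheses(probs, likelihoods)
print(depths[np.argmax(probs)])              # -> close to 2.0
```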
The number of features to be kept constantly visible in the image varies (in practice between 6 and 10) according to:
• Localization accuracy
• Computing power available
If a new feature needs to be added, the detected feature is added only if it is not expected to disappear from the next image.
N. Molton, A. J. Davison and I. Reid, Locally Planar Patch Features for Real-Time Structure from Motion, BMVC 2004
A. J. Davison, Y. Gonzalez Cid, N. Kita, Real-Time 3D SLAM with wide-angle vision, IFAC Symposium on Intelligent
Autonomous Vehicles, 2004