
Motion Planning for Flying Robots

air lab

Sebastian Scherer
http://theairlab.org
at the Field Robotics Center
Outline

• Introduction
• Problem
• Abstraction and Approach
• Results
• Summary
• Exercise

2
Once a dream…

Uber Elevate Vertiports Lilium Operation Concept

3
… becoming a reality

Lilium eHang

Aurora Airbus A3
4
What has changed?

• e-VTOLs have the potential for much lower operating cost, due to the simplicity of electric designs and active control.

• If the operating cost is low enough, urban air taxis may make economic sense.

5
Why is autonomy relevant for
these systems?
– Current vertical-flight safety does not match passenger expectations.

– There are not enough pilots for the scale of operation, and pilots take a long time to train. (Total number of eligible pilots in the US: 25,918 - FAA US Civil Airmen Statistics 2016)

– Congested and contested air traffic can be beyond pilots' capability.

– 200 pounds of extra batteries make a big difference: ~another 20 kWh (assuming 200 Wh/kg).
WSJ 2009, https://www.wsj.com/articles/SB123370317612745375 6


Why is Autonomy Difficult?

7
Why is Autonomy Difficult?

Scales span from >100's of meters down to 4 mm

8
Why is Autonomy Difficult?

Land here?

9
2006–2008: Obstacle Avoidance
2008–2009: Landing
2010: River Mapping, Shipdeck Tracking
2011: Flying Cars
2012: Indoor Exploration, Emergency Landing
2013–2014: Bridge Inspection

Payload: <100 g, ~1 kg, ~10's kg, ~100's kg, >1000 kg

10
Small Example

11
Sebastian Scherer, Carnegie Mellon University, http://theairlab.org
Large: Autonomous Approach and Landing

12
Choudhury et al. AHS Forum 2014, and other work
Each System Operates in Different Environments
and Has a Different Motion Planning Approach

• Why don’t we have an ultimate motion


planning system that works well for all
applications?
• What is common and what is different
between applications?
• What are potential approaches?

13
Outline

• Introduction
• Problem
• Abstraction and Approach
• Results
• Summary
• Exercise

14
Motion Planning Problem

React in real-time to previously unknown


obstacles, avoid no-fly-zones, and land.

Obstacles

Helicopter

Land here.
No-Fly
Zone
(NFZ)
15
Assumptions
Here:
- Little uncertainty
- No exploration actions necessary

Variations on the problem


System uncertainties:
- Position
- Sensing
- Action
Need to gather data about the world:
- Maximize information gain
- No explicit goal state
- Viewpoint planning or active exploration
16
The Trajectory Planning Problem

The goal of the trajectory planner is to find a path that respects the vehicle and obstacle constraints while optimizing a cost function to find the "best" path. The trajectory planning problem is defined by a set of initial conditions, constraints, and a cost functional. It can be formally stated as follows, where all quantities are in SI units (meters, radians):

find:       τ(t) = [x(t), y(t), z(t), ψ(t)], t_f          (time-parameterized trajectory)
minimize:   J = ∫₀^tf c(τ(t)) dt + c(τ(t_f))              (cost function)
subject to: τ(0) = τ₀,  τ(t_f) = τ_f               (1)    (boundary value constraints)
            h(τ(t), τ̇(t), τ̈(t), …) = 0                    (non-holonomic constraints)
            g(τ(t), τ̇(t), τ̈(t), …) ≤ 0                    (system limitations)
            J < ∞                                          (cost function constraint)

Here τ(t) is the time-parameterized command trajectory of the coordinates of the centre of mass of the helicopter and the heading of the vehicle, where t_f is the final time [s]. J is the cost function, a function of the trajectory τ(t) as well as the terminal state τ(t_f). τ₀ and τ_f are the boundary value constraints of the trajectory, h(…) is a set of equality constraints which are of non-holonomic nature, and g(…) is a set of bounds conveying the limitations of the system.

This is a variant of the optimal control problem constrained to a trajectory.
Note that typically J is partially known and discovered on the fly.
Example

Example non-holonomic constraint (no side slip):

  h(τ(t), τ̇(t)) = w q̇ = 0,   q̇ = (ẋ, ẏ, θ̇)ᵀ,   w = (−sin θ, cos θ, 0)

Example system constraint (bounded heading rate):

  g(τ(t), τ̇(t)) = ‖θ̇‖ − θ̇_max ≤ 0
Planning Problem: Cost Functions

The individual cost functions are assigned weights and summed to create an overall cost function:

  J_total = Σᵢ wᵢ Jᵢ

where Jᵢ is the cost function of the i-th category and wᵢ is the weight of the corresponding cost function. The cost functions belong to 4 categories:

1. Time to mission completion. Conveys the need to minimize mission completion time, i.e. to reach the goal point as quickly as possible:

  J₁ = t_f

where t_f is the final time. t_f is inherently available since the planning algorithms produce time-parameterized trajectories τ(t).

2. No-fly zone:

  J₂ = ∞ if τ(t) ∩ X_NFZ ≠ ∅,  0 otherwise

where X_NFZ is the volume inside a no-fly zone, including the boundary. No-fly zones are provided to the planner as polygons, so there are simple analytical functions to test whether a point lies within a polygon. The cost is binary in form because of the criteria attached to it: the trajectory is expected to hug the wall of the no-fly zone.

3. Distance to obstacle. Conveys the need to keep the vehicle safe by penalizing proximity to obstacles, depending on the speed:

  J₃ = ∫₀^tf [(d_max − min(d_max, d_obs(t))) / (2 d_max)] v(t) dt

where d_obs(t) is the distance to the nearest obstacle from τ(t) and d_max is the saturation distance beyond which obstacles no longer contribute cost.

For the April PDR, a subset of the cost functions was used. The weights are as follows:

  Jᵢ   wᵢ
  J₁   6.0
  J₂   1.0
  J₃   1.0
  J₄   0.0 (disabled)

[Figure: reference, tracked, and combined-objective trajectories for a vehicle flying at 100 knots in a wind of 20 knots. (a) Turning into the wind requires a bank angle in order to compensate for the drift; (b) turning with the wind requires much less bank and causes much less deviation.]

Vehicle Dynamic Model

The space of all possible command trajectories is restricted in form by non-linear differential constraints imposed by the dynamics of the vehicle and the saturation limits of the controller:

  τ = F(τ_r, v_r)
Planning Problem: Cost Functions

4. Time to collision:
• Representation of collision risk
• Scales with velocity
• Based on reachability volume
• Approximated by a pyramid

  t_reach = min { t : x_obs ∈ R(t, x₀) }   (reachability)

  J₄ = (t_max − t_reach)²,   J₄ = ∞ if d_obs < d_critical
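To make the weighted-sum objective concrete, here is a minimal sketch in Python. The weights mirror the table above, but the discretized trajectory, the saturation distance, and the function names are illustrative assumptions, not the flight code:

```python
import numpy as np

D_MAX = 20.0  # saturation distance for the proximity cost [m] (illustrative)

def distance_cost(d_obs, v, dt, d_max=D_MAX):
    """Speed-scaled proximity penalty, saturated at d_max (simplified J3)."""
    pen = (d_max - np.minimum(d_max, np.asarray(d_obs))) / (2.0 * d_max)
    return float(np.sum(pen * np.asarray(v) * dt))

def total_cost(t_f, in_nfz, d_obs, v, dt, weights=(6.0, 1.0, 1.0)):
    """Weighted sum J_total = sum_i w_i * J_i over three of the categories."""
    w1, w2, w3 = weights
    J1 = t_f                                   # time to mission completion
    J2 = np.inf if np.any(in_nfz) else 0.0     # binary no-fly-zone cost
    J3 = distance_cost(d_obs, v, dt)           # distance-to-obstacle cost
    return w1 * J1 + w2 * J2 + w3 * J3
```

Note how the binary no-fly-zone term makes any violating trajectory infinitely expensive, while the proximity term trades off clearance against speed.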
Obstacle Representation

Obstacle

Range vector
measurement

Sensor
Model P(o | m)

3D Evidence Grid
Updating an Evidence Grid Cell
• Each cell is assumed to be
independent and contains
the belief of occupancy of
the volume
• The belief can be updated
as follows (assuming the
prior P(m)=0.5):

  b′(m) = [ 1 + (1 − P(m|o))/P(m|o) · (1 − b(m))/b(m) ]⁻¹

• However, usually a log-odds representation is used:

  b̄(m) = ln( b(m) / (1 − b(m)) )
  b̄′(m) = b̄(m) + ln P(m|o) − ln(1 − P(m|o))
22
Foessel02, Moravec
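The log-odds update above takes only a few lines; this sketch handles a single scalar cell, with the inverse sensor model value P(m|o) supplied by the caller:

```python
import math

def logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def belief(b):
    """Convert log-odds back to a probability of occupancy."""
    return 1.0 - 1.0 / (1.0 + math.exp(b))

def update_cell(b, p_occ_given_obs):
    """Log-odds update of one evidence-grid cell (prior P(m)=0.5):
    b' = b + ln P(m|o) - ln(1 - P(m|o))."""
    return b + logodds(p_occ_given_obs)
```

Because updates are just additions, repeated observations of the same cell accumulate cheaply, which is why the log-odds form is preferred over the product form.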
A typical Sense, Plan, and Act Cycle
(Comparison between Ground and Air Robots)
Ground Robot (2D Top View) Air Vehicle (Top View, 3D Slice)

3D

Ground robot, O(d_max²): d_max=5 ≈ 25 cells, d_max=20 ≈ 400 cells
Air vehicle, O(d_max³): d_max=5 ≈ 125 cells, d_max=20 ≈ 8000 cells

Nuske et al. , JFR 2015, Scherer09


Legend: Red = large/obstacle, Blue = small/free
A Completely Incremental Framework for
Planning
Obstacle Map Cost Map

Evidence Grid
Filtering:

List of Updated
Obstacle Classifications

Plan Limited Incremental


Distance Transform:
List of Updated
Cell Costs

Incremental
Planning Algorithm:

Updated Plan

Nuske et al. , JFR 2015, Scherer09


How does one handle large
environments and speeds?
• Full-scale helicopter at 60 m/s.
Mission lengths ~ 400km.
• Scrolling buffer grids:

  i = ⌊x / r⌋ mod n
  j = ⌊y / r⌋ mod n
  k = ⌊z / r_z⌋ mod m

• r, r_z = resolution, n, m = map sizes
• x, y, z = coordinates, i, j, k = map indices

25
Nuske et al. , JFR 2015
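A minimal sketch of the modulo indexing (the resolution and map-size values are illustrative; the actual maps in the JFR paper differ):

```python
def cell_index(x, y, z, r=1.0, r_z=1.0, n=256, m=64):
    """Map world coordinates to scrolling-buffer indices.

    The buffer wraps around with modulo arithmetic, so the grid can
    'scroll' with the vehicle without copying memory.
    """
    i = int(x // r) % n
    j = int(y // r) % n
    k = int(z // r_z) % m
    return i, j, k
```

Two positions n·r apart in x map to the same cell, which is why cells leaving the region of interest must be cleared as the map scrolls.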
Questions

• What are some of the pros/cons of this


type of approach for filtering?
• What are alternative approaches to represent the world, and what are their advantages and disadvantages?

26
Planning Problem: Constraints

In Eq. 1, constraints h and g are derived from vehicle dynamics and controller limits. We briefly expand on each of these:

1. Dynamics. The non-holonomic equality constraints h(…) are that the trajectory should represent coordinated flight, where the vehicle has no side slip (above 20 knots). This implies the heading ψ and roll φ are constrained to be

  ψ = tan⁻¹( ẏ / ẋ )
  φ = tan⁻¹( v ψ̇ / g )

where v is the velocity and g is the gravitational constant.

2. Actuator limits. The inequality constraints g(…) on the trajectory are based on the controller limits, i.e. the space of feasible trajectories that can be tracked by the controller. This implies the roll φ, roll rate φ̇, heading rate ψ̇, forward velocity v, vertical velocity v_h, and acceleration are constrained to be

  |φ| ≤ φ_max,  |φ̇| ≤ φ̇_max,  |v̇| ≤ a_max,  |ψ̇| ≤ ψ̇_max,  v ≤ v_max,  |v_h| ≤ v_h,max

3. Wind. Motion in the airmass is related to the ground track, which constrains the minimum radius of curvature:

  ẋ_g = V_a cos(ψ(t)) + V_w,x
  ẏ_g = V_a sin(ψ(t)) + V_w,y
Coordinated Turn Equations

Turning radius:    R = V_a² / (g tan φ)

Center of turn:    c_n = p_n + R cos(ψ_s + π/2)
                   c_e = p_e + R sin(ψ_s + π/2)

Change in angle:   Δψ = (V_a / R) t

New position:      p′_n = c_n + R cos(ψ_s + Δψ − π/2)
                   p′_e = c_e + R sin(ψ_s + Δψ − π/2)

V_a = forward speed, R = turning radius, φ = roll angle, ψ = heading angle,
p_n, p_e = north, east position, c_n, c_e = center of turn
(signs of the π/2 terms shown for one turn direction; they flip for the other)

28
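The turn equations can be checked numerically. This sketch uses the sign convention written above (the constant G, function name, and arguments are illustrative):

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def coordinated_turn(p_n, p_e, psi_s, v_a, roll, t):
    """Propagate a coordinated turn for time t and return the new
    north/east position and heading (one turn-direction convention)."""
    R = v_a ** 2 / (G * math.tan(roll))            # turning radius
    c_n = p_n + R * math.cos(psi_s + math.pi / 2)  # center of turn
    c_e = p_e + R * math.sin(psi_s + math.pi / 2)
    dpsi = (v_a / R) * t                           # change in heading
    p_n2 = c_n + R * math.cos(psi_s + dpsi - math.pi / 2)
    p_e2 = c_e + R * math.sin(psi_s + dpsi - math.pi / 2)
    return p_n2, p_e2, psi_s + dpsi
```

A quick sanity check: at t = 0 the vehicle stays put, and after a half turn it ends up displaced by the turn diameter 2R perpendicular to its initial heading.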
Outline

• Introduction
• Problem
• Abstraction and Approach
• Results
• Summary
• Exercise

29
The Trajectory Planning Problem

find:       τ(t) = [x(t), y(t), z(t), ψ(t)], t_f
minimize:   J = ∫₀^tf c(τ(t)) dt + c(τ(t_f))
subject to: τ(0) = τ₀,  τ(t_f) = τ_f
            h(τ(t), τ̇(t), τ̈(t), …) = 0
            g(τ(t), τ̇(t), τ̈(t), …) ≤ 0
            J < ∞

What are potential approaches to solve this problem?
How do we discretize the problem?

s1 s2

sstart sgoal

How do you
define an edge?

s3 s4

(Not always explicitly constructed)


How do we find the best path through the
graph?
s1 s2

sstart sgoal

s3 s4
Edge Costs
• Cost between two nodes. (Can also have vertex costs but typically
only edge.)
• The cost c(s,s’) depends on the contribution of the edge's trajectory segment s->s’ to the objective J
• Calculating c(s,s’) can be expensive (collision checking)
3
s1 s2

2
c(sstart ,s1)=4

5 5
sstart sgoal

3
5

s3 s4
3

33
Calculating the least-cost path

g(s): cost of the least-cost path from the start. The optimal g satisfies:

  g(s) = min_{s′ ∈ pred(s)} [ g(s′) + c(s′, s) ]

g=4 g=7

3
s1 s2

2
4
g=0 g=9
5 5
sstart sgoal

3
5 g=5 g=8

s3 s4
3

34
Finding the Least-Cost Path: Backtracking

• Start at the goal and greedily backtrack to the start:

  s″ = argmin_{s′ ∈ pred(s)} [ g(s′) + c(s′, s) ]
g=4 g=7

3
s1 s2

2
4
g=0 g=9
5 5
sstart sgoal

3
5 g=5 g=8

s3 s4
3

35
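The backtracking rule transcribes directly into code; the dictionary-based graph representation used here is an assumption of this sketch, not part of any particular planner:

```python
def backtrack(goal, start, pred, g, c):
    """Recover the least-cost path by greedy backtracking: repeatedly
    pick the predecessor s' minimizing g(s') + c(s', s)."""
    path = [goal]
    s = goal
    while s != start:
        s = min(pred[s], key=lambda sp: g[sp] + c[(sp, s)])
        path.append(s)
    return list(reversed(path))
```

On the example graph from the slides, the g-values computed forward from the start are enough to recover the optimal path backward from the goal without re-searching.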
What are the main questions?
Fixed time/memory budget (real-time planning):
1. What vertices to create?
2. Where to search?

s1
? s2

?
sstart ? sgoal

s3 s4
?
36
3 Representative approaches:

• Regular graph search: A*-grid search


• Sampling-based: RRT*
• Trajectory optimization: CHOMP

37
Compute optimal g and heuristic h

A*
1. What vertices to create? Based on heuristic and cost g+h.
2. Where to search? Look at the priority queue.

g(s): cost of the best path from the start so far.
h(s): underestimate of the cost to go.
s1 s2 h(s)

sstart sgoal

s3 s4
38
Admissible Heuristic Function h

• Popular choice: Euclidean distance h(s)
• Admissible: h(s) ≤ c(s, s_goal)
• Consistent (satisfies the triangle inequality):
  – h(s_goal) = 0, and for all other states:
  – h(s) ≤ c(s, succ(s)) + h(succ(s))

39
A* Search

Update g based on the smallest g+h cost:

A*:

g(s_start) = 0; g(s ≠ s_start) = ∞; OPEN = {s_start}

while s_goal has not been expanded:
    remove s with the smallest g(s)+h(s) from OPEN
    expand s (update g of its successors, add them to OPEN)
40
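The loop above can be made concrete with a compact A* on a 4-connected grid, using the Manhattan distance as the admissible heuristic (an illustrative sketch with unit edge costs, not the lecture toolbox code):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid (0 = free, 1 = obstacle).
    Returns the path as a list of (row, col) cells, or None."""
    h = lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1])
    g = {start: 0}
    parent = {start: None}
    open_list = [(h(start), start)]
    while open_list:
        f, s = heapq.heappop(open_list)
        if s == goal:                       # goal expanded: reconstruct path
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        if f > g[s] + h(s):                 # stale queue entry, skip
            continue
        x, y = s
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and not grid[nx][ny]:
                if g[s] + 1 < g.get((nx, ny), float('inf')):
                    g[(nx, ny)] = g[s] + 1
                    parent[(nx, ny)] = s
                    heapq.heappush(open_list, (g[(nx, ny)] + h((nx, ny)), (nx, ny)))
    return None
```

Because the Manhattan heuristic is admissible and consistent on this grid, the first time the goal is popped its g-value is optimal.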
A* Properties

• Resolution-complete and optimal


• Minimum number of state expansions

41
Questions
• What graph should you create?
• What resolution to pick?
• How do you expand it?
• Performance depends on
– Graph: Branching factor/Abstraction of the
system
– Quality of the heuristic for the system and
environment
– Match of the graph abstraction to the
environment

42
Going Deeper
• How do you incorporate dynamics into your graph?
– State lattice: [Pivtoraiko09]
– Maneuver Automaton: [Frazzoli02]
• What are more interesting heuristics for dynamical
systems?
– Precompute heuristics [Knepper06]
– Dubins heuristic [Dubins57]
• How do we repair rather than redo the search for the
changing graph?
– D* Lite [Koenig02]
– Anytime D* [Likhachev05]
– Anytime Search [Hansen07]

43
Sampling Based Planning

• Where should you put your graph?


• What resolution to pick?

[Figure: the same environment discretized with sample graphs of different sizes and placements between s_start and s_goal; where should the vertices s₁ … s₄ go?]

44
Sample the Environment (in an Increasingly Dense
Fashion) and Connect the Samples to Construct a
Tree to Find a Path to the Goal (RRT*)

sstart sgoal
Sample a Potential Location to Expand To

sstart sgoal

46
Add Edge to Connect to Graph

sstart sgoal

47
Add Edge to Connect to Graph

s1

sstart sgoal

48
Add Edge to Connect to Graph and
Repeat

s1

sstart sgoal

49
Some Iterations Later…

snew

s4

s3

sstart sgoal

s1 s2
50
Look at the neighbors within r

snew

r
s4

s3

sstart sgoal

s1 s2
51
Connect

snew

s4

s3

sstart sgoal

s1 s2
52
Check outgoing edges for lower costs
(Rewiring)

snew
r
s4

s3

sstart sgoal

s1 s2
53
Change the Parent and Fix Tree

snew

s4

s3

sstart sgoal

s1 s2
54
Change the Parent and Fix Tree

snew

s4

s3

sstart sgoal

s1 s2
55
RRT* Algorithm (Concept)
V = {s_start}; E = {}
for i = 1…n
    s_new  = getNewValidRandomSample()
    S_near = getVerticesWithin( r(i) )
    (s_min, J_min) = getLowestCostNeighbor( S_near )
    E = E ∪ {(s_min, s_new)};  V = V ∪ {s_new}
    E = rewireTree( E, S_near )

56
Radius

card(V) = number of vertices, d = number of dimensions,
μ(X_free)/ζ_d = ratio of the free-space volume to the volume of the unit ball.

  r = γ_RRT* · ( log(card(V)) / card(V) )^(1/d)

  γ_RRT* = 2 (1 + 1/d)^(1/d) · ( μ(X_free)/ζ_d )^(1/d)
57
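Putting the sampling loop and the shrinking radius together, here is a self-contained RRT* sketch in an empty 2-D workspace. No obstacles are modeled, edges are straight lines, `gamma` is a tuning constant standing in for the γ_RRT* expression, and cost propagation after rewiring is simplified, so treat it as a concept demo only:

```python
import math
import random

def rrt_star(start, goal, n=300, bounds=(0.0, 10.0), gamma=8.0, seed=1):
    """RRT* sketch: sample, connect through the lowest-cost neighbor
    within a shrinking radius, then rewire the neighborhood."""
    rnd = random.Random(seed)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])

    def is_ancestor(a, b, parent):
        # True if a lies on b's path back to the root (avoids cycles)
        while b is not None:
            if b == a:
                return True
            b = parent[b]
        return False

    V = [start]
    parent = {start: None}
    cost = {start: 0.0}
    for _ in range(n):
        s_new = (rnd.uniform(*bounds), rnd.uniform(*bounds))
        d = 2                                     # planning dimension
        N = len(V) + 1
        r = gamma * (math.log(N) / N) ** (1.0 / d)
        near = [v for v in V if dist(v, s_new) <= r]
        if not near:                              # fall back to nearest vertex
            near = [min(V, key=lambda v: dist(v, s_new))]
        # connect through the lowest-cost neighbor
        s_min = min(near, key=lambda v: cost[v] + dist(v, s_new))
        parent[s_new] = s_min
        cost[s_new] = cost[s_min] + dist(s_min, s_new)
        V.append(s_new)
        # rewire: re-parent neighbors through s_new when that lowers their cost
        for v in near:
            c_through = cost[s_new] + dist(s_new, v)
            if c_through < cost[v] and not is_ancestor(v, s_new, parent):
                parent[v] = s_new
                cost[v] = c_through
    best = min(V, key=lambda v: cost[v] + dist(v, goal))
    return V, parent, cost, best
```

A full implementation would also collision-check every edge and propagate cost improvements to the descendants of rewired vertices.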
Why rewire the tree?

• Remove unnecessary detours


• Optimize for the minimum cost rather than committing to connections too early.

58
Questions

• What is expensive in this algorithm?


• What about r if our robot is motion
constrained?
• How can one incorporate heuristics?
• In what environments will this algorithm
perform well?

59
Going Deeper

• Why does this particular choice of r lead


to optimum plans? [Karaman11]
• What if we consider a batch of samples to
expand and keep a heuristic?
[Gammell15]
• What if one wants to have alternative routes instead of just the best route?
[Choudhury13]

60
Trajectory Optimization (CHOMP)

• Exploit the first order


information available about
the trajectory
• Perturb an initial guess to
minimize the cost function
• Example on left:
– Straight line initial guess
– Several optimization steps

61
Zucker13
Going Back To Our Graph Example:

sstart sgoal
Add Vertices based on the Gradient

sstart sgoal

63
Add Vertices based on the Gradient

sstart sgoal

64
Cost Function

• Want the trajectory to


be smooth to be
executable by the
robot and reach the
goal

• Avoid obstacles

65
Key Idea

• Minimize the update to the trajectory with a smooth perturbation of the trajectory
• For example, minimize the amount of velocity or acceleration added
⇒ Perform steepest descent in trajectory space:

  ξ_{k+1} = ξ_k − (1/λ) M⁻¹ g_k

66
Measuring the Difference in Trajectory
Space
Depends on your
representation. M

Example M for
waypoints:
Finite difference
between waypoints

67
[Figure: the raw gradient g_k versus the metric-scaled update M⁻¹ g_k applied to a trajectory]

  ξ_{k+1} = ξ_k − (1/λ) M⁻¹ g_k

68
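The covariant step can be demonstrated on a 1-D waypoint trajectory. Here M is the finite-difference smoothness metric KᵀK with fixed endpoints; this is a sketch of the update rule only, not the full CHOMP objective:

```python
import numpy as np

def smoothness_metric(n):
    """M = K^T K, where K takes finite differences between successive
    waypoints (trajectory endpoints treated as fixed)."""
    K = np.zeros((n + 1, n))
    for i in range(n):
        K[i, i] = 1.0
        if i > 0:
            K[i, i - 1] = -1.0
    K[n, n - 1] = -1.0       # difference to the fixed end point
    return K.T @ K

def covariant_step(xi, grad, lam=1.0):
    """One CHOMP-style update: xi <- xi - (1/lam) * M^{-1} * grad."""
    M = smoothness_metric(len(xi))
    return xi - (1.0 / lam) * np.linalg.solve(M, grad)
```

A gradient concentrated at a single waypoint produces a smooth, tent-shaped displacement of the whole trajectory, whereas a plain Euclidean step would jerk only that one waypoint.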
Questions?

• Is it complete and optimal?


• How could one include dynamic
constraints in the trajectories (other than
projection)?
• Why is it an effective method for air
vehicles?
• How could you include other planning
approaches in the optimization?

69
Going Deeper

• Enforcing constraints on the trajectory?


Project back along same space
• More challenging or larger environments?
[He2013]
• Other trajectory optimization approaches?
[Kolter09]
• Combining RRT* with optimization?
[Choudhury16]

70
Planner Ensemble Idea

Planning Problem Representation → Vehicle Dynamics Lookup

Planner Ensemble:
• Optimizer
• RRT*-AR
• End Game Library (plans high-fidelity final descent trajectories)

→ partially feasible paths → Dynamics Filter (smooths out trajectories for dynamic feasibility) → feasible trajectories → Trajectory Executive (picks trajectories and ensures safety, backed by the Emergency Maneuver Library) → trajectory to FCS

Choudhury et al. AHS Forum 2014
71
Conceptual Illustration of Trajectory Planner

Obstacle enters sensor range


Conceptual Illustration of Trajectory Planner

Obstacle mapping updates occupancy grid


Conceptual Illustration of Trajectory Planner

Goal

Planners receive start pose (lookahead) and goal pose


Conceptual Illustration of Trajectory Planner

Each planner computes a trajectory and gives it to the


executive
Conceptual Illustration of Trajectory Planner

Executive selects planner 2 trajectory as optimal path


Conceptual Illustration of Trajectory Planner

Executive retains other paths as alternate routes


Ensemble/Executive result

78
78
Dynamics Filter –
Producing Feasible Paths

• Accepts a partially feasible trajectory and filters it to be fully dynamically feasible.
• The dynamics have non-linear constraints; no analytical solution exists.
• Thus the planners plan Dubins curves (analytic), which are filtered to be within a funnel around the original path.

Choudhury and Scherer, ICRA 2015
79
Outline

• Introduction
• Problem
• Abstraction and Approach
• Results
• Summary
• Exercise

80
Avoiding a Mountain in the Flight Path

81
Planning Result for Optimizer.

82
The optimizer can smoothly avoid the mountain
and is selected because it is cost-optimal

83
Obstacle Avoidance with a No-Fly-Zone:
Comparison between RRT* and Optimizer

84
RRT*-AR Path is Picked Because the
Optimizer Gets Stuck in a Local Minimum

85
Trajectory Planner Design

Planning Problem Representation → Vehicle Dynamics Lookup

Planner Ensemble:
• Optimizer
• RRT*-AR
• End Game Library (plans high-fidelity final descent trajectories)

→ partially feasible paths → Dynamics Filter (smooths out trajectories for dynamic feasibility) → feasible trajectories → Trajectory Executive (picks trajectories and ensures safety, backed by the Emergency Maneuver Library) → trajectory to FCS

86
Emergency Maneuver Library –
Defining Safety

Known Volume Safe Trajectory


Known Obstacles
87
Emergency Maneuver Problem

The trajectory planning problem can be formally stated as before, where all quantities are in SI units (meters, radians):

find:       τ(t) = [x(t), y(t), z(t), ψ(t)], t_f
minimize:   J = ∫₀^tf c(τ(t)) dt + c(τ(t_f))
subject to: τ(0) = τ₀,  τ(t_f) = τ_f
            h(τ(t), τ̇(t), τ̈(t), …) = 0
            g(τ(t), τ̇(t), τ̈(t), …) ≤ 0
            J < ∞

Safety constraint: the trajectory must stay within the known volume and avoid the known obstacles.

1. τ(t) is the time-parameterized command trajectory of the centre of mass of the helicopter and the heading of the vehicle, where t_f is the final time.
2. J is the cost function, a function of the trajectory τ(t) at time t and of the terminal point τ(t_f).
3. τ(0) and τ(t_f) are the boundary value constraints of the trajectory.
4. h(…) is a set of equality constraints which are of non-holonomic nature.
5. g(…) is a set of bounds specifying the limitations of the system.

In addition to the problem itself, the planning approach we choose must meet requirements such as limited computation and real-time reaction.

88
Example Use Case

Planned Trajectory Safe Emergency Maneuver


Known Space Unsafe Emergency Maneuver
Current Laser Coverage
89
Example Use Case

Planned Trajectory Safe Emergency Maneuver


Known Space Unsafe Emergency Maneuver
Current Laser Coverage
90
Example Use Case

Planned Trajectory Safe Emergency Maneuver


Known Space Unsafe Emergency Maneuver
Current Laser Coverage
91
Example Use Case

Planned Trajectory Safe Emergency Maneuver


Known Space Unsafe Emergency Maneuver
Current Laser Coverage
92
Approach: Emergency Library is
Computed Offline to Enable Verification

Maximize the probability that at least one trajectory in the set is unoccupied, over sets of dynamically feasible, control-invariant trajectories, subject to a limit on the number of trajectories allowed in the set.

93
The Emergency Maneuver Set
can be Found Greedily
The general optimization problem is NP-hard. We greedily generate the set, while proving a sub-optimality bound on the greedy optimization:

maximize the probability that at least one trajectory in the set is unoccupied, over sets of dynamically feasible, control-invariant trajectories, subject to a limit on the number of trajectories allowed in the set.
unoccupied

Arora et al., AHS 2014 94
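The greedy construction can be illustrated under a strong simplifying assumption, namely that trajectories are blocked independently with known probabilities. The real obstacle model is correlated, which is exactly where the submodularity-based sub-optimality bound matters; this sketch only shows the shape of the greedy loop:

```python
def greedy_emergency_set(p_free, n_select):
    """Greedily pick trajectories maximizing P(at least one is unoccupied),
    assuming (for this sketch only) independent free probabilities p_free[i].

    P(at least one free) = 1 - prod(1 - p_free[i]) over the chosen set.
    """
    chosen = []
    p_all_blocked = 1.0
    candidates = list(range(len(p_free)))
    for _ in range(min(n_select, len(candidates))):
        # marginal gain of adding i is p_all_blocked * p_free[i],
        # so under independence the best candidate has the largest p_free
        best = max(candidates, key=lambda i: p_free[i])
        candidates.remove(best)
        chosen.append(best)
        p_all_blocked *= 1.0 - p_free[best]
    return chosen, 1.0 - p_all_blocked
```

With correlated occupancy the marginal gains interact, and each greedy step must re-evaluate the gain of every remaining trajectory against the set chosen so far.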


Example Emergency Maneuver Library
Library at 50 m/s for Swerving (maximum unknown obstacle width 30m)

Top View
Frontal View

X Library at 25 m/s for Stopping

Frontal View

Top View

X Parameters: wind speed = 0 knots Y


95
Example Simulation Result

96
Flight Test Result

97
Outline

• Introduction
• Problem
• Abstraction and Approach
• Results
• Summary & Outlook
• Exercise

98
Summary
• Reviewed the planning problem
for autonomous flying robots
• Considered several algorithms
and the idea of planning
ensembles
• Which approaches are successful
is highly dependent on your
environment and dynamical
system

99
Coming back to the Beginning

100
Key Problem: How to react in Off-Nominal
Cases?
Emergencies:
– Electrical fire
– GPS-outage
– Bird strike
– Propulsion failure
– Propeller damage
– Door opens in flight
– Unexpected crane in approach
– Person on landing pad

How do you build an autonomy system to correctly


react in these situations?

101
Current Approach (Useful for Self-Driving)

Under what assumptions does my approach work?
• Expertise: design checks to make sure all assumptions are met.
• Monitor invariants. If a check fails: fall back to manual control (or, for driving, stop).

[Diagram: World Model / Cost Function, Perception / Planning, Sensors / Control, Real World / Robot]

A famous example of a problem with this approach: Air France Flight 447.
Report: BEA (France), July 2012, Final report on the accident on 1st June 2009 to the Airbus A330-203 registered F-GZCP operated by Air France flight AF 447 Rio de Janeiro - Paris.
https://www.bea.aero/docspa/2009/f-cp090601.en/pdf/f-cp090601.en.pdf

102
How do we guarantee that we will react at least as well as a pilot?
Paradigm 1: Engineering Paradigm

For each case, expertise produces a new module:

Case 1: Engine outage in forward flight
Case 2: Engine failure in vertical flight
Case 3: Electrical fire
Case 4: Passenger emergency
Case 5: Engine fire
Case 6: Communications failure

What could go wrong? What if cases 1 and 4 happen at the same time?

103
Example Planning for Engine-out Landing

104
How do we guarantee that we will react at least as well as a pilot?
Paradigm 2: Modelling Approach

"You just have overly simplified models; if you had the full model, you would only have to observe its parameters."

• What is the model that captures all possible behavior of the system?
• How do you observe all the parameters of the models?

Expertise → Model → Magic Observer

105
How do we guarantee that we will react at least as well as a pilot?
Paradigm 3: Learning Approach (online)

"As long as your learning system is expressive enough, you can learn what works."

• What representation and number of parameters is expressive enough to capture all possible behavior of the system?
• How do you know your learner captures all behaviors?
• Will the learning rate be fast enough?

Expertise → Learner → Policy / Model / Mapping, with feedback from execution

106
How do we guarantee that we will react at least as well as a pilot?
Paradigm 4: Learning Approach (offline)

"As long as your learning system is expressive enough, you can learn what works."

• How do you know you have sufficient data to be able to interpolate all situations?

Experimental training data or a complete simulator → examples of all possible situations → Learner → Policy / Model / Mapping

107
Challenge for Formal Guarantees in
Autonomy: How tight can one go?
• Abstracted models and requirements/conditions can lead to over-approximations.
• We need tight bounds for sufficient performance.
• Performance is directly related to the size of the reachable set.
108
Challenges: Coverage and Communicating
Requirements, and Gaining Trust

How do you define safety conditions efficiently and completely, capturing the intent of the designer?

Desired behavior vs. actual robot behavior:
• Expertise supplies the mission objectives, constraints, world model, and dynamic model.
• A simulation system (dynamics/sensors) produces the metrics, conditions, and requirements.

Open questions:
• Whole system or components? Simplified model or as realistic as possible? Derived by the designer or given by the simulation?
• Is evaluation in simulation equivalent to specifying formal conditions? (Envelope expansion)
• Coverage: do the conditions cover the designer's intent?

109
Challenges: Adaptive Systems?

• Bounding the impact of adaptation?
• Adaptation is necessary to go beyond the designer's intent, or to effectively capture the intent in changing situations.

[Diagram: classic perception-planning-control stack vs. a learned policy mapping sensors directly to robot actions]

110
Toolbox Setup
1. Install MATLAB (no toolboxes necessary)
2. Download MATLAB toolbox. In a command line execute:
git clone
https://bitbucket.org/castacks/matlab_planning_toolbox.git
(Supported platforms: Mac, Linux, and Windows; on Windows you also need Visual C++)

3. Go to
https://bitbucket.org/castacks/matlab_planning_toolbox
and follow the directions.

111
Outline

• Introduction
• Problem
• Abstraction and Approach
• Results
• Summary
• Exercise

112
Goals
1. Setup the toolbox (10 min)
2. Get familiar with the different planning
algorithms from the lecture (20 min)
3. Explore the concept of environment
dependence for planning algorithms
(20 min)
4. Planning Challenge (40 min)

113
1. Toolbox Setup
1. Install MATLAB
2. Download MATLAB toolbox. In a command line execute:
git clone git@bitbucket.org:castacks/matlab_planning_toolbox.git

(Supported platforms: Mac, Linux, and Windows; on Windows you also need Visual C++)

3. Go to
https://bitbucket.org/castacks/matlab_planning_toolbox
and follow the directions.

114
1. Matlab Toolbox Overview
• Run init_setup.m to setup paths and
compile mex files
• Folders
– global_search: Sampling and grid planners
– local_search: Trajectory optimization
– cost_functions: Tools to setup and evaluate
cost functions
– environment_generation: Tools to generate
environments
– saved_environments: Different maps

115
2. Get familiar with the different planning
algorithms
Different examples are located here:
planning_experiments/detailed_examples/
1. example_astar.m
2. example_chomp.m
3. example_rrtstar.m

116
2. Questions

• How do the different algorithms behave?


• What happens if you vary the parameters?
• What happens if you inflate the heuristic?
(Turn A* into Weighted-A*)
• How do you turn RRT* into RRT?

117
3. Run Algorithms on this Matrix

CHOMP A* RRT*
Env 1

Env 2

Env 3

Where do the algorithms work?

Fill out the matrix.


118
3. Assumptions so Far:

• No dynamics
• Start
• Goal
• No changing environment
• No time budget/real-time

119
4. Planning Challenge
Use this file:
planning_experiments/planning_challenge/run_planner_challenge.m

Prize: Fame and first pick at dessert during lunch!

You can use at most 2 planning algorithms

• Develop your own planning approach on the training


set.

• The test set will be released in the last ten minutes.


Please report your score in this google doc:
https://goo.gl/7O0tBc

120
References 1
[Nuske15] Stephen T. Nuske, Sanjiban Choudhury, Sezal Jain, Andrew D. Chambers, Luke Yoder, Sebastian Scherer,
Lyle J. Chamberlain, Hugh Cover, and Sanjiv Singh, "Autonomous Exploration and Motion Planning for an Unmanned
Aerial Vehicle Navigating Rivers," Journal of Field Robotics, June, 2015.
[Arora15a] Sankalp Arora, Sanjiban Choudhury, Daniel Althoff, and Sebastian Scherer, "Emergency Maneuver Library
- Ensuring Safe Navigation in Partially Known Environments," 2015 IEEE International Conference on Robotics and
Automation, May, 2015.
[Arora15b] Sankalp Arora and Sebastian Scherer, "PASP: Policy Based Approach for Sensor Planning," 2015 IEEE
International Conference on Robotics and Automation, May, 2015.
[Choudhury15a] Sanjiban Choudhury and Sebastian Scherer, "The Dynamics Projection Filter (DPF) - Real-Time
Nonlinear Trajectory Optimization Using Projection Operators," IEEE International Conference on Robotics and
Automation, May, 2015.
[Choudhury15b] Sanjiban Choudhury, Sankalp Arora, and Sebastian Scherer, "The Planner Ensemble: Motion
Planning by Executing Diverse Algorithms," IEEE Conference on Robotics and Automation, May, 2015.
[Maturana15] Daniel Maturana and Sebastian Scherer, "3D Convolutional Neural Networks for Landing Zone
Detection from LiDAR," International Conference on Robotics and Automation, March, 2015.
[Althoff15] Daniel Althoff and Sebastian Scherer, "Connected Invariant Sets for High-Speed Motion Planning in
Partially-Known Environments," 2015 IEEE International Conference on Robotics and Automation, March, 2015.
[Arora14] Sankalp Arora, Sanjiban Choudhury, Sebastian Scherer, and Daniel Althoff, "A Principled Approach to
Enable Safe and High Performance Maneuvers for Autonomous Rotorcraft," AHS 70th Annual Forum, Montréal,
Québec, Canada, May 20-22, 2014.
[Choudhury14] Sanjiban Choudhury, Sankalp Arora, and Sebastian Scherer, "The Planner Ensemble and Trajectory
Executive: A High Performance Motion Planning System with Guaranteed Safety," AHS 70th Annual Forum, Montréal,
Québec, Canada, May, 2014.
[Foessel02] A. Foessel, "Scene Modeling from Motion-Free Radar Sensing," The Robotics Institute, CMU, Pittsburgh,
PA, 2002.
[Scherer09] S. Scherer, D. Ferguson, and S. Singh, "Efficient C-space and cost function updates in 3D for unmanned
aerial vehicles," presented at the Robotics and Automation, 2009. ICRA '09. IEEE International Conference on, 2009,
pp. 2049-2054.
[Gammell15] Gammell, J.D.; Srinivasa, S.S.; Barfoot, T.D., "Batch Informed Trees (BIT*): Sampling-based optimal
planning via the heuristically guided search of implicit random geometric graphs," Robotics and Automation (ICRA),
2015 IEEE International Conference on, pp. 3067-3074, 26-30 May 2015.
References 2
[Koenig02] S. Koenig and M. Likhachev, “ D* Lite.” In Proceedings of the AAAI Conference on Artificial
Intelligence (AAAI), pages 476-483, 2002.
[Knepper06] R. Knepper and A. Kelly, “ High Performance State Lattice Planning Using Heuristic Look-Up
Tables.” Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems,
October, 2006, pp. 3375-3380.
[LaValle06] S. LaValle, “ Planning Algorithms” . Cambridge: Cambridge University Press. ISBN 0521862051.
[Hansen07] E.A. Hansen and R. Zhou, “Anytime Heuristic Search”. Journal of Artificial Intelligence Research 28
(2007) pages 267-297.
[Likhachev04] Maxim Likhachev, Geoff Gordon and Sebastian Thrun, " ARA*: Anytime A* with Provable Bounds
on Sub-Optimality, " Advances in Neural Information Processing Systems 16 (NIPS), MIT Press, Cambridge,
MA, 2004.
[Dubins57] L.E. Dubins. “On Curves of Minimum Length with a Constraint on Average Curvature and with
Prescribed Initial and Terminal Positions and Tangents,” American Journal of Mathematics, 79 (1957), pages
497-516.
[Pivtoraiko09] M. Pivtoraiko, R. A. Knepper, and A. Kelly, “ Differentially constrained mobile robot motion
planning in state lattices,” Journal of Field Robotics, vol. 26, no. 3, pp. 308–333, Mar. 2009.
[Frazzoli02] E. Frazzoli, M. Dahleh, and E. Feron, “ Real-time motion planning for agile autonomous vehicles,”
AIAA Journal of Guidance Control and Dynamics, vol. 25, no. 1, pp. 116–129, 2002.
[He13] Keliang He '13, Elizabeth Martin '13, and Matt Zucker. “ Multigrid CHOMP with local smoothing” Proc.
IEEE-RAS Int’l Conf. on Humanoid Robotics, 2013.
[Kolter09] Z. Kolter, and A. Ng “ Task-Space Trajectories via Cubic Spline Optimization,” ICRA 2009
[Fang15a] Zheng Fang, Shichao Yang , Sezal Jain, Geetesh Dubey, Silvio Mano Maeta, Stephan Roth,
Sebastian Scherer, Yu Zhang, and Stephen T. Nuske, " Robust Autonomous Flight in Constrained and Visually
Degraded Environments," Field and Service Robotics, June, 2015.
[Holtz15] Kristen Holtz, Daniel Maturana, and Sebastian Scherer, " Learning a Context-Dependent Switching
Strategy for Robust Visual Odometry," Field and Service Robotics, June, 2015.
[Fang15b] Zheng Fang and Sebastian Scherer, " Real-time Onboard 6DoF Localization of an Indoor MAV in
Degraded Visual Environments Using a RGB-D Camera," 2015 IEEE International Conference on Robotics and
Automation, May, 2015.
[Choudhury16] S. Choudhury, J. D. Gammell, T. D. Barfoot, S. Srinivasa, and S. Scherer, “ Regionally
accelerated batch informed trees (RABIT*): A framework to integrate local information into optimal path
planning,” presented at the IEEE International Conference on Robotics and Automation ICRA, Stockholm,
Sweden, 2016, pp. 4207–4214.
