Derek Long, King's College London
All content following this page was uploaded by Sara Bernardini on 22 September 2014.
ing our SaT application. In particular, we built on the following ROS packages:

• ARDRONE AUTONOMY¹: this is a ROS driver for the Parrot AR.Drone based on the official AR.Drone SDK version 2.0. The driver's executable node, ARDRONE DRIVER, offers two main features:
  – it converts all raw sensor readings, debug values and status reports sent from the drone into standard ROS messages, and
  – it allows control commands to be sent to the drone for taking off, landing, hovering and for specifying the desired linear and angular velocities.
• AR RECOG²: this is a ROS vision package that allows the drone to recognise specific tags as well as to locate and transform them in image-space. It is based on the ARToolKit³, a robust software library for building augmented reality applications.
• TUM ARDRONE⁴: this ROS package implements autonomous navigation and figure flying in previously unknown and GPS-denied environments (Engel, Sturm, and Cremers 2012a; 2012b). The package contains two main nodes:
  – DRONE STATE-ESTIMATION: this node provides the drone with SLAM capabilities by implementing an algorithm based on Parallel Tracking and Mapping (PTAM) (Klein and Murray 2007). In addition, it implements an EKF for state estimation and for compensating the time delays that wireless LAN communication introduces into the system.
  – DRONE AUTOPILOT: this node implements a PID controller for pose stabilisation and navigation.

We also implemented two additional ROS nodes:

• AR TAG FOLLOWING: this node implements a PID controller that, based on the messages received from the AR RECOG package, allows the drone to follow the detected tag.
• AR PLANNER: this node wraps the OPTIC planner and allows us to integrate it with the rest of the system.

AR PLANNER is invoked by the node AR TAG FOLLOWING when the target has been lost. Based on real-time data from the navdata channel and on information concerning the probability distribution of the target's position in the search area, the AR PLANNER: (i) builds a planning problem; (ii) calls the OPTIC planner with a time limit of 10 seconds; (iii) upon receiving the last plan generated within the time limit, translates the actions of the plan into corresponding commands specified in the scripting language provided by the ROS node DRONE AUTOPILOT; and (iv) sends the translated plan to the node DRONE AUTOPILOT, which then imparts the commands directly to the ARDRONE DRIVER node.

Since ARDRONE AUTONOMY is the drone's driver, we use it in both the tracking and the search phases. AR TAG FOLLOWING and AR RECOG were employed for tracking only, whereas DRONE STATE-ESTIMATION and DRONE AUTOPILOT were used for search only. The AR PLANNER node acts as the interface between the tracking and the search phases. Figure 3 shows the ROS nodes and topics (among them /tum_ardrone/com, /cmd_vel and /drone_autopilot) active during search.

Figure 3: ROS nodes and topics for the search phase

Although using ROS and exploiting existing packages for the AR.Drone facilitated the implementation of our application, working with a real quadcopter remains a time-consuming and challenging task. Although the Parrot AR.Drone is considerably more robust than other low-cost quadcopters, it is still an unstable system and its control is not as easy as that of a ground robot.

¹ http://wiki.ros.org/ardrone_autonomy
² http://wiki.ros.org/ar_recog
³ http://www.hitl.washington.edu/artoolkit/
⁴ http://wiki.ros.org/tum_ardrone

7 Flight Tests
We conducted a series of real-world experiments to assess whether the planner is capable of generating effective plans for the drone and whether the drone is able to fly them accurately to completion. Flight tests were performed indoors in a room of dimensions 20.5m (l) × 8.37m (w) × 5.95m (h). The experiments presented here pertain to the search phase of a SaT mission only. They were carried out by using the ROS architecture shown in Figure 3 and the AR.Drone 2.0.

We employed the following procedure to run our tests: we manually generated several planning problems and fed each problem, together with the drone's domain model, into the ROS architecture shown in Figure 3. Manually writing the planning problems was a time-consuming task. As described in Section 4.2, the initial state describes the set of candidate search patterns from amongst which the planner chooses those to execute. In particular, it assigns to each pattern the following information: the time and battery needed for flying the pattern, the drone's degree of confusion after executing the
pattern, a time window in which the pattern is active and a reward step function within this window. To write realistic initial states, we manually created several search patterns for the drone with dimensions suitable for our room and ran extensive flight tests of these patterns to gather reliable information about their duration and battery usage. In particular, we created several versions of each pattern with different dimensions, ran each pattern ten times and recorded the average time and energy consumed. The results obtained for one version of each pattern are reported in Table 4. Take-off (1m) and PTAM map initialisation take approximately 40 seconds and 10% of battery, while landing (1m) takes five seconds and 2% of battery.

               PTS-1   CLS-1   ESS-1   SS-1
Area (m×m)     4×3     1×4     4×4     3×3
Perimeter (m)  19      9       20      29
Time (sec)     50      40      45      95
Battery (%)    10      15      5       10

Table 4: Characteristics of patterns PTS-1, CLS-1, ESS-1 and SS-1.

On average, OPTIC produces around 8 plans per problem instance within its 10-second window. When we had the drone execute the plans produced by the planner, we observed that it is able to fly all the figures very accurately. Figure 4 shows, in the top row, the execution of an SS pattern and an ESS pattern by the drone's simulator and, in the bottom row, the execution of two fragments of a plan involving a combination of patterns by the real drone. Comparing these pictures shows that the drone can execute the planned manoeuvres very precisely.

In future, we would like to run experiments to compare our approach with other methods. While Ablavsky et al. have developed a mathematical model of a comparable approach, there appears to be no implemented system available. In terms of path planning, the two approaches are very similar, but Ablavsky et al. could not manage resource constraints or non-search goals. It would also be worth comparing our search-pattern-based approach with the probabilistic methods of (Richardson and Stone 1971) and (Furukawa et al. 2012), on experiments to find a stationary or drifting target, to explore scaling. Our future work will explore this further.

8 Conclusions and Future Work
In this paper, we describe our work on the use of automated planning for the high-level operation of a low-cost quadcopter engaged in SaT missions. We formulate the search problem as a planning problem in which standard search patterns must be selected and sequenced to maximise the probability of rediscovering the target. We use an off-the-shelf planner, OPTIC, to solve the planning problem and generate plans for the drone.

The use of planning technology in the search phase of SaT missions satisfies the drone's requirement of having to constantly compromise between two orthogonal metrics: one favouring locations where the probability of rediscovering the target is high and the other favouring locations that generate robust sensor measurements for safe flight. If the drone needs to complete its mission within the limit of its battery life (with no recharge action available), yet another metric comes into play, favouring manoeuvres that minimise battery use. Only with a carefully crafted strategy can the drone juggle these constraints and achieve satisfactory performance.

Although we focus here on the Parrot AR.Drone 2.0 and SaT missions, our approach can easily be generalised to other MAVs and robotic vehicles, as well as to different surveillance operations. Ultimately, our goal is to demonstrate that our planning-based approach can be used in any surveillance scenario to underpin the behaviour of an observer that, knowing the dynamics of the target but not its intentions, needs to make control decisions quickly in the face of such uncertainty while under tight deadlines and resource constraints.
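The four-step loop of the AR PLANNER node (build a problem, call OPTIC under a 10-second limit, keep the last plan emitted, translate its actions into autopilot commands) can be sketched as follows. This is a minimal illustration under stated assumptions, not the code of our node: the plan-output delimiter, the action names (fly-pattern, goto-waypoint) and the autopilot command syntax (executePattern, goto) are all hypothetical, and the real node invokes OPTIC as a subprocess rather than parsing a canned string.

```python
import re

# Hedged sketch of steps (i)-(iv) of the AR PLANNER node. Action
# names and the autopilot scripting commands below are illustrative
# assumptions, not the real system's vocabulary.

def last_plan(planner_output: str) -> list[str]:
    """Step (iii): an anytime planner prints successively better plans
    until its time limit; keep only the actions of the last plan."""
    plans, current = [], []
    for line in planner_output.splitlines():
        if line.startswith("; Plan found"):        # assumed delimiter
            if current:
                plans.append(current)
            current = []
        elif re.match(r"\d+(\.\d+)?:", line):      # "0.000: (action ...)"
            current.append(line.split(":", 1)[1].strip())
    if current:
        plans.append(current)
    return plans[-1] if plans else []

def translate(action: str) -> str:
    """Translate one plan action into a DRONE AUTOPILOT script
    command (command names here are hypothetical)."""
    name, *args = action.strip("()").split()
    if name == "fly-pattern":
        return f"executePattern {args[0]}"
    if name == "goto-waypoint":
        # drop the drone and waypoint identifiers, keep coordinates
        return f"goto {' '.join(args[2:])}"
    raise ValueError(f"unknown action: {name}")

output = """\
; Plan found
0.000: (goto-waypoint drone wp0 1 2 1)
; Plan found
0.000: (fly-pattern ess-1)
5.000: (goto-waypoint drone wp1 0 0 1)
"""
script = [translate(a) for a in last_plan(output)]
print(script)   # the command sequence handed on to DRONE AUTOPILOT
```

In this sketch the second, better plan supersedes the first, matching the behaviour of taking the last plan produced within the time window.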
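In the simplest battery-limited case, the compromise described above reduces to selecting the subset of search patterns that maximises reward within the energy budget. The sketch below illustrates that trade-off using the measured costs from Table 4 and the take-off/landing costs from the flight tests; the per-pattern reward values are invented for illustration, and the real planner additionally sequences the chosen patterns and respects their time windows.

```python
from itertools import combinations

# Illustrative sketch of the battery trade-off, using the measured
# costs of Table 4. The rewards are assumed values; in the real
# system they derive from the target's probability distribution.
patterns = {            # name: (time s, battery %, assumed reward)
    "PTS-1": (50, 10, 0.30),
    "CLS-1": (40, 15, 0.20),
    "ESS-1": (45,  5, 0.25),
    "SS-1":  (95, 10, 0.35),
}
TAKEOFF_BATTERY, LANDING_BATTERY = 10, 2   # from the flight tests

def best_selection(budget_pct: float):
    """Enumerate pattern subsets and keep the highest-reward one that
    fits the battery budget (take-off and landing included)."""
    best, best_reward = (), 0.0
    for r in range(1, len(patterns) + 1):
        for subset in combinations(patterns, r):
            battery = (TAKEOFF_BATTERY + LANDING_BATTERY
                       + sum(patterns[p][1] for p in subset))
            reward = sum(patterns[p][2] for p in subset)
            if battery <= budget_pct and reward > best_reward:
                best, best_reward = subset, reward
    return best, best_reward

print(best_selection(40))
```

With a 40% budget, the sketch drops the battery-hungry CLS-1 in favour of the three cheaper patterns, mirroring the kind of choice the planner must make.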
(a) SS (simulator)  (b) ESS (simulator)  (c) SS + ESS (drone)  (d) ESS + CLS (drone)

Figure 4: A comparison between the figures in the top and bottom rows shows that our drone is able to execute the plans generated by the planner very accurately.

References
Abbeel, P.; Coates, A.; Quigley, M.; and Ng, A. Y. 2007. An application of reinforcement learning to aerobatic helicopter flight. In Advances in Neural Information Processing Systems 19. MIT Press.
Ablavsky, V., and Snorrason, M. 2000. Optimal search for a moving target: A geometric approach. In Proceedings of the AIAA GNC Conference.
Achtelik, M.; Bachrach, A.; He, R.; Prentice, S.; and Roy, N. 2009. Stereo Vision and Laser Odometry for Autonomous Helicopters in GPS-denied Indoor Environments. In Proceedings of the SPIE Unmanned Systems Technology XI, volume 7332.
Bachrach, A.; de Winter, A.; He, R.; Hemann, G.; Prentice, S.; and Roy, N. 2010. RANGE - Robust Autonomous Navigation in GPS-denied Environments. In 2010 IEEE International Conference on Robotics and Automation (ICRA), 1096–1097.
Benton, J.; Coles, A.; and Coles, A. 2012. Temporal Planning with Preferences and Time-Dependent Continuous Costs. In Proceedings of the Twenty-Second International Conference on Automated Planning and Scheduling (ICAPS-12).
Bernardini, S.; Fox, M.; Long, D.; and Bookless, J. 2013. Autonomous Search and Tracking via Temporal Planning. In Proceedings of the 23rd International Conference on Automated Planning and Scheduling (ICAPS-13).
Bills, C.; Chen, J.; and Saxena, A. 2011. Autonomous MAV Flight in Indoor Environments using Single Image Perspective Cues. In 2011 IEEE International Conference on Robotics and Automation (ICRA).
Chung, C. F., and Furukawa, T. 2006. Coordinated Search-and-Capture Using Particle Filters. In Proceedings of the 9th International Conference on Control, Automation, Robotics and Vision.
Coles, A. J.; Coles, A. I.; Fox, M.; and Long, D. 2010. Forward-Chaining Partial-Order Planning. In Proceedings of the 20th International Conference on Automated Planning and Scheduling.
Courbon, J.; Mezouar, Y.; Guenard, N.; and Martinet, P. 2009. Visual navigation of a quadrotor aerial vehicle. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 5315–5320.
CSAR. 2000. Canadian National Search and Rescue Manual. Department of National Defence.
Edelkamp, S., and Hoffmann, J. 2004. PDDL2.2: The language for the classical part of the 4th international planning competition. In Proceedings of the 4th International Planning Competition (IPC-04).
Engel, J.; Sturm, J.; and Cremers, D. 2012a. Accurate figure flying with a quadrocopter using onboard visual and inertial sensing. In Proc. of the Workshop on Visual Control of Mobile Robots (ViCoMoR) at the IEEE/RSJ International Conference on Intelligent Robot Systems.
Engel, J.; Sturm, J.; and Cremers, D. 2012b. Camera-based navigation of a low-cost quadrocopter. In Proc. of the International Conference on Intelligent Robot Systems (IROS).
Fan, C.; Song, B.; Cai, X.; and Liu, Y. 2009. Dynamic visual servoing of a small scale autonomous helicopter in uncalibrated environments. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 5301–5306.
Furukawa, T.; Bourgault, F.; Lavis, B.; and Durrant-Whyte, H. F. 2006. Recursive Bayesian Search-and-Tracking using Coordinated UAVs for Lost Targets. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), 2521–2526.
Furukawa, T.; Durrant-Whyte, H. F.; and Lavis, B. 2007. The element-based method - Theory and its application to Bayesian search and tracking. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2007), 2807–2812.
Furukawa, T.; Mak, L. C.; Durrant-Whyte, H. F.; and Madhavan, R. 2012. Autonomous Bayesian Search and Tracking, and its Experimental Validation. Advanced Robotics 26(5-6):461–485.
Graether, E., and Mueller, F. 2012. Joggobot: a flying robot as jogging companion. In CHI '12 Extended Abstracts on Human Factors in Computing Systems, CHI EA '12, 1063–1066. ACM.
He, R.; Prentice, S.; and Roy, N. 2008. Planning in Information Space for a Quadrotor Helicopter in a GPS-denied Environment. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2008), 1814–1820.
Hehn, M., and D'Andrea, R. 2012. Real-time Trajectory Generation for Interception Maneuvers with Quadrocopters. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 4979–4984. IEEE.
IMO. 2013. International Aeronautical and Maritime Search and Rescue Manual (IAMSAR). United States Federal Agencies.
Klein, G., and Murray, D. 2007. Parallel tracking and mapping for small AR workspaces. In Proc. Sixth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR'07).
Moore, R.; Thurrowgood, S.; Bland, D.; Soccol, D.; and Srinivasan, M. 2009. A stereo vision system for UAV guidance. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 3386–3391.
NATSAR. 2011. Australian National Search and Rescue Manual. Australian National Search and Rescue Council.
Quigley, M.; Gerkey, B.; Conley, K.; Faust, J.; Foote, T.; Leibs, J.; Berger, E.; Wheeler, R.; and Ng, A. 2009. ROS: an open-source Robot Operating System. In Proceedings of the International Conference on Robotics and Automation (ICRA).
Richardson, H. R., and Stone, L. D. 1971. Operations analysis during the underwater search for Scorpion. Naval Research Logistics 18:141–157.
Roberts, J.; Stirling, T.; Zufferey, J.; and Floreano, D. 2007. Quadrotor using minimal sensing for autonomous indoor flight. In European Micro Air Vehicle Conference and Flight Competition (EMAV2007).
Zingg, S.; Scaramuzza, D.; Weiss, S.; and Siegwart, R. 2010. MAV navigation through indoor corridors using optical flow. In 2010 IEEE International Conference on Robotics and Automation (ICRA), 3361–3368.