
DESIGN OF PATH PLANNING ALGORITHM FOR A MOBILE ROBOT

1. Examiner: Prof. Dr. Andreas Schwung


2. Examiner: Prof. Dr. Ulf Witkowski

Mochammad Rizky Diprasetya


Matrikelnummer: 10053435
ABSTRACT

DECLARATION OF AUTHORSHIP

I hereby certify that this thesis has been composed by me and is based on my own work,
unless stated otherwise. No other person's work has been used without due acknowledgment
in this thesis. All references and verbatim extracts have been quoted, and all sources
of information, including graphs and data sets, have been specifically acknowledged.

Date: Signature:

TASK OF THE PROJECT

 Familiarization with the mobile robot platform and with sensor
technology and drive technology.
 Familiarization with path planning algorithms for mobile robot
applications.
 Design and development of path planning algorithms for the mobile
robot.
 Implementation within the available simulation environment.

CONTENT

ABBREVIATIONS .........................................................................................VII

1 INTRODUCTION ........................................................................................... 1
1.1 Background ......................................................................................................... 1
1.2 Scope..................................................................................................................... 2
1.3 Purpose................................................................................................................. 2
1.4 Organization of the Thesis ................................................................................. 3

2 FUNDAMENTALS ......................................................................................... 4
2.1 Omnidirectional Robot ....................................................................................... 4
2.2 Path-Planning ...................................................................................................... 9
2.2.1 Path-Planning Approaches ........................................................................... 11
2.3 Robot Operating System .................................................................................. 18
2.3.1 Client-Server Connection............................................................................. 18
2.3.2 Multi-lingual Programming ......................................................................... 20
2.3.3 Open Source ................................................................................................. 21

3 PATH-PLANNING SYSTEM .......................................................................22


3.1 System Overview ............................................................................................... 22
3.1.1 RRT-Star Algorithm .................................................................................... 22
3.1.2 Optimizer Algorithm.................................................................................... 28

4 MOBILE ROBOT SIMULATION ...............................................................34


4.1 System Overview ............................................................................................... 34
4.1.1 Simulation Model ........................................................................................ 34
4.1.2 Mapper Node ............................................................................................... 38
4.1.3 MATLAB Node ........................................................................................... 44
4.1.4 Path to Vector Speed Node .......................................................................... 45
4.1.5 Kinematic Node ........................................................................................... 50

5 RESULT ..........................................................................................................51
5.1 The Mobile Robot Simulation System Result ................................................. 51
5.2 The Path-Planning with a Static Obstacle ...................................................... 54
5.3 The Path-Planning with a Dynamic Obstacle ................................................ 57

6 CONCLUSION .............................................................................................. 60
6.1 Conclusion .......................................................................................................... 60

LIST OF FIGURES .......................................................................................... 62

LIST OF TABLES ............................................................................................ 64

REFERENCES.................................................................................................. 65

ABBREVIATIONS

KUKA Keller und Knappich Augsburg

PRM Probabilistic Roadmap

ROS Robot Operating System

RRT Rapidly-Exploring Random Trees

SLAM Simultaneous Localization and Mapping

1 Introduction

1.1 Background

Nowadays, the robotics industry uses navigation systems in its robots. A navigation
system is one of the requirements for categorizing a robot as an autonomous robot. A
navigation system for an autonomous robot consists of three processes: mapping,
localization, and path-planning [1]. The mapping process is the process in which a robot
explores and captures an environment by using a rangefinder such as a laser scanner. The
localization process uses the environment data from the mapping process to locate the
current position of the robot in the environment. The path-planning process uses the
current position of the robot and the environment data to create a collision-free path.
Many kinds of applications use a navigation system, such as self-driving cars, mobile
industrial robots, and vacuum cleaner robots.

About ten years ago, the well-known internet company Google Inc. started developing a
self-driving car; today the project is carried on by Waymo, a self-driving technology
company. It is one of the biggest examples of navigation system research, because the
majority of navigation system research uses a simple and mostly static environment,
whereas a self-driving car needs to be able to adapt to a complex environment [2].

Another example is KUKA.NavigationSolution, which Keller und Knappich Augsburg
(KUKA) has been developing. KUKA mobile robots use KUKA.NavigationSolution as their
navigation system for navigating in industrial areas. In this case, the navigation
system was developed to adapt to both static and dynamic environments in an industrial
area. It is less complex than the navigation system for a self-driving car [3].

The Automation Laboratory at the University of Applied Sciences Soest has been developing
an omnidirectional robot since 2016. A Master's student manufactured the omnidirectional
robot [4], and a Bachelor's student has been developing the electrical and control system
of the robot [5]. The remaining requirement for the robot is a navigation system; by
implementing one, the robot can move autonomously. The three processes of the navigation
system are related to each other: the mapping process needs to be done before the
localization process, and the path-planning process cannot be done without mapping and
localization. Mapping and localization can be done simultaneously by using Simultaneous
Localization and Mapping (SLAM), which has already been implemented by a Bachelor's
student [6]. The path-planning system still needs to be developed to complete the
navigation system.

The researcher has become familiar with the behavior of the omnidirectional robot, and
this behavior is used in the simulation. The path-planning system is based on the RRT*
algorithm. Since the improvement of RRT* over RRT lies only in the way the algorithm
constructs the tree and smooths the path, an optimizer is implemented on top of the RRT*
algorithm to obtain the best solution. The criteria for the best solution are that the
path is short and collision-free. The path-planning will be written in MATLAB, and the
researcher will integrate MATLAB with ROS. The simulation system will be based on ROS.
Gazebo is simulation software for the Linux operating system and has built-in ROS
support. The researcher will use the ROS Indigo version, which is the most stable version
because it fully supports an LTS (Long Term Support) release of Ubuntu.

1.2 Scope

The thesis project focuses on the design of a path-planning algorithm for an omnidirec-
tional robot using MATLAB. The whole path-planning system, including the robot software,
is developed using the Robot Operating System (ROS). The thesis project only uses a
simulation with static and dynamic environments for testing the path-planning system;
the actual robot is currently in the development process.

1.3 Purpose

The main goals of the thesis project are to design a path-planning system for an
omnidirectional robot, to create a simulation of an omnidirectional robot with static and
dynamic environments, and to test the path-planning system in the created simulation.


1.4 Organization of the Thesis

The researcher divides this thesis into six chapters. The first chapter discusses the
background, the purpose, and the scope of the thesis project. The second chapter
discusses the omnidirectional robot, path-planning in robotics, and the software
framework used for creating the robot's software. The third chapter discusses the
path-planning system designed by the researcher for the thesis project. The fourth
chapter discusses the simulation of the omnidirectional robot. The fifth chapter
discusses the results of the omnidirectional robot simulation and the path-planning
system. The sixth chapter presents the conclusion of this thesis.


2 Fundamentals

2.1 Omnidirectional Robot

The omnidirectional robot is becoming popular in the mobile robot industry. A
conventional mobile robot has some limitations: its movement is restricted when traveling
in a small and narrow space. An omnidirectional robot, in contrast, can move in any
direction without turning its body [7], so a small and narrow space poses no problem. It
can easily move vertically, horizontally, diagonally, and circularly, which is why it is
called an omnidirectional robot [8].

An omnidirectional robot has a unique type of wheel. There are two types of omnidirec-
tional robot wheels: the omni wheel and the mecanum wheel. An omni wheel is shown in
Figure 2.1 [9, 10].

Figure 2.1: An omni wheel


An omni wheel is a wheel with rollers around its circumference. The rollers are placed in
the direction of the motor axis, so the wheel can slide frictionlessly in the motor axis
direction. Usually, an omnidirectional robot uses four omni wheels placed at an angle to
the center of the robot. An example is shown in Figure 2.2 [10, 11].

Figure 2.2: The placement of the omni wheel

Each wheel provides a force in the direction of the motor axis along the floor. The
forces add up and create translational and rotational motion.

The second type of omnidirectional robot wheel is the mecanum wheel. The design of a
mecanum wheel is also a wheel with rollers around it. The difference between the omni
wheel and the mecanum wheel is the placement of the rollers: the rollers of a mecanum
wheel are placed at 45 degrees to the wheel. A mecanum wheel is shown in Figure 2.3
[12, 13].

Figure 2.3: A mecanum wheel

Usually, an omnidirectional robot uses four mecanum wheels. The placement of the wheels
differs from that of an omnidirectional robot with omni wheels and can be seen in
Figure 2.4 [14].

Figure 2.4: The mecanum wheels placement


The forces from the individual wheels add up, and the combination of the rotations of the
mecanum wheels allows the robot to move in any direction. For example, if w1, w2, w3, and
w4 rotate forward, T2x and T4x cancel T1x and T3x, and the robot moves forward. If w1 and
w4 rotate forward while w2 and w3 rotate backward, T1x, T2x, T3x, and T4x result in a
force to the left, and the robot moves to the left. The relation between the rotation of
the mecanum wheels and the motion of the robot is shown in Figure 2.5 [14].

Figure 2.5: The relation between the rotation of the mecanum wheel and the motion
of the robot

The motion of the robot can be represented by kinematics equations. There are two
kinematics equations: forward kinematics and inverse kinematics. The forward kinematic
equation determines the rotational speed of each mecanum wheel, and the inverse kinematic
equation determines the longitudinal, transversal, and angular velocity of the robot. The
forward and inverse kinematics equations are shown in Equations 1 and 2 [15].

$$
\begin{bmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \\ \omega_4 \end{bmatrix}
= \frac{1}{r}
\begin{bmatrix}
1 & -1 & -(l_x + l_y) \\
1 & 1 & (l_x + l_y) \\
1 & 1 & -(l_x + l_y) \\
1 & -1 & (l_x + l_y)
\end{bmatrix}
\begin{bmatrix} v_x \\ v_y \\ \omega_z \end{bmatrix}
\tag{1}
$$

$$
\begin{bmatrix} v_x \\ v_y \\ \omega_z \end{bmatrix}
= \frac{r}{4}
\begin{bmatrix}
1 & 1 & 1 & 1 \\
-1 & 1 & 1 & -1 \\
-\frac{1}{l_x + l_y} & \frac{1}{l_x + l_y} & -\frac{1}{l_x + l_y} & \frac{1}{l_x + l_y}
\end{bmatrix}
\begin{bmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \\ \omega_4 \end{bmatrix}
\tag{2}
$$

where

Table 2.1: Properties of the omnidirectional kinematics

ω_1, ω_2, ω_3, ω_4 : the speed of each wheel
r : the radius of the wheel
l_x : half the distance between the front wheels
l_y : half the distance between the front wheel and the rear wheel
v_x : longitudinal velocity
v_y : transversal velocity
ω_z : angular velocity
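The two kinematics equations can also be expressed in code. The following is a plain Python sketch (the thesis implementation itself is written in MATLAB; the function names and parameter values here are illustrative):

```python
def wheel_speeds(vx, vy, wz, r, lx, ly):
    """Equation 1: wheel speeds from the robot velocity."""
    k = lx + ly
    w1 = (vx - vy - k * wz) / r
    w2 = (vx + vy + k * wz) / r
    w3 = (vx + vy - k * wz) / r
    w4 = (vx - vy + k * wz) / r
    return w1, w2, w3, w4

def robot_velocity(w1, w2, w3, w4, r, lx, ly):
    """Equation 2: robot velocity from the wheel speeds."""
    k = lx + ly
    vx = r / 4 * (w1 + w2 + w3 + w4)
    vy = r / 4 * (-w1 + w2 + w3 - w4)
    wz = r / (4 * k) * (-w1 + w2 - w3 + w4)
    return vx, vy, wz
```

Applying Equation 2 to the output of Equation 1 recovers the original robot velocity, which is a quick consistency check on the two matrices.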


2.2 Path-Planning

Path-planning is one of the fundamental problems in robotics, and the ability to plan a
collision-free path is one of the requirements for an autonomous robot. The task of
path-planning is to find paths by connecting different points in an environment.
Path-planning helps a mobile robot to detect obstacles and to generate an optimum path
that avoids them [16].

The general problem of path-planning is to plan a path from point A to point B in a
described environment [17], where point A is the current position of the robot and point
B is the goal position. The difficulty is that a robot is not just a point in space:
besides its position, a robot also has an orientation, and the mobile robot has to
determine the correct direction in which to move to point B.

The geometry of the robot is also needed for path-planning, for example the radius of the
robot. The radius needs to be given to the path planner; otherwise, the planner may
create a path so close to an obstacle that there is no room for the robot to move. A
vacuum cleaner robot, for instance, uses its geometry to create the cleaning path [18].
A path planned without the robot geometry is shown in Figure 2.6.

Figure 2.6: A Path Planning without robot geometry


Figure 2.6 shows that the path touches the obstacle. Since the path describes the
position of the center of the robot, the radius of the robot needs to be included in the
path calculation. A path planned with the robot geometry is shown in Figure 2.7.

Figure 2.7: A Path Planning with robot geometry

A graph is a mathematical concept formally defined by a set of vertices (or nodes) V and
a set of edges E connecting these vertices. Edges are called directed if, for a pair of
vertices, an edge can only be used to travel from one node to the other. Edges may also
carry a numerical value, often called the weight or cost, which is an abstract
representation of the "work" needed to move along that edge [19].

In general, the graph representation is one of the methods used by researchers for
path-planning. The space is represented by a grid, and the researcher determines the size
of the grid. The size of the grid affects both the computational time and the result of
the path-planning: a grid with many cells can produce a better path than a coarse grid,
but the computational time increases because the path planner needs to explore all the
cells [20]. An example of the graph representation is shown in Figure 2.8 [21].


Figure 2.8: The graph representation of path planning

2.2.1 Path-Planning Approaches

Various path-planning algorithms have been introduced for mobile robots. The algorithms
differ according to the environment, the type of sensor, and the capabilities of the
robot. The algorithms are becoming more sophisticated and perform better in terms of
time, cost, and complexity [22]. The general goal of path-planning is to find a path that
is collision-free and as short as possible. Most of the newer approaches provide a
shorter path, but finding a shorter path does not always guarantee a decrease in the time
it takes to find it.

A large number of algorithms have been developed for robot path-planning, for example
path-planning with a genetic algorithm [23] or with the A* algorithm [24]. One of the
simplest algorithms is Dijkstra's algorithm [25].

Beginning at the source vertex, the algorithm assigns to each adjacent vertex a cost
value equal to the distance between them. This step is repeated until all of the vertices
have been processed. At the end of Dijkstra's algorithm, the solution is not only a
single shortest path but also the distance between the source vertex and every other
vertex [26]. The graph of Dijkstra's algorithm is shown in Figure 2.9 [27].


Figure 2.9: The graph of Dijkstra's Algorithm

Dijkstra's algorithm calculates all the distances between vertices, so the robot can use
any of the solutions provided by the algorithm. Since the goal of path-planning is to
find the shortest path, Dijkstra's algorithm automatically chooses the solution with the
smallest distance. Figure 2.9 shows that path ACEG has the smallest distance among the
solutions.
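Dijkstra's algorithm as described above can be sketched in a few lines of Python. The edge weights below are hypothetical, since the actual values of Figure 2.9 are not reproduced here; they are chosen so that ACEG comes out as the shortest path, matching the discussion:

```python
import heapq

def dijkstra(graph, source):
    """Compute the shortest distance from source to every vertex."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist[u]:
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(queue, (d + w, v))
    return dist

# Hypothetical edge weights (not the actual values of Figure 2.9)
graph = {
    "A": {"B": 4, "C": 1},
    "B": {"A": 4, "D": 2},
    "C": {"A": 1, "E": 3},
    "D": {"B": 2, "G": 5},
    "E": {"C": 3, "G": 2},
    "G": {"D": 5, "E": 2},
}
distances = dijkstra(graph, "A")
```

Note that the returned dictionary contains the distance from the source to every vertex, not only to the goal, which is exactly the property of the algorithm described above.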

An improved version of Dijkstra's algorithm is the A* algorithm. This algorithm is based
on Equation 3 [28].

$$
f(n) = g(n) + h(n) \tag{3}
$$

where f(n) is the total cost of node n, g(n) is the cost of reaching node n from the
current node, and h(n) is the cost of reaching the goal node from node n. The algorithm
chooses the node with the lowest f(n), and this step is repeated until node n is the goal
node [29]. The computation time of the A* algorithm beats Dijkstra's algorithm. The major
drawback of the A* algorithm is that it uses a lot of memory, since the algorithm keeps
all nodes in memory. For example, if the map has a size of 300 x 300 and the distance
between nodes is 0.3, then the memory space used by the algorithm is 27000 [30].


In a plain area, the cost function g(n) depends on the movement of the robot. For
example, the cost of a vertical or horizontal movement is 1, and the cost of a diagonal
movement is 1.5. The algorithm calculates costs only for the nodes around the current
node. An illustration of the area of calculation is shown in Figure 2.10.

Figure 2.10: The area of calculation of A* algorithm

The algorithm chooses the node with the lowest cost among the neighboring nodes to be the
next current node, and the previous node is added to an array. The calculation of the
cost is shown in Figure 2.11 [31].


Figure 2.11: The cost calculation of A* algorithm

The number inside each node is the distance between the node and the goal position. The
green node is the goal position, the red node is the start position, and the yellow nodes
are the nodes with the smallest cost from the previous node.

According to Torres Haro [32], the A* algorithm produces an optimal path but costs time.
The cost functions g(n) and h(n) have to be adjusted depending on the environment, and
the A* algorithm is not the best choice for a dynamic environment.
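A minimal A* sketch on a grid, using the movement costs mentioned above (1 for straight moves, 1.5 for diagonals) and a Euclidean heuristic, could look as follows. The grid and cost choices are illustrative, not the thesis implementation:

```python
import heapq
import math

def a_star(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = occupied).
    Straight moves cost 1, diagonal moves cost 1.5; h(n) is Euclidean."""
    moves = [(dx, dy, 1.0 if dx == 0 or dy == 0 else 1.5)
             for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    h = lambda n: math.hypot(goal[0] - n[0], goal[1] - n[1])
    g = {start: 0.0}
    parent = {start: None}
    open_set = [(h(start), start)]
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:  # reconstruct the path back to the start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for dx, dy, step in moves:
            nxt = (node[0] + dx, node[1] + dy)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]] == 1:
                continue  # occupied cell
            if g[node] + step < g.get(nxt, float("inf")):
                g[nxt] = g[node] + step
                parent[nxt] = node
                heapq.heappush(open_set, (g[nxt] + h(nxt), nxt))
    return None  # no collision-free path exists

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
path = a_star(grid, (0, 0), (3, 3))
```

The memory drawback discussed above is visible here: `g` and `parent` retain an entry for every node the search has ever touched.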

A genetic algorithm is a class of adaptive methods that can be used to solve search and
optimization problems involving large search spaces. The search is performed using
simulated evolution, i.e. survival of the fittest [33]. With every generation, the best
solutions are genetically manipulated to form the solution set for the following
generation. As in nature, the solutions are combined and also undergo random change, or
mutation [34].

One example of a genetic algorithm for path-planning is an approach that uses the
distance and direction of the path as the genotype for the genetic algorithm [35].
Through the process of the genetic algorithm, the path becomes a feasible path with a
short distance.

In the process of the genetic algorithm, an initial solution is represented as a gene.
The gene consists of chromosomes, and each chromosome is a node in the path-planning
graph. The nodes are connected to each other into a path. One of the processes of the
genetic algorithm is the crossover process, in which the algorithm chooses a random point
and exchanges it with a chromosome. The crossover process is continued until the
algorithm finds the best fitness, which is the shortest collision-free path.

In general, the genetic algorithm is used to optimize the current path. Han Baek states:
"After finding objects and goal position, a genetic searching algorithm is activated in
the path planner to generate via-points for a short and safe path to the goal." [36].
Therefore, the genetic algorithm optimizes the path planner.
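The crossover step described above can be sketched as follows. This is a simplified single-point crossover on path chromosomes; the fitness function shown is a hypothetical example that only rewards short paths and omits the collision check:

```python
import random

def crossover(path_a, path_b):
    """Single-point crossover: cut both parent paths at a random point
    and exchange the tails."""
    cut = random.randint(1, min(len(path_a), len(path_b)) - 1)
    return path_a[:cut] + path_b[cut:], path_b[:cut] + path_a[cut:]

def fitness(path):
    """Hypothetical fitness: shorter paths score higher (no collision check)."""
    length = sum(abs(ax - bx) + abs(ay - by)
                 for (ax, ay), (bx, by) in zip(path, path[1:]))
    return 1.0 / (1.0 + length)
```

In a full genetic path planner, crossover would be combined with mutation and selection over many generations, keeping the candidates with the best fitness.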

Another approach is the sampling-based algorithm. Sampling-based algorithms are effective
for a large class of problems in fields such as robotics, manufacturing, and
computational biology [37], and they can handle high-dimensional spaces, whereas
classical grid-based methods such as the A* algorithm only work optimally in
low-dimensional spaces [38]. One of the sampling-based methods is the Probabilistic
Roadmap (PRM).

The probabilistic roadmap is described in [39]. The PRM goes through a learning phase
consisting of two steps: a construction step and an expansion step. In the construction
step, the PRM initially generates random nodes in the space. Then, the algorithm tries to
connect each node with the other nodes; a local path planner is called during this
process to construct a path from node to node. Nodes that are close to each other are
connected, and a node that is already connected to another node is not processed again.

The expansion step is applied when a node has a big gap to the other nodes or when there
is only a small gap between two obstacles: a new node configuration is created near such
a node, and the new node is processed through the construction step. Finally, the entire
set of connected nodes is queried, which means that the algorithm finds connected nodes
from the start position to the goal position [40]. A PRM result is shown in Figure 2.12
[41].
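The construction step described above can be sketched as follows. The local planner here is a hypothetical straight-line checker against circular obstacles, and all names and parameters are illustrative:

```python
import math
import random

def collision_free(p, q, obstacles):
    """Hypothetical local planner: sample along the straight segment p-q
    and reject it if any sample falls inside a circular obstacle."""
    for t in (i / 20.0 for i in range(21)):
        x, y = p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])
        if any(math.hypot(x - ox, y - oy) < r for ox, oy, r in obstacles):
            return False
    return True

def build_roadmap(n_nodes, radius, obstacles, size=10.0):
    """PRM construction step: sample random collision-free nodes and
    connect nearby pairs with the local planner."""
    nodes = []
    while len(nodes) < n_nodes:
        p = (random.uniform(0, size), random.uniform(0, size))
        if all(math.hypot(p[0] - ox, p[1] - oy) >= r for ox, oy, r in obstacles):
            nodes.append(p)
    edges = {i: set() for i in range(n_nodes)}
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if math.dist(nodes[i], nodes[j]) <= radius and \
               collision_free(nodes[i], nodes[j], obstacles):
                edges[i].add(j)
                edges[j].add(i)
    return nodes, edges
```

A query then searches this roadmap (for example with Dijkstra's algorithm) for a chain of connected nodes from the start to the goal.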


Figure 2.12: Probabilistic Roadmap (PRM) result

A more recent development in path-planning algorithms is the Rapidly-Exploring Random
Tree (RRT). The RRT quickly finds some solution; a smooth and shortest path requires an
additional algorithm to optimize the solution provided by the RRT. The RRT algorithm is a
sampling-based planner: it connects two points sampled randomly in a space, and the
connection of these points constructs a tree that explores the whole space [42].
Sampling-based planners often find a path quickly, and additional time is then required
to improve the solution. The RRT is also known as an online path planner. The incremental
nature of these algorithms avoids the necessity to set the number of samples; the RRT
itself can generate an unlimited number of samples. Due to this, the RRT algorithm can be
applied to online problems [43]. A research paper by Ferguson and Stentz [44] states that
the solution of the RRT algorithm can be improved by running the algorithm multiple
times, using the first solution as a reference for the next one.


Due to the random points sampled by the RRT algorithm, a zigzag path can occur. To
overcome this problem, an approach named after the RRT was proposed by Karaman and
Frazzoli [42]: the RRT*. The RRT* improves the path quality by rewiring the tree and
searching the neighboring nodes. The comparison between RRT and RRT* is shown in
Figure 2.13 [45] and Figure 2.14 [46].

Figure 2.13: RRT Algorithm Result

Figure 2.14: RRT* Algorithm Result


However, the RRT* algorithm has a trade-off between the result and the computational
time: it requires more time to produce a smooth path [47]. In this project, the
researcher proposes an algorithm to optimize the RRT*.
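The rewiring step that distinguishes RRT* from RRT can be sketched as follows. This is a simplified sketch assuming the per-node path cost from the root and the neighbor set are already available; the obstacle check is omitted:

```python
import math

def rewire(new_node, neighbors, parent, cost):
    """Reconnect each neighbor through new_node if that shortens the
    neighbor's path cost from the root of the tree."""
    for nb in neighbors:
        new_cost = cost[new_node] + math.dist(new_node, nb)
        if new_cost < cost[nb]:
            parent[nb] = new_node  # rewire the branch through the new node
            cost[nb] = new_cost
    return parent, cost
```

Running this check for every newly added sample is what straightens the zigzag branches of the plain RRT, at the price of the extra computation discussed above.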

2.3 Robot Operating System

As the scale and scope of robotics continue to grow, developing software for a robot
becomes difficult. Each type of robot has its own hardware, and each piece of hardware
has its own interface. For example, a mobile robot may have two pieces of hardware, A and
B, where hardware A has an old interface and hardware B a new one; the two interfaces are
not compatible with each other [48].

Robot Operating System (ROS) is an open-source software framework for robots. ROS has
been under development since 2007, starting at the Stanford Artificial Intelligence
Laboratory. ROS provides a set of software libraries and tools that help to develop
software for robots, and it continues to be developed. The first version of ROS was
called Box Turtle. The Box Turtle version can execute low-level and high-level tasks such
as sensor access, diagnostic reporting, power management, autonomous navigation, 1-D and
3-D perception, arm control, and visualization. There are three main features of ROS:
peer-to-peer connection, multi-lingual programming, and open source [49].

2.3.1 Client-Server Connection

A system built using ROS consists of hosts connected to each other in a client-server
fashion with one server. The server is called roscore. If the roscore goes down, all
connections are lost and all processes stop. The connection of a ROS system is shown in
Figure 2.15.


Figure 2.15: Client-Server Connection of ROS network

For example, consider a mobile robot that is designed to move both manually and
autonomously. The mobile robot can be controlled using a controller, and a camera is
attached to it so that the user can see the direction of the robot in the camera image.
The user can also control the robot wirelessly from an off-board computer. The ROS
network connection is shown in Figure 2.16.

Figure 2.16: A simple ROS network connection


The onboard computers inside the mobile robot are connected to each other via Ethernet,
and the off-board computer is connected wirelessly to the roscore. In this simple ROS
network, the mobile robot can be controlled wirelessly by the user. A problem occurs when
the wireless connection is not stable; in that case, a strong wireless router is needed
to maintain the stability of the connection, so that the data sent by the off-board
computer is not delayed [50].

Each host runs its own processes. In ROS, a process is called a node. A node can be any
process, such as sensor measurement, data fusion, a controller, or other processes. Nodes
communicate with each other by publishing and subscribing to messages. A message contains
data produced by a node, and each message has its own name. Publishing is the process in
which a node broadcasts data to the server; any node that is connected to the server and
subscribes to the message with the corresponding name can use the data [51].

For example, consider a talker node and a listener node. The talker node publishes string
data named "chatter" to the network, and the listener node subscribes to the string data
named "chatter". There is a tool in ROS called rqt_graph for printing the graph of the
ROS system. The ROS system graph of this example is shown in Figure 2.17.
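The talker/listener example can be illustrated with a minimal publish/subscribe sketch. Note that this is not the ROS API itself, only the pattern that ROS implements; the class and topic names are illustrative:

```python
class Core:
    """A toy stand-in for roscore: routes messages to subscribers by name."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers.get(topic, []):
            callback(message)

core = Core()
received = []
core.subscribe("chatter", received.append)   # the listener node
core.publish("chatter", "hello world")       # the talker node
```

The listener receives the message only because it subscribed under the same name ("chatter") that the talker published to, which is exactly the naming mechanism described above.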

Figure 2.17: The ROS system graph

2.3.2 Multi-lingual Programming

When building a system together, the individuals involved often know different
programming languages. These differences result in different program syntax, programming
time, and runtime efficiency. For example, suppose there are three people in a group: the
first knows C++, the second Python, and the third LISP. To maximize the performance of
the development stage, the group would normally have to decide first which programming
language to use. If the group chooses C++, the second and the third person have a
problem: learning the C++ syntax is time-consuming, and since they have no experience in
C++, the runtime efficiency of their programs is questionable. ROS currently supports
four different programming languages: C++, Python, Octave, and LISP [52]. The most
popular programming languages in ROS are C++ and Python. With the ROS framework,
developers can still build a single system even when using different programming
languages.

A node can be written in C++, Python, Octave, or LISP. The question is: how can a node
know that a message it receives contains valid data of the expected type? This is the
problem of cross-language development. ROS uses a simple language-neutral interface
definition language (IDL) to define the messages sent between nodes [53]. An example of
the IDL is shown in Figure 2.18.

Figure 2.18: An example of IDL

The IDL uses a short text file to define each message. The example in Figure 2.18 shows a
message containing a variable header with the data type std_msgs/Header and variables
w1, w2, w3, and w4 with the data type float32. ROS expands the IDL into each programming
language, so each language has its own message definition. Nodes written in different
programming languages can therefore publish and subscribe to the same message without any
missing or invalid data.
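From the description above, the message definition in Figure 2.18 would correspond to a ROS .msg file of the following form (reconstructed from the text; the original file is not reproduced here):

```
std_msgs/Header header
float32 w1
float32 w2
float32 w3
float32 w4
```

ROS generates the matching C++, Python, Octave, and LISP message classes from this single definition.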

2.3.3 Open Source

ROS is an open source framework. Its source code is publicly available. ROS is based on
BSD license, which allows the development of non-commercial and commercial projects.
ROS does not require a module to be used. It is only a process where a data passes be-
tween modules through inter-process communication. Therefore, anyone can use ROS
freely at zero cost.


3 Path-Planning System

3.1 System Overview

The thesis project uses the RRT-Star (RRT*) algorithm as its main path-planning
algorithm. The RRT-Star algorithm is written in the MATLAB language. The RRT-Star
produces a collision-free path of random length. The next step is to implement an
optimizer that uses the raw output of the RRT-Star to obtain a collision-free path with a
shorter distance.

3.1.1 RRT-Star Algorithm

The first step of the RRT-Star algorithm is to construct a tree that continuously grows
based on random samples. The first sample is the start point. The algorithm then chooses
a random point within the space and finds the existing sample nearest to the chosen
point. If there is no obstacle between the chosen point and the nearest sample, the
algorithm saves the chosen point to the data samples and connects the two points as a
branch. This process is illustrated in Figure 3.1.

Figure 3.1: The first step of RRT-Star Algorithm


The researcher limits the length of a branch. If the nearest sample to the chosen point
is not within the branch length limit, the algorithm generates a point at the length
limit, on the line from the nearest sample towards the chosen point, and saves it to the
data samples. This process is illustrated in Figure 3.2.
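The sampling and branch-limiting step described above can be sketched as follows. This is a simplified 2-D Python sketch without the obstacle check (the thesis implementation is in MATLAB), and the parameter names are illustrative:

```python
import math
import random

def steer(nearest, target, max_branch):
    """Return target if it lies within the branch limit, otherwise a point
    at distance max_branch from nearest towards target."""
    d = math.dist(nearest, target)
    if d <= max_branch:
        return target
    t = max_branch / d
    return (nearest[0] + t * (target[0] - nearest[0]),
            nearest[1] + t * (target[1] - nearest[1]))

def grow_tree(start, n_iterations, max_branch, size=20.0):
    """First step of the RRT-Star tree growth (obstacle check omitted)."""
    samples = [start]
    parents = {start: None}
    for _ in range(n_iterations):
        rnd = (random.uniform(0, size), random.uniform(0, size))
        nearest = min(samples, key=lambda p: math.dist(p, rnd))
        new = steer(nearest, rnd, max_branch)
        samples.append(new)
        parents[new] = nearest  # branch from the nearest sample to the new point
    return samples, parents
```

Every branch produced this way is at most `max_branch` long, which is the length limit discussed above.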

Figure 3.2: The generated point within the length limit of the branch

By iterating the first step of the RRT-Star algorithm, the tree starts to grow. The
MATLAB result of the first step is shown in Figure 3.3.


Figure 3.3: The MATLAB result of the first step of RRT-Star Algorithm

Figure 3.3 shows black lines and red dots. The black lines are the branches of the tree.
The red dots are random points that are not yet connected to any branch. Because the
environment is large, there is only a small chance that these points are revisited by the
first step.

Limiting the branch length prevents the algorithm from creating long branches, which
would hinder the second step of the RRT* algorithm when it searches for the shortest
path. The disadvantage of a limited branch length is that the algorithm needs more time
to grow branches from the start point to the goal point. The comparison between limited
and unlimited branch length is shown in Figure 3.4: at the same runtime, the unlimited
branch length covers the whole space, while the limited branch length does not.


(a) (b)

Figure 3.4: (a) Limited branch length, (b) Unlimited branch length

The second step is to connect each data sample with another data sample within a limited
boundary. The start point is the first parent. The next data sample that lies within the
start point's boundary becomes the start point's child. The difference between the first
and second steps is that, in the second step, the algorithm does not generate a new point
if no data sample lies within the parent's boundary. The second step is illustrated in
Figures 3.5 and 3.6.

Figure 3.5: No sample is located in the parent area


Figure 3.6: Sample is located within the parent area

Figure 3.6 shows one sample located within the parent boundary; this sample becomes
the parent's child. The second step keeps running until a child point is equal to the
goal point. The MATLAB result of the second step is shown in Figure 3.7.


Figure 3.7: The MATLAB result of the second step of RRT-Star Algorithm

Since the data samples are generated from random points, there is little chance of
generating a point exactly equal to the goal point. The researcher therefore adds a goal
function to the algorithm, which generates a point equal to the goal point with a certain
probability; in this case, the probability is set to 50%. The goal function affects the first
step, which then generates data samples in the direction of the goal point, so branches
grow toward the goal. The difference between the second step with and without the goal
function is shown in Figure 3.8.

(a) (b)

Figure 3.8: (a) The result without goal function, (b) with goal function
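The goal function described above amounts to goal-biased sampling: with a fixed probability, the sampler returns the goal point instead of a uniform random point. A minimal Python sketch (the function name is illustrative, not the thesis MATLAB code):

```python
import random

def sample_with_goal_bias(goal, width, height, p_goal=0.5, rng=random):
    # With probability p_goal, return the goal itself so branches
    # grow toward it; otherwise sample uniformly in the space.
    if rng.random() < p_goal:
        return goal
    return (rng.uniform(0, width), rng.uniform(0, height))
```

With p_goal = 0.5 as in the thesis, roughly half of the generated samples are the goal point itself.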

The first step discussed the problem that occurs when the branch length is not limited.
Since the step from point to point is then unbounded, there is a chance that the initial
result of the algorithm is a long path. The difference between the initial result of the last
step with a small step (limited branch length) and with a big step (unlimited branch
length) is shown in Figure 3.9.


(a) (b)

Figure 3.9: (a) The result of a small step, (b) The result of a big step

The first result is shorter than the second, but the second result takes less time: the first
takes 0.2 seconds, the second 0.02 seconds. These times are valid only for the
environment shown in Figure 3.9; the more complex the environment, the more time is
needed. To achieve a short and smooth path in a short time, the researcher adds an
optimizer to the algorithm.

3.1.2 Optimizer Algorithm

The main goal of a path-planning algorithm is to create a collision-free path at minimum
cost. The minimum cost is achieved when the collision-free path is the shortest path from
the start position to the goal position. The researcher adds an optimizer to the RRT*
algorithm. The optimizer refines the initial result of the RRT* algorithm and keeps
running until the shortest path is found. The researcher takes the initial result of RRT*
with a big step, because it produces data samples faster than the initial result with a
small step.

The optimizer is based on an ellipse. An ellipse has two focal points, and the sum of the
distances to the two focal points is constant for every point on the curve. An ellipse has
two axes, the major axis and the minor axis. The major axis is the line that passes through
the focal points. The minor axis is the line perpendicular to the major axis through the
center point. The ellipse shape is shown in Figure 3.10.

Figure 3.10: Foci and Loci of an ellipse

A is the major axis and B is the minor axis. C is the focal distance. P is a point on the
curve and can be anywhere on it. F1 and F2 are the focal points of the ellipse. PF1 is the
distance between F1 and P, and PF2 is the distance between F2 and P. O is the center
point of the ellipse. The sum of PF1 and PF2 is always constant: if P is moved to another
point on the curve so that PF1 and PF2 change individually, their sum remains the same
as before.

In this application, the focal points are the start point and the goal point. C is the distance
from either the start point or the goal point to the midpoint between them. A and B can
be obtained by deriving an equation from the ellipse shape shown in Figure 3.10. Point P
is moved to the left or right vertex of the ellipse to obtain the equation. The illustration
is shown in Figure 3.11.


Figure 3.11: Foci and Loci of an ellipse with point P moved to the left

The first equation relates the axes of the ellipse to the distances from the focal points to
the point P, viewed along the ellipse's major axis:

𝑋 = 𝑃𝐹2

2𝐴 = 2𝐶 + 𝑃𝐹2 + 𝑋 (4)

The second equation is the same relation, viewed from the focal points:

𝑃𝐹1 = 2𝐶 + 𝑃𝐹2 (5)

By substituting Equation 4 into Equation 5, the relation of the focal points to the axis of
the ellipse is obtained:

𝑃𝐹1 = (2𝐴 − 2𝑃𝐹2) + 𝑃𝐹2

𝐴 = (𝑃𝐹1 + 𝑃𝐹2) / 2 (6)


Equation 6 shows that if PF1 and PF2 are equal, then each of them is equal to A. The
values of A and C have now been found; the last equation gives the value of B. Figure
3.12 shows that A, B, and C are related by Pythagoras's theorem.

Figure 3.12: Pythagoras’s theorem on an ellipse

From Figure 3.12, the value of B is obtained by applying Pythagoras's theorem to the
triangle formed by A, B, and C:

𝐴² = 𝐵² + 𝐶²

𝐵² = 𝐴² − 𝐶² (7)

The optimizer refines the first solution of the path-planning algorithm and continues until
the shortest path is found. The first step is to relate the first solution to Equation 7.
Consider the first solution shown in Figure 3.13.


Figure 3.13: First solution of the RRT-Star Algorithm

The start point and the goal point are defined as the focal points. The value of C in
Equation 7 requires the midpoint between the start point and the goal point. The value of
A is obtained by finding the maximum, over the path, of the sum of the distances from
the two focal points to a point P; in the first solution, P ranges over all points connected
from the start point to the goal point. The midpoint between the start and goal points,
and the maximum sum of distances for the first solution, are shown in Figure 3.14.

Figure 3.14: Finding the needed value from the first solution for the optimizer

The sum of PF1 and PF2 is largest when P3 is chosen. Substituting PF1 and PF2 into
Equation 6 gives the value of A. Calculating the distance from either F1 or F2 to O gives
the value of C, and substituting A and C into Equation 7 gives the value of B. The
optimizer then creates an ellipse from the obtained values of A, B, and C.
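These steps follow directly from Equations 6 and 7. The Python sketch below assumes the first solution is available as a list of (x, y) waypoints; the function names are illustrative, not the thesis MATLAB code.

```python
import math

def ellipse_from_path(start, goal, path):
    # Focal points are the start and goal; C is half their distance (not
    # returned, it enters through B below), the center is their midpoint.
    c = math.dist(start, goal) / 2.0
    center = ((start[0] + goal[0]) / 2.0, (start[1] + goal[1]) / 2.0)
    # A is half the maximum PF1 + PF2 over the path (Equation 6).
    best = max(math.dist(p, start) + math.dist(p, goal) for p in path)
    a = best / 2.0
    # B follows from Pythagoras on the ellipse axes (Equation 7).
    b = math.sqrt(max(a * a - c * c, 0.0))
    return center, a, b

def inside_ellipse(q, start, goal, a):
    # A point lies inside the ellipse if the sum of its distances
    # to the two focal points does not exceed 2A.
    return math.dist(q, start) + math.dist(q, goal) <= 2.0 * a
```

`inside_ellipse` is the rejection test the optimizer needs: new random samples are kept only when they fall inside the current ellipse, and as the best path shortens, A shrinks toward C and the ellipse collapses toward the straight start–goal line.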


The optimizer forces the path-planning algorithm to generate samples only inside the
ellipse. Without the optimizer, the algorithm generates samples in the whole environment,
which takes more time. The optimizer also helps the algorithm find the shortest path
faster, as Equation 7 shows: if B is equal to zero, then A is equal to C, which corresponds
to a straight line, and a straight line between two points is always the shortest path.
Figure 3.15 shows the difference between the path-planning algorithm with and without
the optimizer.

(a) (b)

Figure 3.15: (a) The path planning with the optimizer, (b) without the optimizer

Figure 3.15a shows that the path-planning algorithm generates data samples only inside
the ellipse. Figure 3.15b shows that it generates data samples in the whole environment.


4 Mobile Robot Simulation

4.1 System Overview

The researcher creates the simulation of the mobile robot in Gazebo. The simulation
reproduces the behavior of the omnidirectional robot. To integrate the simulation with
the path-planning algorithm, the researcher builds a system based on the Robot Operating
System (ROS). The system is divided into five main nodes: the simulation node, the
mapper node, the MATLAB node, the path to vector speed node, and the kinematic node.
The connection between the nodes is shown in Figure 4.1.

Figure 4.1: The mobile robot simulation system

4.1.1 Simulation Model

The researcher creates the simulation model in the Gazebo simulator. The model is based
on the real robot; its dimensions are shown in Figure 4.2.


Figure 4.2: The dimension of the mobile robot

The robot has eight ultrasonic sensors and one laser scanner. The position of each sensor
is shown in Figure 4.3.

Figure 4.3: The position of the ultrasonic sensors and the laser scanner


The real robot is shown in Figure 4.4.

Figure 4.4: The actual mobile robot

The body of the robot is based on a rectangular shape, and each wheel is based on a
cylindrical shape. Examples of a cylindrical shape and a rectangular shape are shown in
Figure 4.5.


Figure 4.5: Rectangular shape and circular shape in Gazebo

The size of each shape can be modified inside Gazebo. Gazebo has a built-in laser
scanner plugin that can be attached to a shape; its properties can be changed by editing
the plugin script, which is shown in Figure 4.6.


Figure 4.6: The laser scanner’s plugin script

The researcher focuses only on the minimum and maximum range of the laser scanner
and the horizontal angular resolution, and sets the laser scanner plugin to match the real
laser scanner. In this research, the laser scanner scans only a 180-degree area. According
to the datasheet of the RPLIDAR A2 [54], the angular resolution is 0.9 degrees, which
means there are 201 samples between 0 and 180 degrees. The minimum scanning
distance is 0.05 meters; the maximum is 5 meters.
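For reference, range and resolution settings of this kind are typically expressed in a Gazebo SDF ray-sensor description along the following lines. The element values match the numbers above, but the exact markup is illustrative and should be checked against the actual plugin script in Figure 4.6 and the Gazebo version used.

```xml
<sensor name="laser" type="ray">
  <ray>
    <scan>
      <horizontal>
        <samples>201</samples>          <!-- 180 deg / 0.9 deg + 1 -->
        <min_angle>-1.5708</min_angle>  <!-- -90 deg in radians -->
        <max_angle>1.5708</max_angle>   <!--  90 deg in radians -->
      </horizontal>
    </scan>
    <range>
      <min>0.05</min>  <!-- meters -->
      <max>5.0</max>   <!-- meters -->
    </range>
  </ray>
</sensor>
```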

4.1.2 Mapper Node

The laser scanner and the ultrasonic sensors scan the environment around the robot. The
result of the laser scanner is an array of distances between the laser scanner and the
objects it detects. Consider an object standing in front of the robot, as shown in
Figure 4.7.


Figure 4.7: The mobile robot with an object in front of it

The result of the laser scanner from Figure 4.7 example is shown in Figure 4.8.

Figure 4.8: The result of the laser scanner

The result of the laser scanner is saved in an array called ranges. The size of ranges
depends on the angle increment of each scan. If a value in ranges is Inf, there is no
object in front of the robot at that angle; if it holds a finite value, there is an object at
that angle. The angle corresponds to the position of the value within ranges. The laser
scanner only reports the distance between an object and the laser scanner position, so
the researcher creates an algorithm that builds a point cloud on the environment map
from the laser scanner result, using the scanner readings, the current position of the
robot, and the position of the laser scanner. The calculation is illustrated in Figure 4.9.

Figure 4.9: Calculating the point cloud of the laser scanner data

Where

Table 4.1: Table properties of the illustration in Figure 4.9

𝑥, 𝑦 — The position of a point cloud in x and y coordinates on the environment map
𝛼 — The angle of the laser scanner data

The first formula for calculating a point cloud is shown in Equations 8 and 9.

𝑥 = 𝑟𝑜𝑏𝑜𝑡𝑋 + 𝑙𝑎𝑠𝑒𝑟𝑋 + 𝑟𝑎𝑛𝑔𝑒𝑠𝑑𝑎𝑡𝑎 ∗ cos(𝛼) (8)

𝑦 = 𝑟𝑜𝑏𝑜𝑡𝑌 + 𝑙𝑎𝑠𝑒𝑟𝑌 + 𝑟𝑎𝑛𝑔𝑒𝑠𝑑𝑎𝑡𝑎 ∗ sin(𝛼) (9)

The lower and upper limits of the laser scanner's scanning angle are shown in
Equation 10.

−½𝜋 ≤ 𝛼 ≤ ½𝜋 (10)

Equations 8 and 9 are valid only when the pitch, yaw, and roll of the robot are 0. The
next example, shown in Figure 4.10, is when the robot has rotated.

Figure 4.10: The calculation of the point cloud when the robot has rotated

Where

Table 4.2: Table properties of the illustration in Figure 4.10

𝑥, 𝑦 — The position of a point cloud in x and y coordinates on the environment map
𝛼 — The angle of the laser scanner data
𝛽 — The yaw angle of the robot

Based on Figure 4.10, the final formula for calculating a point cloud is obtained. It is
shown in Equations 11 and 12.

𝑥 = 𝑟𝑜𝑏𝑜𝑡𝑋 + 𝑙𝑎𝑠𝑒𝑟𝑋 ∗ cos(𝛽) + 𝑟𝑎𝑛𝑔𝑒𝑠𝑑𝑎𝑡𝑎 ∗ cos(𝛼 + 𝛽) (11)

𝑦 = 𝑟𝑜𝑏𝑜𝑡𝑌 + 𝑙𝑎𝑠𝑒𝑟𝑌 ∗ sin(𝛽) + 𝑟𝑎𝑛𝑔𝑒𝑠𝑑𝑎𝑡𝑎 ∗ sin(𝛼 + 𝛽) (12)
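Equations 11 and 12 are applied beam by beam. Below is a minimal Python sketch of the mapper calculation (the thesis code is MATLAB and is not reproduced here); beams reported as infinity are skipped, matching the Inf handling of the ranges array described above.

```python
import math

def scan_to_points(ranges, angle_min, angle_inc,
                   robot_x, robot_y, yaw, laser_x, laser_y):
    # Convert laser ranges to map-frame points using Equations 11 and 12.
    # Beams with no return (infinite range) are skipped.
    points = []
    for i, r in enumerate(ranges):
        if math.isinf(r):
            continue
        alpha = angle_min + i * angle_inc  # beam angle in the laser frame
        x = robot_x + laser_x * math.cos(yaw) + r * math.cos(alpha + yaw)
        y = robot_y + laser_y * math.sin(yaw) + r * math.sin(alpha + yaw)
        points.append((x, y))
    return points
```

With yaw = 0 and a laser offset of (laserX, 0), this reduces to Equations 8 and 9, which matches the first mapper example.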

The researcher creates a simple MATLAB program to verify that Equations 11 and 12
are valid. The first result is based on the example in Figure 4.7 and is shown in
Figure 4.11.

Figure 4.11: The result of the mapper node


The second result is based on an example in Figure 4.12.

Figure 4.12: The second example of the point cloud calculation

The example in Figure 4.12 shows that the robot has rotated and the object has moved
to another position. The result of this example is shown in Figure 4.13.


Figure 4.13: The result of the mapper node for the second example

Both results show that Equations 11 and 12 are valid. The mapper node sends the point
clouds to the MATLAB node.

4.1.3 MATLAB Node

The path-planning algorithm is written in MATLAB and was explained in Chapter 3. In
this node, the researcher integrates the path planning with the ROS system. The
path-planning algorithm needs the current position of the robot and the goal position;
the goal position can be changed by the user in MATLAB. The path planning also needs
the current data of the environment, which the mapper node sends to the MATLAB node.
The result of the MATLAB node is a collision-free path, which MATLAB sends to the
path to vector speed node. The flowchart of the MATLAB node is shown in Figure 4.14.


Figure 4.14: The flowchart of the MATLAB node

4.1.4 Path to Vector Speed Node

The path to vector speed node converts the path data sent by the MATLAB node into
vector speeds X, Y, and Z. The conversion is done by calculating a distance error and an
angle error. The distance error is the distance between the current position of the robot
(sent by the simulation node) and the goal point: if it is positive, the robot moves
forward; if negative, the robot moves backward. The angle error is the difference between
the robot's angle with respect to the world coordinate system and the angle of the line
from the current robot position to the goal point: if it is positive, the robot turns left; if
negative, the robot turns right. This is illustrated in Figure 4.15.


Figure 4.15: The distance error and the angle error

α is the angle of the line from the current position of the robot to the goal point, β is the
current angle of the robot, γ is the angle error, and de is the distance error.

The laser scanner is located at the front of the robot and is the main sensor for mapping
the environment and detecting obstacles. Therefore, the goal point must always be in
front of the mobile robot, which means the robot cannot move backward toward it. The
first step of this node is to align the front of the robot with the goal point: if the goal
point is located behind the robot, the robot turns 180 degrees.

The world coordinate system of the simulation uses arctan2 to convert Cartesian
coordinates to polar coordinates. The range of the angle is shown in Figure 4.16.


Figure 4.16: The polar coordinate of the simulation

A problem occurs when the angle error is calculated with the common formula shown
in Equation 13.

𝑎𝑛𝑔𝑙𝑒 𝑒𝑟𝑟𝑜𝑟 = 𝑡𝑎𝑟𝑔𝑒𝑡 𝑎𝑛𝑔𝑙𝑒 − 𝑐𝑢𝑟𝑟𝑒𝑛𝑡 𝑎𝑛𝑔𝑙𝑒 (13)

α is the angle of the robot, β is the target angle, γ is angle error 1, and δ is angle error 2.
The example calculation is shown in Figure 4.17.


Figure 4.17: The illustration of the example

𝛼 = 1.25 rad

𝛽 = arctan2(𝑌target − 𝑌robot, 𝑋target − 𝑋robot)

𝛽 = arctan2(−1 − 2, −5 − 1.6) = −2.72 rad

𝛾 = 𝛽 − 𝛼 = −3.97 rad

Direct calculation with the common error formula selects angle error 1, and the robot
would turn right. However, Figure 4.17 shows that δ is smaller than γ, which means the
robot should turn left. The problem is that the common error formula alone does not
yield the shortest angle. The researcher therefore adds an error selection function to the
path to vector speed node. The function chooses either angle error 1 or angle error 2 as
the angle error by finding the shorter of the two. The flowchart of the error selection
function is shown in Figure 4.18.


Figure 4.18: The flowchart of the error selection algorithm

The flowchart states that if angle error 1 is greater than half a circle, the other part of the
circle, which is angle error 2, is calculated, and the program chooses angle error 2 as the
angle error for the next step of the path to vector speed node.
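The selection function amounts to wrapping the raw angle difference of Equation 13 into the range (−π, π]. A minimal Python sketch, assuming both input angles come from arctan2 so the raw difference always lies within (−2π, 2π):

```python
import math

def shortest_angle_error(target_angle, current_angle):
    # Raw difference, as in Equation 13 (angle error 1).
    err = target_angle - current_angle
    # If the error spans more than half a circle, take the complementary
    # part of the circle with the opposite sign (angle error 2).
    if err > math.pi:
        err -= 2.0 * math.pi
    elif err < -math.pi:
        err += 2.0 * math.pi
    return err
```

For the worked example above, γ = −2.72 − 1.25 = −3.97 rad wraps to 2π − 3.97 ≈ +2.31 rad, a positive error, so the robot turns left.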

Table 4.3: Example of the error calculation

𝛾 = −3.97 rad; 𝛾 < −𝜋: angle error = 2𝜋 − |𝛾| = 2.31 rad → robot turns left
𝛾 = 3.97 rad; 𝛾 > 𝜋: angle error = −(2𝜋 − 𝛾) = −2.31 rad → robot turns right

4.1.5 Kinematic Node

The kinematic node receives the vector speed from the path to vector speed node and
converts it into the wheel speeds of the mobile robot. The conversion uses the forward
kinematics of the mecanum-wheeled omnidirectional robot, shown in Equation 14.

⎡𝑤1⎤       ⎡1  −1  −(𝑙𝑥 + 𝑙𝑦)⎤
⎢𝑤2⎥   1   ⎢1   1   (𝑙𝑥 + 𝑙𝑦)⎥  ⎡𝑣𝑥⎤
⎢𝑤3⎥ = ─ · ⎢1   1  −(𝑙𝑥 + 𝑙𝑦)⎥  ⎢𝑣𝑦⎥   (14)
⎣𝑤4⎦   𝑟   ⎣1  −1   (𝑙𝑥 + 𝑙𝑦)⎦  ⎣𝜔𝑧⎦

Where:

Table 4.4: Table properties of the kinematic equation in Equation 14

𝑤1, 𝑤2, 𝑤3, 𝑤4 — The wheel speed of each wheel
𝑟 — The radius of the wheel
𝑙𝑥 — Half the distance between the front wheels
𝑙𝑦 — Half the distance between the front and rear wheels
𝑣𝑥 — Longitudinal velocity
𝑣𝑦 — Transversal velocity
𝜔𝑧 — Angular velocity

After the forward kinematics calculation, the kinematic node sends the wheel speed of
each wheel to the simulation.
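Written out row by row, Equation 14 becomes four scalar expressions. A minimal Python sketch (the wheel numbering follows Equation 14; the parameter values in the usage note are hypothetical):

```python
def mecanum_wheel_speeds(vx, vy, wz, r, lx, ly):
    # Forward kinematics of Equation 14: body velocity (vx, vy, wz)
    # to the four wheel speeds, for wheel radius r and half-distances
    # lx (between front wheels) and ly (front to rear wheel).
    k = lx + ly
    w1 = (vx - vy - k * wz) / r
    w2 = (vx + vy + k * wz) / r
    w3 = (vx + vy - k * wz) / r
    w4 = (vx - vy + k * wz) / r
    return w1, w2, w3, w4
```

As a sanity check, pure forward motion (vy = ωz = 0) drives all four wheels at the same speed vx/r, while pure rotation drives the two wheel pairs in opposite directions.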


5 Result

5.1 The Mobile Robot Simulation System Result

The researcher creates the robot model in Gazebo. The model is based on rectangular
and cylindrical shapes, and the dimensions of the robot are the same as those of the real
robot. The model is shown in Figure 5.1.

Figure 5.1: The model of the mobile robot

The robot model includes the laser scanner and eight ultrasonic sensors. The complete
system of the simulation can be seen in Figure 5.2.


Figure 5.2: The ROS system of the mobile robot


The Gazebo node publishes eight data streams: ultra1, ultra2, ultra3, ultra4, ultra5,
ultra6, laser, and odom. The ultrasonic data are sent to the ultrasonic node, where the six
ultrasonic readings are combined into an array. Laser is the laser scanner data; odom is
the current position of the robot. All data from the Gazebo node are sent to the mapper
node, which creates a point cloud saved as maps data. The maps data are sent to the
MATLAB node, which runs the path-planning algorithm and outputs the paths. The
pathpublish node is an additional node for latching the paths data. The path from
pathpublish is sent to the path to speed node, where it is converted into vector speeds x
and y. The vector speed is sent to the kinematics node, which calculates the wheel speeds
of the robot. The wheel speeds are sent to Gazebo and the robot starts to move. The
process then restarts from the first step and continues to run.

The MATLAB node is programmed to determine the goal position of the robot, which
can be modified by changing the values of x and y. An example of changing the x and y
values is shown in Figure 5.3.

Figure 5.3: Changing the goal position in MATLAB


5.2 The Path-Planning with a Static Obstacle

Two environments are used to test the path-planning algorithm. The first is a simple
environment, shown in Figure 5.4.

Figure 5.4: The simple environment

The start position of the robot is at point (0, 0). The goal position of the robot is at point
(400, 400). The result of the path-planning for the simple environment is shown in Figure
5.5.


Figure 5.5: The path-planning result for the simple environment

The solution is found in 0.5 seconds. There are two lines: the first is the first solution,
and the second is the first solution after it has been optimized by the optimizer. The
second environment is a complex environment, shown in Figure 5.6.


Figure 5.6: The complex environment

The difference between the simple and the complex environment is the number of
obstacles. The complex environment has more obstacles than the simple environment,
which causes the path-planning algorithm to take longer to find a solution. The result of
the path planning for the complex environment is shown in Figure 5.7.


Figure 5.7: The path-planning result for the complex environment

The solution is found in 1.2 seconds, confirming that the more complex the environment,
the more time the path planning needs to find a solution.

5.3 The Path-Planning with a Dynamic Obstacle

There are two scenarios for testing the path planning in a dynamic environment. In the
first scenario, an object falls in front of the robot while it is moving. In the second
scenario, another robot moves from left to right while the robot is moving. The path
planner must be able to recalculate the path for the new environment.

The result of the first scenario is illustrated in Table 5.1.


Table 5.1: The path-planning with a dynamic obstacle illustration

The Path-planning (MATLAB) Simulation Environment (Gazebo)

During initialization, the simulation environment is opened first, since it starts the
roscore for the ROS network; then MATLAB is opened.

To start the path planning and move the robot, a goal position is needed. The researcher
manually inputs the goal position, in this case X = 60, Y = 60. After the goal position is
entered, the algorithm starts calculating the path and the robot begins to move.


While the robot is on its way toward the goal position, the researcher places an object
that cuts the robot's pathway. Since the robot has a laser scanner, it can adapt to the new
environment: the laser scanner data is sent to MATLAB, and the path planning calculates
a new path for the new environment.

The robot succeeds in moving from the start position to the goal position without
colliding with any obstacle.


6 Conclusion

6.1 Conclusion

The automation lab at the University of Applied Sciences has been engaged in mobile
robot research, and the omnidirectional robot was chosen for further development. The
first stage of this development was to design the mechanical structure of the robot; the
second stage is to design its electrical structure. A master's student completed the first
stage, while the second stage is still in development. The robot itself can be controlled
manually by sending commands to it or by writing a simple algorithm to move it. A ROS
system is implemented on the robot to improve its control system and to integrate
hardware easily; all hardware can run independently without disturbing other processes.
One of the goals of this mobile robot project is autonomous movement, and a navigation
system is developed to achieve it. Localization and mapping are prerequisites for the
path-planning process, and a bachelor's student has implemented them. In the
researcher's opinion, the localization and mapping process is not yet ready to be
deployed on the physical mobile robot, although it works well in a simulation
environment.

The next development is the path-planning algorithm for the navigation system. The
researcher chooses RRT* as the base of the path-planning algorithm. RRT* alone shows
reduced performance in terms of the time needed to find a solution; to overcome this,
the researcher adds an optimizer to the RRT* algorithm. The results show that the
optimizer decreases the time it takes to find the solution. The optimizer is based on an
ellipse equation: the ellipse shrinks over time, and its optimum solution is a straight line.
The path planning is written in MATLAB.

To test the algorithm, the researcher designs a simulation. The model of the mobile robot
is created in Gazebo, with the same dimensions as the actual robot; the ultrasonic sensors
and the laser scanner also have the same properties as the actual sensors. The researcher
also designs a ROS system for the mobile robot, intended to be reused later on the actual
robot. The ROS system connects the simulation to MATLAB, allowing the simulation
and MATLAB to communicate with each other. The result is that the path planning is
able to create a collision-free path, and the simulation is able to execute the path and
move the robot. The path planning can also recreate the path over time, which allows
the algorithm to adapt to a new environment.


List of Figures

Figure 2.1: An omni wheel ................................................................................................ 4


Figure 2.2: The placement of the omni wheel ................................................................... 5
Figure 2.3: A mecanum wheel .......................................................................................... 6
Figure 2.4: The mecanum wheels placement .................................................................... 6
Figure 2.5: The relation between the rotation of the mecanum wheel and the motion of
the robot..................................................................................................................... 7
Figure 2.6: A Path Planning without robot geometry ....................................................... 9
Figure 2.7: A Path Planning with robot geometry .......................................................... 10
Figure 2.8: The graph representation of path planning ................................................... 11
Figure 2.9: The graph of Dijkstra's Algorithm ................................................................ 12
Figure 2.10: The area of calculation of A* algorithm ..................................................... 13
Figure 2.11: The cost calculation of A* algorithm ......................................................... 14
Figure 2.12: Probabilistic Roadmap (PRM) result .......................................................... 16
Figure 2.13: RRT Algorithm Result ................................................................................ 17
Figure 2.14: RRT* Algorithm Result .............................................................................. 17
Figure 2.15: Client-Server Connection of ROS network ................................................ 19
Figure 2.16: A simple ROS network connection ............................................................ 19
Figure 2.17: The ROS system graph ............................................................................... 20
Figure 2.18: An example of IDL ..................................................................................... 21
Figure 3.1: The first step of RRT-Star Algorithm ........................................................... 22
Figure 3.2: The generated point within the length limit of the branch............................ 23
Figure 3.3: The MATLAB result of the first step of RRT-Star Algorithm ..................... 24
Figure 3.4: (a) Limited branch length, (b) Unlimited branch length............................... 25
Figure 3.5: No sample is located in the parent area ..................................................... 25
Figure 3.6: Sample is located within the parent area ...................................................... 26
Figure 3.7: The MATLAB result of the second step of RRT-Star Algorithm ................ 27
Figure 3.8: (a) The result without goal function, (b) with goal function ........................ 27
Figure 3.9: (a) The result of a small step, (b) The result of a big step ............................ 28
Figure 3.10: Foci and Loci of an ellipse .......................................................................... 29


Figure 3.11: Foci and Loci of an ellipse with point P moved to the left ........................ 30
Figure 3.12: Pythagoras’s theorem on an ellipse ............................................................ 31
Figure 3.13: First solution of the RRT-Star Algorithm .................................................. 32
Figure 3.14: Finding the needed value from the first solution for the optimizer ............ 32
Figure 3.15: (a) The path planning with the optimizer, (b) without the optimizer ......... 33
Figure 4.1: The mobile robot simulation system ............................................................ 34
Figure 4.2: The dimension of the mobile robot .............................................................. 35
Figure 4.3: The position of the ultrasonic sensors and the laser scanner........................ 35
Figure 4.4: The actual mobile robot................................................................................ 36
Figure 4.5: Rectangular shape and circular shape in Gazebo ......................................... 37
Figure 4.6: The laser scanner plugin script ..................................................................... 38
Figure 4.7: The mobile robot with an object in front of it .............................................. 39
Figure 4.8: The result of the laser scanner ...................................................................... 39
Figure 4.9: Calculating the point cloud of the laser scanner data ................................... 40
Figure 4.10: The calculation of the point cloud when the robot has rotated .................. 41
Figure 4.11: The result of the mapper node .................................................................... 42
Figure 4.12: The second example of the point cloud calculation ................................... 43
Figure 4.13: The result of the mapper node for the second example.............................. 44
Figure 4.14: The flowchart of the MATLAB node ........................................................ 45
Figure 4.15: The distance error and the angle error ........................................................ 46
Figure 4.16: The polar coordinate of the simulation ...................................................... 47
Figure 4.17: The illustration of the example................................................................... 48
Figure 4.18: The flowchart of the error selection algorithm........................................... 49
Figure 5.1: The model of the mobile robot ..................................................................... 51
Figure 5.2: The ROS system of the mobile robot ........................................................... 52
Figure 5.3: Changing the goal position in MATLAB ..................................................... 53
Figure 5.4: The simple environment ............................................................................... 54
Figure 5.5: The path-planning result for the simple environment .................................. 55
Figure 5.6: The complex environment ............................................................................ 56
Figure 5.7: The path-planning result for the complex environment ............................... 57


List of Tables

Table 2.1: Table properties of Omnidirectional Kinematics ............................................. 8


Table 4.1: Table properties of the illustration in Figure 4.9 ........................................... 40
Table 4.2: Table properties of the illustration in Figure 4.10 ......................................... 41
Table 4.3: Example of the error calculation .................................................................... 49
Table 4.4: Table properties of Kinematic Equation in Equation 14................................ 50
Table 5.1: The path-planning with a dynamic obstacle illustration ................................ 58


References

[1] T. Ersson and X. Hu, “Path planning and navigation of mobile robots in unknown environments.”

[2] Waymo, Inc., Waymo. [Online] Available: https://waymo.com/. Accessed on: Sep. 25, 2017.

[3] KUKA AG, KUKA Navigation Solution | KUKA AG. [Online] Available: https://www.kuka.com/en-de/products/mobility/navigation-solution. Accessed on: Sep. 25, 2017.

[4] K. J. A. Kumar, “Modelling, Design, and Fabrication of Omni-Directional Mobile Robot,” University of Applied Sciences, Soest, Germany, 2017.

[5] J. Bernauer, “Entwicklung einer mikroprozessorbasierten Steuerung für einen mobilen Roboter,” University of Applied Sciences, Soest, Germany.

[6] B. Wijaya, “Microcontroller Design for an Omnidirectional Mobile Robot (Programming for Simultaneous Localization, Mapping, and Exploration),” Bachelor’s Thesis, Swiss German University, Indonesia, 2017.

[7] X. Li and A. Zell, “Motion Control of an Omnidirectional Mobile Robot,” in Informatics in Control, Automation and Robotics (Lecture Notes in Electrical Engineering, vol. 24), J. Filipe, J.-L. Ferrier, and J. Andrade-Cetto, Eds. Berlin: Springer, 2009, pp. 181–193.

[8] C.-C. Tsai, L.-B. Jiang, T.-Y. Wang, and T.-S. Wang, “Kinematics Control of an Omnidirectional Mobile Robot,” National Chung Hsing University, Taiwan, Nov. 2005.


[9] Vexrobotics, VEX Robotics Omni-Wheel. [Online] Available: https://www.vexrobotics.com/media/catalog/product/cache/1/image/9df78eab33525d08d6e5fb8d27136e95/2/1/217-2613.jpg. Accessed on: Sep. 29, 2017.

[10] R. Rojas, “Omnidirectional Control,” Freie Universität Berlin, Germany, 2005.

[11] Robotshop, 4wd-omni-directional-mobile-robot-kit. [Online] Available: http://www.robotshop.com/media/catalog/product/cache/1/image/900x900/9df78eab33525d08d6e5fb8d27136e95/4/w/4wd-omni-directional-mobile-robot-kit-5_1.jpg. Accessed on: Sep. 29, 2017.

[12] O. Diegel, A. Badve, G. Bright, J. Potgieter, and S. Tlale, “Improved Mecanum Wheel Design for Omni-Directional Robots,” Massey University, New Zealand, 2002.

[13] Robu.in, 100mm Aluminium Mecanum wheels (Bearing type rollers) LEFT - Robu.in | Indian Online Store | RC Hobby | Robotics. [Online] Available: https://aws.robu.in/wp-content/uploads/2014/03/100mm-aluminum-mecanum-wheel-left-14087-2.jpg. Accessed on: Sep. 29, 2017.

[14] J. E. Mohd Salih, M. Rizon, and S. Yaacob, “Designing Omni-Directional Mobile Robot with Mecanum Wheel,” American Journal of Applied Sciences, vol. 3, no. 5, pp. 1831–1835, 2006.

[15] H. Taheri, B. Qiao, and N. Ghaeminezhad, “Kinematic Model of a Four Mecanum Wheeled Mobile Robot,” IJCA, vol. 113, no. 3, pp. 6–9, 2015.

[16] B. Frank, M. Becker, C. Stachniss, W. Burgard, and M. Teschner, “Efficient path planning for mobile robots in environments with deformable objects,” in Proc. 2008 IEEE International Conference on Robotics and Automation (ICRA), Pasadena, CA, USA, 2008, pp. 3737–3742.

[17] N. Buniyamin, W. A. J. Wan Ngah, N. Sariff, and Z. Mohamad, “A Simple Local Path Planning Algorithm for Autonomous Mobile Robots,” International Journal of Systems Applications, Engineering and Development, vol. 5, no. 2, pp. 1–12, 2011.


[18] C. Hofner and G. Schmidt, “Path planning and guidance techniques for an autonomous mobile cleaning robot,” in Proc. IEEE/RSJ/GI International Conference on Intelligent Robots and Systems (IROS ’94), Munich, Germany, 1994, pp. 610–617.

[19] W. Wu, Z. Q. Sen, J. B. Mbede, and H. Xinhan, “Research on path planning for mobile robot among dynamic obstacles,” in Proc. Joint 9th IFSA World Congress and 20th NAFIPS International Conference, Vancouver, BC, Canada, 2001, pp. 763–767.

[20] J. Barraquand, B. Langlois, and J.-C. Latombe, “Numerical potential field techniques for robot path planning,” IEEE Trans. Syst., Man, Cybern., vol. 22, no. 2, pp. 224–241, 1992.

[21] AiGameDev, Theta*: Any-Angle Path Planning for Smoother Trajectories in Continuous Environments. [Online] Available: http://aigamedev.com/open/tutorials/theta-star-any-angle-paths/. Accessed on: Sep. 29, 2017.

[22] M. Karova et al., “Path Planning Algorithm for Mobile Robot,” in Proceedings of the 15th International Conference on Applied Computer Science (ACS ’15), Konya, Turkey, May 2015, pp. 26–29.

[23] N. Achour and M. Chaalal, “Mobile Robots Path Planning using Genetic Algorithms,” in Proc. 2008 IEEE International Conference on Robotics and Automation (ICRA), Pasadena, CA, USA, 2008, pp. 111–145.

[24] D. S. Yershov and S. M. LaValle, “Simplicial Dijkstra and A* Algorithms: From Graphs to Continuous Spaces,” Advanced Robotics, vol. 26, no. 17, pp. 2065–2085, 2012.


[25] S. S. and C. Chandrasekar, “Modified Dijkstra’s Shortest Path Algorithm,” International Journal of Innovative Research in Computer and Communication Engineering, vol. 2, no. 11, pp. 6450–6456, 2014. [Online] Available: https://www.rroij.com/open-access/modified-dijkstras-shortest-path-algorithm.pdf.

[26] H. Wang, Y. Yu, and Q. Yuan, “Application of Dijkstra algorithm in robot path-planning,” in Proc. 2011 Second International Conference on Mechanic Automation and Control Engineering, Inner Mongolia, China, 2011, pp. 1067–1069.

[27] Reviewmylife.co.uk, Dijkstra’s Algorithm in C++. [Online] Available: http://www.reviewmylife.co.uk/blog/2008/07/15/dijkstras-algorithm-code-in-c/. Accessed on: Sep. 29, 2017.

[28] C. W. Warren, “Fast path planning using modified A* method,” in Proc. IEEE International Conference on Robotics and Automation, Atlanta, GA, USA, 1993, pp. 662–667.

[29] A. K. Guruji, H. Agarwal, and D. K. Parsediya, “Time-efficient A* Algorithm for Robot Path Planning,” Procedia Technology, vol. 23, pp. 144–149, 2016.

[30] S. M. Ayazi, M. F. Mashhorroudi, and M. Ghorbani, “Modified A* Algorithm Implementation in the Routing Optimized for Use in Geospatial Information Systems,” 2014.

[31] J. van den Berg, D. Ferguson, and J. Kuffner, “Anytime path planning and replanning in dynamic environments,” in Proc. 2006 IEEE International Conference on Robotics and Automation (ICRA), Orlando, FL, USA, 2006, pp. 2366–2371.

[32] F. Haro and M. Torres, “A Comparison of Path Planning Algorithms for Omni-Directional Robots in Dynamic Environments,” in Proc. 2006 IEEE 3rd Latin American Robotics Symposium, Santiago, Chile, 2006, pp. 18–25.

[33] A. Elshamli, H. A. Abdullah, and S. Areibi, “Genetic algorithm for dynamic path planning,” in Proc. Canadian Conference on Electrical and Computer Engineering, Niagara Falls, ON, Canada, 2004, pp. 677–680.

[34] K. H. Sedighi, K. Ashenayi, T. W. Manikas, R. L. Wainwright, and H.-M. Tai, “Autonomous local path planning for a mobile robot using a genetic algorithm,” in Proc. 2004 Congress on Evolutionary Computation (CEC), Portland, OR, USA, 2004, pp. 1338–1345.

[35] T. Arora, Y. Gigras, and V. Arora, “Robotic Path Planning using Genetic Algorithm in Dynamic Environment,” IJCA, vol. 89, no. 11, pp. 8–12, 2014.

[36] W.-G. Han, S.-M. Baek, and T.-Y. Kuc, “Genetic algorithm based path planning and dynamic obstacle avoidance of mobile robots,” in Proc. Computational Cybernetics and Simulation, Orlando, FL, USA, 1997, pp. 2747–2751.

[37] H. Choset et al., Principles of Robot Motion: Theory, Algorithms, and Implementation. Cambridge, MA: MIT Press, 2005.

[38] L. Jaillet, J. Cortés, and T. Siméon, “Sampling-Based Path Planning on Configuration-Space Costmaps,” IEEE Trans. Robot., vol. 26, no. 4, pp. 635–646, 2010.

[39] L. E. Kavraki, P. Svestka, J.-C. Latombe, and M. H. Overmars, “Probabilistic roadmaps for path planning in high-dimensional configuration spaces,” IEEE Trans. Robot. Automat., vol. 12, no. 4, pp. 566–580, 1996.

[40] R. Bohlin and L. E. Kavraki, “Path planning using lazy PRM,” in Proc. 2000 IEEE International Conference on Robotics and Automation, San Francisco, CA, USA, 2000, pp. 521–528.

[41] The MathWorks, Inc., Probabilistic Roadmaps (PRM) - MATLAB. [Online] Available: https://de.mathworks.com/help/robotics/ug/probabilistic-roadmaps-prm.html?requestedDomain=www.mathworks.com. Accessed on: Sep. 30, 2017.

[42] S. Karaman and E. Frazzoli, “Sampling-based algorithms for optimal motion planning,” The International Journal of Robotics Research, vol. 30, no. 7, pp. 846–894, 2011.


[43] J. J. Kuffner, S. Kagami, K. Nishiwaki, M. Inaba, and H. Inoue, “Dynamically-stable Motion Planning for Humanoid Robots,” Autonomous Robots, vol. 12, no. 1, pp. 105–118, 2002.

[44] D. Ferguson and A. Stentz, “Anytime RRTs,” in Proc. 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Beijing, China, 2006, pp. 5369–5375.

[45] I. Aguinaga, D. Borro, and L. Matey, “Parallel RRT-based path planning for selective disassembly planning,” Int J Adv Manuf Technol, vol. 36, no. 11-12, pp. 1221–1233, 2008.

[46] F. Islam, J. Nasir, U. Malik, Y. Ayaz, and O. Hasan, “RRT*-Smart: Rapid convergence implementation of RRT* towards optimal solution,” in Proc. International Conference on Mechatronics and Automation (ICMA), Chengdu, China, 2012, pp. 1651–1656.

[47] I. Noreen, A. Khan, and Z. Habib, “A Comparison of RRT, RRT*, and RRT*-Smart Path Planning Algorithms,” International Journal of Computer Science and Network Security, vol. 16, no. 10, pp. 20–27, 2016.

[48] M. Quigley et al., “ROS: an open-source Robot Operating System.”

[49] ROS, ROS.org | Powering the World’s Robots. [Online] Available: http://www.ros.org/. Accessed on: Sep. 25, 2017.

[50] A. Koubâa, Ed., Robot Operating System (ROS): The Complete Reference, Volume 1. Cham: Springer, 2016.

[51] L. Joseph, Mastering ROS for Robotics Programming. Birmingham: Packt Publishing, 2015.

[52] M. Quigley, B. Gerkey, and W. D. Smart, Programming Robots with ROS: A Practical Introduction to the Robot Operating System. Beijing: O’Reilly, 2015.


[53] C. Fairchild and T. L. Harman, ROS Robotics by Example: Bring Life to Your Robot Using ROS Robotic Applications. Birmingham, UK: Packt Publishing, 2016.

[54] Slamtec, RPLIDAR A2 Datasheet. [Online] Available: http://bucket.download.slamtec.com/d25d26d45180b88f3913796817e5db92e81cb823/LD208_SLAMTEC_rplidar_datasheet_A2M8_v1.0_en.pdf. Accessed on: Sep. 29, 2017.

