
Vision Based Guidance and Switching Based SMC Controller: An Integrated Framework for Cyber Physical Control of Mobile Robot

Padmini Singh, Pooja Agrawal, Hamad Karki, Amit Shukla, Nishchal K. Verma, Senior Member, IEEE, and Laxmidhar Behera, Senior Member, IEEE

Abstract—This work proposes a vision based guidance strategy for mobile robot safe navigation in unknown indoor environments. Vision and controller are integrated to control the robot in a cyber physical environment. The guidance strategy uses the depth map of the scene, generated from an RGB-D sensor, to compute the desired angular and linear velocities. The centroid of the obstacle depth map is used to generate the desired angular velocity. A fuzzy rule based guidance is developed to generate the desired linear velocity command. To show the effectiveness of the proposed strategy, an analysis of the guidance strategy is carried out for the case of an infinite length obstacle. Next, a switching based chattering free sliding mode controller is proposed to achieve the guided path in finite time. Finite time convergence of the reachability law is proved using the Krasovskii method, and asymptotic stability of the system is established using the Lyapunov theorem. Furthermore, experiments are performed on a Pioneer P3-DX robot in different obstacle scenarios. Results show that the proposed sliding mode controller is robust in the presence of communication channel burst losses.

Index Terms—Obstacle avoidance, depth map, centroid, mobile robot, sliding mode controller.

I. INTRODUCTION

CYBER physical system (CPS) is a platform which combines the features of computation, communication and control. Nowadays researchers solve most of the problems from the cyber perspective [10] [20] [11] [14], as recent application areas involve integration of all three aspects of CPS. Despite the several advancements made in the area of CPS, the design of application based CPS is still challenging.

Autonomous vehicles have gained popularity due to various real-time applications such as monitoring tasks, obstacle avoidance, surveillance, border patrolling, etc. Hence, a unified model is needed which overcomes the problem of interfacing different domains. The effectiveness of the unified model depends on how independent the interfacing of the two domains is. One way of designing is the coupling of cyber parameters with the physical parameters. This is easier from a computation point of view, but designing a controller for such a complex system is difficult, as all the parameters cannot be considered in the design. For a CPS application, a single coupled model cannot give a direct or linear relationship between the input and output of the system. Hence, the co-design of a CPS should provide direct interfacing and a linear relationship between the different domains, and it should maintain orthogonality between the different layers. Here, orthogonality means independence of one domain from the other, i.e., if some error occurs in the cyber domain then only the cyber component needs modification, and if an error occurs in the physical region then only replacing the physical component solves the problem. Therefore, this type of co-design is preferable for a resource constrained CPS environment, and control is also easier for such systems.

With the advancement in technology, several works have been done on vehicle transportation systems using CPS to handle real-time events [26] [29] [16] [25]. From the CPS perspective, to integrate autonomous mobile robots with the communication domain, one should have knowledge of the robot hardware, such as the basic robot platform and the sensors adopted, and of its software part, such as control and navigation. Previously, much research has been done for various mobile robot applications [27] [31] [7], based on the potential field method, the roadmap method, behaviour-based algorithms and control methods for obstacle avoidance. Although a lot of work has been carried out using the artificial potential field approach [9] [17], the approach suffers from local minima and chattering. This problem can be solved using harmonic functions [5], a non-gradient vector field approach [18] and a modified Newton method [19]. Likewise, in the roadmap approach a guided path is generated over obstacle free space [8] [15]. In the behaviour based approach, researchers fuse different behaviours for avoiding obstacles, but coordinating different behaviours efficiently is itself a challenge. In the potential field and roadmap methods, one should have prior knowledge of the entire space, and they also require a lot of computation, which is not good for a CPS environment. In the behaviour based approach, generally only local convergence is possible, and it still requires substantial computation.

As vision sensors provide rich information about the environment, many vision based avoidance techniques have been proposed for ground robot and aerial vehicle navigation in unknown environments [2] [30]. These approaches are mainly based on optical flow and stereo vision. Although both techniques can provide a depth map of the scene, stereo vision is computationally expensive and requires more than one camera. On the other hand, optical flow based approaches are very sensitive to noise, and hence depth cannot be accurately measured.

To overcome the limitations of the above approaches, in this work we use an RGB-Depth sensor, which belongs to the vision sensor category, for obstacle avoidance. The advantage of the RGB-Depth sensor is that it provides accurate depth of the scene at a fast rate [6] [2] [30]. This can help in designing a computationally efficient controller for achieving the guided path. Moreover, the depth sensor is capable of estimating the depth of an obstacle even at night.

To achieve the guided path in the presence of disturbances and communication channel losses, an efficient and robust controller is required which can achieve the commanded path in finite time. In [12] and [4], switching signal and backstepping feedback linearization based approaches for following the guided path are presented, respectively. These approaches do not consider the communication channel constraints. As it is not possible to build an ideal CPS model considering all the cyber and physical constraints, a robust controller that handles a certain range of non-linearity and uncertainty is required in a CPS environment. Sliding mode control is a robust controller, as it works well under unwanted non-linearity and reduces the system dynamics. Additionally, the controller converges the error in finite time under a certain range of uncertainty. In [28] and [3], SMC is designed for path tracking of a mobile robot. However, SMC exhibits chattering, which can be avoided using a saturation function or higher order sliding mode control. Designing a proper sliding surface is also a challenge, as it must ensure that the error states [23] [24] reach the neighbourhood of the surface in finite time. After reaching the surface, the error should reach the zero state in finite time. This work proposes a novel sliding surface with a switching based reachability condition to ensure finite time convergence. Fig. 1 shows the architecture of the CPS for obstacle avoidance, comprising the physical process, the guidance and the controller. The physical process includes the robot, the RGB-D sensor, actuators and sensors. The RGB-D sensor generates the depth map of the scene. The depth map is passed to the guidance block to generate the desired commands to avoid the obstacles. These commands are given to the controller for the robot to follow the guided command.

[Fig. 1: CPS architecture for obstacle avoidance. The physical process (actuators, robot, RGB-D sensor) exchanges depth maps and measurements with the guidance and controller blocks over wireless communication (RT-WMP protocol).]

The key contributions of the paper are summarized as follows:
1) Vision based guidance is proposed for robot safe navigation in indoor environments, where the centroid of the obstacle depth map is used to generate the desired angular and linear velocity commands. An RGB-D sensor is used to generate the depth map.
2) Analytical avoidance properties of the proposed strategy are derived for the case of an infinite length obstacle.
3) A switching based sliding mode controller is proposed, and its finite time convergence is guaranteed by the Krasovskii method.
4) A new sliding surface is proposed, which offers simultaneous convergence of the robot state errors.
5) Experiments are conducted to validate the proposed approach on a Pioneer P3-DX robot in the presence of communication channel losses.

The paper is organized as follows: Section II describes the problem formulation and the mathematical models. Section III discusses the guidance strategies, followed by an analysis of the guidance strategy in Section IV. The controller design is given in Section V and the stability analysis in Section VI. Simulation and experimental results are provided in Sections VII and VIII. Conclusions are drawn in Section IX.

II. PROBLEM FORMULATION

Fig. 2 shows a CPS model mainly consisting of a mobile robot, an RGB-D sensor and a sliding mode controller, communicating with each other through a wireless network. The objective of the work is to design a guidance law for robot safe navigation through an indoor environment, and a robust controller to achieve the desired command in finite time in the presence of random burst losses and disturbances that occur due to the communication channel. The RGB-D sensor is fixed on the robot and provides the depth map of the scene that comes into the camera field-of-view. The centroid of the depth map is used to generate the desired angular velocity command (ωd). The depth of the pixel located at the image center is used to generate the desired linear velocity (vd). The errors between the commanded and actual robot states are passed to the controller for the robot to follow the desired commands. Here, a two surface sliding mode controller is designed to achieve the desired commands.

[Fig. 2: System overview. The trajectory generation block outputs (xd, yd, θd); the tracking error (xe, ye, θe) is formed in the body frame via the rotation [[cos θ, sin θ, 0], [−sin θ, cos θ, 0], [0, 0, 1]]; the control law outputs (vc, ωc); posture estimation feeds back (x, y, θ, v, ω).]

A. RGB-D Sensor

In this work, a Kinect sensor is used for generating the scene depth map. The Kinect sensor consists of an LED, 3D depth sensors, an RGB camera, a servo to adjust the tilt of the device and a microphone array, as shown in Fig. ??. Here, the 3D depth sensors are an infrared laser emitter and an infrared camera.
B. Robot Model

In this work, the kinematic model of the mobile robot is used, as shown in Fig. 3. The mathematical model of the mobile robot is given by

ẋ = v cos θ
ẏ = v sin θ   (1)
θ̇ = ω

where (x, y) and θ represent the position and orientation of the robot, respectively. The robot angular and linear velocities are represented by ω and v, respectively.

[Fig. 3: Robot motion, showing the pose (x, y, θ) and linear velocity v.]

C. Communication Model

In this work, the robot operating system (ROS) is used for the experiments. Communication from sensor to controller, and from controller to robot, is established over a wireless channel, where the communication medium uses the RT-WMP protocol. The protocol has some advanced features over the IEEE 802.11 and 802.11 RTS/CTS protocols [22] [21]: it maintains link quality between nodes by supporting message priorities, frame retransmission and frame duplication, offering better bandwidth and throughput to the nodes.

III. GUIDANCE STRATEGIES

This section discusses the ground plane estimation, followed by obstacle segmentation and the proposed guidance laws.

A. Ground Plane Estimation

For the estimation of the ground plane, the depth map of the ground without any obstacle is generated using the Kinect sensor. Fig. 5a shows the depth map of the ground plane. Next, the ground depth map is subtracted from the scene depth map to segment the obstacle, as discussed in the subsequent subsection.

B. Obstacle Segmentation

Fig. 4 shows a gray-level image of a scene captured by the sensor and the corresponding depth map. Firstly, the ground depth map is subtracted from the scene depth to remove the effect of the ground plane. Next, pixels with a depth value above a predefined threshold are labelled as obstacle and converted to white; the remaining pixels are considered background and converted to black. For avoidance, the center portion of the lower half of the segmented image is considered, as shown in Fig. 5b.

[Fig. 4: Scene sample image and corresponding depth map.]
[Fig. 5: Ground estimation and segmented image.]
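The segmentation and centroid computation described above reduce to a few array operations. The following Python sketch is a minimal illustration under stated assumptions (16-bit depth images in millimetres, an arbitrary 100 mm threshold, and a hypothetical region-of-interest choice); it is not the authors' implementation.

```python
import numpy as np

def segment_obstacles(scene_depth, ground_depth, thresh_mm=100):
    """Subtract the ground-plane depth map and threshold the residual.

    Pixels whose depth differs from the empty-ground depth map by more
    than thresh_mm are marked as obstacle (white, 255); the rest are
    background (black, 0). The threshold value is an assumption.
    """
    residual = np.abs(scene_depth.astype(np.int32) - ground_depth.astype(np.int32))
    mask = np.where(residual > thresh_mm, 255, 0).astype(np.uint8)
    # Keep only the centre portion of the lower half for avoidance,
    # as in Section III-B (the exact portion is an assumption).
    h, w = mask.shape
    roi = np.zeros_like(mask)
    roi[h // 2:, w // 4: 3 * w // 4] = mask[h // 2:, w // 4: 3 * w // 4]
    return roi

def obstacle_centroid_x(mask, scene_depth):
    """Depth-weighted centroid abscissa xc (Eq. 11), measured from the
    image centre so that xc = 0 corresponds to the optical axis."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                    # no obstacle in view
    weights = scene_depth[ys, xs].astype(np.float64)
    xc = np.sum(xs * weights) / np.sum(weights)
    return xc - mask.shape[1] / 2.0    # shift origin to the image centre
```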
C. Angular velocity guidance

The segmented image is used to compute the desired angular velocity, where pixels in white are replaced by their respective depth values. The centroid of the segmented image, weighted by the obstacle depth, is used to design the guidance law. Based on the heading direction, the robot has two possibilities, as described next.

1) Facing an obstacle: The robot faces an obstacle when the intensity value of the center pixel of the obstacle depth image is not black, as shown in Fig. 6a. Here, (0, 0) and (xc, yc) represent the image center and the centroid position, respectively. The image frame x and y axes are represented by xm and ym, respectively. The desired value of the angular velocity (ωd) for this scenario is obtained as

ωd = κ1 (sign(xc) − tanh(xc))   (2)

where κ1 is a proportionality constant. Fig. 6b shows the angular velocity variation with xc: a high guidance command is applied when the image centroid xc is close to the center.

[Fig. 6: Robot angular velocity command for the robot facing an obstacle.]

2) Facing an open space: For this scenario, the image center pixel is black, as shown in Fig. 7a. The angular velocity of the mobile robot is governed by

ωd = κ2 tanh(xc)   (3)

Fig. 7b shows the variation of the guidance command with xc.

[Fig. 7: Robot angular velocity command for the robot facing open space.]

A low turn rate is applied when the centroid is close to the image center, which is desired as the vehicle has to pass between the obstacles. For the robot to pass through a passage, it is first checked whether the separation between the obstacles is sufficient. If the width of the passage is safe for the robot, the desired command is computed using Eq. 3; otherwise, the guidance command is computed using Eq. 2, as sketched in the code below.
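A compact sketch of the resulting two-branch angular velocity guidance, combining Eqs. 2 and 3 with the passage width check, follows. The gain values and the predicate passage_is_safe are assumptions; the paper does not spell out the width test.

```python
import numpy as np

def desired_angular_velocity(xc, center_is_obstacle, passage_is_safe,
                             kappa1=0.5, kappa2=0.5):
    """Guidance law of Section III-C (gains are assumed values).

    xc : signed centroid abscissa, image-centre origin.
    center_is_obstacle : True when the image-centre pixel is non-black.
    passage_is_safe : True when the gap between obstacles admits the robot.
    """
    if center_is_obstacle or not passage_is_safe:
        # Facing an obstacle (Eq. 2): strong turn when the centroid is
        # near the centre, decaying as |xc| grows.
        return kappa1 * (np.sign(xc) - np.tanh(xc))
    # Facing open space (Eq. 3): gentle correction toward the gap.
    return kappa2 * np.tanh(xc)
```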
D. Linear velocity guidance

To avoid sudden jerks, it is necessary to vary the linear velocity with the distance to the obstacle. The robot linear velocity should be reduced when the angular velocity is high, and vice-versa; otherwise, wear and tear of the robot may occur. To achieve such a smooth linear velocity, we present a fuzzy guidance law based on the distance of the obstacle from the vehicle. The input of the guidance is the range of the image-center depth. Based on the input range, three rules are created to fire the corresponding fuzzy outputs, and the linear velocity is obtained by averaging the fired fuzzy outputs. From Fig. 8, the range of the input is 0 to 3 m, and this range is equally divided into three parts. Depending on the robot distance from the obstacle, any of the rules can fire. Table I shows the three membership functions for the input.

[Fig. 8: Input of the fuzzy system: membership functions Near, Closer and Far over the distance range 0 to 3 m.]

TABLE I
Fuzzy input | Range (metre)
Near        | [0, 1.5]
Closer      | [0.75, 2.25]
Far         | [1.5, 3]

The rule base for the fuzzy guidance is:
• Rule 1: If Depth is near, speed is slow.
• Rule 2: If Depth is closer, speed is medium.
• Rule 3: If Depth is far, speed is fast.

The range of the output (linear velocity) is between 0 and 0.3 m/s. It is also divided into three equal parts. According to the rules, the fuzzy guidance selects the corresponding fuzzy outputs (membership values). Table II gives the three membership functions for the output.

[Fig. 9: Output of the fuzzy system: membership functions Slow, Medium and Fast over the speed range 0 to 0.3 m/s.]

TABLE II
Fuzzy output | Range (metre/second)
Slow         | [0, 0.15]
Medium       | [0.075, 0.225]
Fast         | [0.15, 0.3]

Defuzzification of the fuzzified output is obtained as

vd = Σ µi vi   (4)

where µi is the membership value obtained from the input and vi is the value at which the corresponding output membership function is maximal. For example, if the obstacle is at a 2 m distance from the robot and the membership values from the three rules are 0, 0.25 and 0.75, then with the peak output values 0.075, 0.15 and 0.225 from the output table, the weighted outputs are 0, 0.0375 and 0.16875. Substituting these values in Eq. 4 results in a crisp output equal to 0.20625 m/s.
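The fuzzy guidance above reduces to a few lines of code. The sketch below hard-codes triangular membership functions spanning the ranges of Tables I and II, with peaks at the range midpoints; the exact shapes are an assumption inferred from the figures, which is why the memberships it returns at 2 m (0, 1/3, 2/3) differ slightly from the worked example's (0, 0.25, 0.75).

```python
def tri(x, a, b, c):
    """Triangular membership with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_linear_velocity(depth_m):
    """Defuzzified linear velocity vd (Eq. 4) for the image-centre depth."""
    # Input memberships over 0..3 m (Table I / Fig. 8; shapes assumed).
    mu = [tri(depth_m, 0.0, 0.75, 1.5),    # Near
          tri(depth_m, 0.75, 1.5, 2.25),   # Closer
          tri(depth_m, 1.5, 2.25, 3.0)]    # Far
    # Peak output speeds for Slow, Medium, Fast (Table II / Fig. 9).
    v_peak = [0.075, 0.15, 0.225]
    return sum(m * v for m, v in zip(mu, v_peak))
```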
IV. ANALYSIS OF GUIDANCE STRATEGY FOR INFINITE LENGTH OBSTACLE

Consider a robot with its sensor facing an infinite length obstacle at initial heading θ, as shown in Fig. 10. Point C represents the center of the depth image. Here, D shows a point located at the left of the image at angle αL. The depth of the point located at D for the Kinect [13] can be written as

xD = x0 / (1 + x0 dL / (f b))   (5)

where x0 is the distance of the reference pattern. The terms f, d and b are the infrared camera focal length, the observed disparity in the image plane and the base length, respectively. From geometry, the expression for dL can be obtained as

dL = xrel tan(θ + αL)   (6)

where xrel represents the separation between the robot and the obstacle. The variations in xrel and yrel can be obtained as [1]

ẋrel = −v cos θ   (7)
ẏrel = −v sin θ   (8)

[Fig. 10: A robot facing an infinite length obstacle; the sensor field of view spans angles αL and αR about the heading θ, with image points B, C, D, E and robot-obstacle separation xrel.]

After substituting dL from Eq. 6, Eq. 5 results in

xD = x0 / (1 + x0 xrel tan(θ + αL) / (f b))   (9)

Similar to point D, the depth of the point E, located at the right side of the image at angle αR, can be written as

xE = x0 / (1 + x0 xrel tan(θ − αR) / (f b))   (10)

The centroid of the depth image can be computed as

xc = ( Σ_{xi} Σ_{yi} xi De(xi, yi) ) / ( Σ_{xi} Σ_{yi} De(xi, yi) )   (11)

where De(xi, yi) represents the depth of the pixel located at point (xi, yi). For the analysis, a single row is considered for the centroid computation, so Eq. 11 can be written as

xc = ( Σ_{xi} xi De(xi, yi) ) / ( Σ_{xi} De(xi, yi) )   (12)

As the image is split at the center, Eq. 12 can be written as

xc = ( ∫₀ᵃ ∫₀ᵇ xL xD dαL dαR + ∫₀ᵃ ∫₀ᵇ xR xE dαL dαR ) / ( ∫₀ᵃ ∫₀ᵇ xD dαL dαR + ∫₀ᵃ ∫₀ᵇ xE dαL dαR )   (13)

where xL and xR represent the pixel positions in the left and right halves of the image, respectively. The expressions for xL and xR can be written as

xL = f tan αL   (14)
xR = −f tan αR   (15)

As the obstacle is of infinite length, the values of a and b can be replaced by α/2, where α represents the camera field of view. After substituting xD, xE, xL and xR from Eqs. 9, 10, 14 and 15, we get

xc = − [ xrel x0 f ln(cos(α/2)) ln( cos(θ − α/2) / cos(θ + α/2) ) ] / [ 0.5 f² b α + x0 xrel ln( cos(θ − α/2) / cos(θ + α/2) ) ]   (16)

After substituting xc from Eq. 16 in Eq. 2, the desired value of the angular velocity for the robot facing an obstacle is obtained as

ωd = κ1 ( sign(xc) + tanh( [ xrel x0 f ln(cos(α/2)) ln( cos(θ − α/2) / cos(θ + α/2) ) ] / [ 0.5 f² b α + x0 xrel ln( cos(θ − α/2) / cos(θ + α/2) ) ] ) )   (17)

Similarly, using Eqs. 16 and 3, the desired value of the angular velocity for the robot facing an open space is

ωd = −κ2 tanh( [ xrel x0 f ln(cos(α/2)) ln( cos(θ − α/2) / cos(θ + α/2) ) ] / [ 0.5 f² b α + x0 xrel ln( cos(θ − α/2) / cos(θ + α/2) ) ] )   (18)

Consider a scenario where the robot faces an infinite length obstacle from (5, 0) m at initial heading θ0 = 2°. The velocity of the robot is assumed to be constant at 0.3 m/s. The analytical robot trajectories using Eqs. 2, 3, 17 and 18 are shown in Fig. 11a. The corresponding heading variations are shown in Fig. 11b. The results show that the proposed centroid strategy is able to navigate the robot safely by turning it away from the obstacle.

[Fig. 11: (a) Robot trajectory. (b) Heading profile.]
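The sketch below numerically reproduces this kind of scenario, using Eq. 16 for the centroid and Eq. 2 (hence Eq. 17) for the turn command while integrating Eqs. 7 and 8 forward in time. The camera constants f, b, x0, α and the gain κ1 are assumed placeholder values, not the paper's calibration, so the resulting trajectory is only qualitatively comparable with Fig. 11.

```python
import numpy as np

# Assumed camera/scene constants (placeholders, not the paper's values).
f, b, x0 = 580.0, 0.075, 5.0     # focal length [px], baseline [m], reference distance [m]
alpha = np.radians(57.0)         # horizontal field of view
kappa1 = 0.5                     # guidance gain (assumed)

def centroid_xc(x_rel, theta):
    """Closed-form centroid abscissa of Eq. 16 for an infinite obstacle."""
    L = np.log(np.cos(theta - alpha / 2) / np.cos(theta + alpha / 2))
    num = x_rel * x0 * f * np.log(np.cos(alpha / 2)) * L
    den = 0.5 * f**2 * b * alpha + x0 * x_rel * L
    return -num / den

# Scenario of Fig. 11: start 5 m from the wall, heading 2 deg, v = 0.3 m/s.
x_rel, y_rel, theta, v, dt = 5.0, 0.0, np.radians(2.0), 0.3, 0.01
theta_max = np.radians(90.0) - alpha / 2   # Eq. 16 needs cos(theta + alpha/2) > 0
for _ in range(2000):                      # up to 20 s of simulated time
    if theta >= theta_max:
        break                              # wall no longer fills the whole view
    xc = centroid_xc(x_rel, theta)
    omega_d = kappa1 * (np.sign(xc) - np.tanh(xc))   # Eq. 2 / Eq. 17
    theta += omega_d * dt
    x_rel -= v * np.cos(theta) * dt                  # Eq. 7
    y_rel -= v * np.sin(theta) * dt                  # Eq. 8
print(f"heading {np.degrees(theta):.1f} deg at x_rel = {x_rel:.2f} m")
```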
V. SLIDING MODE CONTROLLER DESIGN

To achieve the guided path in finite time, we design a sliding mode controller, as SMC is robust under parametric uncertainty and unknown nonlinearity. The drawback of SMC is chattering, which can be avoided using a saturation function or higher order sliding mode control. Designing a proper sliding surface is a challenge, as it must ensure that the error states reach the neighbourhood of the surface in finite time, and, after reaching the surface, that the error reaches the zero state in finite time. We have designed a novel sliding surface for our problem and propose a novel switching based reachability condition to ensure finite time convergence. In this paper, a sliding mode controller with two sliding manifolds is proposed for trajectory tracking of a non-holonomic mobile robot. With the proposed two surface manifold, the trajectory tracking error dynamics reduce in order by two, and during sliding the motion of the error dynamics is governed by a first order equation.

Assume that the reference trajectory is generated by the desired velocities,

ẋd = vd cos θd
ẏd = vd sin θd   (19)
θ̇d = ωd

The error model in the body frame is

[xe, ye, θe]ᵀ = R(θ) [x − xd, y − yd, θ − θd]ᵀ, with R(θ) = [[cos θ, sin θ, 0], [−sin θ, cos θ, 0], [0, 0, 1]]

Hence, the error dynamics can be derived as

ẋe = ω ye + v − vd cos θe
ẏe = −ω xe + vd sin θe   (20)
θ̇e = ω − ωd

A. Control objective

The aim is to propose a control law that guarantees the tracking errors reach the zero state in finite time:

lim_{t→tf} xe = 0, lim_{t→tf} ye = 0, lim_{t→tf} θe = 0   (21)

B. Design Steps

1) Sliding surface design: Designing a proper sliding surface is a major design problem. The surface can be either linear or nonlinear, depending on the application and the control objective, and the number of sliding surfaces depends on the order of the system and the number of control inputs. Since our system has two control inputs and the system order is three, the optimal number of surfaces is two. Hence, to make the control system computationally efficient, we design the optimum number of surfaces, i.e., two, so that the system dynamics order reduces by two and, on the sliding surface, the system dynamics are governed by a first order system. We design a novel linear sliding surface as a combination of the error states:

s1 = xe − ye
s2 = −xe + ye + θe   (22)

The selected sliding surfaces drive the error states to equilibrium in finite time; in addition, their objective is to converge all the error states simultaneously. During sliding, s1 = 0 and s2 = 0, hence

xe = ye and θe = xe − ye = 0   (23)

Now, the motion equation on the sliding surface is

ẏe = −ω xe + vd sin θe = −ω ye   (24)

Therefore the order of the system reduces to one.

2) Control law: In the second step, the aim is to design a suitable control that ensures the reaching phase is crossed in finite time and subsequently helps the error attain zero in finite time. For this, the proposed control structure is

vc = vslide + vreach
ωc = ωslide + ωreach   (25)

Here the first components, vslide and ωslide, form the sliding control, which plays its role when the error states are on the surface and drives the error states to zero in finite time, whereas the second components, vreach and ωreach, form the reaching law, whose aim is to drive the error states into the vicinity of the surface in finite time in the presence of system disturbances.

The first component is obtained using the principle of invariance. Using Eq. 22,

ṡ1 = ẋe − ẏe
ṡ2 = −ẋe + ẏe + θ̇e

Substituting the values from Eq. 20, the above equations can be written as

ṡ1 = ω ye − vd cos θe + v + ω xe − vd sin θe
ṡ2 = −ω ye + vd cos θe − ω xe − v + vd sin θe + ω − ωd   (26)

From Eq. 26, vslide and ωslide are chosen as

vslide = vd cos θe + vd sin θe − ω ye − ω xe
ωslide = ωd tanh(ωd)   (27)

The second components, vreach and ωreach, are the control inputs applied in the reaching phase and are accountable for bringing the error states into the neighbourhood of the sliding surface in finite time. In addition, they should ensure that chattering diminishes. Therefore, to accomplish the above-mentioned characteristics in the reaching phase, a reaching law based control is applied:

vreach = −k1 tanh(s1)
ωreach = −k2 tanh(s2)   (28)

Hence the commanded linear and angular velocities of the mobile robot are

vc = vslide − k1 tanh(s1)
ωc = ωslide − k2 tanh(s2)   (29)

Substituting the values of vslide and ωslide in Eq. 29, the control input becomes

vc = −ω(xe + ye) + vd cos θe + vd sin θe − k1 tanh(s1)
ωc = ωd tanh(ωd) − k2 tanh(s2)   (30)

C. Switching based reachability condition

With the control law (30), the sliding surface dynamics become

ṡ1 = −k1 tanh(s1)
ṡ2 = k1 tanh(s1) − k2 tanh(s2) + ωd (tanh(ωd) − 1)
VI. STABILITY ANALYSIS

Lemma 1 (Gopal et al. 2012, Krasovskii method): For a continuous system ẋ = f(x) with f(0) = 0, x ∈ Rⁿ, suppose there exists a Lyapunov function V(x) = fᵀ(x) P f(x), where P is a symmetric positive definite matrix. Then

V̇(x) = fᵀ(x) [Jxᵀ P + P Jx] f(x)

where Jx is the Jacobian of f. If

Q = −[Jxᵀ P + P Jx]

is positive definite, the system is asymptotically stable.

Theorem 1: Under the assumptions 2 k1 sech²(s1) > 0 and 4 k2 − 2 sech²(s2) > 1, the mobile robot obstacle avoidance tracking control system (20), with the sliding surface (22) and the control law (30), guarantees finite time convergence of the reachability law.

Proof:

ṡ1 = −k1 tanh(s1)
ṡ2 = k1 tanh(s1) − k2 tanh(s2) + ωd (tanh(ωd) − 1)   (31)

Let

f1(s) = ṡ1
f2(s) = ṡ2   (32)

s = 0 is the equilibrium point. We can apply the Krasovskii method to determine the reachability of the equilibrium point. A Lyapunov function is

V(s) = fᵀ(s) P f(s)

Taking P = I (identity matrix) for simplicity,

V̇(s) = fᵀ(s) [Jᵀ(s) + J(s)] f(s)

where

J(s) = [∂f1/∂s1, ∂f1/∂s2; ∂f2/∂s1, ∂f2/∂s2] = [−k1 sech²(s1), 0; k1 sech²(s1), −k2 sech²(s2)]

Q = −[Jᵀ(s) P + P J(s)] = [2 k1 sech²(s1), −k1 sech²(s1); −k1 sech²(s1), 2 k2 sech²(s2)]

For Q to be positive definite, the following conditions should hold:

2 k1 sech²(s1) > 0

and

4 k1 k2 sech²(s1) sech²(s2) − k1² sech⁴(s1) > 0   (33)

which implies

4 k2 − 2 sech²(s2) > 1

Hence, in light of Lemma 1, finite time stability is guaranteed, which completes the proof.

Theorem 2: Once the closed-loop (20) error states have crossed the reaching phase and reached the neighborhood of s = 0, the control (30) will ensure that they reach zero in finite time.

Proof: Consider the Lyapunov candidate function

V = ½ xe² + ½ ye² + ½ θe²   (34)

When the reachability condition is fulfilled, s1 = 0, ṡ1 = 0 and also s2 = 0, ṡ2 = 0. Hence reachability ensures that

s1 = 0 ⇒ xe = ye,  ṡ1 = 0 ⇒ ẋe = ẏe
s2 = 0 ⇒ θe = xe − ye,  ṡ2 = 0 ⇒ θ̇e = ẋe − ẏe

Therefore, after crossing the reachability phase, ẋe = ẏe and θe becomes zero. Taking the derivative of Eq. 34,

V̇ = xe ẋe + ye ẏe + θe θ̇e = xe ẋe + ye ẏe   (35)

After substituting all the values, Eq. 35 becomes

V̇ = xe (ωslide ye + vslide − vd cos θe) + ye (−ωslide xe + vd sin θe)   (36)

Now, substituting the values of vslide and ωslide in Eq. 36,

V̇ = xe (−ω(xe + ye) + vd cos θe + vd sin θe − k1 tanh(s1) − vd cos θe) + ye (vd sin θe)   (37)

After simplification, and considering xe = ye and θe = 0 (so s1 = 0 and vd sin θe = 0), Eq. 37 becomes

V̇ = xe (−ω(xe + xe)) = −2 ωd tanh(ωd) xe²   (38)

since on the surface ω = ωslide = ωd tanh(ωd).

VII. SIMULATION RESULTS

The proposed switching based sliding mode controller is first verified using simulations on the kinematic model of the mobile robot following different types of paths. In the simulations, the initial position of the mobile robot is taken as q(0) = [3, −3, −1]ᵀ and the initial velocities are [v(0), ω(0)] = [0, 0]ᵀ. The control parameters are set as k1 = 5 and k2 = 2. The following four cases are simulated:

Case 1: setpoint stabilization, vd = 0, ωd = 0
Case 2: tracking a line, vd = 2, ωd = 0
Case 3: tracking a circle, vd = 2, ωd = 1
Case 4: tracking a random path

From the simulation results (Figs. 12-15) it is verified that the states reach the sliding surface in finite time and then converge to zero asymptotically.

[Fig. 12: Robot following a set point.]
[Fig. 13: Robot following a line.]
[Fig. 14: Robot following a circle.]
[Fig. 15: Robot following a random trajectory.]
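A minimal closed-loop reproduction of case 3 (circle tracking) can be assembled from the kinematics (Eq. 1), the body-frame error transform, the reference model (Eq. 19) and the control law (Eq. 30). The initial state and gains follow the text; the Euler step size and horizon are assumptions, and this is offered as an illustration rather than the authors' simulation code.

```python
import numpy as np

k1, k2, dt = 5.0, 2.0, 0.01
vd, wd = 2.0, 1.0                        # case 3: circle (change for cases 1, 2, 4)
x, y, th = 3.0, -3.0, -1.0               # q(0) of Section VII
xd, yd, thd = 0.0, 0.0, 0.0              # reference state
omega = 0.0                              # last commanded angular velocity

for _ in range(1500):                    # 15 s of simulated time
    # Body-frame tracking error.
    c, s = np.cos(th), np.sin(th)
    xe = c * (x - xd) + s * (y - yd)
    ye = -s * (x - xd) + c * (y - yd)
    te = th - thd
    # Control law, Eq. 30.
    s1, s2 = xe - ye, -xe + ye + te
    vc = -omega * (xe + ye) + vd * np.cos(te) + vd * np.sin(te) - k1 * np.tanh(s1)
    wc = wd * np.tanh(wd) - k2 * np.tanh(s2)
    # Integrate the robot (Eq. 1) and the reference (Eq. 19).
    x += vc * np.cos(th) * dt
    y += vc * np.sin(th) * dt
    th += wc * dt
    xd += vd * np.cos(thd) * dt
    yd += vd * np.sin(thd) * dt
    thd += wd * dt
    omega = wc

print(f"final errors: xe={xe:.3f}, ye={ye:.3f}, te={te:.3f}")
```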
VIII. EXPERIMENTAL SETUP AND RESULTS

Experimental validation of the proposed approach for robot safe navigation is carried out in unknown indoor environments. Fig. () shows the experimental setup, where a Pioneer P3-DX is used to perform real-time experiments in a Linux environment with the Robot Operating System (ROS). A Kinect V2 is mounted on top of the robot to generate the depth map. As the sensor cannot provide a depth map for very close and very far obstacles, in this work the sensor is inclined at 30°. The frequency of the sensor is 30 Hz. Figs. 16a, 16b and 16c show the sample scenarios considered for the experiments. Experiments are conducted in the absence and presence of channel losses.

[Fig. 16: Scene sample images.]

A. Without loss

In this section, the experiment is conducted in a two-obstacle scenario in the absence of channel losses. Fig. 17(a) shows the commanded and achieved robot navigation trajectories, represented by solid and dashed lines, respectively. The corresponding commanded and actual linear and angular velocities are shown in Figs. 17(d) and 17(e), respectively. Results show that the robot is able to track the desired commands computed by the guidance laws.
The resultant position and angle error profiles are shown in Fig. 17(b). From the figure it can be observed that all the errors converge to zero simultaneously and in finite time. The variation of both surfaces with time is shown in Fig. 17(c).

[Fig. 17: Robot trajectory.]

B. With loss

Here, experimental results for two cases are discussed in the presence of channel losses.

Case 1: A single obstacle scenario is considered with low values of losses. The robot trajectory is shown in Fig. 18(a), and the corresponding desired guidance commands are shown in Figs. 18(d) and 18(e).

[Fig. 18: Robot trajectory for Case 1.]

Case 2: In this case, burst losses occur in the channel, as can be seen from the velocity profile of the robot. From the trajectory tracking plot it is verified that the sliding mode controller is robust under channel disturbances.

[Fig. 19: Robot trajectory for Case 2.]

IX. CONCLUSION

This paper presents a vision based obstacle avoidance guidance for a non-holonomic robot in indoor environments. The generated desired commands are followed by the proposed sliding mode controller. An analysis of the guidance strategy is carried out for the case of an infinite length obstacle. Experiments are carried out on a Pioneer robot with a Kinect sensor in a CPS environment. Results show the effectiveness of the proposed approach in real-time. To show the robustness property, the approach is implemented in the presence of losses.

REFERENCES

[1] P. Agrawal, A. Ratnoo, and D. Ghose, “Inverse optical flow based guidance for UAV navigation through urban canyons,” Aerospace Science and Technology, vol. 68, pp. 163–178, 2017. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S127096381730826X
[2] J. Biswas and M. Veloso, “Depth camera based indoor mobile robot localization and navigation,” in Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012, pp. 1697–1702.
[3] D. Chwa, “Sliding-mode tracking control of nonholonomic wheeled mobile robots in polar coordinates,” IEEE Transactions on Control Systems Technology, vol. 12, no. 4, pp. 637–644, 2004.
[4] ——, “Robust distance-based tracking control of wheeled mobile robots using vision sensors in the presence of kinematic disturbances,” IEEE Transactions on Industrial Electronics, vol. 63, no. 10, pp. 6172–6183, 2016.
[5] C. I. Connolly, J. B. Burns, and R. Weiss, “Path planning using Laplace’s equation,” in Robotics and Automation, 1990. Proceedings., 1990 IEEE International Conference on. IEEE, 1990, pp. 2102–2106.
[6] D. S. O. Correa, D. F. Sciotti, M. G. Prado, D. O. Sales, D. F. Wolf, and F. S. Osorio, “Mobile robots navigation in indoor environments using Kinect sensor,” in Critical Embedded Systems (CBSEC), 2012 Second Brazilian Conference on. IEEE, 2012, pp. 36–41.
[7] E. DiGiampaolo and F. Martinelli, “Mobile robot localization using the phase of passive UHF RFID signals,” IEEE Transactions on Industrial Electronics, vol. 61, no. 1, pp. 365–376, 2014.
[8] S. Garrido, L. Moreno, D. Blanco, and P. Jurewicz, “Path planning for mobile robot navigation using Voronoi diagram and fast marching,” Int. J. Robot. Autom., vol. 2, no. 1, pp. 42–64, 2011.
[9] S. S. Ge and Y. J. Cui, “Dynamic motion planning for mobile robots using potential field method,” Autonomous Robots, vol. 13, no. 3, pp. 207–222, 2002.
[10] H. Giese, B. Rumpe, B. Schätz, and J. Sztipanovits, “Science and engineering of cyber-physical systems (Dagstuhl Seminar 11441),” in Dagstuhl Reports, vol. 1, no. 11. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2012.
[11] M. D. Ilic, L. Xie, and U. A. Khan, “Modeling future cyber-physical energy systems,” in Power and Energy Society General Meeting-Conversion and Delivery of Electrical Energy in the 21st Century, 2008 IEEE. IEEE, 2008, pp. 1–9.
[12] J. Jin, Y. Kim, S. Wee, D. Lee, and N. Gans, “A stable switched-system approach to collision-free wheeled mobile robot navigation,” Journal of Intelligent & Robotic Systems, vol. 86, no. 3-4, pp. 599–616, 2017.
[13] K. Khoshelham, “Accuracy analysis of Kinect depth data,” ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 3812, pp. 133–138, Sep. 2011.
[14] M. Kim, M.-O. Stehr, and C. Talcott, “A distributed logic for networked cyber-physical systems,” Science of Computer Programming, vol. 78, no. 12, pp. 2453–2467, 2013.
[15] S. M. LaValle and J. J. Kuffner Jr., “Randomized kinodynamic planning,” The International Journal of Robotics Research, vol. 20, no. 5, pp. 378–400, 2001.
[16] S. M. Loos, A. Platzer, and L. Nistor, “Adaptive cruise control: Hybrid, distributed, and now formally verified,” in International Symposium on Formal Methods. Springer, 2011, pp. 42–56.
[17] Y. Ma, G. Zheng, W. Perruquetti, and Z. Qiu, “Motion planning for non-holonomic mobile robots using the i-PID controller and potential field,” in Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on. IEEE, 2014, pp. 3618–3623.

[18] D. Panagou, H. G. Tanner, and K. J. Kyriakopoulos, “Control of nonholonomic systems using reference vector fields,” in Decision and Control and European Control Conference (CDC-ECC), 2011 50th IEEE Conference on. IEEE, 2011, pp. 2831–2836.
[19] J. Ren, K. A. McIsaac, and R. V. Patel, “Modified newton’s method
applied to potential field-based navigation for nonholonomic robots in
dynamic environments,” Robotica, vol. 26, no. 01, pp. 117–127, 2008.
[20] L. Sha, S. Gopalakrishnan, X. Liu, and Q. Wang, “Cyber-physical sys-
tems: A new frontier,” in Sensor Networks, Ubiquitous and Trustworthy
Computing, 2008. SUTC’08. IEEE International Conference on. IEEE,
2008, pp. 1–9.
[21] D. Tardioli, A. R. Mosteo, L. Riazuelo, J. L. Villarroel, and L. Montano,
“Enforcing network connectivity in robot team missions,” The Interna-
tional Journal of Robotics Research, 2010.
[22] D. Tardioli and J. L. Villarroel, “Real time communications over 802.11:
Rt-wmp,” in Mobile Adhoc and Sensor Systems, 2007. MASS 2007. IEEE
International Conference on. IEEE, 2007, pp. 1–11.
[23] P. M. Tiwari, S. Janardhanan, and M. un Nabi, “Rigid spacecraft attitude
control using adaptive non-singular fast terminal sliding mode,” Journal
of Control, Automation and Electrical Systems, vol. 26, no. 2, pp. 115–
124, 2015.
[24] ——, “Spacecraft attitude control using non-singular finite time conver-
gence fast terminal sliding mode,” International Journal of Instrumen-
tation Technology, vol. 1, no. 2, pp. 124–142, 2012.
[25] C. Tricaud and Y. Chen, “Optimal trajectories of mobile remote sensors
for parameter estimation in distributed cyber-physical systems,” in
American Control Conference (ACC), 2010. IEEE, 2010, pp. 3211–
3216.
[26] A. Wagh, X. Li, J. Wan, C. Qiao, and C. Wu, “Human centric data fusion
in vehicular cyber-physical systems,” in Computer Communications
Workshops (INFOCOM WKSHPS), 2011 IEEE Conference on. IEEE,
2011, pp. 684–689.
[27] B.-F. Wu and C.-L. Jen, “Particle-filter-based radio localization for
mobile robots in the environments with low-density wlan aps,” IEEE
Transactions on Industrial Electronics, vol. 61, no. 12, pp. 6860–6870,
2014.
[28] J.-M. Yang and J.-H. Kim, “Sliding mode control for trajectory track-
ing of nonholonomic wheeled mobile robots,” IEEE Transactions on
robotics and automation, vol. 15, no. 3, pp. 578–587, 1999.
[29] J. Yuan, Y. Zheng, X. Xie, and G. Sun, “Driving with knowledge
from the physical world,” in Proceedings of the 17th ACM SIGKDD
international conference on Knowledge discovery and data mining.
ACM, 2011, pp. 316–324.
[30] N. A. Zainuddin, Y. Mustafah, Y. Shawgi, and N. Rashid, “Autonomous
navigation of mobile robot using kinect sensor,” in Computer and
Communication Engineering (ICCCE), 2014 International Conference
on. IEEE, 2014, pp. 28–31.
[31] M. Zhang and H. H. Liu, “Game-theoretical persistent tracking of a
moving target using a unicycle-type mobile vehicle,” IEEE Transactions
on Industrial Electronics, vol. 61, no. 11, pp. 6222–6233, 2014.
