$$\dot{q} = B(q)\,u; \qquad B(q) = \begin{pmatrix} \cos\theta & 0 \\ \sin\theta & 0 \\ 0 & 1 \end{pmatrix}, \qquad u = \begin{pmatrix} v \\ \omega \end{pmatrix} \qquad (1)$$
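As a quick executable check of this kinematic model, the following sketch (the function name and step size are our own) integrates the unicycle equations of (1) with one Euler step:

```python
import numpy as np

def unicycle_step(q, u, dt):
    """One Euler step of the unicycle model q_dot = B(q) u of Eq. (1);
    q = (X, Y, theta), u = (v, omega)."""
    X, Y, theta = q
    v, omega = u
    B = np.array([[np.cos(theta), 0.0],
                  [np.sin(theta), 0.0],
                  [0.0,           1.0]])
    return q + dt * B @ np.array([v, omega])
```

For instance, a robot heading along the X axis with v = 1 m/s advances only in X, while ω alone changes only θ.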
A reference or desired path to be followed can be described by a single parameter, namely r, and it can be expressed as a vector of desired state coordinates q_des(r) = (X_des(r), Y_des(r), θ_des(r))^t. To study the tracking of a path q_des(r), let us define another, intrinsic coordinate system (q_des(r); t, n) linked to the path (usually called the Frenet frame [16], see Fig. 4): t is the unit vector parallel to the robot orientation at the desired point q_des, and n is the normal to it. Let (e_x, e_y) be the position errors of point P_o relative to these axes and e_θ the robot orientation error, so that e_q = (e_x, e_y, e_θ)^t is the relative error vector (an analogous coordinate system was used by Kanayama et al. [14], but in that case the system was linked to the robot itself). Let u_des(r) = (v_des(r), ω_des(r))^t be the desired inputs expressed as a function of the descriptor parameter r. Applying the chain rule to the desired inputs, we obtain:
$$\omega_{des}(t) = \frac{d\theta_{des}}{dt} = \frac{d\theta_{des}}{dr}\,\frac{dr}{dt} = \theta_{des}'(r)\,\dot{r} = \omega_{des}(r)\,\dot{r}$$
where (′) means differentiation with respect to r. The relation for v_des is analogous if we consider the length s_des of the path (v_des(t) = ṡ_des). To sum up, we can state that u_des(t) = u_des(r) ṙ.
For the chosen frame, the error coordinates e_q can be obtained as a rotation around an axis normal to the XOY-plane. In fact, according to Fig. 4:
$$e_q = R(\theta_{des})\,(q - q_{des}); \qquad R(\theta_{des}) = \begin{pmatrix} \cos\theta_{des} & \sin\theta_{des} & 0 \\ -\sin\theta_{des} & \cos\theta_{des} & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
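This change of coordinates can be sketched as follows (the function names are ours; the angle error is additionally wrapped to (−π, π], which the text leaves implicit):

```python
import numpy as np

def rotation(theta_des):
    """R(theta_des): maps world-frame offsets into the path-linked Frenet frame."""
    c, s = np.cos(theta_des), np.sin(theta_des)
    return np.array([[  c,   s, 0.0],
                     [ -s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

def frenet_errors(q, q_des):
    """e_q = R(theta_des) (q - q_des), with e_theta wrapped to (-pi, pi]."""
    e = rotation(q_des[2]) @ (np.asarray(q, dtype=float) - np.asarray(q_des, dtype=float))
    e[2] = (e[2] + np.pi) % (2 * np.pi) - np.pi
    return e
```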
Therefore, using the relative coordinates linked to
the path, and by simple calculations, the following
state equations can be found [8]:
$$\begin{pmatrix} \dot{e}_x \\ \dot{e}_y \\ \dot{e}_\theta \end{pmatrix} = \begin{pmatrix} 0 & \omega_{des} & 0 \\ -\omega_{des} & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} e_x \\ e_y \\ e_\theta \end{pmatrix} + \begin{pmatrix} \cos(e_\theta) & 0 \\ \sin(e_\theta) & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} v \\ \omega \end{pmatrix} + \begin{pmatrix} -v_{des} \\ 0 \\ -\omega_{des} \end{pmatrix} \qquad (3)$$
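The right-hand side of (3) translates directly into code (a sketch under our naming; v_des and ω_des are the time-parameterized desired inputs):

```python
import numpy as np

def error_dynamics(e, u, u_des):
    """Relative-error state equations (3): returns (e_x_dot, e_y_dot, e_theta_dot)
    for errors e = (e_x, e_y, e_theta), inputs u = (v, omega) and
    desired inputs u_des = (v_des, omega_des)."""
    ex, ey, eth = e
    v, omega = u
    v_des, w_des = u_des
    return np.array([ w_des * ey + v * np.cos(eth) - v_des,
                     -w_des * ex + v * np.sin(eth),
                      omega - w_des])
```

Note that when e = 0 and u = u_des the derivative vanishes, i.e. perfect tracking is an equilibrium.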
[Fig. 4 shows the robot in the world frame XOY and the path-linked Frenet frame (t, n) at q_des(r), with the errors e_x, e_y and e_θ = θ − θ_des. SIRIUS dimensions: d = 0.25 m, R = 0.16 m.]
Fig. 4: Extrinsic and intrinsic robot coordinates.
This form agrees with the intuition that the error variables must grow with both the real and the desired posture advancement.
4 Error adaptive tracking selection and
Lyapunov-based control law
In section II we proposed the error adaptive tracking based on the choice of ṙ as a function g(t, e_q). Let us first analyze the most elementary case, where g does not depend on t. We have shown that if the errors are small, then g(e_q) should tend to 1, but if they are large, then g(e_q) should be small.
The conditions stated above give us a first simple idea for g(e_q): a function bounded by 0 and 1 for all e_q, e.g. g(e_q) = exp(−|e_q|), so that when any error grows, g tends to zero. Although this elementary construction may be useful for some systems, the nonholonomic constraint of the unicycle prohibits it (if we want to look for a Lyapunov-based control law). For this kind of mobile robot, it can be seen that if a Lyapunov function V is the sum of positive definite functions of the relative errors e_q = (e_x, e_y, e_θ)^t, then V̇ can vanish while e_y remains constant. Going even further, in this situation we can observe that when ṙ becomes lower, the convergence will be slower. This is due to the fact that the input v cannot exceed v_des(t) = ṙ v_des(r) (to avoid e_x increasing). To sum up, when V̇ is zero, the decrease of V is "connected" or "attached" to the increase of ṙ; therefore, if V̇ is null, ṙ should be maximal to get a fast convergence. Moreover, if the control law pursues a V̇ as close as possible to −KV, K > 0, then g(e_q) = exp(K_V V̇) can be a plausible solution (K_V > 0 is a scale factor). By "closest" we mean that V̇ cannot equal −KV because of the constraint, although exponential convergence would be desirable.
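Numerically, this candidate is bounded exactly as required; a one-line sketch (K_V = 5 is the value tuned later, in section 5):

```python
import math

K_V = 5.0  # scale factor, K_V > 0

def g_error(V_dot):
    """g(e_q) = exp(K_V * V_dot): tends to 1 as V_dot -> 0
    and to 0 as V_dot -> -infinity (V_dot <= 0 along trajectories)."""
    return math.exp(K_V * V_dot)
```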
Considering this, we can design a control law based
on the following theorem.
Theorem: For unicycle robots and the Lyapunov function

$$V = \frac{1}{2}\,e_x^2 + \frac{1}{2}\,e_y^2 + A^2\left(1 - \cos(e_\theta)\right)$$

if the desired inputs never vanish (the first hypothesis: v_des(r)² + ω_des(r)² ≠ 0 for all r), then e_q = (e_x, e_y, e_θ)^t converges to zero under the control law

$$e_v = -\tau_1^{-1}\,a_1(e_q), \qquad e_\omega = -\tau_2^{-1} A^{-2}\,a_2(e_q), \qquad \tau_1, \tau_2 > 0$$

where a_1(e_q) = e_x cos(e_θ) + e_y sin(e_θ); a_2(e_q) = A² sin(e_θ), and

$$v = e_v + v_{des}(t)\cos(e_\theta)$$

$$\omega(t) = e_\omega + \omega_{des}(t) + \frac{1}{A^2}\,e_x\,v_{des}(t)\sin(e_\theta) - \frac{1}{A^2}\,e_y\,v_{des}(t)\cos(e_\theta)$$
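This control law can be written down directly (a sketch with our function name, using A² = 0.25 m², τ₁ = 0.5 s, τ₂ = 1/6 s, the values tuned in section 5):

```python
import numpy as np

A2, TAU1, TAU2 = 0.25, 0.5, 1.0 / 6.0  # A^2, tau_1, tau_2

def control_law(e, v_des_t, w_des_t):
    """Lyapunov-based tracking law of the theorem: returns commanded (v, omega)
    for errors e = (e_x, e_y, e_theta) and desired inputs v_des(t), omega_des(t)."""
    ex, ey, eth = e
    a1 = ex * np.cos(eth) + ey * np.sin(eth)
    a2 = A2 * np.sin(eth)
    e_v = -a1 / TAU1             # e_v = -tau_1^-1 a_1(e_q)
    e_w = -a2 / (TAU2 * A2)      # e_omega = -tau_2^-1 A^-2 a_2(e_q)
    v = e_v + v_des_t * np.cos(eth)
    w = (e_w + w_des_t
         + (v_des_t / A2) * (ex * np.sin(eth) - ey * np.cos(eth)))
    return v, w
```

With zero error the law reduces to the pure feedforward (v, ω) = (v_des(t), ω_des(t)).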
Note that v and ω are composed of feedforward terms plus the feedback terms e_v and e_ω. Proof: differentiating V along the state equations (3):

$$\dot{V} = e_x\,\dot{e}_x + e_y\,\dot{e}_y + A^2\sin(e_\theta)\,\dot{e}_\theta = e_x\left(v\cos(e_\theta) - v_{des}\right) + e_y\,v\sin(e_\theta) + A^2\sin(e_\theta)\left(\omega - \omega_{des}\right)$$
Substituting v and ω:

$$\dot{V} = e_v\,a_1(e_q) + e_\omega\,a_2(e_q)$$
And applying the control law:

$$\dot{V} = -\tau_1^{-1}\,a_1(e_q)^2 - \tau_2^{-1} A^{-2}\,a_2(e_q)^2$$
Then V is non-increasing, so V̇ ≤ 0, and V converges to some limit V_lim ≥ 0. Therefore, by Barbalat's lemma [19], ė_θ → 0 and e_θ → 0. At this limit V̇ is negative definite in the error e_x, so ė_x → 0, e_x → 0, e_y → e_{y,lim} < ∞, and the control law becomes:

$$v = e_v + v_{des}(t), \qquad \omega(t) = e_\omega + \omega_{des}(t) - \frac{1}{A^2}\,e_{y,lim}\,v_{des}(t)$$

with e_v, e_ω → 0.
And the state equations tend to:

$$\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} v + \omega_{des}(t)\,e_{y,lim} - v_{des}(t) \\ 0 \\ \omega(t) - \omega_{des}(t) \end{pmatrix}$$
Therefore, by the third state equation we have ω → ω_des(t), and by the control law v → v_des(t). Then:

$$0 = e_{y,lim}\,v_{des}(t); \qquad 0 = e_{y,lim}\,\omega_{des}(t)$$

Squaring and adding the previous equations we have:

$$0 = e_{y,lim}^2\left(\omega_{des}^2(t) + v_{des}^2(t)\right)$$

At this limit ṙ = 1, so 0 = e_{y,lim}²(ω_des²(r) + v_des²(r)), which implies (by the first hypothesis) that e_{y,lim} = 0. QED
Remark 1: Note that if g(e_q) tends to zero in the case ė_x → 0, e_x → 0, e_y → e_{y,lim}, then u also tends to zero and the convergence will be very slow.
Remark 2: Actually, there are no special requirements on g(e_q); it is only convenient that it tend to one in the case V̇ → 0 and to zero when V̇ → −∞. In fact, we will use g(e_q) = exp(K_V V̇), K_V > 0, in the next section to achieve an adequate convergence.
The results of this proposal will be shown in the next section. Now we will study the case where g also depends on t. As mentioned before, we concentrate on a function g(t, e_q) that combines the errors and the difference between the parameter r and the time t (i.e., t − r), in order to get a relaxed deterministic tracking (determinism is not enforced when errors are large). The dependence of g on the errors can be the same as before, because the previous reasons also apply when r = t. The intention of introducing the dependence on t is to permit r to remain at rest when the robot is far from the path (in spite of the growing difference between t and r). In a second case, when the robot recovers and approaches the path, that difference should be reduced. Therefore, g(t, e_q) cannot be bounded by 1, because in this second case r must approach t, which requires ṙ > 1. Moreover, at the origin (t − r = 0, e_q = 0), g must be 1. Thus we might think of a first proposal for this more complex EAT, e.g.:
$$g(t, e_q) = \exp(K_V \dot{V})\,\left(1 + K_{tr}\arctan(t - r)\right) \qquad (4)$$

where K_tr > 0 is a scale factor that indicates how fast the convergence of r to t is. Here g(t, e_q) is upper bounded by 1 + K_tr π/2, and lower bounded by 1 − K_tr π/2.
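Law (4) is easy to sketch; here K_V and the choice K_tr = 2/π anticipate the values adopted below:

```python
import math

K_V = 5.0             # scale factor on V_dot
K_TR = 2.0 / math.pi  # choice adopted below, keeping r_dot >= 0

def g(t, r, V_dot):
    """EAT law (4): r_dot = exp(K_V * V_dot) * (1 + K_TR * arctan(t - r)).
    The first factor freezes r while |V_dot| is large; the second is a
    bounded proportional-like law driving r toward t."""
    return math.exp(K_V * V_dot) * (1.0 + K_TR * math.atan(t - r))
```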
We want to point out that the second factor of g(t, e_q) is just a control law for the parameter r. The whole function g can even be seen as a new DOF of the system, since we are designing the behavior of r, which can be considered a new state variable. Note that in this law the desired evolution for the parameter is r = t, hence the constant 1 is a feedforward term¹. Moreover, when the difference t − r becomes small, K_tr arctan(t − r) behaves as a proportional control law. We choose the arctan simply because it is a bounded function (large inputs on the new DOF would mean large robot inputs v and ω), and because such a simple control law is sufficient for a state variable r that has no real physical model. We assume that this choice may not be optimal for some applications, and more sophisticated g can be tried in future work, but this simple alternative is sufficient for the SIRIUS wheelchair as a first proposal.
In addition, the proposed function g(t, e_q) fulfils remark 2 of the previous theorem: it naturally tends to zero when V̇ → −∞, and in the case V̇ → 0 it does not tend to one if and only if r is much greater than t (i.e., the robot is ahead of schedule). But in this last case the robot will simply stay quiet until the time has approached r, and at that moment the convergence will be rapid (note, however, that this situation is not very usual). Finally, if K_tr > 2/π, the variation of r can be null or even negative when the robot is ahead of the desired point. This situation does not make sense when a wheelchair is required to track a path, and it will be avoided in this paper (we will choose exactly K_tr = 2/π, so the chair will wait for q_des(t) if necessary, i.e., ṙ = 0).

¹ In general, if the tracking needed an evolution like r = f(t), the feedforward term would be ṙ = ḟ(t).
5 Simulation results
In this section we show the behavior of our system
in several situations. For our real system SIRIUS,
the inertia load driven by the motors is significant even though the gearbox ratio is high (31:1). Due to this
it is very important to consider the case where the
motors have a response delay (i.e., the real
kinematic inputs are delayed with respect to the
commanded inputs). In spite of this, we will show
that the EAT method ensures that errors will not
grow too much when motor response is slow.
Even when asymptotic convergence is ensured, simulation is always a good way to verify and observe the control behavior. This behavior is primarily significant when errors are large because, if they are small, the simulated system's behavior is similar to that of an exponentially convergent system.
In this subsection we first analyze a path composed of three very different segments: a straight line, a 1.5 m constant-radius arc (the two most frequent cases found in the current literature) and a zero-radius turn. The first two segments have a length of 5 m in the state coordinate space (X, Y, Aθ; A² = 0.25 m²) and the last one has no end. The generality of the EAT method is thus confirmed; moreover, we present three different initial conditions (with considerable initial errors), so the robot has to cope with each segment. We summarize the initial position and errors of the three cases in the following table. The duration of each experiment is 10 seconds.
Case   e_x (m)   e_y (m)   e_θ (rad)   R (m)
 1      0         -0.4      -π+0.5      0
 2      0          0.7       π/2        4.5
 3     -0.3       -0.3      -π/2       10
The values of the constant parameters have been tuned to ensure a smooth convergence and achieve a satisfactory variation of r when errors are large: K_V = 5 s^(1/2) m^(-1), τ_1 = 0.5 s, τ_2 = 1/6 s.
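Putting the pieces together, the following minimal closed-loop sketch (not the authors' simulator; the arc radius, nominal speed and initial error are illustrative choices of ours) combines the error coordinates, the control law of the theorem and the EAT law (4) on a constant-radius arc:

```python
import numpy as np

A2, TAU1, TAU2 = 0.25, 0.5, 1.0 / 6.0   # A^2, tau_1, tau_2 (tuned values)
K_V, K_TR = 5.0, 2.0 / np.pi
RADIUS, V_REF = 1.5, 0.5                # illustrative arc radius (m) and speed (m/s)

def q_des(r):
    """Desired posture on an arc of radius RADIUS, parameterized by r."""
    w = V_REF / RADIUS
    return np.array([RADIUS * np.sin(w * r), RADIUS * (1.0 - np.cos(w * r)), w * r])

def simulate(q0, T=10.0, dt=0.01):
    """Euler simulation; returns the final error vector e_q and the lag t - r."""
    q, r, t = np.array(q0, dtype=float), 0.0, 0.0
    while t < T:
        qd = q_des(r)
        c, s = np.cos(qd[2]), np.sin(qd[2])
        e = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]) @ (q - qd)
        e[2] = (e[2] + np.pi) % (2.0 * np.pi) - np.pi          # wrap angle error
        a1 = e[0] * np.cos(e[2]) + e[1] * np.sin(e[2])
        a2 = A2 * np.sin(e[2])
        V_dot = -a1 ** 2 / TAU1 - a2 ** 2 / (TAU2 * A2)        # from the theorem
        r_dot = np.exp(K_V * V_dot) * (1.0 + K_TR * np.arctan(t - r))  # EAT law (4)
        v_d, w_d = V_REF * r_dot, (V_REF / RADIUS) * r_dot     # u_des(t) = u_des(r) r_dot
        v = -a1 / TAU1 + v_d * np.cos(e[2])
        w = (-a2 / (TAU2 * A2) + w_d
             + (v_d / A2) * (e[0] * np.sin(e[2]) - e[1] * np.cos(e[2])))
        q += dt * np.array([v * np.cos(q[2]), v * np.sin(q[2]), w])  # unicycle model (1)
        r += dt * r_dot
        t += dt
    return e, t - r
```

Starting from an initial error similar to case 1, ṙ begins near zero and the errors shrink over the 10 s horizon, qualitatively reproducing the behavior described below.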
[Fig. 5 plots Y (m) versus X (m), showing the desired path and the real trajectories.]
Fig. 5: EAT behavior under different initial conditions.
Obviously, when the errors are small the convergence to the path is smooth and fast. However, even under extreme conditions, good convergence of the EAT method is observed. In Fig. 5 the three different initial conditions are examined; each one begins near one of the segments. The convergence of the three errors to zero is similar in all cases.
The solid line gives the real robot trajectory, and the arrows show the initial robot orientation. Note that in the second experiment the robot begins following the path (during the first transient) in the reverse direction, in order to reduce the errors faster, while the parameter r remains nearly constant. In Fig. 6 we can see how ṙ evolves for each case: ṙ begins with a value near zero, and then it recovers until it reaches the stationary value of 1. The moments where ṙ approaches 1 rapidly (around the first second) coincide approximately with the situation when the robot is parallel to the desired
posture (e