2
T
M(q) m
2
(q)
2
R
n
(2.2)
where m
1
is a positive constant, m
2
() is a positive function, and denotes
the Euclidean norm.
Property 2: The inertia and the centripetalCoriolis matrices satisfy the following
relationship [18]
T
_
1
2
M(q) V
m
(q, q)
_
= 0 R
n
(2.3)
where
M(q) represents the time derivative of the inertia matrix.
10
Property 3: The robot dynamics given in (2.1) can be linearly parameterized as
follows [18]
Y (q, q, q) ] M(q) q +V
m
(q, q) q +G(q) (2.4)
where R
p
contains constant system parameters, and Y (q, q, q) R
np
denotes
a regression matrix composed of q(t), q(t), and q(t).
Adaptive VFC Control Objective
As described previously, many robotic tasks can be eectively encapsulated as
a velocity eld. That is, the velocity eld control objective can be described as
commanding the robot manipulator to track a velocity eld that is dened as a
function of the current link position. To quantify this objective, a velocity eld
tracking error, denoted by
1
(t) R
n
, is dened as follows
1
(t) ] q(t) (q) (2.5)
where () R
n
denotes the velocity eld. To achieve the control objective, the
subsequent development is based on the assumption that q(t) and q(t) are measurable,
and that (q) and its partial derivative
(q)
q
R
n
, are assumed to be bounded
provided q(t) L
.
Benchmark Control Modication
To develop the openloop error dynamics for
1
(t), we take the time derivative of
(2.5) and premultiply the resulting expression by the inertia matrix as follows
M(q)
1
= V
m
(q, q) q G(q) + +V
m
(q, q)(q) (2.6)
V
m
(q, q)(q) M(q)
(q)
q
q
where (2.1) was utilized. From (2.5), the expression in (2.6) can be rewritten as
follows
M(q)
1
= V
m
(q, q)
1
Y
1
(q, q) + (2.7)
11
where was introduced in (2.4) and Y
1
(q, q) R
np
denotes a measurable regression
matrix that is dened as follows
Y
1
(q, q) ] M(q)
(q)
q
q +V
m
(q, q)(q) +G(q). (2.8)
Based on the openloop error system in (2.7), a number of control designs could be
utilized to ensure velocity eld tracking (i.e.,
1
(t) 0) given the assumption in
(1.1). Motivated by the desire to eliminate the assumption in (1.1), a norm squared
gradient term is incorporated in an adaptive controller introduced in [25] as follows
(t) ]
_
K +
_
_
_
_
V (q)
q
_
_
_
_
2
_
1
+Y
1
(q, q)
1
(2.9)
where K R
nn
is a constant, positive denite diagonal matrix, and
V (q)
q
was
introduced in (1.2). In (2.9),
(t) R
p
denotes a parameter estimate that is generated
by the following gradient update law
.
1
(t) =
1
Y
T
1
(q, q)
1
(2.10)
where
1
R
pp
is a constant, positive denite diagonal matrix. After substituting
(2.9) into (2.7), the following closedloop error system can be obtained
M(q)
1
= V
m
(q, q)
1
Y
1
(q, q)
_
K +
_
_
_
_
V (q)
q
_
_
_
_
2
_
1
(2.11)
where the parameter estimation error signal
1
(t) R
p
is dened as follows
1
(t) ]
1
. (2.12)
Remark 1 While the control development is based on a modication of the adaptive
controller introduced in [25], the norm squared gradient term could also be incorporated
in other benchmark controllers to yield similar results (e.g., sliding mode controllers).
Stability Analysis
To facilitate the subsequent stability analysis, the following preliminary theorem
is utilized.
12
Theorem 1 Let
V (t) R denote the following nonnegative, continuous dierentiable
function
2
()d (2.15)
where R is a positive constant, and (t) R is dened as follows
]
_
_
_
_
V (q)
q
_
_
_
_
1
. (2.16)
If (t) is a square integrable function, where
_
t
t
0
2
()d , (2.17)
and if after utilizing (2.5), the time derivative of
V (t) satises the following inequality
.
V (t)
3
(q) +
0
(2.18)
where
3
(q) is the class K function introduced in (1.2), and
0
R denotes a positive
constant, then q(t) is global uniformly bounded.
Proof: The time derivative of
V (t) can be expressed as follows
.
V (t) =
V (q)
q
(q) +
V (q)
q
1
2
(t)
where (2.5) and (2.15) were utilized. By exploiting the inequality introduced in (1.2)
and the denition for (t) provided in (2.16), the following inequality can be obtained
.
V (t)
3
(q) +
0
+
_
(t)
2
(t)
. (2.19)
13
After completing the squares on the bracketed terms in (2.19), the inequality intro
duced in (2.18) is obtained where
0
]
0
+
1
4
.
Hence, if (t) L
2
, then Lemma 5 in the appendix can be used to prove that q(t) is
GUB.
In the following analysis, we rst prove that (t) L
2
. Based on the conclusion
that (t) L
2
, the result from Theorem 1 is utilized to ensure that q(t) is bounded
under the proposed adaptive controller given in (2.9) and (2.10).
Theorem 2 The adaptive VFC given in (2.9) and (2.10) yields global velocity eld
tracking in the sense that
1
(t) 0. (2.20)
Proof: Let V
1
(t) R denote the following nonnegative function
V
1
]
1
2
T
1
M
1
+
1
2
T
1
1
1
1
. (2.21)
After taking the time derivative of (2.21) the following expression can be obtained
V
1
=
T
_
Y
1
(q, q)
1
+
_
K +
_
_
_
_
V (q)
q
_
_
_
_
2
_
1
_
T
1
1
1
.
1
(2.22)
where (2.3) and (2.11) were utilized. After utilizing the parameter update law given
in (2.10), the expression given in (2.22) can be rewritten as follows
V
1
=
T
1
_
K +
_
_
_
_
V (q)
q
_
_
_
_
2
_
1
. (2.23)
The expressions given in (2.16), (2.21), and (2.23) can be used to conclude that
1
(t),
1
(t) L
and
1
(t), (t) L
2
. From the fact that
1
(t) L
1
(t) L
and the
assumption that (q),
V (q)
q
L
provided q(t) L
1
(t),
1
(t) L
. Given
that
1
(t),
1
(t) L
and
1
(t) L
2
, Barbalats Lemma [25] can now be utilized to
prove (2.20).
14
Navigation Function Control Extension
Control Objective
The objective in this extension is to navigate a robots endeector along a collision
free path to a constant goal point, denoted by q
R
n
denotes
the constant goal point in the interior of D. Mathematically, the primary control
objective can be stated as the desire to ensure that
q(t) q
as t (2.24)
where the secondary control is to ensure that q(t) D. To achieve these two control
objectives, we dene (q) R
1
as a function (q) : D [0, 1] that is assumed to
satisfy the following properties:
P1) The function (q) is rst order and second order dierentiable (i.e.,
q
(q)and
q
_
q
(q)
_
exist on D).
P2) The function (q) obtains its maximum value on the boundary of D.
P3) The function (q) has unique global minimum at q (t) = q
.
P4) If
q
(q) = 0 then q (t) = q
.
Based on (2.24) and the above denition, an auxiliary tracking error signal, de
noted by
2
(t) R
n
, can be dened as follows to quantify the control objective
2
(t) ] q(t) +(q) (2.25)
where (q) =
q
(q) denotes the gradient vector of (q) dened as follows
(q) ]
_
q
1
q
2
...
q
n
_
T
. (2.26)
15
Remark 2 As discussed in [24], the construction of the function (q), coined a
navigation function, that satises all of the above properties for a general obstacle
avoidance problem is nontrivial. Indeed, for a typical obstacle avoidance, it does not
seem possible to construct (q) such that
q
(q) = 0 only at q (t) = q
. That is, as
discussed in [24], the appearance of interior saddle points (i.e., unstable equilibria)
seems to be unavoidable; however, these unstable equilibria do not really cause any
diculty in practice. That is, (q) can be constructed as shown in [24] such that only
a few initial conditions will actually get stuck on the unstable equilibria.
Benchmark Control Modication
To develop the openloop error dynamics for
2
(t), we take the time derivative of
(2.25) and premultiply the resulting expression by the inertia matrix as follows
M
2
= V
m
(q, q)
2
+Y
2
(q, q) + . (2.27)
where (2.1) and (2.25) were utilized. In (2.27), the linear parameterization Y
2
(q, q)
is dened as follows
Y
2
(q, q) ] M(q)f(q, q) +V
m
(q, q) (q) G(q) (2.28)
where Y
2
(q, q) R
nm
denotes a measurable regression matrix, R
m
was intro
duced in (2.4), and the auxiliary signal f(q, q) R
n
is dened as
f(q, q) ]
d
dt
((q))
= H(q) q (2.29)
where the Hessian matrix H(q) R
nn
is dened as follows
H(q) ]
q
2
1
q
1
q
2
2
q
1
q
n
q
2
q
1
q
2
2
2
q
2
q
n
q
n
q
1
2
q
2
n
. (2.30)
16
Based on (2.27) and the subsequent stability analysis, the following adaptive controller
introduced in [25] can be utilized
] k
2
Y
2
(q, q)
2
(2.31)
where k R is a positive constant gain, and
2
(t) R
n
denotes a parameter update
law that is generated from the following expression
.
2
(t) ]
2
Y
T
2
(q, q)
2
(2.32)
where
2
R
mm
is a positive denite, diagonal gain matrix. Note that the trajectory
planning is incorporated in the controller through the gradient terms included in
(2.28) and (2.29). After substituting (2.31) into (2.27) the following closed loop error
systems can be obtained
M
2
= V
m
(q, q)
2
k
2
+Y
2
(q, q)
2
(2.33)
where
2
(t) R
p
is dened as follows
2
(t) ]
2
. (2.34)
Stability Analysis
Theorem 3 The adaptive controller given in (2.31) and (2.32) ensures that the robot
manipulator tracks an obstacle free path to the unique goal conguration in sense that
q(t) q
as t (2.35)
provided the control gain k introduced in (2.31) is selected to be suciently large.
Proof: Let V
2
(q,
2
,
2
) R denote the following nonnegative function
V
2
] (q) +
_
1
2
T
2
M
2
+
T
2
1
2
2
_
. (2.36)
where R is an adjustable, positive constant. After taking the time derivative of
(2.36) the following expression can be obtained
V
2
= [(q)]
T
q +
T
2
_
k
2
+Y
2
(q, q)
2
_
T
2
1
2
.
2
(2.37)
17
where (2.3), (2.26), (2.33), and (2.34) were utilized. By utilizing (2.25), (2.32), and
the triangle inequality, the following expression can be obtained
V
2
1
2
(q)
2
(k 2)
2
2
. (2.38)
Provided k is selected suciently large to satisfy
k >
2
, (2.39)
it is clear from (2.2), (2.36), and (2.38) that
0 (q(t)) + (q, t) (q(0)) + (q(0), 0) (2.40)
where (q, t) R is dened as
(q, t) ]
_
m
2
(q)
2
2
(t)
2
+
max
{
1
2
}
_
_
_
2
(t)
_
_
_
2
_
. (2.41)
From (2.34), (2.40), and (2.41) it is clear that
2
(t), (q),
2
(t),
2
(t) L
. Let the
region D
0
be dened as follows
D
0
] {q(t) 0 (q(t)) (q(0)) + (q(0), 0)} . (2.42)
Hence (2.36), (2.38) and (2.40) can be utilized to show that q(t) D
0
provided
q(0) D
0
(i.e., q(t) D
0
q(0) D
0
). Based on property P1 given above, we now
know that (q) L
q(0) D
0
. Since
2
(t), (q) L
q(0) D
0
, (2.25) can
be used to conclude that q(t) L
q(0) D
0
; hence, property P1) and (2.29) can be
used to conclude that f(q, q) L
q(0) D
0
. Based on the previous boundedness
statements, (2.28) can be used to show that all signals are bounded q(0) D
0
.
Since all signals are bounded, it easy to show that
2
(t),
d
dt
((q)) L
q(0) D
0
,
and hence, (q),
2
(t) are uniformly continuous q(0) D
0
. From (2.38), it can
also be determined that (q),
2
(t) L
2
q(0) D
0
. From these facts, Barbalats
lemma [25] can be used to show that (q),
2
(t) 0 as t q(0) D
0
. Since
(q) 0 , property P4) given above can be used to prove that q(t) q
as t
q(0) D
0
. To ensure that q(t) will remain in a collisionfree region, we must account
18
for the eects of the (q(0), 0) term introduced in the denition of the region D
0
given in (2.42). To this end, we rst dene the region D
1
as follows
D
1
] {q(t) 0 (q(t)) < 1} (2.43)
where D
1
denotes the largest collisionfree region. It is now clear from (2.42) and
(2.43) that if the weighting constant is selected suciently small to satisfy
(q(0), 0) + (q(0)) < 1, (2.44)
then D
0
D
1
, and hence, the robot manipulator tracks an obstacle free path.
Experimental Verication
To investigate the behavior of the proposed trajectory planning controllers, both
the Adaptive VFC and Navigation Function Control Extension approaches were im
plemented on two links of the Barrett Whole Arm Manipulator (WAM).
Experimental Setup
The WAM is a highly dexterous backdriveable manipulator. The biggest challenge
faced during both experimental trials was not knowing the actual dynamics of the
WAM. To overcome this obstacle, 5 links of the robot were locked at a xed, specied
angle during the experiment and the remaining links of the manipulator were used as
a 2link planer robot manipulator (see Figure 2.1). In this conguration, the dynamics
of the robot can be expressed in the following form [26]
=
_
M
11
M
12
M
21
M
22
_ _
q
1
q
2
_
+
_
V
m
11
V
m
12
V
m
21
V
m
22
_ _
q
1
q
2
_
+
_
f
d
1
0
0 f
d
2
_ _
q
1
q
2
_
(2.45)
the entries of the inertia matrix are given as
M
11
= p
1
+ 2p
2
cos(q
2
)
M
12
= p
3
+p
2
cos(q
2
)
M
21
= p
3
+p
2
cos(q
2
)
M
22
= p
3
(2.46)
where p
1
, p
2
, p
3
denote the mass parameters. The entries of the centripetalCoriolis
matrix are formed to satisfy the skewsymmetric property between the inertia matrix
19
and the centripetalCoriolis matrix and are given as follows
V
M
11
= p
2
sin(q
2
) q
2
V
M
12
= p
2
sin(q
2
) q
1
p
2
sin(q
2
) q
2
V
M
21
= p
2
sin(q
2
) q
1
V
M
22
= 0.
(2.47)
The dynamic friction coecients were set equal to the following values: f
d
1
= 6.8 and
f
d
2
= 3.8. The gravitational eects do not appear in (2.45) because of the selection
of locked and actuated joints. The constant parameter vector dened in (2.4) was
constructed as follows
=
_
p
1
p
2
p
3
T
. (2.48)
All links of the WAM manipulator are driven by brushless motors supplied with
sinusoidal electronic commutation. Each axis has an encoder located at the motor
shaft for link position measurements. Since no tachometers are present for velocity
measurements, link velocity signals are calculated via a ltered backwards dierence
algorithm. An AMD Athlon 1.2GHz PC running QNX 6.1 RTP (Real Time Plat
form), a realtime microkernel based operating system, hosts the control algorithms
which were written in C++. To facilitate real time graphing, data logging and
online gain tuning, the inhouse graphical user interface, Qmotor 3.0 [21], was used.
Data acquisition and control implementation were performed at a frequency of 1.0kHz
using the ServoToGo I/O board.
Remark 3 The joint angles were measured using encoders having a resolution of
4096 counts per revolutions which may be considered inadequate for high precision
maneuvers. The WAM, noted for its light weight and speed, also imposes limitations
on the bandwidth of the control. Hence the constraints in the physical systems used
for testing, prevented the further increase of control gains even though additional
torque capacity was available for control. As the main intention in conducting these
experiments was merely to show that the control could be implemented on a physical
system, we believe the magnitude of the errors are not a reection of the performance
of these controllers.
20
Remark 4 For the Navigation Function Control Extension experiment, a workspace
with obstacles was dened in the WAMs task space. Four circular obstacles were
placed in the workspace and sized such that a path could be found around the obstacles.
From (2.40) and (2.41), the selection of the starting and goal points aect the actual
size of the workspace and obstacles, in the task space. This change in size is attributed
to the addition of (q(0), 0). Because the actual dynamics are unknown for the WAM,
it is impossible to calculate (q(0), 0), therefore the actual size of the obstacles and the
workspace were found experimentally and can be see in Figure 2.6. In Figure 2.6, a
small increase in the physical obstacles along with a small decrease in the size of the
workspace were determined.
Experimental Results
Adaptive VFC Approach
An experiment using two links of the WAM was utilized to demonstrate the perfor
mance of the adaptive VFC given in (2.9) and (2.10). For the experiment, a velocity
eld for a planar, circular taskspace contour was proven to meet the qualifying con
dition given in (1.2), (see Appendix for proof ) . Specically, the following contour
was utilized [4]
(x) = K(x)f(x)
_
2(x
1
x
c1
)
2(x
2
x
c2
)
_
+c(x)
_
2(x
2
x
c2
)
2(x
1
x
c1
)
_
. (2.49)
In (2.49), x
c1
= 0.54155[m] and x
c2
= 0.044075[m] denote the circle center, and the
functions f(x), K(x), and c(x) R are dened as follows
f(x) = (x
1
x
c1
)
2
+ (x
2
x
c2
)
2
r
2
o
(2.50)
K(x) =
k
0
_
f
2
(x)
_
_
_
f(x)
x
_
_
_ +
c(x) =
c
0
exp
_
_
f
2
(x)
_
_
_
_
f(x)
x
_
_
_
where r
o
= 0.2[m] denotes the circle diameter; = 0.005[m
3
] and = 20[m
1
] are
auxiliary parameters; and k
0
= 0.25[ms
1
] and c
0
= 0.25[ms
1
] are tracking speed
21
parameters. Values for the above parameters were chosen based on the velocity eld
presented in [4].
The following forward kinematics for the WAM,
_
x
1
x
2
_
=
_
1
cos(q
1
) +
2
cos(q
1
+q
2
)
1
sin(q
1
) +
2
sin(q
1
+q
2
)
_
(2.51)
and manipulator Jacobian
J(q) =
_
1
sin(q
1
)
2
sin(q
1
+q
2
)
2
sin(q
1
+q
2
)
1
cos(q
1
) +
2
cos(q
2
+ q
1
)
2
cos(q
2
+q
1
)
_
(2.52)
where utilized to implement this control problem. In (2.51) and (2.52), the robot
link lengths denoted by
1
= 0.558,
2
= 0.291[m]. From (2.49)(2.52) the jointspace
velocity eld can be determined as follows (q) = J
1
(q)(x). Upon starting each
experimental run, a PD controller was utilized to move the WAM to the following
initial position
q =
_
0 90 90 60 90 20 0
T
, all in degrees (2.53)
then links 2, 3, 5, 6 and 7 were locked (the desired setpoint of the PD controller was
set to the initial values) while links 1 and 4 were driven by the proposed controller.
The control gains were adjusted by trial and error, with the best performance found
with the following values
K = diag(25, 15) = diag(3, 1, 5)
where diag() denotes a diagonal matrix with the arguments as the diagonal entries.
Figures 2.2  2.5 depicts the trajectories of the WAMs endeector, the velocity eld
tracking errors, the parameter estimates, and the control torque inputs, respectively.
Navigation Function Control Extension
To illustrate the performance of the controller given in (2.31) and (2.32), an ex
periment using two links of the WAM was also performed which navigated the robots
end eector from an initial position to the taskspace goal point, denoted by x
][x
1
,
22
Figure 2.1 Front view of the experimental setup.
23
Figure 2.2 Experimental and Desired Trajectories
24
Figure 2.3 Velocity eld tracking errors
25
Figure 2.4 Parameter Estimates
26
Figure 2.5 Control torques
27
x
2
]
T
R
2
. The same forward kinematic and Jacobian as dened in (2.51) and (2.52)
were utilized. A taskspace potential function (x) was chosen as follows
(x) =
x x
2
_
x x
2
+
0
4
_
1/
(2.54)
where x(t) ][x
1
(t), x
2
(t)]
T
R
2
. In (2.54), R denotes some positive integer. If
is selected large enough, it is proven in [12] that (x, x
2
(x),
3
(x),
4
(x) R are dened as follows
0
= r
2
0
(x
1
x
1r
0
)
2
(x
2
x
2r
0
)
2
(2.55)
1
= (x
1
x
1r
1
)
2
+ (x
2
x
2r
1
)
2
r
2
1
2
= (x
1
x
1r
2
)
2
+ (x
2
x
2r
2
)
2
r
2
2
3
= (x
1
x
1r
3
)
2
+ (x
2
x
2r
3
)
2
r
2
3
4
= (x
1
x
1r
4
)
2
+ (x
2
x
2r
4
)
2
r
2
4
.
In (2.55), (x
1
x
1r
i
) and (x
2
x
2r
i
) where i = 0, 1, 2, 3, 4 are the centers of the
boundary and obstacles respectively, r
0
, r
1,
r
2
, r
3
, r
4
R are the radii of the boundary
and obstacles respectively. From (2.54) and (2.55) it is clear that the model space is
a circle that excludes four smaller circles described by the obstacle functions
1
(x),
2
(x),
3
(x),
4
(x). For this experiment the modelspace conguration is selected as
follows
x
1r
0
= 0.5064 x
2r
0
= 0.0275 r
0
= 0.28
x
1r
1
= 0.63703 x
2r
1
= 0.11342 r
1
= 0.03
x
1r
2
= 0.4011 x
2r
2
= 0.0735 r
2
= 0.03
x
1r
3
= 0.3788 x
2r
3
= 0.1529 r
3
= 0.03
x
1r
4
= 0.6336 x
2r
4
= 0.12689 r
4
= 0.03
where all values are in meters.
The control gains were adjusted to the following values which yielded the best
performance
= 14 k = 45
2
= diag (0.02, 0.01, 0.01) .
28
Upon starting each experimental run, a PD controller was utilized to move the WAM
to the following initial position
q =
_
58.84 90 90 140.72 11.5 84.5 0
T
, all in degrees (2.56)
then links 2, 3, 5, 6 and 7 were locked (the desired setpoint of the PD controller was
set to the initial values) while links 1 and 4 were driven by the proposed controller.
The actual trajectory of the WAM robots end eector can be seen in Figure 2.6.
From Remark 4, the outer circle pairs in Figure 2.6 depicts the physical and actual
boundaries for the work space. The four inner circle pairs depict the physical and
actual boundaries for the obstacles. Figure 2.6 illustrates that the robots end eector
avoids the actual obstacles as it moves to the goal point. The parameter estimates
and control torque inputs are provided in Figures 2.7 and 2.8, respectively.
29
Figure 2.6 Experimental Trajectory for Navigation Function based Controller
30
Figure 2.7 Parameter estimates
31
Figure 2.8 Control torque inputs.
CHAPTER 3
NONLINEAR ADAPTIVE MODEL FREE CONTROL
OF AN AEROELASTIC 2D LIFTING
SURFACE
Conguration of the 2D Wing Structural Model
Figure 3.1 shows a schematic for a plungingpitching typical wingap section that
is considered in the present analysis. This model has been widely used in aeroelastic
analysis [46],[47]. The plunging (h) and pitching () displacements are restrained by
a pair of springs attached to the elastic axis of the airfoil (EA) with spring constants
k
h
and k
b
mx
b I
_ _
h
_
+
_
c
h
0
0 c
_ _
h
_
+
_
k
h
0
0 k
()
_ _
h
_
=
_
L
M
_
(3.1)
where k
() denotes the pitch spring constant a particular choice for which is pre
sented in Section 5. The quasisteady aerodynamic lift (L) and moment (M) can be
33
Figure 3.1 2D wing section aeroelastic model
34
modeled as [72]
L = U
2
bc
l
_
+
h
U
+
_
1
2
a
_
b
U
_
+ U
2
bc
l
M = U
2
b
2
c
m
_
+
h
U
+
_
1
2
a
_
b
U
_
+ U
2
b
2
c
m
(3.2)
One can transform the governing EOM of (3.1) into the following equivalent state
space form as follows [66]
z = f(z) +g(z)
y = z
2
(3.3)
where z (t) =
_
z
1
(t) z
2
(t) z
3
(t) z
4
(t)
T
R
4
is a vector of system states that
is dened as follows
z =
_
h
h
T
(3.4)
(t) R
1
is a ap deection control input, y (t) R
1
denotes the designated output,
while f(z), g (z) R
1
assume the following form
f(z) =
z
3
z
4
k
1
z
1
(k
2
U
2
+p (z
2
)) z
2
c
1
z
3
c
2
z
4
k
3
z
1
(k
4
U
2
+q (z
2
)) z
2
c
3
z
3
c
4
z
4
g(z) =
0
0
g
3
U
2
g
4
U
2
, g
4
= 0
(3.5)
where p (z
2
) , q (z
2
) R
1
are continuous, linear parameterizable nonlinearities in the
output variable resulting from the nonlinear pitch spring constant k
A, C
_
where
A R
44
, C R
14
are explicitly dened as
A =
0 1 0 0
k
4
U
2
c
4
k
3
c
3
0 0 0 1
k
2
U
2
c
2
k
1
c
1
, C =
1
0
0
0
. (3.6)
A sucient condition for the system of (3.3) to be observable from the pitch angle
output (t) is given as
d
obs
= k
2
3
k
3
c
3
c
1
+c
2
3
k
1
= 0
35
which is easily satised by the nominal model parameters of (3.49) presented in Sec
tion 5. After a series of coordinate transformations, we can then express the governing
equations for the aeroelastic model in the following convenient state space form
x = Ax +(y) +B
y = C
T
x =
_
1 0 0 0
x
(3.7)
where
= U
2
is an auxiliary control input, x(t) =
_
x
1
(t) x
2
(t) x
3
(t) x
4
(t)
T
R
4
is a new vector of system states, A R
44
, B R
4
are explicitly dened as
A =
0 1 0 0
0 0 1 0
0 0 0 1
0 0 0 0
, B =
(3.8)
where
i
i = 2, 3, 4 are constants that are explicitly dened in Appendix B. In (3.7)
above, (y) =
_
1
(y)
2
(y)
3
(y)
4
(y)
T
R
4
is a smooth vector eld that
can be linearly parameterized as follows
(y) = W(y) =
p
j=1
j
W
j
(y) (3.9)
where R
p
is a vector of constant unknowns, W(y) R
4p
is a measurable,
nonlinear regression matrix while the notation W
j
() R
4
denotes the j
th
column
of the regression matrix W () j = 1...p. For a particular choice of the pitch
spring nonlinearity k
0
= A
0
+kC
T
(x
0
) (3.13)
37
where k R
4
is a gain vector chosen so that A
0
= A kC
T
is Hurwitz. Given
that A and C
T
x = y are measurable, one can see that the design of (3.13) is readily
implementable. Moving on to the second term in the rst equation of (3.7), one can
now exploit the linear parameterization of (3.9) and the fact that A
0
is Hurwitz in
order to design a set of implementable observers
j
(t) R
4
for W
j
(y) as follows
j
= A
0
j
+W
j
(y) j = 1....p (3.14)
Moving on to the last term in the rst equation of (3.7), the following facts are readily
noticeable: (a) B is unmeasurable, and (b) B
=
4
j=2
j
e
j
(3.15)
where the notation e
j
represents the j
th
standard basis vector on R
4
. Motivated by
the observation design of (3.14), it is easy to design the following implementable set
of observers for (t) as follows
i
= A
0
i
+e
i
i = 2, 3, 4 (3.16)
Given (3.13),(3.14), and (3.16), we are now in a position to patch together an unmea
surable state estimate x(t) as follows
x =
0
+
p
j=1
j
+
4
j=2
j
(3.17)
By taking the derivative of (3.11) and substituting for the dynamics of (3.7) and
(3.17), we can obtain the following exponentially stable estimation error system
.
x= A
0
x (3.18)
Although the state estimate is unmeasurable, the worth of the design above lies in
the linear parameterizability of the right hand side of the expression of (3.17)as well
as the exponential stability of the estimation error as will be amply evident in Section
4.
38
Adaptive Control Design and Stability Analysis
After taking the derivative of the regulation error e
1
(t) and substituting (3.7) for
the system dynamics, we obtain
e
1
= x
2
+
p
j=1
j
W
j,1
(y) (3.19)
Since x
2
(t) is unmeasurable, we add and subtract (3.17) to the right hand side of the
above equation in order to obtain the open loop error dynamics as follows
e
1
=
0,2
+
p
j=1
j
[
j,2
+W
j,1
(y)] +
4
j=2
j,2
+ x
2
(3.20)
where (3.11) has been utilized where the notation
j,i
denotes the i
th
element in the j
th
column vector of . From (3.16), it is easy to notice that
2,2
(t) is only one derivative
removed from the control input
(t) and is a suitable candidate for a virtual control
input. By applying the backstepping methodology in [74], we dene an auxiliary error
signal e
2
(t) R
1
as follows
e
2
=
2,2
2,2,d
(3.21)
where
2,2d
(t) R
1
is a yet to be designed backstepping signal. After adding and
subtracting
2,2d
(t) as well as a feedback term (c
11
d
11
) e
1
to the right hand side
of (3.20) and rearranging the terms in a manner amenable to the pursuant analysis,
one obtains
e
1
=
2
_
1
2
(c
11
+d
11
) e
1
+
1
2
02
+
p
j=1
1
2
j
[
j,2
+W
j,1
(y)] +
4
j=3
1
2
j
j,2
_
+
2
2,2d
(c
11
+d
11
) e
1
+
2
e
2
+ x
2
(3.22)
where c
11
, d
11
are positive gain constants. It is easy to see that the terms inside
the brackets in the above equation can be linearly parameterized as
T
1
0
where
0
,
1
(t) R
p1
denote, respectively, a vector of unknown constants and a measurable
regression vector and are explicitly written as follows
T
0
=
_
1
2
1
2
1
1
2
2
.....
1
2
p
1
2
3
1
2
4
(3.23)
39
T
1
=
_
c
11
e
1
+d
11
e
1
+
0,2
1,2
+W
1,1
(y)
2,2
+W
2,1
(y) .....
p,2
+W
p,1
(y)
3,2
4,2
(3.24)
One can now succinctly rewrite the expression of (3.22) as follows
e
1
=
2
_
T
1
0
+
2,2d
(c
11
+d
11
) e
1
+
2
e
2
+ x
2
(3.25)
Given the structure of (3.25) above, we design the desired backstepping signal
2,2d
(t)
as follows
2,2d
=
T
1
0
(3.26)
After substituting this control input into the open loop expression of (3.25), one can
obtain the following expression
e
1
=
2
T
1
0
(c
11
+d
11
) e
1
+
2
e
2
+ x
2
(3.27)
where the denition of (3.12) has been utilized. At this stage, one can dene a
Lyapunov like function to perform a preliminary stability analysis at this stage as
follows
V
1
=
1
2
e
2
1
+

2

2
T
0
0
+d
1
11
x
T
P x (3.28)
where P R
44
is a positivedenite symmetric matrix such that PA
0
+A
T
0
P = I.
After taking the derivative of V
1
(t) along the system trajectories of (3.18) and (3.27)
and rearranging the terms, we obtain
V
1
= c
11
e
2
1
+
2

_
T
0
1
e
1
sign(
2
)
T
0
.
0
_
+
2
e
1
e
2
d
11
e
2
1
+e
1
x
2
d
11
x
T
x (3.29)
From (3.29), the choice for a dynamic adaptive update law is made as follows
.
0
= sign (
2
)
1
e
1
(3.30)
After substituting (3.30) into (3.29) and performing some algebraic manipulation, an
upperbound for
V
1
(t) can be obtained in the following manner
V
1
c
11
e
2
1
3
4
d
1
11
x
T
x +
2
e
1
e
2
(3.31)
where the rst two terms are negative denite terms and the last term is an indenite
interconnection term which is dealt with in the next stage of control design. Our next
40
step involves design of our actual control input
(t) which is easily reachable through
dierentiation of (3.21) along the dynamics of (3.16) as follows
e
2
= k
2
2,1
+
2,3
+
2,2d
(3.32)
From (3.26) and (3.24), it is obvious that
2,2d
() can be represented in the following
manner
2,2d
=
2,2d
(y, y
d
,
0,2
,
1,2
,
2,2
, ...,
p,2
,
3,2
,
4,2
,
0
) (3.33)
Since
3,2
(t) ,
4,2
(t) are more than one derivative away from the control, potential
singularities in the control design are avoided when the desired control signal
2,2d
()
is dierentiated as follows
2,2d
=
2,2d
y
_
02
+
p
j=1
j
[
j,2
+W
j,1
(y)] +
4
j=2
j,2
+ x
2
_
+
2,2d
0
[A
0
0
+ky]
+
p
j=1
2,2d
j
[A
0
j
+W
j
(y)] +
4
j=3
2,2d
j
A
0
j
+
2,2d
0
[sign (
2
)
1
e
1
]
(3.34)
It is now possible to separate out (3.34) into its measurable and immeasurable com
ponents as follows
2,2d
=
2,2dm
+
2,2du
+
2,2d
y
x
2
(3.35)
where
2,2dm
(t) R
1
denotes the measurable components that is matched with the
control input
(t) and can be done away via direct cancellation,
2,2du
(t) R
1
denotes unmeasurable components that are linearly parameterizable and dealt with
via the design of an adaptive update law, while the last term can be damped out
owing to the exponential stability of (3.18) (For explicit expressions for
2,2dm
(t) and
2,2du
(t), we refer the reader to the Appendix B). We are now in a position to rewrite
the openloop dynamics for e
2
(t) in the following manner
e
2
= k
2
2,1
+
2,3
+
2,2dm
2,2d
y
x
2
2
e
1
+ [
2
e
1
2,2du
] (3.36)
where we have added and subtracted
2
e
1
to the right hand side of (3.32) and (3.35)
has been utilized. It is important to note that the bracketed term in the above
expression can be parameterized as
T
2
1
where
1
,
2
(t) R
p1
denote, respectively,
41
a vector of unknown constants and a measurable regression vector and are explicitly
written as follows
T
1
=
_
1
2
.....
p
2
3
4
(3.37)
T
2
=
_
2,2d
y
{
1,2
+W
1,1
(y)} ...
2,2d
y
{
p,2
+W
p,1
(y)}
2,2d
y
2,2
+e
1
3,2
4,2
_
(3.38)
Motivated by our preliminary stability analysis and the structure of (3.36), we design
our control input
(t) as follows
= c
22
e
2
d
22
_
2,2d
y
_
2
e
2
2,3
+k
2
2,1
+
2,2dm
T
2
1
(3.39)
where c
22
, d
22
are positive gain constants. After substituting this control input into
(3.36), we obtain the closed loop dynamics for e
2
(t) in the following manner
e
2
= c
22
e
2
d
22
_
2,2d
y
_
2
e
2
2,2d
y
x
2
2
e
1
+
T
2
1
(3.40)
Motivated by the subsequent stability analysis, we design an update law for
1
(t) as
follows
.
1
=
2
e
2
(3.41)
Toward a nal stability analysis, we augment the function of (3.28) as follows
V
2
= V
1
+
1
2
e
2
2
+
T
1
1
+d
1
22
x
T
P x (3.42)
The time derivative of V
2
(t) along the closedloop trajectories of (3.40) and (3.41)
yields the following upperbound
V
2
c
11
e
2
1
c
22
e
2
2
3
4
_
d
1
11
+d
1
22
_
x
T
x (3.43)
where we utilized the nonlinear damping argument [75]. Since
V
2
(t) is negative semi
denite and V
2
(t) is positive denite, we can conclude that V
2
(t) L
. Therefore,
from (3.28) and (3.42), it is easy to see that
0
(t) ,
1
(t) L
and e
1
(t) , e
2
(t) , x(t)
L
L
2
. From the apriori boundedness of y
d
,
0
,
1
, we can say that y (t) ,
0
(t) ,
1
(t)
L
4,1
(t) ,
4,2
(t) ,
4,3
(t) can be readily established. Hence,
1
(t) ,
2,2d
(t) L
from
(3.24) and (3.26) which implies from the denition of (3.21) that
2,2
(t) L
. From
this and the preceding assumptions, one can state that
2
(t) ,
(t) , (t) L
. From
(3.16), one can guarantee the boundedness of all
i
i = 2, 3, 4. Thus, all the states of
the system are bounded in closedloop operation. From (3.27) and (3.40), one can see
that e
1
(t) , e
2
(t) L
+p
1
+p
2
= p
2
u p
2
= 0 (3.44)
where (t) R
1
denotes the actual control surface deection while u(t) is the ap
actuator output and the de facto control input signal. In this Section, we will discuss
3 dierent strategies to include the eect of the dynamics of the ap.
If p
1
and p
2
are chosen such that the dynamics of (3.44) are faster than the dynam
ics of (3.1), then, similar to a cascaded control structure, we can neglect the coupling
of the plunge and pitch motion of the wing section. The signal (t) introduced in
(3.1) and designed subsequently in (3.39) can be treated as a desired ap deection
d
(t) and the control input u (t) can be simply chosen to be
u(t) =
d
+p
1
d
+p
2
d
such that the closedlop system for (3.44) becomes
+p
1
+p
2
= 0 (3.45)
which is an exponentially stable system and where =
d
denotes the error be
tween the desired and actual ap deection. Thus, (t) tracks
d
(t) exponentially
43
fast. However, if the actuator has slow dynamics, then one must consider the inter
connection between the dynamics of (3.1) and (3.44). One must then rewrite (3.36)
as follows
e
2
= k
2
2,1
+
2,3
+U
2
d
2,2dm
2,2d
y
x
2
2
e
1
+
T
2
1
+U
2
(3.46)
and design
d
(t) as follows
d
= U
2
_
c
22
e
2
d
22
_
2,2d
y
_
2
e
2
2,3
+k
2
2,1
+
2,2dm
T
2
1
+U
2
_
(3.47)
A substitution of the above desired ap deection signal into (3.46) leaves a U
2
r
mismatched indenite term in the closedloop dynamics for e
2
(t) where r (t) ] (t)+
(t) is a ltered error variable. This mismatch must be compensated for in the control
design for u(t) . We begin by writing the openloop dynamics for the actuator as
follows
r = + =
d
p
1
p
2
+ +p
2
u
Motivated by the structure of our dynamics, we can design the control input signal
as follows
u(t) = p
1
2
_
d
+p
1
+p
2
k
u
r U
2
e
2
_
(3.48)
which yields the actuator dynamics closedloop form of
r = k
u
r U
2
e
2
We can now augment the function of (3.42) as V
3
= V
2
+
1
2
r
2
which yields after
dierentiation along the system closedloop trajectories the following upperbound
V
3
c
11
e
2
1
c
22
e
2
2
3
4
_
d
1
11
+d
1
22
_
x
T
x k
u
r
2
Again, a signal chasing argument would show that lim
t
e
1
(t) , e
2
(t) , x(t) , r (t) =
0. It is to be noted here that the control redesign of (3.47) and (3.48) does not
aect either the state estimation strategy or the parameter update laws. However,
we needed to utilize measurements of the state variables (t) ,
(t) as well as the
model parameters p
1
and p
2
. In order to mitigate this dependence and design a true
44
output feedback control strategy with measurements of only the pitch displacement
as well as adaptively compensate for uncertainty in p
1
and p
2
, one would need to
comprehensively redesign the estimation, adaptation, and control strategies. The
methodology followed in this research would still be valid but the estimation strategy
would have to include observers for the new states, i.e.,
1
(t) and its time derivative.
Additionally, the parameter update laws would need to be augmented to compensate
for the eects of the actuator model in terms of p
1
and p
2
.
Simulation Results
The model described in (3.3) was simulated using the output feedback observation,
control, and estimation algorithm of (3.13), (3.14), (3.16), (3.30), (3.39), and (3.41).
The values for the model parameters were taken from [65] and are listed below
b = 0.135 [m] k
h
= 2844.4 [Nm
1
] c
h
= 27.43 [Nm
1
s
1
]
c
= 0.065 [kgm
2
] x
= [0.0873 (b +ab)] /b
a = 0.4
(3.49)
The nonlinear pitch spring stiness was represented by a quintic polynomial as follows
k
y
(y) =
5
i=1
i
y
i1
(3.50)
where the unknown
i
extracted from experimental data [66] are given as
{
i
} =
_
2.8 62.3 3709.7 24195.6 48756.9
T
The rst simulation was run with freestream velocity U = 20 [ms
1
] while the initial
conditions for the system states, observed variables, and estimates were chosen as
follows
(0) = 0.1 [rad] (0) = 0 [rads
1
] h(0) = 0 [m]
h(0) = 0 [ms
1
]
0
(0) = 0
1
(0) = 0
0
(0) = 0
j
(0) = 0
j
(0) = 0
(3.51)
Figure 3.2 shows the closedloop plunging, pitching, and control surface deection
timehistories for the case when the actuation is on at time zero. In the presence
of active control, the system states are regulated to the origin fairly rapidly. It is
45
interesting to note that the adaptive control strategy is able to quickly regulate the
plunging and pitching response in the presence of large uncertainty in the parameters.
The second simulation was run with freestream velocity U = 15 [ms
1
] with the same
initial conditions as (3.51). However, in this case, the system is allowed to evolve
openloop (i.e., (t) 0) for 15 [s] to observe the development of an LCO. During
the rst 15 seconds, the estimation strategy is allowed to evolve according to the
dynamics of (3.13), (3.14), (3.16). This is possible because the estimation strategy
is designed to be stable independent of the control methodology used. However, the
estimation strategy of (3.30) and (3.41) are turned o because of their dependence
on the control strategy of (3.39). At t = 15 [s], the control is turned on and the
immediate attenuation of oscillations can be seen in Figures 3.3 and 3.4. Again, the
response is seen to be quick in the presence of large model uncertainty.
Remark 6 It is worth remarking that the methodology presented here can be ex
tended to the compressible ight speed regimes. In this sense, pertinent aerodynamics
for the compressible subsonic, supersonic and hypersonic ight speed regimes have to
be applied [73]. However, the goal of this research was restricted to the issue of the
illustration of the methodology and to that of highlighting the importance of the im
plementation of the nonlinear adaptive control on the lifting surfaces equipped with a
ap.
46
Figure 3.2 Time evolution of the closedloop plunging and pitching deections and
ap deection for U = 20 [ms
1
]; U = 1.6U
Flutter
47
Figure 3.3 Timehistory of pitching deection for U =[15 ms
1
], U = 1.25U
Flutter
48
Figure 3.4 Phasespace of the pitching deection for U = 15 [ms
1
], U = 1.25U
Flutter
CHAPTER 4
TRACKING CONTROL OF AN UNDERACTUATED
UNMANNED UNDERWATER
VEHICLE.
System Model
The kinematics of the UUV attitude can be represented by the unit quaternion
[86] q(t) ]
_
q
o
(t) q
v
(t)
R R
3
, which describes the orientation of an orthogonal
coordinate frame B attached to the UUV center of mass with respect to an inertial
reference frame I expressed in I and is governed by the following dierential equation
[80]
q
v
=
1
2
(q
v
+q
o
)
q
o
=
1
2
q
T
v
(4.1)
where (t) R
3
is the angular velocity of B with respect to I expressed in B.The
rotation matrix that translates vehicle coordinates into inertial coordinates is denoted
by R(q) R
33
and is calculated from the following formula [87]
R
T
(q) =
_
q
2
o
q
T
v
q
v
_
I
3
+ 2q
v
q
T
v
2q
o
S (q
v
) (4.2)
where I
3
R
33
is the 3 3 identity matrix, and S () R
33
is a skewsymmetric
matrix with the following mapping characteristics
S (y) =
0 y
3
y
2
y
3
0 y
1
y
2
y
1
0
y R
3
. (4.3)
The translational kinematic relationship for the UUV is given by the following ex
pression [84]
p = Rv (4.4)
where p (t) R
3
represents the position of the origin of B expressed in I, v (t) R
3
denotes the translational velocity of the UUV with respect to I expressed in B.
50
The translational and rotational equations of motion for the UUV are given by the
following [78]
M v = S () Mv +h(v) +B
1
u
1
J w = S (v) Mv S () J +g () +B
2
u
2
(4.5)
where M, J R
33
denote the positive denite, constant mass and inertia matrices,
respectively, h(v) R
31
captures the hydrodynamic damping interactions, B
1
=
_
1 0 0
T
R
3
, u
1
(t) R
1
represents the translational force input, g () R
31
is the matrix of gravitational and buoyant eects, B
2
= I
3
R
33
, and u
2
R
3
denotes the rotation vector input.
Remark 7 The following properties of the rotation matrix for R(q) and the skew
symmetric matrix S () will be utilized in the subsequent control development[78]
I)
R = RS ()
II) R
T
R = I
3
III) S
T
(y) = S (y) y R
3
(4.6)
As previously mentioned, Property I of (4.6) represents a slight shift in the structure
of the dynamics for R(q) [78] as compared to previous work.
Problem Formulation
Our main objective is to design the translational force input u
1
(t) and the rota
tional torque input u
2
(t) such that p (t) tracks a suciently smooth desired position
trajectory p
d
(t) R
3
(i.e., p
d
(t), p
d
(t), p
d
(t),
...
p
d
(t) L
1
_
_
_
1
(v) (4.14)
where the parameter mismatch term is represented by
1
(t) =
1
1
R
m
,
1
R
m
represents the best guess estimate of
1
, and
1
(v (t)) R
1
denotes a positive
bounding function that is designed to be nondecreasing in v (t). The linearly
parameterizing assumption of (4.13) should not be considered a limiting constraint
on the structure of h(v) as the eects of parameter mismatch for the nonLP case
could be incorporated into the robust control structure.
After substituting (4.13) into (4.12) for h(v), the openloop ltered tracking error
dynamics can be rewritten as follows
r = S () r +Y
1
1
M
1
e
p
+M
1
e
v
R
T
p
d
+ (M
1
I
3
) R
T
p
d
+M
1
e
p
+ [B
1
u
1
S () ]
(4.15)
where the fact that S () = S () has been utilized and the term M
1
e
p
(t) R
3
has been added and subtracted to the right hand side of (4.15). The bracketed term
of (4.15) can be manipulated into the following form
[B
1
u
1
S () ] = B
(4.16)
53
where the auxiliary matrix B
R
33
and the auxiliary vector (t) R
3
are dened
by the following
B
1 0 0
0 0
1
0
1
0
=
_
u
1
2
3
T
. (4.17)
At this point, the scalar control input u
1
(t) is designed in the following manner
u
1
=
_
1 0 0
u
1
+
1
(4.18)
where u
1
(t) R
3
represents an auxiliary control input vector. After substituting
(4.18) into (4.16), the openloop ltered tracking error dynamics for r (t) can be
expressed as follows
r = S () r +Y
1
1
M
1
e
p
+M
1
e
v
R
T
p
d
+ (M
1
I
3
) R
T
p
d
+M
1
e
p
+ [B
u
1
+B
z]
(4.19)
where the auxiliary vector signal z (t) R
3
is dened by the following
z = B
z
u
1
, B
z
=
0 0 0
0 1 0
0 0 1
. (4.20)
Based on the structure of (4.19), the auxiliary control input u
1
(t) is designed in the
following manner
u
1
= B
1
_
M
1
e
v
+R
T
p
d
(M
1
I
3
) R
T
p
d
M
1
e
p
Y
1
1
k
r
r
_
2
1
(v
s
) r
1
(v
m
) r
m
+
1
__
(4.21)
where k
r
R
1
denotes a positive scalar control gain,
1
R
1
represents a small
positive constant, and the functions
s
and
m
are dened in the following manner
y
s
=
_
y
T
y +
y
m
=
_
y
T
y +
= y
s
y R
3
(4.22)
where R
1
denotes a small positive scalar constant. After substituting (4.21) into
the openloop dynamics of (4.19), the closedloop ltered tracking error dynamics for
r (t) are given by the following expression
r = k
r
r S () r M
1
e
p
+B
z
+
_
Y
1
(v)
2
1
(v
s
) r
1
(v
m
) r
m
+
1
_
. (4.23)
54
Remark 8 Based on the denitions of (4.22) and the nondecreasing characteristic
of
1
(v (t)), the following inequality will be used in the subsequent stability to bound
the eects of the parameter mismatch term Y
1
(v)
1
(v
s
)
1
(v) >
1
(v
m
) . (4.24)
Remark 9 The functions given in (4.22) are utilized in lieu of the standard Euclidean
norm in u
1
(t) so as to ensure that the time derivative of
1
(v (t)) is welldened
when backstepping on the signal z (t). That is, the time derivative of
s
and
m
can be calculated from the following expression
d
dt
y
s
=
d
dt
y
m
=
y
T
y
(y
T
y + )
y R
3
(4.25)
The remainder of the control design involves the development of the torque input
u
2
(t) such that the auxiliary variable z (t) is regulated to zero. That is if z (t) = 0,
then the auxiliary control input signal u
1
(t) is injected into the ltered position
tracking error dynamics (4.19) via the angular velocity vector (t) in order to promote
position tracking.
After taking the time derivative of z (t), multiplying both sides of the resulting ex
pression by the unknown inertia matrix J, and substituting in the rotational dynamics
of (4.5), the openloop dynamics for z (t) are given by the following expression
J z = S (v) Mv +
_
g () S () J JB
z
.
u
1
_
+B
2
u
2
(4.26)
where the explicit elements of
.
u
1
(t) are provided in the Appendix. At this point, the
structure of g () is assumed to be of a form to allow the bracketed term of (4.26) to
be linearly parameterized into the following form
Y
2
(p, , v)
2
= g () S () J JB
z
.
u
1
(4.27)
where Y
2
(p, , v) R
3n
denotes a known regression vector, and
2
R
n
is the vector
of unknown constants. In addition, it is also assumed that the following bounding
relationship exists
_
_
_Y
2
(p, v, )
2
_
_
_
2
() (4.28)
55
where (t) =
_
p
T
v
T
T
R
9
, the parameter mismatch term is represented
by
2
(t) =
2
2
R
m
,
2
R
m
represents the best guess estimate of
2
,
and
2
() R
1
denotes a positive bounding function that is designed to be non
decreasing in (t). If the structure of g () does not allow for (4.27), then the eects
of the bracketed terms of (4.26) could be compensated for through the development
of an appropriate bounding function.
After substituting (4.27) into (4.26), the open loop dynamics for z (t) are given
by the following expression
J z = S (v) Mv +Y
2
2
+B
2
u
2
. (4.29)
Based on the structure of (4.29) and the subsequent stability analysis, the control
torque u
2
(t) is designed in the following manner
u
2
= B
1
2
_
S (v) Mv Y
2
2
k
z
z B
2
(
s
) z
2
(
m
) z
m
+
2
_
(4.30)
where k
z
R
1
is represents a positive, scalar control gain and
2
represents a small
positive constant. After substituting (4.30) into (4.29), the closedloop dynamics for
z (t) are given by the following
J z = k
z
z B
T
r +
_
Y
2
2
2
(
s
) z
2
(
m
) z
m
+
2
_
. (4.31)
Theorem 4 The translational force input u
1
(t) of (4.18) and the rotational torque
input u
2
(t) of (4.30) ensure that the tracking error signal e
p
(t) is exponentially driven
into an arbitrarily small neighborhood about zero in the sense that
e
p
(t) V (0) exp
_
2
t
_
+ (4.32)
where
3
,
2
, R
1
are positive constants (explicitly dened in the subsequent proof )
and V (0) R
1
represents the initial condition of the subsequent Lyapunov candidate
and is explicitly calculated as
V (0) =
1
2
r
T
(0) r (0) +
1
2
z
T
(0) Jz (0) +
1
2
e
T
p
(0) e
p
(0) . (4.33)
56
Proof. In order to illustrate the tracking result of (4.32), the following nonnegative
scalar function, denoted by V (t), is dened as follows
V =
1
2
r
T
r +
1
2
z
T
Jz +
1
2
e
T
p
e
p
, (4.34)
and V (t) can be upper and lower bounded by the following inequality
2
V
2
2
(4.35)
where =
_
r
T
z
T
e
T
p
T
R
9
and the constant parameters
1
,
2
R
1
are given
by
1
= min
_
1
2
,
1
2
min
(J)
_
,
2
= max
_
1
2
,
1
2
min
(J)
_
(4.36)
where
min
{J} denotes the minimum eigenvalue of the matrix J. After taking the
time derivative of (4.34), substituting in the closedloop dynamics of (4.23), (4.31),
and (4.8), and cancelling common terms, the time derivative of V (t) is given by the
following
V = k
r
r
2
k
z
z
2
e
T
p
M
1
e
p
+e
T
_
M
1
+ (M
1
I
3
) R
T
p
d
_
+
_
r
T
Y
1
2
1
(v
s
) r
2
1
(v
m
) r
m
+
1
_
+
_
z
T
Y
2
2
(
s
) z
2
2
(
m
) z
m
+
2
_
. (4.37)
After utilizing (4.14), (4.28), and (4.24),
V (t) of (A.5) can be upperbounded in the
following manner
V k
r
r
2
k
z
z
2
min
{M
1
}
2
3
2
_
e
2
p
+
_
1
+
2
+
2
0
2
3
_
(4.38)
where
3
R
1
represents a positive constant and
0
R
1
denotes a positive bounding
constant that satises the following inequality
0
>
_
_
M
1
+
_
M
1
I
3
_
R
T
p
d
_
_
. (4.39)
If the control gain is selected such that
>
3
2
min
{M
1
}
, (4.40)
57
then the time derivative of V (t) can be rewritten as
V
3
2
+
2
V + (4.41)
where
3
= min
_
k
r
, k
z
,
min
{M
1
}
2
3
2
_
R
1
, =
_
1
+
2
+
2
0
2
3
_
R
1
,
and (4.35) has been utilized. From (4.41), it can be observed that V (t) is driven
exponentially into a ball with radius . Standard linear arguments can then be applied
to (4.41) to obtain (4.32) in the sense that
e
p
(t) < V (t) V (0) exp
_
2
t
_
+ . (4.42)
Standard signal tracing can now be employed to illustrate that all signals remain
bounded during closedloop operation.
Simulation Results
In order to evaluate performance and to observe the details of implementation, the
UUV system of (4.1) through (4.5) was simulated in MATLABs Simulink software
package. The explicit structure of the matrices of (4.5) are given as
M =
m
1
0 0
0 m
2
0
0 0 m
3
, J =
J
1
0 0
0 J
2
0
0 0 J
3
(4.43)
h(v) =
h
1
+h
2
v
1
 0 0
0 h
3
+h
4
v
2
 0
0 0 h
5
+h
6
v
3

, (4.44)
g () =
g
1
+g
2

1
 0 0
0 g
3
+g
4

2
 0
0 0 g
5
+g
6

3

(4.45)
where the parameter values for the system were selected according to [83]
m
1
= 215 (kg) m
2
= 265 (kg)
m
3
= 265 (kg) J
1
= 40 (kg m
2
)
J
2
= 80 (kg m
2
) J
3
= 80 (kg m
2
)
h
1
= 70 (N) h
2
= 100 (N sec /m
2
)
h
3
= 100 (N) h
4
= 200 (N sec /m)
h
5
= 200 (N) h
6
= 50 (N sec /m)
g
1
= 30 (Nm) g
2
= 50 (Nm sec)
g
3
= 50 (Nm) g
4
= 100 (Nm sec)
g
5
= 50 (Nm) g
6
= 100 (Nm sec)
(4.46)
58
The initial attitude of the vehicle was set to q(0) =
_
0.9486, 0.1826, 0.1826, 0.1826
T
,
and the desired position trajectory p
d
(t) was selected as the following soft start sinu
soidal trajectory
p
d
(t) =
1.0 sin(0.1t)
_
1.0 e
(0.01t
2
)
_
1.0 sin(0.1t)
_
1.0 e
(0.01t
2
)
_
1.0 sin(0.1t)
_
1.0 e
(0.01t
2
)
_
(m) . (4.47)
Based on the structure of (4.44) and (4.45), the parameter estimate vectors
1
R
6
and
2
R
9
of (4.21) and (4.30), respectively, are explicitly given as
1
=
_
h
1
h
2
h
3
h
4
h
5
h
6
2
=
_
g
1
g
2
g
3
g
4
g
5
g
6
J
1
J
2
J
3
, (4.48)
and each estimate was set to 50% of its corresponding true value. In an eort to
achieve the best position tracking error, the following control gains were determined
through a trial and error fashion
k
r
= 600.0 k
z
= 600.0 = 700.0
1
= 0.1
= 0.001
1
= 0.01
2
= 0.01
. (4.49)
The resulting tracking error signal e
p
(t) is shown Figure 1.
59
Figure 4.1 Position Tracking Error e
p
(t)
CHAPTER 5
CONCLUSIONS
In the second chapter two trajectory planning and adaptive tracking controllers are
presented. The benchmark adaptive tracking controller by Slotine [25] was modied
to achieve velocity eld tracking in the presence of parametric uncertainty in the
robot dynamics. By incorporating a norm squared gradient term to the VFC, the
boundedness of all signals can be proven without the typical assumption that bounds
the integral of the velocity eld. An extension was then provided that also modies a
standard adaptive controller by incorporating a gradient based term. Using standard
backstepping techniques, a Lyapunov analysis was used to prove that a navigation
function could be incorporated in the control design to ensure the robot remained
on an obstacle free path within an expanded conguration space to reach a goal
conguration. Experimentation results illustrated the performance of the tracking
controller and the online path planning capabilities of the controller.
In chapter three, results related to the adaptive control of a nonlinear plunging
pitching wing section operating in an incompressible ight speed have been presented.
We used a true output feedback strategy that utilizes only measurements of the
pitching variable . The performance of the adaptive model free control is analyzed.
It clearly appears that the model free control is able to eciently control the LCO.
Modication of the control law to include the apactuator dynamics is also presented.
Future work will involve the use of an unsteady aerodynamic model instead of a quasi
steady one.
In chapter four, a controller has been presented that provides three dimensional
position tracking (i.e., longitudinal, latitudinal, and depth) of an unmanned under
water vehicle with only one translational actuator and three axial torque actuators.
The robust control structure also compensates for parameter uncertainty in the vehicle
inertia, hydrodynamic damping coecients, and gravitational/buoyancy coecients.
The presented controller extends upon the work of [78] by removing the full rank on
61
the input matrix condition by utilizing a modied translational velocity error signal
and by carefully crafting the tracking error systems from the stability analysis.
Future work will include the investigation of incorporating ywheels into the un
deractuated UUV application to serve as both energy source and torque actuator. In
addition, exciting applications are emerging for UUVs to serve as an actuator plat
form. For example, position tracking enabled units could be incorporated within a
platoon of underwater vehicles [91]. The emerging area of active sonar arrays will
require the array to reposition itself to better detect the angle of arrival of an inbound
target. A position tracking capable underwater vehicle actuated by ywheels wheel
may best serve as the sonar array actuation source since ywheels will cause a min
imum disturbance to the surrounding water thereby preventing potential distortion
in the incoming sonar signal.
APPENDICES
63
Appendix A
Experiment Velocity Field Selection
This VFC development is based on the selection of a velocity eld that is rst order
dierentiable, and that a rst order dierentiable, nonnegative function V (q) R
exists such that the following inequality holds
V (x)
x
(x)
3
(x) +
0
(A.1)
where
V (q)
q
denotes the partial derivative of V (q) with respect to q(t),
3
() R is
a class K function
3
, and
0
R is a nonnegative constant. For the experiment, a
velocity eld for a planar, circular taskspace contour was selected which satises the
inequality given in (A.1). Specically, the following contour was utilized
x = (x) = 2K(x)f(x)
_
(x
1
x
c1
)
(x
2
x
c2
)
_
+c(x)
_
2(x
2
x
c2
)
2(x
1
x
c1
)
_
. (A.2)
In (A.2), x
c1
= 0.54155[m] and x
c2
= 0.044075[m] denote the circle center, and the
functions f(x), K(x), and c(x) R are dened as follows
f(x) = (x
1
x
c1
)
2
+ (x
2
x
c2
)
2
r
2
o
(A.3)
K(x) =
k
0
_
f
2
(x)
_
_
_
f(x)
x
_
_
_ +
c(x) =
c
0
exp
_
_
f
2
(x)
_
_
_
_
f(x)
x
_
_
_
.
To prove that (A.2) meets (A.1), let V (x) R denote the following nonnegative,
continuous dierentiable function
V (x) ]
1
2
x
T
x. (A.4)
Taking the time derivative of (A.4), the following can be written
V = x
T
(x)
3
(x) +
0
(A.5)
3
A continuous function : [0, ) [0, ) is said to belong to class K if it is strictly increasing
and (0) = 0 [9].
64
where
3
(x) and
0
were dened in (A.1). To prove this inequality we write
x
T
(x) = x
T
[2K(x)f(x)x + 2K(x)f(x)x
c
] +c(x)
_
x
1
x
2
_
2(x
2
x
c2
)
2(x
1
x
c1
)
_
(A.6)
where x =
_
x
1
x
2
T
and x
c
=
_
x
c1
x
c2
T
. After simplifying the expression in
(A.6), the following expression can be written
x
T
(x) = 2K(x)f(x)x
T
x + 2K(x)f(x)x
T
x
c
+ 2c(x) (x
1
x
c2
x
c1
x
2
) . (A.7)
The following inequalities will be applied to (A.7)
(x
1
x
c2
x
c1
x
2
) x x
c
(A.8)
x x
c
x
2
+
1
x
c
2
(A.9)
where R
+
constant. It is now possible to write (A.7) as
x
T
(x) 2 [K(x)f(x) (K(x)f(x) +c(x)) ] x
2
(A.10)
+
2 [K(x)f(x) +c(x)]
x
c
2
where (A.8) and (A.9) were utilized. If is selected such that
K(x)f(x)
K(x)f(x) +c(x)
(A.11)
then it is trivial to show that
x
T
(x)
3
(x) +
0
(A.12)
where
3
(x) = 2 [K(x)f(x) (K(x)f(x) +c(x)) ] x
2
(A.13)
and
0
=
2 [K(x)f(x) +c(x)]
x
c
2
. (A.14)
Proof of Lemma
Lemma 5 Given a continuously dierentiable function, denoted by V (q), that satis
es the following inequalities
0 <
1
(q) V (q)
2
(q) +
b
(A.15)
65
with a time derivative that satises the following inequality
V (q)
3
(q) +
0
, (A.16)
then q(t) is GUB, where
1
(),
2
(),
3
() are class K functions, and
0
,
b
R denote
positive constants.
Proof:
4
Let R be a positive function dened as follows
]
1
3
(
0
) > 0 (A.17)
where
1
3
() denotes the inverse of
3
(), and let B(0, ) denote a ball centered about
the origin with a radius of . Consider the following 2 possible cases.
The initial condition q(t
0
) lies outside the ball B(0, ) as follows
< q(t
0
)
1
(A.18)
where
1
R is a positive constant. To facilitate further analysis, we dene the
operator d() as follows
d(
1
) ] (
1
1
2
)(
1
) +
1
1
(
b
) > 0 (A.19)
where (
1
1
2
) denotes the composition of the inverse of
1
() with
2
() (i.e., the
inverse of the function
1
() is applied to the function
2
()). After substituting the
constant d(
1
) into
1
(), the following inequalities can be determined
1
(d
1
) =
2
(
1
) +
b
2
(q(t
0
)) V (q(t
0
)) (A.20)
where the inequalities provided in (A.15) and (A.18) were utilized.
Assume that q () R for t
0
t < lies outside the ball B(0, ) as follows
< q () . (A.21)
From (A.16) and (A.21), the following inequality can be determined
V (q ())
3
() +
0
,
4
This proof is based on the proof for Theorem 2.14 in [23].
66
and hence, from the denition for in (A.17), it is clear that
V (q ()) 0. (A.22)
By utilizing (A.20) and the result in (A.22), the following inequalities can be devel
oped for some constant
1
(d
1
) V (q(t
0
)) V (q()) V (q( +))
1
(q( +)) . (A.23)
Since
1
() is a class K function, (A.19) and (A.23) can be used to develop the following
inequality
q(t) d(
1
) = (
1
1
2
)(
1
) +
1
1
(
b
) t t
0
(A.24)
provided the assumption in (A.21) is satised. If the assumption in (A.21) is not
satised, then
q(t) =
1
3
(
0
) t t
0
. (A.25)
Hence, q(t) is GUB for Case A.
The initial condition q(t
0
) lies inside the ball B(0, ) as follows
q(t
0
)
1
. (A.26)
If q(t) remains in the ball, then the inequality developed in (A.25) will be satised.
If q(t) leaves the ball, then the results from Case A can be applied. Hence, q(t) is
GUB for Case B.
Code for Velocity Field Control
// ============================================================
// QMotor  A PC Based RealTime Graphical Control Environment
// (c) 2000 QRTS
// 
// Control Program : vfc.cpp
// Description : Velocity field controller for two link robot
// Author : Anup Lonkar
67
// Start Date : Tue Feb 03 16:36:12 UTC 2004
// =============================================================
//  QRTS libraries 
#include ControlProgram.hpp
#include IOBoardClient.hpp
//  C standard libraries 
#include <math.h>
//  WAM includes 
#include ManipulatorControl.hpp
#include WAMControlClient.hpp
#include WAMControl.hpp
#include ButterworthFilter.hpp
#include Matrix.hpp
#include ColumnVector.hpp
#define DEG TO RAD 0.01745329252
#define RAD TO DEG 57.29577951308
//===============================================================
//vfc class derived from the WAMControl class
//===============================================================
class vfc : public WAMControl
{
protected:
//  Log Variables 
ColumnVector<2> q; //Joint angles
ColumnVector<2> qdot; //Joint velocities
ColumnVector<2> z; //endeffector coordinates in base frame
ColumnVector<2> zdot; //endeffector velocities
ColumnVector<2> Vz; //VF in cartesian coods
ColumnVector<2> Vq; //VF in terms of joint angles
double fz; //functions used in Vz the velocity function
68
double A;
double B;
double C;
double Kz;
ColumnVector<5> theta hat; //Adaptation variable
ColumnVector<5> theta hat dot;
ColumnVector<5> oldtheta hat dot;
double normVsq; //The bound in the control
ColumnVector<2> eta; // velocity field tracking error
ColumnVector<2> tau; //control input in link space
//  Variables for the regression matrix 
double den1;
double Asq;
double exmu;
Matrix<2,5> Y; // The regression matrix
//  Control Parameters 
double rad; //radius of the circlur contour
double centerOfTaskSpaceX, //center coods of the workspace
centerOfTaskSpaceY;
Matrix<5,5> Gamma; //Gain in the adaptive update law
Matrix<2,2> K; //Gain matrix in control
double c0; //Constant parameters used in the VF
double k0;
double mu;
double epsilon;
double l[2]; //Link lengths
//  PD Control variables 
double d initPosition[7]; //Initial joint position
double PDCycleTime, PDSpeed, offset; //PD control run time
double controlRunTime; //VF control algortihm runtime
69
double PDSetPoint[7], old PDSetPoint[7]; //PD control set points
//  Friction compensation terms 
double fric[2], fric comp[2]; //Static friction compensation terms
double torque thres[2];
double zeta[2]; //vel limit gain for static friction compensation
double fdp[2], fdm[2]; //Dynamic friction compensation
//  Other variables 
Matrix<2,2> J; // Jacobian matrix
Matrix<2,2> Jinv; // Inverse of the Jacobian matrix
double flag; // Required for the function moveFinalPosition()
double gain[2];
double alpha[2]; // Gain used in constructing eta
double gg;
ButterworthFilter<double> velocity filter1;
ButterworthFilter<double> velocity filter2;
double cutOffFrequency1;
double q1old, q2old;
ColumnVector<2> newqdot;
//  Clients 
IOBoardClient *iobc;
public:
vfc(int argc, char *argv[]) : WAMControl (argc, argv){};
// Constructor. Usually no need to make changes here
~vfc () {};
// Destructor. Usually no need to make changes here
//  User Functions 
// Functions to be written by the user in order to implement
// his control. The user does not need to implement all of them,
// but usually at least enterControl(), startControl(), control()
// and exitControl() are implemented.
70
virtual int enterControl();
virtual int startControl();
virtual void calculateControlLaw();
virtual int stopControl();
virtual int exitControl();
virtual void moveFinalPosition();
};
//========================================================
// vfc::enterControl
// 
// This function is called when the control program is
// loaded. In standalone mode, this happens immediately.
// When using the GUI, it happens when the user loads the
// control program. Use this function to register control
// parameters, to register log variables and to initialize
// control parameters. After this function has completed,
// the base class ControlProgram will try to load a
// configuration file and eventually overwrite the initialized
// values. To indicate an error in enterControl() and to
// prevent the loading of the control, set the d status data
// member to error and return 1
//==========================================================
int vfc::enterControl()
{
WAMControl::enterControl();
//  Register the log variables 
registerLogVariable (q, q, Joint angles);
registerLogVariable (qdot, qdot, Joint velo);
registerLogVariable (z, z, EE cord. in base frame);
71
registerLogVariable (zdot, zdot, EE velocities);
registerLogVariable (tau, tau, Joint level torques);
registerLogVariable (eta, eta, VF tracking error);
registerLogVariable ((double *) fric, fric, fric compn, 2);
registerLogVariable (theta hat, theta hat, theta hat);
registerLogVariable (Vz, Vz, VF in task space coods);
registerLogVariable (Vq, Vq, VF in joint coods);
registerLogVariable (&fz, fz, Contour function);
registerLogVariable (&Kz, Kz, Contour function);
registerLogVariable (&A, A, A);
registerLogVariable (&B, B, B);
registerLogVariable (&C, C, C);
registerLogVariable (&normVsq, normVsq, norm V squared);
registerLogVariable (newqdot, newqdot, Joint velocities);
//  Register the control parameters 
registerControlParameter (&PDSpeed, PDSpeed, PDSpeed);
registerControlParameter ((double *) PDSetPoint, PDSetPoint,
PDSetPoint in degrees, 7);
registerControlParameter (&PDCycleTime, PDCycleTime,
PD runtime);
registerControlParameter (&controlRunTime, controlRunTime,
Control run time);
registerControlParameter (&rad, rad, Radius of contour);
registerControlParameter (¢erOfTaskSpaceX,
centerOfTaskSpaceX,X cood of center of Task Space);
registerControlParameter (¢erOfTaskSpaceY,
centerOfTaskSpaceY,Y cood of center of Task Space);
registerControlParameter (Gamma, Gamma,
Gain matrix for update);
72
registerControlParameter (K, K, Gain matrix in control);
registerControlParameter ((double *) torque thres,
torque thres,
Toruqe threshold for friction compensation, 2);
registerControlParameter ((double *) fric comp, fric comp,
Static friction compensation, 2);
registerControlParameter ((double *) zeta, zeta, zeta, 2);
registerControlParameter ((double *) fdp, fdp,
Dynamic friction compensation positive velocity, 2);
registerControlParameter ((double *) fdm, fdm,
Dynamic friction compensation negative velocity, 2);
registerControlParameter (&c0, c0, c0);
registerControlParameter (&k0, k0, k0);
registerControlParameter (&mu, mu, mu);
registerControlParameter (&epsilon, epsilon,
epsilon);
registerControlParameter ((double *) l, l,
Link lengths, 2);
registerControlParameter ((double *) gain, gain,
gain, 2);
registerControlParameter ((double *) alpha, alpha,
alpha, 2);
registerControlParameter (&gg, gg,
gain before adaptation in control);
registerControlParameter (&cutOffFrequency1,
cutOffFrequency1,Cutoff Frequency for velocity filter);
// Set all control parameters initially to zero
clearAllControlParameters();
73
// Start message
d messageStream
<<endl<<<<d applicationName<<<<endl<<endl;
return 0;
}
//===============================================================
// vfc::startControl
// 
// Called each time a control run is started. If running from
// the GUI, this will be called each time the START button is
// pushed. Do setup that must occur before each control run here
// (eg. initializing some counters, connections to needed servers).
// Log variables are initialized here. To indicate an error in
//enterControl() and to prevent control execution, set the d status
// data member to error and return 1
//================================================================
int vfc::startControl()
{
WAMControl::startControl();
clearAllLogVariables();
//  Initialize the filters  //
velocity filter1.setCutOffFrequency(cutOffFrequency1);
velocity filter1.setDampingRatio(0.9);
velocity filter1.setSamplingTime(d controlPeriod);
velocity filter1.setAutoInit();
velocity filter2.setCutOffFrequency(cutOffFrequency1);
velocity filter2.setDampingRatio(0.9);
velocity filter2.setSamplingTime(d controlPeriod);
74
velocity filter2.setAutoInit();
getCurrentPosition();
for(int i=0;i<d numActiveJoints;i++)
{
d initPosition[i] = d position[i];
d unfilteredDesiredPosition[i] = d position[i];
d controlTorque[i] = 0.0;
old PDSetPoint[i]=PDSetPoint[i];
}
WAMControl::enableArmPower();
offset=0.0;
tau(1)=0.0;
tau(2)=0.0;
flag = 1;
oldtheta hat dot = 0.0,0.0,0.0,0.0,0.0;
theta hat = 0.5,0.5,0.5,5.0,5.0;
return 0;
}
//=========================================================
// vfc::calculateControlLaw
//
//
//=========================================================
void vfc::calculateControlLaw()
{
75
if(d elapsedTime > (PDCycleTime + controlRunTime) )
{
moveFinalPosition();
}
else
{
for (int i=0; i <d numActiveJoints; i++)
{
if(PDSetPoint[i] != old PDSetPoint[i]) // then change
//the initial position and apply the time offset
{
//cout<<changed initPosition<<endl;
offset = d elapsedTime;
for (int j=0; j < d numActiveJoints; j++)
d initPosition[j] = d position[j];
}
//Trajectory calculation
double blah = exp(PDSpeed * (d elapsedTimeoffset) *
(d elapsedTimeoffset) * (d elapsedTimeoffset));
d unfilteredDesiredPosition[i] = (PDSetPoint[i]  d initPosition[i])
* (1  blah) + d initPosition[i];
//save the old set points
old PDSetPoint[i] = PDSetPoint[i];
if(d elapsedTime < PDCycleTime)
{
q1old = d positionRad[0];
q2old = d positionRad[3];
76
}
}
WAMControl::calculateControlLaw();
if(d elapsedTime >= PDCycleTime)
{
q(1) = d positionRad[0];
q(2) = d positionRad[3];
newqdot(1) = velocity filter1.filter((q(1)  q1old)/d controlPeriod);
newqdot(2) = velocity filter2.filter((q(2)  q2old)/d controlPeriod);
q1old = q(1);
q2old = q(2);
qdot(1) = d velocityRad[0];
qdot(2) = d velocityRad[3];
//qdot(1) = newqdot(1);
//qdot(2) = newqdot(2);
//forward kinematics
z(1) = l[0]*cos(q(1)) + l[1]*cos(q(1)+q(2));
z(2) = l[0]*sin(q(1)) + l[1]*sin(q(1)+q(2));
//Jacobian
J(1,1) = l[0]*sin(q(1))  l[1]*sin(q(1)+q(2));
J(1,2) = l[1]*sin(q(1)+q(2));
77
J(2,1) = l[0]*cos(q(1)) + l[1]*cos(q(1)+q(2));
J(2,2) = l[1]*cos(q(1)+q(2));
Jinv = inverse(J); //Jacobian inverse
//Start with the VFC
fz = (z(1)  centerOfTaskSpaceX)*(z(1)  centerOfTaskSpaceX) +
(z(2)  centerOfTaskSpaceY)*(z(2)  centerOfTaskSpaceY) 
rad*rad;
A = sqrt(fz*fz);
B = 2*sqrt( ( (z(1)  centerOfTaskSpaceX)*(z(1)  centerOfTaskSpaceX)
+
(z(2)  centerOfTaskSpaceY)*(z(2)  centerOfTaskSpaceY) ) );
Kz = k0/(A*B + epsilon);
C = c0*exp(mu*A)/B;
Vz(1) = 2*Kz*fz*(z(1)  centerOfTaskSpaceX) 
2*C*(z(2)  centerOfTaskSpaceY);
Vz(2) = 2*Kz*fz*(z(2)  centerOfTaskSpaceY) +
2*C*(z(1)  centerOfTaskSpaceX);
Vq = Jinv*Vz;
eta(1) = qdot(1)  alpha[0]*Vq(1);
eta(2) = qdot(2)  alpha[1]*Vq(2);
/*************** Regression Matrix **************/
den1 = pow( (z(1)*z(1)2*z(1)*centerOfTaskSpaceX+
78
centerOfTaskSpaceX*centerOfTaskSpaceX+z(2)*z(2)
2*z(2)*centerOfTaskSpaceY+centerOfTaskSpaceY*
centerOfTaskSpaceY) ,0.5);
Asq = (z(1)centerOfTaskSpaceX)*(z(1)centerOfTaskSpaceX)+
(z(2)centerOfTaskSpaceY)*(z(2)centerOfTaskSpaceY)rad*rad ;
exmu = exp(mu*A);
Y(1,1) = ( sin(q(1)+q(2)) *( c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon) )/
(l[0]*sin(q(2)))cos(q(1)+q(2)) *( c0*exmu*(z(1)centerOfTaskSpaceX)/
den1  2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon) )/
(l[0]*sin(q(2))))*qdot(1)+( sin(q(1)+q(2)) *( c0*exmu*(z(2)
centerOfTaskSpaceY)/den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/
(2*A*den1+epsilon) )/(l[0]*sin(q(2)))+
cos(q(1)+q(2)) *( c0*exmu*(z(2)centerOfTaskSpaceY)/den1
2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon))*
(cos(q(2)))/(l[0]*sin(q(2))*sin(q(2)))
cos(q(1)+q(2)) *( c0*exmu*(z(1)centerOfTaskSpaceX)/
den1  2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon))/
(l[0]*sin(q(2)))+sin(q(1)+q(2)) *( c0*exmu*(z(1)centerOfTaskSpaceX)/
den12*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon))*
(cos(q(2)))/(l[0]*sin(q(2))*sin(q(2))))*qdot(2) ;
Y(1,2) = ( 2*cos(q(2)) *
(sin(q(1)+q(2)) *( c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon) )/
(l[0]*sin(q(2)))cos(q(1)+q(2)) *( c0*exmu*(z(1)centerOfTaskSpaceX)/
den1  2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon) )/
(l[0]*sin(q(2))))+cos(q(2)) *((l[0]*sin(q(1))l[1]*sin(q(1)+q(2)))
*(c0*exmu*(z(2)centerOfTaskSpaceY)/
den12*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon) )/
(l[1]*l[0]*sin(q(2)))+(l[0]*cos(q(1))+l[1]*cos(q(1)+q(2))) *
79
( c0*exmu*(z(1)centerOfTaskSpaceX)/
den1  2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon) )/
(l[1]*l[0]*sin(q(2)))))*qdot(1)
+( 2*cos(q(2)) *(sin(q(1)+q(2)) *
( c0*exmu*(z(2)centerOfTaskSpaceY)/den1 
2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon) )/
(l[0]*sin(q(2)))+cos(q(1)+q(2)) *
( c0*exmu*(z(2)centerOfTaskSpaceY)/den1 
2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon))*
(cos(q(2)))/(l[0]*sin(q(2))*sin(q(2)))
cos(q(1)+q(2)) *( c0*exmu*(z(1)centerOfTaskSpaceX)/
den1  2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon))/
(l[0]*sin(q(2)))+sin(q(1)+q(2)) *( c0*exmu*(z(1)centerOfTaskSpaceX)/
den12*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon))*
(cos(q(2)))/(l[0]*sin(q(2))*sin(q(2))))
+cos(q(2)) *(sin(q(1)+q(2)) *( c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon) )/
(l[0]*sin(q(2)))(l[0]*cos(q(1))+l[1]*cos(q(1)+q(2))) *
( c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon))*
(cos(q(2)))/(l[1]*l[0]*sin(q(2))*sin(q(2)))
+cos(q(1)+q(2)) *( c0*exmu*(z(1)centerOfTaskSpaceX)/den1 
2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon) )/
(l[0]*sin(q(2)))(l[0]*sin(q(1))+l[1]*sin(q(1)+q(2))) *
( c0*exmu*(z(1)centerOfTaskSpaceX)/
den1  2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon))*
(cos(q(2)))/(l[1]*l[0]*sin(q(2))*sin(q(2)))))*qdot(2)
sin(q(2))*qdot(2) *( cos(q(1)+q(2)) *
(c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon))/
80
(l[0]*sin(q(2)))sin(q(1)+q(2))*(c0*exmu*(z(1)centerOfTaskSpaceX)/
den1  2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon) )/
(l[0]*sin(q(2))))
sin(q(2))*(qdot(1)+qdot(2))*((l[0]*cos(q(1))+l[1]*cos(q(1)+q(2)))*
( c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/
(2*A*den1+epsilon) )/(l[1]*l[0]*sin(q(2)))
+(l[0]*sin(q(1))+l[1]*sin(q(1)+q(2)))*
(c0*exmu*(z(1)centerOfTaskSpaceX)/
den1  2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon) )/
(l[1]*l[0]*sin(q(2)))) ;
Y(1,3) = ( (l[0]*sin(q(1))l[1]*sin(q(1)+q(2)))*
(c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon))/
(l[1]*l[0]*sin(q(2)))+(l[0]*cos(q(1))+l[1]*cos(q(1)+q(2)))*
(c0*exmu*(z(1)centerOfTaskSpaceX)/
den1  2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon))/
(l[1]*l[0]*sin(q(2))))*qdot(1)
+(sin(q(1)+q(2)) *( c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon))/
(l[0]*sin(q(2)))(l[0]*cos(q(1))+l[1]*cos(q(1)+q(2)))*
( c0*exmu*(z(2)centerOfTaskSpaceY)/
den12*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon))*
(cos(q(2)))/(l[1]*l[0]*sin(q(2))*sin(q(2)))
+cos(q(1)+q(2)) *( c0*exmu*(z(1)centerOfTaskSpaceX)/
den1  2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon))/
(l[0]*sin(q(2)))(l[0]*sin(q(1))+l[1]*sin(q(1)+q(2)))*
( c0*exmu*(z(1)centerOfTaskSpaceX)/
den12*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon))*
(cos(q(2)))/(l[1]*l[0]*sin(q(2))*sin(q(2))))*qdot(2) ;
81
Y(1,4) = qdot(1);
Y(1,5) = 0;
Y(2,1) = 0;
Y(2,2) = cos(q(2))*( sin(q(1)+q(2))*
( c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon))/
(l[0]*sin(q(2)))cos(q(1)+q(2))*( c0*exmu*(z(1)centerOfTaskSpaceX)/
den1  2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon) )/
(l[0]*sin(q(2))))*qdot(1)
+cos(q(2))*( sin(q(1)+q(2))*(c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon))/
(l[0]*sin(q(2)))+cos(q(1)+q(2))*(c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon))*
(cos(q(2)))/(l[0]*sin(q(2))*sin(q(2)))
cos(q(1)+q(2)) *( c0*exmu*(z(1)centerOfTaskSpaceX)/
den1  2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon))/
(l[0]*sin(q(2)))+sin(q(1)+q(2))*
( c0*exmu*(z(1)centerOfTaskSpaceX)/
den1  2*k0*Asq*(z(2)centerOfTaskSpaceY)/
(2*A*den1+epsilon))*(cos(q(2)))/(l[0]*sin(q(2))*sin(q(2))))*qdot(2)
+sin(q(2))*qdot(1)*(cos(q(1)+q(2))*(c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon))/
(l[0]*sin(q(2)))sin(q(1)+q(2))*(c0*exmu*(z(1)centerOfTaskSpaceX)/
den1  2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon))/
(l[0]*sin(q(2))));
Y(2,3) = (sin(q(1)+q(2)) *( c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon) )/
(l[0]*sin(q(2)))cos(q(1)+q(2))*(c0*exmu*(z(1)centerOfTaskSpaceX)/
den1  2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon))/
82
(l[0]*sin(q(2)))+(l[0]*sin(q(1))l[1]*sin(q(1)+q(2)))*
(c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/(2*A*den1+epsilon))/
(l[1]*l[0]*sin(q(2)))+(l[0]*cos(q(1))+l[1]*cos(q(1)+q(2)))*
(c0*exmu*(z(1)centerOfTaskSpaceX)/den1
2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon) )/
(l[1]*l[0]*sin(q(2))))*qdot(1)
+(cos(q(1)+q(2))*(c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/
(2*A*den1+epsilon))*(cos(q(2)))/(l[0]*sin(q(2))*sin(q(2)))
+sin(q(1)+q(2))*(c0*exmu*(z(1)centerOfTaskSpaceX)/den1 
2*k0*Asq*(z(2)centerOfTaskSpaceY)/(2*A*den1+epsilon))*
(cos(q(2)))/(l[0]*sin(q(2))*sin(q(2)))
(l[0]*cos(q(1))+l[1]*cos(q(1)+q(2)))*
(c0*exmu*(z(2)centerOfTaskSpaceY)/
den1  2*k0*Asq*(z(1)centerOfTaskSpaceX)/
(2*A*den1+epsilon) )*(cos(q(2)))/(l[1]*l[0]*sin(q(2))*
sin(q(2)))(l[0]*sin(q(1))+l[1]*sin(q(1)+q(2)))*
(c0*exmu*(z(1)centerOfTaskSpaceX)/
den12*k0*Asq*(z(2)centerOfTaskSpaceY)/
(2*A*den1+epsilon))*(cos(q(2)))/(l[1]*l[0]*sin(q(2))*
sin(q(2))))*qdot(2) ;
Y(2,4) = 0;
Y(2,5) = qdot(2);
/************************************************/
//The Adaptive Update Law
theta hat dot = 1*Gamma*transpose(Y)*eta;
83
//Trapeziodal rule to calculate theta hat
theta hat = theta hat + 0.5*(theta hat dot + oldtheta hat dot)*
d controlPeriod;
oldtheta hat dot = theta hat dot;
//Norm of the bound squared
normVsq = 4*(q(1)*q(1) + q(2)*q(2));
//Control law
tau(1) = K(1,1)*eta(1)  gain[0]*normVsq*eta(1) +
gg*( Y(1,1)*theta hat(1) +
Y(1,2)*theta hat(2) +
Y(1,3)*theta hat(3) +
Y(1,4)*theta hat(4) +
Y(1,5)*theta hat(5) );
tau(2) = K(2,2)*eta(2)  gain[1]*normVsq*eta(2) +
gg*( Y(2,1)*theta hat(1) +
Y(2,2)*theta hat(2) +
Y(2,3)*theta hat(3) +
Y(2,4)*theta hat(4) +
Y(2,5)*theta hat(5));
//Compute the friction compensation
fric[0]=0.0;
fric[1]=0.0;
if(tau(1) > torque thres[0])
fric[0] = fric comp[0]*exp(zeta[0]*abs(d velocityRad[0]))
+ fdp[0]*d velocityRad[0];
if(tau(1) < torque thres[0])
84
fric[0] = fric comp[0]*exp(zeta[0]*abs(d velocityRad[0]))
+ fdm[0]*d velocityRad[0];
if(tau(2) > torque thres[1])
fric[1] = fric comp[1]*exp(zeta[1]*abs(d velocityRad[3]))
+ fdp[1]*d velocityRad[3];
if(tau(2) < torque thres[1])
fric[1] = fric comp[1]*exp(zeta[1]*abs(d velocityRad[3]))
+ fdm[1]*d velocityRad[3];
//Output the control torques to joint 1 and joint 4
//the remaining torques are computed from the PD control
d controlTorque[0] = tau(1) + fric[0];
d controlTorque[3] = tau(2) + fric[1];
}
}
}
//==============================================================
// vfc::moveFinalPosition()
//
// Function used to move the robot to a safe restposition after
// the control has been executed. Starts from any position,
// have to do a bit of work to initialize the variables to current
// joint positions before doing the PD control.
//===============================================================
void vfc::moveFinalPosition()
{
85
//Move the robot to this rest position
double finalPosition[1][4]= { {0, 90, 0, 0} };
if(flag == 1)
{
flag = 0;
offset = d elapsedTime;
// This is needed to initialize the control variables in Manipula
// torControl() so that the PD control can run for the changed robot
// position i.e after the controlLaw has executed
ManipulatorControl::initializeControlVariables();
for (int j=0; j < d numActiveJoints; j++)
{
// Start from the current position
d initPosition[j] = d position[j];
// Need to do this for the first cycle of the PD control
// to reinitialize d positionErrorRad[] in
//ManipulatorControl::control()
d positionErrorRad[j] = 0.0;
}
}
for (int i=0; i < d numActiveJoints; i++)
{
//Trajectory calculation
double halb = exp(0.1 * (d elapsedTimeoffset) *
(d elapsedTimeoffset) * (d elapsedTimeoffset));
d unfilteredDesiredPosition[i] =
(finalPosition[0][i]  d initPosition[i]) *
86
(1  halb) + d initPosition[i];
}
WAMControl::calculateControlLaw();
}
//===========================================================
// vfc::stopControl()
//
// Called each time a control run ends. If running from the
// GUI, this will be called each time the STOP button is pushed,
// or when a timed run ends, or when the control aborts itself.
// Use this function to zero out DACs and to disconnect from
//the servers
//============================================================
int vfc::stopControl()
{
WAMControl::stopControl();
// Disconnect from IO board server
delete iobc;
return 0;
}
//============================================================
// vfc::exitControl
// 
// This function is called when the control is unloaded.
// In standalone mode, this happens after one control run has
// completed. When using the GUI, it happens when the user
//loads a new control program or exits the GUI.
// This function could be used to cleanup allocated memory
//============================================================
87
int vfc::exitControl()
{
return 0;
}
//============================================================
// main()
//
// The main function instantiates the object and goes
// into the mainloop
//============================================================
main (int argc, char *argv[])
{
vfc *cp = new vfc(argc, argv);
cp>run();
delete cp;
}
Code for Navigation Function Control
//============================================================
// QMotor  A PC Based RealTime Graphical Control Environment
// (c) 2000 QRTS
// 
// Control Program : NavAdaptive.cpp
// Description : Adaptive controller for the potential field
// navigation problem of a 2link robot
// Author : Anup Lonkar
// Start Date : Fri Jan 23 14:45:14 UTC 2004
// ===========================================================
//  QRTS libraries 
#include ControlProgram.hpp
88
#include IOBoardClient.hpp
//  C standard libraries 
#include <math.h>
//  WAM includes 
#include ManipulatorControl.hpp
#include WAMControlClient.hpp
#include WAMControl.hpp
#include Matrix.hpp
#include ColumnVector.hpp
//#include LowpassFilter.hpp
#define DEG TO RAD 0.01745329252
#define RAD TO DEG 57.29577951308
//=============================================================
//NavAdaptive class derived from the WAMControl class
//=============================================================
class NavAdaptive : public WAMControl
{
protected:
//  Log Variables 
ColumnVector<2> q; //Joint angles
ColumnVector<2> qdot; //Joint velocities
ColumnVector<2> z; //EE coordinates in base frame
ColumnVector<2> zdot; //EE velocities
ColumnVector<2> posError; //x and y position error
ColumnVector<7> theta hat dot;
ColumnVector<7> oldtheta hat dot;
ColumnVector<7> theta hat;
ColumnVector<2> del psi s;//gradient of the nav function
ColumnVector<2> f s; //time der of the gradient
ColumnVector<2> eta s;
89
ColumnVector<2> tau s; //control input in task space
ColumnVector<2> tau; //control input in link space
Matrix<2,2> J;
Matrix<2,2> Jinv;
Matrix<2,2> Jinvt;
Matrix<2,2> Jdot;
//  Control Parameters 
ColumnVector<2> z s; //goal point coordinates
ColumnVector<4> oc x; //x cood of obstacles
ColumnVector<4> oc y; //y cood of obstacles
double rad; //radius of the obstacles
double centerOfTaskSpaceX, //center coods
centerOfTaskSpaceY;
double rad ws;
double d initPosition[7];
double PDCycleTime, PDSpeed, offset;
double controlRunTime;
double PDSetPoint[7], old PDSetPoint[7];
double finalPosition[7];
double l[2]; //Link lengths
double lc[2]; //lengths to CG
double m[2]; //mass of each link
double I[2]; //Link inertias
double alpha[2]; //gains for eta and theta hat
Matrix<7,7> Gamma;
Matrix<2,7> Y;
double k[2]; // diagonal elements of the gain matrix
90
double k nav; //navigation function gain
double fric[2], fric comp[2]; //Fricn compn terms
double torque thres[2];
double zeta[2]; //vel limiting gain for static fric compn
double fd[2];
//  Other variables 
double A; //numerator of navigation function
double A Dot; //time derivative of A
double PDAz1, PDAz2; //partial of A w.r.t x and y
double PDAz1 Dot, PDAz2 Dot; //time der of pdAz1 and pdAz2
double beta, beta0, beta1, beta2, beta3, beta4;
double PDbeta z1, PDbeta z2,
PDbeta0 z1, PDbeta0 z2,
PDbeta1 z1, PDbeta1 z2,
PDbeta2 z1, PDbeta2 z2,
PDbeta3 z1, PDbeta3 z2,
PDbeta4 z1, PDbeta4 z2;
double betaDot,
beta0Dot,
beta1Dot,
beta2Dot,
beta3Dot,
beta4Dot;
double PDbeta z1 Dot, PDbeta z2 Dot,
PDbeta0 z1 Dot, PDbeta0 z2 Dot,
PDbeta1 z1 Dot, PDbeta1 z2 Dot,
PDbeta2 z1 Dot, PDbeta2 z2 Dot,
PDbeta3 z1 Dot, PDbeta3 z2 Dot,
PDbeta4 z1 Dot, PDbeta4 z2 Dot;
91
double p1, p2, p3;
Matrix<2,2> D;
Matrix<2,2> Vm;
//  Temporary variables 
double d0, d1, d2, t0, t1, t2, w2,w3,w4,w5,w6,w7,w8;
double den;
ColumnVector<2> qddot;
ColumnVector<2> oldqddot;
ColumnVector<2> oldqdot;
ColumnVector<2> sgn zdot;
double flag;
double M1,M2,M3,M4;
double gain[2];
//  Clients 
IOBoardClient *iobc;
public:
NavAdaptive(int argc, char *argv[]):WAMControl(argc, argv){};
// Constructor. Usually no need to make changes here
~NavAdaptive () {};
// Destructor. Usually no need to make changes here
//  User Functions 
92
virtual int enterControl();
virtual int startControl();
virtual void calculateControlLaw();
virtual int stopControl();
virtual int exitControl();
virtual void moveFinalPosition();
};
//========================================================
// NavAdaptive::enterControl
// 
// This function is called when the control program is
// loaded. In standalone mode, this happens immediately.
// When using the GUI, it happens when the user loads the
// control program. Use this function to register control
// parameters, to register log variables and to initialize
// control parameters. After this function has completed,
// the base class ControlProgram will try to load a
//configuration file and eventually overwrite the initialized
// values. To indicate an error in enterControl() and to
//prevent the loading of the control, set the d status data
// member to error and return 1
//==========================================================
int NavAdaptive::enterControl()
{
WAMControl::enterControl();
registerLogVariable (d initPosition, d initPosition,
Initial position, 7);
registerLogVariable (q, q, Joint angles);
93
registerLogVariable (qdot, qdot, Joint velocities);
registerLogVariable (z, z, End effector in base frame);
registerLogVariable (zdot, zdot, End effector velocities);
registerLogVariable (tau, tau, Joint level torques);
registerLogVariable (tau s, tau s,
end effector level torques);
registerLogVariable (del psi s, del psi s, del psi s);
registerLogVariable (eta s, eta s, eta s);
registerLogVariable (f s, f s, Time derivative of
the gradient);
registerLogVariable (&beta, beta, beta);
registerLogVariable (&beta0, beta0, beta0);
registerLogVariable (&beta1, beta1, beta1);
registerLogVariable (&beta2, beta2, beta2);
registerLogVariable (&beta3, beta3, beta3);
registerLogVariable (&beta4, beta4, beta4);
registerLogVariable (posError, posError,
Position error in the task space);
registerLogVariable ((double *) fric, fric,
friction compensation, 2);
registerLogVariable (&p1, p1, p1);
registerLogVariable (&p2, p2, p2);
registerLogVariable (&p3, p3, p3);
registerLogVariable (theta hat, theta hat,
theta hat);
//  Register the control parameters 
registerControlParameter (&PDSpeed, PDSpeed, PDSpeed);
registerControlParameter ((double *) PDSetPoint,
94
PDSetPoint, PDSetPoint in degrees, 7);
registerControlParameter (&PDCycleTime, PDCycleTime,
PD control run duration);
registerControlParameter (&controlRunTime,
controlRunTime, Control run duration);
registerControlParameter ((double *) l, l,
Link lengths, 2);
registerControlParameter ((double *) lc, lc,
Lengths to the CG, 2);
registerControlParameter ((double *) m,
m, Mass of each Link, 2);
registerControlParameter ((double *) I, I,
Inertia of each link, 2);
registerControlParameter (z s, z s,
Goal points in the task space);
registerControlParameter (oc x, oc x,
X cood of the obstacles);
registerControlParameter (oc y, oc y,
Y cood of the obstacles);
registerControlParameter (&rad, rad,
Radius of the obstacles);
registerControlParameter (&rad ws, rad ws,
Radius of the work space);
registerControlParameter (¢erOfTaskSpaceX,
centerOfTaskSpaceX, X cood of center of Task Space);
registerControlParameter (¢erOfTaskSpaceY,
centerOfTaskSpaceY, Y cood of center of Task Space);
registerControlParameter (&k nav, k nav,
Gain k for the navigation function);
95
registerControlParameter ((double *) alpha, alpha,
Gain alpha for the navigation function,2);
registerControlParameter ((double *) k, k,
Gain matrix k, 2);
registerControlParameter (Gamma, Gamma,
Gain matrix for update);
registerControlParameter ((double *) gain, gain,
Gain for zdot,2);
registerControlParameter ((double *) torque thres,
torque thres,
Toruqe threshold for friction compensation, 2);
registerControlParameter ((double *) fric comp,
fric comp, Static friction compensation, 2);
registerControlParameter ((double *) zeta,
zeta, zeta, 2);
registerControlParameter ((double *) fd,
fd, Dynamic friction compensation parameters, 2);
// Set all control parameters initially to zero
clearAllControlParameters();
// Start message
d messageStream
<<endl<<<<d applicationName<<<<endl<<endl;
d messageStream
<<Navigation Function based Control;
return 0;
}
//=======================================================
// NavAdaptive::startControl
96
// 
// Called each time a control run is started. If running
// from the GUI, this will be called each time the START
// button is pushed. Do setup that must occur before each
// control run here (eg. initializing some counters,
// connections to needed servers). Log variables are
//initialized here. To indicate an error in enterControl()
// and to prevent control execution, set the d status data
// member to error and return 1
//========================================================
int NavAdaptive::startControl()
{
WAMControl::startControl();
clearAllLogVariables();
getCurrentPosition();
for(int i=0;i<7;i++)
{
d initPosition[i] = d position[i];
d unfilteredDesiredPosition[i] = d position[i];
d controlTorque[i] = 0.0;
old PDSetPoint[i]=PDSetPoint[i];
}
WAMControl::enableArmPower();
offset=0.0;
finalPosition[0] = 0; //Move to rest position
finalPosition[1] = 90;
finalPosition[2] = 0;
finalPosition[3] = 0;
97
finalPosition[4] = 0;
finalPosition[5] = 0;
finalPosition[6] = 0;
flag = 1;
oldtheta hat dot = 0.0,0.0,0.0,0.0,0.0,0.0,0.0;
return 0;
}
//======================================================
// NavAdaptive::calculateControlLaw
//
// Overwrite the method defined in ManipulatorControl.
//======================================================
void NavAdaptive::calculateControlLaw()
{
if(d elapsedTime > (PDCycleTime + controlRunTime) )
{
moveFinalPosition();
}
else
{
for (int i=0; i < 7; i++)
{
if(PDSetPoint[i] != old PDSetPoint[i])
// then change the initial position and apply the time offset
{
//cout<<changed initPosition<<endl;
offset = d elapsedTime;
for (int j=0; j < 7; j++)
d initPosition[j] = d position[j];
98
}
//Trajectory calculation
double blah = exp(PDSpeed * (d elapsedTimeoffset) *
(d elapsedTimeoffset) * (d elapsedTimeoffset));
d unfilteredDesiredPosition[i] =
(PDSetPoint[i]  d initPosition[i])
* (1  blah) + d initPosition[i];
//save the old set points
old PDSetPoint[i] = PDSetPoint[i];
}
WAMControl::calculateControlLaw();
if(d elapsedTime >= PDCycleTime)
{
q(1) = d positionRad[0];
q(2) = d positionRad[3];
qdot(1) = d velocityRad[0];
qdot(2) = d velocityRad[3];
//forward kinematics
z(1) = l[0]*cos(q(1)) + l[1]*cos(q(1)+q(2));
z(2) = l[0]*sin(q(1)) + l[1]*sin(q(1)+q(2));
posError(1) = z(1)  z s(1);
posError(2) = z(2)  z s(2);
//end effector velocity
99
zdot(1) = l[0]*sin(q(1))*qdot(1) 
l[1]*sin(q(1)+q(2))*(qdot(1)+qdot(2));
zdot(2) = l[0]*cos(q(1))*qdot(1) +
l[1]*cos(q(1)+q(2))*(qdot(1)+qdot(2));
//Jacobian
J(1,1) = l[0]*sin(q(1))  l[1]*sin(q(1)+q(2));
J(1,2) = l[1]*sin(q(1)+q(2));
J(2,1) = l[0]*cos(q(1)) + l[1]*cos(q(1)+q(2));
J(2,2) = l[1]*cos(q(1)+q(2));
Jinv = inverse(J); //Jacobian inverse
Jinvt = transpose(Jinv); //Jacobian inverse transpose
//Time derivative of the Jacobian
Jdot(1,1) = l[0]*cos(q(1))*qdot(1)
 l[1]*cos(q(1)+q(2))*(qdot(1)+qdot(2));
Jdot(1,2) = l[1]*cos(q(1)+q(2))*(qdot(1)+qdot(2));
Jdot(2,1) = l[0]*sin(q(1))*qdot(1)
 l[1]*sin(q(1)+q(2))*(qdot(1)+qdot(2));
Jdot(2,2) = l[1]*sin(q(1)+q(2))*(qdot(1)+qdot(2));
//Start with the navigation function
A = (z(1)  z s(1))*
(z(1)  z s(1)) + (z(2)  z s(2))*(z(2)  z s(2));
PDAz1 = 2*(z(1)  z s(1));
PDAz2 = 2*(z(2)  z s(2));
//Obstacle functions
beta0 = rad ws*rad ws 
100
(z(1)  centerOfTaskSpaceX)*(z(1)  centerOfTaskSpaceX)
(z(2)  centerOfTaskSpaceY)*(z(2)  centerOfTaskSpaceY);
beta1 = (z(1)  oc x(1))*(z(1)  oc x(1)) +
(z(2)  oc y(1))*(z(2)  oc y(1))  rad*rad;
beta2 = (z(1)  oc x(2))*(z(1)  oc x(2)) +
(z(2)  oc y(2))*(z(2)  oc y(2))  rad*rad;
beta3 = (z(1)  oc x(3))*(z(1)  oc x(3)) +
(z(2)  oc y(3))*(z(2)  oc y(3))  rad*rad;
beta4 = (z(1)  oc x(4))*(z(1)  oc x(4)) +
(z(2)  oc y(4))*(z(2)  oc y(4))  rad*rad;
beta = beta0*beta1*beta2*beta3*beta4; //All obstacles
//start with the partial derivatives of betas
PDbeta0 z1 = 2*(z(1)  centerOfTaskSpaceX);
PDbeta0 z2 = 2*(z(2)  centerOfTaskSpaceY);
PDbeta1 z1 = 2*(z(1)  oc x(1));
PDbeta1 z2 = 2*(z(2)  oc y(1));
PDbeta2 z1 = 2*(z(1)  oc x(2));
PDbeta2 z2 = 2*(z(2)  oc y(2));
PDbeta3 z1 = 2*(z(1)  oc x(3));
PDbeta3 z2 = 2*(z(2)  oc y(3));
PDbeta4 z1 = 2*(z(1)  oc x(4));
PDbeta4 z2 = 2*(z(2)  oc y(4));
PDbeta z1 = PDbeta0 z1*beta1*beta2*beta3*beta4 +
beta0*PDbeta1 z1*beta2*beta3*beta4 +
beta0*beta1*PDbeta2 z1*beta3*beta4 +
beta0*beta1*beta2*PDbeta3 z1*beta4 +
beta0*beta1*beta2*beta3*PDbeta4 z1 ;
PDbeta z2 = PDbeta0 z2*beta1*beta2*beta3*beta4 +
101
beta0*PDbeta1 z2*beta2*beta3*beta4 +
beta0*beta1*PDbeta2 z2*beta3*beta4 +
beta0*beta1*beta2*PDbeta3 z2*beta4 +
beta0*beta1*beta2*beta3*PDbeta4 z2 ;
//time derivatives of betas
beta0Dot = 2 * (z(1)  centerOfTaskSpaceX)*
zdot(1)  2 * (z(2)  centerOfTaskSpaceY)*zdot(2);
beta1Dot = 2 * (z(1)  oc x(1))*zdot(1) +
2 * (z(2)  oc y(1))*zdot(2);
beta2Dot = 2 * (z(1)  oc x(2))*zdot(1) +
2 * (z(2)  oc y(2))*zdot(2);
beta3Dot = 2 * (z(1)  oc x(3))*zdot(1) +
2 * (z(2)  oc y(3))*zdot(2);
beta4Dot = 2 * (z(1)  oc x(4))*zdot(1) +
2 * (z(2)  oc y(4))*zdot(2);
betaDot = beta0Dot*beta1*beta2*beta3*beta4 +
beta0*beta1Dot*beta2*beta3*beta4 +
beta0*beta1*beta2Dot*beta3*beta4 +
beta0*beta1*beta2*beta3Dot*beta4 +
beta0*beta1*beta2*beta3*beta4Dot ;
//time derivatives of the partial derivatives of betas
PDbeta0 z1 Dot = 2*zdot(1);
PDbeta0 z2 Dot = 2*zdot(2);
PDbeta1 z1 Dot = 2*zdot(1);
PDbeta1 z2 Dot = 2*zdot(2);
PDbeta2 z1 Dot = 2*zdot(1);
102
PDbeta2 z2 Dot = 2*zdot(2);
PDbeta3 z1 Dot = 2*zdot(1);
PDbeta3 z2 Dot = 2*zdot(2);
PDbeta4 z1 Dot = 2*zdot(1);
PDbeta4 z2 Dot = 2*zdot(2);
PDbeta z1 Dot = (PDbeta0 z1 Dot*beta1*beta2*beta3*beta4 +
PDbeta0 z1*beta1Dot*beta2*beta3*beta4 +
PDbeta0 z1*beta1*beta2Dot*beta3*beta4 +
PDbeta0 z1*beta1*beta2*beta3Dot*beta4 +
PDbeta0 z1*beta1*beta2*beta3*beta4Dot)+
(beta0Dot*PDbeta1 z1*beta2*beta3*beta4 +
beta0*PDbeta1 z1 Dot*beta2*beta3*beta4 +
beta0*PDbeta1 z1*beta2Dot*beta3*beta4 +
beta0*PDbeta1 z1*beta2*beta3Dot*beta4 +
beta0*PDbeta1 z1*beta2*beta3*beta4Dot)
+(beta0Dot*beta1*PDbeta2 z1*beta3*beta4 +
beta0*beta1Dot*PDbeta2 z1*beta3*beta4 +
beta0*beta1*PDbeta2 z1 Dot*beta3*beta4 +
beta0*beta1*PDbeta2 z1*beta3Dot*beta4 +
beta0*beta1*PDbeta2 z1*beta3*beta4Dot)
+(beta0Dot*beta1*beta2*PDbeta3 z1*beta4 +
beta0*beta1Dot*beta2*PDbeta3 z1*beta4 +
beta0*beta1*beta2Dot*PDbeta3 z1*beta4 +
beta0*beta1*beta2*PDbeta3 z1 Dot*beta4 +
beta0*beta1*beta2*PDbeta3 z1*beta4Dot )
+(beta0Dot*beta1*beta2*beta3*PDbeta4 z1 +
beta0*beta1Dot*beta2*beta3*PDbeta4 z1 +
beta0*beta1*beta2Dot*beta3*PDbeta4 z1 +
beta0*beta1*beta2*beta3Dot*PDbeta4 z1 +
beta0*beta1*beta2*beta3*PDbeta4 z1 Dot );
103
PDbeta z2 Dot = (PDbeta0 z2 Dot*beta1*beta2*beta3*beta4 +
PDbeta0 z2*beta1Dot*beta2*beta3*beta4 +
PDbeta0 z2*beta1*beta2Dot*beta3*beta4 +
PDbeta0 z2*beta1*beta2*beta3Dot*beta4 +
PDbeta0 z2*beta1*beta2*beta3*beta4Dot)
+(beta0Dot*PDbeta1 z2*beta2*beta3*beta4 +
beta0*PDbeta1 z2 Dot*beta2*beta3*beta4 +
beta0*PDbeta1 z2*beta2Dot*beta3*beta4 +
beta0*PDbeta1 z2*beta2*beta3Dot*beta4 +
beta0*PDbeta1 z2*beta2*beta3*beta4Dot) +
(beta0Dot*beta1*PDbeta2 z2*beta3*beta4 +
beta0*beta1Dot*PDbeta2 z2*beta3*beta4 +
beta0*beta1*PDbeta2 z2 Dot*beta3*beta4 +
beta0*beta1*PDbeta2 z2*beta3Dot*beta4 +
beta0*beta1*PDbeta2 z2*beta3*beta4Dot) +
(beta0Dot*beta1*beta2*PDbeta3 z2*beta4 +
beta0*beta1Dot*beta2*PDbeta3 z2*beta4 +
beta0*beta1*beta2Dot*PDbeta3 z2*beta4 +
beta0*beta1*beta2*PDbeta3 z2 Dot*beta4 +
beta0*beta1*beta2*PDbeta3 z2*beta4Dot )+
(beta0Dot*beta1*beta2*beta3*PDbeta4 z2 +
beta0*beta1Dot*beta2*beta3*PDbeta4 z2 +
beta0*beta1*beta2Dot*beta3*PDbeta4 z2 +
beta0*beta1*beta2*beta3Dot*PDbeta4 z2 +
beta0*beta1*beta2*beta3*PDbeta4 z2 Dot );
d0 = pow(A,k nav);
t0 = d0+beta;
t1 = pow((t0),1/k nav) ;
d2 = t1*k nav*t0;
104
d1 = pow((t0),1/k nav) ;
//gradient of the navigation function
M1= PDAz1/t1;
M2= (d0*k nav*PDAz1 + A*PDbeta z1)/d2;
M3= PDAz2/t1;
M4= (d0*k nav*PDAz2 + A*PDbeta z2)/d2;
del psi s(1) = M1  M2;
del psi s(2) = M3  M4;
t2 = pow(t0, ((k nav+1)/k nav));
w2 = (d0*k nav*(A Dot/A)+betaDot)/t0;
w3 = d0*k nav*k nav*(A Dot/A)*t2;
w4 = d0*t2*(k nav+1)*w2;
w5 = t2*d0*k nav;
w6 = t2*A Dot;
w7 = A*t2*w2*((k nav+1)/k nav);
w8 = A*t2;
//time derivative of the gradient function del psi
f s(1) = ( PDAz1 Dot*d1*k nav + PDAz1*d1*w2 +
PDAz1*w3 + PDAz1*w4 + w5*PDAz1 Dot + PDbeta z1*w6 +
w7*PDbeta z1 + w8*PDbeta z1 Dot ) / k nav;
f s(2) = ( PDAz2 Dot*d1*k nav + PDAz2*d1*w2 +
PDAz2*w3 + PDAz2*w4 + w5*PDAz2 Dot + PDbeta z2*w6 +
w7*PDbeta z2 + w8*PDbeta z2 Dot ) / k nav;
//compute the control (task space or end effector space)
eta s(1) = gain[0]*zdot(1) + alpha[0]*del psi s(1);
eta s(2) = gain[1]*zdot(2) + alpha[1]*del psi s(2);
105
/****** sign(zdot) ********************************/
if(zdot(1)<0.005)
sgn zdot(1) = 1.0;
else
{
if(zdot(1)>0.005)
sgn zdot(1) = 1.0;
else
sgn zdot(1) = 0.0;
}
if(zdot(2)<0.005)
sgn zdot(2) = 1.0;
else
{
if(zdot(2)>0.005)
sgn zdot(2) = 1.0;
else
sgn zdot(2) = 0.0;
}
/************RegressionMatrix******************/
Y(1,1)=Jinvt(1,1)*Jinv(1,1)*f s(1)+
Jinvt(1,1)*Jinv(1,2)*f s(2)+
Jinvt(1,1)*(Jinv(1,1)*Jdot(1,1)Jinv(1,2)*Jdot(2,1))*
del psi s(1)+Jinvt(1,1)*
(Jinv(1,1)*Jdot(1,2)Jinv(1,2)*Jdot(2,2))*del psi s(2);
106
Y(1,2) = ((2*Jinvt(1,1)*cos(q(2))+
(Jinvt(1,1)+Jinvt(1,2))*Jinv(2,1))*f s(1)+
(Jinvt(1,2)*Jinv(1,2)+(Jinvt(1,1)+
Jinvt(1,2))*Jinv(2,2))*f s(2)+(Jinvt(1,1)*
(Jinv(2,1)*Jdot(1,1)Jinv(2,2)*Jdot(2,1))+
Jinvt(1,2)*((Jinv(1,1)+Jinv(2,1))*Jdot(1,1)
(Jinv(1,2)+Jinv(2,2))*Jdot(2,1)))*del psi s(1)+
(Jinvt(1,1)*(Jinv(2,1)*Jdot(1,2)Jinv(2,2)*Jdot(2,2))+
Jinvt(1,2)*((Jinv(1,1)+Jinv(2,1))*Jdot(1,2)
(Jinv(1,2)+Jinv(2,2))*Jdot(2,2)))*del psi s(2);
Y(1,3)=(Jinvt(1,2)*Jinv(1,1)+(Jinvt(1,1)+Jinvt(1,2))*
Jinv(2,1))*f s(1)+(Jinvt(1,2)*Jinv(1,2)+(Jinvt(1,1)+
Jinvt(1,2))*Jinv(2,2))*f s(2)+(Jinvt(1,1)*(Jinv(2,1)*
Jdot(1,1)Jinv(2,2)*Jdot(2,1))+Jinvt(1,2)*((Jinv(1,1)+
Jinv(2,1))*Jdot(1,1)(Jinv(1,2)+Jinv(2,2))*Jdot(2,1)))*
del psi s(1)+(Jinvt(1,1)*(Jinv(2,1)*Jdot(1,2)Jinv(2,2)*
Jdot(2,2))+Jinvt(1,2)*((Jinv(1,1)+Jinv(2,1))*Jdot(1,2)
(Jinv(1,2)+Jinv(2,2))*Jdot(2,2)))*del psi s(2);
Y(1,4) = Jinv(1,1)*zdot(1)Jinv(1,2)*zdot(2);
Y(1,5) = 0;
Y(1,6) = sgn zdot(1);
Y(1,7) = 0;
Y(2,1) = Jinvt(2,1)*Jinv(1,1)*f s(1)+
Jinvt(2,1)*Jinv(1,2)*f s(2)+
Jinvt(2,1)*(Jinv(1,1)*Jdot(1,1)
107
Jinv(1,2)*Jdot(2,1))*del psi s(1)+
Jinvt(2,1)*(Jinv(1,1)*Jdot(1,2)
Jinv(1,2)*Jdot(2,2))*del psi s(2);
Y(2,2) = ((2*Jinvt(2,1)*cos(q(2)) +
Jinvt(2,2)*cos(q(2)))*Jinv(1,1)+
Jinvt(2,1)*cos(q(2))*Jinv(2,1))*f s(1)+
((2*Jinvt(2,1)*cos(q(2))+
Jinvt(2,2)*cos(q(2)))*Jinv(1,2)+
Jinvt(2,1)*cos(q(2))*Jinv(2,2))*f s(2)+
(Jinvt(2,1)*(2*sin(q(2))*qdot(2)
(2*cos(q(2))*Jinv(1,1)+
cos(q(2))*Jinv(2,1))*Jdot(1,1)
(2*cos(q(2))*Jinv(1,2)+
cos(q(2))*Jinv(2,2))*Jdot(2,1))+
Jinvt(2,2)*(sin(q(2))*qdot(1)
cos(q(2))*Jinv(1,1)*Jdot(1,1)
cos(q(2))*Jinv(1,2)*Jdot(2,1)))*del psi s(1)+
(Jinvt(2,1)*(sin(q(2))*qdot(2)
(2*cos(q(2))*Jinv(1,1)+
cos(q(2))*Jinv(2,1))*Jdot(1,2)
(2*cos(q(2))*Jinv(1,2)+
cos(q(2))*Jinv(2,2))*Jdot(2,2))+
Jinvt(2,2)*(cos(q(2))*Jinv(1,1)*Jdot(1,2)
cos(q(2))*Jinv(1,2)*Jdot(2,2)))*del psi s(2);
Y(2,3) = (Jinvt(2,2)*Jinv(1,1)+(Jinvt(2,1)+Jinvt(2,2))*
Jinv(2,1))*f s(1)+
(Jinvt(2,2)*Jinv(1,2)+(Jinvt(2,1)+Jinvt(2,2))*
Jinv(2,2))*f s(2)+(Jinvt(2,1)*
(Jinv(2,1)*Jdot(1,1)Jinv(2,2)*Jdot(2,1))+
108
Jinvt(2,2)*((Jinv(1,1)+Jinv(2,1))*Jdot(1,1)
(Jinv(1,2)+Jinv(2,2))*Jdot(2,1)))*del psi s(1)+
(Jinvt(2,1)*(Jinv(2,1)*Jdot(1,2)Jinv(2,2)*
Jdot(2,2))+Jinvt(2,2)*((Jinv(1,1)+Jinv(2,1))*
Jdot(1,2)(Jinv(1,2)+Jinv(2,2))*Jdot(2,2)))*
del psi s(2);
Y(2,4) = 0;
Y(2,5) = Jinv(2,1)*zdot(1)Jinv(2,2)*zdot(2);
Y(2,6) = 0;
Y(2,7) = sgn zdot(2);
//The update law
theta hat dot = Gamma*transpose(Y)*eta s;
theta hat = theta hat + 0.5*
(theta hat dot + oldtheta hat dot)*d controlPeriod;
oldtheta hat dot = theta hat dot;
//Control law
tau s(1) =  k[0]*eta s(1) 
Y(1,1)*theta hat(1)  Y(1,2)*theta hat(2)
 Y(1,3)*theta hat(3)  Y(1,4)*theta hat(4)
 Y(1,5)*theta hat(5)  Y(1,6)*theta hat(6)
 Y(1,7)*theta hat(7);
109
tau s(2) =  k[1]*eta s(2) 
Y(2,1)*theta hat(1)  Y(2,2)*theta hat(2)
 Y(2,3)*theta hat(3)  Y(2,4)*theta hat(4)
 Y(2,5)*theta hat(5)  Y(2,6)*theta hat(6)
 Y(2,7)*theta hat(7);
//torques in joint space  transpose(J)*tau s
tau(1) = J(1,1)*tau s(1) + J(2,1)*tau s(2);
tau(2) = J(1,2)*tau s(1) + J(2,2)*tau s(2);
//Compute the friction compensation
fric[0]=0.0;
fric[1]=0.0;
if(tau(1) > torque thres[0])
fric[0] = fric comp[0]*
exp(zeta[0]*abs(d velocityRad[0])) +
fd[0]*abs(d velocityRad[0]);
if(tau(1) < torque thres[0])
fric[0] = fric comp[0]*
exp(zeta[0]*abs(d velocityRad[0])) 
fd[0]*abs(d velocityRad[0]);
if(tau(2) > torque thres[1])
fric[1] = fric comp[1]*
exp(zeta[1]*abs(d velocityRad[3])) +
fd[1]*abs(d velocityRad[3]);
if(tau(2) < torque thres[1])
fric[1] = fric comp[1]*
exp(zeta[1]*abs(d velocityRad[3])) 
110
fd[1]*abs(d velocityRad[3]);
//Output the control torques to joint 1 and joint 4
//the remaining torques are computed from the PD control
d controlTorque[0] = tau(1); //+ fric[0];
d controlTorque[3] = tau(2); //+ fric[1];
}
} //end the else condition
}
//====================================================
// NavAdaptive::moveFinalPosition()
//
// Function used to move the robot to a safe rest
// position after the control has been executed.
// Starts from any position, have to do a bit of work
// to initialize the variables to current
// joint positions before doing the PD control.
//====================================================
void NavAdaptive::moveFinalPosition()
{
if(flag == 1)
{
flag = 0;
offset = d elapsedTime;
// This is needed to initialize the control variables
// in ManipulatorControl() so that the PD control can
//run for the changed robot position
111
ManipulatorControl::initializeControlVariables();
for (int j=0; j < 7; j++)
{
// Start from the current position
d initPosition[j] = d position[j];
// Need to do this for the first cycle of the PD control
// because d positionErrorRad[] is already set in
// ManipulatorControl::control()
d positionErrorRad[j] = 0.0;
}
}
for (int i=0; i < 7; i++)
{
//Trajectory calculation
double halb = exp(0.1 * (d elapsedTimeoffset)
* (d elapsedTimeoffset) * (d elapsedTimeoffset));
d unfilteredDesiredPosition[i] =
(finalPosition[i]  d initPosition[i])
* (1  halb) + d initPosition[i];
}
WAMControl::calculateControlLaw();
}
//=======================================================
// NavAdaptive::stopControl()
//
// Called each time a control run ends. If running from
// the GUI, this will be called each time the STOP button
// is pushed, or when a timed run ends, or when the
112
// control aborts itself. Use this function to zero out
// DACs and to disconnect from the servers
//========================================================
int NavAdaptive::stopControl()
{
WAMControl::stopControl();
// Disconnect from IO board server
delete iobc;
return 0;
}
//========================================================
// NavAdaptive::exitControl
// 
// This function is called when the control is unloaded.
// In standalone mode, this happens after one control run
// has completed. When using the GUI, it happens when the
// user loads a new control program or exits the GUI.
// This function could be used to cleanup allocated memory
//=========================================================
int NavAdaptive::exitControl()
{
return 0;
}
//=========================================================
// main()
//
// The main function instantiates the object and goes
// into the mainloop
//=========================================================
113
main (int argc, char *argv[])
{
NavAdaptive *cp = new NavAdaptive(argc, argv);
cp>run();
delete cp;
}
114
Appendix B
Nomenclature
= Pitch angle
b = Semichord of the airfoil
= Flap angle
c
l
, c
m
= Lift and moment coecients per angleofattack
c
l
, c
m
= Lift and moment coecients per control surface deection
c
h
, c
= Dimensionless distance from the elastic axis to the midchord, positive rearward
t = Time
T = Unsteady torque moment of ap spring
U = Freestream velocity
x, y = Horizontal and vertical coordinates
Auxiliary Constants
The auxiliary constants k
i
, c
i
i = 1...4 as well as g
3
, g
4
that were introduced in
the model description of (3.5) are explicitly dened as follows
1.
k
1
= I
k
h
d
1
k
2
= d
1
[I
bc
l
+mx
b
3
c
m
]
k
3
= mx
bk
h
d
1
k
4
= d
1
[mx
b
2
c
l
mb
2
c
m
]
c
1
= d
1
[I
(c
h
+ Ubc
l
) +mx
Ub
3
c
m
]
c
2
= d
1
[I
Ub
2
c
l
a mx
bc
+mx
Ub
4
c
m
a]
c
3
= d
1
[mx
bc
h
mx
Ub
2
c
l
mUb
2
c
m
]
c
4
= d
1
[mc
mx
Ub
3
c
l
a mUb
3
c
m
a]
g
3
= d
1
[I
bc
l
mx
b
3
c
m
]
g
4
= d
1
[mx
b
2
c
l
+mb
2
c
m
]
d = m(I
mx
2
b
2
)
a = (0.5 a)
The elements
i
i = 2, 3, 4 introduced in (3.8) are explicitly dened in as follows
2
= g
4
3
= g
3
c
3
+c
1
g
4
4
= k
1
g
4
+k
1
c
2
3
g
3
k
1
3
c
1
g
3
c
3
d
obs
g
3
k
1
3
(B.1)
115
Linear Parameterization
Given the pitch spring constant of (3.50), explicit expression for W(y) and
introduced in (3.9 ) are provided below
W(y) =
y 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 y 0 0 y
2
0 0 y
3
0 0 y
4
0 0 y
5
0 0
0 0 y 0 0 y
2
0 0 y
3
0 0 y
4
0 0 y
5
0
0 0 0 y 0 0 y
2
0 0 y
3
0 0 y
4
0 0 y
5
(B.2)
1
=
1
2
=
2
+ (k
4
U
2
md
1
1
)
3
=
3
+c
3
k
2
U
2
c
1
k
4
U
2
mx
b
1
c
3
d
1
c
1
m
1
d
1
4
= k
1
k
4
U
2
k
1
md
1
1
+k
k
2
U
2
k
mx
bd
1
5
=
2
md
1
6
=
2
(c
3
mx
bd
1
+c
1
md
1
)
7
=
2
(k
1
md
1
+k
mx
bd
1
)
8
=
3
md
1
9
=
3
(k
1
md
1
+k
mx
bd
1
)
10
=
3
(k
1
md
1
+k
mx
bd
1
)
11
=
4
md
1
12
=
4
(k
1
md
1
+k
mx
bd
1
)
13
=
4
(k
1
md
1
+k
mx
bd
1
)
14
=
5
md
1
15
=
5
(k
1
md
1
+k
mx
bd
1
)
16
=
5
(k
1
md
1
+k
mx
bd
1
)
(B.3)
where k
,
i
R are dened as follows
1
= c
1
c
4
2
= c
2
c
3
k
1
c
1
c
4
3
= k
1
3
[k
1
c
4
c
5
k
1
c
2
c
2
3
c
1
c
3
c
4
k
1
+c
1
c
2
c
3
k
3
+d
obs
c
2
]
k
= k
1
3
[c
1
c
3
k
3
k
1
c
2
3
+d
obs
]
We note here that it is simple to alter the size and denition of if it is desired that
the freestream velocity U should be a part of the measurable regression vector W ()
of (3.9).
The explicit expressions for
2,2dm
(t) and
2,2du
(t) are obtained as follows
2,2dm
=
2,2d
y
02
+
2,2d
0
[A
0
0
+ky] +
p
j=1
2,2d
j
[A
0
j
+W
j
(y)]
+
4
j=3
2,2d
j
A
0
j
+
2,2d
0
[sign (
2
)
1
e
1
]
116
2,2du
=
2,2d
y
_
p
j=1
j
[
j,2
+W
j,1
(y)] +
4
j=2
j,2
_
Stability of the zero dynamics
The state space system of (3.7) can be expanded into the following form
y = x
1
=
x
1
= x
2
+
1
(y)
x
2
= x
3
+
2
(y) +
2
x
3
= x
4
+
3
(y) +
3
x
4
=
4
(y) +
4
(B.4)
Here, we study the stability of the zero dynamics for the case when the pitch dis
placement is regulated to the origin. Mathematically, this implies that
y 0 y = x
2
0 x
2
0
which implies from the second equation of (B.4) that
u
=
1
2
[x
3
+
2
(0)] (B.5)
From (B.2), it is plain to see that W (0) = 0 which implies that
i
(0) = 0 i = 1.....4.
The zero dynamics of the system then reduce to the second order system given by
x
3
= x
4
+
3
u
x
4
=
4
u
x where x =
_
x
3
x
4
T
and A
given by
A
2
1
2
0
A
=
1
2
2
_
3
_
(
2
3
4
2
4
)
_
117
From the nominal system of (3.49) and the expressions of (B.1), we can compute
2
= 0.1655,
3
= 0.6275,
4
= 50.4645 from which Re (
A
) = 1.8958 which
makes for asymptotically stable zero dynamics. For the sake of completeness, the
eigenvalues for the zero dynamics turn out to be
A
= 1.8958 j5.7458. It is to
be noted here that a more general analysis would analyze the stability of the zero
dynamics for a range of values of y
d
.
118
Appendix C
Derivative of u
1
From the denition of (4.21), the elements of the time derivative of u
1
(t) are given
in the following expression
.
u
1
= B
1
_
M
1
e
v
+
R
T
p
d
+R
T
...
p
d
(M
1
I
3
)
_
R
T
p
d
+R
T
p
d
_
M
1
e
p
Y
1
(v)
_
k
r
+
2
1
(v
s
)
1
(v
m
) r
m
+
1
_
r
2
1
(v
s
)
1
(v
m
) r
m
+
1
_
d
dt
1
(v
s
)
_
r
+
2
1
(v
s
)
[
1
(v
m
) r
m
+
1
]
2
_
d
dt
1
(v
m
) r
m
+
1
(v
m
)
d
dt
r
m
_
r
(C.1)
where (4.8), (4.10), (4.12), and (4.6) can be substituted into (C.1) for e
p
(t), e
v
(t),
r (t), and
R(t), respectively, and (4.25) is utilized to calculate the time derivative of
r (t)
m
. The exact structure for
d
dt
1
(v (t)
s
) and
d
dt
1
(v (t)
m
) is determined by
the designers selected of the bounding function
1
(v).
BIBLIOGRAPHY
[1] J. Barraquand and J. C. Latombe, A MonteCarlo Algorithm for Path Planning
with Many Degrees of Freedom, Proc. of the IEEE Int. Conf. on Robotics
and Automation, Cincinnati, Ohio, pp. 584589, 1990.
[2] J. Barraquand, B. Langlois, and J. C. Latombe, Numerical Potential Fields
Techniques for Robot Path Planning, IEEE Trans. on Systems, Man, and
Cybernetics, Vol. 22, pp. 224241, (1992).
[3] A. Bemporad, A. De Luca, and G. Oriolo, Local Incremental Planning for a Car
Like Robot Navigating Among Obstacles, Proc. of the IEEE Int. Conf. on
Robotics and Automation, Minneapolis, Minnesota, pp. 12051211, 1996.
[4] I. Cervantes, R. Kelly, J. AlvarezRamirez, and J. Moreno, A Robust Velocity
Field Control, IEEE Trans. on Control Systems Technology, Vol. 10, No.
6, pp. 888894, (2002).
[5] C. I. Connolly, J. B. Burns, and R. Weiss, Path Planning Using Laplaces Equa
tion, Proc. of the IEEE Int. Conf. on Robotics and Automation, Cincinnati,
Ohio, pp. 21022106, 1990.
[6] S. S. Ge and Y. J. Cui, New Potential Functions for Mobile Robot Path Plan
ning, IEEE Trans. on Robotics and Automation, Vol. 16, No. 5, pp. 615
620, (2000).
[7] J. Guldner and V. I. Utkin, Sliding Mode Control for Gradient Tracking and Ro
bot Navigation Using Articial Potential Fields, IEEE Trans. on Robotics
and Automation, Vol. 11, No. 2, pp. 247254, (1995).
[8] J. Guldner, V. I. Utkin, H. Hashimoto, and F. Harashima, Tracking Gradients
of Articial Potential Field with NonHolonomic Mobile Robots, Proc. of
the American Control Conf., Seattle, Washington, pp. 28032804, 1995.
[9] H. K. Khalil, Nonlinear Systems, Third edition, Prentice Hall, 2002.
[10] O. Khatib, Commande dynamique dans lespace operational des robots manipula
teurs en presence dobstacles, Ph.D. Dissertation,
Ecole Nationale Supeieure
de lAeeronatique et de lEspace (ENSAE), France, 1980.
[11] O. Khatib, RealTime Obstacle Avoidance for Manipulators and Mobile Ro
bots, Inter. Journal of Robotics Research, Vol. 5, No. 1, pp. 9099, (1986).
[12] D. E. Koditschek, Exact Robot Navigation by Means of Potential Functions:
Some Topological Considerations, Proc. of the IEEE Int. Conf. on Robotics
and Automation, Raleigh, North Carolina, pp. 16, 1987.
[13] D. E. Koditschek and E. Rimon, Robot Navigation Functions on Manifolds with
Boundary, Adv. Appl. Math., Vol. 11, pp. 412442, (1990).
120
[14] K. J. Kyriakopoulos, H. G. Tanner, and N. J. Krikelis, Navigation of Non
holonomic Vehicles in Complex Environments with Potential Fields and
Tracking, Int. J. Intell. Contr. Syst., Vol. 1, No. 4, pp. 487495, (1996).
[15] J. C. Latombe, Robot Motion Planning, Kluwer Academic Publishers: Boston,
Massachusetts, 1991.
[16] J. P. Laumond, P. E. Jacobs, M. Taix, and R. M. Murray, A Motion Planner for
Nonholonomic Mobile Robots, IEEE Trans. on Robotics and Automation,
Vol. 10, No. 5, pp. 577593, (1994).
[17] D. Lee and P. Li, Passive Bilateral Feedforward Control of Linear Dynamically
Similar Teleoperated Manipulators, IEEE Trans. on Robotics and Automa
tion, Vol. 19, No. 3, pp. 443456 (2003).
[18] F. Lewis, C. Abdallah, and D. Dawson, Control of Robot Manipulators, New York:
MacMillan Publishing Co., 1993.
[19] J. Li and P. Li, Passive Velocity Field Control (PVFC) Approach to Robot Force
Control and Contour Following, Proc. of the Japan/USA Symposium on
Flexible Automation, Ann Arbor, Michigan, 2000.
[20] P. Li and R. Horowitz, Passive Velocity Field Control of Mechanical Manip
ulators, IEEE Trans. on Robotics and Automation, Vol. 15, No. 4, pp.
751763, (2003).
[21] M. Loer, N. Costescu, and D. Dawson, QMotor 3.0 and the QMotor Robotic
Toolkit  An Advanced PCBased RealTime Control Platform, IEEE Con
trol Systems Magazine, Vol. 22, No. 3, pp. 1226 June, 2002.
[22] A. De Luca and G. Oriolo, Local Incremental Planning for Nonholonomic Mobile
Robots, Proc. of the IEEE Int. Conf. on Robotics and Automation, San
Diego, California, pp. 104110, 1994.
[23] Z. Qu, Robust Control of Nonlinear Uncertain Systems, New York: John Wiley
& Sons, 1998.
[24] E. Rimon and D. E. Koditschek, Exact Robot Navigation Using Articial Po
tential Function, IEEE Trans. on Robotics and Automation, Vol. 8, No. 5,
pp. 501518, (1992).
[25] J.J.E. Slotine and W. Li, Applied Nonlinear Control, Englewood Cli, NJ: Pren
tice Hall, Inc., 1991.
[26] M. W. Spong and M. Vidyasagar, Robot Dynamics and Control, New York: John
Wiley and Sons, Inc., 1989.
[27] H. G. Tanner and K. J. Kyriakopoulos, Nonholonomic Motion Planning for
Mobile Manipulators, Proc. of the IEEE Int. Conf. on Robotics and Au
tomation, San Francisco, California, pp. 12331238, 2000.
[28] H. G. Tanner, S. G. Loizou, and K. J. Kyriakopoulos, Nonholonomic Naviga
tion and Control of Cooperating Mobile Manipulators, IEEE Trans. on
Robotics and Automation, Vol. 19, No. 1, pp. 5364, (2003).
121
[29] R. Volpe and P. Khosla, Articial Potential with Elliptical Isopotential Contours
for Obstacle Avoidance, Proc. of the IEEE Conf. on Decision and Control,
Los Angeles, California, pp. 180185, 1987.
[30] M. Yamakita, T. Yazawa, X.Z. Zheng, and K. Ito, An Application of Passive
Velocity Field Control to Cooperative Multiple 3Wheeled Mobile Robots,
Proceedings of the IEEE/RJS Int. Conf. on Intelligent Robots and Systems,
Victoria, B. C., Canada, pp. 368373, 1998.
[31] M. Yamakita, K. Suzuki, X.Z. Zheng, M. Katayama, and K. Ito, An Extension
of Passive Velocity Field Control to Cooperative Multiple Manipulator Sys
tems, Proc. of the IEEE/RJS Int. Conf. on Intelligent Robots and Systems,
Grenoble, France, pp. 1116, 1997.
[32] E.H. Dowell, A Modern Course in Aeroelasticity, Sijtho and Noordho, 1978.
[33] P. Marzocca, L. Librescu, and G. Chiocchia, Aeroelastic response of a 2D lifting
surfaces to gust and arbitrary explosive loading signatures, International
Journal of Impact Engineering, 25 (2001) 4165.
[34] P. Marzocca, L. Librescu, G. Chiocchia, Aeroelasticity of twodimensional lifting
surfaces via indicial function approach, The Aeronautical Journal, March
(2002) 147153.
[35] H. Horikawa and E.H. Dowell, An elementary explanation of the utter mecha
nism with active feedback controls, Journal of Aircraft, 16 (1979) 225232.
[36] J.S. Vipperman, R.L. Clark, M.D. Conner, and E.H. Dowell, Investigation of
the experimental active control of a typical section airfoil using a trailing
edge ap, Journal of Aircraft, 35 (1998) 224229.
[37] K. Lazaraus, E. Crawley, and C. Lin, Fundamental mechanisms of aeroelastic
control with control surface and strain actuation, Journal of Guidance,
Control, and Dynamics, 18 (1995) 1017.
[38] V. Mukhopadhyay, Flutter suppression control law design and testing for the
active exible wing, Journal of Aircraft, 32, (1995) 4551.
[39] V. Mukhopadhyay, Ed., Benchmark active control technology, Journal of Guid
ance, Control, and Dynamics, Part I 23 (2000) 913960. Part II 23 (2000)
10931139. Part III 24 (2001) 146192.
[40] V. Mukhopadhyay, Historical perspective on analysis and control of aeroelastic
responses, Journal of Guidance, Control, and Dynamics, 26 (2003) 673
684.
[41] S.S. Na and L. Librescu, Oscillation control of cantilevers via smart materi
als technology and optimal feedback control: actuator location and power
consumption issues, Journal of Smart Materials and Structures, 7 (1998)
833842.
[42] S.S. Na and L. Librescu, Optimal vibration control of a thinwalled anisotropic
cantilevers exposed to blast loading, Journal of Guidance, Control, and
Dynamics, 23 (2000) 491500.
122
[43] L. Librescu and S.S. Na, Bending vibration control of cantilevers via boundary
moment and combined feedback control laws, Journal of Vibration and
Controls, 4 (1998) 733746.
[44] L. Librescu, S.S. Na and S. Kim, Comparative study on vibration control
methodologies applied to adaptive anisotropic cantilevers, Proceedings of
the 43nd AIAA/ASME/ASCE/ASC Structures, Structural Dynamics, and
Materials Conference, AIAA20021539, Denver, 2002.
[45] J.C. Jr. Bruch, J.M. Sloss, S. Adali, and I.S. Sadek, Modied bangbang piezo
electric control of vibrating beams, Journal of Smart Materials and Struc
tures, 8 (1999) 647653.
[46] R.H. Scanlan and R. Rosenbaum, Introduction to the Study of Aircraft Vibration
and Flutter, The Macmillian Co., 1951.
[47] J.W. Edwards, Unsteady Aerodynamic Modeling and Active Aeroelastic Con
trol, SUDARR 504 (NASA Grant ngl05020007), Stanford University,
Feb. 1977. Also available as NASA CR148019.
[48] S.D. Olds, Modeling and LQR Control of a TwoDimensional Airfoil, MS The
sis, Department of Mathematics, Virginia Polytechnic Institute and State
University, Blacksburg, VA, April 1997.
[49] W.P. Rodden and B. Stahl, A strip method for prediction of damping in subsonic
wind tunnel and ight utter tests, Journal of Aircraft, 6 (1969) 917.
[50] D.L. York, Analysis of Flutter and Flutter Suppression via an Energy Method,
MS Thesis, Department of Aerospace and Ocean Engineering, Virginia
Polytechnic Institute and State University, Blacksburg, VA, May 1980.
[51] J.G. Leishman, Unsteady lift of an airfoil with a trailingedge ap based on
indicial concepts, Journal of Aircraft, 31 (1994) 288297.
[52] J.G. Leishman, Validation of Approximate Indicial Aerodynamic Functions for
TwoDimensional Subsonic Flow. Journal of Aircraft, Vol. 25, pp. 914922,
1988.
[53] P. Marzocca, L. Librescu and G. Chiocchia, Aeroelastic response of a 2D airfoil
in compressible ight speed regimes exposed to blast loadings, Aerospace
Science and Technology, 6 (2002) 259272.
[54] Z. Qin, P. Marzocca, and L. Librescu, Aeroelastic instability and response of
advanced aircraft wings at subsonic ight speeds, Aerospace Science and
Technology, 6 (2002) 195208.
[55] P. Marzocca, L. Librescu, and W.A. Silva, Flutter, postutter and control of a
supersonic 2D lifting surface, Journal of Guidance, Control, and Dynam
ics, 25 (2002) 962970.
[56] L. Djayapertapa and C.B. Allen, Numerical simulation of active control of tran
sonic utter, Proceedings of the 23rd ICAS Congress, Toronto 2002, 411.1
411.10.
[57] H.G. Kussner and I. Schwarz, The oscillating wing with aerodynamically bal
anced elevator, NACATM991, October 1941.
123
[58] D.J. Jones and B. H. K. Lee, Time Marching Numerical Solution of the Dy
namic Response of Nonlinear Systems, National Aeronautical Establish
ment, Aeronautical Note  25, National Research Council (Canada) No.
24131, Ottawa, Quebec, Canada, 1985.
[59] E.H. Dowell, J. Edwards, T. Strganac, Nonlinear Aeroelasticity, Journal of
Aircraft, Vol. 40, No. 5, 2003, pp. 857874.
[60] R. Lind and M. Brenner, Robust Aeroservoelastic Stability Analysis, Springer
Verlag, Great Britain, 1999.
[61] P. Friedmann, D. Guillot, and E. Presente, Adaptive control of aeroelastic in
stabilities in transonic ow and its scaling, J. Guidance, Control, and
Dynamics, Vol. 20, pp. 11901199, 1997.
[62] J.M. Barker, and G.J. Balas, Comparing linear parametervarying gain
scheduled control techniques for active utter suppression, J. Guidance,
Control and Dynamics, Vol. 23, No. 5, pp. 948955, 2000.
[63] R.C. Scott, and L.E. Pado, Active control of WindTunnel Model aeroelastic
Response Using Neural Networks, J. Guidance, Control and Dynamics,
Vol. 23, No. 6 , pp. 11001108, 2000.
[64] D.M. Guillot and P.P. Friedmann, Fundamental aeroservoelastic study com
bining unsteady computational uid mechanics with adaptive control, J.
Guidance, Control and Dynamics, Vol. 23, No. 6 , pp. 11171126, 2000.
[65] J. Ko, A.J. Kurdila, and T.W. Strganac, Nonlinear control of a prototypical wing
section with torsional nonlinearity, J. Guidance, Control and Dynamics,
Vol. 20, No. 6, pp. 11811189, 1997.
[66] J. Ko and T.W. Strganac, Stability and control of a structurally nonlinear
aeroelastic system, J. Guidance, Control, and Dynamics, Vol. 21, No. 5,
pp. 718725, 1998.
[67] J. Ko., T.W. Strganac, and A.J. Kurdila, Adaptive feedback linearization for the
control of a typical wing section with structural nonlinearity, Nonlinear
dynamics, Vol. 18 pp. 289301, 1999.
[68] Y. Zeng and S.N. Singh, Output feedback variable structure adaptive control of
an aeroelastic systems, J. Guidance, Control, and Dynamics, Vol. 21, No.
6, pp. 830837, 1998.
[69] S.N. Singh and L. Wang, Output feedback form and adaptive stabilization of
a nonlinear aeroelastic system, J. Guidance, Control and Dynamics, Vol.
25, No. 4, pp. 725732, 2002.
[70] W. Xing and S.N. Singh, Adaptive output feedback control of a nonlinear
aeroelastic structure, J. Guidance, Control and Dynamics, Vol. 23, No.
6, pp. 11091116, 2000.
[71] R. Zhang, and S.N. Singh, Adaptive output feedback control of an aeroelastic
system with unstructured uncertainties, J. Guidance, Control, and Dy
namics, Vol. No. 3, pp. 502509, 2001.
[72] Fung, An introduction to the theory of Aeroelasticity, Dover, New York, 1955.
124
[73] R.L. Bisplingho, H. Ashley, and R.L. Halfman, Aeroelasticity, Dover Publica
tions, Inc., New York, NY, 1996.
[74] M. Krstic, I. Kanellakopoulos, P. Kokotovic, Nonlinear and Adaptive Control
Design, New York, NY: John Wiley & Sons, 1995.
[75] P. Kokotovic, The Joy of Feedback: Nonlinear and Adaptive, IEEE Control
Systems Magazine, Vol. 12, pp. 717, 1992.
[76] H. K. Khalil, Nonlinear Systems, 3rd, ed., Upper Saddle River, New Jersey, Pren
tice Hall, 2002.
[77] J.J. Block and T.W. Strganac, Applied Active Control for a Nonlinear Aeroelas
tic Structure, J. Guidance, Control, and Dynamics , Vol. 21, No. 6, pp.
838845, 1998.
[78] Aguiar, A. and Joao Hespanha, Position Tracking of Underactuated Vehicles,
Proceedings of the American Control Conference, pp. 19881993, Denver,
CO, 1992.
[79] Behal, A., D. Dawson, E. Zergeroglu, and Y. Fang, Nonlinear Tracking Control
of an Underactuated Spacecraft, AIAA Journal of Guidance, Control, and
Dynamics, Vol. 25, No. 5, pp. 979985.
[80] Costic, B., M. de Queiroz, and D. Dawson, Energy Management And Attitude
Control Strategies Using Flywheels, Proceedings of the IEEE Conference
Decision and Control, Orlando, FL, Dec. 2001, pp. 34353440.
[81] Dixon, W., D. Dawson, E. Zergeroglu, and F. Zhang, Robust Tracking and Reg
ulation for Mobile Robots, International Journal of Robust and Nonlinear
Control: Special Issue on Control of Underactuated Nonlinear Systems, Vol.
10, 2000, pp. 199216.
[82] Do, K., Z. Jiang, J. Pan, and H. Nijmeijer, A Global OutputFeedback Con
troller For Stabilization and Tracking of Underactuated ODIN: A Spherical
Underwater Vehicle, Automatica, Vol. 40, pp. 117124.
[83] Fjellstad, O. and Thor I. Fossen, Quaternion Feedback Regulation of Underwater
Vehicles, Proceeding IEEE Conference on Control Applications, Strathclyde
University, Glasgow, 1994, pp. 857862
[84] Fossen, T., Guidance and Control of Ocean Vehicles, Chichester, UK: John Wiley
& Sons, 1994.
[85] Godhavn, J., Nonlinear Tracking of Underactuated Surface Vessels, Proceedings
of the 35th Conference on Decision and Control, Kobe, Japan, Dec. 1996,
pp. 975980.
[86] Huches, P.C., Spacecraft Attitude Dynamics, New York, NY: Wiley, 1994.
[87] Lizarralde, F. and J.T. Wen, Attitude Control Without Angular Velocity Mea
surement: A Passivity Approach, IEEE Transactions on Automatic Con
trol, Vol. 41, No. 3, pp. 468472.
125
[88] Pettersen, K. and H. Nijmeijer, Global Practical Stabilization and Tracking for
an Underactuated Ship  A Combined Averaging and Backstepping Ap
proach, Proceedings of the IFAC Conference on System Structure and Con
trol, Nantes, France, July 1998, pp. 5964.
[89] Pettersen, R., and O. Egeland, Exponential Stabilization of an Underactuated
Surface Vessel, Modeling, Identication and Control, Vol. 18, No. 3, pp.
239.
[90] Reyhanoglu, M., A. Schaft, N. McClamroch, and I. Kolmanovsky, Dynamics and
Control of a Class of Underactuated Mechanical Systems, IEEE Transac
tions on Automatic Control, Vol. 44, No. 9, pp. 16631670, 1999.
[91] Stillwell, D., and B. Bishop, Platoons of Underwater Vehicles, IEEE Control
Systems Magazine, December 2000, pp. 4552.
[92] Xian, B., M.S. de Queiroz, D. Dawson, and I. Walker, TaskSpace Tracking Con
trol of Robot Manipulators via Quaternion Feedback, IEEE Transactions
on Robotics and Automation, Vol. 20, No. 1,pp.160167.