$$
\begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix}
=
\begin{bmatrix}
1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\
0 & \cos\phi & -\sin\phi \\
0 & \sin\phi/\cos\theta & \cos\phi/\cos\theta
\end{bmatrix}
\begin{bmatrix} p \\ q \\ r \end{bmatrix} \tag{2.1}
$$
Defining $V_T$ as the total velocity and using Figure 2.2, the following relations can be derived:
$$
V_T = \sqrt{u^2 + v^2 + w^2}, \qquad
\alpha = \arctan\frac{w}{u}, \qquad
\beta = \arcsin\frac{v}{V_T} \tag{2.2}
$$
Furthermore, when $\alpha = \beta = 0$, the flight path angle can be defined as
$$
\gamma = \theta \tag{2.3}
$$
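The airdata relations (2.2) can be transcribed directly. A minimal sketch (Python is used here purely for illustration; function and variable names are not from the model code):

```python
import math

def airdata(u, v, w):
    """Total velocity V_T, angle of attack alpha and sideslip
    angle beta from body-axes velocity components (eq. 2.2)."""
    V_T = math.sqrt(u**2 + v**2 + w**2)
    alpha = math.atan2(w, u)    # arctan(w/u)
    beta = math.asin(v / V_T)   # arcsin(v/V_T)
    return V_T, alpha, beta
```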
2.2.3 Equations of Motion for a Rigid Body Aircraft
The equations of motion for the aircraft can be derived from Newton's Second Law of motion, which states that the summation of all external forces acting on a body must be equal to the time rate of change of its momentum, and the summation of the external moments acting on a body must be equal to the time rate of change of its angular momentum. In the inertial, earth-fixed reference frame $F_E$, Newton's Second Law can be expressed by two vector equations [143]
$$
\mathbf{F} = \left.\frac{d}{dt}(m\mathbf{V})\right|_E \tag{2.4}
$$
$$
\mathbf{M} = \left.\frac{d\mathbf{H}}{dt}\right|_E \tag{2.5}
$$
where F represents the sum of all externally applied forces, m is the mass of the aircraft,
M represents the sum of all applied torques and H is the angular momentum.
Force Equation
First, to further evaluate the force equation (2.4) it is necessary to obtain an expression
for the time rate of change of the velocity vector with respect to earth. This process is
complicated by the fact that the velocity vector may be rotating while it is changing in
magnitude. Using the equation of Coriolis in appendix A of [16] results in
F =
d
dt
(mV)
_
B
+ mV, (2.6)
where $\boldsymbol{\omega}$ is the total angular velocity of the aircraft with respect to the earth (inertial reference frame). Expressing the vectors as the sum of their components with respect to the body-fixed reference frame $F_B$ gives
$$
\mathbf{V} = \mathbf{i}u + \mathbf{j}v + \mathbf{k}w \tag{2.7}
$$
$$
\boldsymbol{\omega} = \mathbf{i}p + \mathbf{j}q + \mathbf{k}r \tag{2.8}
$$
where $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{k}$ are unit vectors along the aircraft's $x_B$, $y_B$ and $z_B$ axes, respectively.
Expanding (2.6) using (2.7), (2.8) results in
$$
\begin{aligned}
F_x &= m(\dot{u} + qw - rv) \\
F_y &= m(\dot{v} + ru - pw) \\
F_z &= m(\dot{w} + pv - qu)
\end{aligned} \tag{2.9}
$$
where the external forces $F_x$, $F_y$ and $F_z$ depend on the weight vector $\mathbf{W}$, the aerodynamic force vector $\mathbf{R}$ and the thrust vector $\mathbf{E}$. It is assumed that the thrust produced by the engine, $F_T$, acts parallel to the aircraft's $x_B$-axis. Hence,
$$
E_x = F_T, \qquad E_y = 0, \qquad E_z = 0 \tag{2.10}
$$
The components of $\mathbf{W}$ and $\mathbf{R}$ along the body axes are
$$
\begin{aligned}
W_x &= -mg\sin\theta \\
W_y &= mg\sin\phi\cos\theta \\
W_z &= mg\cos\phi\cos\theta
\end{aligned} \tag{2.11}
$$
and
$$
R_x = \bar{X}, \qquad R_y = \bar{Y}, \qquad R_z = \bar{Z} \tag{2.12}
$$
where $g$ is the gravity constant. The size of the aerodynamic forces $\bar{X}$, $\bar{Y}$ and $\bar{Z}$ is determined by the amount of air diverted by the aircraft in different directions. The amount of air diverted by the aircraft mainly depends on the following factors:

- the total velocity $V_T$ (or Mach number $M$) and the density $\rho$ of the airflow,
- the geometry of the aircraft: wing area $S$, wing span $b$ and mean aerodynamic chord $\bar{c}$,
- the orientation of the aircraft relative to the airflow: angle of attack $\alpha$ and sideslip angle $\beta$,
- the control surface deflections $\delta$,
- the angular rates $p$, $q$, $r$.
There are other variables, such as the time derivatives of the aerodynamic angles, that also play a role, but these effects are less prominent, since it is assumed that the aircraft is a rigid body. This motivates the standard way of modeling the aerodynamic forces:
$$
\begin{aligned}
\bar{X} &= \bar{q} S\, C_{X_T}(\alpha, \beta, p, q, r, \delta, \ldots) \\
\bar{Y} &= \bar{q} S\, C_{Y_T}(\alpha, \beta, p, q, r, \delta, \ldots) \\
\bar{Z} &= \bar{q} S\, C_{Z_T}(\alpha, \beta, p, q, r, \delta, \ldots)
\end{aligned} \tag{2.13}
$$
where $\bar{q} = \frac{1}{2}\rho V_T^2$ is the dynamic pressure. The air density $\rho$ is calculated according to the International Standard Atmosphere (ISA) as given in Appendix A.2. The coefficients $C_{X_T}$, $C_{Y_T}$ and $C_{Z_T}$ are usually obtained from (virtual) wind tunnel data and flight tests. Combining equations (2.11) and (2.12) and the thrust components (2.10) with (2.9) results in the complete body-axes force equation:
$$
\begin{aligned}
\bar{X} + F_T - mg\sin\theta &= m(\dot{u} + qw - rv) \\
\bar{Y} + mg\sin\phi\cos\theta &= m(\dot{v} + ru - pw) \\
\bar{Z} + mg\cos\phi\cos\theta &= m(\dot{w} + pv - qu)
\end{aligned}
$$
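The force model (2.13) is just a dynamic-pressure scaling of dimensionless coefficients. A sketch (the numerical values used in the test are placeholders, not F-16 data):

```python
def dynamic_pressure(rho, V_T):
    """q_bar = 1/2 * rho * V_T**2, as used in eq. (2.13)."""
    return 0.5 * rho * V_T**2

def body_axes_forces(rho, V_T, S, C_X, C_Y, C_Z):
    """Aerodynamic forces X_bar, Y_bar, Z_bar per eq. (2.13);
    the C_* arguments stand for the total coefficients C_{X_T}, etc."""
    q_bar = dynamic_pressure(rho, V_T)
    return q_bar * S * C_X, q_bar * S * C_Y, q_bar * S * C_Z
```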
Analogously to (2.13), the aerodynamic moments are modeled as
$$
\begin{aligned}
\bar{L} &= \bar{q} S b\, C_{l_T}(\alpha, \beta, p, q, r, \delta, \ldots) \\
\bar{M} &= \bar{q} S \bar{c}\, C_{m_T}(\alpha, \beta, p, q, r, \delta, \ldots) \\
\bar{N} &= \bar{q} S b\, C_{n_T}(\alpha, \beta, p, q, r, \delta, \ldots)
\end{aligned} \tag{2.20}
$$
Combining (2.18) and (2.19), the complete body-axis moment equation is formed as
$$
\begin{aligned}
\bar{L} &= \dot{p} I_x - \dot{r} I_{xz} + qr(I_z - I_y) - pq\, I_{xz} \\
\bar{M} - r H_{eng} &= \dot{q} I_y + pr(I_x - I_z) + (p^2 - r^2) I_{xz} \\
\bar{N} + q H_{eng} &= \dot{r} I_z - \dot{p} I_{xz} + pq(I_y - I_x) + qr\, I_{xz}.
\end{aligned} \tag{2.21}
$$
2.2.4 Gathering the Equations of Motion
Euler Angles
The equations of motion derived in the previous sections are now collected and written as a system of twelve scalar first-order differential equations:
$$
\begin{aligned}
\dot{u} &= rv - qw - g\sin\theta + \frac{1}{m}\left(\bar{X} + F_T\right) && (2.22) \\
\dot{v} &= pw - ru + g\sin\phi\cos\theta + \frac{1}{m}\bar{Y} && (2.23) \\
\dot{w} &= qu - pv + g\cos\phi\cos\theta + \frac{1}{m}\bar{Z} && (2.24) \\
\dot{p} &= (c_1 r + c_2 p)q + c_3 \bar{L} + c_4\left(\bar{N} + q H_{eng}\right) && (2.25) \\
\dot{q} &= c_5 pr - c_6(p^2 - r^2) + c_7\left(\bar{M} - r H_{eng}\right) && (2.26) \\
\dot{r} &= (c_8 p - c_2 r)q + c_4 \bar{L} + c_9\left(\bar{N} + q H_{eng}\right) && (2.27) \\
\dot{\phi} &= p + (q\sin\phi + r\cos\phi)\tan\theta && (2.28) \\
\dot{\theta} &= q\cos\phi - r\sin\phi && (2.29) \\
\dot{\psi} &= \frac{q\sin\phi + r\cos\phi}{\cos\theta} && (2.30) \\
\dot{x}_E &= u\cos\psi\cos\theta + v(\cos\psi\sin\theta\sin\phi - \sin\psi\cos\phi) \\
&\quad + w(\cos\psi\sin\theta\cos\phi + \sin\psi\sin\phi) && (2.31) \\
\dot{y}_E &= u\sin\psi\cos\theta + v(\sin\psi\sin\theta\sin\phi + \cos\psi\cos\phi) \\
&\quad + w(\sin\psi\sin\theta\cos\phi - \cos\psi\sin\phi) && (2.32) \\
\dot{z}_E &= -u\sin\theta + v\cos\theta\sin\phi + w\cos\theta\cos\phi && (2.33)
\end{aligned}
$$
where
$$
\begin{aligned}
c_1 &= \frac{(I_y - I_z)I_z - I_{xz}^2}{\Gamma} & c_4 &= \frac{I_{xz}}{\Gamma} & c_7 &= \frac{1}{I_y} \\
c_2 &= \frac{(I_x - I_y + I_z)I_{xz}}{\Gamma} & c_5 &= \frac{I_z - I_x}{I_y} & c_8 &= \frac{I_x(I_x - I_y) + I_{xz}^2}{\Gamma} \\
c_3 &= \frac{I_z}{\Gamma} & c_6 &= \frac{I_{xz}}{I_y} & c_9 &= \frac{I_x}{\Gamma}
\end{aligned}
$$
with $\Gamma = I_x I_z - I_{xz}^2$.
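The inertia constants and the rotational dynamics (2.25)-(2.27) transcribe directly into code. A sketch, with placeholder inertia values in the test (not the F-16's); for a spherical inertia with $I_{xz}=0$ the equations collapse to $\dot{\omega} = M/I$, which is a convenient sanity check:

```python
def inertia_constants(Ix, Iy, Iz, Ixz):
    """Constants c1..c9, with Gamma = Ix*Iz - Ixz**2."""
    G = Ix * Iz - Ixz**2
    c1 = ((Iy - Iz) * Iz - Ixz**2) / G
    c2 = (Ix - Iy + Iz) * Ixz / G
    c3 = Iz / G
    c4 = Ixz / G
    c5 = (Iz - Ix) / Iy
    c6 = Ixz / Iy
    c7 = 1.0 / Iy
    c8 = (Ix * (Ix - Iy) + Ixz**2) / G
    c9 = Ix / G
    return c1, c2, c3, c4, c5, c6, c7, c8, c9

def rotational_dynamics(p, q, r, L, M, N, H_eng, c):
    """p_dot, q_dot, r_dot per eqs. (2.25)-(2.27)."""
    c1, c2, c3, c4, c5, c6, c7, c8, c9 = c
    p_dot = (c1 * r + c2 * p) * q + c3 * L + c4 * (N + q * H_eng)
    q_dot = c5 * p * r - c6 * (p**2 - r**2) + c7 * (M - r * H_eng)
    r_dot = (c8 * p - c2 * r) * q + c4 * L + c9 * (N + q * H_eng)
    return p_dot, q_dot, r_dot
```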
Quaternions
The above equations of motion make use of the Euler angle approach for the orientation model. The disadvantage of the Euler angle method is that the differential equations for $\phi$ and $\psi$ become singular when the pitch angle $\theta$ passes through $\pm\pi/2$. To avoid these singularities, quaternions are used for the aircraft orientation representation. A detailed explanation of quaternions and their properties can be found in [127]. With the quaternion representation the aircraft system representation consists of 13 scalar first-order differential equations:
$$
\begin{aligned}
\dot{u} &= rv - qw + \frac{1}{m}\left(\bar{X} + F_T\right) + 2(q_1 q_3 - q_0 q_2)g && (2.34) \\
\dot{v} &= pw - ru + \frac{1}{m}\bar{Y} + 2(q_2 q_3 + q_0 q_1)g && (2.35) \\
\dot{w} &= qu - pv + \frac{1}{m}\bar{Z} + (q_0^2 - q_1^2 - q_2^2 + q_3^2)g && (2.36) \\
\dot{p} &= (c_1 r + c_2 p)q + c_3 \bar{L} + c_4\left(\bar{N} + q H_{eng}\right) && (2.37) \\
\dot{q} &= c_5 pr - c_6(p^2 - r^2) + c_7\left(\bar{M} - r H_{eng}\right) && (2.38) \\
\dot{r} &= (c_8 p - c_2 r)q + c_4 \bar{L} + c_9\left(\bar{N} + q H_{eng}\right) && (2.39)
\end{aligned}
$$
$$
\dot{\mathbf{q}} =
\begin{bmatrix} \dot{q}_0 \\ \dot{q}_1 \\ \dot{q}_2 \\ \dot{q}_3 \end{bmatrix}
= \frac{1}{2}
\begin{bmatrix}
0 & -p & -q & -r \\
p & 0 & r & -q \\
q & -r & 0 & p \\
r & q & -p & 0
\end{bmatrix}
\begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix} \tag{2.40}
$$
$$
\begin{bmatrix} \dot{x}_E \\ \dot{y}_E \\ \dot{z}_E \end{bmatrix}
=
\begin{bmatrix}
q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\
2(q_1 q_2 + q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2(q_2 q_3 - q_0 q_1) \\
2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2 - q_1^2 - q_2^2 + q_3^2
\end{bmatrix}
\begin{bmatrix} u \\ v \\ w \end{bmatrix} \tag{2.41}
$$
where
$$
\begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix}
=
\begin{bmatrix}
\cos\frac{\phi}{2}\cos\frac{\theta}{2}\cos\frac{\psi}{2} + \sin\frac{\phi}{2}\sin\frac{\theta}{2}\sin\frac{\psi}{2} \\
\sin\frac{\phi}{2}\cos\frac{\theta}{2}\cos\frac{\psi}{2} - \cos\frac{\phi}{2}\sin\frac{\theta}{2}\sin\frac{\psi}{2} \\
\cos\frac{\phi}{2}\sin\frac{\theta}{2}\cos\frac{\psi}{2} + \sin\frac{\phi}{2}\cos\frac{\theta}{2}\sin\frac{\psi}{2} \\
\cos\frac{\phi}{2}\cos\frac{\theta}{2}\sin\frac{\psi}{2} - \sin\frac{\phi}{2}\sin\frac{\theta}{2}\cos\frac{\psi}{2}
\end{bmatrix}.
$$
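The quaternion initialization from the Euler angles can be sketched as a direct transcription of the expression above:

```python
import math

def euler_to_quaternion(phi, theta, psi):
    """Quaternion [q0, q1, q2, q3] from Euler angles (rad)."""
    cphi, sphi = math.cos(phi / 2), math.sin(phi / 2)
    cth, sth = math.cos(theta / 2), math.sin(theta / 2)
    cpsi, spsi = math.cos(psi / 2), math.sin(psi / 2)
    q0 = cphi * cth * cpsi + sphi * sth * spsi
    q1 = sphi * cth * cpsi - cphi * sth * spsi
    q2 = cphi * sth * cpsi + sphi * cth * spsi
    q3 = cphi * cth * spsi - sphi * sth * cpsi
    return [q0, q1, q2, q3]
```

The result is a unit quaternion for any input angles, which is an easy invariant to verify.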
Using (2.40) to describe the attitude dynamics means that the four differential equations are integrated as if all quaternion components were independent. Therefore, the normalization condition $|\mathbf{q}| = \sqrt{q_0^2 + q_1^2 + q_2^2 + q_3^2} = 1$ and the derivative constraint $q_0\dot{q}_0 + q_1\dot{q}_1 + q_2\dot{q}_2 + q_3\dot{q}_3 = 0$ may not be satisfied after performing an integration step due to numerical round-off errors. After each integration step the constraint may be re-established by subtracting the discrepancy from the quaternion derivatives. The corrected quaternion dynamics are [170]
$$
\dot{\mathbf{q}}^* = \dot{\mathbf{q}} - \epsilon\,\mathbf{q}, \tag{2.42}
$$
where $\epsilon = q_0\dot{q}_0 + q_1\dot{q}_1 + q_2\dot{q}_2 + q_3\dot{q}_3$.
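The kinematics (2.40) and the correction (2.42) can be sketched as follows. Note that for the exact kinematics the discrepancy $\epsilon$ vanishes identically (the matrix in (2.40) is skew-symmetric), so the correction only acts on numerical error accumulated during integration:

```python
def quat_dot(quat, p, q, r):
    """Quaternion kinematics, eq. (2.40)."""
    q0, q1, q2, q3 = quat
    return [0.5 * (-p * q1 - q * q2 - r * q3),
            0.5 * ( p * q0 + r * q2 - q * q3),
            0.5 * ( q * q0 - r * q1 + p * q3),
            0.5 * ( r * q0 + q * q1 - p * q2)]

def corrected_quat_dot(quat, p, q, r):
    """Subtract the constraint discrepancy eps = q^T q_dot, eq. (2.42)."""
    qd = quat_dot(quat, p, q, r)
    eps = sum(qi * qdi for qi, qdi in zip(quat, qd))
    return [qdi - eps * qi for qi, qdi in zip(quat, qd)]
```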
Wind-axes Force Equations
For control design it is more convenient to transform the force equations (2.34)-(2.36) to the wind-axes reference frame. Taking the derivative of (2.2) results in [127]
$$
\begin{aligned}
\dot{V}_T &= \frac{1}{m}\left(-D + F_T\cos\alpha\cos\beta + m g_1\right) && (2.43) \\
\dot{\alpha} &= q - (p\cos\alpha + r\sin\alpha)\tan\beta - \frac{1}{m V_T \cos\beta}\left(L + F_T\sin\alpha - m g_3\right) && (2.44) \\
\dot{\beta} &= p\sin\alpha - r\cos\alpha + \frac{1}{m V_T}\left(Y - F_T\cos\alpha\sin\beta + m g_2\right) && (2.45)
\end{aligned}
$$
where the drag force $D$, the side force $Y$ and the lift force $L$ are defined as
$$
\begin{aligned}
D &= -\bar{X}\cos\alpha\cos\beta - \bar{Y}\sin\beta - \bar{Z}\sin\alpha\cos\beta \\
Y &= -\bar{X}\cos\alpha\sin\beta + \bar{Y}\cos\beta - \bar{Z}\sin\alpha\sin\beta \\
L &= \bar{X}\sin\alpha - \bar{Z}\cos\alpha
\end{aligned}
$$
and the gravity components as
$$
\begin{aligned}
g_1 &= g\left(-\cos\alpha\cos\beta\sin\theta + \sin\beta\sin\phi\cos\theta + \sin\alpha\cos\beta\cos\phi\cos\theta\right) \\
g_2 &= g\left(\cos\alpha\sin\beta\sin\theta + \cos\beta\sin\phi\cos\theta - \sin\alpha\sin\beta\cos\phi\cos\theta\right) \\
g_3 &= g\left(\sin\alpha\sin\theta + \cos\alpha\cos\phi\cos\theta\right).
\end{aligned}
$$
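The mapping from body-axes aerodynamic forces to drag, side force and lift can be sketched directly from the definitions above (a sanity check: at $\alpha = \beta = 0$, drag is $-\bar{X}$ and lift is $-\bar{Z}$):

```python
import math

def wind_axes_forces(X, Y_b, Z, alpha, beta):
    """Drag D, side force Y and lift L from the body-axes forces
    X_bar, Y_b (= Y_bar), Z_bar; angles in radians."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    D = -X * ca * cb - Y_b * sb - Z * sa * cb
    Y = -X * ca * sb + Y_b * cb - Z * sa * sb
    L =  X * sa - Z * ca
    return D, Y, L
```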
2.3 Control Variables and Engine Modeling
The F-16 model allows control over thrust, elevator, ailerons and rudder. The thrust is measured in Newtons. All deflections are defined positive in the conventional way, i.e. positive thrust causes an increase in acceleration along the $x_B$-axis, a positive elevator deflection results in a decrease in pitch rate, a positive aileron deflection gives a decrease in roll rate and a positive rudder deflection decreases the yaw rate. The F-16 also has a leading edge flap, which helps to fly the aircraft at high angles of attack. The deflection of the leading edge flap $\delta_{lef}$ is not controlled directly by the pilot, but is governed by the following transfer function dependent on angle of attack and static and dynamic pressures:
$$
\delta_{lef} = 1.38\,\frac{2s + 7.25}{s + 7.25}\,\alpha - 9.05\,\frac{\bar{q}}{p_{stat}} + 1.45. \tag{2.46}
$$
The differential elevator deflection, trailing edge flap, landing gear and speed brakes are not included in the model, since no data is publicly available. The control surfaces of the F-16 are driven by servo-controlled actuators to produce the deflections commanded by the flight control system. The actuators of the control surfaces are modeled as first-order low-pass filters with certain gains and saturation limits in range and deflection rate. These limits can be found in Table 2.1. The gains of the actuators are 1/0.136 for the leading edge flap and 1/0.0495 for the other control surfaces. The maximum values and units for all control variables are given in Table 2.1.

Table 2.1: The control input units and maximum values

Control            units  MIN.   MAX.  rate limit
Elevator           deg    -25    25    60 deg/s
Ailerons           deg    -21.5  21.5  80 deg/s
Rudder             deg    -30    30    120 deg/s
Leading edge flap  deg    0      25    25 deg/s
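The actuator description above (first-order lag with rate and position saturation) can be sketched as a discrete-time update. The forward-Euler discretization and the time step are choices made here for illustration, not taken from the model:

```python
def actuator_step(x, cmd, dt, tau, rate_lim, pos_min, pos_max):
    """One forward-Euler step of a first-order actuator
    x_dot = (cmd - x)/tau, with rate and position saturation."""
    rate = (cmd - x) / tau
    rate = max(-rate_lim, min(rate_lim, rate))    # rate saturation
    x = x + rate * dt
    return max(pos_min, min(pos_max, x))          # position saturation
```

For the elevator one would use, per Table 2.1, a rate limit of 60 deg/s and position limits of ±25 deg, with a time constant of 0.0495 s (the reciprocal of the stated actuator gain).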
The Lockheed Martin F-16 is powered by an afterburning turbofan jet engine, which is modeled taking into account throttle gearing and engine power level lag. The thrust response is modeled with a first-order lag, where the lag time constant is a function of the current engine power level and the commanded power. The commanded power level as a function of the throttle position is a linear relationship, apart from a change in slope when the military power level is reached at a 0.77 throttle setting [149]:
$$
P_c(\delta_{th}) =
\begin{cases}
64.94\,\delta_{th} & \text{if } \delta_{th} \le 0.77 \\
217.38\,\delta_{th} - 117.38 & \text{if } \delta_{th} > 0.77
\end{cases}. \tag{2.47}
$$
Note that the throttle position is limited to the range $0 \le \delta_{th} \le 1$. The derivative of the actual power level $P_a$ is given by [149]
$$
\dot{P}_a = \frac{1}{\tau_{eng}}\left(P_c' - P_a\right), \tag{2.48}
$$
where
$$
P_c' =
\begin{cases}
P_c & \text{if } P_c \ge 50 \text{ and } P_a \ge 50 \\
60 & \text{if } P_c \ge 50 \text{ and } P_a < 50 \\
40 & \text{if } P_c < 50 \text{ and } P_a \ge 50 \\
P_c & \text{if } P_c < 50 \text{ and } P_a < 50
\end{cases}
$$
$$
\frac{1}{\tau_{eng}} =
\begin{cases}
5.0 & \text{if } P_c \ge 50 \text{ and } P_a \ge 50 \\
1/\tilde{\tau}_{eng} & \text{if } P_c \ge 50 \text{ and } P_a < 50 \\
5.0 & \text{if } P_c < 50 \text{ and } P_a \ge 50 \\
1/\tilde{\tau}_{eng} & \text{if } P_c < 50 \text{ and } P_a < 50
\end{cases}
$$
$$
\frac{1}{\tilde{\tau}_{eng}} =
\begin{cases}
1.0 & \text{if } (P_c' - P_a) \le 25 \\
0.1 & \text{if } (P_c' - P_a) \ge 50 \\
1.9 - 0.036\,(P_c' - P_a) & \text{if } 25 < (P_c' - P_a) < 50
\end{cases}.
$$
The engine thrust data is available in tabular form as a function of actual power, altitude and Mach number over the ranges $0 \le h \le 15240$ m and $0 \le M \le 1$ for idle, military and maximum power settings [149]. The thrust is computed as
$$
F_T =
\begin{cases}
T_{idle} + (T_{mil} - T_{idle})\,\dfrac{P_a}{50} & \text{if } P_a < 50 \\[2mm]
T_{mil} + (T_{max} - T_{mil})\,\dfrac{P_a - 50}{50} & \text{if } P_a \ge 50
\end{cases}. \tag{2.49}
$$
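The throttle gearing (2.47), the power-lag logic and the thrust blend (2.49) can be combined into one sketch. The thrust values passed in stand in for table lookups at the current altitude and Mach number; they are placeholders, not F-16 data:

```python
def power_command(th):
    """Commanded power level from throttle position (eq. 2.47)."""
    return 64.94 * th if th <= 0.77 else 217.38 * th - 117.38

def power_dot(P_c, P_a):
    """P_a derivative with the piecewise command / time-constant logic."""
    # modified command P_c'
    if P_c >= 50.0:
        P_cp = P_c if P_a >= 50.0 else 60.0
    else:
        P_cp = 40.0 if P_a >= 50.0 else P_c
    # inverse time constant: 5.0 whenever P_a >= 50, else the dP schedule
    if P_a >= 50.0:
        inv_tau = 5.0
    else:
        dP = P_cp - P_a
        if dP <= 25.0:
            inv_tau = 1.0
        elif dP >= 50.0:
            inv_tau = 0.1
        else:
            inv_tau = 1.9 - 0.036 * dP
    return inv_tau * (P_cp - P_a)

def thrust(P_a, T_idle, T_mil, T_max):
    """Thrust blend between power settings (eq. 2.49)."""
    if P_a < 50.0:
        return T_idle + (T_mil - T_idle) * P_a / 50.0
    return T_mil + (T_max - T_mil) * (P_a - 50.0) / 50.0
```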
The engine angular momentum is assumed to act along the $x_B$-axis with a constant value of 216.9 kg·m²/s.
2.4 Geometry and Aerodynamic Data
The relevant geometry data of the F-16 can be found in Table A.1 of Appendix A. The aerodynamic data of the F-16 model have been derived from low-speed static and dynamic (force oscillation) wind-tunnel tests conducted with sub-scale models in wind-tunnel facilities at the NASA Ames and Langley Research Centers [149]. The aerodynamic data in [149] are given in tabular form and are valid for the following subsonic flight envelope:

- $-20 \le \alpha \le 90$ degrees;
- $-30 \le \beta \le 30$ degrees.

Two examples of the aerodynamic data for the F-16 model can be found in Figure 2.3. The pitch moment coefficient $C_m$ and the $C_Z$ both depend on three variables: angle of attack, sideslip angle and elevator deflection.
[Figure: two surface plots of aerodynamic coefficients versus angle of attack and sideslip angle: (a) $C_m$ for $\delta_e = 0$; (b) $C_Z$ for $\delta_e = 0$.]

Figure 2.3: Two examples of the aerodynamic coefficient data for the F-16 obtained from wind-tunnel tests.
The various aerodynamic contributions to a given force or moment coefficient as given in [149] are summed as follows.

For the X-axis force coefficient $C_{X_T}$:
$$
C_{X_T} = C_X(\alpha, \beta, \delta_e)
+ \Delta C_{X_{lef}}\left(1 - \frac{\delta_{lef}}{25}\right)
+ \frac{q\bar{c}}{2V_T}\left[C_{X_q}(\alpha) + \Delta C_{X_{q_{lef}}}(\alpha)\left(1 - \frac{\delta_{lef}}{25}\right)\right] \tag{2.50}
$$
where
$$
\Delta C_{X_{lef}} = C_{X_{lef}}(\alpha, \beta) - C_X(\alpha, \beta, \delta_e = 0^\circ).
$$
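Structurally, the buildup (2.50) is a sum of table lookups scaled by the leading-edge-flap factor $(1 - \delta_{lef}/25)$ plus a pitch-rate damping term. A sketch, with the data tables replaced by placeholder callables (the lambdas in the test are invented for checking, not F-16 data):

```python
def cx_total(CX, CX_lef, CXq, CXq_lef, alpha, beta, de, dlef, q, V_T, cbar):
    """C_X_T buildup per eq. (2.50). CX, CX_lef, CXq, CXq_lef are
    interpolation functions standing in for the tabulated data."""
    k_lef = 1.0 - dlef / 25.0
    dCX_lef = CX_lef(alpha, beta) - CX(alpha, beta, 0.0)
    return (CX(alpha, beta, de)
            + dCX_lef * k_lef
            + q * cbar / (2.0 * V_T) * (CXq(alpha) + CXq_lef(alpha) * k_lef))
```

The remaining coefficients (2.51)-(2.55) follow the same pattern with additional aileron, rudder and roll/yaw-rate terms.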
For the Y-axis force coefficient $C_{Y_T}$:
$$
\begin{aligned}
C_{Y_T} ={}& C_Y(\alpha, \beta) + \Delta C_{Y_{lef}}\left(1 - \frac{\delta_{lef}}{25}\right)
+ \left[\Delta C_{Y_{\delta_a}} + \Delta C_{Y_{\delta_{a,lef}}}\left(1 - \frac{\delta_{lef}}{25}\right)\right]\frac{\delta_a}{20} \\
&+ \Delta C_{Y_{\delta_r}}\,\frac{\delta_r}{30}
+ \frac{rb}{2V_T}\left[C_{Y_r}(\alpha) + \Delta C_{Y_{r_{lef}}}(\alpha)\left(1 - \frac{\delta_{lef}}{25}\right)\right] \\
&+ \frac{pb}{2V_T}\left[C_{Y_p}(\alpha) + \Delta C_{Y_{p_{lef}}}(\alpha)\left(1 - \frac{\delta_{lef}}{25}\right)\right] \tag{2.51}
\end{aligned}
$$
where
$$
\begin{aligned}
\Delta C_{Y_{lef}} &= C_{Y_{lef}}(\alpha, \beta) - C_Y(\alpha, \beta) \\
\Delta C_{Y_{\delta_a}} &= C_{Y_{\delta_a}}(\alpha, \beta) - C_Y(\alpha, \beta) \\
\Delta C_{Y_{\delta_{a,lef}}} &= C_{Y_{\delta_{a,lef}}}(\alpha, \beta) - C_{Y_{lef}}(\alpha, \beta) - \Delta C_{Y_{\delta_a}} \\
\Delta C_{Y_{\delta_r}} &= C_{Y_{\delta_r}}(\alpha, \beta) - C_Y(\alpha, \beta).
\end{aligned}
$$
For the Z-axis force coefficient $C_{Z_T}$:
$$
C_{Z_T} = C_Z(\alpha, \beta, \delta_e)
+ \Delta C_{Z_{lef}}\left(1 - \frac{\delta_{lef}}{25}\right)
+ \frac{q\bar{c}}{2V_T}\left[C_{Z_q}(\alpha) + \Delta C_{Z_{q_{lef}}}(\alpha)\left(1 - \frac{\delta_{lef}}{25}\right)\right] \tag{2.52}
$$
where
$$
\Delta C_{Z_{lef}} = C_{Z_{lef}}(\alpha, \beta) - C_Z(\alpha, \beta, \delta_e = 0^\circ).
$$
For the rolling-moment coefficient $C_{l_T}$:
$$
\begin{aligned}
C_{l_T} ={}& C_l(\alpha, \beta, \delta_e) + \Delta C_{l_{lef}}\left(1 - \frac{\delta_{lef}}{25}\right)
+ \left[\Delta C_{l_{\delta_a}} + \Delta C_{l_{\delta_{a,lef}}}\left(1 - \frac{\delta_{lef}}{25}\right)\right]\frac{\delta_a}{20} \\
&+ \Delta C_{l_{\delta_r}}\,\frac{\delta_r}{30}
+ \frac{rb}{2V_T}\left[C_{l_r}(\alpha) + \Delta C_{l_{r_{lef}}}(\alpha)\left(1 - \frac{\delta_{lef}}{25}\right)\right] \\
&+ \frac{pb}{2V_T}\left[C_{l_p}(\alpha) + \Delta C_{l_{p_{lef}}}(\alpha)\left(1 - \frac{\delta_{lef}}{25}\right)\right]
+ \Delta C_{l_\beta}(\alpha)\,\beta \tag{2.53}
\end{aligned}
$$
where
$$
\begin{aligned}
\Delta C_{l_{lef}} &= C_{l_{lef}}(\alpha, \beta) - C_l(\alpha, \beta, \delta_e = 0^\circ) \\
\Delta C_{l_{\delta_a}} &= C_{l_{\delta_a}}(\alpha, \beta) - C_l(\alpha, \beta, \delta_e = 0^\circ) \\
\Delta C_{l_{\delta_{a,lef}}} &= C_{l_{\delta_{a,lef}}}(\alpha, \beta) - C_{l_{lef}}(\alpha, \beta) - \Delta C_{l_{\delta_a}} \\
\Delta C_{l_{\delta_r}} &= C_{l_{\delta_r}}(\alpha, \beta) - C_l(\alpha, \beta, \delta_e = 0^\circ).
\end{aligned}
$$
For the pitching-moment coefficient $C_{m_T}$:
$$
\begin{aligned}
C_{m_T} ={}& C_m(\alpha, \beta, \delta_e) + C_{Z_T}\left(x_{cg_r} - x_{cg}\right)
+ \Delta C_{m_{lef}}\left(1 - \frac{\delta_{lef}}{25}\right) \\
&+ \frac{q\bar{c}}{2V_T}\left[C_{m_q}(\alpha) + \Delta C_{m_{q_{lef}}}(\alpha)\left(1 - \frac{\delta_{lef}}{25}\right)\right]
+ \Delta C_m(\alpha) + \Delta C_{m_{ds}}(\alpha, \delta_e) \tag{2.54}
\end{aligned}
$$
where
$$
\Delta C_{m_{lef}} = C_{m_{lef}}(\alpha, \beta) - C_m(\alpha, \beta, \delta_e = 0^\circ).
$$
For the yawing-moment coefficient $C_{n_T}$:
$$
\begin{aligned}
C_{n_T} ={}& C_n(\alpha, \beta, \delta_e) + \Delta C_{n_{lef}}\left(1 - \frac{\delta_{lef}}{25}\right)
- C_{Y_T}\left(x_{cg_r} - x_{cg}\right)\frac{\bar{c}}{b} \\
&+ \left[\Delta C_{n_{\delta_a}} + \Delta C_{n_{\delta_{a,lef}}}\left(1 - \frac{\delta_{lef}}{25}\right)\right]\frac{\delta_a}{20}
+ \Delta C_{n_{\delta_r}}\,\frac{\delta_r}{30} \\
&+ \frac{rb}{2V_T}\left[C_{n_r}(\alpha) + \Delta C_{n_{r_{lef}}}(\alpha)\left(1 - \frac{\delta_{lef}}{25}\right)\right] \\
&+ \frac{pb}{2V_T}\left[C_{n_p}(\alpha) + \Delta C_{n_{p_{lef}}}(\alpha)\left(1 - \frac{\delta_{lef}}{25}\right)\right]
+ \Delta C_{n_\beta}(\alpha)\,\beta \tag{2.55}
\end{aligned}
$$
where
$$
\begin{aligned}
\Delta C_{n_{lef}} &= C_{n_{lef}}(\alpha, \beta) - C_n(\alpha, \beta, \delta_e = 0^\circ) \\
\Delta C_{n_{\delta_a}} &= C_{n_{\delta_a}}(\alpha, \beta) - C_n(\alpha, \beta, \delta_e = 0^\circ) \\
\Delta C_{n_{\delta_{a,lef}}} &= C_{n_{\delta_{a,lef}}}(\alpha, \beta) - C_{n_{lef}}(\alpha, \beta) - \Delta C_{n_{\delta_a}} \\
\Delta C_{n_{\delta_r}} &= C_{n_{\delta_r}}(\alpha, \beta) - C_n(\alpha, \beta, \delta_e = 0^\circ).
\end{aligned}
$$
2.5 Baseline Flight Control System
The NASA technical report [149] also contains a description of a stability and control augmentation system for the F-16 model. This flight control system is a simplified version of the actual baseline F-16 flight controller, which retains its main characteristics. A description of the different control loops of the system is given in this section; for more details see [149].
2.5.1 Longitudinal Control
A diagram of the longitudinal flight control system can be found in Figure A.2 of Appendix A.3. It is a command augmentation system where the pilot commands normal acceleration with a longitudinal stick input. Washed-out pitch rate and filtered normal acceleration are fed back to achieve the desired response. A forward-loop integration is included to make the steady-state acceleration response match the commanded acceleration. At low Mach numbers the F-16 model has a minor negative static longitudinal stability; therefore angle of attack feedback is used to provide artificial static stability. The pitch control system incorporates an angle of attack limiting system, where again angle of attack feedback is used to modify the pilot-commanded normal acceleration. The resulting angle of attack limit is about 25 deg in 1g flight. Finally, the system also makes sure that the pitch control is deflected in the proper direction to oppose the nose-up coupling moment generated by rapid rolling at high angles of attack.
2.5.2 Lateral Control
The lateral flight control system is depicted in the block diagram given in Figure A.3. The pilot can command roll rates up to 308 deg/s through the lateral stick movement. Above angles of attack of 29 deg, an automatic departure-prevention system is activated. This system disengages the roll-rate control augmentation system and uses yaw rate feedback to drive the roll control surfaces to oppose any yaw rate buildup. At high angles of attack the pilot-commanded roll rate is limited to prevent pitch-out departures. The roll rate limiting is scheduled on angle of attack, elevator deflection and dynamic pressure.
2.5.3 Directional Control
A scheme of the directional control system can be found in Figure A.4. The pilot rudder input is computed directly from pedal force and is limited to ±30 deg. Between 20 and 30 deg angle of attack this command signal is gradually reduced to zero to prevent departures from excessive pilot rudder usage at high angles of attack. Also, between 20 and 40 deg/s roll rate the command signal is gradually reduced to zero to prevent pitch-out departures. Yaw stability augmentation consists of lateral acceleration and approximated stability-axis yaw rate feedback. The stability-axis yaw damper provides increased lateral-directional damping in addition to reducing sideslip during high angle of attack roll maneuvers.
An aileron-rudder interconnection exists to improve coordination and roll performance. At low speeds the gain for the interconnection is scheduled as a linear function of angle of attack. As in the lateral control system, above angles of attack of 29 deg, a departure-/spin-prevention mode is activated which uses the rudder to oppose any yaw rate buildup.
2.6 MATLAB/Simulink® Implementation

The F-16 dynamic model is written as a C S-function in MATLAB/Simulink®. The inputs of the model are the control surface deflections and the throttle setting. The outputs are the aircraft states and the dimensionless normal accelerations $n_y$ and $n_z$. The aerodynamic data, interpolation functions, the engine model and the ISA atmosphere model are obtained from separate C files. A rudimentary trim function obtained from [173] is included. The baseline flight control system and the leading edge flap control system are constructed with Simulink blocks. Sensor models have been obtained from ADMIRE [63] and are also included in the Simulink model. Full state measurement is assumed to be available for the control systems. Note that wind or turbulence effects are not taken into account in the simulation model.

Figure 2.4 depicts the resulting Simulink model of the closed-loop system. The FlightGear block can be used to fly the aircraft on a desktop computer with a joystick in the open-source FlightGear flight simulator in real time. All simulation model files are included on CD-ROM, but can also be downloaded from www.mathworks.com. Descriptions are included in the header of each file.
Figure 2.4: The MATLAB/Simulink® F-16 model with baseline flight control system.
Chapter 3
Backstepping
In this chapter the backstepping approach to control design is introduced. Since all the adaptive design methods discussed throughout the chapters of this thesis are based on the backstepping technique, this chapter, together with the next chapter on adaptive backstepping, forms the theoretical basis of the thesis. First, the Lyapunov theory and stability concepts on which backstepping is based are reviewed. After that, the design approach itself is introduced and its characteristics are explained with illustrative examples.
3.1 Introduction
Backstepping is a systematic, Lyapunov-based method for nonlinear control design. The backstepping method can be applied to a broad class of systems. The name backstepping refers to the recursive nature of the design procedure. The design procedure starts at the scalar equation which is separated by the largest number of integrations from the control input and steps back toward the control input. At each step an intermediate or virtual control law is calculated, and in the last step the real control law is found. Two comprehensive textbooks that deal with backstepping and Lyapunov theory are [106] and especially [118]. The origins of the backstepping method are traced in the survey paper by Kokotovic [110].

An important feature of backstepping is the flexibility of the method; for instance, dealing with nonlinearities is a designer choice. If a nonlinearity acts stabilizing, i.e. it is useful in a sense, it can be retained in the closed-loop system. This is in contrast with the NDI and FBL methods. An additional advantage is that the controller relies on less precise model information: the designer does not need to know the size of a stabilizing nonlinearity. In [75, 77, 78] this notion is used to design a robust nonlinear controller for a fighter aircraft model. Other examples of backstepping control designs where the cancellation of useful nonlinearities is avoided can be found in [116, 118].
However, it is often difficult to ascertain if a nonlinearity in the aircraft dynamics acts stabilizing over the entire flight envelope, especially with model uncertainties or sudden changes in the aircraft's dynamic behavior. Therefore, this feature of backstepping is not exploited in this thesis. Instead, the research focuses on more advanced adaptive backstepping techniques that guarantee stability and convergence even in the presence of unknown parameters. Nevertheless, this chapter serves as an introduction before the more complex adaptive backstepping techniques are introduced in Chapter 4.

This chapter starts with a discussion on Lyapunov theory and stability concepts. Lyapunov's direct method, which forms the basis of the backstepping technique, is outlined. In Section 3.3 the idea behind backstepping is introduced on a general second-order nonlinear system and extended to a recursive procedure for higher-order systems. The chapter closes with an example where the backstepping procedure is applied to the pitch autopilot design for a longitudinal missile model.
3.2 Lyapunov Theory and Stability Concepts
3.2.1 Lyapunov Stability Definitions

Consider the nonlinear dynamical system
$$
\dot{x} = f(x(t), t), \qquad x(t_0) = x_0 \tag{3.1}
$$
where $x(t) \in \mathbb{R}^n$ and $f : \mathbb{R}^n \times \mathbb{R}_+ \to \mathbb{R}^n$ is locally Lipschitz in $x$ and piecewise continuous in $t$.

Definition 3.1 (Lipschitz condition). A function $f(x, t)$ satisfies a Lipschitz condition on $D$ with Lipschitz constant $L$ if
$$
|f(x, t) - f(y, t)| \le L\,|x - y|, \tag{3.2}
$$
for all points $(x, t)$ and $(y, t)$ in $D$.¹
An equilibrium point $x_e \in \mathbb{R}^n$ of (3.1) is such that $f(x_e) = 0$. It can be assumed, without loss of generality, that the system (3.1) has an equilibrium point $x_e = 0$. The following definition gives the stability of this equilibrium point [106].

Definition 3.2 (Stability in the sense of Lyapunov). The equilibrium point $x_e = 0$ of the system (3.1) is

- stable if for each $\epsilon > 0$ and any $t_0 > 0$, there exists a $\delta(\epsilon, t_0) > 0$ such that
  $$|x(t_0)| < \delta(\epsilon, t_0) \implies |x(t)| < \epsilon \quad \forall t \ge t_0;$$
- uniformly stable if for each $\epsilon > 0$ and any $t_0 > 0$, there exists a $\delta(\epsilon) > 0$ such that
  $$|x(t_0)| < \delta(\epsilon) \implies |x(t)| < \epsilon \quad \forall t \ge t_0;$$
- unstable if it is not stable;
- asymptotically stable if it is stable, and for any $t_0 > 0$, there exists a $\delta(t_0) > 0$ such that
  $$|x(t_0)| < \delta(t_0) \implies |x(t)| \to 0 \text{ as } t \to \infty;$$
- uniformly asymptotically stable if it is uniformly stable, and there exists a $\delta > 0$ independent of $t_0$ such that for all $\epsilon > 0$ there exists a $T(\epsilon) > 0$ such that
  $$|x(t_0)| < \delta \implies |x(t)| < \epsilon \quad \forall t \ge t_0 + T(\epsilon);$$
- exponentially stable if for any $\epsilon > 0$ there exists a $\delta(\epsilon) > 0$ such that
  $$|x(t_0)| < \delta(\epsilon) \implies |x(t)| < \epsilon\, e^{-\gamma(t - t_0)} \quad \forall t \ge t_0 \ge 0$$
  for some $\gamma > 0$.

¹ Note that Lipschitz continuity is a stronger condition than continuity. For example, the function $f(x) = \sqrt{x}$ is continuous on $D = [0, \infty)$, but it is not Lipschitz continuous on $D$ since its slope approaches infinity as $x$ approaches zero.
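The footnote's claim that $f(x) = \sqrt{x}$ is not Lipschitz near zero can be checked numerically: the difference quotient $|f(x) - f(y)|/|x - y|$, which a Lipschitz constant would have to bound, grows without limit as the points approach the origin. A small illustration:

```python
import math

def difference_quotient(f, x, y):
    """|f(x) - f(y)| / |x - y|; bounded on D iff f is Lipschitz on D."""
    return abs(f(x) - f(y)) / abs(x - y)

# For sqrt near zero: (sqrt(h) - 0)/h = 1/sqrt(h), which blows up as h -> 0
quotients = [difference_quotient(math.sqrt, 10.0**(-k), 0.0) for k in (2, 4, 6)]
```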
Stability in the sense of Lyapunov is a very mild requirement on equilibrium points. In particular, it includes the idea that solutions are bounded, but at the same time requires that the bound on the solution can be made arbitrarily small by restriction of the size of the initial condition. The main difference between stability and uniform stability is that in the latter case $\delta$ is independent of $t_0$. Asymptotic stability additionally requires solutions to converge to the origin, while exponential stability requires this convergence rate to be exponential. Lyapunov stability can be further illustrated in $\mathbb{R}^2$ by Figure 3.1. All trajectories that start in the inner disc will remain in the outer disc forever (bounded).

Figure 3.1: Different types of stability illustrated in $\mathbb{R}^2$ [136].

The set of initial conditions $D = \{x_0 \in \mathbb{R}^n \mid x(t_0) = x_0 \text{ and } |x(t)| \to 0 \text{ as } t \to \infty\}$
is the domain of attraction of the origin. If $D$ is equal to $\mathbb{R}^n$, then the origin is said to be globally asymptotically stable. A globally asymptotically stable equilibrium point implies that $x_e$ is the unique equilibrium point, i.e. all solutions, regardless of their starting point, converge to this point.

In some relevant cases it may not be possible to prove stability of $x_e$, but it may still be possible to use Lyapunov analysis to show boundedness of the solution [106].

Definition 3.3 (Boundedness). The equilibrium point $x_e = 0$ of the system (3.1) is

- uniformly ultimately bounded if there exist positive constants $R$, $T(R)$, and $b$ such that $|x(t_0)| \le R$ implies that
  $$|x(t)| < b \quad \forall t \ge t_0 + T;$$
- globally uniformly ultimately bounded if it is uniformly ultimately bounded for arbitrarily large $R$.

The constant $b$ is referred to as the ultimate bound.
3.2.2 Lyapunov's Direct Method

To be of practical interest the stability conditions must not require that the differential equation (3.1) be explicitly solved, since this is in general not possible analytically. The Russian mathematician A. M. Lyapunov [135] found another way of proving stability, nowadays referred to as Lyapunov's direct method (or Lyapunov's second method). The method is a generalization of the idea that if there is some measure of energy in a system, then studying the rate of change of the energy in the system is a way to ascertain stability. To make this more precise, this measure of energy has to be defined in a more formal way. Let $B(r)$ be a ball of size $r$ around the origin, $B(r) = \{x \in \mathbb{R}^n : |x| < r\}$.

Definition 3.4. A continuous function $V(x)$ is

- positive definite on $B(r)$ if $V(0) = 0$ and $V(x) > 0$ for all $x \in B(r)$ such that $x \ne 0$;
- positive semi-definite on $B(r)$ if $V(0) = 0$ and $V(x) \ge 0$ for all $x \in B(r)$;
- negative (semi-)definite on $B(r)$ if $-V(x)$ is positive (semi-)definite;
- radially unbounded if $V(0) = 0$, $V > 0$ on $\mathbb{R}^n \setminus \{0\}$, and $V(x) \to \infty$ as $|x| \to \infty$.

A continuous function $V(x, t)$ is
- positive definite on $\mathbb{R}_+ \times B(r)$ if there exists a positive definite function $\alpha(x)$ on $B(r)$ such that
  $$V(0, t) = 0 \ \forall t \ge 0 \quad \text{and} \quad V(x, t) \ge \alpha(x), \ \forall t \ge 0, \ \forall x \in B(r);$$
- radially unbounded if there exists a radially unbounded function $\alpha(x)$ such that
  $$V(0, t) = 0 \ \forall t \ge 0 \quad \text{and} \quad V(x, t) \ge \alpha(x), \ \forall t \ge 0, \ \forall x \in \mathbb{R}^n;$$
- decrescent on $\mathbb{R}_+ \times B(r)$ if there exists a positive definite function $\beta(x)$ on $B(r)$ such that
  $$V(x, t) \le \beta(x), \ \forall t \ge 0, \ \forall x \in B(r).$$

Using these definitions, the following theorem can be used to determine stability for a system by studying an appropriate Lyapunov (energy) function $V(x, t)$. The time derivative of $V(x, t)$ is taken along the trajectories of the system (3.1):
$$
\dot{V}\Big|_{\dot{x} = f(x,t)} = \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} f(x, t).
$$
Theorem 3.5 (Lyapunov's Direct Method). Let $V(x, t) : \mathbb{R}_+ \times D \to \mathbb{R}_+$ be a continuously differentiable and positive definite function, where $D$ is an open region containing the origin.

- If $\dot{V}\big|_{\dot{x} = f(x,t)}$ is negative semi-definite for $x \in D$, then the equilibrium $x_e = 0$ is stable.
- If $V(x, t)$ is decrescent and $\dot{V}\big|_{\dot{x} = f(x,t)}$ is negative semi-definite for $x \in D$, then the equilibrium $x_e = 0$ is uniformly stable.
- If $\dot{V}\big|_{\dot{x} = f(x,t)}$ is negative definite for $x \in D$, then the equilibrium $x_e = 0$ is asymptotically stable.
- If $V(x, t)$ is decrescent and $\dot{V}\big|_{\dot{x} = f(x,t)}$ is negative definite for $x \in D$, then the equilibrium $x_e = 0$ is uniformly asymptotically stable.
- If there exist three positive constants $c_1$, $c_2$ and $c_3$ such that $c_1 |x|^2 \le V(x, t) \le c_2 |x|^2$ and $\dot{V}\big|_{\dot{x} = f(x,t)} \le -c_3 |x|^2$ for all $t \ge 0$ and for all $x \in D$, then the equilibrium $x_e = 0$ is exponentially stable.

Proof: The proof can be found in chapter 4 of [106].
The requirement of negative definiteness of the derivative of the Lyapunov function to guarantee asymptotic convergence is quite stringent. It may still be possible to conclude asymptotic convergence when this derivative is only negative semi-definite using LaSalle's invariance theorem (Theorem B.7 in Appendix B.1). However, this theorem is only valid for autonomous systems. For time-varying systems Barbalat's useful lemma can be used [118].

Lemma 3.6 (Barbalat's Lemma). Let $\phi : \mathbb{R}_+ \to \mathbb{R}$ be a uniformly continuous function on $[0, \infty)$. If $\lim_{t \to \infty} \int_0^t \phi(\tau)\, d\tau$ exists and is finite, then
$$
\lim_{t \to \infty} \phi(t) = 0.
$$
Combining this lemma with Lyapunov's direct method leads to the powerful theorem by LaSalle and Yoshizawa.

Theorem 3.7 (LaSalle-Yoshizawa). Let $x_e = 0$ be an equilibrium point of (3.1) and suppose that $f$ is locally Lipschitz in $x$ uniformly in $t$. Let $V : \mathbb{R}^n \times \mathbb{R}_+ \to \mathbb{R}_+$ be a continuously differentiable function such that
$$
\begin{aligned}
\alpha_1(x) \le V(x, t) &\le \alpha_2(x) \\
\dot{V} = \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} f(x, t) &\le -W(x) \le 0
\end{aligned}
$$
for all $t \ge 0$ and all $x \in \mathbb{R}^n$, where $\alpha_1$ and $\alpha_2$ are continuous positive definite functions and where $W$ is a continuous function. Then all solutions of (3.1) satisfy
$$
\lim_{t \to \infty} W(x(t)) = 0.
$$
In addition, if $W(x)$ is positive definite, then the equilibrium $x_e = 0$ is globally uniformly asymptotically stable.

Proof: The detailed proof can be found in Appendix B.1.
The key advantage of this theorem is that it can be applied without finding the solutions of (3.1). Unfortunately, Theorem 3.7 does not give an actual prescription for determining the Lyapunov function $V(x, t)$. Since the theorem only gives sufficient conditions, it can be tedious to find the correct Lyapunov function to establish the stability of an equilibrium point. However, the converse of the theorem also exists: if an equilibrium point is stable, then there exists a function $V(x, t)$ satisfying the conditions of the theorem. A more formal explanation of Lyapunov stability theory can be found in Appendix B.1.
3.2.3 Lyapunov Theory and Control Design
In this section the Lyapunov function concept is extended to control design, i.e. Lyapunov theory will now be applied to create a closed-loop system with desirable stability properties. Consider the nonlinear system to be controlled
$$
\dot{x} = f(x, u), \qquad x \in \mathbb{R}^n, \ u \in \mathbb{R}, \ f(0, 0) = 0 \tag{3.3}
$$
where $x$ is the system state and $u$ the control input. The control objective is to design a feedback control law $\alpha(x)$ for the control input $u$ such that the equilibrium $x = 0$ is globally asymptotically stable. To prove stability a function $V(x)$ is needed as a Lyapunov candidate. Consider, for example, the scalar system $\dot{x} = -x^3 + x + u$ with the candidate $V(x) = \frac{1}{2}x^2$, whose derivative along the solutions satisfies
$$
\dot{V} = x\dot{x} = x(-x^3 + x + u). \tag{3.8}
$$
There exist multiple choices of control law that render the above expression negative (semi-)definite. The most obvious choice is the control law
$$
u = x^3 - cx, \qquad c > 1, \tag{3.9}
$$
which is equivalent to applying FBL, since it cancels all nonlinearities, thus resulting in the linear feedback system $\dot{x} = -(c - 1)x$. Obviously, this control law does not recognize the fact that $-x^3$ is a useful nonlinearity when stabilizing around $x = 0$ and thereby wastes control effort canceling this term. Also, the presence of $x^3$ in the control law (3.9) is dangerous from a robustness perspective. Suppose that the true system is equal to $\dot{x} = -0.99x^3 + x + u$; applying control law (3.9) could lead to an unstable closed-loop system.

As an alternative the much simpler feedback
$$
u = -cx, \qquad c > 1 \tag{3.10}
$$
is selected. This results in $\dot{V} = -x^4 - (c - 1)x^2 < 0$ for $x \ne 0$. By Theorem 3.7 this control law again renders the origin globally asymptotically stable. However, the new control is more efficient and also more robust to model uncertainty as compared to the previous control (3.9).
This can be illustrated using numerical simulations. Plots of the closed-loop system response for both controllers can be found in Figure 3.2. The first plot of Figure 3.2 shows the regulation of the states for both controllers for $x(0) = 5$ and control gain $c = 2$. As expected, the system with the second "smart" controller (3.10) has a more rapid convergence because it makes use of the stabilizing nonlinearity. The bottom plot of Figure 3.2 illustrates that far less control effort is required when the stabilizing nonlinearity is not canceled.
Figure 3.2: Regulation of $x$ and control effort $u$ for both stabilizing controllers (fbl and smart)
with $x(0) = 5$ and $c = 2$.
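The comparison can be reproduced with a short simulation sketch; the forward Euler scheme, step size and horizon below are illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Compare the two stabilizing controllers of Example 3.1 on x' = -x^3 + x + u:
# the feedback-linearizing law (3.9) cancels -x^3, the "smart" law (3.10) keeps it.
def simulate(control, x0=5.0, c=2.0, dt=1e-4, t_end=5.0):
    x, xs, us = x0, [], []
    for _ in range(int(t_end / dt)):
        u = control(x, c)
        xs.append(x)
        us.append(u)
        x += dt * (-x**3 + x + u)          # forward Euler step of the plant
    return np.array(xs), np.array(us)

fbl   = lambda x, c: x**3 - c * x          # (3.9): cancels the nonlinearity
smart = lambda x, c: -c * x                # (3.10): exploits the nonlinearity

x_fbl, u_fbl = simulate(fbl)
x_smart, u_smart = simulate(smart)
# Both regulate x to zero, but the FBL law demands a peak input of
# |u| = 5**3 - 2*5 = 115, while the smart law never exceeds |u| = 10.
```

The gap in peak control effort grows with the cube of the initial condition, which is exactly the point the example makes against canceling stabilizing nonlinearities.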
The main deficiency of the CLF concept as a design tool is that for more complex nonlinear
systems a CLF is in general not known, and the task of finding one may be as difficult
as that of designing a stabilizing feedback law. At the end of the 1980s backstepping
was introduced in a number of papers, e.g. [111, 191, 201], as a recursive design tool to
solve this problem for several important classes of nonlinear systems.
3.3 Backstepping Basics
The previous section dealt with general Lyapunov theory and introduced the concept
of the CLF. It was stated that if a CLF exists, a control law which makes the closed-loop
system globally asymptotically stable can be found. However, it can be a problem to find
a CLF or the corresponding control law. Using the backstepping procedure, a CLF and a
control law can be found simultaneously, as will be illustrated in this section.
3.3.1 Integrator Backstepping
Consider the second order system

    \dot{x}_1 = f(x_1) + g(x_1)x_2    (3.11)
    \dot{x}_2 = u    (3.12)

where $(x_1, x_2) \in \mathbb{R}^2$ are the states, $u \in \mathbb{R}$ is the control input and $g(x_1) \neq 0$. The
control objective is to track the smooth reference signal $y_r(t)$ (all derivatives known
and bounded) with the state $x_1$. This tracking control problem can be transformed into a
regulation problem by introducing the tracking error variable $z_1 = x_1 - y_r$ and rewriting
the $x_1$-subsystem in terms of this variable as

    \dot{z}_1 = f(x_1) + g(x_1)x_2 - \dot{y}_r.    (3.13)
The idea behind backstepping is to regard the state $x_2$ as a control input for the $z_1$-
subsystem. By a correct choice of $x_2$ the $z_1$-subsystem can be made globally asymptoti-
cally stable. Since $x_2$ is just a state variable and not the real control input, $x_2$ is called a
virtual control and its desired value $x_2^{des} = \alpha(x_1, y_r, \dot{y}_r)$ a stabilizing function. For the
$z_1$-subsystem a CLF $V_1(z_1)$ can be selected such that the stabilizing virtual control law
renders its time-derivative along the solutions of (3.13) negative (semi-)definite, i.e.

    \dot{V}_1 = \frac{\partial V_1}{\partial z_1}\left[ f(x_1) + g(x_1)\alpha(x_1, y_r, \dot{y}_r) - \dot{y}_r \right] \leq -W(z_1),    (3.14)

where $W(z_1)$ is positive definite. The difference between the virtual control $x_2$ and its
desired value $\alpha(x_1, y_r, \dot{y}_r)$ is defined as the tracking error variable

    z_2 = x_2 - x_2^{des} = x_2 - \alpha(x_1, y_r, \dot{y}_r).    (3.15)
The system can now be rewritten in terms of the new state $z_2$ as

    \dot{z}_1 = f + g(z_2 + \alpha) - \dot{y}_r    (3.16)
    \dot{z}_2 = u - \frac{\partial\alpha}{\partial x_1}\left[ f + g(z_2 + \alpha) \right] - \frac{\partial\alpha}{\partial y_r}\dot{y}_r - \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r,    (3.17)

where the time derivative of $\alpha$ can be computed analytically, since it is a known expres-
sion. The task is now to find a control law for $u$ that ensures that $z_2$ converges to zero,
i.e. $x_2$ converges to its desired value $\alpha$. To help find this stabilizing control law, a CLF
for the complete $(z_1, z_2)$-system is needed. The most obvious solution is to augment the
CLF of the first design step, $V_1$, with an additional quadratic term that penalizes the error
$z_2$ as
    V_2(z_1, z_2) = V_1(z_1) + \frac{1}{2}z_2^2.    (3.18)
Taking the derivative of $V_2$ results in

    \dot{V}_2 = \dot{V}_1 + z_2\dot{z}_2
       = \dot{V}_1 + z_2\left[ u - \frac{\partial\alpha}{\partial x_1}[f + g(z_2 + \alpha)] - \frac{\partial\alpha}{\partial y_r}\dot{y}_r - \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r \right]
       = \frac{\partial V_1}{\partial z_1}\left[ f + g(z_2 + \alpha) - \dot{y}_r \right] + z_2\left[ u - \frac{\partial\alpha}{\partial x_1}[f + g(z_2 + \alpha)] - \frac{\partial\alpha}{\partial y_r}\dot{y}_r - \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r \right]
       = \frac{\partial V_1}{\partial z_1}\left[ f + g\alpha - \dot{y}_r \right] + z_2\left[ \frac{\partial V_1}{\partial z_1}g + u - \frac{\partial\alpha}{\partial x_1}[f + g(z_2 + \alpha)] - \frac{\partial\alpha}{\partial y_r}\dot{y}_r - \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r \right]
       \leq -W(z_1) + z_2\left[ \frac{\partial V_1}{\partial z_1}g + u - \frac{\partial\alpha}{\partial x_1}[f + g(z_2 + \alpha)] - \frac{\partial\alpha}{\partial y_r}\dot{y}_r - \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r \right],

where the cross term $\frac{\partial V_1}{\partial z_1}g z_2$, due to the presence of $z_2$ in (3.16), is grouped together with
$u$. The first term of the above expression is already negative definite by the choice of
the stabilizing function, and the bracketed term can be made negative semi-definite by
selecting the control law

    u = -cz_2 - \frac{\partial V_1}{\partial z_1}g + \frac{\partial\alpha}{\partial x_1}\left[ f + g(z_2 + \alpha) \right] + \frac{\partial\alpha}{\partial y_r}\dot{y}_r + \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r,    (3.19)

where the gain $c > 0$. This control law yields

    \dot{V}_2 \leq -W(z_1) - cz_2^2,
and thus by Theorem 3.7 renders the equilibrium $(z_1, z_2) = 0$ globally stable. Further-
more, the tracking problem is solved, since $\lim_{t\to\infty}\left[ x_1 - y_r \right] = 0$. Note that selecting the
CLF quadratic with a corresponding (virtual) feedback control law is usually the most
straightforward choice. However, other choices of CLF are also possible and in some
cases may even result in a more efficient controller, e.g. by not canceling stabilizing
nonlinearities. This is demonstrated in the following example [75].
Example 3.2 (A second order system)
Consider the scalar system of Example 3.1 augmented with an integrator

    \dot{x}_1 = -x_1^3 + x_1 + x_2    (3.20)
    \dot{x}_2 = u.    (3.21)
The control objective is to regulate $x_1$ to zero. A control law for the $x_1$-subsystem
was already found in Example 3.1. This control law is now used as a virtual control
law for $x_2$ with $c = 2$:

    x_2^{des} = \alpha = -2x_1.    (3.22)

The error between $x_2$ and its desired value is defined as the tracking error $z$:

    z = x_2 - \alpha = x_2 + 2x_1.    (3.23)
Rewriting the system in terms of the state $x_1$ and $z$ gives

    \dot{x}_1 = -x_1^3 - x_1 + z    (3.24)
    \dot{z} = u + 2(-x_1^3 - x_1 + z).    (3.25)
Now the CLF of Example 3.1 is augmented for the $(x_1, z)$-system with an extra term
that penalizes the tracking error $z$ as

    V_2(x_1, z) = \frac{1}{2}x_1^2 + \frac{1}{2}z^2.    (3.26)

Taking the derivative of $V_2$ results in

    \dot{V}_2 = x_1\dot{x}_1 + z\dot{z} = -x_1^4 - x_1^2 + z(u - 2x_1^3 - x_1 + 2z).
Examining this expression reveals that all indefinite terms can be canceled by the
control law

    u = -c_2 z + 2x_1^3 + x_1, \quad c_2 > 2.    (3.27)

By Theorem 3.7 the control law (3.27) stabilizes the $(x_1, z)$-system. However, it may
be possible to find another, more efficient controller that recognizes the naturally sta-
bilizing dynamics of the $x_1$-subsystem. In order to find this efficient controller, the
definition of the CLF $V_2$ is postponed. Consider the CLF

    V_2(x_1, z) = Q(x_1) + \frac{1}{2}z^2,    (3.28)
where $Q(x_1)$ is a CLF for the $x_1$-subsystem. Taking the derivative of $V_2$ results in

    \dot{V}_2 = Q'(x_1)\left( -x_1^3 - x_1 \right) + z\left( Q'(x_1) + u - 2x_1^3 - 2x_1 + 2z \right).

The extended design freedom can now be used to cancel the indefinite terms by se-
lecting $Q'(x_1) = 2x_1^3 + 2x_1$, i.e.

    Q(x_1) = \frac{1}{2}x_1^4 + x_1^2,    (3.29)

which is positive definite and thus a valid choice of CLF. This reduces the derivative
of $V_2$ to

    \dot{V}_2 = -2x_1^6 - 4x_1^4 - 2x_1^2 + z(u + 2z).
A much simpler control law

    u = -c_2 z, \quad c_2 > 2    (3.30)

can now be selected to render the derivative of the CLF $V_2$ negative semi-definite.
Plots of the closed-loop system response of both controllers can be found in Figure
3.3. Backstepping controller 1 only takes the stabilizing nonlinearity into account in
the first design step, and backstepping controller 2 was found using the non-quadratic
CLF. The system is initialized at $x_1(0) = 2$, $x_2(0) = -2$ and the control gains are
selected as $c = 2$, $c_2 = 3$. The required control effort for both controllers is much
lower when compared to a full cancellation FBL controller. This example illustrates
the design freedom the backstepping technique gives the control engineer.
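A minimal simulation sketch of this comparison (the Euler integration, step size and horizon are illustrative assumptions):

```python
import numpy as np

# Example 3.2: x1' = -x1^3 + x1 + x2, x2' = u, regulated from x1(0) = 2,
# x2(0) = -2 with c2 = 3. bs1 is the cancellation law (3.27); bs2 is the
# simpler law (3.30) obtained from the non-quadratic CLF (3.28).
def simulate(control, c2=3.0, dt=1e-4, t_end=5.0):
    x1, x2 = 2.0, -2.0
    for _ in range(int(t_end / dt)):
        z = x2 + 2.0 * x1                  # tracking error (3.23)
        u = control(x1, z, c2)
        x1 += dt * (-x1**3 + x1 + x2)
        x2 += dt * u
    return x1, x2

bs1 = lambda x1, z, c2: -c2 * z + 2.0 * x1**3 + x1   # (3.27)
bs2 = lambda x1, z, c2: -c2 * z                       # (3.30)
# Both drive (x1, x2) to the origin; bs2 needs markedly less control effort.
```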
3.3.2 Extension to Higher Order Systems
The backstepping procedure demonstrated on second order systems in the previous sec-
tion can be applied recursively to higher order systems. The only difference is that there
are more virtual states to backstep through. Starting with the state furthest from the
actual control, each step of the backstepping technique can be divided into three parts:
1. Introduce a virtual control and an error state, and rewrite the current state equation
in terms of these,
2. Choose a CLF for the system, treating it as a final stage,
3. Choose a stabilizing feedback term for the virtual control that renders the derivative
of the CLF negative (semi-)definite.
The CLF is augmented at subsequent steps to reflect the presence of new virtual states,
but the same three stages are followed at each step.
Figure 3.3: Response of $x_1$, $x_2$ and control effort $u$ for both backstepping controllers (bs1 and
bs2) with $x_1(0) = 2$, $x_2(0) = -2$ and $c = 2$, $c_2 = 3$.
The backstepping procedure for general strict feedback systems is now stated more for-
mally. Consider the nonlinear system

    \dot{x}_1 = f_1(x_1) + g_1(x_1)x_2
    \dot{x}_2 = f_2(x_1, x_2) + g_2(x_1, x_2)x_3
    \vdots
    \dot{x}_i = f_i(x_1, x_2, ..., x_i) + g_i(x_1, x_2, ..., x_i)x_{i+1}    (3.31)
    \vdots
    \dot{x}_n = f_n(x_1, x_2, ..., x_n) + g_n(x_1, x_2, ..., x_n)u

where $x_i \in \mathbb{R}$, $u \in \mathbb{R}$ and $g_i \neq 0$. The control objective is to force the output $y = x_1$
to asymptotically track the reference signal $y_r(t)$ whose first $n$ derivatives are assumed
to be known and bounded. The backstepping procedure starts by defining the tracking
errors as

    z_1 = x_1 - y_r
    z_i = x_i - \alpha_{i-1}, \quad i = 2, ..., n.    (3.32)
The system (3.31) can be rewritten in terms of these new variables as

    \dot{z}_1 = f_1(x_1) + g_1(x_1)x_2 - \dot{y}_r
    \dot{z}_2 = f_2(x_1, x_2) + g_2(x_1, x_2)x_3 - \dot{\alpha}_1
    \vdots
    \dot{z}_i = f_i(x_1, x_2, ..., x_i) + g_i(x_1, x_2, ..., x_i)x_{i+1} - \dot{\alpha}_{i-1}    (3.33)
    \vdots
    \dot{z}_n = f_n(x_1, x_2, ..., x_n) + g_n(x_1, x_2, ..., x_n)u - \dot{\alpha}_{n-1}

The CLFs are selected as

    V_i = V_{i-1} + \frac{1}{2}z_i^2, \quad i = 1, ..., n,    (3.34)
and the (virtual) feedback controls as

    \alpha_1 = \frac{1}{g_1}\left[ -c_1 z_1 - f_1 + \dot{y}_r \right]
    \alpha_i = \frac{1}{g_i}\left[ -g_{i-1}z_{i-1} - c_i z_i - f_i + \dot{\alpha}_{i-1} \right], \quad i = 2, ..., n    (3.35)
    u = \alpha_n

with gains $c_i > 0$.
Theorem 3.9 (Backstepping Design for Tracking). If $V_n$ is radially unbounded and $g_i \neq 0$
holds globally, then the closed-loop system, consisting of the tracking error dynamics of
(3.33) and the control $u$ specified according to (3.35), has a globally stable equilibrium at
$(z_1, z_2, ..., z_n) = 0$ and $\lim_{t\to\infty} z_i = 0$. In particular, this means that global asymptotic
tracking is achieved:

    \lim_{t\to\infty}\left[ x_1 - y_r \right] = 0.
Proof: The time derivative of $V_n$ along the solutions of (3.33) is

    \dot{V}_n = -\sum_{i=1}^{n} c_i z_i^2,

which proves that the equilibrium $(z_1, z_2, ..., z_n) = 0$ is globally uniformly stable. By
Theorem 3.7 it follows further that $\lim_{t\to\infty} z_i = 0$.
A block scheme of the resulting closed-loop system for $n = 3$ and a constant refer-
ence signal $y_r$ is shown in Figure 3.4. The recursive nature of the procedure is clearly
visible. This concludes the discussion of the theory behind backstepping. In [118] it
is demonstrated that the procedure can be applied to all nonlinear systems of a lower
triangular form, including multivariable systems.
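The recursive design (3.32)–(3.35) can be implemented directly with symbolic differentiation. The sketch below, for $n = 3$ with a constant reference, is an illustration under assumed $f_i$, $g_i$ and gains; these particular choices do not come from the text.

```python
import numpy as np
import sympy as sp

# Recursive backstepping (3.32)-(3.35) for an illustrative strict feedback
# system with n = 3 and a constant reference y_r (so all y_r derivatives = 0).
n = 3
xs = sp.symbols('x1:4')                            # states x1, x2, x3
yr_s = sp.Symbol('y_r')
c = [2, 2, 2]                                      # gains c_i > 0
f = [xs[0]**2, xs[0]*xs[1], sp.Integer(0)]         # assumed f_i of (3.31)
g = [sp.Integer(1), sp.Integer(1), sp.Integer(1)]  # assumed g_i, nonzero

z = [xs[0] - yr_s]
alpha = [(-c[0]*z[0] - f[0]) / g[0]]               # alpha_1 from (3.35)
for i in range(1, n):
    z.append(xs[i] - alpha[i-1])
    # d(alpha_{i-1})/dt by the chain rule, substituting x_k' = f_k + g_k*x_{k+1}
    a_dot = sum(sp.diff(alpha[i-1], xs[k]) * (f[k] + g[k]*xs[k+1])
                for k in range(i))
    alpha.append((-g[i-1]*z[i-1] - c[i]*z[i] - f[i] + a_dot) / g[i])

u_fn = sp.lambdify((*xs, yr_s), alpha[-1])         # u = alpha_n
state, dt, yr = np.array([1.0, 0.0, 0.0]), 1e-3, 0.5
for _ in range(8000):                              # forward Euler, 8 seconds
    x1, x2, x3 = state
    state += dt * np.array([x1**2 + x2, x1*x2 + x3, u_fn(x1, x2, x3, yr)])
# state[0] has converged to the reference y_r = 0.5
```

Computing $\dot{\alpha}_{i-1}$ symbolically mirrors the remark that the virtual control derivatives are known expressions; for higher $n$ this is exactly the analytic bookkeeping that the command-filter methods of Chapter 4 avoid.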
Figure 3.4: Closed-loop dynamics of a general strict feedback control system with backstepping
controller for $n = 3$. It is assumed that $y_r^{(i)} = 0$, $i = 1, 2, 3$.
3.3.3 Example: Longitudinal Missile Control
In this section the backstepping method is demonstrated in a first flight control example:
the tracking control design for a longitudinal missile model. A second order nonlinear
model of a generic surface-to-air missile has been obtained from [109]. The model is
nonlinear, but not overly complex. It consists of the longitudinal force and
moment equations representative of a missile traveling at an altitude of approximately
6000 meters, with aerodynamic coefficients represented as third order polynomials in
angle of attack $\alpha$ and Mach number $M$.
The nonlinear equations of motion in the pitch plane are given by

    \dot{\alpha} = q + \frac{\bar{q}S}{mV_T}\left[ C_z(\alpha, M) + b_z(M)\delta \right]    (3.36)
    \dot{q} = \frac{\bar{q}Sd}{I_{yy}}\left[ C_m(\alpha, M) + b_m(M)\delta \right],    (3.37)

where $\delta$ denotes the fin deflection,
while the aerodynamic coefficients of the model are approximated by

    b_z(M) = 1.6238M - 6.7240,
    b_m(M) = 12.0393M - 48.2246,
    C_z(\alpha, M) = \phi_{z1}(\alpha) + \phi_{z2}(\alpha)M,
    C_m(\alpha, M) = \phi_{m1}(\alpha) + \phi_{m2}(\alpha)M,
where

    \phi_{z1}(\alpha) = 288.7\alpha^3 + 50.32\alpha|\alpha| - 23.89\alpha,
    \phi_{z2}(\alpha) = -13.53\alpha|\alpha| + 4.185\alpha,
    \phi_{m1}(\alpha) = 303.1\alpha^3 - 246.3\alpha|\alpha| - 37.56\alpha,
    \phi_{m2}(\alpha) = -71.51\alpha|\alpha| + 10.01\alpha.

These approximations are valid for the flight envelope $-10^\circ < \alpha < 10^\circ$ and $1.8 <
M < 2.6$. To facilitate the control design, the nonlinear missile model (3.36) and (3.37)
is rewritten in the more general state-space form as
    \dot{x}_1 = x_2 + f_1(x_1) + g_1 u    (3.38)
    \dot{x}_2 = f_2(x_1) + g_2 u,    (3.39)

where

    x_1 = \alpha, \quad x_2 = q,
    f_1(x_1) = C_1\left[ \phi_{z1}(x_1) + \phi_{z2}(x_1)M \right],
    f_2(x_1) = C_2\left[ \phi_{m1}(x_1) + \phi_{m2}(x_1)M \right],
    g_1 = C_1 b_z, \quad g_2 = C_2 b_m,
    C_1 = \frac{\bar{q}S}{mV_T}, \quad C_2 = \frac{\bar{q}Sd}{I_{yy}}.
The control objective considered here is to design an autopilot with the backstepping
method that tracks a commanded reference $y_r$ (all derivatives known and bounded) with
the angle of attack $x_1$. It is assumed that the aerodynamic force and moment functions
are exactly known and the Mach number $M$ is treated as a parameter available for mea-
surement. Furthermore, the contribution of the fin deflection on the right-hand side of the
force equation (3.38) is ignored during the control design, since the backstepping method
can only handle nonlinear systems of lower-triangular form, i.e. the assumption is made
that the fin surface is a pure moment generator. This is a valid assumption for most types
of aircraft and aerodynamically controlled missiles, often made in flight control system
design, see e.g. [56, 76].
The backstepping procedure starts by defining the tracking errors as

    z_1 = x_1 - y_r
    z_2 = x_2 - \alpha_1,

where $\alpha_1$ is the virtual control to be designed in this first design step.
Step 1: The $z_1$-dynamics satisfy

    \dot{z}_1 = x_2 + f_1 - \dot{y}_r = z_2 + \alpha_1 + f_1 - \dot{y}_r.    (3.40)

Consider a candidate CLF $V_1$ for the $z_1$-subsystem defined as

    V_1(z_1) = \frac{1}{2}\left[ z_1^2 + k_1\sigma_1^2 \right],    (3.41)
where the gain $k_1 > 0$ and the integrator term $\sigma_1 = \int_0^t z_1\, d\tau$ are introduced to robustify
the control design against the effect of the neglected control term. The derivative of $V_1$
along the solutions of (3.40) is given by

    \dot{V}_1 = z_1\dot{z}_1 + k_1\sigma_1 z_1 = z_1\left[ z_2 + \alpha_1 + f_1 - \dot{y}_r + k_1\sigma_1 \right].
The virtual control $\alpha_1$ is selected as

    \alpha_1 = -c_1 z_1 - k_1\sigma_1 - f_1 + \dot{y}_r, \quad c_1 > 0,    (3.42)

to render the derivative

    \dot{V}_1 = -c_1 z_1^2 + z_1 z_2.

The cross term $z_1 z_2$ will be dealt with in the second design step.
Step 2: The $z_2$-dynamics are given by

    \dot{z}_2 = f_2 + g_2 u - \dot{\alpha}_1,    (3.43)

where $\dot{\alpha}_1 = -c_1(x_2 + f_1 - \dot{y}_r) - k_1 z_1 - \dot{f}_1 + \ddot{y}_r$. The CLF $V_1$ is augmented with an
additional term to penalize $z_2$ as

    V_2(z_1, z_2) = V_1 + \frac{1}{2}z_2^2.    (3.44)
The derivative of $V_2$ along the solutions of (3.40) and (3.43) satisfies

    \dot{V}_2 = -c_1 z_1^2 + z_1 z_2 + z_2\left[ f_2 + g_2 u - \dot{\alpha}_1 \right] = -c_1 z_1^2 + z_2\left[ z_1 + f_2 + g_2 u - \dot{\alpha}_1 \right].

A control law for $u$ can now be defined to cancel all indefinite terms; the most straight-
forward choice is given by

    u = \frac{1}{g_2}\left[ -c_2 z_2 - z_1 - f_2 + \dot{\alpha}_1 \right].
By Theorem 3.7, $\lim_{t\to\infty} z_1, z_2 = 0$, which means that the reference signal $y_r$ is asymp-
totically tracked by $x_1$.
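The two design steps can be turned into a simulation sketch for the idealized model ($g_1 = 0$ in the force equation). The constants $C_1$ and $C_2$ bundle $\bar{q}S/(mV_T)$ and $\bar{q}Sd/I_{yy}$, whose numerical values are not given here, so the values below (and the Euler integration, gains and step reference) are assumed placeholders, not the thesis setup.

```python
import numpy as np

M, C1, C2 = 2.0, 1.0, 1.0              # C1, C2 are assumed placeholder values
g2 = C2 * (12.0393 * M - 48.2246)      # g2 = C2 * b_m(M)

def f1(a):   # C1 * [phi_z1(alpha) + phi_z2(alpha) * M]
    return C1 * (288.7*a**3 + 50.32*a*abs(a) - 23.89*a
                 + (-13.53*a*abs(a) + 4.185*a) * M)

def f1p(a):  # df1/d(alpha), using d(a*|a|)/da = 2*|a|
    return C1 * (866.1*a**2 + 100.64*abs(a) - 23.89
                 + (-27.06*abs(a) + 4.185) * M)

def f2(a):   # C2 * [phi_m1(alpha) + phi_m2(alpha) * M]
    return C2 * (303.1*a**3 - 246.3*a*abs(a) - 37.56*a
                 + (-71.51*a*abs(a) + 10.01*a) * M)

c1, c2 = 10.0, 10.0                    # backstepping gains, integral gain k1 = 0
x1, x2 = 0.0, 0.0                      # angle of attack (rad), pitch rate (rad/s)
yr = np.deg2rad(5.0)                   # constant 5 deg step, so yr', yr'' = 0
dt = 1e-4
for _ in range(int(2.0 / dt)):
    z1 = x1 - yr
    alpha1 = -c1 * z1 - f1(x1)                     # (3.42) with k1 = 0
    z2 = x2 - alpha1
    a1_dot = (-c1 - f1p(x1)) * (x2 + f1(x1))       # alpha1 derivative along x1' = x2 + f1
    u = (-c2 * z2 - z1 - f2(x1) + a1_dot) / g2     # fin command from step 2
    dx1 = x2 + f1(x1)                              # idealized force equation (g1 = 0)
    dx2 = f2(x1) + g2 * u                          # moment equation (3.39)
    x1 += dt * dx1
    x2 += dt * dx2
# x1 converges to the 5 deg reference; x2 settles at -f1(y_r)
```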
Numerical simulations of the longitudinal missile model with the backstepping controller
have been performed in MATLAB/Simulink. A third order fixed time step solver with
sample time 0.01 s was used. First, consider the simulations using the idealized missile
model, i.e. the lower triangular model as used for the control design with $g_1 = 0$. Figure
3.5 shows the response of the system states and the control input for a series of angle
of attack doublets at Mach 2.0. The red line represents the reference signal, while the
closed-loop response of the system for three different gain selections is plotted in blue.
As can be seen in the plots, near-perfect tracking is achieved by increasing the control
gains. However, when the full missile model is used, with $g_1 \neq 0$, the controllers without
integral gain only achieve bounded tracking, as can be seen in Figure 3.6. Setting the
integral gain $k_1 = 10$ removes the bounded tracking error. Many other methods for robustifying
the backstepping design against unmodeled dynamics can be found in literature. How-
ever, for large uncertainties these robust methods fail to give adequate performance or
they tend to lead to conservative control laws. Adaptive backstepping is a more sophis-
ticated method of dealing with large model uncertainties and is the subject of the next
chapter.
Figure 3.5: Numerical simulations at Mach 2.0 of the idealized longitudinal missile model with
backstepping control law for three different gain selections ($c_1, c_2 = 1$, $k_1 = 0$; $c_1, c_2 = 10$,
$k_1 = 0$; $c_1, c_2 = 10$, $k_1 = 10$). Shown are angle of attack (deg), pitch rate (deg/s) and control
deflection (deg).
Figure 3.6: Numerical simulations at Mach 2.0 of the full longitudinal missile model with back-
stepping control law for three different gain selections ($c_1, c_2 = 1$, $k_1 = 0$; $c_1, c_2 = 10$, $k_1 = 0$;
$c_1, c_2 = 10$, $k_1 = 10$). Shown are angle of attack (deg), pitch rate (deg/s) and control deflection
(deg).
Chapter 4
Adaptive Backstepping
In the previous chapter the basic ideas of the backstepping control design approach for
nonlinear systems were explained. The backstepping approach allows the designer to
construct controllers for a wide range of nonlinear systems in a structured, recursive
way. However, the method assumes that an accurate system model is available, and this
may not be the case for real world physical systems. In this chapter the backstepping
framework is extended with a dynamic feedback part that constantly updates the static
feedback control part to deal with nonlinear systems with parametric uncertainties. In the
first part of the chapter the concept of dynamic feedback is explained in a simple example,
and after that the standard tuning functions adaptive backstepping method is derived.
An overview of methods to deal with non-parametric uncertainties such as measurement
noise is also presented. In the second part command filters are introduced to simplify
the adaptive backstepping method and to make the dynamic update laws more robust to
input saturation.
4.1 Introduction
Backstepping can be used to stabilize a large class of nonlinear systems in a structured
manner, while giving the control designer a lot of freedom. However, the true potential
of backstepping was discovered only when the approach was developed for nonlinear
systems with structured uncertainty. With adaptive backstepping [101, 117] global stabi-
lization is achieved in the presence of unknown parameters, and with robust backstepping
[64, 66, 87] it is achieved in the presence of disturbances. The ease with which uncer-
tainties and unknown parameters can be incorporated in the backstepping procedure is
what makes the method so interesting.
Robust backstepping and other robust nonlinear control techniques have been studied
extensively in the literature. However, these methods tend to yield rather conservative
control laws, especially for cases where the uncertainties are large. Furthermore, nonlinear
damping terms and switching control functions are often used to guarantee robustness
in the presence of uncertainties, which may result in undesirable high gain control or
chattering in the control signal. High gain feedback may cause several problems, such as
saturation of the control (actuators), high sensitivity to measurement noise, excitation of
unmodeled dynamics and large transient errors.
Adaptive backstepping control has a more sophisticated way of dealing with large un-
certainties. Adaptive backstepping controllers not only employ static feedback like
the controllers designed in the previous chapter, but also contain a dynamic feedback
part. This dynamic part of the control law is used as a parameter update law to continu-
ously adapt the static part to new parameter estimates. Adaptive backstepping achieves
boundedness of the closed-loop states and convergence of the tracking error to zero for
nonlinear systems with parametric uncertainties.
The first adaptive backstepping method [101] employed overparametrization, i.e. more
than one update law was used for each parameter. Overparametrization is not necessarily
disadvantageous from a performance point of view, but it is not very efficient in a numer-
ical implementation of the controller due to the resulting higher dynamical order. With
the introduction of the tuning functions adaptive backstepping method [117] the over-
parametrization was removed, so that only one dynamic update law for each unknown
parameter is needed. The first part of this chapter, Section 4.2, discusses the tuning
functions adaptive backstepping approach. Dynamic feedback is introduced on a second
order system, after which the method is extended to higher order systems.
The tuning functions approach has a number of shortcomings, two of the most important
being its analytical complexity and its sensitivity to input saturation. In Section 4.3 the
constrained adaptive backstepping method is introduced, which makes use of command
filters to completely remove these drawbacks. The use of filters in the backstepping
framework was first proposed as dynamic surface control in [212, 213] to remove the
tedious analytical calculation of the time derivatives of the virtual control laws at each
design step. In [58] the idea of using command filters is extended in such a way that the
dynamic update laws of adaptive backstepping are robustified against the effects of input
saturation, resulting in the constrained adaptive backstepping approach.
4.2 Tuning Functions Adaptive Backstepping
In this section the tuning functions adaptive backstepping method as conceived in [117]
is discussed. The ideas of the recursive backstepping approach of the previous chapter
are extended to nonlinear systems with parametric uncertainties. Dynamic feedback is
employed as parameter update law to continuously adapt the static feedback control to
new parameter estimates. The controller is still constructed in a recursive manner, in-
troducing a virtual control law and intermediate update laws at each design step, while
extending the CLF, until the control law and the dynamic update laws are found in the
last design step.
4.2.1 Dynamic Feedback
The difference between a static and a dynamic nonlinear design will be illustrated using
the scalar system of Example 3.1, augmented with an unknown constant parameter $\theta$ in
front of the nonlinear term.
Example 4.1 (An uncertain scalar system)
Consider the feedback linearizable system

    \dot{x} = \theta x^3 + x + u,    (4.1)

where $\theta \in \mathbb{R}$ is an unknown constant parameter. The control objective is regulation of
$x$ to zero. If $\theta$ were known, the control

    u = -\theta x^3 - cx, \quad c > 1,    (4.2)

would render the derivative of $V_0(x) = \frac{1}{2}x^2$ negative definite: $\dot{V}_0 = -(c-1)x^2$. Since
$\theta$ is not known, its certainty equivalence form is employed, in which $\theta$ is replaced by
the parameter estimate $\hat\theta$:

    u = -\hat\theta x^3 - cx, \quad c > 1.    (4.3)

Substituting (4.3) into (4.1) gives

    \dot{x} = \tilde\theta x^3 - (c-1)x,    (4.4)

where $\tilde\theta$ is the parameter estimation error, defined as

    \tilde\theta = \theta - \hat\theta.    (4.5)
The derivative of $V_0(x) = \frac{1}{2}x^2$ now satisfies

    \dot{V}_0 = \tilde\theta x^4 - (c-1)x^2.    (4.6)

It is not possible to conclude anything about the stability of (4.4), since the first term
of (4.6) contains the unknown parameter error $\tilde\theta$. The idea is now to extend the control
law with a dynamic update law for $\hat\theta$. To design this update law, $V_0$ is augmented with
a quadratic term to penalize the parameter estimation error $\tilde\theta$ as

    V_1(x, \tilde\theta) = \frac{1}{2}x^2 + \frac{1}{2\gamma}\tilde\theta^2,    (4.7)

where $\gamma > 0$ is the adaptation gain. The derivative of this function is

    \dot{V}_1 = x\dot{x} + \frac{1}{\gamma}\tilde\theta\dot{\tilde\theta} = \tilde\theta x^4 - (c-1)x^2 + \frac{1}{\gamma}\tilde\theta\dot{\tilde\theta}    (4.8)
       = -(c-1)x^2 + \tilde\theta\left( x^4 + \frac{1}{\gamma}\dot{\tilde\theta} \right).
The above equation still contains an indefinite term with the unknown $\tilde\theta$. However, the
dynamics of $\hat\theta$ can now be utilized: since $\theta$ is constant, $\dot{\tilde\theta} = -\dot{\hat\theta}$, and the indefinite
term can be canceled with the appropriate choice of update law

    \dot{\hat\theta} = \gamma x^4,    (4.9)

which yields

    \dot{V}_1 = -(c-1)x^2 \leq 0.    (4.10)

It can be concluded that the equilibrium $(x, \tilde\theta) = 0$ is globally stable and by Theorem
3.7 the regulation property $\lim_{t\to\infty} x = 0$ is satisfied. Note that since the parameter
estimation error term in (4.8) is completely canceled, it cannot be concluded that the
parameter estimation error $\tilde\theta$ converges to zero. This is a characteristic of this type
of Lyapunov based adaptive controllers: the idea is to satisfy a total system stability
criterion, the CLF, rather than to optimize the error in estimation. The advantage is
that global asymptotic stability of the closed-loop system is guaranteed. This is in
contrast with a traditional estimation-based design, where the identifiers are too slow
to deal with nonlinear system dynamics [118].
The resulting adaptive system consists of (4.1) with control law (4.3) and update law
(4.9). The response of the closed-loop system with $\theta = 1$ for several values of update
gain $\gamma$ can be found in Figure 4.1. The initial state of the system is $x(0) = 2$, the
control gain $c = 2$ and the initial parameter estimate $\hat\theta(0) = 0$. As can be seen
from the figure, the adaptive controller manages to stabilize the uncertain nonlinear
system. The parameter estimate converges to a constant value for each of the update
gain selections, but never converges to the true parameter value.
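A compact simulation of the adaptive loop (4.1), (4.3), (4.9); the Euler scheme and horizon are illustrative assumptions.

```python
import numpy as np

# Adaptive regulation of x' = theta*x^3 + x + u with unknown theta (true value 1):
# certainty equivalence control (4.3) and Lyapunov-based update law (4.9).
def simulate(gamma, c=2.0, theta=1.0, dt=1e-4, t_end=5.0):
    x, theta_hat = 2.0, 0.0
    for _ in range(int(t_end / dt)):
        u = -theta_hat * x**3 - c * x          # control law (4.3)
        x += dt * (theta * x**3 + x + u)       # true plant (4.1)
        theta_hat += dt * gamma * x**4         # update law (4.9)
    return x, theta_hat

results = {g: simulate(g) for g in (0.1, 1.0, 10.0)}
# x is regulated toward zero for every update gain, while each estimate
# settles at a constant that need not equal the true parameter value.
```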
The adaptive design of the above example is very simple because the uncertainty is in the
span of the control $u$, i.e. matched. Adaptive backstepping extends the design approach
of the example to a recursive procedure that can deal with nonlinear systems containing
parametric uncertainties that are separated by one or more integrators from the control
input.
Consider the second order system

    \dot{x}_1 = \varphi(x_1)^T\theta + x_2    (4.11)
    \dot{x}_2 = u,    (4.12)
where $(x_1, x_2) \in \mathbb{R}^2$ are the states, $u \in \mathbb{R}$ is the control input, $\varphi(x_1)$ is a smooth, non-
linear function vector, i.e. the regressor vector, and $\theta$ is a vector of unknown constant
parameters. The control objective is to track the smooth reference signal $y_r(t)$ (all deriva-
tives known and bounded) with the state $x_1$. The adaptive backstepping procedure starts
by introducing the tracking errors $z_1 = x_1 - y_r$ and $z_2 = x_2 - \alpha$. The virtual control $\alpha$
is now defined in terms of the parameter estimate $\hat\theta$ as

    \alpha(x_1, \hat\theta, y_r, \dot{y}_r) = -c_1 z_1 - \varphi^T\hat\theta + \dot{y}_r, \quad c_1 > 0.    (4.13)
Figure 4.1: State $x$, control effort $u$ and parameter estimate $\hat\theta$ for initial values $x(0) = 2$, $\hat\theta(0) = 0$
and control gain $c = 2$ with update gains $\gamma = 0.1$, $\gamma = 1$ and $\gamma = 10$. The parameter estimate does
not converge to the true parameter value $\theta = 1$.
This virtual control reduces the $(z_1, z_2)$-dynamics to

    \dot{z}_1 = \varphi^T\tilde\theta + z_2 - c_1 z_1    (4.14)
    \dot{z}_2 = u - \frac{\partial\alpha}{\partial x_1}\dot{x}_1 - \frac{\partial\alpha}{\partial\hat\theta}\dot{\hat\theta} - \frac{\partial\alpha}{\partial y_r}\dot{y}_r - \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r,    (4.15)

where $\tilde\theta = \theta - \hat\theta$ is the parameter estimation error. A CLF is defined that not only
penalizes the tracking errors, but also the estimation error, as

    V(z_1, z_2, \tilde\theta) = \frac{1}{2}\left[ z_1^2 + z_2^2 + \tilde\theta^T\Gamma^{-1}\tilde\theta \right]    (4.16)

with $\Gamma = \Gamma^T > 0$. The time derivative of $V$ along the solutions of (4.14) is
    \dot{V} = -c_1 z_1^2 + z_1 z_2 + \tilde\theta^T\varphi z_1 + z_2\left[ u - \frac{\partial\alpha}{\partial x_1}\dot{x}_1 - \frac{\partial\alpha}{\partial\hat\theta}\dot{\hat\theta} - \frac{\partial\alpha}{\partial y_r}\dot{y}_r - \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r \right] - \tilde\theta^T\Gamma^{-1}\dot{\hat\theta}
       = -c_1 z_1^2 + z_2\left[ z_1 + u - \frac{\partial\alpha}{\partial x_1}\left( \varphi^T\hat\theta + x_2 \right) - \frac{\partial\alpha}{\partial\hat\theta}\dot{\hat\theta} - \frac{\partial\alpha}{\partial y_r}\dot{y}_r - \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r \right]
         + \tilde\theta^T\Gamma^{-1}\left[ \Gamma\varphi\left( z_1 - \frac{\partial\alpha}{\partial x_1}z_2 \right) - \dot{\hat\theta} \right].
In order to render the derivative of the CLF $V$ negative definite, a control law for $u$ and a
dynamic update law for $\hat\theta$ are selected as

    u = -c_2 z_2 - z_1 + \frac{\partial\alpha}{\partial x_1}\left( \varphi^T\hat\theta + x_2 \right) + \frac{\partial\alpha}{\partial y_r}\dot{y}_r + \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r + \frac{\partial\alpha}{\partial\hat\theta}\dot{\hat\theta}    (4.17)
    \dot{\hat\theta} = \Gamma\varphi\left( z_1 - \frac{\partial\alpha}{\partial x_1}z_2 \right),    (4.18)

where $c_2 > 0$. This results in
    \dot{V} = -c_1 z_1^2 - c_2 z_2^2,

and it follows that the equilibrium $(z_1, z_2, \tilde\theta) = 0$ is globally uniformly stable. Further-
more, $\lim_{t\to\infty} z_1, z_2 = 0$, i.e. global asymptotic tracking is achieved. Note again that
convergence of the parameter estimate $\hat\theta$ to a constant is guaranteed, but not necessarily
convergence to the real value of $\theta$. In this adaptive backstepping design the choice of
parameter update law was postponed until the second design step. This becomes consid-
erably more complicated for higher order systems, as considered in the next part of the
chapter.
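For a concrete instance of (4.13), (4.17) and (4.18), take the scalar regressor $\varphi(x_1) = x_1^3$ with unknown $\theta$ (true value 1) and the reference $y_r = \sin t$. These choices, the gains and the Euler integration are illustrative assumptions, not an example from the text.

```python
import numpy as np

# Adaptive backstepping tracking for x1' = theta*x1^3 + x2, x2' = u.
c1, c2, gamma, theta = 2.0, 2.0, 1.0, 1.0
x1, x2, th = 1.0, 0.0, 0.0          # th is the estimate of theta, starting at 0
dt, t = 1e-4, 0.0
for _ in range(int(10.0 / dt)):
    yr, yrd, yrdd = np.sin(t), np.cos(t), -np.sin(t)
    phi = x1**3
    z1 = x1 - yr
    alpha = -c1 * z1 - th * phi + yrd           # virtual control (4.13)
    z2 = x2 - alpha
    da_dx1 = -c1 - 3.0 * th * x1**2             # d(alpha)/dx1
    th_dot = gamma * phi * (z1 - da_dx1 * z2)   # update law (4.18)
    u = (-c2 * z2 - z1 + da_dx1 * (th * phi + x2)
         + c1 * yrd + yrdd - phi * th_dot)      # control law (4.17)
    x1 += dt * (theta * x1**3 + x2)             # true plant
    x2 += dt * u
    th += dt * th_dot
    t += dt
# x1 asymptotically tracks y_r = sin(t) despite the unknown parameter.
```

Note how the tuning-function ordering matters: the update `th_dot` is computed first so that the $\frac{\partial\alpha}{\partial\hat\theta}\dot{\hat\theta}$ term of (4.17) can be included in the control.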
4.2.2 Extension to Higher Order Systems
The adaptive backstepping method is now extended to higher order systems. Consider
the strict feedback system

    \dot{x}_i = f_i(\bar{x}_i) + g_i(\bar{x}_i)x_{i+1}, \quad i = 1, ..., n-1
    \dot{x}_n = f_n(x) + g_n(x)u,    (4.19)

where $x_i \in \mathbb{R}$, $u \in \mathbb{R}$ and $\bar{x}_i = (x_1, x_2, ..., x_i)$. Unlike before, the smooth functions $f_i$
and $g_i$ now contain the unknown dynamics of the system and will have to be approxi-
mated. It is assumed that $g_i$ does not change sign, i.e. $g_i > 0$ or $g_i < 0$, in the domain of
operation. For most physical systems at least the sign of these functions is known. It is
assumed that there exist parameter vectors $\theta_{f_i}$ and $\theta_{g_i}$ such that

    f_i(\bar{x}_i) = \varphi_{f_i}(\bar{x}_i)^T\theta_{f_i}
    g_i(\bar{x}_i) = \varphi_{g_i}(\bar{x}_i)^T\theta_{g_i},

with the estimates defined as

    \hat{f}_i(\bar{x}_i, \hat\theta_{f_i}) = \varphi_{f_i}(\bar{x}_i)^T\hat\theta_{f_i}
    \hat{g}_i(\bar{x}_i, \hat\theta_{g_i}) = \varphi_{g_i}(\bar{x}_i)^T\hat\theta_{g_i}

and the parameter estimation errors as $\tilde\theta_{f_i} = \theta_{f_i} - \hat\theta_{f_i}$ and $\tilde\theta_{g_i} = \theta_{g_i} - \hat\theta_{g_i}$. The system
(4.19) can be rewritten as

    \dot{x}_i = \varphi_{f_i}(\bar{x}_i)^T\theta_{f_i} + \varphi_{g_i}(\bar{x}_i)^T\theta_{g_i}\, x_{i+1}
    \dot{x}_n = \varphi_{f_n}(\bar{x}_n)^T\theta_{f_n} + \varphi_{g_n}(\bar{x}_n)^T\theta_{g_n}\, u.
The control objective is to force the output $y = x_1$ to asymptotically track the refer-
ence signal $y_r(t)$ whose first $n$ derivatives are assumed to be known and bounded. The
adaptive backstepping procedure is initiated by defining the tracking errors as

    z_1 = x_1 - y_r
    z_i = x_i - \alpha_{i-1}, \quad i = 2, ..., n.    (4.20)
Step 1: The task in the first design step is to stabilize the $z_1$-subsystem given by

    \dot{z}_1 = \varphi_{f_1}^T\theta_{f_1} + \varphi_{g_1}^T\theta_{g_1}(z_2 + \alpha_1) - \dot{y}_r.    (4.21)

Consider the CLF $V_1$ given by

    V_1 = \frac{1}{2}z_1^2 + \frac{1}{2}\tilde\theta_{f_1}^T\Gamma_{f_1}^{-1}\tilde\theta_{f_1} + \frac{1}{2}\tilde\theta_{g_1}^T\Gamma_{g_1}^{-1}\tilde\theta_{g_1},    (4.22)

where $\Gamma_{(\cdot)} = \Gamma_{(\cdot)}^T > 0$. Its derivative along the solutions of (4.21) satisfies

    \dot{V}_1 = z_1\left[ \varphi_{f_1}^T\hat\theta_{f_1} + \varphi_{g_1}^T\hat\theta_{g_1}(z_2 + \alpha_1) - \dot{y}_r \right]
       - \tilde\theta_{f_1}^T\Gamma_{f_1}^{-1}\left[ \dot{\hat\theta}_{f_1} - \Gamma_{f_1}\varphi_{f_1}z_1 \right]
       - \tilde\theta_{g_1}^T\Gamma_{g_1}^{-1}\left[ \dot{\hat\theta}_{g_1} - \Gamma_{g_1}\varphi_{g_1}x_2 z_1 \right].
To cancel the indefinite terms, the virtual control $\alpha_1$ and the intermediate update laws
$\tau_{f_1 1}$, $\tau_{g_1 1}$ are defined as

    \alpha_1 = \frac{1}{\varphi_{g_1}^T\hat\theta_{g_1}}\left[ -c_1 z_1 - \varphi_{f_1}^T\hat\theta_{f_1} + \dot{y}_r \right]    (4.23)
    \tau_{f_1 1} = \Gamma_{f_1}\varphi_{f_1}z_1    (4.24)
    \tau_{g_1 1} = \Gamma_{g_1}\varphi_{g_1}x_2 z_1,    (4.25)

where $c_1 > 0$. Similar to the construction of the control law, the parameter update laws
are built up recursively in the adaptive backstepping design to prevent overparametriza-
tion. These intermediate update functions are called tuning functions, and therefore this
method is often referred to as the tuning functions approach in the literature [117]. Substi-
tuting these expressions in the derivative of $V_1$ leads to
    \dot{V}_1 = -c_1 z_1^2 + \varphi_{g_1}^T\hat\theta_{g_1}z_1 z_2 - \tilde\theta_{f_1}^T\Gamma_{f_1}^{-1}\left[ \dot{\hat\theta}_{f_1} - \tau_{f_1 1} \right] - \tilde\theta_{g_1}^T\Gamma_{g_1}^{-1}\left[ \dot{\hat\theta}_{g_1} - \tau_{g_1 1} \right].

If this were the final design step, the update laws $\dot{\hat\theta}_{f_1} = \tau_{f_1 1}$, $\dot{\hat\theta}_{g_1} = \tau_{g_1 1}$ would cancel
the last two indefinite terms and $z_2 \to 0$, reducing the derivative to

    \dot{V}_1 = -c_1 z_1^2.

Hence, the $z_1$-system would be stabilized, and the task in the next design step is to
make sure that $z_2$ converges to zero.
Step 2: The $z_2$-dynamics satisfy

    \dot{z}_2 = \varphi_{f_2}^T\theta_{f_2} + \varphi_{g_2}^T\theta_{g_2}(z_3 + \alpha_2) - \dot{\alpha}_1.    (4.26)
The CLF $V_1$ is now augmented with additional terms penalizing $z_2$ and the parameter
estimation errors $\tilde\theta_{f_2}$, $\tilde\theta_{g_2}$, i.e.

    V_2 = V_1 + \frac{1}{2}z_2^2 + \frac{1}{2}\tilde\theta_{f_2}^T\Gamma_{f_2}^{-1}\tilde\theta_{f_2} + \frac{1}{2}\tilde\theta_{g_2}^T\Gamma_{g_2}^{-1}\tilde\theta_{g_2}.    (4.27)
Taking the time derivative of $V_2$ along the solutions of (4.21), (4.26) results in

    \dot{V}_2 = -c_1 z_1^2 + \varphi_{g_1}^T\hat\theta_{g_1}z_1 z_2
       - \tilde\theta_{f_1}^T\Gamma_{f_1}^{-1}\left[ \dot{\hat\theta}_{f_1} - \tau_{f_1 1} + \Gamma_{f_1}\varphi_{f_1}\frac{\partial\alpha_1}{\partial x_1}z_2 \right]
       - \tilde\theta_{g_1}^T\Gamma_{g_1}^{-1}\left[ \dot{\hat\theta}_{g_1} - \tau_{g_1 1} + \Gamma_{g_1}\varphi_{g_1}\frac{\partial\alpha_1}{\partial x_1}x_2 z_2 \right]
       + z_2\left[ \varphi_{f_2}^T\hat\theta_{f_2} + \varphi_{g_2}^T\hat\theta_{g_2}(z_3 + \alpha_2) - \sigma_1 \right]
       - \tilde\theta_{f_2}^T\Gamma_{f_2}^{-1}\left[ \dot{\hat\theta}_{f_2} - \Gamma_{f_2}\varphi_{f_2}z_2 \right]
       - \tilde\theta_{g_2}^T\Gamma_{g_2}^{-1}\left[ \dot{\hat\theta}_{g_2} - \Gamma_{g_2}\varphi_{g_2}x_3 z_2 \right],

where $\sigma_1$ represents the known parts of the dynamics of $\dot{\alpha}_1$ and is defined as

    \sigma_1 = \frac{\partial\alpha_1}{\partial x_1}\left[ \varphi_{f_1}^T\hat\theta_{f_1} + \varphi_{g_1}^T\hat\theta_{g_1}x_2 \right] + \frac{\partial\alpha_1}{\partial\hat\theta_{f_1}}\dot{\hat\theta}_{f_1} + \frac{\partial\alpha_1}{\partial\hat\theta_{g_1}}\dot{\hat\theta}_{g_1} + \frac{\partial\alpha_1}{\partial y_r}\dot{y}_r + \frac{\partial\alpha_1}{\partial\dot{y}_r}\ddot{y}_r.
The virtual control and intermediate update laws are selected as

    \alpha_2 = \frac{1}{\varphi_{g_2}^T\hat\theta_{g_2}}\left[ -c_2 z_2 - \varphi_{g_1}^T\hat\theta_{g_1}z_1 - \varphi_{f_2}^T\hat\theta_{f_2} + \sigma_1 \right]    (4.28)
    \tau_{f_1 2} = \tau_{f_1 1} - \Gamma_{f_1}\varphi_{f_1}\frac{\partial\alpha_1}{\partial x_1}z_2 = \Gamma_{f_1}\varphi_{f_1}\left[ z_1 - \frac{\partial\alpha_1}{\partial x_1}z_2 \right]    (4.29)
    \tau_{g_1 2} = \tau_{g_1 1} - \Gamma_{g_1}\varphi_{g_1}\frac{\partial\alpha_1}{\partial x_1}x_2 z_2 = \Gamma_{g_1}\varphi_{g_1}x_2\left[ z_1 - \frac{\partial\alpha_1}{\partial x_1}z_2 \right]    (4.30)
    \tau_{f_2 2} = \Gamma_{f_2}\varphi_{f_2}z_2    (4.31)
    \tau_{g_2 2} = \Gamma_{g_2}\varphi_{g_2}x_3 z_2.    (4.32)
Substituting the above expressions in the derivative of $V_2$ gives

    \dot{V}_2 = -c_1 z_1^2 - c_2 z_2^2 + \varphi_{g_2}^T\hat\theta_{g_2}z_2 z_3
       - \tilde\theta_{f_1}^T\Gamma_{f_1}^{-1}\left[ \dot{\hat\theta}_{f_1} - \tau_{f_1 2} \right] - \tilde\theta_{g_1}^T\Gamma_{g_1}^{-1}\left[ \dot{\hat\theta}_{g_1} - \tau_{g_1 2} \right]
       - \tilde\theta_{f_2}^T\Gamma_{f_2}^{-1}\left[ \dot{\hat\theta}_{f_2} - \tau_{f_2 2} \right] - \tilde\theta_{g_2}^T\Gamma_{g_2}^{-1}\left[ \dot{\hat\theta}_{g_2} - \tau_{g_2 2} \right].
This concludes the second design step.
Step i: The design steps up to step n (where the real control u enters) are identical. The z_i-dynamics are given by
\[
\dot{z}_i = \theta_{f_i}^T f_i + \theta_{g_i}^T g_i \left(z_{i+1} + \alpha_i\right) - \dot{\alpha}_{i-1}. \tag{4.33}
\]
The CLF for step i is defined as
\[
V_i = V_{i-1} + \tfrac{1}{2} z_i^2
+ \tfrac{1}{2}\tilde{\theta}_{f_i}^T \Gamma_{f_i}^{-1}\tilde{\theta}_{f_i}
+ \tfrac{1}{2}\tilde{\theta}_{g_i}^T \Gamma_{g_i}^{-1}\tilde{\theta}_{g_i}. \tag{4.34}
\]
The time derivative of V_i along the solutions of (4.33) satisfies
\[
\begin{aligned}
\dot{V}_i = {} & -\sum_{j=1}^{i-1} c_j z_j^2 + \hat{\theta}_{g_{i-1}}^T g_{i-1} z_{i-1} z_i
- \sum_{k=1}^{i-1} \tilde{\theta}_{f_k}^T \Gamma_{f_k}^{-1}\left(\dot{\hat{\theta}}_{f_k} - \tau_{f_k (i-1)} + \Gamma_{f_k} f_k \frac{\partial \alpha_{i-1}}{\partial x_k} z_i\right) \\
& - \sum_{k=1}^{i-1} \tilde{\theta}_{g_k}^T \Gamma_{g_k}^{-1}\left(\dot{\hat{\theta}}_{g_k} - \tau_{g_k (i-1)} + \Gamma_{g_k} g_k \frac{\partial \alpha_{i-1}}{\partial x_k} x_{k+1} z_i\right)
+ z_i\left(\hat{\theta}_{f_i}^T f_i + \hat{\theta}_{g_i}^T g_i \left(z_{i+1} + \alpha_i\right) - \bar{\alpha}_{i-1}\right) \\
& - \tilde{\theta}_{f_i}^T \Gamma_{f_i}^{-1}\left(\dot{\hat{\theta}}_{f_i} - \Gamma_{f_i} f_i z_i\right)
- \tilde{\theta}_{g_i}^T \Gamma_{g_i}^{-1}\left(\dot{\hat{\theta}}_{g_i} - \Gamma_{g_i} g_i x_{i+1} z_i\right),
\end{aligned}
\]
where \bar{\alpha}_{i-1} is given by
\[
\bar{\alpha}_{i-1} = \sum_{k=1}^{i-1} \frac{\partial \alpha_{i-1}}{\partial x_k}\left(\hat{\theta}_{f_k}^T f_k + \hat{\theta}_{g_k}^T g_k x_{k+1}\right)
+ \sum_{k=1}^{i-1}\left(\frac{\partial \alpha_{i-1}}{\partial \hat{\theta}_{f_k}} \dot{\hat{\theta}}_{f_k}
+ \frac{\partial \alpha_{i-1}}{\partial \hat{\theta}_{g_k}} \dot{\hat{\theta}}_{g_k}\right)
+ \sum_{k=1}^{i} \frac{\partial \alpha_{i-1}}{\partial y_r^{(k-1)}} y_r^{(k)}.
\]
Now the intermediate update laws and the virtual control \alpha_i are selected as
\[
\alpha_i = \frac{1}{\hat{\theta}_{g_i}^T g_i}\left(-c_i z_i - \hat{\theta}_{g_{i-1}}^T g_{i-1} z_{i-1} - \hat{\theta}_{f_i}^T f_i + \bar{\alpha}_{i-1}\right) \tag{4.35}
\]
\[
\tau_{f_k i} = \tau_{f_k (i-1)} - \Gamma_{f_k} f_k \frac{\partial \alpha_{i-1}}{\partial x_k} z_i \tag{4.36}
\]
\[
\tau_{g_k i} = \tau_{g_k (i-1)} - \Gamma_{g_k} g_k \frac{\partial \alpha_{i-1}}{\partial x_k} x_{k+1} z_i \tag{4.37}
\]
\[
\tau_{f_i i} = \Gamma_{f_i} f_i z_i \tag{4.38}
\]
\[
\tau_{g_i i} = \Gamma_{g_i} g_i x_{i+1} z_i, \tag{4.39}
\]
for k = 1, 2, ..., i-1. This renders the derivative of V_i equal to
\[
\dot{V}_i = -\sum_{j=1}^{i} c_j z_j^2 + \hat{\theta}_{g_i}^T g_i z_i z_{i+1}
- \sum_{k=1}^{i} \tilde{\theta}_{f_k}^T \Gamma_{f_k}^{-1}\left(\dot{\hat{\theta}}_{f_k} - \tau_{f_k i}\right)
- \sum_{k=1}^{i} \tilde{\theta}_{g_k}^T \Gamma_{g_k}^{-1}\left(\dot{\hat{\theta}}_{g_k} - \tau_{g_k i}\right).
\]
Step n: In the final step the control law and the complete update laws are defined. Consider the final Lyapunov function
\[
V_n = V_{n-1} + \tfrac{1}{2} z_n^2
+ \tfrac{1}{2}\tilde{\theta}_{f_n}^T \Gamma_{f_n}^{-1}\tilde{\theta}_{f_n}
+ \tfrac{1}{2}\tilde{\theta}_{g_n}^T \Gamma_{g_n}^{-1}\tilde{\theta}_{g_n}
= \frac{1}{2}\sum_{k=1}^{n}\left(z_k^2 + \tilde{\theta}_{f_k}^T \Gamma_{f_k}^{-1}\tilde{\theta}_{f_k} + \tilde{\theta}_{g_k}^T \Gamma_{g_k}^{-1}\tilde{\theta}_{g_k}\right). \tag{4.40}
\]
To render the derivative of V_n negative semi-definite, the real control and update laws are selected as
\[
u = \frac{1}{\hat{\theta}_{g_n}^T g_n}\left(-c_n z_n - \hat{\theta}_{g_{n-1}}^T g_{n-1} z_{n-1} - \hat{\theta}_{f_n}^T f_n + \bar{\alpha}_{n-1}\right) \tag{4.41}
\]
\[
\dot{\hat{\theta}}_{f_k} = \tau_{f_k (n-1)} - \Gamma_{f_k} f_k \frac{\partial \alpha_{n-1}}{\partial x_k} z_n
= \Gamma_{f_k} f_k \left(z_k - \sum_{j=k}^{n-1} \frac{\partial \alpha_j}{\partial x_k} z_{j+1}\right) \tag{4.42}
\]
\[
\dot{\hat{\theta}}_{g_k} = \mathrm{P}\left(\tau_{g_k (n-1)} - \Gamma_{g_k} g_k \frac{\partial \alpha_{n-1}}{\partial x_k} x_{k+1} z_n\right)
= \mathrm{P}\left(\Gamma_{g_k} g_k x_{k+1}\left(z_k - \sum_{j=k}^{n-1} \frac{\partial \alpha_j}{\partial x_k} z_{j+1}\right)\right) \tag{4.43}
\]
\[
\dot{\hat{\theta}}_{f_n} = \Gamma_{f_n} f_n z_n \tag{4.44}
\]
\[
\dot{\hat{\theta}}_{g_n} = \mathrm{P}\left(\Gamma_{g_n} g_n u z_n\right), \tag{4.45}
\]
where P represents the parameter projection operator, introduced to prevent singularity problems, i.e. zero crossings, in the domain of operation. While the true functions satisfy g_i \neq 0, the estimates \hat{\theta}_{g_i}^T g_i can still cross through zero if this modification is not made. Parameter projection can be used to keep the parameter estimates within a desired bounded and convex region. In Section 4.2.3 the parameter projection method is discussed in more detail. Substituting (4.41)-(4.45) in the derivative of V_n renders it equal to
\[
\dot{V}_n = -\sum_{j=1}^{n} c_j z_j^2.
\]
Theorem 4.1. The closed-loop system consisting of the system (4.20), the control (4.41) and the dynamic update laws (4.42)-(4.45) has a globally uniformly stable equilibrium at (z_i, \tilde{\theta}_{f_i}, \tilde{\theta}_{g_i}) = 0, and \lim_{t\to\infty} z_i = 0, i = 1, ..., n.
Proof: The closed-loop stability result follows directly from Theorem 3.7.
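To make the recursion concrete, the design of Theorem 4.1 can be exercised numerically for n = 2. The sketch below is not the thesis implementation: the scalar plant, its regressors (f_1 = x_1, g_1 = 1, f_2 = x_1, g_2 = 1), all parameter values and gains are assumptions chosen for illustration, and a simple clipping rule stands in for the projection operator P.

```python
import math

# Hypothetical scalar plant: x1' = th_f1*x1 + th_g1*x2, x2' = th_f2*x1 + th_g2*u
th_f1, th_g1, th_f2, th_g2 = -1.0, 1.0, -2.0, 2.0   # true (unknown) parameters
hf1, hg1, hf2, hg2 = -0.5, 1.2, -1.0, 1.5           # initial estimates
c1, c2, gam = 2.0, 2.0, 1.0                         # design and update gains
yr, dyr, ddyr = 1.0, 0.0, 0.0                       # constant reference signal
x1, x2, dt = 0.0, 0.0, 1e-3

def clip_pos(v, est, lo=0.1):
    # Crude stand-in for projection P: freeze updates that would drive
    # a control-effectiveness estimate below the bound lo.
    return 0.0 if (est <= lo and v < 0.0) else v

for _ in range(30000):                              # 30 s of simulated time
    z1 = x1 - yr
    a1 = (-c1*z1 - hf1*x1 + dyr) / hg1              # virtual control, step 1
    z2 = x2 - a1
    da1_dx1 = -(c1 + hf1) / hg1                     # d(alpha1)/dx1
    s = z1 - da1_dx1*z2                             # shared tuning-function factor
    dhf1 = gam * x1 * s                             # final update law for hf1
    dhg1 = clip_pos(gam * x2 * s, hg1)              # final update law for hg1
    # Known part of the alpha1 dynamics (alpha1_bar)
    a1bar = (da1_dx1*(hf1*x1 + hg1*x2)
             + (-x1/hg1)*dhf1 + (-a1/hg1)*dhg1      # d(alpha1)/d(estimate) terms
             + (c1/hg1)*dyr + (1.0/hg1)*ddyr)
    u = (-c2*z2 - hg1*z1 - hf2*x1 + a1bar) / hg2    # real control law
    dhf2 = gam * x1 * z2
    dhg2 = clip_pos(gam * u * z2, hg2)
    # Euler integration of plant and update laws
    x1 += dt * (th_f1*x1 + th_g1*x2)
    x2 += dt * (th_f2*x1 + th_g2*u)
    hf1 += dt*dhf1; hg1 += dt*dhg1; hf2 += dt*dhf2; hg2 += dt*dhg2

z1 = x1 - yr
print(abs(z1))   # tracking error after 30 s
```

Because the constant reference is not persistently exciting, the tracking error converges while the parameter estimates settle at values that need not be the true ones, matching the behavior observed in the missile example of Section 4.2.4.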
A block scheme of the resulting closed-loop system with the tuning functions controller for n = 3 and a constant reference signal y_r is shown in Figure 4.2. It is clear that the controller and update laws form one integrated system.
Figure 4.2: Closed-loop dynamics of an uncertain strict feedback control system with adaptive backstepping controller for n = 3. It is assumed that y_r^{(i)} = 0, i = 1, 2, 3.
4.2.3 Robustness Considerations
The adaptive backstepping control design of Theorem 4.1 is based on ideal plant models with parametric uncertainties. However, in practice the controllers will be designed for real world physical systems, which means they have to deal with non-parametric uncertainties such as:
- low-frequency unmodeled dynamics, e.g. structural vibrations;
- measurement noise;
- computational round-off errors and sampling delays;
- time variations of the unknown parameters.
When the input signal (or the reference signal) of the system is persistently exciting (PE) [3], i.e. the reference signal is sufficiently rich [28], these uncertainties will hardly affect the robustness of the adaptive backstepping design. The PE property guarantees exponential stability in the absence of modeling errors, which in turn guarantees bounded states in the presence of bounded modeling error inputs, provided the modeling error term does not destroy the PE property of the input.
However, when the reference signal is not persistently exciting, even very small uncertainties may already lead to problems. For example, the estimated parameters will, in general, not converge to their true values. Although a parameter estimation error of zero can be useful (e.g. for system health monitoring), it is not a necessary condition to guarantee stability of the adaptive backstepping design. A more serious problem is that the adaptation process will have difficulty distinguishing between parameter information and noise. This may cause the estimated parameters to drift slowly. More examples of instability phenomena in adaptive systems can be found in [87]. The lack of robustness is primarily due to the adaptive law, which is nonlinear in general and therefore more susceptible to modeling error effects.
Several methods of robustifying the update laws have been suggested in the literature over the years; an overview is given in [87]. These techniques have in common that they all aim to guarantee that the properties of the modified adaptive laws are as close as possible to the ideal properties, despite the presence of the non-parametric uncertainties. The different methods are now discussed briefly for the general parameter update law
\[
\dot{\hat{\theta}} = \Gamma \phi z. \tag{4.46}
\]
Dead-Zones
The dead-zone modification method is based on the observation that small tracking errors are mostly due to noise and disturbances. The most obvious solution is to turn off the adaptation process if the tracking errors are within certain bounds. This gives a closed-loop system with bounded tracking errors. Modifying the update law (4.46) using the dead-zone technique results in
\[
\dot{\hat{\theta}} = \Gamma \phi \left(z + \nu\right), \qquad
\nu = \begin{cases} 0 & \text{if } |z| \geq g_0 \\ -z & \text{if } |z| < g_0 \end{cases} \tag{4.47}
\]
or
\[
\dot{\hat{\theta}} = \Gamma \phi \left(z + \nu\right), \qquad
\nu = \begin{cases} g_0 & \text{if } z < -g_0 \\ -g_0 & \text{if } z > g_0 \\ -z & \text{if } |z| \leq g_0 \end{cases} \tag{4.48}
\]
for a continuous version to prevent computational problems.
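As an illustration, the continuous dead-zone (4.48) can be written in a few lines; scalar signals and the threshold name g0 are assumptions of this sketch.

```python
def deadzone_update(gamma, phi, z, g0):
    # Continuous dead-zone: the update is switched off smoothly for |z| <= g0.
    if z > g0:
        nu = -g0
    elif z < -g0:
        nu = g0
    else:
        nu = -z
    return gamma * phi * (z + nu)

print(deadzone_update(1.0, 2.0, 0.05, 0.1))  # inside the dead zone: 0.0
print(deadzone_update(1.0, 2.0, 0.50, 0.1))  # outside: driven by (z - g0)
```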
Leakage Terms
The idea behind leakage terms is to modify the update laws so that the time derivative of the Lyapunov function used to analyze closed-loop stability becomes negative in the space of the parameter estimates when these parameters exceed certain bounds. Basically, leakage terms add damping to the update laws:
\[
\dot{\hat{\theta}} = \Gamma \left(\phi z - w \hat{\theta}\right), \tag{4.49}
\]
where the term w\hat{\theta} with w > 0 converts the pure integral action of the update law (4.46) into a leaky integration and is therefore referred to as the leakage. Several choices are possible for the leakage term w; the most widely used choices are called \sigma-modification [86] and e-modification [147]. These modifications are as follows:
\[
\sigma\text{-modification:} \qquad \dot{\hat{\theta}} = \Gamma \left(\phi z - \sigma \hat{\theta}\right) \tag{4.50}
\]
\[
e\text{-modification:} \qquad \dot{\hat{\theta}} = \Gamma \left(\phi z - \sigma |z| \hat{\theta}\right) \tag{4.51}
\]
where \sigma > 0 is a small constant. The advantage of the e-modification is that the leakage term goes to zero as the tracking error converges to zero.
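The difference between the two leakage choices (4.50), (4.51) is easy to see numerically; scalar signals and the gain names are assumptions of this sketch.

```python
def sigma_mod(gamma, phi, z, theta_hat, sigma):
    # sigma-modification: constant leakage pulls theta_hat toward zero
    return gamma * (phi * z - sigma * theta_hat)

def e_mod(gamma, phi, z, theta_hat, sigma):
    # e-modification: leakage scaled by |z| vanishes as the tracking error -> 0
    return gamma * (phi * z - sigma * abs(z) * theta_hat)

# With zero tracking error, only the sigma-modification keeps draining the estimate:
print(sigma_mod(1.0, 1.0, 0.0, 2.0, 0.1))  # -0.2
print(e_mod(1.0, 1.0, 0.0, 2.0, 0.1))      # 0.0
```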
Parameter Projection
A last effective method for eliminating parameter drift and keeping the parameter estimates within designer defined bounds is to use the projection method to constrain the parameter estimates to lie inside a bounded convex set in the parameter space. Let this convex region S be defined as
\[
S \triangleq \left\{ \hat{\theta} \in \mathbb{R}^p \mid g(\hat{\theta}) \leq 0 \right\} \tag{4.52}
\]
and modify the update law (4.46) as
\[
\dot{\hat{\theta}} = \mathrm{P}\left(\Gamma \phi z\right) =
\begin{cases}
\Gamma \phi z & \text{if } \hat{\theta} \in S^0 \text{ or if } \hat{\theta} \in \partial S \text{ and } \nabla g^T \Gamma \phi z \leq 0 \\
\Gamma \phi z - \Gamma \dfrac{\nabla g \, \nabla g^T}{\nabla g^T \Gamma \nabla g} \Gamma \phi z & \text{otherwise}
\end{cases} \tag{4.53}
\]
where S^0 is the interior of S, \partial S the boundary of S and \nabla g = \frac{dg}{d\hat{\theta}}. If the parameter estimate \hat{\theta} is inside the desired region S, then the standard adaptive law is implemented. If \hat{\theta} is on the boundary of S and its derivative is directed outside the region, then the derivative is projected onto the hyperplane tangent to \partial S. Hence, the projection keeps the parameter estimation vector within the desired convex region S at all times.
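A minimal sketch of the projection (4.53) for the particular choice S = {theta : ||theta||^2 <= M^2} with Gamma = I; both the set and the identity update gain are assumptions made here for illustration.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def proj_update(theta, v, M):
    # Projection for S = {theta : ||theta||^2 <= M^2}, with Gamma = I.
    g = dot(theta, theta) - M * M        # g(theta) <= 0 defines S
    grad = [2.0 * t for t in theta]      # dg/dtheta
    if g < 0.0 or dot(grad, v) <= 0.0:   # interior of S, or update points inward
        return list(v)
    # On the boundary with an outward update: keep only the tangential component.
    s = dot(grad, v) / dot(grad, grad)
    return [vi - gi * s for vi, gi in zip(v, grad)]

print(proj_update([1.0, 0.0], [1.0, 1.0], 1.0))  # boundary case: [0.0, 1.0]
print(proj_update([0.5, 0.0], [1.0, 1.0], 1.0))  # interior case: unchanged
```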
4.2.4 Example: Adaptive Longitudinal Missile Control
In this section the missile example of Chapter 3 is revisited. The generalized dynamics of the missile (3.38), (3.39) are repeated here for convenience:
\[
\dot{x}_1 = x_2 + f_1(x_1) + g_1 u \tag{4.54}
\]
\[
\dot{x}_2 = f_2(x_1) + g_2 u, \tag{4.55}
\]
where f_1, f_2, g_1 and g_2 are now unknown nonlinear functions containing the aerodynamic stability and control derivatives. For the control design the g_1u-term is again neglected, so that the system is in a lower triangular form. It is assumed that the sign of g_2 is known and fixed. The unknown functions are rewritten in a parametric form with unknown parameter vectors \theta_{f_1}, \theta_{f_2} and \theta_{g_2} as
\[
f_1(x_1) = \mathbf{f}_1(x_1)^T \theta_{f_1}, \qquad
f_2(x_1) = \mathbf{f}_2(x_1)^T \theta_{f_2}, \qquad
g_2 = \theta_{g_2}^T \mathbf{g}_2,
\]
where the regressors are given by
\[
\mathbf{f}_1 = C_1 \left[x_1^3, \; x_1 |x_1|, \; x_1\right]^T, \qquad
\mathbf{f}_2 = C_2 \left[x_1^3, \; x_1 |x_1|, \; x_1\right]^T, \qquad
\mathbf{g}_2 = C_2.
\]
Then the estimates of the nonlinear functions are defined as
\[
\hat{f}_1(x_1, \hat{\theta}_{f_1}) = \mathbf{f}_1(x_1)^T \hat{\theta}_{f_1}, \qquad
\hat{f}_2(x_1, \hat{\theta}_{f_2}) = \mathbf{f}_2(x_1)^T \hat{\theta}_{f_2}, \qquad
\hat{g}_2(\hat{\theta}_{g_2}) = \hat{\theta}_{g_2}^T \mathbf{g}_2,
\]
and the parameter estimation errors as \tilde{\theta} = \theta - \hat{\theta}. The tracking errors are defined as z_1 = x_1 - y_r and z_2 = x_2 - \alpha_1, where \alpha_1 is the virtual control to be designed in the first design step. The tuning functions adaptive backstepping method is now used to solve this control problem.
Step 1: The z_1-dynamics satisfy
\[
\dot{z}_1 = z_2 + \alpha_1 + \theta_{f_1}^T \mathbf{f}_1 - \dot{y}_r. \tag{4.56}
\]
Consider the candidate CLF V_1 for the z_1-subsystem defined as
\[
V_1(z_1, \tilde{\theta}_{f_1}) = \frac{1}{2}\left(z_1^2 + k_1 \sigma_1^2 + \tilde{\theta}_{f_1}^T \Gamma_{f_1}^{-1}\tilde{\theta}_{f_1}\right), \tag{4.57}
\]
where the gain k_1 > 0 and the integrator state \sigma_1 = \int_0^t z_1 \, d\tau are again introduced to robustify the design against the neglected g_1u-term. The derivative of V_1 along the solutions of (4.56) is given by
\[
\dot{V}_1 = z_1\left(z_2 + \alpha_1 + \hat{\theta}_{f_1}^T \mathbf{f}_1 - \dot{y}_r + k_1 \sigma_1\right)
- \tilde{\theta}_{f_1}^T \Gamma_{f_1}^{-1}\left(\dot{\hat{\theta}}_{f_1} - \Gamma_{f_1} \mathbf{f}_1 z_1\right).
\]
To cancel all indefinite terms, the virtual control \alpha_1 is selected as
\[
\alpha_1 = -c_1 z_1 - k_1 \sigma_1 - \hat{\theta}_{f_1}^T \mathbf{f}_1 + \dot{y}_r, \qquad c_1 > 0 \tag{4.58}
\]
and the intermediate update law for \hat{\theta}_{f_1} as
\[
\tau_{f_1 1} = \Gamma_{f_1} \mathbf{f}_1 z_1. \tag{4.59}
\]
This renders the derivative equal to
\[
\dot{V}_1 = -c_1 z_1^2 + z_1 z_2 - \tilde{\theta}_{f_1}^T \Gamma_{f_1}^{-1}\left(\dot{\hat{\theta}}_{f_1} - \tau_{f_1 1}\right).
\]
This concludes the outer loop design.
Step 2: The z_2-dynamics are given by
\[
\dot{z}_2 = \theta_{f_2}^T \mathbf{f}_2 + \theta_{g_2}^T \mathbf{g}_2 u - \dot{\alpha}_1. \tag{4.60}
\]
Consider the CLF V_2 for the complete system
\[
V_2(z_1, z_2, \tilde{\theta}_{f_1}, \tilde{\theta}_{f_2}, \tilde{\theta}_{g_2})
= V_1(z_1, \tilde{\theta}_{f_1})
+ \frac{1}{2}\left(z_2^2 + \tilde{\theta}_{f_2}^T \Gamma_{f_2}^{-1}\tilde{\theta}_{f_2} + \tilde{\theta}_{g_2}^T \Gamma_{g_2}^{-1}\tilde{\theta}_{g_2}\right). \tag{4.61}
\]
The derivative of V_2 along the solutions of (4.56) and (4.60) satisfies
\[
\begin{aligned}
\dot{V}_2 = {} & -c_1 z_1^2 + z_1 z_2
- \tilde{\theta}_{f_1}^T \Gamma_{f_1}^{-1}\left(\dot{\hat{\theta}}_{f_1} - \tau_{f_1 1} + \Gamma_{f_1} \mathbf{f}_1 \frac{\partial \alpha_1}{\partial x_1} z_2\right)
+ z_2\left(\hat{\theta}_{f_2}^T \mathbf{f}_2 + \hat{\theta}_{g_2}^T \mathbf{g}_2 u - \bar{\alpha}_1\right) \\
& - \tilde{\theta}_{f_2}^T \Gamma_{f_2}^{-1}\left(\dot{\hat{\theta}}_{f_2} - \Gamma_{f_2} \mathbf{f}_2 z_2\right)
- \tilde{\theta}_{g_2}^T \Gamma_{g_2}^{-1}\left(\dot{\hat{\theta}}_{g_2} - \Gamma_{g_2} \mathbf{g}_2 u z_2\right),
\end{aligned}
\]
where \bar{\alpha}_1 is given by
\[
\bar{\alpha}_1 = \frac{\partial \alpha_1}{\partial x_1}\left(x_2 + \hat{\theta}_{f_1}^T \mathbf{f}_1\right)
+ \frac{\partial \alpha_1}{\partial \hat{\theta}_{f_1}} \dot{\hat{\theta}}_{f_1}
+ \frac{\partial \alpha_1}{\partial y_r} \dot{y}_r
+ \frac{\partial \alpha_1}{\partial \dot{y}_r} \ddot{y}_r.
\]
The control law and the update laws are selected as
\[
u = \frac{1}{\hat{\theta}_{g_2}^T \mathbf{g}_2}\left(-c_2 z_2 - z_1 - \hat{\theta}_{f_2}^T \mathbf{f}_2 + \bar{\alpha}_1\right) \tag{4.62}
\]
\[
\dot{\hat{\theta}}_{f_1} = \tau_{f_1 1} - \Gamma_{f_1} \mathbf{f}_1 \frac{\partial \alpha_1}{\partial x_1} z_2 \tag{4.63}
\]
\[
\dot{\hat{\theta}}_{f_2} = \Gamma_{f_2} \mathbf{f}_2 z_2 \tag{4.64}
\]
\[
\dot{\hat{\theta}}_{g_2} = \mathrm{P}\left(\Gamma_{g_2} \mathbf{g}_2 u z_2\right), \tag{4.65}
\]
where the projection operator P is introduced to ensure that the estimate of g_2 does not change sign. The above adaptive control law renders the derivative of V_2 equal to
\[
\dot{V}_2 = -c_1 z_1^2 - c_2 z_2^2.
\]
By Theorem 3.7, \lim_{t\to\infty} z_1, z_2 = 0, which means that the reference signal y_r is again asymptotically tracked by x_1.
The resulting closed-loop system has been implemented in MATLAB/Simulink. In Figure 4.3 the response of the system with 4 different gain selections to a number of angle of attack doublets at Mach 2.2 is shown. The onboard model contains the data of the missile at Mach 2.0. The control gains are selected as c_1 = c_2 = 10 for all simulations, the integral gain k_1 is either 0 (noint) or 10 (int), and the update gains are \Gamma_{f_1} = \Gamma_{f_2} = 0 \cdot I, \Gamma_{g_2} = 0 (nonad) or \Gamma_{f_1} = \Gamma_{f_2} = 10 I, \Gamma_{g_2} = 0.01 (ad).
As can be seen from Figure 4.3, the modeling error is severe enough to render the system unstable when adaptation is turned off and no integral gain is used. Adding an integral gain ensures that the missile follows its reference again, but the transient performance is not acceptable. Turning adaptation on instead gives a much better response, but there is still a very small tracking error in the outer loop. This is due to the neglected g_1u-term: the regressors are not defined richly enough to fully cancel the effect of these unmodeled dynamics. Therefore, the final simulation, with adaptation turned on and an integral gain, shows the best response.
The parameter estimation errors of the two simulations with adaptation turned on are plotted in Figure 4.4. The errors can be seen to converge to constant values. However, the true values are not found. This is a characteristic of the integrated adaptive approaches: the estimation is performed to meet a total system stability criterion, the control Lyapunov function, rather than to minimize the estimation error. Hence, convergence of the parameters to their true values is not guaranteed. Note that dead-zones can be added to the update laws to prevent parameter drift due to numerical round-off errors.
4.3 Constrained Adaptive Backstepping
In the previous section the tuning functions adaptive backstepping method was derived. The complexity of the design procedure is mainly due to the calculation of the derivatives of the virtual controls at each intermediate design step. Especially for high order systems or complex multivariable systems such as aircraft dynamics, it becomes very tedious to calculate these derivatives analytically.
In this section an alternative approach involving command filters is introduced to reduce the algebraic complexity of the adaptive backstepping control law formulated in Theorem 4.1. This approach is sometimes referred to as dynamic surface control in the literature [212, 213]. An additional advantage of this approach is that it also eliminates the method's restriction to nonlinear systems of a lower triangular form. Finally, the command filters can also be used to incorporate magnitude and rate limits on the input and on the states used as virtual controls in the design [58, 60, 61, 163]. For example, when a magnitude limit on
Figure 4.3: Numerical simulations at Mach 2.2 of the longitudinal missile model with adaptive backstepping control law with uncertainty in the onboard model. Results are shown for 4 different gain selections, including 2 with adaptation turned off. (Panels: angle of attack (deg), pitch rate (deg/s) and control deflection (deg) versus time (s); legend: nonad/noint, nonad/int, ad/noint, ad/int, reference.)
the input is in effect and the desired control cannot be achieved, the tracking errors will in general become larger and will no longer be the result of function approximation errors exclusively. Since the dynamic parameter update laws of the adaptive backstepping method are driven by the tracking errors, care must be taken that they do not "unlearn" when the limits on the control input are in effect.
The command filtered approach for preventing corruption of the parameter estimation process can be seen as a combination of training signal hedging [4, 105] and pseudo-control hedging [91, 206]. Training signal hedging involves modifying the tracking error definitions used in the parameter update laws to remove the effects of the saturation. In the pseudo-control hedging method the commanded input to the next control loop is altered so that the generated control signal is implementable without exceeding the constraints.
4.3.1 Command Filtering Approach
Consider the non-triangular, feedback passive system
\[
\begin{aligned}
\dot{x}_i &= f_i(x) + g_i(x) x_{i+1}, \qquad i = 1, ..., n-1 \\
\dot{x}_n &= f_n(x) + g_n(x) u,
\end{aligned} \tag{4.66}
\]
Figure 4.4: The parameter estimation errors for the two simulations of the longitudinal missile model with adaptive backstepping control law at Mach 2.2 with adaptation turned on. (Panels: components of the estimation errors for \theta_{f_1}, \theta_{f_2} and \theta_{g_2} versus time (s); legend: ad/noint, ad/int.)
where x = (x_1, ..., x_n) is the state, x_i \in \mathbb{R} and u \in \mathbb{R} the control signal. The smooth functions f_i and g_i are again unknown. The sign of all g_i(x) is known and g_i(x) \neq 0. The control objective is to asymptotically track the reference signal x_{1,r}(t), whose first derivative is assumed known. The tracking errors are defined as
\[
z_i = x_i - x_{i,r}, \tag{4.67}
\]
where x_{i,r}, i = 2, ..., n will be defined by the backstepping controller.
Step 1: As with the standard adaptive backstepping procedure, the first virtual control is defined as
\[
\alpha_1 = \frac{1}{\hat{\theta}_{g_1}^T g_1}\left(-c_1 z_1 - \hat{\theta}_{f_1}^T f_1 + \dot{x}_{1,r}\right) \tag{4.68}
\]
where c_1 > 0. However, instead of directly applying this virtual control, a new signal x^0_{2,r} is defined as
\[
x^0_{2,r} = \alpha_1 - \xi_2, \tag{4.69}
\]
where \xi_2 will be defined in design step 2. The signal x^0_{2,r} is filtered with a second order command filter to produce x_{2,r} and its derivative \dot{x}_{2,r}. It is possible to enforce magnitude and rate limits with this filter; see Appendix C for details. The effect that the use of this command filter has on the tracking error z_1 is estimated by the stable linear filter
\[
\dot{\xi}_1 = -c_1 \xi_1 + \hat{\theta}_{g_1}^T g_1 \left(x_{2,r} - x^0_{2,r}\right). \tag{4.70}
\]
Note that by design of the second order command filter, the signal (x_{2,r} - x^0_{2,r}) is bounded and, when no limits are in effect, small. It is now possible to introduce the compensated tracking errors as
\[
\bar{z}_i = z_i - \xi_i, \qquad i = 1, ..., n. \tag{4.71}
\]
Select the first CLF V_1 as a quadratic function of the compensated tracking error \bar{z}_1 and the estimation errors:
\[
V_1 = \tfrac{1}{2}\bar{z}_1^2
+ \tfrac{1}{2}\tilde{\theta}_{f_1}^T \Gamma_{f_1}^{-1}\tilde{\theta}_{f_1}
+ \tfrac{1}{2}\tilde{\theta}_{g_1}^T \Gamma_{g_1}^{-1}\tilde{\theta}_{g_1}. \tag{4.72}
\]
Taking the derivative of V_1 results in
\[
\begin{aligned}
\dot{V}_1 = {} & \bar{z}_1\left(\theta_{f_1}^T f_1 + \theta_{g_1}^T g_1 \left(z_2 + x_{2,r}\right) - \dot{x}_{1,r} - \dot{\xi}_1\right)
- \tilde{\theta}_{f_1}^T \Gamma_{f_1}^{-1}\left(\dot{\hat{\theta}}_{f_1} - \Gamma_{f_1} f_1 \bar{z}_1\right)
- \tilde{\theta}_{g_1}^T \Gamma_{g_1}^{-1}\left(\dot{\hat{\theta}}_{g_1} - \Gamma_{g_1} g_1 x_2 \bar{z}_1\right) \\
= {} & \bar{z}_1\left(\hat{\theta}_{f_1}^T f_1 + \hat{\theta}_{g_1}^T g_1 \left(z_2 + x^0_{2,r}\right) - \dot{x}_{1,r} + c_1 \xi_1\right)
- \tilde{\theta}_{f_1}^T \Gamma_{f_1}^{-1}\left(\dot{\hat{\theta}}_{f_1} - \Gamma_{f_1} f_1 \bar{z}_1\right)
- \tilde{\theta}_{g_1}^T \Gamma_{g_1}^{-1}\left(\dot{\hat{\theta}}_{g_1} - \Gamma_{g_1} g_1 x_2 \bar{z}_1\right) \\
= {} & -c_1 \bar{z}_1^2 + \hat{\theta}_{g_1}^T g_1 \bar{z}_1 \bar{z}_2
- \tilde{\theta}_{f_1}^T \Gamma_{f_1}^{-1}\left(\dot{\hat{\theta}}_{f_1} - \Gamma_{f_1} f_1 \bar{z}_1\right)
- \tilde{\theta}_{g_1}^T \Gamma_{g_1}^{-1}\left(\dot{\hat{\theta}}_{g_1} - \Gamma_{g_1} g_1 x_2 \bar{z}_1\right).
\end{aligned}
\]
Selecting the dynamic update laws as
\[
\dot{\hat{\theta}}_{f_1} = \Gamma_{f_1} f_1 \bar{z}_1 \tag{4.73}
\]
\[
\dot{\hat{\theta}}_{g_1} = \mathrm{P}\left(\Gamma_{g_1} g_1 x_2 \bar{z}_1\right) \tag{4.74}
\]
finishes the first design step. The update laws for \hat{\theta}_{f_1} and \hat{\theta}_{g_1} are defined immediately, since there will be no additional derivative terms in the next steps due to the command filters. Note that the update laws are driven by the compensated tracking error.
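Step 1 relies on a second-order command filter to produce x_{2,r} and its derivative from the raw signal x^0_{2,r}. A minimal sketch of such a filter with magnitude and rate limits is given below; the specific saturated state-space structure, bandwidth and damping values are assumptions of this sketch (cf. Appendix C for the actual design).

```python
def sat(v, lim):
    return max(-lim, min(lim, v))

def command_filter(xc_raw, T, dt=1e-3, wn=10.0, zeta=0.7, mag=1.0, rate=2.0):
    # Second-order filter: q1 tracks a magnitude/rate limited version of
    # xc_raw, and q2 is its derivative, obtained without differentiation.
    q1, q2 = 0.0, 0.0
    hist = []
    for _ in range(int(T / dt)):
        # magnitude limit on the command, rate limit on the commanded derivative
        v = sat(wn / (2.0 * zeta) * (sat(xc_raw, mag) - q1), rate)
        q1 += dt * q2
        q2 += dt * 2.0 * zeta * wn * (v - q2)
        hist.append((q1, q2))
    return hist

hist = command_filter(5.0, 5.0)        # step command far above the magnitude limit
x_r, dx_r = hist[-1]
print(x_r)                             # settles near the magnitude limit (1.0)
print(max(abs(d) for _, d in hist))    # derivative stays within the rate limit
```

The difference between the limited output x_{2,r} and the raw input x^0_{2,r} is exactly the signal that drives the compensation filter (4.70).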
Step i: (i = 2, ..., n-1) The virtual controls are defined as
\[
\alpha_i = \frac{1}{\hat{\theta}_{g_i}^T g_i}\left(-c_i z_i - \hat{\theta}_{g_{i-1}}^T g_{i-1} \bar{z}_{i-1} - \hat{\theta}_{f_i}^T f_i + \dot{x}_{i,r}\right) \tag{4.75}
\]
where c_i > 0, and the command filter inputs as
\[
x^0_{i,r} = \alpha_{i-1} - \xi_i. \tag{4.76}
\]
The effect that the use of the command filters has on the tracking errors is estimated by
\[
\dot{\xi}_i = -c_i \xi_i + \hat{\theta}_{g_i}^T g_i \left(x_{i+1,r} - x^0_{i+1,r}\right). \tag{4.77}
\]
Finally, the update laws are given by
\[
\dot{\hat{\theta}}_{f_i} = \Gamma_{f_i} f_i \bar{z}_i \tag{4.78}
\]
\[
\dot{\hat{\theta}}_{g_i} = \mathrm{P}\left(\Gamma_{g_i} g_i x_{i+1} \bar{z}_i\right). \tag{4.79}
\]
Step n: In the final design step the actual controller is found by filtering
\[
u^0 = \alpha_n = \frac{1}{\hat{\theta}_{g_n}^T g_n}\left(-c_n z_n - \hat{\theta}_{g_{n-1}}^T g_{n-1} \bar{z}_{n-1} - \hat{\theta}_{f_n}^T f_n + \dot{x}_{n,r}\right), \tag{4.80}
\]
to generate u. The effect that the use of this filter has on the tracking error z_n is estimated by
\[
\dot{\xi}_n = -c_n \xi_n + \hat{\theta}_{g_n}^T g_n \left(u - u^0\right) \tag{4.81}
\]
and the update laws are defined as
\[
\dot{\hat{\theta}}_{f_n} = \Gamma_{f_n} f_n \bar{z}_n \tag{4.82}
\]
\[
\dot{\hat{\theta}}_{g_n} = \mathrm{P}\left(\Gamma_{g_n} g_n u \bar{z}_n\right). \tag{4.83}
\]
Theorem 4.2. The closed-loop system consisting of the system (4.66), the control (4.80) and update laws (4.73), (4.74), (4.78), (4.79), (4.82), (4.83) has a globally uniformly stable equilibrium at (\bar{z}_i, \tilde{\theta}_{f_i}, \tilde{\theta}_{g_i}) = 0, i = 1, ..., n. Furthermore, \lim_{t\to\infty} \bar{z}_i = 0.
Proof: Consider the CLF
\[
V_n = \frac{1}{2}\sum_{i=1}^{n}\left(\bar{z}_i^2 + \tilde{\theta}_{f_i}^T \Gamma_{f_i}^{-1}\tilde{\theta}_{f_i} + \tilde{\theta}_{g_i}^T \Gamma_{g_i}^{-1}\tilde{\theta}_{g_i}\right), \tag{4.84}
\]
which, along the solutions of the closed-loop system with the control (4.80) and update laws (4.73), (4.78), (4.82), has the time derivative
\[
\dot{V}_n = -\sum_{i=1}^{n} c_i \bar{z}_i^2.
\]
Hence, by Theorem 3.7 the stated stability properties follow.
The above theorem guarantees desirable properties for the compensated tracking errors \bar{z}_i. The difference between \bar{z}_i and the real tracking errors z_i is \xi_i, which is the output of the stable filters
\[
\dot{\xi}_i = -c_i \xi_i + \hat{\theta}_{g_i}^T g_i \left(x_{i+1,r} - x^0_{i+1,r}\right).
\]
The magnitude of the input to this filter is determined by the design of the command filter for x^0_{i+1,r}. If there are no magnitude or rate limits in effect on the command filters and their bandwidth is selected sufficiently high, the error (x_{i+1,r} - x^0_{i+1,r}) will be small during transients and zero under steady-state conditions. Hence, the performance of the command filtered adaptive backstepping approach can be made arbitrarily close to that of the standard adaptive backstepping approach of Section 4.2. A formal proof of this statement can be found in [59]. This rigorous proof is based on singular perturbation theory and makes use of Tikhonov's Theorem as given in [106].
If the limits on the command filter are in effect, the real tracking errors z_i may increase, but the compensated tracking errors \bar{z}_i that drive the estimation process are unaffected. Hence, the dynamic update laws will not "unlearn" due to magnitude or rate limits on the input and the states used for virtual control.
4.3.2 Example: Constrained Adaptive Longitudinal Missile Control
In this section the command filtered adaptive backstepping approach is applied to the tracking control design for the longitudinal missile model (3.38), (3.39) of the earlier examples. The nonlinear functions containing the aerodynamic stability and control derivatives, f_1, f_2, g_1 and g_2, are again unknown. Furthermore, it is again assumed that the sign of g_2 is known and fixed. Since the command filtered adaptive backstepping method can deal with non-triangular nonlinear systems, the g_1u-term does not have to be neglected during the control design.
The tracking errors are defined as
\[
z_1 = x_1 - y_r, \qquad z_2 = x_2 - x_{2,r}, \tag{4.85}
\]
where x_{2,r} is the command filtered virtual control. The virtual controls are defined as
\[
\alpha_1 = -c_1 z_1 - \hat{\theta}_{f_1}^T \mathbf{f}_1 - \hat{\theta}_{g_1}^T \mathbf{g}_1 u + \dot{y}_r, \qquad c_1 > 0 \tag{4.86}
\]
\[
\alpha_2 = \frac{1}{\hat{\theta}_{g_2}^T \mathbf{g}_2}\left(-c_2 z_2 - \bar{z}_1 - \hat{\theta}_{f_2}^T \mathbf{f}_2 + \dot{x}_{2,r}\right), \qquad c_2 > 0, \tag{4.87}
\]
where
\[
\bar{z}_i = z_i - \xi_i, \qquad i = 1, 2 \tag{4.88}
\]
are the compensated tracking errors. The signals
\[
x^0_{2,r} = \alpha_1 - \xi_2 \tag{4.89}
\]
\[
u^0 = \alpha_2 \tag{4.90}
\]
are filtered with second order command filters to produce x_{2,r}, its derivative \dot{x}_{2,r}, and u.
The effect that the use of these command filters has on the tracking errors is measured by
\[
\dot{\xi}_1 = -c_1 \xi_1 + \left(x_{2,r} - x^0_{2,r}\right) \tag{4.91}
\]
\[
\dot{\xi}_2 = -c_2 \xi_2 + \hat{\theta}_{g_2}^T \mathbf{g}_2 \left(u - u^0\right). \tag{4.92}
\]
Finally, the update laws are given by
\[
\dot{\hat{\theta}}_{f_1} = \Gamma_{f_1} \mathbf{f}_1 \bar{z}_1 \tag{4.93}
\]
\[
\dot{\hat{\theta}}_{g_1} = \Gamma_{g_1} \mathbf{g}_1 u \bar{z}_1 \tag{4.94}
\]
\[
\dot{\hat{\theta}}_{f_2} = \Gamma_{f_2} \mathbf{f}_2 \bar{z}_2 \tag{4.95}
\]
\[
\dot{\hat{\theta}}_{g_2} = \mathrm{P}\left(\Gamma_{g_2} \mathbf{g}_2 u \bar{z}_2\right), \tag{4.96}
\]
where \Gamma_{(\cdot)} = \Gamma_{(\cdot)}^T > 0 are the update gains. The adaptive controller renders the derivative of the CLF
\[
V = \frac{1}{2}\sum_{i=1}^{2}\left(\bar{z}_i^2 + \tilde{\theta}_{f_i}^T \Gamma_{f_i}^{-1}\tilde{\theta}_{f_i} + \tilde{\theta}_{g_i}^T \Gamma_{g_i}^{-1}\tilde{\theta}_{g_i}\right) \tag{4.97}
\]
equal to
\[
\dot{V} = -c_1 \bar{z}_1^2 - c_2 \bar{z}_2^2. \tag{4.98}
\]
(4.98)
By Theorem 3.7 the equilibrium ( z
i
,
f
i
,
g
i
) = 0 for i = 1, 2 is globally stable and the
compensated tracking errors z
1
, z
2
converge asymptotically to zero.
The resulting constrained adaptive backstepping controller can be compared with the standard adaptive backstepping controller of Section 4.2.4 in MATLAB/Simulink simulations. For the tuning functions controller the control gains are selected as c_1 = c_2 = k_1 = 10 and the update gains as \Gamma_{f_1} = \Gamma_{f_2} = 10 I, \Gamma_{g_2} = 0.01. The gains of the command filtered controller are selected the same, except that the update gains of the outer loop are selected as \Gamma_{f_1} = 1000 I and \Gamma_{g_1} = 1. The outer loop update laws of both designs differ, but with these update gain selections the response of both controllers is nearly identical. Of course, the command filtered controller does not need the integral term to achieve perfect tracking, since it does not neglect the effect of the control surface deflections on the aerodynamic forces.
The results of a simulation with an upper magnitude limit of 9.5 degrees on the control input are more interesting, as can be seen in Figure 4.5. The maneuver has been performed at Mach 2.2 with the onboard model for Mach 2.0. The performance of the standard adaptive backstepping degrades severely when compared to the performance without saturation in Figure 4.3 of Section 4.2.4. The reason for this loss in performance can be found in Figure 4.6, where the parameter estimation errors are plotted. During periods of control saturation the tracking errors increase, and since the parameter update laws are driven by the tracking errors (which are now no longer the result of the function approximation errors exclusively), they tend to "unlearn". The update laws of the command filtered controller are driven by the compensated tracking errors, from which the effect of the magnitude limit has been removed by proper definition of the command filters. As a result the performance of the constrained adaptive backstepping controller is much better.
Figure 4.5: Numerical simulations at Mach 2.2 of the longitudinal missile model with the tuning
functions versus the constrained adaptive backstepping (cabs) control law and an upper magnitude
limit on the control input of 9.5 deg.
Figure 4.6: The parameter estimation errors for both adaptive backstepping designs. The update
laws of the tuning functions adaptive backstepping controller unlearn during periods when the
upper limit on the input is in effect.
Chapter 5
Inverse Optimal Adaptive
Backstepping
The static and dynamic parts of the adaptive backstepping controllers of the previous chapter are designed simultaneously in a recursive manner. The very strong stability and convergence properties of the controllers can be proved using a single control Lyapunov function. A drawback of this approach is that, because there is strong coupling between the static and dynamic parts, it is unclear how changes in the adaptation gain affect the tracking performance. This makes tuning of the controllers a very tedious and nonintuitive process. In this chapter an attempt is made to develop an adaptive backstepping control approach that is optimal with respect to some meaningful cost functional. Besides optimal control being an intuitively appealing approach, the resulting control laws inherently possess certain robustness properties.
5.1 Introduction
The adaptive backstepping designs of Chapter 4 are focused on achieving stability and convergence rather than performance or optimality. Some performance bounds can be derived for the tracking errors, the system states and the estimated parameters, but those bounds do not contain any estimates of the necessary control effort [121]. Furthermore, increasing the update gains results in more rapid parameter convergence, but it is unclear how the transient tracking performance is affected. The advantages of a control law that is optimal with respect to some meaningful cost functional are its inherent robustness properties with respect to external disturbances and model uncertainties, as in the case of linear quadratic control or H_\infty control.
When the optimal control law u^*(x) is applied, the optimal value function J(x) will decrease along the trajectory, since the cost-to-go must continuously decrease by the principle of optimality [15]. This means that J(x) is a Lyapunov function for the controlled system: V(x) = J(x). The functions V(x) and u^*(x) are related to each other by the following optimality condition [175, 194].
Theorem 5.1 (Optimality and Stability). Suppose that there exists a continuously differentiable positive semi-definite function V(x) which satisfies the Hamilton-Jacobi-Bellman equation [14]
\[
l(x) + L_f V(x) - \frac{1}{4} L_g V(x) R^{-1}(x) \left(L_g V(x)\right)^T = 0, \qquad V(0) = 0 \tag{5.3}
\]
such that the feedback control
\[
u^*(x) = -\frac{1}{2} R^{-1}(x) \left(L_g V(x)\right)^T \tag{5.4}
\]
achieves asymptotic stability of the equilibrium x = 0. Then u^*(x) is the optimal stabilizing control.
Proof: Substituting
\[
v = u + \frac{1}{2} R^{-1}(x) \left(L_g V(x)\right)^T \tag{5.5}
\]
into (5.2) and using the HJB identity results in:
\[
\begin{aligned}
J &= \int_0^\infty \left(l + v^T R v - v^T \left(L_g V\right)^T + \frac{1}{4} L_g V R^{-1} \left(L_g V\right)^T\right) dt \\
&= \int_0^\infty \left(-L_f V + \frac{1}{2} L_g V R^{-1} \left(L_g V\right)^T - L_g V v\right) dt + \int_0^\infty v^T R v \, dt \\
&= -\int_0^\infty \frac{\partial V}{\partial x}\left(f + g u\right) dt + \int_0^\infty v^T R v \, dt \\
&= -\int_0^\infty \frac{dV}{dt} \, dt + \int_0^\infty v^T R v \, dt \\
&= V(x(0)) - \lim_{T\to\infty} V(x(T)) + \int_0^\infty v^T R v \, dt.
\end{aligned}
\]
The above limit of V(x(T)) is zero, since the cost functional (5.2) is only minimized over those u which achieve \lim_{t\to\infty} x(t) = 0; thus
\[
J = V(x(0)) + \int_0^\infty v^T R v \, dt.
\]
It is easy to see that the minimum of J is V(x(0)). This minimum is reached for v(t) \equiv 0, which proves that u^*(x) given by (5.4) is optimal and that V(x) is the optimal value function. In [70] and [175] it is shown that, besides optimal control being an intuitively appealing approach, optimal control laws inherently possess certain robustness properties for the closed-loop system, including stability margins. However, a direct optimal control approach requires solving the Hamilton-Jacobi-Bellman equation, which is in general not feasible.
5.2.2 Inverse Optimal Control
The fact that the robustness achieved as a result of optimality is largely independent of the choice of the functions l(x) \geq 0 and R(x) > 0 motivated the development of inverse optimal control design methods [65, 66]. In the inverse approach a Lyapunov function V(x) is given, and the task is to determine whether a control law such as (5.4) is optimal for a cost functional of the form (5.2). The term "inverse" refers to the fact that the functions l(x) and R(x) are determined after the design of the stabilizing feedback control, instead of being selected beforehand by the designer.
Definition 5.2. A stabilizing control law u(x) solves an inverse optimal control problem for the system
\[
\dot{x} = f(x) + g(x) u \tag{5.6}
\]
if it can be expressed as
\[
u(x) = k(x) = -\frac{1}{2} R^{-1}(x) \left(L_g V(x)\right)^T, \qquad R(x) > 0, \tag{5.7}
\]
where V(x) is a positive semi-definite function, such that the negative semi-definiteness of \dot{V} is achieved with the control (5.7), that is
\[
\dot{V} = L_f V(x) + \frac{1}{2} L_g V(x) k(x) \leq 0. \tag{5.8}
\]
When the function l(x) is selected equal to -\dot{V}:
\[
l(x) := -L_f V(x) - \frac{1}{2} L_g V(x) k(x) \geq 0, \tag{5.9}
\]
then V(x) is a solution of the HJB equation
\[
l(x) + L_f V(x) - \frac{1}{4} L_g V(x) R^{-1}(x) \left(L_g V(x)\right)^T = 0. \tag{5.10}
\]
5.3 Adaptive Backstepping and Optimality
Since the introduction of adaptive backstepping in the beginning of the 1990s, there have been numerous publications that consider the inverse optimal problem and control Lyapunov function designs, e.g. [57, 122] and [128]. Textbooks that deal with the subject are [115] and [175]. However, inverse optimal adaptive backstepping control is only considered in [128] and [134]. In [128] an inverse optimal adaptive tracking control design for a general class of nonlinear systems is derived, and [134] extends the results to a nonlinear multi-input multi-output system with external disturbances.
In this section the approach of [128] is repeated in an organized manner and theoretical
transient performance bounds are given. The section concludes with an evaluation of the
performance and numerical sensitivity of the inverse optimal design approach applied to
the longitudinal missile pitch autopilot example as discussed in the earlier chapters.
5.3.1 Inverse Optimal Design Procedure
Consider the class of parametric strict feedback systems

    \dot{x}_i = x_{i+1} + \varphi_i(\bar{x}_i)^T \theta,  i = 1, ..., n-1
    \dot{x}_n = u + \varphi_n(x)^T \theta    (5.11)

where x_i \in R, u \in R and \bar{x}_i = (x_1, x_2, ..., x_i). The vector \theta contains the unknown
constant parameters of the system.
The control objective is to force the output y = x_1 to asymptotically track the reference
signal y_r(t) whose first n derivatives are assumed to be known and bounded. To simplify
the control design, the tracking control problem is first transformed to a regulation prob-
lem. For any given smooth function y_r(t) there exist functions \chi_1(t), \chi_2(t, \theta), ..., \chi_n(t, \theta)
and \chi_r(t, \theta) such that

    \dot{\chi}_i = \chi_{i+1} + \varphi_i(\bar{\chi}_i)^T \theta,  i = 1, ..., n-1
    \dot{\chi}_n = \chi_r(t, \theta) + \varphi_n(\chi)^T \theta    (5.12)
    y_r(t) = \chi_1(t).
Since \theta is unknown, it is replaced by its estimate \hat\theta(t), so that the reference trajectory x_r(t) is generated by

    \dot{x}_{ri} = x_{r,i+1} + \varphi_{ri}(\bar{x}_{ri})^T \hat\theta + \varepsilon_i,  i = 1, ..., n-1
    \dot{x}_{rn} = \chi_r(t, \hat\theta) + \varphi_{rn}(x_r)^T \hat\theta + \varepsilon_n    (5.13)
    y_r(t) = x_{r1}(t).
The dynamics of the tracking error e = x - x_r satisfy

    \dot{e}_i = e_{i+1} + \psi_i(e_1, ..., e_i, x_r)^T \theta + \varphi_{ri}(x_{r1}, ..., x_{ri}, \hat\theta)^T \tilde\theta - \varepsilon_i,  i = 1, ..., n-1
    \dot{e}_n = \bar{u} + \psi_n(e_1, ..., e_n, x_r)^T \theta + \varphi_{rn}(x_r, \hat\theta)^T \tilde\theta - \varepsilon_n    (5.14)

where \bar{u} = u - u_r(t, \hat\theta), \tilde\theta = \theta - \hat\theta and \psi_i = \varphi_i(x_1, ..., x_i) - \varphi_{ri}(x_{r1}, ..., x_{ri}), i = 1, ..., n. Now the
inverse optimal tracking problem has been transformed into an inverse optimal regulation
problem. Define the error states as

    z_i = e_i - \alpha_{i-1},  i = 1, ..., n    (5.15)

where \alpha_{i-1} are the virtual controls to be designed by applying the tuning functions adap-
tive backstepping method of Theorem 4.1. After that, the real control u is chosen in a
form that is inverse optimal.
Step i: (i = 1, ..., n-1)

    \alpha_i(t, \bar{e}_i, \hat\theta) = -c_i z_i - z_{i-1} + \frac{\partial\alpha_{i-1}}{\partial t} + \sum_{k=1}^{i-1} \frac{\partial\alpha_{i-1}}{\partial e_k} e_{k+1} - w_i^T \hat\theta - \sum_{k=1}^{i-1} (\sigma_{ki} + \sigma_{ik}) z_k - \sigma_{ii} z_i,  c_i > 0    (5.16)

where for notational convenience

    w_i(t, \bar{e}_i) = \psi_i - \sum_{k=1}^{i-1} \frac{\partial\alpha_{i-1}}{\partial e_k} \psi_k    (5.17)

    \sigma_{ik} = \Big( \frac{\partial\alpha_{i-1}}{\partial\hat\theta} + \frac{\partial\chi_i}{\partial\hat\theta} - \sum_{j=2}^{i-1} \frac{\partial\alpha_{i-1}}{\partial e_j} \frac{\partial\chi_j}{\partial\hat\theta} \Big) \Gamma w_k.    (5.18)
Step n: Consider the control Lyapunov function

    V_n = \frac{1}{2} \sum_{k=1}^{n} z_k^2 + \frac{1}{2} \tilde\theta^T \Gamma^{-1} \tilde\theta.    (5.19)
Taking the derivative of V_n and substituting (5.16) gives

    \dot{V}_n = -\sum_{k=1}^{n-1} c_k z_k^2 + z_n \Big[ z_{n-1} + \bar{u} + \sum_{k=1}^{n-1} (\sigma_{kn} + \sigma_{nk}) z_k + \sigma_{nn} z_n - \frac{\partial\alpha_{n-1}}{\partial t} - \sum_{k=1}^{n-1} \frac{\partial\alpha_{n-1}}{\partial e_k} e_{k+1} + w_n^T \hat\theta \Big]    (5.20)
    + \tilde\theta^T \Gamma^{-1} (\Gamma \tau_n - \dot{\hat\theta}),

where

    \tau_i = \tau_{i-1} + w_i z_i,  i = 1, ..., n.    (5.21)
To eliminate the parameter estimation error \tilde\theta = \theta - \hat\theta from \dot{V}_n, the update law

    \dot{\hat\theta} = \Gamma \tau_n    (5.22)

is selected. Now the actual control u can be defined. Following the standard adaptive
backstepping procedure of Theorem 4.1 it is possible to define a control u which cancels
all indefinite terms and renders \dot{V}_n negative semi-definite. However, this controller is not
designed in a way that it can be guaranteed to be optimal. By Theorem 5.1 a control law
of the form

    \bar{u} = -r^{-1}(t, e, \hat\theta) \frac{\partial V_n}{\partial e} g,  r(t, e, \hat\theta) > 0  \forall t, e, \hat\theta    (5.23)

is suggested. For this control problem (5.23) simplifies to

    \bar{u} = -r^{-1}(t, e, \hat\theta) z_n,    (5.24)
has to be a factor of the control. In order to get rid of the indenite terms without
canceling them, nonlinear damping terms [118] are introduced. Since the expressions
i1
t
n1
k=1
n1
e
k
e
k+1
and
T
n
vanish at z = 0 there exist smooth functions
k
such that
i1
t
n1
k=1
n1
e
k
e
k+1
+
T
n
=
n
k=1
k
z
k
, k = 1, ..., n. (5.25)
Thus (5.20) becomes

    \dot{V}_n = -\sum_{k=1}^{n-1} c_k z_k^2 + z_n \bar{u} + \sum_{k=1}^{n} \lambda_k z_k z_n,    (5.26)

where

    \lambda_k = \Phi_k + \sigma_{kn} + \sigma_{nk},  k = 1, ..., n-2
    \lambda_{n-1} = 1 + \Phi_{n-1} + \sigma_{(n-1)n} + \sigma_{n(n-1)}    (5.27)
    \lambda_n = \Phi_n + \sigma_{nn}.
A control law of the form (5.24) with

    r(t, e, \hat\theta) = \Big( c_n + \sum_{k=1}^{n} \frac{\lambda_k^2}{2 c_k} \Big)^{-1} > 0,  c_n > 0,  \forall t, e, \hat\theta    (5.28)

results in

    \dot{V}_n = -\frac{1}{2} \sum_{k=1}^{n} c_k z_k^2 - \sum_{k=1}^{n} \frac{c_k}{2} \Big( z_k - \frac{\lambda_k}{c_k} z_n \Big)^2.    (5.29)
Note that incorporating command filters in this inverse optimal technique is not possible,
since the filtered derivatives of the virtual controls cannot be damped out in the same
way. By Theorem 3.7, it can be concluded that the tracking control problem is solved,
since \dot{V}_n is negative semi-definite. The properties of the controller are summarized in the
following theorem.
Theorem 5.3 (Inverse optimal adaptive backstepping). The dynamic feedback control
law

    \bar{u}^* = -\beta r^{-1}(t, e, \hat\theta) z_n,  \beta \ge 2
    \dot{\hat\theta} = \Gamma \tau_n = \Gamma \sum_{j=1}^{n} w_j z_j,    (5.30)

does not only stabilize the system (5.14) with respect to the control Lyapunov function
(5.19), but is also optimal with respect to the cost functional

    J = \lim_{t\to\infty} \Big\{ \beta |\tilde\theta(t)|^2_{\Gamma^{-1}} + \int_0^t \big( l(\tau, e, \hat\theta) + r(\tau, e, \hat\theta) \bar{u}^2 \big) d\tau \Big\},  \theta \in R^p    (5.31)

where

    l(z, \hat\theta) = -2\beta \dot{V}_n + \beta(\beta - 2) r^{-1} z_n^2,    (5.32)

with \dot{V}_n given by (5.29), and with a value function

    J^* = \beta \big( |\tilde\theta(0)|^2_{\Gamma^{-1}} + |z(0)|^2 \big).    (5.33)
Proof: Since \beta \ge 2, r(t, e, \hat\theta) > 0, and \dot{V}_n negative definite, it is clear that l(t, e, \hat\theta) is
positive definite. Therefore J defined in (5.31) is a meaningful cost functional which
puts an integral penalty on both z and \bar{u} (with complicated nonlinear scaling in terms
of the parameter estimate), as well as on the terminal value of |\tilde\theta|^2_{\Gamma^{-1}}. Substituting

    v = \bar{u} + \beta r^{-1}(t, e, \hat\theta) z_n    (5.34)
into J together with (5.26) gives

    J = \lim_{t\to\infty} \Big\{ \beta |\tilde\theta|^2_{\Gamma^{-1}} + \int_0^t \Big[ -2\beta \Big( -\sum_{k=1}^{n-1} c_k z_k^2 - r^{-1} z_n^2 + \sum_{k=1}^{n} \lambda_k z_k z_n \Big) + r v^2 - 2\beta v z_n + 2\beta(\beta-1) r^{-1} z_n^2 \Big] d\tau \Big\}

    = \lim_{t\to\infty} \Big\{ \beta |\tilde\theta|^2_{\Gamma^{-1}} - 2\beta \int_0^t \Big( -\sum_{k=1}^{n-1} c_k z_k^2 + z_n \bar{u} + \sum_{k=1}^{n} \lambda_k z_k z_n \Big) d\tau \Big\} + \int_0^\infty r v^2 d\tau

    = \lim_{t\to\infty} \Big\{ \beta |\tilde\theta|^2_{\Gamma^{-1}} \Big\} - 2\beta \int_0^\infty dV_n + \int_0^\infty r v^2 d\tau    (5.35)

    = 2\beta V_n(z(0), \tilde\theta(0)) - 2\beta \lim_{t\to\infty} V_s(z(t)) + \int_0^\infty r v^2 d\tau,
where V_s = \frac{1}{2} \sum_{k=1}^{n} z_k^2. It was already shown that the control law \bar{u} together with the
update law for \hat\theta stabilizes the closed-loop system, which means \lim_{t\to\infty} z(t) = 0 and
thus \lim_{t\to\infty} V_s(z(t)) = 0. Therefore the minimum of (5.35) is reached only if v = 0,
and thus the control \bar{u} = \bar{u}^* is optimal. For \beta = 2, substituting (5.29) gives

    J = \lim_{t\to\infty} 2|\tilde\theta(t)|^2_{\Gamma^{-1}}    (5.36)
    + 2 \int_0^\infty \Big[ \sum_{k=1}^{n} c_k z_k^2 + \sum_{k=1}^{n} c_k \Big( z_k - \frac{\lambda_k}{c_k} z_n \Big)^2 + \frac{\bar{u}^2}{2} \Big( c_n + \sum_{k=1}^{n} \frac{\lambda_k^2}{2 c_k} \Big)^{-1} \Big] d\tau

with a value function

    J^* = 2|\tilde\theta(0)|^2_{\Gamma^{-1}} + 2|z(0)|^2.    (5.37)
Therefore

    2 \int_0^\infty \Big[ \sum_{k=1}^{n} c_k z_k^2 + \frac{\bar{u}^2}{2} \Big( c_n + \sum_{k=1}^{n} \frac{\lambda_k^2}{2 c_k} \Big)^{-1} \Big] d\tau
    \le 2 \int_0^\infty \Big[ \sum_{k=1}^{n} c_k z_k^2 + \sum_{k=1}^{n} c_k \Big( z_k - \frac{\lambda_k}{c_k} z_n \Big)^2 + \frac{\bar{u}^2}{2} \Big( c_n + \sum_{k=1}^{n} \frac{\lambda_k^2}{2 c_k} \Big)^{-1} \Big] d\tau
    \le J^* = 2|\tilde\theta(0)|^2_{\Gamma^{-1}} + 2|z(0)|^2    (5.38)
which yields the inequality

    \int_0^\infty \Big[ \sum_{k=1}^{n} c_k z_k^2 + \frac{\bar{u}^2}{2} \Big( c_n + \sum_{k=1}^{n} \frac{\lambda_k^2}{2 c_k} \Big)^{-1} \Big] d\tau \le |\tilde\theta(0)|^2_{\Gamma^{-1}} + |z(0)|^2.    (5.39)

The dependency on z(0) can be eliminated by employing trajectory initialization: z(0) =
0. This results in the L_2 performance bound

    \int_0^\infty \Big[ \sum_{k=1}^{n} c_k z_k^2 + \frac{\bar{u}^2}{2} \Big( c_n + \sum_{k=1}^{n} \frac{\lambda_k^2}{2 c_k} \Big)^{-1} \Big] d\tau \le |\tilde\theta(0)|^2_{\Gamma^{-1}}.    (5.40)
5.3.3 Example: Inverse Optimal Adaptive Longitudinal Missile Control
The nonlinear adaptive controller developed in this chapter is inverse optimal with respect
to a cost functional that penalizes the tracking errors and the control effort. However,
nonlinear damping terms are used to achieve this inverse optimality. In [164] the numerical
sensitivity of the tuning functions adaptive backstepping method, with added nonlinear
damping terms to robustify the controller against unknown external disturbances,
is studied. Increasing the nonlinear damping gains improves tracking performance, but
leads to undesirable high frequency components in the control signal. This illustrates
that using nonlinear damping in the feedback controller must be done with care, since it
can easily result in high gain feedback.
The effect of the nonlinear damping terms used in the inverse optimal design will become
clearer in the example outlined in this section. The inverse optimal nonlinear adaptive
control approach is applied to the longitudinal missile control example of Sections 3.3.3
and 4.2.4. The generalized dynamics of the missile (3.38), (3.39) are repeated here for
convenience:
    \dot{x}_1 = x_2 + f_1(x_1) + g_1 u    (5.41)
    \dot{x}_2 = f_2(x_1) + g_2 u,    (5.42)

where f_1, f_2, g_1 and g_2 are unknown nonlinear functions containing the aerodynamic sta-
bility and control derivatives. For the control design the g_1 u-term has to be neglected
so that the system is of a lower triangular form. The control objective is to track the
reference signal y_r(t) with the state x_1. According to the inverse optimal adaptive back-
stepping procedure, the functions \chi_1(t), \chi_2(t, \theta) and \chi_r(t, \theta) have to be selected such
that

    \dot{\chi}_1 = \chi_2 + \theta_{f_1}^T \varphi_{f_1}(\chi_1)
    \dot{\chi}_2 = \chi_r(t, \theta_{f_1}, \theta_{f_2}) + \theta_{f_2}^T \varphi_{f_2}(\chi_1)    (5.43)
    y_r(t) = \chi_1(t).
Hence,

    \chi_1 = y_r
    \chi_2 = \dot{y}_r - \theta_{f_1}^T \varphi_{f_1}(\chi_1)    (5.44)
    \chi_r = \ddot{y}_r - \frac{\partial}{\partial\chi_1}\big( \theta_{f_1}^T \varphi_{f_1}(\chi_1) \big) \dot{y}_r - \theta_{f_2}^T \varphi_{f_2}(\chi_1).
Since \theta = (\theta_{f_1}, \theta_{f_2}) is unknown, it can be replaced
by its estimate (\hat\theta_{f_1}(t), \hat\theta_{f_2}(t)), which satisfies

    \dot{x}_{r1} = x_{r2} + \hat\theta_{f_1}^T \varphi_{f_1}(x_{r1})
    \dot{x}_{r2} = \chi_r(t, \hat\theta_{f_1}, \hat\theta_{f_2}) + \hat\theta_{f_2}^T \varphi_{f_2}(x_{r1}) + \frac{\partial\chi_2}{\partial\hat\theta_{f_1}} \dot{\hat\theta}_{f_1}    (5.45)
    y_r(t) = x_{r1}(t).
Defining the tracking error e = x - x_r, the system can be rewritten as

    \dot{e}_1 = e_2 + \theta_{f_1}^T \psi_{f_1} + \tilde\theta_{f_1}^T \varphi_{rf_1}    (5.46)
    \dot{e}_2 = \bar{u} + \theta_{f_2}^T \psi_{f_2} + \tilde\theta_{f_2}^T \varphi_{rf_2} - \frac{\partial\chi_2}{\partial\hat\theta_{f_1}} \dot{\hat\theta}_{f_1}    (5.47)

where \psi_{f_i} = \varphi_{f_i}(x_1) - \varphi_{f_i}(x_{r1}), \varphi_{rf_i} = \varphi_{f_i}(x_{r1}) and \bar{u} = g_2 u - u_r(t, \hat\theta). The error states are defined as

    z_1 = e_1,  z_2 = e_2 - \alpha_1    (5.48)

with the virtual control law selected as

    \alpha_1(e_1, \hat\theta_{f_1}) = -c_1 z_1 - \hat\theta_{f_1}^T \varphi_{f_1},    (5.49)

where c_1 > 0, and the update laws as

    \dot{\hat\theta}_{f_1} = \Gamma_{f_1} \Big( \varphi_{f_1} z_1 - \varphi_{f_1} \frac{\partial\alpha_1}{\partial e_1} z_2 \Big)    (5.50)
    \dot{\hat\theta}_{f_2} = \Gamma_{f_2} \varphi_{f_2} z_2    (5.51)
    \dot{\hat\theta}_{g_2} = P\big( \Gamma_{g_2} \varphi_{g_2} u z_2 \big).    (5.52)
Consider the CLF

    V_2 = \frac{1}{2} \Big( z_1^2 + z_2^2 + \tilde\theta_{f_1}^T \Gamma_{f_1}^{-1} \tilde\theta_{f_1} + \tilde\theta_{f_2}^T \Gamma_{f_2}^{-1} \tilde\theta_{f_2} + \tilde\theta_{g_2}^T \Gamma_{g_2}^{-1} \tilde\theta_{g_2} \Big).    (5.53)
Taking the derivative of V_2 along the solutions of (5.49)-(5.52) results in

    \dot{V}_2 = -c_1 z_1^2 + z_2 \Big[ z_1 + \hat\theta_{f_2}^T \varphi_{f_2} + \bar{u} - \frac{\partial\alpha_1}{\partial e_1} \big( e_2 + \hat\theta_{f_1}^T \varphi_{f_1} \big)
    - \Big( \frac{\partial\alpha_1}{\partial\hat\theta_{f_1}} + \frac{\partial\chi_2}{\partial\hat\theta_{f_1}} \Big) \Gamma_{f_1} \Big( \varphi_{f_1} z_1 - \varphi_{f_1} \frac{\partial\alpha_1}{\partial e_1} z_2 \Big) \Big].    (5.54)
Instead of canceling all indefinite terms, scaling nonlinear damping terms are introduced
as

    \Phi_1 = 1 - \Big( \frac{\partial\alpha_1}{\partial\hat\theta_{f_1}} + \frac{\partial\chi_2}{\partial\hat\theta_{f_1}} \Big) \Gamma_{f_1} \varphi_{f_1} + \bar\Phi_1    (5.55)
    \Phi_2 = \Big( \frac{\partial\alpha_1}{\partial\hat\theta_{f_1}} + \frac{\partial\chi_2}{\partial\hat\theta_{f_1}} \Big) \Gamma_{f_1} \varphi_{f_1} \frac{\partial\alpha_1}{\partial e_1} + \bar\Phi_2,    (5.56)

where \bar\Phi_1 and \bar\Phi_2 are smooth functions satisfying

    \hat\theta_{f_2}^T \varphi_{f_2} - \frac{\partial\alpha_1}{\partial e_1} e_2 - \frac{\partial\alpha_1}{\partial e_1} \hat\theta_{f_1}^T \varphi_{f_1} = \bar\Phi_1 z_1 + \bar\Phi_2 z_2.    (5.57)
This renders (5.54) equal to

    \dot{V}_2 = -c_1 z_1^2 + z_2 \bar{u} + z_1 \Phi_1 z_2 + z_2 \Phi_2 z_2.    (5.58)

Finally, substituting the control law

    \bar{u} = -\Big( c_2 + \frac{\Phi_1^2}{2 c_1} + \frac{\Phi_2^2}{2 c_2} \Big) z_2,  c_2 > 0,    (5.59)
gives

    \dot{V}_2 = -\frac{1}{2} c_1 z_1^2 - \frac{1}{2} c_2 z_2^2 - \frac{c_1}{2} \Big( z_1 - \frac{\Phi_1}{c_1} z_2 \Big)^2 - \frac{c_2}{2} \Big( z_2 - \frac{\Phi_2}{c_2} z_2 \Big)^2.    (5.60)
By Theorem 5.3 the inverse optimal tracking control problem is solved. An integral term
with gain k_1 > 0 can be added to the outer loop design to compensate for the neglected
control effectiveness term, as was done with the tuning functions autopilot design of Sec-
tion 4.2.4.
The resulting inverse optimal closed-loop system is implemented in the MATLAB/
Simulink environment to evaluate the performance and the numerical sensitivity. The
gains are selected as c_1 = 18, k_1 = c_2 = 10, \Gamma_{f_1} = \Gamma_{f_2} = 10I, \Gamma_{g_2} = 0.01. The simu-
lation is again performed with a third order fixed-step solver with a sample time of 0.01 s.
The control signal is fed through a low pass filter to remove high frequency components
that crash the solver. The controller is very sensitive to variations in the control gain c_1.
The response of the system for a simulation at Mach 2.2 with onboard model data for
Mach 2.0 can be found in Figure 5.1. Tracking performance is excellent; there is not
even a bad transient at the start of the first doublet, as was the case with the tuning func-
tions design of Section 4.2.4. However, some high frequency components are visible in
the control signal at 5, 10, 15, 20 and 25 seconds, despite the use of the low pass filter.
This aggressive behavior is further illustrated in Figure 5.2, where the parameter estima-
tion errors are plotted. There is hardly any adaptation, since the controller already forces
the tracking errors rapidly to zero. In fact, turning adaptation off does not influence the
tracking performance.
The control law of the inverse optimal design contains the large nonlinear damping terms
\Phi_1^2/(2c_1) and \Phi_2^2/(2c_2). Especially the first term can grow very large and vary rapidly in
size, as it contains the derivatives of the virtual control law, as is illustrated in Figure 5.3.
The control law is numerically very sensitive due to the fast nonlinear growth resulting
from these terms. It is not possible to reduce \Phi_1^2/(2c_1), since \Phi_1 is also dependent on c_1.
For other control applications where the derivatives of the intermediate control law are
much smaller, such as attitude control problems, the design approach may be beneficial,
because the nonlinear growth will be more restricted.
5.4 Conclusions
In this chapter inverse optimal control theory is used to modify the last step of the tuning
functions adaptive backstepping approach of Chapter 4. The goal is to introduce a cost
functional to simplify the closed-loop performance tuning of the adaptive controller and
to exploit the inherent robustness properties of optimal controllers. However, nonlinear
damping terms were utilized to achieve the inverse optimality, resulting in high gain
feedback terms in the design. The numerical sensitivity due to the high gain feedback
terms makes the inverse optimal approach less suitable than the adaptive designs of the
previous chapter for the complex flight control design problems considered in this thesis.
Furthermore, the complexity of the cost functional associated with the inverse optimal
design does not make performance tuning any easier.
[Plots omitted: angle of attack (deg), pitch rate (deg/s) and control deflection (deg) versus time (s), 0-30 s.]
Figure 5.1: Numerical simulations at Mach 2.2 of the longitudinal missile model using an inverse
optimal adaptive backstepping control law with uncertainty in the onboard model.
[Plots omitted: the parameter estimation errors \tilde\theta_{f_1}, \tilde\theta_{f_2} and \tilde\theta_{g_2} versus time (s), 0-30 s.]
Figure 5.2: The parameter estimation errors for the inverse optimal adaptive backstepping design.
The aggressive control law prevents the update laws from any serious adaptation.
[Plots omitted: \Phi_1^2/(2c_1), the error state z_2 and r^{-1} versus time (s), 0-30 s.]
Figure 5.3: The size and variations of the nonlinear damping terms and the error state z_2 during
the missile simulation.
Chapter 6
Comparison of Integrated and
Modular Adaptive Flight Control
The constrained adaptive backstepping approach of Chapter 4 is applied to the design
of a flight control system for a simplified, nonlinear over-actuated fighter aircraft model
valid at two flight conditions. It is demonstrated that the extension of the adaptive con-
trol method to multi-input multi-output systems is straightforward. A comparison with
a more traditional modular adaptive controller that employs a least squares identifier is
made to illustrate the advantages and disadvantages of an integrated adaptive design.
Furthermore, the interactions between several control allocation algorithms and the on-
line model identification for simulations with actuator failures are studied. The control
design for this simplified aircraft model will provide valuable insights before attempt-
ing the more complex flight control design for the high-fidelity F-16 dynamic model of
Chapter 2.
6.1 Introduction
In this chapter a nonlinear adaptive backstepping based reconfigurable flight control sys-
tem is designed for a simplified aircraft model, before attempting the more complex
F-16 model of Chapter 2. As a study case the control design problem for a nonlinear
over-actuated fighter aircraft model is selected. The key simplifications made here are
constant velocity and no lift or drag effects of the control surfaces. Furthermore, aerody-
namic data is only available for two flight conditions.
Since the aircraft model considered in this chapter is over-actuated, some form of control
allocation has to be applied to distribute the desired control moments over the actuators.
However, a characteristic of the adaptive backstepping designs as discussed in Chapter 4
is that the Lyapunov-based identifiers of the method only yield pseudo-estimates of the
unknown parameters, since the estimation is performed to satisfy a total system stability
criterion rather than to optimize the error in estimation. As a result the parameter esti-
mates are not guaranteed to converge to their true values over time and it is not clear what
effect this will have on the control allocation. Therefore, as an interesting side study, the
combination of constrained adaptive backstepping with two common types of control al-
location methods with different weightings will also be examined.
Furthermore, the integrated adaptive backstepping flight controller will be compared with
a more traditional modular adaptive design which makes use of a separate least-squares
identifier. This type of modular adaptive controller is referred to as an estimation-based
design in the literature. An estimation-based adaptive control design does not suffer from
the restriction of a Lyapunov update law, since it achieves modularity of controller and
identifier: any stabilizing controller can be combined with any identifier. Especially a
least-squares based identifier is of interest, since this type of identifier possesses excel-
lent convergence properties and guaranteed parameter convergence to constant values.
In [131, 132, 188] an adaptive NDI design with recursive least-squares identifier is used
for the design of a reconfigurable flight control system for a fly-by-wire Boeing 747.
However, theoretical stability and convergence results for the closed-loop system are
not provided, since the least-squares identifier, like all traditional identifiers, is not fast
enough to capture the potential faster-than-linear growth of nonlinear systems. Hence,
the certainty equivalence principle does not hold and an alternative solution will have to
be found.
In [119, 120] a robust backstepping controller is introduced which achieves input-to-state
stability (ISS) with respect to the parameter estimation errors and the derivative of the
parameter estimate. Nonlinear state filters are used to compensate for the time varying
nature of the parameter estimation errors so that standard gradient or least-squares iden-
tifiers can be applied. The resulting identifier module guarantees boundedness of the
parameter estimation errors. The modular nonlinear adaptive flight controller will be de-
signed using this approach in combination with the different control allocation methods
so that a comparison can be made.
This chapter starts with a discussion on the problem of applying classical estimation-
based adaptive control designs to uncertain nonlinear systems. After that, the theory
behind modular adaptive backstepping with a least-squares identifier is explained. In the
second part of the chapter the aircraft model is introduced and the integrated and modular
adaptive backstepping flight control designs are constructed. The concept of control allo-
cation is explained and three common types of algorithms are introduced in both design
frameworks. Finally, the aircraft model with the adaptive flight controllers is evaluated
in numerical simulations where several types of actuator lockup failure scenarios are
performed.
6.2 Modular Adaptive Backstepping
One of the goals in this chapter is to compare a reconfigurable flight controller based on
the constrained adaptive backstepping technique with one based on a more traditional
modular adaptive design where the controller and identifier are separate modules. How-
ever, the latter adaptive design method fails to achieve any global stability results for
systems whose nonlinearities are not linearly bounded. In this section a robust backstep-
ping design with least-squares identifier is developed with strong provable stability and
convergence properties.
6.2.1 Problem Statement
Before the modular adaptive backstepping approach is derived, the problem of applying
traditional estimation-based adaptive control designs to nonlinear systems is illustrated
in the following simple example.
Example 6.1
Consider the scalar nonlinear system

    \dot{x} = u + \theta x^2,    (6.1)

where \theta is an unknown constant parameter. A stabilizing certainty equivalence con-
troller is given by

    u = -x - \hat\theta x^2,    (6.2)

where \hat\theta is the parameter estimate of \theta. The parameter estimation error is defined as
\tilde\theta = \theta - \hat\theta. The update law

    \dot{\hat\theta} = x^3    (6.3)

renders the derivative of the control Lyapunov function V = \frac{1}{2} x^2 + \frac{1}{2} \tilde\theta^2 negative
semi-definite, i.e.

    \dot{V} = -x^2.    (6.4)
An alternative solution to this adaptive control problem is to employ a standard identi-
fier to provide the estimate for the certainty equivalence controller (6.2). However, in
general, the signal \dot{x} is not available for measurement and thus (6.1) cannot be solved
for unknown \theta. This problem is solved by filtering both sides of (6.1) by \frac{1}{s+1}:

    \frac{s}{s+1} x = \frac{1}{s+1} u + \theta \frac{1}{s+1} x^2.    (6.5)

Introducing the filters

    \dot{x}_f = -x_f + x^2    (6.6)
    \dot{u}_f = -u_f + u + x = -u_f - \hat\theta x^2    (6.7)

makes it possible to rewrite (6.5) as

    x(t) = \theta x_f(t) + u_f(t).    (6.8)
Since \theta is unknown its estimate \hat\theta has to be used. The corresponding predicted value
of x is

    \hat{x}(t) = \hat\theta(t) x_f(t) + u_f(t),    (6.9)

and the prediction error e is defined as

    e = x - \hat{x} = \tilde\theta x_f.    (6.10)

To achieve the minimum of e^2, a parameter update law for \hat\theta has to be defined. A
standard normalized gradient update law is selected:

    \dot{\hat\theta} = \frac{x_f}{1 + x_f^T x_f} e.    (6.11)

Substituting (6.10) and \dot{\tilde\theta} = -\dot{\hat\theta} results in

    \dot{\tilde\theta} = -\frac{x_f^2}{1 + x_f^T x_f} \tilde\theta.    (6.12)
Hence, the parameter estimation error converges to zero. However, since this is a lin-
ear differential equation the error cannot converge faster than exponentially. Consider
the most favorable case where

    \tilde\theta(t) = e^{-t} \tilde\theta(0).    (6.13)

The closed-loop system with controller (6.2) is

    \dot{x} = -x + \tilde\theta x^2.    (6.14)

Substitution of (6.13) into (6.14) yields the equation

    \dot{x} = -x + x^2 e^{-t} \tilde\theta(0),    (6.15)

whose explicit solution is

    x(t) = \frac{2 x(0)}{x(0)\tilde\theta(0) e^{-t} + \big( 2 - x(0)\tilde\theta(0) \big) e^{t}}.    (6.16)

If x(0)\tilde\theta(0) > 2 the solution escapes to infinity in finite time, that is

    x(t) \to \infty  as  t \to \frac{1}{2} \ln \frac{x(0)\tilde\theta(0)}{x(0)\tilde\theta(0) - 2}.    (6.17)
This is illustrated in Figure 6.1, where the response of the system (6.1) with both the
Lyapunov- and the estimation-based adaptive control design is plotted. The identifier
of the estimation-based design is not fast enough to cope with the potential faster-than-
linear growth of nonlinear systems and the state escapes to infinity, resulting in a simulation
crash.
[Plots omitted: state x and input u versus time (s), 0-5 s, for the estimation-based and Lyapunov-based designs.]
Figure 6.1: State x and control effort u of the Lyapunov- and estimation-based adaptive controllers
for initial values x(0) = 2 and \hat\theta(0) = 0. The real value of \theta is 2. The normalized gradient based
identifier of the estimation-based controller is not fast enough to cope with the nonlinear growth.
The above simple example illustrates the notion that to achieve stability either a faster
identifier is needed, such as the adaptive backstepping designs of Chapter 4, or a robust
controller that can deal with disturbances such as large transient parameter estimation
errors resulting from a slower identifier.
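The finite escape time (6.17) can be checked directly against the explicit solution (6.16). The sketch below reuses the initial values of Figure 6.1 (x(0) = 2, \tilde\theta(0) = 2, so x(0)\tilde\theta(0) = 4 > 2); it evaluates the closed-form solution rather than re-running the simulation:

```python
import math

x0, tte0 = 2.0, 2.0                  # x(0) and theta_tilde(0); product 4 > 2
prod = x0 * tte0
t_star = 0.5 * math.log(prod / (prod - 2.0))   # predicted escape time (6.17)

def x(t):
    # explicit solution (6.16) of xdot = -x + x^2 exp(-t) theta_tilde(0)
    return 2.0 * x0 / (prod * math.exp(-t) + (2.0 - prod) * math.exp(t))

assert abs(x(0.0) - x0) < 1e-12      # matches the initial condition
assert x(t_star - 0.01) > 50.0       # the state blows up approaching t_star
```

Here t_star = (1/2) ln 2, roughly 0.35 s, which is consistent with the very early simulation crash visible in Figure 6.1.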
6.2.2 Input-to-state Stable Backstepping
In this section a robust backstepping controller which is input-to-state stable (ISS) with
respect to the parameter estimation error is constructed. In other words, the states of the
closed-loop system remain bounded when the parameter estimation error is bounded, and
when the parameter estimation error converges to zero the closed-loop system states will
also converge to zero. A formal definition of input-to-state stability is given in Appendix
B.2.
The ISS backstepping design procedure is largely identical to the static feedback design
part of the command filtered adaptive backstepping approach as given by Theorem 4.2.
The only difference is that the virtual and real control laws are augmented with additional
nonlinear damping terms, i.e.

    \alpha_i = \frac{1}{\hat\theta_{g_i}^T \varphi_{g_i}} \Big( -c_i z_i - s_i z_i - \hat\theta_{g_{i-1}}^T \varphi_{g_{i-1}} z_{i-1} - \hat\theta_{f_i}^T \varphi_{f_i} + \dot{x}_{i,r} \Big),  i = 1, ..., n
    u_0 = \alpha_n,    (6.18)

where s_i, i = 1, ..., n are nonlinear damping terms defined as

    s_i = \kappa_{1i} \varphi_{f_i}^T \varphi_{f_i} + \kappa_{2i} \varphi_{g_i}^T \varphi_{g_i} x_{i+1}^2,  i = 1, ..., n,    (6.19)

with \kappa_{1i}, \kappa_{2i} > 0 and u \equiv x_{n+1} for the ease of notation. Note that when compared to
the complex nonlinear damping terms used in the inverse optimal design of Chapter
5, the size of the above damping terms is much easier to control. Consider again the
general system (4.66) and the control Lyapunov function V = \frac{1}{2} \sum_{i=1}^{n} z_i^2. Applying the
approach of Theorem 4.2, excluding the update laws but including the nonlinear damping
terms s_i defined above, reduces the derivative of V to
    \dot{V} = \sum_{i=1}^{n} \Big[ -(c_i + s_i) z_i^2 + \tilde\theta_{f_i}^T \varphi_{f_i} z_i + \tilde\theta_{g_i}^T \varphi_{g_i} x_{i+1} z_i \Big]
    = \sum_{i=1}^{n} \Big[ -c_i z_i^2 - \kappa_{1i} \Big( \varphi_{f_i} z_i - \frac{\tilde\theta_{f_i}}{2\kappa_{1i}} \Big)^T \Big( \varphi_{f_i} z_i - \frac{\tilde\theta_{f_i}}{2\kappa_{1i}} \Big) + \frac{1}{4\kappa_{1i}} \tilde\theta_{f_i}^T \tilde\theta_{f_i}
    - \kappa_{2i} \Big( \varphi_{g_i} x_{i+1} z_i - \frac{\tilde\theta_{g_i}}{2\kappa_{2i}} \Big)^T \Big( \varphi_{g_i} x_{i+1} z_i - \frac{\tilde\theta_{g_i}}{2\kappa_{2i}} \Big) + \frac{1}{4\kappa_{2i}} \tilde\theta_{g_i}^T \tilde\theta_{g_i} \Big]
    \le \sum_{i=1}^{n} \Big[ -c_i z_i^2 + \frac{1}{4\kappa_{1i}} \tilde\theta_{f_i}^T \tilde\theta_{f_i} + \frac{1}{4\kappa_{2i}} \tilde\theta_{g_i}^T \tilde\theta_{g_i} \Big].

If the parameter estimation errors \tilde\theta are bounded, \dot{V} is negative outside a compact set,
which demonstrates that the modified tracking errors z_i are decreasing outside the com-
pact set and are hence bounded. The size of the bounds is determined by the damping
gains \kappa. Furthermore, if the parameter estimation errors are converging to zero, then the
modified tracking errors will also converge to zero. From an input-output point of view,
the nonlinear damping terms render the closed-loop system input-to-state stable with re-
spect to the parameter estimation errors.
spect to the parameter estimation errors. The values of
0
=
_
A
0
F(x, u)
T
F(x, u)P
(
0
+x) h(x, u),
0
R
n
(6.21)
T
=
_
A
0
F(x, u)
T
F(x, u)P
T
+F(x, u)
T
, R
pn
, (6.22)
where > 0 and A
0
is an arbitrary constant matrix such that
PA
0
+A
T
0
P = I, P = P
T
> 0. (6.23)
The estimation error vector is dened as
= x +
0
T
, R
n
, (6.24)
along with
= x +
0
T
, R
n
. (6.25)
Then is governed by
=
_
A
0
F(x, u)
T
F(x, u)P
_
, (6.26)
which is exponentially decaying. The least-squares update law for
and the covariance
update are dened as
=
1 +trace (
T
)
(6.27)
=
T
1 +trace (
T
)
, (0) = (0)
T
> 0, (6.28)
where \nu \ge 0 is the normalization coefficient. The properties of the least-squares identi-
fier are given by the following Lemma from [118].
Lemma 6.1. Let the maximal interval of existence of solutions of (6.20), (6.21)-(6.22)
with (6.27)-(6.28) be [0, t_f). Then for \nu \ge 0, the following identifier properties hold:
1. \tilde\theta \in L_\infty[0, t_f)
2. \hat\varepsilon / \sqrt{1 + \nu \, trace(\Omega^T \Gamma \Omega)} \in L_2[0, t_f) \cap L_\infty[0, t_f)
3. \dot{\hat\theta} \in L_2[0, t_f) \cap L_\infty[0, t_f)
Proof: Along the solutions of (6.22) the following holds:

    \frac{d}{dt} \big( \Omega P \Omega^T \big) = \Omega \big( P A_0 + A_0^T P \big) \Omega^T - 2\lambda \Omega P F^T F P \Omega^T + \Omega P F^T + F P \Omega^T
    = -\Omega \Omega^T - 2\lambda \Big( F P \Omega^T - \frac{1}{2\lambda} I_p \Big)^T \Big( F P \Omega^T - \frac{1}{2\lambda} I_p \Big) + \frac{1}{2\lambda} I_p.    (6.29)

Taking the Frobenius norm results in

    \frac{d}{dt} trace \big( \Omega P \Omega^T \big) = -\|\Omega\|_F^2 - 2\lambda \Big\| F P \Omega^T - \frac{1}{2\lambda} I_p \Big\|_F^2 + \frac{1}{2\lambda} trace\{I_p\}
    \le -\|\Omega\|_F^2 + \frac{p}{2\lambda}.    (6.30)

This proves that \Omega \in L_\infty[0, t_f). From (6.26) it follows that

    \frac{d}{dt} \big( |\varepsilon|_P^2 \big) \le -|\varepsilon|^2,    (6.31)

which implies that \varepsilon \in L_2[0, t_f) \cap L_\infty[0, t_f). Consider the function

    U = \frac{1}{2} |\tilde\theta|^2_{\Gamma(t)^{-1}},    (6.32)

which is positive definite because \Gamma(t)^{-1} is positive definite for each t. The derivative of
U after some manipulations satisfies

    \dot{U} \le -\frac{|\hat\varepsilon|^2}{1 + \nu \, trace\{\Omega^T \Gamma \Omega\}}.

The fact that \dot{U} is non-positive proves that \tilde\theta \in L_\infty[0, t_f). Integration of the above in-
equality yields

    \frac{\hat\varepsilon}{\sqrt{1 + \nu \, trace\{\Omega^T \Gamma \Omega\}}} \in L_2[0, t_f).

Since \Omega is bounded, \hat\varepsilon = \Omega^T \tilde\theta + \varepsilon together with the boundedness
of \tilde\theta and \varepsilon implies that \hat\varepsilon \in L_\infty[0, t_f), which in turn proves that
\dot{\hat\theta} = \Gamma \Omega \hat\varepsilon / (1 + \nu \, trace\{\Omega^T \Gamma \Omega\})
\in L_\infty[0, t_f). Finally, the square-integrability of \hat\varepsilon / \sqrt{1 + \nu \, trace\{\Omega^T \Gamma \Omega\}} and the boundedness of \Gamma
and \Omega prove that \dot{\hat\theta} \in L_2[0, t_f).
The robust backstepping controller of Section 6.2.2 allows the use of any identifier which
can independently guarantee that the parameter estimation errors and their derivatives are
bounded. The least-squares identifier with x-swapping filter as introduced in this section
has these properties. This concludes the discussion on the theory behind the modular
adaptive backstepping approach in which the controller and identifier are designed sepa-
rately.
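The swapping idea behind filters of the type (6.6)-(6.7) is easy to verify in simulation. The minimal sketch below (Euler integration on the scalar system of Example 6.1; the true \theta is used here only to keep the trajectory bounded, and this is the filtering trick alone, not the full least-squares identifier (6.27)-(6.28)) confirms that x - \theta x_f - u_f decays exponentially, so the static relation (6.8) becomes exact after a transient:

```python
# x-swapping demo on xdot = u + theta*x^2 with filters
#   xdot_f = -x_f + x^2   and   udot_f = -u_f + u + x.
# The swapping error eps = x - theta*x_f - u_f satisfies epsdot = -eps.
theta = 2.0
x, x_f, u_f = 1.0, 0.0, 0.0
dt = 1e-3
for _ in range(10000):                    # 10 s of Euler integration
    u = -x - theta * x ** 2               # any bounded control works here
    dx   = u + theta * x ** 2
    dx_f = -x_f + x ** 2                  # filter, cf. (6.6)
    du_f = -u_f + u + x                   # filter, cf. (6.7)
    x, x_f, u_f = x + dt * dx, x_f + dt * dx_f, u_f + dt * du_f
residual = x - (theta * x_f + u_f)        # decays like exp(-t)
assert abs(residual) < 1e-3
```

Once the relation x = \theta x_f + u_f holds, any standard gradient or least-squares identifier can regress \theta from measured signals only, which is exactly what the modular design exploits.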
6.3 Aircraft Model Description
Before the adaptive flight control designs are discussed, the aircraft dynamic model for
which the controllers are designed is introduced in this section. The simplified nonlinear
aircraft dynamic model has been obtained from [159]. The aircraft dynamic model (6.33)
somewhat resembles that of an F-18 model.
    \dot{\alpha} = q - p\beta + z_\alpha \alpha + (g_0/V)(\cos\phi \cos\theta - \cos\theta_0)
    \dot{\beta} = y_\beta \beta + p(\sin\alpha_0 + \alpha) - r \cos\alpha_0 + (g_0/V) \cos\theta \sin\phi
    \dot{\phi} = p + q \tan\theta \sin\phi + r \tan\theta \cos\phi
    \dot{\theta} = q \cos\phi - r \sin\phi
    \dot{p} = l_\beta \beta + l_q q + l_r r + (l_{\beta\alpha} \beta + l_{r\alpha} r)\alpha + l_p p - i_1 qr
        + l_{\delta el} \delta_{el} + l_{\delta er} \delta_{er} + l_{\delta al} \delta_{al} + l_{\delta ar} \delta_{ar} + l_{\delta r} \delta_r    (6.33)
    \dot{q} = m_\alpha \alpha + m_q q + i_2 pr - m_{\dot\alpha} p\beta + m_{\dot\alpha} (g_0/V)(\cos\phi \cos\theta - \cos\theta_0)
        + m_{\delta el} \delta_{el} + m_{\delta er} \delta_{er} + m_{\delta al} \delta_{al} + m_{\delta ar} \delta_{ar} + m_{\delta lef} \delta_{lef} + m_{\delta tef} \delta_{tef} + m_{\delta r} \delta_r
    \dot{r} = n_\beta \beta + n_r r + n_p p + n_{p\alpha} p\alpha - i_3 pq + n_q q
        + n_{\delta el} \delta_{el} + n_{\delta er} \delta_{er} + n_{\delta al} \delta_{al} + n_{\delta ar} \delta_{ar} + n_{\delta r} \delta_r
Aerodynamic data are available in Tables 6.1 and 6.2 for two trimmed flight conditions:
flight condition 1 at an altitude of 30000 ft and a Mach number of 0.7, and flight condi-
tion 2 at 40000 ft altitude and a Mach number of 0.6. The model has seven independent
control surfaces, i.e. left and right elevators, left and right ailerons, leading and trailing
edge flaps, and collective rudders. A layout of the aircraft and its control surfaces can
be seen in Figure 6.2. The main simplifications made in the dynamic model are constant
airspeed and no lift or drag effects on the control surfaces. The latter simplifications have
been made to get the system into a lower triangular form required for standard adaptive
backstepping and feedback linearization designs. The designs considered in this chapter
do not suffer from this shortcoming since command filters are used to generate the inter-
mediate control laws. The aircraft model includes second order actuator dynamics. The
magnitude, rate and bandwidth limits of the actuators are specified in Table 6.3.
Table 6.1: Aircraft model parameters for trim condition I, h = 30000 ft and M = 0.7.
l_\beta = 11.04, l_q = 0, l_r = 0.4164, l_{\beta\alpha} = 19.72, l_{r\alpha} = 4.709, l_p = 1.4096,
z_\alpha = 0.6257, y_\beta = 0.1244, m_\alpha = 5.432, m_{\dot\alpha} = 0.1258, m_q = 0.3373,
n_\beta = 2.558, n_r = 0.1122, n_p = 0.0328, n_{p\alpha} = 0.0026, n_q = 0,
l_{\delta el} = 6.3176, l_{\delta er} = 6.3176, l_{\delta al} = 7.9354, l_{\delta ar} = 7.9354, l_{\delta r} = 1.8930,
i_1 = 0.7966, i_2 = 0.9595, i_3 = 0.6914,
m_{\delta el} = 4.5176, m_{\delta er} = 4.5176, m_{\delta al} = 0.8368, m_{\delta ar} = 0.8368, m_{\delta lef} = 1.2320, m_{\delta tef} = 0.9893, m_{\delta r} = 0,
g_0 = 9.80665, n_{\delta el} = 0.2814, n_{\delta er} = 0.2814, n_{\delta al} = 0.0698, n_{\delta ar} = 0.0698, n_{\delta r} = 1.7422,
V = 212.14, \alpha_0 = 0.0681, \theta_0 = 0.0681
All stability and control derivatives introduced in (6.33) are considered to be unknown
Table 6.2: Aircraft model parameters for trim condition II, h = 40000 ft and M = 0.6.
l_\beta = 7.0104, l_q = 0, l_r = 0.3529, l_{\beta\alpha} = 16.4015, l_{r\alpha} = 1.0461, l_p = 0.7331,
z_\alpha = 0.2876, y_\beta = 0.0700, m_\alpha = 1.4592, m_{\dot\alpha} = 0.0177, m_q = 0.1286,
n_\beta = 1.3612, n_r = 0.0619, n_p = 0.0177, n_{p\alpha} = 0.0696, n_q = 0,
l_{\delta el} = 2.7203, l_{\delta er} = 2.7203, l_{\delta al} = 4.2438, l_{\delta ar} = 4.2438, l_{\delta r} = 0.8920,
i_1 = 0.7966, i_2 = 0.9595, i_3 = 0.6914,
m_{\delta el} = 1.9782, m_{\delta er} = 1.9782, m_{\delta al} = 0.3183, m_{\delta ar} = 0.3183, m_{\delta lef} = 0.4048, m_{\delta tef} = 0.3034, m_{\delta r} = 0,
g_0 = 9.80665, n_{\delta el} = 0.1262, n_{\delta er} = 0.1262, n_{\delta al} = 0.0963, n_{\delta ar} = 0.0963, n_{\delta r} = 0.8018,
V = 177.09, \alpha_0 = 0.1447, \theta_0 = 0.1447
Figure 6.2: The control surfaces of the fighter aircraft model. The control surfaces which will lock
in place during the various simulation scenarios are indicated.
and will be estimated online by the parameter estimation process of the adaptive control
laws. The system (6.33) is rewritten in a more suitable form for the control design as

    \dot{X}_1 = H_1(X_1, X_u) + \Phi_1(X_1, X_u)^T \Theta_1 + B_1(X_1, X_u) X_2
    \dot{X}_2 = H_2(X_1, X_2, X_u) + \Phi_2(X_1, X_2, X_u)^T \Theta_2 + B_2 U    (6.34)
    \dot{X}_u = H_u(X_1, X_2, X_u)

where X_1 = (\alpha, \beta, \phi)^T, X_2 = (p, q, r)^T, U = (\delta_{el}, \delta_{er}, \delta_{al}, \delta_{ar}, \delta_{lef}, \delta_{tef}, \delta_r)^T and the
uncontrolled state X_u = \theta. The known nonlinear aircraft dynamics are represented by
the vector functions H_1(X_1, X_u), H_2(X_1, X_2, X_u) and H_u(X_1, X_2, X_u) and the matrix
function B_1(X_1, X_u). The functions \Phi_1(X_1, X_u) and \Phi_2(X_1, X_2, X_u) are the regres-
sor matrices, while \Theta_1, \Theta_2 and B_2 are vectors and a matrix containing the unknown
6.4 FLIGHT CONTROL DESIGN 103
Table 6.3: Aircraft model actuator specications.
Surface
Deection Rate Bandwidth
Limit [deg] Limit [deg/s] [rad/s]
Horizontal Stabilizer [-24, 10.5] 40 50
Ailerons [-25, 45] 100 50
Leading Edge Flaps [-3, 33] 15 50
Trailing Edge Flaps [-8, 45] 18 50
Rudder [-30, 30] 82 50
parameters of the system, dened as
Θ₁ = (z, y)ᵀ
Θ₂ = (l, l_p, l_q, l_r, l, l_r, l_0, m, m_q, m, m_0, n, n_p, n_q, n_r, n_p, n_0)ᵀ

B₂ = [ l_el   l_er   l_al   l_ar   0      0      l_r
       m_el   m_er   m_al   m_ar   m_lef  m_tef  m_r
       n_el   n_er   n_al   n_ar   0      0      n_r ].
Note that the parameters l_0, m_0 and n_0 have been added to the parameter vector to compensate for additional trim moments caused by locked actuators.
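To make the structure of B₂ concrete, the sketch below assembles it from the trim-condition-I control derivatives of Table 6.1 (variable names are ours and the signs as printed in the reproduced table are taken at face value); the zero entries reflect that the leading and trailing edge flaps generate no rolling or yawing moment in this model:

```python
# Illustrative assembly of the 3x7 control effectiveness matrix B2 for
# U = (d_el, d_er, d_al, d_ar, d_lef, d_tef, d_r). Values are the
# trim-condition-I control derivatives reproduced in Table 6.1.
l_el, l_er, l_al, l_ar, l_r = 6.3176, 6.3176, 7.9354, 7.9354, 1.8930
m_el, m_er, m_al, m_ar, m_lef, m_tef, m_r = 4.5176, 4.5176, 0.8368, 0.8368, 1.2320, 0.9893, 0.0
n_el, n_er, n_al, n_ar, n_r = 0.2814, 0.2814, 0.0698, 0.0698, 1.7422

B2 = [
    [l_el, l_er, l_al, l_ar, 0.0,   0.0,   l_r],  # rolling moment row
    [m_el, m_er, m_al, m_ar, m_lef, m_tef, m_r],  # pitching moment row
    [n_el, n_er, n_al, n_ar, 0.0,   0.0,   n_r],  # yawing moment row
]

def moments(B, u):
    """Moment increment produced by a surface deflection vector u."""
    return [sum(b * ui for b, ui in zip(row, u)) for row in B]
```

A locked surface corresponds to freezing one column's input at a constant deflection, which is exactly the constant trim-moment effect that l_0, m_0 and n_0 absorb.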
6.4 Flight Control Design

Now that the system has been rewritten in a structured form, the actual control design methods can be discussed. The control objective is to track a smooth reference signal X₁,r with the state vector X₁. The reference X₁,r and its derivative Ẋ₁,r are generated by linear second order filters, which can also be used to enforce the desired transient response of the controllers. The static feedback loops of the integrated and modular adaptive controllers are designed identically for comparison purposes and are therefore derived first. After that, the dynamic part of both controllers is introduced and their closed-loop stability properties are discussed.

6.4.1 Feedback Control Design

The static feedback control design can be divided into two parts: an outer loop to control the aerodynamic angles and the roll angle using the angular rates, and an inner loop to control the angular rates using the control surfaces. The design procedure starts by defining the tracking errors as
Z₁ = X₁ − X₁,r   (6.35)

Z₂ = ( p − p_r, q − q_r, r − r_r )ᵀ = X₂ − X₂,r,   (6.36)
where X₂,r is the virtual control law to be defined.

Step 1: The Z₁-dynamics satisfy
Ż₁ = B₁ Z₂ + B₁ X₂,r + H₁ + Φ₁ᵀ Θ₁ − Ẋ₁,r.   (6.37)
To stabilize (6.37), a stabilizing function X⁰₂,r is defined as
X⁰₂,r = B₁⁻¹ ( −C₁ Z₁ − S₁ Z̄₁ − H₁ − Φ₁ᵀ Θ̂₁ + Ẋ₁,r ),   (6.38)
where Θ̂₁ is the estimate of Θ₁, C₁ is a positive definite gain matrix and

S₁ = κ₁ Φ₁ᵀ Φ₁.   (6.39)
The compensated tracking error Z̄₁ and the filter state χ₁ are yet to be defined. The stabilizing function (6.38) is now fed through second order low pass filters as defined in Appendix C to produce the virtual control law X₂,r and its derivative. These filters can also be used to enforce rate and magnitude limits on the signals. The magnitude and rate limits can be selected equal to the physical limits of the actual actuators or states of the aircraft. The effect that the use of these filters has on the tracking errors can be captured with the stable linear filter

χ̇₁ = −C₁ χ₁ + B₁ ( X₂,r − X⁰₂,r ).   (6.40)
The compensated tracking error Z̄₁ is defined as

Z̄₁ = Z₁ − χ₁.   (6.41)
This concludes the outer loop design.
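A scalar toy version of Step 1 may help illustrate how the constraint-effect filter (6.40) keeps the compensated error converging even when the command filter limit is active. Everything below is illustrative: one state channel, a perfect parameter estimate, the damping term S₁ omitted, and a plain clip standing in for the second order command filter.

```python
# Scalar sketch of the command-filtered outer loop (eqs. (6.38),(6.40),(6.41)):
# plant dx1/dt = theta*x1 + b1*x2 with a magnitude limit on the virtual control.
theta, theta_hat, b1, c1 = -1.0, -1.0, 1.0, 2.0
x1, chi1 = 0.0, 0.0        # state and constraint-effect filter state (6.40)
x1r, x1r_dot = 1.0, 0.0    # constant reference
limit = 0.5                # magnitude limit enforced by the command filter
dt = 0.001
for _ in range(5000):
    z1 = x1 - x1r
    x2r0 = (-c1 * z1 - theta_hat * x1 + x1r_dot) / b1   # stabilizing fn (6.38)
    x2r = max(-limit, min(limit, x2r0))                 # command filter (clip)
    chi1 += dt * (-c1 * chi1 + b1 * (x2r - x2r0))       # filter effect (6.40)
    x1 += dt * (theta * x1 + b1 * x2r)                  # ideal inner loop x2 = x2r
z1 = x1 - x1r
z1_bar = z1 - chi1                                       # compensated error (6.41)
```

The compensated error z1_bar decays like e^(−c₁t) despite the saturation, while the raw error z1 remains stuck near the limit — the effect of the constraint has been filtered out of the error used for adaptation.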
Step 2: The inner loop design starts with the Z₂-dynamics, which are given by

Ż₂ = B₂ U + H₂ + Φ₂ᵀ Θ₂ − Ẋ₂,r.   (6.42)
To stabilize (6.42), the stabilizing function M⁰_des is defined as

B̂₂ U⁰ = −C₂ Z₂ − S₂ Z̄₂ − B₁ᵀ Z̄₁ − H₂ − Φ₂ᵀ Θ̂₂ + Ẋ₂,r = M⁰_des,   (6.43)
where C₂ is a positive definite gain matrix, B̂₂ is the estimate of B₂ and

S₂ = κ₂ Φ₂ᵀ Φ₂ + Σ_{i=1}^{3} κ_{2i} U_i².   (6.44)
Note that the matrix B̂₂ is a 3 × 7 matrix. In Section 6.5 several control allocation algorithms are introduced to determine U⁰. The real control B̂₂ U = M_des is found by filtering B̂₂ U⁰. Finally, the stable linear filter

χ̇₂ = −C₂ χ₂ + M_des − M⁰_des   (6.45)
is defined. The derivative of the control Lyapunov function

V = ½ Z̄₁ᵀ Z̄₁ + ½ Z̄₂ᵀ Z̄₂   (6.46)
along the trajectories of the closed-loop system is reduced to

V̇ ≤ −Z̄₁ᵀ C₁ Z̄₁ − Z̄₂ᵀ C₂ Z̄₂ + (1/(4κ₁)) Θ̃₁ᵀ Θ̃₁ + (1/(4κ₂)) Θ̃₂ᵀ Θ̃₂ + Σ_{j=1}^{7} (1/(4κ_{2j})) B̃₂ⱼᵀ B̃₂ⱼ,
where B̃₂ⱼ represents the j-th column of the matrix B̃₂. From the above expression it can be deduced that the compensated tracking errors Z̄₁, Z̄₂ are globally uniformly bounded if the parameter estimation errors are bounded. The size of the bounds is determined by the damping gains κ. Consider next the augmented Lyapunov function

V_a = V + ½ [ Σ_{i=1}^{2} trace( Θ̃ᵢᵀ Γᵢ⁻¹ Θ̃ᵢ ) + Σ_{j=1}^{7} trace( B̃₂ⱼᵀ Γ_{B₂ⱼ}⁻¹ B̃₂ⱼ ) ],   (6.47)
where Γ = Γᵀ > 0 are the update gain matrices. Selecting the update laws
Θ̂̇₁ = Γ₁ Φ₁ Z̄₁
Θ̂̇₂ = Γ₂ Φ₂ Z̄₂   (6.48)
B̂̇₂ⱼ = Proj( Γ_{B₂ⱼ} Z̄₂ Uⱼ ),
where Uⱼ represents the j-th element of the control vector U, reduces the derivative of V_a along the trajectories of the closed-loop system to

V̇_a = −Z̄₁ᵀ ( C₁ + S₁ ) Z̄₁ − Z̄₂ᵀ ( C₂ + S₂ ) Z̄₂,
which is negative semi-definite. Hence, the modified tracking errors Z̄₁, Z̄₂ converge asymptotically to zero. Note that the nonlinear damping gains are not needed to guarantee stability of this integrated adaptive design. However, for the purpose of comparison the static feedback parts of both controllers are kept the same. Furthermore, the damping terms can be used to improve transient performance bounds of the integrated design as demonstrated in [118], although selecting them too large will result in high gain control and related numerical problems.
The update laws (6.48) are driven by the compensated tracking errors Z̄ᵢ. If the magnitude or rate limits of the command filters (selected equal to limits of the actuators or states) are reached, the real tracking errors Zᵢ may increase. However, the modified tracking errors Z̄ᵢ will still converge to zero, since the effect of these constraints has been filtered out. In this way unlearning of the update laws is prevented. Note that the update laws for B̂₂ include a projection operator to ensure that certain elements of the matrix do not change sign and full rank is maintained at all times. For most elements the sign is known based on physical principles. The update laws are also robustified against parameter drift with continuous dead-zones and e-modification. A scheme of the integrated adaptive control law can be found in Figure 6.3.
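The projection just mentioned can be sketched for a single element of B̂₂: if the element is known to be, say, positive and bounded, updates that would drive the estimate out of its admissible interval are simply discarded. The bounds below are illustrative:

```python
# Sketch of a sign/interval projection for one element of B2_hat: the raw
# update from (6.48) is applied only while the estimate stays inside known
# bounds, so the element can never change sign.
def proj_update(b_hat, raw_update, lower=0.1, upper=10.0):
    """Discard updates that would push b_hat outside [lower, upper]."""
    if (b_hat <= lower and raw_update < 0.0) or (b_hat >= upper and raw_update > 0.0):
        return 0.0
    return raw_update

b = 0.1
b += proj_update(b, -0.5)   # blocked: would allow a sign change
b += proj_update(b, +0.5)   # allowed: estimate moves to 0.6
```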
Figure 6.3: Integrated adaptive control framework, with pilot command prefilters, sensor processing, the backstepping control law (onboard model), command filters, a constraint effect estimator, online model identification and control allocation.
For the modular adaptive design, the system dynamics are written in the compact parametric form

Ẋ = H(X, U) + Fᵀ(X, U) Θ,   (6.49)

where X = (X₁ᵀ, X₂ᵀ, X_uᵀ)ᵀ represents the system states, H(X, U) are the known system dynamics, Θ = (Θ₁ᵀ, Θ₂ᵀ, B₂₁ᵀ, ..., B₂₇ᵀ)ᵀ is a vector containing the unknown constant parameters and F(X, U) the known regressor matrix. The x-swapping filter and prediction error are defined as
Ω̇₀ = ( A₀ − λ Fᵀ(X, U) F(X, U) P ) ( Ω₀ + X ) − H(X, U)   (6.50)
Ω̇ᵀ = ( A₀ − λ Fᵀ(X, U) F(X, U) P ) Ωᵀ + Fᵀ(X, U)   (6.51)
ε = X + Ω₀ − Ωᵀ Θ̂,   (6.52)
where λ > 0 and A₀ is an arbitrary constant matrix such that

P A₀ + A₀ᵀ P = −I,   P = Pᵀ > 0.   (6.53)
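A scalar sketch of the x-swapping filters (6.50)–(6.52) shows the key property: the prediction error ε is related to the parameter error through a stable filter, so with a perfect estimate it decays to zero. The plant, gains and signals below are all illustrative:

```python
import math

# Scalar x-swapping sketch: plant dx/dt = -x + f(t)*theta with constant theta.
# eps = x + Om0 - Om*theta_hat obeys d(eps)/dt = A*eps when theta_hat = theta,
# with A = a0 - lam*f^2*P stable, so eps -> 0.
theta, theta_hat = 2.0, 2.0
a0, P, lam = -1.0, 0.5, 1.0        # P*a0 + a0*P = -1 satisfies (6.53)
x, Om0, Om = 1.0, 0.0, 0.0
dt, t = 0.001, 0.0
for _ in range(10000):
    f = math.sin(t)
    A = a0 - lam * f * f * P
    x_dot = -x + f * theta          # plant, known part H = -x
    Om0_dot = A * (Om0 + x) + x     # filter (6.50) with -H = +x
    Om_dot = A * Om + f             # filter (6.51)
    x += dt * x_dot
    Om0 += dt * Om0_dot
    Om += dt * Om_dot
    t += dt
eps = x + Om0 - Om * theta_hat      # prediction error (6.52)
```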
The least-squares update law for Θ̂ and the covariance update are defined as

Θ̂̇ = Γ Ω ε / ( 1 + ν trace( Ωᵀ Γ Ω ) )   (6.54)

Γ̇ = β Γ − Γ Ω Ωᵀ Γ / ( 1 + ν trace( Ωᵀ Γ Ω ) ),   (6.55)
where ν ≥ 0 is the normalization coefficient and β ≥ 0 is a forgetting factor. By Lemma 6.1 the modular controller with x-swapping filters and least-squares update law achieves global asymptotic tracking of the modified tracking errors. Despite using a mild forgetting factor in (6.55), the covariance matrix can become small after a period of tracking, and hence reduces the ability of the identifier to adjust to abrupt changes in the system parameters. A possible solution to this problem can be found by resetting the covariance matrix when a sudden change is detected. After an abrupt change in the system parameters, the estimation error will be large. Therefore a good monitoring candidate is the ratio between the current estimation error and the mean estimation error over an interval, with a reset triggered when this ratio exceeds a user-selected threshold T_e (6.56).
The control allocation problem then consists of solving

B̂₂ U⁰ = M⁰_des,   (6.57)
where B̂₂ is a 3 × 7 matrix obtained from the identifiers. Without constraints on U⁰, the expression (6.57) has infinitely many solutions. In the presence of magnitude and rate constraints on U⁰, this equation has either an infinite number of solutions, a unique solution, or no solution at all. Two different control allocation methods will be discussed in this section, one based on the weighted pseudo-inverse and one based on quadratic programming. The control allocation methods applied in this section are quite basic; many more sophisticated methods exist. Overviews of the numerous control allocation techniques can be found in [17, 52, 76, 154].
6.5.1 Weighted Pseudo-inverse

A simple and computationally efficient solution to the control allocation problem is found by utilizing the weighted pseudo-inverse (WPI). Consider the following quadratic cost function

J = (U⁰)ᵀ W U⁰,   (6.58)
where W is a weighting matrix. The solution of (6.58) subject to (6.57) is given by

U⁰ = W⁻¹ B̂₂ᵀ ( B̂₂ W⁻¹ B̂₂ᵀ )⁻¹ M⁰_des.   (6.59)

The above equation provides a unique solution to (6.57), but it does not take any constraints on the control effectors into account. The WPI approach can therefore be interpreted as a very crude approach to control allocation. When W = I, the solution of (6.59) is referred to as the pseudo-inverse (PI) solution.
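A plain-Python sketch of (6.59), with a hypothetical 3 × 7 effectiveness matrix and a small Gaussian-elimination helper standing in for the 3 × 3 inverse:

```python
# Weighted pseudo-inverse allocation (6.59): U0 = W^-1 B^T (B W^-1 B^T)^-1 M.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve3(A, b):
    """Solve a 3x3 system by Gauss-Jordan elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def wpi_allocate(B, w_diag, M_des):
    """B: 3xn effectiveness matrix, w_diag: diagonal of W, M_des: 3-vector."""
    Winv_Bt = [[B[i][j] / w_diag[j] for i in range(3)] for j in range(len(w_diag))]
    BWBt = matmul(B, Winv_Bt)          # 3x3 matrix B W^-1 B^T
    lam = solve3(BWBt, M_des)          # (B W^-1 B^T)^-1 M0_des
    return [sum(row[i] * lam[i] for i in range(3)) for row in Winv_Bt]
```

By construction B U⁰ reproduces M⁰_des exactly, but nothing stops individual elements of U⁰ from exceeding the actuator limits — the crudeness noted above.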
6.5.2 Quadratic Programming

The main disadvantage of the WPI method is that it does not take magnitude and rate constraints on the control effectors into account. When online solving of an optimization problem is allowed, these constraints can be taken into account. Quadratic optimization problems, or quadratic programs, can be solved very efficiently and are therefore interesting for online applications. The quadratic programming (QP) solution will be feasible
when the desired moment vector is within the attainable moment set (AMS), and infeasible if it is outside.

Figure 6.4: Modular adaptive control framework.
In [181] two approaches to modify the QP solution are proposed to guarantee that the solution will always be feasible: direction preserving and sign preserving. The direction preserving method scales down the magnitude of the desired moment with a scaling factor such that it falls within the attainable moment set. The sign preserving method is very similar, but allows the scaling to be split amongst the three components of M⁰_des individually as λ_roll, λ_pitch and λ_yaw. The difference between the scaling methods is illustrated in Figure 6.5.
Figure 6.5: Illustration of two quadratic programming solutions: (a) Direction preserving method
(b) Sign preserving method [181].
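The difference between the two fixes can be sketched with a toy attainable moment set, here simplified to a per-axis box |M_i| ≤ m_max,i (the real AMS is a polytope, so this is only illustrative):

```python
# Toy comparison of the two QP feasibility fixes of [181] on a box AMS.
def direction_preserving(M, m_max):
    """One common scaling factor: the moment direction is preserved."""
    lam = min([1.0] + [mm / abs(m) for m, mm in zip(M, m_max) if m != 0.0])
    return [lam * m for m in M]

def sign_preserving(M, m_max):
    """Per-axis factors lambda_roll/pitch/yaw: only the signs are preserved."""
    return [m * min(1.0, mm / abs(m)) if m != 0.0 else 0.0
            for m, mm in zip(M, m_max)]
```

For M⁰_des = (2, 0.5, 0.1) and a unit box, direction preserving scales everything by 0.5 (halving the pitch and yaw demand), while sign preserving only scales the saturated roll axis — the more effective use of control authority mentioned below.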
The sign preserving control allocation method makes more effective use of the available control authority, and therefore this method is implemented in the flight control designs. The QP is formulated as [181]
min_{U⁰, Λ}   ½ xᵀ H x + c_Uᵀ x   (6.60)

s.t.   B̂₂ U⁰ − Λ M⁰_des = 0

( U_lbᵀ, 0, 0, 0 )ᵀ ≤ ( (U⁰)ᵀ, λ_roll, λ_pitch, λ_yaw )ᵀ ≤ ( U_ubᵀ, 1, 1, 1 )ᵀ,

where x = ( (U⁰)ᵀ, 1 − λ_roll, 1 − λ_pitch, 1 − λ_yaw )ᵀ,

Λ = [ λ_roll  0        0
      0       λ_pitch  0
      0       0        λ_yaw ],   H = [ Q_U  0
                                        0    Q_λ ].
The weighting matrices Q_U, Q_λ and c_U are user specified. The scaling factors are more heavily weighted than the control inputs to make sure that all the available control authority is used: Q_λ ≫ Q_U.
6.6 Numerical Simulation Results

The control designs are evaluated on their tracking performance and parameter estimation accuracy for several failure scenarios during two separate maneuvers of 60 seconds. The task given to the controllers is to track roll angle and angle of attack reference signals, while the sideslip angle is regulated to zero. The simulations are performed in MATLAB/Simulink© with a third order solver and 0.01 s sampling time. The controllers, identifiers and aircraft model are all written as MATLAB S-functions.
6.6.1 Tuning the Controllers

The gains of both controllers are selected as C₁ = I, C₂ = 2I and all damping terms are taken equal to 0.01. These gains were selected after a trial-and-error procedure in order to get an acceptable nominal tracking response. Note that Lyapunov stability theory only requires the control gains to be larger than zero, but it is natural to select the gains of the inner loop largest. The dynamics and limits of the outer loop command filters are selected equal to the actuator dynamics of the aircraft model. The inner loop command filters do not contain any limits on the virtual control signals.
With the tuning of the static feedback designs finished, the identifiers can be tuned. Again the theory for the integrated adaptive design only requires the update gains to be larger than zero. Selecting larger gains results in a more rapid parameter convergence at the cost of more control effort. However, the effect of the update gains on the transient performance of the closed-loop system is unclear, since the dynamic behavior of the tracking error driven update laws can be quite unexpected. As such, it turns out to be very time consuming to find a unique set of update gains of the Lyapunov-based identifier for a range of failure types and two different flight conditions. This is a clear disadvantage of the integrated adaptive design. All tracking error driven update laws are normalized; the update gains related to the symmetric coefficients are selected equal to 10 and the gains related to the asymmetric coefficients equal to 3. The constant related to the e-modification (see Section 4.2.3) is taken equal to 0.01 and the continuous dead-zone bounds are taken equal to 0.01 deg in the outer loop and 0.1 deg/s in the inner loop.
The tuning of the least-squares identifier is much more straightforward, since the gain scaling is more or less automated and the dynamic behavior is similar to the aircraft model. However, the selection of a proper resetting threshold may take some time. All diagonal elements of the update gain matrix are initialized at 10 and the resetting threshold is selected as T_e.
C^l_{Z_T} = C^l_Z(α, β) + (q̄c̄ / 2V_T) C^l_{Z_q}(α) + C^l_{Z_δe}(α, δ_e) δ_e + Ĉ^l_{Z_T}

and

C^l_{m_T} = C^l_m(α, β) + C^l_{Z_T} ( x_{cg,r} − x_{cg} ) + (q̄c̄ / 2V_T) C^l_{m_q}(α) + C^l_{m_δe}(α, δ_e) δ_e + Ĉ^l_{m_T},

where Ĉ^l_{Z_T} and Ĉ^l_{m_T} are the estimates of the modeling errors. The other force and moment coefficients of the low-fidelity model are similarly defined. All coefficient terms are again given in tabular form. Note that the higher order elevator deflection dependent terms are contained in the base terms C^l_Z and C^l_m.
The next step is to further specify the estimates of the modeling errors Ĉ^l_{Z_T} and Ĉ^l_{m_T} in such a way that they can account for all possible uncertainties. If the failure scenarios are limited to symmetric damage and/or control surface failures, the polynomial structure of the estimates can be selected identical to the known onboard model structure, i.e.

Ĉ^l_{Z_T} = Ĉ^l_Z(α, β) + (q̄c̄ / 2V_T) Ĉ^l_{Z_q}(α) + Ĉ^l_{Z_δe}(α, δ_e) δ_e   (7.1)
and

Ĉ^l_{m_T} = Ĉ^l_m(α, β) + (q̄c̄ / 2V_T) Ĉ^l_{m_q}(α) + Ĉ^l_{m_δe}(α, δ_e) δ_e,   (7.2)
where each polynomial coefficient term Ĉ^l_(·) is approximated by a B-spline network. The output y of a one-dimensional B-spline network with scalar input u is given by

y = Σ_{i=1}^{n} w_i F_i(u).   (7.3)

¹This formula is usually referred to as the Cox-de Boor recursion formula.
As an example, the input could be the angle of attack over an input space of 0 to 10 degrees and the output one of the coefficients of the polynomial approximators (7.1) and (7.2). Note that (7.3) can also be written in the standard notation used throughout this thesis as

y = Φ(u)ᵀ Θ,   (7.4)

where Φ(u) = (F₁(u), ..., F_n(u))ᵀ is the known regressor and Θ = (w₁, ..., w_n)ᵀ a vector of unknown constant parameters.
Figure 7.3: The output function y as a combination of third order B-splines and weights.
Two-dimensional B-spline Networks

Two-dimensional B-spline networks have two input nodes. The first hidden layer, as with the one-dimensional network, consists of nodes to which a basis function F is applied. This is shown in Figure 7.4 below. To the first input a group of n nodes is applied, and to the second input a group of m nodes is applied. The second hidden layer now consists of nodes which each have two inputs u₁ and u₂. For every combination of a node from one group and a node from the second group, a node exists. To each node of the second hidden layer, a function G is applied which is a multiplication of the two inputs multiplied by a weight w. Again the output node sums the results of all second hidden layer nodes:
y = Σ_{i=1}^{n} Σ_{j=1}^{m} w_{i+n(j−1)} F_{1i}(u₁) F_{2j}(u₂).   (7.5)
Figure 7.4: Two-dimensional network [44].
When the spline functions of the various nodes are properly spaced, any two-dimensional
function can be approximated. The extension to n-dimensional networks is evidently
straightforward.
B-spline Network Learning

Learning of B-spline networks can be done in several ways; the most common is to adapt after each sample:

Δw_i = e F_i(u),   (7.6)

where Δw_i is the adaptation of weight i, e the output error, F_i is B-spline function i and u is the input.
Given a certain input u, only a limited number of splines F_i(u) are nonzero. Therefore only a few weights are adapted after each sample, i.e. the adaptation is local. There are two practical methods for the network learning process available:
- offline learning, where a previously obtained set of data is available and the network learns from this environment. The complete data set can be presented at the same time, i.e. batch learning, but it is also possible to use only a part of the set at each time step for training, i.e. stochastic learning. The learning phase is separated from the simulation phase;
- online learning, when no data set is available to train the network, the network can be trained during a simulation. The network learns to include the new data points in the network. Since the learning phase takes place during the simulation phase, this is called online learning.
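A minimal sketch of (7.3) together with the per-sample rule (7.6), using triangular (second order) basis functions on a 2.5-wide grid. Basis shape, grid and target are illustrative, and a learning-rate factor gamma is added for stability ((7.6) corresponds to gamma = 1):

```python
# One-dimensional spline-network sketch: y = sum_i w_i F_i(u) as in (7.3),
# trained per sample with the local rule delta_w_i = gamma * e * F_i(u).
def basis(i, u, centers, h):
    """Triangular (hat) basis function with local support of width 2h."""
    return max(0.0, 1.0 - abs(u - centers[i]) / h)

centers = [0.0, 2.5, 5.0, 7.5, 10.0]   # 2.5-wide partitions of the input space
h = 2.5
w = [0.0] * len(centers)

def predict(u):
    return sum(wi * basis(i, u, centers, h) for i, wi in enumerate(w))

def train(u, target, gamma=0.5):
    e = target - predict(u)
    for i in range(len(w)):
        f = basis(i, u, centers, h)
        if f > 0.0:                     # only locally active weights adapt
            w[i] += gamma * e * f
    return e
```

Because each basis function is nonzero on only a small neighborhood, training at one operating point leaves the weights of distant partitions untouched — the memory property exploited below.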
Application of B-spline Networks

Based on the definition of the B-spline networks and the properties of B-splines, it can be concluded that B-spline networks have several characteristics that make them very suitable for online adaptive control:

- Because only a small number of B-spline basis functions is non-zero at any given time step, the weight updating scheme is local. This has the advantage that only a few update laws are used at the same time, resulting in a lower computational load. Another advantage is that the network retains information of all flight conditions, since the local adaptation does not interfere with points outside the closed neighborhood. This means the approximator has memory capabilities, and hence learns instead of simply adapting the weights.
- The spline outputs are always positive and normalized, which provides numerical stability.
7.2.3 Resulting Approximation Model

For the F-16 aerodynamic model error approximation, each of the coefficient terms in (7.1) and (7.2) is represented by a B-spline network. Third order B-spline basis functions are used and the grid spacing for each of the scheduling parameters α, β and δ_e is selected as 2.5 degrees. In earlier work this combination provided enough accuracy to estimate the model errors, even in the case of an aircraft model with sudden changes in the dynamic behavior [189]. Note, however, that [203] demonstrated that fewer partitions are needed to accurately identify the nominal aerodynamic F-16 model. Since sudden, unexpected changes in the model are considered in this work, more partitions are used. Note that using more partitions does not mean that more models are updated at a certain time step; this is determined by the approximation structures, the order of the B-spline functions and the order of the B-spline networks. The local behavior of the approximation process with B-spline networks is illustrated in one of the simulation scenarios in Section 8.6.
7.3 Trajectory Control Design

In this section, a nonlinear adaptive autopilot is designed for the inertial trajectory control of the six-degrees-of-freedom, high-fidelity F-16 aircraft model as introduced in Chapter 2. The control system is decomposed into four backstepping feedback loops, see Figure 7.5, constructed using a single control Lyapunov function. The aerodynamic force and moment functions of the aircraft model are assumed not to be exactly known during the control design phase and will be approximated online. B-spline networks are used to partition the flight envelope into multiple connecting regions in the manner that was discussed in the previous section. In each partition a locally valid linear-in-the-parameters nonlinear aircraft model is defined, of which the unknown parameters are adapted online by Lyapunov based update laws. These update laws take aircraft state and input constraints into account so that they do not corrupt the parameter estimation process. The performance of the proposed control system will be assessed in numerical simulations of several types of trajectories at different flight conditions. Simulations with a locked control surface and uncertainties in the aerodynamic forces and moments are also included.
The section is outlined as follows. First, a motivation for applying the proposed control approach to this problem is given. After that, the nonlinear dynamics of the aircraft model are written in a suitable form for the control design in Section 7.3.2. In Section 7.3.3 the adaptive control design is presented as decomposed in four feedback loops, after which the identification process with the B-spline neural networks is discussed in Section 7.3.4. Section 7.4 validates the performance of the control law using numerical simulations performed in MATLAB/Simulink©. Finally, a summary of the results and the conclusions are given in Section 7.5.
7.3.1 Motivation

In recent years the advancements in micro-electronics and precise navigation systems have led to an enormous rise of interest [43] in (partially) automated unmanned air vehicle (UAV) designs for a large variety of missions in both civil [160, 209] and military aviation [200]. Inertial trajectory control is essential for these UAVs, since they are usually required to follow predetermined paths through certain target points in the three-dimensional air space [29, 96, 97, 151, 171, 172, 184]. Other situations where trajectory control is desired include formation control, aerial refueling and autonomous landing maneuvers [68, 155, 156, 168, 182, 207]. This has led to a lot of literature dedicated to formation and flight path control for UAVs, but also for other types of (un)manned vehicles [80, 146].
Two different approaches can be distinguished in the design of these trajectory control systems. The most popular approach is to separate the guidance and control laws: a given reference trajectory is converted by the guidance laws to velocity and attitude commands for the actual flight controller, which in turn generates the actuator signals [155, 156, 172]. For example, in [172] it is assumed that a flight path angle control autopilot exists, and a guidance law is constructed that takes heading rate and velocity constraints of the vehicle into account. The same holds for the formation control schemes of [155, 156]. Usually the assumption is made that the autopilot response to heading and airspeed commands is first order in nature to simplify the design.
The other design approach is to integrate the guidance and control laws into one system to achieve better stability guarantees and improve performance. For instance, [96] utilizes an integrated guidance and control approach to trajectory tracking where the trimmed flight conditions along the reference trajectory are the command input to the tracking controllers. In [184] a combination of sliding mode control and adaptive control is used for flight path control of an F/A-18 model.

Figure 7.5: Four loop feedback design for flight path control.
In this section, a Lyapunov-based adaptive backstepping approach is used to design a flight path controller for a nonlinear, high-fidelity F-16 model in three-dimensional air space. It is assumed that the aerodynamic force and moment functions of the model are not known exactly and that they can change during flight due to structural damage or control surface failures. There is plenty of literature available on adaptive backstepping designs for the control of aircraft and missiles; see e.g. [58, 61, 76, 109, 176, 183]. However, most of these designs consider control of the aerodynamic angles or the angular rates. The design of a trajectory controller is much more complicated since the system to be controlled is of a higher relative degree. This presents difficulties for a standard adaptive backstepping design, since the derivatives of the intermediate control variables have to be calculated analytically in each design step, which leads to a rapid explosion of terms.

This phenomenon is the main motivation for the authors of [184] to select a sliding mode design for the outer feedback loops: it simplifies the design considerably. Another disadvantage of standard backstepping designs, and indeed most feedback linearizing designs, is that the contribution of the control surface deflections to the aerodynamic forces cannot be taken into account. For these reasons the constrained adaptive backstepping approach as explained in Section 4.3 is used in this chapter. Furthermore, to simplify the approximation of the unknown aerodynamic force and moment functions and to reduce computational load, the flight envelope is partitioned into multiple, connecting operating regions as discussed in the previous section.
7.3.2 Aircraft Model Description

The aircraft model used in this study is that of an F-16 fighter aircraft with geometry and aerodynamic data as reported in Section 2.4. The control inputs of the model are the elevator, ailerons, rudder and leading edge flaps, as well as the throttle setting. The leading edge flaps are controlled separately and will not be used for the control design. The control surface actuators are modeled as first-order low pass filters with rate and magnitude limits as given in Table 2.1. In Section 2.2.4 a representation of the equations of motion for the F-16 model was given. These differential equations can be rewritten in the following form, which is more suitable for the trajectory control problem:
Ẋ₀ = [ V_T cosχ cosγ
       V_T sinχ cosγ    (7.7)
       −V_T sinγ ]
Ẋ₁ = [ (1/m) ( −D + F_T cosα cosβ ) − g sinγ
       (1/(m V_T cosγ)) ( L sinμ + Y cosμ + F_T ( sinα sinμ − cosα sinβ cosμ ) )    (7.8)
       (1/(m V_T)) ( L cosμ − Y sinμ + F_T ( cosα sinβ sinμ + sinα cosμ ) ) − (g/V_T) cosγ ]
Ẋ₂ = [ cosα/cosβ    0   sinα/cosβ
       −cosα tanβ   1   −sinα tanβ
       sinα         0   −cosα ] X₃
   + [ 0   sinγ + cosγ sinμ tanβ   cosμ tanβ
       0   −cosγ sinμ / cosβ       −cosμ / cosβ    (7.9)
       0   cosγ cosμ               −sinμ ] Ẋ₁
Ẋ₃ = [ (c₁ r + c₂ p) q + c₃ L̄ + c₄ ( N̄ + q H_eng )
       c₅ p r − c₆ ( p² − r² ) + c₇ ( M̄ − r H_eng )    (7.10)
       (c₈ p − c₂ r) q + c₄ L̄ + c₉ ( N̄ + q H_eng ) ]
where X₀ = (x, y, z)ᵀ, X₁ = (V_T, χ, γ)ᵀ, X₂ = (μ, α, β)ᵀ, X₃ = (p, q, r)ᵀ and the definition of the inertia terms cᵢ, i = 1, ..., 9 is given in Section 2.2.4.
These twelve differential equations are sufficient to describe the complete motion of the rigid-body aircraft. Other states such as the attitude angles φ, θ and ψ are functions of X = (X₀ᵀ, X₁ᵀ, X₂ᵀ, X₃ᵀ)ᵀ.
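As a quick sanity check of the kinematic part (7.7) — assuming the z-down sign convention used above — integrating with constant V_T, χ and γ must produce a straight line; the values below are illustrative:

```python
import math

# Euler integration of (7.7) with constant airspeed, heading and flight path
# angle: the position advances along a straight line, and a positive gamma
# makes z decrease (z is taken positive down).
V_T, chi, gamma = 100.0, math.radians(30.0), math.radians(5.0)
x = y = z = 0.0
dt, T = 0.01, 10.0
for _ in range(int(T / dt)):
    x += dt * V_T * math.cos(chi) * math.cos(gamma)
    y += dt * V_T * math.sin(chi) * math.cos(gamma)
    z += dt * (-V_T * math.sin(gamma))
```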
7.3.3 Adaptive Control Design

In this section the aim is to develop an adaptive guidance and control system that asymptotically tracks a smooth, prescribed inertial trajectory Y_ref = (x_ref, y_ref, z_ref)ᵀ with position states X₀ = (x, y, z)ᵀ. Furthermore, the sideslip angle β has to be kept at zero to enable coordinated turning. It is assumed that the reference trajectory Y_ref = (x_ref, y_ref, z_ref)ᵀ satisfies
ẋ_ref = V_ref cos χ_ref
ẏ_ref = V_ref sin χ_ref   (7.11)
with V_ref, χ_ref, z_ref and their derivatives continuous and bounded. It is also assumed that the components of the total aerodynamic forces L, Y, D and moments L̄, M̄, N̄ are uncertain, so these will have to be estimated. The available controls are the control surface deflections (δ_e, δ_a, δ_r)ᵀ and the engine thrust F_T. The Lyapunov-based control design based on Section 4.3 is done in four feedback loops, starting at the outer loop.
Inertial Position Control

The outer loop feedback control design is initiated by transforming the tracking control problem into a regulation problem:

Z₀ = ( z₀₁, z₀₂, z₀₃ )ᵀ = [ cosχ   sinχ   0
                            −sinχ  cosχ   0    ( X₀ − Y_ref ),   (7.12)
                            0      0      1 ]
where a new rotating reference frame for control, that is fixed to the aircraft and aligned with the horizontal component of the velocity vector, is introduced [168, 172]. Differentiating (7.12) gives

Ż₀ = [ V_T cosγ + χ̇ z₀₂ − V_ref cos( χ − χ_ref )
       −χ̇ z₀₁ + V_ref sin( χ − χ_ref )              (7.13)
       −ż_ref − V_T sinγ ].
The idea is to design virtual control laws for the flight path angles χ, γ and the total airspeed V_T to control the position errors Z₀. However, from (7.13) it is clear that it is not yet possible to do something about z₀₂ in this design step. The virtual control laws are selected as

V_des,0 = V_ref cos( χ − χ_ref ) − c₀₁ z₀₁   (7.14)

γ_des,0 = arcsin( ( c₀₃ z₀₃ − ż_ref ) / V_T ),   −π/2 < γ < π/2,   (7.15)
where c₀₁, c₀₃ > 0 are the control gains. The actual, implementable virtual control signals V_des and γ_des as well as their derivatives V̇_des and γ̇_des are obtained by filtering the virtual signals with a second order low pass filter with optional magnitude and rate limits in place. As an example, the state space representation of the filter for V_des,0 is given by
[ q̇₁(t) ]   [ q₂                                                              ]
[ q̇₂(t) ] = [ 2 ζ_V ω_V { S_R( (ω_V² / (2 ζ_V ω_V)) ( S_M( V_des,0 ) − q₁ ) ) − q₂ } ]   (7.16)

[ V_des  ]   [ q₁ ]
[ V̇_des ] = [ q₂ ]   (7.17)
where S_M(·) and S_R(·) represent the magnitude and rate limit functions as given in Appendix C. These functions enforce the state V_T to stay within the defined limits. Note that if the signal V_des,0 is bounded, then V_des and V̇_des are also bounded and continuous signals. When the magnitude and rate limits are not in effect, the transfer function from V_des,0 to V_des is given by
V_des(s) / V_des,0(s) = ω_v² / ( s² + 2 ζ_v ω_v s + ω_v² )   (7.18)
and the error V_des,0 − V_des can be made arbitrarily small by selecting the bandwidth of the filter sufficiently large.
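A discrete-time sketch of (7.16)-(7.17), with S_M and S_R realized as simple clips (the actual limit functions are given in Appendix C); gains, limits and the command are illustrative:

```python
# Euler-integrated command filter: q1 tracks S_M(V_des,0) with the rate of
# q1 limited by S_R, critically damped for zeta = 1.
zeta, omega = 1.0, 5.0
mag_lim, rate_lim = 100.0, 20.0

def S_M(x):  # magnitude limit
    return max(-mag_lim, min(mag_lim, x))

def S_R(x):  # rate limit
    return max(-rate_lim, min(rate_lim, x))

q1, q2 = 0.0, 0.0          # V_des and its derivative
dt, target = 0.001, 150.0  # raw command above the magnitude limit
for _ in range(10000):
    q1_dot = q2
    q2_dot = 2.0 * zeta * omega * (
        S_R(omega**2 / (2.0 * zeta * omega) * (S_M(target) - q1)) - q2)
    q1 += dt * q1_dot
    q2 += dt * q2_dot
```

With a raw command above the magnitude limit, the filtered V_des ramps at the rate limit and settles on S_M(V_des,0) without overshoot for ζ_V = 1.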
Flight Path Angle and Airspeed Control

In the second loop the objective is to steer V_T and γ to their desired values as determined in the previous section. Furthermore, the heading angle χ has to track the reference signal χ_ref, while the tracking error z₀₂ is also regulated to zero. The available (virtual) controls in this step are the angles α and μ as well as the thrust F_T. Note that the aerodynamic forces also depend on the control surface deflections U = (δ_e, δ_a, δ_r)ᵀ. These forces are quite small, since the surfaces are primarily moment generators. However, since the current control surface deflections will be available from the command filters that are used in the inner design loop, they can be taken into account in the control design. The relevant equations of motion for this design step are given by
Ẋ₁ = A₁ F₁(X, U) + B₁ G₁(X, U, X₂) + H₁(X),   (7.19)

where
A₁ = (1/(m V_T)) [ 0   0           −V_T
                   0   cosμ/cosγ   0
                   0   −sinμ       0 ],

H₁ = [ −g sinγ
       −( F_T / (m V_T cosγ) ) cosα sinβ cosμ
       ( F_T / (m V_T) ) cosα sinβ sinμ − (g / V_T) cosγ ],

B₁ = (1/(m V_T)) [ V_T cosα cosβ   0        0
                   0               1/cosγ   0
                   0               0        1 ],
are known (matrix) functions, and

F₁ = [ L(X, U)         G₁ = [ F_T
       Y(X, U)                ( L(X, U) + F_T sinα ) sinμ
       D(X, U) ],             ( L(X, U) + F_T sinα ) cosμ ]
are functions containing the uncertain aerodynamic forces. Note that the intermediate control variables α and μ do not appear affinely in the X₁-subsystem, which complicates the design somewhat. Since the control objective in this step is to track the smooth reference signal X₁^des = ( V_des, χ_ref, γ_des )ᵀ with X₁ = ( V_T, χ, γ )ᵀ, the tracking errors are defined as

Z₁ = ( z₁₁, z₁₂, z₁₃ )ᵀ = X₁ − X₁^des.   (7.20)
To regulate Z_1 and z_02 to zero simultaneously, the following equation needs to be satisfied [98]

    B_1 Ĝ_1(X, U, X_2) = [ −c_11 z_11                          ]
                         [ −V_ref (c_02 z_02 + c_12 sin z_12)  ]  − A_1 F̂_1 − H_1 + Ẋ_1^des,    (7.21)
                         [ −c_13 z_13                          ]
where F̂_1 is the estimate of F_1 and where

    Ĝ_1(X, U, X_2) = [ F_T                                          ]
                     [ ( L̂_0(X, U) + L̂_α(X, U) α + F_T sin α ) sin μ ]    (7.22)
                     [ ( L̂_0(X, U) + L̂_α(X, U) α + F_T sin α ) cos μ ]
134 F-16 TRAJECTORY CONTROL DESIGN 7.3
with the estimate of the lift force decomposed as L̂(X, U) = L̂_0(X, U) + L̂_α(X, U) α.
The estimate of the aerodynamic forces F̂_1 is defined as

    F̂_1 = Θ_F1^T(X, U) θ̂_F1    (7.23)

where Θ_F1 is the known regressor function and θ̂_F1 is the estimate of a vector of unknown constant parameters. It is assumed that there exists a vector θ_F1* such that

    F_1 = Θ_F1^T(X, U) θ_F1*.    (7.24)

This means the estimation error can be defined as θ̃_F1 = θ̂_F1 − θ_F1*. The next step is to determine the desired values α^des and μ^des. The right-hand side of (7.21) is entirely known, so the left-hand side can be determined and the desired values extracted.
Introducing the coordinate transformation

    x ≜ ( L̂_0(X, U) + L̂_α(X, U) α + F_T sin α ) cos μ    (7.25)
    y ≜ ( L̂_0(X, U) + L̂_α(X, U) α + F_T sin α ) sin μ,   (7.26)

which can be seen as a transformation from the two-dimensional polar coordinates ( L̂_0(X, U) + L̂_α(X, U) α + F_T sin α ) and μ to the cartesian coordinates x and y. The desired signals (F_T^des,0, y_0, x_0)^T are given by
    B_1 [ F_T^des,0 ]   [ −c_11 z_11                          ]
        [ y_0       ] = [ −V_ref (c_02 z_02 + c_12 sin z_12)  ]  − A_1 F̂_1 − H_1 + Ẋ_1^des,    (7.27)
        [ x_0       ]   [ −c_13 z_13                          ]
thus the virtual control signals are equal to

    L̂_α(X, U) α^des,0 = √(x_0² + y_0²) − L̂_0(X, U) − F_T sin α    (7.28)
and

    μ^des,0 = ⎧ arctan(y_0 / x_0)        if x_0 > 0
              ⎪ arctan(y_0 / x_0) + π    if x_0 < 0 and y_0 ≥ 0
              ⎨ arctan(y_0 / x_0) − π    if x_0 < 0 and y_0 < 0    (7.29)
              ⎪ π/2                      if x_0 = 0 and y_0 > 0
              ⎩ −π/2                     if x_0 = 0 and y_0 < 0.
Filtering the virtual signals to account for magnitude, rate and bandwidth limits will give the implementable virtual controls α^des, μ^des and their derivatives. The sideslip angle command was already defined as β_ref = 0, thus X_2^des = (μ^des, α^des, 0)^T and its derivative are completely defined.
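The five branches of (7.29) are precisely the four-quadrant arctangent of (y_0, x_0). A small sketch with hypothetical values makes the equivalence explicit:

```python
import math

def mu_des_piecewise(x0, y0):
    """Eq. (7.29): desired bank angle from the cartesian components."""
    if x0 > 0:
        return math.atan(y0 / x0)
    if x0 < 0 and y0 >= 0:
        return math.atan(y0 / x0) + math.pi
    if x0 < 0 and y0 < 0:
        return math.atan(y0 / x0) - math.pi
    if x0 == 0 and y0 > 0:
        return math.pi / 2
    if x0 == 0 and y0 < 0:
        return -math.pi / 2
    raise ValueError("x0 = y0 = 0: bank angle undefined")

# The piecewise definition coincides with the four-quadrant arctangent:
for x0, y0 in [(1.0, 0.5), (-1.0, 0.5), (-1.0, -0.5), (0.0, 2.0), (0.0, -2.0)]:
    assert math.isclose(mu_des_piecewise(x0, y0), math.atan2(y0, x0))
```

This also shows concretely why x_0 = y_0 = 0 is problematic: the bank angle is simply undefined there, which is the singularity discussed below.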
However, care must be taken since the desired virtual control μ^des,0 is undefined when both x_0 and y_0 are equal to zero, making the system momentarily uncontrollable. This sign change of ( L̂_0(X, U) + L̂_α(X, U) α + F_T sin α ) can only occur at very low or negative angles of attack. This situation was not encountered during the maneuvers simulated in this study. To solve the problem altogether, the designer could measure the rate of change of x_0 and y_0 and devise a rule base to change sign when these terms approach zero. Furthermore, problems will also occur at high angles of attack, when the control effectiveness term L̂_α(X, U) approaches zero.

Aerodynamic Angle Control

In the third step, the aerodynamic angles X_2 = (μ, α, β)^T are steered to their desired values, with the body-axis angular rates acting as virtual controls. The relevant dynamics are given by

    Ẋ_2 = A_2 F_1(X, U) + B_2(X) X_3 + H_2(X)    (7.30)
(X) (7.30)
where
A
2
=
1
mV
T
_
_
(tan + tan sin) tan cos 0
1
cos
0 0
0 1 0
_
_
B
2
=
_
_
cos
cos
0
sin
cos
cos tan 1 sin tan
sin 0 cos
_
_
H
2
=
1
mV
T
_
_
T
0
g
V
T
tan cos cos
F
T
sin
cos
+
g
V
T
cos cos
F
T
cos cos +
g
V
T
cos sin
_
_
,
are known (matrix) functions with
T
0
= F
T
(sintan sin + sin tan cos sin tan cos ) .
The tracking errors are defined as

    Z_2 = X_2 − X_2^des.    (7.31)

To stabilize the Z_2-subsystem a virtual feedback control X_3^des,0 is defined as

    B_2 X_3^des,0 = −C_2 Z_2 − A_2 F̂_1 − H_2 + Ẋ_2^des,    C_2 = C_2^T > 0.    (7.32)

The implementable virtual control, i.e. the reference signal for the inner loop, X_3^des and its derivative are again obtained by filtering the virtual control signal X_3^des,0 with a second order command limiting filter.
Angular Rate Control
In the fourth step, an inner feedback loop for the control of the body-axis angular rates X_3 = (p, q, r)^T is constructed. The control inputs for the inner loop are the control surface deflections U = (δ_e, δ_a, δ_r)^T. The dynamics of the angular rates can be written as

    Ẋ_3 = A_3 ( F_3(X, U) + B_3(X) U ) + H_3(X)    (7.33)

where

    A_3 = [ c_3   0   c_4 ]        H_3 = [ (c_1 r + c_2 p) q        ]
          [ 0    c_7   0  ]              [ c_5 p r − c_6 (p² − r²)  ]
          [ c_4   0   c_9 ]              [ (c_8 p − c_2 r) q        ]
are known (matrix) functions, and

    F_3 = [ L̄_0(X, U) ]      B_3 = [ L̄_δe   L̄_δa   L̄_δr ]
          [ M_0(X, U) ]            [ M_δe   M_δa   M_δr ]
          [ N_0(X, U) ]            [ N_δe   N_δa   N_δr ]

are unknown (matrix) functions that have to be approximated. Note that for a more convenient presentation the aerodynamic moments have been decomposed, e.g.

    M(X, U) = M_0(X, U) + M_δe δ_e + M_δa δ_a + M_δr δ_r    (7.34)

where the higher order control surface dependencies are still contained in M_0(X, U). The control objective in this feedback loop is to track the reference signal X_3^des = (p_ref, q_ref, r_ref)^T with the angular rates X_3. Defining the tracking errors
    Z_3 = X_3 − X_3^des    (7.35)

and taking the derivatives results in

    Ż_3 = A_3 ( F_3(X, U) + B_3(X) U ) + H_3(X) − Ẋ_3^des.    (7.36)

To stabilize the system of (7.36) the desired control U^0 is defined as

    A_3 B̂_3 U^0 = −C_3 Z_3 − A_3 F̂_3 − H_3 + Ẋ_3^des,    C_3 = C_3^T > 0    (7.37)
where F̂_3 and B̂_3 are the estimates of the unknown nonlinear aerodynamic moment functions F_3 and B_3, respectively. The F-16 model is not over-actuated, i.e. the B_3 matrix is square. If this were not the case, some form of control allocation would be required, for instance the QP method used in the flight control problem discussed in the previous chapter. The estimates are defined as

    F̂_3 = Θ_F3^T(X, U) θ̂_F3
    B̂_3j = Θ_B3j^T(X) θ̂_B3j    for j = 1, ..., 3    (7.38)
where Θ_F3 and Θ_B3j are the known regressor functions and θ̂_F3, θ̂_B3j are estimates of vectors with unknown constant parameters; note that B̂_3j represents the jth column of B̂_3. It is assumed that there exist vectors θ_F3*, θ_B3j* such that

    F_3 = Θ_F3^T(X, U) θ_F3*
    B_3j = Θ_B3j^T(X) θ_B3j*.    (7.39)

This means the estimation errors can be defined as θ̃_F3 = θ̂_F3 − θ_F3* and θ̃_B3j = θ̂_B3j − θ_B3j*. The actual control signal U is found by applying a command filter similar to (7.16) to U^0.
Update Laws and Stability Properties
The static part of the trajectory control design has been completed. In this section the stability properties of the control law are discussed and dynamic update laws for the unknown parameters are derived. Define the control Lyapunov function

    V = ½ [ Z_0^T Z_0 + z_11² + (2 − 2 cos z_12)/c_02 + z_13² + Z_2^T Z_2 + Z_3^T Z_3 ]
        + ½ [ trace( θ̃_F1^T Γ_F1⁻¹ θ̃_F1 ) + trace( θ̃_F3^T Γ_F3⁻¹ θ̃_F3 ) ]
        + ½ Σ_{j=1}^{3} trace( θ̃_B3j^T Γ_B3j⁻¹ θ̃_B3j ),    (7.40)

with the update gain matrices Γ_F1 = Γ_F1^T > 0, Γ_F3 = Γ_F3^T > 0 and Γ_B3j = Γ_B3j^T > 0.
Taking the derivative of V along the trajectories of the closed-loop system gives

    V̇ = −c_01 z_01² + z_02 z_01 + ( V_T − V^des,0 ) z_01 + z_02 ( −z_01 + V_ref sin z_12 )
         − c_03 z_03² − V_T ( sin γ − sin γ^des,0 ) z_03 − c_11 z_11²
         − V_ref ( sin z_12 z_02 + (c_12/c_02) sin² z_12 ) − c_13 z_13²
         + Z_1^T [ −A_1 Θ_F1^T θ̃_F1 + B_1 ( G_1(X_2) − Ĝ_1(X_2) ) ] + Z_1^T B_1 [ Ĝ_1(X_2) − Ĝ_1(X_2^des,0) ]
         − Z_2^T C_2 Z_2 − Z_2^T A_2 Θ_F1^T θ̃_F1 + Z_2^T B_2 ( X_3 − X_3^des,0 )    (7.41)
         − Z_3^T C_3 Z_3 − Z_3^T A_3 [ Θ_F3^T θ̃_F3 + Σ_{j=1}^{3} Θ_B3j^T θ̃_B3j U_j ] + Z_3^T A_3 B̂_3 ( U − U^0 )
         + trace( θ̃_F1^T Γ_F1⁻¹ θ̂̇_F1 ) + trace( θ̃_F3^T Γ_F3⁻¹ θ̂̇_F3 ) + Σ_{j=1}^{3} trace( θ̃_B3j^T Γ_B3j⁻¹ θ̂̇_B3j ).
To cancel the terms depending on the estimation errors in (7.41), the update laws are selected as

    θ̂̇_F1 = Γ_F1 Θ_F1 ( A_1a^T Z_1 + A_2^T Z_2 )
    θ̂̇_F3 = Γ_F3 Θ_F3 A_3^T Z_3
    θ̂̇_B3j = Proj( Γ_B3j Θ_B3j A_3^T Z_3 U_j ),    (7.42)

with A_1a Θ_F1^T θ̃_F1 = A_1 Θ_F1^T θ̃_F1 − B_1 ( G_1(X_2) − Ĝ_1(X_2) ). The update laws for B̂_3 include a projection operator Proj(·) to ensure that certain elements of the matrix do not change sign and that full rank is maintained at all times. For most elements the sign is known based on physical principles. Substituting the update laws in (7.41) leads to
    V̇ = −c_01 z_01² − c_03 z_03² − c_11 z_11² − V_ref (c_12/c_02) sin² z_12 − c_13 z_13²
         − Z_2^T C_2 Z_2 − Z_3^T C_3 Z_3
         + ( V_T − V^des,0 ) z_01 − V_T ( sin γ − sin γ^des,0 ) z_03    (7.43)
         + Z_1^T B_1 ( Ĝ_1(X_2) − Ĝ_1(X_2^des,0) ) + Z_2^T B_2 ( X_3 − X_3^des,0 ) + Z_3^T A_3 B̂_3 ( U − U^0 ),
where the first line is already negative semi-definite, which is needed to prove stability in the sense of Lyapunov. Since the Lyapunov function V (7.40) is not radially unbounded, only local asymptotic stability can be guaranteed [98]. This is sufficient for the domain of operation considered here if the control law is properly initialized to ensure |z_12| < π/2. However, the derivative of V also includes indefinite error terms due to the tracking errors and due to the command filters used in the design. As mentioned before, when no rate or magnitude limits are in effect, the difference between the input and output of the filters can be made small by selecting the bandwidth of the filters sufficiently larger than the bandwidth of the input signal. Also, when no limits are in effect and the small, bounded difference between the input and output of the command filters is neglected, the feedback controller designed in the previous sections will drive the tracking errors to zero.

Naturally, when control or state limits are in effect, the system will in general not track the reference signal asymptotically. A problem with adaptive control is that this can lead to corruption of the parameter estimation process, since the tracking errors that are driving this process are no longer caused by the function approximation errors alone. To solve this problem a modified definition of the tracking errors, from which the effect of the magnitude and rate limits has been removed, is used in the update laws. Define the modified tracking errors
    Z̄_1 = Z_1 − ξ_1
    Z̄_2 = Z_2 − ξ_2    (7.44)
    Z̄_3 = Z_3 − ξ_3
with the linear filters

    ξ̇_1 = −C_1 ξ_1 + B_1 ( Ĝ_1(X, U, X_2) − Ĝ_1(X, U, X_2^des,0) )
    ξ̇_2 = −C_2 ξ_2 + B_2 ( X_3 − X_3^des,0 )    (7.45)
    ξ̇_3 = −C_3 ξ_3 + A_3 B̂_3 ( U − U^0 ).
The modified errors will still converge to zero when the constraints are in effect. The resulting update laws are given by

    θ̂̇_F1 = Γ_F1 Θ_F1 ( A_1a^T Z̄_1 + A_2^T Z̄_2 )
    θ̂̇_F3 = Γ_F3 Θ_F3 A_3^T Z̄_3    (7.46)
    θ̂̇_B3j = Proj( Γ_B3j Θ_B3j A_3^T Z̄_3 U_j ).
To better illustrate the structure of the control system a scheme of the adaptive inner loop
controller is shown in Figure 7.6.
Figure 7.6: Inner loop control system (block diagram: the command limiting filter, the angular rate dynamics Ẋ_3 = A_3(F_3 + B_3 U) + H_3, the static control law for U^0 and the parameter update laws).
7.3.4 Model Identification
To simplify the approximation of the unknown aerodynamic force and moment functions, and thereby reduce the computational load, the flight envelope is partitioned into multiple, connecting operating regions with a locally valid linear-in-the-parameters model defined in each region. B-spline networks are used to interpolate between the local nonlinear models to ensure smooth transitions. In the previous section parameter update laws (7.46) were defined for the unknown aerodynamic functions, which were written as

    F̂_1 = Θ_F1^T(X, U) θ̂_F1
    F̂_3 = Θ_F3^T(X, U) θ̂_F3    (7.47)
    B̂_3j = Θ_B3j^T(X) θ̂_B3j.
These unknown vectors and known regressor vectors can now be further defined. The total force approximations are defined as

    L̂ = L_0 + q̄S [ ΔĈ_L0(α, β) + ΔĈ_Lα(α, δ_e) α + ΔĈ_Lq(α) q c̄/(2V_T) + ΔĈ_Lδe(α, δ_e) δ_e ]
    Ŷ = Y_0 + q̄S [ ΔĈ_Y0(α, β) + ΔĈ_Yp(α) p b/(2V_T) + ΔĈ_Yr(α) r b/(2V_T)
                   + ΔĈ_Yδa(α, β) δ_a + ΔĈ_Yδr(α, β) δ_r ]    (7.48)
    D̂ = D_0 + q̄S [ ΔĈ_D0(α, β) + ΔĈ_Dq(α) q c̄/(2V_T) + ΔĈ_Dδe(α, δ_e) δ_e ],
and the moment approximations

    L̄̂ = L̄_0 + q̄Sb [ ΔĈ_l0(α, β) + ΔĈ_lp(α) p b/(2V_T) + ΔĈ_lr(α) r b/(2V_T)
                     + ΔĈ_lδa(α, β) δ_a + ΔĈ_lδr(α, β) δ_r ]
    M̂ = M_0 + q̄Sc̄ [ ΔĈ_m0(α, β) + ΔĈ_mq(α) q c̄/(2V_T) + ΔĈ_mδe(α, δ_e) δ_e ]    (7.49)
    N̂ = N_0 + q̄Sb [ ΔĈ_n0(α, β) + ΔĈ_np(α) p b/(2V_T) + ΔĈ_nr(α) r b/(2V_T)
                     + ΔĈ_nδa(α, β) δ_a + ΔĈ_nδr(α, β) δ_r ],
where L_0, Y_0, D_0, L̄_0, M_0 and N_0 represent the known, nominal values of the aerodynamic forces and moments. Note that the approximation polynomial structures are somewhat different from the two example structures in Section 7.2. Estimating the aerodynamic forces in a wind-axes reference frame is more natural for this control problem. Furthermore, an additional term can be found in the lift force approximation for the lift curve, since this term is needed in the flight path control loop.

These approximations do not account for asymmetric failures that will introduce coupling of the longitudinal and lateral motions of the aircraft. If a failure occurs which introduces a parameter dependency that is not included in the approximation, stability can no longer be guaranteed. However, the failure scenarios considered in the next section are limited to symmetric structural damage or actuator failure scenarios. Therefore, these uncertainties can all be modeled with the above approximation structures. The
total nonlinear function approximations are divided into simpler linear-in-the-parameters nonlinear coefficient approximations, e.g.

    ΔĈ_L0(α, β) = Θ_CL0^T(α, β) θ̂_CL0,    (7.50)

where the unknown parameter vector θ̂_CL0 contains the B-spline network weights, i.e. the unknown parameters, and Θ_CL0 is a regressor vector containing the B-spline basis functions. All other coefficient estimates are defined in similar fashion. In this case a two-dimensional network is used with input nodes for α and β. Different scheduling parameters can be selected for each unknown coefficient. Third order B-splines spaced 2.5 degrees apart and three scheduling variables, α, β and δ_e, have been used to partition the flight envelope. With these approximators a sufficient model accuracy was obtained.
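The local support of the B-spline basis is what keeps the number of non-zero weight updates small: at any operating point only degree+1 basis functions per scheduling variable are active. A generic Cox–de Boor evaluation sketch (illustrative only; knot spacing of 2.5 deg follows the text, but the exact knot placement and range are our assumptions):

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """Cox-de Boor recursion: values of all B-spline basis functions
    of the given degree at scalar x (uniform knot vector assumed)."""
    # degree-0 indicator functions on the half-open knot intervals
    B = np.array([1.0 if knots[i] <= x < knots[i + 1] else 0.0
                  for i in range(len(knots) - 1)])
    for d in range(1, degree + 1):
        Bn = np.zeros(len(knots) - d - 1)
        for i in range(len(Bn)):
            left = right = 0.0
            if knots[i + d] > knots[i]:
                left = (x - knots[i]) / (knots[i + d] - knots[i]) * B[i]
            if knots[i + d + 1] > knots[i + 1]:
                right = ((knots[i + d + 1] - x)
                         / (knots[i + d + 1] - knots[i + 1]) * B[i + 1])
            Bn[i] = left + right
        B = Bn
    return B

# Hypothetical alpha grid: knots every 2.5 deg, quadratic splines
knots = np.arange(-10.0, 50.1, 2.5)
phi = bspline_basis(12.3, knots, degree=2)
```

Inside the knot range the basis functions form a partition of unity, and at most degree+1 of them are non-zero, so a regressor built from them updates only a handful of the roughly 17000 weights at each time step.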
Following the notation of (7.50) the estimates of the aerodynamic forces and moments can be written as

    L̂ = Θ_L^T(α, β, δ_e) θ̂_L,      L̄̂ = Θ_L̄^T(α, β, δ_e) θ̂_L̄,
    Ŷ = Θ_Y^T(α, β, δ_e) θ̂_Y,      M̂ = Θ_M^T(α, β, δ_e) θ̂_M,    (7.51)
    D̂ = Θ_D^T(α, β, δ_e) θ̂_D,      N̂ = Θ_N^T(α, β, δ_e) θ̂_N,

which is a notation equivalent to the one used in (7.47). Therefore, the update laws (7.46) can be used to adapt the B-spline network weights. However, the update laws have not yet been robustified against non-parametric uncertainties. In this study dead zones and e-modification are used to protect the estimated parameters from drifting.
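A minimal sketch of how a dead zone and e-modification can robustify such an update step (names and gain values are illustrative, not those used in the thesis):

```python
import numpy as np

def robust_update(theta_hat, Gamma, regressor_term, z_norm,
                  dt, dead_zone=0.05, e_mod=0.01):
    """Update law robustified with a dead zone and e-modification.

    `regressor_term` is the nominal theta_hat_dot (e.g. Gamma@Phi@A.T@Z).
    Adaptation is frozen inside the dead zone, and the leakage term
    -e_mod * |z| * Gamma @ theta_hat counters parameter drift."""
    if z_norm < dead_zone:          # tracking error too small: do not adapt
        return theta_hat
    theta_dot = regressor_term - e_mod * z_norm * (Gamma @ theta_hat)
    return theta_hat + dt * theta_dot
```

The dead zone prevents sensor noise and small command-filter discrepancies from driving the weights, while the e-modification leakage bounds the estimates when persistent excitation is lacking.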
7.4 Numerical Simulation Results
This section presents the simulation results from the application of the adaptive flight path controller to the high-fidelity, six-degrees-of-freedom F-16 model of Section 7.3.2. Both the adaptive flight control law and the aircraft model are written as C S-functions in MATLAB/Simulink®. C S-functions are much more efficient than the Matlab S-functions used for the simplified F-18 model of Chapter 6, which means that the simulations can easily be performed in real-time despite the increased complexity of the aircraft model and controller. The tracking error driven update laws now have around 17000 states, but only a small number of updates is non-zero at each time step due to the local model approximation structure. The simulations are performed at three different starting flight conditions with the following trim conditions:

1. h = 5000 m, V_T = 200 m/s, α = θ = 2.774 deg;
2. h = 0 m, V_T = 250 m/s, α = θ = 2.406 deg;
3. h = 2500 m, V_T = 150 m/s, α = θ = 0.447 deg;

where h is the altitude of the aircraft, and all other trim states are equal to zero. Furthermore, two maneuvers are considered:
1. a climbing helical path;
2. a reconnaissance and surveillance maneuver.

This last maneuver involves turns in both directions and some altitude changes. The simulations of both maneuvers last 300 seconds. The reference trajectories are generated with second order linear filters to ensure smooth trajectories. The onboard model in the nominal case contains the low-fidelity data, which means the online model identification has to compensate for any (small) differences between the low-fidelity data of the onboard model and the high-fidelity data of the aircraft model. To properly evaluate the effectiveness of the online model identification, all maneuvers will also be performed with a ±30% deviation in all aerodynamic stability and control derivatives used by the controller, i.e. it is assumed that the onboard model is very inaccurate. Finally, the same maneuvers are also simulated with a lockup at ±10 degrees of the left aileron.
7.4.1 Controller Parameter Tuning
The tuning process starts with the selection of the gains of the static control law and the bandwidths of the command filters. Lyapunov stability theory only requires the control gains to be larger than zero, but it is natural to select the gains of the inner loop largest. Larger gains will of course result in smaller tracking errors, but at the cost of more control effort. It is possible to derive certain performance bounds that can serve as guidelines for tuning, see e.g. [121]. However, getting the desired closed-loop response is still an extensive trial-and-error procedure. The control gains were selected as c_01 = 0.1, c_02 = 1·10⁻⁵, c_03 = 0.5, c_11 = 0.01, c_12 = 2.5, c_13 = 0.5, C_2 = diag(1, 1, 1) and C_3 = diag(2, 2, 2).
The bandwidths of the command filters for the actual control variables δ_e, δ_a, δ_r are chosen equal to the bandwidths of the F-16 model actuators. The outer loop filters have the smallest bandwidths. The selection of the other bandwidths is again trial-and-error. A higher bandwidth in a certain feedback loop will result in more aggressive commands to the next feedback loop. All damping ratios are equal to 1.0. It is possible to add magnitude and rate limits to each of the filters. In this study magnitude limits on the aerodynamic bank angle μ and the flight path angle γ are used to avoid singularities in the control laws. Rate and magnitude limits, equal to the ones of the actuators, are enforced on the actual control variables. The selected command filter parameters can be found in Table 7.2.

As soon as the controller gains and command filter parameters have been defined, the update law gains can be selected. Again the theory only requires that the gains should be larger than zero. Larger update gains mean higher learning rates and thus more rapid changes in the B-spline network weights. It is not difficult to find a gain selection that results in good performance at all flight conditions and with the failures considered in this section. This is probably because all flight path maneuvers are relatively slow and smooth.
Table 7.2: Command filter parameters.

    Command variable   ω_n (rad/s)   mag. limit   rate limit
    V_des                  5             -            -
    γ_des                  3           80 deg         -
    μ_des                  8           80 deg         -
    α_des                  8             -            -
    p_des                 20             -            -
    q_des                 20             -            -
    r_des                 10             -            -
    δ_e                  40.4          25 deg      60 deg/s
    δ_a                  40.4         21.5 deg     80 deg/s
    δ_r                  40.4          30 deg     120 deg/s
7.4.2 Maneuver 1: Upward Spiral
In this section the results of the numerical simulations of the first test maneuver, the climbing helical path, are discussed. For each of the three flight conditions five cases are considered: nominal, the aerodynamic stability and control derivatives used in the control law perturbed with +30% and with −30% w.r.t. the real values of the model, a lockup of the left aileron at +10 degrees, and a lockup at −10 degrees. No actuator sensor information is used. In Figure D.6 of Appendix D.2 the results of the simulation without uncertainty starting at flight condition 1 are plotted. The maneuver involves a climbing spiral to the left with an increase in airspeed. It can be seen that the control law manages to track the reference signal very well and that closed-loop tracking is achieved. The sideslip angle does not become any larger than 0.02 deg. The aerodynamic bank angle μ does reach the limit set by the command filter, but this has no consequences for the performance. The use of dead zones ensures that the parameter update laws are indeed not updating during this maneuver without any uncertainties. The responses at the two other flight conditions are virtually the same, although less thrust is needed due to the lower altitude of flight condition 2 and the lower airspeed of flight condition 3. The other control surfaces are also more efficient. This is illustrated in Tables 7.3 to 7.5, where the mean absolute values (MAVs) of the outer loop tracking errors, control surface deflections and thrust can be found. Plots of the parameter estimation errors are not included. However, the errors converge to constant values, but not to zero, as is common with Lyapunov based update laws.
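The MAV metric used in the tables is simply the mean of the absolute signal value over the simulation, e.g.:

```python
import numpy as np

def mav(signal):
    """Mean absolute value of a sampled signal, as in Tables 7.3-7.8."""
    return np.mean(np.abs(np.asarray(signal)))

# a tracking error oscillating around zero still has a nonzero MAV
example = mav([0.5, -0.5, 0.25, -0.25])  # = 0.375
```

Unlike a plain mean, the MAV does not let positive and negative excursions cancel, which is why it is a fair summary of tracking error and control effort over a whole maneuver.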
The response of the closed-loop system during the same maneuver starting at flight condition 1, but with +30% uncertainty in the aerodynamic coefficients, is shown in Figure D.7. It can be observed that the tracking errors of the outer loop are now much larger, but in the end the steady-state tracking error converges to zero. The sideslip angle still

Table 7.3: Maneuver 1 at flight condition 1: Mean absolute values of the tracking errors and control inputs.

    Case                          (z_01, z_02, z_03)_MAV (m)   (δ_e, δ_a, δ_r)_MAV (deg)   T_MAV (N)
    nominal                       (0.33, 0.24, 0.24)           (4.63, 0.12, 0.10)          5.59e+04
    +30% uncertainty              (4.56, 3.75, 1.07)           (4.59, 0.13, 0.11)          5.57e+04
    −30% uncertainty              (5.15, 3.88, 1.10)           (4.68, 0.16, 0.11)          5.62e+04
    +10 deg locked left aileron   (0.39, 0.32, 0.78)           (4.63, 0.56, 0.74)          5.59e+04
    −10 deg locked left aileron   (0.31, 0.25, 1.12)           (4.63, 0.46, 1.16)          5.59e+04
remains within 0.02 degrees. Some small oscillations are visible in Figure D.7j, but these stay well within the rate and magnitude limits of the actuators. In Tables 7.3 to 7.5 the MAVs of the tracking errors and control inputs are shown for all flight conditions with this uncertainty. As was already seen in the plots, the average tracking errors increase, but the magnitude of the control inputs stays approximately the same. The same simulations have been performed for a −30% perturbation in the stability and control derivatives used by the control law; the results are also shown in the tables. It appears that underestimated initial values of the unknown parameters lead to larger tracking errors than overestimates for this maneuver.

Finally, the maneuver is performed with the left aileron locked at ±10 degrees, i.e. δ_a,damaged = 0.5 (δ_a ± 10·π/180). Figure D.8 shows the response at flight condition 3 with the aileron locked at −10 degrees. Except for some small oscillations in the response of roll rate p and aileron deflection δ_a at the start of the simulation, there is no real change in performance visible. This is confirmed by the numbers of Table 7.5. However, from Tables 7.3 and 7.4 it can be observed that aileron and rudder deflections become larger for both locked aileron failure cases, while tracking performance hardly declines.
Table 7.4: Maneuver 1 at flight condition 2: Mean absolute values of the tracking errors and control inputs.

    Case                          (z_01, z_02, z_03)_MAV (m)   (δ_e, δ_a, δ_r)_MAV (deg)   T_MAV (N)
    nominal                       (0.30, 0.23, 0.21)           (3.97, 0.14, 0.21)          3.14e+04
    +30% uncertainty              (1.55, 1.33, 0.41)           (3.96, 0.15, 0.23)          3.14e+04
    −30% uncertainty              (2.01, 1.53, 0.52)           (3.98, 0.15, 0.20)          3.14e+04
    +10 deg locked left aileron   (0.36, 0.33, 0.72)           (3.97, 0.25, 1.20)          3.14e+04
    −10 deg locked left aileron   (0.30, 0.28, 1.01)           (3.96, 0.40, 1.52)          3.14e+04

Table 7.5: Maneuver 1 at flight condition 3: Mean absolute values of the tracking errors and control inputs.

    Case                          (z_01, z_02, z_03)_MAV (m)   (δ_e, δ_a, δ_r)_MAV (deg)   T_MAV (N)
    nominal                       (0.33, 0.22, 0.27)           (3.37, 0.08, 0.08)          4.41e+04
    +30% uncertainty              (2.01, 1.43, 0.61)           (3.40, 0.10, 0.08)          4.44e+04
    −30% uncertainty              (2.16, 1.49, 0.77)           (3.38, 0.09, 0.08)          4.41e+04
    +10 deg locked left aileron   (0.32, 0.33, 0.29)           (3.38, 0.08, 0.09)          4.41e+04
    −10 deg locked left aileron   (0.34, 0.24, 0.30)           (3.38, 0.08, 0.09)          4.41e+04
7.4.3 Maneuver 2: Reconnaissance
The second maneuver, called reconnaissance and surveillance, involves turns in both directions and altitude changes, but the airspeed is kept constant. Plots of the simulation at flight condition 3 with −30% uncertainty are shown in Figure D.9. Tracking performance is again excellent and the steady-state tracking errors converge to zero. There are some small oscillations in the rudder deflection, but these are within the limits of the actuator. To provide some insight in the online estimation process, the time histories of the estimated coefficient errors are plotted in Figure D.10. The errors in the individual components of the force and moment coefficients do in general not converge to zero, as is expected with Lyapunov based update laws. However, the total force and moment coefficients are identified correctly, which explains the good tracking performance.

The MAVs of the tracking errors and control inputs are compared with the ones for the nominal case in Table 7.8. It can be observed that the average tracking errors have not increased much for this uncertainty case. The degradation of performance for the uncertainty cases is somewhat worse at the other two flight conditions, as can be seen in Tables 7.6 and 7.7. The sideslip angle always remains within 0.05 degrees for all flight conditions and uncertainties. Corresponding with the results of maneuver 1, overestimation of the unknown parameters again leads to smaller tracking errors.
Table 7.6: Maneuver 2 at flight condition 1: Mean absolute values of the tracking errors and control inputs.

    Case                          (z_01, z_02, z_03)_MAV (m)   (δ_e, δ_a, δ_r)_MAV (deg)   T_MAV (N)
    nominal                       (0.42, 0.39, 0.46)           (3.17, 0.16, 0.13)          2.25e+04
    +30% uncertainty              (2.69, 2.30, 1.13)           (3.16, 0.16, 0.14)          2.25e+04
    −30% uncertainty              (3.02, 2.40, 1.12)           (3.19, 0.18, 0.14)          2.25e+04
    +10 deg locked left aileron   (0.43, 0.40, 0.45)           (3.17, 0.17, 0.16)          2.25e+04
    −10 deg locked left aileron   (0.42, 0.39, 0.46)           (3.17, 0.17, 0.15)          2.25e+04
Simulations of maneuver 2 with the locked aileron are also performed. Figure D.11 shows the results for flight condition 1 with the aileron locked at +10 degrees. Some very small oscillations are again visible in the roll rate, aileron and rudder responses, but tracking performance is good and steady-state convergence is achieved. Table 7.6 confirms that the results of the simulations with actuator failure hardly differ from the nominal one. There is only a small increase in the use of the lateral control surfaces. The same holds at the other flight conditions, as can be seen in Tables 7.7 and 7.8.
Table 7.7: Maneuver 2 at flight condition 2: Mean absolute values of the tracking errors and control inputs.

    Case                          (z_01, z_02, z_03)_MAV (m)   (δ_e, δ_a, δ_r)_MAV (deg)   T_MAV (N)
    nominal                       (0.58, 0.49, 0.34)           (2.95, 0.18, 0.21)          1.62e+04
    +30% uncertainty              (1.27, 1.10, 0.48)           (2.95, 0.19, 0.22)          1.62e+04
    −30% uncertainty              (1.73, 1.24, 0.55)           (2.97, 0.19, 0.21)          1.61e+04
    +10 deg locked left aileron   (0.58, 0.50, 0.35)           (2.95, 0.20, 0.22)          1.62e+04
    −10 deg locked left aileron   (0.59, 0.51, 0.34)           (2.95, 0.22, 0.22)          1.62e+04

Table 7.8: Maneuver 2 at flight condition 3: Mean absolute values of the tracking errors and control inputs.

    Case                          (z_01, z_02, z_03)_MAV (m)   (δ_e, δ_a, δ_r)_MAV (deg)   T_MAV (N)
    nominal                       (0.49, 0.40, 0.56)           (2.39, 0.12, 0.12)          2.33e+04
    +30% uncertainty              (0.97, 0.78, 0.54)           (2.39, 0.12, 0.13)          2.33e+04
    −30% uncertainty              (0.97, 0.56, 0.85)           (2.40, 0.13, 0.12)          2.33e+04
    +10 deg locked left aileron   (0.48, 0.40, 0.58)           (2.39, 0.12, 0.13)          2.33e+04
    −10 deg locked left aileron   (0.49, 0.40, 0.56)           (2.40, 0.13, 0.13)          2.33e+04
7.5 Conclusions
In this chapter, a nonlinear adaptive flight path control system is designed for a high-fidelity F-16 model. The controller is based on a backstepping approach with four feedback loops which are designed using a single control Lyapunov function to guarantee stability. The uncertain aerodynamic forces and moments of the aircraft are approximated online with B-spline neural networks, for which the weights are adapted by Lyapunov based update laws. Numerical simulations of two test maneuvers were performed at several flight conditions to verify the performance of the control law. Actuator failures and uncertainties in the stability and control derivatives were introduced to evaluate the parameter estimation process.

Several observations can be made based on the simulation results:

1. The results show that trajectory control can still be accomplished with the investigated uncertainties and failures, while good tracking performance is maintained. Compared to other nonlinear adaptive trajectory control designs found in literature, such as standard adaptive backstepping or sliding mode control in combination with feedback linearization, the approach is much simpler to apply, while the online estimation process is more robust to saturation effects.

2. The flight envelope partitioning approach used to simplify the estimation process makes real-time implementation of the adaptive control system feasible, while it also keeps the estimation process more transparent. All performed simulations easily run in real-time in MATLAB/Simulink® with a standard third order solver at 100 Hz.

3. In the general case, a detailed design study is needed to define the necessary partitions and approximator structure. For the F-16 aerodynamic model earlier modeling studies have already been performed and the data is already available in a suitable tabular form.

4. Tuning of the integrated update laws of the backstepping controller is, in general, a time consuming trial-and-error process, since increasing the gains can lead to unexpected closed-loop system behavior. However, the maneuvers flown with the trajectory controller are relatively slow and smooth, especially for this fighter aircraft model. This smooth maneuvering simplified the tuning of the update gains, since it was not hard to find a gain selection that provided adequate performance for all considered failure scenarios and flight conditions. However, in Chapter 6 more aggressive maneuvering with a much simpler aircraft model was considered, and finding an update gain selection that gave good performance at both flight conditions for all failure types was much more difficult, if not impossible. In the next chapter the stability and control augmentation system design for the F-16 model is considered and simulations involving more aggressive maneuvering will again be performed. Hence, update gain tuning is expected to be much more time consuming.
Chapter 8

F-16 Stability and Control Augmentation Design

This chapter once again considers an adaptive flight control design for the high-fidelity F-16 model, but here a stability and control augmentation system (SCAS) is developed instead of a trajectory autopilot. This means that the flight control system must provide the pilot with the handling qualities he or she desires. Command filters are used to enforce these handling qualities, and a frequency response analysis is included to verify that they have been satisfied in the nominal case. The flight envelope partitioning method, which results in multiple local models, is again used to simplify the online model identification. In the final part of the chapter the constrained adaptive backstepping based SCAS is compared, in several realistic maneuvers and failure scenarios, with the baseline F-16 flight control system and with an adaptive flight control system that makes use of a least squares identifier. Furthermore, sensor models and time delays are introduced in the numerical simulations.
8.1 Introduction
Nowadays most modern ghter aircraft are designed statically relaxed stable or even un-
stable in certain modes to allow for extreme maneuverability. As a result these aircraft
have to be equipped with a stability and control augmentation system (SCAS) that ar-
ticially stabilizes the aircraft and provides the pilot with desirable ying and handling
qualities. Briey stated, the ying and handling qualities of an aircraft are those proper-
ties which describe the ease and effectiveness with which it responds to pilot commands
in the execution of a ight task [45]. Flying qualities can be seen as being task related,
while handling qualities are response related.
In this chapter the constrained adaptive backstepping approach with B-spline networks is used to design a SCAS for a nonlinear, high-fidelity F-16 model which satisfies the handling qualities requirements [1] across the entire flight envelope of the model. It is assumed that the aerodynamic force and moment functions of the model are not known exactly and that they can change during flight due to structural damage or control surface failures. There is plenty of literature available on adaptive backstepping designs for the control of aircraft and missiles, see e.g. [107, 183]. However, none of these publications considers the flying qualities during the controller design phase or performs a handling qualities evaluation after the design is finished. An exception is [93], where a longitudinal adaptive backstepping controller is designed for a simplified supersonic aircraft model. The controller parameters are tuned explicitly via short period handling qualities specifications [1]. The work in this chapter considers a full six degrees-of-freedom high-fidelity aircraft model and enforces the handling qualities requirements with command filters during the control design process.
A second adaptive SCAS is designed using the modular adaptive backstepping method with recursive least squares as detailed in Section 6.2. In this method the control law and the identifier are designed separately, as is often done in adaptive control for linear systems. Since the certainty equivalence principle does not hold in general for nonlinear systems, the modular control law has to be robustified against the time-varying character of the parameter estimates. The estimation error and the derivative of the parameter estimate are viewed as an unknown disturbance input, which is attenuated by adding nonlinear damping terms to the control law. As identifier, the well-established recursive least-squares method is used in combination with an abrupt change detection algorithm. As was illustrated in Chapter 6, a potential advantage of the modular method is that the true values of the uncertain parameters can be found, since the estimation is not driven by the tracking error but rather by the state of the system.
Both fault tolerant SCAS designs are compared with the baseline F-16 flight control system in numerical simulations in which the F-16 model suffers several types of sudden changes in its dynamic behavior. The comparison focuses on performance, estimation accuracy, computation time and controller tuning. In the first part of the chapter both adaptive flight control designs are derived. In the second part the tuning of the controllers and the handling qualities analysis are discussed, followed by the results of the numerical simulations in MATLAB/Simulink.
8.2 Flight Control Design
A full description of the F-16 model together with all necessary data can be found in Chapter 2; the relevant equations of motion are repeated here for convenience:

\dot{V}_T = \frac{1}{m}\left(-D + F_T\cos\alpha\cos\beta + mg_1\right)  (8.1)

\dot{\alpha} = q_s - p_s\tan\beta + \frac{-L - F_T\sin\alpha + mg_3}{mV_T\cos\beta}  (8.2)

\dot{\beta} = -r_s + \frac{Y - F_T\cos\alpha\sin\beta + mg_2}{mV_T}  (8.3)

\dot{p} = (c_1 r + c_2 p)\,q + c_3\bar{L} + c_4\left(\bar{N} + H_{eng}\,q\right)  (8.4)

\dot{q} = c_5\,p\,r - c_6\left(p^2 - r^2\right) + c_7\left(\bar{M} - H_{eng}\,r\right)  (8.5)

\dot{r} = (c_8 p - c_2 r)\,q + c_4\bar{L} + c_9\left(\bar{N} + H_{eng}\,q\right)  (8.6)

where D, Y and L denote the aerodynamic drag, side force and lift, and \bar{L}, \bar{M}, \bar{N} denote the aerodynamic rolling, pitching and yawing moments.
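For illustration, the rotational dynamics (8.4)-(8.6) can be evaluated directly once the inertia coefficients are known; the coefficient values and the engine angular momentum in the sketch below are illustrative placeholders, not actual F-16 data:

```python
# Sketch of the rotational dynamics (8.4)-(8.6); the inertia coefficients
# c[1]..c[9] and H_eng below are illustrative placeholders, not F-16 data.

def rotational_dynamics(p, q, r, Lbar, Mbar, Nbar, c, H_eng=0.0):
    """Return (p_dot, q_dot, r_dot) for body-axis rates and applied moments."""
    p_dot = (c[1]*r + c[2]*p)*q + c[3]*Lbar + c[4]*(Nbar + H_eng*q)
    q_dot = c[5]*p*r - c[6]*(p**2 - r**2) + c[7]*(Mbar - H_eng*r)
    r_dot = (c[8]*p - c[2]*r)*q + c[4]*Lbar + c[9]*(Nbar + H_eng*q)
    return p_dot, q_dot, r_dot

# c[0] is unused so that the indices match the equations above
c = [0.0, -0.6, 0.9, 1.5e-5, 3.0e-7, 0.95, 0.01, 1.8e-5, 0.75, 1.2e-5]
print(rotational_dynamics(0.1, 0.05, -0.02, 200.0, 500.0, -50.0, c))
```

Here Lbar, Mbar and Nbar stand for the total aerodynamic rolling, pitching and yawing moments.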
The goal of this study is to design a SCAS that tracks pilot commands with responses that satisfy the handling qualities, across the entire flight envelope of the aircraft, in the presence of uncertain aerodynamic parameters. The pilot commands should control the responses as follows: longitudinal stick deflection commands angle of attack \alpha^0_{com}, lateral stick deflection commands stability-axis roll rate p^0_{s,com} and the pedals command the sideslip angle \beta^0_{com}. The total velocity command V^0_{T,com} is achieved with the total engine thrust F_T, which is in turn controlled with the throttle lever deflection. The commanded signals are fed through command filters to produce the signals \alpha_{com}, \beta_{com}, p_{s,com}, V_{T,com} and their derivatives. The command filters are also used to specify the desired aircraft handling qualities.
8.2.1 Outer Loop Design
The control design procedure starts by defining the new tracking error states as

Z_1 = \begin{bmatrix} V_T \\ \alpha \\ \beta \end{bmatrix} - \begin{bmatrix} V_{T,com} \\ \alpha_{com} \\ \beta_{com} \end{bmatrix} = X_1 - X_{1,com}  (8.7)

Z_2 = \begin{bmatrix} p_s \\ q_s \\ r_s \end{bmatrix} - \begin{bmatrix} p_{s,com} \\ q_{s,des} \\ r_{s,des} \end{bmatrix} = X_2 - X_{2,com},  (8.8)
with q_{s,des} and r_{s,des} the intermediate control laws that will be defined by the adaptive backstepping controller. The time derivative of Z_1 can be written as
\dot{Z}_1 = A_1 F_1 + H_1 + B_{11} X_2 + B_{12}\begin{bmatrix} F_T \\ 0 \\ 0 \end{bmatrix} - \dot{X}_{1,com},  (8.9)

where

A_1 = \frac{1}{mV_T}\begin{bmatrix} 0 & 0 & -V_T \\ -\dfrac{1}{\cos\beta} & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \qquad
H_1 = \frac{1}{m}\begin{bmatrix} mg_1 \\ -mp_s\tan\beta + \dfrac{-F_T\sin\alpha + mg_3}{V_T\cos\beta} \\ \dfrac{-F_T\cos\alpha\sin\beta + mg_2}{V_T} \end{bmatrix},

B_{11} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}, \qquad
B_{12} = \frac{1}{m}\begin{bmatrix} \cos\alpha\cos\beta & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix},
are known (matrix) functions, and F_1 = [L, Y, D]^T is a vector containing the uncertain aerodynamic forces. Furthermore, let
\begin{bmatrix} F_T^0 \\ q_{s,des}^0 \\ r_{s,des}^0 \end{bmatrix} = B_1^{-1}\left(-C_1\tilde{Z}_1 - K_1\chi_1 - A_1\hat{F}_1 - H_1 + \dot{X}_{1,com} - B_{11}\xi_2\right),  (8.10)
where B_1 = B_{11} + B_{12} and \chi_1 = \int_0^t \tilde{Z}_1(\tau)\,d\tau, be a feedback control law with C_1 = C_1^T > 0, K_1 = K_1^T \geq 0, \hat{F}_1 the estimate of F_1, and \xi_2 and \tilde{Z}_1 to be defined later. The estimate of the aerodynamic forces \hat{F}_1 is defined as
\hat{F}_1 = \Phi_{F_1}^T(X, U)\,\hat{\theta}_{F_1},  (8.11)
where \Phi_{F_1}^T is the known regressor function and \hat{\theta}_{F_1} is a vector of unknown constant parameters. It is assumed that there exists a vector \theta_{F_1} such that

F_1 = \Phi_{F_1}^T(X, U)\,\theta_{F_1},  (8.12)
so that the estimation error can be defined as \tilde{\theta}_{F_1} = \theta_{F_1} - \hat{\theta}_{F_1}. Part of the feedback control law (8.10) is now fed through second order low pass filters to produce the signals F_T, q_{s,des}, r_{s,des} and their derivatives. These filters can also be used to enforce rate and magnitude limits on the signals, see the appendix of [61]. The effect that the use of these command filters has on the tracking errors can be captured with the stable linear filter
\dot{\xi}_1 = -C_1\xi_1 + B_{11}\left(X_{2,com} - X_{2,com}^0\right) + B_{12}\begin{bmatrix} F_T - F_T^0 \\ 0 \\ 0 \end{bmatrix}.  (8.13)
Define the modified tracking errors as

\tilde{Z}_i = Z_i - \xi_i, \qquad i = 1, 2.  (8.14)
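The command filtering step described above can be sketched as a discrete-time second order low pass filter with magnitude and rate limiting; the frequency, damping and limit values below are illustrative assumptions, not the thesis settings:

```python
# Sketch of a second order command filter with magnitude and rate limiting,
# integrated with forward Euler. Gains and limits are illustrative assumptions.

def command_filter(raw_cmd, x, x_dot, dt, wn=2.0, zeta=0.8,
                   mag_limit=1.0, rate_limit=2.0):
    """One filter step: returns (filtered command, its derivative)."""
    # Magnitude-limit the incoming raw command
    cmd = max(-mag_limit, min(mag_limit, raw_cmd))
    # Second order low-pass dynamics: x_ddot = wn^2 (cmd - x) - 2 zeta wn x_dot
    x_ddot = wn**2 * (cmd - x) - 2.0 * zeta * wn * x_dot
    x_dot = x_dot + x_ddot * dt
    # Rate-limit the filtered signal's derivative
    x_dot = max(-rate_limit, min(rate_limit, x_dot))
    x = x + x_dot * dt
    return x, x_dot

x, x_dot = 0.0, 0.0
for _ in range(2000):                # step response to a 5.0 command
    x, x_dot = command_filter(5.0, x, x_dot, dt=0.005)
print(round(x, 3))                   # settles at the magnitude limit
```

The filter output and its derivative are exactly the pair of signals the backstepping design needs, which is why the same filter both shapes the handling qualities and supplies the command derivatives.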
8.2.2 Inner Loop Design
Taking the derivative of Z_2 results in

\dot{Z}_2 = A_2\left(F_2 + G_2 U\right) + H_2 - \dot{X}_{2,com},  (8.15)
where U = (\delta_e, \delta_a, \delta_r)^T is the control vector,

A_2 = T_{s/b}\begin{bmatrix} c_3 & 0 & c_4 \\ 0 & c_7 & 0 \\ c_4 & 0 & c_9 \end{bmatrix}, \qquad
H_2 = \dot{\alpha}\begin{bmatrix} r_s \\ 0 \\ -p_s \end{bmatrix} + T_{s/b}\begin{bmatrix} (c_1 r + c_2 p)\,q + c_4 H_{eng}\,q \\ c_5\,p\,r - c_6\left(p^2 - r^2\right) - c_7 H_{eng}\,r \\ (c_8 p - c_2 r)\,q + c_9 H_{eng}\,q \end{bmatrix},
are known (matrix) functions, and

F_2 = \begin{bmatrix} \bar{L}_0 \\ \bar{M}_0 \\ \bar{N}_0 \end{bmatrix}, \qquad
G_2 = \begin{bmatrix} \bar{L}_{\delta_e} & \bar{L}_{\delta_a} & \bar{L}_{\delta_r} \\ \bar{M}_{\delta_e} & \bar{M}_{\delta_a} & \bar{M}_{\delta_r} \\ \bar{N}_{\delta_e} & \bar{N}_{\delta_a} & \bar{N}_{\delta_r} \end{bmatrix},

are unknown (matrix) functions containing the aerodynamic moment components. Note that for a more convenient presentation the aerodynamic moments have been decomposed, e.g.

\bar{M}(X, U) = \bar{M}_0(X, U) + \bar{M}_{\delta_e}\delta_e + \bar{M}_{\delta_a}\delta_a + \bar{M}_{\delta_r}\delta_r,  (8.16)

where the higher order control surface dependencies are still contained in \bar{M}_0(X, U). To stabilize the system (8.15) the desired control U^0 is defined as
A_2\hat{G}_2 U^0 = -C_2\tilde{Z}_2 - K_2\chi_2 - B_{11}^T\tilde{Z}_1 - A_2\hat{F}_2 - H_2 + \dot{X}_{2,com},  (8.17)
where \chi_2 = \int_0^t \tilde{Z}_2(\tau)\,d\tau with C_2 = C_2^T > 0, K_2 = K_2^T \geq 0, and where \hat{F}_2 and \hat{G}_2 are the estimates of the unknown nonlinear aerodynamic moment functions F_2 and G_2, respectively. The estimates are defined as
\hat{F}_2 = \Phi_{F_2}^T(X, U)\,\hat{\theta}_{F_2}  (8.18)

\hat{G}_{2j} = \Phi_{G_{2j}}^T(X)\,\hat{\theta}_{G_{2j}} \qquad \text{for } j = 1, 2, 3,  (8.19)
where \Phi_{F_2}^T, \Phi_{G_{2j}}^T are the known regressor functions and \hat{\theta}_{F_2}, \hat{\theta}_{G_{2j}} are vectors with unknown constant parameters; note that \hat{G}_{2j} represents the jth column of \hat{G}_2. It is assumed that there exist vectors \theta_{F_2}, \theta_{G_{2j}} such that

F_2 = \Phi_{F_2}^T(X, U)\,\theta_{F_2}, \qquad G_{2j} = \Phi_{G_{2j}}^T(X)\,\theta_{G_{2j}}.  (8.20)
This means the estimation errors can be defined as \tilde{\theta}_{F_2} = \theta_{F_2} - \hat{\theta}_{F_2} and \tilde{\theta}_{G_{2j}} = \theta_{G_{2j}} - \hat{\theta}_{G_{2j}}. The actual control U is found by again applying command filters, as was also done in the outer loop design. Finally, with the definition of the stable linear filter

\dot{\xi}_2 = -C_2\xi_2 + A_2\hat{G}_2\left(U - U^0\right),  (8.21)

the static part of the control design is finished.
8.2.3 Update Laws and Stability Properties
In this section the stability properties of the control law are discussed and dynamic update laws for the unknown parameters are derived. Define the control Lyapunov function

V = \frac{1}{2}\sum_{i=1}^{2}\left(\tilde{Z}_i^T\tilde{Z}_i + \chi_i^T K_i\chi_i\right) + \frac{1}{2}\Big[\operatorname{trace}\left(\tilde{\theta}_{F_1}^T\Gamma_{F_1}^{-1}\tilde{\theta}_{F_1}\right) + \operatorname{trace}\left(\tilde{\theta}_{F_2}^T\Gamma_{F_2}^{-1}\tilde{\theta}_{F_2}\right) + \sum_{j=1}^{3}\operatorname{trace}\left(\tilde{\theta}_{G_{2j}}^T\Gamma_{G_{2j}}^{-1}\tilde{\theta}_{G_{2j}}\right)\Big]
with the update gain matrices \Gamma_{F_1} = \Gamma_{F_1}^T > 0, \Gamma_{F_2} = \Gamma_{F_2}^T > 0 and \Gamma_{G_{2j}} = \Gamma_{G_{2j}}^T > 0.
Selecting the update laws

\dot{\hat{\theta}}_{F_1} = \Gamma_{F_1}\Phi_{F_1} A_1^T\tilde{Z}_1

\dot{\hat{\theta}}_{F_2} = \Gamma_{F_2}\Phi_{F_2} A_2^T\tilde{Z}_2  (8.22)

\dot{\hat{\theta}}_{G_{2j}} = \operatorname{Proj}\left(\Gamma_{G_{2j}}\Phi_{G_{2j}} A_2^T\tilde{Z}_2 U_j\right)
and substituting (8.10), (8.13), (8.21) and (7.37) reduces the derivative of V along the trajectories of the closed-loop system to

\dot{V} = -\tilde{Z}_1^T C_1\tilde{Z}_1 - \tilde{Z}_2^T C_2\tilde{Z}_2,  (8.23)
which is negative semi-definite. By using Theorem 3.7 it can be shown that \tilde{Z} \to 0 as t \to \infty. When the command filters are properly designed and the limits on the filters are not in effect, \tilde{Z}_i will converge to a close neighborhood of Z_i. If the limits are in effect the actual tracking errors Z_i may increase, but the modified tracking errors \tilde{Z}_i will still converge to zero and the update laws will not unlearn, since they are driven by the modified tracking errors. Note that the update law for \hat{G}_2 includes a projection operator to ensure that certain elements of the matrix do not change sign and that full rank is always maintained. For most elements the sign is known based on physical principles.
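The sign-preserving behavior of such a projection can be sketched as a simple interval projection on a scalar estimate; the interval bounds below are illustrative assumptions:

```python
# Sketch of a projection operator that keeps a control-effectiveness estimate
# away from zero so it cannot change sign; bounds are illustrative assumptions.

def proj_update(theta_hat, update, dt, lower=0.05, upper=10.0):
    """Euler-integrate theta_hat by `update`, projected onto [lower, upper]."""
    candidate = theta_hat + update * dt
    if candidate < lower:           # would drift toward a sign change
        return lower
    if candidate > upper:
        return upper
    return candidate

theta = 0.2
for _ in range(100):                # a persistent negative update...
    theta = proj_update(theta, -1.0, 0.01)
print(theta)                        # ...is stopped at the lower bound
```

Keeping the estimate inside a known-sign interval is what guarantees that \hat{G}_2 stays invertible in the control law (8.17).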
8.3 Integrated Model Identification
As was explained in Chapter 7, to simplify the approximation of the unknown aerodynamic force and moment functions, and thereby reduce the computational load to make real-time implementation feasible, the flight envelope is partitioned into multiple, connected operating regions.
In the previous section parameter update laws (8.22) for the unknown aerodynamic functions (8.11)-(8.19) were defined. Now these unknown vectors and known regressor vectors will be further specified. The total force and moment approximations are written in the standard coefficient notation. The total nonlinear function approximations are divided into simpler linear-in-the-parameters nonlinear coefficient approximations, e.g.
\hat{C}_{L_0}(\alpha, \beta) = \Phi_{C_{L_0}}^T(\alpha, \beta)\,\hat{\theta}_{C_{L_0}},  (8.24)
where the unknown parameter vector \hat{\theta}_{C_{L_0}} contains the network weights, i.e. the unknown parameters, and \Phi_{C_{L_0}} is a regressor vector containing the B-spline basis functions. All other coefficient estimates are defined in similar fashion. In this case a two-dimensional network is used with input nodes for \alpha and \beta. Different scheduling parameters can be selected for each unknown coefficient. In this chapter third order B-splines spaced 2.5 degrees apart and up to three scheduling variables (\alpha, \beta, \delta_e, depending on the coefficient) are once again used. With these approximators sufficient model accuracy is obtained. Following the notation of (7.50) the estimates of the aerodynamic forces and moments can be written as
\hat{L} = \Phi_L^T(\alpha, \beta, \delta_e)\,\hat{\theta}_L, \qquad \hat{\bar{L}} = \Phi_{\bar{L}}^T(\alpha, \beta, \delta_e)\,\hat{\theta}_{\bar{L}},

\hat{Y} = \Phi_Y^T(\alpha, \beta, \delta_e)\,\hat{\theta}_Y, \qquad \hat{\bar{M}} = \Phi_{\bar{M}}^T(\alpha, \beta, \delta_e)\,\hat{\theta}_{\bar{M}},  (8.25)

\hat{D} = \Phi_D^T(\alpha, \beta, \delta_e)\,\hat{\theta}_D, \qquad \hat{\bar{N}} = \Phi_{\bar{N}}^T(\alpha, \beta, \delta_e)\,\hat{\theta}_{\bar{N}},
which is a notation equivalent to the one used in (8.11)-(8.19). Therefore, the update laws
(8.22) can be used to adapt the B-spline network weights. A scheme of the integrated
adaptive backstepping controller can be found in Figure 6.4 of Chapter 6.
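The local support of the B-spline basis functions is what keeps the regressor sparse, so that only a few network weights are active (and updated) at any operating point. A minimal one-dimensional sketch follows; the spline order and knot spacing match the chapter (third order, 2.5 degrees), while the knot range is chosen arbitrarily for illustration:

```python
import numpy as np

# Sketch of a one-dimensional B-spline regressor on a uniform knot grid.
# Third order splines spaced 2.5 deg as in the chapter; the alpha range
# used for the knots is an illustrative assumption.

def bspline_basis(x, knots, degree):
    """Cox-de Boor recursion: values of all B-spline basis functions at x."""
    n = len(knots) - degree - 1
    B = np.array([1.0 if knots[i] <= x < knots[i + 1] else 0.0
                  for i in range(len(knots) - 1)])
    for d in range(1, degree + 1):
        Bn = np.zeros(len(knots) - d - 1)
        for i in range(len(Bn)):
            left = right = 0.0
            if knots[i + d] > knots[i]:
                left = (x - knots[i]) / (knots[i + d] - knots[i]) * B[i]
            if knots[i + d + 1] > knots[i + 1]:
                right = (knots[i + d + 1] - x) / (knots[i + d + 1] - knots[i + 1]) * B[i + 1]
            Bn[i] = left + right
        B = Bn
    return B[:n]

degree = 3                                  # "third order" cubic splines
knots = np.arange(-10.0, 30.1, 2.5)         # 2.5 deg spacing (illustrative range)
phi = bspline_basis(5.0, knots, degree)
print(np.count_nonzero(phi), round(phi.sum(), 6))  # local support; partition of unity
```

Because at most degree + 1 basis functions are nonzero at any point, the update laws (8.22) only touch the handful of weights whose support contains the current operating point.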
8.4 Modular Model Identification
An alternative to the Lyapunov-based indirect adaptive laws of the previous section is to design the identifier and the control law separately. This approach is referred to as the modular control design and was discussed in Section 6.2. The modular adaptive design is not limited to Lyapunov-based identifiers, but allows for more freedom in the selection of the model identification. Especially (recursive) least-squares identification is of interest, since it is considered to have good convergence properties and its parameter estimates converge to true, constant values if the system is sufficiently excited.
A comparison of Lyapunov and least-squares model identification for a simplified aircraft model in Chapter 6 demonstrated the more accurate approximation potential of the latter approach. A disadvantage of the design is that nonlinear damping terms have to be used to robustify the controller against the slowness of the parameter estimation method. These nonlinear damping terms can lead to high gain control and related numerical problems. Another disadvantage is that the least-squares identifier with nonlinear regressor filter is of a much higher dynamical order than the Lyapunov identifier of the integrated model identification method.
First, the intermediate control (8.10) and the control (7.37) are augmented with the additional nonlinear damping terms -S_1\tilde{Z}_1 and -S_2\tilde{Z}_2 respectively, where

S_1 = \kappa_1 A_1\Phi_{F_1}^T\Phi_{F_1} A_1^T  (8.26)

S_2 = \kappa_2 A_2\Phi_{F_2}^T\Phi_{F_2} A_2^T + A_2\left(\sum_{j=1}^{3}\kappa_{2j}\Phi_{G_{2j}}^T\Phi_{G_{2j}} U_j^2\right) A_2^T  (8.27)
with the scalar gains \kappa_1, \kappa_2, \kappa_{21}, \kappa_{22}, \kappa_{23} > 0. With these additional terms the derivative of the control Lyapunov function V (6.46) becomes

\dot{V} = -\tilde{Z}_1^T(C_1 + S_1)\tilde{Z}_1 - \tilde{Z}_2^T(C_2 + S_2)\tilde{Z}_2 + \tilde{Z}_1^T A_1\Phi_{F_1}^T\tilde{\theta}_{F_1} + \tilde{Z}_2^T A_2\left(\Phi_{F_2}^T\tilde{\theta}_{F_2} + \sum_{j=1}^{3}\Phi_{G_{2j}}^T\tilde{\theta}_{G_{2j}} U_j\right)  (8.28)

\leq -\tilde{Z}_1^T C_1\tilde{Z}_1 - \tilde{Z}_2^T C_2\tilde{Z}_2 + \frac{1}{4\kappa_1}\tilde{\theta}_{F_1}^T\tilde{\theta}_{F_1} + \frac{1}{4\kappa_2}\tilde{\theta}_{F_2}^T\tilde{\theta}_{F_2} + \sum_{j=1}^{3}\frac{1}{4\kappa_{2j}}\tilde{\theta}_{G_{2j}}^T\tilde{\theta}_{G_{2j}},
which demonstrates that the controller achieves boundedness of the modified tracking errors \tilde{Z}_i if the parameter estimation errors are bounded. The size of the bounds is determined by the damping gains. The x-swapping filters are defined as

\dot{\Omega}_0 = \left(A_0 - \lambda F^T(X, U) F(X, U) P\right)\left(\Omega_0 + X\right) - H(X, U)  (8.29)

\dot{\Omega}^T = \left(A_0 - \lambda F^T(X, U) F(X, U) P\right)\Omega^T + F^T(X, U)  (8.30)

\epsilon = X + \Omega_0 - \Omega^T\hat{\theta},  (8.31)

where H(X, U) are the known dynamics, F(X, U) is the known regressor matrix, \lambda > 0 and A_0 is an arbitrary constant matrix such that

P A_0 + A_0^T P = -I, \qquad P = P^T > 0.  (8.32)
The least-squares update law for \hat{\theta} and the covariance update are defined as

\dot{\hat{\theta}} = \frac{\Gamma\,\Omega\,\epsilon}{1 + \nu\operatorname{trace}\left(\Omega^T\Gamma\,\Omega\right)}  (8.33)

\dot{\Gamma} = \beta\Gamma - \frac{\Gamma\,\Omega\,\Omega^T\Gamma}{1 + \nu\operatorname{trace}\left(\Omega^T\Gamma\,\Omega\right)},  (8.34)
where \nu \geq 0 is the normalization coefficient and \beta \geq 0 is the forgetting factor. By Lemma 6.1 the modular controller with x-swapping filters and least-squares update law achieves global asymptotic tracking in terms of the modified tracking errors. Note that flight envelope partitioning is again used for the modular design; only the parameters of the locally valid nonlinear linear-in-the-parameters models in the current partitions are updated at each time step. Although the whole updating process is slightly different, the same B-spline neural networks are used. In this way the modular adaptive design has the same memory capabilities as the integrated design. Note that for the modular adaptive design the covariance matrix also has to be stored in each partition, which leads to a significant increase in identifier states. However, again only a few partitions are updated at each time step.
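A discrete-time sketch of this identification scheme, combining recursive least squares with exponential forgetting and covariance resetting triggered by the estimation error ratio, is given below; the forgetting factor, window length, threshold and reset value are illustrative assumptions:

```python
import numpy as np

# Discrete-time RLS with exponential forgetting and covariance resetting on
# abrupt-change detection. All tuning values here are illustrative assumptions.

class ForgettingRLS:
    def __init__(self, n, lam=0.999, p0=100.0):
        self.theta = np.zeros(n)
        self.P = p0 * np.eye(n)         # covariance matrix
        self.lam = lam                  # forgetting factor
        self.errs = []                  # recent a priori estimation errors

    def update(self, phi, y, threshold=8.0, p_reset=100.0):
        err = y - phi @ self.theta      # a priori estimation error
        # Abrupt-change detection: current error vs. mean over a window
        if len(self.errs) >= 50:
            mean_err = np.mean(self.errs[-50:]) + 1e-9
            if abs(err) / mean_err > threshold:
                self.P = p_reset * np.eye(len(self.theta))  # reset covariance
                self.errs.clear()
        self.errs.append(abs(err))
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)  # gain vector
        self.theta = self.theta + k * err
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return err

rng = np.random.default_rng(0)
rls = ForgettingRLS(2)
true_theta = np.array([1.5, -0.7])
for t in range(400):
    if t == 200:
        true_theta = np.array([0.2, 0.9])   # sudden parameter change
    phi = rng.standard_normal(2)
    rls.update(phi, phi @ true_theta)
print(np.round(rls.theta, 2))
```

The reset restores a large covariance after a detected failure, so the identifier can re-converge quickly instead of being stuck with a nearly singular gain.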
Despite using a mild forgetting factor in (8.34), the covariance matrix can become small after a period of tracking, which reduces the ability of the identifier to adjust to abrupt changes in the system parameters. A possible solution is to reset the covariance matrix when a sudden change is detected. After an abrupt change in the system parameters, the estimation error will be large. Therefore a good monitoring candidate is the ratio between the current estimation error and the mean estimation error over an interval \Delta t. After a failure, the estimation error will be large compared to the mean estimation error, and thus an abrupt change is declared when this ratio exceeds a threshold:

\frac{\|\epsilon(t)\|}{\bar{\epsilon}_{\Delta t}} > T_{\epsilon},  (8.35)

where T_{\epsilon} is the detection threshold.

8.5 Controller Tuning and Command Filter Design
The second order command filters take the form

\frac{\alpha_{com}}{\alpha_{com}^0} = \frac{\omega^2}{s^2 + 2\zeta\omega s + \omega^2}, \qquad \frac{\beta_{com}}{\beta_{com}^0} = \frac{\omega^2}{s^2 + 2\zeta\omega s + \omega^2},
where T_p = 0.5, \zeta = 0.8 and \omega = 1.25. The lower order equivalent system (LOES) transfer functions used in the handling qualities analysis are

H_{roll}(s) = \frac{K_{\phi}\left(s^2 + 2\zeta_{\phi}\omega_{\phi} s + \omega_{\phi}^2\right) e^{-\tau_p s}}{(s + 1/T_s)\,(s + 1/T_r)\left[s^2 + 2\zeta_d\omega_d s + \omega_d^2\right]}

H_{pitch}(s) = \frac{K_q\, s\,(s + 1/T_1)\,(s + 1/T_2)\, e^{-\tau_q s}}{\left(s^2 + 2\zeta_p\omega_p s + \omega_p^2\right)\left(s^2 + 2\zeta_{sp}\omega_{sp} s + \omega_{sp}^2\right)}

H_{yaw}(s) = \frac{A_{\beta}\,(s + 1/T_1)\,(s + 1/T_2)\,(s + 1/T_3)\, e^{-\tau_{\beta} s}}{(s + 1/T_s)\,(s + 1/T_r)\left[s^2 + 2\zeta_d\omega_d s + \omega_d^2\right]}.
For level 1 handling qualities the LOES parameters must satisfy the ranges

0.28 \leq CAP \leq 3.6, \quad T_r \leq 1.0 \text{ s}, \quad \omega_{sp} > 1.0 \text{ rad/s}, \quad \zeta_d \geq 0.4,
0.35 \leq \zeta_{sp} \leq 1.3, \quad \zeta_d\omega_d \geq 0.4 \text{ rad/s},

where CAP = \omega_{sp}^2/(n_z/\alpha) is the Control Anticipation Parameter, and the equivalent time delays \tau must be less than 0.10 seconds. Guidelines for estimating the substantial number of parameters in the LOES transfer functions are given in [1, 47]. For the longitudinal response the pitch attitude bandwidth versus phase delay criterion [1] is also taken into account, as recommended by [199]. Plots of the CAP versus the short period frequency \omega_{sp} can be found in Figure 8.2, while the bandwidth criterion plot appears in Figure 8.3. It can be seen that both criteria predict level 1 handling qualities. Short period damping \zeta_{sp} values were between 0.60 and 0.82, while the largest effective time delay was 0.084 s. The gain margin was larger than 6 dB and the phase margin larger than 45 deg at all test conditions. Finally, the Neal-Smith criterion [12] also predicts level 1 handling qualities. The Neal-Smith method estimates the amount of pilot compensation required to prevent pilot-in-the-loop resonance.
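The longitudinal level 1 checks above can be collected in a small helper function; the sample LOES parameter values below are illustrative, not identified values from the thesis:

```python
# Sketch of the longitudinal level 1 checks from the ranges above; the
# sample LOES parameters are illustrative, not identified values.

def short_period_level1(w_sp, zeta_sp, nz_per_alpha, tau):
    """Check CAP, short period damping/frequency and equivalent time delay."""
    cap = w_sp**2 / nz_per_alpha          # Control Anticipation Parameter
    return (0.28 <= cap <= 3.6 and
            0.35 <= zeta_sp <= 1.3 and
            w_sp > 1.0 and
            tau < 0.10)

# Illustrative LOES fit: w_sp = 3.2 rad/s, zeta_sp = 0.7, n_z/alpha = 12 g/rad
print(short_period_level1(3.2, 0.7, 12.0, 0.084))   # -> True
```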
Figure 8.2: LOES short period frequency estimates (\omega_{sp} versus n_z/\alpha, Category A flight phases, level 1-3 boundaries).
Plots of the LOES roll mode time constant and effective time delay requirements can be found in Figure 8.4, and the LOES Dutch roll frequency \omega_d and damping \zeta_d requirements in Figure 8.5. The figures demonstrate that also for lateral maneuvering all criteria for level 1 handling qualities are met.
8.6 Numerical Simulations and Results
This section presents numerical simulation results from the application of the control systems developed in the previous sections to the high-fidelity, six-degrees-of-freedom F-16 model in a number of failure scenarios and maneuvers. The controllers are evaluated on
Figure 8.3: Pitch attitude bandwidth \omega_{BW} (rad/s) versus phase delay \tau_p (sec) criterion (Categories A and C, level 1/2 and 2/3 boundaries).
Figure 8.4: Roll mode time constant and effective time delay versus dynamic pressure (N/m^2), with level 1-3 boundaries.
their tracking performance and parameter estimation accuracy. Both the control laws and the aircraft model are written as C S-functions in MATLAB/Simulink. Sensor models taken from [63] and transport delays of 20 ms have been added to the controller to model an onboard computer implementation of the control laws.
The analysis in the previous part demonstrates that it is quite straightforward to use the command filters to enforce the desired handling qualities of the adaptive backstepping controllers. However, one of the goals in this section is to compare the adaptive designs directly with the baseline F-16 control system of Section 2.5. For the purpose of this comparison, the command filters, stick shaping functions and command limiting functions in the numerical simulations are selected in such a way that the response of the adaptive designs on the nominal F-16 model is approximately the same as the baseline control system response over the entire flight envelope. One problem is that a longitudinal stick command to the baseline controller generates a mixed pitch rate and load factor response, while the adaptive designs generate an angle of attack response. The desired mixed response is transformed to an angle of attack command for the adaptive controllers using the nominal aircraft model data.
To verify whether the baseline control system achieves level 1 handling qualities, simulations of frequency sweeps were again performed. The small amplitude responses have been matched to LOES models. As expected, the baseline control system also satisfies these criteria over the entire F-16 model flight envelope.
Figure 8.5: Dutch roll frequency \omega_d (rad/s) versus damping \zeta_d (levels 1-3).
Table 8.1: Flight conditions used for evaluation.

Flight condition   Mach number   Altitude (m)   Dynamic pressure (kN/m^2)   \alpha (deg)
FC1                0.8           8000           15.95                       1.80
FC2                0.6           12000          4.87                        9.40
FC3                0.6           5000           13.61                       2.46
FC4                0.4           10000          2.96                        14.99
FC5                0.8           2000           35.61                       0.04
8.6.1 Simulation Scenarios
The simulated failure scenarios are limited to a right aileron locked at zero and at four different offsets from zero, two longitudinal center of gravity shifts, and a sudden change in the pitch damping (C_{m_q}), for a total of eight different failure cases. Each simulation lasts between 150 and 200 seconds; after 20 seconds a failure is introduced. All simulation runs start at one of five different trimmed flight conditions as given in Table 8.1.
This gives a total of forty failure scenarios for each controller; the simulation results of three typical ones are discussed in detail in the next sections.
8.6.2 Simulation Results with C_{m_q} = 0
The first series of simulations considers a sudden reduction of the longitudinal damping coefficient C_{m_q} to zero at all flight conditions. This is not a very critical change, since the tracking performance of both the baseline controller and the backstepping controller with adaptation disabled is hardly affected. It does however serve as a nice example to evaluate the ability of the adaptation schemes to accurately estimate inaccuracies in the onboard model. Figure D.12 of Appendix D.3 contains the simulation results for the integrated design starting at flight condition 2 with the longitudinal stick commanding a series of pitch doublets; after 20 seconds of simulation the sudden change in C_{m_q} takes place. The left hand side plots show the inputs and response of the aircraft in solid lines, while the dotted lines are the reference trajectories. Tracking performance both before and after the change in pitch damping is excellent.
The solid lines in the right hand side plots of Figure D.12 show the changes in aerodynamic coefficients with respect to the nominal values, divided by the maximum absolute value of the real aerodynamic coefficients to normalize them. The dotted lines are the normalized real differences between the altered and nominal aircraft model. The change in C_{m_q} is clearly visible in the plots. However, the tracking error based update laws of the integrated controller compensate by estimating changes in C_{m_0} and C_{m_{\delta_e}} instead, which leads to the same total pitching moment. The time histories of the drag estimation and the total airspeed are not depicted in the figure. The flight control system is not able to follow this maneuver and at the same time hold the aircraft at the correct airspeed; hence there is some estimation of a non-existing drag coefficient error.
It is expected that the estimation-based update laws of the modular design will manage to find the correct parameter values, since the reference signal flown should be rich enough in information. The results of the same simulation scenario for the F-16 with the modular controller can be seen in Figure D.13. The tracking performance of this controller is also excellent, and, as can be seen from the right hand side plots, the correct change in parameter value is found by the model identification.
The results of other simulations of this failure scenario are in correspondence with the single case discussed above. Tracking performance is always good, but as expected only the modular controller manages to find the true aerodynamic coefficient values. Naturally, the speed at which the true values are found depends on the richness of information in the reference signal.
8.6.3 Simulation Results with Longitudinal c.g. Shifts
The second series of simulations considers a more complex failure: longitudinal center of gravity shifts. Especially backward shifts can be quite critical, since they are destabilizing and can even result in a loss of static stability margin. All pitching and yawing aerodynamic moment coefficients will change as a result of a longitudinal c.g. shift. The baseline classical controller is designed to deal with longitudinal c.g. shifts and, as is demonstrated in [149], can even deal with shifts of 0.06\bar{c}. The tracking performance degrades somewhat, but is still acceptable. However, for a non-adaptive model inversion based design the changes are far more critical, and stability loss often occurs for destabilizing shifts, even with the integral gains.
Figure D.14 contains the simulation results for the F-16 model with the integrated adaptive controller starting at flight condition 1 with the longitudinal stick commanding a series of small amplitude pitch doublets; after 20 seconds the c.g. instantly shifts backward by 0.06\bar{c} and the small positive static margin is lost. Without adaptation, stability is lost immediately, but as can be seen in the left hand side plots, with adaptation turned on the tracking performance of the integrated design is acceptable, although small tracking errors remain. The right hand side plots demonstrate that the estimates again do not converge to their true values, and the change in yawing moment is not estimated at all, since it does not result in large enough tracking errors.
In Figure D.15 the total pitch moment coefficient is plotted against the angle of attack with a pitch rate and elevator deflection of zero, both before (blue line) and after the failure occurs (red line). The difference, or the error, is plotted in Figure D.16 together with the estimated error generated by the adaptive backstepping controller at the end of the simulation. It is interesting to note that the pitch moment coefficient error is only learned over the portion of the flight envelope over which training samples have been accumulated. This is due to the local nature of the B-spline networks used for the flight envelope partitioning.
The plots of the results for the modular design for the same scenario can be found in Figure D.17. The tracking performance of the modular design is somewhat disappointing; even after 200 seconds of simulation a significant tracking error remains. Also, the parameter estimates do not converge to their true values and the total reconstructed pitching moment is not equal to the real moment. However, if the same simulation is performed without flight envelope partitioning with B-splines on a semi-linear aircraft model, the tracking performance and parameter convergence are excellent. It seems the flight envelope partitioning negatively affects the estimation capabilities of the least-squares algorithm for this failure scenario.
The simulation results of the rest of the c.g. shift failure scenarios correspond to this single case: the tracking performance of the integrated design is better than that of the modular design, with the modular design struggling to estimate the correct parameter values. Tracking performance of both controllers is better for stabilizing c.g. shifts.
8.6.4 Simulation Results with Aileron Lock-ups
In the last series of simulations right aileron lock-ups or hard-overs are considered. At 20 seconds simulation time the right aileron suddenly moves to a certain offset: -21.5, -10.75, 0, 10 or 21.5 degrees. Note that the public domain F-16 model does not contain a differential elevator; hence only the rudder and the left aileron can be used to compensate for these failures. Both the baseline control system and the adaptive SCAS designs with adaptation turned off cannot compensate for the additional rolling and yawing moments themselves, which means a very high workload for the pilot.
The results of a simulation performed with the integrated controller at flight condition 4 with a right aileron lock-up at -10.75 degrees can be seen in Figure D.18. One lateral stick doublet is performed before the failure occurs and three more 60 seconds after. As can be seen, the controller manages to compensate for most of the additional rolling moment, and after that the stability-axis roll rate tracking error slowly converges to zero. Additional sideslip is generated in the doublets, and tracking performance improves over time. The other plots of Figure D.18 demonstrate that parameter convergence to the true values is not achieved. The change in yawing moment is even estimated as having an opposite sign. However, tracking performance is adequate and improving over time.
Figure D.19 contains the results of the same scenario using the modular controller. It can be seen that the aileron failure is quickly compensated for by the modular adaptive controller: all tracking errors quickly converge to zero. However, the controller again fails to identify the true aerodynamic coefficient changes. The total reconstructed forces and moments are correct, but the individual coefficients do not match their true values. This is partly because the reference signal is not rich enough, but also due to the flight envelope partitioning. The same simulation without partitioning on the semi-linear F-16 model gives much better estimates.
The results of the above simulations were again characteristic of all scenarios with aileron lock-up failures. Tracking performance of the modular controller is excellent, but parameter convergence to the true values is seldom achieved. The adaptation of the integrated design is less aggressive, mainly due to the use of the continuous dead-zones, but tracking performance is still good.
8.7 Conclusions
In this chapter two Lyapunov-based nonlinear adaptive stability and control augmentation systems are designed for a high-fidelity F-16 model. The first controller is an integrated design, with the feedback control and the dynamic tracking error based update law designed simultaneously using a control Lyapunov function. The second design is an ISS-backstepping controller with a separate recursive least-squares identifier. In order to make real-time implementation of the controllers feasible, the flight envelope is partitioned into locally valid linear-in-the-parameters models using B-spline networks. Only a few local aerodynamic models are updated at each time step, while the information of the other local models is stored. The controllers are designed in such a way that they have nearly identical handling qualities to the baseline F-16 control system over the entire subsonic flight envelope for the nominal, undamaged aircraft model. Numerical simulations with several types of failures were performed to verify the robust performance of the control laws. The results show that good tracking performance can still be accomplished with these failures and that pilot workload is reduced.
Several important observations can be made based on the simulation results and the comparison:
1. Results of numerical simulations show that adaptive flight controllers provide a significant improvement over a non-adaptive NDI design with integral gains for the simulated failure cases. Both adaptive designs show no degradation in performance with the added sensor dynamics and time delays. The flight envelope partitioning method makes real-time implementation of both controllers feasible, although the difference in required computational load and storage space is quite significant. For the least-squares identifier each locally valid aerodynamic model has its own covariance matrix.
2. In general, the modular adaptive design provides the best estimates of the individual aerodynamic coefficients. However, the nonlinear damping gains of the modular design should be tuned with care to avoid high gain feedback signals.
3. The gain tuning of the update laws of the integrated adaptive controller is a very time consuming process, since changing the gains can give very unexpected transients in the closed-loop tracking performance. This is especially true for aggressive maneuvering. In the next chapter an alternative Lyapunov based parameter estimation method is investigated.
4. Tuning of the modular identifier is a much less involved task. However, the recursive least-squares identifier combined with flight envelope partitioning has unexpected problems estimating the true parameters. A different parametrization of the approximator structure or another tuning setting may solve these problems.
5. Enforcing desired handling qualities using the command lters is a trivial task in
the nominal case, since most specications can be implemented directly. The han-
dling qualities can be veried using frequency sweeps and lower order model ts.
166 F-16 STABILITY AND CONTROL AUGMENTATION DESIGN 8.7
Measurements of the handling qualities when a sudden aerodynamic change oc-
curs and the adaptation becomes active have not been obtained, since the dynamic
behavior of the closed-loop system is constantly changing, making it impossible
to t a lower order equivalent model.
Chapter 9

Immersion and Invariance Adaptive Backstepping
The earlier chapters have shown that the dynamic part of integrated adaptive backstepping designs is very difficult to tune, since it is unclear how a higher update gain affects the closed-loop tracking performance of the control system. Furthermore, the dynamic behavior of the controllers is very unpredictable. In this chapter the dynamic part of the controller is replaced with a new kind of estimator based on the immersion and invariance approach. This approach allows for prescribed stable dynamics to be assigned to the parameter estimation error and is therefore much easier to tune. The new immersion and invariance backstepping technique is used to design a new stability and control augmentation system for the F-16, which is compared to the designs of the previous chapter. This chapter can be seen as a follow-up of Chapter 5, where an attempt was made to simplify the performance tuning of the controllers by designing an inverse optimal adaptive backstepping controller.
9.1 Introduction
In the past two decades a considerable amount of literature has been devoted to nonlinear adaptive control design methods for a variety of flight control problems where parametric uncertainties in the system dynamics are involved, see e.g. [61, 124, 132, 150]. Recursive, Lyapunov-based adaptive backstepping is among the most widely studied of these methods. The main attractions of adaptive backstepping based control laws lie in their provable convergence and stability properties as well as in the fact that they can be applied to a broad class of nonlinear systems.
However, despite a number of refinements over the years, the adaptive backstepping method also has a number of shortcomings. The most important of these is that the parameter estimation error is only guaranteed to be bounded and converging to an unknown constant value, yet little can be said about its dynamical behavior. Unexpected dynamical behavior of the parameter update laws may lead to an undesired transient response of the closed-loop system. Furthermore, increasing the adaptation gain will lead to faster parameter convergence, but will not necessarily improve the response of the closed-loop system. This makes it impossible to properly tune an adaptive backstepping controller, especially for large and complex systems such as the high-fidelity F-16 model.
One solution to this problem is to introduce a modular input-to-state stable backstepping approach with a separate identifier that is not of the Lyapunov type, e.g. the well-known recursive least-squares identifier. Since the certainty equivalence principle does not hold in general for nonlinear systems, the control law has to be robustified against the time-varying character of the parameter estimates. However, the nonlinear damping terms introduced to achieve this robustness can lead to undesirable high gain control. Furthermore, the controller loses some of the strong stability properties with respect to the integrated adaptive backstepping approach. Finally, in a real-time application for a complex system, the high dynamic order resulting from using a least-squares identifier with the necessary regressor filtering may be undesirable.
In [40, 102, 103], a different class of Lyapunov-based adaptive controllers has been developed based on the immersion and invariance (I&I) methodology [7]. This approach allows for prescribed stable dynamics to be assigned to the parameter estimation error, thus leading to a modular control scheme which is much easier to tune than an adaptive backstepping controller. However, this shaping of the dynamics relies on the solution of a partial differential matrix inequality, which is difficult to solve for multivariable systems. This limitation is removed in [104] using a dynamic extension consisting of output filters and dynamic scaling factors added to the estimator dynamics.
In this study, the approach of [104] is used to derive a nonlinear adaptive estimator which in combination with a static backstepping feedback controller results in a nonlinear adaptive control framework with guaranteed global asymptotic stability of the closed-loop system. The new design technique is applied to the flight control design problem for the over-actuated, six-degrees-of-freedom fighter aircraft model of Chapter 6 and after that to the SCAS design for the F-16 model of Chapter 2. The results of the numerical simulations are compared directly to the results for the integrated and modular adaptive backstepping controllers of Chapters 6 and 8.
9.2 The Immersion and Invariance Concept
Immersion and invariance is a relatively new approach to designing nonlinear controllers or estimators for (uncertain) nonlinear systems [6]. As the name suggests, the method relies on the well-known notions of system immersion and manifold invariance¹, but used from another perspective. The idea behind the I&I approach is to capture the desired behavior of the system to be controlled with a target dynamical system. This way the control problem is reduced to the design of a control law which guarantees that the controlled system asymptotically behaves like the target system.

¹Formal definitions of immersion and invariant manifolds can be found in Appendix B.3.
The I&I method is applicable to a variety of control problems, but it is easiest to illustrate the approach with a basic stabilization problem of an equilibrium point of a nonlinear system. Consider the general system

$$\dot{x} = f(x) + g(x)u, \qquad (9.1)$$

where $x \in \mathbb{R}^n$ and $u \in \mathbb{R}^m$. The control problem is to find a state feedback control law $u = v(x)$ such that the closed-loop system has a globally asymptotically stable equilibrium at the origin. The first step of the I&I approach is to find a target dynamical system

$$\dot{\xi} = \alpha(\xi), \qquad (9.2)$$

where $\xi \in \mathbb{R}^p$, $p < n$, which has a globally asymptotically stable equilibrium at the origin, a smooth mapping $x = \pi(\xi)$, and a control law $v(x)$ such that

$$f(\pi(\xi)) + g(\pi(\xi))v(\pi(\xi)) = \frac{\partial \pi}{\partial \xi}\alpha(\xi). \qquad (9.3)$$

If these conditions hold, then any trajectory $x(t)$ of the closed-loop system

$$\dot{x} = f(x) + g(x)v(x), \qquad (9.4)$$

is the image through the mapping $\pi(\cdot)$ of a trajectory $\xi(t)$ of the target system (9.2). Note that the rank of $\pi$ is equal to the dimension of $\xi$. The second step is to find a control law that renders the manifold $x = \pi(\xi)$ attractive and keeps the closed-loop trajectories bounded. This way the closed-loop system will asymptotically behave like the desired target system and hence stability is ensured.
From the above discussion, it follows that the control problem has been transformed into the problem of selecting a target dynamical system. This is, in general, a non-trivial task, since the solvability of the underlying control design problem depends on this selection. However, in many cases of practical interest it is possible to identify natural target dynamics. Examples of different applications are given in [6].
In this thesis, the focus lies on adaptive control, hence the I&I approach is used to develop a framework for adaptive stabilization of nonlinear systems with parametric uncertainties. Consider again the system (9.1) with an equilibrium $x_e$ to be stabilized, but where the functions $f(x)$ and $g(x)$ now depend on an unknown parameter vector $\theta \in \mathbb{R}^q$. The goal is to find an adaptive state feedback control law of the form

$$u = v(x, \hat{\theta}), \qquad \dot{\hat{\theta}} = w(x, \hat{\theta}), \qquad (9.5)$$

such that all trajectories of the closed-loop system (9.1), (9.5) are bounded and $\lim_{t\to\infty} x = x_e$. To this end it is assumed that a full-information control law $v(x, \theta)$ exists. The I&I adaptive control problem is then defined as follows [7].
Definition 9.1. The system (9.1) is said to be adaptively I&I stabilizable if there exist functions $\beta(x)$ and $w(x, \hat{\theta})$ such that all trajectories of the extended system

$$\dot{x} = f(x) + g(x)v(x, \hat{\theta} + \beta(x)), \qquad \dot{\hat{\theta}} = w(x, \hat{\theta}), \qquad (9.6)$$

are bounded and satisfy

$$\lim_{t\to\infty}\Big[ g(x(t))v\big(x(t), \hat{\theta}(t) + \beta(x(t))\big) - g(x(t))v(x(t), \theta) \Big] = 0. \qquad (9.7)$$

It is not difficult to see that condition (9.7) holds for all trajectories staying on the manifold

$$\mathcal{M} = \big\{ (x, \hat{\theta}) \in \mathbb{R}^n \times \mathbb{R}^q \;\big|\; \hat{\theta} - \theta + \beta(x) = 0 \big\}.$$

Moreover, by Definition 9.1, adaptive I&I stabilizability implies that

$$\lim_{t\to\infty} x = x_e. \qquad (9.8)$$

Note that the adaptive controller designed with the I&I approach is not of the certainty equivalence type in the strict sense, i.e. the parameter estimate is not used directly by the static feedback controller. Furthermore, note that, in general, $f(x)$ and $g(x)$ depend on the unknown $\theta$ and therefore the parameter estimate $\hat{\theta}$ does not necessarily converge to the true parameter values. However, in many cases it is also possible to establish global stability of the equilibrium $(x, \hat{\theta}) = (x_e, \theta)$. This is illustrated in the following example.
Example 9.1 (Adaptive controller design)
Consider the feedback linearizable system

$$\dot{x} = \theta x^3 + x + u, \qquad (9.9)$$

where $\theta \in \mathbb{R}$ is an unknown constant parameter. If $\theta$ were known, the equilibrium point $x = 0$ would be globally asymptotically stabilized by the control law

$$u = -\theta x^3 - cx, \qquad c > 1. \qquad (9.10)$$

Since $\theta$ is not known, $\theta$ is replaced by its estimate $\hat{\theta}$ in the certainty equivalence controller

$$u = -\hat{\theta}x^3 - cx, \qquad \dot{\hat{\theta}} = w,$$

where $w$ is the parameter update law. As before, the control Lyapunov function is selected as

$$V(x, \hat{\theta}) = \frac{1}{2}x^2 + \frac{1}{2\gamma}\big(\theta - \hat{\theta}\big)^2, \qquad (9.11)$$

with $\gamma > 0$. Selecting the update law

$$w = \gamma x^4 \qquad (9.12)$$

renders the derivative of $V$ equal to

$$\dot{V} = -(c-1)x^2. \qquad (9.13)$$

By Theorem 3.7 the equilibrium $(x, \theta - \hat{\theta}) = 0$ is globally stable and $\lim_{t\to\infty} x = 0$. However, no conclusions can be drawn about the behavior of the parameter estimation error

$$\tilde{\theta} = \hat{\theta} - \theta. \qquad (9.14)$$

The I&I design starts by defining the one-dimensional manifold

$$\mathcal{M} = \big\{ (x, \hat{\theta}) \in \mathbb{R}^2 \;\big|\; \hat{\theta} - \theta + \beta(x) = 0 \big\}$$

in the extended space $(x, \hat{\theta})$, where $\beta(x)$ is a continuous function yet to be specified. If the manifold $\mathcal{M}$ is invariant, the dynamics of the $x$-subsystem of (9.9) restricted to this manifold can be written as

$$\dot{x} = \big( \hat{\theta} + \beta(x) \big)x^3 + x + u. \qquad (9.15)$$

Hence, the dynamics of the system are completely known and the equilibrium $x = 0$ can be asymptotically stabilized by the control law

$$u = -cx - \big( \hat{\theta} + \beta(x) \big)x^3, \qquad c > 1. \qquad (9.16)$$

To render this design feasible, the first step of the I&I approach consists of finding an update law $w$ that renders the manifold $\mathcal{M}$ invariant. To this end, consider the dynamics of the off-the-manifold coordinate, i.e. the estimation error

$$z = \hat{\theta} - \theta + \beta(x), \qquad (9.17)$$

which are given by

$$\dot{z} = w + \frac{\partial \beta}{\partial x}\Big[ \big( \hat{\theta} + \beta(x) - z \big)x^3 + x + u \Big]. \qquad (9.18)$$

If the update law $w$ is selected as

$$w = -\frac{\partial \beta}{\partial x}\Big[ \big( \hat{\theta} + \beta(x) \big)x^3 + x + u \Big] = \frac{\partial \beta}{\partial x}(c-1)x \qquad (9.19)$$

the manifold $\mathcal{M}$ is invariant and the off-the-manifold dynamics are described by

$$\dot{z} = -\frac{\partial \beta}{\partial x}x^3 z. \qquad (9.20)$$

Consider the Lyapunov function $V = \frac{1}{2}z^2$, whose time derivative along the trajectories of (9.20) satisfies

$$\dot{V} = -\frac{\partial \beta}{\partial x}x^3 z^2. \qquad (9.21)$$

To render this expression negative semi-definite, a possible choice for the function $\beta(x)$ is given as

$$\beta(x) = \gamma\frac{x^2}{2}, \qquad \gamma > 0. \qquad (9.22)$$

An alternative solution with dead-zones is given as

$$\beta(x) = \begin{cases} \dfrac{\gamma}{2}(x - \Delta_0)^2 & \text{if } x > \Delta_0 \\ \dfrac{\gamma}{2}(x + \Delta_0)^2 & \text{if } x < -\Delta_0 \\ 0 & \text{if } |x| \le \Delta_0 \end{cases} \qquad (9.23)$$

with $\Delta_0 > 0$ the dead-zone constant. It can be concluded that the system (9.20) has a globally stable equilibrium at zero and $\lim_{t\to\infty} x^3 z = 0$. The resulting closed-loop system can be written in $(x, z)$-coordinates as

$$\dot{x} = -(c-1)x - zx^3, \qquad \dot{z} = -\gamma x^4 z \qquad (9.24)$$

which has a globally stable equilibrium at the origin and $x$ converges to zero. Moreover, the extra $\beta(x)x^3$-term in the control law (9.16) renders the closed-loop system input-to-state stable with respect to the parameter estimation error $z$.

The response of the closed-loop system with the I&I adaptive controller is compared to the response of the system with the standard adaptive controller designed at the beginning of this example. The tuning parameters of both designs are selected the same. The real $\theta$ is equal to 2, but the initial parameter estimate is 0. As can be seen in Figure 9.1, both controllers manage to regulate the state to zero. Note that it is not guaranteed that the estimate of the I&I design converges to the true value, only that $\lim_{t\to\infty} x^3 z = 0$.
The closed-loop system (9.24) can be regarded as a cascaded interconnection between two stable systems which can be tuned via the constants $c$ and $\gamma$. This modularity makes the I&I adaptive controller much easier to tune than the standard adaptive design. As a result, the performance of the adaptive system can be significantly improved.
Figure 9.1: State $x$, control effort $u$ and parameter estimate $\hat{\theta}$ for initial values $x(0) = 2$, $\hat{\theta}(0) = 0$, control gain $c = 2$ and update gain $\gamma = 1$ for the closed-loop system with the standard adaptive design and with the I&I adaptive design.
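The two scalar designs of Example 9.1 are simple enough to check numerically. The sketch below is a forward-Euler simulation with the same settings as Figure 9.1 ($\theta = 2$, $x(0) = 2$, $\hat{\theta}(0) = 0$, $c = 2$, $\gamma = 1$): it integrates the plant $\dot{x} = \theta x^3 + x + u$ under the certainty equivalence controller (9.10)-(9.12) and under the I&I controller (9.16), (9.19) with $\beta(x) = \gamma x^2/2$. The step size and integration scheme are implementation choices, not part of the text.

```python
# Forward-Euler comparison of the standard adaptive and I&I adaptive designs.
# Plant: xdot = theta*x^3 + x + u, with true theta = 2 unknown to the controllers.
def simulate(design, theta=2.0, c=2.0, gamma=1.0, x0=2.0, th0=0.0, dt=1e-4, T=5.0):
    x, th = x0, th0
    for _ in range(int(T / dt)):
        if design == "standard":
            u = -th * x**3 - c * x            # certainty equivalence law (9.10)
            thdot = gamma * x**4              # Lyapunov update law (9.12)
        else:
            beta = gamma * x**2 / 2           # beta(x) from (9.22)
            u = -c * x - (th + beta) * x**3   # I&I control law (9.16)
            thdot = gamma * x * (c - 1) * x   # I&I update law (9.19)
        x += dt * (theta * x**3 + x + u)
        th += dt * thdot
    return x, th

for design in ("standard", "I&I"):
    x_end, th_end = simulate(design)
    print(design, round(x_end, 4), round(th_end, 4))
```

Both runs regulate $x$ to (numerically) zero. In this particular run the I&I estimate $\hat{\theta} + \beta(x)$ also reaches the true value, because the chosen initial condition happens to lie on the manifold $\mathcal{M}$; as the text notes, this is not guaranteed in general.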
9.3 Extension to Higher Order Systems

Extending the I&I approach outlined in the last section to higher-order nonlinear systems with unmatched uncertainties is by no means straightforward. In [102] an attempt is made for the class of lower-triangular nonlinear systems of the form

$$\begin{aligned}
\dot{x}_i &= x_{i+1} + \phi_i(x_1, ..., x_i)^T\theta, \qquad i = 1, ..., n-1 \\
\dot{x}_n &= u + \phi_n(x)^T\theta
\end{aligned} \qquad (9.25)$$

where $x_i \in \mathbb{R}$, $i = 1, ..., n$ are the states, $u \in \mathbb{R}$ the control input, $\phi_i$ the smooth regressors and $\theta \in \mathbb{R}^p$ a vector of unknown constant parameters. The control problem is to track the smooth reference signal $y_r(t)$ (all derivatives known and bounded) with the state $x_1$. The adaptive control design is done in two steps. First, an overparametrized estimator of order $np$ for the unknown parameter vector $\theta$ is designed. In the second step a controller is designed that ensures that $\lim_{t\to\infty} x_1 = y_r$ and all other states are bounded.

9.3.1 Estimator Design

The estimator design starts by defining the estimation errors as

$$\eta_i = \hat{\theta}_i - \theta + \beta_i(x_1, ..., x_i), \qquad i = 1, ..., n, \qquad (9.26)$$
where $\hat{\theta}_i$ are the estimator states and $\beta_i$ are continuously differentiable functions to be defined later. The dynamics of $\eta_i$ are given by

$$\dot{\eta}_i = \dot{\hat{\theta}}_i + \sum_{k=1}^{i}\frac{\partial \beta_i}{\partial x_k}\Big( x_{k+1} + \phi_k(x_1, ..., x_k)^T\theta \Big) = \dot{\hat{\theta}}_i + \sum_{k=1}^{i}\frac{\partial \beta_i}{\partial x_k}\Big( x_{k+1} + \phi_k(x_1, ..., x_k)^T\big( \hat{\theta}_i + \beta_i(x_1, ..., x_i) - \eta_i \big) \Big),$$

where $x_{n+1} = u$ for ease of notation. Update laws for $\hat{\theta}_i$ can be defined as

$$\dot{\hat{\theta}}_i = -\sum_{k=1}^{i}\frac{\partial \beta_i}{\partial x_k}\Big( x_{k+1} + \phi_k(x_1, ..., x_k)^T\big( \hat{\theta}_i + \beta_i(x_1, ..., x_i) \big) \Big) \qquad (9.27)$$

to cancel all the known parts of the $\eta_i$ dynamics, resulting in

$$\dot{\eta}_i = -\Big( \sum_{k=1}^{i}\frac{\partial \beta_i}{\partial x_k}\phi_k(x_1, ..., x_k)^T \Big)\eta_i. \qquad (9.28)$$
The system (9.28) for $i = 1, ..., n$ can be seen as a linear time-varying system with a block diagonal dynamic matrix. Hence, the problem of designing an estimator $\hat{\theta}_i$ is now reduced to the problem of finding functions $\beta_i$ such that the diagonal blocks are rendered negative semi-definite. In [102] the functions $\beta_i$ are selected as

$$\beta_i(x_1, ..., x_i) = \gamma_i\int_0^{x_i}\phi_i(x_1, ..., x_{i-1}, \sigma)\,d\sigma + \bar{\beta}_i(x_i), \qquad \gamma_i > 0, \qquad (9.29)$$

where $\bar{\beta}_i$ are continuously differentiable functions that satisfy the partial differential matrix inequality

$$F_i(x_1, ..., x_i)^T + F_i(x_1, ..., x_i) \ge 0, \qquad i = 2, ..., n, \qquad (9.30)$$

where

$$F_i(x_1, ..., x_i) = \gamma_i\sum_{k=1}^{i-1}\frac{\partial}{\partial x_k}\Big( \int_0^{x_i}\phi_i(x_1, ..., x_{i-1}, \sigma)\,d\sigma \Big)\phi_k(x_1, ..., x_k)^T + \frac{\partial \bar{\beta}_i}{\partial x_i}\phi_i(x_1, ..., x_i)^T.$$

Note that the solvability of (9.30) strongly depends on the structure of the regressors $\phi_i$. For instance, in the case that $\phi_i$ only depends on $x_i$, the trivial solution $\bar{\beta}_i(x_i) = 0$ satisfies the inequality. If (9.30) is solvable, the following lemma can be established [6].
Lemma 9.2. Consider the system (9.25), where the functions $\beta_i$ are given by (9.29) and functions $\bar{\beta}_i$ exist which satisfy (9.30). Then the error system (9.28) has a globally uniformly stable equilibrium at the origin, $\eta_i(t) \in \mathcal{L}_\infty$ and $\phi_i(x_1(t), ..., x_i(t))^T\eta_i(t) \in \mathcal{L}_2$, for all $i = 1, ..., n$ and for all $x_1(t), ..., x_i(t)$. If, in addition, $\phi_i$ and its time derivative are bounded, then $\phi_i(x_1, ..., x_i)^T\eta_i$ converges to zero.
Proof: Consider the Lyapunov function $W(\eta) = \sum_{i=1}^{n}\eta_i^T\eta_i$, whose time derivative along the trajectories of (9.28) is given as

$$\dot{W} = -2\sum_{i=1}^{n}\eta_i^T\Big( \sum_{k=1}^{i}\frac{\partial \beta_i}{\partial x_k}\phi_k(x_1, ..., x_k)^T \Big)\eta_i = -\sum_{i=1}^{n}\eta_i^T\Big( 2\gamma_i\phi_i\phi_i^T + F_i + F_i^T \Big)\eta_i \le -\sum_{i=1}^{n}2\gamma_i\big(\phi_i^T\eta_i\big)^2,$$

where (9.30) was used to obtain the last inequality. The stability properties follow directly from Theorem 3.7.

Note that the above inequality holds for any $u$. Furthermore, by definition (9.26) an asymptotically converging estimate of each unknown term $\phi_i^T\theta$ of the system (9.25) is given by

$$\phi_i(x_1, ..., x_i)^T\big( \hat{\theta}_i + \beta_i(x_1, ..., x_i) \big). \qquad (9.31)$$

Note that an estimate of the $\phi_i^T\theta$ terms is obtained, instead of only an estimate of the parameter $\theta$ as with the Lyapunov based update laws of the earlier chapters.
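The structure of the error dynamics (9.28) can be illustrated numerically. The sketch below integrates a two-parameter instance of the form $\dot{\eta} = -\gamma\,\phi(t)\phi(t)^T\eta$ with an illustrative, persistently exciting regressor $\phi(t)$ (an assumption for this demonstration, not a choice made in the text); Lemma 9.2 guarantees only that $\phi^T\eta$ vanishes, but with persistent excitation $\eta$ itself converges to zero.

```python
import math

# Euler integration of eta_dot = -gamma * phi(t) * (phi(t)^T eta), cf. (9.28).
gamma, dt, T = 1.0, 1e-3, 20.0
eta = [1.0, -0.5]                          # initial estimation error
for step in range(int(T / dt)):
    t = step * dt
    phi = [math.sin(t), math.cos(t)]       # persistently exciting regressor (assumed)
    s = phi[0] * eta[0] + phi[1] * eta[1]  # phi^T eta
    eta = [eta[0] - dt * gamma * phi[0] * s,
           eta[1] - dt * gamma * phi[1] * s]
print([round(v, 4) for v in eta])
```

Without persistent excitation (e.g. a constant $\phi$), only the component of $\eta$ along $\phi$ decays, which is exactly the $\phi^T\eta \to 0$ statement of the lemma.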
9.3.2 Control Design

The properties of the estimator will now be exploited with a backstepping control law. The design procedure starts by defining the tracking errors as

$$z_1 = x_1 - y_r, \qquad z_i = x_i - \alpha_{i-1}, \qquad i = 2, ..., n. \qquad (9.32)$$

Selecting the first virtual control as

$$\alpha_1 = \bar{\alpha}_1(z_1) - \phi_1^T\big( \hat{\theta}_1 + \beta_1 \big) + \dot{y}_r, \qquad (9.34)$$

where $\bar{\alpha}_1(z_1)$ is a stabilizing function to be defined, reduces the $z_1$-dynamics to

$$\dot{z}_1 = z_2 + \bar{\alpha}_1 - \phi_1^T\eta_1. \qquad (9.35)$$

Assume for the moment that $z_2 \equiv 0$, i.e. $\alpha_1$ is the real control. Then the above expression can be seen as a stable system perturbed by an $\mathcal{L}_2$ signal. Consider now the Lyapunov
function $V_1(z_1, \eta_1) = z_1^2 + \eta_1^T\eta_1$. Taking the derivative along the trajectories of (9.35) and (9.28) results in

$$\begin{aligned}
\dot{V}_1 &= 2\bar{\alpha}_1 z_1 - 2z_1\phi_1^T\eta_1 - 2\gamma_1\big(\phi_1^T\eta_1\big)^2 \\
&= 2\bar{\alpha}_1 z_1 + \kappa z_1^2 - \Big( \sqrt{\kappa}\,z_1 + \frac{\phi_1^T\eta_1}{\sqrt{\kappa}} \Big)^2 - \Big( 2\gamma_1 - \frac{1}{\kappa} \Big)\big(\phi_1^T\eta_1\big)^2 \\
&\le 2\bar{\alpha}_1 z_1 + \kappa z_1^2 - \Big( 2\gamma_1 - \frac{1}{\kappa} \Big)\big(\phi_1^T\eta_1\big)^2,
\end{aligned}$$

where $\kappa > 0$ is a constant. Substituting $\bar{\alpha}_1 = -(c_1 + \kappa/2)\,z_1$, with gain $c_1 > 0$, reduces the derivative of $V_1$ to

$$\dot{V}_1 \le -2c_1 z_1^2 - \Big( 2\gamma_1 - \frac{1}{\kappa} \Big)\big(\phi_1^T\eta_1\big)^2.$$

By Theorem 3.7 it follows that if $\gamma_1 \ge \frac{1}{2\kappa}$ the closed-loop $(z_1, \eta_1)$-subsystem has a globally uniformly stable equilibrium at the origin and $\lim_{t\to\infty} z_1 = 0$, $\lim_{t\to\infty}\phi_1^T\eta_1 = 0$.
Since $z_2 \ne 0$ in general, the approach is extended to design a backstepping controller for the complete system, i.e.

$$\alpha_{i+1} = \bar{\alpha}_i - \phi_i^T\big( \hat{\theta}_i + \beta_i \big) + \sum_{k=1}^{i-1}\frac{\partial \alpha_{i-1}}{\partial x_k}\Big( x_{k+1} + \phi_k^T\big( \hat{\theta}_k + \beta_k \big) \Big) + \sum_{k=1}^{i-1}\frac{\partial \alpha_{i-1}}{\partial \hat{\theta}_k}\dot{\hat{\theta}}_k + y_r^{(i)}, \qquad u = \alpha_{n+1}, \qquad (9.36)$$

where

$$\bar{\alpha}_i = -\Big( c_i + \frac{\kappa}{2} \Big)z_i - \frac{\kappa}{2}\sum_{k=1}^{i-1}\Big( \frac{\partial \alpha_{i-1}}{\partial x_k} \Big)^2 z_i - z_{i-1}, \qquad i = 1, ..., n,$$

where $c_i > 0$ and $\kappa > 0$. Note that nonlinear damping terms have to be introduced to compensate for the derivative terms of the virtual controls. This is necessary, since command filters are not used in this backstepping design. To prove stability of the closed-loop system with the above backstepping control law and the I&I based estimator designed in the previous section, the Lyapunov function $V(z, \eta) = W(\eta) + \sum_{k=1}^{n} z_k^2$ is introduced. Taking the derivative of $V$ results in

$$\dot{V} \le -2\sum_{i=1}^{n} c_i z_i^2 - \sum_{i=1}^{n}\Big( 2\gamma_i - \frac{n-i+1}{\kappa} \Big)\big(\phi_i^T\eta_i\big)^2.$$

It can be concluded that, if $2\gamma_i \ge \frac{n-i+1}{\kappa}$, then $\lim_{t\to\infty} z_i = 0$ and $\lim_{t\to\infty}\phi_i^T\eta_i = 0$. This concludes the overparametrized, nonlinear adaptive control design, which can be used as an alternative to the tuning functions adaptive backstepping approach if functions $\bar{\beta}_i(x_i)$ can be found that satisfy (9.30), as is demonstrated for a wing rock example in [102].
9.4 Dynamic Scaling and Filters

In the previous section a first attempt was made to design an adaptive backstepping controller with an I&I based estimator. The estimator allows for prescribed dynamics to be assigned to the parameter estimation error, which leads to a modular adaptive backstepping design that is much easier to tune than the integrated approaches discussed in earlier chapters. Furthermore, the modular design does not suffer from the weaknesses of the certainty equivalence modular design of Section 6.2. However, the shaping of the dynamics relies on the solution of a partial differential matrix inequality, which is, in general, very difficult to solve for most physical systems.

This limitation of the estimator design was removed in [104] with the introduction of a dynamic scaling factor in the estimator dynamics and by adding an output filter to the design. Dynamic scaling has been widely used in the design of high-gain observers, see e.g. [165]. In this section an I&I estimator with dynamic scaling and output filter is combined with a command filtered backstepping control design approach to arrive at a modular adaptive control framework.
Consider the class of linearly parametrized systems of the form

$$\dot{x}_i = x_{i+1} + \phi_i(x, u)^T\theta_i, \qquad i = 1, ..., n, \qquad (9.37)$$

with states $x_i \in \mathbb{R}$, $i = 1, ..., n$ and control input $u \in \mathbb{R}$. Note that for notational convenience $x_{n+1} = u$. The functions $\phi_i(x, u)$ are the known, smooth regressors and $\theta_i \in \mathbb{R}^{p_i}$ are vectors of unknown constant parameters. The control objective is to track a smooth reference signal $x_{1,r}$, for which the first derivative is known and bounded, with the state $x_1$.
9.4.1 Estimator Design with Dynamic Scaling

The construction of an estimator for $\theta_i$ starts by defining the scaled estimation errors as

$$\eta_i = \frac{\hat{\theta}_i + \beta_i(x_i, \hat{x}) - \theta_i}{r_i}, \qquad i = 1, ..., n, \qquad (9.38)$$

where $r_i$ are scalar dynamic scaling factors, $\hat{\theta}_i$ are the estimator states and $\beta_i(x_i, \hat{x})$ continuously differentiable vector functions yet to be specified. Let $e_i = \hat{x}_i - x_i$, then the filtered states $\hat{x}_i$ are obtained from

$$\dot{\hat{x}}_i = x_{i+1} + \phi_i(x)^T\big( \hat{\theta}_i + \beta_i(x_i, \hat{x}) \big) - k_i(x, r, e)e_i, \qquad (9.39)$$
where $k_i(x, r, e)$ are positive functions. Using the above definitions, the dynamics of $\eta_i$ are given by

$$\begin{aligned}
\dot{\eta}_i &= \frac{1}{r_i}\Big[ \dot{\hat{\theta}}_i + \frac{\partial \beta_i}{\partial x_i}\Big( x_{i+1} + \phi_i(x)^T\theta_i \Big) + \sum_{j=1}^{n}\frac{\partial \beta_i}{\partial \hat{x}_j}\dot{\hat{x}}_j \Big] - \frac{\dot{r}_i}{r_i}\eta_i \\
&= \frac{1}{r_i}\Big[ \dot{\hat{\theta}}_i + \frac{\partial \beta_i}{\partial x_i}\Big( x_{i+1} + \phi_i(x)^T\big( \hat{\theta}_i + \beta_i(x_i, \hat{x}) - r_i\eta_i \big) \Big) + \sum_{j=1}^{n}\frac{\partial \beta_i}{\partial \hat{x}_j}\dot{\hat{x}}_j \Big] - \frac{\dot{r}_i}{r_i}\eta_i.
\end{aligned}$$
By selecting the update laws for $\hat{\theta}_i$ as

$$\dot{\hat{\theta}}_i = -\frac{\partial \beta_i}{\partial x_i}\Big( x_{i+1} + \phi_i(x)^T\big( \hat{\theta}_i + \beta_i(x_i, \hat{x}) \big) \Big) - \sum_{j=1}^{n}\frac{\partial \beta_i}{\partial \hat{x}_j}\dot{\hat{x}}_j, \qquad (9.40)$$

the dynamics of $\eta_i$ are reduced to

$$\dot{\eta}_i = -\frac{\partial \beta_i}{\partial x_i}\phi_i(x)^T\eta_i - \frac{\dot{r}_i}{r_i}\eta_i = -\Big( \frac{\partial \beta_i}{\partial x_i}\phi_i(x)^T + \frac{\dot{r}_i}{r_i} \Big)\eta_i. \qquad (9.41)$$
The system (9.41) can again be seen as a linear time-varying system with a block diagonal dynamic matrix. In order to render the diagonal blocks negative semi-definite, the functions $\beta_i(x_i, \hat{x})$ are selected as

$$\beta_i(x_i, \hat{x}) = \gamma_i\int_0^{x_i}\phi_i(\hat{x}_1, ..., \hat{x}_{i-1}, \sigma, \hat{x}_{i+1}, ..., \hat{x}_n)\,d\sigma, \qquad (9.42)$$

where $\gamma_i > 0$. Since the regressors $\phi_i(x)$ are continuously differentiable, the expression

$$\sum_{j=1}^{n} e_j\delta_{ij}(x, e) = \phi_i(x) - \phi_i(\hat{x}_1, ..., \hat{x}_{i-1}, x_i, \hat{x}_{i+1}, ..., \hat{x}_n), \qquad \delta_{ii} \equiv 0, \qquad (9.43)$$

holds for some functions $\delta_{ij}(x, e)$. Substituting (9.42) and (9.43) into (9.41) yields the $\eta_i$-dynamics

$$\dot{\eta}_i = -\gamma_i\phi_i(x)\phi_i(x)^T\eta_i + \gamma_i\sum_{j=1}^{n} e_j\delta_{ij}(x, e)\phi_i(x)^T\eta_i - \frac{\dot{r}_i}{r_i}\eta_i. \qquad (9.44)$$

Furthermore, from (9.37) and (9.39), the dynamics of $e_i = \hat{x}_i - x_i$ are given by

$$\dot{e}_i = -k_i(x, r, e)e_i + r_i\phi_i(x)^T\eta_i. \qquad (9.45)$$

The system consisting of (9.44) and (9.45) has an equilibrium at zero, which can be rendered globally uniformly stable by selecting the dynamics of the scaling factors $r_i$ and the functions $k_i(x, r, e)$ as defined in the following lemma [104].
Lemma 9.3. Consider the system (9.37) and let

$$\dot{r}_i = c_i r_i\sum_{j=1}^{n} e_j^2\,|\delta_{ij}(x, e)|^2, \qquad r_i(0) = 1, \qquad (9.46)$$

with $c_i \ge \gamma_i n/2$, where $|\cdot|$ denotes the 2-norm, and

$$k_i(x, r, e) = \lambda_i r_i^2 + \varepsilon\sum_{j=1}^{n} c_j r_j^2\,|\delta_{ji}(x, e)|^2, \qquad (9.47)$$

where $\lambda_i > 0$ and $\varepsilon > 0$ are constants. Then the system consisting of (9.44), (9.45) and (9.46) has a globally uniformly stable manifold of equilibria defined by $\mathcal{M} = \{(\eta, r, e)\,|\,\eta = e = 0\}$. Moreover, $\eta_i(t) \in \mathcal{L}_\infty$, $r_i(t) \in \mathcal{L}_\infty$, $e_i(t) \in \mathcal{L}_2 \cap \mathcal{L}_\infty$ and $\phi_i(x(t))^T\eta_i(t) \in \mathcal{L}_2$ for all $i = 1, ..., n$. If, in addition, $\phi_i(x(t))$ and its time derivative are bounded, it follows that $\lim_{t\to\infty}\phi_i(x(t))^T\eta_i = 0$.
Proof: Consider the Lyapunov function $V_i(\eta_i) = \frac{1}{2\gamma_i}\eta_i^T\eta_i$. Taking the time derivative of $V_i$ along the trajectories of (9.44) results in

$$\begin{aligned}
\dot{V}_i &= -\big(\phi_i^T\eta_i\big)^2 + \sum_{j=1}^{n} e_j\,\eta_i^T\delta_{ij}\,\phi_i^T\eta_i - \frac{\dot{r}_i}{\gamma_i r_i}|\eta_i|^2 \\
&= -\big(\phi_i^T\eta_i\big)^2 + \sum_{j=1}^{n}\Big[ \frac{1}{2n}\big(\phi_i^T\eta_i\big)^2 + \frac{n}{2}e_j^2\big(\delta_{ij}^T\eta_i\big)^2 \Big] - \sum_{j=1}^{n}\Big( \frac{1}{\sqrt{2n}}\phi_i^T\eta_i - \sqrt{\frac{n}{2}}\,e_j\delta_{ij}^T\eta_i \Big)^2 - \frac{\dot{r}_i}{\gamma_i r_i}|\eta_i|^2 \\
&\le -\frac{1}{2}\big(\phi_i^T\eta_i\big)^2 + \frac{n}{2}\sum_{j=1}^{n} e_j^2\big(\delta_{ij}^T\eta_i\big)^2 - \frac{\dot{r}_i}{\gamma_i r_i}|\eta_i|^2.
\end{aligned}$$

Substituting the dynamic scaling terms $\dot{r}_i$ as given by (9.46) and applying the inequality $|\delta_{ij}^T\eta_i| \le |\delta_{ij}||\eta_i|$, the remaining indefinite term can be canceled such that

$$\dot{V}_i \le -\frac{1}{2}\big(\phi_i^T\eta_i\big)^2 < 0, \qquad \eta_i \ne 0.$$

Hence, the system (9.44) has a globally uniformly stable equilibrium at the origin, $\eta_i(t) \in \mathcal{L}_\infty$ and $\phi_i(x(t))^T\eta_i(t) \in \mathcal{L}_2$ for all $i = 1, ..., n$. If $\phi_i(x(t))$ and its time derivative are bounded, it follows from Barbalat's lemma that $\lim_{t\to\infty}\phi_i(x(t))^T\eta_i = 0$. This implies that an asymptotic estimate of each parametric uncertainty term $\phi_i(x)^T\theta_i$ in (9.37) is given by the term $\phi_i(x)^T\big( \hat{\theta}_i + \beta_i(x_i, \hat{x}) \big)$.
The next design step is to select the positive functions $k_i(x, r, e)$ in such a way that the dynamics of $e_i$, given by (9.45), become globally asymptotically stable. Taking the time derivative of the augmented Lyapunov function $W_i(\eta_i, e_i) = \frac{1}{2}e_i^2 + \frac{1}{\lambda_i}V_i$, with constant $\lambda_i > 0$, results in

$$\begin{aligned}
\dot{W}_i &\le -k_i e_i^2 + r_i\phi_i^T\eta_i\,e_i - \frac{1}{2\lambda_i}\big(\phi_i^T\eta_i\big)^2 \\
&= -k_i e_i^2 + \frac{\lambda_i}{2}r_i^2 e_i^2 - \Big( \sqrt{\frac{\lambda_i}{2}}\,r_i e_i - \frac{1}{\sqrt{2\lambda_i}}\phi_i^T\eta_i \Big)^2 \le -\Big( k_i - \frac{\lambda_i}{2}r_i^2 \Big)e_i^2.
\end{aligned}$$

It is clear that selecting $k_i(x, r, e) > \frac{\lambda_i}{2}r_i^2$ renders the above expression negative definite, thus the equilibrium $(\eta_i, e_i) = (0, 0)$ is globally uniformly stable and $e_i(t) \in \mathcal{L}_2 \cap \mathcal{L}_\infty$.

A final design step has to be made to ensure that the dynamic scalings $r_i$ remain bounded. Consider the Lyapunov function $V_e(e, \eta, r) = \sum_{i=1}^{n}\big[ W_i(\eta_i, e_i) + \frac{\varepsilon}{2}r_i^2 \big]$, whose derivative satisfies

$$\dot{V}_e \le -\sum_{i=1}^{n}\Big( k_i(x, r, e) - \frac{\lambda_i}{2}r_i^2 \Big)e_i^2 + \sum_{i=1}^{n}\Big[ \varepsilon c_i r_i^2\sum_{j=1}^{n} e_j^2\,|\delta_{ij}(x, e)|^2 \Big].$$

Selecting $k_i(x, r, e)$ as given by (9.47) to cancel the indefinite terms ensures $\dot{V}_e \le -\sum_{i=1}^{n}\frac{\lambda_i}{2}r_i^2 e_i^2$, which proves that $r_i(t) \in \mathcal{L}_\infty$ and $\lim_{t\to\infty} e_i(t) = 0$. The functions $k_i(x, r, e)$ contain a nonlinear damping term to achieve boundedness of $r_i$, but the constant $\varepsilon$ multiplying the damping term can be chosen arbitrarily small.

This completes the design of the estimator, which consists of the output filters (9.39), the update laws (9.40) and the dynamic scalings (9.46). Note that the estimator, in general, employs overparametrization, which is not necessarily disadvantageous from a performance point of view. However, in a numerical implementation it can lead to a higher computational load. The total order of the estimator is $\sum_{i=1}^{n} p_i + 2n$.
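As a quick sanity check on this bookkeeping, the snippet below evaluates $\sum_i p_i + 2n$: one vector of parameter estimates per uncertain state equation, plus one output filter state and one scaling factor each. The $p_i$ values are those of the aircraft example in Section 9.5 (scalar $\theta_2$, $\theta_3$ and eleven entries each for $\theta_5$, $\theta_6$, $\theta_7$); they serve only as an illustration of the formula, not as part of the general design.

```python
# Total dynamic order of the I&I estimator: sum(p_i) + 2n.
p = {2: 1, 3: 1, 5: 11, 6: 11, 7: 11}   # parameter dimensions per uncertain equation
n = len(p)                              # number of equations with uncertainty
order = sum(p.values()) + 2 * n         # estimates + output filters + scalings
print(order)                            # -> 45
```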
9.4.2 Command Filtered Control Law Design

In this section the command filtered backstepping approach is used to close the loop and complete the adaptive control design. The procedure starts by defining the tracking errors as

$$z_i = x_i - x_{i,r}, \qquad i = 1, ..., n, \qquad (9.48)$$

where $x_{i,r}$ are the intermediate control laws to be designed. The modified tracking errors are defined as

$$\bar{z}_i = z_i - \chi_i, \qquad (9.49)$$
with the signals $\chi_i$ to be defined. The dynamics of $z_i$ can be written as

$$\dot{z}_i = z_{i+1} + x_{i+1,r} + \phi_i^T\theta_i - \dot{x}_{i,r}, \qquad \dot{z}_n = u + \phi_n^T\theta_n - \dot{x}_{n,r}. \qquad (9.50)$$

The idea is now to design a control law that renders the closed-loop system $\mathcal{L}_2$ stable from the perturbation inputs $\phi_i^T\eta_i$ to the output $z_1$ and keeps all signals bounded. To stabilize (9.50) the following desired (intermediate) controls are proposed:

$$\begin{aligned}
x_{i+1,r}^0 &= \bar{\alpha}_i - \bar{z}_{i-1} - \phi_i^T\big( \hat{\theta}_i + \beta_i \big) + \dot{x}_{i,r}, \qquad i = 1, ..., n-1, \\
u^0 &= \bar{\alpha}_n - \bar{z}_{n-1} - \phi_n^T\big( \hat{\theta}_n + \beta_n \big) + \dot{x}_{n,r}
\end{aligned} \qquad (9.51)$$

with the stabilizing functions $\bar{\alpha}_i$ given as

$$\bar{\alpha}_i = -c_i\bar{z}_i - \frac{\varepsilon r_i^2}{2}\bar{z}_i - \bar{k}_i\zeta_i,$$

for $i = 1, ..., n$, where $c_i > 0$, $\varepsilon > 0$ and $\bar{k}_i \ge 0$ are constants. The integral terms are defined as

$$\zeta_i = \int_0^t \bar{z}_i(\tau)\,d\tau.$$
The desired (intermediate) control laws (9.51) are fed through second-order low-pass filters to produce the actual intermediate controls $x_{i+1,r}$, $u$ and their derivatives. The effect that the use of these filters has on the tracking errors can be captured with the stable linear filters

$$\dot{\chi}_i = -c_i\chi_i + \big( x_{i+1,r} - x_{i+1,r}^0 \big), \qquad i = 1, ..., n-1, \qquad \dot{\chi}_n = -c_n\chi_n + \big( u - u^0 \big). \qquad (9.52)$$
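The second-order low-pass command filter mentioned above can be sketched in a few lines. It returns both the smoothed command and its time derivative, which is what the backstepping design needs but cannot obtain by analytic differentiation; the natural frequency and damping ratio below are illustrative values, not ones prescribed by the text, and rate/magnitude limits are omitted for brevity.

```python
# Second-order command filter: xddot = wn^2 (raw - x) - 2 zeta wn xdot,
# integrated with forward Euler; returns (filtered value, derivative) pairs.
def command_filter(raw, dt, wn=50.0, zeta=0.7, x=0.0, xdot=0.0):
    out = []
    for r in raw:
        xddot = wn**2 * (r - x) - 2.0 * zeta * wn * xdot
        x += dt * xdot
        xdot += dt * xddot
        out.append((x, xdot))
    return out

dt = 1e-3
hist = command_filter([1.0] * 2000, dt)   # unit-step raw command for 2 s
print(hist[-1])
```

After the transient the filtered signal settles on the raw command and the derivative output returns to zero; with a sufficiently high bandwidth `wn` the filter-induced deviation $x_{i+1,r} - x^0_{i+1,r}$ captured by (9.52) stays small.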
The stability properties of the adaptive control framework based on this command filtered backstepping controller in combination with the I&I based estimator design of Lemma 9.3 can be proved using the control Lyapunov function

$$V_c(\bar{z}, \zeta, \eta) = \sum_{i=1}^{n}\Big( \bar{z}_i^2 + \bar{k}_i\zeta_i^2 + \eta_i^T\eta_i \Big). \qquad (9.53)$$
Taking the time derivative of $V_c$ and following some of the steps used in the proof of Lemma 9.3 results in

$$\begin{aligned}
\dot{V}_c &\le 2\sum_{i=1}^{n-1}\bar{z}_i\Big( \bar{z}_{i+1} + x_{i+1,r} + \phi_i^T\theta_i - \dot{x}_{i,r} - \dot{\chi}_i \Big) + 2\bar{z}_n\Big( u + \phi_n^T\theta_n - \dot{x}_{n,r} - \dot{\chi}_n \Big) \\
&\quad + \sum_{i=1}^{n} 2\bar{k}_i\zeta_i\bar{z}_i - \sum_{i=1}^{n}\Big( 2\gamma_i - \frac{1}{2} \Big)\big(\phi_i^T\eta_i\big)^2 \\
&= 2\sum_{i=1}^{n}\bar{z}_i\Big( -c_i\bar{z}_i - \frac{\varepsilon r_i^2}{2}\bar{z}_i - \phi_i^T\eta_i\,r_i \Big) - \sum_{i=1}^{n}\Big( 2\gamma_i - \frac{1}{2} \Big)\big(\phi_i^T\eta_i\big)^2 \\
&= -2\sum_{i=1}^{n} c_i\bar{z}_i^2 - \sum_{i=1}^{n}\Big( \frac{1}{\sqrt{\varepsilon}}\phi_i^T\eta_i + \sqrt{\varepsilon}\,r_i\bar{z}_i \Big)^2 - \sum_{i=1}^{n}\Big( 2\gamma_i - \frac{1}{2} - \frac{1}{\varepsilon} \Big)\big(\phi_i^T\eta_i\big)^2 \\
&\le -2\sum_{i=1}^{n} c_i\bar{z}_i^2 - \sum_{i=1}^{n}\Big( 2\gamma_i - \frac{1}{2} - \frac{1}{\varepsilon} \Big)\big(\phi_i^T\eta_i\big)^2.
\end{aligned}$$

It can be concluded that, if $\gamma_i \ge \frac{\varepsilon + 2}{4\varepsilon}$, the closed-loop system consisting of (9.37), (9.51) and the I&I based estimator of the previous section, which consists of the output filters (9.39), the update laws (9.40) and the dynamic scalings (9.46), has a globally stable equilibrium. Furthermore, by Theorem 3.7 $\lim_{t\to\infty}\bar{z}_i = 0$ and $\lim_{t\to\infty}\phi_i^T\eta_i = 0$ (if $\phi_i$ and its time derivative are bounded). When the command filters are properly designed, i.e. with sufficiently high bandwidths, and no rate or magnitude limits are in effect, $\bar{z}_i$ will converge to a close neighborhood of the real tracking errors $z_i$. This concludes the discussion on the modular I&I based adaptive backstepping control design.
9.5 Adaptive Flight Control Example
In this section the approach discussed in Section 9.4 is used to construct a nonlinear adaptive flight control law for the simplified aircraft model of Chapter 6 with the equations of motion given by (6.33). It will be demonstrated that the I&I estimator design with dynamic scalings can be applied directly to a multivariable system. The control objective is to track smooth reference signals with $\alpha$, $\beta$ and $\mu$. It is assumed that all stability and control derivatives are unknown. A scheme of the proposed modular adaptive flight controller is depicted in Figure 9.2.

Before the adaptive control design procedure begins, the aircraft dynamic model (6.33) is rewritten in a more general form. Define the states $x_1 = \mu$, $x_2 = \alpha$, $x_3 = \beta$, $x_4 = \gamma$, $x_5 = p$, $x_6 = q$, $x_7 = r$ and the control inputs $u = (\delta_{el}, \delta_{er}, \delta_{al}, \delta_{ar}, \delta_{lef}, \delta_{tef}, \delta_r)^T$, then the system (6.33) can be rewritten as

$$\dot{x}_i = f_i(x) + \phi_i(x, u)^T\theta_i, \qquad i = 1, ..., 7, \qquad (9.54)$$
Figure 9.2: Modular adaptive I&I backstepping control framework. (Block diagram: pilot commands pass through prefilters to the backstepping control law with onboard model and the control allocation, while the nonlinear adaptive estimator consists of parameter update laws, output filters and dynamic scaling.)
with $f_i(x)$ the known functions, the unknown parameter vectors

$$\begin{aligned}
\theta_1 &= 0, \qquad \theta_2 = z_\alpha, \qquad \theta_3 = y_\beta, \qquad \theta_4 = 0, \\
\theta_5 &= \big( l_p, l_q, l_r, l_{\beta\alpha}, l_{r\alpha}, l_0, l_{\delta_{el}}, l_{\delta_{er}}, l_{\delta_{al}}, l_{\delta_{ar}}, l_{\delta_r} \big)^T, \\
\theta_6 &= \big( m_\alpha, m_q, m_{\dot{\alpha}}, m_0, m_{\delta_{el}}, m_{\delta_{er}}, m_{\delta_{al}}, m_{\delta_{ar}}, m_{\delta_{lef}}, m_{\delta_{tef}}, m_{\delta_r} \big)^T, \\
\theta_7 &= \big( n_\beta, n_p, n_q, n_r, n_{p\alpha}, n_0, n_{\delta_{el}}, n_{\delta_{er}}, n_{\delta_{al}}, n_{\delta_{ar}}, n_{\delta_r} \big)^T,
\end{aligned}$$

and the regressors

$$\begin{aligned}
\phi_1(x, u) &= 0, \qquad \phi_2(x, u) = x_2 - \alpha_0, \qquad \phi_3(x, u) = x_3, \qquad \phi_4(x, u) = 0, \\
\phi_5(x, u) &= \big( x_5,\ x_6,\ x_7,\ x_3(x_2 - \alpha_0),\ x_7(x_2 - \alpha_0),\ 1,\ u_1,\ u_2,\ u_3,\ u_4,\ u_7 \big)^T, \\
\phi_6(x, u) &= \Big( x_2 - \alpha_0,\ x_6,\ -x_5 x_3 + \frac{g_0}{V}\big(\cos x_4\cos x_1 - \cos\gamma_0\big),\ 1,\ u_1,\ ...,\ u_7 \Big)^T, \\
\phi_7(x, u) &= \big( x_3,\ x_5,\ x_6,\ x_7,\ x_5(x_2 - \alpha_0),\ 1,\ u_1,\ u_2,\ u_3,\ u_4,\ u_7 \big)^T.
\end{aligned}$$

Note that the parameters $l_0$, $m_0$ and $n_0$ have been added to the unknown parameter vectors to compensate for any additional moments caused by failures, e.g. actuator hardovers.
9.5.1 Adaptive Control Design
The design of the command filtered backstepping feedback control law is identical
to the static backstepping part of the flight control design of Chapter 6. Note that the
nonlinear damping terms are not needed to guarantee stability in combination with an
I&I based estimator, but they are kept in for the sake of comparison. The I&I estimator
design of Section 9.4.1 can be applied directly to the rewritten aircraft equations of mo-
tion (9.54).
Following the estimator design procedure of Section 9.4.1, the scaled estimation errors
184 IMMERSION AND INVARIANCE ADAPTIVE BACKSTEPPING 9.5
are defined as

z_i = \frac{\hat\theta_i + \beta_i(\hat{x}_i, x) - \theta_i}{r_i}, \qquad i = 2, 3, 5, 6, 7. \qquad (9.55)
Let the output errors be given by e_i = x_i - \hat{x}_i; the output filters are then defined as

\dot{\hat{x}}_i = f_i + \Phi_i^T\left(\hat\theta_i + \beta_i\right) + k_i e_i, \qquad i = 2, 3, 5, 6, 7.
Note that no output filters are needed for the x_1- and x_4-dynamics, since they contain no
uncertainties. The estimator dynamics are given by

\dot{\hat\theta}_i = -\frac{\partial\beta_i}{\partial\hat{x}_i}\left(\dot{\hat{x}}_i + k_i e_i\right) - \sum_{j=1}^{7}\frac{\partial\beta_i}{\partial x_j}\dot{\hat{x}}_j - \sum_{k=1}^{7}\frac{\partial\beta_i}{\partial u_k}\dot{u}_k,
where the functions \beta_i(\hat{x}_i, x) are obtained from (9.42), i.e.

\beta_2 = \gamma_2\left(\tfrac{1}{2}\hat{x}_2^2 - \alpha_0\hat{x}_2\right), \qquad \beta_3 = \gamma_3\,\tfrac{1}{2}\hat{x}_3^2,

\beta_5 = \gamma_5\hat{x}_5\left(\tfrac{1}{2}\hat{x}_5, x_3, x_6, x_7, x_3(x_2-\alpha_0), x_7(x_2-\alpha_0), 1, u_1, u_2, u_3, u_4, u_7\right)^T,

\beta_6 = \gamma_6\hat{x}_6\left(x_2-\alpha_0, \tfrac{1}{2}\hat{x}_6, x_5 x_3 + \frac{g_0}{V}(\cos x_4 \cos x_1 - \cos\theta_0), 1, u_1, \ldots, u_7\right)^T,

\beta_7 = \gamma_7\hat{x}_7\left(x_3, x_5, x_6, \tfrac{1}{2}\hat{x}_7, x_5(x_2-\alpha_0), 1, u_1, u_2, u_3, u_4, u_7\right)^T,
with \gamma_i > 0. Note that the derivative of the control vector is required in the estimator
design. This derivative is obtained directly from the command filters used in the last step
of the static backstepping control design. Taking the time derivatives of the functions \beta_i
results in
\dot\beta_2 = \gamma_2\Phi_2\dot{\hat{x}}_2, \qquad \dot\beta_3 = \gamma_3\Phi_3\dot{\hat{x}}_3,

\dot\beta_5 = \gamma_5\Phi_5\dot{\hat{x}}_5 - \gamma_5 e_2(0,0,0,0,x_3,x_7,0,0,0,0,0,0)^T - \gamma_5 e_3(0,1,0,0,x_2-e_2-\alpha_0,0,0,0,0,0,0,0)^T
\qquad - \gamma_5 e_6(0,0,1,0,0,0,0,0,0,0,0,0)^T - \gamma_5 e_7(0,0,0,1,0,x_2-e_2-\alpha_0,0,0,0,0,0,0)^T + \sum_{k=1}^{7}\frac{\partial\beta_5}{\partial u_k}\dot{u}_k
= \gamma_5\Phi_5\dot{\hat{x}}_5 - \gamma_5 e_2\Phi_{52} - \gamma_5 e_3\Phi_{53} - \gamma_5 e_6\Phi_{56} - \gamma_5 e_7\Phi_{57} + \sum_{k=1}^{7}\frac{\partial\beta_5}{\partial u_k}\dot{u}_k,
\dot\beta_6 = \gamma_6\Phi_6\dot{\hat{x}}_6 - \gamma_6 e_2(1,0,0,0,0,0,0,0,0,0,0)^T - \gamma_6 e_3(0,0,x_5-e_5,0,0,0,0,0,0,0,0)^T
\qquad - \gamma_6 e_5(0,0,x_3,0,0,0,0,0,0,0,0)^T + \sum_{k=1}^{7}\frac{\partial\beta_6}{\partial u_k}\dot{u}_k
= \gamma_6\Phi_6\dot{\hat{x}}_6 - \gamma_6 e_2\Phi_{62} - \gamma_6 e_3\Phi_{63} - \gamma_6 e_5\Phi_{65} + \sum_{k=1}^{7}\frac{\partial\beta_6}{\partial u_k}\dot{u}_k,

\dot\beta_7 = \gamma_7\Phi_7\dot{\hat{x}}_7 - \gamma_7 e_2(0,0,0,0,x_5,0,0,0,0,0,0)^T - \gamma_7 e_3(1,0,0,0,0,0,0,0,0,0,0)^T
\qquad - \gamma_7 e_5(0,1,0,0,x_2-e_2-\alpha_0,0,0,0,0,0,0)^T - \gamma_7 e_6(0,0,1,0,0,0,0,0,0,0,0)^T + \sum_{k=1}^{7}\frac{\partial\beta_7}{\partial u_k}\dot{u}_k
= \gamma_7\Phi_7\dot{\hat{x}}_7 - \gamma_7 e_2\Phi_{72} - \gamma_7 e_3\Phi_{73} - \gamma_7 e_5\Phi_{75} - \gamma_7 e_6\Phi_{76} + \sum_{k=1}^{7}\frac{\partial\beta_7}{\partial u_k}\dot{u}_k,
where the bracketed terms correspond to the functions \Phi_{ij}(x, e) of (9.43). Finally, from
(9.46) and (9.47) the dynamic scaling parameters r_i and the gains k_i are given by

\dot{r}_i = \frac{5}{2\lambda_i} r_i \sum_{j=1}^{7} e_j^2\,|\Phi_{ij}(x,e)|^2 \qquad \text{and} \qquad k_i(x, r, e) = \lambda_i r_i^2 + \sum_{j=1}^{7} c\, r_j^2\,|\Phi_{ji}(x,e)|^2,
with \lambda_i > 0, c > 0 and r_i(0) = 1. This completes the nonlinear estimator design for the
over-actuated aircraft model. The tracking performance and parameter estimation capa-
bilities of the adaptive controller resulting from combining this nonlinear estimator with
the command filtered adaptive backstepping approach can now be evaluated in numerical
simulations.
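The mechanics of the I&I estimator can be illustrated on a scalar toy problem. The sketch below is a simplified, assumption-laden analogue of the design above: a scalar plant, no output filters or dynamic scaling, and explicit Euler integration. It shows the core idea that stable dynamics are assigned to the off-the-manifold coordinate so that the combined estimate converges to the unknown parameter.

```python
# Scalar illustration of the I&I estimator idea: for the plant
# x_dot = -x + x*theta with unknown theta, pick beta(x) = gamma * x^2 / 2
# (gamma times the integral of the regressor phi(x) = x, mirroring (9.58)).
# The off-the-manifold coordinate z = theta_hat + beta(x) - theta then
# satisfies z_dot = -gamma * x^2 * z, so theta_hat + beta(x) -> theta.
# Plant, gains and step sizes are illustrative assumptions only.

def simulate_ii_scalar(theta=0.5, gamma=5.0, x0=1.0, dt=1e-4, T=20.0):
    x, theta_hat = x0, 0.0
    for _ in range(int(T / dt)):
        beta = 0.5 * gamma * x * x
        est = theta_hat + beta                 # estimate on the manifold
        x_dot = -x + x * theta                 # true (unknown-parameter) plant
        # d(theta_hat)/dt = -dbeta/dx * (plant model evaluated with est)
        theta_hat_dot = -gamma * x * (-x + x * est)
        x += dt * x_dot
        theta_hat += dt * theta_hat_dot
    return theta_hat + 0.5 * gamma * x * x     # final parameter estimate
```

Unlike a tracking error driven gradient law, the estimation error here has prescribed first-order dynamics, which is why increasing the adaptation gain directly speeds up convergence.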
9.5.2 Numerical Simulation Results
This section presents the simulation results from the application of the adaptive flight
controller developed in the previous section to the over-actuated fighter aircraft model
of Section 6.3, implemented in the MATLAB/Simulink environment. The simulations
are performed at two flight conditions, for which the aerodynamic data can be found in
Tables 6.1 and 6.2.
The command filtered, static backstepping controller is tuned in a trial-and-error proce-
dure on the nominal aircraft model. The final control and nonlinear damping gains were
chosen identical to the ones used for the adaptive controllers in Chapter 6.
Tuning of the I&I estimator is relatively straightforward, since increasing the adaptation
gains \gamma_i not only increases the adaptation rate but also improves the closed-loop perfor-
mance. This is in contrast with the integrated adaptive backstepping approach used in
Chapter 6, where increasing the adaptation gains can lead to worsened transient per-
formance. The influence of the other estimator parameters, \lambda_i and c, on the
tracking performance is very limited; simply selecting them inside the bounds defined in
Section 9.4 is enough to guarantee convergence of the filtered states to the true states and
boundedness of the dynamic scaling parameters. The final gain and parameter selection
is: \gamma_i = 10, \lambda_i = 0.01, i = 1, \ldots, 7, and c = 0.01.
Simulation with Left Aileron Runaway
In this first simulation a mixed maneuver involving a series of angle of attack and roll
angle doublets is considered. The aircraft model starts at flight condition 2 in trimmed
horizontal flight; after 1 second of simulation time the left aileron suffers a hard-
over failure and moves to its limit of 45 degrees. This failure results in a large additional
rolling moment and minor additional pitching and yawing moments. Note that the adaptive
controller does not use any sensor measurements of the control surface positions or any
other form of fault detection.
The results of this simulation can be found in Figure D.20 of Appendix D. Note that this
maneuver is identical to scenario 3 of Chapter 6, which means the results in Figure D.20
can be compared directly with the plots of Figures D.1 and D.2.
The adaptive controller manages to rapidly return the states to their reference values
after the failure. Of course, the coupling between the longitudinal and lateral motions is
more prominent in the response after the failure. It can be seen in Figure D.20(c) that the
total moment coefficients post-failure are estimated rapidly and accurately. However, the
individual parameters have not converged to their true values, since this maneuver alone
does not provide the estimator with enough information.
In Figure D.20(d) some additional parameters of the I&I estimator are plotted, i.e. the
dynamic scaling parameters r_i. All signals behave as they should: the dynamic scalings
converge to constant values, the filter states follow the aircraft model states and the
prediction errors converge to zero. In the control surface plots it can be seen that most
of the additional moment is compensated by the right aileron and the left elevator. The
simple pseudo-inverse control allocation scheme does not give any preference to certain
control surfaces or axes. The tracking performance and parameter convergence of the
adaptive controller are very good for this failure case.
Simulation with Left Elevator Runaway
The second simulation is again a mixed maneuver involving a series of angle of attack
and roll angle doublets at flight condition 2. The aircraft model starts in straight, hori-
zontal flight; after 1 second of simulation time the left elevator suffers a hard-over
9.6 F-16 STABILITY AND CONTROL AUGMENTATION DESIGN 187
failure and moves to its limit of 10.5 degrees. The simulation results of this maneuver can
be found in Figure D.21, which is again divided into four subplots. The results of this same
simulation scenario for the adaptive controllers of Chapter 6 can be found in Figures D.3
and D.4. Note, however, that a more sophisticated control allocation approach was used
there.
The results demonstrate again that the adaptive controller performs excellently. The total
moment coefficients are rapidly found by the estimator and the tracking performance is ex-
cellent. However, the individual components of the parameter estimate vectors do not
converge to their true values. It is interesting to note that the new adaptive design man-
ages to recover good performance without saturating any of the other control surfaces in
this failure scenario, unlike the adaptive flight controllers of Chapter 6.
9.6 F-16 Stability and Control Augmentation Design
The next step is to apply the I&I adaptive backstepping approach to the problem of de-
signing a SCAS for the high-fidelity F-16 model of Chapter 2 and to compare its perfor-
mance with the integrated and modular SCAS designs of the previous chapter. First, the
I&I estimator for the dynamic F-16 model uncertainties is derived; after that, the
simulation scenarios of the previous chapter are performed once again for the adaptive
backstepping flight controller with the new estimator.
9.6.1 Adaptive Control Design
The static nonlinear backstepping SCAS design has already been discussed in Section
8.2. This flight controller will be used again, but the tracking error driven adaptation
process is replaced by an I&I based estimator. The I&I estimator with dynamic scaling
of Section 9.4 can be applied directly to the F-16 model if the multiple model approach
with B-spline networks is once again selected to simplify the approximation process. The
size and number of the networks are selected identical to the ones used for the adaptive
backstepping flight control laws of Chapter 8.
Before the design of the estimator starts, the relevant equations of motion are written in
the more general form

\dot{x}_i = f_i(x) + \Phi_i(x,u)^T \theta_i, \qquad i = 1, \ldots, 6, \qquad (9.56)
with the states x_1 = V_T, x_2 = \alpha, x_3 = \beta, x_4 = p, x_5 = q, x_6 = r and the inputs
u_1 = \delta_e, u_2 = \delta_a, u_3 = \delta_r. Here the f_i(x) represent the known parts of the F-16 model
dynamics, given by
f_1(x) = \frac{1}{m}\left[-D_0 + F_T\cos x_2\cos x_3 + m g_1\right]

f_2(x) = x_5 - (x_4\cos x_2 + x_6\sin x_2)\tan x_3 + \frac{-L_0 - F_T\sin x_2 + m g_3}{m x_1\cos x_3}

f_3(x) = -(x_6\cos x_2 - x_4\sin x_2) + \frac{Y_0 - F_T\cos x_2\sin x_3 + m g_2}{m x_1}

f_4(x) = (c_1 x_6 + c_2 x_4)\,x_5 + c_3\bar{L}_0 + c_4\left(\bar{N}_0 + H_{eng} x_5\right)

f_5(x) = c_5 x_4 x_6 - c_6\left(x_4^2 - x_6^2\right) + c_7\left(\bar{M}_0 - H_{eng} x_6\right)

f_6(x) = (c_8 x_4 - c_2 x_6)\,x_5 + c_4\bar{L}_0 + c_9\left(\bar{N}_0 + H_{eng} x_5\right),
where L_0, Y_0, D_0, \bar{L}_0, \bar{M}_0 and \bar{N}_0 are the known, nominal values of the aerodynamic
forces and moments. The second term of (9.56) describes the uncertainties in the aircraft
model. As an example, the approximation of the uncertainty in the total drag is given as
\Phi_1(x,u)^T\theta_1 = \frac{\bar{q}S}{m}\left[\Delta C_{D_0}(x_2,x_3) + \Delta C_{D_q}(x_2)\frac{x_5\bar{c}}{2x_1} + \Delta C_{D_{\delta_e}}(x_2,u_1)\,u_1\right]
= \frac{\bar{q}S}{m}\left[\lambda^T_{C_{D_0}}(x_2,x_3),\; \lambda^T_{C_{D_q}}(x_2)\frac{x_5\bar{c}}{2x_1},\; \lambda^T_{C_{D_{\delta_e}}}(x_2,u_1)\,u_1\right]\left(\theta^T_{C_{D_0}}, \theta^T_{C_{D_q}}, \theta^T_{C_{D_{\delta_e}}}\right)^T,
where the \lambda^T_{C_D}(\cdot) are vectors containing the third-order B-spline basis functions that form
a first or second order B-spline network, and where
\Phi_2(x,u)^T\theta_2 = \frac{\bar{q}S}{m x_1\cos x_3}\left[\Delta C_{L_0}(x_2,x_3) + \Delta C_{L_q}(x_2)\frac{x_5\bar{c}}{2x_1} + \Delta C_{L_{\delta_e}}(x_2,u_1)\,u_1\right]

\Phi_3(x,u)^T\theta_3 = \frac{\bar{q}S}{m x_1}\left[\Delta C_{Y_0}(x_2,x_3) + \Delta C_{Y_p}(x_2)\frac{x_4 b}{2x_1} + \Delta C_{Y_r}(x_2)\frac{x_6 b}{2x_1} + \Delta C_{Y_{\delta_a}}(x_2,x_3)\,u_2 + \Delta C_{Y_{\delta_r}}(x_2,x_3)\,u_3\right]

\Phi_4(x,u)^T\theta_4 = c_3\bar{q}Sb\left[\Delta C_{l_0}(x_2,x_3) + \Delta C_{l_p}(x_2)\frac{x_4 b}{2x_1} + \Delta C_{l_r}(x_2)\frac{x_6 b}{2x_1} + \Delta C_{l_{\delta_a}}(x_2,x_3)\,u_2 + \Delta C_{l_{\delta_r}}(x_2,x_3)\,u_3\right]

\Phi_5(x,u)^T\theta_5 = c_7\bar{q}S\bar{c}\left[\Delta C_{m_0}(x_2,x_3) + \Delta C_{m_q}(x_2)\frac{x_5\bar{c}}{2x_1} + \Delta C_{m_{\delta_e}}(x_2,u_1)\,u_1\right]

\Phi_6(x,u)^T\theta_6 = c_9\bar{q}Sb\left[\Delta C_{n_0}(x_2,x_3) + \Delta C_{n_p}(x_2)\frac{x_4 b}{2x_1} + \Delta C_{n_r}(x_2)\frac{x_6 b}{2x_1} + \Delta C_{n_{\delta_a}}(x_2,x_3)\,u_2 + \Delta C_{n_{\delta_r}}(x_2,x_3)\,u_3\right],
where all the coefficients are again approximated with B-spline networks. Note that,
to avoid overparametrization, the roll and yaw moment error approximators are not de-
signed to estimate the real errors, but rather pseudo-estimates. It is possible to estimate
the real errors, but this would result in additional update laws and thus increase the dy-
namic order of the adaptation process.
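The B-spline basis functions mentioned above can be evaluated with the Cox–de Boor recursion. The generic, self-contained sketch below (uniform knots, quadratic splines, i.e. third order) illustrates the construction; it is not the specific network configuration used for the F-16 partitioning.

```python
def bspline_basis(knots, degree, i, t):
    """Cox-de Boor recursion for the value of the i-th B-spline basis
    function of the given degree (order = degree + 1) at point t."""
    if degree == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    value = 0.0
    left_den = knots[i + degree] - knots[i]
    if left_den != 0:
        value += (t - knots[i]) / left_den * bspline_basis(knots, degree - 1, i, t)
    right_den = knots[i + degree + 1] - knots[i + 1]
    if right_den != 0:
        value += (knots[i + degree + 1] - t) / right_den * \
                 bspline_basis(knots, degree - 1, i + 1, t)
    return value
```

On a uniform knot vector the bases are compactly supported and sum to one inside the valid span, which is what allows the approximator to blend smoothly between neighboring local models.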
Now that the system is rewritten in the standard form, the I&I estimator design of Section
9.4 can be followed directly. The scaled estimation errors are again defined as

z_i = \frac{\hat\theta_i + \beta_i(\hat{x}_i, x) - \theta_i}{r_i}, \qquad i = 1, \ldots, 6. \qquad (9.57)
In Section 9.4, the functions \beta_i in the above expression were selected as

\beta_i(\hat{x}_i, x) = \gamma_i \int_0^{\hat{x}_i} \Phi_i(x_1, \ldots, x_{i-1}, \sigma, x_{i+1}, \ldots, x_n)\, d\sigma, \qquad (9.58)
where the \gamma_i are the adaptation gains. The analytic calculation of \beta_i(\hat{x}_i, x) for the F-16
model is relatively time-consuming, since the expression

\sum_{j=1}^{6} e_j \Phi_{ij}(x, e) = \Phi_i(x) - \Phi_i(\hat{x}_1, \ldots, \hat{x}_{i-1}, \sigma, \hat{x}_{i+1}, \ldots, \hat{x}_n), \qquad \Phi_{ii} \equiv 0, \qquad (9.59)

has to be solved for some functions \Phi_{ij}(x, e). This is an even more tedious process due
to the B-spline basis functions. However, it is still possible to solve the above expres-
sion. This concludes the discussion of the I&I estimator design for the high-fidelity F-16
dynamic model.
9.6.2 Numerical Simulation Results
In this section the numerical simulation results are presented for the application of the
flight control system with the I&I based estimator, derived in the previous section, to the
high-fidelity, six-degrees-of-freedom F-16 model in a number of failure scenarios and
maneuvers. The scenarios are identical to the ones considered in the previous chapter,
so that the closed-loop responses for the new adaptive flight control design can be com-
pared directly with the earlier results. For the same reason, the control gains and com-
mand filter parameters of the backstepping SCAS design are selected the same as in the
previous chapter. The I&I estimator is tuned in a trial-and-error procedure, using the
bounds derived in Section 9.4. Tuning is quite intuitive, as expected, and it is not difficult
to find an adaptation gain selection that provides good results in all considered failure
scenarios. The final gain and parameter selection for the estimator is:
\gamma_2, \gamma_5 = 0.1, \quad \gamma_1, \gamma_3, \gamma_4, \gamma_6 = 0.01, \quad \lambda_i = 0.01, \ i = 1, \ldots, 6, \quad \text{and} \quad c = 0.01.
Simulation Results with C_{m_q} = 0
The first series of simulations considers again the sudden reduction of the longitudinal
damping coefficient C_{m_q} to zero. As discussed before, this is not a very critical change,
but it does serve as a nice example to evaluate the ability of the adaptation
scheme to accurately estimate inaccuracies in the onboard model. Figure D.22 of Ap-
pendix D.4 contains the simulation results for the I&I backstepping design starting at
flight condition 2 with the longitudinal stick commanding a series of pitch doublets; after
20 seconds of simulation the sudden change in C_{m_q} takes place. The left hand side plots
show the inputs and response of the aircraft in solid lines, while the dotted lines are the
reference trajectories. Tracking performance both before and after the change in pitch
damping is excellent. The time histories of the dynamic scalings and the output errors
of the filters used by the I&I estimator are not shown, but the scalings all converge to
constant values and the errors converge to zero as expected.
The solid lines in the right hand side plots of Figure D.22 represent the changes in aero-
dynamic coefficients w.r.t. the nominal values, divided by the maximum absolute value
of the real aerodynamic coefficients to normalize them. The dotted lines are the normal-
ized true errors between the altered and nominal aircraft models. The change in C_{m_q} is
clearly visible in the plots. It can be seen that the estimator does not succeed in estimating
the individual components of the pitch moment correctly. This is to be expected for such
an insignificant error. The simulation results for this failure at the other flight conditions
exhibit the same characteristics.
Simulation Results with Longitudinal c.g. Shifts
A second series of simulations considers a much more complex failure scenario where
the longitudinal center of gravity of the aircraft model is shifted. Especially backward
shifts can be quite critical, since they are destabilizing and can even result in a loss of
static stability margin. All pitching and yawing aerodynamic moment coefficients will
change as a result of a longitudinal c.g. shift. For a model inversion based design the
changes are far more critical, and loss of stability will often occur for destabilizing shifts
without robust or adaptive compensation.
Figure D.23 depicts the simulation results for the F-16 model with the I&I based back-
stepping controller starting at flight condition 1 with the longitudinal stick commanding
a series of small amplitude pitch doublets; after 20 seconds the c.g. instantly shifts back-
ward by 0.06\bar{c} and the small positive static margin is lost. As can be seen in the left hand
side plots, the tracking performance of the I&I based backstepping design is very good.
However, once again the right hand side plots demonstrate that the individual compo-
nents are not estimated correctly.
Compared to the results of Chapter 8, the tracking performance of the new adaptive de-
sign in this simulation scenario is superior to that of the other two adaptive
designs. The integrated adaptive design of Chapter 8 is also more aggressive in its re-
sponse, resulting from the non-ideal adaptation gains selected after the difficult tuning
process.
Simulation Results with Aileron Lockups
The last series of simulations considers controlling the aircraft model with right aileron
lockups or hard-overs. At 20 seconds simulation time the right aileron suddenly moves
to a certain offset: -21.5, -10.75, 0, 10 or 21.5 degrees. It should again be noted that the
public domain F-16 model does not contain a differential elevator; hence only the rudder
9.7 CONCLUSIONS 191
and the left aileron can be used to compensate for these failures. The pilot should be able
to compensate for this failure, but it would result in a very high workload.
The results of a simulation performed with the integrated controller at flight condition
4 with a right aileron lockup at -10.75 degrees can be seen in Figure D.24. One lateral
stick doublet is performed before the failure occurs and three more are performed after
60 seconds of further simulation. The response of the I&I adaptive design resembles the
response of the modular adaptive design of Chapter 8, i.e. it is much better than the response
of the integrated adaptive backstepping controller. The I&I based adaptive design still
achieves good tracking performance after the failure, and even the sideslip angle is
regulated back to zero. The additional forces and moments resulting from the error are
identified correctly, but the individual components are not.
9.7 Conclusions
In this chapter, the immersion and invariance technique is combined with backstepping,
and the resulting adaptive control scheme is applied to the flight control problems of
Chapters 6 and 8. The control scheme makes use of an invariant manifold based estimator
with dynamic scalings and output filters to help guarantee attractivity of the manifold.
The controller itself is based on the backstepping approach with command filters to avoid
the analytic computation of the virtual control derivatives. Global asymptotic stability of
the closed-loop system and parameter convergence of the complete adaptive controller
can be proved with a single Lyapunov function. The controllers have been evaluated
in numerical simulations and the results have been compared with the integrated and
modular adaptive designs of Chapters 6 and 8.
Based on the simulation results several observations can be made:
1. The main advantage of the invariant manifold approach over a conventional adap-
tive backstepping controller with tracking error driven update laws is that it allows
prescribed stable dynamics to be assigned to the parameter estimation error.
Furthermore, this approach does not suffer from undesired transient performance
resulting from unexpected dynamical behavior of parameter update laws that are
strongly coupled with the static feedback control part. As a result the adaptive con-
troller is much easier to tune, since a large update gain will improve the closed-loop
transient performance. Therefore, it is possible to achieve a better performance of
the closed-loop system, as is demonstrated in several simulation scenarios. In fact,
the closed-loop system resulting from the application of the I&I based adaptive
backstepping controller can be seen as a cascade interconnection of two sta-
ble systems with prescribed asymptotic properties.
2. The new I&I based modular adaptive controller does not require nonlinear damp-
ing terms, which could potentially result in high gain feedback, to prove closed-loop
stability. This is a big advantage over the modular backstepping control design
with a least-squares identifier. Obviously, least-squares still has the appeal that it
has the capability of automatically adjusting the adaptation gain matrix. However,
this comes at the cost of a higher dynamic order of the estimator.
3. A minor disadvantage of the I&I based modular adaptive backstepping approach is
that the estimator employs overparametrization, which means that, in general, the
dynamic order of the estimator is higher than for an integrated adaptive backstep-
ping controller. Hence, the computational load is also higher. However, this does
not play a role in the aircraft control design problems considered in Chapters 6 and
8, though for the trajectory control problem of Chapter 7 the I&I estimator would
require more states than the tracking error driven update laws of the constrained
adaptive backstepping controller.
4. Another disadvantage is that the analytical derivation of the I&I estimator in com-
bination with the B-spline networks used for the partitioning of the F-16 model is
relatively time-consuming.
Chapter 10
Conclusions and Recommendations
This thesis describes the development of adaptive flight control systems for a modern
fighter aircraft. Adaptive backstepping techniques in combination with online model
identification based on multiple models connected by B-spline networks have been used
as the main design tools. Several algorithms have been considered for the online model
adaptation. In this chapter the main conclusions of the research are provided. New re-
search questions can be formulated based on these conclusions, and these are formulated
in the form of recommendations for further research.
10.1 Conclusions
This thesis has aimed to contribute to the development of computationally efficient recon-
figurable or adaptive flight control systems using nonlinear control design techniques and
online model identification, all based on well-founded mathematical proofs. The adaptive
backstepping technique was investigated as the main design framework; this choice was
based on the strong stability and convergence properties of the method, as discussed in
the introduction. For the online model identification a multiple model approach based
on flight envelope partitioning was proposed to keep the required computational load at
an acceptable level and to create a numerically stable algorithm. The considered methods
have been investigated and adapted throughout the thesis to improve their weaknesses
for the considered flight control problems. Finally, numerical simulations involving a
high-fidelity F-16 dynamic model with several types of uncertainties and failures have
been used to validate the proposed adaptive flight control designs. The main conclusions
and results of the thesis are summarized below.
194 CONCLUSIONS AND RECOMMENDATIONS 10.1
Constrained Adaptive Backstepping
The standard adaptive backstepping approach has a number of shortcomings, two of the
most important being its analytical complexity and its sensitivity to input saturation. The
analytical complexity of the design procedure is mainly due to the calculation of the
derivatives of the virtual controls at each intermediate design step. Especially for high
order systems or complex multivariable systems such as aircraft dynamics, it becomes
very tedious to calculate the derivatives analytically. The parameter update laws of the
standard adaptive backstepping procedure are driven by the tracking errors, which makes
them sensitive to input saturation. If input saturation is in effect and the desired control
cannot be achieved, the tracking errors will in general become larger and no longer be
the result of function approximation errors exclusively. As a result the parameter update
laws may start to unlearn.
In Chapter 4 both shortcomings are solved by introducing command filters in the design
approach. The idea is to filter the virtual controls to calculate the derivatives and at
the same time enforce the input or state limits. The effect that these limits have on
the tracking errors is measured using a set of first order linear filters. Compensated
tracking errors, in which the effect of the limits has been removed, are defined and used to
drive the parameter update laws. If no magnitude or rate limits are in effect on
the command filters and their bandwidth is selected sufficiently high, the performance
of the constrained adaptive backstepping approach can be made arbitrarily close to that
of the standard adaptive backstepping approach. If the limits on the command filters are
in effect, the real tracking errors may increase, but the compensated tracking errors that
drive the estimation process are unaffected. Hence, the dynamic update laws will not
unlearn due to magnitude or rate limits on the inputs and states used for (virtual) control.
An additional advantage of the command filters in the design is that the application is
no longer restricted to uncertain nonlinear systems of a lower triangular form. For these
reasons, the constrained adaptive backstepping approach serves as a basis for all the
control designs developed in this thesis.
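The magnitude- and rate-limiting command filter described above can be sketched as a second-order system with saturations on its internal signals. The structure below follows the usual constrained backstepping form; the gains and limits (natural frequency, damping, magnitude 0.5, rate 1.0) are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def command_filter(cmd, wn=20.0, zeta=0.9, mag=0.5, rate=1.0, dt=1e-3):
    """Second-order command filter: returns a magnitude- and rate-limited
    version of the command signal together with its time derivative."""
    sat = lambda v, lim: np.clip(v, -lim, lim)
    q1, q2 = 0.0, 0.0          # filtered command and its derivative
    out, dout = [], []
    for c in cmd:
        q1_dot = q2
        # rate saturation on the demanded derivative, magnitude saturation
        # on the raw command; explicit Euler integration
        q2_dot = 2 * zeta * wn * (sat(wn / (2 * zeta) * (sat(c, mag) - q1),
                                      rate) - q2)
        q1 += dt * q1_dot
        q2 += dt * q2_dot
        out.append(q1)
        dout.append(q2)
    return np.array(out), np.array(dout)
```

Both the limited signal and its derivative are produced, which is what removes the analytic computation of the virtual control derivatives in the backstepping recursion.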
Inverse Optimal Adaptive Backstepping
The tuning functions and constrained adaptive backstepping designs are both focused on
achieving stability and convergence rather than performance or optimality. To this end
the static and dynamic parts of the adaptive backstepping controllers are designed si-
multaneously in a recursive manner. This way the very strong stability and convergence
properties of the controllers can be proved using a single control Lyapunov function. A
drawback of this design approach is that, because there is strong coupling between the
static and dynamic feedback parts, it is unclear how changes in the adaptation gain affect
the tracking performance.
In an attempt to solve this problem, inverse optimal control theory was combined with the
tuning functions backstepping approach in Chapter 5 to develop an inverse optimal adaptive
backstepping control design for a general class of nonlinear systems with parametric
uncertainties.
10.1 CONCLUSIONS 195
An additional advantage of a control law that is (inverse) optimal with respect to some
meaningful cost functional is its inherent robustness with respect to external disturbances
and model uncertainties.
However, nonlinear damping terms were utilized to achieve the inverse optimality, result-
ing in high gain feedback terms in the design. These nonlinear damping terms resulted in
a very robust control design, but also in a numerically very sensitive design. The non-
linear damping terms even removed the need for parameter adaptation. Furthermore, the
complexity of the cost functional associated with the inverse optimal design did not make
performance tuning any more transparent. It can be concluded that the inverse optimal
adaptive backstepping approach is unsuitable for the type of control problems considered
in this thesis.
Integrated Versus Modular Adaptive Backstepping Flight Control
In Chapter 6 the constrained adaptive backstepping approach was applied to the de-
sign of a flight control system for a simplified, nonlinear, over-actuated fighter aircraft
model valid at two flight conditions. It is demonstrated that the extension of the adap-
tive backstepping control method to multi-input multi-output systems is straightforward.
A comparison with a more traditional modular adaptive controller that employs a least
squares identifier was made to illustrate the advantages and disadvantages of an inte-
grated adaptive design. The modular controller employs regressor filtering and nonlinear
damping terms to guarantee closed-loop stability and to robustify the design against po-
tential faster-than-linear growth of the system nonlinearities. Furthermore, the interactions
between several control allocation algorithms and the online model identification were
studied in simulations with actuator failures.
The results of numerical simulations demonstrated that both adaptive flight controllers
provide a significant improvement over a non-adaptive NDI/backstepping design in the
presence of actuator lockup failures. The success rate and performance of both adaptive
designs with a simple pseudo inverse control allocation are comparable for most failure
cases. However, in combination with weighted control allocation methods the success
rate and also the performance of the modular adaptive design are shown to be superior.
This is mainly due to the better parameter estimates obtained by the least squares identi-
fication method. The Lyapunov-based update laws of the constrained adaptive backstep-
ping design, in general, do not estimate the true values of the unknown parameters. It is
shown that especially the estimate of the control effectiveness of the damaged surfaces
is much more accurate with the modular adaptive design. It can be concluded that the
constrained adaptive backstepping approach is best used in combination with the simple
pseudo inverse control allocation to prevent unexpected results.
An advantage of the constrained adaptive backstepping design is that even for this sim-
ple example the computational load is much lower, since the gradient based identifier has
fewer states than the least-squares identifier and does not require any regressor filtering.
Furthermore, the modular adaptive design requires regressor filtering and nonlin-
ear damping terms to compensate for the fact that the least-squares identifier is too slow
to deal with nonlinear growth, i.e. the certainty equivalence principle does not hold. The
high gain associated with nonlinear damping terms can lead to numerical instability prob-
lems. The identifier of the constrained adaptive backstepping design is much faster and
does not suffer from this problem. For these reasons the integrated adaptive backstepping
approach is deemed more suitable than the modular approach for the design of a reconfigurable
flight control system, and it is therefore tested on the high-fidelity F-16 model.
Full Envelope Adaptive Backstepping Flight Control
In Chapters 7 and 8 two control design problems for the high-fidelity, subsonic F-16
dynamic model were considered: trajectory control and SCAS design. The trajectory
control problem is quite challenging, since the system to be controlled has a high relative
degree, resulting in a multivariable, four loop adaptive feedback design. The SCAS de-
sign, on the other hand, can be compared directly with the baseline flight control system
of the F-16.
A flight envelope partitioning method is used to capture the globally valid nonlinear aero-
dynamic model in multiple locally valid aerodynamic models. The Lyapunov-based
update laws of the adaptive backstepping method only update a few local models at each
time step, thereby keeping the computational load of the algorithm at a minimum and
making real-time implementation feasible. An additional advantage of using multiple
local models is that information of the models that are not updated at a certain time step
is retained, thereby giving the approximator memory capabilities. B-spline networks are
used to ensure smooth transitions between the different regions and have been selected
for their excellent numerical properties. The partitioning for the F-16 has been done man-
ually, based on earlier modeling studies and the fact that the aerodynamic data is already
available in a suitable tabular form.
Numerical simulation results of several maneuvers demonstrate that trajectory control
can still be accomplished under the investigated uncertainties and failures, while good
tracking performance is maintained. Compared to the other nonlinear adaptive trajectory
control designs in the literature, such as standard adaptive backstepping or sliding mode control
in combination with feedback linearization, the approach is much simpler to apply,
while the online estimation process is more robust to saturation effects.
Results of numerical simulations for the SCAS design demonstrate that the adaptive controller
provides a significant improvement over a non-adaptive NDI design for the simulated
failure cases. The adaptive design shows no degradation in performance with the
added sensor dynamics and time delays. Feeding the reference signal through command
filters makes it trivial to enforce desired handling qualities for the constrained adaptive
backstepping controller in the nominal case. The handling qualities were verified using
frequency sweeps and lower-order equivalent model analysis.
However, the adaptation gain tuning for the update laws of the constrained adaptive backstepping
controller is a very time-consuming and unintuitive process, since changing the
identifier gains can result in unexpected transients in the closed-loop tracking performance.
This is especially true for the SCAS design, since more aggressive maneuvering
is considered. It is very difficult to find a set of gains that gives adequate performance
for all considered failure cases at the selected flight conditions. These results demonstrate
that an alternative to the tracking error driven identifier has to be found when complex
flight control problems are considered.
I&I Adaptive Backstepping
Despite a number of refinements introduced in this thesis, the adaptive backstepping
method with tracking error driven gradient update laws still has a major shortcoming.
The estimation error is only guaranteed to be bounded and to converge to an unknown
constant. However, not much can be said about its dynamical behavior, which may be unacceptable
in terms of the closed-loop transient performance. Increasing the adaptation
gain will not necessarily improve the response of the system, due to the strong coupling
between system and estimator dynamics. The modular adaptive backstepping designs
with least-squares identifier as derived in Chapter 6 do not suffer from this problem.
However, this type of design requires unwanted nonlinear damping terms to compensate
for the slowness of the estimation-based identifier.
In Chapter 9 an alternative way of constructing a nonlinear estimator is introduced,
based on the I&I approach. This approach allows prescribed stable dynamics to be
assigned to the parameter estimation error. The resulting estimator is combined with the
command filtered backstepping approach to form a modular adaptive control scheme.
Robust nonlinear damping terms are not required in the backstepping design, since the
I&I based estimator is fast enough to capture the potential faster-than-linear growth of
nonlinear systems. The new modular scheme is much easier to tune than the ones resulting
from the constrained adaptive backstepping approach. In fact, the closed-loop system
resulting from the application of the I&I based adaptive backstepping controller can be
seen as a cascaded interconnection of two stable systems with prescribed asymptotic
properties. As a result, the performance of the closed-loop system with adaptive
controller can be improved significantly.
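The core of the I&I construction, assigning prescribed stable dynamics to the estimation error, can be illustrated on a hypothetical scalar example (the system, gain and input below are illustrative, not taken from the thesis). For the plant ẋ = θx + u with unknown θ, choosing the shaping function β(x) = γx²/2 and the estimate θ̂ = θ̄ + β(x) makes the estimation error z = θ̂ − θ obey the prescribed stable law ż = −γx²z:

```python
import numpy as np

# Hypothetical scalar plant: x_dot = theta * x + u, theta unknown to the estimator.
theta = -1.0
gamma = 2.0                      # estimator gain (design choice)

def beta(x):                     # I&I shaping function, chosen so dbeta/dx = gamma * x
    return 0.5 * gamma * x * x

dt, T = 1e-3, 20.0
x, theta_bar = 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    u = np.sin(t)                # exciting input
    theta_hat = theta_bar + beta(x)
    # Update law cancels the known part of beta's time derivative, so that the
    # estimation error z = theta_hat - theta obeys z_dot = -gamma * x**2 * z.
    theta_bar += dt * (-gamma * x * (theta_hat * x + u))
    x += dt * (theta * x + u)    # plant integration (Euler)

theta_hat = theta_bar + beta(x)
```

With the persistently exciting input the error factor exp(−γ∫x² dτ) decays rapidly, so θ̂ approaches the true value without any tracking-error feedback in the update law.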
The flight control problems of Chapters 6 and 8 have been tackled again using the new
I&I based modular adaptive backstepping scheme. A comparison of the simulation results
has demonstrated that it is indeed possible to achieve a much higher level of tracking
performance with the new design technique. Moreover, the I&I based modular adaptive
backstepping approach has even stronger provable stability and convergence properties
than the integrated adaptive backstepping approaches discussed in this thesis, while at the
same time achieving modularity in the design of the controller and identifier modules.
It can be concluded that the I&I based modular adaptive backstepping design has great
potential for this type of control problem: the resulting adaptive flight control systems
perform excellently in the considered failure scenarios, while the identifier tuning process
is relatively straightforward.
A minor disadvantage of the I&I based modular adaptive backstepping approach is that
the estimator employs overparametrization, i.e. in general more than one update law is
used to estimate each unknown parameter. Overparametrization is not necessarily disadvantageous
from a performance point of view, but it is less efficient in a numerical
implementation of the controller. Overparametrization does not play a role in the aerodynamic
angle control design problems considered in Chapters 6 and 8. However, for
the trajectory control problem of Chapter 7 the I&I estimator would require more states
than the tracking error driven update laws of the constrained adaptive backstepping approach.
Another minor disadvantage is that the analytical derivation of the I&I estimator
in combination with the B-spline networks used for the partitioning of the F-16 model is
relatively time-consuming, but this additional effort is marginal when compared to the effort
required to perform the tuning process of the integrated adaptive backstepping update
laws.
Comparison of Adaptive Flight Control Frameworks
The overall performance of the three methods used for flight control design in this thesis
is now compared using several important criteria, such as design complexity and tracking
performance. A table with the results of this comparison can be found in Figure
10.1. It can be seen that the modular adaptive backstepping design with I&I identifier
outperforms the other methods, while also being the only method that does not display
an unacceptable performance for any of the criteria.
Figure 10.1: Comparison of the overall performance of integrated adaptive backstepping control,
modular adaptive backstepping control with RLS identifier and modular adaptive backstepping
control with I&I identifier. Green indicates the best performing method, yellow the second best
and orange the worst. A red table cell indicates an unacceptable performance.
A short explanation of each criterion is given.
1. Design complexity: The static feedback design of the controllers is nearly identical,
but the identifier designs for the modular approaches are more complex than
for the integrated design. The analytical derivation of the I&I estimator is the most
time-consuming, especially in combination with flight envelope partitioning.
2. Dynamic order: The dynamic order of the RLS identifier is by far the highest. The
I&I estimator requires a couple of extra filter states when compared to the tracking
error driven update laws of the integrated adaptive backstepping. Furthermore, due
to the overparametrization in the design, the dynamic order of the I&I estimator
can increase for some control problems.
3. Estimation quality: With sufcient excitation the RLS identier will nd the true
parameters of the system. The I&I identier will nd the total force and moment
coefcients, but not the individual parameters. Finally, the parameter estimates of
the integrated adaptive backstepping controller will in general never converge to
their true values.
4. Numerical stability: The nonlinear damping terms used for the modular design
with RLS identier can lead to numerical problems. The integrated adaptive de-
sign is the simplest and therefore the most numerically stable, although it should
be noted that no problems were encountered for the modular design with I&I esti-
mator.
5. Tracking performance: The tracking performance of the modular designs is bet-
ter in general, with the I&I designs just outperforming the RLS designs.
6. Transient performance: Unexpected behavior of the update laws can lead to bad
transient performance for the integrated design. The nonlinear damping terms
sometimes result in unwanted oscillations with the modular RLS design.
7. Tuning complexity: Integrated backstepping designs are very hard or impossible
to tune for complex systems due to the strong coupling between controller
and identifier. Tuning of the I&I estimator is quite straightforward, while the tuning of
the RLS identifier is almost automatic. However, finding the correct resetting algorithm
and nonlinear damping gains requires additional effort. Therefore, the tuning
of the modular adaptive backstepping controller with I&I identifier is the least time-consuming
and the most transparent.
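The dynamic-order point of criterion 2 can be made concrete with a minimal recursive least-squares sketch (the forgetting factor, dimensions and data below are illustrative, not the thesis identifier): besides the n parameter estimates, an RLS identifier propagates an n × n covariance matrix through a Riccati-type recursion, which dominates its state count.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive least-squares update with forgetting factor lam.
    Besides the n parameter estimates, the identifier carries the n x n
    covariance matrix P as extra dynamic states."""
    P_phi = P @ phi
    k = P_phi / (lam + phi @ P_phi)          # gain vector
    theta = theta + k * (y - phi @ theta)    # estimate update
    P = (P - np.outer(k, P_phi)) / lam       # Riccati-type covariance update
    return theta, P

rng = np.random.default_rng(1)
true_theta = np.array([0.5, -2.0, 1.0])
theta = np.zeros(3)
P = 1e3 * np.eye(3)                          # large initial covariance
for _ in range(200):
    phi = rng.standard_normal(3)             # exciting regressor
    y = phi @ true_theta                     # noiseless measurement
    theta, P = rls_step(theta, P, phi, y)
```

With sufficient excitation and noiseless data the estimates converge to the true parameters, consistent with criterion 3, while the 3 + 9 propagated states illustrate the order penalty of criterion 2.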
Final Conclusions
On the basis of the research performed in this thesis, it can be concluded that an RFC
system based on the modular adaptive backstepping method with I&I estimator shows a
lot of potential, since it possesses all the features aimed at in the thesis goal:
- A single nonlinear backstepping controller with an I&I estimator is used for the
entire flight envelope. The stability and convergence properties of the resulting
closed-loop system are guaranteed by Lyapunov theory and have been verified in
numerical simulations. Due to the modularity of the design, systematic gain tuning
can be performed to achieve the desired closed-loop performance.
- The numerical simulation results with the F-16 model suffering from various types
of sudden actuator failures and large aerodynamic uncertainties demonstrate that
the performance of the RFC system is superior to a non-adaptive NDI
based control system or the baseline gain-scheduled control system for the considered
situations. By extending the regressors, i.e. the local aerodynamic model
polynomial structures, of the identifier, the adaptive controller should also be able
to take asymmetric (structural) failures into account, which introduce additional coupling
between the longitudinal and lateral motion of the aircraft.
- By making use of a multiple model approach based on flight envelope partitioning
with B-spline networks, the computational load of the numerically stable adaptive
control algorithm is relatively low. The algorithm can easily run in real time on a
budget desktop computer. However, the current processors of onboard computers
are sized for the current generation of flight controllers and are not powerful
enough to run the proposed adaptive control algorithm in real time. Manufacturers
and clients will have to be convinced that the benefits of RFC are worth the
additional hardware cost and weight.
10.2 Recommendations
New questions and research directions can be formulated based on the research presented
in this thesis. These recommendations are formulated in this section:
- As already discussed in the introduction, accurate failure models of realistic (structural)
damage are lacking for the high-fidelity F-16 model used as the main study
object in this thesis. For this reason, the evaluation of the adaptive flight controllers
was limited to simulation scenarios with actuator failures, symmetric center of
gravity shifts and uncertainties in individual aerodynamic coefficients. If more realistic
aerodynamic data for asymmetric failures such as partial surface loss could
be obtained, the results of the study would be more valuable. Furthermore, the
adaptive controllers could be extended with an FDIE module that performs actuator
health monitoring, thereby simplifying the task of online model identification.
This was not done in this thesis work to make the limited failure scenarios more
challenging.
- A multiple local model approach resulting from flight envelope partitioning was
used to simplify the online model approximation and thereby reduce the computational
load. The aerodynamic model of the F-16 considered in this thesis was
already examined in many earlier studies and is already in a form that lends itself
to partitioning into locally valid models. However, in the more general case,
finding a proper local approximation structure and partitioning may be a time-consuming
study in itself.
- Many more advanced local approximation and learning algorithms are currently
being developed, see e.g. [50, 145, 204]. In [145] an algorithm is proposed that
employs nonlinear function approximation with automatic growth of the learning
network according to the nonlinearities and the working domain of the control system.
The unknown function in the dynamical system is approximated by piecewise
linear models using a nonparametric regression technique. Local models are allocated
as necessary and their parameters are optimized online. Such an advanced
technique eliminates the need for manual partitioning, and the structure automatically
adapts itself in the case of failures. However, it is unclear if a real-time
implementation would be feasible for the F-16 model or similar high-fidelity aircraft
models.
- For some simulations with sudden failure cases, the adaptive controllers managed
to stabilize the aircraft, but the commanded maneuver proved too challenging for
the damaged aircraft. Hence, an adaptive controller by itself may not be sufficient
for a good reconfigurable flight control system. The pilot or guidance system also
needs to be aware of the characteristics of the failure, since the post-failure flight
envelope might be a lot smaller. This observation has given rise to a whole new area
of research, usually referred to as adaptive flight envelope estimation and/or protection,
see e.g. [198, 211]. With the adaptive controllers developed in this thesis it is possible
to indicate to the pilot which axes have suffered a failure, so that he is
made aware that there is a failure and that he should fly more carefully. However,
the development of fully adaptive flight envelope protection systems, or at least
systems that help make the pilot aware of the type and the size of the failure, should be
a key research focus for the coming years.
- Two main reasons for the gap between the research and the application of adaptive
or reconfigurable flight control methods can be identified. Firstly, many of the
adaptive control techniques cannot be applied to existing aircraft without replacing
the already certified flight control laws. Secondly, the verification and validation
procedures needed to certify the novel reconfiguration methods have not received
the necessary attention. For these reasons, some designers have been developing
retrofit adaptive control systems which leave the baseline flight control system
intact, see e.g. [141, 158].
The nonlinear adaptive designs developed in this thesis cannot be used in a retrofit
manner, since all current flight control systems are based on linear design techniques.
However, this may change when the first aircraft with NDI based flight
control systems become available. Nevertheless, more research should be devoted
to verification and validation procedures that can be applied directly to nonlinear
and even adaptive control designs. The linear analysis tools currently used
have a lot of shortcomings.
- The contributions of this thesis are mainly of a theoretical nature, since all results
were obtained from numerical flight simulations with preprogrammed maneuvers.
No piloted simulations were performed. However, the adaptive flight control systems
developed in the thesis have been test flown by the author on a desktop computer
using a joystick and a flight simulator. Compared to a normal NDI controller
and the baseline controller, the workload was indeed lowered for most of the failures
considered. Results of these simulated test flights are not included in the
thesis, since the author is not a professional pilot. Nevertheless, simulations with
actual test pilots should be performed to examine the interactions between pilots
and the adaptive control systems. The fast reaction of the pilot to the unexpected
movements caused by an unknown, sudden change in the dynamic behavior of the aircraft,
in combination with the immediate online adaptation, may lead to unexpected
results. Pilots may need to learn to trust the adaptive element in the flight control
system, as was already observed in an earlier study involving a damaged large
passenger aircraft [133].
- As discussed in the conclusions, the I&I based estimator used in this thesis employs
overparametrization. From a numerical point of view it would be beneficial
to obtain a single estimate of each unknown parameter. Moreover, should this
be achieved, it may be possible to combine the I&I based estimator with least-squares
adaptation. The ability of least-squares to even out adaptation rates would
almost completely automate the tuning of the adaptive control design.
- The regressor filters employed in the modular control designs with least-squares,
as derived in Chapter 6, result in a high dynamic order of the estimators and may
also make the estimator less responsive, thus affecting the performance. The combination
with I&I can possibly remove the need for these filters, as demonstrated
for linearly parametrized nonlinear control systems in the normal form in [114].
Furthermore, the need for nonlinear damping is also removed. However, extending
the suggested approach of [114] to the broader class of strict-feedback systems
would require overparametrization. This would mean employing multiple Riccati
differential equations, which is of course unacceptable. Hence, the use of overparametrization
should certainly be avoided if least-squares adaptation is considered.
- Control engineering is a broad field of study that encompasses many applications.
The adaptive backstepping techniques discussed in this thesis were studied and
evaluated purely for their usefulness in flight control design problems. Obviously,
most of the techniques studied in this thesis can be and have been used for other
types of control system problems in the literature, and sometimes even in practice.
However, the shortcomings and modifications discussed in this thesis may not be
relevant in other control design problems.
Appendix A
F-16 Model
A.1 F-16 Geometry
Figure A.1: F-16 of the Royal Netherlands Air Force Demo Team. Picture by courtesy of the F-16
Demo Team.
Table A.1: F-16 parameters.

Parameter                              Symbol    Value
aircraft mass (kg)                     m         9295.44
wing span (m)                          b         9.144
wing area (m²)                         S         27.87
mean aerodynamic chord (m)             c̄         3.45
roll moment of inertia (kg·m²)         I_x       12874.8
pitch moment of inertia (kg·m²)        I_y       75673.6
yaw moment of inertia (kg·m²)          I_z       85552.1
product moment of inertia (kg·m²)      I_xz      1331.4
product moment of inertia (kg·m²)      I_xy      0.0
product moment of inertia (kg·m²)      I_yz      0.0
c.g. location (m)                      x_cg      0.3 c̄
reference c.g. location (m)            x_cgr     0.35 c̄
engine angular momentum (kg·m²/s)      H_eng     216.9
A.2 ISA Atmospheric Model
For the atmospheric data an approximation of the International Standard Atmosphere
(ISA) is used [143]:
\[
T = \begin{cases} T_0 + \lambda h & \text{if } h \le 11000 \\ T_{(h=11000)} & \text{if } h > 11000 \end{cases}
\]
\[
p = \begin{cases} p_0 \left(1 + \dfrac{\lambda h}{T_0}\right)^{-\frac{g_0}{\lambda R}} & \text{if } h \le 11000 \\[2mm] p_{(h=11000)}\, e^{-\frac{g_0}{R T_{(h=11000)}}(h-11000)} & \text{if } h > 11000 \end{cases}
\]
\[
\rho = \frac{p}{RT}, \qquad a = \sqrt{\gamma R T},
\]
where T₀ = 288.15 K is the temperature at sea level, p₀ = 101325 N/m² the pressure
at sea level, R = 287.05 J/kg·K the gas constant of air, g₀ = 9.80665 m/s² the gravity
constant at sea level, λ = dT/dh = −0.0065 K/m the temperature gradient and γ = 1.4
the isentropic expansion factor for air. Given the aircraft's altitude (h in meters), the model returns
the current temperature (T in Kelvin), the current air pressure (p in N/m²), the current
air density (ρ in kg/m³) and the speed of sound (a in m/s).
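A direct implementation of the ISA approximation above might look as follows (the function and constant names are illustrative choices):

```python
import math

T0, P0 = 288.15, 101325.0      # sea-level temperature (K) and pressure (N/m^2)
R, G0 = 287.05, 9.80665        # gas constant of air, sea-level gravity constant
LAM, GAMMA = -0.0065, 1.4      # temperature gradient (K/m), isentropic exponent
H_TROPO = 11000.0              # tropopause altitude (m)

def isa(h):
    """ISA approximation: temperature, pressure, density and speed of sound at altitude h (m)."""
    if h <= H_TROPO:
        T = T0 + LAM * h
        p = P0 * (1.0 + LAM * h / T0) ** (-G0 / (LAM * R))
    else:
        T = T0 + LAM * H_TROPO                      # isothermal above the tropopause
        p11 = P0 * (1.0 + LAM * H_TROPO / T0) ** (-G0 / (LAM * R))
        p = p11 * math.exp(-G0 * (h - H_TROPO) / (R * T))
    rho = p / (R * T)                               # ideal gas law
    a = math.sqrt(GAMMA * R * T)                    # speed of sound
    return T, p, rho, a
```

At sea level this reproduces the familiar values ρ ≈ 1.225 kg/m³ and a ≈ 340.3 m/s, and the two pressure branches match at the tropopause.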
A.3 Flight Control System
These figures contain the schemes of the baseline flight control system of the F-16 model.
More details can be found in [149].
Figure A.2: Baseline pitch axis control loop of the F-16 model.
Figure A.3: Baseline F-16 roll axis control loop of the F-16 model.
Figure A.4: Baseline F-16 yaw axis control loop of the F-16 model.
Appendix B
System and Stability Concepts
This appendix clarifies certain system and stability concepts that are used in the main
body of the thesis. Most proofs are not included, but can be found in the main references
for this appendix: [106, 118, 192].
B.1 Lyapunov Stability and Convergence
For completeness, the main results of Lyapunov stability theory as discussed in Section
3.2 are reviewed. More comprehensive accounts can be found in [106] and [118].
Consider the non-autonomous system
\[
\dot{x} = f(x, t) \tag{B.1}
\]
where f : Rⁿ × R → Rⁿ is locally Lipschitz in x and piecewise continuous in t.
Definition B.1. The origin x = 0 is an equilibrium point of (B.1) if
\[
f(0, t) = 0, \quad \forall t \ge 0. \tag{B.2}
\]
The following comparison functions are useful tools to create more transparent stability
definitions.
Definition B.2. A continuous function α : [0, a) → R₊ is said to be of class K if
it is strictly increasing and α(0) = 0. It is said to be of class K∞ if a = ∞ and
lim_{r→∞} α(r) = ∞. For instance, α(r) = arctan(r) is of class K but not of class K∞.
Definition B.3. A continuous function β : [0, a) × R₊ → R₊ is said to be of class KL
if, for each fixed s, the mapping β(r, s) is of class K with respect to r and, for each fixed
r, the mapping β(r, s) is decreasing with respect to s and lim_{s→∞} β(r, s) = 0. It is said
to be of class KL∞ if, in addition, for each fixed s the mapping β(r, s) belongs to class
K∞ with respect to r.
Using these comparison functions, the stability definitions of Chapter 3 are restated.
Definition B.4. The equilibrium point x = 0 of (B.1) is
- uniformly stable, if there exist a class K function α(·) and a positive constant c,
independent of t₀, such that
\[
|x(t)| \le \alpha(|x(t_0)|), \quad \forall t \ge t_0 \ge 0, \ \forall x(t_0): |x(t_0)| < c; \tag{B.3}
\]
- uniformly asymptotically stable, if there exist a class KL function β(·, ·) and a
positive constant c, independent of t₀, such that
\[
|x(t)| \le \beta(|x(t_0)|, t - t_0), \quad \forall t \ge t_0 \ge 0, \ \forall x(t_0): |x(t_0)| < c; \tag{B.4}
\]
- exponentially stable, if (B.4) is satisfied with β(r, s) = kre^{−λs}, k > 0, λ > 0;
- globally uniformly stable, if (B.3) is satisfied with α of class K∞ for
any initial state x(t₀);
- globally exponentially stable, if (B.4) is satisfied for any initial state x(t₀) and with
β(r, s) = kre^{−λs}, k > 0, λ > 0.
Based on these definitions, the main Lyapunov stability theorem is then formulated as
follows.
Theorem B.5. Let x = 0 be an equilibrium point of (B.1) and D = {x ∈ Rⁿ : |x| < r}.
Let V : D × R₊ → R₊ be a continuously differentiable function such that ∀t ≥ 0, ∀x ∈ D,
\[
\alpha_1(|x|) \le V(x, t) \le \alpha_2(|x|) \tag{B.5}
\]
\[
\frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} f(x, t) \le -\alpha_3(|x|). \tag{B.6}
\]
Then the equilibrium x = 0 is
- uniformly stable, if α₁ and α₂ are class K functions on [0, r) and α₃(·) ≥ 0 on
[0, r);
- uniformly asymptotically stable, if α₁, α₂ and α₃ are class K functions on [0, r);
- exponentially stable, if αᵢ(ρ) = kᵢρ^c on [0, r), kᵢ > 0, c > 0, i = 1, 2, 3;
- globally uniformly stable, if D = Rⁿ, α₁ and α₂ are class K∞ functions, and
α₃(·) ≥ 0 on R₊;
- globally uniformly asymptotically stable, if D = Rⁿ, α₁ and α₂ are class K∞
functions, and α₃ is a class K function on R₊; and
- globally exponentially stable, if D = Rⁿ and αᵢ(ρ) = kᵢρ^c on R₊, kᵢ > 0, c > 0,
i = 1, 2, 3.
The key advantage of this theorem is that it can be applied without solving the differential
equation (B.1). However, analysis of dynamic systems can result in situations where
the derivative of the Lyapunov function is only negative semi-definite. For autonomous
systems it may still be possible to conclude asymptotic stability in these situations via
the concept of invariant sets, i.e. LaSalle's Invariance Theorem.
Definition B.6. A set Ω is a positively invariant set of a dynamic system if every trajectory
starting in Ω at t = 0 remains in Ω for all t > 0.
For instance, any equilibrium of a system is an invariant set, but also the set of all equilibria
of a system is an invariant set. Using the concept of invariant sets, the following
invariant set theorem can be stated.
Theorem B.7. For an autonomous system ẋ = f(x), with f continuous on the domain
D, let V(x) : D → R be a function with continuous first partial derivatives on D. If
1. the compact set Ω ⊂ D is a positively invariant set of the system;
2. V̇(x) ≤ 0, ∀x ∈ Ω;
then every solution x(t) originating in Ω converges to M as t → ∞, where M is the
largest invariant set contained in R = {x ∈ Ω | V̇(x) = 0}.
For non-autonomous systems this invariance argument is not available; convergence is
then typically concluded with Barbalat's lemma.
Lemma B.8. If the function φ(t) is uniformly continuous¹ and lim_{t→∞} ∫₀ᵗ φ(τ)dτ
exists and is finite, then
lim_{t→∞} φ(t) = 0.
Note that the uniform continuity of φ can be proven by showing either that φ̇ ∈
L∞([0, ∞)) or that φ(t) is Lipschitz on [0, ∞). Finally, the theorem due to LaSalle and
Yoshizawa is stated.
¹A function f : D ⊂ R → R is uniformly continuous if, for any ε > 0, there exists δ(ε) > 0 such that
|x − y| < δ ⟹ |f(x) − f(y)| < ε, for all x, y ∈ D.
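The uniform-continuity requirement of Lemma B.8 is essential and can be illustrated numerically (the two test signals below are standard textbook examples, not from the thesis): e^{−t} is uniformly continuous with a finite integral and indeed vanishes, whereas sin(t²) has a convergent Fresnel-type integral yet keeps oscillating at unit amplitude, because it is not uniformly continuous.

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal quadrature (kept explicit for portability)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

t = np.linspace(0.0, 60.0, 600_001)
phi1 = np.exp(-t)          # uniformly continuous, integrable -> must vanish
phi2 = np.sin(t ** 2)      # integral converges, but phi2 does not vanish

I1 = trapezoid(phi1, t)    # close to 1 (finite)
I2 = trapezoid(phi2, t)    # close to the Fresnel limit sqrt(pi/8), also finite
```

Both integrals are finite, but only the uniformly continuous signal satisfies the conclusion of the lemma; sin(t²) oscillates ever faster and never settles.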
Theorem B.9. Let x = 0 be an equilibrium point of (B.1) and suppose that f is locally
Lipschitz in x uniformly in t. Let V : Rⁿ × R₊ → R₊ be a continuously differentiable
function such that
\[
\alpha_1(|x|) \le V(x, t) \le \alpha_2(|x|) \tag{B.7}
\]
\[
\dot{V} = \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} f(x, t) \le -W(x) \le 0, \tag{B.8}
\]
∀t ≥ 0, ∀x ∈ Rⁿ, where α₁ and α₂ are class K∞ functions and W is a continuous
function. Then all solutions of (B.1) are globally uniformly bounded and satisfy
\[
\lim_{t\to\infty} W(x(t)) = 0. \tag{B.9}
\]
Proof: From (B.7) and (B.8) it follows that V(x(t), t) is non-increasing and bounded
from below, so it converges to a finite limit V∞, and the solutions remain in a compact
set {|x| ≤ B}. Integrating (B.8) gives
\[
\int_{t_0}^{\infty} W(x(\tau))\,d\tau \le -\int_{t_0}^{\infty} \dot{V}(x(\tau), \tau)\,d\tau
= \lim_{t\to\infty}\left\{V(x(t_0), t_0) - V(x(t), t)\right\}
= V(x(t_0), t_0) - V_\infty, \tag{B.10}
\]
which means that ∫_{t₀}^∞ W(x(τ))dτ exists and is finite. It remains to show that W(x(t))
is also uniformly continuous. Since |x(t)| ≤ B and f is locally Lipschitz in x uniformly
in t, it can be observed that for any t ≥ t₀ ≥ 0,
\[
|x(t) - x(t_0)| = \left| \int_{t_0}^{t} f(x(\tau), \tau)\,d\tau \right|
\le L \int_{t_0}^{t} |x(\tau)|\,d\tau \le LB\,|t - t_0|, \tag{B.11}
\]
where L is the Lipschitz constant of f on {|x| ≤ B}. Selecting δ(ε) = ε/(LB) results in
\[
|x(t) - x(t_0)| < \epsilon, \quad \forall\, |t - t_0| \le \delta(\epsilon), \tag{B.12}
\]
which means that x(t) is uniformly continuous. Since W is continuous, it is uniformly
continuous on the compact set {|x| ≤ B}. It can be concluded that W(x(t)) is uniformly
continuous from the uniform continuity of W(x) and x(t). Hence, it satisfies the conditions
of Lemma B.8, which in turn guarantees that W(x(t)) → 0 as t → ∞.
If, in addition, W(x) is positive definite, there exists a class K function α₃ such that
W(x) ≥ α₃(|x|). By Theorem B.7 it can be concluded that x = 0 is globally uniformly
asymptotically stable.
B.2 Input-to-state Stability
This section recalls the notion of input-to-state stability (ISS) [192, 193]. The ISS concept
plays an important role in the modular backstepping design technique as derived in
Section 6.2.2.
Consider the system
\[
\dot{x} = f(t, x, u), \tag{B.13}
\]
where f is piecewise continuous in t and locally Lipschitz in x and u.
Definition B.10. The system (B.13) is said to be input-to-state stable (ISS) if there exist a
class KL function β and a class K function γ, such that, for any x(t₀) and for any input
u that is continuous and bounded on [0, ∞), the solution exists for all t ≥ 0 and satisfies
\[
|x(t)| \le \beta(|x(t_0)|,\, t - t_0) + \gamma\!\left( \sup_{\tau \in [t_0, t]} |u(\tau)| \right) \tag{B.14}
\]
for all t and t₀ such that 0 ≤ t₀ ≤ t.
The function γ(·) is often referred to as an ISS gain for the system (B.13). The above
definition implies that an ISS system is bounded-input bounded-state stable and has a
globally uniformly asymptotically stable equilibrium at zero when u(t) = 0.
The ISS property can be equivalently characterized in terms of Lyapunov functions, as
the following theorem shows.
Theorem B.11. The system (B.13) is ISS if and only if there exists a continuously differentiable
function V : R₊ × Rⁿ → R₊ such that for all x ∈ Rⁿ and u ∈ Rᵐ,
\[
\alpha_1(|x|) \le V(x, t) \le \alpha_2(|x|) \tag{B.15}
\]
\[
|x| \ge \rho(|u|) \;\Longrightarrow\; \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} f(t, x, u) \le -\alpha_3(|x|), \tag{B.16}
\]
where α₁, α₂ and ρ are class K∞ functions and α₃ is a class K function.
Note that an ISS gain for the system (B.13) can be obtained from the above theorem as
γ = α₁⁻¹ ∘ α₂ ∘ ρ.
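The ISS estimate (B.14) can be checked numerically on a hypothetical scalar example (the system, input and gains below are illustrative): for ẋ = −x + u one may take β(r, s) = re^{−s} and γ(r) = r, and the simulated state never exceeds the resulting bound.

```python
import math

# ISS sketch for the assumed example system x_dot = -x + u:
# beta(r, s) = r * exp(-s) is class KL, gamma(r) = r is a class K ISS gain.
dt, x0 = 1e-3, 2.0
x, ok, sup_u = x0, True, 0.0
for k in range(int(20.0 / dt)):
    t = k * dt
    u = 0.5 * math.sin(3.0 * t)
    sup_u = max(sup_u, abs(u))                               # running sup of |u|
    ok = ok and abs(x) <= x0 * math.exp(-t) + sup_u + 1e-6   # estimate (B.14)
    x += dt * (-x + u)                                       # Euler integration
```

The bound holds at every step, and the state ultimately stays inside the ball of radius γ(sup|u|) = 0.5, consistent with the bounded-input bounded-state property.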
B.3 Invariant Manifolds and System Immersion
This section gives the definition of an invariant manifold [210] and of system immersion
[36], since these notions are used in Chapter 9.
Consider the autonomous system
\[
\dot{x} = f(x), \qquad y = h(x), \tag{B.17}
\]
with state x ∈ Rⁿ and output y ∈ Rᵐ.
Definition B.12. The manifold M = {x ∈ Rⁿ | s(x) = 0}, with s(x) smooth, is said to
be (positively) invariant for ẋ = f(x) if s(x(0)) = 0 implies s(x(t)) = 0, for all
t ≥ 0.
Consider now the (target) system
\[
\dot{\xi} = \alpha(\xi), \qquad y = \beta(\xi),
\]
with state ξ ∈ Rᵖ. The system (B.17) is said to be immersed into this target system if
there exists a smooth mapping π : Rᵖ → Rⁿ such that
\[
f(\pi(\xi)) = \frac{\partial \pi}{\partial \xi}\,\alpha(\xi)
\quad \text{and} \quad
h(\pi(\xi)) = \beta(\xi)
\]
for all ξ ∈ Rᵖ.
Hence, roughly stated, a system Σ₁ is said to be immersed into a system Σ₂ if the input-output
mapping of Σ₂ is a restriction of the input-output mapping of Σ₁, i.e. any output
response generated by Σ₂ is also an output response of Σ₁ for a restricted set of initial
conditions.
Appendix C
Command Filters
This appendix covers the second-order command filters which are used for reference
signal generation and in the intermediate steps of the constrained adaptive backstepping
approach (taken from [61]).
Figure C.1: Filter that generates the command and command derivative while enforcing magnitude,
bandwidth and rate limit constraints [61].
Figure C.1 shows an example of a filter which produces a magnitude-, rate- and bandwidth-limited
signal $x_c$ and its derivative $\dot{x}_c$ by filtering a signal $x_c^0$. The state space
representation of this filter is
\[
\begin{bmatrix} \dot{q}_1(t) \\ \dot{q}_2(t) \end{bmatrix}
=
\begin{bmatrix}
q_2 \\
2\zeta\omega_n \left\{ S_R\!\left[ \dfrac{\omega_n^2}{2\zeta\omega_n}\big(S_M(x_c^0) - q_1\big) \right] - q_2 \right\}
\end{bmatrix} \tag{C.1}
\]
\[
\begin{bmatrix} x_c \\ \dot{x}_c \end{bmatrix} = \begin{bmatrix} q_1 \\ q_2 \end{bmatrix}, \tag{C.2}
\]
where $S_M(\cdot)$ and $S_R(\cdot)$ represent the magnitude and rate limit functions, respectively.
The functions $S_M$ and $S_R$ are defined similarly:
\[
S_M(x) = \begin{cases} M & \text{if } x \ge M \\ x & \text{if } |x| < M \\ -M & \text{if } x \le -M. \end{cases}
\]
Note that if the signal $x_c^0$ is bounded, then $x_c$ and $\dot{x}_c$ are also bounded and continuous
signals. Note also that $\dot{x}_c$ is computed without differentiation. When the state must remain
in some operating envelope defined by the magnitude limit M and the rate limit R,
the command filter ensures that the commanded trajectory and its derivative satisfy these
same constraints.
If the only objective in the design of the command filter is to compute $x_c$ and its derivative,
then M and R are infinitely large and the limiters do not need to be included in the
filter implementation. In the linear range of the functions $S_M$ and $S_R$ the filter dynamics
are
\[
\begin{bmatrix} \dot{q}_1(t) \\ \dot{q}_2(t) \end{bmatrix}
=
\begin{bmatrix} 0 & 1 \\ -\omega_n^2 & -2\zeta\omega_n \end{bmatrix}
\begin{bmatrix} q_1 \\ q_2 \end{bmatrix}
+
\begin{bmatrix} 0 \\ \omega_n^2 \end{bmatrix} x_c^0 \tag{C.3}
\]
\[
\begin{bmatrix} x_c \\ \dot{x}_c \end{bmatrix} = \begin{bmatrix} q_1 \\ q_2 \end{bmatrix}, \tag{C.4}
\]
with the transfer function from the input to the first output defined as
\[
\frac{X_c(s)}{X_c^0(s)} = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}. \tag{C.5}
\]
When command limiting is not in effect, the error $x_c - x_c^0$ can be made arbitrarily small
by selecting $\omega_n$ sufficiently larger than the bandwidth of the signal $x_c^0$. When command
limiting is in effect, the error $x_c - x_c^0$ will be bounded since both $x_c$ and $x_c^0$ are bounded.
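A discrete-time sketch of the filter (C.1)-(C.2) might look as follows (the Euler integration, gains and limits are illustrative choices, not the implementation of [61]):

```python
import numpy as np

def command_filter(x0c, dt, wn=10.0, zeta=0.7, M=np.inf, R=np.inf):
    """Second-order command filter (C.1)-(C.2): returns the magnitude-, rate- and
    bandwidth-limited command x_c and its derivative, without differentiation."""
    sat = lambda v, lim: float(np.clip(v, -lim, lim))     # S_M / S_R limiters
    q1, q2 = x0c[0], 0.0
    xc, xc_dot = [], []
    for x in x0c:
        q1_dot = q2
        q2_dot = 2.0 * zeta * wn * (
            sat(wn / (2.0 * zeta) * (sat(x, M) - q1), R) - q2)
        q1, q2 = q1 + dt * q1_dot, q2 + dt * q2_dot       # Euler integration
        xc.append(q1)
        xc_dot.append(q2)
    return np.array(xc), np.array(xc_dot)

dt = 1e-3
t = np.arange(0.0, 5.0, dt)
raw = np.where(t > 1.0, 1.0, 0.0)                         # step command at t = 1 s
xc, xc_dot = command_filter(raw, dt, wn=10.0, zeta=0.7, M=2.0, R=3.0)
```

The filtered command tracks the step while its derivative, produced as a filter state rather than by differentiation, never exceeds the rate limit R.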
Appendix D
Additional Figures
This appendix contains the results for some of the numerical simulations performed in
Chapters 6 to 9.
D.1 Simulation Results of Chapter 6
[Figure D.1, four panels over 0-60 s: (a) Reference tracking, (b) Surface deflections, (c) Control moment (realized vs. estimated), (d) Parameter estimation.]
Figure D.1: Simulation scenario 3 results for the integrated adaptive controller combined with PI
control allocation where the aircraft experiences a hard-over of the left aileron to 45 degrees after
1 second.
[Figure D.2, four panels over 0-60 s: (a) Reference tracking, (b) Surface deflections, (c) Control moment (realized vs. estimated), (d) Parameter estimation.]
Figure D.2: Simulation scenario 3 results for the modular adaptive controller combined with PI
control allocation where the aircraft experiences a hard-over of the left aileron to 45 degrees after
1 second.
[Figure: four panels vs. time (s) — (a) reference tracking (response vs. reference, deg); (b) surface deflections δ_al, δ_ar, δ_r and δ_el, δ_er, δ_lef, δ_tef (deg); (c) realized vs. estimated total control moments l_tot, m_tot, n_tot; (d) parameter estimates l*, m*, n*.]
Figure D.3: Simulation scenario 4 results for the integrated adaptive controller combined with QP W_U2 control allocation where the aircraft experiences a hard-over of the left horizontal stabilizer to 10.5 degrees.
[Figure: four panels vs. time (s) — (a) reference tracking (response vs. reference, deg); (b) surface deflections δ_al, δ_ar, δ_r and δ_el, δ_er, δ_lef, δ_tef (deg); (c) realized vs. estimated total control moments l_tot, m_tot, n_tot; (d) parameter estimates l*, m*, n*.]
Figure D.4: Simulation scenario 4 results for the modular adaptive controller combined with QP W_U2 control allocation where the aircraft experiences a hard-over of the left horizontal stabilizer to 10.5 degrees.
[Figure: reference tracking (response vs. reference, deg) vs. time (s) for (a) integrated adaptive control with WPI W_U1; (b) integrated adaptive control with WPI W_U2; (c) modular adaptive control with WPI W_U1; (d) modular adaptive control with WPI W_U2.]
Figure D.5: Simulation scenario 2 results for both controllers with WPI control allocation where the aircraft experiences a left horizontal stabilizer locked at 0 degrees.
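For context, weighted pseudo-inverse (WPI) allocation of the kind compared above solves min_u uᵀWu subject to Bu = v, so the choice of weighting matrix (W_U1 vs. W_U2 in the panels) shifts the workload between surfaces. A minimal sketch, with B, W and v being invented illustrative numbers rather than the values used in these simulations:

```python
import numpy as np

B = np.array([[0.60, -0.60,  0.00,  0.10],   # illustrative effectiveness matrix
              [0.10,  0.10, -1.20,  0.00],
              [0.05, -0.05,  0.00, -0.80]])
v = np.array([0.20, -0.50, 0.05])            # commanded moment increments

# Hypothetical actuator weighting: heavier weights discourage use of
# the third and fourth surfaces when realizing the commanded moments.
W = np.diag([1.0, 1.0, 5.0, 2.0])

# Closed-form WPI solution of min u'Wu subject to Bu = v:
#   u = W^-1 B' (B W^-1 B')^-1 v
Winv = np.linalg.inv(W)
u_wpi = Winv @ B.T @ np.linalg.solve(B @ Winv @ B.T, v)
```

With W equal to the identity this reduces to the plain pseudo-inverse allocation.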
D.2 Simulation Results of Chapter 7
[Figure: (A) flight path (north/east distance, altitude, m); (B) tracking errors z_01, z_02, z_03 (m); (C) airspeed V (m/s); (D)–(G) angle traces (deg); (H) angular rates p, q, r (deg/s); (I) thrust (N); (J) surface deflections δ_e, δ_a, δ_r (deg); panels (B)–(J) vs. time (s).]
Figure D.6: Maneuver 1: Climbing helical path performed at flight condition 1 without any uncertainty or actuator failures.
[Figure: (A) flight path (north/east distance, altitude, m); (B) tracking errors z_01, z_02, z_03 (m); (C) airspeed V (m/s); (D)–(G) angle traces (deg); (H) angular rates p, q, r (deg/s); (I) thrust (N); (J) surface deflections δ_e, δ_a, δ_r (deg); panels (B)–(J) vs. time (s).]
Figure D.7: Maneuver 1: Climbing helical path performed at flight condition 2 with +30% uncertainty in the aerodynamic coefficients.
[Figure: (A) flight path (north/east distance, altitude, m); (B) tracking errors z_01, z_02, z_03 (m); (C) airspeed V (m/s); (D)–(G) angle traces (deg); (H) angular rates p, q, r (deg/s); (I) thrust (N); (J) surface deflections δ_e, δ_a, δ_r (deg); panels (B)–(J) vs. time (s).]
Figure D.8: Maneuver 1: Climbing helical path performed at flight condition 3 with left aileron locked at 10 deg.
[Figure: (A) flight path (north/east distance, altitude, m); (B) tracking errors z_01, z_02, z_03 (m); (C) airspeed V (m/s); (D)–(G) angle traces (deg); (H) angular rates p, q, r (deg/s); (I) thrust (N); (J) surface deflections δ_e, δ_a, δ_r (deg); panels (B)–(J) vs. time (s).]
Figure D.9: Maneuver 2: Reconnaissance and surveillance performed at flight condition 3 with 30% uncertainty in the aerodynamic coefficients.
[Figure: estimated errors of the C_L, C_Y, C_D, C_l, C_m and C_n model components (bias, rate and control-derivative terms) vs. time (s).]
Figure D.10: Maneuver 2: Estimated errors for the reconnaissance and surveillance performed at flight condition 3 with 30% uncertainty in the aerodynamic coefficients.
[Figure: (A) flight path (north/east distance, altitude, m); (B) tracking errors z_01, z_02, z_03 (m); (C) airspeed V (m/s); (D)–(G) angle traces (deg); (H) angular rates p, q, r (deg/s); (I) thrust (N); (J) surface deflections δ_e, δ_a, δ_r (deg); panels (B)–(J) vs. time (s).]
Figure D.11: Maneuver 2: Reconnaissance and surveillance path performed at flight condition 1 with left aileron locked at +10 deg.
D.3 Simulation Results of Chapter 8
[Figure: left column vs. time (s) — pilot stick/rudder inputs; angle of attack/sideslip (deg); normal accelerations n_y, n_z (g); angular rates p_s, q, r_s (deg/s); surface deflections δ_e, δ_r, δ_ar, δ_al (deg). Right column — estimated errors of the C_L, C_m, C_Y, C_l and C_n model components (bias, rate and control-derivative terms).]
Figure D.12: Simulation results for the integrated adaptive controller at flight condition 2 and failure scenario 1: C_mq = 0 after 20 seconds.
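The effect of the C_mq = 0 failure can be illustrated on a generic two-state short-period model; the stability derivatives below are invented for illustration and are not the aircraft data used in these simulations. Zeroing the pitch-damping term moves the pitch-mode eigenvalues toward the imaginary axis, i.e. it produces the more lightly damped pitch motion the adaptive loops must then compensate:

```python
import numpy as np

def short_period(Mq):
    """Illustrative short-period dynamics for x = [alpha, q]."""
    Z_alpha, M_alpha = -1.2, -4.0   # made-up stability derivatives
    return np.array([[Z_alpha, 1.0],
                     [M_alpha, Mq]])

eig_nominal = np.linalg.eigvals(short_period(-2.0))  # with pitch damping
eig_failed = np.linalg.eigvals(short_period(0.0))    # C_mq (hence M_q) zeroed
```

In this toy model both cases remain stable, but the failed case has eigenvalues with much smaller negative real parts, i.e. noticeably reduced pitch damping.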
[Figure: left column vs. time (s) — pilot stick/rudder inputs; angle of attack/sideslip (deg); normal accelerations n_y, n_z (g); angular rates p_s, q, r_s (deg/s); surface deflections δ_e, δ_r, δ_ar, δ_al (deg). Right column — estimated errors of the C_L, C_m, C_Y, C_l and C_n model components (bias, rate and control-derivative terms).]
Figure D.13: Simulation results for the modular adaptive controller at flight condition 2 and failure scenario 1: C_mq = 0 after 20 seconds.
[Figure: left column vs. time (s) — pilot stick/rudder inputs; angle of attack/sideslip (deg); normal accelerations n_y, n_z (g); angular rates p_s, q, r_s (deg/s); surface deflections δ_e, δ_r, δ_ar, δ_al (deg). Right column — estimated errors of the C_L, C_m, C_Y, C_l and C_n model components (bias, rate and control-derivative terms).]
Figure D.14: Simulation results for the integrated adaptive controller at flight condition 1 and failure scenario 2: Loss of longitudinal static stability margin after 20 seconds.
[Figure: body pitch moment coefficient C_m versus angle of attack (deg).]
Figure D.15: Simulation results for the integrated adaptive controller at flight condition 1 and failure scenario 2: Body pitch moment coefficient versus angle of attack. The blue line represents the nominal values, the red line the post-failure values.
Figure D.16: Simulation results for the integrated adaptive controller at flight condition 1 and failure scenario 2: Body pitch moment coefficient error versus angle of attack. The blue line represents the actual error, the red line the estimated error at the end of the simulation.
[Figure: left column vs. time (s) — pilot stick/rudder inputs; angle of attack/sideslip (deg); normal accelerations n_y, n_z (g); angular rates p_s, q, r_s (deg/s); surface deflections δ_e, δ_r, δ_ar, δ_al (deg). Right column — estimated errors of the C_L, C_m, C_Y, C_l and C_n model components (bias, rate and control-derivative terms).]
Figure D.17: Simulation results for the modular adaptive controller at flight condition 1 and failure scenario 2: Loss of longitudinal static stability margin after 20 seconds.
[Figure: left column vs. time (s) — pilot stick/rudder inputs; angle of attack/sideslip (deg); normal accelerations n_y, n_z (g); angular rates p_s, q, r_s (deg/s); surface deflections δ_e, δ_r, δ_ar, δ_al (deg). Right column — estimated errors of the C_L, C_m, C_Y, C_l and C_n model components (bias, rate and control-derivative terms).]
Figure D.18: Simulation results for the integrated adaptive controller at flight condition 4 and failure scenario 5: Right aileron negatively locked at half of maximum deflection after 20 seconds.
[Figure: left column vs. time (s) — pilot stick/rudder inputs; angle of attack/sideslip (deg); normal accelerations n_y, n_z (g); angular rates p_s, q, r_s (deg/s); surface deflections δ_e, δ_r, δ_ar, δ_al (deg). Right column — estimated errors of the C_L, C_m, C_Y, C_l and C_n model components (bias, rate and control-derivative terms).]
Figure D.19: Simulation results for the modular adaptive controller at flight condition 4 and failure scenario 5: Right aileron negatively locked at half of maximum deflection after 20 seconds.
D.4 Simulation Results of Chapter 9
[Figure: four panels vs. time (s) — (a) reference tracking (response vs. reference, deg); (b) surface deflections δ_al, δ_ar, δ_r and δ_el, δ_er, δ_lef, δ_tef (deg); (c) realized vs. estimated total control moments l_tot, m_tot, n_tot; (d) estimator parameters r_i, state estimates x̂_i (deg or deg/s) and estimation errors e_i, for i = 2, 3, 5, 6, 7.]
Figure D.20: Simulation scenario 3 results for the modular adaptive controller with I&I estimator where the aircraft experiences a hard-over of the left aileron to 45 degrees after 1 second.
[Figure: four panels vs. time (s) — (a) reference tracking (response vs. reference, deg); (b) surface deflections δ_al, δ_ar, δ_r and δ_el, δ_er, δ_lef, δ_tef (deg); (c) realized vs. estimated total control moments l_tot, m_tot, n_tot; (d) parameter estimates l*, m*, n*.]
Figure D.21: Simulation scenario 4 results for the modular adaptive controller with I&I estimator where the aircraft experiences a hard-over of the left horizontal stabilizer to 10.5 degrees after 1 second.
[Figure: left column vs. time (s) — pilot stick/rudder inputs; angle of attack/sideslip (deg); normal accelerations n_y, n_z (g); angular rates p_s, q, r_s (deg/s); surface deflections δ_e, δ_r, δ_ar, δ_al (deg). Right column — estimated errors of the C_L, C_m, C_Y, C_l and C_n model components (bias, rate and control-derivative terms).]
Figure D.22: Simulation results for the I&I based adaptive controller at flight condition 2 and failure scenario 1: C_mq = 0 after 20 seconds.
[Figure: left column vs. time (s) — pilot stick/rudder inputs; angle of attack/sideslip (deg); normal accelerations n_y, n_z (g); angular rates p_s, q, r_s (deg/s); surface deflections δ_e, δ_r, δ_ar, δ_al (deg). Right column — estimated errors of the C_L, C_m, C_Y, C_l and C_n model components (bias, rate and control-derivative terms).]
Figure D.23: Simulation results for the I&I based adaptive controller at flight condition 1 and failure scenario 2: Loss of longitudinal static stability margin after 20 seconds.
[Figure: left column vs. time (s) — pilot stick/rudder inputs; angle of attack/sideslip (deg); normal accelerations n_y, n_z (g); angular rates p_s, q, r_s (deg/s); surface deflections δ_e, δ_r, δ_ar, δ_al (deg). Right column — estimated errors of the C_L, C_m, C_Y, C_l and C_n model components (bias, rate and control-derivative terms).]
Figure D.24: Simulation results for the I&I based adaptive controller at flight condition 4 and failure scenario 5: Right aileron negatively locked at half of maximum deflection after 20 seconds.
Bibliography
[1] Military Standard, Flying Qualities of Piloted Aircraft, MIL-STD-1797B, 2006,
2006.
[2] F. Ahmed-Zaid, P. Ioannou, K. Gousman, and R. Rooney. Accommodation of
Failures in the F-16 Aircraft Using Adaptive Control. IEEE Control Syst. Mag.,
11:7378, 1991.
[3] B. D. O. Anderson and C. R. Johnson. Exponential Convergence of Adaptive
Identication and Control Algorithms. Automatica, 18:113, 1982.
[4] A. M. Annaswamy and J. E. Wong. Adaptive Control in the Presence of Saturation
Nonlinearity. Int. Journal of Adaptive Control and Signal Processing, 11:319,
1997.
[5] Z. Artstein. Stabilization with Relaxed Controls. Nonlinear Analysis, TMA-
7:11631173, 1983.
[6] A. Astol, D. Karagiannis, and R. Ortega. Nonlinear and Adaptive Control with
Applications. Springer-Verlag, 2008.
[7] A. Astol and R. Ortega. Immersion and Invariance: A New Tool for Stabilization
and Adaptive Control of Nonlinear Systems. IEEE Transactions on Automatic
Control, 48(4):590606, 2003.
[8] K. J.
Astr om. Adaptive Control Around 1960. IEEE Control Systems Magazine,
16(3):4449, 1996.
[9] K. J.
Astr om and B. Wittenmark. Adaptive Control. Addison Wesley, 1989.
[10] R. Babuska. Fuzzy Modeling for Control, pages 4952. Kluwer Academic Pub-
lishers, 1998.
239
240 BIBLIOGRAPHY
[11] B. J. Bacon and I. M. Gregory. General Equations of Motion for a Damaged Asym-
metric Aircraft. In Proc. of the AIAA Atmospheric Flight Mechanics Conference
and Exhibit, 2007.
[12] R. E. Bailey and R. E. Smith. Analysis of Augmented Aircraft Flying Qualities
Through Application of the Neal-Smith Criterion. In Proc. of the Guidance and
Control Conference, number AIAA 81-1776, 1981.
[13] R. V. Beard. Failure Accommodation in Linear Systems Through Self-
Organization. PhD thesis, Department of Aeronautics and Astronautics, Mas-
sachusetts Institute of Technology, Cambridge, 1971.
[14] R. E. Bellman. Dynamic Programming. Princeton, NJ, 1957.
[15] D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientic,
3rd edition, 2005.
[16] J. H. Blakelock. Automatic Control of Aircraft and Missiles. John Wiley & Sons,
2nd edition, 1991.
[17] M. Bodson. Evaluation of Optimization Methods for Control Allocation. Journal
of Guidance, Control and Dynamics, 25(4):703711, 2002.
[18] M. Bodson and J. E. Groszkiewicz. Multivariable Adaptive Algorithms for Recon-
gurable Flight Control. In Proc. of the 33rd Conference on Decision and Control,
Dec. 1994.
[19] M. Bodson and J. E. Groszkiewicz. Multivariable Adaptive Algorithms for Re-
congurable Flight Control. IEEE Transactions on Control Systems Technology,
5(2):217229, Mar. 1997.
[20] K. Bordignon and J. Bessolo. Control Allocation for the X-35B. In Proc. of the
2002 Biennial International Powered Lift Conference and Exhibit, 2002.
[21] J. Bosworth. Flight Results of the NF-15B Intelligent Flight Control System
(IFCS) Aircraft with Adaptation to a Longitudinally Destabilized Plant. In Proc.
of the AIAA Guidance, Navigation and Control Conference and Exhibit, 2008.
[22] J. Bosworth and P. Williams-Hayes. Stabilator Failure Adaptation from Flight
Tests of NF-15B Intelligent Flight Control System. Journal of Aerospace Com-
puting, Information, and Communication, 6(3):187206, 2009.
[23] J. A. Boudreau and H. I. Berman. Dispersed and Recongurable Digital Flight
Control Systems. Technical report, Grumman Aerospace Corp., 1979.
[24] J. D. Bo skovic, S. M. Li, and R. K. Mehra. Recongurable Flight Control Design
Using Multiple Switching Controllers and On-line Estimation of Damage-Related
Parameters. In Proc. of the 2000 IEEE International Conference on Control Ap-
plications, 2000.
BIBLIOGRAPHY 241
[25] J. D. Bo skovic and R. K. Mehra. A Multiple Model-Based Recongurable Flight
Control System Design. Proc. of the 37th IEEE Conf. on Decision and Control,
1998.
[26] J. D. Bo skovic and R. K. Mehra. Multiple Model-Based Adaptive Recongurable
Formation Flight Control Design. In Proc. of the 41st IEEE Conference of Decison
and Control, Dec. 2002.
[27] J. D. Bo skovic, R. Prasanth, and R. K. Mehra. Retrot Recongurable Flight
Control. In Proc. of the AIAA Guidance, Navigation, and Control Conference and
Exhibit, 2005.
[28] S. Boyd and S. Sastry. Necessary and Sufcient Conditions for Parameter Con-
vergence in Adaptive Control. Automatica, 22:629638, 1986.
[29] D. P. Boyle and G. E. Chamitof. Autonomous Maneuver Tracking for Self-Piloted
Vehicles. Journal of Guidance, Control and Dynamics, 22:5867, 1999.
[30] J. S. Brinker and K. A. Wise. Recongurable Flight Control for Tailless Advanced
Fighter Aircraft. In Proc. of the 1998 AIAA Guidance, Navigation and Control
Conference, Aug. 1998.
[31] J. S. Brinker and K. A. Wise. Nonlinear Simulation Analysis of a Tailless Ad-
vanced Fighter Aircraft Recongurable Flight Control Law. In Proc. of the AIAA
Guidance, Navigation, and Control Conference and Exhibit, 1999.
[32] F. W. Burcham, J. J. Burken, T. A. Maine, and J. Bull. Emergency Flight Control
Using Only Engine Thrust and Lateral Center-of-Gravity Offset: A First Look.
Technical report, NASA, 1997.
[33] F. W. Burcham, J. J. Burken, T. A. Maine, and C. G. Fullerton. Development and
Flight Test of an Emergency Flight Control System Using Only Engine Thrust on
an MD-11 Transport Airplane. Technical report, NASA, Oct. 1997.
[34] J. J. Burken, P. Lu, and Z. Wu. Recongurable Flight Control Designs with Ap-
plication to the X-33 Vehicle. Technical report, NASA, 1999.
[35] J. J. Burken, P. Lu, Z. Wu, and C. Bahm. Two Recongurable Flight-Control
Design Methods: Robust Servomechanism and Control Allocation. Journal of
Guidance, Control and Dynamics, 24(3):482493, May-June 2001.
[36] C. I. Byrnes, F. D. Priscoli, and A. Isidori. Output Regulation of Uncertain Non-
linear Systems. Birkhauser, 1997.
[37] A. J. Calise, N. Hovakimyan, and M. Idan. Adaptive Output Feedback Control
of Nonlinear Systems Using Neural Networks. Automatica, 37(8):12011211,
March 2001.
242 BIBLIOGRAPHY
[38] A. J. Calise, S. Lee, and M. Sharma. Development of a Recongurable Flight
Control law for the X-36 Tailless Fighter Aircraft. In Proc. of the AIAA Guidance,
Navigation, and Control Conference and Exhibit, Aug. 2000.
[39] A. J. Calise, S. Lee, and M. Sharma. Development of a Recongurable Flight
Control Law for Tailless Aircraft. Journal of Guidance, Control and Dynamics,
24(5):896902, Sep.-Oct. 2001.
[40] D. Carnevale, D. Karagiannis, and A. Astol. Reduced-Order Observer Design
for Nonlinear Systems. In Proc. of the European Control Conference, 2007.
[41] R. Chen and J. Speyer. Sensor and Actuator Fault Reconstruction. Journal of
Guidance, Control and Dynamics, 27:186196, 2004.
[42] K. W. E. Cheng, H. Wang, and D. Sutanto. Adaptive B-Spline Network Control
for Three-Phase PWM AC-DC Voltage Source Converter. In Proc. of the IEEE
1999 International Conference on Power Electronics and Drive Systems, 1999.
[43] B. T. Clough. Unmanned Aerial Vehicles: Autonomous Control Challenges, a Re-
searchers Perspective. Journal of Aerospace Computing, Information, and Com-
munication, 2:327347, 2005.
[44] Controllab Products B.V., www.20sim.com. 20 Sims Control Toolbox, 20-simhelp
les, 2005.
[45] M. V. Cook. Flight Dynamics Principles. Butterworth-Heinemann, 1997.
[46] M. Cox. Algorithms for Spline Curves and Surfaces. Technical report, MPL
Report DITC 166, 1990.
[47] T. J. Curry. Estimation of Handling Qualities Parameters of the Tu-144 Super-
sonic Transport Aircraft From Flight Test Data. Technical report, NASA CR-
2000210290, August 2000.
[48] R. R. da Costa, Q. P. Chu, and J. A. Mulder. Reentry Flight Controller Design
Using Nonlinear Dynamic Inversion. Journal of Spacecraft and Rockets, 40:64
71, 2003.
[49] M. Daehlen and T. Lyche. Box Splines and Applications. Springer-Verlag, 1991.
[50] C. C. de Visser, Q. P. Chu, and J. A. Mulder. A New Approach to Linear Regres-
sion with Multivariate Splines. Automatica, 45:29032909, 2009.
[51] E. de Weerdt, Q. P. Chu, and J. A. Mulder. Neural Network Aerodynamic Model
Identication for Aerospace Reconguration. In Proc. of the AIAA Guidance,
Navigation, and Control Conference and Exhibit, 2005.
[52] W. C. Durham. Constrained Control Allocation. Journal of Guidance, Control
and Dynamics, 16(4):717725, 1993.
BIBLIOGRAPHY 243
[53] L. Egbert and I. Halley. Stabilator reconguration ight testing on the F/A-18/E/F.
In Proc. of the SAE Control and Guidance Meeting, Mar. 2001.
[54] D. F. Enns. Control Allocation Approaches. In Proc. of the AIAA Guidance,
Navigation, and Control Conference and Exhibit, Aug. 1998.
[55] R. A. Eslinger and P. R. Chandler. Self-Repairing Flight Control System Program
Overview. In Proc. of the IEEE National Aerospaceand Electronics Conference,
1988.
[56] B. Etkin and L. D. Reid. Dynamics of Flight: Stability and Control. John Wiley
& Sons, 3rd edition, 1996.
[57] K. Ezal, Z. Pan, and P. Kokotovic. Locally Optimal and Robust Backstepping
Design. IEEE Transactions on Automatic Control, 45:260271, 2000.
[58] J. A. Farrell, M. Polycarpou, and M. Sharma. Adaptive Backstepping with Magni-
tude, Rate, and Bandwidth Constraints: Aircraft Longitude Control. In Proc. of the
American Control Conference, pages 38983903, Evanston, IL, 2003. American
Control Conference Council.
[59] J. A. Farrell, M. Polycarpou, M. Sharma, and W. Dong. Command Filtered Back-
stepping. IEEE Transactions on Automatic Control, 54(6):13911395, 2009.
[60] J. A. Farrell, M. Sharma, and M. Polycarpou. On-line Approximation Based
Aircraft Longitudinal Control. In Proc. of the American Control Conference,
Evanston, IL, 2003. American Control Conference Council.
[61] J. A. Farrell, M. Sharma, and M. Polycarpou. Backstepping Based Flight Control
with Adaptive Function Approximation. AIAA Journal of Guidance, Control and
Dynamics, 28(6):10891102, Jan. 2005.
[62] S. Ferrari and M. Jensenius. Robust and Recongurable Flight Control by Neu-
ral Networks. In Proc. of the AIAA 5th Aviation, Technology, Integration, and
Operations Conference (ATIO), 2005.
[63] L. Forssell and U. Nilsson. ADMIRE - The Aero-Data Model in a Research Envi-
ronment. Technical report, FOI, 2005.
[64] R. A. Freeman and P. Kokotovic. Backstepping Design of Robust Controllers for
a Class of Nonlinear Systems. In Proceedings of the IFAC Nonlinear Control
Systems Design Symposium, 1992.
[65] R. A. Freeman and P. Kokotovic. Inverse Optimality in Robust Stabilization. SIAM
J. Control and Optimization, 34(4):13651391, july 1996.
[66] R. A. Freeman and P. Kokotovic. Robust Nonlinear Control Design: State-space
and Lyapunov Techniques. Birkhauser, 1996.
244 BIBLIOGRAPHY
[67] R. A. Freeman and J. A. Primbs. Control Lyapunov Functions: New Ideas From
an Old Source. In Proc. of the 35th Conference on Decision and Control, 1996.
[68] A. Fujimori, M. Kurozumi, P. N. Nikiforuk, and M. M. Gupta. Flight Control
Design of an Automatic Landing Flight Experiment Vehicle. Journal of Guidance,
Control and Dynamics, 23:373376, 2000.
[69] R. J. Gadient and G. L. Weltz. Adaptive/Recongurable Flight Control Augmen-
tation Design Applied to High-Winged Transport Aircraft. In Proc. of the AIAA
Guidance, Navigation and Control Conference and Exhibit, 2004.
[70] T. Glad. Robustness of Nonlinear State Feedback - ASurvey. Automatica, 23:425
435, 1987.
[71] M. Gopinathan, J. D. Bo skovic, R. K. Mehra, and C. Rago. A Multiple Model
Predictive Scheme for Fault-Tolerant Flight Control Design. In Proc. of the 37th
IEEE Conference on Decision and Control, 1998.
[72] K. D. Graham, T. B. Cunningham, and C. Shure. Aircraft Flight Control Surviv-
ability Through Use of Computational Techniques. Technical report, Naval Air
Development Center, Report 77028-30, May 1980.
[73] J. E. Groszkiewicz and M. Bodson. Flight Control Reconguration Using Adap-
tive Methods. In Proc. of the 34th Conf. on Decision and Control, 1995.
[74] R. Hallouzi and M. Verhaegen. Fault-Tolerant Subspace Predictive Control Ap-
plied to a Boeing 747 Model. Journal of Guidance, Control and Dynamics,
31:873883, 2008.
[75] O. H arkeg ard. Flight Control Design Using Backstepping. Masters thesis,
Linkoping University, 2001.
[76] O. H arkeg ard. Backstepping and Control Allocation with Applications to Flight
Control. PhD thesis, Linkoping University, 2003.
[77] O. H arkeg ard and S. T. Glad. A Backstepping Design for Flight Path Angle Con-
trol. In Proc. of the 39th Conference on Decision and Control, 2000.
[78] O. H arkeg ard and S. T. Glad. Flight Control Design Using Backstepping. In Proc.
of the 5th IFAC Symposium on Nonlinear Control Systems, 2001.
[79] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall, 1994.
[80] A. Healy and D. Liebard. Multivariable Sliding Mode Control for Autonomous
Diving and Steering of Unmanned Underwater Vehicles. IEEE Journal of Oceanic
Engineering, 18:327339, 1993.
BIBLIOGRAPHY 245
[81] R. A. Hess and C. McLean. Development of a Design Methodology for Reconfigurable Flight Control Systems. In Proc. of the 38th Aerospace Sciences Meeting and Exhibit, Jan. 2000.
[82] R. A. Hess and S. R. Wells. Sliding Mode Control Applied to Reconfigurable Flight Control Design. In Proc. of the 40th AIAA Aerospace Sciences Meeting and Exhibit, Jan. 2002.
[83] R. A. Hess, S. R. Wells, and T. K. Vetter. MIMO Sliding Mode Control as an Alternative to Reconfigurable Flight Control Designs. In Proc. of the American Control Conference, May 2002.
[84] M. Huzmezan and J. M. Maciejowski. Reconfigurable Control Methods and Related Issues - A Survey. Technical report, Department of Engineering, University of Cambridge, Aug. 1997. Technical report prepared for the DERA under the Research Agreement no. ASF/3455.
[85] S. Hyung and Y. Kim. Reconfigurable Flight Control System Design Using Discrete Model Reference Adaptive Control. In Proc. of the AIAA Guidance, Navigation and Control Conference and Exhibit, Aug. 2005.
[86] P. A. Ioannou and P. V. Kokotovic. Instability Analysis and Improvement of Robustness of Adaptive Control. Automatica, 20(5):583–594, 1984.
[87] P. A. Ioannou and J. Sun. Stable and Robust Adaptive Control. Prentice-Hall,
1995.
[88] A. Isidori. Nonlinear Control Systems. Springer, 3rd edition, 1995.
[89] V. Janardhan, D. Schmitz, and S. N. Balakrishnan. Development and Implemen-
tation of New Nonlinear Control Concepts for a UA. In Proc. of the 23rd Digital
Avionics Systems Conference, 2004.
[90] V. Janardhan, D. Schmitz, and S. N. Balakrishnan. Nonlinear control concepts for
a UA. IEEE Aerospace and Electronic Systems Magazine, 2006.
[91] E. N. Johnson and A. J. Calise. Neural Network Adaptive Control of Systems with Input Saturation. In Proc. of the American Control Conference, pages 2557–2562, 2001.
[92] C. N. Jones and J. M. Maciejowski. Recongurable Flight Control: First Year
Report. Technical report, Department of Engineering, University of Cambridge,
March 2005.
[93] H. S. Ju and C. C. Tsai. Longitudinal Axis Flight Control Law Design by Adaptive Backstepping. IEEE Transactions on Aerospace and Electronic Systems, 2007.
[94] M. M. Kale and A. J. Chipperfield. Reconfigurable Flight Control Strategies Using Model Predictive Control. In Proc. of the 2002 IEEE International Symposium on Intelligent Control, 2002.
[95] M. M. Kale and A. J. Chipperfield. Robust and Stabilized MPC Formulations for Fault Tolerant and Reconfigurable Flight Control. In Proc. of the 2004 IEEE International Symposium on Intelligent Control, 2004.
[96] I. Kaminer, A. Pascoal, E. Hallberg, and C. Silvestre. Trajectory Tracking for Autonomous Vehicles: An Integrated Approach to Guidance and Control. Journal of Guidance, Control, and Dynamics, 21:29–38, 1998.
[97] I. Kaminer, O. Yakimenko, V. Dobrokhodov, A. Pascoal, N. Hovakimyan, C. Cao,
A. Young, and V. Patel. Coordinated Path Following for Time-Critical Missions
of Multiple UAVs via L1 Adaptive Output Feedback Controllers. In Proc. of the
AIAA Guidance, Navigation and Control Conference and Exhibit, 2007.
[98] Y. J. Kanayama, Y. Kimura, F. Miyazaki, and T. Noguchi. A Stable Tracking Con-
trol Method for an Autonomous Mobile Robot. In Proc. of the IEEE International
Conference on Robotics and Automation, 1990.
[99] S. Kanev and M. Verhaegen. Controller Reconfiguration for Non-linear Systems. Control Engineering Practice, 8:1223–1235, Oct. 2000.
[100] S. Kanev, M. Verhaegen, and G. Nijsse. A Method for the Design of Fault-Tolerant Systems in Case of Sensor and Actuator Faults. In Proc. of the European Control Conference, Sept. 2001.
[101] I. Kanellakopoulos, P. V. Kokotovic, and A. S. Morse. Systematic Design of Adaptive Controllers for Feedback Linearizable Systems. IEEE Transactions on Automatic Control, 36(11):1241–1253, Nov. 1991.
[102] D. Karagiannis and A. Astolfi. Nonlinear Observer Design Using Invariant Manifolds and Applications. In Proc. of the 44th IEEE Conf. Decision and Control, 2005.
[103] D. Karagiannis and A. Astolfi. Nonlinear Adaptive Control of Systems in Feedback Form: An Alternative to Adaptive Backstepping. Systems and Control Letters, 57:733–739, 2008.
[104] D. Karagiannis and A. Astolfi. Observer Design for a Class of Nonlinear Systems using Dynamic Scaling with Application to Adaptive Control. In Proc. of the 47th IEEE Conference on Decision and Control, 2008.
[105] S. P. Karason and A. M. Annaswamy. Adaptive Control in the Presence of Input Constraints. IEEE Trans. on Automatic Control, 39(11):2325–2330, 1994.
[106] H. K. Khalil. Nonlinear Systems. Prentice Hall, 3rd edition, 2002.
[107] K. S. Kim, K. J. Lee, and Y. Kim. Reconfigurable Flight Control System Design Using Direct Adaptive Method. Journal of Guidance, Control, and Dynamics, 26(4):543–550, July-Aug. 2003.
[108] K. S. Kim, K. J. Lee, and Y. S. Kim. Model Following Reconfigurable Flight Control System Design Using Direct Adaptive Scheme. In Proc. of the AIAA Guidance, Navigation and Control Conference and Exhibit, Aug. 2002.
[109] S. H. Kim, Y. S. Kim, and C. Song. A Robust Adaptive Nonlinear Control Approach to Missile Autopilot Design. Control Engineering Practice, 12(2):149–154, 2004.
[110] P. V. Kokotovic and M. Arcak. Constructive Nonlinear Control: A Historical Perspective. Automatica, 37:637–662, 2001.
[111] P. V. Kokotovic and H. J. Sussmann. A Positive Real Condition for Global Stabilization of Nonlinear Systems. Systems and Control Letters, 19:177–185, 1989.
[112] I. Konstantopoulos. Eigenstructure Assignment in Reconfigurable Control Systems. citeseer.ist.psu.edu/152208.html, 1996.
[113] M. Krstic. Optimal Adaptive Control - Contradiction in Terms or a Matter of Choosing the Right Cost Functional? IEEE Transactions on Automatic Control, 53(8):1942–1947, 2008.
[114] M. Krstic. On Using Least-squares Updates Without Regressor Filtering in Identification and Adaptive Control of Nonlinear Systems. Automatica, 45:731–735, 2009.
[115] M. Krstic and H. Deng. Stabilization of Nonlinear Uncertain Systems. Springer,
1998.
[116] M. Krstic, D. Fontaine, P. V. Kokotovic, and J. D. Paduano. Useful Nonlinearities and Global Stabilization of Bifurcations in a Model of Jet Engine Surge and Stall. IEEE Transactions on Automatic Control, 43(12):1739–1745, 1998.
[117] M. Krstic, I. Kanellakopoulos, and P. V. Kokotovic. Adaptive Nonlinear Control Without Overparametrization. Systems and Control Letters, 19:177–185, Sept. 1992.
[118] M. Krstic, I. Kanellakopoulos, and P. V. Kokotovic. Nonlinear and Adaptive Control Design. John Wiley & Sons, 1995.
[119] M. Krstic and P. V. Kokotovic. Adaptive Nonlinear Design with Controller-Identifier Separation and Swapping. IEEE Transactions on Automatic Control, 40(3):426–440, March 1995.
[120] M. Krstic and P. V. Kokotovic. Modular Approach to Adaptive Nonlinear Stabilization. Automatica, 32:625–629, 1996.
[121] M. Krstic, P. V. Kokotovic, and I. Kanellakopoulos. Transient Performance Improvement with a New Class of Adaptive Controllers. Systems and Control Letters, 21:451–461, 1993.
[122] M. Krstic and P. Tsiotras. Inverse Optimality Results for the Attitude Motion of a Rigid Spacecraft. IEEE Transactions on Automatic Control, 44:1042–1049, 1999.
[123] E. Lavretsky and N. Hovakimyan. Positive μ-modification for Stable Adaptation in Dynamic Inversion Based Adaptive Control with Input Saturation. In Proc. of the American Control Conference, pages 3373–3378, 2005.
[124] E. Lavretsky, N. Hovakimyan, and C. Cao. Adaptive Design for Uncertain Sys-
tems with Nonlinear-in-Control Dynamics. In Proc. of the AIAA Guidance, Navi-
gation, and Control Conference and Exhibit, 2007.
[125] T. Lee and Y. Kim. Nonlinear Adaptive Flight Control Using Backstepping and Neural Networks Controller. Journal of Guidance, Control, and Dynamics, 24(4):675–682, July-Aug. 2001.
[126] G. G. Lendaris, R. A. Santiago, and M. S. Carroll. Proposed Framework for Ap-
plying Adaptive Critics in Real-Time Realm. Proceedings of the 2002 Interna-
tional Joint Conference on Neural Networks, 2002.
[127] B. L. Stevens and F. L. Lewis. Aircraft Control and Simulation. John Wiley & Sons, 1992.
[128] Z. H. Li and M. Krstic. Optimal Design of Adaptive Tracking Controllers for Nonlinear Systems. Automatica, 33:1459–1473, 1997.
[129] D. M. Littleboy and P. R. Smith. Using Bifurcation Methods to Aid Nonlinear Dynamic Inversion Control Law Design. Journal of Guidance, Control, and Dynamics, 21:632–638, 1998.
[130] J. Löfberg. Backstepping with Local LQ Performance and Global Approximation of Quadratic Performance. In Proc. of the American Control Conference, 2000.
[131] T. J. J. Lombaerts, Q. P. Chu, J. A. Mulder, and D. A. Joosten. Real Time Damaged Aircraft Model Identification for Reconfiguring Flight Control. In Proc. of the AIAA Guidance, Navigation, and Control Conference and Exhibit, 2007.
[132] T. J. J. Lombaerts, H. Huisman, Q. Chu, J. A. Mulder, and D. Joosten. Nonlinear Reconfiguring Flight Control Based on Online Physical Model Identification. Journal of Guidance, Control, and Dynamics, 32(3):727–748, 2009.
[133] T. J. J. Lombaerts, M. H. Smaili, O. Stroosma, Q. P. Chu, J. A. Mulder, and D. Joosten. Piloted Simulator Evaluation Results of New Fault-Tolerant Flight Control Algorithm. Journal of Guidance, Control, and Dynamics, 32(6):1747–1765, 2009.
[134] W. Luo, Y. C. Chu, and K. V. Ling. Inverse Optimal Adaptive Control for Attitude Tracking of Spacecraft. IEEE Transactions on Automatic Control, 50(11):1639–1654, 2005.
[135] A. M. Lyapunov. The General Problem of the Stability of Motion. Taylor &
Francis, 1992. English translation of the original publication in Russian from
1892.
[136] C. Manzie. Advanced Control Lecture Notes. Melbourne School of Engineering,
2004.
[137] P. S. Maybeck and R. D. Stevens. Reconfigurable Flight Control via Multiple Model Adaptive Control Methods. Proc. of the 29th Conf. on Decision and Control, 1990.
[138] P. S. Maybeck and R. D. Stevens. Reconfigurable Flight Control via Multiple Model Adaptive Control Methods. IEEE Transactions on Aerospace and Electronic Systems, 27(3), May 1991.
[139] D. McRuer and D. Graham. Eighty Years of Flight Control - Triumphs and Pitfalls of the Systems Approach. Journal of Guidance, Control, and Dynamics, 4:353–362, 1981.
[140] M. Mears, S. Pruett, and J. Houtz. URV Flight Test of an ADA-Implemented, Self-Repairing Flight Control System. WL-TR-92-3101, Aug. 1992.
[141] J. Monaco, D. Ward, and A. Bateman. A Retrofit Architecture for Model-Based Adaptive Flight Control. In Proc. of the AIAA 1st Intelligent Systems Technical Conference, 2004.
[142] G. Moon, H. Lee, and Y. Kim. Reconfigurable Flight Control Law Based on Model Following Scheme and Parameter Estimation. In Proc. of the AIAA Guidance, Navigation, and Control Conference and Exhibit, Aug. 2005.
[143] J. A. Mulder, W. H. J. J. van Staveren, J. C. van der Vaart, and E. de Weerdt.
Flight Dynamics, Lecture Notes AE3-302. Technical report, Delft University of
Technology, 2006.
[144] R. Murray-Smith and T. A. Johansen, editors. Multiple Model Approaches to
Modelling and Control. Taylor & Francis, 1997.
[145] J. Nakanishi, J. A. Farrell, and S. Schaal. Composite Adaptive Control with Locally Weighted Statistical Learning. Neural Networks, 18:71–90, 2005.
[146] M. Narasimhan, H. Dong, R. Mittal, and S. N. Singh. Optimal Yaw Regulation and Trajectory Control of Biorobotic AUV Using Mechanical Fins Based on CFD Parametrization. Journal of Fluids Engineering, 128:687–698, 2006.
[147] K. S. Narendra and A. M. Annaswamy. Stable Adaptive Systems. Prentice Hall,
1989.
[148] K. S. Narendra and K. Parthasarathy. Identification and Control of Dynamical Systems Using Neural Networks. IEEE Transactions on Neural Networks, 1990.
[149] L. T. Nguyen, M. E. Ogburn, W. P. Gilbert, K. S. Kibler, P. W. Brown, and P. L. Deal. Simulator Study of Stall/Post-Stall Characteristics of a Fighter Airplane with Relaxed Longitudinal Static Stability. Technical report, NASA Langley Research Center, 1979.
[150] N. Nguyen and K. Krishnakumar. Hybrid Intelligent Flight Control with Adaptive Learning Parameter Estimation. Journal of Aerospace Computing, Information, and Communication, 6:171–186, 2009.
[151] T. S. No, B. M. Min, R. H. Stone, and J. E. K. C. Wong. Control and Simulation of Arbitrary Flight Trajectory-Tracking. Control Engineering Practice, 13:601–612, 2005.
[152] M. Oosterom, P. Bergsten, and R. Babuska. Fuzzy Gain-Scheduled H-infinity Flight Control Law Design. In Proc. of the AIAA Guidance, Navigation, and Control Conference and Exhibit, 2002.
[153] M. Oosterom, G. Schram, R. Babuska, and H. B. Verbruggen. Automated Procedure for Gain Scheduled Flight Control Law Design. In Proc. of the AIAA Guidance, Navigation, and Control Conference and Exhibit, 2000.
[154] M. Oppenheimer and D. Doman. A Method for Including Control Effector Interactions in the Control Allocation Problem. In Proc. of the AIAA Guidance, Navigation and Control Conference and Exhibit, 2007.
[155] M. Pachter, J. J. D'Azzo, and J. L. Dargan. Automatic Formation Flight Control. Journal of Guidance, Control, and Dynamics, 17(6), 1994.
[156] M. Pachter, J. J. D'Azzo, and A. W. Proud. Tight Formation Control. Journal of Guidance, Control, and Dynamics, 24:246–254, 2001.
[157] M. Pachter and E. B. Nelson. Reconfigurable Flight Control. IMechE, 219, 2005.
[158] A. B. Page, J. Monaco, and D. Meloney. Flight Testing of a Retrofit Reconfigurable Control Law Architecture Using an F/A-18C. In Proc. of the AIAA Guidance, Navigation, and Control Conference and Exhibit, 2006.
[159] A. B. Page and M. L. Steinberg. Effects of Control Allocation Algorithms on a
Nonlinear Adaptive Design. Technical report, AIAA-99-4282, 1999.
[160] B. Papadales and M. Downing. UAV Science Missions: A Business Perspective.
In Infotech@Aerospace, 2005.
[161] A. A. Pashilkar, N. Sundararajan, and P. Saratchandran. Adaptive Nonlinear Neural Controller for Aircraft Under Actuator Failures. Journal of Guidance, Control, and Dynamics, 30:835–847, 2007.
[162] R. J. Patton. Fault-Tolerant Control: The 1997 Situation. In Proc. IFAC Safeprocess '97, pages 1033–1055, 1997.
[163] M. M. Polycarpou, J. A. Farrell, and M. Sharma. Robust On-line Approximation Control of Uncertain Nonlinear Systems Subject to Constraints. In Proc. of the 9th IEEE International Conference on Engineering of Complex Computer Systems, pages 66–74, 2004.
[164] F. Pozo, F. Ikhouane, and J. Rodellar. Numerical Issues in Backstepping Control: Sensitivity and Parameter Tuning. Journal of the Franklin Institute, 345:891–905, 2008.
[165] L. Praly. Asymptotic Stabilization via Output Feedback for Lower Triangular Systems with Output Dependent Incremental Rate. IEEE Transactions on Automatic Control, 48:1103–1108, 2003.
[166] J. A. Primbs. Nonlinear Optimal Control: A Receding Horizon Approach. PhD
thesis, California Institute of Technology, Pasadena, California, 1999.
[167] J. A. Primbs, V. Nevistic, and J. C. Doyle. A Receding Horizon Generalization of Pointwise Min-Norm Controllers. IEEE Transactions on Automatic Control, 45:898–909, 2000.
[168] A. W. Proud, M. Pachter, and J. J. D'Azzo. Close Formation Flight Control. In Proc. of the AIAA Guidance, Navigation and Control Conference, 1999.
[169] H. Rauch, R. Kline-Schoder, J. Adams, and H. Youssef. Fault Detection, Isolation and Reconfiguration for Aircraft Using Neural Networks. In Proc. of the AIAA 11th Applied Aerodynamics Conference, 1993.
[170] K. Refson. Moldy User's Manual. Department of Earth Sciences, May 2001.
[171] W. Ren and E. Atkins. Nonlinear Trajectory Tracking for Fixed Wing UAVs via Backstepping and Parameter Adaptation. In Proc. of the AIAA Guidance, Navigation and Control Conference and Exhibit, Aug. 2005.
[172] W. Ren and R. W. Beard. Trajectory Tracking for Unmanned Air Vehicles With Velocity and Heading Rate Constraints. IEEE Transactions on Control Systems Technology, 12:706–716, 2004.
[173] R. S. Russell. Nonlinear F-16 Simulations Using Simulink and Matlab. Technical report, University of Minnesota, 2003.
[174] I. J. Schoenberg. Spline Functions and the Problem of Graduation. Proceedings of the National Academy of Sciences, 52:947–950, 1964.
[175] R. Sepulchre, M. Jankovic, and P. V. Kokotovic. Constructive Nonlinear Control. Springer, 1997.
[176] D. H. Shin and Y. Kim. Reconfigurable Flight Control System Design Using Adaptive Neural Networks. IEEE Transactions on Control Systems Technology, 12(1):87–100, Jan. 2004.
[177] D. H. Shin and Y. Kim. Nonlinear Discrete-Time Reconfigurable Flight Control Law Using Neural Networks. IEEE Transactions on Control Systems Technology, 14(3):408–422, May 2006.
[178] Y. Shin. Neural Network Based Adaptive Control for Nonlinear Dynamic Regimes.
PhD thesis, Georgia Institute of Technology, 2005.
[179] Y. B. Shtessel and J. Buffington. Multiple Time Scale Flight Control Using Reconfigurable Sliding Modes. AIAA Journal on Guidance, Control and Dynamics, 22(6):873–883, 1999.
[180] Y. B. Shtessel, J. Buffington, M. Pachter, P. Chandler, and S. Banda. Reconfigurable Flight Control on Sliding Modes Addressing Actuator Deflection and Deflection Rate Saturation. AIAA-98-4112, 1998.
[181] A. T. Simmons and A. S. Hodel. Control Allocation for the X-33 Using Existing
and Novel Quadratic Programming Techniques. In Proc. of the American Control
Conference, 2004.
[182] S. N. Singh, P. Chandler, C. Schumacher, S. Banda, and M. Pachter. Adaptive Feedback Linearizing Nonlinear Close Formation Control of UAVs. In Proc. of the American Control Conference, pages 854–858, June 2000.
[183] S. N. Singh and M. Steinberg. Adaptive Control of Feedback Linearizable Nonlinear Systems With Application to Flight Control. In Proc. of the AIAA Guidance, Navigation and Control Conference, July 1996.
[184] S. N. Singh, M. L. Steinberg, and A. B. Page. Nonlinear Adaptive and Sliding Mode Flight Path Control of F/A-18 Model. IEEE Transactions on Aerospace and Electronic Systems, 39:1250–1262, 2003.
[185] W. Siwakosit and R. A. Hess. Multi-Input/Multi-Output Reconfigurable Flight Control Design. Journal of Guidance, Control, and Dynamics, 24(6), Nov.-Dec. 2001.
[186] J. J. E. Slotine and W. Li. Composite Adaptive Control of Robot Manipulators. Automatica, 25:509–519, 1989.
[187] J. J. E. Slotine and W. Li. Applied Nonlinear Control. Prentice Hall, 1991.
[188] H. Smaili, J. H. Breeman, T. J. J. Lombaerts, and D. Joosten. A Simulation Bench-
mark for Integrated Fault Tolerant Flight Control Evaluation. In Proc. of the AIAA
Modeling and Simulation Technologies Conference and Exhibit, 2006.
[189] L. Sonneveldt. Constrained Nonlinear Adaptive Backstepping Flight Control: Application to an F-16/MATV Model. Master's thesis, Delft University of Technology, 2006.
[190] E. D. Sontag. A Lyapunov-like Characterization of Asymptotic Controllability. SIAM Journal on Control and Optimization, 21:462–471, 1983.
[191] E. D. Sontag. A Universal Construction of Artstein's Theorem on Nonlinear Stabilization. Systems & Control Letters, 13:117–123, 1989.
[192] E. D. Sontag. Smooth Stabilization Implies Coprime Factorization. IEEE Transactions on Automatic Control, 34:435–443, 1989.
[193] E. D. Sontag. On the Input-to-state Stability Property. European Journal of Control, 1:24–36, 1995.
[194] E. D. Sontag. Mathematical Control Theory: Deterministic Finite Dimensional
Systems. Springer, New York, 2nd edition, 1998.
[195] M. Steinberg. Historical Overview of Research in Recongurable Flight Control.
IMechE, 219, 2005.
[196] M. L. Steinberg. Comparison of Intelligent, Adaptive and Nonlinear Flight Control Laws. Journal of Guidance, Control, and Dynamics, 24(4):693–699, July-Aug. 2001.
[197] R. F. Stengel. Intelligent Failure-Tolerant Control. IEEE Control Systems Magazine, 11(4):14–23, 1991.
[198] L. Tang, M. Roemer, J. Ge, A. Crassidis, J. Prasad, and C. Belcastro. Methodolo-
gies for Adaptive Flight Envelope Estimation and Protection. In Proc. of the AIAA
Guidance, Navigation, and Control Conference, 2009.
[199] M. B. Tischler. Advances in Aircraft Flight Control. Taylor & Francis, 1996.
[200] S. Tsach, J. Chemla, and D. Penn. UAV Systems Development in IAI - Past,
Present and Future. In Proc. of the 2nd AIAA Unmanned Unlimited Systems,
Technologies, and Operations - Aerospace, Land, and Sea Conference, 2003.
[201] J. Tsinias. Existence of Control Lyapunov Functions and Applications to State Feedback Stabilizability of Nonlinear Systems. SIAM Journal on Control and Optimization, 29:457–473, 1991.
[202] E. R. van Oort, Q. P. Chu, and J. A. Mulder. Robust Model Predictive Control of a Feedback Linearized Nonlinear F-16/MATV Aircraft Model. In Proc. of the AIAA Guidance, Navigation and Control Conference and Exhibit, 2006.
[203] J. C. van Tooren. Fuzzy Aerodynamic Modeling and Identification - Application to the F-16 Aerodynamic Model. Master's thesis, Delft University of Technology, 2006.
[204] S. Vijayakumar and S. Schaal. Locally Weighted Projection Regression: Incre-
mental Real Time Learning in High Dimensional Space. In Proc. of the 17th
International Conference on Machine Learning, 2000.
[205] G. P. Walker and D. A. Allen. X-35B STOVL Flight Control Law Design and Fly-
ing Qualities. In Proc. of the International Powered Lift Conference and Exhibit,
2002.
[206] H. Wang and J. Sun. Modified Reference Adaptive Control with Saturated Inputs. In Proc. of the Conf. on Decision and Control, pages 3255–3256, 1992.
[207] J. Wang, V. Patel, C. Cao, N. Hovakimyan, and E. Lavretsky. Novel L1 Adaptive Control Methodology for Aerial Refueling with Guaranteed Transient Performance. Journal of Guidance, Control, and Dynamics, 31:182–193, 2008.
[208] D. G. Ward, M. Sharma, N. D. Richards, J. D. Luca, and M. Mears. Intelligent
Control of Un-Manned Air Vehicles: Program Summary and Representative Re-
sults. In Proc. of the 2nd AIAA Unmanned Unlimited Systems, Technologies and
Operations Aerospace, Land and Sea, 2003.
[209] S. Wegener, D. Sullivan, J. Frank, and F. Enomoto. UAV Autonomous Operations
for Airborne Science Missions. In Proc. of the AIAA 3rd Unmanned Unlimited
Technical Conference, Workshop and Exhibit, 2004.
[210] S. Wiggins. Introduction to Applied Nonlinear Dynamical Systems and Chaos.
Springer-Verlag, 1990.
[211] I. Yavrucuk, J. Prasad, and S. Unnikrishnan. Envelope Protection for Autonomous Unmanned Aerial Vehicles. Journal of Guidance, Control, and Dynamics, 32(1):248–261, 2009.
[212] P.-C. P. Yip. Robust and Adaptive Nonlinear Control Using Dynamic Surface
Controller with Applications to Intelligent Vehicle Highway Systems. PhD thesis,
University of California at Berkeley, 1997.
[213] P.-C. P. Yip and J. K. Hedrick. Adaptive Dynamic Surface Control: A Simplified Algorithm for Adaptive Backstepping Control of Nonlinear Systems. Int. J. Control, 71(5):959–979, 1998.
[214] Y. Zhang and J. Jiang. Integrated Design of Reconfigurable Fault-Tolerant Control Systems. Journal of Guidance, Control, and Dynamics, 24(1), July 2000.
[215] K. Zhou and J. Doyle. Essentials of Robust Control. Prentice Hall, 1997.
[216] A. Zolghadri. A Redundancy-based Strategy for Safety Management in Modern Civil Aircraft. Control Engineering Practice, 8:545–554, 2000.
[217] A. Zolghadri, D. Henry, and M. Monsion. Design of Nonlinear Observers for Fault Diagnosis: A Case Study. Control Engineering Practice, 4:1535–1544, 1996.
Nomenclature
Abbreviations
ABS Adaptive Backstepping
ADMIRE Aerodata Model in Research Environment
AIAA American Institute of Aeronautics and Astronautics
AMS Attainable Moment Set
BS Backstepping
CA Control Allocation
CABS Constrained Adaptive Backstepping
CAP Control Anticipation Parameter
CFD Computational Fluid Dynamics
CG Center of Gravity
CLF Control Lyapunov Function
DOF Degrees of Freedom
DUT Delft University of Technology
EA Eigenstructure Assignment
FBL Feedback Linearization
FBW Fly-By-Wire
FDIE Fault Detection, Isolation and Estimation
HJB Hamilton-Jacobi-Bellman
I&I Immersion and Invariance
IEEE Institute of Electrical and Electronics Engineers
IMM Interacting Multiple Model
ISS Input-to-state Stable
LOES Lower Order Equivalent System
LQR Linear Quadratic Regulator
MAV Mean Absolute Value
MMST Multiple Model Switching and Tuning
MPC Model Predictive Control
MRAC Model Reference Adaptive Control
NASA National Aeronautics and Space Administration
NDI Nonlinear Dynamic Inversion
NLR National Aerospace Laboratory
NN Neural Network
PCA Propulsion Controlled Aircraft
PE Persistently Exciting
PIM Pseudo Inverse Method
QFT Quantitative Feedback Theory
QP Quadratic Programming
RFC Recongurable Flight Control
RLS Recursive Least-Squares
RMS Root Mean Square
SCAS Stability and Control Augmentation System
SMC Sliding Mode Control
UAV Unmanned Aerial Vehicle
USAF United States Air Force
WPI Weighted Pseudo-Inverse
Greek Symbols
α  Aerodynamic angle of attack
α  Virtual control
β  Aerodynamic angle of sideslip
δ_a  Aileron deflection angle
δ_e  Elevator deflection angle
δ_r  Rudder deflection angle
δ_al  Left aileron deflection angle
δ_ar  Right aileron deflection angle
δ_el  Left elevator deflection angle
δ_er  Right elevator deflection angle
δ_lef  Leading edge flap deflection angle
δ_tef  Trailing edge flap deflection angle
δ_th  Throttle position
Γ  Update gain
β  Invariant manifold
τ_eng  Engine lag time constant
θ  Aircraft body axis pitch angle
Φ  Regressor vector
Roman Symbols
c̄  Mean aerodynamic chord length
c  Control gain
e  Prediction error
F_B  Body-fixed reference frame
F_E  Earth-fixed reference frame
F_O  Vehicle carried local earth reference frame
F_S  Stability axes reference frame
F_T  Total thrust
F_W  Wind axes reference frame
g  Gravity acceleration
g_1, g_2, g_3  Wind axes gravity components
h  Altitude
H_eng  Engine angular momentum
I_x  Roll moment of inertia
I_y  Pitch moment of inertia
I_z  Yaw moment of inertia
I_xy, I_xz, I_yz  Product moments of inertia
k  Integral gain
M  Mach number
m  Total aircraft mass
n_y  Normal acceleration in body y-axis
n_z  Normal acceleration in body z-axis
p  Body axis roll rate
P_a  Engine power, percent of maximum power
P_c  Commanded engine power to the engine, percent of maximum power
P_c  Commanded engine power based on throttle position, percent of maximum power
p_s  Stability axis roll rate
p_stat  Static air pressure
q  Body axis pitch rate
q_0, q_1, q_2, q_3  Quaternion components
q_s  Stability axis pitch rate
r  Body axis yaw rate
x  System state
x_E, y_E, z_E  Aircraft position w.r.t. reference point
x_cg,r  Reference center of gravity location
x_cg  Center of gravity location
y  System output
y_r  Reference signal
z  Tracking error
Summary
Under the influence of technological developments in aerospace engineering, the performance requirements for modern fighter aircraft have steadily increased over the past decades, while at the same time the size of the desired operational flight envelope has grown considerably. To achieve extreme maneuverability, these aircraft are often designed to be aerodynamically unstable and equipped with redundant control actuators. A good example is the Lockheed Martin F-22 Raptor, which uses a so-called thrust vectoring control system to achieve a higher degree of maneuverability. In addition, the survivability requirements of modern warfare are becoming ever stricter for both manned and unmanned fighter aircraft. Taking all of these requirements into account in the design of the flight control systems for this type of aircraft poses an enormous challenge for control engineers.
To date, most flight control systems for aircraft are designed using linearized aircraft models, each valid around a trim condition in the operational flight envelope. Using well-established classical control techniques, a linear controller can be derived for each local model. The gains of the various linear controllers can be stored in lookup tables, and by interpolating between them a single controller is effectively obtained that is valid in the entire operational flight envelope. A problem of this approach, however, is that for complex nonlinear systems such as modern fighter aircraft it cannot guarantee high performance and robustness requirements. Nonlinear control methods have been developed to overcome the shortcomings of this classical approach. Of these techniques, the nonlinear dynamic inversion (NDI) method is the best known and the most widely used.
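As a minimal, self-contained sketch of this classical gain-scheduling idea (the trim points, gain values, and single scheduling variable below are hypothetical, chosen only for illustration), the stored linear gains can be interpolated between trim conditions:

```python
import bisect

# Hypothetical trim airspeeds (m/s) and the pitch-loop gain designed
# for the linearized model at each trim point.
trim_speeds = [100.0, 150.0, 200.0, 250.0]
pitch_gains = [2.0, 1.6, 1.3, 1.1]

def scheduled_gain(v):
    """Linearly interpolate the stored gains; hold the edge values outside the table."""
    if v <= trim_speeds[0]:
        return pitch_gains[0]
    if v >= trim_speeds[-1]:
        return pitch_gains[-1]
    i = bisect.bisect_right(trim_speeds, v) - 1
    t = (v - trim_speeds[i]) / (trim_speeds[i + 1] - trim_speeds[i])
    return (1.0 - t) * pitch_gains[i] + t * pitch_gains[i + 1]

gain = scheduled_gain(175.0)  # blends the designs made at 150 and 200 m/s
```

Real schedules interpolate full gain matrices over several variables (typically Mach number and altitude), but the limitation is the same: each stored design is only guaranteed near its own trim point.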
NDI is a control method that can deal explicitly with systems containing nonlinearities. By applying nonlinear feedback and state transformations, the nonlinear system can be transformed into a constant linear system, without resorting to linear approximations of the system. A classical controller can then be designed for the resulting linear system. However, performing a perfect nonlinear dynamic inversion requires a highly accurate model of the system. Deriving such a model for a fighter aircraft is a very costly and time-consuming process, since it requires wind tunnel experiments, computational fluid dynamics (CFD) computations, and an extensive flight test program. The resulting empirical aircraft model will never be 100% accurate. The deficiencies in the model can be compensated for by designing a robust linear controller for the NDI-linearized system. But even then, the desired flight performance cannot be maintained in the case of severe faults caused by large, sudden changes in the aircraft dynamics, resulting for example from structural damage or an actuator failure.
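In generic textbook notation (not the thesis's own symbols), the inversion step for an input-affine system can be summarized as:

```latex
\dot{x} = f(x) + g(x)\,u , \qquad
u = g(x)^{-1}\bigl(\nu - f(x)\bigr)
\quad \Longrightarrow \quad \dot{x} = \nu
```

A linear control law then generates the new input \nu. When only model estimates \hat{f} and \hat{g} are available, the cancellation is inexact and residual error dynamics remain, which is why a robust or adaptive outer loop is needed on top of the inversion.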
For a more elegant way of dealing with large model uncertainties, one can turn to an adaptive control system with some form of real-time model identification. Recent developments in computers and available computing power have made it possible to implement more complex, adaptive flight control systems. Naturally, an adaptive control system has the potential to do more than compensate for model uncertainties; it can also identify sudden changes in the dynamic behavior of the aircraft. Such changes will generally lead to an increased pilot workload or even to complete loss of control of the aircraft. If the system dynamics of the aircraft after the damage can be estimated correctly by the model identification system, the redundant control actuators and the fly-by-wire structure of modern fighter aircraft can be exploited to reconfigure the flight control system.
Several methods are available to design an estimator that can update the aircraft model used by the control system, for example neural networks or least-squares techniques. A drawback of an adaptive design with a separate estimator is that the certainty equivalence principle does not hold for nonlinear systems. In other words, the dynamics of the estimator are not fast enough to cope with the possibly faster-than-linear growth of instabilities in nonlinear systems. To overcome this problem, a controller with strong parametric robustness properties is needed. As an alternative solution, the controller and estimator can be designed as one integrated system using the adaptive backstepping method. Adaptive backstepping makes it possible to derive a controller for a broad class of nonlinear systems with parametric uncertainties by systematically constructing a Lyapunov function for the closed-loop system.
The main goal of this thesis is to investigate the suitability of the nonlinear adaptive backstepping technique, combined with real-time model identification, for the design of a reconfigurable flight control system for a modern fighter aircraft. This system must have the following characteristics:
- A single nonlinear adaptive control law is used that is valid in the entire operational envelope of the aircraft and whose performance and stability properties can be proven theoretically.
- The control system improves the performance and the survivability of the aircraft when disturbances occur as a result of damage.
- The algorithms that make up the control system have excellent numerical stability properties and require little computational power (a real-time implementation is feasible).
Adaptive backstepping is a recursive, nonlinear design method that is based on Lyapunov's stability theory and uses dynamic parameter update laws to compensate for parametric uncertainties. The idea behind backstepping is to derive a controller recursively by considering some state variables as virtual system inputs and designing intermediate, virtual controllers for them. Backstepping achieves global asymptotic stability for the state variables of the closed-loop system. The proof of these properties follows directly from the recursive procedure, since in this way a Lyapunov function is constructed for the entire system, including the parameter estimates. The tracking errors drive the parameter estimation process of the procedure. It is also possible to take physical constraints on system inputs and state variables into account in the design, so that the identification process is not disturbed during periods of actuator saturation. A drawback of the integrated adaptive backstepping method is that the estimated parameters are only pseudo-estimates of the true uncertain parameters. There is no guarantee whatsoever that the true values of the uncertain parameters are found, since the adaptation only tries to satisfy an overall system stability criterion, namely the Lyapunov function. Furthermore, increasing the adaptation gains does not necessarily improve the closed-loop system response, because of the strong coupling between the controller and the estimator dynamics.
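The recursive procedure can be illustrated on a minimal second-order system x1' = x2, x2' = theta*phi(x1) + u with one unknown parameter theta. This is only a sketch of the technique, not the thesis's flight control design; the gains, the regressor phi, and the constant reference are illustrative assumptions.

```python
import numpy as np

# Plant: x1' = x2, x2' = theta*phi(x1) + u, with theta unknown to the controller.
theta_true = 2.0
phi = lambda x1: x1 ** 2           # known regressor, unknown coefficient

k1, k2, gamma = 2.0, 2.0, 1.0      # backstepping gains, adaptation gain
r = 1.0                            # constant reference for x1

dt, steps = 1e-3, 20000            # 20 s of Euler integration
x1, x2, theta_hat = 0.0, 0.0, 0.0

for _ in range(steps):
    # Step 1: treat x2 as a virtual input for the x1-subsystem.
    z1 = x1 - r
    alpha = -k1 * z1               # intermediate, virtual controller
    z2 = x2 - alpha

    # Step 2: actual control law; alpha' = -k1*z1' = -k1*(-k1*z1 + z2).
    alpha_dot = -k1 * (-k1 * z1 + z2)
    u = -z1 - k2 * z2 - theta_hat * phi(x1) + alpha_dot

    # Tracking-error-driven update law, obtained from the Lyapunov function
    # V = z1^2/2 + z2^2/2 + (theta - theta_hat)^2 / (2*gamma).
    theta_hat += dt * gamma * phi(x1) * z2

    # Euler step of the true plant (derivatives from the pre-update state).
    x1_dot = x2
    x2_dot = theta_true * phi(x1) + u
    x1 += dt * x1_dot
    x2 += dt * x2_dot

print(f"final tracking error: {abs(x1 - r):.5f}")
```

Note that the update law drives theta_hat with the tracking error z2, so as stated above there is no guarantee that theta_hat converges to theta_true; only the joint Lyapunov function is forced to decrease.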
The immersion and invariance (I&I) method offers an alternative way to construct a nonlinear estimator. With this approach it is possible to assign prescribed stable dynamics to the parameter estimation error. The resulting estimator is combined with a backstepping controller to obtain a modular adaptive control method. The estimator designed on the basis of I&I is fast enough to cope with the potential faster-than-linear growth of nonlinear systems. The resulting modular control method is much easier to tune than the standard adaptive backstepping method, in which the estimator is adapted on the basis of the tracking errors. In fact, the closed-loop system obtained by applying the I&I-based adaptive backstepping controller can be regarded as a cascade connection of two stable systems with prescribed asymptotic characteristics. As a result, the performance of the closed-loop system with the new adaptive controller can be improved significantly.
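A scalar sketch of the I&I idea: for a plant x' = theta*phi(x) + u with phi(x) = x, augmenting the estimate with a state-dependent shaping term beta(x) = gamma*x^2/2 makes the estimation error z = theta_hat + beta(x) - theta obey the prescribed stable dynamics z' = -gamma*x^2*z, independently of any tracking error. The plant, the input signal, and the gain below are illustrative assumptions, not the thesis's flight control design.

```python
import math

theta_true = 1.0                   # unknown parameter
gamma = 5.0                        # estimator gain
dt, steps = 1e-3, 20000            # 20 s of Euler integration

x, theta_hat = 1.0, 0.0

beta = lambda x: 0.5 * gamma * x * x   # shaping function, d(beta)/dx = gamma*x

for k in range(steps):
    t = k * dt
    u = -3.0 * x + math.sin(t)     # any bounded, sufficiently exciting input
    x_dot = theta_true * x + u     # true plant (phi(x) = x)

    # I&I update law: chosen so that z = theta_hat + beta(x) - theta
    # satisfies z' = -gamma * x^2 * z (prescribed stable error dynamics).
    theta_hat += dt * (-gamma * x * ((theta_hat + beta(x)) * x + u))
    x += dt * x_dot

estimate = theta_hat + beta(x)
print(f"parameter estimate: {estimate:.4f} (true value {theta_true})")
```

Here, unlike in the tracking-error-driven update above, the estimate itself converges to the true parameter at a rate prescribed by gamma, which is what makes the modular design easier to tune.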
To enable a real-time implementation of adaptive controllers, their complexity must be limited as much as possible. As a solution, the operational flight envelope is divided into multiple regions, with a locally valid aircraft model in each region. In this way the estimator only has to update a few local models at each time step, which reduces the computational load of the algorithm. Another advantage of using multiple local models is that information from models that are not updated at a given time step is retained. In other words, the estimator has memory capabilities. B-spline networks, selected for their excellent numerical properties, are used to provide smooth transitions between the local models in the different regions of the flight envelope.
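The locality and memory property can be sketched with a one-dimensional network of degree-one B-splines (triangular basis functions): training data confined to one region only updates the parameters of the local models whose basis functions are active there, while all other parameters are retained unchanged. The knot spacing, target function, and learning rate below are illustrative assumptions.

```python
import numpy as np

knots = np.linspace(0.0, 1.0, 5)           # one local model per knot
width = knots[1] - knots[0]

def basis(x):
    """Degree-1 B-spline (hat) activations; only the neighbours of x are nonzero."""
    return np.maximum(0.0, 1.0 - np.abs(x - knots) / width)

theta = np.zeros(len(knots))               # local model parameters

def predict(x):
    return basis(x) @ theta

# Train only in the region [0, 0.25] on the target f(x) = 2x (LMS updates).
rng = np.random.default_rng(0)
eta = 0.2
for _ in range(3000):
    x = rng.uniform(0.0, 0.25)
    e = 2.0 * x - predict(x)
    theta += eta * basis(x) * e            # touches only the two active models

print("prediction at 0.125:", predict(0.125))   # target is 2 * 0.125 = 0.25
print("untouched parameters:", theta[2:])       # unvisited regions keep theta = 0
```

Because the basis functions have compact support, each update involves only a couple of parameters (low computational load), and the parameters of regions that are never visited stay exactly as they were (memory).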
The adaptive backstepping flight control systems developed in this thesis have been applied to a high-fidelity dynamic F-16 model and evaluated in numerical simulations focused on various control problems. The adaptive flight controllers have been compared with the standard F-16 control system, which is based on classical control techniques, and with a non-adaptive NDI design. The performance has been compared in simulation scenarios at various flight conditions in which the aircraft model is suddenly confronted with an actuator failure, longitudinal center-of-gravity shifts and changes in the aerodynamic coefficients. All numerical simulations can be run in real time without problems on a standard desktop computer. The results of the numerical simulations show that the various adaptive controllers provide a significant performance improvement over an NDI-based control system for the simulated damage cases. The modular adaptive backstepping design with I&I estimator gives the best performance and is the easiest to tune of all adaptive flight control systems investigated. Furthermore, the controller with I&I estimator has the strongest stability and convergence properties. Compared with the standard adaptive backstepping controllers, the design complexity and the required computational power are somewhat higher, but the controller and estimator can be designed and tuned independently of each other. Based on the research conducted for this thesis, it can be concluded that an RFC system based on the modular adaptive backstepping design with I&I estimator has great potential, since it possesses all the characteristics stated in the objectives.
It is recommended that additional research be conducted into the performance of the RFC system based on the modular adaptive backstepping design with I&I estimator in other simulation scenarios. The evaluation of the adaptive flight control systems in this thesis is limited to simulation scenarios with actuator failures, symmetric center-of-gravity shifts and uncertainties in the aerodynamic coefficients. The research would be of greater value if simulations with asymmetric disturbances, such as partial wing loss, had also been performed. A separate study is then first needed to obtain the required realistic aerodynamic data for the F-16 model. It remains an open problem to develop an adaptive flight envelope protection system that can estimate the reduced flight envelope of the damaged aircraft and communicate it to the controller, the pilot and the guidance system. Finally, it is important to evaluate and validate the proposed RFC system with test pilots. The pilot workload and the handling qualities after a damage case with the RFC system should be compared with those of the standard controller. At the same time, a study can be conducted into the interaction between the reactions of the pilot and the actions of the adaptive element of the control system when damage or an actuator failure suddenly occurs.
Acknowledgements
This thesis is the result of four years of research within the Aerospace Software and
Technology Institute (ASTI) at the Delft University of Technology. During this period,
many people contributed to the realization of this work. I am very grateful to all of these
people, but I would like to mention some of them in particular.
First of all, I would like to thank my supervisor Dr. Ping Chu, my colleague Eddy van
Oort and my promotor Prof. Bob Mulder.
Dr. Ping Chu convinced me to pursue a Ph.D. degree and I am indebted to him for his enthusiastic scientific support that has kept me motivated in these past years. Moreover, I
always enjoyed our social discussions on practically anything. I want to thank Eddy van
Oort for his cooperation and the many inspiring discussions we have had. Eddy started
his related Ph.D. research a few months after me: The modular adaptive backstepping
flight control designs with least-squares identifier, used for comparison in this thesis,
were mainly designed by him. I will always have many fond memories of the trips we
made to conference meetings around the world. I am very grateful to Prof. Bob Mulder
for his scientic support, his expert advice and for being my promotor. Thanks to Prof.
Bob Mulder's extensive knowledge and experience in the field of aerospace control and
simulation, he could always provide me with a fresh perspective on my work.
This research would not have been possible without the efforts of Prof. Lt. Gen. (ret.)
Ben Droste, former commander of the Royal Netherlands Air Force and former dean of
the faculty of Aerospace Engineering, and the support of the National Aerospace Labora-
tory (NLR). I would like to thank the people at the NLR and especially Jan Breeman for
their scientific input and support. I am also indebted to my thesis committee for taking
the time to read this book and making the (long) trip to The Netherlands.
I would like to thank all of my colleagues at ASTI, in particular Erikjan van Kampen,
Elwin de Weerdt, Meine Oosten and Vera van Bragt. I am also grateful to the people at
the Control and Simulation Division of the Delft University of Technology, especially
to Thomas Lombaerts and Bertine Markus for their assistance with the administrative
aspects of the thesis.
I would like to express my gratitude to the people at Lockheed Martin and the Royal
Netherlands Air Force, as well as to the many reviewers that read the journal papers containing parts of this research, for providing me with valuable scientific input and practical
expertise.
Last but certainly not least, I am truly grateful to my family, especially my parents, my
brother Rutger and my girlfriend Rianne for their love and continuous support.
Rotterdam, Lars Sonneveldt
May 2010
Curriculum Vitae
Lars Sonneveldt was born in Rotterdam, The Netherlands on July 29, 1982. From 1994
to 2000 he attended the Emmaus College in Rotterdam, obtaining the Gymnasium certificate.
In 2000 he started his studies at the Delft University of Technology, Faculty of Aerospace
Engineering. In 2004 he completed an internship at the Command and Control depart-
ment of TNO-FEL in The Hague and obtained his B.Sc. degree. After that, he enrolled
with the Control and Simulation Division for his master's program, specializing in flight control problems. In June 2006 he received his M.Sc. degree for his study on the suitability of new nonlinear adaptive control techniques for flight control design.
In 2006 he started as a Ph.D. student at the Delft University of Technology within the
Aerospace Software and Technology Institute (ASTI). His Ph.D. research was conducted
in cooperation with the National Aerospace Laboratory (NLR) in Amsterdam and un-
der the supervision of the Control and Simulation Division at the Faculty of Aerospace
Engineering.