Daniel Wiese
dpwiese@mit.edu
Wednesday 18-Feb-2015
Last time:
Identification of multiple parameters in first order plant
Error model 1 and 3
Determining update law using Lyapunov functions
Adaptive control
Today:
Quick overview of error models 1 and 3
Finish adaptive control of a first-order plant
Using tuning gain in adaptive control
Stability
Error Model 1:
\tilde{\theta}: parameter error, with output error e = \tilde{\theta}^\top u

Adaptive law: \dot{\tilde{\theta}} = -e u

Lyapunov function: V(\tilde{\theta}) = \frac{1}{2} \tilde{\theta}^\top \tilde{\theta}

\dot{V} = \tilde{\theta}^\top \dot{\tilde{\theta}} = -\tilde{\theta}^\top u\, e = -e^2 \le 0

\Rightarrow \tilde{\theta}(t) is bounded for all t \ge t_0
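The boundedness claim can be checked numerically. The sketch below (the regressor u(t), step size, and initial parameter error are illustrative choices, not values from the lecture) integrates Error Model 1 with forward Euler and confirms the parameter-error norm never grows.

```python
import numpy as np

# Error Model 1: e = theta_tilde^T u with adaptive law theta_tilde_dot = -e u.
# Minimal forward-Euler sketch; u(t), dt, and theta_tilde(0) are illustrative.
dt, T = 1e-3, 20.0
theta_tilde = np.array([1.0, -2.0])                # initial parameter error
norms = []
for tk in np.arange(0.0, T, dt):
    u = np.array([np.sin(tk), np.cos(2.0 * tk)])   # bounded input signal
    e = theta_tilde @ u                            # output error
    theta_tilde = theta_tilde + dt * (-e * u)      # adaptive law
    norms.append(np.linalg.norm(theta_tilde))

# V = (1/2)||theta_tilde||^2 is nonincreasing, so the parameter-error norm
# cannot grow above its first recorded value.
print(max(norms) <= norms[0] + 1e-9)    # True
```

Each Euler step multiplies \tilde{\theta} by (I - dt\, u u^\top), a contraction for this step size, so the discrete norm is nonincreasing just as the Lyapunov argument predicts.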
Error Model 3:
\tilde{\theta}: parameter error, with error dynamics \dot{e} = -a e + \tilde{\theta}^\top u

Adaptive law: \dot{\tilde{\theta}} = -e u

Lyapunov function: V(e, \tilde{\theta}) = \frac{1}{2}\left(e^2 + \tilde{\theta}^\top \tilde{\theta}\right)

\dot{V} = e \dot{e} + \tilde{\theta}^\top \dot{\tilde{\theta}}
        = -a e^2 + e\, \tilde{\theta}^\top u + \tilde{\theta}^\top \dot{\tilde{\theta}}
        = -a e^2 + \tilde{\theta}^\top\left(e u + \dot{\tilde{\theta}}\right)
        = -a e^2 \le 0

\Rightarrow e(t) and \tilde{\theta}(t) are bounded for all t \ge t_0
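Error Model 3 can be simulated the same way. In this sketch the plant pole a, the regressor u(t), the step size, and the initial conditions are all illustrative choices, not values from the lecture; the check is that the Lyapunov function stays bounded.

```python
import numpy as np

# Error Model 3: e_dot = -a e + theta_tilde^T u, theta_tilde_dot = -e u.
# Forward-Euler sketch with illustrative a, u(t), dt, initial conditions.
a, dt, T = 1.0, 1e-3, 30.0
e = 1.0
theta_tilde = np.array([2.0, -1.0])
V0 = 0.5 * (e**2 + theta_tilde @ theta_tilde)     # V at t = 0
V_hist = []
for tk in np.arange(0.0, T, dt):
    u = np.array([np.sin(tk), 1.0])               # bounded regressor
    e_dot = -a * e + theta_tilde @ u
    theta_tilde = theta_tilde + dt * (-e * u)     # adaptive law
    e = e + dt * e_dot
    V_hist.append(0.5 * (e**2 + theta_tilde @ theta_tilde))

# V_dot = -a e^2 <= 0, so V (and hence e and theta_tilde) stays bounded.
# max(V_hist) should not exceed V0 by more than the discretization error.
print(max(V_hist), V0)
```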
Stability using Error Models
First-order plant: a_p unknown; k_p unknown, but with known sign.
Certainty Equivalence Principle
Replace \theta^* and k^* by their estimates \theta(t) and k(t) and determine stable update laws.

Plant, with the control law substituted in and after some algebra:

\dot{x}_p = a_m x_p + k_p \tilde{\theta}(t) x_p + k_m r + k_p \tilde{k}(t) r

Reference model:

\dot{x}_m = a_m x_m + k_m r

Define the tracking error as e = x_p - x_m. Error model 3:

\dot{e} = a_m e + k_p \tilde{\theta}(t) x_p + k_p \tilde{k}(t) r
        = a_m e + k_p \tilde{\theta}^\top(t) \omega

where (reusing \tilde{\theta} for the stacked parameter-error vector)

\omega = \begin{bmatrix} x_p \\ r \end{bmatrix}, \qquad \tilde{\theta} = \begin{bmatrix} \tilde{\theta}(t) \\ \tilde{k}(t) \end{bmatrix}
Error model: \dot{e} = a_m e + k_p \tilde{\theta}^\top(t) \omega

Lyapunov function: V = \frac{1}{2}\left(e^2 + |k_p|\, \tilde{\theta}^\top \tilde{\theta}\right)

Take the time derivative of V:

\dot{V} = e \dot{e} + |k_p|\, \tilde{\theta}^\top \dot{\tilde{\theta}}
        = a_m e^2 + k_p e \tilde{\theta}^\top \omega + |k_p|\, \tilde{\theta}^\top \dot{\tilde{\theta}}
        = a_m e^2 + \tilde{\theta}^\top\left(k_p e \omega + |k_p| \dot{\tilde{\theta}}\right)

Propose the update law: \dot{\tilde{\theta}} = -\mathrm{sign}(k_p)\, e\, \omega, which gives

\dot{V} = a_m e^2 \le 0

\Rightarrow e(t) and \tilde{\theta}(t) are bounded for all t \ge t_0
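Putting the pieces together, the closed loop can be simulated end to end. This is a minimal sketch: the plant and model parameters follow the later simulation example (a_m = -1, k_m = 1, a_p = 1, k_p = 2), while the reference r(t) and step size are assumptions of mine.

```python
import numpy as np

# Closed-loop first-order MRAC: plant x_p_dot = a_p x_p + k_p u, control
# u = theta(t) x_p + k(t) r, and the update laws above (gamma = 1).
a_m, k_m, a_p, k_p = -1.0, 1.0, 1.0, 2.0
dt, T = 1e-3, 40.0
x_p = x_m = 0.0
theta = k = 0.0
for tk in np.arange(0.0, T, dt):
    r = np.sin(0.5 * tk) + 1.0            # reference input (illustrative)
    u = theta * x_p + k * r               # certainty-equivalence control
    e = x_p - x_m                         # tracking error
    theta += dt * (-np.sign(k_p) * e * x_p)
    k += dt * (-np.sign(k_p) * e * r)
    x_p += dt * (a_p * x_p + k_p * u)
    x_m += dt * (a_m * x_m + k_m * r)

# The matched parameters are theta* = (a_m - a_p)/k_p = -1 and
# k* = k_m/k_p = 0.5; theta(t) and k(t) drift toward them while the
# tracking error shrinks.
print(x_p - x_m, theta, k)
```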
Adaptive Gain γ
The same plant, reference model, and control law as before give the same error model:

\dot{e} = a_m e + k_p \tilde{\theta}(t) x_p + k_p \tilde{k}(t) r

Now introduce a gain \gamma > 0 in the update laws:

\dot{\theta}(t) = -\gamma\, \mathrm{sign}(k_p)\, e\, x_p
\dot{k}(t) = -\gamma\, \mathrm{sign}(k_p)\, e\, r

Choose the following candidate Lyapunov function and differentiate:

V = \frac{1}{2}\left(e^2 + \frac{|k_p|}{\gamma} \tilde{\theta}^\top \tilde{\theta}\right)

\dot{V} = e \dot{e} + \frac{|k_p|}{\gamma} \tilde{\theta}^\top \dot{\tilde{\theta}}
        = a_m e^2 + \tilde{\theta}^\top\left(k_p e \omega + \frac{|k_p|}{\gamma} \dot{\tilde{\theta}}\right)
        = a_m e^2 \le 0
Adaptive Gain Example
Simulation Parameters: am = −1, km = 1, ap = 1, kp = 2
[Figure: simulation results for γ = 1, γ = 10, and γ = 100. Top row: states x_p and x_m versus time [s]; bottom row: parameters θ and k versus time [s]. Larger γ speeds up adaptation but produces more oscillatory transients.]
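The qualitative effect of γ can be reproduced with a short simulation. This is a sketch only: the step size, reference input, and initial conditions are assumptions, not the values behind the plots above.

```python
import numpy as np

# Effect of the adaptation gain gamma on the first-order MRAC example
# (a_m = -1, k_m = 1, a_p = 1, k_p = 2), integrated with forward Euler.
def run(gamma, dt=2e-4, T=30.0):
    a_m, k_m, a_p, k_p = -1.0, 1.0, 1.0, 2.0
    x_p = x_m = 0.0
    theta = k = 0.0
    err = []
    for _ in np.arange(0.0, T, dt):
        r = 1.0                            # unit-step reference (assumed)
        u = theta * x_p + k * r
        e = x_p - x_m
        theta += dt * (-gamma * np.sign(k_p) * e * x_p)
        k += dt * (-gamma * np.sign(k_p) * e * r)
        x_p += dt * (a_p * x_p + k_p * u)
        x_m += dt * (a_m * x_m + k_m * r)
        err.append(abs(e))
    return err

results = {g: run(g) for g in (1.0, 10.0, 100.0)}
for g, err in results.items():
    # Larger gamma adapts faster; very large gamma makes the parameter
    # transients more oscillatory, as in the plots above.
    print(g, max(err), err[-1])
```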
There are ways to reduce the oscillations, which we will see later in the
course.
We will look at this from a quantitative view in coming lectures as well
For now, we will go through stability in a little more depth.
\dot{x}(t) = f(x(t), t), \qquad x(t_0) = x_0 \tag{1}

Definition: autonomous (pg 45). If the RHS of (1) does not depend on t, the equation is called autonomous.
Both linear and nonlinear systems can have multiple equilibrium points, but only a nonlinear system can have multiple isolated equilibrium points. A linear system has either a single equilibrium point or an infinity of non-isolated equilibrium points, e.g. a mass:

\begin{bmatrix} \dot{x} \\ \ddot{x} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ \dot{x} \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix} f
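The mass example illustrates the non-isolated case concretely: with f = 0, every state with zero velocity is an equilibrium. A quick numerical check (the sampled positions are illustrative):

```python
import numpy as np

# Mass (double-integrator) dynamics x_dot = A x + B f.  With f = 0, every
# state of the form [position, 0] satisfies A x = 0, so the equilibria form
# a continuum and none of them is isolated.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

for position in (-2.0, 0.0, 3.5):         # illustrative positions
    x_eq = np.array([position, 0.0])
    assert np.allclose(A @ x_eq, np.zeros(2))

print("every zero-velocity state is an equilibrium")
```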
Definition 2.1: stable (pg 51). The equilibrium state x_{eq} of (1) is said to be stable if for every \epsilon > 0 and t_0 \ge 0, there exists a \delta(\epsilon, t_0) > 0 such that \|x_0\| < \delta implies that \|x(t; x_0, t_0)\| < \epsilon \;\; \forall t \ge t_0.

Definition 2.2: attractive (pg 51). The equilibrium state x_{eq} of (1) is said to be attractive if for some \rho > 0 and every \eta > 0 and t_0 > 0, there exists a number T(\eta, x_0, t_0) such that \|x_0\| < \rho implies that \|x(t; x_0, t_0)\| < \eta \;\; \forall t \ge t_0 + T.

Definition 2.4: uniformly stable (pg 52). The equilibrium state x_{eq} of (1) is said to be uniformly stable if in Definition 2.1 δ is independent of the initial time.
Consider the system

\ddot{X} = -X + (X^2 - 1)\dot{X}

which we write as

\begin{bmatrix} \dot{X}_1 \\ \dot{X}_2 \end{bmatrix} = \begin{bmatrix} X_2 \\ -X_1 + (X_1^2 - 1) X_2 \end{bmatrix}

where X_1 = X_2 = 0 is an equilibrium point. The nonlinear system is represented as \dot{X} = f(X). We want to linearize about the origin to get

\dot{x} = \left.\frac{\partial f}{\partial X}\right|_{eq} x = A x

With

f_1 = X_2
f_2 = -X_1 + (X_1^2 - 1) X_2

the Jacobian evaluated at the origin is

A = \begin{bmatrix} 0 & 1 \\ -1 & -1 \end{bmatrix}

whose eigenvalues \lambda = -\frac{1}{2} \pm j\frac{\sqrt{3}}{2} both have negative real part. Using Lyapunov's Indirect Method, we can conclude that the origin of the nonlinear system \ddot{X} = -X + (X^2 - 1)\dot{X} is asymptotically stable.
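The linearization step is easy to check numerically: build the Jacobian at the origin and verify its eigenvalues lie in the open left-half plane. A small sketch with numpy:

```python
import numpy as np

# Jacobian of f(X) = [X2, -X1 + (X1^2 - 1) X2] at the origin:
#   df1/dX1 = 0,             df1/dX2 = 1
#   df2/dX1 = -1 + 2 X1 X2,  df2/dX2 = X1^2 - 1
A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])

eigs = np.linalg.eigvals(A)
print(eigs)                  # -0.5 +/- j*sqrt(3)/2

# Both eigenvalues lie strictly in the left-half plane, so by Lyapunov's
# indirect method the origin of the nonlinear system is asymptotically stable.
assert np.all(eigs.real < 0)
```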
Lyapunov's Direct (2nd) Method
This gives us another tool that provides information about the stability of an equilibrium point of a nonlinear system without solving the differential equation.
Now that we have learned more about stability, we know what this means.
But what more can we say?
Asymptotic Convergence of e(t) to Zero
Given two real numbers, the notion of the "size" of these numbers is apparent.
However, given a quantity such as a vector, matrix, or time-varying signal, we may be interested in how "big" it is when compared to another vector, matrix, or time-varying signal, respectively.
A norm is a non-negative measure of the magnitude of a given quantity that satisfies three basic properties.
We will not go into the details of norms in lecture, but will post a short handout online.
Perhaps the signal is very large at one instant of time and small everywhere else, or maybe it is moderately large for all time.
Whatever the case may be, we wish to have some way to quantify these varying degrees of the "largeness" of a signal.
For the vector-valued signal x(t) the following norms are given:

L_\infty norm: \|x(t)\|_{L_\infty} = \sup_t \|x(t)\|

L_2 norm: \|x(t)\|_{L_2} = \sqrt{\int_0^\infty \|x(\tau)\|^2\, d\tau}
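To make the definitions concrete, here is a small numerical example with an illustrative signal: for x(t) = e^{-t} on t ≥ 0, the L∞ norm is 1 and the L2 norm is \sqrt{\int_0^\infty e^{-2t}\,dt} = 1/\sqrt{2}.

```python
import numpy as np

# Norms of the scalar signal x(t) = exp(-t), t >= 0 (illustrative example).
dt = 1e-4
t = np.arange(0.0, 20.0, dt)
x = np.exp(-t)

Linf = np.max(np.abs(x))            # sup_t |x(t)|
L2 = np.sqrt(np.sum(x**2) * dt)     # sqrt of the integral of x(t)^2

print(Linf)   # 1.0
print(L2)     # ~0.7071 = 1/sqrt(2)
```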
e ∈ L∞
e ∈ L2
Notice:
Replace g with e in the corollary, and that is what we need to prove!
So we need to show e ∈ L2 ∩ L∞ for the adaptive systems we have
seen so far
Note that

\int_0^t \dot{V}(\tau)\, d\tau = V(t) - V(0)

Since V is nonincreasing and positive definite, V(0) - V(t) \le V(0). This gives

-\int_0^t \dot{V}(\tau)\, d\tau \le V(0)

which is equivalent to

|a_m| \int_0^t \|e(\tau)\|^2\, d\tau \le V(0) < \infty

This simplifies to

\sqrt{\int_0^t \|e(\tau)\|^2\, d\tau} < \infty

Recognize that this is just \|e(t)\|_{L_2} < \infty; we write e \in L_2.
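The bound |a_m| \int \|e\|^2 \le V(0) can also be checked in simulation. The sketch below reuses the first-order MRAC example (a_m = -1, k_m = 1, a_p = 1, k_p = 2, γ = 1); the reference input and step size are illustrative assumptions.

```python
import numpy as np

# Check |a_m| * integral(e^2) <= V(0) on the first-order MRAC example.
a_m, k_m, a_p, k_p, gamma = -1.0, 1.0, 1.0, 2.0, 1.0
theta_star, k_star = (a_m - a_p) / k_p, k_m / k_p     # matched parameters
dt, T = 1e-3, 40.0
x_p = x_m = 0.0
theta = k = 0.0
# e(0) = 0, so V(0) contains only the parameter-error term.
V0 = 0.5 * (abs(k_p) / gamma) * ((theta - theta_star)**2 + (k - k_star)**2)
int_e2 = 0.0
for tk in np.arange(0.0, T, dt):
    r = np.sin(0.5 * tk) + 1.0
    u = theta * x_p + k * r
    e = x_p - x_m
    int_e2 += e**2 * dt                               # running integral of e^2
    theta += dt * (-gamma * np.sign(k_p) * e * x_p)
    k += dt * (-gamma * np.sign(k_p) * e * r)
    x_p += dt * (a_p * x_p + k_p * u)
    x_m += dt * (a_m * x_m + k_m * r)

# The first number should not exceed the second (up to discretization error).
print(abs(a_m) * int_e2, V0)
```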
Today we:
Reviewed and finished stability for the first-order adaptive control
problem from last lecture
Introduced the adaptive gain γ and showed qualitatively some of its
effects on the performance of the system
Covered lots of different stability definitions
Went over Lyapunov’s first and second methods for proving stability
for nonlinear systems
Defined signal norms, and discussed the membership of signals in normed
signal spaces
Gave Barbalat’s Lemma and showed how to use it to show e → 0