
# Nonlinear Systems and Control

Lecture # 1
Introduction

– p. 1/1
Nonlinear State Model

ẋ1 = f1 (t, x1 , . . . , xn , u1 , . . . , up )
ẋ2 = f2 (t, x1 , . . . , xn , u1 , . . . , up )
⋮
ẋn = fn (t, x1 , . . . , xn , u1 , . . . , up )

ẋi denotes the derivative of xi with respect to the time variable t

x1 , x2 , . . ., xn are the state variables

– p. 2/1
   
x = [x1 , x2 , . . . , xn ]ᵀ,  u = [u1 , u2 , . . . , up ]ᵀ,  f (t, x, u) = [f1 (t, x, u), f2 (t, x, u), . . . , fn (t, x, u)]ᵀ

ẋ = f (t, x, u)

– p. 3/1
ẋ = f (t, x, u)
y = h(t, x, u)

## x is the state, u is the input

y is the output (q -dimensional vector)
Special Cases:
Linear systems:

ẋ = A(t)x + B(t)u
y = C(t)x + D(t)u

Unforced state equation:

ẋ = f (t, x)

Results from ẋ = f (t, x, u) with u = γ(t, x)

– p. 4/1
Autonomous System:

ẋ = f (x)

Time-Invariant System:

ẋ = f (x, u)
y = h(x, u)

## A time-invariant state model has a time-invariance property

with respect to shifting the initial time from t0 to t0 + a,
provided the input waveform is applied from t0 + a rather
than t0

– p. 5/1
Existence and Uniqueness of Solutions
ẋ = f (t, x)

f (t, x) is piecewise continuous in t and locally Lipschitz in x over the domain of interest

f (t, x) is piecewise continuous in t on an interval J ⊂ R if for every bounded subinterval J0 ⊂ J , f is continuous in t for all t ∈ J0 , except, possibly, at a finite number of points where f may have finite-jump discontinuities

f (t, x) is locally Lipschitz in x at a point x0 if there is a neighborhood N (x0 , r) = {x ∈ Rⁿ | ‖x − x0‖ < r} where f (t, x) satisfies the Lipschitz condition

‖f (t, x) − f (t, y)‖ ≤ L‖x − y‖,  L > 0
– p. 6/1
A function f (t, x) is locally Lipschitz in x on a domain
(open and connected set) D ⊂ Rn if it is locally Lipschitz at
every point x0 ∈ D

|f (y) − f (x)| / |y − x| ≤ L

On a plot of f (x) versus x, a straight line joining any two points of f (x) cannot have a slope whose absolute value is greater than L

Any function f (x) that has infinite slope at some point is not locally Lipschitz at that point
– p. 7/1
A discontinuous function is not locally Lipschitz at the points
of discontinuity

The function f (x) = x^{1/3} is not locally Lipschitz at x = 0 since

f ′(x) = (1/3)x^{−2/3} → ∞ as x → 0

On the other hand, if f ′(x) is continuous at a point x0 then f (x) is locally Lipschitz at the same point because continuity of f ′(x) ensures that |f ′(x)| is bounded by a constant k in a neighborhood of x0 , which implies that f (x) satisfies the Lipschitz condition with L = k

## More generally, if for t ∈ J ⊂ R and x in a domain

D ⊂ Rn , f (t, x) and its partial derivatives ∂fi /∂xj are
continuous, then f (t, x) is locally Lipschitz in x on D
– p. 8/1
Lemma: Let f (t, x) be piecewise continuous in t and
locally Lipschitz in x at x0 , for all t ∈ [t0 , t1 ]. Then, there is
δ > 0 such that the state equation ẋ = f (t, x), with
x(t0 ) = x0 , has a unique solution over [t0 , t0 + δ]

Without the local Lipschitz condition, we cannot ensure uniqueness of the solution. For example, ẋ = x^{1/3} has x(t) = (2t/3)^{3/2} and x(t) ≡ 0 as two different solutions when the initial state is x(0) = 0
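The failure of uniqueness can be checked numerically; this is an illustrative Python sketch (not part of the lecture) that verifies both functions solve the same initial-value problem:

```python
# Verify that x(t) = (2t/3)^{3/2} and x(t) = 0 both solve
# xdot = x^{1/3} with x(0) = 0, so the solution is not unique.

def x_nontrivial(t):
    return (2.0 * t / 3.0) ** 1.5

h = 1e-6
for t in (0.5, 1.0, 2.0):
    # central-difference approximation of dx/dt
    dxdt = (x_nontrivial(t + h) - x_nontrivial(t - h)) / (2 * h)
    rhs = x_nontrivial(t) ** (1.0 / 3.0)  # right-hand side f(x) = x^{1/3}
    assert abs(dxdt - rhs) < 1e-5

# the trivial solution also satisfies the equation and x(0) = 0
assert x_nontrivial(0.0) == 0.0 and 0.0 ** (1.0 / 3.0) == 0.0
```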

## The lemma is a local result because it guarantees existence

and uniqueness of the solution over an interval [t0 , t0 + δ],
but this interval might not include a given interval [t0 , t1 ].
Indeed the solution may cease to exist after some time

– p. 9/1
Example:

ẋ = −x²

f (x) = −x² is locally Lipschitz for all x

x(0) = −1 ⇒ x(t) = 1/(t − 1)

x(t) → −∞ as t → 1
the solution has a finite escape time at t = 1
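The escape time can be confirmed directly from the closed-form solution; the following Python sketch uses only the formulas on this slide:

```python
# x(t) = 1/(t - 1) solves xdot = -x^2 with x(0) = -1 and escapes at t = 1

def x(t):
    return 1.0 / (t - 1.0)

assert x(0.0) == -1.0

h = 1e-6
for t in (0.0, 0.5, 0.9):
    dxdt = (x(t + h) - x(t - h)) / (2 * h)   # numerical derivative
    assert abs(dxdt - (-x(t) ** 2)) < 1e-3   # matches the right-hand side -x^2

# the solution grows without bound as t -> 1 from the left
assert abs(x(0.999)) > 999.0
```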

## In general, if f (t, x) is locally Lipschitz over a domain D

and the solution of ẋ = f (t, x) has a finite escape time te ,
then the solution x(t) must leave every compact (closed
and bounded) subset of D as t → te
– p. 10/1
Global Existence and Uniqueness

## If f (t, x) and its partial derivatives ∂fi /∂xj are continuous

for all x ∈ Rn , then f (t, x) is globally Lipschitz in x if and
only if the partial derivatives ∂fi /∂xj are globally bounded,
uniformly in t

## f (x) = −x2 is locally Lipschitz for all x but not globally

Lipschitz because f ′ (x) = −2x is not globally bounded

– p. 11/1
Lemma: Let f (t, x) be piecewise continuous in t and
globally Lipschitz in x for all t ∈ [t0 , t1 ]. Then, the state
equation ẋ = f (t, x), with x(t0 ) = x0 , has a unique
solution over [t0 , t1 ]

## The global Lipschitz condition is satisfied for linear systems

of the form
ẋ = A(t)x + g(t)
but it is a restrictive condition for general nonlinear systems

– p. 12/1
Lemma: Let f (t, x) be piecewise continuous in t and locally Lipschitz in x for all t ≥ t0 and all x in a domain D ⊂ Rⁿ. Let W be a compact subset of D , and suppose that every solution of

ẋ = f (t, x),  x(t0 ) = x0

with x0 ∈ W lies entirely in W . Then, there is a unique solution that is defined for all t ≥ t0

– p. 13/1
Example:
ẋ = −x3 = f (x)
f (x) is locally Lipschitz on R, but not globally Lipschitz
because f ′ (x) = −3x2 is not globally bounded

## If, at any instant of time, x(t) is positive, the derivative ẋ(t)

will be negative. Similarly, if x(t) is negative, the derivative
ẋ(t) will be positive

## Therefore, starting from any initial condition x(0) = a, the

solution cannot leave the compact set {x ∈ R | |x| ≤ |a|}

## Thus, the equation has a unique solution for all t ≥ 0
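The compact-set argument can be illustrated numerically; this is a sketch (step size and horizon are arbitrary choices) showing the solution never leaves {|x| ≤ |a|}:

```python
# Forward-Euler simulation of xdot = -x^3 starting at x(0) = a = 2:
# the solution stays inside the compact set {|x| <= |a|} and decays toward 0.

def simulate(a, dt=1e-3, steps=20000):
    x, peak = a, abs(a)
    for _ in range(steps):
        x += dt * (-x ** 3)
        peak = max(peak, abs(x))
    return x, peak

x_final, peak = simulate(2.0)
assert peak <= 2.0            # never leaves {|x| <= 2}
assert 0.0 < x_final < 2.0    # decays toward the origin without crossing it
```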

– p. 14/1
Equilibrium Points

## A point x = x∗ in the state space is said to be an

equilibrium point of ẋ = f (t, x) if

x(t0 ) = x∗ ⇒ x(t) ≡ x∗ , ∀ t ≥ t0

## For the autonomous system ẋ = f (x), the equilibrium

points are the real solutions of the equation

f (x) = 0

## An equilibrium point could be isolated; that is, there are no

other equilibrium points in its vicinity, or there could be a
continuum of equilibrium points

– p. 15/1
A linear system ẋ = Ax can have an isolated equilibrium
point at x = 0 (if A is nonsingular) or a continuum of
equilibrium points in the null space of A (if A is singular)

It cannot have multiple isolated equilibrium points, for if xa and xb are two equilibrium points, then by linearity any point on the line αxa + (1 − α)xb connecting xa and xb will be an equilibrium point

A nonlinear state equation can have multiple isolated equilibrium points. For example, the state equation

ẋ1 = x2 ,  ẋ2 = −a sin x1 − bx2

has equilibrium points at (x1 = nπ, x2 = 0) for n = 0, ±1, ±2, · · ·
– p. 16/1
Linearization

## A common engineering practice in analyzing a nonlinear

system is to linearize it about some nominal operating point
and analyze the resulting linear model

## What are the limitations of linearization?

Since linearization is an approximation in the
neighborhood of an operating point, it can only predict
the “local” behavior of the nonlinear system in the
vicinity of that point. It cannot predict the “nonlocal” or
“global” behavior

## There are “essentially nonlinear phenomena” that can

take place only in the presence of nonlinearity
– p. 17/1
Nonlinear Phenomena
Finite escape time

Limit cycles

Chaos

## Multiple modes of behavior

– p. 18/1
Nonlinear Systems and Control
Lecture # 2
Examples of Nonlinear Systems

– p. 1/1
Pendulum Equation

(Figure: pendulum of length l, angle θ, mass m, gravity force mg)

mlθ̈ = −mg sin θ − klθ̇

x1 = θ,  x2 = θ̇
– p. 2/1
ẋ1 = x2
ẋ2 = −(g/l) sin x1 − (k/m) x2

Equilibrium Points:

0 = x2
0 = −(g/l) sin x1 − (k/m) x2

(nπ, 0) for n = 0, ±1, ±2, . . .

The physically distinct equilibrium points are (0, 0) and (π, 0)

– p. 3/1
Pendulum without friction:

ẋ1 = x2
ẋ2 = −(g/l) sin x1

Pendulum with torque input:

ẋ1 = x2
ẋ2 = −(g/l) sin x1 − (k/m) x2 + (1/(ml²)) T

– p. 4/1
Tunnel-Diode Circuit

(Figure: (a) circuit with source E, resistor R, inductor L in series feeding capacitor C and tunnel diode in parallel; (b) diode characteristic i = h(v), i in mA, v in V)

iC = C dvC /dt,  vL = L diL /dt

x1 = vC ,  x2 = iL ,  u = E
– p. 5/1
iC + iR − iL = 0 ⇒ iC = −h(x1 ) + x2

vC − E + RiL + vL = 0 ⇒ vL = −x1 − Rx2 + u

ẋ1 = (1/C)[−h(x1 ) + x2 ]
ẋ2 = (1/L)[−x1 − Rx2 + u]
Equilibrium Points:

0 = −h(x1 ) + x2
0 = −x1 − Rx2 + u
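The equilibrium points can be found numerically as the intersections of h(x1) with the load line. This Python sketch assumes the quintic h(v) and the values E = 1.2, R = 1.5 that appear in the phase-portrait slides later in these notes:

```python
def h(v):
    # tunnel-diode characteristic from the phase-portrait slides
    return 17.76*v - 103.79*v**2 + 229.62*v**3 - 226.31*v**4 + 83.72*v**5

def g(v, E=1.2, R=1.5):
    # equilibrium condition h(x1) = (E - x1)/R written as g(x1) = 0
    return h(v) - (E - v) / R

def bisect(f, lo, hi, tol=1e-10):
    # simple bisection; assumes a sign change on [lo, hi]
    assert f(lo) * f(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# one root in each of three brackets -> three equilibrium points
roots = [bisect(g, 0.0, 0.15), bisect(g, 0.15, 0.5), bisect(g, 0.5, 1.0)]
for r, q in zip(roots, [0.063, 0.285, 0.884]):
    assert abs(r - q) < 0.01   # matches Q1, Q2, Q3 quoted later
```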

– p. 6/1
h(x1 ) = (E/R) − (1/R) x1

(Figure: the load line (E/R) − v/R intersecting i = h(v) at the three points Q1 , Q2 , Q3 )
– p. 7/1
Mass–Spring System

(Figure: mass m on a surface with displacement y, applied force F , friction force Ff , and spring force Fsp )

mÿ + Ff + Fsp = F

Sources of nonlinearity:
Nonlinear spring restoring force Fsp = g(y)

– p. 8/1
Fsp = g(y)

g(y) = k(1 − a²y²)y,  |ay| < 1 (softening spring)

g(y) = k(1 + a²y²)y (hardening spring)
Ff may have components due to static, Coulomb, and
viscous friction

## When the mass is at rest, there is a static friction force Fs

that acts parallel to the surface and is limited to ±µs mg
(0 < µs < 1). Fs takes whatever value, between its limits,
to keep the mass at rest

## Once motion has started, the resistive force Ff is modeled

as a function of the sliding velocity v = ẏ
– p. 9/1
(Figure: four friction models Ff versus v — (a) Coulomb friction; (b) Coulomb plus linear viscous friction; (c) static, Coulomb, and linear viscous friction; (d) static, Coulomb, and linear viscous friction with Stribeck effect)
– p. 10/1
Negative-Resistance Oscillator

(Figure: (a) parallel connection of C, L, and a resistive element with characteristic i = h(v); (b) sketch of h(v))

h(v) → ∞ as v → ∞, and h(v) → −∞ as v → −∞

– p. 11/1
iC + iL + i = 0

C dv/dt + (1/L) ∫₋∞ᵗ v(s) ds + h(v) = 0

Differentiating with respect to t and multiplying by L:

CL d²v/dt² + v + L h′(v) dv/dt = 0

τ = t/√(CL)

dv/dτ = √(CL) dv/dt,  d²v/dτ² = CL d²v/dt²

– p. 12/1
Denote the derivative of v with respect to τ by v̇

v̈ + εh′(v)v̇ + v = 0,  ε = √(L/C)

Special case: Van der Pol equation

h(v) = −v + (1/3)v³

v̈ − ε(1 − v²)v̇ + v = 0
State model: x1 = v, x2 = v̇

ẋ1 = x2
ẋ2 = −x1 − εh′ (x1 )x2

– p. 13/1
Another State Model: z1 = iL , z2 = vC

ż1 = (1/ε) z2
ż2 = −ε[z1 + h(z2 )]

Change of variables: z = T (x)

x1 = v = z2

x2 = dv/dτ = √(CL) dv/dt = √(L/C) [−iL − h(vC )] = ε[−z1 − h(z2 )]

T (x) = [−h(x1 ) − (1/ε)x2 , x1 ]ᵀ,  T ⁻¹(z) = [z2 , −εz1 − εh(z2 )]ᵀ
– p. 14/1

Plant: ẏp = ap yp + kp u

Reference Model: ẏm = am ym + km r

u(t) = θ1∗ r(t) + θ2∗ yp (t)

θ1∗ = km /kp and θ2∗ = (am − ap )/kp

When ap and kp are unknown, we may use

u(t) = θ1 (t) r(t) + θ2 (t) yp (t)

where θ1 (t) and θ2 (t) are adjusted on-line

– p. 15/1

θ̇1 = −γ(yp − ym )r
θ̇2 = −γ(yp − ym )yp , γ>0

## State Variables: eo = yp − ym , φ1 = θ1 − θ1∗ , φ2 = θ2 − θ2∗

ẏm = ap ym + kp (θ1∗ r + θ2∗ ym )
ẏp = ap yp + kp (θ1 r + θ2 yp )

ėo = ap eo + kp (θ1 − θ1∗ )r + kp (θ2 yp − θ2∗ ym )
   = ap eo + kp (θ1 − θ1∗ )r + kp (θ2 yp − θ2∗ ym ) + kp [θ2∗ yp − θ2∗ yp ]
   = (ap + kp θ2∗ )eo + kp (θ1 − θ1∗ )r + kp (θ2 − θ2∗ )yp
– p. 16/1
Closed-Loop System:

## ėo = am eo + kp φ1 r(t) + kp φ2 [eo + ym (t)]

φ̇1 = −γeo r(t)
φ̇2 = −γeo [eo + ym (t)]

– p. 17/1
Nonlinear Systems and Control
Lecture # 3
Second-Order Systems

– p. 1/?
ẋ1 = f1 (x1 , x2 ) = f1 (x)
ẋ2 = f2 (x1 , x2 ) = f2 (x)

## Let x(t) = (x1 (t), x2 (t)) be a solution that starts at initial

state x0 = (x10 , x20 ). The locus in the x1 –x2 plane of the
solution x(t) for all t ≥ 0 is a curve that passes through the
point x0 . This curve is called a trajectory or orbit
The x1 –x2 plane is called the state plane or phase plane
The family of all trajectories is called the phase portrait
The vector field f (x) = (f1 (x), f2 (x)) is tangent to the trajectory at point x because

dx2 /dx1 = f2 (x)/f1 (x)

– p. 2/?
Vector Field Diagram

Represent f (x) as a vector based at x; that is, assign to x the directed line segment from x to x + f (x)

(Figure: in the x1 –x2 plane, an arrow drawn from x = (1, 1) to x + f (x) = (3, 2))

Repeat at every point in a grid covering the plane

– p. 3/?
(Figure: vector field diagram over the x1 –x2 plane for the pendulum equation)

ẋ1 = x2 ,  ẋ2 = −10 sin x1

– p. 4/?
Numerical Construction of the Phase Portrait:

## Select an initial point x0 and calculate the trajectory

through it by solving

ẋ = f (x), x(0) = x0

## in forward time (with positive t) and in reverse time (with

negative t)

ẋ = −f (x), x(0) = x0

## Repeat the process interactively
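The forward/reverse construction above can be sketched with a fixed-step RK4 integrator; the pendulum field and the step size are illustrative choices, not part of the slide:

```python
import math

def f(x):
    # pendulum vector field from the earlier slide: x1' = x2, x2' = -10 sin(x1)
    return (x[1], -10.0 * math.sin(x[0]))

def rk4_step(g, x, dt):
    k1 = g(x)
    k2 = g((x[0] + 0.5*dt*k1[0], x[1] + 0.5*dt*k1[1]))
    k3 = g((x[0] + 0.5*dt*k2[0], x[1] + 0.5*dt*k2[1]))
    k4 = g((x[0] + dt*k3[0], x[1] + dt*k3[1]))
    return (x[0] + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0,
            x[1] + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0)

def trajectory(x0, dt, steps, reverse=False):
    # reverse time is handled by integrating xdot = -f(x) forward
    g = (lambda x: tuple(-v for v in f(x))) if reverse else f
    x, pts = x0, [x0]
    for _ in range(steps):
        x = rk4_step(g, x, dt)
        pts.append(x)
    return pts

x0 = (1.0, 0.0)
fwd = trajectory(x0, 1e-3, 2000)                       # forward time
back = trajectory(fwd[-1], 1e-3, 2000, reverse=True)   # reverse time retraces it
assert abs(back[-1][0] - x0[0]) < 1e-6 and abs(back[-1][1] - x0[1]) < 1e-6
```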

– p. 5/?
Qualitative Behavior of Linear Systems

x(t) = M exp(Jr t) M ⁻¹ x0

Jr = [λ1 0; 0 λ2]  or  [λ 0; 0 λ]  or  [λ 1; 0 λ]  or  [α −β; β α]

x(t) = M z(t)
ż = Jr z(t)

– p. 6/?
Case 1. Both eigenvalues are real: λ1 ≠ λ2 ≠ 0

M = [v1 , v2 ]

## v1 & v2 are the real eigenvectors associated with λ1 & λ2

ż1 = λ1 z1 , ż2 = λ2 z2

z1 (t) = z10 e^{λ1 t},  z2 (t) = z20 e^{λ2 t}

z2 = c z1^{λ2/λ1},  c = z20 /(z10 )^{λ2/λ1}

The shape of the phase portrait depends on the signs of λ1 and λ2
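The curve equation can be checked against the explicit solutions; this sketch uses illustrative values of λ1, λ2 with λ2 < λ1 < 0:

```python
import math

lam1, lam2 = -1.0, -2.0          # lam2 < lam1 < 0 (stable node)
z10, z20 = 0.5, 0.8
c = z20 / z10 ** (lam2 / lam1)   # c = z20 / z10^{lam2/lam1}

for t in (0.0, 0.3, 1.0, 2.5):
    z1 = z10 * math.exp(lam1 * t)
    z2 = z20 * math.exp(lam2 * t)
    # every point of the trajectory lies on z2 = c * z1^{lam2/lam1}
    assert abs(z2 - c * z1 ** (lam2 / lam1)) < 1e-12
```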

– p. 7/?
λ2 < λ1 < 0

Call λ2 the fast eigenvalue (v2 the fast eigenvector) and λ1 the slow eigenvalue (v1 the slow eigenvector)

The trajectory tends to the origin along the curve z2 = c z1^{λ2/λ1} with λ2 /λ1 > 1

dz2 /dz1 = c (λ2 /λ1 ) z1^{(λ2/λ1) − 1}
– p. 8/?
(Figure: stable node phase portrait in the z1 –z2 plane)

Stable Node

λ2 > λ1 > 0
Reverse arrowheads ⇒ Unstable Node – p. 9/?
(Figure: (a) stable node and (b) unstable node in the x1 –x2 plane, with slow eigenvector v1 and fast eigenvector v2 )
– p. 10/?
λ2 < 0 < λ1

e^{λ1 t} → ∞, while e^{λ2 t} → 0 as t → ∞

Call λ2 the stable eigenvalue (v2 the stable eigenvector) and λ1 the unstable eigenvalue (v1 the unstable eigenvector)

z2 = c z1^{λ2/λ1},  λ2 /λ1 < 0

– p. 11/?
(Figure: (a) z1 –z2 plane; (b) x1 –x2 plane with eigenvectors v1 and v2 )

Phase Portrait of a Saddle Point

– p. 12/?
Case 2. Complex eigenvalues: λ1,2 = α ± jβ

ż1 = αz1 − βz2 ,  ż2 = βz1 + αz2

r = √(z1² + z2²),  θ = tan⁻¹(z2 /z1 )

r(t) = r0 e^{αt} and θ(t) = θ0 + βt

α < 0 ⇒ r(t) → 0 as t → ∞
α > 0 ⇒ r(t) → ∞ as t → ∞
α = 0 ⇒ r(t) ≡ r0 ∀ t

– p. 13/?
(Figure: trajectories in the z1 –z2 plane — (a) Stable Focus, (b) Unstable Focus, (c) Center)

(Figure: corresponding trajectories in the x1 –x2 plane — (a), (b), (c))

– p. 14/?
Effect of Perturbations

Consider arbitrarily small perturbations of the parameters of A

## A node (with distinct eigenvalues), a saddle or a focus is

structurally stable because the qualitative behavior remains
the same under arbitrarily small perturbations in A

## A stable node with multiple eigenvalues could become a

stable node or a stable focus under arbitrarily small
perturbations in A

– p. 15/?
A center is not structurally stable:

[µ 1; −1 µ]

Eigenvalues = µ ± j
µ < 0 ⇒ Stable Focus
µ > 0 ⇒ Unstable Focus

– p. 16/?
Nonlinear Systems and Control
Lecture # 4
Qualitative Behavior Near
Equilibrium Points
&
Multiple Equilibria

– p. 1/?
The qualitative behavior of a nonlinear system near an
equilibrium point can take one of the patterns we have seen
with linear systems. Correspondingly the equilibrium points
are classified as stable node, unstable node, saddle, stable
focus, unstable focus, or center

## Can we determine the type of the equilibrium point of a

nonlinear system by linearization?

– p. 2/?
Let p = (p1 , p2 ) be an equilibrium point of the system

ẋ1 = f1 (x1 , x2 )
ẋ2 = f2 (x1 , x2 )

where f1 and f2 are continuously differentiable

Expand f1 and f2 in Taylor series about (p1 , p2 )

## ẋ1 = f1 (p1 , p2 ) + a11 (x1 − p1 ) + a12 (x2 − p2 ) + H.O.T.

ẋ2 = f2 (p1 , p2 ) + a21 (x1 − p1 ) + a22 (x2 − p2 ) + H.O.T.

a11 = ∂f1 /∂x1 |x=p ,  a12 = ∂f1 /∂x2 |x=p
a21 = ∂f2 /∂x1 |x=p ,  a22 = ∂f2 /∂x2 |x=p

– p. 3/?
f1 (p1 , p2 ) = f2 (p1 , p2 ) = 0

y1 = x1 − p1 y2 = x2 − p2

## ẏ1 = ẋ1 = a11 y1 + a12 y2 + H.O.T.

ẏ2 = ẋ2 = a21 y1 + a22 y2 + H.O.T.

ẏ ≈ Ay
A = [a11 a12; a21 a22] = ∂f /∂x |x=p

– p. 4/?
Eigenvalues of A      Type of equilibrium point of the nonlinear system
λ2 < λ1 < 0           Stable Node
λ2 > λ1 > 0           Unstable Node
λ2 < 0 < λ1           Saddle
α ± jβ, α < 0         Stable Focus
α ± jβ, α > 0         Unstable Focus
±jβ                   Linearization Fails

– p. 5/?
Example
ẋ1 = −x2 − µx1 (x1² + x2²)
ẋ2 = x1 − µx2 (x1² + x2²)

x = 0 is an equilibrium point

∂f /∂x = [−µ(3x1² + x2²)  −(1 + 2µx1 x2 ); (1 − 2µx1 x2 )  −µ(x1² + 3x2²)]

A = ∂f /∂x |x=0 = [0 −1; 1 0]

x1 = r cos θ and x2 = r sin θ ⇒ ṙ = −µr³ and θ̇ = 1

Stable focus when µ > 0 and Unstable focus when µ < 0
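The polar form can be verified pointwise from the chain rule; this is a sketch with arbitrary test points:

```python
import math

def polar_rates(mu, x1, x2):
    # state equations of the example
    r2 = x1**2 + x2**2
    x1dot = -x2 - mu * x1 * r2
    x2dot = x1 - mu * x2 * r2
    r = math.sqrt(r2)
    rdot = (x1 * x1dot + x2 * x2dot) / r        # from r^2 = x1^2 + x2^2
    thetadot = (x1 * x2dot - x2 * x1dot) / r2   # from theta = atan2(x2, x1)
    return r, rdot, thetadot

for mu in (0.5, -0.5):
    r, rdot, thetadot = polar_rates(mu, 0.3, -0.7)
    assert abs(rdot - (-mu * r**3)) < 1e-12   # rdot = -mu r^3
    assert abs(thetadot - 1.0) < 1e-12        # thetadot = 1
```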
– p. 6/?
For a saddle point, we can use linearization to generate the
stable and unstable trajectories

## Let the eigenvalues of the linearization be λ1 > 0 > λ2 and

the corresponding eigenvectors be v1 and v2

## The stable and unstable trajectories will be tangent to the

stable and unstable eigenvectors, respectively, as they
approach the equilibrium point p

Simulate from the initial points p ± αv1 and p ± αv2 , where α is a small positive number

– p. 7/?
Multiple Equilibria

## Example: Tunnel-diode circuit

ẋ1 = 0.5[−h(x1 ) + x2 ]
ẋ2 = 0.2(−x1 − 1.5x2 + 1.2)

(Figure: load line intersecting i = h(v) at the three equilibrium points)

Q1 = (0.063, 0.758),  Q2 = (0.285, 0.61),  Q3 = (0.884, 0.21)
– p. 8/?
∂f /∂x = [−0.5h′(x1 ) 0.5; −0.2 −0.3]

A1 = [−3.598 0.5; −0.2 −0.3],  Eigenvalues: −3.57, −0.33

A2 = [1.82 0.5; −0.2 −0.3],  Eigenvalues: 1.77, −0.25

A3 = [−1.427 0.5; −0.2 −0.3],  Eigenvalues: −1.33, −0.4

## Q1 is a stable node; Q2 is a saddle; Q3 is a stable node
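The quoted eigenvalues can be reproduced from the 2×2 characteristic polynomial; a quick Python check:

```python
import math

def eig2(a, b, c, d):
    # real eigenvalues of [[a, b], [c, d]] via trace and determinant
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)   # real for these three matrices
    return sorted(((tr - disc) / 2, (tr + disc) / 2))

A1 = (-3.598, 0.5, -0.2, -0.3)
A2 = ( 1.82,  0.5, -0.2, -0.3)
A3 = (-1.427, 0.5, -0.2, -0.3)

assert all(abs(x - y) < 5e-3 for x, y in zip(eig2(*A1), [-3.57, -0.33]))
assert all(abs(x - y) < 5e-3 for x, y in zip(eig2(*A2), [-0.25, 1.77]))
assert all(abs(x - y) < 5e-3 for x, y in zip(eig2(*A3), [-1.33, -0.4]))
```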

– p. 9/?
(Phase portrait, pplane: x′ = 0.5(−17.76x + 103.79x² − 229.62x³ + 226.31x⁴ − 83.72x⁵ + y), y′ = 0.2(−x − 1.5y + 1.2))
– p. 10/?
Hysteresis characteristics of the tunnel-diode circuit

u = E, y = vR

(Figure: (a) the i–v characteristic with Q1 , Q2 , Q3 ; (b) steady-state y = vR versus u = E showing a hysteresis loop)
– p. 11/?
Nonlinear Systems and Control
Lecture # 5
Limit Cycles

– p. 1/?
Oscillation: A system oscillates when it has a nontrivial
periodic solution

x(t + T ) = x(t), ∀ t ≥ 0

ż = [0 −β; β 0] z

z1 (t) = r0 cos(βt + θ0 ),  z2 (t) = r0 sin(βt + θ0 )

r0 = √(z1²(0) + z2²(0)),  θ0 = tan⁻¹(z2 (0)/z1 (0))

– p. 2/?
The linear oscillation is not practical because

It is not structurally stable. Infinitesimally small perturbations may change the type of the equilibrium point to a stable focus (decaying oscillation) or unstable focus (growing oscillation)

The amplitude of oscillation depends on the initial conditions

## The same problems exist with oscillation of nonlinear

systems due to a center equilibrium point (e.g., pendulum
without friction)

– p. 3/?

Limit Cycles:

## Example: Negative Resistance Oscillator

(Figure: (a) parallel C–L circuit with resistive element i = h(v); (b) the characteristic h(v), as in Lecture 2)

– p. 4/?
ẋ1 = x2
ẋ2 = −x1 − εh′ (x1 )x2

There is a unique equilibrium point at the origin

A = ∂f /∂x |x=0 = [0 1; −1 −εh′(0)]

λ² + εh′(0)λ + 1 = 0
h′ (0) < 0 ⇒ Unstable Focus or Unstable Node

– p. 5/?
Energy Analysis:

E = (1/2)C vC² + (1/2)L iL²

vC = x1 and iL = −h(x1 ) − (1/ε)x2

E = (1/2)C {x1² + [εh(x1 ) + x2 ]²}

## Ė = C{x1 ẋ1 + [εh(x1 ) + x2 ][εh′ (x1 )ẋ1 + ẋ2 ]}

= C{x1 x2 + [εh(x1 ) + x2 ][εh′ (x1 )x2 − x1 − εh′ (x1 )x2 ]}
= C[x1 x2 − εx1 h(x1 ) − x1 x2 ]
= −εCx1 h(x1 )
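The cancellation in the derivation above can be checked numerically; this sketch uses the Van der Pol choice of h from the next slide and arbitrary test points:

```python
def h(v):
    # Van der Pol resistive characteristic: h(v) = -v + v^3/3
    return -v + v**3 / 3.0

def hprime(v):
    return -1.0 + v**2

def Edot_chain_rule(x1, x2, eps=0.8, C=1.0):
    # Edot = C{x1*x1dot + [eps*h(x1) + x2][eps*h'(x1)*x1dot + x2dot]}
    x1dot = x2
    x2dot = -x1 - eps * hprime(x1) * x2
    return C * (x1 * x1dot
                + (eps * h(x1) + x2) * (eps * hprime(x1) * x1dot + x2dot))

# the chain-rule expression collapses to -eps*C*x1*h(x1)
eps, C = 0.8, 1.0
for x1, x2 in [(0.4, -1.1), (2.0, 0.5), (-0.7, 0.3)]:
    assert abs(Edot_chain_rule(x1, x2, eps, C) - (-eps * C * x1 * h(x1))) < 1e-12
```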

– p. 6/?
(Figure: sketch of Ė versus x1 ; Ė > 0 for −a < x1 < b, x1 ≠ 0)

Ė = −εCx1 h(x1 )

– p. 7/?
Example: Van der Pol Oscillator

ẋ1 = x2
ẋ2 = −x1 + ε(1 − x1²)x2

(Phase portraits of the Van der Pol oscillator in the x1 –x2 plane: (a) ε = 0.2, (b) ε = 1)
– p. 8/?
ż1 = (1/ε) z2
ż2 = −ε(z1 − z2 + (1/3)z2³)

(Phase portraits for ε = 5: (a) x1 –x2 plane, (b) z1 –z2 plane)

– p. 9/?
(Figure: (a) trajectories spiral onto the closed orbit — Stable Limit Cycle; (b) trajectories spiral away from it — Unstable Limit Cycle)

– p. 10/?
Example: Wien-Bridge Oscillator

(Figure: equivalent circuit with series R1 –C1 branch (voltage v1 ), parallel R2 –C2 branch (voltage v2 ), and controlled source g(v2 ))
– p. 11/?
State variables x1 = v1 and x2 = v2

ẋ1 = (1/(C1 R1 )) [−x1 + x2 − g(x2 )]
ẋ2 = −(1/(C2 R1 )) [−x1 + x2 − g(x2 )] − (1/(C2 R2 )) x2

There is a unique equilibrium point at x = 0

Numerical data: C1 = C2 = R1 = R2 = 1

## g(v) = 3.234v − 2.195v 3 + 0.666v 5

– p. 12/?
(Phase portrait, pplane: x′ = −x + y − (3.234y − 2.195y³ + 0.666y⁵), y′ = −(−x + y − (3.234y − 2.195y³ + 0.666y⁵)) − y)

– p. 13/?
(The same phase portrait over a larger region of the x–y plane)

– p. 14/?
Nonlinear Systems and Control
Lecture # 6
Bifurcation

– p. 1/?
Bifurcation is a change in the equilibrium points or periodic
orbits, or in their stability properties, as a parameter is
varied

Example

ẋ1 = µ − x1²
ẋ2 = −x2

## Find the equilibrium points and their types for different

values of µ

For µ > 0 there are two equilibrium points at (√µ, 0) and (−√µ, 0)

– p. 2/?

Linearization at (√µ, 0):

[−2√µ 0; 0 −1]

(√µ, 0) is a stable node

Linearization at (−√µ, 0):

[2√µ 0; 0 −1]

(−√µ, 0) is a saddle
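Since both Jacobians are diagonal, the node/saddle classification can be confirmed directly; a small sketch with an illustrative value of µ:

```python
import math

def jacobian_eigs(mu, at_plus):
    # Jacobian of (mu - x1^2, -x2) at (±sqrt(mu), 0) is diag(∓2*sqrt(mu), -1)
    s = math.sqrt(mu)
    return (-2 * s if at_plus else 2 * s, -1.0)

mu = 0.25
lp = jacobian_eigs(mu, True)    # at (+sqrt(mu), 0)
lm = jacobian_eigs(mu, False)   # at (-sqrt(mu), 0)
assert lp[0] < 0 and lp[1] < 0  # both negative: stable node
assert lm[0] > 0 and lm[1] < 0  # opposite signs: saddle
```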

– p. 3/?
ẋ1 = µ − x1²,  ẋ2 = −x2

## As µ decreases, the saddle and node approach each other,

collide at µ = 0, and disappear for µ < 0

(Phase portraits for µ > 0, µ = 0, and µ < 0)

– p. 4/?
µ is called the bifurcation parameter and µ = 0 is the
bifurcation point

Bifurcation Diagram

(Figure: equilibrium branches x1 = ±√µ plotted versus µ)

– p. 5/?
Example
ẋ1 = µx1 − x1²,  ẋ2 = −x2
Two equilibrium points at (0, 0) and (µ, 0)

The Jacobian at (0, 0) is [µ 0; 0 −1]

(0, 0) is a stable node for µ < 0 and a saddle for µ > 0

The Jacobian at (µ, 0) is [−µ 0; 0 −1]

(µ, 0) is a saddle for µ < 0 and a stable node for µ > 0

An eigenvalue crosses the origin as µ crosses zero

– p. 6/?
While the equilibrium points persist through the bifurcation
point µ = 0, (0, 0) changes from a stable node to a saddle
and (µ, 0) changes from a saddle to a stable node

(Bifurcation diagrams: (a) saddle-node bifurcation, (b) transcritical bifurcation)

## dangerous or hard safe or soft

– p. 7/?
Example
ẋ1 = µx1 − x1³,  ẋ2 = −x2
For µ < 0, there is a stable node at the origin

For µ > 0, there are three equilibrium points: a saddle at (0, 0) and stable nodes at (√µ, 0) and (−√µ, 0)

(Bifurcation diagram (c): supercritical pitchfork bifurcation)

– p. 8/?
Example
ẋ1 = µx1 + x1³,  ẋ2 = −x2
For µ < 0, there are three equilibrium points: a stable node at (0, 0) and two saddles at (±√(−µ), 0)

For µ > 0, there is a saddle at (0, 0)

(Bifurcation diagram (d): subcritical pitchfork bifurcation)

– p. 9/?
Notice the difference between supercritical and subcritical
pitchfork bifurcations

(Bifurcation diagrams: (c) supercritical pitchfork bifurcation, (d) subcritical pitchfork bifurcation)

## safe or soft dangerous or hard

– p. 10/?
Example: Tunnel diode Circuit
ẋ1 = (1/C)[−h(x1 ) + x2 ]
ẋ2 = (1/L)[−x1 − Rx2 + µ]
L
(Figure: (a) the curve x2 = h(x1 ) intersected by the load line, with tangency points A and B; (b) equilibrium values of x1 versus µ, with saddle-node bifurcations at A and B)

– p. 11/?
Example

ẋ1 = x1 (µ − x1² − x2²) − x2
ẋ2 = x2 (µ − x1² − x2²) + x1

Linearization at the origin: [µ −1; 1 µ]

## A pair of complex eigenvalues cross the imaginary axis as

µ crosses zero

– p. 12/?
ṙ = µr − r³ and θ̇ = 1

For µ > 0, there is a stable limit cycle at r = √µ

(Phase portraits: µ < 0, stable focus; µ > 0, stable limit cycle)

## Supercritical Hopf bifurcation

– p. 13/?
Example
ẋ1 = x1 [µ + (x1² + x2²) − (x1² + x2²)²] − x2
ẋ2 = x2 [µ + (x1² + x2²) − (x1² + x2²)²] + x1

Linearization at the origin: [µ −1; 1 µ]

## A pair of complex eigenvalues cross the imaginary axis as

µ crosses zero

– p. 14/?
ṙ = µr + r³ − r⁵ and θ̇ = 1

Sketch of µr + r³ − r⁵ for µ < 0 and µ > 0:

For small |µ|, the stable limit cycle is approximated by r ≈ 1, while the unstable limit cycle for µ < 0 is approximated by r ≈ √|µ|
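The two radii are the positive roots of µ + r² − r⁴ = 0, so they can be computed in closed form and checked against the approximations; a sketch with an illustrative small µ:

```python
import math

mu = -0.05   # small negative mu
# nonzero roots of mu*r + r^3 - r^5 = 0 satisfy r^2 = (1 ± sqrt(1 + 4*mu))/2
disc = math.sqrt(1.0 + 4.0 * mu)
r_stable = math.sqrt((1.0 + disc) / 2.0)    # stable limit cycle, near r = 1
r_unstable = math.sqrt((1.0 - disc) / 2.0)  # unstable limit cycle, near sqrt(|mu|)

assert abs(r_stable - 1.0) < 0.05
assert abs(r_unstable - math.sqrt(abs(mu))) < 0.01
# both radii make the right-hand side of rdot vanish
for r in (r_stable, r_unstable):
    assert abs(mu * r + r**3 - r**5) < 1e-12
```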

– p. 15/?
As µ increases from negative to positive values, the stable
focus at the origin merges with the unstable limit cycle and
bifurcates into unstable focus

## Subcritical Hopf bifurcation

(Bifurcation diagrams: (e) supercritical Hopf bifurcation, (f) subcritical Hopf bifurcation)

## safe or soft dangerous or hard

– p. 16/?
All six types of bifurcation occur in the vicinity of an
equilibrium point. They are called local bifurcations

## Example of Global Bifurcation

ẋ1 = x2
ẋ2 = µx2 + x1 − x1² + x1 x2

## There are two equilibrium points at (0, 0) and (1, 0). By

linearization, we can see that (0, 0) is always a saddle,
while (1, 0) is an unstable focus for −1 < µ < 1

We limit the analysis to the range −1 < µ < 1

– p. 17/?
(Phase portraits for µ = −0.95, µ = −0.88, µ = −0.8645, and µ = −0.8)

– p. 18/?
Nonlinear Systems and Control
Lecture # 7
Stability of Equilibrium Points
Basic Concepts & Linearization

– p. 1/?
ẋ = f (x)

## For convenience, we state all definitions and theorems for

the case when the equilibrium point is at the origin of Rn ;
that is, x̄ = 0. No loss of generality

y = x − x̄

ẏ = ẋ = f (x) = f (y + x̄) ≜ g(y), where g(0) = 0

– p. 2/?
Definition: The equilibrium point x = 0 of ẋ = f (x) is

stable if for each ε > 0 there is δ > 0 (dependent on ε) such that

‖x(0)‖ < δ ⇒ ‖x(t)‖ < ε, ∀ t ≥ 0

unstable if it is not stable

asymptotically stable if it is stable and δ can be chosen such that

‖x(0)‖ < δ ⇒ lim_{t→∞} x(t) = 0

– p. 3/?
First-Order Systems (n = 1)

## The behavior of x(t) in the neighborhood of the origin can

be determined by examining the sign of f (x)

## The ε–δ requirement for stability is violated if xf (x) > 0 on

either side of the origin

(Sketches of f (x) with xf (x) > 0 on one side of the origin: Unstable in each case)

– p. 4/?
The origin is stable if and only if xf (x) ≤ 0 in some
neighborhood of the origin

(Sketches of f (x) with xf (x) ≤ 0 near the origin: Stable in each case)

– p. 5/?
The origin is asymptotically stable if and only if xf (x) < 0
in some neighborhood of the origin

(Sketches: (a) xf (x) < 0 on (−a, b), x ≠ 0: Asymptotically Stable; (b) xf (x) < 0 for all x ≠ 0: Globally Asymptotically Stable)

– p. 6/?
Definition: Let the origin be an asymptotically stable
equilibrium point of the system ẋ = f (x), where f is a
locally Lipschitz function defined over a domain D ⊂ Rn
( 0 ∈ D)
The region of attraction (also called region of
asymptotic stability, domain of attraction, or basin) is the
set of all points x0 in D such that the solution of

ẋ = f (x), x(0) = x0

## is defined for all t ≥ 0 and converges to the origin as t

tends to infinity

## The origin is said to be globally asymptotically stable if

the region of attraction is the whole space Rn

– p. 7/?
Second-Order Systems (n = 2)

Type of equilibrium point    Stability Property
Center                       Stable
Stable Node                  Asymptotically Stable
Stable Focus                 Asymptotically Stable
Unstable Node                Unstable
Unstable Focus               Unstable

– p. 8/?
Example: Tunnel Diode Circuit
(Phase portrait, pplane: x′ = 0.5(−17.76x + 103.79x² − 229.62x³ + 226.31x⁴ − 83.72x⁵ + y), y′ = 0.2(−x − 1.5y + 1.2))
– p. 9/?
Example: Pendulum Without Friction
(Phase portrait, pplane: x′ = y, y′ = −10 sin x)
– p. 10/?
Example: Pendulum With Friction
(Phase portrait, pplane: x′ = y, y′ = −10 sin x − y)
x
– p. 11/?
Linear Time-Invariant Systems

ẋ = Ax

x(t) = exp(At)x(0)
P −1 AP = J = block diag[J1 , J2 , . . . , Jr ]
 
Ji is an m × m Jordan block: λi on the diagonal, 1 on the superdiagonal, and 0 elsewhere

– p. 12/?
exp(At) = P exp(Jt)P ⁻¹ = Σ_{i=1}^{r} Σ_{k=1}^{mi} t^{k−1} exp(λi t) Rik

## Re[λi ] > 0 for some i ⇒ Unstable

Re[λi ] ≤ 0 ∀ i & mi > 1 for Re[λi ] = 0 ⇒ Unstable
Re[λi ] ≤ 0 ∀ i & mi = 1 for Re[λi ] = 0 ⇒ Stable
If an n × n matrix A has a repeated eigenvalue λi of
algebraic multiplicity qi , then the Jordan blocks of λi have
order one if and only if rank(A − λi I) = n − qi

– p. 13/?
Theorem: The equilibrium point x = 0 of ẋ = Ax is stable if
and only if all eigenvalues of A satisfy Re[λi ] ≤ 0 and for
every eigenvalue with Re[λi ] = 0 and algebraic multiplicity
qi ≥ 2, rank(A − λi I) = n − qi , where n is the dimension
of x. The equilibrium point x = 0 is globally asymptotically
stable if and only if all eigenvalues of A satisfy Re[λi ] < 0

A matrix A satisfying Re[λi ] < 0 for all i is called a Hurwitz matrix

## When the origin of a linear system is asymptotically stable,

its solution satisfies the inequality

‖x(t)‖ ≤ k‖x(0)‖e^{−λt},  ∀ t ≥ 0,  k ≥ 1, λ > 0
– p. 14/?
Exponential Stability

Definition: The equilibrium point x = 0 of ẋ = f (x) is said to be exponentially stable if

‖x(t)‖ ≤ k‖x(0)‖e^{−λt},  ∀ t ≥ 0

for some k ≥ 1, λ > 0, and all sufficiently small ‖x(0)‖

It is said to be globally exponentially stable if the inequality is satisfied for any initial state x(0)

## Exponential Stability ⇒ Asymptotic Stability

– p. 15/?
Example
ẋ = −x3
The origin is asymptotically stable

x(t) = x(0)/√(1 + 2t x²(0))

x(t) does not satisfy |x(t)| ≤ ke^{−λt}|x(0)| because

|x(t)| ≤ ke^{−λt}|x(0)| ⇒ e^{2λt}/(1 + 2tx²(0)) ≤ k²

Impossible because lim_{t→∞} e^{2λt}/(1 + 2tx²(0)) = ∞

– p. 16/?
Linearization
ẋ = f (x), f (0) = 0
f is continuously differentiable over D = {kxk < r}

J(x) = ∂f /∂x (x)

h(σ) = f (σx) for 0 ≤ σ ≤ 1

h′(σ) = J(σx)x

h(1) − h(0) = ∫₀¹ h′(σ) dσ,  h(0) = f (0) = 0

f (x) = [∫₀¹ J(σx) dσ] x

– p. 17/?
f (x) = [∫₀¹ J(σx) dσ] x

Set A = J(0) and add and subtract Ax:

f (x) = [A + G(x)]x,  where G(x) = ∫₀¹ [J(σx) − J(0)] dσ

G(x) → 0 as x → 0
This suggests that in a small neighborhood of the origin we
can approximate the nonlinear system ẋ = f (x) by its
linearization about the origin ẋ = Ax

– p. 18/?
Theorem:
The origin is exponentially stable if and only if
Re[λi ] < 0 for all eigenvalues of A

## Linearization fails when Re[λi ] ≤ 0 for all i, with

Re[λi ] = 0 for some i
Example
ẋ = ax3

A = ∂f /∂x |x=0 = 3ax² |x=0 = 0

## Stable if a = 0; Asymp stable if a < 0; Unstable if a > 0

When a < 0, the origin is not exponentially stable

– p. 19/?
Nonlinear Systems and Control
Lecture # 8
Lyapunov Stability

– p. 1/1
Let V (x) be a continuously differentiable function defined in
a domain D ⊂ Rn ; 0 ∈ D . The derivative of V along the
trajectories of ẋ = f (x) is
V̇ (x) = Σ_{i=1}^{n} (∂V /∂xi ) ẋi = Σ_{i=1}^{n} (∂V /∂xi ) fi (x)
      = [∂V /∂x1 , ∂V /∂x2 , . . . , ∂V /∂xn ] [f1 (x), f2 (x), . . . , fn (x)]ᵀ
      = (∂V /∂x) f (x)

– p. 2/1
If φ(t; x) is the solution of ẋ = f (x) that starts at initial state x at time t = 0, then

V̇ (x) = (d/dt) V (φ(t; x)) |_{t=0}

If V̇ (x) is negative, V will decrease along the solution of ẋ = f (x)

If V̇ (x) is positive, V will increase along the solution of ẋ = f (x)

– p. 3/1
Lyapunov’s Theorem:
If there is V (x) such that

V (0) = 0 and V (x) > 0, ∀ x ∈ D/{0}

V̇ (x) ≤ 0, ∀ x ∈ D

then the origin is stable

Moreover, if

V̇ (x) < 0, ∀ x ∈ D/{0}

then the origin is asymptotically stable

– p. 4/1
Furthermore, if V (x) > 0, ∀ x ≠ 0,

‖x‖ → ∞ ⇒ V (x) → ∞

and V̇ (x) < 0, ∀ x ≠ 0, then the origin is globally asymptotically stable

Proof: Choose 0 < r ≤ ε such that Br = {‖x‖ ≤ r} ⊂ D

α = min_{‖x‖=r} V (x) > 0

Choose 0 < β < α and let Ωβ = {x ∈ Br | V (x) ≤ β}

There is δ > 0 such that ‖x‖ ≤ δ ⇒ V (x) < β
– p. 5/1
Solutions starting in Ωβ stay in Ωβ because V̇ (x) ≤ 0 in Ωβ

‖x(0)‖ < δ ⇒ ‖x(t)‖ < r ≤ ε, ∀ t ≥ 0

⇒ The origin is stable

Now suppose V̇ (x) < 0 ∀ x ∈ D/{0}. V (x(t)) is monotonically decreasing and V (x(t)) ≥ 0

lim_{t→∞} V (x(t)) = c ≥ 0

Suppose c > 0. By continuity of V (x), there is d > 0 such that Bd ⊂ Ωc . Then, x(t) lies outside Bd for all t ≥ 0
– p. 6/1
γ = − max_{d ≤ ‖x‖ ≤ r} V̇ (x)

V (x(t)) = V (x(0)) + ∫₀ᵗ V̇ (x(τ )) dτ ≤ V (x(0)) − γt
This inequality contradicts the assumption c > 0
⇒ The origin is asymptotically stable

The condition ‖x‖ → ∞ ⇒ V (x) → ∞ implies that the set Ωc = {x ∈ Rⁿ | V (x) ≤ c} is compact for every c > 0. This is so because for any c > 0, there is r > 0 such that V (x) > c whenever ‖x‖ > r . Thus, Ωc ⊂ Br . All solutions starting in Ωc will converge to the origin. For any point p ∈ Rⁿ, choosing c = V (p) ensures that p ∈ Ωc
⇒ The origin is globally asymptotically stable
– p. 7/1
Terminology
V (0) = 0, V (x) ≥ 0 for x ≠ 0:  Positive semidefinite
V (0) = 0, V (x) > 0 for x ≠ 0:  Positive definite
V (0) = 0, V (x) ≤ 0 for x ≠ 0:  Negative semidefinite
V (0) = 0, V (x) < 0 for x ≠ 0:  Negative definite
‖x‖ → ∞ ⇒ V (x) → ∞:  Radially unbounded

Lyapunov’s Theorem: The origin is stable if there is a

continuously differentiable positive definite function V (x) so
that V̇ (x) is negative semidefinite, and it is asymptotically
stable if V̇ (x) is negative definite. It is globally
asymptotically stable if the conditions for asymptotic
stability hold globally and V (x) is radially unbounded
– p. 8/1
A continuously differentiable function V (x) satisfying the
conditions for stability is called a Lyapunov function. The
surface V (x) = c, for some c > 0, is called a Lyapunov
surface or a level surface
(Figure: nested Lyapunov surfaces V (x) = c1 , c2 , c3 with c1 < c2 < c3 )

– p. 9/1
Why do we need the radial unboundedness condition to
show global asymptotic stability?
It ensures that Ωc = {x ∈ Rⁿ | V (x) ≤ c} is bounded for every c > 0

Without it, Ωc might not be bounded for large c

Example

V (x) = x1²/(1 + x1²) + x2²

(Figure: for large c the level curves V (x) = c are open and unbounded in the x1 direction)
– p. 10/1
Nonlinear Systems and Control
Lecture # 9
Lyapunov Stability

– p. 1/1
V (x) = xᵀP x = Σ_{i=1}^{n} Σ_{j=1}^{n} pij xi xj ,  P = P ᵀ

λmin (P )‖x‖² ≤ xᵀP x ≤ λmax (P )‖x‖²

P ≥ 0 (Positive semidefinite) if and only if λi (P ) ≥ 0 ∀i
P > 0 (Positive definite) if and only if λi (P ) > 0 ∀i
V (x) is positive definite if and only if P is positive definite
V (x) is positive semidefinite if and only if P is positive
semidefinite
P > 0 if and only if all the leading principal minors of P are
positive
– p. 2/1
Linear Systems
ẋ = Ax
V (x) = xT P x, P = PT > 0
V̇ (x) = ẋᵀP x + xᵀP ẋ = xᵀ(P A + AᵀP )x ≜ −xᵀQx

P A + AT P = −Q

## If P > 0, then A is Hurwitz

Matlab: P = lyap(A′ , Q)

– p. 3/1
Theorem A matrix A is Hurwitz if and only if for any
Q = QT > 0 there is P = P T > 0 that satisfies the
Lyapunov equation

P A + AT P = −Q

## Idea of the proof: Sufficiency follows from Lyapunov’s

theorem. Necessity is shown by verifying that
Z ∞
P = exp(AT t)Q exp(At) dt
0

## is positive definite and satisfies the Lyapunov equation
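A small numerical check of this proof idea (the Hurwitz matrix A and the truncation horizon are chosen here purely for illustration): the truncated integral P = ∫₀ᵀ exp(A^T t) Q exp(At) dt does satisfy the Lyapunov equation, and agrees with SciPy's solver. Note that `solve_continuous_lyapunov(a, q)` solves a x + x aᴴ = q, so it is called with A.T and −Q to match P A + A^T P = −Q.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2: Hurwitz (assumed example)
Q = np.eye(2)

# Truncated integral P = ∫_0^T exp(A^T t) Q exp(A t) dt; the integrand decays like e^{-2t},
# so T = 40 with a trapezoid rule on a fine grid is plenty
ts = np.linspace(0.0, 40.0, 4001)
vals = np.array([expm(A.T * t) @ Q @ expm(A * t) for t in ts])
dt = ts[1] - ts[0]
P_int = (vals[0] + vals[-1]) / 2 * dt + vals[1:-1].sum(axis=0) * dt

# SciPy solves a x + x a^H = q; pass a = A^T, q = -Q to get P A + A^T P = -Q
P_lyap = solve_continuous_lyapunov(A.T, -Q)

residual = np.abs(P_int @ A + A.T @ P_int + Q).max()   # ≈ 0
```

Both routes give the same positive definite P, confirming the necessity argument numerically.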

– p. 4/1
Linearization

ẋ = f (x) = [A + G(x)]x

G(x) → 0 as x → 0
Suppose A is Hurwitz. Choose Q = QT > 0 and solve the
Lyapunov equation P A + AT P = −Q for P . Use
V (x) = xT P x as a Lyapunov function candidate for
ẋ = f (x)

## V̇ (x) = xT P f (x) + f T (x)P x

= xT P [A + G(x)]x + xT [AT + GT (x)]P x
= xT (P A + AT P )x + 2xT P G(x)x
= −xT Qx + 2xT P G(x)x
– p. 5/1
V̇ (x) ≤ −x^T Qx + 2kP k kG(x)k kxk^2

Since G(x) → 0 as x → 0, given γ > 0 there is r > 0 such that kG(x)k < γ for 0 < kxk < r

## V̇ (x) < −[λmin (Q) − 2γkP k] kxk^2 , ∀ 0 < kxk < r

Choose
λmin (Q)
γ<
2kP k
V (x) = xT P x is a Lyapunov function for ẋ = f (x)

– p. 6/1
We can use V (x) = xT P x to estimate the region of
attraction


## {xT P x < c} ⊂ {kxk < r}

All trajectories starting in the set {xT P x < c} approach the
origin as t tends to ∞. Hence, the set {xT P x < c} is a
subset of the region of attraction (an estimate of the region
of attraction)

– p. 7/1
Example

ẋ1 = −x2
ẋ2 = x1 + (x21 − 1)x2
A = [∂f /∂x] at x = 0 = [ 0 −1 ; 1 −1 ]

has eigenvalues (−1 ± j√3)/2. Hence the origin is
asymptotically stable
Take Q = I, P A + A^T P = −I ⇒ P = [ 1.5 −0.5 ; −0.5 1 ]

λmin (P ) = 0.691
– p. 8/1
V (x) = x^T P x = 1.5x1^2 − x1 x2 + x2^2

## V̇ (x) = (3x1 − x2 )(−x2 ) + (−x1 + 2x2 )[x1 + (x1^2 − 1)x2 ]

= −(x1^2 + x2^2 ) − (x1^3 x2 − 2x1^2 x2^2 )

V̇ (x) ≤ −kxk^2 + |x1 | |x1 x2 | |x1 − 2x2 | ≤ −kxk^2 + (√5/2) kxk^4

where |x1 | ≤ kxk, |x1 x2 | ≤ (1/2) kxk^2 , |x1 − 2x2 | ≤ √5 kxk

V̇ (x) < 0 for 0 < kxk^2 < 2/√5 ≝ r^2

Take c = λmin (P ) r^2 = 0.691 × 2/√5 = 0.618
{V (x) < c} is an estimate of the region of attraction
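The numbers in this example can be reproduced directly (a minimal sketch using SciPy's Lyapunov solver; note it is called with A.T so that P A + A^T P = −I):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, -1.0], [1.0, -1.0]])
# solve_continuous_lyapunov(A.T, -Q) returns P with P A + A^T P = -Q
P = solve_continuous_lyapunov(A.T, -np.eye(2))

lam_min = np.linalg.eigvalsh(P).min()   # ≈ 0.691
r2 = 2.0 / np.sqrt(5.0)                 # r^2 bounding the region where V_dot < 0
c = lam_min * r2                        # ≈ 0.618: level set {V(x) < c} ⊂ {||x||^2 < r^2}
```

The level set {x^T P x < c} with c = 0.618 is then the estimate of the region of attraction.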
– p. 9/1
Example:
ẋ = −g(x)
g(0) = 0; xg(x) > 0, ∀ x 6= 0 and x ∈ (−a, a)
Z x
V (x) = g(y) dy
0

∂V
V̇ (x) = [−g(x)] = −g 2 (x) < 0, ∀ x ∈ (−a, a), x 6= 0
∂x
The origin is asymptotically stable

## If xg(x) > 0 for all x 6= 0, use

V (x) = (1/2) x^2 + ∫_0^x g(y) dy
– p. 10/1
V (x) = (1/2) x^2 + ∫_0^x g(y) dy

V (x) ≥ (1/2) x^2

## The origin is globally asymptotically stable

– p. 11/1
Example: Pendulum equation without friction

ẋ1 = x2
ẋ2 = − a sin x1

## V (x) = a(1 − cos x1 ) + (1/2) x2^2

V (0) = 0 and V (x) is positive definite over the domain
−2π < x1 < 2π

## Since V̇ (x) ≡ 0, the origin is stable, but not asymptotically stable

– p. 12/1
Example: Pendulum equation with friction

ẋ1 = x2
ẋ2 = − a sin x1 − bx2

V (x) = a(1 − cos x1 ) + (1/2) x2^2

V̇ (x) = a ẋ1 sin x1 + x2 ẋ2 = −b x2^2
The origin is stable

## V̇ (x) is not negative definite because V̇ (x) = 0 for x2 = 0

irrespective of the value of x1

– p. 13/1
The conditions of Lyapunov’s theorem are only sufficient.
Failure of a Lyapunov function candidate to satisfy the
conditions for stability or asymptotic stability does not mean
that the equilibrium point is not stable or asymptotically
stable. It only means that such stability property cannot be
established by using this Lyapunov function candidate

Try
V (x) = (1/2) x^T P x + a(1 − cos x1 ),  P = [ p11 p12 ; p12 p22 ]

## p11 > 0, p11 p22 − p12^2 > 0

– p. 14/1
V̇ (x) = (p11 x1 + p12 x2 + a sin x1 ) x2
+ (p12 x1 + p22 x2 ) (−a sin x1 − bx2 )
= a(1 − p22 )x2 sin x1 − ap12 x1 sin x1
+ (p11 − p12 b) x1 x2 + (p12 − p22 b) x22

## p22 = 1, p11 = bp12 ⇒ 0 < p12 < b, Take p12 = b/2

V̇ (x) = −(1/2) a b x1 sin x1 − (1/2) b x2^2

D = {x ∈ R2 | |x1 | < π}
V (x) is positive definite and V̇ (x) is negative definite over D
The origin is asymptotically stable
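A simulation sketch of the damped pendulum (a = b = 1 and the initial state are assumed values, not from the slides): along any trajectory the energy-like function V (x) = a(1 − cos x1 ) + x2²/2 never increases, and the state converges to the origin, consistent with the asymptotic stability just established.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = 1.0, 1.0
f = lambda t, x: [x[1], -a * np.sin(x[0]) - b * x[1]]   # pendulum with friction
V = lambda x1, x2: a * (1.0 - np.cos(x1)) + 0.5 * x2**2

sol = solve_ivp(f, (0.0, 40.0), [1.0, 0.0],
                rtol=1e-9, atol=1e-12, dense_output=True)
ts = np.linspace(0.0, 40.0, 2001)
x1, x2 = sol.sol(ts)
Vt = V(x1, x2)
# V̇ = -b x2² ≤ 0, so Vt should be (numerically) nonincreasing,
# and (x1, x2) should end up near the origin
```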

– p. 15/1
Nonlinear Systems and Control
Lecture # 10
The Invariance Principle

– p. 1/1
Example: Pendulum equation with friction

ẋ1 = x2
ẋ2 = − a sin x1 − bx2

1
V (x) = a(1 − cos x1 ) + x22
2
V̇ (x) = aẋ1 sin x1 + x2 ẋ2 = − bx22
The origin is stable. V̇ (x) is not negative definite because
V̇ (x) = 0 for x2 = 0 irrespective of the value of x1

## However, near the origin, the solution cannot stay

identically in the set {x2 = 0}

– p. 2/1
Definitions: Let x(t) be a solution of ẋ = f (x)

## A point p is said to be a positive limit point of x(t) if there is

a sequence {tn }, with limn→∞ tn = ∞, such that
x(tn ) → p as n → ∞

## The set of all positive limit points of x(t) is called the

positive limit set of x(t); denoted by L+

## If x(t) approaches an asymptotically stable equilibrium

point x̄, then x̄ is the only positive limit point of x(t) and L+ = {x̄}

## A stable limit cycle is the positive limit set of every solution

starting sufficiently near the limit cycle

– p. 3/1
A set M is an invariant set with respect to ẋ = f (x) if

x(0) ∈ M ⇒ x(t) ∈ M, ∀ t ∈ R

Examples:
Equilibrium points

Limit Cycles

## A set M is a positively invariant set with respect to

ẋ = f (x) if

x(0) ∈ M ⇒ x(t) ∈ M, ∀ t ≥ 0

## Example: The set Ωc = {V (x) ≤ c} with V̇ (x) ≤ 0 in Ωc

– p. 4/1
The distance from a point p to a set M is defined by

dist(p, M ) = inf kp − xk
x∈M

## x(t) approaches a set M as t approaches infinity, if for

each ε > 0 there is T > 0 such that

dist(x(t), M ) < ε, ∀ t > T
## Example: every solution x(t) starting sufficiently near a

stable limit cycle approaches the limit cycle as t → ∞

## Notice, however, that x(t) does not converge to any specific

point on the limit cycle

– p. 5/1
Lemma: If a solution x(t) of ẋ = f (x) is bounded and
belongs to D for t ≥ 0, then its positive limit set L+ is a
nonempty, compact, invariant set. Moreover, x(t)
approaches L+ as t → ∞

## LaSalle’s theorem: Let f (x) be a locally Lipschitz function

defined over a domain D ⊂ Rn and Ω ⊂ D be a compact
set that is positively invariant with respect to ẋ = f (x). Let
V (x) be a continuously differentiable function defined over
D such that V̇ (x) ≤ 0 in Ω. Let E be the set of all points in
Ω where V̇ (x) = 0, and M be the largest invariant set in E .
Then every solution starting in Ω approaches M as t → ∞

– p. 6/1
Proof:

## V (x) is continuous on the compact set Ω ⇒ V (x) ≥ b = min_{x∈Ω} V (x)

V̇ (x) ≤ 0 in Ω ⇒ V (x(t)) is nonincreasing and bounded below

⇒ lim_{t→∞} V (x(t)) = a

## x(t) ∈ Ω ⇒ x(t) is bounded ⇒ L+ exists

Moreover, L+ ⊂ Ω and x(t) approaches L+ as t → ∞
For any p ∈ L+ , there is {tn } with limn→∞ tn = ∞ such
that x(tn ) → p as n → ∞

## V (x) is continuous ⇒ V (p) = lim V (x(tn )) = a

n→∞

– p. 7/1
V (x) = a on L+ and L+ invariant ⇒ V̇ (x) = 0, ∀ x ∈ L+

L+ ⊂ M ⊂ E ⊂ Ω
x(t) approaches L+ ⇒ x(t) approaches M (as t → ∞)

– p. 8/1
Theorem: Let f (x) be a locally Lipschitz function defined
over a domain D ⊂ Rn ; 0 ∈ D . Let V (x) be a continuously
differentiable positive definite function defined over D such
that V̇ (x) ≤ 0 in D . Let S = {x ∈ D | V̇ (x) = 0}
If no solution can stay identically in S , other than the
trivial solution x(t) ≡ 0, then the origin is asymptotically
stable

## Moreover, if Γ ⊂ D is compact and positively invariant,

then it is a subset of the region of attraction

## Furthermore, if D = Rn and V (x) is radially

unbounded, then the origin is globally asymptotically
stable

– p. 9/1
Example:
ẋ1 = x2
ẋ2 = −h1 (x1 ) − h2 (x2 )
hi (0) = 0, yhi (y) > 0, for 0 < |y| < a
V (x) = ∫_0^{x1} h1 (y) dy + (1/2) x2^2

## D = {−a < x1 < a, −a < x2 < a}

V̇ (x) = h1 (x1 )x2 +x2 [−h1 (x1 )−h2 (x2 )] = −x2 h2 (x2 ) ≤ 0

V̇ (x) = 0 ⇒ x2 h2 (x2 ) = 0 ⇒ x2 = 0
S = {x ∈ D | x2 = 0}

– p. 10/1
ẋ1 = x2 , ẋ2 = −h1 (x1 ) − h2 (x2 )

## x2 (t) ≡ 0 ⇒ ẋ2 (t) ≡ 0 ⇒ h1 (x1 (t)) ≡ 0 ⇒ x1 (t) ≡ 0

The only solution that can stay identically in S is x(t) ≡ 0

## Thus, the origin is asymptotically stable

Suppose a = ∞ and ∫_0^y h1 (z) dz → ∞ as |y| → ∞

Then, D = R^2 and V (x) = ∫_0^{x1} h1 (y) dy + (1/2) x2^2 is radially
unbounded. S = {x ∈ R^2 | x2 = 0} and the only solution
that can stay identically in S is x(t) ≡ 0

## The origin is globally asymptotically stable

– p. 11/1


[Figure: two-link robot manipulator with joint coordinates q1 , q2 ]
– p. 12/1
M (q)q̈ + C(q, q̇)q̇ + D q̇ + g(q) = u

## g(q) = [∂P /∂q]^T , where P (q) is the total potential energy of the links due to gravity

– p. 13/1
Investigate the use of the (PD plus gravity compensation)
control law

u = g(q) − Kp (q − q ∗ ) − Kd q̇

## to stabilize the robot at a desired position q ∗ , where Kp and

Kd are symmetric positive definite matrices

e = q − q∗ , ė = q̇

M ë = M q̈
= −C q̇ − D q̇ − g(q) + u
= −C q̇ − D q̇ − Kp (q − q ∗ ) − Kd q̇
= −C ė − D ė − Kp e − Kd ė

– p. 14/1
M ë = −C ė − D ė − Kp e − Kd ė

V = (1/2) ė^T M (q)ė + (1/2) e^T Kp e

V̇ = ė^T M ë + (1/2) ė^T Ṁ ė + e^T Kp ė

## = −ė^T C ė − ė^T D ė − ė^T Kp e − ė^T Kd ė

+ (1/2) ė^T Ṁ ė + e^T Kp ė

= (1/2) ė^T (Ṁ − 2C)ė − ė^T (Kd + D)ė,  where Ṁ − 2C is skew-symmetric

## = −ėT (Kd + D)ė ≤ 0

– p. 15/1
(Kd + D) is positive definite

## V̇ = −ėT (Kd + D)ė = 0 ⇒ ė = 0

M ë = −C ė − D ė − Kp e − Kd ė
ė(t) ≡ 0 ⇒ ë(t) ≡ 0 ⇒ Kp e(t) ≡ 0 ⇒ e(t) ≡ 0
By LaSalle’s theorem the origin (e = 0, ė = 0) is globally
asymptotically stable
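The slides treat the general n-link case; as an illustration (all numerical values and the one-link model are assumed, not from the slides), the same PD-plus-gravity-compensation law u = g(q) − kp (q − q*) − kd q̇ applied to a one-link arm m q̈ + d q̇ + mgl sin q = u drives q to the setpoint:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, d, mgl = 1.0, 0.1, 9.8      # assumed one-link parameters
kp, kd = 16.0, 8.0             # symmetric positive "matrices" reduce to scalars here
q_star = 1.0

def closed_loop(t, x):
    q, qd = x
    u = mgl * np.sin(q) - kp * (q - q_star) - kd * qd   # PD + gravity compensation
    return [qd, (u - d * qd - mgl * np.sin(q)) / m]

sol = solve_ivp(closed_loop, (0.0, 20.0), [0.0, 0.0], rtol=1e-8, atol=1e-10)
q_end, qd_end = sol.y[:, -1]   # ≈ (q*, 0): the error system converges
```

Gravity compensation cancels g(q) exactly, so the closed-loop error dynamics are exactly the M ë equation above.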

– p. 16/1
Nonlinear Systems and Control
Lecture # 11
Exponential Stability
&
Region of Attraction

– p. 1/1
Exponential Stability:
The origin of ẋ = f (x) is exponentially stable if and only if
the linearization of f (x) at the origin is Hurwitz

## Theorem: Let f (x) be a locally Lipschitz function defined

over a domain D ⊂ Rn ; 0 ∈ D . Let V (x) be a
continuously differentiable function such that

## k1 kxk^a ≤ V (x) ≤ k2 kxk^a , V̇ (x) ≤ −k3 kxk^a

for all x ∈ D , where k1 , k2 , k3 , and a are positive
constants. Then, the origin is an exponentially stable
equilibrium point of ẋ = f (x). If the assumptions hold
globally, the origin will be globally exponentially stable
– p. 2/1
Proof: Choose c > 0 small enough that

{k1 kxk^a ≤ c} ⊂ D

V (x) ≤ c ⇒ k1 kxk^a ≤ c

Ωc = {V (x) ≤ c} ⊂ {k1 kxk^a ≤ c} ⊂ D
Ωc is compact and positively invariant; ∀ x(0) ∈ Ωc

V̇ ≤ −k3 kxk^a ≤ −(k3 /k2 ) V

dV /V ≤ −(k3 /k2 ) dt

V (x(t)) ≤ V (x(0)) e^{−(k3 /k2 )t}

– p. 3/1
kx(t)k ≤ [V (x(t))/k1 ]^{1/a}

≤ [V (x(0)) e^{−(k3 /k2 )t} /k1 ]^{1/a}

≤ [k2 kx(0)k^a e^{−(k3 /k2 )t} /k1 ]^{1/a}

= (k2 /k1 )^{1/a} e^{−γt} kx(0)k,  γ = k3 /(k2 a)

– p. 4/1
Example
ẋ1 = x2
ẋ2 = −h(x1 ) − x2

## c1 y 2 ≤ yh(y) ≤ c2 y 2 , ∀ y, c1 > 0, c2 > 0

V (x) = (1/2) x^T [ 1 1 ; 1 2 ] x + 2 ∫_0^{x1} h(y) dy

c1 x1^2 ≤ 2 ∫_0^{x1} h(y) dy ≤ c2 x1^2

## V̇ = [x1 + x2 + 2h(x1 )]x2 + [x1 + 2x2 ][−h(x1 ) − x2 ]

= −x1 h(x1 ) − x22 ≤ −c1 x21 − x22
The origin is globally exponentially stable
– p. 5/1
Region of Attraction

## Lemma: If x = 0 is an asymptotically stable equilibrium

point for ẋ = f (x), then its region of attraction RA is an
open, connected, invariant set. Moreover, the boundary of
RA is formed by trajectories

– p. 6/1
Example
ẋ1 = −x2
ẋ2 = x1 + (x21 − 1)x2

[Phase portrait in the (x1 , x2 )-plane; the region of attraction is bounded by an unstable limit cycle]

– p. 7/1
Example
ẋ1 = x2
ẋ2 = −x1 + 13 x31 − x2

[Phase portrait in the (x1 , x2 )-plane; the region of attraction is bounded by trajectories through the saddle points at (±√3, 0)]

– p. 8/1
Estimates of the Region of Attraction: Find a subset of the
region of attraction

## Warning: Let D be a domain with 0 ∈ D such that for all

x ∈ D , V (x) is positive definite and V̇ (x) is negative
definite

## Is D a subset of the region of attraction?

NO
Why?

– p. 9/1
Example: Reconsider

ẋ1 = x2
ẋ2 = −x1 + 13 x31 − x2
V (x) = (1/2) x^T [ 1 1 ; 1 2 ] x + 2 ∫_0^{x1} (y − (1/3) y^3 ) dy

= (3/2) x1^2 − (1/6) x1^4 + x1 x2 + x2^2

## V̇ (x) = −x1^2 (1 − (1/3) x1^2 ) − x2^2

D = {−√3 < x1 < √3}
Is D a subset of the region of attraction?

– p. 10/1
The simplest estimate is the bounded component of
{V (x) < c}, where c = min_{x∈∂D} V (x)

## For V (x) = xT P x, where P = P T > 0, the minimum of

V (x) over ∂D is given by

## For D = {kxk < r}: min_{kxk=r} x^T P x = λmin (P ) r^2

For D = {|b^T x| < r}: min_{|b^T x|=r} x^T P x = r^2 /(b^T P^{−1} b)

## For D = {|bi^T x| < ri , i = 1, . . . , p}:

Take c = min_{1≤i≤p} ri^2 /(bi^T P^{−1} bi ) ≤ min_{x∈∂D} x^T P x

– p. 11/1
Example (Revisited)

ẋ1 = −x2
ẋ2 = x1 + (x21 − 1)x2

## V̇ (x) = −(x21 + x22 ) − (x31 x2 − 2x21 x22 )

V̇ (x) < 0 for 0 < kxk^2 < 2/√5 ≝ r^2

Take c = λmin (P ) r^2 = 0.691 × 2/√5 = 0.618
{V (x) < c} is an estimate of the region of attraction
– p. 12/1
x1 = ρ cos θ, x2 = ρ sin θ

## V̇ = −ρ^2 + ρ^4 cos^2 θ sin θ (2 sin θ − cos θ)

≤ −ρ^2 + ρ^4 | cos^2 θ sin θ| · |2 sin θ − cos θ|
≤ −ρ^2 + ρ^4 × 0.3849 × 2.2361
≤ −ρ^2 + 0.861 ρ^4 < 0, for ρ^2 < 1/0.861

Take c = λmin (P ) r^2 = 0.691/0.861 = 0.803
Alternatively, choose c as the largest constant such that
{xT P x < c} is a subset of {V̇ (x) < 0}

– p. 13/1

## (a) Contours of V̇ (x) = 0 (dashed)

V (x) = 0.8 (dash-dot), V (x) = 2.25 (solid)
(b) comparison of the region of attraction with its estimate

– p. 14/1
If D is a domain where V (x) is positive definite and V̇ (x)
is negative definite (or V̇ (x) is negative semidefinite and no
solution can stay identically in the set V̇ (x) = 0 other than
x = 0), then according to LaSalle’s theorem any compact
positively invariant subset of D is a subset of the region of
attraction

Example

ẋ1 = x2
ẋ2 = −4(x1 + x2 ) − h(x1 + x2 )

## h(0) = 0; uh(u) ≥ 0, ∀ |u| ≤ 1

– p. 15/1
V (x) = x^T P x = x^T [ 2 1 ; 1 1 ] x = 2x1^2 + 2x1 x2 + x2^2

## V̇ (x) = (4x1 + 2x2 )ẋ1 + 2(x1 + x2 )ẋ2

= −2x1^2 − 6(x1 + x2 )^2 − 2(x1 + x2 ) h(x1 + x2 )
≤ −2x1^2 − 6(x1 + x2 )^2 = −x^T [ 8 6 ; 6 6 ] x, ∀ |x1 + x2 | ≤ 1

## V̇ (x) is negative definite in {|x1 + x2 | ≤ 1}

b^T = [1 1], c = min_{|x1 +x2 |=1} x^T P x = 1/(b^T P^{−1} b) = 1

– p. 16/1
σ = x1 + x2

(d/dt) σ^2 = 2σx2 − 8σ^2 − 2σh(σ) ≤ 2σx2 − 8σ^2 , ∀ |σ| ≤ 1

On σ = 1, (d/dt) σ^2 ≤ 2x2 − 8 ≤ 0, ∀ x2 ≤ 4

On σ = −1, (d/dt) σ^2 ≤ −2x2 − 8 ≤ 0, ∀ x2 ≥ −4
c1 = V (x)|x1 =−3,x2 =4 = 10, c2 = V (x)|x1 =3,x2 =−4 = 10

## Γ = {V (x) ≤ 10 and |x1 + x2 | ≤ 1}

is a subset of the region of attraction
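The two constants used to build Γ can be checked in a few lines (a minimal sketch reproducing the slide's numbers):

```python
import numpy as np

P = np.array([[2.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0])

# min over |b^T x| = 1 of x^T P x equals 1 / (b^T P^{-1} b)
c = 1.0 / (b @ np.linalg.inv(P) @ b)      # = 1

V = lambda x: x @ P @ x
V1 = V(np.array([-3.0, 4.0]))             # = 10, value of V at (-3, 4)
V2 = V(np.array([3.0, -4.0]))             # = 10, value of V at (3, -4)
```

So {V (x) ≤ 10} ∩ {|x1 + x2 | ≤ 1} is exactly the set Γ described above.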
– p. 17/1
[Figure: the estimate Γ bounded by V (x) = 10 and the lines x1 + x2 = ±1 through (−3, 4) and (3, −4); the level set V (x) = 1 is also shown]
– p. 18/1
Nonlinear Systems and Control
Lecture # 12
Converse Lyapunov Functions
&
Time Varying Systems

– p. 1/1
Converse Lyapunov Theorem–Exponential Stability

## Let x = 0 be an exponentially stable equilibrium point for

the system ẋ = f (x), where f is continuously differentiable
on D = {kxk < r}. Let k, λ, and r0 be positive constants
with r0 < r/k such that

kx(t)k ≤ k kx(0)k e^{−λt} , ∀ x(0) ∈ D0 , ∀ t ≥ 0

## where D0 = {kxk < r0 }. Then, there is a continuously

differentiable function V (x) that satisfies the inequalities

– p. 2/1
c1 kxk^2 ≤ V (x) ≤ c2 kxk^2

(∂V /∂x) f (x) ≤ −c3 kxk^2

k∂V /∂xk ≤ c4 kxk

## for all x ∈ D0 , with positive constants c1 , c2 , c3 , and c4

Moreover, if f is continuously differentiable for all x, globally
Lipschitz, and the origin is globally exponentially stable,
then V (x) is defined and satisfies the aforementioned
inequalities for all x ∈ Rn

– p. 3/1
Idea of the proof: Let ψ(t; x) be the solution of

ẏ = f (y), y(0) = x

Take
Z δ
V (x) = ψ T (t; x) ψ(t; x) dt, δ>0
0

– p. 4/1
Example: Consider the system ẋ = f (x) where f is
continuously differentiable in the neighborhood of the origin
and f (0) = 0. Show that the origin is exponentially stable
only if A = [∂f /∂x](0) is Hurwitz

## Because the origin of ẋ = f (x) is exponentially stable, let

V (x) be the function provided by the converse Lyapunov
theorem over the domain {kxk < r0 }. Use V (x) as a
Lyapunov function candidate for ẋ = Ax

– p. 5/1
Write f (x) = Ax + G(x)x; since f is continuously differentiable, given L > 0 there is r1 > 0 with kG(x)k ≤ L for kxk < r1

(∂V /∂x) Ax = (∂V /∂x) f (x) − (∂V /∂x) G(x)x
≤ −c3 kxk^2 + c4 L kxk^2
= −(c3 − c4 L) kxk^2

Take L < c3 /c4 , γ ≝ (c3 − c4 L) > 0 ⇒

(∂V /∂x) Ax ≤ −γ kxk^2 , ∀ kxk < min{r0 , r1 }
The origin of ẋ = Ax is exponentially stable

– p. 6/1
Converse Lyapunov Theorem–Asymptotic Stability

## Let x = 0 be an asymptotically stable equilibrium point for

ẋ = f (x), where f is locally Lipschitz on a domain
D ⊂ Rn that contains the origin. Let RA ⊂ D be the region
of attraction of x = 0. Then, there is a smooth, positive
definite function V (x) and a continuous, positive definite
function W (x), both defined for all x ∈ RA , such that

V (x) → ∞ as x → ∂RA

∂V
f (x) ≤ −W (x), ∀ x ∈ RA
∂x
and for any c > 0, {V (x) ≤ c} is a compact subset of RA
When RA = Rn , V (x) is radially unbounded
– p. 7/1
Time-varying Systems
ẋ = f (t, x)

## f (t, x) is piecewise continuous in t and locally Lipschitz in

x for all t ≥ 0 and all x ∈ D . The origin is an equilibrium
point at t = 0 if
f (t, 0) = 0, ∀ t ≥ 0

## While the solution of the autonomous system

ẋ = f (x), x(t0 ) = x0

## depends only on (t − t0 ), the solution of

ẋ = f (t, x), x(t0 ) = x0

## may depend on both t and t0

– p. 8/1
Comparison Functions
A scalar continuous function α(r), defined for r ∈ [0, a)
is said to belong to class K if it is strictly increasing and
α(0) = 0. It is said to belong to class K∞ if it is defined
for all r ≥ 0 and α(r) → ∞ as r → ∞

## A scalar continuous function β(r, s), defined for

r ∈ [0, a) and s ∈ [0, ∞) is said to belong to class KL
if, for each fixed s, the mapping β(r, s) belongs to class
K with respect to r and, for each fixed r , the mapping
β(r, s) is decreasing with respect to s and β(r, s) → 0
as s → ∞

– p. 9/1
Example
α(r) = tan−1 (r) is strictly increasing since
α′ (r) = 1/(1 + r 2 ) > 0. It belongs to class K, but not
to class K∞ since limr→∞ α(r) = π/2 < ∞

## α(r) = r c , for any positive real number c, is strictly

increasing since α′ (r) = cr c−1 > 0. Moreover,
limr→∞ α(r) = ∞; thus, it belongs to class K∞

## α(r) = min{r, r 2 } is continuous, strictly increasing,

and limr→∞ α(r) = ∞. Hence, it belongs to class K∞

– p. 10/1
β(r, s) = r/(ksr + 1), for any positive real number k,
is strictly increasing in r since
∂β/∂r = 1/(ksr + 1)^2 > 0

It is strictly decreasing in s since ∂β/∂s = −kr^2 /(ksr + 1)^2 < 0,
and β(r, s) → 0 as s → ∞; hence, it belongs to class KL

## β(r, s) = r^c e^{−s} , for any positive real number c, belongs

to class KL
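The class-K and class-KL properties claimed above can be spot-checked numerically (the grids and the constant k are assumed for illustration):

```python
import numpy as np

# α(r) = arctan(r): strictly increasing, but bounded by π/2 (class K, not K∞)
r = np.linspace(0.0, 50.0, 2001)
alpha = np.arctan(r)
increasing = np.all(np.diff(alpha) > 0)
bounded = alpha[-1] < np.pi / 2

# β(r, s) = r / (k s r + 1): for fixed r, decreasing in s and → 0 as s → ∞ (class KL)
k = 2.0
beta = lambda r, s: r / (k * s * r + 1.0)
s = np.linspace(0.0, 100.0, 1001)
decreasing_in_s = np.all(np.diff(beta(1.0, s)) < 0)
tail = beta(1.0, 1e6)   # close to 0
```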
– p. 11/1
Definition: The equilibrium point x = 0 of ẋ = f (t, x) is
uniformly stable if there exist a class K function α and a
positive constant c, independent of t0 , such that

kx(t)k ≤ α(kx(t0 )k), ∀ t ≥ t0 ≥ 0, ∀ kx(t0 )k < c

## uniformly asymptotically stable if there exist a class KL

function β and a positive constant c, independent of t0 ,
such that

kx(t)k ≤ β(kx(t0 )k, t − t0 ), ∀ t ≥ t0 ≥ 0, ∀ kx(t0 )k < c

## globally uniformly asymptotically stable if the foregoing

inequality is satisfied for any initial state x(t0 )

– p. 12/1
exponentially stable if there exist positive constants c,
k, and λ such that

kx(t)k ≤ k kx(t0 )k e^{−λ(t−t0 )} , ∀ t ≥ t0 ≥ 0, ∀ kx(t0 )k < c

## globally exponentially stable if the foregoing inequality

is satisfied for any initial state x(t0 )

– p. 13/1
Theorem: Let the origin x = 0 be an equilibrium point for
ẋ = f (t, x) and D ⊂ Rn be a domain containing x = 0.
Suppose f (t, x) is piecewise continuous in t and locally
Lipschitz in x for all t ≥ 0 and x ∈ D . Let V (t, x) be a
continuously differentiable function such that

## (1) W1 (x) ≤ V (t, x) ≤ W2 (x)

∂V ∂V
(2) + f (t, x) ≤ 0
∂t ∂x
for all t ≥ 0 and x ∈ D , where W1 (x) and W2 (x) are
continuous positive definite functions on D . Then, the origin
is uniformly stable

– p. 14/1
Theorem: Suppose the assumptions of the previous
theorem are satisfied with
∂V ∂V
+ f (t, x) ≤ −W3 (x)
∂t ∂x
for all t ≥ 0 and x ∈ D , where W3 (x) is a continuous
positive definite function on D . Then, the origin is uniformly
asymptotically stable. Moreover, if r and c are chosen such
that Br = {kxk ≤ r} ⊂ D and c < minkxk=r W1 (x), then
every trajectory starting in {x ∈ Br | W2 (x) ≤ c} satisfies
kx(t)k ≤ β(kx(t0 )k, t − t0 ), ∀ t ≥ t0 ≥ 0

## for some class KL function β . Finally, if D = Rn and

W1 (x) is radially unbounded, then the origin is globally
uniformly asymptotically stable
– p. 15/1
Theorem: Suppose the assumptions of the previous
theorem are satisfied with

## k1 kxka ≤ V (t, x) ≤ k2 kxka

∂V ∂V
+ f (t, x) ≤ −k3 kxka
∂t ∂x
for all t ≥ 0 and x ∈ D , where k1 , k2 , k3 , and a are
positive constants. Then, the origin is exponentially stable.
If the assumptions hold globally, the origin will be globally
exponentially stable.

– p. 16/1
Example:

ẋ = −[1 + g(t)] x^3 , g(t) ≥ 0

V (x) = (1/2) x^2

## V̇ (t, x) = −[1 + g(t)] x^4 ≤ −x^4 , ∀ x ∈ R, ∀ t ≥ 0

The origin is globally uniformly asymptotically stable

Example:
ẋ1 = −x1 − g(t)x2
ẋ2 = x1 − x2
0 ≤ g(t) ≤ k and ġ(t) ≤ g(t), ∀ t ≥ 0

– p. 17/1
V (t, x) = x1^2 + [1 + g(t)] x2^2

## V̇ (t, x) = −2x21 + 2x1 x2 − [2 + 2g(t) − ġ(t)]x22

2 + 2g(t) − ġ(t) ≥ 2 + 2g(t) − g(t) ≥ 2
## V̇ (t, x) ≤ −2x1^2 + 2x1 x2 − 2x2^2 = −x^T [ 2 −1 ; −1 2 ] x
The origin is globally exponentially stable

– p. 18/1
Nonlinear Systems and Control
Lecture # 13
Perturbed Systems

– p. 1/?
Nominal System:

ẋ = f (x), f (0) = 0

Perturbed System:

ẋ = f (x) + g(t, x)
## Case 1: The origin of the nominal system is exponentially

stable
c1 kxk^2 ≤ V (x) ≤ c2 kxk^2

(∂V /∂x) f (x) ≤ −c3 kxk^2

k∂V /∂xk ≤ c4 kxk
– p. 2/?
Use V (x) as a Lyapunov function candidate for the
perturbed system
∂V ∂V
V̇ (t, x) = f (x) + g(t, x)
∂x ∂x
Assume that

## kg(t, x)k ≤ γkxk, γ≥0

V̇ (t, x) ≤ −c3 kxk^2 + k∂V /∂xk kg(t, x)k ≤ −c3 kxk^2 + c4 γ kxk^2

– p. 3/?
γ < c3 /c4 ⇒

## V̇ (t, x) ≤ −(c3 − γc4 ) kxk^2

The origin is an exponentially stable equilibrium point of the
perturbed system

– p. 4/?
Example

ẋ1 = x2
ẋ2 = −4x1 − 2x2 + βx32 , β≥0

ẋ = Ax + g(x)
A = [ 0 1 ; −4 −2 ], g(x) = [ 0 ; βx2^3 ]

The eigenvalues of A are −1 ± j√3

P A + A^T P = −I ⇒ P = [ 3/2 1/8 ; 1/8 5/16 ]

– p. 5/?
V (x) = x^T P x, (∂V /∂x) Ax = −x^T x

c3 = 1, c4 = 2kP k = 2λmax (P ) = 2 × 1.513 = 3.026
kg(x)k = β|x2 |3
g(x) satisfies the bound kg(x)k ≤ γkxk over compact sets
of x. Consider the compact set Ωc = {x^T P x ≤ c}

## k2 = max_{x^T P x≤c} |x2 | = max_{x^T P x≤c} |[0 1]x|

– p. 6/?
Fact:

max_{x^T P x≤c} kLxk = √c kLP^{−1/2} k

Proof

x^T P x ≤ c ⇔ (1/c) x^T P x ≤ 1 ⇔ k(1/√c) P^{1/2} xk ≤ 1

y = (1/√c) P^{1/2} x

max_{x^T P x≤c} kLxk = max_{y^T y≤1} kL √c P^{−1/2} yk = √c kLP^{−1/2} k

– p. 7/?
k2 = max_{x^T P x≤c} |[0 1]x| = √c k[0 1]P^{−1/2} k = 1.8194 √c

## kg(x)k ≤ βc (1.8194)^2 kxk, ∀ x ∈ Ωc

kg(x)k ≤ γ kxk, ∀ x ∈ Ωc , γ = βc (1.8194)^2

γ < c3 /c4 ⇔ β < 1/(3.026 × (1.8194)^2 c) ≈ 0.1/c

## β < 0.1/c ⇒ V̇ (x) ≤ −(1 − 10βc)kxk2

Hence, the origin is exponentially stable and Ωc is an
estimate of the region of attraction

– p. 8/?
Alternative Bound on β

## V̇ (x) = −kxk^2 + 2x^T P g(x)

≤ −kxk^2 + (1/8) β x2^3 ([2 5]x)

≤ −kxk^2 + (√29/8) β x2^2 kxk^2

Over Ωc , x2^2 ≤ (1.8194)^2 c

V̇ (x) ≤ −[1 − (√29/8) β (1.8194)^2 c] kxk^2 = −[1 − βc/0.448] kxk^2

## If β < 0.448/c, the origin will be exponentially stable and

Ωc will be an estimate of the region of attraction
– p. 9/?
Remark: The inequality β < 0.448/c shows a tradeoff
between the estimate of the region of attraction and the
estimate of the upper bound on β

– p. 10/?
Case 2: The origin of the nominal system is asymptotically
stable
∂V ∂V ∂V

V̇ (t, x) = f (x)+ g(t, x) ≤ −W3 (x)+ g(t, x)
∂x ∂x ∂x

## Under what condition will the following inequality hold?

∂V

∂x g(t, x) < W3 (x)

## Special Case: Quadratic-Type Lyapunov function

(∂V /∂x) f (x) ≤ −c3 φ^2 (x), k∂V /∂xk ≤ c4 φ(x)

– p. 11/?
V̇ (t, x) ≤ −c3 φ2 (x) + c4 φ(x)kg(t, x)k
c3
If kg(t, x)k ≤ γφ(x), with γ <
c4
V̇ (t, x) ≤ −(c3 − c4 γ)φ2 (x)

– p. 12/?
Example
ẋ = −x3 + g(t, x)
V (x) = x4 is a quadratic-type Lyapunov function for the
nominal system ẋ = −x3
(∂V /∂x)(−x^3 ) = −4x^6 , |∂V /∂x| = 4|x|^3

φ(x) = |x|^3 , c3 = 4, c4 = 4
Suppose |g(t, x)| ≤ γ|x|3 , ∀ x, with γ < 1
V̇ (t, x) ≤ −4(1 − γ)φ2 (x)
Hence, the origin is globally uniformly asymptotically
stable
– p. 13/?
Remark: A nominal system with asymptotically, but not
exponentially, stable origin is not robust to smooth
perturbations with arbitrarily small linear growth bounds

Example
ẋ = −x3 + γx
The origin is unstable for any γ > 0

– p. 14/?
Nonlinear Systems and Control
Lecture # 14
Passivity
Memoryless Functions
&
State Models

– p. 1/1
Memoryless Functions

[Figure: (a) one-port resistive element with voltage u and current y; (b) its u–y characteristic]

power inflow = uy

Resistor is passive if uy ≥ 0

– p. 2/1
[Figure: three u–y characteristics: (a) passive, (b) passive, (c) not passive]

## y = h(t, u), h ∈ [0, ∞]

Vector case:

y = h(t, u), h^T = [h1 , h2 , · · · , hp ]

## power inflow = Σ_{i=1}^p ui yi = u^T y

– p. 3/1
Definition: y = h(t, u) is
passive if uT y ≥ 0

lossless if uT y = 0

## input strictly passive if uT y ≥ uT ϕ(u) for some

function ϕ where uT ϕ(u) > 0, ∀ u 6= 0

## output strictly passive if uT y ≥ y T ρ(y) for some

function ρ where y T ρ(y) > 0, ∀ y 6= 0

– p. 4/1
Sector Nonlinearity: h belongs to the sector [α, β]
(h ∈ [α, β]) if
αu2 ≤ uh(t, u) ≤ βu2

[Figure: sector-bounded nonlinearities between the lines y = αu and y = βu]

## Also, h ∈ (α, β], h ∈ [α, β), h ∈ (α, β)

– p. 5/1
αu2 ≤ uh(t, u) ≤ βu2 ⇔ [h(t, u) − αu][h(t, u) − βu] ≤ 0

## Definition: A memoryless function h(t, u) is said to belong

to the sector
[0, ∞] if uT h(t, u) ≥ 0

[K1 , ∞] if uT [h(t, u) − K1 u] ≥ 0

## [0, K2 ] with K2 = K2^T > 0 if

h^T (t, u)[h(t, u) − K2 u] ≤ 0

## [K1 , K2 ] with K = K2 − K1 = K^T > 0 if

[h(t, u) − K1 u]^T [h(t, u) − K2 u] ≤ 0

– p. 6/1
Example
h(u) = [ h1 (u1 ) ; h2 (u2 ) ], hi ∈ [αi , βi ], βi > αi , i = 1, 2

K1 = [ α1 0 ; 0 α2 ], K2 = [ β1 0 ; 0 β2 ]

h ∈ [K1 , K2 ]

K = K2 − K1 = [ β1 − α1 0 ; 0 β2 − α2 ]

– p. 7/1
Example
kh(u) − Luk ≤ γkuk
K1 = L − γI, K2 = L + γI

## [h(u) − K1 u]T [h(u) − K2 u] =

kh(u) − Luk2 − γ 2 kuk2 ≤ 0

K = K2 − K1 = 2γI

– p. 8/1
A function in the sector [K1 , K2 ] can be transformed into a
function in the sector [0, ∞] by input feedforward followed
by output feedback

+  + 
- - - y = h(t, u) - -
 K −1 
+6 −6

- K1

Feedforward K −1 Feedback
[K1 , K2 ] [0, K] [0, I] [0, ∞]
−→ −→ −→
– p. 9/1
State Models
[Figure: circuit with inductor L (current x1 = iL ), capacitor C (voltage x2 = vC ), input voltage u, and nonlinear resistors i1 = h1 (v1 ), v2 = h2 (i2 ), i3 = h3 (v3 )]

Lẋ1 = u − h2 (x1 ) − x2
C ẋ2 = x1 − h3 (x2 )
y = x1 + h1 (u)

– p. 10/1
V (x) = (1/2) L x1^2 + (1/2) C x2^2

∫_0^t u(s)y(s) ds ≥ V (x(t)) − V (x(0))
0

## V̇ = Lx1 ẋ1 + Cx2 ẋ2

= x1 [u − h2 (x1 ) − x2 ] + x2 [x1 − h3 (x2 )]
= x1 [u − h2 (x1 )] − x2 h3 (x2 )
= [x1 + h1 (u)]u − uh1 (u) − x1 h2 (x1 ) − x2 h3 (x2 )
= uy − uh1 (u) − x1 h2 (x1 ) − x2 h3 (x2 )

– p. 11/1
uy = V̇ + uh1 (u) + x1 h2 (x1 ) + x2 h3 (x2 )

passive

## Case 1: If h1 = h2 = h3 = 0, then uy = V̇ ; no energy

dissipation; the system is lossless

## Case 2: If h1 ∈ (0, ∞] (uh1 (u) > 0 for u 6= 0), then

uy ≥ V̇ + uh1 (u)

## The energy absorbed over [0, t] will be greater than the

increase in the stored energy, unless the input u(t) is
identically zero. This is a case of input strict passivity
– p. 12/1
Case 3: If h1 = 0 and h2 ∈ (0, ∞], then (since y = x1 )

uy ≥ V̇ + y h2 (y)
## The energy absorbed over [0, t] will be greater than the

increase in the stored energy, unless the output y is
identically zero. This is a case of output strict passivity

## Case 4: If h2 ∈ (0, ∞) and h3 ∈ (0, ∞), then

uy ≥ V̇ + x1 h2 (x1 ) + x2 h3 (x2 )

## x1 h2 (x1 ) + x2 h3 (x2 ) is a positive definite function of x.

This is a case of state strict passivity because the energy
absorbed over [0, t] will be greater than the increase in the
stored energy, unless the state x is identically zero
– p. 13/1
Definition: The system

ẋ = f (x, u), y = h(x, u)
## is passive if there is a continuously differentiable positive

semidefinite function V (x) (the storage function) such that

T ∂V
u y ≥ V̇ = f (x, u), ∀ (x, u)
∂x
Moreover, it is said to be
lossless if uT y = V̇

## input strictly passive if uT y ≥ V̇ + uT ϕ(u) for some

function ϕ such that uT ϕ(u) > 0, ∀ u 6= 0

– p. 14/1
output strictly passive if uT y ≥ V̇ + y T ρ(y) for some
function ρ such that y^T ρ(y) > 0, ∀ y 6= 0

## strictly passive if uT y ≥ V̇ + ψ(x) for some positive

definite function ψ

Example
ẋ = u, y=x
V (x) = (1/2) x^2 ⇒ uy = V̇ ⇒ Lossless

– p. 15/1
Example

ẋ = u, y = x + h(u), h ∈ [0, ∞]

## V (x) = (1/2) x^2 ⇒ uy = V̇ + uh(u) ⇒ Passive

h ∈ (0, ∞] ⇒ uh(u) > 0 ∀ u 6= 0
⇒ Input strictly passive
Example

ẋ = −h(x) + u, y = x, h ∈ [0, ∞]

## V (x) = (1/2) x^2 ⇒ uy = V̇ + yh(y) ⇒ Passive

h ∈ (0, ∞] ⇒ Output strictly passive

– p. 16/1
Example

ẋ = u, y = h(x), h ∈ [0, ∞]
V (x) = ∫_0^x h(σ) dσ ⇒ V̇ = h(x)ẋ = yu ⇒ Lossless
Example

## aẋ = −x + u, y = h(x), h ∈ [0, ∞]

V (x) = a ∫_0^x h(σ) dσ ⇒ V̇ = h(x)(−x + u) = yu − xh(x)

yu = V̇ + xh(x) ⇒ Passive
h ∈ (0, ∞] ⇒ Strictly passive

– p. 17/1
Nonlinear Systems and Control
Lecture # 15
Positive Real Transfer Functions
&
Connection with Lyapunov Stability

– p. 1/?
Definition: A p × p proper rational transfer function matrix
G(s) is positive real if
poles of all elements of G(s) are in Re[s] ≤ 0
for all real ω for which jω is not a pole of any element of
G(s), the matrix G(jω) + GT (−jω) is positive
semidefinite
any pure imaginary pole jω of any element of G(s) is a
simple pole and the residue matrix
lims→jω (s − jω)G(s) is positive semidefinite Hermitian
G(s) is called strictly positive real if G(s − ε) is positive real
for some ε > 0

– p. 2/?
Scalar Case (p = 1):

## Re[G(jω)] is an even function of ω . The second condition

of the definition reduces to

Re[G(jω)] ≥ 0, ∀ ω ∈ [0, ∞)

## which holds when the Nyquist plot of G(jω) lies in the

closed right-half complex plane

## This is true only if the relative degree of the transfer function

is zero or one

– p. 3/?
Lemma: A p × p proper rational transfer function matrix
G(s) is strictly positive real if and only if
G(s) is Hurwitz

G(jω) + G^T (−jω) > 0, ∀ ω ∈ R

either G(∞) + G^T (∞) > 0, or G(∞) + G^T (∞) ≥ 0 and
lim_{ω→∞} ω^{2(p−q)} det[G(jω) + G^T (−jω)] > 0

## where q = rank[G(∞) + G^T (∞)]

– p. 4/?
Scalar Case (p = 1): G(s) is strictly positive real if and only
if
G(s) is Hurwitz
Re[G(jω)] > 0, ∀ ω ∈ [0, ∞)
G(∞) > 0 or

## lim ω 2 Re[G(jω)] > 0

ω→∞

– p. 5/?
Example:

G(s) = 1/s

has a simple pole at s = 0 whose residue is 1

Re[G(jω)] = Re[1/(jω)] = 0, ∀ ω 6= 0

Hence, G(s) is positive real, but it is not strictly positive real:

## 1/(s − ε) has a pole in Re[s] > 0 for any ε > 0

– p. 6/?
Example:

G(s) = 1/(s + a), a > 0, is Hurwitz

Re[G(jω)] = a/(ω^2 + a^2 ) > 0, ∀ ω ∈ [0, ∞)

lim_{ω→∞} ω^2 Re[G(jω)] = lim_{ω→∞} ω^2 a/(ω^2 + a^2 ) = a > 0 ⇒ G is SPR
Example:
G(s) = 1/(s^2 + s + 1), Re[G(jω)] = (1 − ω^2 )/[(1 − ω^2 )^2 + ω^2 ]

G is not PR
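Both scalar frequency-domain tests can be reproduced numerically (the frequency grid and a = 1 are assumed):

```python
import numpy as np

w = np.linspace(0.0, 10.0, 2001)
s = 1j * w

a = 1.0
G1 = 1.0 / (s + a)            # Re[G1(jω)] = a/(ω² + a²) > 0 for all ω
positive_everywhere = np.all(G1.real > 0)

G2 = 1.0 / (s**2 + s + 1.0)   # Re[G2(jω)] = (1 - ω²)/[(1 - ω²)² + ω²]
sign_change = (G2.real.min() < 0) and (G2.real.max() > 0)   # not PR
```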

– p. 7/?
Example:

G(s) = [ (s + 2)/(s + 1)   1/(s + 2)
         −1/(s + 2)        2/(s + 1) ]  is Hurwitz

G(jω) + G^T (−jω) = [ 2(2 + ω^2 )/(1 + ω^2 )   −2jω/(4 + ω^2 )
                      2jω/(4 + ω^2 )            4/(1 + ω^2 ) ] > 0, ∀ ω ∈ R

G(∞) + G^T (∞) = [ 2 0 ; 0 0 ], q = 1

## lim_{ω→∞} ω^2 det[G(jω) + G^T (−jω)] = 4 ⇒ G is SPR

– p. 8/?
Positive Real Lemma: Let

G(s) = C(sI − A)^{−1} B + D

## where (A, B) is controllable and (A, C) is observable.

G(s) is positive real if and only if there exist matrices
P = P T > 0, L, and W such that

P A + AT P = −LT L
P B = C T − LT W
W T W = D + DT

– p. 9/?
Kalman–Yakubovich–Popov Lemma: Let

G(s) = C(sI − A)^{−1} B + D

## where (A, B) is controllable and (A, C) is observable.

G(s) is strictly positive real if and only if there exist matrices
P = P T > 0, L, and W , and a positive constant ε such
that

P A + AT P = −LT L − εP
P B = C T − LT W
W T W = D + DT

– p. 10/?
Lemma: The linear time-invariant minimal realization

ẋ = Ax + Bu
y = Cx + Du

with
G(s) = C(sI − A)−1 B + D
is
passive if G(s) is positive real

strictly passive if G(s) is strictly positive real

## Proof: Apply the PR and KYP Lemmas, respectively, and

use V (x) = 12 xT P x as the storage function

– p. 11/?
u^T y − (∂V /∂x)(Ax + Bu)

= u^T (Cx + Du) − x^T P (Ax + Bu)
= u^T Cx + (1/2) u^T (D + D^T )u − (1/2) x^T (P A + A^T P )x − x^T P Bu
= u^T (B^T P + W^T L)x + (1/2) u^T W^T W u + (1/2) x^T L^T Lx + (1/2) ε x^T P x − x^T P Bu
= (1/2) (Lx + W u)^T (Lx + W u) + (1/2) ε x^T P x ≥ (1/2) ε x^T P x

## In the case of the PR Lemma, ε = 0, and we conclude that

the system is passive; in the case of the KYP Lemma,
ε > 0, and we conclude that the system is strictly passive

– p. 12/?
Connection with Lyapunov Stability

Lemma: If the system ẋ = f (x, u), y = h(x, u)

## is passive with a positive definite storage function V (x),

then the origin of ẋ = f (x, 0) is stable

Proof:

u^T y ≥ (∂V /∂x) f (x, u) ⇒ (∂V /∂x) f (x, 0) ≤ 0

– p. 13/?
Lemma: If the system ẋ = f (x, u), y = h(x, u)

## is strictly passive, then the origin of ẋ = f (x, 0) is

asymptotically stable. Furthermore, if the storage function
is radially unbounded, the origin will be globally
asymptotically stable

## Proof: The storage function V (x) is positive definite

u^T y ≥ (∂V /∂x) f (x, u) + ψ(x) ⇒ (∂V /∂x) f (x, 0) ≤ −ψ(x)
Why is V (x) positive definite? Let φ(t; x) be the solution
of ż = f (z, 0), z(0) = x

– p. 14/?
V̇ ≤ −ψ(x)
Z τ
V (φ(τ, x)) − V (x) ≤ − ψ(φ(t; x)) dt, ∀ τ ∈ [0, δ]
0
Z τ
V (φ(τ, x)) ≥ 0 ⇒ V (x) ≥ ψ(φ(t; x)) dt
0
Z τ
V (x̄) = 0 ⇒ ψ(φ(t; x̄)) dt = 0, ∀ τ ∈ [0, δ]
0

## ⇒ ψ(φ(t; x̄)) ≡ 0 ⇒ φ(t; x̄) ≡ 0 ⇒ x̄ = 0

– p. 15/?
Definition: The system ẋ = f (x, u), y = h(x, u)

## is zero-state observable if no solution of ẋ = f (x, 0) can

stay identically in S = {h(x, 0) = 0}, other than the zero
solution x(t) ≡ 0

Linear Systems

ẋ = Ax, y = Cx

## y(t) = CeAt x(0) ≡ 0 ⇔ x(0) = 0 ⇔ x(t) ≡ 0

– p. 16/?
Lemma: If the system ẋ = f (x, u), y = h(x, u)

## is output strictly passive and zero-state observable, then

the origin of ẋ = f (x, 0) is asymptotically stable.
Furthermore, if the storage function is radially unbounded,
the origin will be globally asymptotically stable

## Proof: The storage function V (x) is positive definite

T ∂V T ∂V
u y≥ f (x, u) + y ρ(y) ⇒ f (x, 0) ≤ −y T ρ(y)
∂x ∂x

## V̇ (x(t)) ≡ 0 ⇒ y(t) ≡ 0 ⇒ x(t) ≡ 0

Apply the invariance principle
– p. 17/?
Example

ẋ1 = x2, ẋ2 = −ax1^3 − kx2 + u, y = x2, a, k > 0

V = (a/4)x1^4 + (1/2)x2^2

V̇ = ax1^3 x2 + x2(−ax1^3 − kx2 + u) = −ky^2 + yu

The system is output strictly passive

The system is zero-state observable. V is radially
unbounded. Hence, the origin of the unforced system is
globally asymptotically stable

– p. 18/?
Nonlinear Systems and Control
Lecture # 16

Theorems

– p. 1/2
[Block diagram: feedback connection of H1 and H2;
 e1 = u1 − y2, e2 = u2 + y1]

Hi may be a dynamical system: ẋi = fi(xi, ei), yi = hi(xi, ei)

or a memoryless function: yi = hi(t, ei)

– p. 2/2
Passivity Theorems

## Theorem 6.1: The feedback connection of two passive

systems is passive

## Theorem 6.3: Consider the feedback connection of two

dynamical systems. When u = 0, the origin of the
closed-loop system is asymptotically stable if each
feedback component is either
strictly passive, or
output strictly passive and zero-state observable
Furthermore, if the storage function for each component is
radially unbounded, the origin is globally asymptotically
stable
– p. 3/2
Theorem 6.4: Consider the feedback connection of a
strictly passive dynamical system with a passive
memoryless function. When u = 0, the origin of the
closed-loop system is uniformly asymptotically stable. if the
storage function for the dynamical system is radially
unbounded, the origin will be globally uniformly
asymptotically stable

eT2 y2 ≥ V̇2 + y2T ρ2(y2),   y2T ρ2(y2) > 0, ∀ y2 ≠ 0

– p. 4/2
eT1 y1 +eT2 y2 = (u1 −y2 )T y1 +(u2 +y1 )T y2 = uT1 y1 +uT2 y2

## u = 0 ⇒ V̇ ≤ −ψ1 (x1 ) − y2T ρ2 (y2 )

V̇ = 0 ⇒ x1 = 0 and y2 = 0
y2 (t) ≡ 0 ⇒ e1 (t) ≡ 0 ( & x1 (t) ≡ 0) ⇒ y1 (t) ≡ 0
y1 (t) ≡ 0 ⇒ e2 (t) ≡ 0
By zero-state observability of H2 : y2 (t) ≡ 0 ⇒ x2 (t) ≡ 0
Apply the invariance principle
– p. 5/2

Example

H1: ẋ1 = x2, ẋ2 = −ax1^3 − kx2 + e1, y1 = x2
H2: ẋ3 = x4, ẋ4 = −bx3 − x4^3 + e2, y2 = x4

a, b, k > 0

V1 = (1/4)ax1^4 + (1/2)x2^2

V̇1 = ax1^3 x2 − ax1^3 x2 − kx2^2 + x2 e1 = −ky1^2 + y1 e1

With e1 = 0, y1(t) ≡ 0 ⇔ x2(t) ≡ 0 ⇒ x1(t) ≡ 0
H1 is output strictly passive and zero-state observable

– p. 6/2
V2 = (1/2)bx3^2 + (1/2)x4^2

V̇2 = bx3x4 − bx3x4 − x4^4 + x4 e2 = −y2^4 + y2 e2

With e2 = 0, y2(t) ≡ 0 ⇔ x4(t) ≡ 0 ⇒ x3(t) ≡ 0
H2 is output strictly passive and zero-state observable

## The origin is globally asymptotically stable

– p. 7/2
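A quick simulation sketch of the passivity argument above, assuming the sample values a = b = k = 1: with u1 = u2 = 0 the total storage V1 + V2 should be nonincreasing along solutions.

```python
import numpy as np

# Feedback connection above with u1 = u2 = 0 and a = b = k = 1
# (sample values, just for the check). Forward-Euler, step 1e-3.
a = b = k = 1.0

def f(x):
    x1, x2, x3, x4 = x
    e1, e2 = -x4, x2                  # e1 = -y2, e2 = y1 when u = 0
    return np.array([x2, -a*x1**3 - k*x2 + e1, x4, -b*x3 - x4**3 + e2])

def V(x):
    x1, x2, x3, x4 = x
    # total storage V1 + V2
    return 0.25*a*x1**4 + 0.5*x2**2 + 0.5*b*x3**2 + 0.5*x4**2

x = np.array([1.0, -1.0, 0.5, 0.5])   # V(x0) = 1.0
dt = 1e-3
v_prev = V(x)
for _ in range(int(20 / dt)):
    x = x + dt * f(x)
    assert V(x) <= v_prev + 1e-4      # small slack for the Euler error
    v_prev = V(x)
assert V(x) < 0.9                     # energy strictly dissipated
print("V after 20 time units:", round(V(x), 4))
```

The dissipation rate is −ky1² − y2⁴, so the storage decreases whenever x2 or x4 is nonzero, consistent with global asymptotic stability of the origin.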
Loop Transformations

Recall that a memoryless function in the sector [K1, K2]
can be transformed into a function in the sector [0, ∞] by
input feedforward followed by output feedback

[Block diagram: y = h(t, u) with input feedforward −K1 and
 output feedback through K^{-1}, K = K2 − K1]

– p. 8/2
[Block diagram: feedback connection of H1 and H2]

H1 is a dynamical system
H2 is a memoryless function in the sector [K1, K2]

– p. 9/2
[Block diagram: loop transformation step 1 — feedback K1 around
 H1 and feedforward K1 around H2]
– p. 10/2
[Block diagram: loop transformation step 2 — postgain K on the
 forward path and pregain K^{-1} before H2]
– p. 11/2
[Block diagram: final transformed loop — H̃1 (H1 with K1 feedback,
 postgain K, unity feedforward) in feedback with H̃2 (K^{-1} and
 K1 feedforward around H2)]
– p. 12/2
Example

H1: ẋ1 = x2, ẋ2 = −h(x1) + bx2 + e1, y1 = x2
H2: y2 = σ(e2)

σ ∈ [α, β], h ∈ [α1, ∞], b > 0, α1 > 0, k = β − α > 0

After the loop transformation:

H̃1: ẋ1 = x2, ẋ2 = −h(x1) − ax2 + ẽ1, ỹ1 = kx2 + ẽ1
H̃2: ỹ2 = σ̃(ẽ2)

σ̃ ∈ [0, ∞],   a = α − b

– p. 13/2
Assume a = α − b > 0 and show that H̃1 is strictly passive

V1 = k ∫0^{x1} h(s) ds + xT P x
   = k ∫0^{x1} h(s) ds + p11 x1^2 + 2p12 x1 x2 + p22 x2^2

V̇1 = kh(x1)x2 + 2(p11 x1 + p12 x2)x2
     + 2(p12 x1 + p22 x2)[−h(x1) − ax2 + ẽ1]

Take p22 = k/2, p11 = a p12

– p. 14/2
V̇1 = −2p12 x1 h(x1) − (ka − 2p12)x2^2 + kx2 ẽ1 + 2p12 x1 ẽ1
    = −2p12 x1 h(x1) − (ka − 2p12)x2^2 + (kx2 + ẽ1)ẽ1 − ẽ1^2 + 2p12 x1 ẽ1

ỹ1 ẽ1 = V̇1 + 2p12 x1 h(x1) + (ka − 2p12)x2^2 + (ẽ1 − p12 x1)^2 − p12^2 x1^2
      ≥ V̇1 + p12(2α1 − p12)x1^2 + (ka − 2p12)x2^2

Take 0 < p12 < min{ak/2, 2α1}  ⇒  p12^2 < (ak/2)p12 = p11 p22

H̃1 is strictly passive. By Theorem 6.4 the origin is globally
asymptotically stable (when u = 0)
– p. 15/2
[Block diagram: feedback connection of H1 and H2]
– p. 16/2
[Block diagram: multiplier transformation — H̃1 = H1 followed by
 W^{-1}(s); H̃2 = W(s) followed by H2]
– p. 17/2
Example

H1: ẋ = Ax + Be1, y1 = Cx

A = [0 1; −1 −1],  B = [0; 1],  C = [1 0]

H2: y2 = h(e2), h ∈ [0, ∞]

C(sI − A)^{-1}B = 1/(s^2 + s + 1)   Not PR

W(s) = 1/(as + 1)  ⇒  W^{-1}(s)C(sI − A)^{-1}B = (as + 1)/(s^2 + s + 1)

H̃1: ẋ = Ax + Be1, ỹ1 = C̃x = [1  a]x
– p. 18/2
(as + 1)/(s^2 + s + 1)

Re[(1 + jωa)/(1 − ω^2 + jω)] = (1 + (a − 1)ω^2)/((1 − ω^2)^2 + ω^2)

lim_{ω→∞} ω^2 Re[(1 + jωa)/(1 − ω^2 + jω)] = a − 1

a > 1 ⇒ (as + 1)/(s^2 + s + 1) is SPR

V1 = (1/2)xT P x,   P A + AT P = −LT L − εP,   P B = C̃T

– p. 19/2
H̃2: aė2 = −e2 + ẽ2, y2 = h(e2), h ∈ [0, ∞]

H̃2 is strictly passive with V2 = a ∫0^{e2} h(s) ds. Use

V = V1 + V2 = (1/2)xT P x + a ∫0^{e2} h(s) ds

as a Lyapunov function candidate for the original feedback
connection

– p. 20/2
V̇ = (1/2)xT P ẋ + (1/2)ẋT P x + a h(e2)ė2
  = (1/2)xT P[Ax − Bh(e2)] + (1/2)[Ax − Bh(e2)]T P x + a h(e2)C[Ax − Bh(e2)]
  = −(1/2)xT LT Lx − (ε/2)xT P x − xT C̃T h(e2) + a h(e2)CAx
  = −(1/2)xT LT Lx − (ε/2)xT P x − xT [C + aCA]T h(e2) + a h(e2)CAx
  = −(1/2)xT LT Lx − (ε/2)xT P x − e2 h(e2)
  ≤ −(ε/2)xT P x

## The origin is globally asymptotically stable

– p. 21/2
Nonlinear Systems and Control
Lecture # 17

## Circle & Popov Criteria

– p. 1/2
Absolute Stability

[Block diagram: feedback loop of the linear system G(s) and the
 memoryless nonlinearity ψ(·); u = r − ψ(y)]

The system is absolutely stable if (when r = 0) the origin is
globally asymptotically stable for all memoryless
time-invariant nonlinearities in a given sector

– p. 2/2
Circle Criterion
Suppose G(s) = C(sI − A)−1 B + D is SPR and
ψ ∈ [0, ∞]
ẋ = Ax + Bu
y = Cx + Du
u = −ψ(y)

## By the KYP Lemma, ∃ P = P T > 0, L, W, ε > 0

P A + AT P = −LT L − εP
P B = C T − LT W
W T W = D + DT

V (x) = 12 xT P x

– p. 3/2
V̇ = (1/2)xT P ẋ + (1/2)ẋT P x
  = (1/2)xT(P A + AT P)x + xT P Bu
  = −(1/2)xT LT Lx − (1/2)εxT P x + xT(CT − LT W)u
  = −(1/2)xT LT Lx − (1/2)εxT P x + (Cx + Du)T u − uT Du − xT LT W u

uT Du = (1/2)uT(D + DT)u = (1/2)uT WT W u

V̇ = −(1/2)εxT P x − (1/2)(Lx + W u)T(Lx + W u) − yT ψ(y)

yT ψ(y) ≥ 0 ⇒ V̇ ≤ −(1/2)εxT P x
The origin is globally exponentially stable

– p. 4/2
What if ψ ∈ [K1, ∞]?

[Block diagram: loop transformation — feedback K1 around G(s)
 and feedforward K1 around ψ(·), giving ψ̃ = ψ − K1]

ψ̃ ∈ [0, ∞]; hence the origin is globally exponentially stable
if G(s)[I + K1 G(s)]^{-1} is SPR

– p. 5/2
What if ψ ∈ [K1, K2]?

[Block diagram: the previous transformation followed by postgain
 K = K2 − K1 on the forward path and pregain K^{-1} before ψ̃]

ψ̃ ∈ [0, ∞]; hence the origin is globally exponentially stable
if I + KG(s)[I + K1 G(s)]^{-1} is SPR

– p. 6/2
I + KG(s)[I + K1 G(s)]−1 = [I + K2 G(s)][I + K1 G(s)]−1

Theorem (Circle Criterion): The system is absolutely stable if
  ψ ∈ [K1, ∞] and G(s)[I + K1 G(s)]^{-1} is SPR, or
  ψ ∈ [K1, K2] and [I + K2 G(s)][I + K1 G(s)]^{-1} is SPR

Scalar Case: ψ ∈ [α, β], β > α
The system is absolutely stable if

(1 + βG(s))/(1 + αG(s)) is Hurwitz and

Re[(1 + βG(jω))/(1 + αG(jω))] > 0, ∀ ω ∈ [0, ∞]

– p. 7/2
Case 1: α > 0
By the Nyquist criterion

(1 + βG(s))/(1 + αG(s)) = 1/(1 + αG(s)) + βG(s)/(1 + αG(s))

is Hurwitz if the Nyquist plot of G(jω) does not intersect the
point −(1/α) + j0 and encircles it m times in the
counterclockwise direction, where m is the number of poles
of G(s) in the open right-half complex plane

Re[(1 + βG(jω))/(1 + αG(jω))] > 0 ⇔ Re[(1/β + G(jω))/(1/α + G(jω))] > 0

– p. 8/2
Re[(1/β + G(jω))/(1/α + G(jω))] > 0, ∀ ω ∈ [0, ∞]

[Figure: disk D(α, β) in the complex plane, centered on the real
 axis and intersecting it at −1/α and −1/β]

The system is absolutely stable if the Nyquist plot of G(jω)
does not enter the disk D(α, β) and encircles it m times in
the counterclockwise direction
– p. 9/2
Case 2: α = 0

Re[1 + βG(jω)] > 0, ∀ ω ∈ [0, ∞]
⇔ Re[G(jω)] > −1/β, ∀ ω ∈ [0, ∞]

The system is absolutely stable if G(s) is Hurwitz and the
Nyquist plot of G(jω) lies to the right of the vertical line
defined by Re[s] = −1/β

– p. 10/2
Case 3: α < 0 < β

Re[(1 + βG(jω))/(1 + αG(jω))] > 0 ⇔ Re[(1/β + G(jω))/(1/α + G(jω))] < 0

The Nyquist plot of G(jω) must lie inside the disk D(α, β).
The Nyquist plot cannot encircle the point −(1/α) + j0.
From the Nyquist criterion, G(s) must be Hurwitz

The system is absolutely stable if G(s) is Hurwitz and the
Nyquist plot of G(jω) lies in the interior of the disk D(α, β)

– p. 11/2
Example

G(s) = 4/((s + 1)((1/2)s + 1)((1/3)s + 1))

[Figure: Nyquist plot of G(jω)]

– p. 12/2
Apply Case 3 with center (0, 0) and radius = 4
Sector is (−0.25, 0.25)

Apply Case 3 with center (1.5, 0) and radius = 2.834
Sector is [−0.227, 0.714]

Apply Case 2
The Nyquist plot is to the right of Re[s] = −0.857
Sector is [0, 1.166]

[0, 1.166] includes the saturation nonlinearity

– p. 13/2
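The Case-2 number above can be checked numerically by sampling the frequency response: the leftmost point of the Nyquist plot of G(jω) should sit near −0.857, giving the sector bound 1/0.857 ≈ 1.166.

```python
import numpy as np

# Sample G(jw) = 4/((jw+1)(jw/2+1)(jw/3+1)) on a dense grid and find
# the minimum real part (the leftmost point of the Nyquist plot).
w = np.linspace(0.01, 100.0, 200000)
s = 1j * w
G = 4.0 / ((s + 1) * (s / 2 + 1) * (s / 3 + 1))
min_re = G.real.min()
assert abs(min_re - (-0.857)) < 1e-2        # plot stays right of -0.857
assert abs(1.0 / (-min_re) - 1.166) < 1e-2  # sector upper bound ~ 1.166
print("min Re G(jw) =", round(min_re, 3))
```

The minimum occurs near ω ≈ √3; the grid evaluation agrees with the slide's Re[s] = −0.857 line to within rounding.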
Example

G(s) = 4/((s − 1)((1/2)s + 1)((1/3)s + 1))

[Figure: Nyquist plot of G(jω)]

G is not Hurwitz

Apply Case 1

Center = (−3.2, 0), Radius = 0.168 ⇒ [0.2969, 0.3298]

– p. 14/2
Popov Criterion

[Block diagram: feedback loop of G(s) and ψ(·)]

ẋ = Ax + Bu
y = Cx
ui = −ψi (yi ), 1 ≤ i ≤ p

ψi ∈ [0, ki ], 1 ≤ i ≤ p, (0 < ki ≤ ∞)
G(s) = C(sI − A)−1 B
Γ = diag(γ1 , . . . , γp ), M = diag(1/k1 , · · · , 1/kp )

– p. 15/2
[Block diagram: loop transformed with the multiplier (I + sΓ);
 H̃1 = M + (I + sΓ)G(s), H̃2 = ψ(·) driven through (I + sΓ)^{-1}
 with feedforward M]

Show that H̃1 and H̃2 are passive

– p. 16/2
M + (I + sΓ)G(s)
  = M + (I + sΓ)C(sI − A)^{-1}B
  = M + C(sI − A)^{-1}B + ΓCs(sI − A)^{-1}B
  = M + C(sI − A)^{-1}B + ΓC(sI − A + A)(sI − A)^{-1}B
  = (C + ΓCA)(sI − A)^{-1}B + M + ΓCB

If M + (I + sΓ)G(s) is SPR, then H̃1 is strictly passive
with the storage function V1 = (1/2)xT P x, where P is given by
the KYP equations

P A + AT P = −LT L − εP
P B = (C + ΓCA)T − LT W
WT W = 2M + ΓCB + BT CT Γ

– p. 17/2
H̃2 consists of p decoupled components:

γi żi = −zi + (1/ki)ψi(zi) + ẽ2i,   ỹ2i = ψi(zi)

V2i = γi ∫0^{zi} ψi(σ) dσ

V̇2i = γi ψi(zi)żi = ψi(zi)[−zi + (1/ki)ψi(zi) + ẽ2i]
     = ỹ2i ẽ2i + (1/ki)ψi(zi)[ψi(zi) − ki zi]

ψi ∈ [0, ki] ⇒ ψi(ψi − ki zi) ≤ 0 ⇒ V̇2i ≤ ỹ2i ẽ2i

H̃2 is passive with the storage function
V2 = Σ_{i=1}^p γi ∫0^{zi} ψi(σ) dσ

– p. 18/2
Use V = (1/2)xT P x + Σ_{i=1}^p γi ∫0^{yi} ψi(σ) dσ
as a Lyapunov function candidate for the feedback connection

ẋ = Ax + Bu, y = Cx, u = −ψ(y)

V̇ = (1/2)xT P ẋ + (1/2)ẋT P x + ψT(y)Γẏ
  = (1/2)xT(P A + AT P)x + xT P Bu + ψT(y)ΓC(Ax + Bu)
  = −(1/2)xT LT Lx − (1/2)εxT P x + xT(CT + AT CT Γ − LT W)u
    + ψT(y)ΓCAx + ψT(y)ΓCBu

– p. 19/2
V̇ = −(1/2)εxT P x − (1/2)(Lx + W u)T(Lx + W u) − ψ(y)T[y − M ψ(y)]
  ≤ −(1/2)εxT P x − ψ(y)T[y − M ψ(y)]

ψi ∈ [0, ki] ⇒ ψ(y)T[y − M ψ(y)] ≥ 0 ⇒ V̇ ≤ −(1/2)εxT P x

The origin is globally asymptotically stable
Popov Criterion: The system is absolutely stable if, for
1 ≤ i ≤ p, ψi ∈ [0, ki] and there exists a constant γi ≥ 0,
with (1 + λk γi) ≠ 0 for every eigenvalue λk of A, such that
M + (I + sΓ)G(s) is strictly positive real

– p. 20/2
Scalar case

(1/k) + (1 + sγ)G(s)

is SPR if G(s) is Hurwitz and

(1/k) + Re[G(jω)] − γω Im[G(jω)] > 0, ∀ ω ∈ [0, ∞)

If  lim_{ω→∞} {(1/k) + Re[G(jω)] − γω Im[G(jω)]} = 0

we also need

lim_{ω→∞} ω^2 {(1/k) + Re[G(jω)] − γω Im[G(jω)]} > 0

– p. 21/2
(1/k) + Re[G(jω)] − γω Im[G(jω)] > 0, ∀ ω ∈ [0, ∞)

[Figure: Popov plot — ω Im[G(jω)] versus Re[G(jω)]; the plot lies
 to the right of a line of slope 1/γ through −1/k]
– p. 22/2
Example

ẋ1 = x2, ẋ2 = −αx1 − x2 − h(y) + αx1, y = x1, α > 0

G(s) = 1/(s^2 + s + α),   ψ(y) = h(y) − αy

h ∈ [α, β] ⇒ ψ ∈ [0, k]   (k = β − α > 0)

γ > 1 ⇒ (α − ω^2 + γω^2)/((α − ω^2)^2 + ω^2) > 0, ∀ ω ∈ [0, ∞)

and lim_{ω→∞} ω^2(α − ω^2 + γω^2)/((α − ω^2)^2 + ω^2) = γ − 1 > 0

– p. 23/2
The system is absolutely stable for ψ ∈ [0, ∞] (h ∈ [α, ∞])

[Figure: Popov plot of the example, with a line of slope 1]

Compare with the circle criterion (γ = 0):

(1/k) + (α − ω^2)/((α − ω^2)^2 + ω^2) > 0, ∀ ω ∈ [0, ∞], for k < 1 + 2√α
– p. 24/2
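A grid check of the Popov condition for this example, assuming the sample values α = 1 and γ = 2 (any γ > 1 works): the quantity Re[G(jω)] − γω Im[G(jω)] stays positive, so the condition holds for every k, including the sector [0, ∞].

```python
import numpy as np

# G(jw) = 1/(alpha - w^2 + jw); with gamma > 1,
# Re[G] - gamma*w*Im[G] = (alpha - w^2 + gamma*w^2)/((alpha - w^2)^2 + w^2) > 0.
alpha, gamma = 1.0, 2.0
w = np.linspace(0.0, 100.0, 100000)
G = 1.0 / (alpha - w**2 + 1j * w)
popov = G.real - gamma * w * G.imag
assert popov.min() > 0.0                      # positive for all sampled w
tail = (w[-1] ** 2) * popov[-1]               # w^2 * popov -> gamma - 1
assert abs(tail - (gamma - 1.0)) < 1e-2
print("min over grid:", popov.min(), " w^2*popov at w=100:", round(tail, 4))
```

The tail limit γ − 1 > 0 is exactly the extra condition needed when the Popov expression tends to zero as ω → ∞.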
Nonlinear Systems and Control
Lecture # 18

Boundedness
&
Ultimate Boundedness

– p. 1/1
Definition: The solutions of ẋ = f(t, x) are

uniformly bounded if ∃ c > 0 and for every
0 < a < c, ∃ β = β(a) > 0 such that

kx(t0)k ≤ a ⇒ kx(t)k ≤ β, ∀ t ≥ t0 ≥ 0

uniformly ultimately bounded with ultimate bound b if
∃ b, c > 0 and for every 0 < a < c, ∃ T = T(a, b) ≥ 0
such that

kx(t0)k ≤ a ⇒ kx(t)k ≤ b, ∀ t ≥ t0 + T

Drop "uniformly" if ẋ = f(x)

– p. 2/1
Lyapunov Analysis: Let V(x) be a cont. diff. positive
definite function and suppose that the sets
Ωc = {V(x) ≤ c} and Ωε = {V(x) ≤ ε}
are compact for some c > ε > 0

[Figure: nested level sets Ωε ⊂ Ωc]

– p. 3/1
Suppose

V̇(t, x) = (∂V/∂x) f(t, x) ≤ −W3(x), ∀ x ∈ Λ = {ε ≤ V(x) ≤ c}, ∀ t ≥ 0

W3(x) is continuous and positive definite

k = min_{x∈Λ} W3(x) > 0

V̇(t, x) ≤ −k, ∀ x ∈ Λ, ∀ t ≥ t0 ≥ 0
V(x(t)) ≤ V(x(t0)) − k(t − t0) ≤ c − k(t − t0)
x(t) enters the set Ωε within the interval [t0, t0 + (c − ε)/k]

– p. 4/1
Suppose

V̇(t, x) ≤ −W3(x), ∀ µ ≤ kxk ≤ r

Choose c and ε such that Λ ⊂ {µ ≤ kxk ≤ r}

[Figure: Bµ ⊂ Ωε ⊂ Ωc ⊂ Br]

– p. 5/1
Let α1 and α2 be class K functions such that

α1(kxk) ≤ V(x) ≤ α2(kxk)

V(x) ≤ c ⇒ α1(kxk) ≤ c ⇔ kxk ≤ α1^{-1}(c)
c = α1(r) ⇒ Ωc ⊂ Br
kxk ≤ µ ⇒ V(x) ≤ α2(µ)
ε = α2(µ) ⇒ Bµ ⊂ Ωε

What is the ultimate bound?

V(x) ≤ ε ⇒ α1(kxk) ≤ ε ⇔ kxk ≤ α1^{-1}(ε) = α1^{-1}(α2(µ))

– p. 6/1
Theorem (special case of Thm 4.18): Suppose

α1(kxk) ≤ V(x) ≤ α2(kxk)

(∂V/∂x) f(t, x) ≤ −W3(x), ∀ kxk ≥ µ > 0

∀ t ≥ 0 and kxk ≤ r, where α1, α2 ∈ K, W3(x) is
continuous & positive definite, and µ < α2^{-1}(α1(r)). Then,
for every initial state x(t0) ∈ {kxk ≤ α2^{-1}(α1(r))}, there is
T ≥ 0 (dependent on x(t0) and µ) such that

kx(t)k ≤ α1^{-1}(α2(µ)), ∀ t ≥ t0 + T

If the assumptions hold globally and α1 ∈ K∞, then the
conclusion holds for any initial state x(t0)
– p. 7/1
Remarks:
  The ultimate bound is independent of the initial state
  The ultimate bound is a class K function of µ; hence,
  the smaller the value of µ, the smaller the ultimate
  bound. As µ → 0, the ultimate bound approaches zero

– p. 8/1
Example

ẋ1 = x2, ẋ2 = −(1 + x1^2)x1 − x2 + M cos ωt

With M = 0, ẋ2 = −(1 + x1^2)x1 − x2 = −h(x1) − x2

V(x) = xT [1/2  1/2; 1/2  1] x + 2 ∫0^{x1} (y + y^3) dy   (Example 4.5)

V(x) = xT [3/2  1/2; 1/2  1] x + (1/2)x1^4 =def xT P x + (1/2)x1^4

– p. 9/1
λmin(P)kxk^2 ≤ V(x) ≤ λmax(P)kxk^2 + (1/2)kxk^4

V̇ = −x1^2 − x1^4 − x2^2 + (x1 + 2x2)M cos ωt
  ≤ −kxk^2 − x1^4 + M√5 kxk
  = −(1 − θ)kxk^2 − x1^4 − θkxk^2 + M√5 kxk   (0 < θ < 1)
  ≤ −(1 − θ)kxk^2 − x1^4, ∀ kxk ≥ M√5/θ =def µ

The solutions are GUUB by

b = α1^{-1}(α2(µ)) = sqrt((λmax(P)µ^2 + µ^4/2)/λmin(P))
– p. 10/1
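The ultimate-bound formula above can be evaluated numerically; M = 0.1 and θ = 0.5 below are hypothetical sample values chosen only to exercise the formula.

```python
import numpy as np

# P from the example above; mu = M*sqrt(5)/theta, b = sqrt((lmax*mu^2 + mu^4/2)/lmin).
P = np.array([[1.5, 0.5], [0.5, 1.0]])
lam = np.linalg.eigvalsh(P)         # eigenvalues in ascending order
lmin, lmax = lam[0], lam[-1]
M, theta = 0.1, 0.5                 # hypothetical sample values
mu = M * np.sqrt(5.0) / theta
b = np.sqrt((lmax * mu**2 + mu**4 / 2.0) / lmin)
assert b > mu                       # b inflates mu by the conditioning of V
print("mu =", round(mu, 4), " b =", round(b, 4))
```

Since λmax(P) > λmin(P), the ultimate bound b always exceeds µ; the gap reflects how far the level sets of V deviate from spheres.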
Nonlinear Systems and Control
Lecture # 19

Perturbed Systems
&
Input-to-State Stability

– p. 1/?
Perturbed Systems: Nonvanishing Perturbation

Nominal System:

ẋ = f(x), f(0) = 0

Perturbed System:

ẋ = f(x) + g(t, x)

Case 1: The origin of ẋ = f(x) is exponentially stable

c1 kxk^2 ≤ V(x) ≤ c2 kxk^2

(∂V/∂x) f(x) ≤ −c3 kxk^2,   k∂V/∂xk ≤ c4 kxk

∀ x ∈ Br = {kxk ≤ r}

– p. 2/?
Use V(x) to investigate ultimate boundedness of the
perturbed system

V̇(t, x) = (∂V/∂x) f(x) + (∂V/∂x) g(t, x)

Assume kg(t, x)k ≤ δ, ∀ t ≥ 0, x ∈ Br

V̇(t, x) ≤ −c3 kxk^2 + k∂V/∂xk kg(t, x)k
        ≤ −c3 kxk^2 + c4 δ kxk
        = −(1 − θ)c3 kxk^2 − θc3 kxk^2 + c4 δ kxk   (0 < θ < 1)
        ≤ −(1 − θ)c3 kxk^2, ∀ kxk ≥ δc4/(θc3) =def µ

– p. 3/?
Apply Theorem 4.18

kx(t0)k ≤ α2^{-1}(α1(r)) ⇔ kx(t0)k ≤ r√(c1/c2)

µ < α2^{-1}(α1(r)) ⇔ δc4/(θc3) < r√(c1/c2) ⇔ δ < (c3/c4)θr√(c1/c2)

b = α1^{-1}(α2(µ)) ⇔ b = µ√(c2/c1) ⇔ b = (δc4/(θc3))√(c2/c1)

For all kx(t0)k ≤ r√(c1/c2), the solutions of the perturbed
system are ultimately bounded by b

– p. 4/?
Example

ẋ1 = x2, ẋ2 = −4x1 − 2x2 + βx2^3 + d(t)

β ≥ 0, |d(t)| ≤ δ, ∀ t ≥ 0

V(x) = xT P x = xT [3/2  1/8; 1/8  5/16] x   (Lecture 13)

V̇(t, x) = −kxk^2 + 2βx2^2((1/8)x1x2 + (5/16)x2^2)
          + 2d(t)((1/8)x1 + (5/16)x2)
        ≤ −kxk^2 + (√29/8)βk2^2 kxk^2 + (√29/8)δ kxk
– p. 5/?

k2 = max_{xT P x ≤ c} |x2| = 1.8194√c

Suppose β ≤ 8(1 − ζ)/(√29 k2^2)   (0 < ζ < 1)

V̇(t, x) ≤ −ζkxk^2 + (√29/8)δ kxk
        ≤ −(1 − θ)ζkxk^2, ∀ kxk ≥ √29 δ/(8ζθ) =def µ   (0 < θ < 1)

If µ^2 λmax(P) < c, then all solutions of the perturbed
system, starting in Ωc, are uniformly ultimately bounded by

b = (√29 δ/(8ζθ)) √(λmax(P)/λmin(P))

– p. 6/?
Case 2: The origin of ẋ = f(x) is asymptotically stable

α1(kxk) ≤ V(x) ≤ α2(kxk)

(∂V/∂x) f(x) ≤ −α3(kxk),   k∂V/∂xk ≤ k

∀ x ∈ Br = {kxk ≤ r}, αi ∈ K, i = 1, 2, 3

V̇(t, x) ≤ −α3(kxk) + k∂V/∂xk kg(t, x)k
        ≤ −α3(kxk) + δk
        ≤ −(1 − θ)α3(kxk) − θα3(kxk) + δk   (0 < θ < 1)
        ≤ −(1 − θ)α3(kxk), ∀ kxk ≥ α3^{-1}(δk/θ) =def µ

– p. 7/?
Apply Theorem 4.18

µ < α2^{-1}(α1(r)) ⇔ α3^{-1}(δk/θ) < α2^{-1}(α1(r))
                 ⇔ δ < θα3(α2^{-1}(α1(r)))/k

Compare with δ < (c3/c4)θr√(c1/c2)

Example

ẋ = −x/(1 + x^2)

V(x) = x^4 ⇒ (∂V/∂x)(−x/(1 + x^2)) = −4x^4/(1 + x^2)
– p. 8/?
The origin is globally asymptotically stable

θα3(α2^{-1}(α1(r)))/k = θα3(r)/k = rθ/(1 + r^2) → 0 as r → ∞

ẋ = −x/(1 + x^2) + δ, δ > 0

δ > 1/2 ⇒ lim_{t→∞} x(t) = ∞

– p. 9/?
Input-to-State Stability (ISS)

Definition: The system ẋ = f(x, u) is input-to-state stable if
there exist β ∈ KL and γ ∈ K such that for any initial state
x(t0) and any bounded input u(t)

kx(t)k ≤ β(kx(t0)k, t − t0) + γ(sup_{t0≤τ≤t} ku(τ)k)

ISS implies:
  BIBS stability
  an ultimate bound that is a class K function of sup_{t≥t0} ku(t)k
  The origin of ẋ = f(x, 0) is GAS
– p. 10/?
Theorem (Special case of Thm 4.19): Let V(x) be a
continuously differentiable function such that

α1(kxk) ≤ V(x) ≤ α2(kxk)

(∂V/∂x) f(x, u) ≤ −W3(x), ∀ kxk ≥ ρ(kuk) > 0

∀ x ∈ Rn, u ∈ Rm, where α1, α2 ∈ K∞, ρ ∈ K, and
W3(x) is a continuous positive definite function. Then, the
system ẋ = f(x, u) is ISS with γ = α1^{-1} ∘ α2 ∘ ρ

Proof: Let µ = ρ(sup_{τ≥t0} ku(τ)k); then

(∂V/∂x) f(x, u) ≤ −W3(x), ∀ kxk ≥ µ
– p. 11/?
Choose ε and c such that
∂V
f (x, u) ≤ −W3 (x), ∀ x ∈ Λ = {ε ≤ V (x) ≤ c}
∂x
Suppose x(t0 ) ∈ Λ and x(t) reaches Ωε at t = t0 + T . For
t0 ≤ t ≤ t0 + T , V satisfies the conditions for the uniform
asymptotic stability. Therefore, the trajectory behaves as if
the origin was uniformly asymptotically stable and satisfies

## kx(t)k ≤ β(kx(t0 )k, t − t0 ), for some β ∈ KL

For t ≥ t0 + T,

kx(t)k ≤ α1^{-1}(α2(µ))

– p. 12/?
kx(t)k ≤ β(kx(t0)k, t − t0) + α1^{-1}(α2(µ)), ∀ t ≥ t0

kx(t)k ≤ β(kx(t0)k, t − t0) + γ(sup_{τ≥t0} ku(τ)k), ∀ t ≥ t0

## Since x(t) depends only on u(τ ) for t0 ≤ τ ≤ t, the

supremum on the right-hand side can be taken over [t0 , t]

– p. 13/?
Example

ẋ = −x^3 + u

The origin of ẋ = −x^3 is globally asymptotically stable

V = (1/2)x^2

V̇ = −x^4 + xu
  = −(1 − θ)x^4 − θx^4 + xu
  ≤ −(1 − θ)x^4, ∀ |x| ≥ (|u|/θ)^{1/3}   (0 < θ < 1)

The system is ISS with γ(r) = (r/θ)^{1/3}
– p. 14/?
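A simulation sketch of the ISS estimate just derived, assuming θ = 0.5 and a constant input u ≡ 0.2 (both sample values): after the transient, |x(t)| should lie below the ultimate bound γ(sup|u|) = (sup|u|/θ)^(1/3).

```python
# Forward-Euler simulation of x' = -x^3 + u with constant u.
theta = 0.5            # any theta in (0, 1) works
u0 = 0.2               # constant input, sup|u| = 0.2
x, dt = 2.0, 1e-3
for _ in range(int(50 / dt)):
    x += dt * (-x**3 + u0)
bound = (u0 / theta) ** (1.0 / 3.0)    # ISS ultimate bound gamma(0.2)
assert abs(x) <= bound + 1e-6
print("x(50) =", round(x, 4), " ISS ultimate bound =", round(bound, 4))
```

The trajectory settles near the equilibrium u0^(1/3) ≈ 0.585, safely inside the ISS bound (0.4)^(1/3) ≈ 0.737; a smaller θ trades a larger bound for a faster guaranteed decay.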
Example

ẋ = −x − 2x^3 + (1 + x^2)u^2

The origin of ẋ = −x − 2x^3 is globally exponentially stable

V = (1/2)x^2

V̇ = −x^2 − 2x^4 + x(1 + x^2)u^2
  = −x^4 − x^2(1 + x^2) + x(1 + x^2)u^2
  ≤ −x^4, ∀ |x| ≥ u^2

The system is ISS with γ(r) = r^2

– p. 15/?
Example

ẋ1 = −x1 + x2^2, ẋ2 = −x2 + u

V(x) = (1/2)x1^2 + (1/4)x2^4

With u = 0:

V̇ = −x1^2 + x1x2^2 − x2^4 = −(x1 − (1/2)x2^2)^2 − (1 − 1/4)x2^4

Now u ≠ 0, V̇ = −(1/2)(x1 − x2^2)^2 − (1/2)(x1^2 + x2^4) + x2^3 u
             ≤ −(1/2)(x1^2 + x2^4) + |x2|^3 |u|

V̇ ≤ −(1/2)(1 − θ)(x1^2 + x2^4) − (1/2)θ(x1^2 + x2^4) + |x2|^3 |u|   (0 < θ < 1)

– p. 16/?
−(1/2)θ(x1^2 + x2^4) + |x2|^3 |u| ≤ 0

if |x2| ≥ 2|u|/θ  or  (|x2| ≤ 2|u|/θ and |x1| ≥ (2|u|/θ)^2)

⇐ kxk ≥ (2|u|/θ)√(1 + (2|u|/θ)^2)

ρ(r) = (2r/θ)√(1 + (2r/θ)^2)

V̇ ≤ −(1/2)(1 − θ)(x1^2 + x2^4), ∀ kxk ≥ ρ(|u|)

The system is ISS

– p. 17/?
Find γ:   V(x) = (1/2)x1^2 + (1/4)x2^4

For |x2| ≤ |x1|: (1/4)(x1^2 + x2^2) ≤ (1/4)x1^2 + (1/4)x1^2 = (1/2)x1^2 ≤ V(x)
For |x2| ≥ |x1|: (1/16)(x1^2 + x2^2)^2 ≤ (1/16)(x2^2 + x2^2)^2 = (1/4)x2^4 ≤ V(x)

min{(1/4)kxk^2, (1/16)kxk^4} ≤ V(x) ≤ (1/2)kxk^2 + (1/4)kxk^4

α1(r) = (1/4) min{r^2, (1/4)r^4},   α2(r) = (1/2)r^2 + (1/4)r^4

γ = α1^{-1} ∘ α2 ∘ ρ

α1^{-1}(s) = { 2s^{1/4},  if s ≤ 1
             { 2√s,       if s ≥ 1

– p. 18/?
Nonlinear Systems and Control
Lecture # 20

Input-Output Stability

– p. 1/1
Input-Output Models

y = Hu

u(t) is a piecewise continuous function of t and belongs to
a linear space of signals
  The space of bounded functions: sup_{t≥0} ku(t)k < ∞
  The space of square-integrable functions: ∫0^∞ uT(t)u(t) dt < ∞

Norm of a signal kuk:
  kuk ≥ 0 and kuk = 0 ⇔ u = 0
– p. 2/1
Lp spaces:

L∞:  kukL∞ = sup_{t≥0} ku(t)k < ∞

L2:  kukL2 = sqrt(∫0^∞ uT(t)u(t) dt) < ∞

Lp:  kukLp = (∫0^∞ ku(t)k^p dt)^{1/p} < ∞, 1 ≤ p < ∞

Notation Lpm: p is the type of p-norm used to define the
space and m is the dimension of u

– p. 3/1
Extended Space: Le = {u | uτ ∈ L, ∀ τ ∈ [0, ∞)}

uτ is a truncation of u: uτ(t) = { u(t), 0 ≤ t ≤ τ
                                 { 0,    t > τ

Le is a linear space and L ⊂ Le

Example: u(t) = t,  uτ(t) = { t, 0 ≤ t ≤ τ
                            { 0, t > τ

u ∉ L∞ but uτ ∈ L∞e

Causality: A mapping H: Lem → Leq is causal if the value
of the output (Hu)(t) at any time t depends only on the
values of the input up to time t

(Hu)τ = (Huτ)τ
– p. 4/1
Definition: A mapping H: Lem → Leq is L stable if ∃ α ∈ K,
β ≥ 0 such that

k(Hu)τkL ≤ α(kuτkL) + β, ∀ u ∈ Lem and τ ∈ [0, ∞)

It is finite-gain L stable if ∃ γ ≥ 0 and β ≥ 0 such that

k(Hu)τkL ≤ γkuτkL + β, ∀ u ∈ Lem and τ ∈ [0, ∞)

It is small-signal L stable (respectively, small-signal
finite-gain L stable) if ∃ r > 0 such that the inequality is
satisfied for all u ∈ Lem with sup_{0≤t≤τ} ku(t)k ≤ r

– p. 5/1
Example: Memoryless function y = h(u)

h(u) = a + b tanh cu = a + b (e^{cu} − e^{−cu})/(e^{cu} + e^{−cu}), a, b, c > 0

h′(u) = 4bc/(e^{cu} + e^{−cu})^2 ≤ bc ⇒ |h(u)| ≤ a + bc|u|, ∀ u ∈ R

Finite-gain L∞ stable with β = a and γ = bc

h(u) = b tanh cu, |h(u)| ≤ bc|u|, ∀ u ∈ R

∫0^∞ |h(u(t))|^p dt ≤ (bc)^p ∫0^∞ |u(t)|^p dt, for p ∈ [1, ∞)

Finite-gain Lp stable with β = 0 and γ = bc

– p. 6/1
h(u) = u^2

sup_{t≥0} |h(u(t))| ≤ (sup_{t≥0} |u(t)|)^2

L∞ stable with β = 0 and α(r) = r^2
It is not finite-gain L∞ stable. Why?

h(u) = tan u

|u| ≤ r < π/2 ⇒ |h(u)| ≤ (tan r / r)|u|

Small-signal finite-gain Lp stable with β = 0 and γ = tan r/r

– p. 7/1
Example: SISO causal convolution operator

y(t) = ∫0^t h(t − σ)u(σ) dσ, h(t) = 0 for t < 0

Suppose h ∈ L1 ⇔ khkL1 = ∫0^∞ |h(σ)| dσ < ∞

|y(t)| ≤ ∫0^t |h(t − σ)| |u(σ)| dσ
       ≤ (∫0^t |h(t − σ)| dσ) sup_{0≤σ≤τ} |u(σ)|
       = (∫0^t |h(s)| ds) sup_{0≤σ≤τ} |u(σ)|

kyτkL∞ ≤ khkL1 kuτkL∞, ∀ τ ∈ [0, ∞)

Finite-gain L∞ stable
Also, finite-gain Lp stable for p ∈ [1, ∞) (see textbook)
– p. 8/1
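A discrete-time sketch of the bound just derived, assuming h(t) = e^(−t) and a bounded square-wave input: the sampled convolution satisfies ‖y‖∞ ≤ ‖h‖₁ ‖u‖∞ exactly, by the same triangle-inequality argument.

```python
import numpy as np

# Riemann approximation of the causal convolution y = h * u.
dt = 1e-2
t = np.arange(0, 20, dt)
h = np.exp(-t)                          # h in L1, ||h||_1 = 1
u = np.sign(np.sin(3 * t))              # bounded input, ||u||_inf = 1
y = np.convolve(h, u)[: len(t)] * dt    # causal part of the convolution
h_l1 = np.sum(np.abs(h)) * dt
assert np.max(np.abs(y)) <= h_l1 * np.max(np.abs(u)) + 1e-12
print("||y||_inf =", round(np.max(np.abs(y)), 4),
      " ||h||_1 =", round(h_l1, 4))
```

For this h the bound is nearly tight: a square wave aligned with the sign of h(t − σ) drives |y(t)| toward ‖h‖₁.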
L Stability of State Models

ẋ = f(x, u), y = h(x, u),   0 = f(0, 0), 0 = h(0, 0)

Case 1: The origin of ẋ = f(x, 0) is exponentially stable

c1 kxk^2 ≤ V(x) ≤ c2 kxk^2

(∂V/∂x) f(x, 0) ≤ −c3 kxk^2,   k∂V/∂xk ≤ c4 kxk

kf(x, u) − f(x, 0)k ≤ Lkuk,   kh(x, u)k ≤ η1 kxk + η2 kuk

∀ kxk ≤ r and kuk ≤ ru

– p. 9/1
V̇ = (∂V/∂x) f(x, 0) + (∂V/∂x)[f(x, u) − f(x, 0)]
  ≤ −c3 kxk^2 + c4 Lkxk kuk ≤ −(c3/c2)V + (c4 L/√c1) kuk √V

W(t) = √(V(x(t))) ⇒ Ẇ ≤ −(c3/(2c2))W + (c4 L/(2√c1))ku(t)k

W(t) ≤ e^{−tc3/(2c2)} W(0) + (c4 L/(2√c1)) ∫0^t e^{−(t−τ)c3/(2c2)} ku(τ)k dτ

kx(t)k ≤ √(c2/c1) kx(0)k e^{−tc3/(2c2)} + (c4 L/(2c1)) ∫0^t e^{−(t−τ)c3/(2c2)} ku(τ)k dτ

ky(t)k ≤ k0 kx(0)ke^{−at} + k2 ∫0^t e^{−a(t−τ)} ku(τ)k dτ + k3 ku(t)k

– p. 10/1
Theorem 5.1: For each x(0) with kx(0)k ≤ r√(c1/c2), the
system is small-signal finite-gain Lp stable for each
p ∈ [1, ∞]

If the assumptions hold globally, then, for each x(0) ∈ Rn,
the system is finite-gain Lp stable for each p ∈ [1, ∞]

Example

ẋ = −x − x^3 + u, y = tanh x + u

V = (1/2)x^2 ⇒ x(−x − x^3) ≤ −x^2

c1 = c2 = 1/2, c3 = c4 = 1, L = η1 = η2 = 1

Finite-gain Lp stable for each x(0) ∈ R and each p ∈ [1, ∞]
– p. 11/1
Case 2: The origin of ẋ = f(x, 0) is asymptotically stable

Theorem 5.3: Suppose that, for all (x, u), f is locally
Lipschitz and h is continuous and satisfies

kh(x, u)k ≤ α1(kxk) + α2(kuk) + η,   α1, α2 ∈ K, η ≥ 0

If the system ẋ = f(x, u) is ISS, then

ẋ = f(x, u), y = h(x, u)

is L∞ stable
– p. 12/1
Proof

kx(t)k ≤ β(kx(0)k, t) + γ(sup_{0≤τ≤t} ku(τ)k),   β ∈ KL, γ ∈ K

ky(t)k ≤ α1(β(kx(0)k, t) + γ(sup_{0≤τ≤t} ku(τ)k)) + α2(ku(t)k) + η

α1(a + b) ≤ α1(2a) + α1(2b)

ky(t)k ≤ α1(2β(kx(0)k, t)) + α1(2γ(sup_{0≤τ≤t} ku(τ)k)) + α2(ku(t)k) + η

kyτkL∞ ≤ γ0(kuτkL∞) + β0

γ0 = α1 ∘ 2γ + α2 and β0 = α1(2β(kx(0)k, 0)) + η

– p. 13/1
Theorem (Rephrasing of Thm 5.2): Suppose f is locally
Lipschitz and h is continuous in some neighborhood of
(x = 0, u = 0). If the origin of ẋ = f (x, 0) is
asymptotically stable, then there is a constant k1 > 0 such
that for each x(0) with kx(0)k < k1 , the system

## ẋ = f (x, u), y = h(x, u)

is small-signal L∞ stable

– p. 14/1
Example

x1^4 + x2^4 ≥ (1/2)kxk^4

V̇ ≤ −kxk^4 + 2kxk|u|
  = −(1 − θ)kxk^4 − θkxk^4 + 2kxk|u|, 0 < θ < 1
  ≤ −(1 − θ)kxk^4, ∀ kxk ≥ (2|u|/θ)^{1/3}

ISS ⇒ L∞ stable

– p. 15/1
Nonlinear Systems and Control
Lecture # 21

L2 Gain

&

## The Small-Gain theorem

– p. 1/1
Theorem 5.4: Consider the linear time-invariant system

ẋ = Ax + Bu, y = Cx + Du

## where A is Hurwitz. Let G(s) = C(sI − A)−1 B + D .

Then, the L2 gain of the system is supω∈R kG(jω)k

– p. 2/1
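Theorem 5.4 can be checked on a simple example; below G(s) = 1/(s+1) (a sample system, not from the slide), whose L2 gain is sup over ω of |G(jω)| = |G(0)| = 1.

```python
import numpy as np

# L2 gain of x' = -x + u, y = x as the peak magnitude of G(jw) = 1/(jw + 1).
w = np.linspace(0.0, 1000.0, 100000)   # grid includes w = 0, where the peak sits
G = 1.0 / (1j * w + 1.0)
l2_gain = np.max(np.abs(G))
assert abs(l2_gain - 1.0) < 1e-9
print("L2 gain of 1/(s+1):", l2_gain)
```

Gridding works here because |G(jω)| = 1/√(1+ω²) is monotone; for general MIMO G(s) one would take the peak of the largest singular value over ω (the H∞ norm).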
Lemma: Consider the time-invariant system

ẋ = f(x, u), y = h(x, u)

where f is locally Lipschitz and h is continuous for all
x ∈ Rn and u ∈ Rm. Let V(x) be a positive semidefinite
function such that

V̇ = (∂V/∂x) f(x, u) ≤ a(γ^2 kuk^2 − kyk^2), a, γ > 0

Then, for each x(0) ∈ Rn, the system is finite-gain L2
stable and its L2 gain is less than or equal to γ. In particular

kyτkL2 ≤ γkuτkL2 + √(V(x(0))/a)
– p. 3/1
Proof

V(x(τ)) − V(x(0)) ≤ aγ^2 ∫0^τ ku(t)k^2 dt − a ∫0^τ ky(t)k^2 dt

V(x) ≥ 0 ⇒ ∫0^τ ky(t)k^2 dt ≤ γ^2 ∫0^τ ku(t)k^2 dt + V(x(0))/a

kyτkL2 ≤ γkuτkL2 + √(V(x(0))/a)
– p. 4/1
Lemma 6.5: If the system ẋ = f(x, u), y = h(x, u)
is output strictly passive with

uT y ≥ V̇ + δyT y, δ > 0

then it is finite-gain L2 stable and its L2 gain is less than or
equal to 1/δ

Proof

V̇ ≤ uT y − δyT y
  = −(1/(2δ))(u − δy)T(u − δy) + (1/(2δ))uT u − (δ/2)yT y
  ≤ (δ/2)((1/δ^2)uT u − yT y)
– p. 5/1
Example

ẋ1 = x2, ẋ2 = −ax1^3 − kx2 + u, y = x2, a, k > 0

V(x) = (a/4)x1^4 + (1/2)x2^2

V̇ = ax1^3 x2 + x2(−ax1^3 − kx2 + u)
  = −kx2^2 + x2 u = −ky^2 + yu

The system is finite-gain L2 stable and its L2 gain is less
than or equal to 1/k
– p. 6/1
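A simulation sketch of the L2-gain bound above, assuming a = k = 1 (sample values), x(0) = 0 (so V(x(0)) = 0), and u = sin t: the truncated L2 norms should satisfy ‖y‖ ≤ (1/k)‖u‖.

```python
import numpy as np

# Forward-Euler simulation; accumulate the squared L2 norms of u and y.
a = k = 1.0
dt, T = 1e-3, 30.0
x = np.zeros(2)
uu = yy = 0.0
for n in range(int(T / dt)):
    u = np.sin(n * dt)
    y = x[1]
    uu += u * u * dt
    yy += y * y * dt
    x = x + dt * np.array([x[1], -a * x[0]**3 - k * x[1] + u])
assert np.sqrt(yy) <= (1.0 / k) * np.sqrt(uu) + 1e-3   # slack for Euler error
print("||y||_L2 =", round(np.sqrt(yy), 3),
      " (1/k)||u||_L2 =", round(np.sqrt(uu), 3))
```

With V(x(0)) = 0 the offset term in Lemma 6.5 vanishes, so the comparison is a pure gain bound; the dissipation −ky² typically makes the inequality strict.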
Theorem 5.5: Consider the time-invariant system

ẋ = f(x) + G(x)u, y = h(x),   f(0) = 0, h(0) = 0

where f and G are locally Lipschitz and h is continuous
over Rn. Suppose ∃ γ > 0 and a continuously
differentiable, positive semidefinite function V(x) that
satisfies the Hamilton–Jacobi inequality

(∂V/∂x) f(x) + (1/(2γ^2)) (∂V/∂x) G(x)GT(x) (∂V/∂x)T + (1/2) hT(x)h(x) ≤ 0

∀ x ∈ Rn. Then, for each x(0) ∈ Rn, the system is
finite-gain L2 stable and its L2 gain ≤ γ
– p. 7/1
Proof

(∂V/∂x)[f(x) + G(x)u]
  = −(1/2)γ^2 ku − (1/γ^2)GT(x)(∂V/∂x)Tk^2 + (∂V/∂x)f(x)
    + (1/(2γ^2))(∂V/∂x)G(x)GT(x)(∂V/∂x)T + (1/2)γ^2 kuk^2

V̇ ≤ (1/2)γ^2 kuk^2 − (1/2)kyk^2
– p. 8/1
Example

ẋ = Ax + Bu, y = Cx

Suppose there is P = PT ≥ 0 that satisfies the Riccati
equation

P A + AT P + (1/γ^2) P BBT P + CT C = 0

for some γ > 0. Verify that V(x) = (1/2)xT P x satisfies the
Hamilton–Jacobi inequality

The system is finite-gain L2 stable and its L2 gain is less
than or equal to γ
– p. 9/1
The Small-Gain Theorem

[Block diagram: feedback connection of H1 and H2;
 e1 = u1 − y2, e2 = u2 + y1]

ky1τkL ≤ γ1 ke1τkL + β1, ∀ e1 ∈ Lem, ∀ τ ∈ [0, ∞)
ky2τkL ≤ γ2 ke2τkL + β2, ∀ e2 ∈ Leq, ∀ τ ∈ [0, ∞)
– p. 10/1
u = [u1; u2],  y = [y1; y2],  e = [e1; e2]

Theorem: If γ1γ2 < 1, the feedback connection is
finite-gain L stable

Proof

ke1τkL ≤ ku1τkL + k(H2 e2)τkL
       ≤ ku1τkL + γ2 ke2τkL + β2
– p. 11/1
ke1τkL ≤ ku1τkL + γ2(ku2τkL + γ1 ke1τkL + β1) + β2
       = γ1γ2 ke1τkL + (ku1τkL + γ2 ku2τkL + β2 + γ2 β1)

ke1τkL ≤ (1/(1 − γ1γ2))(ku1τkL + γ2 ku2τkL + β2 + γ2 β1)
ke2τkL ≤ (1/(1 − γ1γ2))(ku2τkL + γ1 ku1τkL + β1 + γ1 β2)

keτkL ≤ ke1τkL + ke2τkL
– p. 12/1
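A scalar illustration of the bound just derived, with H1 and H2 taken as static gains γ1 = 0.5, γ2 = 0.8 (sample values, γ1γ2 = 0.4 < 1): the loop equation for e1 is a contraction, and its fixed point respects the closed-form bound.

```python
# Static-gain small-gain example: e1 = u1 - gamma2*(u2 + gamma1*e1).
gamma1, gamma2 = 0.5, 0.8
beta1 = beta2 = 0.0
u1, u2 = 1.0, 2.0
e1 = 0.0
for _ in range(200):                  # fixed-point iteration, rate gamma1*gamma2
    e1 = u1 - gamma2 * (u2 + gamma1 * e1)
bound = (abs(u1) + gamma2 * abs(u2) + beta2 + gamma2 * beta1) / (1 - gamma1 * gamma2)
assert abs(e1) <= bound + 1e-9
print("e1 =", round(e1, 6), " bound =", round(bound, 6))
```

The iteration converges because |γ1γ2| < 1, which is exactly the small-gain condition; the bound is conservative since it uses the triangle inequality rather than the signs of the signals.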
Nonlinear Systems and Control
Lecture # 22

Normal Form

– p. 1/1
Relative Degree

ẋ = f(x) + g(x)u, y = h(x)

where f, g, and h are sufficiently smooth in a domain D

f: D → Rn and g: D → Rn are called vector fields on D

ẏ = (∂h/∂x)[f(x) + g(x)u] =def Lf h(x) + Lg h(x) u

Lf h(x) = (∂h/∂x) f(x)

is the Lie Derivative of h with respect to f or along f

– p. 2/1
Lg Lf h(x) = (∂(Lf h)/∂x) g(x)

Lf^2 h(x) = Lf Lf h(x) = (∂(Lf h)/∂x) f(x)

Lf^k h(x) = Lf Lf^{k−1} h(x) = (∂(Lf^{k−1}h)/∂x) f(x)

Lf^0 h(x) = h(x)

ẏ = Lf h(x) + Lg h(x) u
Lg h(x) = 0 ⇒ ẏ = Lf h(x)

y^{(2)} = (∂(Lf h)/∂x)[f(x) + g(x)u] = Lf^2 h(x) + Lg Lf h(x) u
– p. 3/1
Lg Lf h(x) = 0 ⇒ y^{(2)} = Lf^2 h(x)

y^{(3)} = Lf^3 h(x) + Lg Lf^2 h(x) u

Lg Lf^{i−1} h(x) = 0, i = 1, 2, …, ρ − 1;   Lg Lf^{ρ−1} h(x) ≠ 0

y^{(ρ)} = Lf^ρ h(x) + Lg Lf^{ρ−1} h(x) u

Definition: The system ẋ = f(x) + g(x)u, y = h(x)
has relative degree ρ, 1 ≤ ρ ≤ n, in D0 ⊂ D if ∀ x ∈ D0

Lg Lf^{i−1} h(x) = 0, i = 1, 2, …, ρ − 1;   Lg Lf^{ρ−1} h(x) ≠ 0
– p. 4/1
Example

ẋ1 = x2, ẋ2 = −x1 + ε(1 − x1^2)x2 + u, y = x1, ε > 0

ẏ = ẋ1 = x2
ÿ = ẋ2 = −x1 + ε(1 − x1^2)x2 + u

Relative degree = 2 over R^2

Example: the same system with y = x2

ẏ = ẋ2 = −x1 + ε(1 − x1^2)x2 + u

Relative degree = 1 over R^2
– p. 5/1
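The Lie-derivative tests above can be confirmed numerically for the first example (y = x1), here assuming ε = 1 in the model (a sample value) and using forward-difference directional derivatives.

```python
import numpy as np

# Finite-difference Lie derivatives for x1' = x2, x2' = -x1 + (1 - x1^2)x2 + u,
# y = x1 (epsilon = 1 assumed).
dx = 1e-6                                  # finite-difference step
f = lambda x: np.array([x[1], -x[0] + (1 - x[0]**2) * x[1]])
g = lambda x: np.array([0.0, 1.0])
h = lambda x: x[0]

def lie(F, phi, x):
    # forward-difference directional derivative of phi along F at x
    return (phi(x + dx * F(x)) - phi(x)) / dx

x0 = np.array([0.7, -0.3])
Lf_h = lambda x: lie(f, h, x)              # Lf h = x2
Lg_h = lie(g, h, x0)                       # Lg h = 0  => relative degree > 1
LgLf_h = lie(g, Lf_h, x0)                  # Lg Lf h = 1 != 0  => relative degree 2
assert abs(Lg_h) < 1e-6
assert abs(LgLf_h - 1.0) < 1e-4
print("Lg h =", Lg_h, " Lg Lf h =", LgLf_h)
```

Here Lg h vanishes identically (g does not enter ẏ), while Lg Lf h = 1 everywhere, so the relative degree is 2 over all of R².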
Example: the same system with y = x1 + x2^2

ẏ = x2 + 2x2[−x1 + ε(1 − x1^2)x2 + u]

Relative degree = 1 over {x2 ≠ 0}

Example: Field-controlled DC motor

ẋ1 = −ax1 + u, ẋ2 = −bx2 + k − cx1x3, ẋ3 = θx1x2, y = x3

a, b, c, k, and θ are positive constants

ẏ = ẋ3 = θx1x2
ÿ = θx1ẋ2 + θẋ1x2 = (·) + θx2u

Relative degree = 2 over {x2 ≠ 0}
– p. 6/1
Normal Form

Change of variables:

z = T(x) = [φ1(x), …, φ_{n−ρ}(x) | h(x), Lf h(x), …, Lf^{ρ−1} h(x)]T
         =def [φ(x) | ψ(x)]T =def [η | ξ]T

φ1 to φ_{n−ρ} are chosen such that T(x) is a diffeomorphism
on a domain D0 ⊂ D

– p. 7/1
η̇ = (∂φ/∂x)[f(x) + g(x)u] = f0(η, ξ) + g0(η, ξ)u
ξ̇i = ξ_{i+1}, 1 ≤ i ≤ ρ − 1
ξ̇ρ = Lf^ρ h(x) + Lg Lf^{ρ−1} h(x) u
y = ξ1

Choose φ(x) such that T(x) is a diffeomorphism and

(∂φi/∂x) g(x) = 0, for 1 ≤ i ≤ n − ρ, ∀ x ∈ D0

Always possible (at least locally)
η̇ = f0(η, ξ)
– p. 8/1
Theorem 13.1: Suppose the system ẋ = f(x) + g(x)u, y = h(x)
has relative degree ρ (≤ n) in D. If ρ = n, then for every
x0 ∈ D, a neighborhood N of x0 exists such that the map
T(x) = ψ(x), restricted to N, is a diffeomorphism on N. If
ρ < n, then, for every x0 ∈ D, a neighborhood N of x0
and smooth functions φ1(x), …, φ_{n−ρ}(x) exist such that

(∂φi/∂x) g(x) = 0, for 1 ≤ i ≤ n − ρ

is satisfied for all x ∈ N and the map T(x) = [φ(x); ψ(x)],
restricted to N, is a diffeomorphism on N
– p. 9/1
Normal Form:

η̇ = f0(η, ξ)
ξ̇i = ξ_{i+1}, 1 ≤ i ≤ ρ − 1
ξ̇ρ = Lf^ρ h(x) + Lg Lf^{ρ−1} h(x) u
y = ξ1

Ac = [0 1 0 … 0;
      0 0 1 … 0;
      ⋮       ⋮;
      0 …   0 1;
      0 … …   0]  (ρ × ρ),   Bc = [0; 0; ⋮; 0; 1],   Cc = [1 0 … 0 0]
– p. 10/1
η̇ = f0(η, ξ)
ξ̇ = Ac ξ + Bc [Lf^ρ h(x) + Lg Lf^{ρ−1} h(x) u]
y = Cc ξ

γ(x) = Lg Lf^{ρ−1} h(x),   α(x) = −Lf^ρ h(x)/(Lg Lf^{ρ−1} h(x))

ξ̇ = Ac ξ + Bc γ(x)[u − α(x)]

If x* is an open-loop equilibrium point at which y = 0; i.e.,
f(x*) = 0 and h(x*) = 0, then ψ(x*) = 0. Take
φ(x*) = 0 so that z = 0 is an open-loop equilibrium point.

– p. 11/1
Zero Dynamics

η̇ = f0 (η, ξ)
ξ̇ = Ac ξ + Bc γ(x)[u − α(x)]
y = Cc ξ

y(t) ≡ 0 ⇒ ξ(t) ≡ 0 ⇒ u(t) ≡ α(x(t)) ⇒ η̇ = f0(η, 0)

Definition: The equation η̇ = f0(η, 0) is called the zero
dynamics of the system. The system is said to be minimum
phase if the zero dynamics have an asymptotically stable
equilibrium point in the domain of interest (at the origin if
T(0) = 0)

The zero dynamics can be characterized in the
x-coordinates

– p. 12/1
Z∗ = {x ∈ D0 | h(x) = Lf h(x) = ⋯ = Lf^(ρ−1) h(x) = 0}

y(t) ≡ 0 ⇒ x(t) ∈ Z∗
        ⇒ u = u∗(x) := α(x)|x∈Z∗

The restricted motion of the system is described by

ẋ = f∗(x) := [f(x) + g(x)α(x)]x∈Z∗

– p. 13/1
Example

ẏ = ẋ2 = −x1 + ε(1 − x1²)x2 + u ⇒ ρ = 1

y(t) ≡ 0 ⇒ x2(t) ≡ 0 ⇒ ẋ1 = 0
Non-minimum phase

– p. 14/1
Example

ẋ1 = −x1 + ((2 + x3²)/(1 + x3²)) u,  ẋ2 = x3,  ẋ3 = x1 x3 + u,  y = x2

ẏ = ẋ2 = x3
ÿ = ẋ3 = x1 x3 + u ⇒ ρ = 2

γ = Lg Lf h(x) = 1,  α = − Lf² h(x) / (Lg Lf h(x)) = −x1 x3

Z∗ = {x2 = x3 = 0}
u = u∗(x) = 0 ⇒ ẋ1 = −x1
Minimum phase
– p. 15/1
Find φ(x) such that

φ(0) = 0,  (∂φ/∂x) g(x) = [∂φ/∂x1, ∂φ/∂x2, ∂φ/∂x3] [(2 + x3²)/(1 + x3²), 0, 1]ᵀ = 0

and
T(x) = [φ(x), x2, x3]ᵀ

is a diffeomorphism

(∂φ/∂x1) · (2 + x3²)/(1 + x3²) + ∂φ/∂x3 = 0

φ(x) = −x1 + x3 + tan⁻¹ x3

– p. 16/1
T(x) = [−x1 + x3 + tan⁻¹ x3, x2, x3]ᵀ

is a global diffeomorphism

η = −x1 + x3 + tan⁻¹ x3,  ξ1 = x2,  ξ2 = x3

η̇ = (1 + ((2 + ξ2²)/(1 + ξ2²)) ξ2)(−η + ξ2 + tan⁻¹ ξ2)
ξ̇1 = ξ2
ξ̇2 = (−η + ξ2 + tan⁻¹ ξ2) ξ2 + u
y = ξ1

– p. 17/1
Nonlinear Systems and Control
Lecture # 23

Controller Form

– p. 1/1
Definition: A nonlinear system is in the controller form if

ẋ = Ax + Bγ(x)[u − α(x)]

where (A, B) is controllable and γ(x) is a nonsingular matrix
for all x in the domain of interest

u = α(x) + γ⁻¹(x)v ⇒ ẋ = Ax + Bv

Suppose the n-dimensional single-input (SI) system

ẋ = f(x) + g(x)u

with some output y = h(x), has relative degree n. Why is it
transformable into the controller form?

– p. 2/1
Transform the system into the normal form
ż = Ac z + Bc γ(z)[u − α(z)], y = Cc z

On the other hand, if there is a change of variables
ζ = S(x) that transforms the SI system
ẋ = f(x) + g(x)u

into the controller form

ζ̇ = Aζ + Bγ(ζ)[u − α(ζ)]

then there is a function h(x) such that the system

ẋ = f(x) + g(x)u, y = h(x)

has relative degree n. Why?

– p. 3/1
For any controllable pair (A, B), we can find a nonsingular
matrix M that transforms (A, B) into a controllable
canonical form:

M AM −1 = Ac + Bc λT , M B = Bc

def
z = M ζ = M S(x) = T (x)
ż = Ac z + Bc γ(·)[u − α(·)]
h(x) = T1 (x)

– p. 4/1
In summary, the n-dimensional SI system

ẋ = f (x) + g(x)u

is transformable into the controller form if and only if there
is an output y = h(x) such that the system has relative degree n

Search for a smooth function h(x) such that

Lg Lf^(i−1) h(x) = 0, i = 1, 2, …, n − 1, and Lg Lf^(n−1) h(x) ≠ 0

T(x) = [h(x), Lf h(x), ⋯, Lf^(n−1) h(x)]ᵀ

– p. 5/1
The Lie Bracket: For two vector fields f and g , the Lie
bracket [f, g] is a third vector field defined by
∂g ∂f
[f, g](x) = f (x) − g(x)
∂x ∂x
Notation: adf⁰ g = g,  adf g = [f, g],  adf^k g = [f, adf^(k−1) g]

Properties:
[f, g] = −[g, f ]

For constant vector fields f and g, [f, g] = 0

– p. 6/1
Example

f = [x2, − sin x1 − x2]ᵀ,  g = [0, x1]ᵀ

[f, g] = (∂g/∂x) f − (∂f/∂x) g
       = [0 0; 1 0][x2, − sin x1 − x2]ᵀ − [0 1; − cos x1 −1][0, x1]ᵀ

adf g = [f, g] = [−x1, x1 + x2]ᵀ
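The bracket computed on this slide can be checked numerically. A minimal sketch using central-difference Jacobians (the test point and the step size eps are arbitrary choices, not part of the original example):

```python
import math

def f(x):
    # f(x) = [x2, -sin(x1) - x2] from the slide
    return [x[1], -math.sin(x[0]) - x[1]]

def g(x):
    # g(x) = [0, x1] from the slide
    return [0.0, x[0]]

def jacobian(F, x, eps=1e-6):
    """Central-difference Jacobian of F at x."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        Fp, Fm = F(xp), F(xm)
        for i in range(n):
            J[i][j] = (Fp[i] - Fm[i]) / (2 * eps)
    return J

def lie_bracket(f, g, x):
    # [f, g](x) = (dg/dx) f(x) - (df/dx) g(x)
    Jf, Jg = jacobian(f, x), jacobian(g, x)
    fx, gx = f(x), g(x)
    n = len(x)
    return [sum(Jg[i][j] * fx[j] - Jf[i][j] * gx[j] for j in range(n))
            for i in range(n)]

x = [0.7, -0.3]                     # arbitrary test point
fb = lie_bracket(f, g, x)
exact = [-x[0], x[0] + x[1]]        # adf g = [-x1, x1 + x2] from the slide
print(fb, exact)
```

The finite-difference result agrees with the analytic bracket to within the discretization error.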

– p. 7/1
" # " #
x2 −x1
f = , adf g =
− sin x1 − x2 x1 + x2

" # " #
−1 0 x2
1 1 − sin x1 − x2
" # " #
0 1 −x1

− cos x1 −1 x1 + x2
" #
−x1 − 2x2
=
x1 + x2 − sin x1 − x1 cos x1

– p. 8/1
Distribution: For vector fields f1, f2, …, fk on D ⊂ Rⁿ, let

∆(x) = span{f1(x), f2(x), …, fk(x)}

The collection of all vector spaces ∆(x) for x ∈ D is called
a distribution and referred to by

∆ = span{f1, f2, …, fk}

If dim(∆(x)) = k for all x ∈ D, we say that ∆ is a
nonsingular distribution on D, generated by f1, …, fk
A distribution ∆ is involutive if

g1 ∈ ∆ and g2 ∈ ∆ ⇒ [g1 , g2 ] ∈ ∆

– p. 9/1
Lemma: If ∆ is a nonsingular distribution, generated by
f1 , . . . , fk , then it is involutive if and only if

[fi , fj ] ∈ ∆, ∀ 1 ≤ i, j ≤ k

Example: D = R3 ; ∆ = span{f1 , f2 }
   
f1 = [2x2, 1, 0]ᵀ,  f2 = [1, 0, x2]ᵀ,  dim(∆(x)) = 2, ∀ x ∈ D

[f1, f2] = (∂f2/∂x) f1 − (∂f1/∂x) f2 = [0, 0, 1]ᵀ

– p. 10/1
rank [f1(x), f2(x), [f1, f2](x)] = rank [2x2 1 0; 1 0 0; 0 x2 1] = 3, ∀ x ∈ D

∆ is not involutive

– p. 11/1
Example: D = {x ∈ R³ | x1² + x3² ≠ 0};  ∆ = span{f1, f2}

f1 = [2x3, −1, 0]ᵀ,  f2 = [−x1, −2x2, x3]ᵀ,  dim(∆(x)) = 2, ∀ x ∈ D

[f1, f2] = (∂f2/∂x) f1 − (∂f1/∂x) f2 = [−4x3, 2, 0]ᵀ

rank [2x3 −x1 −4x3; −1 −2x2 2; 0 x3 0] = 2, ∀ x ∈ D

∆ is involutive

– p. 12/1
Theorem: The n-dimensional SI system

ẋ = f(x) + g(x)u

is transformable into the controller form if and only if there is

a domain D0 such that

rank [g(x), adf g(x), …, adf^(n−1) g(x)] = n, ∀ x ∈ D0

and

span{g, adf g, …, adf^(n−2) g} is involutive in D0

– p. 13/1
Example

ẋ = [a sin x2, −x1²]ᵀ + [0, 1]ᵀ u

adf g = [f, g] = −(∂f/∂x) g = [−a cos x2, 0]ᵀ

[g(x), adf g(x)] = [0 −a cos x2; 1 0]

rank [g(x), adf g(x)] = 2, ∀ x such that cos x2 ≠ 0

span{g} is involutive
Find h such that Lg h(x) = 0 and Lg Lf h(x) ≠ 0

– p. 14/1
(∂h/∂x) g = ∂h/∂x2 = 0 ⇒ h is independent of x2

Lf h(x) = (∂h/∂x1) a sin x2

Lg Lf h(x) = (∂(Lf h)/∂x) g = ∂(Lf h)/∂x2 = (∂h/∂x1) a cos x2

Lg Lf h(x) ≠ 0 in D0 = {x ∈ R² | cos x2 ≠ 0} if ∂h/∂x1 ≠ 0

Take h(x) = x1 ⇒ T(x) = [h, Lf h]ᵀ = [x1, a sin x2]ᵀ
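With T(x) = [x1, a sin x2]ᵀ the system is feedback linearizable on D0. A simulation sketch; the parameter a, the gains k1, k2, and the initial state are arbitrary choices (not from the slides), and the control is only valid while cos x2 ≠ 0:

```python
import math

a = 1.0
k1, k2 = 1.0, 2.0            # places both closed-loop eigenvalues at -1
x1, x2 = 0.5, 0.4            # initial state inside D0
dt, steps = 1e-3, 20000      # Euler integration over 20 s

for _ in range(steps):
    z1, z2 = x1, a * math.sin(x2)          # z = T(x) from the slide
    v = -k1 * z1 - k2 * z2                 # linear stabilizer in z-coordinates
    u = x1**2 + v / (a * math.cos(x2))     # cancels -x1^2, inverts a*cos(x2)
    # plant: x1' = a*sin(x2), x2' = -x1^2 + u
    x1, x2 = x1 + dt * (a * math.sin(x2)), x2 + dt * (-x1**2 + u)

print(x1, x2)
```

In z-coordinates the closed loop is exactly ż1 = z2, ż2 = −k1 z1 − k2 z2, so the state converges to the origin.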

– p. 15/1
Example (Field-Controlled DC Motor)

ẋ = [−ax1, −bx2 + k − cx1x3, θx1x2]ᵀ + [1, 0, 0]ᵀ u

adf g = [a, cx3, −θx2]ᵀ;  adf² g = [a², (a + b)cx3, (b − a)θx2 − θk]ᵀ

[g(x), adf g(x), adf² g(x)] = [1 a a²; 0 cx3 (a + b)cx3; 0 −θx2 (b − a)θx2 − θk]

– p. 16/1
det[·] = cθ(−k + 2bx2)x3

rank [·] = 3 for x2 ≠ k/(2b) and x3 ≠ 0

[g, adf g] = (∂(adf g)/∂x) g = [0 0 0; 0 0 c; 0 −θ 0][1, 0, 0]ᵀ = [0, 0, 0]ᵀ

⇒ span{g, adf g} is involutive

D0 = {x ∈ R³ | x2 > k/(2b) and x3 > 0}

Find h such that Lg h(x) = Lg Lf h(x) = 0;  Lg Lf² h(x) ≠ 0
– p. 17/1
x∗ = [0, k/b, ω0]ᵀ,  h(x∗) = 0

(∂h/∂x) g = ∂h/∂x1 = 0 ⇒ h is independent of x1

Lf h(x) = (∂h/∂x2)[−bx2 + k − cx1x3] + (∂h/∂x3) θx1x2

[∂(Lf h)/∂x] g = 0 ⇒ (∂h/∂x2) cx3 = (∂h/∂x3) θx2

h = c1[θx2² + cx3²] + c2,  Lg Lf² h(x) = −2c1 cθ(k − 2bx2)x3

h(x∗) = c1[θ(k/b)² + cω0²] + c2 = 0

c1 = 1, c2 = −θ(k/b)² − cω0²
– p. 18/1
Nonlinear Systems and Control
Lecture # 24

## Observer, Output Feedback

&
Strict Feedback Forms

– p. 1/1
Definition: A nonlinear system is in the observer form if

ẋ = Ax + γ(y, u), y = Cx

Observer:

x̂˙ = Ax̂ + γ(y, u) + H(y − C x̂)

x̃ = x − x̂

x̃˙ = (A − HC)x̃
Design H such that (A − HC) is Hurwitz
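The error dynamics x̃˙ = (A − HC)x̃ can be illustrated on a concrete pair; a sketch in which the matrices A, C and the gain H are arbitrary choices (H places both observer eigenvalues at −1):

```python
A = [[0.0, 0.0],
     [1.0, 0.0]]
C = [0.0, 1.0]
H = [1.0, 2.0]   # det(sI - (A - HC)) = s^2 + 2s + 1: double eigenvalue at -1

# A - HC, built entrywise
AHC = [[A[i][j] - H[i] * C[j] for j in range(2)] for i in range(2)]

e = [1.0, -0.5]            # initial estimation error x - x_hat
dt, steps = 1e-3, 15000    # Euler integration over 15 s
for _ in range(steps):
    de = [sum(AHC[i][j] * e[j] for j in range(2)) for i in range(2)]
    e = [e[i] + dt * de[i] for i in range(2)]

print(e)
```

Since (A − HC) is Hurwitz, the estimation error decays to zero regardless of the (unknown) plant state.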

– p. 2/1
Theorem: An n-dimensional single-output (SO) system

ẋ = f(x) + g(x)u,  y = h(x)

is transformable into the observer form if and only if there is

a domain D0 such that

rank [(∂φ/∂x)(x)] = n, ∀ x ∈ D0

where φ = [h, Lf h, ⋯, Lf^(n−1) h]ᵀ

and the unique vector field solution τ of

(∂φ/∂x) τ = b, where b = [0, ⋯, 0, 1]ᵀ
– p. 3/1
satisfies

[adf^i τ, adf^j τ] = 0,  0 ≤ i, j ≤ n − 1

and

[g, adf^j τ] = 0,  0 ≤ j ≤ n − 2

The change of variables z = T(x) is given by

(∂T/∂x)[τ1, τ2, ⋯, τn] = I

where τi = (−1)^(i−1) adf^(i−1) τ, 1 ≤ i ≤ n

– p. 4/1
Example

ẋ = [β1(x1) + x2, f2(x)]ᵀ + [0, 1]ᵀ u,  y = x1

φ(x) = [h(x), Lf h(x)]ᵀ = [x1, β1(x1) + x2]ᵀ

∂φ/∂x = [1 0; ∂β1/∂x1 1];  rank (∂φ/∂x)(x) = 2, ∀ x

(∂φ/∂x) τ = [0, 1]ᵀ ⇒ τ = [0, 1]ᵀ

– p. 5/1
" #" # " #
∂f ∗ 1 0 1
adf τ = [f, τ ] = − τ =− ∂f2 =− ∂f2
∂x ∗ ∂x2 1 ∂x2
" #" #
∂(adf τ ) 0 0 0
[τ, adf τ ] = τ =− ∂ 2 f2 ∂ 2 f2
∂x ∂x1 ∂x2 ∂x22 1

∂ 2 f2
[τ, adf τ ] = 0 ⇔ = 0 ⇔ f2 (x) = β2 (x1 )+x2 β3 (x1 )
∂x22
[g, τ ] = 0 (g and τ are constant vector fields)
All the conditions are satisfied

– p. 6/1
" #
0 0
τ1 = (−1) ad0f τ =τ =
1
" #
1 1
β3 (x1 )
∂T h i
τ1 , τ2 =I
∂x
" #" # " #
∂T1 ∂T1
∂x1 ∂x2 0 1 1 0
∂T2 ∂T2 =
∂x1 ∂x2
1 β3 (x1 ) 0 1
∂T1 ∂T1 ∂T1
∂x2
= 1, ∂x1
+ β3 (x1 ) ∂x 2
=0

## ∂T2 ∂T2 ∂T2

∂x2 = 0, ∂x1 + β3 (x1 ) ∂x 2
=1

– p. 7/1
∂T1/∂x2 = 1 ⇒ ∂T1/∂x1 + β3(x1) = 0

T1(x) = x2 − ∫₀^{x1} β3(σ) dσ

∂T2/∂x2 = 0 ⇒ ∂T2/∂x1 = 1,  T2(x) = x1

z1 = x2 − ∫₀^{x1} β3(σ) dσ,  z2 = x1
y = z2
" # " #
0 0 β2 (y) − β1 (y)β3 (y) + u
ż = z+ Ry
1 0 0 β3 (σ) dσ + β1 (y)
h i
y = 0 1 z

– p. 8/1
Definition: A nonlinear system is in the output feedback
form if

ẋ1 = x2 + γ1 (y)
ẋ2 = x3 + γ2 (y)
..
.
ẋρ−1 = xρ + γρ−1 (y)
ẋρ = xρ+1 + γρ (y) + bm u, bm > 0
..
.
ẋn−1 = xn + γn−1 (y) + b1 u
ẋn = γn (y) + b0 u
y = x1

– p. 9/1
Show that
The output feedback form is a special case of the
observer form

It is minimum phase if the polynomial

bm s^m + ⋯ + b1 s + b0

is Hurwitz

– p. 10/1
Definition: A nonlinear system is in the strict feedback form
if

ẋ = f0 (x) + g0 (x)z1
ż1 = f1 (x, z1 ) + g1 (x, z1 )z2
ż2 = f2 (x, z1 , z2 ) + g2 (x, z1 , z2 )z3
..
.
żk−1 = fk−1 (x, z1 , . . . , zk−1 ) + gk−1 (x, z1 , . . . , zk−1 )zk
żk = fk (x, z1 , . . . , zk ) + gk (x, z1 , . . . , zk )u

x ∈ Rn , z1 to zk are scalars
gi(x, z1, …, zi) ≠ 0 for 1 ≤ i ≤ k

– p. 11/1
Find the relative degree if y = z1

## Find the zero dynamics if y = z1 and

fi (x, 0) = 0, ∀1≤i≤k

– p. 12/1
Nonlinear Systems and Control
Lecture # 25

Stabilization

## Basic Concepts & Linearization

– p. 1/?
We want to stabilize the system

ẋ = f (x, u)

## at the equilibrium point x = xss

0 = f (xss , uss )

xδ = x − xss , uδ = u − uss
ẋδ = f(xss + xδ, uss + uδ) =: fδ(xδ, uδ)
fδ (0, 0) = 0
uδ = γ(xδ ) ⇒ u = uss + γ(x − xss )
– p. 2/?
State Feedback Stabilization: Given

ẋ = f (x, u) [f (0, 0) = 0]

find
u = γ(x) [γ(0) = 0]
s.t. the origin is an asymptotically stable equilibrium point of

ẋ = f (x, γ(x))

## f and γ are locally Lipschitz functions

– p. 3/?
Linear Systems
ẋ = Ax + Bu
(A, B) is stabilizable (controllable or every uncontrollable
eigenvalue has a negative real part)

Find K such that (A − BK) is Hurwitz

u = −Kx

Typical methods:
Eigenvalue Placement

Eigenvalue-Eigenvector Placement

LQR
– p. 4/?
Linearization
ẋ = f (x, u)
f (0, 0) = 0 and f is continuously differentiable in a domain
Dx × Du that contains the origin (x = 0, u = 0)
(Dx ⊂ Rn , Du ⊂ Rp )

ẋ = Ax + Bu

A = (∂f/∂x)(x, u)|x=0,u=0;  B = (∂f/∂u)(x, u)|x=0,u=0

Assume (A, B) is stabilizable. Design a matrix K such that

(A − BK) is Hurwitz

u = −Kx

– p. 5/?
Closed-loop system:

ẋ = f (x, −Kx)

ẋ = [(∂f/∂x)(x, −Kx) + (∂f/∂u)(x, −Kx)(−K)]x=0 x = (A − BK)x

Since (A − BK) is Hurwitz, the origin is an exponentially

stable equilibrium point of the closed-loop system

– p. 6/?
Example (Pendulum Equation):

θ̈ = −a sin θ − bθ̇ + cT

Stabilize the pendulum at θ = δ

0 = −a sin δ + cTss

x1 = θ − δ, x2 = θ̇, u = T − Tss

ẋ1 = x2
ẋ2 = −a[sin(x1 + δ) − sin δ] − bx2 + cu
" # " #
0 1 0 1
A= =
−a cos(x1 + δ) −b −a cos δ −b
x1 =0

– p. 7/?
" # " #
0 1 0
A= ; B=
−a cos δ −b c
h i
K= k1 k2
" #
0 1
A − BK =
−(a cos δ + ck1 ) −(b + ck2 )

a cos δ b
k1 > − , k2 > −
c c
a sin δ a sin δ
T = − Kx = − k1 (θ − δ) − k2 θ̇
c c
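A closed-loop simulation sketch of this design; the parameter values a, b, c, δ, the gains k1, k2, and the initial state are arbitrary choices satisfying the inequalities above:

```python
import math

a, b, c = 1.0, 0.5, 1.0
delta = math.pi / 4
k1, k2 = 5.0, 3.0          # k1 > -a*cos(delta)/c, k2 > -b/c

x1, x2 = 0.3, 0.0          # x1 = theta - delta, x2 = theta'
dt, steps = 1e-3, 20000    # Euler integration over 20 s
for _ in range(steps):
    u = -k1 * x1 - k2 * x2                 # T = (a/c)*sin(delta) + u
    dx2 = -a * (math.sin(x1 + delta) - math.sin(delta)) - b * x2 + c * u
    x1, x2 = x1 + dt * x2, x2 + dt * dx2

print(x1, x2)
```

For an initial condition this close to the equilibrium, the trajectory settles at x = 0, i.e. the pendulum settles at θ = δ.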

– p. 8/?
Notions of Stabilization

Local Stabilization: The origin of ẋ = f(x, γ(x)) is
asymptotically stable (e.g., linearization)

Regional Stabilization: The origin of ẋ = f(x, γ(x)) is
asymptotically stable and a given region G is a subset of
the region of attraction (for all x(0) ∈ G, limt→∞ x(t) = 0)
(e.g., G ⊂ Ωc = {V(x) ≤ c} where Ωc is an estimate of
the region of attraction)

Global Stabilization: The origin of ẋ = f(x, γ(x)) is
globally asymptotically stable

– p. 9/?
Semiglobal Stabilization: The origin of ẋ = f (x, γ(x)) is
asymptotically stable and γ(x) can be designed such that
any given compact set (no matter how large) can be
included in the region of attraction (Typically u = γp (x) is
dependent on a parameter p such that for any compact set
G, p can be chosen to ensure that G is a subset of the
region of attraction )

What is the difference between global stabilization and
semiglobal stabilization?

– p. 10/?
Example
ẋ = x2 + u
Linearization:
ẋ = u, u = −kx, k > 0

Closed-loop system:
ẋ = −kx + x2

Linearization of the closed-loop system yields ẋ = −kx.
Thus, u = −kx achieves local stabilization

The region of attraction is {x < k}. Thus, for any set
{x ≤ a} with a < k, the control u = −kx achieves
regional stabilization
– p. 11/?
The control u = −kx does not achieve global stabilization

But it achieves semiglobal stabilization because any
compact set {|x| ≤ r} can be included in the region of
attraction by choosing k > r

The control
u = −x2 − kx
achieves global stabilization because it yields the linear
closed-loop system ẋ = −kx whose origin is globally
exponentially stable
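The three behaviors can be seen in simulation. A sketch for ẋ = x² + u (the gain k, initial states, and escape threshold are arbitrary choices; the cap only stops the integration once the trajectory is clearly escaping):

```python
k = 1.0
dt = 1e-3

def simulate(control, x0, steps=20000, cap=100.0):
    """Euler-integrate x' = x^2 + control(x); stop early if |x| > cap."""
    x = x0
    for _ in range(steps):
        x += dt * (x * x + control(x))
        if abs(x) > cap:          # trajectory escaping to infinity
            return x
    return x

local = lambda x: -k * x           # u = -kx: region of attraction {x < k}
glob = lambda x: -x * x - k * x    # u = -x^2 - kx: closed loop x' = -kx

x_inside = simulate(local, 0.9 * k)    # starts inside {x < k}: converges
x_outside = simulate(local, 1.5 * k)   # starts outside: grows unbounded
x_global = simulate(glob, 50.0)        # converges from far away

print(x_inside, x_outside, x_global)
```

The linear feedback works only inside its region of attraction, while the cancelling control stabilizes globally.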

– p. 12/?
Practical Stabilization

ẋ = f (x, u) + g(x, u, t)

f (0, 0) = 0, g(0, 0, t) 6= 0
kg(x, u, t)k ≤ δ, ∀ x ∈ Dx , u ∈ Du , t ≥ 0
There is no control u = γ(x), with γ(0) = 0, that can make
the origin of

ẋ = f(x, γ(x)) + g(x, γ(x), t)

uniformly asymptotically stable, because the origin is not an
equilibrium point

– p. 13/?
Definition: The system

ẋ = f (x, u) + g(x, u, t)

is practically stabilizable if for any ε > 0 there is a control
law u = γ(x) such that the solutions of the closed-loop system
are uniformly ultimately bounded by ε; i.e.,

‖x(t)‖ ≤ ε, ∀ t ≥ T

Typically, u = γp(x) is dependent on a parameter p such

that for any ε > 0, p can be chosen to ensure that ε is an
ultimate bound

– p. 14/?
With practical stabilization, we may have
local practical stabilization
regional practical stabilization
global practical stabilization, or
semiglobal practical stabilization
depending on the region of initial states

– p. 15/?
Example

ẋ = x2 + u + d(t), |d(t)| ≤ δ, ∀ t ≥ 0

u = −kx, k > 0 ⇒ ẋ = x² − kx + d(t)

V = ½x² ⇒ V̇ = x³ − kx² + x d(t)

V̇ ≤ −(k/3)x² − x²(k/3 − |x|) − |x|((k/3)|x| − δ)

V̇ ≤ −(k/3)x², for 3δ/k ≤ |x| ≤ k/3

Take 3δ/k < ε ⇔ k > 3δ/ε
By choosing k large enough we can achieve semiglobal
practical stabilization
– p. 16/?
ẋ = x2 + u + d(t)

u = −x² − kx, k > 0 ⇒ ẋ = −kx + d(t)

V = ½x² ⇒ V̇ = −kx² + x d(t)

V̇ ≤ −(k/2)x² − |x|((k/2)|x| − δ)

By choosing k large enough we can achieve global practical

stabilization
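A simulation sketch of the second design: with u = −x² − kx the closed loop is ẋ = −kx + d(t), so the state is ultimately bounded by roughly δ/k (the values of k, δ, the disturbance d(t), and x0 are arbitrary choices):

```python
import math

k, delta = 10.0, 0.5
x, t = 5.0, 0.0
dt, steps = 1e-3, 20000    # Euler integration over 20 s
for _ in range(steps):
    d = delta * math.sin(t)        # disturbance with |d(t)| <= delta
    u = -x * x - k * x             # cancels x^2, then high-gain feedback
    x += dt * (x * x + u + d)
    t += dt

print(abs(x), delta / k)
```

After the transient, |x(t)| stays below the ultimate bound of order δ/k; increasing k shrinks the bound.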

– p. 17/?
Nonlinear Systems and Control
Lecture # 26

Stabilization

Feedback Linearization

– p. 1/?
Consider the nonlinear system

ẋ = f (x) + G(x)u

f (0) = 0, x ∈ Rn , u ∈ Rm
Suppose there is a change of variables z = T (x), defined
for all x ∈ D ⊂ Rn , that transforms the system into the
controller form

ż = Az + Bγ(x)[u − α(x)]

where (A, B) is controllable and γ(x) is nonsingular for all

x∈D

u = α(x) + γ −1 (x)v ⇒ ż = Az + Bv

– p. 2/?
v = −Kz

ż = (A − BK)z

Closed-loop system in the x-coordinates:

ẋ = f(x) + G(x)[α(x) − γ⁻¹(x)K T(x)]

– p. 3/?
What can we say about the stability of x = 0 as an
equilibrium point of

ẋ = f(x) + G(x)[α(x) − γ⁻¹(x)K T(x)]?

x = 0 is asymptotically stable because T(x) is a
diffeomorphism. Show it!

It is globally asymptotically stable if T(x) is a global
diffeomorphism (See page 508)

– p. 4/?
What information do we need to implement the control
u = α(x) − γ⁻¹(x)K T(x)?  We need α(x), γ(x), and T(x).
With approximations α̂, γ̂, T̂ we use

u = α̂(x) − γ̂⁻¹(x)K T̂(x)

Closed-loop system:

ż = (A − BK)z + Bδ(z)

δ = γ[α̂ − α + γ −1 KT − γ̂ −1 K T̂ ]
– p. 5/?
ż = (A − BK)z + Bδ(z) (∗)

V(z) = zᵀPz,  P(A − BK) + (A − BK)ᵀP = −I

Lemma 13.3
If ‖δ(z)‖ ≤ k‖z‖ for all z, where

0 ≤ k < 1/(2‖PB‖)

then the origin of (∗) is globally exponentially stable

If ‖δ(z)‖ ≤ k‖z‖ + ε for all z, then the state z is
globally ultimately bounded by εc for some c > 0

– p. 6/?
Example (Pendulum Equation):

θ̈ = −a sin θ − bθ̇ + cT
x1 = θ − δ,  x2 = θ̇,  u = T − Tss = T − (a/c) sin δ

ẋ1 = x2
ẋ2 = −a[sin(x1 + δ) − sin δ] − bx2 + cu

u = (1/c){a[sin(x1 + δ) − sin δ] − k1x1 − k2x2}
" #
0 1
A − BK = is Hurwitz
−k1 −(k2 + b)

– p. 7/?
T = u + (a/c) sin δ = (1/c)[a sin(x1 + δ) − k1x1 − k2x2]
c c
Let â and ĉ be nominal models of a and c
T = (1/ĉ)[â sin(x1 + δ) − k1x1 − k2x2]

ẋ = (A − BK)x + Bδ(x)

(writing δ1 for the angle δ, to avoid confusion with the perturbation δ(x))

δ(x) = ((âc − aĉ)/ĉ) sin(x1 + δ1) − ((c − ĉ)/ĉ)(k1x1 + k2x2)

– p. 8/?
δ(x) = ((âc − aĉ)/ĉ) sin(x1 + δ1) − ((c − ĉ)/ĉ)(k1x1 + k2x2)

|δ(x)| ≤ k‖x‖ + ε

k = |(âc − aĉ)/ĉ| + |(c − ĉ)/ĉ| √(k1² + k2²),  ε = |(âc − aĉ)/ĉ| |sin δ1|

P = [p11 p12; p12 p22],  PB = [p12; p22]

k < 1/(2√(p12² + p22²))

sin δ1 = 0 ⇒ ε = 0
– p. 9/?
Is feedback linearization a good idea?

Example
ẋ = ax − bx³ + u, a, b > 0
u = −(k + a)x + bx³, k > 0 ⇒ ẋ = −kx
−bx³ is a damping term. Why cancel it?
u = −(k + a)x, k > 0 ⇒ ẋ = −kx − bx³
Which design is better?

– p. 10/?
Example

ẋ1 = x2
ẋ2 = −h(x1 ) + u
h(0) = 0 and x1 h(x1) > 0, ∀ x1 ≠ 0

Feedback Linearization:

u = h(x1 ) − (k1 x1 + k2 x2 )

With y = x2, the system is passive with

V = ∫₀^{x1} h(z) dz + ½x2²

– p. 11/?
The control

u = −σ(x2), [σ(0) = 0, x2 σ(x2) > 0 ∀ x2 ≠ 0]

creates a feedback connection of two passive systems with

storage function V

V̇ = −x2 σ(x2)

x2(t) ≡ 0 ⇒ ẋ2(t) ≡ 0 ⇒ h(x1(t)) ≡ 0 ⇒ x1(t) ≡ 0

Asymptotic stability of the origin follows from the invariance
principle

Which design is better? (Read Example 13.20)

– p. 12/?
Nonlinear Systems and Control
Lecture # 27

Stabilization

## Partial Feedback Linearization

– p. 1/1
Consider the nonlinear system

ẋ = f(x) + G(x)u

Suppose there is a change of variables

z = [η; ξ] = T(x) = [T1(x); T2(x)]

defined for all x ∈ D ⊂ Rⁿ, that transforms the system into

η̇ = f0(η, ξ)
ξ̇ = Aξ + Bγ(x)[u − α(x)]

(A, B) is controllable and γ(x) is nonsingular for all x ∈ D

– p. 2/1
u = α(x) + γ⁻¹(x)v

η̇ = f0(η, ξ),  ξ̇ = Aξ + Bv

With v = −Kξ, (A − BK) Hurwitz, the origin of

η̇ = f0(η, ξ),  ξ̇ = (A − BK)ξ

is asymptotically stable if the origin of η̇ = f0(η, 0) is
asymptotically stable

Proof: V(η, ξ) = V1(η) + k√(ξᵀPξ)
– p. 3/1
If the origin of η̇ = f0 (η, 0) is globally asymptotically stable,
will the origin of

## be globally asymptotically stable? In general No

Example
η̇ = −η + η 2 ξ, ξ̇ = v
The origin of η̇ = −η is globally exponentially stable, but
the origin of
η̇ = −η + η 2 ξ, ξ̇ = −kξ, k>0

is not globally asymptotically stable. The region of

attraction is {ηξ < 1 + k}

– p. 4/1
Example

η̇ = −½(1 + ξ2)η³,  ξ̇1 = ξ2,  ξ̇2 = v

The origin of η̇ = −½η³ is globally asymptotically stable

" #
2 def 0 1
v = −k ξ1 −2kξ2 = −Kξ ⇒ A−BK =
−k2 −2k

The eigenvalues of (A − BK) are −k and −k

 
e^{(A−BK)t} = [ (1 + kt)e^{−kt}   t e^{−kt};  −k² t e^{−kt}   (1 − kt)e^{−kt} ]

– p. 5/1
Peaking Phenomenon:

max_t {k² t e^{−kt}} = k/e → ∞ as k → ∞

ξ1(0) = 1, ξ2(0) = 0 ⇒ ξ2(t) = −k² t e^{−kt}

 
η̇ = −½(1 − k² t e^{−kt}) η³,  η(0) = η0

η²(t) = η0² / (1 + η0²[t + (1 + kt)e^{−kt} − 1])
If η02 > 1, the system will have a finite escape time if k is
chosen large enough
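The finite escape time can be read directly off the closed-form expression for η²(t): the solution exists only while the denominator stays positive. A numerical sketch (η0² and the two values of k are arbitrary choices with η0² > 1):

```python
import math

def denom(t, k, eta0_sq):
    # denominator of eta^2(t) = eta0^2 / (1 + eta0^2*(t + (1+kt)e^{-kt} - 1))
    return 1.0 + eta0_sq * (t + (1.0 + k * t) * math.exp(-k * t) - 1.0)

eta0_sq = 10.0                             # eta0^2 > 1, as required for escape
ts = [i * 1e-3 for i in range(1, 3001)]    # sample t over (0, 3]

# large k: the bracketed term dips negative enough to zero the denominator
escape_k10 = any(denom(t, 10.0, eta0_sq) <= 0.0 for t in ts)
# small k: t + (1+kt)e^{-kt} - 1 is nondecreasing, so no escape
escape_k1 = any(denom(t, 1.0, eta0_sq) <= 0.0 for t in ts)
print(escape_k10, escape_k1)
```

With k = 10 the denominator crosses zero (finite escape), while with k = 1 it never does — exactly the peaking-induced failure described above.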

– p. 6/1
Lemma 13.2: The origin of

η̇ = f0(η, ξ),  ξ̇ = (A − BK)ξ

is globally asymptotically stable if the system η̇ = f0(η, ξ),
viewed as a system with input ξ, is input-to-state stable

Proof: Use
Lemma 4.7: If ẋ1 = f1 (x1 , x2 ) is ISS and the origin of
ẋ2 = f2 (x2 ) is globally asymptotically stable, then the
origin of
ẋ1 = f1 (x1 , x2 ), ẋ2 = f2 (x2 )
is globally asymptotically stable

– p. 7/1
u = α(x) − γ⁻¹(x)K T2(x)

requires knowledge of α(x), γ(x), and T2(x); with
approximations α̂, γ̂, T̂2 the perturbation is

δ = γ[α̂ − α + γ⁻¹K T2 − γ̂⁻¹K T̂2]

– p. 8/1
Lemma 13.4
If ‖δ(z)‖ ≤ ε for all z and η̇ = f0(η, ξ) is input-to-state
stable, then the state z is globally ultimately bounded by
a class K function of ε

If ‖δ(z)‖ ≤ k‖z‖ in some neighborhood of z = 0, with
sufficiently small k, and the origin of η̇ = f0(η, 0) is
exponentially stable, then z = 0 is an exponentially
stable equilibrium point of the system

η̇ = f0(η, ξ),  ξ̇ = (A − BK)ξ + Bδ(z)

– p. 9/1
Proof–First Part: As in Lemma 13.3,

‖ξ(t)‖ ≤ cε, ∀ t ≥ t0

and input-to-state stability of η̇ = f0(η, ξ) gives, for t ≥ t0,

‖η(t)‖ ≤ β0(‖η(t0)‖, t − t0) + γ0(cε)

Proof–Second Part:

c1‖η‖² ≤ V1(η) ≤ c2‖η‖²

(∂V1/∂η) f0(η, 0) ≤ −c3‖η‖²

‖∂V1/∂η‖ ≤ c4‖η‖

– p. 10/1
V(z) = bV1(η) + ξᵀPξ

V̇ ≤ −[‖η‖, ‖ξ‖] Q [‖η‖, ‖ξ‖]ᵀ

Q = [bc3  −(k‖PB‖ + bc4L/2);  −(k‖PB‖ + bc4L/2)  1 − 2k‖PB‖]

b=k
Q is positive definite for sufficiently small k

– p. 11/1
Nonlinear Systems and Control
Lecture # 28

Stabilization

Backstepping

– p. 1/?
η̇ = f (η) + g(η)ξ
ξ̇ = u, η ∈ Rn , ξ, u ∈ R

View ξ as “virtual” control input to

η̇ = f (η) + g(η)ξ

Suppose there is ξ = φ(η) that stabilizes the origin of

η̇ = f (η) + g(η)φ(η)

(∂V/∂η)[f(η) + g(η)φ(η)] ≤ −W(η), ∀ η ∈ D

– p. 2/?
z = ξ − φ(η)

η̇ = [f(η) + g(η)φ(η)] + g(η)z

ż = u − (∂φ/∂η)[f(η) + g(η)ξ]

u = (∂φ/∂η)[f(η) + g(η)ξ] + v

η̇ = [f(η) + g(η)φ(η)] + g(η)z

ż = v

– p. 3/?
Vc(η, ξ) = V(η) + ½z²

V̇c = (∂V/∂η)[f(η) + g(η)φ(η)] + (∂V/∂η) g(η)z + zv
   ≤ −W(η) + (∂V/∂η) g(η)z + zv

v = −(∂V/∂η) g(η) − kz, k > 0

V̇c ≤ −W(η) − kz²

– p. 4/?
Example

ẋ1 = x1² − x1³ + x2,  ẋ2 = u

x2 is the virtual control; x2 = φ(x1) = −x1 − x1² gives

V(x1) = ½x1² ⇒ V̇ = −x1² − x1⁴, ∀ x1 ∈ R

z2 = x2 − φ(x1) = x2 + x1 + x1²

ẋ1 = −x1 − x1³ + z2

ż2 = u + (1 + 2x1)(−x1 − x1³ + z2)

– p. 5/?
Vc(x) = ½x1² + ½z2²

V̇c = x1(−x1 − x1³ + z2) + z2[u + (1 + 2x1)(−x1 − x1³ + z2)]

V̇c = −x1² − x1⁴ + z2[x1 + (1 + 2x1)(−x1 − x1³ + z2) + u]

u = −x1 − (1 + 2x1)(−x1 − x1³ + z2) − z2 ⇒ V̇c = −x1² − x1⁴ − z2²
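A simulation sketch of this backstepping design, using the control u that makes V̇c = −x1² − x1⁴ − z2² (the initial state is an arbitrary choice):

```python
x1, x2 = 0.5, 0.5
dt, steps = 1e-3, 10000    # Euler integration over 10 s
for _ in range(steps):
    z2 = x2 + x1 + x1**2                               # z2 = x2 - phi(x1)
    u = -x1 - (1 + 2 * x1) * (-x1 - x1**3 + z2) - z2   # backstepping control
    dx1 = x1**2 - x1**3 + x2
    x1, x2 = x1 + dt * dx1, x2 + dt * u

print(x1, x2)
```

Since V̇c ≤ −(x1² + z2²) = −2Vc, the composite Lyapunov function decays exponentially and both states converge to zero.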

– p. 6/?
Example

ẋ1 = x1² − x1³ + x2,  ẋ2 = x3

x3 = −x1 − (1 + 2x1)(−x1 − x1³ + z2) − z2 =: φ(x1, x2)

V(x) = ½x1² + ½z2²,  V̇ = −x1² − x1⁴ − z2²

z3 = x3 − φ(x1 , x2 )

ẋ1 = x1² − x1³ + x2,  ẋ2 = φ(x1, x2) + z3

ż3 = u − (∂φ/∂x1)(x1² − x1³ + x2) − (∂φ/∂x2)(φ + z3)
– p. 7/?
Vc = V + ½z3²

V̇c = (∂V/∂x1)(x1² − x1³ + x2) + (∂V/∂x2)(z3 + φ)
   + z3[u − (∂φ/∂x1)(x1² − x1³ + x2) − (∂φ/∂x2)(z3 + φ)]

V̇c = −x1² − x1⁴ − (x2 + x1 + x1²)²
   + z3[(∂V/∂x2) − (∂φ/∂x1)(x1² − x1³ + x2) − (∂φ/∂x2)(z3 + φ) + u]

u = −(∂V/∂x2) + (∂φ/∂x1)(x1² − x1³ + x2) + (∂φ/∂x2)(z3 + φ) − z3

– p. 8/?
η̇ = f (η) + g(η)ξ
ξ̇ = fa (η, ξ) + ga (η, ξ)u, ga (η, ξ) 6= 0

u = (1/ga(η, ξ))[v − fa(η, ξ)]

η̇ = f (η) + g(η)ξ
ξ̇ = v

– p. 9/?
Strict-Feedback Form

ẋ = f0 (x) + g0 (x)z1
ż1 = f1 (x, z1 ) + g1 (x, z1 )z2
ż2 = f2 (x, z1 , z2 ) + g2 (x, z1 , z2 )z3
..
.
żk−1 = fk−1 (x, z1 , . . . , zk−1 ) + gk−1 (x, z1 , . . . , zk−1 )zk
żk = fk (x, z1 , . . . , zk ) + gk (x, z1 , . . . , zk )u

gi(x, z1, …, zi) ≠ 0 for 1 ≤ i ≤ k

– p. 10/?
Example
η̇ = −η + η 2 ξ, ξ̇ = u
η̇ = −η + η 2 ξ
ξ = 0 ⇒ η̇ = −η
V0 = ½η² ⇒ V̇0 = −η², ∀ η ∈ R

V = ½(η² + ξ²)

V̇ = η(−η + η²ξ) + ξu = −η² + ξ(η³ + u)
u = −η³ − kξ, k > 0
V̇ = −η² − kξ² ⇒ global stabilization
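A simulation sketch of the globally stabilizing control u = −η³ − kξ from this slide (the gain k and the initial state are arbitrary choices; along trajectories V̇ = −η² − kξ²):

```python
k = 1.0
eta, xi = 2.0, 1.0
dt, steps = 1e-3, 10000    # Euler integration over 10 s
for _ in range(steps):
    u = -eta**3 - k * xi             # backstepping control from the slide
    d_eta = -eta + eta**2 * xi
    eta, xi = eta + dt * d_eta, xi + dt * u

print(eta, xi)
```

Unlike the high-gain linear feedback of the earlier example (whose region of attraction was {ηξ < 1 + k}), this control drives any initial condition to the origin.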

– p. 11/?
Nonlinear Systems and Control
Lecture # 29

Stabilization

Passivity-Based Control

– p. 1/?
ẋ = f (x, u), y = h(x)

f (0, 0) = 0

uᵀy ≥ V̇ = (∂V/∂x) f(x, u)
Theorem 14.4: If the system is
(1) passive with a radially unbounded positive definite
storage function and
(2) zero-state observable,
then the origin can be globally stabilized by

u = −φ(y), φ(0) = 0, yᵀφ(y) > 0 ∀ y ≠ 0

– p. 2/?
Proof:

V̇ = (∂V/∂x) f(x, −φ(y)) ≤ −yᵀφ(y) ≤ 0

V̇(x(t)) ≡ 0 ⇒ y(t) ≡ 0 ⇒ u(t) ≡ 0 ⇒ x(t) ≡ 0

Apply the invariance principle

A given system may be made passive by

(1) Choice of output,
(2) Feedback,
or both

– p. 3/?
Choice of Output

ẋ = f(x) + G(x)u,  (∂V/∂x) f(x) ≤ 0, ∀ x

No output is defined. Choose the output as

y = h(x) := [(∂V/∂x) G(x)]ᵀ

V̇ = (∂V/∂x) f(x) + (∂V/∂x) G(x)u ≤ yᵀu
Check zero-state observability

– p. 4/?
Example
ẋ1 = x2,  ẋ2 = −x1³ + u

With V = ¼x1⁴ + ½x2² and u = 0:  V̇ = x1³x2 − x2x1³ = 0

Take y = (∂V/∂x) G = ∂V/∂x2 = x2

Is it zero-state observable?
(u = 0, y(t) ≡ 0 ⇒ x2 ≡ 0 ⇒ ẋ2 = −x1³ ≡ 0 ⇒ x1 ≡ 0)

u = −kx2 or u = −(2k/π) tan⁻¹(x2) (k > 0)

– p. 5/?
Feedback Passivation

Find u = α(x) + β(x)v

such that

ẋ = f(x) + G(x)α(x) + G(x)β(x)v,  y = h(x)

is passive

– p. 6/?
Theorem [31]: The system

ẋ = f(x) + G(x)u,  y = h(x)

is locally equivalent to a passive system (with a positive

definite storage function) if it has relative degree one at
x = 0 and the zero dynamics have a stable equilibrium
point at the origin with a positive definite Lyapunov function

Example (Robot Manipulator):

M(q)q̈ + C(q, q̇)q̇ + D q̇ + g(q) = u

M = Mᵀ > 0,  (Ṁ − 2C)ᵀ = −(Ṁ − 2C),  D = Dᵀ ≥ 0

– p. 7/?
Stabilize the system at q = qr

e = q − qr,  ė = q̇

M(q)ë + C(q, q̇)ė + Dė + g(q) = u

(e = 0, ė = 0) is not an open-loop equilibrium point
u = g(q) − φp(e) + v, [φp(0) = 0, eᵀφp(e) > 0 ∀ e ≠ 0]
M(q)ë + C(q, q̇)ė + Dė + φp(e) = v
V = ½ėᵀM(q)ė + ∫₀^{e} φpᵀ(σ) dσ

V̇ = ½ėᵀ(Ṁ − 2C)ė − ėᵀDė − ėᵀφp(e) + ėᵀv + φpᵀ(e)ė ≤ ėᵀv

y = ė
– p. 8/?
Is it zero-state observable? Set v = 0

v = −φd(ė), [φd(0) = 0, ėᵀφd(ė) > 0 ∀ ė ≠ 0]

u = g(q) − φp(e) − φd(ė)
Special case:

u = g(q) − Kp e − Kd ė,  Kp = Kpᵀ > 0, Kd = Kdᵀ > 0

– p. 9/?
How does passivity-based control compare with feedback
linearization?

Example 13.20

ẋ1 = x2,  ẋ2 = −h(x1) + u

h(0) = 0, x1 h(x1) > 0, ∀ x1 ≠ 0

Feedback linearization:

u = h(x1) − (k1x1 + k2x2)

ẋ = [0 1; −k1 −k2] x

– p. 10/?
Passivity-based control:

V = ∫₀^{x1} h(z) dz + ½x2²

V̇ = x2 h(x1) − x2 h(x1) + x2 u = x2 u
Take y = x2
With u = 0, y(t) ≡ 0 ⇒ h(x1(t)) ≡ 0 ⇒ x1(t) ≡ 0
u = −σ(x2), [σ(0) = 0, yσ(y) > 0 ∀ y ≠ 0]
ẋ1 = x2,  ẋ2 = −h(x1) − σ(x2)

– p. 11/?
Linearization:

[0 1; −h′(0) −k],  k = σ′(0)

s² + ks + h′(0) = 0
Sketch the root locus as k varies from zero to infinity

One of the two roots cannot be moved to the left of
Re[s] = −√(h′(0))

– p. 12/?

ż = fa(z) + F(z, y)y,  ẋ = f(x) + G(x)u,  y = h(x)

fa(0) = 0, f(0) = 0, h(0) = 0

(∂V/∂x) f(x) + (∂V/∂x) G(x)u ≤ yᵀu

(∂W/∂z) fa(z) ≤ 0

U(z, x) = W(z) + V(x)

U̇ ≤ (∂W/∂z) F(z, y)y + yᵀu = yᵀ[u + ((∂W/∂z) F(z, y))ᵀ]

– p. 13/?
u = −((∂W/∂z) F(z, y))ᵀ + v ⇒ U̇ ≤ yᵀv

The system

ż = fa(z) + F(z, y)y

ẋ = f(x) − G(x)((∂W/∂z) F(z, y))ᵀ + G(x)v

y = h(x)

with input v and output y is passive with U as a storage
function

Read Examples 14.17 and 14.18

– p. 14/?
Nonlinear Systems and Control
Lecture # 30

Stabilization

## Control Lyapunov Functions

– p. 1/1
ẋ = f (x) + g(x)u, f (0) = 0, x ∈ Rn , u ∈ R

Suppose there is a continuous stabilizing state feedback

control u = ψ(x) such that the origin of

ẋ = f (x) + g(x)ψ(x)

is asymptotically stable

By the converse Lyapunov theorem, there is V(x) such that

(∂V/∂x)[f(x) + g(x)ψ(x)] < 0, ∀ x ∈ D, x ≠ 0

If u = ψ(x) is globally stabilizing, then D = Rⁿ and V(x)
is radially unbounded
(∂V/∂x)[f(x) + g(x)ψ(x)] < 0, ∀ x ∈ D, x ≠ 0

(∂V/∂x) g(x) = 0 for x ∈ D, x ≠ 0 ⇒ (∂V/∂x) f(x) < 0

Since ψ(x) is continuous and ψ(0) = 0, given any ε > 0,
∃ δ > 0 such that if x ≠ 0 and ‖x‖ < δ, there is u with
‖u‖ < ε such that

(∂V/∂x)[f(x) + g(x)u] < 0

Small Control Property

– p. 3/1
Definition: A continuously differentiable positive definite
function V(x) is a Control Lyapunov Function (CLF) for the
system ẋ = f(x) + g(x)u if

(∂V/∂x) g(x) = 0 for x ∈ D, x ≠ 0 ⇒ (∂V/∂x) f(x) < 0 (∗)

and it satisfies the small control property
It is a Global Control Lyapunov Function if it is radially
unbounded and (∗) holds with D = Rⁿ

The system ẋ = f(x) + g(x)u is stabilizable by a
continuous state feedback control only if it has a CLF
Is it sufficient?
– p. 4/1
Theorem: Let V(x) be a CLF for ẋ = f(x) + g(x)u; then the
origin is stabilizable by u = ψ(x), where

ψ(x) = −[(∂V/∂x) f + √(((∂V/∂x) f)² + ((∂V/∂x) g)⁴)] / ((∂V/∂x) g),  if (∂V/∂x) g ≠ 0

ψ(x) = 0,  if (∂V/∂x) g = 0

The function ψ(x) is continuous for all x ∈ D0 (a

neighborhood of the origin) including x = 0. If f and g are
smooth, then ψ is smooth for x 6= 0. If V is a global CLF,
then the control u = ψ(x) is globally stabilizing

Sontag’s Formula

– p. 5/1
Proof: For properties of ψ, see Section 9.4 of [88]

V̇ = (∂V/∂x)[f(x) + g(x)ψ(x)]

If (∂V/∂x) g(x) = 0,  V̇ = (∂V/∂x) f(x) < 0 for x ≠ 0

If (∂V/∂x) g(x) ≠ 0,

V̇ = (∂V/∂x) f − [(∂V/∂x) f + √(((∂V/∂x) f)² + ((∂V/∂x) g)⁴)]
  = −√(((∂V/∂x) f)² + ((∂V/∂x) g)⁴) < 0 for x ≠ 0

– p. 6/1
How can we find a CLF?

If we know of any stabilizing control with a corresponding

Lyapunov function V , then V is a CLF
Feedback Linearization

P(A − BK) + (A − BK)ᵀP = −Q, Q = Qᵀ > 0

V = zᵀPz = Tᵀ(x) P T(x) is a CLF

Backstepping

– p. 7/1
Example:
ẋ = ax − bx3 + u, a, b > 0
Feedback Linearization:

u = −ax + bx³ − kx (k > 0)

ẋ = −kx
V(x) = ½x² is a CLF

(∂V/∂x) g = x,  (∂V/∂x) f = x(ax − bx³)
∂x ∂x

– p. 8/1
r 2  4
∂V ∂V ∂V
∂x f + ∂x f + ∂x g
−  
∂V
∂x g
p
x(ax − bx3 ) + x2 (ax − bx3 )2 + x4
= −
x
q
= −ax + bx3 − x (a − bx2 )2 + 1
q
ψ(x) = −ax + bx3 − x (a − bx2 )2 + 1

Compare with
−ax + bx3 − kx
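A sketch of Sontag's formula for this example with V = ½x²; the values of a, b, and the initial state are arbitrary choices, and the closed form −ax + bx³ − x√((a − bx²)² + 1) from the slide is checked against the general formula:

```python
import math

a, b = 1.0, 1.0

def psi(x):
    """Sontag's formula for x' = a*x - b*x^3 + u with V = x^2/2."""
    Vf = x * (a * x - b * x**3)      # (dV/dx) f
    Vg = x                           # (dV/dx) g
    if Vg == 0.0:
        return 0.0
    return -(Vf + math.sqrt(Vf**2 + Vg**4)) / Vg

x = 3.0
dt, steps = 1e-3, 10000    # Euler integration over 10 s
for _ in range(steps):
    x += dt * (a * x - b * x**3 + psi(x))

print(x)
```

The closed loop is ẋ = −x√((a − bx²)² + 1), so the trajectory decays at least as fast as e^{−t} without ever cancelling the helpful −bx³ damping.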

– p. 9/1
Method     Expression
FL-u       −ax + bx³ − kx
FL-CLS     ẋ = −kx
CLF-u      −ax + bx³ − x√((a − bx²)² + 1)
CLF-CLS    ẋ = −x√((a − bx²)² + 1)

Method     Small |x|            Large |x|
FL-u       −(a + k)x            bx³
FL-CLS     ẋ = −kx              ẋ = −kx
CLF-u      −(a + √(a² + 1))x    −ax
CLF-CLS    ẋ = −√(a² + 1) x     ẋ = −bx³

– p. 10/1
Lemma: Let V(x) be a CLF for ẋ = f(x) + g(x)u and
suppose (∂V/∂x)(0) = 0. Then Sontag's formula has a gain
margin [½, ∞); that is, u = kψ(x) is stabilizing for all k ≥ ½

Proof: Let

q(x) = ½[−(∂V/∂x) f + √(((∂V/∂x) f)² + ((∂V/∂x) g)⁴)]

q(0) = 0,  (∂V/∂x) g ≠ 0 ⇒ q > 0

(∂V/∂x) g = 0 ⇒ q = −(∂V/∂x) f > 0 for x ≠ 0

q(x) is positive definite
– p. 11/1
u = kψ(x) ⇒ ẋ = f(x) + g(x)kψ(x)

V̇ = (∂V/∂x) f + (∂V/∂x) g kψ

(∂V/∂x) g = 0 ⇒ V̇ = (∂V/∂x) f < 0 for x ≠ 0

(∂V/∂x) g ≠ 0:  V̇ = −q + [q + (∂V/∂x) f + (∂V/∂x) g kψ]

q + (∂V/∂x) f + (∂V/∂x) g kψ
  = −(k − ½)[(∂V/∂x) f + √(((∂V/∂x) f)² + ((∂V/∂x) g)⁴)] ≤ 0 for k ≥ ½
– p. 12/1
Nonlinear Systems and Control
Lecture # 31

Stabilization

Output Feedback

– p. 1/1
In general, output feedback stabilization requires the use of
observers. In this lecture we deal with three simple cases
where an observer is not needed
Minimum Phase Relative Degree One Systems

Passive Systems

Systems with Passive Maps from the Input to the
Derivative of the Output

– p. 2/1
Minimum Phase Relative Degree One Systems

ẋ = f(x) + g(x)u,  y = h(x)

f(0) = 0, h(0) = 0, Lg h(x) ≠ 0, ∀ x ∈ D

Normal Form:

[η; y] = [φ(x); h(x)],  φ(0) = 0, Lg φ(x) = 0

η̇ = f0(η, y),  ẏ = γ(x)[u − α(x)],  γ(x) ≠ 0

– p. 3/1
Assumptions:
The origin of η̇ = f0(η, 0) is exponentially stable:

c1‖η‖² ≤ V1(η) ≤ c2‖η‖²

(∂V1/∂η) f0(η, 0) ≤ −c3‖η‖²

‖∂V1/∂η‖ ≤ c4‖η‖

‖f0(η, y) − f0(η, 0)‖ ≤ L1|y|

|α(x)γ(x)| ≤ L2‖η‖ + L3|y|
γ(x) ≥ γ0 > 0

– p. 4/1
High-Gain Feedback:

u = −ky, k>0

η̇ = f0(η, y),  ẏ = γ(x)[−ky − α(x)]

V(η, y) = V1(η) + ½y²

V̇ = (∂V1/∂η) f0(η, y) − kγ(x)y² − α(x)γ(x)y

V̇ = (∂V1/∂η) f0(η, 0) + (∂V1/∂η)[f0(η, y) − f0(η, 0)]
    − kγ(x)y² − α(x)γ(x)y

V̇ ≤ −c3‖η‖² + c4L1‖η‖ |y| − kγ0 y² + L2‖η‖ |y| + L3y²

– p. 5/1
V̇ ≤ −c3‖η‖² + c4L1‖η‖ |y| − kγ0 y² + L2‖η‖ |y| + L3y²

V̇ ≤ −[‖η‖, |y|] Q [‖η‖, |y|]ᵀ,  Q = [c3  −½(L2 + c4L1);  −½(L2 + c4L1)  kγ0 − L3]

det(Q) = c3(kγ0 − L3) − ¼(L2 + c4L1)²

det(Q) > 0 for k > (1/(c3γ0))[c3L3 + ¼(L2 + c4L1)²]

The origin of the closed-loop system is exponentially stable
If the assumptions hold globally, it is globally exp. stable

– p. 6/1
Passive Systems

Suppose the system ẋ = f(x, u), y = h(x) is passive (with a
positive definite storage function) and zero-state observable

u = −ψ(y), ψ(0) = 0, yᵀψ(y) > 0, ∀ y ≠ 0

V̇ ≤ uᵀy = −yᵀψ(y) ≤ 0

V̇ = 0 ⇒ y = 0 ⇒ u = 0
y(t) ≡ 0 ⇒ x(t) ≡ 0

– p. 7/1
Systems with Passive Maps from u to ẏ

ẋ = f(x, u),  y = h(x),  f(0, 0) = 0, h(0) = 0, h is cont. diff.

Suppose the system

ẋ = f(x, u),  ẏ = (∂h/∂x) f(x, u) =: h̃(x, u)

is passive (with a positive definite storage function) and
zero-state observable

V̇ ≤ uᵀẏ

With u = 0, ẏ(t) ≡ 0 ⇒ x(t) ≡ 0

– p. 8/1
[Block diagrams: the plant, with input u and output y, is in negative
feedback with u = −ψ(z), where z is the output of the filter
bs/(s + a) driven by y; equivalently, since bs/(s + a) = s · b/(s + a),
z is the output of b/(s + a) driven by ẏ]
– p. 9/1
For 1 ≤ i ≤ m:
zi is the output of bi s/(s + ai) driven by yi
ui = −ψi(zi)
ai, bi > 0,  ψi(0) = 0,  zi ψi(zi) > 0 ∀ zi ≠ 0
żi = −ai zi + bi ẏi
Use

Vc(x, z) = V(x) + Σ_{i=1}^{m} (1/bi) ∫₀^{zi} ψi(σ) dσ

to prove asymptotic stability of the origin of the closed-loop
system

– p. 10/1
Example: Stabilize the pendulum

mℓθ̈ + mg sin θ = u

at θ = δ1 using feedback from θ

x1 = θ − δ1,  x2 = θ̇

ẋ1 = x2,  ẋ2 = (1/mℓ)[−mg sin θ + u]

u = mg sin θ − kp x1 + v,  kp > 0

ẋ1 = x2,  ẋ2 = −(kp/mℓ) x1 + (1/mℓ) v

y = x1,  ẏ = x2

– p. 11/1
V = ½kp x1² + ½mℓ x2²

V̇ = kp x1 x2 + mℓ x2 [−(kp/mℓ) x1 + (1/mℓ) v]

V̇ = x2 v = ẏ v
With v = 0, x2(t) ≡ 0 ⇒ x1(t) ≡ 0

[Block diagram: y drives the filter bs/(s + a), whose output is z]

ξ̇ = −aξ + y,  z = b(−aξ + y)

v = −kd z, kd > 0
u = mg sin θ − kp(θ − δ1) − kd z

– p. 12/1
Nonlinear Systems and Control
Lecture # 32

Robust Stabilization

– p. 1/1
Example

ẋ1 = x2,  ẋ2 = h(x) + g(x)u,  g(x) ≥ g0 > 0

Sliding Manifold (Surface):

s = a1 x1 + x2 = 0

s(t) ≡ 0 ⇒ ẋ1 = −a1 x1

a1 > 0 ⇒ lim_{t→∞} x1(t) = 0

How can we bring the trajectory to the manifold s = 0 and
maintain it there?

– p. 2/1
ṡ = a1 ẋ1 + ẋ2 = a1 x2 + h(x) + g(x)u

Suppose

|(a1 x2 + h(x)) / g(x)| ≤ ̺(x)

V = ½s²

V̇ = sṡ = s[a1 x2 + h(x)] + g(x)su ≤ g(x)|s|̺(x) + g(x)su

β(x) ≥ ̺(x) + β0, β0 > 0

s > 0, u = −β(x):

V̇ ≤ g(x)|s|̺(x) − g(x)β(x)|s|

– p. 3/1
s < 0, u = β(x)

sgn(s) = { 1, s > 0;  −1, s < 0 }

u = −β(x) sgn(s)

V̇ ≤ −g(x)β0|s| ≤ −g0β0|s|

V̇ ≤ −g0β0 √(2V)

– p. 4/1

V̇ ≤ −g0β0 √(2V)

dV/√V ≤ −g0β0 √2 dt

2√V, evaluated from V(s(0)) to V(s(t)), gives

√(V(s(t))) ≤ √(V(s(0))) − (g0β0/√2) t

|s(t)| ≤ |s(0)| − g0β0 t

s(t) reaches zero in finite time
Once on the surface s = 0, the trajectory cannot leave it
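A simulation sketch of the reaching law u = −β(x) sgn(s); the plant data h(x) = −sin(x1), g(x) = 1, the slope a1 = 1, and β(x) = |x2| + 2 are arbitrary choices satisfying |a1x2 + h(x)|/g(x) ≤ |x2| + 1 = ̺(x) with β0 = 1:

```python
import math

a1 = 1.0
sgn = lambda v: 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

x1, x2 = 1.0, 0.0
dt, steps = 1e-4, 100000    # Euler integration over 10 s
for _ in range(steps):
    s = a1 * x1 + x2
    u = -(abs(x2) + 2.0) * sgn(s)      # u = -beta(x)*sgn(s)
    dx2 = -math.sin(x1) + u            # x2' = h(x) + g(x)*u
    x1, x2 = x1 + dt * x2, x2 + dt * dx2

print(x1, a1 * x1 + x2)
```

The trajectory reaches s = 0 in finite time and then slides along it (with a small numerical chattering band of order dt), so x1 decays like e^{−a1 t}.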
– p. 5/1
[Phase-portrait sketch: trajectories being driven to the sliding line s = 0]

What is the region of validity?

– p. 6/1
ẋ1 = x2 ẋ2 = h(x) − g(x)β(x)sgn(s)

## ẋ1 = −a1 x1 + s ṡ = a1 x2 + h(x) − g(x)β(x)sgn(s)

sṡ ≤ −g0 β0 |s|, if β(x) ≥ ̺(x) + β0
V1 = ½ x1²

V̇1 = x1 ẋ1 = −a1 x1² + x1 s ≤ −a1 x1² + |x1| c ≤ 0,  ∀ |s| ≤ c and |x1| ≥ c/a1

Ω = {|x1| ≤ c/a1, |s| ≤ c}

Ω is positively invariant if |a1 x2 + h(x)| / g(x) ≤ ̺(x) over Ω
– p. 7/1
 
Ω = {|x1| ≤ c/a1, |s| ≤ c}

[Sketch of Ω in the (x1, x2) plane: the region between the lines s = c and s = −c, cut off at |x1| = c/a1]

|a1 x2 + h(x)| / g(x) ≤ k1 < k,  ∀ x ∈ Ω

u = −k sgn(s)

– p. 8/1
Chattering
[Sketch: a trajectory zigzagging back and forth across the sliding manifold between the regions s > 0 and s < 0]

## How can we reduce or eliminate chattering?

– p. 9/1
Reduce the amplitude of the signum function

ṡ = a1 x2 + h(x) + g(x)u

u = −[a1 x2 + ĥ(x)]/ĝ(x) + v

ṡ = δ(x) + g(x)v

δ(x) = a1 [1 − g(x)/ĝ(x)] x2 + h(x) − [g(x)/ĝ(x)] ĥ(x)

|δ(x)| / g(x) ≤ ̺(x),  β(x) ≥ ̺(x) + β0

v = −β(x) sgn(s)

– p. 10/1
Replace the signum function by a high-slope saturation function

u = −β(x) sat(s/ε)

sat(y) = { y,       if |y| ≤ 1
         { sgn(y),  if |y| > 1
[Graphs of sgn(y) and sat(y/ε): both equal ±1 for large |y|; sat(y/ε) replaces the jump at y = 0 by a line of slope 1/ε]

– p. 11/1
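The two functions are a one-line transcription in code (helper names are mine, not from the lecture):

```python
def sgn(y):
    # signum: +1 for y > 0, -1 for y < 0, and 0 at y = 0
    return (y > 0) - (y < 0)

def sat(y):
    # saturation: identity on [-1, 1], clipped to +/-1 outside
    return y if abs(y) <= 1 else sgn(y)

# sat(s/eps) has slope 1/eps near s = 0 and equals sgn(s) for |s| >= eps
eps = 0.01
print(sat(0.5), sat(3.0), sat(-2.0), sat(0.005 / eps))
```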
How can we analyze the system?

## For |s| ≥ ε, u = −β(x) sgn(s)

With c ≥ ε,

Ω = {|x1| ≤ c/a1, |s| ≤ c} is positively invariant

Trajectories starting in Ω reach the boundary layer {|s| ≤ ε} in finite time

## The boundary layer is positively invariant

– p. 12/1
Inside the boundary layer:
ẋ1 = −a1 x1 + s,  ṡ = a1 x2 + h(x) − g(x)β(x) s/ε

x1 ẋ1 ≤ −a1 x1² + |x1| ε

0 < θ < 1

x1 ẋ1 ≤ −(1 − θ) a1 x1²,  ∀ |x1| ≥ ε/(θ a1)

The trajectories reach the positively invariant set

Ωε = {|x1| ≤ ε/(θ a1), |s| ≤ ε}

in finite time
– p. 13/1
What happens inside Ωε ?

## Find the equilibrium points

0 = −a1 x1 + s = x2,  0 = a1 x2 + h(x) − g(x)β(x) s/ε

φ(x1) = h(x) / [a1 g(x)β(x)] │ x2 = 0

x1 = ε φ(x1)

Suppose x1 = ε φ(x1) has an isolated root x̄1 = ε k1

h(0) = 0 ⇒ x̄1 = 0

– p. 14/1
z1 = x1 − x̄1 , z2 = s − a1 x̄1

## x2 = −a1 x1 + s = −a1 (x1 − x̄1 ) + s − a1 x̄1 = −a1 z1 + z2

ż1 = −a1 x1 + s = −a1 z1 + z2

ż2 = a1 x2 + h(x) − g(x)β(x) s/ε
    = a1 (z2 − a1 z1) + h(x) − g(x)β(x) (z2 + a1 x̄1)/ε

ż2 = ℓ(z) − g(x)β(x) z2/ε

ℓ(z) = a1 (z2 − a1 z1) + a1 g(x)β(x) [ h(x)/(a1 g(x)β(x)) − x̄1/ε ]

– p. 15/1
ż1 = −a1 z1 + z2,  ż2 = ℓ(z) − g(x)β(x) z2/ε

ℓ(0) = 0,  |ℓ(z)| ≤ ℓ1 |z1| + ℓ2 |z2|
g(x)β(x) ≥ g0 β0
V = ½ z1² + ½ z2²

V̇ = z1 (−a1 z1 + z2) + z2 [ℓ(z) − g(x)β(x) z2/ε]

V̇ ≤ −a1 z1² + (1 + ℓ1)|z1| |z2| + ℓ2 z2² − (g0 β0/ε) z2²

– p. 16/1
V̇ ≤ −a1 z1² + (1 + ℓ1)|z1| |z2| + ℓ2 z2² − (g0 β0/ε) z2²

V̇ ≤ − [ |z1|  |z2| ] Q [ |z1|  |z2| ]ᵀ

Q = ⎡ a1             −(1 + ℓ1)/2    ⎤
    ⎣ −(1 + ℓ1)/2    g0 β0/ε − ℓ2  ⎦

det(Q) = a1 (g0 β0/ε − ℓ2) − ¼(1 + ℓ1)²

h(0) = 0 ⇒ lim_{t→∞} x(t) = 0

h(0) ≠ 0 ⇒ lim_{t→∞} x(t) = [ x̄1, 0 ]ᵀ

– p. 17/1
Nonlinear Systems and Control
Lecture # 33

Robust Stabilization

## Sliding Mode Control

– p. 1/1
Regular Form:

η̇ = fa (η, ξ)
ξ̇ = fb (η, ξ) + g(η, ξ)u + δ(t, η, ξ, u)

η ∈ Rn−1 , ξ ∈ R, u ∈ R
fa (0, 0) = 0, fb (0, 0) = 0, g(η, ξ) ≥ g0 > 0
Sliding Manifold:
s = ξ − φ(η) = 0, φ(0) = 0

## s(t) ≡ 0 ⇒ η̇ = fa (η, φ(η))

Design φ s.t. the origin of η̇ = fa (η, φ(η)) is asymp. stable

– p. 2/1
ṡ = fb(η, ξ) − (∂φ/∂η) fa(η, ξ) + g(η, ξ)u + δ(t, η, ξ, u)

u = −(1/ĝ)[ f̂b − (∂φ/∂η) f̂a ] + v   or   u = v

u = −L[ f̂b − (∂φ/∂η) f̂a ] + v,   L = 1/ĝ  or  L = 0

ṡ = g(η, ξ)v + ∆(t, η, ξ, v)

∆ = fb − (∂φ/∂η) fa + δ − g L [ f̂b − (∂φ/∂η) f̂a ]

|∆(t, η, ξ, v)| / g(η, ξ) ≤ ̺(η, ξ) + κ0 |v|

– p. 3/1
|∆(t, η, ξ, v)| / g(η, ξ) ≤ ̺(η, ξ) + κ0 |v|

̺(η, ξ) ≥ 0,  0 ≤ κ0 < 1  (known)

sṡ = sgv + s∆ ≤ sgv + |s| |∆|
sṡ ≤ g[sv + |s|(̺ + κ0 |v|)]
v = −β(η, ξ) sgn(s)

β(η, ξ) ≥ ̺(η, ξ)/(1 − κ0) + β0,  β0 > 0
sṡ ≤ g[−β|s| + ̺|s| + κ0 β|s|] = g[−β(1 − κ0 )|s| + ̺|s| ]
sṡ ≤ g[−̺|s| − (1 − κ0 )β0 |s| + ̺|s| ]

– p. 4/1
sṡ ≤ −g(η, ξ)(1 − κ0 )β0 |s| ≤ −g0 β0 (1 − κ0 )|s|

v = −β(x) sat(s/ε),  ε > 0

sṡ ≤ −g0 β0 (1 − κ0)|s|,  for |s| ≥ ε
The trajectory reaches the boundary layer {|s| ≤ ε} in finite
time and remains inside thereafter

## Study the behavior of η

η̇ = fa (η, φ(η) + s)

– p. 5/1
α1(‖η‖) ≤ V(η) ≤ α2(‖η‖)

(∂V/∂η) fa(η, φ(η) + s) ≤ −α3(‖η‖),  ∀ ‖η‖ ≥ γ(|s|)

|s| ≤ c ⇒ V̇ ≤ −α3(‖η‖),  for ‖η‖ ≥ γ(c)

α(r) = α2(γ(r))

V(η) ≥ α(c) ⇔ V(η) ≥ α2(γ(c)) ⇒ α2(‖η‖) ≥ α2(γ(c))

⇒ ‖η‖ ≥ γ(c) ⇒ V̇ ≤ −α3(‖η‖) ≤ −α3(γ(c))

– p. 6/1
[Plot of α(·) versus |s|, marking the levels α(ε) and α(c)]

Ω = {V(η) ≤ α(c)} × {|s| ≤ c}

is positively invariant and all trajectories starting in Ω reach

Ωε = {V(η) ≤ α(ε)} × {|s| ≤ ε} in finite time

– p. 7/1
Theorem 14.1: Suppose all the assumptions hold over Ω.
Then, for all (η(0), ξ(0)) ∈ Ω, the trajectory (η(t), ξ(t)) is
bounded for all t ≥ 0 and reaches the positively invariant
set Ωε in finite time. If the assumptions hold globally and
V (η) is radially unbounded, the foregoing conclusion holds
for any initial state
Theorem 14.2: Suppose all the assumptions hold over Ω,
̺(0) = 0, κ0 = 0, and
the origin of η̇ = fa(η, φ(η)) is exponentially stable.
Then there exists ε∗ > 0 such that for all 0 < ε < ε∗, the
origin of the closed-loop system is exponentially stable and
Ω is a subset of its region of attraction. If the assumptions
hold globally, the origin will be globally uniformly
asymptotically stable
– p. 8/1
Example

ẋ1 = x2 + θ1 x1 sin x2,  ẋ2 = θ2 x2² + x1 + u

|θ1| ≤ a,  |θ2| ≤ b
x2 = −kx1 ⇒ ẋ1 = −kx1 + θ1 x1 sin x2
V1 = ½ x1² ⇒ x1 ẋ1 ≤ −kx1² + ax1²
s = x2 + kx1,  k > a
ṡ = θ2 x2² + x1 + u + k(x2 + θ1 x1 sin x2)
u = −x1 − kx2 + v ⇒ ṡ = v + ∆(x)
∆(x) = θ2 x2² + kθ1 x1 sin x2

– p. 9/1
∆(x) = θ2 x2² + kθ1 x1 sin x2

β(x) = ak|x1| + bx2² + β0,  β0 > 0

u = −x1 − kx2 − β(x) sgn(s)
Will

u = −x1 − kx2 − β(x) sat(s/ε)
stabilize the origin?

– p. 10/1
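Whether the saturated control stabilizes the origin can at least be probed numerically by simulating the closed loop for one admissible uncertainty pair. A sketch — θ1, θ2, the bounds a, b, and the initial state are assumed values, not from the lecture:

```python
import math

# Closed-loop probe of u = -x1 - k*x2 - beta(x)*sat(s/eps)
a, b = 1.0, 1.0                 # assumed bounds |theta1| <= a, |theta2| <= b
th1, th2 = 0.8, -1.0            # one admissible uncertainty pair
k = a + 1.0                     # need k > a
beta0, eps, dt = 1.0, 0.01, 1e-4

def sat(y):
    return max(-1.0, min(1.0, y))

x1, x2 = 1.0, -1.0
for _ in range(int(5.0 / dt)):
    s = x2 + k * x1
    beta = a * k * abs(x1) + b * x2**2 + beta0
    u = -x1 - k * x2 - beta * sat(s / eps)
    x1, x2 = (x1 + (x2 + th1 * x1 * math.sin(x2)) * dt,
              x2 + (th2 * x2**2 + x1 + u) * dt)

print(f"final state: ({x1:.4f}, {x2:.4f})")
```

The state settles into an O(ε) neighborhood of the origin, consistent with the boundary-layer analysis of the previous lecture.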
Example: Normal Form

η̇ = f0 (η, ξ)
ξ̇i = ξi+1 , 1≤i≤ρ−1
ξ̇ρ = L_f^ρ h(x) + L_g L_f^{ρ−1} h(x) u
y = ξ1

## View ξρ as input to the system

η̇ = f0 (η, ξ1 , · · · , ξρ−1 , ξρ )
ξ̇i = ξi+1 , 1≤i≤ρ−2
ξ̇ρ−1 = ξρ

## Design ξρ = φ(η, ξ1 , · · · , ξρ−1 ) to stabilize the origin

– p. 11/1
s = ξρ − φ(η, ξ1 , · · · , ξρ−1 )

## Minimum Phase Systems: The origin of η̇ = f0 (η, 0) is

asymptotically stable

s = ξρ + k1 ξ1 + · · · + kρ−1 ξρ−1

## η̇ = f0 (η, ξ1 , · · · , ξρ−1 , −k1 ξ1 − · · · − kρ−1 ξρ−1 )

    
⎡ ξ̇1    ⎤   ⎡ 0     1           ⎤ ⎡ ξ1    ⎤
⎢  ⋮    ⎥ = ⎢         ⋱    1    ⎥ ⎢  ⋮    ⎥
⎣ ξ̇ρ−1 ⎦   ⎣ −k1    ⋯   −kρ−1 ⎦ ⎣ ξρ−1 ⎦

– p. 12/1
Multi-Input Systems

η̇ = fa (η, ξ)
ξ̇ = fb (η, ξ) + G(η, ξ)E(η, ξ)u + δ(t, η, ξ, u)

η ∈ Rn−p , ξ ∈ Rp , u ∈ Rp
fa (0, 0) = 0, fb (0, 0) = 0, det(G) 6= 0, det(E) 6= 0
G = diag[g1 , g2 , · · · , gm ], gi (η, ξ) ≥ g0 > 0
Design φ s.t. the origin of η̇ = fa (η, φ(η)) is asymp. stable

s = ξ − φ(η)

ṡ = fb(η, ξ) − (∂φ/∂η) fa(η, ξ) + G(η, ξ)E(η, ξ)u + δ(t, η, ξ, u)
– p. 13/1
ṡ = fb(η, ξ) − (∂φ/∂η) fa(η, ξ) + G(η, ξ)E(η, ξ)u + δ(t, η, ξ, u)

u = E⁻¹[ −L( f̂b − (∂φ/∂η) f̂a ) + v ],   L = Ĝ⁻¹  or  L = 0

ṡi = gi(η, ξ)vi + ∆i(t, η, ξ, v),  1 ≤ i ≤ p

|∆i(t, η, ξ, v)| / gi(η, ξ) ≤ ̺(η, ξ) + κ0 max_{1≤i≤p} |vi|,  ∀ 1 ≤ i ≤ p
i

## ̺(η, ξ) ≥ 0, 0 ≤ κ0 < 1 (Known)

β(x) ≥ ̺(x)/(1 − κ0) + β0,  β0 > 0
1 − κ0

– p. 14/1
si ṡi = si gi vi + si ∆i ≤ gi {si vi + |si |[̺ + κ0 max |vi |]}
1≤i≤p

vi = −β sgn(si ), 1≤i≤p

## si ṡi ≤ gi [−β + ̺ + κ0 β]|si |

= gi [−(1 − κ0 )β + ̺]|si |
≤ gi [−̺ − (1 − κ0 )β0 + ̺]|si |
≤ −g0 β0 (1 − κ0 )|si |

Now use

vi = −β sat(si/ε),  1 ≤ i ≤ p

Read Theorems 14.1 and 14.2 in the textbook

– p. 15/1
Nonlinear Systems and Control
Lecture # 34

Robust Stabilization

## Lyapunov Redesign &

Backstepping

– p. 1/?
Lyapunov Redesign (Min-max control)

## Nominal Model: ẋ = f (x) + G(x)u

Stabilizing Control: u = ψ(x)
∂V
[f (x) + G(x)ψ(x)] ≤ −W (x), ∀ x ∈ D, W is p.d.
∂x
u = ψ(x) + v
kδ(t, x, ψ(x) + v)k ≤ ρ(x) + κ0 kvk, 0 ≤ κ0 < 1
ẋ = f (x) + G(x)ψ(x) + G(x)[v + δ(t, x, ψ(x) + v)]
V̇ = (∂V/∂x)(f + Gψ) + (∂V/∂x) G(v + δ)

– p. 2/?
wᵀ = (∂V/∂x) G

V̇ ≤ −W (x) + w T v + w T δ
w T v +w T δ ≤ w T v +kwk kδk ≤ w T v +kwk[ρ(x)+κ0 kvk]
 
v = −η(x) w/‖w‖   (= −η(x) sgn(w) for p = 1)

## w T v + w T δ ≤ −ηkwk + ρkwk + κ0 ηkwk

= −η(1 − κ0 )kwk + ρkwk

η(x) ≥ ρ(x)/(1 − κ0) ⇒ wᵀv + wᵀδ ≤ 0 ⇒ V̇ ≤ −W(x)
– p. 3/?

v = { −η(x) w/‖w‖,   if η(x)‖w‖ ≥ ε
    { −η²(x) w/ε,    if η(x)‖w‖ < ε

η(x)‖w‖ ≥ ε ⇒ V̇ ≤ −W(x)

For η(x)‖w‖ < ε

V̇ ≤ −W(x) + wᵀ[ −η² w/ε + δ ]
  ≤ −W(x) − (η²/ε)‖w‖² + ρ‖w‖ + κ0 ‖w‖ ‖v‖
  = −W(x) − (η²/ε)‖w‖² + ρ‖w‖ + (κ0 η²/ε)‖w‖²

– p. 4/?
V̇ ≤ −W(x) + (1 − κ0)[ −(η²/ε)‖w‖² + η‖w‖ ]

−y²/ε + y ≤ ε/4,  for y ≥ 0

V̇ ≤ −W(x) + ε(1 − κ0)/4,  ∀ x ∈ D
Theorem 14.3: x(t) is uniformly ultimately bounded by a
class K function of ε. If the assumptions hold globally and
V is radially unbounded, then x(t) is globally uniformly
ultimately bounded
Corollary 14.1: If ρ(0) = 0 and η(x) ≥ η0 > 0, we can
recover uniform asymptotic stability

– p. 5/?
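The two-branch (smoothed) control law above can be packaged as a single function. A sketch with illustrative names and test vectors; note the two branches agree on the switching surface η(x)‖w‖ = ε, so v is continuous there:

```python
import math

def lyapunov_redesign_v(w, eta, eps):
    """Smoothed min-max term of the Lyapunov redesign:
    -eta*w/||w|| when eta*||w|| >= eps, and -eta^2*w/eps inside."""
    nw = math.sqrt(sum(wi * wi for wi in w))
    if eta * nw >= eps:
        return [-eta * wi / nw for wi in w]
    return [-(eta**2) * wi / eps for wi in w]

# Continuity check on the switching surface eta*||w|| = eps:
w = [0.6, 0.8]                  # ||w|| = 1
eta, eps = 2.0, 2.0             # eta*||w|| = eps exactly
v1 = [-eta * wi / 1.0 for wi in w]      # unit-vector branch
v2 = [-(eta**2) * wi / eps for wi in w]  # quadratic branch
print(v1, v2, lyapunov_redesign_v(w, eta, eps))
```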
Example: Pendulum with horizontal acceleration of
suspension point
m[ℓθ̈ + A(t) cos θ] = T/ℓ − mg sin θ

## Stabilize the pendulum at θ = π

x1 = θ − π,  x2 = θ̇,  a = g/ℓ,  c = 1/(mℓ²),  h(t) = A(t)/ℓ
ẋ1 = x2 , ẋ2 = a sin x1 + cu + h(t) cos x1
Nominal Model: ẋ1 = x2 , ẋ2 = â sin x1 + ĉu
   
ψ(x) = −(â/ĉ) sin x1 − (1/ĉ)(k1 x1 + k2 x2)
– p. 6/?
A = ⎡ 0     1   ⎤  (Hurwitz),   V(x) = xᵀPx
    ⎣ −k1  −k2 ⎦
 
δ = (1/ĉ)[ ((aĉ − âc)/ĉ) sin x1 + h(t) cos x1 − ((c − ĉ)/ĉ)(k1 x1 + k2 x2) ] + ((c − ĉ)/ĉ) v

|(c − ĉ)/ĉ| ≤ κ0,   |(âc − aĉ)/ĉ| + √(k1² + k2²) |(c − ĉ)/ĉ| ≤ k,   |h(t)| ≤ H

|δ| ≤ (k‖x‖ + H)/ĉ + κ0 |v| =: ρ(x) + κ0 |v|,   (κ0 < 1)

– p. 7/?
η(x) = ρ(x)/(1 − κ0),  η(x) ≥ H/(ĉ(1 − κ0))

w = (∂V/∂x) G = 2xᵀP [ 0; 1 ] = 2(p12 x1 + p22 x2)

v = { −η(x) sgn(w),   if η(x)|w| ≥ ε
    { −η²(x) w/ε,     if η(x)|w| < ε

u = −(â/ĉ) sin x1 − (1/ĉ)(k1 x1 + k2 x2) + v
Will this control stabilize the origin x = 0?

– p. 8/?
Backstepping

## ż1 = f1 (z1 ) + g1 (z1 )z2

ż2 = f2 (z1 , z2 ) + g2 (z1 , z2 )z3
..
.
żk−1 = fk−1 (z1 , . . . , zk−1 ) + gk−1 (z1 , . . . , zk−1 )zk
żk = fk (z1 , . . . , zk ) + gk (z1 , . . . , zk )u

gi 6= 0, 1≤i≤k

– p. 9/?
ż1 = f1 + g1 z2 + δ1 (z)
ż2 = f2 + g2 z3 + δ2 (z)
..
.
żk−1 = fk−1 + gk−1 zk + δk−1 (z)
żk = fk + gk u + δk (z)

## |δ1 (z)| ≤ ρ1 (z1 )

|δ2 (z)| ≤ ρ2 (z1 , z2 )
..
.
|δk−1 | ≤ ρk−1 (z1 , . . . , zk−1 )
|δk | ≤ ρk (z1 , . . . , zk )

– p. 10/?
Example:

ẋ1 = x2 + θ1 x1 sin x2,  ẋ2 = θ2 x2² + x1 + u

|θ1| ≤ a,  |θ2| ≤ b
δ1 = θ1 x1 sin x2,  δ2 = θ2 x2²
ẋ1 = x2 + θ1 x1 sin x2,  |θ1 x1 sin x2| ≤ a|x1|
x2 = −k1 x1
V1 = ½ x1²,  V̇1 ≤ −(k1 − a)x1²;  Take k1 = 1 + a
z2 = x2 + (1 + a)x1

– p. 11/?
ẋ1 = −(1 + a)x1 + θ1 x1 sin x2 + z2
ż2 = ψ1(x) + ψ2(x, θ) + u

ψ1 = x1 + (1 + a)x2,  ψ2 = (1 + a)θ1 x1 sin x2 + θ2 x2²

Vc = ½ x1² + ½ z2²

V̇c ≤ −x1² + z2[ x1 + ψ1(x) + ψ2(x, θ) + u ]

First Approach (Example 14.13):

V̇c ≤ −x1² − kz2² + z2 ψ2(x, θ)

Restrict analysis to the compact set Ωc = {Vc(x) ≤ c}

– p. 12/?
ψ2 = (1 + a)θ1 x1 sin x2 + θ2 x2²

|ψ2| ≤ a(1 + a)|x1| + bρ|x2|,  ρ = max_{x∈Ωc} |x2|

x2 = z2 − (1 + a)x1
|ψ2| ≤ (1 + a)(a + bρ)|x1| + bρ|z2|

V̇c ≤ −x1² − kz2² + (1 + a)(a + bρ)|x1| |z2| + bρ z2²

We can make V̇c neg. def. by choosing k large enough

## Can it achieve semiglobal stabilization?

– p. 13/?
Second Approach (Example 14.14):

|ψ2| ≤ a(1 + a)|x1| + bx2²

v = { −η(x) sgn(z2),   if η(x)|z2| ≥ ε
    { −η²(x) z2/ε,     if η(x)|z2| < ε

η(x) = η0 + a(1 + a)|x1| + bx2²,  η0 > 0,  ε > 0

V̇c ≤ −x1² − kz2² + ε/4
Show that this control is globally stabilizing
– p. 14/?
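A numerical probe of the claimed global stabilization, starting from a deliberately large initial state. The uncertainty values, the gain k of an assumed −kz2 term in u, and η0, ε are all illustrative choices, not from the lecture:

```python
import math

a, b = 1.0, 1.0                 # assumed bounds on |theta1|, |theta2|
th1, th2 = -0.9, 0.7            # an admissible pair
k, eta0, eps, dt = 2.0, 0.5, 0.05, 1e-4

x1, x2 = 5.0, -5.0              # large initial state
for _ in range(int(20.0 / dt)):
    z2 = x2 + (1 + a) * x1
    psi1 = x1 + (1 + a) * x2
    eta = eta0 + a * (1 + a) * abs(x1) + b * x2**2
    if eta * abs(z2) >= eps:
        v = -eta * math.copysign(1.0, z2)
    else:
        v = -eta**2 * z2 / eps
    u = -x1 - psi1 - k * z2 + v    # cancel known terms; v dominates psi2
    x1, x2 = (x1 + (x2 + th1 * x1 * math.sin(x2)) * dt,
              x2 + (th2 * x2**2 + x1 + u) * dt)

print(f"final state: ({x1:.4f}, {x2:.4f})")
```

The state converges toward the origin even from this large initial condition, in line with the global V̇c bound.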
Nonlinear Systems and Control
Lecture # 35

Tracking

## Feedback Linearization &

Sliding Mode Control

– p. 1/1
SISO relative-degree ρ system:

## ẋ = f (x) + g(x)u, y = h(x)

f (0) = 0, h(0) = 0
L_g L_f^{i−1} h(x) = 0 for 1 ≤ i ≤ ρ − 1,   L_g L_f^{ρ−1} h(x) ≠ 0
Normal form:
η̇ = f0 (η, ξ)
ξ̇i = ξi+1 , 1≤i≤ρ−1
ξ̇ρ = L_f^ρ h(x) + L_g L_f^{ρ−1} h(x) u
y = ξ1

f0 (0, 0) = 0

– p. 2/1
Reference signal r(t)
r(t) and its derivatives up to r (ρ) (t) are bounded for all
t ≥ 0 and the ρth derivative r (ρ) (t) is a piecewise
continuous function of t;
the signals r ,. . . ,r (ρ) are available on-line.

## Goal: lim [y(t) − r(t)] = 0

t→∞

   
R = [ r, ṙ, …, r^(ρ−1) ]ᵀ,   e = [ ξ1 − r, ξ2 − ṙ, …, ξρ − r^(ρ−1) ]ᵀ = ξ − R

– p. 3/1
η̇ = f0(η, e + R)

ė = Ac e + Bc [ L_f^ρ h(x) + L_g L_f^{ρ−1} h(x) u − r^(ρ) ]

Ac = ⎡ 0 1 0 ⋯ 0 ⎤        Bc = ⎡ 0 ⎤
     ⎢ 0 0 1 ⋯ 0 ⎥             ⎢ ⋮ ⎥
     ⎢ ⋮       ⋱  ⎥             ⎢ 0 ⎥
     ⎢ 0 ⋯ 0 0 1 ⎥             ⎣ 1 ⎦
     ⎣ 0 ⋯ ⋯ 0 0 ⎦

u = [ 1 / (L_g L_f^{ρ−1} h(x)) ] [ −L_f^ρ h(x) + r^(ρ) + v ]

ė = Ac e + Bc v

– p. 4/1
v = −Ke ⇒ ė = (Ac − Bc K) e,   (Ac − Bc K) Hurwitz

e(t) → 0 as t → ∞

## e(t) is bounded ⇒ ξ(t) = e(t) + R(t) is bounded

η̇ = f0 (η, ξ)
Local Tracking (small kη(0)k, ke(0)k, kR(t)k):

## Minimum Phase ⇒ The origin of η̇ = f0 (η, 0) is

asymptotically stable
⇒ η is bounded for sufficiently small
kη(0)k, ke(0)k, and kR(t)k
– p. 5/1
Global Tracking (large kη(0)k, ke(0)k, kR(t)k):

Example 13.21

## ẋ1 = x2 , ẋ2 = −a sin x1 − bx2 + cu, y = x1

e1 = x1 − r, e2 = x2 − ṙ
ė1 = e2 , ė2 = −a sin x1 − bx2 + cu − r̈
u = (1/c)[ a sin x1 + bx2 + r̈ − k1 e1 − k2 e2 ]
ė1 = e2 , ė2 = −k1 e1 − k2 e2
See simulation in the textbook
– p. 6/1
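A sketch of the tracking controller of Example 13.21 for an assumed parameter set and the reference r(t) = sin t (the textbook simulation uses its own values): the feedback-linearizing law forces ė = (Ac − BcK)e, so the tracking error decays exponentially.

```python
import math

a, b, c = 10.0, 1.0, 10.0       # assumed plant parameters
k1, k2 = 1.0, 2.0               # places both error poles at -1
dt = 1e-4

x1, x2 = 0.5, 0.0
for i in range(int(20.0 / dt)):
    t = i * dt
    r, rdot, rddot = math.sin(t), math.cos(t), -math.sin(t)
    e1, e2 = x1 - r, x2 - rdot
    # u = (1/c)[a sin x1 + b x2 + r'' - k1 e1 - k2 e2]
    u = (a * math.sin(x1) + b * x2 + rddot - k1 * e1 - k2 * e2) / c
    x1, x2 = (x1 + x2 * dt,
              x2 + (-a * math.sin(x1) - b * x2 + c * u) * dt)

print(f"tracking error at t = 20: {x1 - math.sin(20.0):.2e}")
```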
Sliding Mode Control

L_g h(x) = · · · = L_g L_f^{ρ−2} h(x) = 0,   L_g L_f^{ρ−1} h(x) ≥ a > 0

η̇ = f0 (η, ξ)
ξ̇1 = ξ2
.. ..
. .
ξ̇ρ−1 = ξρ
ξ̇ρ = L_f^ρ h(x) + L_g L_f^{ρ−1} h(x)[u + δ(t, x, u)]
y = ξ1

e=ξ−R

– p. 7/1
η̇ = f0 (η, ξ)
ė1 = e2
.. ..
. .
ėρ−1 = eρ
ėρ = L_f^ρ h(x) + L_g L_f^{ρ−1} h(x)[u + δ(t, x, u)] − r^(ρ)(t)

Sliding surface:

s = k1 e1 + · · · + kρ−1 eρ−1 + eρ = 0

s(t) ≡ 0 ⇒ eρ = −(k1 e1 + · · · + kρ−1 eρ−1)

– p. 8/1
η̇ = f0 (η, ξ)
ė1 = e2
.. ..
. .
ėρ−1 = −(k1 e1 + · · · + kρ−1 eρ−1 )

 
⎡ 0     1           ⎤
⎢         ⋱         ⎥
⎢              1    ⎥   is Hurwitz
⎣ −k1    ⋯   −kρ−1 ⎦

## Assumption: The system f0 (η, ξ) is BIBS stable

– p. 9/1
s = k1 e1 + · · · + kρ−1 eρ−1 + eρ = Σ_{i=1}^{ρ−1} ki ei + eρ

ṡ = Σ_{i=1}^{ρ−1} ki ei+1 + L_f^ρ h(x) + L_g L_f^{ρ−1} h(x)[u + δ(t, x, u)] − r^(ρ)(t)

u = −[ 1 / (L_g L_f^{ρ−1} h(x)) ] [ Σ_{i=1}^{ρ−1} ki ei+1 + L_f^ρ h(x) − r^(ρ)(t) ] + v

ṡ = L_g L_f^{ρ−1} h(x) v + ∆(t, x, v)

|∆(t, x, v)| / (L_g L_f^{ρ−1} h(x)) ≤ ̺(x) + κ0 |v|,   0 ≤ κ0 < 1

– p. 10/1
 
v = −β(x) sat(s/ε),  ε > 0

β(x) ≥ ̺(x)/(1 − κ0) + β0,  β0 > 0
What properties can we prove for this control?

– p. 11/1
Nonlinear Systems and Control
Lecture # 36

Tracking

Equilibrium-to-Equilibrium
Transition

– p. 1/1
η̇ = f0 (η, ξ)
ξ̇i = ξi+1 , 1≤i≤ρ−1
ξ̇ρ = fb(η, ξ) + gb(η, ξ) u,   where fb = L_f^ρ h(x),  gb = L_g L_f^{ρ−1} h(x)
y = ξ1

Equilibrium point:

0 = f0 (η̄, ξ̄)
0 = ξ̄i+1 , 1≤i≤ρ−1
0 = fb (η̄, ξ̄) + gb (η̄, ξ̄)ū
ȳ = ξ̄1

– p. 2/1
ξ̄1 = ȳ,  ξ̄i = 0 for 2 ≤ i ≤ ρ

0 = f0(η̄, ȳ, 0, · · · , 0),   ū = − fb(η̄, ȳ, 0, · · · , 0) / gb(η̄, ȳ, 0, · · · , 0)
Assume 0 = f0(η̄, ȳ, 0, · · · , 0) has a unique solution η̄ in the
domain of interest

η̄ = φη (ȳ), ū = φu (ȳ)

## By assumption (and without loss of generality)

φη (0) = 0, φu (0) = 0

– p. 3/1
Goal: Move the system from equilibrium at y = 0 to
equilibrium at y = ȳ , either asymptotically or over a finite
time period

## First Approach: Apply a step command

y∗(t) = { 0,  for t < 0
        { ȳ,  for t ≥ 0

Is this allowed ?

Take r = ȳ for t ≥ 0

r^(i) = 0 for i ≥ 1

– p. 4/1
η(0) = 0, e1 (0) = −ȳ, ei (0) = 0 for i ≥ 2

## The shape of the transient response depends on the

solution of
ė = (Ac − Bc K)e
in feedback linearization
or the solution of
      
⎡ ė1    ⎤   ⎡ 0     1           ⎤ ⎡ e1    ⎤   ⎡ 0 ⎤
⎢ ė2    ⎥   ⎢         ⋱         ⎥ ⎢ e2    ⎥   ⎢ ⋮ ⎥
⎢  ⋮    ⎥ = ⎢              1    ⎥ ⎢  ⋮    ⎥ + ⎢ 0 ⎥ s
⎣ ėρ−1 ⎦   ⎣ −k1    ⋯   −kρ−1 ⎦ ⎣ eρ−1 ⎦   ⎣ 1 ⎦

## in sliding mode control

What is the impact of the reaching phase?
– p. 5/1
Second Approach: Take r(t) as the zero-state response of
a Hurwitz transfer function driven by y∗
Typical Choice:

aρ / (s^ρ + a1 s^{ρ−1} + · · · + aρ−1 s + aρ)

## r(0) = 0 ⇒ e1 (0) = 0 ⇒ e(0) = 0

Feedback Linearization:

## Sliding Mode Control:

– p. 6/1
The derivatives of r are generated by the pre-filter

ż = ⎡ 0     1         ⎤ z + ⎡ 0  ⎤ y∗
    ⎢         ⋱       ⎥     ⎢ ⋮  ⎥
    ⎢             1   ⎥     ⎢ 0  ⎥
    ⎣ −aρ   ⋯   −a1  ⎦     ⎣ aρ ⎦

r = [ 1  0  ⋯  0 ] z

r = z1,  ṙ = z2,  . . . ,  r^(ρ−1) = zρ

r^(ρ) = − Σ_{i=1}^{ρ} aρ−i+1 zi + aρ y∗

– p. 7/1
Example 13.22

Move the pendulum from equilibrium at x1 = 0 to
equilibrium at x1 = π/2

Constraint : |u(t)| ≤ 2
y∗ = { 0,    for t < 0
     { π/2,  for t ≥ 0
Pre-Filter:
1
(τ s + 1)2

– p. 8/1
ż = ⎡ 0       1     ⎤ z + ⎡ 0     ⎤ y∗
    ⎣ −1/τ²  −2/τ  ⎦     ⎣ 1/τ²  ⎦

r = [ 1  0 ] z

r = z1,  ṙ = z2,  r̈ = (1/τ²)(y∗ − r) − (2/τ) z2

r(t) = (π/2)[ 1 − e^(−t/τ) (1 + t/τ) ]

u = 0.1(10 sin x1 + x2 + r̈ − k1 e1 − k2 e2)

– p. 9/1
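A quick consistency check that the pre-filter state equations reproduce the closed-form step response r(t) = (π/2)[1 − e^(−t/τ)(1 + t/τ)]:

```python
import math

tau = 0.25
ystar = math.pi / 2             # step command applied at t = 0

# State-space realization of 1/(tau*s + 1)^2, zero initial state
z1, z2, dt = 0.0, 0.0, 1e-5
t_end = 1.0
for _ in range(int(t_end / dt)):
    dz1 = z2
    dz2 = (-z1 - 2 * tau * z2 + ystar) / tau**2
    z1 += dz1 * dt
    z2 += dz2 * dt

r_exact = (math.pi / 2) * (1 - math.exp(-t_end / tau) * (1 + t_end / tau))
print(f"r(1) = {z1:.6f}, closed form = {r_exact:.6f}")
```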
[Simulation plots: output versus reference and the control u(t) over 0 ≤ t ≤ 2, for τ = 0.05 and τ = 0.25]

– p. 10/1
Third Approach: Plan a trajectory (r(t), ṙ(t), . . . , r (ρ) (t))
to move from (0, 0, . . . , 0) to (ȳ, 0, . . . , 0) in finite time T
Example: ρ = 2

[Plot of the planned trajectory: r^(2)(t) = a on [0, T/2] and −a on [T/2, T]; r^(1)(t) = at, then a(T − t); r(t) = at²/2, then −aT²/4 + aTt − at²/2, reaching aT²/4 at t = T]
– p. 11/1

r(t) = { at²/2,                  for 0 ≤ t ≤ T/2
       { −aT²/4 + aTt − at²/2,   for T/2 ≤ t ≤ T
       { aT²/4,                  for t ≥ T

a = 4ȳ/T² ⇒ r(t) = ȳ for t ≥ T

– p. 12/1
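The piecewise trajectory as a function (with a = 4ȳ/T² so that r(T) = ȳ):

```python
def planned_r(t, ybar, T):
    """Rest-to-rest reference: accelerate at a for T/2, decelerate at -a,
    arriving at ybar with zero velocity at t = T (a = 4*ybar/T**2)."""
    a = 4.0 * ybar / T**2
    if t <= T / 2:
        return a * t**2 / 2
    if t <= T:
        return -a * T**2 / 4 + a * T * t - a * t**2 / 2
    return a * T**2 / 4

ybar, T = 2.0, 4.0
print(planned_r(0.0, ybar, T), planned_r(T / 2, ybar, T), planned_r(T, ybar, T))
```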
Nonlinear Systems and Control
Lecture # 37

Observers

Linearization
and
Extended Kalman Filter (EKF)

– p. 1/1
Linear Observer via Linearization

## xδ = x − xss , uδ = u − uss , yδ = y − yss

What are A, B , C ?

## x̂˙ δ = Ax̂δ + Buδ + H(yδ − C x̂δ ), x̂ = xss + x̂δ

(A − HC) is Hurwitz
It will work locally for sufficiently small ‖xδ(0)‖, ‖x̂δ(0)‖,
and ‖uδ(t)‖
– p. 2/1
Feedback Control:

## uδ = −K x̂δ , u = uss − K x̂δ

Verify that the closed-loop system has an equilibrium point
at
x = xss , x̃ = 0
and linearization at the equilibrium point yields
" # " #" #
ẋδ (A − BK) BK xδ
=
x̃˙ 0 (A − HC) x̃

## Which theorem would justify this controller locally?

– p. 3/1
Nonlinear Observer via Linearization

## x̂˙ = f (x̂, u) + H[y − h(x̂)]

Equilibrium Point: x = xss , u = uss , x̂ = xss

x̃ = x − x̂

## x̃˙ = g(x, x̃, u), g(xss , 0, uss ) = 0

Verify that linearization at the equilibrium point yields

x̃˙ = (A − HC)x̃

## Investigate the design of H and the use in feedback control

– p. 4/1
Extended Kalman Filter (EKF)

ẋ = f (x, u) + w, y = h(x, u) + v

## x̂˙ = f (x̂, u) + H(t)[y − h(x̂, u)]

x̃ = x − x̂
x̃˙ = f (x, u) + w − f (x̂, u) − H[h(x, u) + v − h(x̂, u)]
Substitute x = x̂ + x̃ and expand the RHS in a Taylor

## x̃˙ = [A(t) − H(t)C(t)]x̃ + η(x̃, t) + ξ(t)

∂f ∂h
A(t) = (x̂(t), u(t)), C(t) = (x̂(t), u(t))
∂x ∂x
η(0, t) = 0, ξ(t) = w(t) − H(t)v(t)

– p. 5/1
Assuming that x(t), u(t), w(t), v(t), and H(t) are
bounded and f and h are twice continuously differentiable,
show that

‖η(x̃, t)‖ ≤ k1 ‖x̃‖²,   ‖ξ(t)‖ ≤ k2

Hint:
f(x, u) − f(x̂, u) − (∂f/∂x)(x̂, u) x̃
  = ∫₀¹ (∂f/∂x)(σx̃ + x̂, u) dσ x̃ − (∂f/∂x)(x̂, u) x̃   (Exercise 3.23)
  = ∫₀¹ [ (∂f/∂x)(σx̃ + x̂, u) − (∂f/∂x)(x̂, u) ] dσ x̃

– p. 6/1
Kalman Filter Design: Let Q(t) and R(t) be symmetric
positive definite matrices that satisfy the Riccati equation

Ṗ = AP + PAᵀ + Q − PCᵀR⁻¹CP

If (A(t), C(t)) is uniformly observable, then P(t) exists for

all t ≥ t0 and satisfies

## 0 < p1 I ≤ P (t) ≤ p2 I ⇒ 0 < p3 I ≤ P −1 (t) ≤ p4 I

See a textbook on optimal control or optimal estimation

H(t) = P(t)Cᵀ(t)R⁻¹(t)
– p. 7/1
Compute A(t) and C(t)
∂f ∂h
A(t) = (x̂(t), u(t)), C(t) = (x̂(t), u(t))
∂x ∂x
Solve the Riccati equation

Compute H(t)

## Remark: The Riccati equation and the observer equation

have to be solved simultaneously in real time because A(t)
and C(t) depend on x̂(t) and u(t)

– p. 8/1
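Looping those three steps in real time gives the EKF. A minimal noise-free sanity sketch for an assumed scalar system ẋ = −x³, y = x (so the linearizations are A(t) = −3x̂² and C = 1); Q, R, the step size, and the initial conditions are illustrative choices:

```python
# Noise-free sanity run of the EKF loop for a scalar system.
Q, R, dt = 0.01, 0.01, 1e-3
x, xhat, P = 1.0, -0.5, 1.0     # true state, estimate, Riccati variable

for _ in range(int(10.0 / dt)):
    x += -x**3 * dt             # plant
    y = x                       # measurement (no noise in this check)
    A = -3.0 * xhat**2          # A(t) = df/dx evaluated at the estimate
    C = 1.0                     # C(t) = dh/dx
    H = P * C / R               # H(t) = P C^T R^{-1}
    xhat += (-xhat**3 + H * (y - xhat)) * dt
    P += (2 * A * P + Q - (P * C) ** 2 / R) * dt   # Riccati equation

print(f"x = {x:.4f}, xhat = {xhat:.4f}, P = {P:.4f}")
```

The estimate converges to the true state despite the wrong initial guess, and P settles near its small steady-state value — the Riccati and observer equations are indeed integrated simultaneously, since A(t) depends on x̂(t).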
Lemma: The solutions of

x̃˙ = [A(t) − H(t)C(t)]x̃ + η(x̃, t) + ξ(t)

are uniformly ultimately bounded by an ultimate bound

proportional to k2

Proof:
V = x̃ᵀP⁻¹x̃

V̇ = x̃˙ᵀP⁻¹x̃ + x̃ᵀP⁻¹x̃˙ + x̃ᵀ [ d(P⁻¹)/dt ] x̃
– p. 9/1
(d/dt) P⁻¹ = −P⁻¹ Ṗ P⁻¹
V̇ = x̃ᵀP⁻¹(A − PCᵀR⁻¹C)x̃
  + x̃ᵀ(Aᵀ − CᵀR⁻¹CP)P⁻¹x̃
  − x̃ᵀP⁻¹ Ṗ P⁻¹x̃ + 2x̃ᵀP⁻¹(η + ξ)

V̇ = x̃ᵀP⁻¹(AP + PAᵀ − PCᵀR⁻¹CP − Ṗ)P⁻¹x̃
  − x̃ᵀCᵀR⁻¹Cx̃ + 2x̃ᵀP⁻¹(η + ξ)
  = −x̃ᵀ(P⁻¹QP⁻¹ + CᵀR⁻¹C)x̃ + 2x̃ᵀP⁻¹(η + ξ)
  ≤ −c1‖x̃‖² + c2‖x̃‖³ + c3‖x̃‖   (c3 ∝ k2)

– p. 10/1
Stochastic Interpretation: When w(t) and v(t) are
zero-mean, white noise stochastic processes,
uncorrelated, i.e., E{w(t)v T (τ )} = 0, ∀t, τ , and
E{w(t)w T (τ )} = Q(t)δ(t − τ )
E{v(t)v T (τ )} = R(t)δ(t − τ )
then x̂(t) is an approximation of the minimum variance
estimate that minimizes
n o
E [y(t) − h(x̂(t), u(t))]T [y(t) − h(x̂(t), u(t))]

## and P (t) is an approximation of the covariance matrix

n o
E [x̂(t) − x(t)][x̂(t) − x(t)]T

– p. 11/1
Feedback Control: What can you say about the closed-loop
system when x̂ is used in feedback control?

– p. 12/1
Nonlinear Systems and Control
Lecture # 38

Observers

Exact Observers

– p. 1/1
Observer with Linear Error Dynamics

Observer Form:

ẋ = Ax + γ(y, u), y = Cx

## where (A, C) is observable, x ∈ Rn , u ∈ Rm , y ∈ Rp

From Lecture # 24: An n-dimensional SO system

## is transformable into the observer form if and only if

φ = [ h, L_f h, · · · , L_f^{n−1} h ]ᵀ,   rank (∂φ/∂x)(x) = n

b = [ 0, · · · , 0, 1 ]ᵀ,   (∂φ/∂x) τ = b
– p. 2/1
[ad_f^i τ, ad_f^j τ] = 0,   0 ≤ i, j ≤ n − 1

[g, ad_f^j τ] = 0,   0 ≤ j ≤ n − 2

Change of variables:

τi = ad_f^{i−1} τ,   1 ≤ i ≤ n

(∂T/∂x) [ τ1, τ2, · · · , τn ] = I

z = T(x)

– p. 3/1
ẋ = Ax + γ(y, u), y = Cx

## x̂˙ = Ax̂ + γ(y, u) + H(y − C x̂)

x̃ = x − x̂
x̃˙ = (A − HC)x̃
Design H such that (A − HC) is Hurwitz
Let u = ψ(x) be a globally stabilizing state feedback
control
u = ψ(x̂)

## x̂˙ = Ax̂ + γ(y, u) + H(y − C x̂)

– p. 4/1
How would you analyze the closed-loop system?

## ẋ = Ax + γ(Cx, ψ(x − x̃))

x̃˙ = (A − HC)x̃

We know that
the origin of ẋ = Ax + γ(Cx, ψ(x)) is globally
asymptotically stable
the origin of x̃˙ = (A − HC)x̃ is globally exponentially
stable
What additional assumptions do we need to show that the
origin of the closed-loop system is globally asymptotically
stable?

– p. 5/1
Circle Criterion Design

ẋ = Ax + γ(y, u) − Lβ(Mx),   y = Cx

where (A, C) is observable, x ∈ Rⁿ, u ∈ Rᵐ, y ∈ Rᵖ,
Mx ∈ Rℓ,  β(η) = [ β1(η1), . . . , βℓ(ηℓ) ]ᵀ

## x̂˙ = Ax̂ + γ(y, u) − Lβ(M x̂ − N (y − C x̂)) + H(y − C x̂)

x̃ = x − x̂
x̃˙ = (A − HC)x̃ − L[β(M x) − β(M x̂ − N (y − C x̂))]

## x̃˙ = (A − HC)x̃ − L[β(M x) − β(M x − (M + N C)x̃)]

Define
z = (M + N C)x̃
ψ(t, z) = β(M x(t)) − β(M x(t) − z)
– p. 6/1
x̃˙ = (A − HC)x̃ − Lψ(t, z)
z = (M + N C)x̃

G(s) := (M + NC)[sI − (A − HC)]⁻¹L

[Feedback loop: z = G(s)(−ψ(t, z)), with the nonlinearity ψ(·) in the feedback path]

ψ(t, z) = [ ψ1(t, z1), . . . , ψℓ(t, zℓ) ]ᵀ

– p. 7/1
Main Assumption: βi (·) is a nondecreasing function

dβi/dηi ≥ 0,   ∀ ηi ∈ R

## zi ψi (t, zi ) = zi [βi ((M x)i ) − βi ((M x)i − zi )] ≥ 0

z T ψ(t, z) ≥ 0

– p. 8/1
By the circle criterion (Theorem 7.1) the origin of

x̃˙ = (A − HC)x̃ − Lψ(t, z),   z = (M + NC)x̃

is globally exponentially stable if

G(s) := (M + NC)[sI − (A − HC)]⁻¹L   is strictly positive real

## Design Problem: Design H and N such that G(s) is

strictly positive real
Feasibility can be investigated using LMI (Arcak &
Kokotovic, Automatica, 2001)
– p. 9/1
Example:

## ẋ1 = x2 , ẋ2 = −x31 − x32 + u, y = x1

A = ⎡ 0 1 ⎤,  C = [ 1 0 ],  γ = ⎡ 0        ⎤
    ⎣ 0 0 ⎦                      ⎣ −y³ + u ⎦

L = ⎡ 0 ⎤,  M = [ 0 1 ],  β(η) = η³,  dβ/dη = 3η² ≥ 0
    ⎣ 1 ⎦

H = ⎡ h1 ⎤,  N scalar
    ⎣ h2 ⎦

G(s) = (M + NC)[sI − (A − HC)]⁻¹L = (s + N + h1)/(s² + h1 s + h2)

– p. 10/1
From Exercise 6.7, G(s) is SPR if and only if

## h1 > 0, h2 > 0, 0 < N + h1 < h1

h1 = 2,  h2 = 1,  N = −1/2

G(s) = (s + 3/2)/(s + 1)²

x̂˙1 = x̂2 + 2(y − x̂1)

x̂˙2 = −y³ + u − [ x̂2 + (1/2)(y − x̂1) ]³ + (y − x̂1)

– p. 11/1
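A numerical sanity check of this observer on the example system (u ≡ 0; initial conditions assumed), using the gains as reconstructed above (H = [2, 1], N = −1/2): the estimation error should converge to zero from any initial guess.

```python
x1, x2 = 1.0, -1.0              # plant: x1' = x2, x2' = -x1^3 - x2^3 + u
xh1, xh2 = 0.0, 0.0             # observer state
u, dt = 0.0, 1e-4
for _ in range(int(20.0 / dt)):
    y = x1
    dx1, dx2 = x2, -x1**3 - x2**3 + u
    # circle-criterion observer with H = [2, 1], N = -1/2
    dxh1 = xh2 + 2.0 * (y - xh1)
    dxh2 = -y**3 + u - (xh2 + 0.5 * (y - xh1))**3 + (y - xh1)
    x1 += dx1 * dt; x2 += dx2 * dt
    xh1 += dxh1 * dt; xh2 += dxh2 * dt

print(f"estimation error: ({x1 - xh1:.2e}, {x2 - xh2:.2e})")
```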
Let u = φ(x) be a globally stabilizing state feedback control
Closed-loop system under output feedback:

## ẋ = Ax + γ(y, φ(x − x̃)) − Lβ(M x)

x̃˙ = (A − HC)x̃ − Lψ(t, z)
z = (M + N C)x̃

well defined?

## What about the effect of uncertainty?

– p. 12/1
Nonlinear Systems and Control
Lecture # 39

Observers

High-Gain Observers
Motivating Example

– p. 1/1
ẋ1 = x2 , ẋ2 = φ(x, u), y = x1

Observer:

x̂˙1 = x̂2 + h1(y − x̂1),   x̂˙2 = φ0(x̂, u) + h2(y − x̂1)

φ0(x, u) is a nominal model of φ(x, u)

x̃1 = x1 − x̂1 , x̃2 = x2 − x̂2

## x̃˙ 1 = −h1 x̃1 + x̃2 , x̃˙ 2 = −h2 x̃1 + δ(x, x̃)

δ(x, x̃) = φ(x, γ(x̂)) − φ0 (x̂, γ(x̂))

– p. 2/1
Design H = ⎡ h1 ⎤ such that Ao = ⎡ −h1  1 ⎤ is Hurwitz
           ⎣ h2 ⎦                 ⎣ −h2  0 ⎦

Transfer function from δ to x̃:

Go(s) = [ 1 / (s² + h1 s + h2) ] ⎡ 1       ⎤
                                 ⎣ s + h1  ⎦

Design H to make sup_{ω∈R} ‖Go(jω)‖ as small as possible

h1 = α1/ε,  h2 = α2/ε²,  ε > 0

Go(s) = [ ε / ((εs)² + α1 εs + α2) ] ⎡ ε        ⎤
                                     ⎣ εs + α1  ⎦
– p. 3/1
" #
ε ε
Go (s) =
(εs)2 + α1 εs + α2 εs + α1

Observer eigenvalues are (λ1 /ε) and (λ2 /ε) where λ1 and
λ2 are the roots of

λ2 + α1 λ + α2 = 0

## sup kGo (jω)k = O(ε)

ω∈R

– p. 4/1
η1 = x̃1/ε,  η2 = x̃2

εη̇1 = −α1 η1 + η2,   εη̇2 = −α2 η1 + εδ(x, x̃)

Ultimate bound of η is O(ε)
η decays faster than an exponential mode e^(−at/ε),  a > 0

Peaking Phenomenon:

x1(0) ≠ x̂1(0) ⇒ η1(0) = O(1/ε)

The solution contains a term of the form (1/ε) e^(−at/ε)

(1/ε) e^(−at/ε) approaches an impulse function as ε → 0
– p. 5/1
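The peaking transient can be reproduced from the error dynamics alone (δ ≡ 0, with assumed α1 = 2, α2 = 1, so both eigenvalues sit at −1/ε): a unit initial error x̃1(0) = 1 makes x̃2 peak to roughly e⁻¹/ε before decaying.

```python
# Error dynamics with delta = 0:
#   x̃1' = -(2/eps) x̃1 + x̃2,   x̃2' = -(1/eps^2) x̃1
# (double eigenvalue at -1/eps; x̃2(t) = -(t/eps^2) e^{-t/eps}).
def peak_of_x2(eps, dt=1e-6, t_end=0.1):
    e1, e2 = 1.0, 0.0           # x1(0) != x̂1(0)
    peak = 0.0
    for _ in range(int(t_end / dt)):
        de1 = -2.0 / eps * e1 + e2
        de2 = -1.0 / eps**2 * e1
        e1 += de1 * dt
        e2 += de2 * dt
        peak = max(peak, abs(e2))
    return peak

# Halving eps roughly doubles the peak of x̃2 -- the O(1/eps) transient
print(peak_of_x2(0.01), peak_of_x2(0.005))
```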
Example

State feedback control:

u = −x2³ − x1 − x2

Output feedback control:

u = −x̂2³ − x̂1 − x̂2

x̂˙1 = x̂2 + (2/ε)(y − x̂1)
x̂˙2 = (1/ε²)(y − x̂1)

– p. 6/1
[Plots of x1, x2, and u under state feedback (SFB) and output feedback (OFB) with ε = 0.1, 0.01, 0.005]

– p. 7/1
ε = 0.004

[Plots of x1, x2, and u for ε = 0.004 over a short time window]

– p. 8/1
u = sat(−x̂2³ − x̂1 − x̂2)

[Plots of x1, x2, and u under SFB and OFB with ε = 0.1, 0.01, 0.001]

– p. 9/1
Region of attraction under state feedback:

[Phase-plane plot in the (x1, x2) plane]

– p. 10/1
Region of attraction under output feedback:

[Phase-plane plot for ε = 0.1 (dashed) and ε = 0.05 (dash-dot)]

– p. 11/1
Analysis of the closed-loop system:

## ẋ1 = x2 ẋ2 = φ(x, γ(x − x̃))

εη̇1 = −α1 η1 + η2 εη̇2 = −α2 η1 + εδ(x, x̃)
[Sketch: starting from η = O(1/ε), the fast variable η decays to O(ε) while x remains in Ωb ⊂ Ωc]
– p. 12/1
What is the effect of measurement noise?

## Transfer function from y to x̂ (with φ0 = 0):

[ α2 / ((εs)² + α1 εs + α2) ] ⎡ 1 + (εα1/α2)s ⎤ → ⎡ 1 ⎤  as ε → 0
                              ⎣ s              ⎦   ⎣ s ⎦

t≥0

– p. 13/1
[Plot: the error bound versus the high-gain parameter ε, minimized at an intermediate value of ε]

εopt = O(√(kn/kd)),   kd = sup_{t≥0} |ẍ1(t)|,   kn = sup_{t≥0} |v(t)|

– p. 14/1
Nonlinear Systems and Control
Lecture # 40

Observers

High-Gain Observers
Stabilization

– p. 1/1
ẋ = Ax + Bφ(x, z, u)
ż = ψ(x, z, u)
y = Cx
ζ = q(x, z)

u ∈ Rp , y ∈ Rm , ζ ∈ Rs , x ∈ Rρ , z ∈ Rℓ
A, B, C are block diagonal matrices
   
Ai = ⎡ 0 1 0 ⋯ 0 ⎤
     ⎢ 0 0 1 ⋯ 0 ⎥
     ⎢ ⋮       ⋱ 1 ⎥   (ρi × ρi)
     ⎣ 0 ⋯ ⋯ ⋯ 0 ⎦

Bi = [ 0, 0, ⋯ , 0, 1 ]ᵀ   (ρi × 1)

– p. 2/1
Ci = [ 1 0 ⋯ 0 ]  (1 × ρi),   ρ = Σ_{i=1}^{m} ρi

Normal form

## Example: Magnetic Suspension

ẋ1 = x2
ẋ2 = g − (k/m) x2 − L0 a x3² / (2m(a + x1)²)
ẋ3 = (1/L(x1)) [ −R x3 + L0 a x2 x3/(a + x1)² + u ]

– p. 3/1
Stabilizing (partial) state feedback controller:

u = γ(x, ζ)

## ϑ̇ = Γ(ϑ, x, ζ), u = γ(ϑ, x, ζ)

Closed-loop system under state feedback:

Ẋ = f (X ), X = (x, z, ϑ)

Observer:

## x̂˙ = Ax̂ + Bφ0 (x̂, ζ, u) + H(y − C x̂)

– p. 4/1
H is block diagonal

Hi = [ α1^i/ε,  α2^i/ε²,  · · · ,  α_{ρi−1}^i/ε^{ρi−1},  α_{ρi}^i/ε^{ρi} ]ᵀ   (ρi × 1)

s^{ρi} + α1^i s^{ρi−1} + · · · + α_{ρi−1}^i s + α_{ρi}^i

is Hurwitz and ε > 0 (small)
φ0 (x, ζ, u) is a nominal model of φ(x, z, u), which is
globally bounded in x

– p. 5/1
Theorem 14.6 (Nonlinear Separation Principle):
Suppose the origin of Ẋ = f(X) is asymptotically stable
and R is its region of attraction. Let S be any compact set
in the interior of R and Q be any compact subset of Rρ .
Then,
∃ ε∗1 > 0 such that, for every 0 < ε ≤ ε∗1 , the solutions
(X (t), x̂(t)) of the closed-loop system, starting in
S × Q, are bounded for all t ≥ 0

## given any µ > 0, ∃ ε∗2 > 0 and T2 > 0, dependent on

µ, such that, for every 0 < ε ≤ ε∗2 , the solutions of the
closed-loop system, starting in S × Q, satisfy

## kX (t)k ≤ µ and kx̂(t)k ≤ µ, ∀ t ≥ T2

– p. 6/1
given any µ > 0, ∃ ε∗3 > 0, dependent on µ, such that,
for every 0 < ε ≤ ε∗3, the solutions of the closed-loop
system, starting in S × Q, satisfy ‖X(t) − Xr(t)‖ ≤ µ,
∀ t ≥ 0, where Xr is the solution under state feedback

## if the origin of Ẋ = f (X ) is exponentially stable, then ∃

ε∗4 > 0 such that, for every 0 < ε ≤ ε∗4 , the origin of the
closed-loop system is exponentially stable and S × Q is
a subset of its region of attraction.

– p. 7/1
Key ideas of the proof:
Representation of the closed-loop system as a
singularly perturbed one with X as the slow and η
(scaled estimation error) as the fast

## Use of a converse Lyapunov theorem to construct

positively invariant sets

## Use of global boundedness in x̂ to show that η reaches

O(ε) while X is inside a positively invariant set

– p. 8/1
Example 14.19:

u = −k sat( [a1 (θ − π) + θ̇] / µ )

## θ̂˙ = ω̂ + (2/ε)(θ − θ̂)

ω̂˙ = φ0 (θ̂, u) + (1/ε2 )(θ − θ̂)
φ0 = −â sin θ̂ + ĉu
– p. 9/1
u = −k sat( [a1 (θ̂ − π) + ω̂] / µ )

or

u = −k sat( [a1 (θ − π) + ω̂] / µ )

– p. 10/1
[Plots (a)–(d): θ and ω versus time under SFB and OFB with ε = 0.05 and ε = 0.01]
Time Time

– p. 11/1
Nonlinear Systems and Control
Lecture # 41

Integral Control

– p. 1/1
ẋ = f (x, u, w)
y = h(x, w)
ym = hm (x, w)

## x ∈ Rn state, u ∈ Rp control input

y ∈ Rp controlled output, ym ∈ Rm measured output
w ∈ Rl unknown constant parameters and disturbances
Goal:
y(t) → r as t → ∞
r ∈ Rp constant reference, v = (r, w)
e(t) = y(t) − r

– p. 2/1
Assumption: e can be measured

## Steady-state condition: There is a unique pair (xss , uss )

that satisfies the equations

0 = f (xss , uss , w)
0 = h(xss , w) − r

## Can we reduce this to a stabilization problem by shifting the

equilibrium point to the origin via the change of variables

xδ = x − xss , uδ = u − uss ?

– p. 3/1
Integral Action:
σ̇ = e
Augmented System:
ẋ = f (x, u, w)
σ̇ = h(x, w) − r

## Task: Stabilize the augmented system at (xss , σss ) where

σss produces uss

[Block diagram: the tracking error is integrated to produce σ; a stabilizing controller driven by σ and the measured signals generates u for the plant]

– p. 4/1
Integral Control via Linearization

State Feedback:

u = −K1 x − K2 σ − K3 e

Closed-loop system:

## ẋ = f (x, −K1 x − K2 σ − K3 (h(x, w) − r), w)

σ̇ = h(x, w) − r

Equilibrium points:

0 = f (x̄, ū, w)
0 = h(x̄, w) − r
ū = −K1 x̄ − K2 σ̄

## Unique equilibrium point at x = xss , σ = σss , u = uss

– p. 5/1
Linearization about (xss, σss):

ξδ = ⎡ x − xss ⎤
     ⎣ σ − σss ⎦

ξ̇δ = (A − BK)ξδ

A = ⎡ A  0 ⎤,   A = (∂f/∂x)(x, u, w)│eq,   C = (∂h/∂x)(x, w)│eq
    ⎣ C  0 ⎦

B = ⎡ B ⎤,   B = (∂f/∂u)(x, u, w)│eq
    ⎣ 0 ⎦

K = [ K1 + K3 C    K2 ]

– p. 6/1
(A, B) is controllable if and only if (A, B) is controllable and

rank ⎡ A  B ⎤ = n + p
     ⎣ C  0 ⎦
Task: Design K, independent of v , such that (A − BK) is
Hurwitz for all v

## (xss , σss ) is an exponentially stable equilibrium point of the

closed-loop system. All solutions starting in its region of
attraction approach it as t tends to infinity

e(t) → 0 as t → ∞

– p. 7/1
Pendulum Example:

θ̈ = −a sin θ − bθ̇ + cT

Regulate θ to δ
x1 = θ − δ, x2 = θ̇, u=T

ẋ1 = x2
ẋ2 = −a sin(x1 + δ) − bx2 + cu
xss = ⎡ 0 ⎤,   uss = (a/c) sin δ
      ⎣ 0 ⎦

σ̇ = x1
– p. 8/1
   
A = ⎡ 0          1   0 ⎤        B = ⎡ 0 ⎤
    ⎢ −a cos δ  −b   0 ⎥            ⎢ c ⎥
    ⎣ 1          0   0 ⎦            ⎣ 0 ⎦

K1 = [ k1  k2 ],   K2 = k3,   K3 = 0
(A − BK) will be Hurwitz if

b + k2 c > 0,   (b + k2 c)(a cos δ + k1 c) − k3 c > 0,   k3 c > 0

Suppose a/c ≤ ρ1,  1/c ≤ ρ2

k2 > 0,  k3 > 0,  k1 > ρ1 + ρ2 k3/k2
– p. 9/1
Output Feedback: We only measure e and ym

σ̇ = e = y − r
ż = F z + G1 σ + G2 ym
u = Lz + M1 σ + M2 ym + M3 e

## Task: Design F , G1 , G2 , L, M1 , M2 , and M3 ,

independent of v , such that Ac is Hurwitz for all v
 
Ac = ⎡ A + BM2 Cm + BM3 C    BM1    BL ⎤
     ⎢ C                      0      0  ⎥
     ⎣ G2 Cm                  G1     F  ⎦

Cm = (∂hm/∂x)(x, w)│eq

– p. 10/1
Integral Control via Sliding Mode Design

η̇ = f0 (η, ξ, w)
ξ̇1 = ξ2
.. ..
. .
ξ̇ρ−1 = ξρ
ξ̇ρ = b(η, ξ, u, w) + a(η, ξ, w)u
y = ξ1
a(η, ξ, w) ≥ a0 > 0

Goal:
y(t) → r as t → ∞
ξss = [r, 0, . . . , 0]T

– p. 11/1
Steady-state condition: There is a unique pair (ηss , uss )
that satisfies the equations

0 = f0 (ηss , ξss , w)
0 = b(ηss , ξss , uss , w) + a(ηss , ξss , w)uss

ė0 = y − r
   
z = η − ηss,   e = [ e1, e2, …, eρ ]ᵀ = [ ξ1 − r, ξ2, …, ξρ ]ᵀ

– p. 12/1
ż = f0(η, ξ, w) =: f̃0(z, e, w, r)
ė0 = e1
ė1 = e2
.. ..
. .
ėρ−1 = eρ
ėρ = b(η, ξ, u, w) + a(η, ξ, w)u

## Partial State Feedback: {e1 , . . . , eρ } are measured

s = k0 e0 + k1 e1 + · · · + kρ−1 eρ−1 + eρ

## λρ + kρ−1 λρ−1 + · · · + k1 λ + k0 is Hurwitz

– p. 13/1
ṡ = k0 e1 + · · · + kρ−1 eρ + b(η, ξ, u, w) + a(η, ξ, w)u

ṡ = ∆(η, ξ, u, w, r) + a(η, ξ, w)u

|∆(η, ξ, u, w, r)| / a(η, ξ, w) ≤ ̺(e) + κ0 |u|,   0 ≤ κ0 < 1

u = −β(e) sat(s/µ)

β(e) ≥ ̺(e)/(1 − κ0) + β0,   β0 > 0

For |s| ≥ µ,   sṡ ≤ −a0 (1 − κ0)β0 |s|
What about the other state variables?

– p. 14/1
ż = f̃0(z, e, w, r)
ζ̇ = Aζ + Bs   (A is Hurwitz)
ṡ = −a(·)β(e) sat(s/µ) + ∆(·)

ζ = [ e0, . . . , eρ−1 ]ᵀ

α̃1(‖z‖) ≤ V1(z, w, r) ≤ α̃2(‖z‖)

(∂V1/∂z) f̃0(z, e, w, r) ≤ −α̃3(‖z‖),   ∀ ‖z‖ ≥ γ̃(‖e‖)

V2(ζ) = ζᵀPζ,   PA + AᵀP = −I

– p. 15/1
Ω = {|s| ≤ c} ∩ {V2 ≤ c²ρ1} ∩ {V1 ≤ c0}

Ωµ = {|s| ≤ µ} ∩ {V2 ≤ µ²ρ1} ∩ {V1 ≤ α̃2(γ̃(µρ2))}

All trajectories starting in Ω enter Ωµ in finite time and stay
in it thereafter
Inside Ωµ there is a unique equilibrium point at

(z = 0, e = 0, e0 = ē0),   s̄ = k0 ē0,   uss = −β(0) s̄/µ

## Under additional conditions (the origin of ż = f˜0 (z, 0, w, r)

is exponentially stable), local analysis inside Ωµ shows that
for sufficiently small µ all trajectories converge to the
equilibrium point as time tends to infinity

– p. 16/1
Output Feedback: Only e1 is measured

High-gain Observer:

ė0 = e1

u = −β sat( [k0 e0 + k1 e1 + k2 ê2 + · · · + êρ] / µ )

ê˙i = êi+1 + (αi/εⁱ)(e1 − ê1),   1 ≤ i ≤ ρ − 1

ê˙ρ = (αρ/ε^ρ)(e1 − ê1)

– p. 17/1