
Introduction to Optimal Control via LMIs

Matthew M. Peet
Arizona State University

Lecture 01: Optimal Control via LMIs


Summary

Powerful New Tools


• Convex Optimization
  – LMIs
  – Sum-of-Squares

Many old problems have been solved


• H∞ and H2 optimal control
• Nonlinear stability analysis
• Analysis and Control of delayed and PDE systems

Many questions are still unresolved


• Control of nonlinear Systems
• Nonlinear Programming (partially resolved)

Question: What is meant by a “solution”?

M. Peet Lecture 01: 1 / 135


Outline

Lectures 1-2
1. Linear Systems
2. Convex Optimization and Linear Matrix Inequalities
3. Optimal Control
4. LMI Solutions to the H∞ and H2 Optimal Control Problems



Signal Spaces
L2 space

Definition 1.
L2 [0, ∞) is the Hilbert space of functions f : R+ → Rⁿ with inner product

    ⟨u, v⟩_{L2} = ∫₀^∞ u(t)ᵀ v(t) dt

L2 [0, ∞) inherits the norm

    ‖u‖²_{L2} = ∫₀^∞ ‖u(t)‖² dt
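As a quick numerical sketch (my own example, not from the lecture): for u(t) = e^(−t), the norm integral is ∫₀^∞ e^(−2t) dt = 1/2, which a simple midpoint-rule quadrature reproduces.

```python
import math

def l2_norm_squared(u, T=40.0, n=400000):
    """Approximate ||u||_{L2}^2 = integral of ||u(t)||^2 over [0, T] (midpoint rule).
    T truncates the infinite upper limit; fine for exponentially decaying signals."""
    h = T / n
    return sum(u((k + 0.5) * h) ** 2 for k in range(n)) * h

# u(t) = e^{-t}: the exact value of the integral is 1/2
approx = l2_norm_squared(lambda t: math.exp(-t))
print(approx)  # ≈ 0.5
```

The truncation point T and grid size n are arbitrary choices; any decaying signal with a known closed-form integral works as a check.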



Operator Theory
Linear Operators

Definition 2.
The normed space of bounded linear operators from X to Y is denoted L(X, Y), with norm

    ‖P‖_{L(X,Y)} := sup_{x∈X, x≠0} ‖Px‖_Y / ‖x‖_X

• Satisfies the properties of a norm


• This type of norm is called an “induced” norm
• Notation: L(X) := L(X, X)
• If X is a Banach space, then L(X, Y ) is a Banach space
Properties: Suppose G1 ∈ L(X, Y ) and G2 ∈ L(Y, Z)
• Then G2 G1 ∈ L(X, Z).
• kG2 G1 kL(X,Z) ≤ kG2 kL(Y,Z) kG1 kL(X,Y ) .
• Composition forms an algebra.
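For a finite-dimensional illustration (a sketch of mine, not from the slides): the induced 2-norm of a matrix can be estimated by sweeping unit vectors; for diag(3, 1) the supremum is attained at the first unit vector and equals 3.

```python
import math

def induced_norm_2x2(P, samples=100000):
    """Estimate sup_{x != 0} ||P x|| / ||x|| for a 2x2 matrix P
    by sweeping unit vectors x = (cos a, sin a), so ||x|| = 1."""
    best = 0.0
    for k in range(samples):
        a = 2 * math.pi * k / samples
        x = (math.cos(a), math.sin(a))
        Px = (P[0][0] * x[0] + P[0][1] * x[1],
              P[1][0] * x[0] + P[1][1] * x[1])
        best = max(best, math.hypot(*Px))
    return best

print(induced_norm_2x2([[3.0, 0.0], [0.0, 1.0]]))  # ≈ 3.0
```

The sample count is an arbitrary choice; for symmetric matrices the exact answer is the largest eigenvalue magnitude, which this sweep approaches from below.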
Laplace Transform

Definition 3.
Given u ∈ L2 [0, ∞), the Laplace Transform of u is û = Λu, where

    û(s) = (Λu)(s) = lim_{T→∞} ∫₀^T u(t) e^{−st} dt

if this limit exists.


Λ is a bounded linear operator: Λ ∈ L(L2, H2).
• Λ : L2 → H2.
• The norm ‖Λ‖_{L(L2,H2)} is

    ‖Λ‖ = sup_{u∈L2} ‖Λu‖_{H2} / ‖u‖_{L2} = ???



H2 - A Space of Integrable Analytic Functions

Definition 4.
A complex function is analytic on a domain if its Taylor series converges everywhere in the domain.

Definition 5.
A function û : C̄+ → Cⁿ is in H2 if
1. û(s) is analytic on the Open RHP (denoted C+)
2. For almost every real ω,

       lim_{σ→0+} û(σ + ıω) = û(ıω)

   – Which means continuous on the imaginary axis
3.
       sup_{σ≥0} ∫_{−∞}^∞ ‖û(σ + ıω)‖₂² dω < ∞

   – Which means integrable on every vertical line.



The Maximum Modulus Principle

Theorem 6 (Maximum Modulus).
An analytic function cannot attain its extrema in the interior of its domain.

Hence if û satisfies 1) and 2), then

    sup_{σ≥0} ∫_{−∞}^∞ ‖û(σ + ıω)‖₂² dω = ∫_{−∞}^∞ ‖û(ıω)‖₂² dω

We equip H2 with a norm and inner product

    ‖û‖²_{H2} = (1/2π) ∫_{−∞}^∞ ‖û(ıω)‖₂² dω,    ⟨û, ŷ⟩_{H2} = (1/2π) ∫_{−∞}^∞ û(ıω)* ŷ(ıω) dω



Paley-Wiener

Theorem 7.
1. If u ∈ L2 [0, ∞), then Λu ∈ H2.
2. If û ∈ H2, then there exists a u ∈ L2 [0, ∞) such that û = Λu (onto).

• Shows that H2 is exactly the image of Λ on L2 [0, ∞)
• Shows the map is invertible

Definition 8.
The inverse of the Laplace transform, Λ⁻¹ : H2 → L2 [0, ∞), is

    u(t) = (Λ⁻¹û)(t) = (1/2π) ∫_{−∞}^∞ e^{σt} e^{ıωt} û(σ + ıω) dω

where σ can be any real number.



Corollary

Lemma 9.

    ⟨Λu, Λy⟩_{H2} = ⟨u, y⟩_{L2}

• Thus Λ is unitary.
• L2 [0, ∞) and H2 are isomorphic.

    ‖Λ‖ = sup_{u∈L2} ‖Λu‖_{H2} / ‖u‖_{L2} = 1



H∞ - A Space of Bounded Analytic Functions

Definition 10.
A function Ĝ : C̄+ → C^{n×m} is in H∞ if
1. Ĝ(s) is analytic on the open RHP, C+.
2.
       lim_{σ→0+} Ĝ(σ + ıω) = Ĝ(ıω)
3.
       sup_{s∈C+} σ̄(Ĝ(s)) < ∞

• A Banach space with norm

    ‖Ĝ‖_{H∞} = ess sup_{ω∈R} σ̄(Ĝ(ıω))



H∞ (A Signal Space) and Multiplier Operators

Every element of H∞ defines a multiplication operator.

Definition 11.
Given Ĝ ∈ H∞, define MĜ ∈ L(H2) by

    (MĜ û)(s) = Ĝ(s) û(s)

for û ∈ H2.

Functions vs. Operators
• Ĝ is a function of a complex variable.
• MĜ is an operator (a function of functions...).



Causal LTI Systems map to H∞

For any analytic functions, û and Ĝ, the function

ŷ(s) = Ĝ(s)û(s)

is analytic.
• Thus MĜ : H2 → H2 .
• Thus Λ−1 MĜ Λ maps L2 [0, ∞) → L2 [0, ∞).

Theorem 12.
G is a Causal, Linear, Time-Invariant Operator on L2 if and only if there exists
some Ĝ ∈ H∞ such that G = Λ−1 MĜ Λ.

(ΛGu)(ıω) = Ĝ(ıω)û(ıω)

H∞ is the space of transfer functions for linear time-invariant systems.



H∞ - The space of “Transfer Functions”

From Paley-Wiener, if G = Λ⁻¹ MĜ Λ:

Theorem 13.

    ‖G‖_{L(L2)} = ‖MĜ‖_{L(H2)} = ‖Ĝ‖_{H∞}

The Gain of the system G can be calculated as ‖Ĝ‖_{H∞}.
• This is the motivation for H∞ control:
  – minimize sup_u ‖Gu‖_{L2} / ‖u‖_{L2}
  – i.e., minimize the maximum energy gain from input to output.

Conclusion: H∞ provides a complete parametrization of the space of causal bounded linear time-invariant operators.



Rational Transfer Functions (RH∞)

The space of bounded analytic functions, H∞, is infinite-dimensional.
• This makes it hard to design optimal controllers.
We usually restrict ourselves to state-space systems and state-space controllers.

Definition 14.
The space of rational functions is defined as

    R := { p(s)/q(s) : p, q are polynomials }

We define the following rational subspaces:

    RH2 = R ∩ H2
    RH∞ = R ∩ H∞

Note that RH2 and RH∞ are not complete (Banach) spaces.



Rational Transfer Functions (RH∞ )

RH∞ is the set of proper rational functions with no poles in the closed right
half-plane (CRHP).

Definition 15.
• A rational function r(s) = p(s)/q(s) is Proper if the degree of p is less than or equal to the degree of q.
• A rational function r(s) = p(s)/q(s) is Strictly Proper if the degree of p is less than the degree of q.

Proposition 1.
Ĝ ∈ RH∞ if and only if Ĝ is proper with no poles in the closed right half-plane.



State-Space Systems

Define a State-Space System G : L2 → L2 by y = Gu if
    ẋ(t) = Ax(t) + Bu(t)
    y(t) = Cx(t) + Du(t).

Theorem 16.
• For any stable state-space system G, there exists some Ĝ ∈ RH∞ such that
      G = Λ⁻¹ MĜ Λ
• For any Ĝ ∈ RH∞, the operator G = Λ⁻¹ MĜ Λ can be represented in state-space for some A, B, C and D where A is Hurwitz.

For a state-space system (A, B, C, D),

    Ĝ(s) = C(sI − A)⁻¹B + D

State-Space is NOT Unique. For any invertible T,
• Ĝ = C(sI − A)⁻¹B + D = CT⁻¹(sI − TAT⁻¹)⁻¹TB + D
  – (A, B, C, D) and (TAT⁻¹, TB, CT⁻¹, D) both represent the system G.
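As a small sketch (my own example, not from the slides): for the scalar system A = −1, B = C = 1, D = 0 the formula gives Ĝ(s) = 1/(s + 1), so |Ĝ(ıω)| = 1/√(1 + ω²).

```python
def transfer_fn(A, B, C, D):
    """Return Ĝ(s) = C (s - A)^{-1} B + D for a SCALAR state-space system."""
    return lambda s: C * (1.0 / (s - A)) * B + D

G = transfer_fn(-1.0, 1.0, 1.0, 0.0)   # Ĝ(s) = 1/(s + 1)
print(abs(G(1j)))                      # |Ĝ(ı·1)| = 1/sqrt(2) ≈ 0.7071
print(G(0))                            # DC gain Ĝ(0) = 1.0
```

The scalar restriction is just to avoid a linear-algebra library; the matrix case replaces the division by a matrix inverse.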
Optimal Control Framework
2-input 2-output Framework

[Block diagram: the Plant maps exogenous inputs w and actuator inputs u to regulated outputs z and sensed outputs y.]

We introduce the control framework by separating internal signals from external signals.

Output Signals:
• z: Output to be controlled/minimized
  – Regulated output
  – The regulated outputs z are every output signal from the model.
• y: Output used by the controller
  – Measured in real-time by sensors

The same signal may appear in both outputs,
• e.g. if you can measure what you want to minimize.

2-input 2-output Framework

[Block diagram: the Plant maps exogenous inputs w and actuator inputs u to regulated outputs z and sensed outputs y.]

Input Signals:
• w: Disturbance, tracking signal, etc.
  – Exogenous input
• u: Output from controller
  – Input to actuator
  – Not related to external inputs

The actuator inputs u are those inputs to the system that can be manipulated; the exogenous inputs w are all other inputs.



The Optimal Control Framework

This interconnection is called the (lower) star-product of P and K, or the (lower) linear-fractional transformation (LFT).
The controller closes the loop from y to u.

[Block diagram: plant P with inputs (w, u) and outputs (z, y); controller K from y to u.]

For a linear system P, we have 4 subsystems:

    [z]   [P11  P12] [w]
    [y] = [P21  P22] [u]

    P11 : w ↦ z    P12 : u ↦ z
    P21 : w ↦ y    P22 : u ↦ y

Note that all Pij can themselves be MIMO.
The Regulator

Example: the regulator.

[Block diagram: the plant P0 is driven by q = nproc + u; its output r is measured as y = r + nsensor; the controller K maps y to u.]

Signal assignments:
    z1 = yp        nproc = w1
    z2 = u         nsensor = w2

The reconfigured plant P is given by

    [z1(t)]   [P0  0  P0] [w1(t)]
    [z2(t)] = [0   0  I ] [w2(t)]
    [y(t) ]   [P0  I  P0] [u(t) ]

Suppose P0 is
    ẋ = Ax + Bq
    r = Cx + Dq

If we define q = w1 + u and r = P0 q, then
    z1 = r,    z2 = u,    y = r + w2.

Substituting leads to

        [A  B  0  B]
    P = [C  D  0  D]
        [0  0  0  I]
        [C  D  I  D]



Diagnostics

Command inputs and diagnostic outputs:

[Block diagram: the System maps wsystem to zsystem and usystem to ysystem; the Controller additionally accepts command inputs and produces diagnostic outputs.]

Formulate the above as

[Block diagram: an augmented Plant maps (wsystem, wcommands) to (zsystem, zdiag); the Controller maps (ysystem, ycommands) to (usystem, udiag).]



Tracking Control

Example: a tracking problem.

[Block diagram: the tracking input r and process noise nproc enter around the plant P0; the sensor adds nsensor; the controller K maps y to u. Weights Werr, Wact, Wproc and Wsens may be attached to e, u, nproc and nsensor.]

Signal assignments:
    z1 = e         r = w1
    z2 = u         nproc = w2
                   nsensor = w3
where
    e = tracking error        r = tracking input
    nproc = process noise     nsensor = sensor noise

With q = nproc + u, the closed signals are
    z1 = r − P0(nproc + u)
    z2 = u
    y1 = r
    y2 = w3 + P0(nproc + u)

and the reconfigured plant is

        [I  −P0  0  −P0]
    P = [0   0   0   I ]
        [I   0   0   0 ]
        [0   P0  I   P0]

Linear Fractional Transformation

Close the loop:

[Block diagram: plant P with inputs (w, u), outputs (z, y); controller K closes the loop u = Ky.]

Plant:
                                        [A   B1   B2 ]
    [z]   [P11  P12] [w]
    [y] = [P21  P22] [u]    where   P = [C1  D11  D12]
                                        [C2  D21  D22]
Controller:
                            [AK  BK]
    u = Ky    where     K = [CK  DK]
Linear Fractional Transformation

    z = P11 w + P12 u
    y = P21 w + P22 u
    u = Ky

Solving for u,
    u = K P21 w + K P22 u
Thus
    (I − K P22) u = K P21 w
    u = (I − K P22)⁻¹ K P21 w

Now we solve for z:

    z = [P11 + P12 (I − K P22)⁻¹ K P21] w
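A scalar sanity check (my own sketch, with numbers in place of operators): simulating the loop equations directly agrees with the closed-form LFT.

```python
def lower_lft(P11, P12, P21, P22, K):
    """Closed-loop map w -> z for scalars: S(P, K) = P11 + P12 (1 - K P22)^{-1} K P21."""
    return P11 + P12 * K * P21 / (1.0 - K * P22)

# Direct loop simulation must agree: z = P11 w + P12 u, y = P21 w + P22 u, u = K y
P11, P12, P21, P22, K, w = 1.0, 2.0, 3.0, 0.5, -1.0, 1.0
u = K * P21 * w / (1.0 - K * P22)   # solve u = K (P21 w + P22 u) for u
z = P11 * w + P12 * u
print(z, lower_lft(P11, P12, P21, P22, K))  # both -3.0
```

The numbers are arbitrary; the point is that eliminating the internal signals (u, y) reproduces exactly the formula derived above.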



Linear Fractional Transformation

This expression is called the Linear Fractional Transformation of (P, K), denoted

    S(P, K) := P11 + P12 (I − K P22)⁻¹ K P21

AKA: Lower Star Product.
Other Fractional Transformations

[Block diagrams: the lower LFT closes the bottom loop of P with K; the upper LFT closes the top loop of P with Q.]

Lower LFT:
    S(P, K) := P11 + P12 (I − K P22)⁻¹ K P21
Upper LFT:
    S̄(P, Q) := P22 + P21 Q (I − P11 Q)⁻¹ P12



Other Fractional Transformations

Star Product:

[Block diagram: P and K interconnected through (y, u), with external channels (z1, w1) on P and (z2, w2) on K.]

                [S(P, K11)                    P12 (I − K11 P22)⁻¹ K12]
    S(P, K) :=  [K21 (I − P22 K11)⁻¹ P21      S̄(K, P22)              ]

Well-Posedness

The interconnection doesn’t always make sense. Suppose

        [A   B1   B2 ]              [AK  BK]
    P = [C1  D11  D12]    and   K = [CK  DK].
        [C2  D21  D22]

Definition 17.
The interconnection S(P, K) is well-posed if for any smooth w and any x(0) and xK(0), there exist functions x, xK, u, y, z such that

    ẋ(t) = Ax(t) + B1 w(t) + B2 u(t)         ẋK(t) = AK xK(t) + BK y(t)
    z(t) = C1 x(t) + D11 w(t) + D12 u(t)     u(t) = CK xK(t) + DK y(t)
    y(t) = C2 x(t) + D21 w(t) + D22 u(t)

Note: The solution does not need to be in L2.
• Says nothing about stability.



Well-Posedness

In state-space format, the closed-loop system is:

    [ẋ(t) ]   [A  0 ] [x(t) ]   [B2  0 ] [u(t)]   [B1]
    [ẋK(t)] = [0  AK] [xK(t)] + [0   BK] [y(t)] + [0 ] w(t)

    z(t) = [C1  0] [x(t); xK(t)] + [D12  0] [u(t); y(t)] + D11 w(t)

From
    u(t) = DK y(t) + CK xK(t)
    y(t) = D22 u(t) + C2 x(t) + D21 w(t)
we have

    [I    −DK] [u(t)]   [0   CK] [x(t) ]   [0  ]
    [−D22  I ] [y(t)] = [C2  0 ] [xK(t)] + [D21] w(t)

Because the rest is state-space, the interconnection is well-posed if and only if the matrix [I −DK; −D22 I] is invertible.
Well-Posedness

Question: When is
    [I    −DK]
    [−D22  I ]
invertible?

Answer: 2×2 block matrices have a closed-form inverse:

    [I    −DK]⁻¹   [I + DK Q D22   DK Q]
    [−D22  I ]   = [Q D22          Q   ]

where Q = (I − D22 DK)⁻¹.

Proposition 2.
The interconnection S(P, K) is well-posed if and only if (I − D22 DK) is invertible.
• Equivalently, (I − DK D22) is invertible.
• Sufficient conditions: DK = 0 or D22 = 0.
• To optimize over K, we will need to enforce this constraint somehow.
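A quick numerical sketch (mine, not from the slides): for scalar D22 and DK the condition reduces to 1 − D22·DK ≠ 0, and the closed-form block inverse can be verified directly.

```python
def well_posed(D22, DK, tol=1e-12):
    """Scalar well-posedness test: S(P, K) is well-posed iff (1 - D22*DK) != 0."""
    return abs(1.0 - D22 * DK) > tol

print(well_posed(0.5, 1.0))   # True  (1 - 0.5 = 0.5)
print(well_posed(0.5, 2.0))   # False (1 - 1.0 = 0)

# Check the closed-form inverse of M = [[1, -DK], [-D22, 1]]: M * Minv = I
D22, DK = 0.5, 1.0
Q = 1.0 / (1.0 - D22 * DK)
M = [[1.0, -DK], [-D22, 1.0]]
Minv = [[1.0 + DK * Q * D22, DK * Q], [Q * D22, Q]]
prod = [[sum(M[i][k] * Minv[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
print(prod)  # [[1.0, 0.0], [0.0, 1.0]]
```

Here 1-by-1 "blocks" stand in for the matrix blocks; the same Q-based formula applies verbatim in the matrix case.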
Optimal Control

Definition 18.
The Optimal H∞-Control Problem is

    min_{K∈H∞} ‖S(P, K)‖_{H∞}

• This is the Optimal H∞ Dynamic-Output-Feedback Control Problem

Another class of optimal control problem:

Definition 19.
The Optimal H2-Control Problem is

    min_{K∈H∞} ‖S(P, K)‖_{H2}    such that S(P, K) ∈ H∞.



Optimal Control

Choose K to minimize

    ‖P11 + P12 (I − K P22)⁻¹ K P21‖_{H∞}

Equivalently, choose [AK BK; CK DK] to minimize the H∞-norm of the closed-loop system (Acl, Bcl, Ccl, Dcl):

    Acl = [A  0; 0  AK] + [B2  0; 0  BK] [I  −DK; −D22  I]⁻¹ [0  CK; C2  0]
    Bcl = [B1 + B2 DK Q D21; BK Q D21]
    Ccl = [C1  0] + [D12  0] [I  −DK; −D22  I]⁻¹ [0  CK; C2  0]
    Dcl = D11 + D12 DK Q D21

where Q = (I − D22 DK)⁻¹.

In either case, the problem is Nonlinear.



Optimal Control

There are several ways to address the problem of nonlinearity:

    ‖P11 + P12 (I − K P22)⁻¹ K P21‖_{H∞}

Variable Substitution: The easiest way to make the problem linear is by declaring a new variable R := (I − K P22)⁻¹ K.
The optimization problem becomes: Choose R to minimize

    ‖P11 + P12 R P21‖_{H∞}



Optimal Control

We optimize

    ‖P11 + P12 (I − K P22)⁻¹ K P21‖_{H∞} = ‖P11 + P12 R P21‖_{H∞}

Once we have the optimal R, we can recover the optimal K as

    K = R(I + P22 R)⁻¹

Problems:
• How to optimize ‖·‖_{H∞}?
• Is the controller stable?
  – Does the inverse (I + P22 R)⁻¹ exist? Yes.
  – Is it a bounded linear operator?
  – In which space?
• An important branch of control:
  – Coprime factorization
  – Youla parameterization
• We will sidestep this body of work.
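For scalars (a sketch of mine), the substitution and its inverse round-trip exactly:

```python
def k_to_r(K, P22):
    """R = (1 - K P22)^{-1} K for scalars."""
    return K / (1.0 - K * P22)

def r_to_k(R, P22):
    """Recover K = R (1 + P22 R)^{-1} for scalars."""
    return R / (1.0 + P22 * R)

K, P22 = -2.0, 0.25
R = k_to_r(K, P22)
print(R, r_to_k(R, P22))  # R = -4/3, and the recovered K is -2.0 again
```

The key point: the map K ↦ R is invertible whenever the loop is well-posed, so optimizing over R loses nothing.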


What is Optimization?

Optimization can be posed in functional form:

    min_{x∈F} objective function : subject to inequality constraints

which may have the form

    min_{x∈F} f0(x) : subject to fi(x) ≥ 0,  i = 1, ..., k

Special Cases:
• Linear Programming
  – fi(x) = Ax − b (affine functions fi : Rⁿ → Rᵐ)
  – EASY: Simplex/Ellipsoid Algorithm
• Polynomial Programming
  – The fi : Rⁿ → Rᵐ are polynomials. (NP-HARD)
• Semidefinite Programming
  – The fi : Rⁿ → R^{m×m} are affine. (EASY)

For semidefinite programming, what does fi(x) ≥ 0 mean?

How Hard is Optimization?

Why is Linear Programming easy and polynomial programming hard?

    min_{x∈F} f0(x) : subject to fi(x) ≥ 0,  i = 1, ..., k

The Geometric Representation is equivalent:

    min_{x∈F} f0(x) : subject to x ∈ S

where S := {x : fi(x) ≥ 0, i = 1, ..., k}.

The Pure Geometric Representation:

    min_{γ, x∈F} γ : subject to (γ, x) ∈ S′

where S′ := {(γ, x) : γ − f0(x) ≥ 0, fi(x) ≥ 0, i = 1, ..., k}.
• Two optimization problems are Equivalent if a solution to one can be used to construct a solution to the other.
Convexity

Definition 20.
A set Q is convex if for any x, y ∈ Q,

    {µx + (1 − µ)y : µ ∈ [0, 1]} ⊂ Q.

The line connecting any two points of the set lies in the set.



Convex Optimization

Definition 21.
Consider the optimization problem

    min_{γ, x∈F} γ : subject to (γ, x) ∈ S′.

The problem is Convex Optimization if the set S′ is convex.

Example:
    minimize    x1 + x2
    subject to  3x1 + x2 ≥ 3
                x2 ≥ 1
                x1 ≤ 4
                −x1 + 5x2 ≤ 20
                x1 + 4x2 ≤ 20

Convex optimization problems have the property that the gradient projection algorithm (or Newton iteration with barrier functions) will always converge to the global optimum.

The question is, of course, when is the set S′ convex?
• For polynomial optimization, a sufficient condition is that all functions fi are convex.
  – The level set of a convex function is a convex set.



Non-Convexity and Local Optima

Newton’s Algorithm: designed to solve f(x*) = 0 (is min f(x) ≥ 0?)

    x_{k+1} = x_k − t f(x_k)/f′(x_k)

where t is the step-size. (From df/dx ≈ (f(x) − f(x*))/(x − x*).)

For non-convex optimization, Newton descent may get stuck at local optima.

For constrained optimization, constraints are represented by barrier functions.

Convex Cones

Definition 22.
A set Q is a cone if for any x ∈ Q,

    {µx : µ ≥ 0} ⊂ Q.

A subspace is a cone, but not all cones are subspaces.
• If the cone is also convex, it is a convex cone.
• A cone is convex if and only if it is closed under addition.



What is an Inequality Constraint?

Question: What does f(x) ≥ 0 mean?
• What does y ≥ 0 mean?
If y is a Scalar (y ∈ R), then y ≥ 0 if y ∈ [0, ∞).

Question: What if y is a vector (y ∈ Rⁿ)?
• Then we have several options...

Examples: Let y ∈ Rⁿ.
• Positive Orthant: y ≥ 0 if yᵢ ≥ 0 for i = 1, ..., n.
• Half-space: y ≥ 0 if Σᵢ yᵢ ≥ 0 (1ᵀy ≥ 0).
  – More generally, y ≥ 0 if aᵀy + b ≥ 0.
• Intersection of Half-spaces: y ≥ 0 if aᵢᵀy + bᵢ ≥ 0 for i = 1, ..., n.
  – The positive orthant is the intersection of half-spaces with bᵢ = 0 and aᵢ = eᵢ (unit vectors).

Question: What if y is a matrix???



Positivity

What is an inequality? What does ≥ 0 mean?
• An inequality implies a partial ordering:
  – x ≥ y if x − y ≥ 0
• Any convex cone C defines a partial ordering:
  – x − y ≥ 0 if x − y ∈ C
• The ordering is only partial because x ≰ 0 does not imply x ≥ 0
  – −x ∉ C does not imply x ∈ C.
  – x may be indefinite.

Conclusion:
• Convex Optimization includes positivity induced from any partial ordering.
• In particular, we focus on Matrix Positivity.



Matrix Positivity

Definition 23.
A symmetric matrix P ∈ Sⁿ is Positive Semidefinite, denoted P ≥ 0, if

    xᵀPx ≥ 0 for all x ∈ Rⁿ

Definition 24.
A symmetric matrix P ∈ Sⁿ is Positive Definite, denoted P > 0, if

    xᵀPx > 0 for all x ≠ 0

• P is Negative Semidefinite if −P ≥ 0
• P is Negative Definite if −P > 0
• A matrix which is neither Positive nor Negative Semidefinite is Indefinite
The set of positive (or negative) semidefinite matrices is a convex cone.
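A small sketch (mine, not from the slides): for a symmetric 2×2 matrix, positive definiteness can be checked by Sylvester's criterion (all leading principal minors positive), which is equivalent to the eigenvalue test of the next slide.

```python
def is_positive_definite_2x2(P):
    """Sylvester's criterion for a symmetric 2x2 matrix:
    P > 0 iff P[0][0] > 0 and det(P) > 0."""
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return P[0][0] > 0 and det > 0

print(is_positive_definite_2x2([[2.0, 1.0], [1.0, 2.0]]))   # True  (eigenvalues 1 and 3)
print(is_positive_definite_2x2([[1.0, 2.0], [2.0, 1.0]]))   # False (indefinite: eigenvalues -1 and 3)
```

For n > 2 the same criterion checks all n leading minors; numerical codes use a Cholesky factorization instead.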



Positive Matrices

Lemma 25.
P ∈ Sⁿ is positive definite if and only if all its eigenvalues are positive.

Things which are easy to prove:
• A Positive Definite matrix is invertible.
• The inverse of a positive definite matrix is positive definite.
• If P > 0, then TPTᵀ ≥ 0 for any T. If T is invertible, then TPTᵀ > 0.

Lemma 26.
For any P > 0, there exists a positive square root P^{1/2} > 0 such that P = P^{1/2} P^{1/2}.



Semidefinite Programming - Dual Form

    minimize    trace(CX)
    subject to  trace(Aᵢ X) = bᵢ for all i
                X ⪰ 0

• The variable X is a symmetric matrix
• X ⪰ 0 means X is positive semidefinite
• The feasible set is the intersection of an affine set with the positive semidefinite cone {X ∈ Sⁿ | X ⪰ 0}

Recall trace(CX) = Σ_{i,j} C_{i,j} X_{j,i}.
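The trace identity at the bottom is easy to check numerically (my own sketch):

```python
def trace_product(C, X):
    """trace(C X) two ways: via the matrix product, and as sum_{i,j} C[i][j] * X[j][i]."""
    n = len(C)
    CX = [[sum(C[i][k] * X[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    via_product = sum(CX[i][i] for i in range(n))
    via_sum = sum(C[i][j] * X[j][i] for i in range(n) for j in range(n))
    return via_product, via_sum

C = [[1.0, 2.0], [3.0, 4.0]]
X = [[5.0, 6.0], [6.0, 8.0]]   # symmetric, as an SDP variable would be
print(trace_product(C, X))      # (67.0, 67.0)
```

This is why trace(CX) is a linear function of the entries of X: the SDP objective is linear even though the constraint X ⪰ 0 is not.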



SDPs with Explicit Variables - Primal Form

We can also explicitly parametrize the affine set to give

    minimize    cᵀx
    subject to  F0 + x1 F1 + x2 F2 + · · · + xn Fn ⪰ 0

where F0, F1, ..., Fn are symmetric matrices.

The inequality constraint is called a Linear Matrix Inequality (LMI); e.g.,

    [x1 − 3    x1 + x2   −1]
    [x1 + x2   x2 − 4     0] ⪰ 0
    [−1        0         x1]

which is equivalent to

    [−3  0  −1]      [1  1  0]      [0  1  0]
    [ 0 −4   0] + x1 [1  0  0] + x2 [1  1  0] ⪰ 0
    [−1  0   0]      [0  0  1]      [0  0  0]



Linear Matrix Inequalities

Linear Matrix Inequalities are often a simpler way to solve control problems.
Common Form:

    Find X :    Σᵢ Aᵢ X Bᵢ + Q > 0

The most important Linear Matrix Inequality is the Lyapunov Inequality.

There are several very efficient LMI/SDP solvers for Matlab:
• SeDuMi
  – Fast, but somewhat unreliable.
  – See http://sedumi.ie.lehigh.edu/
• LMI Lab (part of Matlab’s Robust Control Toolbox)
  – Universally disliked
  – See http://www.mathworks.com/help/robust/lmis.html
• YALMIP (a parser for other solvers)
  – See http://users.isy.liu.se/johanl/yalmip/

I recommend YALMIP with solver SeDuMi.

Semidefinite Programming (SDP): Common Examples in Control

Some simple examples of LMI conditions in control include:
• Stability
    AᵀX + XA ≺ 0,    X ≻ 0
• Stabilization
    AX + BZ + XAᵀ + ZᵀBᵀ ≺ 0,    X ≻ 0
• H2 Synthesis
    min Tr(W) :
    [A  B2] [X; Z] + [X  Zᵀ] [Aᵀ; B2ᵀ] + B1 B1ᵀ ≺ 0
    [X            (CX + DZ)ᵀ]
    [(CX + DZ)    W         ] ≻ 0

We will go beyond these examples.
Lyapunov Theory

LMIs unite time-domain and frequency-domain analysis:

    ẋ(t) = f(x(t))

Theorem 27 (Lyapunov).
Suppose there exists a continuously differentiable function V for which V(0) = 0 and V(x) > 0 for x ≠ 0. Furthermore, suppose lim_{‖x‖→∞} V(x) = ∞ and

    lim_{h→0+} [V(x(t + h)) − V(x(t))]/h = (d/dt) V(x(t)) < 0

for any x such that ẋ(t) = f(x(t)). Then for any x(0) ∈ Rⁿ the system of equations ẋ(t) = f(x(t)) has a unique solution which is stable in the sense of Lyapunov.



The Lyapunov Inequality (Our First LMI)

Lemma 28.
A is Hurwitz if and only if there exists a P > 0 such that

    AᵀP + PA < 0

Proof.
Suppose there exists a P > 0 such that AᵀP + PA < 0.
• Define the Lyapunov function V(x) = xᵀPx.
• Then V(x) > 0 for x ≠ 0 and V(0) = 0.
• Furthermore,
    V̇(x(t)) = ẋ(t)ᵀPx(t) + x(t)ᵀPẋ(t)
            = x(t)ᵀAᵀPx(t) + x(t)ᵀPAx(t)
            = x(t)ᵀ(AᵀP + PA)x(t)
• Hence V̇(x(t)) < 0 for all x ≠ 0. Thus the system is globally stable.
• Global stability implies A is Hurwitz.
The Lyapunov Inequality

Proof.
For the other direction, if A is Hurwitz, let

    P = ∫₀^∞ e^{Aᵀs} e^{As} ds

• Converges because A is Hurwitz.
• Furthermore,
    PA = ∫₀^∞ e^{Aᵀs} e^{As} A ds = ∫₀^∞ e^{Aᵀs} (d/ds e^{As}) ds
       = [e^{Aᵀs} e^{As}]₀^∞ − ∫₀^∞ (d/ds e^{Aᵀs}) e^{As} ds
       = −I − Aᵀ ∫₀^∞ e^{Aᵀs} e^{As} ds = −I − AᵀP
• Thus PA + AᵀP = −I < 0.
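A scalar sketch of this construction (mine, not from the slides): with A = −a (a > 0), P = ∫₀^∞ e^(−2as) ds = 1/(2a), and indeed AᵀP + PA = −1.

```python
import math

def lyapunov_P(a, T=60.0, n=600000):
    """Midpoint-rule approximation of P = integral of e^{-2 a s} over [0, T],
    i.e. the scalar Lyapunov integral for A = -a (exact value 1/(2a))."""
    h = T / n
    return sum(math.exp(-2 * a * (k + 0.5) * h) for k in range(n)) * h

a = 2.0
P = lyapunov_P(a)                 # exact value is 1/(2a) = 0.25
print(P, (-a) * P + P * (-a))     # ≈ 0.25, ≈ -1.0  (A'P + PA = -I)
```

The truncation at T is harmless because A Hurwitz makes the integrand decay exponentially, which is exactly the convergence claim in the proof.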



The Lyapunov Inequality

Other Versions:

Lemma 29.
(A, B) is controllable if and only if there exists an X > 0 such that

    AX + XAᵀ + BBᵀ ≤ 0

Lemma 30.
(C, A) is observable if and only if there exists an X > 0 such that

    AᵀX + XA + CᵀC ≤ 0



The Static State-Feedback Problem

Let’s start with the problem of stabilization.

Definition 31.
The Static State-Feedback Problem is to find a feedback matrix K such that

    ẋ(t) = Ax(t) + Bu(t)
    u(t) = Kx(t)

is stable.

• Find K such that A + BK is Hurwitz.

Can also be put in LMI format:

    Find X > 0, K :
    X(A + BK) + (A + BK)ᵀX < 0

Problem: Bilinear in K and X.

The Static State-Feedback Problem

• The bilinear problem in K and X is a common paradigm.
• Bilinear optimization is not convex.
• To convexify the problem, we use a change of variables.

Problem 1:
    Find X > 0, K :
    X(A + BK) + (A + BK)ᵀX < 0
Problem 2:
    Find P > 0, Z :
    AP + BZ + PAᵀ + ZᵀBᵀ < 0

Definition 32.
Two optimization problems are equivalent if a solution to one will provide a solution to the other.

Theorem 33.
Problem 1 is equivalent to Problem 2.
The Dual Lyapunov Equation

Problem 1:                     Problem 2:
    Find X > 0 :                   Find Y > 0 :
    XA + AᵀX < 0                   YAᵀ + AY < 0

Lemma 34.
Problem 1 is equivalent to Problem 2.

Proof.
First we show 1) solves 2). Suppose X > 0 is a solution to Problem 1. Let Y = X⁻¹ > 0.
• If XA + AᵀX < 0, then
    X⁻¹(XA + AᵀX)X⁻¹ < 0
• Hence
    X⁻¹(XA + AᵀX)X⁻¹ = AX⁻¹ + X⁻¹Aᵀ = AY + YAᵀ < 0
• Therefore, Problem 2 is feasible with solution Y = X⁻¹.

The Dual Lyapunov Equation

Problem 1:                     Problem 2:
    Find X > 0 :                   Find Y > 0 :
    XA + AᵀX < 0                   YAᵀ + AY < 0

Proof.
Now we show 2) solves 1) in a similar manner. Suppose Y > 0 is a solution to Problem 2. Let X = Y⁻¹ > 0.
• Then
    XA + AᵀX = X(AX⁻¹ + X⁻¹Aᵀ)X = X(AY + YAᵀ)X < 0

Conclusion: If V(x) = xᵀPx proves stability of ẋ = Ax,
• then V(x) = xᵀP⁻¹x proves stability of ẋ = Aᵀx.



The Stabilization Problem

Thus we rephrase Problem 1:

Problem 1:                                  Problem 2:
    Find P > 0, K :                             Find X > 0, Z :
    (A + BK)P + P(A + BK)ᵀ < 0                  AX + BZ + XAᵀ + ZᵀBᵀ < 0

Theorem 35.
Problem 1 is equivalent to Problem 2.

Proof.
We will show that 2) solves 1). Suppose X > 0, Z solves 2). Let P = X > 0 and K = ZP⁻¹. Then Z = KP and
    (A + BK)P + P(A + BK)ᵀ = AP + PAᵀ + BKP + PKᵀBᵀ
                           = AP + PAᵀ + BZ + ZᵀBᵀ < 0

Now suppose that P > 0 and K solve 1). Let X = P > 0 and Z = KP. Then
    AP + PAᵀ + BZ + ZᵀBᵀ = (A + BK)P + P(A + BK)ᵀ < 0
The Stabilization Problem

The result can be summarized more succinctly:

Theorem 36.
(A, B) is static-state-feedback stabilizable if and only if there exist some P > 0 and Z such that

    AP + PAᵀ + BZ + ZᵀBᵀ < 0

with u(t) = ZP⁻¹x(t).

Standard Format:

    [A  B] [P; Z] + [P  Zᵀ] [Aᵀ; Bᵀ] < 0
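A scalar sketch (my own numbers, not from the slides): for the unstable system A = 1, B = 1, the pair P = 1, Z = −2 satisfies AP + PAᵀ + BZ + ZᵀBᵀ = 2 − 4 < 0, giving K = Z/P = −2 and a Hurwitz closed loop A + BK = −1.

```python
def stabilizes(A, B, P, Z):
    """Scalar check of the stabilization LMI A P + P A + B Z + Z B < 0 with P > 0,
    then recover K = Z / P and return the closed-loop pole A + B K."""
    lmi = A * P + P * A + B * Z + Z * B
    K = Z / P
    return P > 0 and lmi < 0, A + B * K

feasible, closed_loop = stabilizes(1.0, 1.0, 1.0, -2.0)
print(feasible, closed_loop)  # True -1.0  (Hurwitz)
```

In the matrix case the same recovery K = ZP⁻¹ applies; the scalar version just makes the convexifying substitution Z = KP visible.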



The Schur Complement

Before we get to the main result, recall the Schur complement.

Theorem 37 (Schur Complement).
For any M ∈ Sⁿ, Q ∈ Sᵐ and R ∈ R^{n×m}, the following are equivalent:
1. [M  R; Rᵀ  Q] > 0
2. Q > 0 and M − RQ⁻¹Rᵀ > 0

A commonly used property of positive matrices.

Also Recall: If X > 0,
• then X − εI > 0 for ε sufficiently small.
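A numeric sketch (mine, with scalar blocks M, R, Q): the bordered matrix is positive definite exactly when Q > 0 and the Schur complement M − R²/Q > 0.

```python
def pd_2x2(a, b, c):
    """Is the symmetric matrix [[a, b], [b, c]] positive definite? (Sylvester)"""
    return a > 0 and a * c - b * b > 0

def schur_test(M, R, Q):
    """Compare direct positive-definiteness of [[M, R], [R, Q]] with the
    Schur-complement condition Q > 0 and M - R^2/Q > 0 (scalar blocks)."""
    direct = pd_2x2(M, R, Q)
    schur = Q > 0 and M - R * R / Q > 0
    return direct, schur

print(schur_test(4.0, 1.0, 1.0))   # (True, True):   4 - 1 > 0
print(schur_test(4.0, 3.0, 1.0))   # (False, False): 4 - 9 < 0
```

The two columns agree for every input, which is the content of Theorem 37 in the smallest possible case.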



The KYP Lemma (AKA: The Bounded Real Lemma)

The most important theorem in this lecture.

Lemma 38 (KYP Lemma).
Suppose
    Ĝ(s) = [A  B; C  D].
Then the following are equivalent:
• ‖G‖_{H∞} ≤ γ.
• There exists an X > 0 such that

    [AᵀX + XA   XB ]         [Cᵀ]
    [BᵀX        −γI] + (1/γ) [Dᵀ] [C  D] < 0

Can be used to calculate the H∞-norm of a system.
• Originally used to solve LMIs using graphs (before computers).
• Now used directly instead of graphical methods like Bode.
The feasibility constraints are linear:
• Can be combined with other methods.
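A scalar sketch (my own, not from the lecture): for Ĝ(s) = 1/(s + 1) (A = −1, B = C = 1, D = 0) we have ‖G‖_{H∞} = 1, and the KYP LMI reduces to the 2×2 condition [[−2x + 1/γ, x], [x, −γ]] < 0. It is feasible (e.g. x = 1) for γ = 1.1 but infeasible for every x when γ = 0.9.

```python
def nd_2x2(a, b, c):
    """Is the symmetric matrix [[a, b], [b, c]] negative definite?"""
    return a < 0 and a * c - b * b > 0

def kyp_feasible(gamma, x):
    """Scalar KYP LMI for A = -1, B = C = 1, D = 0:
    [[-2x + 1/gamma, x], [x, -gamma]] < 0."""
    return nd_2x2(-2 * x + 1 / gamma, x, -gamma)

print(kyp_feasible(1.1, 1.0))                                   # True: the bound 1.1 holds
print(any(kyp_feasible(0.9, x / 100) for x in range(1, 1000)))  # False: 0.9 < ||G||_Hinf
```

Searching a grid of x values stands in for the SDP solver; the point is that feasibility of the LMI flips exactly at γ = ‖G‖_{H∞}.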
The KYP Lemma

Proof.
We will only show that ii) implies i). The other direction requires the Hamiltonian, which we have not discussed.
• We will show that if y = Gu, then ‖y‖_{L2} ≤ γ‖u‖_{L2}.
• From the (1,1) block of the LMI, we know that AᵀX + XA < 0, which means A is Hurwitz.
• Because the inequality is strict, there exists some ε > 0 such that

    [AᵀX + XA   XB       ]         [Cᵀ]
    [BᵀX        −(γ−ε)I  ] + (1/γ) [Dᵀ] [C  D]

    [AᵀX + XA   XB ]         [Cᵀ]            [0  0 ]
  = [BᵀX        −γI] + (1/γ) [Dᵀ] [C  D] + ε [0  I ] < 0

• Let y = Gu. Then the state-space representation is
    ẋ(t) = Ax(t) + Bu(t),    x(0) = 0
    y(t) = Cx(t) + Du(t)
The KYP Lemma

Proof.
• Let V(x) = xᵀXx. Then the LMI implies

    [x(t)]ᵀ ( [AᵀX + XA   XB      ]         [Cᵀ]           ) [x(t)]
    [u(t)]  ( [BᵀX        −(γ−ε)I ] + (1/γ) [Dᵀ] [C  D]   ) [u(t)]

  = xᵀ(AᵀX + XA)x + xᵀXBu + uᵀBᵀXx − (γ−ε)uᵀu + (1/γ)yᵀy
  = (Ax + Bu)ᵀXx + xᵀX(Ax + Bu) − (γ−ε)uᵀu + (1/γ)yᵀy
  = ẋ(t)ᵀXx(t) + x(t)ᵀXẋ(t) − (γ−ε)‖u(t)‖² + (1/γ)‖y(t)‖²
  = V̇(x(t)) − (γ−ε)‖u(t)‖² + (1/γ)‖y(t)‖² < 0
The KYP Lemma

Proof.
• Now we have V̇(x(t)) − (γ−ε)‖u(t)‖² + (1/γ)‖y(t)‖² < 0.
• Integrating in time, we get

    ∫₀ᵀ ( V̇(x(t)) − (γ−ε)‖u(t)‖² + (1/γ)‖y(t)‖² ) dt
  = V(x(T)) − V(x(0)) − (γ−ε) ∫₀ᵀ ‖u(t)‖² dt + (1/γ) ∫₀ᵀ ‖y(t)‖² dt < 0

• Because A is Hurwitz, lim_{T→∞} x(T) = 0.
• Hence lim_{T→∞} V(x(T)) = 0.
• Likewise, because x(0) = 0, we have V(x(0)) = 0.



The KYP Lemma

Proof.
• Since V(x(0)) = lim_{T→∞} V(x(T)) = 0,

    lim_{T→∞} ( V(x(T)) − V(x(0)) − (γ−ε) ∫₀ᵀ ‖u(t)‖² dt + (1/γ) ∫₀ᵀ ‖y(t)‖² dt )
  = 0 − 0 − (γ−ε) ∫₀^∞ ‖u(t)‖² dt + (1/γ) ∫₀^∞ ‖y(t)‖² dt
  = −(γ−ε)‖u‖²_{L2} + (1/γ)‖y‖²_{L2} < 0

• Thus
    ‖y‖²_{L2} < (γ² − γε)‖u‖²_{L2}
• By definition, this means ‖G‖²_{H∞} ≤ (γ² − γε) < γ², or
    ‖G‖_{H∞} < γ



The Positive Real Lemma — A Passivity Condition

A variation on the KYP lemma is the positive-real lemma.

Lemma 39.
Suppose
    Ĝ(s) = [A  B; C  D].
Then the following are equivalent:
• G is passive, i.e. ⟨u, Gu⟩_{L2} ≥ 0.
• There exists a P > 0 such that

    [AᵀP + PA    PB − Cᵀ]
    [BᵀP − C     −Dᵀ − D] ≤ 0



Recall: Linear Fractional Transformation

[Block diagram: plant P with inputs (w, u), outputs (z, y); controller K closes the loop u = Ky.]

Plant:
                                        [A   B1   B2 ]
    [z]   [P11  P12] [w]
    [y] = [P21  P22] [u]    where   P = [C1  D11  D12]
                                        [C2  D21  D22]
Controller:
                            [AK  BK]
    u = Ky    where     K = [CK  DK]
Optimal Control

Choose K to minimize

    ‖P11 + P12 (I − K P22)⁻¹ K P21‖_{H∞}

Equivalently, choose [AK BK; CK DK] to minimize the H∞-norm of the closed-loop system (Acl, Bcl, Ccl, Dcl):

    Acl = [A  0; 0  AK] + [B2  0; 0  BK] [I  −DK; −D22  I]⁻¹ [0  CK; C2  0]
    Bcl = [B1 + B2 DK Q D21; BK Q D21]
    Ccl = [C1  0] + [D12  0] [I  −DK; −D22  I]⁻¹ [0  CK; C2  0]
    Dcl = D11 + D12 DK Q D21

where Q = (I − D22 DK)⁻¹.



Optimal Full-State Feedback Control

For the full-state feedback case, y = x, and we consider a controller of the form

    u(t) = F x(t)

Controller:
                            [0  0]
    u = Ky    where     K = [0  F]
Plant:
                                        [A   B1   B2 ]
    [z]   [P11  P12] [w]
    [y] = [P21  P22] [u]    where   P = [C1  D11  D12]
                                        [I   0    0  ]



Optimal Full-State Feedback Control

Thus the closed-loop state-space representation is

    S(P̂, K̂) = [A + B2F     B1 ]
               [C1 + D12F   D11]

By the KYP lemma, ‖S(P̂, K̂)‖_{H∞} < γ if and only if there exists some X > 0 such that

    [(A + B2F)ᵀX + X(A + B2F)   XB1]         [(C1 + D12F)ᵀ]
    [B1ᵀX                       −γI ] + (1/γ) [D11ᵀ         ] [(C1 + D12F)  D11] < 0

This is a matrix inequality, but is nonlinear:
• Quadratic (not bilinear)
• May NOT apply the variable substitution trick.



Schur Complement

The KYP condition is

    [AᵀX + XA   XB ]         [Cᵀ]
    [BᵀX        −γI] + (1/γ) [Dᵀ] [C  D] < 0

Recall the Schur complement:

Theorem 40 (Schur Complement).
For any M ∈ Sⁿ, Q ∈ Sᵐ and R ∈ R^{n×m}, the following are equivalent:
1. [M  R; Rᵀ  Q] < 0
2. Q < 0 and M − RQ⁻¹Rᵀ < 0

In this case, let Q = −γI < 0,

    M = [AᵀX + XA   XB ]        R = [Cᵀ]
        [BᵀX        −γI]            [Dᵀ]

Note we are making the LMI larger.
Schur Complement

The Schur complement says that

    [AᵀX + XA   XB ]         [Cᵀ]
    [BᵀX        −γI] + (1/γ) [Dᵀ] [C  D] < 0

if and only if

    [AᵀX + XA   XB    Cᵀ ]
    [BᵀX        −γI   Dᵀ ] < 0
    [C          D     −γI]

This leads to the
Full-State Feedback Condition:

    [(A + B2F)ᵀX + X(A + B2F)   XB1    (C1 + D12F)ᵀ]
    [B1ᵀX                       −γI    D11ᵀ         ] < 0
    [(C1 + D12F)                D11    −γI          ]

which is now bilinear in X and F.



Dual KYP Lemma

To apply the variable substitution trick, we must also construct the dual form of this LMI.

Lemma 41 (KYP Dual).
Suppose
    Ĝ(s) = [A  B; C  D].
Then the following are equivalent:
• ‖G‖_{H∞} ≤ γ.
• There exists a Y > 0 such that

    [YAᵀ + AY   B     YCᵀ]
    [Bᵀ         −γI   Dᵀ ] < 0
    [CY         D     −γI]



Dual KYP Lemma
Proof.
Let X = Y^{-1}. Then

    [Y A^T + A Y, B, Y C^T; B^T, −γI, D^T; C Y, D, −γI] < 0   and   Y > 0

if and only if X > 0 and

    [Y^{-1}, 0, 0; 0, I, 0; 0, 0, I] [Y A^T + A Y, B, Y C^T; B^T, −γI, D^T; C Y, D, −γI] [Y^{-1}, 0, 0; 0, I, 0; 0, 0, I]
        = [A^T X + X A, X B, C^T; B^T X, −γI, D^T; C, D, −γI] < 0.

By the Schur complement, this is equivalent to

    [A^T X + X A, X B; B^T X, −γI] + (1/γ) [C^T; D^T] [C, D] < 0

By the KYP lemma, this is equivalent to ∥G∥_H∞ ≤ γ.


M. Peet Lecture 01: 75 / 135
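The congruence step in the proof is just multiplication by diag(Y^{-1}, I, I) = diag(X, I, I) on both sides, which preserves sign definiteness. A quick numerical check with hypothetical matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 3, 2, 2   # hypothetical dimensions
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = rng.standard_normal((p, m))
gamma = 5.0
W = rng.standard_normal((n, n))
X = W @ W.T + np.eye(n)        # any X > 0
Y = np.linalg.inv(X)

primal = np.block([[A.T @ X + X @ A, X @ B,              C.T],
                   [B.T @ X,         -gamma * np.eye(m), D.T],
                   [C,               D,                  -gamma * np.eye(p)]])
dual = np.block([[Y @ A.T + A @ Y,  B,                  Y @ C.T],
                 [B.T,              -gamma * np.eye(m), D.T],
                 [C @ Y,            D,                  -gamma * np.eye(p)]])

# congruence with T = diag(X, I, I) maps the dual LMI to the primal one
T = np.block([[X,                    np.zeros((n, m + p))],
              [np.zeros((m + p, n)), np.eye(m + p)]])
assert np.allclose(T.T @ dual @ T, primal)
```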
Full-State Feedback Optimal Control

We can now apply this result to the state-feedback problem.

Theorem 42.
The following are equivalent:
• There exists an F such that kS(P, K(0, 0, 0, F ))kH∞ ≤ γ.
• There exist Y > 0 and Z such that

[Y A^T + A Y + Z^T B2^T + B2 Z, B1, Y C1^T + Z^T D12^T;
 B1^T, −γI, D11^T;
 C1 Y + D12 Z, D11, −γI] < 0

One may use F = ZY −1 .

M. Peet Lecture 01: 76 / 135


Full-State Feedback Optimal Control
Proof.
Suppose there exists an F such that ∥S(P, K(0, 0, 0, F))∥_H∞ ≤ γ. By the Dual
KYP lemma, this implies there exists a Y > 0 such that

[Y(A + B2 F)^T + (A + B2 F)Y, B1, Y(C1 + D12 F)^T;
 B1^T, −γI, D11^T;
 (C1 + D12 F)Y, D11, −γI] < 0

Let Z = F Y. Then

[Y A^T + Z^T B2^T + A Y + B2 Z, B1, Y C1^T + Z^T D12^T;
 B1^T, −γI, D11^T;
 C1 Y + D12 Z, D11, −γI]

= [Y A^T + Y F^T B2^T + A Y + B2 F Y, B1, Y C1^T + Y F^T D12^T;
   B1^T, −γI, D11^T;
   C1 Y + D12 F Y, D11, −γI]

= [Y(A + B2 F)^T + (A + B2 F)Y, B1, Y(C1 + D12 F)^T;
   B1^T, −γI, D11^T;
   (C1 + D12 F)Y, D11, −γI] < 0.

M. Peet Lecture 01: 77 / 135


Full-State Feedback Optimal Control
Proof.
Now suppose there exist Y > 0 and Z such that

[Y A^T + Z^T B2^T + A Y + B2 Z, B1, Y C1^T + Z^T D12^T;
 B1^T, −γI, D11^T;
 C1 Y + D12 Z, D11, −γI] < 0

Let F = Z Y^{-1}. Then

[Y(A + B2 F)^T + (A + B2 F)Y, B1, Y(C1 + D12 F)^T;
 B1^T, −γI, D11^T;
 (C1 + D12 F)Y, D11, −γI]

= [Y A^T + Y F^T B2^T + A Y + B2 F Y, B1, Y C1^T + Y F^T D12^T;
   B1^T, −γI, D11^T;
   C1 Y + D12 F Y, D11, −γI]

= [Y A^T + Z^T B2^T + A Y + B2 Z, B1, Y C1^T + Z^T D12^T;
   B1^T, −γI, D11^T;
   C1 Y + D12 Z, D11, −γI] < 0
M. Peet Lecture 01: 78 / 135
Full-State Feedback Optimal Control

Therefore the following optimization problems are equivalent


Form A

    min_F ∥S(P, K(0, 0, 0, F))∥_H∞

Form B

    min_{γ,Y,Z} γ  such that

[−Y, 0, 0, 0;
 0, Y A^T + A Y + Z^T B2^T + B2 Z, B1, Y C1^T + Z^T D12^T;
 0, B1^T, −γI, D11^T;
 0, C1 Y + D12 Z, D11, −γI] < 0

The optimal controller is given by F = Z Y^{-1}.


Next: Optimal Output Feedback

M. Peet Lecture 01: 79 / 135


Optimal Output Feedback
Recall: Linear Fractional Transformation

[Diagram: plant P with external input w, external output z, control input u, measured output y]

The map from w to z is given by

    S(P, K) = P11 + P12 K(I − P22 K)^{-1} P21

Plant:
    [z; y] = [P11, P12; P21, P22] [w; u]    where P = [A, B1, B2; C1, D11, D12; C2, D21, D22]

Controller:
    u = Ky    where K = [AK, BK; CK, DK]
M. Peet Lecture 01: 80 / 135
Optimal Control

Choose K to minimize

    ∥P11 + P12 (I − K P22)^{-1} K P21∥

Equivalently, choose [AK, BK; CK, DK] to minimize ∥[Acl, Bcl; Ccl, Dcl]∥_H∞, where

Acl = [A, 0; 0, AK] + [B2, 0; 0, BK] [I, −DK; −D22, I]^{-1} [0, CK; C2, 0]
Bcl = [B1 + B2 DK Q D21; BK Q D21]
Ccl = [C1, 0] + [D12, 0] [I, −DK; −D22, I]^{-1} [0, CK; C2, 0]
Dcl = D11 + D12 DK Q D21

and Q = (I − D22 DK)^{-1}.

M. Peet Lecture 01: 81 / 135


Optimal Control
Recall that

    [I, −DK; −D22, I]^{-1} = [I + DK Q D22, DK Q; Q D22, Q]

where Q = (I − D22 DK)^{-1}. Then

Acl := [A, 0; 0, AK] + [B2, 0; 0, BK] [I, −DK; −D22, I]^{-1} [0, CK; C2, 0]
     = [A, 0; 0, AK] + [B2, 0; 0, BK] [I + DK Q D22, DK Q; Q D22, Q] [0, CK; C2, 0]
     = [A + B2 DK Q C2, B2 (I + DK Q D22) CK; BK Q C2, AK + BK Q D22 CK]

Likewise

Ccl := [C1, 0] + [D12, 0] [I + DK Q D22, DK Q; Q D22, Q] [0, CK; C2, 0]
     = [C1 + D12 DK Q C2, D12 (I + DK Q D22) CK]

M. Peet Lecture 01: 82 / 135


Optimal Output Feedback Control

Thus we have

[A + B2 DK Q C2, B2 (I + DK Q D22) CK, B1 + B2 DK Q D21;
 BK Q C2, AK + BK Q D22 CK, BK Q D21;
 C1 + D12 DK Q C2, D12 (I + DK Q D22) CK, D11 + D12 DK Q D21]

where Q = (I − D22 DK)^{-1}.


• This is nonlinear in (AK , BK , CK , DK ).
• Hence we make a change of variables (First of several).

AK2 = AK + BK QD22 CK
BK2 = BK Q
CK2 = (I + DK QD22 )CK
DK2 = DK Q

M. Peet Lecture 01: 83 / 135


Optimal Output Feedback Control

This yields the system

[A + B2 DK2 C2, B2 CK2, B1 + B2 DK2 D21;
 BK2 C2, AK2, BK2 D21;
 C1 + D12 DK2 C2, D12 CK2, D11 + D12 DK2 D21]

which is affine in [AK2, BK2; CK2, DK2].

M. Peet Lecture 01: 84 / 135
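The claim that the closed loop becomes affine in the transformed variables can be verified numerically: build the closed loop from a hypothetical (AK, BK, CK, DK), apply the change of variables, and compare against the affine formula. A sketch with random data (all dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, nw, nz, m, p = 3, 2, 2, 2, 2   # hypothetical dimensions
A = rng.standard_normal((n, n))
B1 = rng.standard_normal((n, nw))
B2 = rng.standard_normal((n, m))
C1 = rng.standard_normal((nz, n))
D11 = rng.standard_normal((nz, nw))
D12 = rng.standard_normal((nz, m))
C2 = rng.standard_normal((p, n))
D21 = rng.standard_normal((p, nw))
D22 = 0.1 * rng.standard_normal((p, m))   # small, so I - D22 DK is invertible
AK = rng.standard_normal((n, n))
BK = rng.standard_normal((n, p))
CK = rng.standard_normal((m, n))
DK = rng.standard_normal((m, p))

Im, Ip = np.eye(m), np.eye(p)
Q = np.linalg.inv(Ip - D22 @ DK)

# closed-loop matrices in the original controller variables (slides 81-83)
Acl = np.block([[A + B2 @ DK @ Q @ C2, B2 @ (Im + DK @ Q @ D22) @ CK],
                [BK @ Q @ C2,          AK + BK @ Q @ D22 @ CK]])
Bcl = np.vstack([B1 + B2 @ DK @ Q @ D21, BK @ Q @ D21])
Ccl = np.hstack([C1 + D12 @ DK @ Q @ C2, D12 @ (Im + DK @ Q @ D22) @ CK])
Dcl = D11 + D12 @ DK @ Q @ D21

# change of variables: the closed loop is affine in (AK2, BK2, CK2, DK2)
AK2 = AK + BK @ Q @ D22 @ CK
BK2 = BK @ Q
CK2 = (Im + DK @ Q @ D22) @ CK
DK2 = DK @ Q
assert np.allclose(Acl, np.block([[A + B2 @ DK2 @ C2, B2 @ CK2], [BK2 @ C2, AK2]]))
assert np.allclose(Bcl, np.vstack([B1 + B2 @ DK2 @ D21, BK2 @ D21]))
assert np.allclose(Ccl, np.hstack([C1 + D12 @ DK2 @ C2, D12 @ CK2]))
assert np.allclose(Dcl, D11 + D12 @ DK2 @ D21)
```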


Optimal Output Feedback Control
Hence we can optimize over our new variables.
• However, the change of variables must be invertible.
If we recall that
(I − QM )−1 = I + Q(I − M Q)−1 M
then we get

I + DK QD22 = I + DK (I − D22 DK )−1 D22 = (I − DK D22 )−1

Examine the variable CK2:

    CK2 = (I + DK (I − D22 DK)^{-1} D22) CK
        = (I − DK D22)^{-1} CK

Hence, given CK2, we can recover CK as

    CK = (I − DK D22) CK2

M. Peet Lecture 01: 85 / 135
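The push-through identity (I − QM)^{-1} = I + Q(I − MQ)^{-1} M also holds for non-square factors. A one-line numerical check (the symbols Q and M here are generic matrices, not the plant data above):

```python
import numpy as np

rng = np.random.default_rng(4)
Q = rng.standard_normal((3, 2))   # hypothetical generic factors
M = rng.standard_normal((2, 3))
I3, I2 = np.eye(3), np.eye(2)

# (I - Q M)^{-1} = I + Q (I - M Q)^{-1} M
lhs = np.linalg.inv(I3 - Q @ M)
rhs = I3 + Q @ np.linalg.solve(I2 - M @ Q, M)
assert np.allclose(lhs, rhs)
```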


Optimal Output Feedback Control

Now suppose we have DK2 . Then

DK2 = DK Q = DK (I − D22 DK )−1

implies that

DK = DK2 (I − D22 DK ) = DK2 − DK2 D22 DK

or
(I + DK2 D22 )DK = DK2
which can be inverted to get

DK = (I + DK2 D22 )−1 DK2

M. Peet Lecture 01: 86 / 135


Optimal Output Feedback Control

Once we have CK and DK , the other variables are easily recovered as

BK = BK2 Q−1 = BK2 (I − D22 DK )


AK = AK2 − BK (I − D22 DK )−1 D22 CK

To summarize, the original variables can be recovered as

DK = (I + DK2 D22 )−1 DK2


BK = BK2 (I − D22 DK )
CK = (I − DK D22 )CK2
AK = AK2 − BK (I − D22 DK )−1 D22 CK

M. Peet Lecture 01: 87 / 135
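The recovery formulas above invert the change of variables exactly. A sketch with hypothetical random data: pick (AK2, BK2, CK2, DK2) and D22, recover (AK, BK, CK, DK), then confirm the forward change of variables reproduces the transformed variables:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
# hypothetical transformed controller variables and feed-through D22
# (D22 is scaled small so the needed inverses exist)
AK2, BK2, CK2, DK2 = (rng.standard_normal((n, n)) for _ in range(4))
D22 = 0.1 * rng.standard_normal((n, n))
I = np.eye(n)

# recovery formulas from the slide
DK = np.linalg.solve(I + DK2 @ D22, DK2)                 # (I + DK2 D22)^{-1} DK2
BK = BK2 @ (I - D22 @ DK)
CK = (I - DK @ D22) @ CK2
AK = AK2 - BK @ np.linalg.solve(I - D22 @ DK, D22 @ CK)

# forward change of variables (slide 83) reproduces the transformed variables
Q = np.linalg.inv(I - D22 @ DK)
assert np.allclose(AK2, AK + BK @ Q @ D22 @ CK)
assert np.allclose(BK2, BK @ Q)
assert np.allclose(CK2, (I + DK @ Q @ D22) @ CK)
assert np.allclose(DK2, DK @ Q)
```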


Optimal Output Feedback Control
   
[Acl, Bcl; Ccl, Dcl] := [A + B2 DK2 C2, B2 CK2, B1 + B2 DK2 D21;
                         BK2 C2, AK2, BK2 D21;
                         C1 + D12 DK2 C2, D12 CK2, D11 + D12 DK2 D21]

= [A, 0, B1; 0, 0, 0; C1, 0, D11] + [0, B2; I, 0; 0, D12] [AK2, BK2; CK2, DK2] [0, I, 0; C2, 0, D21]

Or

Acl = [A, 0; 0, 0] + [0, B2; I, 0] [AK2, BK2; CK2, DK2] [0, I; C2, 0]
Bcl = [B1; 0] + [0, B2; I, 0] [AK2, BK2; CK2, DK2] [0; D21]
Ccl = [C1, 0] + [0, D12] [AK2, BK2; CK2, DK2] [0, I; C2, 0]
Dcl = D11 + [0, D12] [AK2, BK2; CK2, DK2] [0; D21]
M. Peet Lecture 01: 88 / 135
Optimal Output Feedback Control

Lemma 43 (Transformation Lemma).


Suppose that
    [Y1, I; I, X1] > 0
Then there exist X2, X3, Y2, Y3 such that

    X = [X1, X2; X2^T, X3] = [Y1, Y2; Y2^T, Y3]^{-1} = Y^{-1} > 0

where Ycl = [Y1, I; Y2^T, 0] has full rank.

M. Peet Lecture 01: 89 / 135


Transformation Lemma

Proof.
• Since
    [Y1, I; I, X1] > 0,
  by the Schur complement X1 > 0 and X1^{-1} − Y1 > 0. Since
  I − X1 Y1 = X1 (X1^{-1} − Y1), we conclude that I − X1 Y1 is invertible.
• Choose any two square invertible matrices X2 and Y2 such that

    X2 Y2^T = I − X1 Y1

• Because X2 and Y2 are non-singular,

    Ycl^T = [Y1, Y2; I, 0]   and   Xcl = [I, 0; X1, X2]

  are also non-singular.

M. Peet Lecture 01: 90 / 135


Transformation Lemma

Proof.
• Now define X and Y as

    X = Ycl^{-T} Xcl   and   Y = Xcl^{-1} Ycl^T.

  Then
    X Y = Ycl^{-T} Xcl Xcl^{-1} Ycl^T = I
  Likewise, Y X = I. Hence, Y = X^{-1}.

M. Peet Lecture 01: 91 / 135
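The construction in the proof is completely explicit, so it can be sketched numerically with hypothetical data (any X1 > 0 and Y1 making the 2x2 block positive definite, and Y2 = I):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
# hypothetical data: X1 > 0 and Y1 with [Y1, I; I, X1] > 0
M0 = rng.standard_normal((n, n))
X1 = M0 @ M0.T + np.eye(n)
Y1 = np.linalg.inv(X1) + np.eye(n)      # Schur complement Y1 - X1^{-1} = I > 0
I, Z = np.eye(n), np.zeros((n, n))
assert np.all(np.linalg.eigvalsh(np.block([[Y1, I], [I, X1]])) > 0)

# pick invertible X2, Y2 with X2 Y2^T = I - X1 Y1 (here Y2 = I)
Y2 = I
X2 = I - X1 @ Y1
Xcl = np.block([[I, Z], [X1, X2]])
YclT = np.block([[Y1, Y2], [I, Z]])

X = np.linalg.solve(YclT, Xcl)          # X = Ycl^{-T} Xcl
assert np.allclose(X, X.T)              # symmetric
assert np.all(np.linalg.eigvalsh((X + X.T) / 2) > 0)   # positive definite
Y = np.linalg.inv(X)
assert np.allclose(X[:n, :n], X1)       # upper-left blocks are as claimed
assert np.allclose(Y[:n, :n], Y1)
```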


Optimal Output Feedback Control

Lemma 44 (Converse Transformation Lemma).


 
Given X = [X1, X2; X2^T, X3] > 0 where X2 has full column rank, let

    X^{-1} = Y = [Y1, Y2; Y2^T, Y3]

Then
    [Y1, I; I, X1] > 0

and Ycl = [Y1, I; Y2^T, 0] has full column rank.

M. Peet Lecture 01: 92 / 135


Converse Transformation Lemma
Proof.
Since X2 is full rank, Xcl = [I, 0; X1, X2] also has full column rank. Note that
X Y = I implies Y X = I, so

    Ycl^T X = [Y1, Y2; I, 0] [X1, X2; X2^T, X3] = [I, 0; X1, X2] = Xcl.

Hence
    Ycl^T = [Y1, Y2; I, 0] = Xcl Y

has full column rank. Now, since X Y = I implies X1 Y1 + X2 Y2^T = I, we have

    Xcl Ycl = [I, 0; X1, X2] [Y1, I; Y2^T, 0] = [Y1, I; X1 Y1 + X2 Y2^T, X1] = [Y1, I; I, X1]

Furthermore, because Ycl has full rank,

    [Y1, I; I, X1] = Xcl Ycl = Ycl^T X Ycl > 0
M. Peet Lecture 01: 93 / 135
Optimal Output Feedback Control
Theorem 45.
The following are equivalent.
• There exists a K̂ = [AK, BK; CK, DK] such that ∥S(P, K)∥_H∞ < γ.
• There exist X1, Y1, An, Bn, Cn, Dn such that [X1, I; I, Y1] > 0 and

[A Y1 + Y1 A^T + B2 Cn + Cn^T B2^T, ∗^T, ∗^T, ∗^T;
 A^T + An + (B2 Dn C2)^T, X1 A + A^T X1 + Bn C2 + C2^T Bn^T, ∗^T, ∗^T;
 (B1 + B2 Dn D21)^T, (X1 B1 + Bn D21)^T, −γI, ∗^T;
 C1 Y1 + D12 Cn, C1 + D12 Dn C2, D11 + D12 Dn D21, −γI] < 0

Moreover,

[AK2, BK2; CK2, DK2] = [X2, X1 B2; 0, I]^{-1} ([An, Bn; Cn, Dn] − [X1 A Y1, 0; 0, 0]) [Y2^T, 0; C2 Y1, I]^{-1}

for any full-rank X2 and Y2 such that

    [X1, X2; X2^T, X3] = [Y1, Y2; Y2^T, Y3]^{-1}

M. Peet Lecture 01: 94 / 135


Optimal Output Feedback Control
Proof: If.
Suppose there exist X1, Y1, An, Bn, Cn, Dn such that the LMI is feasible. Since

    [X1, I; I, Y1] > 0,

by the transformation lemma, there exist X2, X3, Y2, Y3 such that

    X := [X1, X2; X2^T, X3] = [Y1, Y2; Y2^T, Y3]^{-1} > 0

where Ycl = [Y1, I; Y2^T, 0] has full row rank. Let K = [AK, BK; CK, DK] where

    DK = (I + DK2 D22)^{-1} DK2
    BK = BK2 (I − D22 DK)
    CK = (I − DK D22) CK2
    AK = AK2 − BK (I − D22 DK)^{-1} D22 CK.
M. Peet Lecture 01: 95 / 135
Optimal Output Feedback Control

Proof: If.
and where

[AK2, BK2; CK2, DK2] = [X2, X1 B2; 0, I]^{-1} ([An, Bn; Cn, Dn] − [X1 A Y1, 0; 0, 0]) [Y2^T, 0; C2 Y1, I]^{-1}.

M. Peet Lecture 01: 96 / 135


Optimal Output Feedback Control

Proof: If.
As discussed previously, this means the closed-loop system is

[Acl, Bcl; Ccl, Dcl] = [A, 0, B1; 0, 0, 0; C1, 0, D11] + [0, B2; I, 0; 0, D12] [AK2, BK2; CK2, DK2] [0, I, 0; C2, 0, D21]

= [A, 0, B1; 0, 0, 0; C1, 0, D11] + [0, B2; I, 0; 0, D12]
    [X2, X1 B2; 0, I]^{-1} ([An, Bn; Cn, Dn] − [X1 A Y1, 0; 0, 0]) [Y2^T, 0; C2 Y1, I]^{-1} [0, I, 0; C2, 0, D21]

Now look at the LMI from the KYP lemma.

M. Peet Lecture 01: 97 / 135


Optimal Output Feedback Control

Proof: If.
Expanding out, we obtain

[Ycl, 0, 0; 0, I, 0; 0, 0, I]^T [Acl^T X + X Acl, X Bcl, Ccl^T; Bcl^T X, −γI, Dcl^T; Ccl, Dcl, −γI] [Ycl, 0, 0; 0, I, 0; 0, 0, I]

= [A Y1 + Y1 A^T + B2 Cn + Cn^T B2^T, ∗^T, ∗^T, ∗^T;
   A^T + An + (B2 Dn C2)^T, X1 A + A^T X1 + Bn C2 + C2^T Bn^T, ∗^T, ∗^T;
   (B1 + B2 Dn D21)^T, (X1 B1 + Bn D21)^T, −γI, ∗^T;
   C1 Y1 + D12 Cn, C1 + D12 Dn C2, D11 + D12 Dn D21, −γI] < 0

Hence, by the KYP lemma, S(P, K) = [Acl, Bcl; Ccl, Dcl] satisfies ∥S(P, K)∥_H∞ < γ.

M. Peet Lecture 01: 98 / 135


Optimal Output Feedback Control
Proof: Only If.
Now suppose that ∥S(P, K)∥_H∞ < γ for some K = [AK, BK; CK, DK]. Since
∥S(P, K)∥_H∞ < γ, by the KYP lemma, there exists a

    X = [X1, X2; X2^T, X3] > 0

such that
    [Acl^T X + X Acl, X Bcl, Ccl^T; Bcl^T X, −γI, Dcl^T; Ccl, Dcl, −γI] < 0

Because the inequalities are strict, we can assume that X2 has full row rank.
Define
    Y = [Y1, Y2; Y2^T, Y3] = X^{-1}   and   Ycl = [Y1, I; Y2^T, 0]

Then, according to the converse transformation lemma, Ycl has full row rank and

    [X1, I; I, Y1] > 0.
M. Peet Lecture 01: 99 / 135
Optimal Output Feedback Control
Proof: Only If.
Now, using the given AK, BK, CK, DK, define the variables

[An, Bn; Cn, Dn] = [X2, X1 B2; 0, I] [AK2, BK2; CK2, DK2] [Y2^T, 0; C2 Y1, I] + [X1 A Y1, 0; 0, 0].

where
    AK2 = AK + BK (I − D22 DK)^{-1} D22 CK        BK2 = BK (I − D22 DK)^{-1}
    CK2 = (I + DK (I − D22 DK)^{-1} D22) CK       DK2 = DK (I − D22 DK)^{-1}

Then, as before,

[Acl, Bcl; Ccl, Dcl] = [A, 0, B1; 0, 0, 0; C1, 0, D11] + [0, B2; I, 0; 0, D12]
    [X2, X1 B2; 0, I]^{-1} ([An, Bn; Cn, Dn] − [X1 A Y1, 0; 0, 0]) [Y2^T, 0; C2 Y1, I]^{-1} [0, I, 0; C2, 0, D21]

M. Peet Lecture 01: 100 / 135


Optimal Output Feedback Control

Proof: Only If.
Expanding out the LMI, we find

[A Y1 + Y1 A^T + B2 Cn + Cn^T B2^T, ∗^T, ∗^T, ∗^T;
 A^T + An + (B2 Dn C2)^T, X1 A + A^T X1 + Bn C2 + C2^T Bn^T, ∗^T, ∗^T;
 (B1 + B2 Dn D21)^T, (X1 B1 + Bn D21)^T, −γI, ∗^T;
 C1 Y1 + D12 Cn, C1 + D12 Dn C2, D11 + D12 Dn D21, −γI]

= [Ycl, 0, 0; 0, I, 0; 0, 0, I]^T [Acl^T X + X Acl, X Bcl, Ccl^T; Bcl^T X, −γI, Dcl^T; Ccl, Dcl, −γI] [Ycl, 0, 0; 0, I, 0; 0, 0, I] < 0

M. Peet Lecture 01: 101 / 135


Conclusion

To solve the H∞-optimal output-feedback problem, we solve

    min_{γ,X1,Y1,An,Bn,Cn,Dn} γ  such that

    [X1, I; I, Y1] > 0

[A Y1 + Y1 A^T + B2 Cn + Cn^T B2^T, ∗^T, ∗^T, ∗^T;
 A^T + An + (B2 Dn C2)^T, X1 A + A^T X1 + Bn C2 + C2^T Bn^T, ∗^T, ∗^T;
 (B1 + B2 Dn D21)^T, (X1 B1 + Bn D21)^T, −γI, ∗^T;
 C1 Y1 + D12 Cn, C1 + D12 Dn C2, D11 + D12 Dn D21, −γI] < 0

M. Peet Lecture 01: 102 / 135


Conclusion
Then, we construct our controller using

DK = (I + DK2 D22 )−1 DK2


BK = BK2 (I − D22 DK )
CK = (I − DK D22 )CK2
AK = AK2 − BK (I − D22 DK )−1 D22 CK .

where

[AK2, BK2; CK2, DK2] = [X2, X1 B2; 0, I]^{-1} ([An, Bn; Cn, Dn] − [X1 A Y1, 0; 0, 0]) [Y2^T, 0; C2 Y1, I]^{-1}.

and where X2 and Y2 are any matrices which satisfy X2 Y2^T = I − X1 Y1.

• e.g. Let Y2 = I and X2 = I − X1 Y1.
• The optimal controller is NOT uniquely defined.
• Don't forget to check invertibility of I − D22 DK.

M. Peet Lecture 01: 103 / 135


Conclusion

The H∞-optimal controller is a dynamic system.

• Transfer Function K̂(s) = [AK, BK; CK, DK]
• Minimizes the effect of external input (w) on external output (z):

    ∥z∥_L2 ≤ ∥S(P, K)∥_H∞ ∥w∥_L2

• Minimum Energy Gain

M. Peet Lecture 01: 104 / 135


H2 -optimal control
Motivation

H2-optimal control minimizes the H2-norm of the transfer function.

• The H2-norm has no direct interpretation.

    ∥G∥²_H2 = (1/2π) ∫_{−∞}^{∞} Trace(Ĝ(ıω)∗ Ĝ(ıω)) dω

Motivation: Assume the external input is Gaussian noise with spectral density Ŝw:

    E[w(t)²] = (1/2π) ∫_{−∞}^{∞} Trace(Ŝw(ıω)) dω

Theorem 46.
For an LTI system P, if w is noise with spectral density Ŝw(ıω) and z = P w,
then z is noise with spectral density

    Ŝz(ıω) = P̂(ıω) Ŝw(ıω) P̂(ıω)∗

M. Peet Lecture 01: 105 / 135


H2 -optimal control
Motivation

Then the output z = P w has signal variance (Power)

    E[z(t)²] = (1/2π) ∫_{−∞}^{∞} Trace(Ĝ(ıω)∗ Ŝ(ıω) Ĝ(ıω)) dω
             ≤ ∥S∥_H∞ ∥G∥²_H2

If the input signal is white noise, then Ŝ(ıω) = I and

    E[z(t)²] = ∥G∥²_H2

M. Peet Lecture 01: 106 / 135


H2 -optimal control
Colored Noise

Now suppose the noise is colored with spectral density Ŝw(ıω). Define Ĥ by
Ĥ(ıω)Ĥ(ıω)∗ = Ŝw(ıω) and consider the filtered plant

    P̂s(s) = [P̂11(s)Ĥ(s), P̂12(s); P̂21(s)Ĥ(s), P̂22(s)]

Applying feedback to the filtered plant, we get

    S(Ps, K)(s) = P11 H + P12 (I − K P22)^{-1} K P21 H = S(P, K) H

Now the spectral density Ŝz of the output of the true plant under colored noise
equals the spectral density of the output of the filtered plant under white noise, i.e.

    Ŝz(s) = S(P, K)(s) Ŝw(s) S(P, K)(s)∗
          = S(P, K)(s) Ĥ(s) Ĥ(s)∗ S(P, K)(s)∗ = S(Ps, K)(s) S(Ps, K)(s)∗

Thus if K minimizes the H2-norm of the filtered plant (∥S(Ps, K)∥²_H2), it
minimizes the output variance of the true plant under colored noise with
density Ŝw.

M. Peet Lecture 01: 107 / 135


H2 -optimal control

Theorem 47.
Suppose P̂(s) = C(sI − A)^{-1} B. Then the following are equivalent.
1. A is Hurwitz and ∥P̂∥_H2 < γ.
2. There exists some X > 0 such that

    Trace(C X C^T) < γ²
    A X + X A^T + B B^T < 0

M. Peet Lecture 01: 108 / 135
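Condition 2 is tight at the controllability Grammian: Trace(C Xc C^T) = ∥P̂∥²_H2. A numerical sketch using scipy's Lyapunov solver on a hypothetical stable system, also checking the equivalent observability-Grammian form used later in the proof:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# hypothetical stable system P(s) = C (sI - A)^{-1} B
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# controllability Grammian: A Xc + Xc A^T + B B^T = 0
Xc = solve_continuous_lyapunov(A, -B @ B.T)
h2_ctrl = np.sqrt(np.trace(C @ Xc @ C.T))

# same norm from the observability Grammian: A^T Xo + Xo A + C^T C = 0
Xo = solve_continuous_lyapunov(A.T, -C.T @ C)
h2_obs = np.sqrt(np.trace(B.T @ Xo @ B))
assert np.isclose(h2_ctrl, h2_obs)
```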


H2 -optimal control

Proof.
Suppose A is Hurwitz and ∥P̂∥_H2 < γ. Then the Controllability Grammian is
defined as

    Xc = ∫_0^∞ e^{At} B B^T e^{A^T t} dt

Now recall the Laplace transform

    (Λ e^{At})(s) = ∫_0^∞ e^{At} e^{−ts} dt
                  = ∫_0^∞ e^{−(sI−A)t} dt
                  = −(sI − A)^{-1} e^{−(sI−A)t} |_{t=0}^{t=∞}
                  = (sI − A)^{-1}

Hence (Λ C e^{At} B)(s) = C(sI − A)^{-1} B.

M. Peet Lecture 01: 109 / 135


H2 -optimal control

Proof.
(Λ C e^{At} B)(s) = C(sI − A)^{-1} B implies

∥P̂∥²_H2 = ∥C(sI − A)^{-1} B∥²_H2
        = (1/2π) ∫_{−∞}^{∞} Trace((C(ıωI − A)^{-1} B)∗ (C(ıωI − A)^{-1} B)) dω
        = (1/2π) ∫_{−∞}^{∞} Trace((C(ıωI − A)^{-1} B)(C(ıωI − A)^{-1} B)∗) dω
        = ∫_0^∞ Trace(C e^{At} B B∗ e^{A∗ t} C∗) dt
        = Trace(C Xc C^T)

Thus Xc ≥ 0 and Trace(C Xc C^T) = ∥P̂∥²_H2 < γ².

M. Peet Lecture 01: 110 / 135


H2 -optimal control

Proof.
Likewise, Trace(B^T Xo B) = ∥P̂∥²_H2, where Xo is the observability Grammian. To
show that we can take the strict inequality X > 0, we simply let

    X = ∫_0^∞ e^{At} (B B^T + εI) e^{A^T t} dt

for sufficiently small ε > 0. Then X ≥ Xc and A X + X A^T = −(B B^T + εI), so X
satisfies the Lyapunov inequality

    A X + X A^T + B B^T < 0

These steps can be reversed to obtain necessity.

M. Peet Lecture 01: 111 / 135


H2 -optimal control
Full-State Feedback

Let's consider the full-state feedback problem

    Ĝ(s) = [A, B1, B2; C1, 0, D12; I, 0, 0]

• D12 is the weight on control effort.
• D11 = 0: the feed-through term is neglected.
• C2 = I, as this is state feedback.

    K̂(s) = [0, 0; 0, K]

M. Peet Lecture 01: 112 / 135


H2 -optimal control
Full-State Feedback

Theorem 48.
The following are equivalent.
1. ∥S(K, P)∥_H2 < γ.
2. K = Z X^{-1} for some Z and X > 0 where

    [A, B2] [X; Z] + [X, Z^T] [A^T; B2^T] + B1 B1^T < 0
    Trace((C1 X + D12 Z) X^{-1} (C1 X + D12 Z)^T) < γ²

However, this is nonlinear, so we need to reformulate using the Schur
complement.

M. Peet Lecture 01: 113 / 135


H2 -optimal control
Full-State Feedback

Theorem 49.
The following are equivalent.
1. ∥S(K, P)∥_H2 < γ.
2. K = Z X^{-1} for some Z and X > 0 where

    [A, B2] [X; Z] + [X, Z^T] [A^T; B2^T] + B1 B1^T < 0

    [X, (C1 X + D12 Z)^T; C1 X + D12 Z, W] > 0

    Trace(W) < γ²

Thus we can solve the H2-optimal static full-state feedback problem.

M. Peet Lecture 01: 114 / 135


H2 -optimal control

Applying the Schur Complement gives the alternative formulation convenient for
control.
Theorem 50.
Suppose P̂(s) = C(sI − A)^{-1} B. Then the following are equivalent.
1. A is Hurwitz and ∥P̂∥_H2 < γ.
2. There exist some X, Z > 0 such that

    [A^T X + X A, X B; B^T X, −γI] < 0,    [X, C^T; C, Z] > 0,    Trace(Z) < γ²

M. Peet Lecture 01: 115 / 135


H2 -optimal control
Relationship to LQR

The LQR Problem:

• Full-State Feedback
• Choose K to minimize the cost function

    ∫_0^∞ x(t)^T Q x(t) + u(t)^T R u(t) dt

subject to the dynamic constraints

    ẋ(t) = A x(t) + B u(t)
    u(t) = K x(t),    x(0) = x0

M. Peet Lecture 01: 116 / 135


H2 -optimal control
Relationship to LQR

To solve the LQR problem using H2-optimal state-feedback control, let

• C1 = [Q^{1/2}; 0]
• D12 = [0; R^{1/2}]
• B2 = B and B1 = I.

So that

    S(P̂, K̂) = [A + B2 K, B1; C1 + D12 K, D11] = [A + BK, I; Q^{1/2}, 0; R^{1/2} K, 0]

and solve the H2 full-state feedback problem. Then if

    ẋ(t) = A_CL x(t) = (A + BK) x(t) = A x(t) + B u(t)
    u(t) = K x(t),    x(0) = x0

then x(t) = e^{A_CL t} x0.
M. Peet Lecture 01: 117 / 135
H2 -optimal control
Relationship to LQR

If
    ẋ(t) = A_CL x(t) = (A + BK) x(t) = A x(t) + B u(t)
    u(t) = K x(t),    x(0) = x0

then x(t) = e^{A_CL t} x0 and

∫_0^∞ x(t)^T Q x(t) + u(t)^T R u(t) dt
    = ∫_0^∞ x0^T e^{A_CL^T t} (Q + K^T R K) e^{A_CL t} x0 dt
    = ∫_0^∞ Trace(x0^T e^{A_CL^T t} [Q^{1/2}; R^{1/2} K]^T [Q^{1/2}; R^{1/2} K] e^{A_CL t} x0) dt
    ≤ ∥x0∥² ∫_0^∞ Trace(B1^T e^{A_CL^T t} (C1 + D12 K)^T (C1 + D12 K) e^{A_CL t} B1) dt
    = ∥x0∥² ∥S(K, P)∥²_H2

Thus LQR reduces to a special case of H2 static state-feedback.

M. Peet Lecture 01: 118 / 135
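Since LQR is a special case of H2 state feedback, we can cross-check the identity above: the closed-loop Grammian of Q + K^T R K equals the Riccati solution P, so the LQR cost is x0^T P x0. A sketch on a hypothetical double integrator using scipy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# hypothetical LQR data: double integrator
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# classical LQR solution via the algebraic Riccati equation
P = solve_continuous_are(A, B, Q, R)
K = -np.linalg.solve(R, B.T @ P)        # u = K x
Acl = A + B @ K
assert np.all(np.linalg.eigvals(Acl).real < 0)

# the closed-loop Grammian of Q + K^T R K equals P, so the cost is x0^T P x0
W = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
assert np.allclose(W, P)
x0 = np.array([1.0, -1.0])
assert np.isclose(x0 @ W @ x0, x0 @ P @ x0)
```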


H2 -optimal output feedback control

Theorem 51 (Lall).
The following are equivalent.
• There exists a K̂ = [AK, BK; CK, DK] such that ∥S(K, P)∥_H2 < γ.
• There exist X1, Y1, Z, An, Bn, Cn, Dn such that

[A Y1 + Y1 A^T + B2 Cn + Cn^T B2^T, ∗^T, ∗^T;
 A^T + An + (B2 Dn C2)^T, X1 A + A^T X1 + Bn C2 + C2^T Bn^T, ∗^T;
 (B1 + B2 Dn D21)^T, (X1 B1 + Bn D21)^T, −I] < 0,

[Y1, I, ∗^T; I, X1, ∗^T; C1 Y1 + D12 Cn, C1 + D12 Dn C2, Z] > 0,

D11 + D12 Dn D21 = 0,    Trace(Z) < γ²

M. Peet Lecture 01: 119 / 135


H2 -optimal output feedback control

As before, the controller can be recovered as

[AK2, BK2; CK2, DK2] = [X2, X1 B2; 0, I]^{-1} ([An, Bn; Cn, Dn] − [X1 A Y1, 0; 0, 0]) [Y2^T, 0; C2 Y1, I]^{-1}

for any full-rank X2 and Y2 such that

    [X1, X2; X2^T, X3] = [Y1, Y2; Y2^T, Y3]^{-1}

To find the actual controller, we use the identities:

    DK = (I + DK2 D22)^{-1} DK2
    BK = BK2 (I − D22 DK)
    CK = (I − DK D22) CK2
    AK = AK2 − BK (I − D22 DK)^{-1} D22 CK

M. Peet Lecture 01: 120 / 135


Robust Control
Before we finish, let us briefly touch on the use of LMIs in Robust Control.

[Diagram: feedback interconnection of M and an uncertainty block ∆, with signals p and q]

Questions:
• Is S(∆, M) stable for all ∆ ∈ ∆?
• Determine
    sup_{∆∈∆} ∥S(∆, M)∥_H∞.

M. Peet Lecture 01: 121 / 135


Robust Control

Suppose we have the system M

    M = [M11, M12; M21, M22]

Definition 52.
We say the pair (M, ∆) is Robustly Stable if (I − M22 ∆) is invertible for all
∆ ∈ ∆.

    Sl(M, ∆) = M11 + M12 ∆(I − M22 ∆)^{-1} M21

M. Peet Lecture 01: 122 / 135


Robust Control

The structure of ∆ makes a lot of difference, e.g.

• Unstructured, Dynamic, norm-bounded:

    ∆ := {∆ ∈ L(L2) : ∥∆∥_H∞ < 1}

• Structured, Dynamic, norm-bounded:

    ∆ := {diag(∆1, ∆2, · · · ) : ∆i ∈ L(L2), ∥∆i∥_H∞ < 1}

• Unstructured, Parametric, norm-bounded:

    ∆ := {∆ ∈ R^{n×n} : ∥∆∥ ≤ 1}

• Unstructured, Parametric, polytopic:

    ∆ := {∆ ∈ R^{n×n} : ∆ = Σ_i αi Hi, αi ≥ 0, Σ_i αi ≤ 1}

M. Peet Lecture 01: 123 / 135


Robust Control

Let's consider a simple question: Additive Uncertainty.

    M11 = 0,    M12 = M21 = I

Question: Is ẋ = A(t)x(t) stable if A(t) ∈ ∆ for all t ≥ 0?

Definition 53 (Quadratic Stability).
ẋ = A(t)x(t) is Quadratically Stable for A(t) ∈ ∆ if there exists some P > 0
such that
    A^T P + P A < 0    for all A ∈ ∆

Theorem 54.
If ẋ = A(t)x(t) is Quadratically Stable, then it is stable for A ∈ ∆.

M. Peet Lecture 01: 124 / 135


Robust Control

We examine this problem for:

• Parametric, Polytopic Uncertainty:

    ∆ := {∆ ∈ R^{n×n} : ∆ = Σ_i αi Ai, αi ≥ 0, Σ_i αi = 1}

M. Peet Lecture 01: 125 / 135


Parametric, Polytopic Uncertainty

For the polytopic case, we have the following result.

Theorem 55 (Quadratic Stability).
Let
    ∆ := {∆ ∈ R^{n×n} : ∆ = Σ_{i=1}^k αi Ai, αi ≥ 0, Σ_i αi = 1}

Then ẋ(t) = A(t)x(t) is quadratically stable for all A ∈ ∆ if and only if there
exists some P > 0 such that

    Ai^T P + P Ai < 0    for i = 1, · · · , k

Thus quadratic stability of systems with polytopic uncertainty is equivalent to
an LMI.

M. Peet Lecture 01: 126 / 135
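The "only if" direction of the vertex test is immediate by convexity: the Lyapunov LMI is affine in A, so feasibility at the vertices extends to the whole polytope. A numerical sketch with two hypothetical vertex matrices sharing P = I (in general, P is found by an LMI solver):

```python
import numpy as np

rng = np.random.default_rng(4)
# two hypothetical vertex matrices that happen to share the Lyapunov matrix P = I
A1 = np.array([[-2.0, 1.0], [0.0, -2.0]])
A2 = np.array([[-2.0, 0.0], [1.0, -2.0]])
P = np.eye(2)

def lyap_lhs(A, P):
    # left-hand side of the quadratic stability LMI
    return A.T @ P + P @ A

# vertex LMIs
for Ai in (A1, A2):
    assert np.all(np.linalg.eigvalsh(lyap_lhs(Ai, P)) < 0)

# by convexity, the same P certifies every matrix in the polytope
for _ in range(100):
    a = rng.uniform()
    A = a * A1 + (1 - a) * A2
    assert np.all(np.linalg.eigvalsh(lyap_lhs(A, P)) < 0)
```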


Parametric, Norm-Bounded Uncertainty

A more complex uncertainty set is:

    ẋ(t) = A0 x(t) + M p(t),    p(t) = ∆(t) q(t),
    q(t) = N x(t) + Q p(t),     ∆ ∈ ∆

• Parametric, Norm-Bounded Uncertainty:

    ∆ := {∆ ∈ R^{n×n} : ∥∆∥ ≤ 1}

M. Peet Lecture 01: 127 / 135


Parametric, Norm-Bounded Uncertainty
Quadratic Stability: There exists a P > 0 such that

    x^T P(A0 x + M p) + (A0 x + M p)^T P x < 0    for all p ∈ {p : p = ∆q, q = N x + Q p}

Theorem 56.
The system

    ẋ(t) = A0 x(t) + M p(t),    p(t) = ∆(t) q(t),
    q(t) = N x(t) + Q p(t),     ∆ ∈ ∆ := {∆ ∈ R^{n×n} : ∥∆∥ ≤ 1}

is quadratically stable if and only if there exists some P > 0 such that

    [x; y]^T [A0^T P + P A0, P M; M^T P, 0] [x; y] < 0

for all [x; y] ∈ {[x; y] : [x; y]^T [−N^T N, −N^T Q; −Q^T N, I − Q^T Q] [x; y] ≤ 0}

M. Peet Lecture 01: 128 / 135


Parametric, Norm-Bounded Uncertainty

If.
If
    [x; y]^T [A^T P + P A, P M; M^T P, 0] [x; y] < 0

for all [x; y] ∈ {[x; y] : [x; y]^T [−N^T N, −N^T Q; −Q^T N, I − Q^T Q] [x; y] ≤ 0}

then
    x^T P(Ax + M y) + (Ax + M y)^T P x < 0

for all x, y such that
    ∥y∥² ≤ ∥N x + Q y∥²

Therefore, since p = ∆q implies ∥p∥ ≤ ∥q∥, we have quadratic stability.
The only if direction is similar.

M. Peet Lecture 01: 129 / 135


Relationship to the S-Procedure
A Classical LMI

S-procedure to the rescue!

The S-procedure asks the question:
• Is z^T F z ≥ 0 for all z ∈ {x : x^T G x ≥ 0}?

Corollary 57 (S-Procedure).
z^T F z ≥ 0 for all z ∈ {x : x^T G x ≥ 0} if there exists a τ ≥ 0 such that
F − τG ⪰ 0.
The S-procedure is necessary if {x : x^T G x > 0} ≠ ∅.

M. Peet Lecture 01: 130 / 135
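A toy instance of Corollary 57 with hypothetical 2x2 data: the certificate F − τG ⪰ 0 with τ ≥ 0 guarantees z^T F z ≥ 0 on the set {z : z^T G z ≥ 0}, which we can spot-check on random points:

```python
import numpy as np

# hypothetical certificate: F - tau*G >= 0 with tau >= 0 proves
# z^T F z >= 0 on {z : z^T G z >= 0}
F = np.array([[3.0, 0.0], [0.0, -1.0]])
G = np.array([[1.0, 0.0], [0.0, -1.0]])
tau = 1.0
assert np.all(np.linalg.eigvalsh(F - tau * G) >= 0)

# spot-check on random points satisfying the constraint
rng = np.random.default_rng(5)
for _ in range(1000):
    z = rng.standard_normal(2)
    if z @ G @ z >= 0:
        assert z @ F @ z >= 0
```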


Parametric, Norm-Bounded Uncertainty

Theorem 58.
The system

    ẋ(t) = A x(t) + M p(t),    p(t) = ∆(t) q(t),
    q(t) = N x(t) + Q p(t),    ∆ ∈ ∆ := {∆ ∈ R^{n×n} : ∥∆∥ ≤ 1}

is quadratically stable if and only if there exist some µ ≥ 0 and P > 0 such that

    [A P + P A^T, P N^T; N P, 0] + µ [M M^T, M Q^T; Q M^T, Q Q^T − I] < 0

These approaches can be readily extended to controller synthesis.

M. Peet Lecture 01: 131 / 135


Quadratic Stability

Consider Quadratic Stability in Discrete-Time: x_{k+1} = Sl(M, ∆) x_k.

Definition 59.
(M, ∆) is Quadratically Stable if there exists some P > 0 such that

    Sl(M, ∆)^T P Sl(M, ∆) − P < 0    for all ∆ ∈ ∆

Theorem 60 (Packard and Doyle).
Let M ∈ R^{(n+m)×(n+m)} be given with ρ(M11) ≤ 1 and σ(M22) < 1. Then the
following are equivalent.
1. The pair (M, ∆ = R^{m×m}) is quadratically stable.
2. The pair (M, ∆ = C^{m×m}) is quadratically stable.
3. The pair (M, ∆ = C^{m×m}) is robustly stable.
M. Peet Lecture 01: 132 / 135


The Structured Singular Value
For the case of structured parametric uncertainty, we define the structured
singular value.

    ∆ = {∆ = diag(δ1 I_{n1}, · · · , δs I_{ns}, ∆_{s+1}, · · · , ∆_{s+f}) : δi ∈ F, ∆_{s+k} ∈ F^{nk×nk}}

• δ and ∆ represent unknown parameters.
• s is the number of scalar parameters.
• f is the number of matrix parameters.

Definition 61.
Given system M ∈ L(L2) and set ∆ as above, we define the Structured
Singular Value of (M, ∆) as

    µ(M, ∆) = 1 / inf{∥∆∥ : ∆ ∈ ∆, I − M22 ∆ is singular}

M. Peet Lecture 01: 133 / 135


The Structured Singular Value

Theorem 62.
Let
    ∆n = {∆ ∈ ∆ : ∥∆∥ < 1/µ(M, ∆)}.
Then the pair (M, ∆n) is robustly stable.

M. Peet Lecture 01: 134 / 135


Conclusion

LMIs are a versatile tool for


• Optimal H∞ Control
• Optimal H2 Control (LQR/LQG)
• Robust Control
Next Lecture, we expand the use of LMIs exponentially
1. Nonlinear Systems Theory
2. Sum-of-Squares Nonlinear Stability Analysis
Time permitting, we will explore other applications
1. Stability and Control of Time-Delay Systems
2. Stability and Control of PDE systems.

M. Peet Lecture 01: 135 / 135
