
Tutorial on Control Theory

Stefan Simrock, ITER

ICALEPCS, WTC Grenoble, France, Oct. 10-14, 2011


Stefan Simrock, Tutorial on Control Theory, ICALEPCS, Grenoble, France, Oct. 10-14, 2011

Outline

Introduction to feedback control


Model of dynamic systems
State space
Transfer functions
Stability
Feedback analysis
Controller Design

Matlab / Simulink Tools

Example beam control


Example Plasma Control

1.Control Theory
Objective:
The course on control theory is concerned with the analysis and design of closed loop
control systems.
Analysis:
The closed-loop system is given → determine its characteristics or behavior.

Design:
Desired system characteristics or behavior are specified → configure or synthesize the closed-loop system.

[Block diagram, control-system components: input variable → plant → output variable; a sensor provides the measurement of the output variable.]

1.Introduction
Definition:
A closed-loop system is a system in which certain forces (we call these inputs) are
determined, at least in part, by certain responses of the system (we call these outputs).

[Diagram: system inputs → system → system outputs, with the outputs fed back to the inputs. Closed-loop system.]

1.Introduction
Definitions:
The system for measurement of a variable (or signal) is called a sensor.
A plant of a control system is the part of the system to be controlled.
The compensator (or controller or simply filter) provides satisfactory
characteristics for the total system.
[Closed-loop control system: system input → error → compensator → manipulated variable → plant → system output; a sensor feeds the output back to form the error.]

Two types of control systems:
A regulator maintains a physical variable at some constant value in the presence of disturbances.
A servomechanism describes a control system in which a physical variable is required to follow, or track, some desired time function (originally applied to the control of a mechanical position or motion).

1.Introduction
Example 1: RF control system
Goal:
Maintain stable gradient and phase.
Solution:
Feedback for gradient amplitude and phase.
[Block diagram: gradient set point → (+/-) → controller (amplitude and phase controllers) → klystron → cavity; a phase detector and a gradient detector close the loops.]

1.Introduction
Model:
Mathematical description of input-output relation of components combined with block
diagram.

Amplitude loop (general form):

[Block diagram: reference input → (+/-) → error → controller/amplifier → RF power amplifier (klystron) → plant (cavity) → output; a monitoring transducer (gradient detector) feeds the output back.]

1.Introduction
RF control model using transfer functions

[Block diagram: reference input R(s) → (+/-) → error E(s) → controller Hc(s) → control input U(s) → klystron K(s) → cavity P(s) → output Y(s); gradient detector M(s) in the feedback path.]

A transfer function of a linear system is defined as the ratio of the Laplace transform of the output to the Laplace transform of the input, with initial conditions zero.

Input-Output Relations

Input   Output   Transfer Function
U(s)    Y(s)     G(s) = P(s) K(s)
E(s)    Y(s)     L(s) = G(s) Hc(s)
R(s)    Y(s)     T(s) = (1 + L(s) M(s))^(-1) L(s)

1.Introduction
Example 2: Electrical circuit

[Circuit: source V1(t) drives the current i(t) through R1, then through R2 and C in series; V2(t) is measured across R2 and C.]

Differential equations:

R1 i(t) + R2 i(t) + (1/C) ∫0^t i(τ) dτ = V1(t)
R2 i(t) + (1/C) ∫0^t i(τ) dτ = V2(t)

Laplace transform:

R1 I(s) + R2 I(s) + (1/(sC)) I(s) = V1(s)
R2 I(s) + (1/(sC)) I(s) = V2(s)

Transfer function (input V1, output V2):

G(s) = V2(s)/V1(s) = (R2 C s + 1) / ((R1 + R2) C s + 1)
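As a quick sanity check, the transfer function above can be evaluated numerically. This is a sketch; the component values R1, R2, C are illustrative choices, not from the slides.

```python
# Numeric check of G(s) = (R2*C*s + 1) / ((R1 + R2)*C*s + 1)
R1, R2, C = 1e3, 2e3, 1e-6  # ohms, ohms, farads (illustrative values)

def G(s):
    return (R2 * C * s + 1) / ((R1 + R2) * C * s + 1)

dc_gain = G(0)                 # capacitor open at DC -> V2 = V1
hf_gain = abs(G(1j * 1e6))     # high frequency -> resistive divider R2/(R1+R2)
print(dc_gain, hf_gain)
```

At DC the gain is 1 (no current flows, so no drop across R1 matters only in the limit shown by the formula), and at high frequency the gain approaches R2/(R1+R2).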

1.Introduction
Example 3: Circuit with operational amplifier

[Inverting op-amp circuit: input Vi through R1 to the inverting input; feedback via R2 in series with C; output Vo.]

Vi(s) = R1 I1(s)  and  Vo(s) = -(R2 + 1/(sC)) I1(s)

G(s) = Vo(s)/Vi(s) = -(R2 C s + 1) / (R1 C s)

It is convenient to derive a transfer function for a circuit with a single operational amplifier that contains an input and a feedback impedance:

[Circuit: input impedance Zi(s) from Vi to the inverting input, feedback impedance Zf(s) from output to inverting input.]

Vi(s) = Zi(s) I(s)  and  Vo(s) = -Zf(s) I(s)

G(s) = Vo(s)/Vi(s) = -Zf(s)/Zi(s)

Model of Dynamic System


We will study the following dynamic system:

Parameters:
k : spring constant
ρ : damping constant
u(t) : external force
Quantity of interest:
y(t) : displacement from equilibrium (mass m = 1)

Differential equation, from Newton's second law (m = 1):

y''(t) = Σ F_ext = -k y(t) - ρ y'(t) + u(t)

y''(t) + ρ y'(t) + k y(t) = u(t),  y(0) = y0, y'(0) = y'0

Equation is linear (i.e. no terms like y'^2).

Ordinary (as opposed to partial, e.g. (∂/∂x)(∂/∂t) f(x,t) = 0).

All coefficients constant: k(t) = k, ρ(t) = ρ for all t.

Model of Dynamic System


Stop calculating, let's paint!
Picture to visualize the differential equation:
1. Express the highest-order term (put it on one side):

y''(t) = -k y(t) - ρ y'(t) + u(t)

2. Put an adder in front.
3. Synthesize all other terms using integrators!

[Block diagram: u(t) and the fed-back terms -ρ y'(t) and -k y(t) enter an adder producing y''(t); two integrators in series give y'(t) and then y(t).]

2.1 Linear Ordinary Differential Equation (LODE)


General form of a LODE:

y^(n)(t) + a_(n-1) y^(n-1)(t) + ... + a1 y'(t) + a0 y(t) = b_m u^(m)(t) + ... + b1 u'(t) + b0 u(t)

m, n positive integers, m ≤ n; coefficients a0, a1, ..., a_(n-1), b0, ..., b_m real numbers.

Mathematical solution: hopefully you know it.

Solution of the LODE: y(t) = y_h(t) + y_p(t),
the sum of the homogeneous solution y_h(t) (natural response), solving

y^(n)(t) + a_(n-1) y^(n-1)(t) + ... + a1 y'(t) + a0 y(t) = 0,

and a particular solution y_p(t).

How to get the natural response y_h(t)? Characteristic polynomial:

α(λ) = λ^n + a_(n-1) λ^(n-1) + ... + a1 λ + a0 = 0

(λ - λ1)^r (λ - λ_(r+1)) ... (λ - λ_n) = 0

y_h(t) = (c1 + c2 t + ... + c_r t^(r-1)) e^{λ1 t} + c_(r+1) e^{λ_(r+1) t} + ... + c_n e^{λ_n t}

Determination of y_p(t) is relatively simple if the input u(t) yields only a finite number of independent derivatives, e.g. u(t) ∝ e^{rt} or t^r.

2.1Linear Ordinary Differential Equation (LODE)


Most important for control system/feedback design:

y^(n)(t) + a_(n-1) y^(n-1)(t) + ... + a1 y'(t) + a0 y(t) = b_m u^(m)(t) + ... + b1 u'(t) + b0 u(t)

In general: any linear time-invariant system described by a LODE can be realized/simulated/easily visualized in a block diagram (n = 2, m = 2).

Control-canonical form:

[Block diagram: u(t) enters an adder followed by two integrators producing the states x2 and x1; feedback gains a1 and a0; feed-forward gains b2, b1, b0 sum to the output y(t).]

Very useful to visualize the interaction between variables!
What are x1 and x2?
More explanation later; for now, please simply accept it!

2.2 State Space Equation


Any system which can be represented by a LODE can be represented in state-space form (matrix differential equation).
What do we have to do?
Let's go back to our first example (Newton's law):

y''(t) + ρ y'(t) + k y(t) = u(t)

1. STEP:
Deduce a set of first-order differential equations in the variables x_j(t) (the so-called states of the system):

x1(t) : position y(t)
x2(t) : velocity y'(t)

x1'(t) = y'(t) = x2(t)
x2'(t) = y''(t) = -k y(t) - ρ y'(t) + u(t)
        = -k x1(t) - ρ x2(t) + u(t)

One LODE of order n is transformed into n LODEs of order 1.

2.2 State Space Equation


2. STEP:
Put everything together in a matrix differential equation:

[x1'(t); x2'(t)] = [0 1; -k -ρ] [x1(t); x2(t)] + [0; 1] u(t)

x'(t) = A x(t) + B u(t)      State equation

y(t) = [1 0] [x1(t); x2(t)]

y(t) = C x(t) + D u(t)       Measurement equation

Definition:
The system state x of a system at any time t0 is the amount of information that, together with all inputs for t ≥ t0, uniquely determines the behaviour of the system for all t ≥ t0.
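The state equations of the spring-damper example can be integrated step by step. A minimal forward-Euler sketch, with illustrative values k = 4, ρ = 1 (not from the slides):

```python
# Forward-Euler integration of x1' = x2, x2' = -k*x1 - rho*x2 + u  (m = 1)
k, rho = 4.0, 1.0
x1, x2 = 0.0, 0.0        # position y(0), velocity y'(0)
u = 1.0                  # constant force input
dt = 1e-4
for _ in range(200000):  # 20 s of simulated time
    dx1 = x2
    dx2 = -k * x1 - rho * x2 + u
    x1 += dt * dx1
    x2 += dt * dx2
print(x1, x2)  # the damped oscillator settles near the static deflection u/k
```

After the transient has decayed, the position approaches u/k = 0.25 and the velocity approaches zero.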

2.2 State Space Equation


The linear time-invariant (LTI) analog system is described via the standard form of the state-space equations:

x'(t) = A x(t) + B u(t)      State equation
y(t) = C x(t) + D u(t)       Measurement equation

where x'(t) is the time derivative of the vector x(t) = [x1(t); ...; x_n(t)], with starting conditions x(t0).

The system is completely described by the state-space matrices A, B, C, D (in most cases D = 0).

Declaration of variables:

Variable  Dimension  Name
x(t)      n × 1      state vector
A         n × n      system matrix
B         n × r      input matrix
u(t)      r × 1      input vector
y(t)      p × 1      output vector
C         p × n      output matrix
D         p × r      matrix representing direct coupling between input and output

2.2 State Space Equation


Why all this work with the state-space equation? Why bother?
BECAUSE: any system of the LODE form

y^(n)(t) + a_(n-1) y^(n-1)(t) + ... + a1 y'(t) + a0 y(t) = b_m u^(m)(t) + ... + b1 u'(t) + b0 u(t)

can be represented as

x'(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)

with e.g. the control-canonical form (case n = 3, m = 3):

A = [0 1 0; 0 0 1; -a0 -a1 -a2],  B = [0; 0; 1],  C = [b0 b1 b2],  D = b3

or the observer-canonical form:

A = [0 0 -a0; 1 0 -a1; 0 1 -a2],  B = [b0; b1; b2],  C = [0 0 1],  D = b3

Notation is very compact, but: not unique!

Computers love the state-space equation! (Trust us!)
Modern control (1960-now) uses the state-space equation.
General (vector) block diagram for easy visualization.

2.2 State Space Equation


Block diagrams:

[Control-canonical form: u(t) → adder → integrator chain giving x2, x1; feedback gains a0, a1; feed-forward gains b0, b1, b2 sum to y(t).]

[Observer-canonical form: u(t) weighted by b0, b1, b2 enters the integrator chain; the states x1, x2 are fed back through a0, a1; output y(t).]

2.2 State Space Equation


Now: solution of the state-space equation in the time domain. Out of the hat, et voilà:

x(t) = Φ(t) x(0) + ∫0^t Φ(t - τ) B u(τ) dτ

(natural response + particular solution)

y(t) = C x(t) + D u(t)
     = C Φ(t) x(0) + C ∫0^t Φ(t - τ) B u(τ) dτ + D u(t)

with the state transition matrix

Φ(t) = I + A t + (A^2/2!) t^2 + (A^3/3!) t^3 + ... = e^{At}

an exponential series in the matrix A (time evolution operator). Properties of Φ(t) (state transition matrix):

1. dΦ(t)/dt = A Φ(t)
2. Φ(0) = I
3. Φ(t1 + t2) = Φ(t1) Φ(t2)
4. Φ^(-1)(t) = Φ(-t)

Example:

A = [0 1; 0 0],  A^2 = [0 0; 0 0],  Φ(t) = I + A t = [1 t; 0 1] = e^{At}

Matrix A is a nilpotent matrix.
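The nilpotent example can be checked directly: since A^2 = 0, the exponential series terminates after the linear term. A minimal sketch:

```python
# For A = [[0,1],[0,0]] the series e^{At} = I + At + ... terminates: A @ A == 0
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
t = 3.7                      # arbitrary time
Phi = np.eye(2) + A * t      # state transition matrix I + A t
print(Phi)                   # upper-triangular with t in the corner
```

This reproduces Φ(t) = [1 t; 0 1] exactly, with no truncation error.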

2.3 Examples
Example:
Given the following differential equation:

d^2y/dt^2 + 4 dy/dt + 3 y(t) = 2 u(t)

State equations of the differential equation:
Let x1(t) = y(t) and x2(t) = y'(t). Then:

x1'(t) = y'(t) = x2(t)
x2'(t) + 4 x2(t) + 3 x1(t) = 2 u(t)
x2'(t) = -3 x1(t) - 4 x2(t) + 2 u(t)

Write the state equations in matrix form. Define the system state x(t) = [x1(t); x2(t)]. Then it follows:

x'(t) = [0 1; -3 -4] x(t) + [0; 2] u(t)

y(t) = [1 0] x(t)
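The matrices above can be written down and checked numerically; the poles of y'' + 4y' + 3y = 2u are the roots of s^2 + 4s + 3 = 0, i.e. the eigenvalues of A. A short sketch:

```python
# State matrices of the example and their eigenvalues (the system poles)
import numpy as np

A = np.array([[0.0, 1.0], [-3.0, -4.0]])
B = np.array([[0.0], [2.0]])
C = np.array([[1.0, 0.0]])

poles = np.sort(np.linalg.eigvals(A).real)
print(poles)  # roots of s^2 + 4s + 3: -3 and -1
```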

2.3 Cavity Model


[Circuit: generator Ig drives the cavity through a circulator and transmission lines of impedance Z0; equivalent circuit: the cavity is a parallel R0-L-C resonator driven additionally by the beam current Ib, coupled via a 1:m coupler to the generator and external load Rext.]

Resonator equation:

C U'' + (1/R_L) U' + (1/L) U = I_g' + I_b'

ω_1/2 := 1/(2 R_L C) = ω0/(2 Q_L)

U'' + 2 ω_1/2 U' + ω0^2 U = 2 R_L ω_1/2 (I_g' + I_b')

2.3 Cavity Model


Only the envelope of the RF (real and imaginary part) is of interest:

U(t) = (U_r(t) + i U_i(t)) exp(i ω_HF t)
I_g(t) = (I_gr(t) + i I_gi(t)) exp(i ω_HF t)
I_b(t) = (I_br(t) + i I_bi(t)) exp(i ω_HF t) = 2 (I_b0r(t) + i I_b0i(t)) exp(i ω_HF t)

Neglect small terms in the derivatives of U and I (the second derivatives of the envelopes and the derivatives of the currents are small compared with the ω_HF terms).

Envelope equations for the real and imaginary components:

U_r'(t) + ω_1/2 U_r + Δω U_i = ω_HF (r/Q) ((1/m) I_gr + I_b0r)
U_i'(t) + ω_1/2 U_i - Δω U_r = ω_HF (r/Q) ((1/m) I_gi + I_b0i)

2.3 Cavity Model


Matrix equations:

[U_r'(t); U_i'(t)] = [-ω_1/2  -Δω; Δω  -ω_1/2] [U_r(t); U_i(t)] + ω_HF (r/Q) [(1/m) I_gr(t) + I_b0r(t); (1/m) I_gi(t) + I_b0i(t)]

with the system matrices

A = [-ω_1/2  -Δω; Δω  -ω_1/2],  B = ω_HF (r/Q) [1 0; 0 1]

state vector x(t) = [U_r(t); U_i(t)] and input vector

u(t) = [(1/m) I_gr(t) + I_b0r(t); (1/m) I_gi(t) + I_b0i(t)]

General form:

x'(t) = A x(t) + B u(t)

2.3 Cavity Model


Solution:

x(t) = Φ(t) x(0) + ∫0^t Φ(t - t') B u(t') dt'

Φ(t) = e^{-ω_1/2 t} [cos(Δω t)  -sin(Δω t); sin(Δω t)  cos(Δω t)]

Special case, constant drive u(t) = [(1/m) I_gr + I_b0r; (1/m) I_gi + I_b0i] =: [I_r; I_i] and x(0) = 0:

[U_r(t); U_i(t)] = (ω_HF (r/Q) / (ω_1/2^2 + Δω^2)) (I - Φ(t)) [ω_1/2  -Δω; Δω  ω_1/2] [I_r; I_i]

2.3 Cavity Model


[Simulink models: a harmonic oscillator built from two integrators (1/s) with gain feedback, driven by a step and displayed on a scope; the same harmonic oscillator as a state-space block (x' = Ax + Bu, y = Cx + Du); and the cavity model as a state-space block with step input, loaded data, and scope output.]

2.3 Cavity Model


[Simulink model of the cavity envelope equations: two integrator chains for U_r and U_i, cross-coupled through gains ω_1/2 and Δω (dw), with step inputs, loaded data, and a scope.]

2.4 Mason's Rule

Mason's rule is a simple formula for reducing block diagrams. It works on continuous and discrete systems. In its most general form it is messy, but for the special case when all paths touch:

H(s) = Σ(forward path gains) / (1 - Σ(loop path gains))

Two paths are said to touch if they have a component in common, e.g. an adder.

[Signal-flow graph with nodes 1-11 and branch gains H1 ... H5.]

 Forward paths:  F1: 1-10-11-5-6,  G(f1) = H5 H3
                  F2: 1-2-3-4-5-6,  G(f2) = H1 H2 H3

 Loop paths:     l1: 3-4-5-8-9,    G(l1) = H2 H4
                  l2: 5-6-7,        G(l2) = H3

Check: all paths touch (they contain the adder between 4 and 5).

 By Mason's rule:

H = (G(f1) + G(f2)) / (1 - G(l1) - G(l2)) = (H5 H3 + H1 H2 H3) / (1 - H2 H4 - H3) = H3 (H5 + H1 H2) / (1 - H2 H4 - H3)
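As a sanity check, Mason's formula applied to the standard feedback loop of section 1 (one forward path with gain L = G·Hc, one loop with gain -L·M) reproduces the closed-loop result T = L/(1 + L·M). Scalar stand-in gains below are illustrative, not from the slides:

```python
# Mason's rule on a single-loop system vs. the closed-loop algebra
L, M = 4.0, 0.5                    # illustrative forward-path and sensor gains
T_mason = L / (1 - (-L * M))       # 1 - (sum of loop gains), loop gain = -L*M
T_algebra = L / (1 + L * M)        # direct closed-loop derivation
print(T_mason, T_algebra)
```

Both expressions agree, which is the point: Mason's rule is just a bookkeeping device for the same loop algebra.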


2.5 Transfer Function G (s)


Continuous-time state-space model:

x'(t) = A x(t) + B u(t)      State equation
y(t) = C x(t) + D u(t)       Measurement equation

The transfer function describes the input-output relation of the system.

U(s) → [System] → Y(s)

s X(s) - x(0) = A X(s) + B U(s)

X(s) = (sI - A)^(-1) x(0) + (sI - A)^(-1) B U(s)
     = Φ(s) x(0) + Φ(s) B U(s)

Y(s) = C X(s) + D U(s)
     = C (sI - A)^(-1) x(0) + [C (sI - A)^(-1) B + D] U(s)
     = C Φ(s) x(0) + C Φ(s) B U(s) + D U(s)

Transfer function G(s) (p × r), case x(0) = 0:

G(s) = C (sI - A)^(-1) B + D = C Φ(s) B + D
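The formula G(s) = C (sI - A)^(-1) B + D is easy to evaluate numerically. A sketch using the second-order example of section 2.3, whose transfer function is G(s) = 2/(s^2 + 4s + 3):

```python
# Evaluate G(s) = C (sI - A)^{-1} B + D for the section 2.3 example
import numpy as np

A = np.array([[0.0, 1.0], [-3.0, -4.0]])
B = np.array([[0.0], [2.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

def G(s):
    return (C @ np.linalg.inv(s * np.eye(2) - A) @ B + D)[0, 0]

print(G(1.0))  # 2 / (1 + 4 + 3)
print(G(0.0))  # DC gain 2 / 3
```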

2.5 Transfer Function


Transfer function of the TESLA cavity including the 8/9-π mode:

H_cont(s) ≈ H_cav(s) = H_π(s) + H_8/9π(s)

with each mode of the same form (half-bandwidth ω_1/2, detuning Δω):

H_π(s) = ω_1/2,π (s + ω_1/2,π) / ((s + ω_1/2,π)^2 + Δω_π^2)

H_8/9π(s) = ω_1/2,8/9π (s + ω_1/2,8/9π) / ((s + ω_1/2,8/9π)^2 + Δω_8/9π^2)

2.5 Transfer Function of a Closed Loop System


[Block diagram: R(s) → (+/-) → E(s) → Hc(s) → U(s) → G(s) → Y(s), with M(s) in the feedback path.]

We can deduce for the output of the system:

Y(s) = G(s) U(s) = G(s) Hc(s) E(s)
     = G(s) Hc(s) [R(s) - M(s) Y(s)]
     = L(s) R(s) - L(s) M(s) Y(s)

with L(s) the transfer function of the open-loop system (controller plus plant).

(I + L(s) M(s)) Y(s) = L(s) R(s)

Y(s) = (I + L(s) M(s))^(-1) L(s) R(s) = T(s) R(s)

T(s) is called the reference transfer function.

2.5 Sensitivity
System characteristics change with system parameter variations. The ratio of the relative change in the transfer function T(s) to the relative change in a parameter b can be defined as:

S = (ΔT(s)/T(s)) / (Δb/b)

The sensitivity function is defined as:

S_b^T = lim_{Δb→0} (ΔT(s)/T(s)) / (Δb/b) = (∂T(s)/∂b) (b/T(s))

Or, in general, the sensitivity function of a characteristic W with respect to the parameter b:

S_b^W = (∂W/∂b) (b/W)

Example: plant with proportional feedback, Gc(s) = Kp, Gp(s) = K/(s + 0.1).

Reference transfer function T(s):

T(s) = Kp Gp(s) / (1 + Kp Gp(s) H_k)

S_H^T(jω) = -Kp Gp(jω) H_k / (1 + Kp Gp(jω) H_k)

With K = 0.25 and H_k = 1:  T(jω) = 0.25 Kp / (0.1 + 0.25 Kp + jω)

[Plot of |S| versus ω for Kp = 1 and Kp = 10.]

An increase of H results in a decrease of T:
 the system can't be insensitive to both H and T.

2.5 Disturbance Rejection


Disturbances are system influences we do not control; we want to minimize their impact on the system.

C(s) = [Gc(s) Gp(s) / (1 + Gc(s) Gp(s) H(s))] R(s) + [Gd(s) / (1 + Gc(s) Gp(s) H(s))] D(s)
     = T(s) R(s) + Td(s) D(s)

[Block diagram: R(s) and the disturbance D(s) (through Gd(s)) enter the loop Gc(s) → Gp(s) → C(s), with H(s) in the feedback path.]

To reject disturbances, make Td(s) D(s) small!

 Use the frequency-response approach to investigate disturbance rejection.
 In general Td(jω) can't be small for all ω; design Td(jω) small for a significant portion of the system bandwidth.
 Reduce the gain Gd(jω) between disturbance input and output.
 Increase the loop gain Gc(jω) Gp(jω) without increasing the gain Gd(jω); usually accomplished by the choice of the compensator Gc(jω).
 Reduce the disturbance magnitude d(t): this should always be attempted if reasonable.
 Use feed-forward compensation, if the disturbance can be measured.

2.6 Stability
Now what do we know:
The impulse response tells us everything about the system response to any arbitrary input signal u(t).

What we have not learnt:
If we know the transfer function G(s), how can we deduce the system's behavior?
What can we say, e.g., about the system's stability?

Definition:
A linear time-invariant system is called BIBO stable (bounded-input bounded-output) if, for all bounded inputs |u(t)| ≤ M1 (for all t), there exists a bound M2 for the output signal so that |y(t)| ≤ M2 (for all t), with M1 and M2 positive real numbers.

If the input never exceeds M1 and the output never exceeds M2, then we have BIBO stability!
Note: it has to be valid for ALL bounded input signals!

2.6 Stability
Example: Y(s) = G(s) U(s), integrator G(s) = 1/s

1. Case: u(t) = δ(t), U(s) = 1

y(t) = L^(-1)[Y(s)] = L^(-1)[1/s] = 1

The bounded input signal causes a bounded output signal.

2. Case: u(t) = 1, U(s) = 1/s

y(t) = L^(-1)[Y(s)] = L^(-1)[1/s^2] = t

The output is unbounded. BIBO stability has to be shown/proved for any input; it is not sufficient to show its validity for a single input signal!

2.6 Stability
Condition for BIBO stability:
We start from the input-output relation

Y(s) = G(s) U(s)

By means of the convolution theorem we get

|y(t)| = |∫0^∞ g(τ) u(t - τ) dτ| ≤ ∫0^∞ |g(τ)| |u(t - τ)| dτ ≤ M1 ∫0^∞ |g(τ)| dτ ≤ M2

Therefore it follows immediately: if the impulse response is absolutely integrable,

∫0^∞ |g(t)| dt < ∞,

then the system is BIBO stable.

2.7 Poles and Zeroes


Can stability be determined if we know the TF of a system?

G(s) = C Φ(s) B + D = C [adj(sI - A) / det(sI - A)] B + D

The coefficients of the transfer function G(s) are rational functions in the complex variable s:

g_ij(s) = Π_{k=1}^{m} (s - z_k) / Π_{l=1}^{n} (s - p_l) = N_ij(s) / D_ij(s)

z_k zeroes, p_l poles, real constants, and m ≤ n (we assume common factors have already been canceled!).

What do we know about the zeroes and the poles?
Since the numerator N(s) and denominator D(s) are polynomials with real coefficients, poles and zeroes must be real numbers or must arise as complex conjugate pairs!

2.7 Poles and Zeroes


Stability directly from state space:

Recall: H(s) = C (sI - A)^(-1) B + D

Assuming D = 0 (D could change zeros but not poles):

H(s) = C adj(sI - A) B / det(sI - A) = b(s) / a(s)

Assuming there are no common factors between the polynomials C adj(sI - A) B and det(sI - A), i.e. no pole-zero cancellations (usually true; such a system is called minimal), we can identify

b(s) = C adj(sI - A) B  and  a(s) = det(sI - A)

i.e. the poles are the roots of det(sI - A).

Let λ_i be the i-th eigenvalue of A. If Re{λ_i} < 0 for all i, the system is stable.

So with a computer and an eigenvalue solver, one can determine system stability directly from the system matrix A.
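This eigenvalue test is one line with a numerical library. A minimal sketch (the helper name `is_stable` is our own, not from the slides):

```python
# Stability straight from A: stable iff every eigenvalue has Re < 0
import numpy as np

def is_stable(A):
    return bool(np.all(np.linalg.eigvals(A).real < 0))

print(is_stable(np.array([[0.0, 1.0], [-3.0, -4.0]])))  # poles -1, -3
print(is_stable(np.array([[0.0, 1.0], [0.0, 0.0]])))    # double pole at 0
```

The first system (the section 2.3 example) is stable; the double integrator, with both poles at the origin, is not.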

2.8 Stability Criteria


 A system is BIBO stable if, for every bounded input, the output remains bounded with increasing time.
 For an LTI system, this definition requires that all poles of the closed-loop transfer function (all roots of the system characteristic equation) lie in the left half of the complex plane.

Several methods are available for stability analysis:

1. Routh-Hurwitz criterion
2. Calculation of exact locations of roots
   a. Root locus technique
   b. Nyquist criterion
   c. Bode plot
3. Simulation (the only general procedure for nonlinear systems)

 While the first criterion proves whether a feedback system is stable or unstable, the second method also provides information about the settling time (damping term).

2.8 Poles and Zeroes


Pole locations tell us about the impulse response, i.e. also about stability:

[s-plane map: Im(s) sets the oscillation frequency, Re(s) the growth or decay rate. Far left: no or fast-decaying oscillation; on the imaginary axis: oscillation with no growth; right half-plane: oscillation with growth or pure exponential growth.]

2.8 Poles and Zeroes


Furthermore, keep in mind the following picture and facts:
Complex pole pair: oscillation with growth or decay.
Real pole: exponential growth or decay.
Poles are the eigenvalues of the matrix A.
The position of the zeros goes into the size of the coefficients c_j.
 In general a complex root must have a corresponding conjugate root (N(s), D(s) are polynomials with real coefficients).

2.8 Bode Diagram


[Bode diagram: magnitude (dB) and phase (degrees) versus frequency; the gain margin Gm is read at the -180° phase crossing, the phase margin at the unity-gain (0 dB) crossover.]

The closed loop is stable if the phase of the OPEN LOOP at the unity-gain crossover frequency is larger than -180 degrees.

2.8 Root Locus Analysis


Definition: A root locus of a system is a plot of the roots of the system characteristic equation (the poles of the closed-loop transfer function) while some parameter of the system (usually the feedback gain) is varied.

K H(s) = K / ((s - p1)(s - p2)(s - p3))

[Block diagram: R(s) → (+/-) → K → H(s) → Y(s), unity feedback.]

G_CL(s) = K H(s) / (1 + K H(s)),  roots at 1 + K H(s) = 0.

How do we move the poles by varying the constant gain K?

2.8 Root Locus Examples


[Root locus plots for: (a) 1/(s - p1); (b) 1/((s - p1)(s - p2)); (c), (d) (s - z1)/((s - p1)(s - p2)) with different zero locations; (e), (f), (g) 1/((s - p1)(s - p2)(s - p3)) with different pole configurations; (h) (s - z1)/((s - p1)(s - p2)(s - p3)).]

3.Feedback
The idea:
Suppose we have a system or plant (open loop):

u → [plant] → y

We want to improve some aspect of the plant's performance by observing the output and applying an appropriate correction signal. This is feedback (closed loop):

r → (+) → [plant] → y,  with feedback signal u_feedback = ?

Question: what should this be?

3.Feedback
Open-loop gain:

G^{O.L.}(s) = G(s) = y/u

Closed-loop gain:

[Block diagram: u → (+/-) → G(s) → y, with H(s) in the feedback path.]

G^{C.L.}(s) = G(s) / (1 + G(s) H(s))

Proof:  y = G (u - u_fb) = G u - G H y

y + G H y = G u

y/u = G / (1 + G H)

3.1 Feedback-Example 1
Consider the S.H.O. with feedback proportional to x, i.e.:

x'' + γ x' + ωn^2 x = u + u_fb,  where  u_fb(t) = -α x(t)

[Block diagram: adder → integrator (1/s) → integrator (1/s); feedback through γ, ωn^2 and the additional gain α.]

Then

x'' + γ x' + ωn^2 x = u - α x
==>  x'' + γ x' + (ωn^2 + α) x = u

Same as before, except for the new natural frequency √(ωn^2 + α).

3.1 Feedback-Example 1
Now the closed-loop T.F. is:

G^{C.L.}(s) = 1 / (s^2 + γ s + ωn^2 + α)

DC response (s = 0): 1/(ωn^2 + α) instead of 1/ωn^2.

[Bode plot: |G^{O.L.}(iω)| and |G^{C.L.}(iω)|; the resonance moves from ωn to √(ωn^2 + α).]

So the effect of the proportional feedback in this case is to increase the bandwidth of the system (and reduce the gain slightly, but this can easily be compensated by adding a constant gain in front).
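The frequency-response claim can be checked numerically. A sketch with illustrative values ωn^2 = 1, α = 3, γ = 0.1 (our choices, not from the slides):

```python
# Proportional feedback shifts the resonance from wn to sqrt(wn^2 + alpha)
import math

wn2, alpha, gamma = 1.0, 3.0, 0.1

def mag_cl(w):
    # |G_CL(i w)| for G_CL(s) = 1 / (s^2 + gamma*s + wn2 + alpha)
    s = 1j * w
    return abs(1.0 / (s * s + gamma * s + wn2 + alpha))

print(mag_cl(0.0))             # DC gain 1 / (wn2 + alpha)
print(math.sqrt(wn2 + alpha))  # new resonance frequency, here 2.0
```

The response now peaks near ω = 2 instead of ω = 1, and the DC gain drops from 1 to 1/(ωn^2 + α).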

3.1 Feedback-Example 2
In the S.H.O. suppose we use integral feedback:

u_fb(t) = -β ∫0^t x(τ) dτ

i.e.  x'' + γ x' + ωn^2 x = u - β ∫0^t x(τ) dτ

[Block diagram: as before, with an additional integrator and gain β in the feedback path.]

Differentiating once more yields:

x''' + γ x'' + ωn^2 x' + β x = u'

No longer just a simple S.H.O.; we have added another state.

3.1 Feedback-Example 2
G^{C.L.}(s) = [1/(s^2 + γ s + ωn^2)] / [1 + β/(s (s^2 + γ s + ωn^2))] = s / (s (s^2 + γ s + ωn^2) + β)

Observe that:
1. G^{C.L.}(0) = 0
2. For large s (and hence for large ω): G^{C.L.}(s) ≈ 1/(s^2 + γ s + ωn^2) = G^{O.L.}(s)

[Bode plot: |G^{O.L.}(iω)| and |G^{C.L.}(iω)|; the closed-loop gain falls to zero at DC.]

So integral feedback has killed the DC gain, i.e. the system rejects constant disturbances.

3.1 Feedback-Example 3
Suppose we now apply differential feedback to the S.H.O., i.e.:

u_fb(t) = -δ x'(t)

[Block diagram: as before, with the velocity x' fed back through the gain δ.]

Now we have

x'' + (γ + δ) x' + ωn^2 x = u

So the effect of differential feedback is to increase the damping.

3.1 Feedback-Example 3
Now

G^{C.L.}(s) = 1 / (s^2 + (γ + δ) s + ωn^2)

[Bode plot: |G^{O.L.}(iω)| and |G^{C.L.}(iω)|; the resonance peak is flattened.]

So the effect of differential feedback here is to flatten the resonance, i.e. damping is increased.

Note: differentiators can never be built exactly, only approximately.

3.1 PID controller


(1) The latter 3 examples of feedback can all be combined to form a P.I.D. controller (proportional-integral-differential):

u → (+) → [S.H.O.] → x = y

P.I.D. controller: K_p + K_D s + K_I/s,  u_fb = u_p + u_d + u_i

(2) In the example above the S.H.O. was a very simple system and it was clear what the physical interpretation of P., I. or D. did. But for large complex systems this is not obvious:

 it requires arbitrary tweaking, and that's what we're trying to avoid.

3.1 PID controller


For example, if you are so smart, let's see you do this with your P.I.D. controller:

A 6th-order system: 3 resonant poles, i.e. 3 complex pairs, 6 poles.

Damp this mode, but leave the other two modes undamped, just as they are.
This could turn out to be a tweaking nightmare that'll get you nowhere fast!

We'll see how this problem can be solved easily.

3.2 Full State Control


Suppose we have the system

x'(t) = A x(t) + B u(t)
y(t) = C x(t)

Since the state vector x(t) contains all current information about the system, the most general feedback makes use of all the state info:

u_fb = -k1 x1 - ... - k_n x_n = -K x

where K = [k1 ... k_n] (row matrix).

Example: in the S.H.O. examples,

proportional fbk:  u_p = -k_p x1 = -[k_p 0] x
differential fbk:  u_d = -k_d x2 = -[0 k_d] x

3.2 Full State Control


Theorem:

If there are no pole-zero cancellations in

G^{O.L.}(s) = b(s)/a(s) = C (sI - A)^(-1) B

then we can move the eigenvalues of A - BK anywhere we want using full state feedback.

Proof:
Given any system as LODE or state space, it can be written in control-canonical form:

A^{O.L.} = [0 1 ... 0; ... ... ... ...; 0 ... ... 1; -a0 ... ... -a_(n-1)],  B = [0; ...; 0; 1],  y = [b0 ... b_(n-1)] x

G^{O.L.}(s) = C (sI - A)^(-1) B = (b_(n-1) s^(n-1) + ... + b0) / (s^n + a_(n-1) s^(n-1) + ... + a0)

3.2 Full State Control


i.e. the last row of A^{O.L.} gives the coefficients of the denominator:

a^{O.L.}(s) = det(sI - A^{O.L.}) = s^n + a_(n-1) s^(n-1) + ... + a0

Now

A^{C.L.} = A^{O.L.} - B K

Subtracting B K = [0; ...; 0; 1] [k0 ... k_(n-1)] changes only the last row:

A^{C.L.} = [0 1 ... 0; ... ... ... ...; 0 ... ... 1; -(a0 + k0) ... ... -(a_(n-1) + k_(n-1))]

So the closed-loop denominator is

a^{C.L.}(s) = det(sI - A^{C.L.}) = s^n + (a_(n-1) + k_(n-1)) s^(n-1) + ... + (a0 + k0)

Using u = -K x we have direct control over every closed-loop denominator coefficient:

 we can place the roots anywhere we want in the s-plane.
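In control-canonical form this pole placement is just coefficient matching: each k_i is the difference between the desired and the open-loop denominator coefficient. A sketch with illustrative numbers (open-loop poles -1, -3; desired poles -3 ± 4j):

```python
# Pole placement in control-canonical form: K adds k_i to each a_i
import numpy as np

a0, a1 = 3.0, 4.0                       # a(s) = s^2 + 4s + 3, poles -1 and -3
A = np.array([[0.0, 1.0], [-a0, -a1]])
B = np.array([[0.0], [1.0]])

# Desired closed-loop polynomial s^2 + 6s + 25 (poles -3 +/- 4j):
K = np.array([[25.0 - a0, 6.0 - a1]])
Acl = A - B @ K
print(np.linalg.eigvals(Acl))  # closed-loop poles
```

The closed-loop matrix becomes [0 1; -25 -6], whose characteristic polynomial is exactly the desired s^2 + 6s + 25.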

3.2 Full State Control


Example: detailed block diagram of the S.H.O. with full-state feedback:

[Block diagram: u and the feedback terms -k1 x and -k2 x' enter the adder; two integrators give x' and x; plant feedback through γ and ωn^2.]

Of course this assumes we have access to the x' state, which we actually don't in practice.
However, let's ignore that minor practical detail for now.
(The Kalman filter will show us how to get x' from x.)

3.2 Full State Control


With full state feedback we have (assume D = 0):

x' = A x + B [u + u_fb]
   = A x + B u - B K x        (u_fb = -K x)
   = (A - B K) x + B u

y = C x

With full state feedback we get the new closed-loop matrix

A^{C.L.} = A^{O.L.} - B K

and all stability info is now given by the eigenvalues of the new A matrix.

3.3 Controllability and Observability


The linear time-invariant system

    ẋ = Ax + Bu
    y = Cx

is said to be controllable if it is possible to find some input u(t) that will transfer the
initial state x(0) to the origin of state-space, x(t_0) = 0, with t_0 finite.
The solution of the state equation is:

    x(t) = Φ(t) x(0) + ∫_0^t Φ(t-τ) B u(τ) dτ

For the system to be controllable, a function u(t) must exist that satisfies the equation:

    0 = Φ(t_0) x(0) + ∫_0^{t_0} Φ(t_0-τ) B u(τ) dτ

with t_0 finite. It can be shown that this condition is satisfied if the controllability matrix

    CM = [B  AB  A²B  ...  A^{n-1}B]

has an inverse. This is equivalent to the matrix CM having full rank (rank n for an
n-th-order differential equation).
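The rank condition above is a one-liner to check numerically. A small NumPy sketch with arbitrary example matrices:

```python
import numpy as np

# Example system (values arbitrary): x' = Ax + Bu
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
n = A.shape[0]

# Controllability matrix CM = [B  AB  ...  A^(n-1)B]
CM = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
print(np.linalg.matrix_rank(CM))  # 2 = n, so the system is controllable
```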

3.3 Controllability and Observability


Observable:
The linear time-invariant system is said to be observable if the initial conditions x(0)
can be determined from the output function y(t), 0 ≤ t ≤ t_1, where t_1 is finite. With

    y(t) = Cx = C Φ(t) x(0) + C ∫_0^t Φ(t-τ) B u(τ) dτ

the system is observable if this equation can be solved for x(0). It can be shown that
the system is observable if the matrix

    OM = [ C        ]
         [ CA       ]
         [ ...      ]
         [ CA^{n-1} ]

has an inverse. This is equivalent to the matrix OM having full rank (rank n for an
n-th-order differential equation).
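The same style of numeric check works for observability (example values, a position-only sensor on a second-order plant):

```python
import numpy as np

# Example system (values arbitrary), measuring only the first state
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Observability matrix OM = [C; CA; ...; CA^(n-1)]
OM = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
print(np.linalg.matrix_rank(OM))  # 2 = n, so x(0) can be recovered from y(t)
```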

4. Discrete Systems
Where do discrete systems arise?
Typical control engineering example:
[Diagram: the computer controller produces the digitized sequence u(k); the DAC
(zero-order hold h(t)) converts it to the continuous signal u_c(t), which drives the
continuous system; the continuous output y_c(t) is sampled by the ADC back into
the digitized sequence y(k).]

Assume the DAC and ADC are clocked at sampling period T.

4. Discrete Systems
Then u_c(t) and y(k) are given by:

    u(k) = u_c(t);   kT ≤ t < (k+1)T
    y(k) = y_c(kT);  k = 0, 1, 2, ...

Suppose the time-continuous system is given by the state-space equations:

    ẋ_c(t) = A x_c(t) + B u_c(t);   x_c(0) = x_0
    y_c(t) = C x_c(t) + D u_c(t)

Can we obtain a direct relationship between u(k) and y(k)? i.e. we want the
equivalent discrete system:

    u(k) → DAC → h(t) → ADC → y(k)

4. Discrete Systems
Yes! We can obtain an equivalent discrete system.

Recall

    x_c(t) = e^{At} x_c(0) + ∫_0^t e^{A(t-τ)} B u_c(τ) dτ

From this

    x_c(kT+T) = e^{AT} x_c(kT) + ∫_0^T e^{Aτ} B u_c(kT+T-τ) dτ

Observe that u(kT+T-τ) is constant, = u(kT), over τ ∈ [0, T],
i.e. we can pull it out of the integral:

    ==> x_c(kT+T) = e^{AT} x_c(kT) + ( ∫_0^T e^{Aτ} B dτ ) u_c(kT)

So

    x(k+1) = A_d x(k) + B_d u(k)
    y(k)   = C_d x(k) + D_d u(k)
    x(0)   = x_c(0)

with

    A_d = e^{AT},   B_d = ∫_0^T e^{Aτ} B dτ,   C_d = C,   D_d = D

So we have an exact discrete-time equivalent to the time-continuous system at the
sample times t = kT, with no numerical approximation (compare the Euler
approximation x(k+1) ≈ x(k) + ẋ(k)·T + O(T²)).
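The pair A_d = e^{AT}, B_d = ∫_0^T e^{Aτ}B dτ can be computed in one shot with the standard augmented-matrix (Van Loan) trick. Below is a NumPy sketch that uses a truncated Taylor series for the matrix exponential (in practice one would call scipy.linalg.expm); the double integrator is chosen because its discretization is known in closed form:

```python
import numpy as np

def expm_taylor(M, terms=30):
    # Truncated Taylor series for the matrix exponential (fine for small ||M||;
    # use scipy.linalg.expm for production work).
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

# Double integrator x'' = u, sampled at T = 0.1 s
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
T = 0.1
n = A.shape[0]

# Van Loan trick: expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]]
M = np.zeros((n + 1, n + 1))
M[:n, :n] = A * T
M[:n, n:] = B * T
E = expm_taylor(M)
Ad, Bd = E[:n, :n], E[:n, n:]
print(Ad)  # [[1, 0.1], [0, 1]]
print(Bd)  # [[0.005], [0.1]], i.e. [T^2/2, T], the known closed form
```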

4.1 Linear Ordinary Difference Equation


A linear ordinary difference equation looks similar to a LODE:

    y(k+n) + a_{n-1} y(k+n-1) + ... + a_1 y(k+1) + a_0 y(k)
        = b_m u(k+m) + ... + b_1 u(k+1) + b_0 u(k)

with n ≥ m, and assuming initial values y(n-1), ..., y(1), y(0) = 0.
The z-transform of the LODE yields (by linearity of the z-transform):

    z^n Y(z) + z^{n-1} a_{n-1} Y(z) + ... + z a_1 Y(z) + a_0 Y(z)
        = z^m b_m U(z) + ... + z b_1 U(z) + b_0 U(z)

It follows the input-output relation:

    (z^n + z^{n-1} a_{n-1} + ... + z a_1 + a_0) Y(z) = (z^m b_m + ... + z b_1 + b_0) U(z)

    Y(z) = (z^m b_m + ... + z b_1 + b_0) / (z^n + ... + z a_1 + a_0) · U(z)
    Y(z) = G(z) U(z)

Once again: if U(z) = 1 (u(k) = δ(k)), then Y(z) = G(z).
The transfer function of the system is the z-transform of its pulse response!

4.1 z-Transform of Discrete State Space Equation


    x(k+1) = A_d x(k) + B_d u(k)
    y(k)   = C x(k) + D u(k)

Applying the z-transform to the first equation:

    z X(z) - z x(0) = A_d X(z) + B_d U(z)
    (zI - A_d) X(z) = z x(0) + B_d U(z)
    X(z) = (zI - A_d)^{-1} z x(0) + (zI - A_d)^{-1} B_d U(z)
           (homogeneous solution)   (particular solution)

Now:

    Y(z) = C X(z) + D U(z)
         = C (zI - A_d)^{-1} z x(0) + [C (zI - A_d)^{-1} B_d + D] U(z)

If x(0) = 0 we get the input-output relation:

    Y(z) = G(z) U(z)   with   G(z) = C (zI - A_d)^{-1} B_d + D

Exactly like for the continuous systems!
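The claim that G(z) is the z-transform of the pulse response is easy to verify numerically. The example matrices below are arbitrary and chosen stable so the truncated series converges:

```python
import numpy as np

# Example discrete system (arbitrary, stable)
Ad = np.array([[0.5, 0.1],
               [0.0, 0.8]])
Bd = np.array([[0.0],
               [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def G(z):
    # G(z) = C (zI - Ad)^{-1} Bd + D
    n = Ad.shape[0]
    return (C @ np.linalg.inv(z * np.eye(n) - Ad) @ Bd + D)[0, 0]

# Pulse response: h(0) = D, h(k) = C Ad^{k-1} Bd for k >= 1
h = [D[0, 0]] + [(C @ np.linalg.matrix_power(Ad, k) @ Bd)[0, 0] for k in range(40)]

# The truncated z-transform of h should match G(z); evaluate at z = 2
z = 2.0
series = sum(h[k] * z ** (-k) for k in range(len(h)))
print(series, G(z))  # both ~ 0.0556
```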

4.2 Frequency Domain/z-Transform


For analyzing discrete-time systems we use the z-transform
(the analogue of the Laplace transform for time-continuous systems).
It converts a linear ordinary difference equation into algebraic equations, making it
easier to find a solution of the system, and it gives the frequency response for free!

z-transform == generalized discrete-time Fourier transform.

Given any sequence f(k), the discrete Fourier transform is

    F̃(ω) = Σ_{k=-∞}^{∞} f(k) e^{-iωk}

with ω = 2πf, f = 1/T the sampling frequency in Hz, and T the time between two
samples.

In the same spirit:

    F(z) = Z[f(k)] = Σ_{k=0}^{∞} f(k) z^{-k}

with z a complex variable.

Note: if f(k) = 0 for k = -1, -2, ..., then F̃(ω) = F(z = e^{iω}).

4.3 Stability (z-domain)


A discrete LTI system is BIBO stable if

    |u(k)| < M for all k   ==>   |y(k)| < K for all k

Condition for BIBO stability:

    |y(k)| = |Σ_i u(k-i) h(i)| ≤ Σ_i |u(k-i)| |h(i)| ≤ M Σ_i |h(i)|

so

    Σ_i |h(i)| < ∞   ==>   BIBO stable.

For a L.O.D.E. / state-space system, with a partial fraction expansion of the rational
transfer function:

    H(z) = b · Π_{i=1}^m (z - z_i) / Π_{i=1}^n (z - p_i) = Σ_{i=1}^n λ_i T_i(z)

Once again the pole locations tell a lot about the shape of the pulse response,
and the zeros determine the size of the coefficients λ_i.

4.3 Stability (z-domain)

[z-plane plot: poles (X) relative to the unit circle. Poles inside the unit circle give a
damped response, poles on the circle a constant-amplitude response, and poles outside
the circle a growing response; complex pole pairs appear symmetrically about the
real axis.]

4.3 Stability (z-domain)

In general:

Complex pole pair ==> oscillatory growth / damping.
Real pole ==> exponential growth / decay, but maybe oscillatory too
(e.g. r^n · 1(n) with r < 0).

The farther inside the unit circle the poles are, the faster the damping and the
higher the stability margin,

i.e. |p_i| < 1 for all i  ==>  system stable.

4.3 Stability (z-domain)

Stability directly from state space:
Exactly as for continuous systems, assuming no pole-zero cancellations and D = 0:

    H(z) = b(z)/a(z) = C (zI - A_d)^{-1} B_d = C adj(zI - A_d) B_d / det(zI - A_d)

    b(z) = C adj(zI - A_d) B_d
    a(z) = det(zI - A_d)

The poles are the eigenvalues of A_d. So to check stability, use an eigenvalue solver
to get the eigenvalues of the matrix A_d; then if

    |λ_i| < 1 for all i  ==>  system stable

where λ_i is the i-th eigenvalue of A_d.
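This eigenvalue check is a one-liner in NumPy (example matrix arbitrary):

```python
import numpy as np

# Example discrete system matrix (arbitrary values)
Ad = np.array([[0.9, 0.2],
               [-0.2, 0.9]])
mags = np.abs(np.linalg.eigvals(Ad))
print(mags)                    # both ~ 0.922, inside the unit circle
print(bool(np.all(mags < 1)))  # True: the system is stable
```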

4.4 Discrete Cavity Model


Converting the transfer function from the continuous cavity model to the discrete
model, with half-bandwidth ω_12 and detuning Δω:

    H(s) = ω_12 / (Δω² + (s + ω_12)²) · [ s + ω_12    -Δω      ]
                                        [ Δω          s + ω_12 ]

The discretization of the model is represented by the z-transform:

    H(z) = (1 - z^{-1}) Z{ L^{-1}[H(s)/s] |_{t=kTs} } = (z-1)/z · Z{ H(s)/s }

which gives a discrete transfer function with denominator

    z² - 2z e^{-ω_12 Ts} cos(Δω Ts) + e^{-2 ω_12 Ts}

a common factor ω_12/(Δω² + ω_12²), and numerator entries built from the terms
(z - e^{-ω_12 Ts} cos(Δω Ts)) and e^{-ω_12 Ts} sin(Δω Ts).

4.5 Linear Quadratic Regulator


Given:

    x(k+1) = A x(k) + B u(k)
    z(k)   = C x(k)        (assume D = 0 for simplicity)

Suppose the system {A, B, C} is unstable or almost unstable. We want to find u_fb(k)
which will bring x(k) to zero, quickly, from any initial condition.

i.e.  u_fb(k) = ?

4.5 Trade Off


[Sketch: two closed-loop responses z(k) with the corresponding feedback signals u_fb.]

(1) Bad damping ==> large output excursions, (2) but cheap control, i.e. u_fb small.

(1) Good damping ==> small output excursions, (2) but expensive control, i.e. u_fb large.

4.5 Quadratic Forms


A quadratic form is a quadratic function of the components of a vector:

    x = [x_1; x_2] ∈ R²

    f(x) = f(x_1, x_2) = a x_1² + b x_1 x_2 + c x_1 + d x_2²

         = [x_1 x_2] [  a   b/2 ] [x_1]  +  [c  0] [x_1]
                     [ b/2   d  ] [x_2]           [x_2]

In general:

    f(x) = x^T Q x + p^T x + e
           (quadratic part + linear part + constant)
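The packing of the coefficients a, b, c, d into Q and p can be checked numerically (all numbers invented for the example):

```python
import numpy as np

# f(x) = a*x1^2 + b*x1*x2 + d*x2^2 + c*x1, packed as x^T Q x + p^T x + e
a, b, c, d = 2.0, 1.0, 3.0, 4.0
Q = np.array([[a, b / 2],
              [b / 2, d]])
p = np.array([c, 0.0])
e = 0.0

x = np.array([1.5, -2.0])
direct = a * x[0] ** 2 + b * x[0] * x[1] + d * x[1] ** 2 + c * x[0]
quad = x @ Q @ x + p @ x + e
print(direct, quad)  # both 22.0
```

Note the b/2 split on the off-diagonal: it keeps Q symmetric while still producing the cross-term b·x1·x2.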

4.5 Quadratic Cost for Regulator


What do we mean by "bad damping" and "cheap control"? We now define precisely
what we mean. Consider:

    J = Σ_{i=0}^{∞} { x_i^T Q x_i + u_i^T R u_i },    Q ≥ 0, R > 0

The first term penalizes large state excursions, the second penalizes large control.
We can trade off between state excursions and control by varying Q and R:

    large Q  <==>  good damping important
    large R  <==>  actuator effort expensive

4.5 LQR Problem Statement


(Linear quadratic regulator)
Given:

    x_{i+1} = A x_i + B u_i;   x_0 given.

Find the control sequence {u_0, u_1, u_2, ...} such that

    J = Σ_{i=0}^{∞} { x_i^T Q x_i + u_i^T R u_i } = minimum

Answer:
The optimal control sequence is a state feedback sequence {u_i}:

    u_i = -K_opt x_i

    K_opt = (R + B^T S B)^{-1} B^T S A

    S = A^T S A + Q - A^T S B (R + B^T S B)^{-1} B^T S A

the Algebraic Riccati Equation (A.R.E.) for discrete-time systems.

Note: since u_i is state feedback, it works for any initial state x_0.
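The A.R.E. can be solved by simply iterating the Riccati recursion to a fixed point (scipy.linalg.solve_discrete_are does this properly; the plain NumPy sketch below is enough for a small example, using the discretized double integrator from earlier with illustrative weights):

```python
import numpy as np

# Discretized double integrator (T = 0.1), weights chosen for illustration
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the discrete A.R.E. by iterating the Riccati recursion to a fixed point
S = Q.copy()
for _ in range(1000):
    G = np.linalg.inv(R + B.T @ S @ B)
    S_next = A.T @ S @ A + Q - A.T @ S @ B @ G @ B.T @ S @ A
    if np.max(np.abs(S_next - S)) < 1e-12:
        S = S_next
        break
    S = S_next

K_opt = np.linalg.inv(R + B.T @ S @ B) @ B.T @ S @ A
A_cl = A - B @ K_opt
print(np.abs(np.linalg.eigvals(A_cl)))  # all < 1: the closed loop is stable
```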

4.5 LQR Problem Statement


Remarks:
(1) So the optimal control, u_i = -K_opt x_i, is state feedback! This is why we are
interested in state feedback.
(2) The A.R.E. is a matrix quadratic equation. It looks pretty intimidating, but a
computer can solve it in a second.
(3) No tweaking! Just specify {A, B, C, D} and Q and R, press the return button, and
the LQR routine spits out K_opt. Done.
(Of course picking Q and R is tricky sometimes, but that's another story.)
(4) The design is guaranteed optimal in the sense that it minimizes

    J_lqr(x_0, {u_i}) = Σ_{i=0}^{∞} { x_i^T Q x_i + u_i^T R u_i }

(Of course that doesn't mean it's best in the absolute sense.)

4.5 LQR Problem Statement - Remarks


(5) As we vary the Q/R ratio we get a whole family of K_lqr's, i.e. we can trade off
state excursions (damping) against actuator effort (control):

    J_z = Σ_{i=0}^{∞} x_i^T Q x_i = Σ_{i=0}^{∞} x_i^T C^T ρ C x_i = Σ_{i=0}^{∞} z_i^T z_i
    J_u = Σ_{i=0}^{∞} u_i^T R u_i

[Sketch: trade-off curve of achievable (J_u, J_z) pairs; the optimal family traces
the boundary of the achievable region.]

4.6 Optimal Linear Estimation


Our optimal control has the form u_opt(k) = -K(k) x_opt(k).

This assumes that we have complete state information x_opt(k), which is not actually
true! e.g. in the S.H.O. we might have only a position sensor but not a velocity
sensor. How can we obtain good estimates of the velocity state from just observing
the position state?
Furthermore the sensors may be noisy, and the plant itself may be subject to outside
disturbances (process noise), i.e. we are looking for this:

[Diagram: process noise w(k) enters the plant {A,B,C}; the sensor output y(k) is
corrupted by sensor noise v(k); an "amazing box" calculates a good estimate
x̂(k|k-1) of x(k) from y(0), ..., y(k-1), and the feedback is u = -K x̂(k|k-1).]

4.6 Problem Statement:

    x(k+1) = A x(k) + B w(k)
    z(k)   = C x(k)
    y(k)   = C x(k) + v(k)

[Diagram: process noise w(k) drives the plant {A,B,C}; the estimator receives the
noisy measurement y(k) (sensor noise v(k)) and produces x̂(k|k-1); the feedback is
u = -K x̂(k|k-1).]

Assume also that x_0 is random and Gaussian and that x(k), w(k), v(k) are all
mutually independent for all k.

Find x̂(k|k-1), the optimal estimate of x(k) given y_0, ..., y_{k-1}, such that the
mean squared error

    E[ ||x(k) - x̂(k|k-1)||² ] = minimal

Fact from statistics:  x̂(k|k-1) = E[ x(k) | y_0, ..., y_{k-1} ]

4.6 Kalman Filter


The Kalman filter is an efficient algorithm that computes the new x̂_{i+1|i} (the
linear-least-mean-square estimate) of the system state vector x_{i+1}, given
y_0, ..., y_i, by updating the old estimate x̂_{i|i-1} and the old error variance
p_{i|i-1}.

[Diagram: the Kalman filter (step i) takes the old estimate x̂_{i|i-1}, the old error
variance p_{i|i-1} and the new measurement y_i, and produces the new estimate
x̂_{i+1|i} and the new error variance p_{i+1|i}.]

The Kalman filter produces x̂_{i+1|i} from x̂_{i|i-1} (rather than x̂_{i|i}) because it
tracks the system dynamics. By the time we compute x̂_{i|i} from x̂_{i|i-1}, the
system state has changed from x_i to x_{i+1} = A x_i + B w_i.
4.6 Kalman Filter


The Kalman filter algorithm can be divided into a measurement update and a time
update:

Measurement update (M.U.):

    x̂_{i|i} = x̂_{i|i-1} + p_{i|i-1} C^T (C p_{i|i-1} C^T + V)^{-1} (y_i - C x̂_{i|i-1})
    p_{i|i} = p_{i|i-1} - p_{i|i-1} C^T (C p_{i|i-1} C^T + V)^{-1} C p_{i|i-1}

Time update (T.U.):

    x̂_{i+1|i} = A x̂_{i|i}
    p_{i+1|i} = A p_{i|i} A^T + B W B^T

with initial conditions:

    x̂_{0|-1} = 0,    p_{0|-1} = X_0
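The M.U./T.U. recursion above is only a few lines of code. Here is a scalar random-walk sketch (model and noise values invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar random walk: x_{k+1} = x_k + w_k,  y_k = x_k + v_k
A, B, C = 1.0, 1.0, 1.0
W, V = 0.01, 1.0          # process / measurement noise variances

x_true = 0.0
x_est, p = 0.0, 10.0      # x_hat(0|-1) = 0, p(0|-1) = X0
for k in range(500):
    y = C * x_true + rng.normal(0.0, np.sqrt(V))
    # Measurement update
    gain = p * C / (C * p * C + V)
    x_meas = x_est + gain * (y - C * x_est)
    p_meas = p - gain * C * p
    # Time update
    x_est = A * x_meas
    p = A * p_meas * A + B * W * B
    # Propagate the true state
    x_true = A * x_true + rng.normal(0.0, np.sqrt(W))

print(p)  # ~ 0.1051, the steady-state prediction variance
```

Note that the variance recursion for p does not depend on the measurements at all, which is why it settles to a steady-state value (next slides).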

4.6 Kalman Filter


By plugging the M.U. equations into the T.U. equations one can do both steps at once:

    x̂_{i+1|i} = A x̂_{i|i-1} + L_i (y_i - C x̂_{i|i-1})

where

    L_i = A p_{i|i-1} C^T (C p_{i|i-1} C^T + V)^{-1}

and

    p_{i+1|i} = A p_{i|i-1} A^T + B W B^T
                - A p_{i|i-1} C^T (C p_{i|i-1} C^T + V)^{-1} C p_{i|i-1} A^T

known as the discrete-time Riccati equation.

4.6 Picture of Kalman Filter


[Block diagram: the plant propagates x_i through a delay z^{-1} and A, with process
noise w_i entering the state and sensor noise v_i added to form y_i; the Kalman
filter mirrors the plant structure and is driven by the innovation
e_i = y_i - ŷ_{i|i-1} through the time-varying gain L_i.]
4.6 Picture of Kalman Filter


Plant equations:

    x_{i+1} = A x_i + B u_i
    y_i     = C x_i + v_i

Kalman filter:

    x̂_{i+1|i} = A x̂_{i|i-1} + L_i (y_i - ŷ_{i|i-1})
    ŷ_{i|i-1} = C x̂_{i|i-1}

If v = w = 0, the Kalman filter can estimate the state precisely in a finite number of
steps.

4.6 Kalman Filter


Remarks:
(1) Since y_i = C x_i + v_i and ŷ_{i|i-1} = C x̂_{i|i-1}, we can write the estimator
equation as

    x̂_{i+1|i} = A x̂_{i|i-1} + L_i (C x_i + v_i - C x̂_{i|i-1})
              = (A - L_i C) x̂_{i|i-1} + L_i (C x_i + v_i)

and combine this with the equation for x_{i+1}:

    [ x_{i+1}    ]   [ A       0        ] [ x_i       ]   [ B   0   ] [ w_i ]
    [ x̂_{i+1|i} ] = [ L_i C   A - L_i C ] [ x̂_{i|i-1} ] + [ 0   L_i ] [ v_i ]

    [ z_i        ]   [ C   0 ] [ x_i       ]
    [ ŷ_{i|i-1}  ] = [ 0   C ] [ x̂_{i|i-1} ]

(2) In practice, the Riccati equation reaches steady state in a few steps. People often
run with the steady-state Kalman filter, i.e. L_i = L_ss, where

    L_ss = A p_ss C^T (C p_ss C^T + V)^{-1}
    p_ss = A p_ss A^T + B W B^T - A p_ss C^T (C p_ss C^T + V)^{-1} C p_ss A^T

4.7 LQG Problem


Now we are finally ready to solve the full control problem:

[Diagram: the plant {A,B,C} is driven by u_fb and process noise w_k, and produces
the regulated output z_k and the noisy measurement y_k (sensor noise v_k); the
controller H(z) = {A_c, B_c, C_c, D_c} closes the loop from y_k to u_fb.]

Given:

    x_{k+1} = A x_k + B u_k + B_w w_k
    z_k     = C x_k
    y_k     = C x_k + v_k

    <w_i, w_j> = W δ_ij,   <v_i, v_j> = V δ_ij,   <w_i, v_j> = 0

with w_k, v_k both Gaussian. For Gaussian noise, the Kalman filter gives the absolute
best estimate.
4.7 LQG problem


Separation principle (we won't prove it):
The separation principle states that the LQG optimal controller is obtained by:
(1) Using the Kalman filter to obtain the least-squares optimal estimate of the plant
state, i.e. let x_c(k) = x̂_{k|k-1}.
(2) Feeding back the estimated state with the LQR-optimal state feedback gain:

    u(k) = -K_LQR x_c(k) = -K_LQR x̂_{k|k-1}

i.e. we can treat the problems of optimal feedback and state estimation separately.
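Putting the two pieces together per the separation principle: an LQR gain from the control A.R.E., a steady-state Kalman gain from the dual (estimation) A.R.E., and the feedback u_k = -K x̂_{k|k-1}. All numbers below are illustrative; a real design would use scipy.linalg.solve_discrete_are and the actual plant model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative plant: discretized double integrator with noise (values invented)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
C = np.array([[1.0, 0.0]])
W = np.array([[0.01]])    # process-noise variance (enters through B)
V = np.array([[0.01]])    # measurement-noise variance
Q, R = np.eye(2), np.array([[1.0]])

def dare(A, B, Q, R, iters=2000):
    # Fixed-point iteration of the discrete Riccati recursion.
    S = Q.copy()
    for _ in range(iters):
        G = np.linalg.inv(R + B.T @ S @ B)
        S = A.T @ S @ A + Q - A.T @ S @ B @ G @ B.T @ S @ A
    return S

# (1) LQR gain from the control A.R.E.
S = dare(A, B, Q, R)
K = np.linalg.inv(R + B.T @ S @ B) @ B.T @ S @ A

# (2) Steady-state Kalman gain from the dual (estimation) A.R.E.
P = dare(A.T, C.T, B @ W @ B.T, V)
L = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + V)

# (3) Close the loop with u_k = -K x_hat_{k|k-1}
x = np.array([[1.0], [0.0]])      # true state
xh = np.zeros((2, 1))             # estimate x_hat_{k|k-1}
for k in range(500):
    u = -K @ xh
    y = C @ x + rng.normal(0.0, np.sqrt(V[0, 0]), (1, 1))
    x = A @ x + B @ u + B @ rng.normal(0.0, np.sqrt(W[0, 0]), (1, 1))
    xh = A @ xh + B @ u + L @ (y - C @ xh)

print(float(np.linalg.norm(x)))   # regulated down to the noise floor
```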

4.7 Picture of LQG Regulator


[Block diagram: the plant (delay z^{-1}, A, C) is driven by w_k and the feedback
-K x̂_{k|k-1}, with output y_k corrupted by v_k; the Kalman filter copy of the plant
is driven by the innovation e_k = y_k - ŷ_{k|k-1} through L, and its estimate
x̂_{k|k-1} feeds the LQR gain K.]

4.7 LQG Regulator


Plant:

    x_{k+1} = A x_k + B u_k + B_w w_k
    z_k     = C x_k
    y_k     = C x_k + v_k

LQG controller:

    x̂_{k+1|k} = A x̂_{k|k-1} + B u_k + L (y_k - C x̂_{k|k-1})
    u_k        = -K x̂_{k|k-1}

    K = (R + B^T S B)^{-1} B^T S A
    S = A^T S A + Q - A^T S B (R + B^T S B)^{-1} B^T S A

    L = A P C^T (V + C P C^T)^{-1}
    P = A P A^T + B W B^T - A P C^T (V + C P C^T)^{-1} C P A^T

4.7 Problem Statement (in English)


We want a controller which takes as input noisy measurements, y, and produces as
output a feedback signal, u, which will minimize excursions of the regulated plant
outputs (if there is no pole-zero cancellation, this is equivalent to minimizing state
excursions). We also want to achieve regulation with as little actuator effort, u, as
possible.

Problem statement (mathematically):
Find the controller H(z) = {A_c, B_c, C_c, D_c}:

    H(z) = C_c (zI - A_c)^{-1} B_c + D_c

    x_c(k+1) = A_c x_c(k) + B_c y(k)
    y_c(k)   = C_c x_c(k)

which will minimize the cost

    J_LQG = E[ x_k^T Q x_k + u_k^T R u_k ]
            (limit rms state excursions and rms actuator effort)

where the plant is

    x_{k+1} = A x_k + B u_k + B_w w_k
    z_k     = C x_k
    y_k     = C x_k + v_k

4.7 Problem Statement


Remarks:
(1) Q and R are weighting matrices that allow trading off rms u and rms x.
(2) If Q = ρ C^T C, ρ > 0, then we trade off rms z vs rms u.
(3) In the stochastic LQR case, the only difference is that now we don't have
complete state information; we have only the noisy observations y_i = C x_i + v_i,
i.e. we can't use full state feedback.

Idea: could we use estimated state feedback, i.e. u_k = -K x̂_{k|k-1}?

4.7 Problem Statement


(5) We can let the Q/R ratio vary and we'll obtain a family of LQG controllers. We
can plot rms z vs rms u for each one to get trade-off curves:

[Sketch: achievable region in the (rms u, rms z) plane; the LQG family
(Q/R = 0.01 ... Q/R = 100) traces the boundary; any other controller lands above
the curve.]

So specifying (1) the system model, (2) the noise variances, and (3) the optimality
criterion J_LQG, and plotting the trade-off curve, completely specifies the limit of
performance of the system, i.e. which combinations of (z_rms, u_rms) are achievable
by any controller. A good benchmark curve.