
Topic #6

16.31 Feedback Control Systems

State-Space Systems
• What are state-space models?
• Why should we use them?
• How are they related to the transfer functions used in classical control design
and how do we develop a state-space model?
• What are the basic properties of a state-space model, and how do we analyze
these?

Cite as: Jonathan How, course materials for 16.31 Feedback Control Systems, Fall 2007. MIT
OpenCourseWare (http://ocw.mit.edu), Massachusetts Institute of Technology. Downloaded on
[DD Month YYYY].

SS Introduction
• State-space model: a representation of the dynamics of an Nth-order
system as a first-order differential equation in an N-vector, which is
called the state.
– Convert the Nth-order differential equation that governs the
dynamics into N first-order differential equations

• Classic example: second-order mass-spring-damper system


mp̈ + cṗ + kp = F
– Let x1 = p, then x2 = ṗ = ẋ1, and
ẋ2 = p̈ = (F − cṗ − kp)/m = (F − cx2 − kx1)/m
so that
\[
\begin{bmatrix} \dot{p} \\ \ddot{p} \end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ -k/m & -c/m \end{bmatrix}
  \begin{bmatrix} p \\ \dot{p} \end{bmatrix}
+ \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u
\]
– Let u = F and introduce the state
\[
x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
  = \begin{bmatrix} p \\ \dot{p} \end{bmatrix}
\quad\Rightarrow\quad \dot{x} = Ax + Bu
\]
– If the measured output of the system is the position, then we
have that
\[
y = p
  = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} p \\ \dot{p} \end{bmatrix}
  = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
  = Cx
\]
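For illustration, this model could be entered in MATLAB as follows (a minimal
sketch only – the numerical values m = 1, c = 0.5, k = 3 below are assumed, not
taken from these notes):

% Sketch: mass-spring-damper in state-space form
% (the m, c, k values below are assumed for illustration)
m = 1; c = 0.5; k = 3;
A = [0 1; -k/m -c/m];      % dynamics matrix
B = [0; 1/m];              % input matrix (u = F)
C = [1 0];                 % output matrix (y = p, the position)
D = 0;                     % no direct feedthrough
sys = ss(A,B,C,D);         % LTI state-space object (Control System Toolbox)
step(sys)                  % step response to a unit force input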


• The most general continuous-time linear dynamical system has the form


ẋ(t) = A(t)x(t) + B(t)u(t)
y(t) = C(t)x(t) + D(t)u(t)
where:
– t ∈ R denotes time
– x(t) ∈ Rn is the state (vector)
– u(t) ∈ Rm is the input or control
– y(t) ∈ Rp is the output

– A(t) ∈ Rn×n is the dynamics matrix


– B(t) ∈ Rn×m is the input matrix
– C(t) ∈ Rp×n is the output or sensor matrix
– D(t) ∈ Rp×m is the feedthrough matrix

• Note that the plant dynamics can be time-varying.


• Also note that this is a multi-input / multi-output (MIMO) system.

• We will typically deal with the time-invariant case


⇒ Linear Time-Invariant (LTI) state dynamics
ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
so that now A, B, C, D are constant and do not depend on t.
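As a small sketch of these dimensions (the n = 3, m = 2, p = 2 system below is
invented purely for illustration), a MIMO LTI model can be built and simulated
in MATLAB as:

% Sketch: a MIMO LTI model with n = 3 states, m = 2 inputs, p = 2 outputs
% (all numerical values below are assumed for illustration)
A = [0 1 0; 0 0 1; -2 -3 -1];   % n x n dynamics matrix
B = [0 0; 1 0; 0 1];            % n x m input matrix
C = [1 0 0; 0 1 0];             % p x n output matrix
D = zeros(2,2);                 % p x m feedthrough matrix
sys = ss(A,B,C,D);
t = 0:0.01:10;
u = [ones(size(t)); sin(t)]';   % one column per input for lsim
[y,t,x] = lsim(sys,u,t);        % y: outputs, x: state trajectory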


Basic Definitions
• Linearity – What is a linear dynamical system? A system G is
linear with respect to its inputs and output
u(t) → G(s) → y(t)
iff superposition holds:
G(α1u1 + α2u2) = α1Gu1 + α2Gu2
So if y1 is the response of G to u1 (y1 = Gu1), and y2 is the
response of G to u2 (y2 = Gu2), then the response to α1u1 +α2u2
is α1y1 + α2y2
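A minimal numerical check of superposition (the system matrices and test
inputs below are assumed for illustration only):

% Sketch: verify superposition numerically for an LTI model
sys = ss([0 1; -3 -0.5],[0; 1],[1 0],0);   % example system (values assumed)
t  = 0:0.01:10;
u1 = sin(t)';  u2 = ones(size(t))';        % two test inputs
a1 = 2;  a2 = -0.5;
y1  = lsim(sys,u1,t);
y2  = lsim(sys,u2,t);
y12 = lsim(sys,a1*u1 + a2*u2,t);
max(abs(y12 - (a1*y1 + a2*y2)))            % ~0, up to numerical error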

• A system is said to be time-invariant if the relationship between
the input and output is independent of time. So if the response to
u(t) is y(t), then the response to u(t − t0) is y(t − t0)

• x(t) is called the state of the system at t because:


– Future output depends only on current state and future input
– Future output depends on past input only through current state
– State summarizes effect of past inputs on future output – like
the memory of the system

• Example: Rechargeable flashlight – the state is the current state
of charge of the battery. If you know that state, then you do not
need to know how that level of charge was achieved (assuming a
perfect battery) to predict the future performance of the flashlight.
– But to consider all nonlinear effects, you might also need to
know how many cycles the battery has gone through


Creating State-Space Models


• Most easily created from the Nth-order differential equations that
describe the dynamics
– This was the approach used above.
– The only issue is which set of states to use – there are many choices.

• Can be developed from a transfer function model as well
– Much more on this later
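As a preview sketch (the transfer function coefficients below are made up for
illustration), MATLAB can produce one such state-space realization directly:

% Sketch: one possible state-space realization of a transfer function
% G(s) = 1/(s^2 + 0.5s + 3)   (coefficients assumed for illustration)
num = 1;  den = [1 0.5 3];
[A,B,C,D] = tf2ss(num,den);   % one companion-form realization; many
                              % equivalent state choices exist
% equivalently:  sys = ss(tf(num,den));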

• The problem is that we have restricted ourselves here to linear
state-space models, and almost all real-life systems are nonlinear.
– Can develop linear models from nonlinear system dynamics


Linearization
• Often have a nonlinear set of dynamics given by
ẋ = f (x, u)
where x is once again the state vector, u is the vector of inputs, and
f (·, ·) is a nonlinear vector function that describes the dynamics
• Example: simple spring. With a mass at the end of a linear spring
(rate k) we have the dynamics
mẍ = −kx
but with a “leaf spring” as is used on car suspensions, we have a
nonlinear spring – the more it deflects, the stiffer it gets. A good
model now is
mẍ = −(k1x + k2x³)
which is a “cubic spring”.
– Restoring force depends on the deflection x in a nonlinear way.

Figure 1: Response to linear (k = 3) and nonlinear (k1 = k, k2 = 0.25)
springs (code at the end)


• Typically assume that the system is operating about some nominal
state solution x0(t) (possibly requires a nominal input u0(t))
– Then write the actual state as x(t) = x0(t) + δx(t)
and the actual inputs as u(t) = u0(t) + δu(t)
– The “δ” is included to denote the fact that we expect the
variations about the nominal to be “small”

• Can then develop the linearized equations by using the Taylor series
expansion of f (·, ·) about x0(t) and u0(t).

• Recall the vector equation ẋ = f (x, u), each equation of which
ẋi = fi(x, u)
can be expanded as
\[
\frac{d}{dt}\left(x_i^0 + \delta x_i\right)
= f_i(x^0 + \delta x,\, u^0 + \delta u)
\approx f_i(x^0, u^0)
+ \left.\frac{\partial f_i}{\partial x}\right|_0 \delta x
+ \left.\frac{\partial f_i}{\partial u}\right|_0 \delta u
\]
where
\[
\frac{\partial f_i}{\partial x}
= \begin{bmatrix}
\frac{\partial f_i}{\partial x_1} & \cdots & \frac{\partial f_i}{\partial x_n}
\end{bmatrix}
\]
and ·|0 means that we should evaluate the function at the nominal
values of x0 and u0.

• The meaning of “small” deviations is now clear – the variations in
δx and δu must be small enough that we can ignore the higher-order
terms in the Taylor expansion of f (x, u).


• Since \( \frac{d}{dt} x_i^0 = f_i(x^0, u^0) \), we thus have that
\[
\frac{d}{dt}(\delta x_i) \approx
\left.\frac{\partial f_i}{\partial x}\right|_0 \delta x
+ \left.\frac{\partial f_i}{\partial u}\right|_0 \delta u
\]
• Combining for all n state equations gives (note that we also set
“≈” → “=”) that
\[
\frac{d}{dt}\,\delta x =
\begin{bmatrix}
\left.\frac{\partial f_1}{\partial x}\right|_0 \\
\left.\frac{\partial f_2}{\partial x}\right|_0 \\
\vdots \\
\left.\frac{\partial f_n}{\partial x}\right|_0
\end{bmatrix} \delta x
+
\begin{bmatrix}
\left.\frac{\partial f_1}{\partial u}\right|_0 \\
\left.\frac{\partial f_2}{\partial u}\right|_0 \\
\vdots \\
\left.\frac{\partial f_n}{\partial u}\right|_0
\end{bmatrix} \delta u
= A(t)\,\delta x + B(t)\,\delta u
\]
where
\[
A(t) \equiv \left.
\begin{bmatrix}
\frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\
\frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\
\vdots & & & \vdots \\
\frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n}
\end{bmatrix}\right|_0
\quad\text{and}\quad
B(t) \equiv \left.
\begin{bmatrix}
\frac{\partial f_1}{\partial u_1} & \frac{\partial f_1}{\partial u_2} & \cdots & \frac{\partial f_1}{\partial u_m} \\
\frac{\partial f_2}{\partial u_1} & \frac{\partial f_2}{\partial u_2} & \cdots & \frac{\partial f_2}{\partial u_m} \\
\vdots & & & \vdots \\
\frac{\partial f_n}{\partial u_1} & \frac{\partial f_n}{\partial u_2} & \cdots & \frac{\partial f_n}{\partial u_m}
\end{bmatrix}\right|_0
\]
• Similarly, if the nonlinear measurement equation is y = g(x, u)
and y(t) = y0 + δy, then
\[
\delta y =
\begin{bmatrix}
\left.\frac{\partial g_1}{\partial x}\right|_0 \\
\vdots \\
\left.\frac{\partial g_p}{\partial x}\right|_0
\end{bmatrix} \delta x
+
\begin{bmatrix}
\left.\frac{\partial g_1}{\partial u}\right|_0 \\
\vdots \\
\left.\frac{\partial g_p}{\partial u}\right|_0
\end{bmatrix} \delta u
= C(t)\,\delta x + D(t)\,\delta u
\]
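A hedged sketch of forming these Jacobians with the Symbolic Math Toolbox
(the particular f, g, and nominal point below are assumptions chosen for
illustration only):

% Sketch: symbolic Jacobians for a nonlinear model xdot = f(x,u), y = g(x,u)
% (the example f, g, and nominal point below are assumed for illustration)
syms x1 x2 u real
f = [x2; -3*x1 - 0.25*x1^3 + u];     % example dynamics (cubic spring + force)
g = x1;                              % example measurement: position only
x = [x1; x2];
x0 = [0; 0];  u0 = 0;                % nominal (equilibrium) point
A = double(subs(jacobian(f,x), [x; u], [x0; u0]))
B = double(subs(jacobian(f,u), [x; u], [x0; u0]))
C = double(subs(jacobian(g,x), [x; u], [x0; u0]))
D = double(subs(jacobian(g,u), [x; u], [x0; u0]))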

• Typically think of these nominal conditions x0, u0 as “set points”
or “operating points” for the nonlinear system. The equations
\[
\frac{d}{dt}\,\delta x = A(t)\,\delta x + B(t)\,\delta u
\qquad
\delta y = C(t)\,\delta x + D(t)\,\delta u
\]
then give us a linearized model of the system dynamic behavior
about these operating/set points.
– Note that if x0, u0 are constants, then the partial derivatives in
the expressions for A–D are all constant → LTI linearized
model.
– Typically drop the “δ” as they are rather cumbersome, and
(abusing notation) we write the state equations as:
ẋ(t) = A(t)x(t) + B(t)u(t)
y(t) = C(t)x(t) + D(t)u(t)
which is of the same form as the previous linear models

• One particularly important set of operating points is the set of
equilibrium points of the system, defined as the state and
control-input combinations for which
\[
\dot x = f(x^0, u^0) \equiv 0
\]
which provides n algebraic equations for finding the equilibrium points.
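One way to solve these n algebraic equations numerically is sketched below
(assumes the Optimization Toolbox; the dynamics handle and numbers are
illustrative only):

% Sketch: find an equilibrium (x0, for a fixed u0) with fsolve
% (f_handle and the numbers below are assumptions for illustration)
u0 = 0;
f_handle = @(x) [x(2); x(1) - 0.5*x(1)^3 + u0];  % example f(x,u0): k1 = 1, k2 = -1/2
xguess = [1; 0];                                  % initial guess near an equilibrium
xeq = fsolve(f_handle, xguess)                    % converges to an equilibrium, e.g. [sqrt(2); 0]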


Linearization Example
• Consider the nonlinear spring with (set m = 1)
ÿ = k1y + k2y³
which gives us the nonlinear model (x1 = y and x2 = ẏ)
\[
\frac{d}{dt}\begin{bmatrix} y \\ \dot y \end{bmatrix}
= \begin{bmatrix} \dot y \\ k_1 y + k_2 y^3 \end{bmatrix}
\quad\Rightarrow\quad \dot x = f(x)
\]

• Find the equilibrium points and then make a state space model

• For the equilibrium points, we must solve
\[
f(x) = \begin{bmatrix} \dot y \\ k_1 y + k_2 y^3 \end{bmatrix} = 0
\]
which gives
\[
\dot y^0 = 0
\quad\text{and}\quad
k_1 y^0 + k_2 (y^0)^3 = 0
\]

– The second condition corresponds to y0 = 0 or y0 = ±√(−k1/k2),
which is real only if k1 and k2 have opposite signs.
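A quick numerical check (a sketch using the values k1 = 1, k2 = −1/2 that are
assumed later in this example):

% Sketch: equilibria of the cubic spring as polynomial roots
k1 = 1;  k2 = -0.5;            % values used later in this example
yeq = roots([k2 0 k1 0])       % returns 0 and +/-sqrt(2) = +/-sqrt(-k1/k2)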

• For the state-space model,
\[
A = \begin{bmatrix}
\frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} \\
\frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2}
\end{bmatrix}
= \left.\begin{bmatrix} 0 & 1 \\ k_1 + 3k_2 y^2 & 0 \end{bmatrix}\right|_0
= \begin{bmatrix} 0 & 1 \\ k_1 + 3k_2 (y^0)^2 & 0 \end{bmatrix}
\]
and the linearized model is δẋ = Aδx

• For the equilibrium point y = 0, ẏ = 0
\[
A_0 = \begin{bmatrix} 0 & 1 \\ k_1 & 0 \end{bmatrix}
\]
which are the standard dynamics of a system with just a linear
spring of stiffness k1
– Stable motion about y = 0 if k1 < 0
• Assume that k1 = 1, k2 = −1/2; then we should get an equilibrium
point at ẏ = 0, y = ±√2, and since k1 + k2(y0)² = 0 there,
\[
A_1 = \begin{bmatrix} 0 & 1 \\ -2k_1 & 0 \end{bmatrix}
    = \begin{bmatrix} 0 & 1 \\ -2 & 0 \end{bmatrix}
\]
which are the dynamics of a stable oscillator about the equilibrium point

Figure 2: Nonlinear response (k1 = 1, k2 = −0.5): time histories of y
and dy/dt (left), and the oscillation about the equilibrium point in the
(y, dy/dt) plane (right).
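A minimal numerical check of these two linearizations (using the A0 and A1
above with k1 = 1):

% Sketch: eigenvalues of the two linearizations (k1 = 1, k2 = -1/2)
k1 = 1;
A0 = [0 1;  k1   0];   % linearization about y = 0
A1 = [0 1; -2*k1 0];   % linearization about y = +/-sqrt(2)
eig(A0)                % +/-1: y = 0 is unstable (saddle) for k1 > 0
eig(A1)                % +/-j*sqrt(2): undamped oscillation, as in Figure 2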


Example: Aircraft Dynamics


• Assumptions:
1. Earth is an inertial reference frame
2. A/C is a rigid body
3. Body frame B fixed to the aircraft, with unit vectors \( \vec i, \vec j, \vec k \)

• The basic dynamics are:
\[
\vec F = m\,\dot{\vec v}_c^{\,I}
\qquad\text{and}\qquad
\vec T = \dot{\vec H}^{I}
\]
\[
\Rightarrow\quad \frac{1}{m}\vec F = \dot{\vec v}_c^{\,B} + {}^{BI}\vec\omega \times \vec v_c
\qquad\text{(Transport Thm.)}
\]
\[
\Rightarrow\quad \vec T = \dot{\vec H}^{B} + {}^{BI}\vec\omega \times \vec H
\]
• Instantaneous mapping of \( \vec v_c \) and \( {}^{BI}\vec\omega \) into the body frame is given by
\[
{}^{BI}\vec\omega = P\vec i + Q\vec j + R\vec k
\qquad
\vec v_c = U\vec i + V\vec j + W\vec k
\]
\[
\Rightarrow\quad
{}^{BI}\omega_B = \begin{bmatrix} P \\ Q \\ R \end{bmatrix}
\qquad
(v_c)_B = \begin{bmatrix} U \\ V \\ W \end{bmatrix}
\]


• The overall equations of motion are then:
\[
\frac{1}{m}\vec F = \dot{\vec v}_c^{\,B} + {}^{BI}\vec\omega \times \vec v_c
\]
\[
\Rightarrow\quad
\frac{1}{m}\begin{bmatrix} F_x \\ F_y \\ F_z \end{bmatrix}
= \begin{bmatrix} \dot U \\ \dot V \\ \dot W \end{bmatrix}
+ \begin{bmatrix} 0 & -R & Q \\ R & 0 & -P \\ -Q & P & 0 \end{bmatrix}
  \begin{bmatrix} U \\ V \\ W \end{bmatrix}
= \begin{bmatrix} \dot U + QW - RV \\ \dot V + RU - PW \\ \dot W + PV - QU \end{bmatrix}
\]

• These are clearly nonlinear – need to linearize about the equilibrium states.

• To find suitable equilibrium conditions, must solve
\[
\frac{1}{m}\begin{bmatrix} F_x \\ F_y \\ F_z \end{bmatrix}
- \begin{bmatrix} QW - RV \\ RU - PW \\ PV - QU \end{bmatrix} = 0
\]

• Assume steady state flight conditions with Ṗ = Q̇ = Ṙ = 0


• Define the trim angular rates, velocities, and forces
\[
{}^{BI}\omega_B^{o} = \begin{bmatrix} P \\ Q \\ R \end{bmatrix}
\qquad
(v_c)_B^{o} = \begin{bmatrix} U_o \\ 0 \\ 0 \end{bmatrix}
\qquad
F_B^{o} = \begin{bmatrix} F_x^{o} \\ F_y^{o} \\ F_z^{o} \end{bmatrix}
\]
that are associated with the flight condition (they define the type
of equilibrium motion that we analyze about).
Note:
– W0 = 0 since we are using the stability axes, and

– V0 = 0 because we are assuming symmetric flight

• Can now linearize the equations about this flight mode. To proceed, define
Velocities U0, U = U0 + u ⇒ U̇ = u̇
W0 = 0, W = w ⇒ Ẇ = ẇ
V0 = 0, V =v ⇒ V̇ = v̇

Angular Rates P0 = 0, P =p ⇒ Ṗ = ṗ
Q0 = 0, Q=q ⇒ Q̇ = q̇
R0 = 0, R=r ⇒ Ṙ = ṙ

Angles Θ0, Θ = Θ0 + θ ⇒ Θ̇ = θ̇
Φ0 = 0, Φ=φ ⇒ Φ̇ = φ̇
Ψ0 = 0, Ψ=ψ ⇒ Ψ̇ = ψ̇


• Linearization for symmetric flight:
U = U0 + u,  V0 = W0 = 0,  P0 = Q0 = R0 = 0
Note that the forces and moments are also perturbed.

\[
\frac{1}{m}\left(F_x^{0} + \Delta F_x\right) = \dot U + QW - RV \approx \dot u + qw - rv \approx \dot u
\]
\[
\frac{1}{m}\left(F_y^{0} + \Delta F_y\right) = \dot V + RU - PW \approx \dot v + r(U_0 + u) - pw \approx \dot v + rU_0
\]
\[
\frac{1}{m}\left(F_z^{0} + \Delta F_z\right) = \dot W + PV - QU \approx \dot w + pv - q(U_0 + u) \approx \dot w - qU_0
\]
\[
\Rightarrow\quad
\frac{1}{m}\begin{bmatrix} \Delta F_x \\ \Delta F_y \\ \Delta F_z \end{bmatrix}
= \begin{bmatrix} \dot u \\ \dot v + rU_0 \\ \dot w - qU_0 \end{bmatrix}
\]

• This gives the linearized dynamics for the aircraft motion about
the steady-state flight condition.
– Need to analyze the perturbations to the forces and moments
to fully understand the linearized dynamics – take 16.61
– Can do the same thing for the rotational dynamics.
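A hedged symbolic check of this result (a sketch assuming the Symbolic Math
Toolbox; the variable names below are chosen here for illustration):

% Sketch: check that linearizing the cross-product terms about the
% symmetric-flight trim leaves [0; U0*r; -U0*q]
syms U V W P Q R U0 u v w p q r real
g  = [Q*W - R*V; R*U - P*W; P*V - Q*U];   % BI_omega x v_c in body axes
X  = [U V W P Q R];
X0 = [U0 0 0 0 0 0];                      % trim: steady, symmetric, stability axes
J  = subs(jacobian(g, X), X, X0);         % Jacobian evaluated at the trim condition
simplify(J * [u; v; w; p; q; r])          % expected result: [0; U0*r; -U0*q]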


% save this entire code as plant.m
%
function [xdot] = plant(t,x)
global n
xdot(1) = x(2);                         % x(1) = y, x(2) = dy/dt
xdot(2) = -3*x(1);                      % linear spring, k = 3
if n==3
    xdot(2) = xdot(2) - 0.25*(x(1))^3;  % add the cubic (nonlinear) term
end
xdot = xdot';                           % ode23 expects a column vector
return

% then use this part of the code in MATLAB
% to call plant.m
global n
n=3;                                % nonlinear
x0 = [-1 2];                        % initial condition
[T,x]=ode23('plant', [0 12], x0);   % simulate NL equations for 12 sec
n=1;                                % linear
[T1,x1]=ode23('plant', [0 12], x0);

subplot(211)
plot(T,x(:,1),T1,x1(:,1),'--');
legend('Nonlinear','Linear')
ylabel('X')
xlabel('Time')
subplot(212)
plot(T,x(:,2),T1,x1(:,2),'--');
legend('Nonlinear','Linear')
ylabel('V')
xlabel('Time')
