
4 State-Space Solutions and Realizations
4.2 Solutions of LTI State Equations
Consider
$$\dot{x}(t) = A x(t) + B u(t)$$
$$y(t) = C x(t) + D u(t)$$
Premultiplying $e^{-At}$ on both sides of the first equation yields
$$e^{-At}\dot{x}(t) - e^{-At}A x(t) = e^{-At}B u(t)$$
which implies
$$\frac{d}{dt}\left(e^{-At}x(t)\right) = e^{-At}B u(t)$$

Its integration from 0 to t yields
$$e^{-A\tau}x(\tau)\Big|_{\tau=0}^{t} = \int_0^t e^{-A\tau} B u(\tau)\, d\tau$$
Thus, we have
$$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)} B u(\tau)\, d\tau$$

The final solution:
$$y(t) = C e^{At}x(0) + C\int_0^t e^{A(t-\tau)} B u(\tau)\, d\tau + D u(t)$$
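This variation-of-constants formula can be checked numerically. A minimal sketch (the matrices, initial state, and unit-step input below are arbitrary assumed data, not from the text):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

# Assumed example system, initial state, and unit-step input
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
u = lambda t: 1.0

t_end = 1.5
# Closed form: x(t) = e^{At} x(0) + int_0^t e^{A(t-tau)} B u(tau) dtau
taus = np.linspace(0.0, t_end, 2001)
integrand = np.stack([expm(A * (t_end - tau)) @ B[:, 0] * u(tau) for tau in taus])
x_closed = expm(A * t_end) @ x0 + trapezoid(integrand, taus, axis=0)

# Direct integration of xdot = Ax + Bu for comparison
sol = solve_ivp(lambda t, x: A @ x + B[:, 0] * u(t), (0.0, t_end), x0,
                rtol=1e-10, atol=1e-12)
assert np.allclose(x_closed, sol.y[:, -1], atol=1e-4)
```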
The solutions can also be computed using the Laplace transform:
$$\hat{x}(s) = (sI - A)^{-1}\left[x(0) + B\hat{u}(s)\right]$$
$$\hat{y}(s) = C(sI - A)^{-1}\left[x(0) + B\hat{u}(s)\right] + D\hat{u}(s)$$

Three methods of computing $e^{At}$:
1. Using Theorem 3.5: find a polynomial $h(\lambda)$ that equals $f(\lambda) = e^{\lambda t}$ on the spectrum of $A$, i.e., $f(\lambda_i) = h(\lambda_i)$; then $e^{At} = h(A)$.
2. Using the Jordan form of $A$: let $A = Q\hat{A}Q^{-1}$; then $e^{At} = Q e^{\hat{A}t} Q^{-1}$.
3. Using the infinite power series
$$e^{At} = \sum_{k=0}^{\infty} \frac{1}{k!}\, t^k A^k$$
See Examples 4.1 and 4.2.
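The methods agree on any example; a sketch comparing the truncated power series and the eigendecomposition (a diagonalizable stand-in for the Jordan form) against `scipy.linalg.expm`, on an assumed test matrix:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

# Assumed test matrix (eigenvalues -1, -2, so it is diagonalizable) and time
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t = 0.7

# Method 3: truncated power series e^{At} = sum_k (t^k / k!) A^k
S = np.zeros_like(A)
for k in range(30):
    S += (t ** k / factorial(k)) * np.linalg.matrix_power(A, k)

# Method 2 (diagonalizable case): A = Q Lambda Q^{-1} => e^{At} = Q e^{Lambda t} Q^{-1}
lam, Q = np.linalg.eig(A)
E = (Q @ np.diag(np.exp(lam * t)) @ np.linalg.inv(Q)).real

assert np.allclose(S, expm(A * t))
assert np.allclose(E, expm(A * t))
```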

4.2.1 Discretization
Because
$$\dot{x}(t) = \lim_{T \to 0} \frac{x(t+T) - x(t)}{T}$$
we can approximate an LTI system as
$$x(t+T) \approx x(t) + A x(t)\,T + B u(t)\,T$$
The corresponding discrete-time state-space equation is
$$x((k+1)T) = (I + TA)\,x(kT) + TB\,u(kT)$$
$$y(kT) = C x(kT) + D u(kT)$$

Evaluating the continuous-time solution at t = kT and t = (k+1)T yields
$$x[k] := x(kT) = e^{AkT}x(0) + \int_0^{kT} e^{A(kT-\tau)} B u(\tau)\, d\tau$$
and
$$x[k+1] := x((k+1)T) = e^{A(k+1)T}x(0) + \int_0^{(k+1)T} e^{A((k+1)T-\tau)} B u(\tau)\, d\tau$$
Assuming u is piecewise constant over each sampling period, i.e., $u(t) = u[k]$ for $kT \le t < (k+1)T$, and letting $\alpha = kT + T - \tau$, we have
$$x[k+1] = e^{AT}x[k] + \left(\int_0^T e^{A\alpha}\, d\alpha\right) B u[k]$$

The continuous-time state equation becomes
$$x[k+1] = A_d x[k] + B_d u[k]$$
$$y[k] = C_d x[k] + D_d u[k]$$
with
$$A_d = e^{AT} \qquad B_d = \left(\int_0^T e^{A\tau}\, d\tau\right)B \qquad C_d = C \qquad D_d = D$$
If A is nonsingular,
$$B_d = A^{-1}(A_d - I)B$$
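A sketch of the exact discretization versus the first-order Euler approximation $(I + TA,\, TB)$; the system and sampling period are assumed example data:

```python
import numpy as np
from scipy.linalg import expm

# Assumed example: A nonsingular, small sampling period T
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
T = 0.1

Ad = expm(A * T)
# Since A is nonsingular: Bd = A^{-1}(Ad - I)B
Bd = np.linalg.inv(A) @ (Ad - np.eye(2)) @ B

# The Euler approximation agrees with the exact discretization to first order in T
assert np.allclose(Ad, np.eye(2) + T * A, atol=0.05)
assert np.allclose(Bd, T * B, atol=0.05)
```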
4.2.2 Solution of Discrete-Time Equations
Consider
$$x[k+1] = A x[k] + B u[k]$$
$$y[k] = C x[k] + D u[k]$$
Compute
$$x[1] = A x[0] + B u[0]$$
$$x[2] = A x[1] + B u[1] = A^2 x[0] + A B u[0] + B u[1]$$

Proceeding forward, for k > 0,
$$x[k] = A^k x[0] + \sum_{m=0}^{k-1} A^{k-1-m} B u[m]$$
$$y[k] = C A^k x[0] + \sum_{m=0}^{k-1} C A^{k-1-m} B u[m] + D u[k]$$
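A quick check that the closed-form sum matches the recursion (assumed example data):

```python
import numpy as np

# Assumed example system, initial state, and input sequence
A = np.array([[0.5, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, -1.0])
u = [1.0, 0.5, -0.25, 0.0, 2.0]

# Recursion x[k+1] = A x[k] + B u[k]
x = x0.copy()
for m in range(5):
    x = A @ x + B[:, 0] * u[m]

# Closed form x[k] = A^k x0 + sum_{m=0}^{k-1} A^{k-1-m} B u[m]
k = 5
x_closed = np.linalg.matrix_power(A, k) @ x0 + sum(
    np.linalg.matrix_power(A, k - 1 - m) @ B[:, 0] * u[m] for m in range(k)
)
assert np.allclose(x, x_closed)
```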

Key computation:
$$A^k = Q \hat{A}^k Q^{-1}$$
Suppose the Jordan form of A is
$$\hat{A} = \begin{bmatrix} \lambda_1 & 1 & 0 & 0 & 0 \\ 0 & \lambda_1 & 1 & 0 & 0 \\ 0 & 0 & \lambda_1 & 0 & 0 \\ 0 & 0 & 0 & \lambda_1 & 0 \\ 0 & 0 & 0 & 0 & \lambda_2 \end{bmatrix}$$
Then
$$A^k = Q \begin{bmatrix} \lambda_1^k & k\lambda_1^{k-1} & k(k-1)\lambda_1^{k-2}/2 & 0 & 0 \\ 0 & \lambda_1^k & k\lambda_1^{k-1} & 0 & 0 \\ 0 & 0 & \lambda_1^k & 0 & 0 \\ 0 & 0 & 0 & \lambda_1^k & 0 \\ 0 & 0 & 0 & 0 & \lambda_2^k \end{bmatrix} Q^{-1}$$
4.3 Equivalent State Equations


Definition 4.1 Let P be an $n \times n$ real nonsingular matrix and let $\bar{x} = Px$. Then the state equation
$$\dot{\bar{x}}(t) = \bar{A}\bar{x}(t) + \bar{B}u(t)$$
$$y(t) = \bar{C}\bar{x}(t) + \bar{D}u(t)$$
where
$$\bar{A} = PAP^{-1} \qquad \bar{B} = PB \qquad \bar{C} = CP^{-1} \qquad \bar{D} = D$$
is said to be equivalent to (4.24), and $\bar{x} = Px$ is called an equivalence transformation.

Equivalent state equations have the same characteristic polynomial and, consequently, the same set of eigenvalues and the same transfer matrix.

Theorem 4.1 Two LTI state equations $\{A, B, C, D\}$ and $\{\bar{A}, \bar{B}, \bar{C}, \bar{D}\}$ are zero-state equivalent, or have the same transfer matrix, if and only if $D = \bar{D}$ and
$$C A^m B = \bar{C}\bar{A}^m\bar{B}, \qquad m = 0, 1, 2, \ldots$$
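A numerical illustration of Theorem 4.1 and the invariance of the characteristic polynomial under an equivalence transformation (random assumed data; a random P is nonsingular with probability one):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 1))
C = rng.standard_normal((1, 3))
P = rng.standard_normal((3, 3))  # equivalence transformation (generically nonsingular)

Abar = P @ A @ np.linalg.inv(P)
Bbar = P @ B
Cbar = C @ np.linalg.inv(P)

# Same characteristic polynomial and same Markov parameters C A^m B
assert np.allclose(np.poly(A), np.poly(Abar))
for m in range(6):
    assert np.allclose(C @ np.linalg.matrix_power(A, m) @ B,
                       Cbar @ np.linalg.matrix_power(Abar, m) @ Bbar)
```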

4.3.1 Canonical Forms


Let $\lambda_1$, $\lambda_2$, $\alpha + j\beta$, and $\alpha - j\beta$ be the eigenvalues and $q_1$, $q_2$, $q_3$, and $q_4$ be the corresponding eigenvectors. Define $Q = [q_1\ q_2\ q_3\ q_4]$. Then we have
$$J := \begin{bmatrix} \lambda_1 & 0 & 0 & 0 \\ 0 & \lambda_2 & 0 & 0 \\ 0 & 0 & \alpha + j\beta & 0 \\ 0 & 0 & 0 & \alpha - j\beta \end{bmatrix} = Q^{-1}AQ$$
The modal form of A can be obtained with
$$\bar{Q} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0.5 & -0.5j \\ 0 & 0 & 0.5 & 0.5j \end{bmatrix}$$
which gives
$$\bar{Q}^{-1} J \bar{Q} = \begin{bmatrix} \lambda_1 & 0 & 0 & 0 \\ 0 & \lambda_2 & 0 & 0 \\ 0 & 0 & \alpha & \beta \\ 0 & 0 & -\beta & \alpha \end{bmatrix} =: \bar{A}$$

The two transformations (Jordan form and modal form) can be combined into one:
$$P^{-1} = Q\bar{Q} = [q_1\ q_2\ q_3\ q_4] \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0.5 & -0.5j \\ 0 & 0 & 0.5 & 0.5j \end{bmatrix} = [q_1\ q_2\ \mathrm{Re}(q_3)\ \mathrm{Im}(q_3)]$$
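The construction $[\mathrm{Re}(q_3)\ \mathrm{Im}(q_3)]$ can be sketched for a single complex pair (the 2×2 matrix below, with eigenvalues $\alpha \pm j\beta$, is an assumed example):

```python
import numpy as np

# Assumed example with eigenvalues 1 +/- 2j
A = np.array([[1.0, 2.0], [-2.0, 1.0]])
lam, V = np.linalg.eig(A)
q = V[:, 0]                                # eigenvector for lam[0]
P_inv = np.column_stack([q.real, q.imag])  # real basis [Re(q) Im(q)]
A_modal = np.linalg.inv(P_inv) @ A @ P_inv

# In this basis A takes the modal form [[alpha, beta], [-beta, alpha]]
alpha, beta = lam[0].real, lam[0].imag
assert np.allclose(A_modal, [[alpha, beta], [-beta, alpha]])
```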

The modal form of another example, with eigenvalues $1$, $1 \pm j$, and $2 \pm 2j$:
$$\bar{A} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 2 & 2 \\ 0 & 0 & 0 & -2 & 2 \end{bmatrix}$$
Its similarity transformation is
$$P^{-1} = [q_1\ \mathrm{Re}(q_2)\ \mathrm{Im}(q_2)\ \mathrm{Re}(q_4)\ \mathrm{Im}(q_4)]$$

4.4 Realizations
The realization problem: given the input-output description of an LTI system
$$\hat{y}(s) = \hat{G}(s)\hat{u}(s),$$
find its state-space equation
$$\dot{x}(t) = A x(t) + B u(t)$$
$$y(t) = C x(t) + D u(t)$$

A transfer matrix $\hat{G}(s)$ is said to be realizable if there exists a finite-dimensional state equation, or simply $\{A, B, C, D\}$, such that
$$\hat{G}(s) = C(sI - A)^{-1}B + D$$
Theorem 4.2 A transfer matrix $\hat{G}(s)$ is realizable if and only if $\hat{G}(s)$ is a proper rational matrix.
$$\hat{G}(s) = \hat{G}(\infty) + \hat{G}_{sp}(s) = D + C(sI - A)^{-1}B = D + \frac{1}{\det(sI - A)}\, C[\mathrm{Adj}(sI - A)]B$$
Since $C(sI - A)^{-1}B$ is strictly proper, $C(sI - A)^{-1}B + D$ is proper (and biproper if D is a nonzero matrix).
Let $d(s) = s^r + \alpha_1 s^{r-1} + \cdots + \alpha_{r-1}s + \alpha_r$ be the least common denominator of all entries of $\hat{G}_{sp}(s)$. Then $\hat{G}_{sp}(s)$ can be expressed as
$$\hat{G}_{sp}(s) = \frac{1}{d(s)}[N(s)] = \frac{1}{d(s)}\left[N_1 s^{r-1} + N_2 s^{r-2} + \cdots + N_{r-1}s + N_r\right]$$
We claim that the set of equations
$$\dot{x} = \begin{bmatrix} -\alpha_1 I_p & -\alpha_2 I_p & \cdots & -\alpha_{r-1} I_p & -\alpha_r I_p \\ I_p & 0 & \cdots & 0 & 0 \\ 0 & I_p & \cdots & 0 & 0 \\ \vdots & & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & I_p & 0 \end{bmatrix} x + \begin{bmatrix} I_p \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} u$$
$$y = [N_1\ N_2\ \cdots\ N_{r-1}\ N_r]\, x + \hat{G}(\infty)\, u$$
is a realization of $\hat{G}(s)$.
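For the scalar case p = 1 this is the controllable canonical form. A sketch that builds the realization for an assumed strictly proper $\hat{G}_{sp}(s) = (2s + 3)/(s^2 + 4s + 5)$ and checks $C(sI - A)^{-1}B$ at a test frequency:

```python
import numpy as np

# Assumed scalar example: d(s) = s^2 + 4s + 5, N(s) = 2s + 3
a1, a2, N1, N2 = 4.0, 5.0, 2.0, 3.0
A = np.array([[-a1, -a2], [1.0, 0.0]])  # block-companion form with p = 1
B = np.array([[1.0], [0.0]])
C = np.array([[N1, N2]])

s = 1.0 + 2.0j  # arbitrary test frequency
G_realized = (C @ np.linalg.inv(s * np.eye(2) - A) @ B)[0, 0]
G_expected = (2 * s + 3) / (s ** 2 + 4 * s + 5)
assert np.isclose(G_realized, G_expected)
```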

4.5 Solution of Linear Time-Varying (LTV) Equations

Consider
$$\dot{x}(t) = A(t)x(t) + B(t)u(t)$$
$$y(t) = C(t)x(t) + D(t)u(t)$$

Assume that every entry of A(t) is a continuous function of t. First discuss the solutions of
$$\dot{x}(t) = A(t)x(t)$$
The solution of the scalar time-varying equation $\dot{x} = a(t)x$ due to $x(0)$ is
$$x(t) = e^{\int_0^t a(\tau)\, d\tau}\, x(0)$$

One might try to extend this to the matrix case as
$$x(t) = e^{\int_0^t A(\tau)\, d\tau}\, x(0)$$
with
$$e^{\int_0^t A(\tau)\, d\tau} = I + \int_0^t A(\tau)\, d\tau + \frac{1}{2}\left(\int_0^t A(\tau)\, d\tau\right)\left(\int_0^t A(s)\, ds\right) + \cdots$$
But
$$\frac{d}{dt}\, e^{\int_0^t A(\tau)\, d\tau} = A(t) + \frac{1}{2}\left[A(t)\left(\int_0^t A(s)\, ds\right) + \left(\int_0^t A(s)\, ds\right)A(t)\right] + \cdots \neq A(t)\, e^{\int_0^t A(\tau)\, d\tau}$$
unless A(t) commutes with its integral, so the extension fails in general.
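The failure can be seen numerically: for an A(t) that does not commute with its integral, a finite-difference derivative of the matrix exponential does not equal $A(t)\,e^{\int_0^t A}$ (assumed example):

```python
import numpy as np
from scipy.linalg import expm

# Assumed A(t) that does not commute with its own integral F(t)
A = lambda t: np.array([[0.0, 1.0], [t, 0.0]])
F = lambda t: np.array([[0.0, t], [t ** 2 / 2, 0.0]])  # int_0^t A(s) ds

t, h = 1.0, 1e-6
# Central-difference derivative of e^{F(t)}
dEdt = (expm(F(t + h)) - expm(F(t - h))) / (2 * h)
# It does NOT match the naive candidate A(t) e^{F(t)}
assert not np.allclose(dEdt, A(t) @ expm(F(t)), atol=1e-3)
```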

Arranging n solutions as $X = [x_1\ x_2\ \cdots\ x_n]$, we have
$$\dot{X}(t) = A(t)X(t)$$
If $X(t_0)$ is nonsingular, i.e., the initial states are linearly independent, then X(t) is called a fundamental matrix of $\dot{x}(t) = A(t)x(t)$.
See Example 4.8.

Definition 4.2 Let X(t) be any fundamental matrix of $\dot{x}(t) = A(t)x(t)$. Then
$$\Phi(t, t_0) := X(t)X^{-1}(t_0)$$
is called the state transition matrix of $\dot{x}(t) = A(t)x(t)$.

The state transition matrix is also the unique solution of
$$\frac{\partial}{\partial t}\Phi(t, t_0) = A(t)\Phi(t, t_0)$$
with the initial condition $\Phi(t_0, t_0) = I$.

The important properties of the state transition matrix:
$$\Phi(t, t) = I$$
$$\Phi^{-1}(t, t_0) = [X(t)X^{-1}(t_0)]^{-1} = X(t_0)X^{-1}(t) = \Phi(t_0, t)$$
See Example 4.9.
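Both the definition and these properties can be checked by integrating $\partial\Phi/\partial t = A(t)\Phi$ numerically, together with the standard composition property $\Phi(t, t_0) = \Phi(t, t_1)\Phi(t_1, t_0)$ (the A(t) below is an assumed example):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed time-varying example A(t)
A = lambda t: np.array([[0.0, 1.0], [-1.0 - 0.5 * np.sin(t), -0.2]])

def transition(t, t0):
    # Integrate d/dt Phi(t, t0) = A(t) Phi(t, t0), Phi(t0, t0) = I, column-stacked
    sol = solve_ivp(lambda s, p: (A(s) @ p.reshape(2, 2)).ravel(),
                    (t0, t), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

# Phi^{-1}(t, t0) = Phi(t0, t)
assert np.allclose(np.linalg.inv(transition(1.0, 0.0)), transition(0.0, 1.0), atol=1e-6)
# Composition: Phi(2, 0) = Phi(2, 1) Phi(1, 0)
assert np.allclose(transition(2.0, 0.0),
                   transition(2.0, 1.0) @ transition(1.0, 0.0), atol=1e-6)
```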
We claim that the solution of the LTV system is
$$x(t) = \Phi(t, t_0)x_0 + \int_{t_0}^{t} \Phi(t, \tau)B(\tau)u(\tau)\, d\tau = \Phi(t, t_0)\left[x_0 + \int_{t_0}^{t} \Phi(t_0, \tau)B(\tau)u(\tau)\, d\tau\right]$$
$$y(t) = C(t)\Phi(t, t_0)x_0 + C(t)\int_{t_0}^{t} \Phi(t, \tau)B(\tau)u(\tau)\, d\tau + D(t)u(t)$$
The zero-input response:
$$x(t) = \Phi(t, t_0)x_0$$
The zero-state response:
$$y(t) = C(t)\int_{t_0}^{t} \Phi(t, \tau)B(\tau)u(\tau)\, d\tau + D(t)u(t) = \int_{t_0}^{t} \left[C(t)\Phi(t, \tau)B(\tau) + D(t)\delta(t - \tau)\right]u(\tau)\, d\tau$$
The impulse response matrix:
$$G(t, \tau) = C(t)\Phi(t, \tau)B(\tau) + D(t)\delta(t - \tau) = C(t)X(t)X^{-1}(\tau)B(\tau) + D(t)\delta(t - \tau)$$

4.5.1 Discrete-Time Case

Consider the discrete-time state equation
$$x[k+1] = A[k]x[k] + B[k]u[k]$$
$$y[k] = C[k]x[k] + D[k]u[k]$$
As in the continuous-time case, the discrete state transition matrix satisfies
$$\Phi[k+1, k_0] = A[k]\Phi[k, k_0]$$
with $\Phi[k_0, k_0] = I$. Its solution can be obtained directly as
$$\Phi[k, k_0] = A[k-1]A[k-2]\cdots A[k_0]$$
The solution of the discrete-time system:
$$x[k] = \Phi[k, k_0]x_0 + \sum_{m=k_0}^{k-1} \Phi[k, m+1]B[m]u[m]$$
$$y[k] = C[k]\Phi[k, k_0]x_0 + C[k]\sum_{m=k_0}^{k-1} \Phi[k, m+1]B[m]u[m] + D[k]u[k]$$
The impulse response:
$$G[k, m] = C[k]\Phi[k, m+1]B[m] + D[m]\delta[k - m]$$
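A sketch checking the product formula for $\Phi[k, k_0]$ against the recursion (random assumed data):

```python
import numpy as np

rng = np.random.default_rng(1)
A = [rng.standard_normal((2, 2)) for _ in range(6)]  # assumed A[k]
B = [rng.standard_normal((2, 1)) for _ in range(6)]  # assumed B[k]
u = rng.standard_normal(6)
x0 = np.array([1.0, 0.0])

def Phi(k, k0):
    # Phi[k, k0] = A[k-1] A[k-2] ... A[k0], with Phi[k0, k0] = I
    M = np.eye(2)
    for j in range(k0, k):
        M = A[j] @ M
    return M

# Recursion vs. closed form x[k] = Phi[k, 0] x0 + sum_m Phi[k, m+1] B[m] u[m]
k = 5
x = x0.copy()
for m in range(k):
    x = A[m] @ x + B[m][:, 0] * u[m]
x_closed = Phi(k, 0) @ x0 + sum(Phi(k, m + 1) @ B[m][:, 0] * u[m] for m in range(k))
assert np.allclose(x, x_closed)
```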

4.6 Equivalent Time-Varying Equations

The state equation
$$\dot{\bar{x}} = \bar{A}(t)\bar{x}(t) + \bar{B}(t)u$$
$$y = \bar{C}(t)\bar{x} + \bar{D}(t)u$$
where
$$\bar{A}(t) = [P(t)A(t) + \dot{P}(t)]P^{-1}(t)$$
$$\bar{B}(t) = P(t)B(t)$$
$$\bar{C}(t) = C(t)P^{-1}(t)$$
$$\bar{D}(t) = D(t)$$
is said to be equivalent to (4.69), and P(t) is called an equivalence transformation. Note that P(t)X(t) is also a fundamental matrix.

Theorem 4.3 Let $A_0$ be an arbitrary constant matrix. Then there exists an equivalence transformation that transforms (4.69) into (4.70) with $\bar{A}(t) = A_0$.

Periodic state equation:
$$A(t + T) = A(t)$$
Then
$$\dot{X}(t + T) = A(t + T)X(t + T) = A(t)X(t + T)$$
Thus X(t+T) is also a fundamental matrix, and
$$X(t + T) = X(t)X^{-1}(0)X(T)$$
Let $Q = X^{-1}(0)X(T)$, a constant nonsingular matrix. There exists a constant matrix $\bar{A}$ such that $e^{\bar{A}T} = Q$ (Problem 3.24). Thus
$$X(t + T) = X(t)e^{\bar{A}T}$$
Define
$$P(t) := e^{\bar{A}t}X^{-1}(t)$$
Note that P(t) is periodic with period T, since $P(t + T) = e^{\bar{A}t}e^{\bar{A}T}\left(X(t)e^{\bar{A}T}\right)^{-1} = e^{\bar{A}t}X^{-1}(t) = P(t)$.
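Given Q, one choice of $\bar{A}$ with $e^{\bar{A}T} = Q$ can be computed from a matrix logarithm (the solution is not unique); a sketch with an assumed Q whose eigenvalues are real and positive, so `scipy.linalg.logm` returns a real matrix:

```python
import numpy as np
from scipy.linalg import expm, logm

# Assumed constant nonsingular Q = X^{-1}(0) X(T) and period T
T = 2.0
Q = np.array([[0.5, 0.2], [0.0, 0.8]])

# One constant matrix with e^{Abar T} = Q
Abar = logm(Q) / T
assert np.allclose(expm(Abar * T), Q)
```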

Theorem 4.4 Consider (4.69) with A(t) = A(t+T) for all t and some T > 0. Let X(t) be a fundamental matrix and let $\bar{A}$ be the constant matrix with $e^{\bar{A}T} = X^{-1}(0)X(T)$. Then (4.69) is Lyapunov equivalent to
$$\dot{\bar{x}}(t) = \bar{A}\bar{x}(t) + P(t)B(t)u(t)$$
$$y(t) = C(t)P^{-1}(t)\bar{x}(t) + D(t)u(t)$$
where $P(t) = e^{\bar{A}t}X^{-1}(t)$.

4.7 Time-Varying Realizations


Theorem 4.5 A $q \times p$ impulse response matrix $G(t, \tau)$ is realizable if and only if $G(t, \tau)$ can be decomposed as
$$G(t, \tau) = M(t)N(\tau) + D(t)\delta(t - \tau)$$
for all $t \ge \tau$, where M and N are $q \times n$ and $n \times p$ matrices for some integer n.
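The sufficiency direction has a simple construction: take $A(t) = 0$, $B(t) = N(t)$, $C(t) = M(t)$; then $\Phi(t, \tau) = I$ and the impulse response is $C(t)\Phi(t, \tau)B(\tau) = M(t)N(\tau)$. A sketch with assumed M and N and D = 0:

```python
import numpy as np

# Assumed decomposition factors (q = 1, p = 1, n = 2)
M = lambda t: np.array([[np.sin(t), np.cos(t)]])  # 1 x 2
N = lambda t: np.array([[np.exp(-t)], [t]])       # 2 x 1

t, tau = 1.3, 0.4
Phi = np.eye(2)  # A(t) = 0 gives the identity transition matrix
G_state_space = M(t) @ Phi @ N(tau)
assert np.allclose(G_state_space, M(t) @ N(tau))
```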
