
CS545—Contents VI

l  Control Theory II


§  Linear Stability Analysis
§  Linearization of Nonlinear Systems
§  Discretization

l  Reading Assignment for Next Class


§  See http://www-clmc.usc.edu/~cs545
Stability Analysis
l  Given the control system
x˙ = f (x,u ) or x˙ = Ax +Bu

l  How can we the show that a particular choice of a


controller generates a stable control system?

l  In order to get started, consider whether the generic


dynamical system is stable:
x˙ = f (x ) or x˙ = Ax
Equilibrium Points and Stability
l  Definition of an Equilibrium Point
l  A state x is an equilibrium state (or equilibrium point) of the system if
once x(t) is equal to x, it remains equal to x for all future time.
l  Mathematically, this means:
l  Definition of Stability
l  An equilibrium state x is said to be stable, if, for any R>0, there exists
r>0, such that if ||x(0)||<r,then ||x(t)||<R for all t≥0. Otherwise, the
equilibrium point is unstable.

[Figure: a trajectory that starts inside the ball of radius r about the equilibrium remains inside the ball of radius R.]

Linear Stability Analysis
(Local Stability Analysis)
l  What is needed at the outset:
l  The system model (linear or nonlinear)
l  An equilibrium point
l  The linearization of the system about the equilibrium point
l  Then, we have the linear(ized) system:
x˜˙ = A x=x* x˜ where x˜ = (x - x* )

l  This system is stable if and only if:


REAL(eig( A)) < 0

l  The complete stability definitions are:


continuous system discrete system
stable REAL(eig(A)) < 0 eig(A) <1
marginally stable REAL(eig(A)) = 0 eig(A) =1
unstable REAL(eig(A)) > 0 eig(A) >1
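As a concrete illustration, a minimal NumPy sketch of this eigenvalue test is given below; the example matrices are made up for illustration only.

```python
import numpy as np

def continuous_stability(A):
    """Classify the continuous-time system x_dot = A x via Re(eig(A))."""
    re = np.real(np.linalg.eigvals(A))
    if np.all(re < 0):
        return "stable"
    if np.any(re > 0):
        return "unstable"
    return "marginally stable"

def discrete_stability(A):
    """Classify the discrete-time system x_{n+1} = A x_n via |eig(A)|."""
    mag = np.abs(np.linalg.eigvals(A))
    if np.all(mag < 1):
        return "stable"
    if np.any(mag > 1):
        return "unstable"
    return "marginally stable"

# Illustrative example matrices (not from the lecture)
A_cont = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Re(eig) < 0  -> stable
A_disc = np.array([[0.9, 0.1], [0.0, 0.5]])     # |eig| < 1    -> stable
print(continuous_stability(A_cont), discrete_stability(A_disc))
```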
Example: Stability of Pendulum
l  The nonlinear equations of a controlled pendulum were
derived in Lecture III to be:
! x˙1 $ ! ' sin ( x2 )$
g ! 1 2$
# &=# l & + ( # ml &
" x˙ 2 % " x % " 0 %
1

l  Just consider the system without control input:


! x˙1 $ ! ' sin ( x2 )$
g
# &=# l &
" x˙ 2 % " x %
1

l  What are the equilibrium points?


" ! g sin ( x )%
0=$ l 2
' ( x1 = 0, x2 = 0 or )
# x1 &
Example: Stability of Pendulum (cont'd)
l  Are the equilibrium points locally stable?
l  Linearization:
l  Taylor Series Expansion about the equilibrium point:
Given: x˙ = f (x )
then a First Order Taylor Series Expansion about
a point x 0 is:
"f " ! g sin ( x )%
f ( x) ! f ( x 0 ) + (x # x 0 ) f ( x) = $ l
"x x=x 0
2
'
# x1 &
l  For the pendulum example:
(f ( x) " 0 ! cos( x2 )%
g
=$ l ';
(x #1 0 &

(f ( x) " 0 ! g % (f ( x) " 0 g%
= =
(x x=0 $# 1 0l '& (x x= ) $# 1 0l '&
;
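The analytic Jacobian can be cross-checked numerically. Below is a minimal sketch (assuming g = 9.81 and l = 1, which are not specified in the slides) that approximates the Jacobian by central finite differences at both equilibria.

```python
import numpy as np

g, l = 9.81, 1.0   # assumed values for illustration

def f(x):
    """Pendulum dynamics without control; x = [x1, x2] = [velocity, angle]."""
    return np.array([-(g / l) * np.sin(x[1]), x[0]])

def numerical_jacobian(func, x0, eps=1e-6):
    """Central finite-difference approximation of df/dx at x0."""
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (func(x0 + dx) - func(x0 - dx)) / (2 * eps)
    return J

for x2_eq in (0.0, np.pi):
    x_eq = np.array([0.0, x2_eq])
    J_num = numerical_jacobian(f, x_eq)
    J_ana = np.array([[0.0, -(g / l) * np.cos(x2_eq)], [1.0, 0.0]])
    print(x2_eq, np.allclose(J_num, J_ana, atol=1e-5))
```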
Example: Stability of Pendulum (cont'd)
l  Eigenvalues:
l  Case 1:

" 0 ! g% g g
l ' : ( + = 0 ) (1,2 = ± j
2
eig$
#1 0 & l l
) marginally stable

l  Case 2:

!0 g$
g g
eig# l& : ' 2
( = 0 ) '1,2 = ±
"1 0% l l
) unstable
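Both cases are easy to verify numerically; a minimal sketch (again assuming g = 9.81 and l = 1 for illustration):

```python
import numpy as np

g, l = 9.81, 1.0   # assumed values for illustration

# Case 1: linearization about the hanging equilibrium (x2 = 0)
A_down = np.array([[0.0, -g / l], [1.0, 0.0]])
# Case 2: linearization about the upright equilibrium (x2 = pi)
A_up = np.array([[0.0, g / l], [1.0, 0.0]])

print(np.linalg.eigvals(A_down))  # +/- j*sqrt(g/l): purely imaginary -> marginally stable
print(np.linalg.eigvals(A_up))    # +/- sqrt(g/l): one positive real root -> unstable
```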
Make the Pendulum Stable:
Add Dissipation
l  In order to be stable, a system MUST be able to lose
energy somehow
l  Add viscous friction to pendulum
! x˙1 $ ! ' sin ( x2 ) ' bx1 $
g
# &=# l &
" x˙ 2 % " x %
1

l  Linearized System


# g &
!f ( x ) "b " cos ( x2 )
=% l (;
!x % (
$ 1 0 '
# g& # g&
!f ( x ) "b " !f ( x ) "b
=% l (; =% l(
!x x=0 % ( !x x=) % (
$ 1 0 ' $ 1 0'
g 1# g&
* + + b+ ± = 0 * +1,2 = % "b ± b ! 4 (
2 2

l 2$ l'
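A minimal sketch confirming that the damped linearization about x = 0 has eigenvalues with negative real parts (assuming g = 9.81, l = 1, and a few illustrative friction coefficients b):

```python
import numpy as np

g, l = 9.81, 1.0   # assumed values for illustration

for b in (0.5, 2.0, 10.0):                       # illustrative friction coefficients
    A = np.array([[-b, -g / l], [1.0, 0.0]])     # linearization about x = 0
    lam = np.linalg.eigvals(A)
    # Roots of lambda^2 + b*lambda + g/l = 0 from the quadratic formula
    lam_quad = np.roots([1.0, b, g / l])
    same = np.allclose(np.sort_complex(lam), np.sort_complex(lam_quad))
    print(b, lam, "all Re < 0:", bool(np.all(np.real(lam) < 0)), "matches formula:", same)
```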
The Controlled Pendulum
      \begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix}
      = \begin{pmatrix} -\frac{g}{l}\sin(x_2) \\ x_1 \end{pmatrix}
      + \tau \begin{pmatrix} \frac{1}{ml^2} \\ 0 \end{pmatrix}

l  Assume a PD controller: ! = kp ( x 2,d " x2 ) + k D ( x1,d " x1 )


# # x1,d & # x1 & &
= (k D k P )% % ( " % ( ( = K (x d " x )
$ $ x 2,d ' $ x2 ' '
l  … and apply to the system
! x˙1 $ ! ' sin ( x2 )$ ! 1 ml 2 $
g
# &=# l & +# & K(x d ' x )
" 2% "
x˙ x % " 0 %
1

l  Equilibrium Points:


" ! g sin ( x )% " 1 2 %
0=$ l ' + $ ml ' K(x d ! x )
2

# x1 & # 0 &
g
( x1 = 0, ! sin ( x2 ) + 1 2 k P ( x2,d ! x2 ) = 0
l ml
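The last equation is transcendental in x_2, so for a given desired angle the equilibrium can be found numerically. A minimal bisection sketch, assuming g = 9.81, m = l = 1, and an illustrative gain k_P and desired angle x_{2,d} that are not taken from the slides:

```python
import numpy as np

g, m, l = 9.81, 1.0, 1.0
k_P = 20.0     # illustrative P-gain
x2_d = 0.5     # illustrative desired angle [rad]

def h(x2):
    """Equilibrium condition: -(g/l) sin(x2) + k_P/(m l^2) (x2_d - x2) = 0."""
    return -(g / l) * np.sin(x2) + (k_P / (m * l**2)) * (x2_d - x2)

# Simple bisection on [0, x2_d]; h(0) > 0 and h(x2_d) < 0 for these values
lo, hi = 0.0, x2_d
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if h(lo) * h(mid) <= 0:
        hi = mid
    else:
        lo = mid
x2_eq = 0.5 * (lo + hi)
print("equilibrium angle:", x2_eq, "residual:", h(x2_eq))
# Note: x2_eq < x2_d, i.e. a pure PD controller leaves a steady-state error under gravity.
```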
The Controlled Pendulum (cont'd)
l  Linearize the system
" 0 ! g cos( x )% " 1 2 % " !1 0 %
$ l 2,0
' + $ ml ' K$ '
#1 0 & # 0 & # 0 !1&

" ! kD ! cos( x2,0 ) ! kP 2 %


g
=$ ml 2 l ml '
# 1 0 &

l  Eigenvalues can be determined as before


l  Note: D-controller makes the system stable!!!!

" ! kD !
g
cos( x2,0 ) ! kP 2 %
$ ml 2
l ml ' , assume m = l = 1
# 1 0 &

(1,2 =
1
2
( (
!k D ± k D2 ! 4 kP + g cos( x2,0 )) )
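A minimal sketch evaluating the closed-loop eigenvalues for a few illustrative gain pairs (assuming m = l = 1, g = 9.81, and linearization about x_{2,0} = 0):

```python
import numpy as np

g, m, l = 9.81, 1.0, 1.0
x2_0 = 0.0   # linearization point: hanging equilibrium

for k_P, k_D in [(5.0, 0.0), (5.0, 2.0), (50.0, 10.0)]:   # illustrative gains
    A_cl = np.array([[-k_D / (m * l**2), -(g / l) * np.cos(x2_0) - k_P / (m * l**2)],
                     [1.0, 0.0]])
    lam = np.linalg.eigvals(A_cl)
    # With k_D = 0 the real parts are zero (marginally stable);
    # k_D > 0 moves both eigenvalues into the left half plane.
    print(k_P, k_D, lam, "stable:", bool(np.all(np.real(lam) < 0)))
```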
Discretization
l  In real implementations, time is discrete, and continuous
systems need to be discretized in order to examine
stability
x˙ = Ax + Bu
x n +1 ! x n
= Axn + Bun
"t
x n+1 = "tAxn + "tBu n + x n
x n+1 = ("tA + I )x n + "tBu n
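A minimal sketch of this forward-Euler discretization and the resulting discrete-time eigenvalue check; the example A, B, and sample times are illustrative and not from the lecture:

```python
import numpy as np

def euler_discretize(A, B, dt):
    """Forward-Euler discretization: x_{n+1} = (dt*A + I) x_n + dt*B u_n."""
    A_d = dt * A + np.eye(A.shape[0])
    B_d = dt * B
    return A_d, B_d

# Illustrative continuous-time system (stable: eig = -1 +/- 3j)
A = np.array([[0.0, 1.0], [-10.0, -2.0]])
B = np.array([[0.0], [1.0]])

for dt in (0.01, 0.5):
    A_d, _ = euler_discretize(A, B, dt)
    mags = np.abs(np.linalg.eigvals(A_d))
    print(dt, mags, "discrete-stable:", bool(np.all(mags < 1)))
# A continuous-time stable system can become unstable under forward Euler
# if the sample time dt is chosen too large.
```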
Discretized Pendulum Stability
l  The discretized linearized system becomes
# & # "t & #
% 1 ! l cos ( x2,0 ) "t
g &
( +% ml ( K % !1 0 (
2

% ( % (' $ 0 !1 '
$ "t 1 ' $ 0

# "tkD "tkP &


cos ( x2,0 ) !
g"t
1! !
=% ml 2 l ml 2 (
% (
$ "t 1 '
l  Example: equilibrium state is zero and desired state is
zero #
1!
"tk
!
g"t "tk
!
&
% ml 2 (
D P
ml 2 l
% (
$ "t 1 '
for simplicity, assume m = l = 1
# 1 ! "tk !"t ( g + kP ) &
% (
D

$ "t 1 '
      \lambda^2 + (\Delta t\, k_D - 2)\,\lambda + 1 - \Delta t\, k_D + \Delta t^2 (k_P + g) = 0

      \lambda_{1,2} = \frac{1}{2}\left( 2 - \Delta t\, k_D \pm \sqrt{(2 - \Delta t\, k_D)^2 - 4\big(1 - \Delta t\, k_D + \Delta t^2 (k_P + g)\big)} \right)
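A minimal sketch of the gain sweep behind the plots on the next slide, checking |eig| < 1 over a grid of gains (assuming m = l = 1, g = 9.81, and an illustrative sample time Δt):

```python
import numpy as np

g, dt = 9.81, 0.05                   # g assumed; dt chosen for illustration
gains = np.arange(0.0, 101.0, 5.0)   # sweep k_P and k_D from 0 to 100

stable = np.zeros((len(gains), len(gains)), dtype=bool)
for i, k_P in enumerate(gains):
    for j, k_D in enumerate(gains):
        A_d = np.array([[1.0 - dt * k_D, -dt * (g + k_P)],
                        [dt, 1.0]])
        # Discrete-time stability: all eigenvalue magnitudes strictly below 1
        stable[i, j] = np.all(np.abs(np.linalg.eigvals(A_d)) < 1.0)

print(f"{stable.sum()} of {stable.size} gain pairs are stable for dt = {dt}")
# e.g. k_D = 0 is never strictly stable here, mirroring the continuous-time result.
```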
Eigenvalues as a function of gains
- Note: inappropriate gains can make a naturally stable equilibrium point unstable.

[Figure: two surface plots of the eigenvalue magnitudes |eig| of the discretized closed-loop system as a function of the gains k_P and k_D, each ranging from 0 to 100.]
