
Applied Nonlinear Control

Nguyen Tan Tien - 2002.5


6. Feedback Linearization
Feedback linearization is an approach to nonlinear control design.
- The central idea of the approach is to algebraically transform a nonlinear system dynamics into a fully or partly linear one, so that linear control theory can be applied.
- This differs entirely from conventional (Jacobian) linearization, in that feedback linearization is achieved by exact state transformations and feedback, rather than by linear approximations of the dynamics.
- Feedback linearization techniques can be viewed as ways of transforming original system models into equivalent models of a simpler form.
6.1 Intuitive Concepts
This section describes the basic concepts of feedback
linearization intuitively, using simple examples.
6.1.1 Feedback linearization and the canonical form
Example 6.1: Controlling the fluid level in a tank
Consider the control of the level $h$ of fluid in a tank to a specified level $h_d$. The control input is the flow $u$ into the tank, and the initial level is $h_0$.
[Fig. 6.1 Fluid level control in a tank: inflow $u$, level $h$, and an output flow through the outlet pipe of cross section $a$.]

The dynamic model of the tank is

$\frac{d}{dt}\left[\int_0^h A(h)\,dh\right] = u(t) - a\sqrt{2gh}$    (6.1)

where $A(h)$ is the cross section of the tank and $a$ is the cross section of the outlet pipe. The dynamics (6.1) can be rewritten as

$A(h)\,\dot{h} = u - a\sqrt{2gh}$    (6.2)

If $u(t)$ is chosen as

$u(t) = a\sqrt{2gh} + A(h)\,v$    (6.3)

with $v$ an equivalent input to be specified, the resulting dynamics is linear: $\dot{h} = v$. Choosing $v$ as $v = -\alpha\tilde{h}$, where $\tilde{h} = h(t) - h_d$ is the level error and $\alpha$ is a strictly positive constant, the closed-loop dynamics is

$\dot{\tilde{h}} + \alpha\tilde{h} = 0$    (6.4)

This implies that $\tilde{h}(t) \to 0$ as $t \to \infty$. From (6.2) and (6.3), the actual input flow is determined by the nonlinear control law

$u(t) = a\sqrt{2gh} - A(h)\,\alpha\tilde{h}$    (6.5)

Note that in the control law (6.5)
- $a\sqrt{2gh}$ : used to provide the output flow
- $-A(h)\,\alpha\tilde{h}$ : used to raise the fluid level according to the desired linear dynamics (6.4)

If the desired level is a known time-varying function $h_d(t)$, the equivalent input $v$ can be chosen as $v = \dot{h}_d(t) - \alpha\tilde{h}$ so as to still yield $\tilde{h}(t) \to 0$ as $t \to \infty$.
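As a quick numerical illustration (not part of the original notes), the Python sketch below simulates the tank under the nonlinear control law (6.5). The tank cross section $A(h)$, the outlet area $a$, the gain $\alpha$, the desired level, and the initial level are assumed values chosen only to exercise the formulas.

    import math

    # assumed, illustrative parameters (not from the notes)
    g = 9.81       # gravity [m/s^2]
    a = 0.01       # outlet cross section [m^2]
    alpha = 1.0    # strictly positive gain in v = -alpha * h_tilde
    h_d = 1.0      # desired level [m]
    h = 0.2        # initial level h_0 [m]

    def A(h):
        """Assumed tank cross section A(h); here a slightly tapered tank."""
        return 0.5 + 0.1 * h

    dt, T = 0.01, 10.0
    for k in range(int(T / dt)):
        h_tilde = h - h_d
        # control law (6.5): cancel the outflow term and impose v = -alpha * h_tilde
        u = a * math.sqrt(2 * g * h) - A(h) * alpha * h_tilde
        # plant dynamics (6.2): A(h) * dh/dt = u - a * sqrt(2*g*h)
        h += dt * (u - a * math.sqrt(2 * g * h)) / A(h)

    print(f"final level {h:.4f} m (desired {h_d} m)")

Because the closed loop behaves as the first-order linear system (6.4), the level settles to $h_d$ with time constant $1/\alpha$.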

The idea of feedback linearization is thus to cancel the nonlinearities and impose the desired linear dynamics. Feedback linearization can be applied to the class of nonlinear systems described by the so-called companion form, or controllability canonical form. Consider the system in companion form

$\frac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} x_2 \\ x_3 \\ \vdots \\ f(x) + b(x)\,u \end{bmatrix}$    (6.6)

where
- $x$ : the state vector
- $f(x), b(x)$ : nonlinear functions of the state
- $u$ : scalar control input

For this system, using a control input of the form

$u = (v - f)/b$    (6.7)

we can cancel the nonlinearities and obtain the simple input-output relation (multiple-integrator form) $x^{(n)} = v$. Thus, the control law $v = -k_0 x - k_1\dot{x} - \cdots - k_{n-1}x^{(n-1)}$, with the $k_i$ chosen so that the polynomial $p^n + k_{n-1}p^{n-1} + \cdots + k_0$ has its roots strictly in the left-half complex plane, leads to the exponentially stable dynamics $x^{(n)} + k_{n-1}x^{(n-1)} + \cdots + k_0 x = 0$, which implies that $x(t) \to 0$. For tasks involving the tracking of a desired output $x_d(t)$, the control law

$v = x_d^{(n)} - k_0 e - k_1\dot{e} - \cdots - k_{n-1}e^{(n-1)}$    (6.8)

(where $e(t) = x(t) - x_d(t)$ is the tracking error) leads to exponentially convergent tracking.
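A minimal sketch of the companion-form design (6.7)-(6.8) for an assumed second-order example $\ddot{x} = f(x,\dot{x}) + b(x)\,u$ is given below; the particular $f$, $b$, gains, and reference trajectory are illustrative choices, not taken from the notes.

    import math

    # assumed example in companion form (n = 2):  x_ddot = f(x, xdot) + b(x) * u
    f = lambda x, xdot: -math.sin(x) - 0.5 * xdot   # assumed nonlinear drift
    b = lambda x: 2.0 + math.cos(x)                 # assumed input gain (never zero)

    # k0, k1 place the roots of p^2 + k1*p + k0 at -2, -2 (left-half plane)
    k0, k1 = 4.0, 4.0

    x, xdot = 1.0, 0.0
    dt, T = 0.001, 10.0
    for k in range(int(T / dt)):
        t = k * dt
        # desired output x_d(t) and its derivatives (assumed reference)
        xd, xd_dot, xd_ddot = math.sin(t), math.cos(t), -math.sin(t)
        e, e_dot = x - xd, xdot - xd_dot
        # (6.8): v = x_d^(n) - k0*e - k1*e_dot,  then (6.7): u = (v - f)/b
        v = xd_ddot - k0 * e - k1 * e_dot
        u = (v - f(x, xdot)) / b(x)
        # integrate the plant
        xddot = f(x, xdot) + b(x) * u
        x, xdot = x + dt * xdot, xdot + dt * xddot

    print(f"tracking error at t = {T}: {x - math.sin(T):.2e}")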


Example 6.2: Feedback linearization of a two-link robot

Consider the two-link robot in Fig. 6.2.

[Fig. 6.2 A two-link robot: link lengths $l_1, l_2$, centers of mass at $l_{c1}, l_{c2}$, masses $m_1, m_2$, inertias $I_1, I_2$, joint angles $q_1, q_2$, joint torques $\tau_1, \tau_2$.]

The dynamics of the two-link robot is

$\begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix}\begin{bmatrix} \ddot{q}_1 \\ \ddot{q}_2 \end{bmatrix} + \begin{bmatrix} -h\dot{q}_2 & -h(\dot{q}_1+\dot{q}_2) \\ h\dot{q}_1 & 0 \end{bmatrix}\begin{bmatrix} \dot{q}_1 \\ \dot{q}_2 \end{bmatrix} + \begin{bmatrix} g_1 \\ g_2 \end{bmatrix} = \begin{bmatrix} \tau_1 \\ \tau_2 \end{bmatrix}$    (6.9)

where
- $q = [q_1 \; q_2]^T$ : joint angles
- $\tau = [\tau_1 \; \tau_2]^T$ : joint inputs (torques)
- $H_{11} = m_1 l_{c1}^2 + I_1 + m_2(l_1^2 + l_{c2}^2 + 2 l_1 l_{c2}\cos q_2) + I_2$
- $H_{12} = H_{21} = m_2 l_1 l_{c2}\cos q_2 + m_2 l_{c2}^2 + I_2$
- $H_{22} = m_2 l_{c2}^2 + I_2$
- $h = m_2 l_1 l_{c2}\sin q_2$
- $g_1 = m_1 l_{c1} g\cos q_1 + m_2 g\,[\,l_{c2}\cos(q_1+q_2) + l_1\cos q_1\,]$
- $g_2 = m_2 l_{c2} g\cos(q_1+q_2)$

Control objective: to make the joint positions $q_1$ and $q_2$ follow desired histories $q_{d1}(t)$ and $q_{d2}(t)$.

To achieve this tracking control task, one can use the following control law

$\begin{bmatrix} \tau_1 \\ \tau_2 \end{bmatrix} = \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} + \begin{bmatrix} -h\dot{q}_2 & -h(\dot{q}_1+\dot{q}_2) \\ h\dot{q}_1 & 0 \end{bmatrix}\begin{bmatrix} \dot{q}_1 \\ \dot{q}_2 \end{bmatrix} + \begin{bmatrix} g_1 \\ g_2 \end{bmatrix}$    (6.10)

where
- $v = [v_1 \; v_2]^T$ : the equivalent input, chosen as $v = \ddot{q}_d - 2\lambda\dot{\tilde{q}} - \lambda^2\tilde{q}$
- $\tilde{q} = q - q_d$ : the position tracking error
- $\lambda$ : a positive number

The tracking error then satisfies the equation $\ddot{\tilde{q}} + 2\lambda\dot{\tilde{q}} + \lambda^2\tilde{q} = 0$ and therefore converges to zero exponentially.

When the nonlinear dynamics is not in a controllability canonical form, one may have to use algebraic transformations to first put the dynamics into the controllability canonical form before using the above feedback linearization design.
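The control law (6.10) is straightforward to code. The sketch below is only an illustration: the link parameters, the gain $\lambda$, the reference trajectories, and the initial conditions are assumed values, not taken from the notes; what follows the text is the structure $\tau = Hv + C\dot{q} + g$ with $v = \ddot{q}_d - 2\lambda\dot{\tilde{q}} - \lambda^2\tilde{q}$ from (6.9)-(6.10).

    import numpy as np

    # assumed, illustrative robot parameters (not from the notes)
    m1, m2, l1, lc1, lc2, I1, I2, grav = 1.0, 1.0, 1.0, 0.5, 0.5, 0.1, 0.1, 9.81
    lam = 5.0  # positive gain lambda

    def model(q, qd):
        """H(q), C(q, qdot), g(q) of the two-link dynamics (6.9)."""
        c2, s2 = np.cos(q[1]), np.sin(q[1])
        H11 = m1*lc1**2 + I1 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I2
        H12 = m2*l1*lc2*c2 + m2*lc2**2 + I2
        H22 = m2*lc2**2 + I2
        H = np.array([[H11, H12], [H12, H22]])
        h = m2*l1*lc2*s2
        C = np.array([[-h*qd[1], -h*(qd[0]+qd[1])], [h*qd[0], 0.0]])
        g = np.array([m1*lc1*grav*np.cos(q[0]) + m2*grav*(lc2*np.cos(q[0]+q[1]) + l1*np.cos(q[0])),
                      m2*lc2*grav*np.cos(q[0]+q[1])])
        return H, C, g

    def control(t, q, qd):
        """Feedback-linearizing (computed-torque) law (6.10) for an assumed reference."""
        q_des   = np.array([np.sin(t), np.cos(t)])
        qd_des  = np.array([np.cos(t), -np.sin(t)])
        qdd_des = np.array([-np.sin(t), -np.cos(t)])
        e, ed = q - q_des, qd - qd_des
        v = qdd_des - 2*lam*ed - lam**2*e
        H, C, g = model(q, qd)
        return H @ v + C @ qd + g

    # simulate:  H qdd + C qd + g = tau  =>  qdd = H^{-1} (tau - C qd - g)
    q, qd = np.array([0.5, -0.5]), np.zeros(2)
    dt = 0.001
    for k in range(int(10.0 / dt)):
        t = k * dt
        tau = control(t, q, qd)
        H, C, g = model(q, qd)
        qdd = np.linalg.solve(H, tau - C @ qd - g)
        q, qd = q + dt*qd, qd + dt*qdd

    print("final tracking error:", q - np.array([np.sin(10.0), np.cos(10.0)]))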

6.1.2 Input-State Linearization

Consider the problem of designing the control input $u$ for a single-input nonlinear system of the form

$\dot{x} = f(x, u)$

The technique of input-state linearization solves this problem in two steps:
- Find a state transformation $z = z(x)$ and an input transformation $u = u(x, v)$, so that the nonlinear system dynamics is transformed into an equivalent linear time-invariant dynamics, in the familiar form $\dot{z} = Az + bv$.
- Use standard linear techniques to design $v$.

Example: Consider a simple second-order system

$\dot{x}_1 = -2x_1 + a\,x_2 + \sin x_1$    (6.11a)
$\dot{x}_2 = -x_2\cos x_1 + u\cos(2x_1)$    (6.11b)

Even though linear control design can stabilize the system in a small region around the equilibrium point (0,0), it is not obvious at all what controller can stabilize it in a large region. A specific difficulty is the nonlinearity in the first equation, which cannot be directly cancelled by the control input $u$.

Consider the following state transformation

$z_1 = x_1$    (6.12a)
$z_2 = a\,x_2 + \sin x_1$    (6.12b)

which transforms (6.11) into

$\dot{z}_1 = -2z_1 + z_2$    (6.13a)
$\dot{z}_2 = -2z_1\cos z_1 + \cos z_1 \sin z_1 + a\,u\cos(2z_1)$    (6.13b)

The new state equations also have an equilibrium point at (0,0). Now the nonlinearities can be cancelled by a control law of the form

$u = \frac{1}{a\cos(2z_1)}\,(v - \cos z_1 \sin z_1 + 2z_1\cos z_1)$    (6.14)

where $v$ is an equivalent input to be designed (equivalent in the sense that determining $v$ amounts to determining $u$, and vice versa), leading to the linear input-state relation

$\dot{z}_1 = -2z_1 + z_2$    (6.15a)
$\dot{z}_2 = v$    (6.15b)

Thus, through the state transformation (6.12) and the input transformation (6.14), the problem of stabilizing the original nonlinear dynamics (6.11) using the original control input $u$ has been transformed into the problem of stabilizing the new dynamics (6.15) using the new input $v$.


Now, consider the new dynamics (6.15). It is linear and controllable. Using the well-known linear state feedback control law $v = -k_1 z_1 - k_2 z_2$, one could choose, for instance, $k_1 = 0$ and $k_2 = 2$, i.e.,

$v = -2z_2$    (6.16)

resulting in the stable closed-loop dynamics $\dot{z}_1 = -2z_1 + z_2$ and $\dot{z}_2 = -2z_2$. In terms of the original state, this control law corresponds to the original input

$u = \frac{1}{a\cos(2x_1)}\,(-2a\,x_2 - 2\sin x_1 - \cos x_1 \sin x_1 + 2x_1\cos x_1)$    (6.17)

The original state $x$ is obtained from $z$ by

$x_1 = z_1$    (6.18a)
$x_2 = (z_2 - \sin z_1)/a$    (6.18b)

The closed-loop system under the above control law is represented by the block diagram in Fig. 6.3.

[Fig. 6.3 Input-State Linearization: an inner linearization loop consisting of the input transformation $u = u(x, v)$, the plant $\dot{x} = f(x, u)$, and the state transformation $z = z(x)$, closed by an outer pole-placement loop $v = -k^T z$.]

To generalize the above method, two questions arise:
- What classes of nonlinear systems can be transformed into linear systems?
- How to find the proper transformations for those which can?
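The example can also be checked numerically. The sketch below simulates the original system (6.11) under the outer-loop choice $v = -2z_2$; the parameter value $a = 1$ and the initial condition are assumptions chosen so that the trajectory stays away from the singularity $\cos(2x_1) = 0$ of the control law (6.14).

    import math

    a = 1.0              # assumed value of the parameter a
    x1, x2 = 0.5, 0.5    # assumed initial condition (keeps x1 away from cos(2*x1) = 0)
    dt, T = 0.001, 8.0
    for k in range(int(T / dt)):
        # state transformation (6.12)
        z1, z2 = x1, a * x2 + math.sin(x1)
        # pole-placement loop (6.16) and linearization loop (6.14)
        v = -2.0 * z2
        u = (v - math.cos(z1) * math.sin(z1) + 2.0 * z1 * math.cos(z1)) / (a * math.cos(2.0 * z1))
        # original plant (6.11)
        dx1 = -2.0 * x1 + a * x2 + math.sin(x1)
        dx2 = -x2 * math.cos(x1) + u * math.cos(2.0 * x1)
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2

    print(f"x1 = {x1:.2e}, x2 = {x2:.2e} (both approach zero)")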
6.1.3 Input-Output Linearization

Consider a tracking control problem with the following system

$\dot{x} = f(x, u)$    (6.19a)
$y = h(x)$    (6.19b)

Control objective: to make the output $y(t)$ track a desired trajectory $y_d(t)$ while keeping the whole state bounded, where $y_d(t)$ and its time derivatives are assumed to be known and bounded.

Consider the third-order system

$\dot{x}_1 = \sin x_2 + (x_2 + 1)x_3$    (6.20a)
$\dot{x}_2 = x_1^5 + x_3$    (6.20b)
$\dot{x}_3 = x_1^2 + u$    (6.20c)
$y = x_1$    (6.20d)

To generate a direct relationship between the output and the input, let us differentiate the output: $\dot{y} = \dot{x}_1 = \sin x_2 + (x_2 + 1)x_3$. Since $\dot{y}$ is still not directly related to the input $u$, let us differentiate again. We now obtain

$\ddot{y} = (x_2 + 1)u + f_1(x)$    (6.21)
$f_1(x) = (x_1^5 + x_3)(x_3 + \cos x_2) + (x_2 + 1)x_1^2$    (6.22)

Clearly, (6.21) represents an explicit relationship between $y$ and $u$. If we choose the control input to be of the form

$u = \frac{1}{x_2 + 1}\,(v - f_1)$    (6.23)

where $v$ is a new input to be determined, the nonlinearity in (6.21) is cancelled, and we obtain a simple linear double-integrator relationship between the output and the new input $v$: $\ddot{y} = v$. The design of a tracking controller for this double-integrator relation is simple using linear techniques. For instance, letting $e = y(t) - y_d(t)$ be the tracking error and choosing the new input $v$ as

$v = \ddot{y}_d - k_1 e - k_2\dot{e}$    (6.24)

where $k_1, k_2$ are positive constants, the tracking error of the closed-loop system is given by

$\ddot{e} + k_2\dot{e} + k_1 e = 0$    (6.25)

which represents an exponentially stable error dynamics. Therefore, if initially $e(0) = \dot{e}(0) = 0$, then $e(t) \equiv 0$ for all $t \ge 0$, i.e., perfect tracking is achieved; otherwise, $e(t)$ converges to zero exponentially.

Note that:
- The control law is defined everywhere except at the singularity points where $x_2 = -1$.
- Full state measurement is necessary to implement the control law.
- The above controller does not guarantee the stability of the internal dynamics.
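Below is a sketch of the input-output linearizing controller (6.23)-(6.24) applied to the third-order system (6.20). The gains, the reference $y_d(t)$, and the initial state are assumed; the reference amplitude is kept small so that this particular run stays away from the singularity $x_2 = -1$ and the internal states remain bounded, which, as noted above, is not guaranteed in general.

    import math

    k1, k2 = 4.0, 4.0              # assumed gains: roots of p^2 + k2*p + k1 at -2, -2
    x1, x2, x3 = 0.0, 0.0, 0.0     # assumed initial state
    dt, T = 0.001, 20.0
    for k in range(int(T / dt)):
        t = k * dt
        # assumed reference and its derivatives
        yd, yd_dot, yd_ddot = 0.2 * math.sin(t), 0.2 * math.cos(t), -0.2 * math.sin(t)
        y, y_dot = x1, math.sin(x2) + (x2 + 1.0) * x3    # full state measurement needed
        e, e_dot = y - yd, y_dot - yd_dot
        # (6.22), (6.24), (6.23)
        f1 = (x1**5 + x3) * (x3 + math.cos(x2)) + (x2 + 1.0) * x1**2
        v = yd_ddot - k1 * e - k2 * e_dot
        u = (v - f1) / (x2 + 1.0)                        # singular at x2 = -1
        # plant (6.20)
        dx1 = math.sin(x2) + (x2 + 1.0) * x3
        dx2 = x1**5 + x3
        dx3 = x1**2 + u
        x1, x2, x3 = x1 + dt * dx1, x2 + dt * dx2, x3 + dt * dx3

    print(f"output error {x1 - 0.2*math.sin(T):.2e}; internal states x2 = {x2:.3f}, x3 = {x3:.3f}")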
Example 6.3: Internal dynamics

Consider the nonlinear control system

$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} x_2^3 + u \\ u \end{bmatrix}$    (6.27a)
$y = x_1$    (6.27b)

Control objective: to make $y$ track $y_d(t)$.

Differentiating the output once gives $\dot{y} = \dot{x}_1 = x_2^3 + u$, so the control law (with $e = y - y_d$ as before)

$u = -x_2^3 - e(t) + \dot{y}_d(t)$    (6.28)

yields exponential convergence of $e$ to zero:

$\dot{e} + e = 0$    (6.29)

Applying the same control law to the second dynamic equation leads to the internal dynamics

$\dot{x}_2 + x_2^3 = \dot{y}_d - e$    (6.30)

which is non-autonomous and nonlinear.

However, since $e$ is guaranteed to be bounded by (6.29) and $\dot{y}_d$ is assumed to be bounded, we have $|\dot{y}_d(t) - e| \le D$, where $D$ is a positive constant. Thus we can conclude from (6.30) that $|x_2| \le D^{1/3}$, since $\dot{x}_2 < 0$ when $x_2 > D^{1/3}$, and $\dot{x}_2 > 0$ when $x_2 < -D^{1/3}$. Therefore, (6.28) does represent a satisfactory tracking control law for the system (6.27), given any trajectory $y_d(t)$ whose derivative $\dot{y}_d(t)$ is bounded.
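This boundedness argument can be checked numerically. The short sketch below, with an assumed reference $y_d(t) = \sin t$ and assumed initial conditions, applies the control law (6.28) and prints the final tracking error together with the internal state $x_2$, which stays bounded as argued above.

    import math

    x1, x2 = 1.0, 1.0    # assumed initial conditions
    dt, T = 0.001, 20.0
    for k in range(int(T / dt)):
        t = k * dt
        yd, yd_dot = math.sin(t), math.cos(t)   # assumed bounded reference
        e = x1 - yd
        # control law (6.28)
        u = -x2**3 - e + yd_dot
        # plant (6.27a)
        x1, x2 = x1 + dt * (x2**3 + u), x2 + dt * u

    print(f"tracking error {x1 - math.sin(T):.2e}, internal state x2 = {x2:.3f}")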

Note: if the second state equation in (6.27a) is replaced by $\dot{x}_2 = -u$, the resulting internal dynamics is unstable.
The internal dynamics of linear systems
(refer to the textbook)
The zero-dynamics
Definition: The zero-dynamics is defined to be the internal dynamics of the system when the system output is kept at zero by the input.
For instance, for the system (6.27)

$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} x_2^3 + u \\ u \end{bmatrix}$    (6.27a)
$y = x_1$    (6.27b)

keeping the output at zero, $y = x_1 \equiv 0$, implies $\dot{y} = \dot{x}_1 \equiv 0$ and hence $u \equiv -x_2^3$, so the zero-dynamics is

$\dot{x}_2 + x_2^3 = 0$    (6.45)

This zero-dynamics is easily seen to be asymptotically stable by using the Lyapunov function $V = x_2^2$.
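As a one-line check (not spelled out in the notes), differentiating $V = x_2^2$ along the zero-dynamics (6.45) gives

$\dot{V} = 2x_2\dot{x}_2 = -2x_2^4 < 0 \quad \text{for } x_2 \neq 0$

so $V$ decreases strictly away from the origin and $x_2 \to 0$.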
The reason for defining and studying the zero-dynamics is
that we want to find a simpler way of determining the stability
of the internal dynamics.
- In linear systems, the stability of the zero-dynamics
implies the global stability of the internal dynamics.
- In nonlinear systems, if the zero-dynamics is globally exponentially stable, only local stability is guaranteed for the internal dynamics.
To summarize, control design based on input-output
linearization can be made in three steps:
- differentiate the output y until the input u appears.
- choose u to cancel the nonlinearities and guarantee
tracking convergence.
- study the stability of the internal dynamics.
6.2 Mathematical Tools
6.3 Input-State Linearization of SISO Systems
6.4 Input-Output Linearization of SISO Systems
