From Differential Equations Through Lyapunov Functions to Sliding Mode Control (v1.0)

Ogbeide Imahe

Abstract—This paper gives a holistic introduction to sliding mode control, and presents it as a suitable advanced control method to teach in an introductory controls course. Sliding mode control is a robust control method with very good rejection of disturbance signals that act, or appear to act, through the control input channel. The main idea and design task is to encapsulate a stable dynamic system that obeys the system model dynamics into a variable known as the switching function. This new variable is treated as an objective function to be minimised using Lyapunov stability theory. Restatements of that theory, known as reaching conditions/laws, were derived and a general form given. These laws were used to derive control laws that explicitly contained the signum function, and were applied to some nonlinear systems. The stability behaviour of a simplified version of the so-called supertwisting sliding mode control algorithm provided insight into the stability behaviour of Proportional-Integral (PI) control, since they were shown to be analogous. Their similarity leads to a generalised view of Proportional-Integral-Derivative (PID) control that suggests that the sliding mode control structure may be used to replace or modify PID controllers in some existing and future industrial controllers.

V       An Illustrative Example
        V-A     Cruise Control

VI      Further Examples
        VI-A    Single Link Manipulator
        VI-B    Ball and Beam System

VII     Improvements and Enhanced Implementations
        VII-A   Sliding Dynamics
        VII-B   Implementations And Dealing With Unmatched Disturbances

VIII    Conclusion and Recommendations

References
I. INTRODUCTION

A large group of systems have disturbance signals or model uncertainty that appear primarily, or entirely, to affect the system through the same integrator or channel as the input signal. We call the disturbance/uncertainty matched in this case; otherwise, unmatched. Sliding mode control may therefore be suitable to control these systems given its noted invariance/insensitivity, in the ideal case, to matched disturbance when the so-called sliding mode is in effect. Mechanical and electro-mechanical systems are typical examples [1], [2], [3], [4].
Within the above group, there is a large class of systems that are SISO (single input with single output) and have models that are representable in the controllable canonical form (1), or that are linearisable to yield the same form. This model lends itself to easy creation of the switching function for sliding mode control, and so is suited to an introductory note like this one.
The controllable canonical form for nonlinear systems, with matched disturbance d, can be given as

ẋi = x_(i+1)
ẋn = f(x) + g(x)(u + d)        (1)

Index Terms—nonlinear, Lyapunov stability theory, sliding mode, introduction, tutorial, supertwisting, power-rate, reaching laws, reachability condition.

CONTENTS

I       Introduction

II      Key Concepts

III     Lyapunov Statement Of Stability
        III-A   Stability And Lyapunov Functions
                III-A1  Norms
        III-B   Asymptotic Stability Illustration

IV      Evolution of Sliding Mode Control
        IV-A    From Differential Equations To Lyapunov Functions
        IV-B    From Differential Equations to Sliding Surfaces
        IV-C    From Lyapunov Functions to Sliding Mode Control
                IV-C1   Dealing with Input Disturbance
                IV-C2   Reaching Laws
                IV-C3   A Simplified Supertwisting Algorithm


where n is the dimension of the system; xi, xn are the system state variables (i = 1, . . . , n - 1); u, the control input; and either or both f(x) and g(x) are linear or nonlinear.
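To make (1) concrete, the sketch below encodes a hypothetical second-order (n = 2) plant of this form. The drift f(x) = -sin(x1) and the constant input gain g(x) = 2 are illustrative assumptions, not taken from the text.

```python
import math

def canonical_dynamics(x, u, d):
    """State derivative for a hypothetical n = 2 system in the
    controllable canonical form (1):
        x1_dot = x2
        x2_dot = f(x) + g(x) * (u + d)."""
    f = -math.sin(x[0])   # assumed nonlinear drift f(x)
    g = 2.0               # assumed input gain g(x), constant here
    return [x[1], f + g * (u + d)]
```

Note that the disturbance d enters through the same channel as u, which is what makes it matched.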
That there are many SISO systems, with matched uncertainty, that are representable in the controllable canonical form (as described above) makes sliding mode control a useful control method to learn in an introductory controls course. Hence, in addition to being an introductory note, the presentation and organisation (as shown in the table of contents) is such as to demonstrate a simple and easily understandable flow of ideas that may be regarded as pedagogically useful. Additionally, links with classic PID control and other control methods are noted, so as to show that sliding mode control could be used to motivate these topics.

© 2015 Ogbeide Imahe
The paper shows how stable linear systems could be used to derive Lyapunov functions, highlights that sliding modes are stable trajectories, and then combines these with Lyapunov stability theory to derive sliding mode controllers. In the process we promote the view of controllers as dynamic optimisation engines, a view that is clearly demonstrated by the idea known as model predictive control (MPC), where optimisation is explicitly used throughout the system response trajectory to derive a control signal that achieves an objective. We may thus see that the switching function could be used to specify the objective for MPC.
Reaching laws are laws or conditions that may be derived from Lyapunov stability theory, and that ensure that sliding mode is achieved and maintained. They were presented together as a construct in [4], but without derivation. Their derivation is simple and is shown in this paper, in addition to giving reasons why the parameter value, α, in the so-called power-rate reaching law is commonly given a value of 0.5 [5].
Stability is fundamental to control, since control may be understood to be a quest to avoid unstable behaviour in the long run, typically while also attaining another objective. It is the subject of the next section. After that, we evolve sliding mode control from basic ideas and discuss how the classic Proportional-Integral (PI) controller could operate in a region of instability that doesn't end up harmful, for the right gain parameters and disturbance magnitude limit. Subsequently, several design and simulation examples are presented to give a sense of the resulting control signals and associated performance, and to highlight other considerations. In these examples, all system states were assumed to be available (measured, or determinable) for use in control.
The perspective taken in this paper is holistic, placing sliding mode control, Lyapunov functions and stable (stabilised) systems in their relative contexts. It thereby highlights an intuitiveness to the associated mathematics. The presentation attempts to build intuition and facilitate heuristic design, so it primarily requires only a basic understanding of calculus and ordinary differential equations, and the concept of control, to follow.
II. KEY CONCEPTS

Two ideas, when combined, make sliding mode control. One is encapsulation; the other is Lyapunov stability theory. Encapsulation involves abstracting a desired or designed dynamics, based on the system states, into a new state variable. This new state variable is then used in the controller design. The classic example of this is the error signal: the difference between a desired reference signal and the actual output.
For sliding mode control, we call this new state variable the switching function (which hints at the idea that the eventual control signal may demonstrate switching or discontinuous behaviour). It defines a space such that when it is either positive or negative, its gradient can be designed, via a control law, to drive its value towards zero. And since it represents dynamics that the system is made to obey by the control law, it is both an objective function and a constraint on the behaviour of the states of the system to achieve total disturbance rejection where a zero value is maintained.
Lyapunov stability theory plays the role of ensuring that
the value of the switching function is driven to zero when not
equal to zero, and kept at that value subsequently. The control
law derived from it thus serves as an optimisation engine to
achieve and maintain the desired objective.
When the switching function takes the zero value and
continually retains it, we say that the system is in sliding
mode. The system dynamics is then characterised by the
designed dynamic response when the switching function is
zero. Necessarily, therefore, the switching function must be
designed such that it ensures that the system is stable in the
sliding mode, and that it results in acceptable performance.
For appropriately chosen controller parameters, the effects
of matched disturbance signals are absorbed by the dynamics
of the switching function. And if the system arrives and stays
on the sliding mode, the system becomes invariant or immune
to those disturbances.
However, unmatched disturbance signals, where present,
would still filter through to the output through the states and
affect the overall performance. In this case, sliding mode
control may be combined with other control methods or
paradigms suitable for this problem [6].
Two advantages to be obtained from sliding mode control
are thus robustness to matched uncertainty/disturbance and
ability to choose the system dynamic response. As noted in the introduction, there is a sufficiently large group of controlled systems where it is sufficient to deal only (or mainly) with matched uncertainty/disturbance. In the next two sections, we will focus on developing sliding mode control using the concepts highlighted in this section.
III. LYAPUNOV STATEMENT OF STABILITY
Stability refers to the quality of resisting change or deviation from a balance point or region. It also refers to a tendency to converge to a finite point or region of behaviour, and to remain there.
Newton's first law of motion says that a body continues in its
state of rest or uniform motion in a straight line unless acted
upon by an external force. It thus gives two examples of stable
behaviour while noting that regions of stable behaviour may be
changed by an external force. It is the objective of controllers
to ensure the stability of a system and/or give the system the
ability to switch stable states (or equilibrium points).
We next describe asymptotic stability from a Lyapunov perspective and show how discontinuous control signals emerge
to satisfy it.
A. Stability And Lyapunov Functions
A Lyapunov function is an arbitrary scalar-valued, differentiable function of the state variables of a system that has the

TABLE I
SIGN CONDITIONS FOR ASYMPTOTIC STABILITY OF A SINGLE-STATE SYSTEM

x      ẋ      x ẋ
-ve    +ve    -ve
+ve    -ve    -ve
following characteristics:
1) It has a zero value when all the state variables have zero
values.
2) It always takes a positive value except when all the state
variables have zero values.
3) Its value would tend to infinity as the absolute values
of the states tend to infinity.
We can infer from the items in the above list that the Lyapunov
function has zero as its minimum possible value.
In order to prove a system stable, the first time derivative of its Lyapunov function must have a value that is either negative or zero. If this gradient
is negative, the Lyapunov function itself would tend to zero;
which in turn means that the state variables will tend to zero,
making the system asymptotically stable. If this gradient is
zero, the system would just be described as stable, with the
states maintaining their values. A positive gradient implies
that the state variables are increasing, thus implying unstable
behaviour.
Hence, a controller derived based on the gradient of the
Lyapunov function that is negative will make the system
asymptotically stable; it will drive the variables that make
up the Lyapunov function towards zero. Therefore all the
system states would ideally be used to construct the Lyapunov
function.
1) Norms: A norm is a function that maps a set of numbers or variables to a number greater than or equal to zero; it yields zero only for a domain or input set of all zeros. If the input set is scaled, then the resulting norm is similarly scaled by the absolute value of the scalar. Norms obey the triangle inequality: the norm of a sum of input sets is less than or equal to the sum of their norms. We can thus see that every Lyapunov function is a norm, and norms could be made into Lyapunov functions.
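As a quick numerical spot-check of these properties, the sketch below verifies the scaling rule and the triangle inequality for the Euclidean norm on random vectors; the choice of norm and the sampling are illustrative only.

```python
import math
import random

def euclid(v):
    """Euclidean norm of a vector."""
    return math.sqrt(sum(x * x for x in v))

random.seed(0)
for _ in range(1000):
    a = [random.uniform(-5, 5) for _ in range(3)]
    b = [random.uniform(-5, 5) for _ in range(3)]
    c = random.uniform(-3, 3)
    # scaling: ||c a|| = |c| ||a||
    assert abs(euclid([c * x for x in a]) - abs(c) * euclid(a)) < 1e-9
    # triangle inequality: ||a + b|| <= ||a|| + ||b||
    assert euclid([x + y for x, y in zip(a, b)]) <= euclid(a) + euclid(b) + 1e-9
# zero only at the all-zero input
assert euclid([0.0, 0.0, 0.0]) == 0.0
```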
B. Asymptotic Stability Illustration
Table I refers to Figure 1. For asymptotic stability, the ball,
distance x from the fulcrum, should tend towards a position
above the fulcrum specified to be the equilibrium point. If
the ball is to the left of the fulcrum (negative side), it should
be moved to the right (with positive velocity). Also, if the
ball is to the right of the fulcrum (positive side), it should be
moved to the left (with negative velocity). It becomes clear
that the product of the position and velocity must be negative
if asymptotic stability is to be achieved.
In sliding mode control, the sliding mode is defined to be
the equilibrium point. And the sliding mode controller is what
drives the system states towards the sliding mode.

Fig. 1. Ball on a see-saw

IV. EVOLUTION OF SLIDING MODE CONTROL

This section highlights how sliding mode control arises and evolves from Lyapunov stability theory.
A. From Differential Equations To Lyapunov Functions
We show an example of how a Lyapunov function may encapsulate a stable dynamic system by working from the solution of a simple stable linear system to a Lyapunov function (equations 2 to 5). Define

x(t) = x(0) e^(-kt)        (2)

so that

ẋ(t) = -k x(t)        (3)

Equation (2) is an asymptotically stable path for x(t) with k > 0 and initial value x(0). It is the solution to the differential equation (3), which must therefore be a stable system.

Let us simply represent x(t) by x. If we multiply equation (3) by x, it gives (4), which has a strictly negative right-hand side for x ≠ 0.

x ẋ = -k x^2        (4)
(4)
Also, define the gradient of a Lyapunov function V as V̇ = x ẋ. Integrating it with respect to t gives

V = (1/2) x^2        (5)

It is a Lyapunov function since V(x) > 0 for x ≠ 0, and V(0) = 0. Circularly, (3) is shown to be a stable system since the time derivative V̇ is less than zero.
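The derivation above can be checked numerically: along the solution (2), the candidate V = (1/2)x² of (5) strictly decreases, and the gradient (4) stays negative. The values of k and x(0) below are arbitrary.

```python
import math

k, x0 = 0.7, 2.0                          # arbitrary k > 0 and initial value
V = lambda x: 0.5 * x * x                 # Lyapunov function (5)

prev = V(x0)
for step in range(1, 100):
    x = x0 * math.exp(-k * 0.05 * step)   # solution (2) sampled in time
    assert V(x) < prev                    # V strictly decreases
    assert x * (-k * x) < 0               # gradient (4) is negative
    prev = V(x)
```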

B. From Differential Equations to Sliding Surfaces


Generally, any system in the controllable canonical form (1) could have a linear switching function designed in the form (6), or equivalently, (7). The superscripts of x1 are derivative degrees, n is the order of the system, and ai is constant (i = 1, . . . , n - 1).

S = x1^(n-1)(t) + a_(n-1) x1^(n-2)(t) + · · · + a1 x1(t)        (6)

S = xn(t) + a_(n-1) x_(n-1)(t) + a_(n-2) x_(n-2)(t) + · · · + a1 x1(t)        (7)

The equation of the sliding mode, S = 0, may then be written as (8). For stability, bi (i = 1, . . . , n) is chosen to be positive and real.

0 = (xn + bn)(x_(n-1) + b_(n-1))(x_(n-2) + b_(n-2)) · · · (x1 + b1)        (8)

Equation (8) is an equation specifying the designed dynamics for the system in the sliding mode. This echoes [7]: any stable differential equation in the states of a system that respects the relationships of the states in the system model could be the sliding mode.

C. From Lyapunov Functions to Sliding Mode Control

The function of the controller reduces to driving the system states to the point where they obey the sliding mode, and keeping them there. We could thus use a suitable Lyapunov function, considering the sliding mode as the equilibrium point of the system, to derive a family of controllers that could satisfy this objective.

Let us define a Lyapunov function, V = |S|; its first derivative is V̇ = Ṡ (d|S|/dS), thus V̇ = Ṡ sgn(S). For stability, set Ṡ sgn(S) = -η; rewritable as (9).

Ṡ = -η sgn(S),   η > 0        (9)

Equation (9) would ensure that S → 0 in finite time. It is called the η-reachability condition [8] or the constant-rate reaching law [4]. It is the basis of classic sliding mode control.

Similarly, let V = (1/2)S^2; its first derivative would be V̇ = S Ṡ. For stability, we can set S Ṡ = -η|S|, which then leads again to (9).

If we rephrase (9) as

(dS/dx) ẋ = -η sgn(S),   ẋ = f(x, u)        (10)

where ẋ defines the system of differential equations that model the dynamical system, and u its control signal, we see that S must be designed as a function of all the states of the system. And since u is the manipulated variable, it must be a function of sgn(S).

From vector calculus, dS/dx is the vector perpendicular to the surface S = 0. Therefore its inner product with any vector field (e.g. ẋ) shows whether the vector field faces the same direction as it, or is opposite it, in relation to S = 0. Therefore, for S = 0 to be attractive to the state trajectories that make it true, the control signal must ensure that if S > 0, then the left-hand side of (10) must be negative, and vice versa. (The left-hand side of (10) can also be described as the Lie derivative of S in the direction of ẋ.)

We can also write

Ṡ = h(x) + j(x)u        (11)

where h(·) and j(·) are functions obtained by taking the derivative of S, for instance of (7), given (1). Equation (11) shows that the system should be linear in the control so that it could easily be made the subject of the equation. (It seems feasible to also have a system linear in an exclusive function of the control, and still be able to put the control signal exclusively on one side of the control equation.)

Combining (9) with (11), we get the controller

u = (1/j(x)) (-η sgn(S) - h(x))        (12)

We see from (11) and (12) that j(x) ≠ 0. Put differently, j(x) must be invertible.
The next subsection discusses how the size of η may be determined: we consider the bounds of all the non-signum terms of the controller (12), and any potential input disturbances.
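As a sketch of how (11) and (12) are assembled, take a hypothetical n = 2 plant ẋ1 = x2, ẋ2 = f(x) + g(x)u with the switching function S = x2 + a·x1 of form (7); differentiating S gives h(x) = f(x) + a·x2 and j(x) = g(x). The drift, gain, and parameter values below are assumptions for illustration.

```python
import math

a, eta = 2.0, 1.5                  # assumed surface coefficient and reaching gain

def f(x): return -math.sin(x[0])   # assumed drift f(x)
def g(x): return 2.0               # assumed input gain j(x) = g(x)

def S(x):                          # linear switching function, form (7)
    return x[1] + a * x[0]

def controller(x):
    """Classic sliding mode controller (12):
    u = (1/j(x)) * (-eta * sgn(S) - h(x))."""
    h = f(x) + a * x[1]            # h(x) from differentiating S along the model
    sgn = (S(x) > 0) - (S(x) < 0)
    return (-eta * sgn - h) / g(x)
```

By construction this makes Ṡ = -η sgn(S), i.e. exactly the reaching law (9).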
1) Dealing with Input Disturbance: Matched disturbances modify Ṡ (13), which then affects S. But if, regardless of this, S is driven to zero, we may then intuitively see why invariance to input/matched disturbance exists as a characteristic of sliding mode control.
Where the disturbance signal is unmatched, it means that it affects a state directly, outside of where a matching control signal could be used to compensate for its effect. It may thus be seen as implicit in the variable processed by the system, so that even when S = 0, its effect on the subject variables remains.
Considering input disturbance d, we can rewrite equation (11) as

Ṡ = h(x) + j(x)u + d        (13)

Combine (12) and (13) to yield

Ṡ = -η sgn(S) + d        (14)

Ṡ = -sgn(S) (η - sgn(S) d)        (15)

with η - sgn(S) d > 0 for asymptotic stability. Thus choose η > |d| to meet this condition.
Disturbance inputs within the specified bound would therefore appear as if they do not exist when the system is in sliding
mode. This is the ideal behaviour.
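This invariance can be exercised in simulation. The sketch below closes the loop of a hypothetical plant of form (1) with controller (12) under a sinusoidal matched disturbance, with η chosen larger than the worst-case disturbance seen through the input channel; all numbers are illustrative.

```python
import math

a, eta, dt = 2.0, 4.0, 1e-3        # assumed surface gain, reaching gain, step

def f(x): return -math.sin(x[0])   # assumed drift
def g(x): return 2.0               # assumed input gain
def sgn(v): return (v > 0) - (v < 0)

x = [1.0, 0.0]
worst = 0.0
for i in range(3000):                               # 3 s of simulated time
    t = i * dt
    d = 0.8 * math.sin(5 * t)                       # matched disturbance
    S = x[1] + a * x[0]
    u = (-eta * sgn(S) - (f(x) + a * x[1])) / g(x)  # controller (12)
    x = [x[0] + dt * x[1],
         x[1] + dt * (f(x) + g(x) * (u + d))]       # plant (1) with d
    if t > 2.0:
        worst = max(worst, abs(x[1] + a * x[0]))    # |S| while sliding
```

After the reaching phase, |S| stays within a narrow numerical chattering band despite the disturbance.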
If we choose to regard h(x) as a disturbance input also, the controller (12) becomes

u = (1/j(x)) (-η sgn(S))        (16)

with the value of η adjusted as appropriate. The variable j(x) may be replaced by a constant, min(j(x)), or its mean value, with the value of η similarly adjusted to ensure asymptotic stability. This would transform (16) to the simpler structure

u = -η sgn(S)        (17)

Using (17) as derived in this section effectively turns any nonlinear system in the form (1) into a linear system with added disturbance inputs.
Since computer simulations can be used to heuristically determine η, designing a suitable S may be regarded as the main design activity for sliding mode control. It was shown in section IV that this is relatively easy for systems in the controllable canonical form.

2) Reaching Laws: A reaching law is an equation like (9) that makes S = 0 achievable and maintainable; attractive to the state trajectories. We elaborate here on the power-rate reaching law, and the general form of reaching laws stated in [4].
The general form of a reaching law was given as

Ṡ = -Q sgn(S) - K f(S)        (18)

where f(0) = 0, and K, Q > 0. Multiplying (18) by S gives

S Ṡ = -Q |S| - K S f(S)        (19)

Therefore, for the stability of (18), we design S f(S) ≥ 0. This means that S and f(S) have the same sign, so that we can rewrite (18) as

Ṡ = -(Q + K |f(S)|) sgn(S)        (20)

Equation (20) is the η-reachability condition (9) with η set as (Q + K |f(S)|); therefore (18) is a restatement of this condition. The import of this is that we can derive a family of controllers with different behaviours by making η in (9) variable: specifically, by making it a suitable function of S, and more specifically, a suitable function of the state variables.
For instance, if we set Q = 0, K = 1, and |f(S)| = k|S|^α, 0 < α < 1, (20) yields the power-rate reaching law

Ṡ = -k |S|^α sgn(S)        (21)

And if α = 1, we get what is akin to classic proportional control. (The sliding mode for the classic proportional controller occurs when the error signal equals zero.) Therefore, we can infer that the power-rate reaching law is a generalised view of proportional control. By this, we also see that proportional control, in the classic PID controller formulation, deals with disturbances acting through the input channel.
We can now rewrite the η-reachability condition as

Ṡ = -η(S) sgn(S)        (22)

where η(S) ≥ 0, with η(S) = 0 iff S = 0, to more clearly reflect a general view. Hence, we can from it define new reaching laws that satisfy the conditions for asymptotic stability. For instance:


Ṡ = -k (|S| / (|S| + m)) sgn(S)        (23)

Ṡ = -k (1 - e^(-a|S|^m)) sgn(S)        (24)

Ṡ = -k (a|S| + |S|^α) sgn(S)        (25)
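The laws can be compared by integrating Ṡ directly. The sketch below measures the time for |S| to shrink from 1 to 10⁻³ under the constant-rate law (9), the power-rate law (21) with α = 0.5, and the sigmoid-style law (23); gains and tolerances are arbitrary.

```python
def settle_time(sdot, s0=1.0, dt=1e-4, tmax=5.0):
    """First time |S| falls below 1e-3 under S_dot = sdot(S) (Euler)."""
    s, t = s0, 0.0
    while t < tmax and abs(s) > 1e-3:
        s += dt * sdot(s)
        t += dt
    return t

sgn = lambda s: (s > 0) - (s < 0)
k = 1.0
t_const = settle_time(lambda s: -k * sgn(s))                             # (9)
t_power = settle_time(lambda s: -k * abs(s) ** 0.5 * sgn(s))             # (21)
t_sig   = settle_time(lambda s: -k * abs(s) / (abs(s) + 0.25) * sgn(s))  # (23)
```

The constant-rate law is fastest but switches discontinuously at S = 0; the power-rate and sigmoid laws trade some speed near zero for a continuous Ṡ.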

Equation (13) shows that the disturbance d filters through to S by affecting Ṡ. If the controller is a function of |S|, the control signal gets adjusted to compensate for changes in the magnitude of d, up to the limit it can handle, given the values of its parameters, the size of S, and actuator limits. This is why (21) and (23)-(25) could still be effective for disturbance rejection even if they appear to contradict the idea of specifying η as a fixed value to deal with disturbances, as was done for (17).

3) A Simplified Supertwisting Algorithm: The supertwisting sliding mode controller has been said to be one of the most implemented of sliding mode control algorithms [5]. It can be written in simple form as

u = -k|S|^α sgn(S) - w ∫ sgn(S) dt,    w, k > 0        (26)

From (26), we see that in its essence, it combines the power-rate reaching law with an integral term in the sign of the switching function, thus mimicking the structure of the PI controller. It would be exactly like the classic PI controller with a varying proportional gain (k|S|^(α-1)) if w = ki|S| (ki > 0 the integral gain). The ability of the integral term to accumulate its input may thus lend it a greater robustness to disturbance signals.
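This analogy can be exercised on a hypothetical first-order plant ė = -0.01e + u + d with a constant matched disturbance: both the simplified supertwisting form (26) (with S = e) and a classic PI controller drive e to zero despite the bias. All gains are illustrative.

```python
def run(ctrl, steps=40000, dt=1e-3):
    """Closed loop for e_dot = -0.01 e + u + d, d = -0.5; returns final |e|."""
    e, z = 1.0, 0.0                      # z holds the integral state
    for _ in range(steps):
        u, z = ctrl(e, z, dt)
        e += dt * (-0.01 * e + u - 0.5)
    return abs(e)

sgn = lambda v: (v > 0) - (v < 0)

def supertwist(e, z, dt):
    """Simplified supertwisting (26) with S = e and alpha = 0.5."""
    z += dt * sgn(e)
    return -1.0 * abs(e) ** 0.5 * sgn(e) - 0.8 * z, z

def pi(e, z, dt):
    """Classic PI controller for comparison."""
    z += dt * e
    return -1.0 * e - 0.8 * z, z
```

In both cases the integral state settles at the value that cancels the constant disturbance; the supertwisting form accumulates only the sign of the error.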
Given the preceding discussion on reaching laws, its corresponding reaching law may be chosen to be

Ṡ = -k |S|^α sgn(S) - w ∫ sgn(S) dt        (27)

Multiplying (27) by S gives

S Ṡ = -k |S|^(α+1) - S w ∫ sgn(S) dt        (28)

This means that S w ∫ sgn(S) dt ≥ 0 for (28) to represent a Lyapunov stable system at S = 0 and an asymptotically stable system for S ≠ 0. Alternatively, for stability, k|S|^(α+1) ≥ |S w ∫ sgn(S) dt|. The corresponding Lyapunov function is known to be V = (1/2) S^2.
Where S, Ṡ = 0, the system would be stable. It is also stable where S and w ∫ sgn(S) dt have the same sign, as in the case where S approaches zero from its initial value.
As S crosses zero the first time, it changes sign to the opposite of w ∫ sgn(S) dt, potentially making the system unstable, while at the same time reducing the magnitude of the integral term. (For very small S, after crossing, the system may still be stable.) If the system leaves the stability region, the relative values of the parameters k and w, given the selected α, play a role in how quickly the system kicks back into stability as the size of the integral term reduces, while the term with |S|^(α+1) increases with S to the point where the alternative condition for stability stated above is achieved.
If, however, S and w ∫ sgn(S) dt have different signs, and the sign of S remains the same for long enough for the value accumulated by the integral term to exceed k|S|^α, the system tips into instability and remains there. (A Lyapunov function to directly assess the stability of the supertwisting sliding mode control algorithm was presented in [9]. It may thus also be useful for analysing PI control.)
Let Ṡ = q(x, t) + v(x, t) u_st, with 0 < Γm ≤ v ≤ ΓM, and |q| ≤ p, p > 0. Then the full supertwisting controller [5] is given as

u_st = u1 + u2

u1 = { -k|S0|^α sgn(S)   if |S| > S0
     { -k|S|^α sgn(S)    if |S| ≤ S0
                                              (29)
u̇2 = { -u_st             if |u_st| > u_sat
     { -w sgn(S)          if |u_st| ≤ u_sat

with parameters determined via

k^2 ≥ 4 p ΓM (w + p) / (Γm^3 (w - p)),    w > p / Γm,

and 0 < α ≤ 0.5. Observe that a limit u_sat is placed on the

magnitude of the control signal such that the integral term u2 is only in effect when the control signal is not saturated at u_sat. This would stop the control signal from growing due to the integral term when it is saturated, alleviating the effect known as integral windup, which may slow the response of the controller when the sign of S changes. (This is also known to be potentially a serious issue with classic PI control.) Furthermore, notice that the magnitude of S is also given a limit S0, which is useful for managing the size of the proportional term u1 when S > S0, and is thus another way to design for actuator limits.
Analogously, replacing S (or sgn(S)) with the classic error
signal e turns the above controller into an advanced PI
controller with anti-windup protection.
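A minimal sketch of that advanced PI structure, patterned on (29) with S replaced by the error e: the proportional term is a power-rate term clipped at S0, and the integral state is bled off instead of accumulated whenever the output saturates. Names and gains here are hypothetical.

```python
def antiwindup_pi(k, w, alpha, S0, u_sat):
    """Build a controller patterned on (29); returns step(e, dt) -> u."""
    state = {"u2": 0.0}                    # integral term u2
    def step(e, dt):
        sgn = (e > 0) - (e < 0)
        mag = min(abs(e), S0)              # cap the proportional input at S0
        u = -k * mag ** alpha * sgn + state["u2"]
        if abs(u) > u_sat:                 # saturated: unwind u2 instead
            state["u2"] += dt * (-u)
            u = u_sat if u > 0 else -u_sat
        else:                              # unsaturated: integrate -w sgn(e)
            state["u2"] += dt * (-w * sgn)
        return u
    return step
```

With a large error the output clamps at u_sat and the integral state stops winding up, which is the anti-windup behaviour described above.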
Fig. 2. Classic sliding mode cruise control: k = 0.95
V. AN ILLUSTRATIVE EXAMPLE
In this section, we simulate the phenomenon of chattering
[10] when the classic sliding mode controller is used. And
we show graphically that for controllers using reaching laws
like (21), e.g. (23)-(25), chattering is minimal or non-existent.
Also, we present PI control from the perspective of sliding
mode control, highlight its relationship to the super-twisting
law, and thus suggest a modification to the classic PI controller
for improved disturbance rejection.
Generally, we attempt to show how the control signal affects
the trajectory of the switching surface, and vice versa, by
viewing the locus of the error signal for a simple first-order
system.
A. Cruise Control
We use a simple first-order cruise control system for a car in fourth gear [11] for the illustrations. The reference speed is r = 25 m/s, u is the controller, and the state variable that represents the car's velocity is y. Choosing e = y - r, the system model becomes
ė = -0.0142 e + 1.38 u        (30)

If we consider the slope of the road, we can obtain the system equation

ė = -0.0142 e + 1.38 u - g sin θ,    g = 9.81 m/s²        (31)

where θ is the slope of the road.


The control objective is to drive e to zero, to make r = y. To ensure that this first-order system is stable (that e → 0), when e is positive we would want u to be negative, and vice versa. This results in a controller given by

u = -k sgn(e)        (32)

with k chosen heuristically to compensate for the disturbance and to enhance system performance. Figure 2 shows the performance of (32), the classic sliding mode control equation.
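The chattering seen in figure 2 can be reproduced with a few lines of simulation of (30) under the controller (32); the time step and the flip-counting window are illustrative.

```python
k, dt = 0.95, 0.01
r, y = 25.0, 20.0                 # reference and initial speed (m/s)
flips, prev_u = 0, 0.0
for i in range(1000):             # 10 s of simulated time, flat road
    e = y - r
    u = -k * ((e > 0) - (e < 0))  # classic sliding mode controller (32)
    y += dt * (-0.0142 * e + 1.38 * u)
    if i * dt > 5.0 and u * prev_u < 0:
        flips += 1                # count control sign reversals
    prev_u = u
```

Once e reaches zero, the control input flips sign on nearly every step: chattering.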
Observe that the error signal e behaves as a switching
function so that its corresponding sliding mode is a point
and not a dynamic (sliding) trajectory. (The switching function
typically defines a dynamic system, but as done here, it could be defined so as not to exclude systems not described by differential equations. This view somewhat makes all controllers that attempt to minimise the error signal sliding mode controllers. It is therefore unhelpful in delineating this control method from others. The explicit use of the signum term in the controller, and a recognition that the variable it maps to a sign encapsulates a stable, constrained version of the system, should sufficiently distinguish sliding mode control from other methods.)
Figures 3-6 show plots of the system response using a PI controller (with e = r - y), and several sliding mode controllers. The classic PI and classic sliding mode controllers provide a background to assess the performance of the other controllers designed using a power-rate law, a sigmoid based law, and a super-twisting control law.
The controllers were manually tuned in the simulations to ensure the control signals fell within the range 0 ≤ u ≤ 1 without saturation. Performance may be improved if saturation (staying at u = 1) is allowed.
Figure 2 shows a chattering (rapidly varying) control signal
and output. We can imagine that for most mechanical actuators and controlled systems, this input and response may be
unrealisable given mechanical inertia and energy requirements.
However, this controller would give the best disturbance
response compared to the others in this section.
The causes of chattering are noted to be finite switching
frequency and the interaction of the switching control with
parasitic dynamics (e.g. of unmodelled, fast, actuator and
sensor dynamics) [1]. Therefore, sliding mode controllers that
use continuous signals naturally tend to eliminate actuator
chattering.
Controllers derived via (21), (23)-(25), and (27) do not force Ṡ to be discontinuous across S = 0. This makes the sliding mode controller continuous, and therefore allows it to be used with systems that are sensitive to chattering control signals. Alternative ways to eliminate chattering include putting the controller in a model reference framework that isolates the chattering from the actuator [1], and averaging the discontinuous signal via integration or filters [8], [10].
Furthermore, the so-called higher-order sliding mode controllers involve switching on derivatives of S. Their integrals, which are continuous, thus appear in the control signals. An example is the super-twisting algorithm (section IV-C), because the integral term results from switching on the first derivative of S/|S|. It is specifically classed as a second-order sliding mode controller since the order assigned is the derivative degree plus one [5], [13].

Fig. 3. Classic PI cruise control: P = 0.8, I = 0.25.

Fig. 4. Power-rate reaching law cruise control: α = 0.5, k = 1.2.

Fig. 5. Sigmoid law cruise control: m = 0.25, k = 1.3.

Fig. 6. Super-twisting law cruise control: W = 0.05, k = 0.87.
Additionally, there are cases where chattering control signals do not lead to chattering actuators. This is because some
actuators have a good filtering effect on the discontinuous
signals. In these cases, they yield a continuous input to the
system under control still with the benefit of disturbance
rejection. This is particularly the case with electric motors
[3]. Thus, we can assert that chattering need not be an issue
in sliding mode control implementations.
Regarding the power-rate control law, u = -k|e|^α sgn(e) (used in figure 4): if α = 0, it yields the classic sliding mode controller and a response like figure 2. Generally, the output would chatter for α very near zero. If α = 1, it yields the classic proportional controller. We note also that as α tends from zero to one, the offset between the reference signal and the steady-state output increases while the rise time decreases. Thus α = 0.5, the mid-point, represents a sort of optimal value for a trade-off between rise time and output offset, which is one likely reason why it is regarded as a reasonably good choice in [5]. Also, if the power-rate law were exclusively applied to a simple second-order system with the error signal as the switching function, α = 0.5 yields the smallest steady-state error without oscillation. Any α < 0.5 would produce oscillations. And as one goes from 0.5 to 1, the steady-state error increases.
For α > 1, the shape of |S|^α sgn(S) changes such that, in the limit, it approximates what is known as a deadzone. However, it is possible to use an |S|^α term in an added power-rate controller, with α ≥ 1, in a way that shortens the transient period, particularly when |S| > 1. Generally, there is some flexibility in choosing α and combining reaching laws. Our choices would then depend on performance requirements and actuator capability, as would be illustrated in the further examples section.
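The α trade-off can be seen directly in the cruise model (31) with a constant slope term d0: the steady-state offset satisfies 1.38·k·|e|^α ≈ d0, so for d0/(1.38k) < 1 the offset grows as α goes from 0 toward 1. A sketch, with illustrative values:

```python
def steady_offset(alpha, k=1.2, d0=0.3, dt=0.01, T=60.0):
    """Euler-simulate e_dot = -0.0142 e + 1.38 u - d0 under the
    power-rate controller u = -k |e|^alpha sgn(e); return final |e|."""
    e = -5.0
    for _ in range(int(T / dt)):
        u = -k * abs(e) ** alpha * ((e > 0) - (e < 0))
        e += dt * (-0.0142 * e + 1.38 * u - d0)
    return abs(e)

offsets = [steady_offset(a) for a in (0.25, 0.5, 1.0)]
```

The offset grows with α, while the initial approach is faster for larger α whenever |e| > 1, which is the rise-time side of the trade-off noted above.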
We see from figures 4 and 5 that there is an offset
between the achieved steady state output and the reference
signal, as would be obtained with classic proportional control.
Hence we can infer that the sigmoid and power-rate laws
demonstrate proportional action and are also proportional
controllers. Adding Integral control would thus prove useful
to eliminate this situation since the control signal is not
necessarily zero when the switching function is zero.
Figure 6 demonstrates a PI controller where the input is
chosen as sgn(e) rather than e; this is a super-twisting controller. (We ignore adding a derivative term with sgn(e) since
it is constant except at the ideally instantaneous transitions.)
The superior performance of the supertwisting controller
(with S = e) compared to the classic PI controller may be due to the
stronger proportional action of the power-rate law when
0 < |S| < 1 (lower tracking offset). It therefore needs less integral
action to perfect the tracking. A lower offset means that the
accumulated integral value needs to be less than would otherwise
be needed. Smaller integral term growth makes overshoot
minimal, as can be observed when we compare the top plots
in figures 3 and 6.
Also, the size of its integral term changes according to
the sign of the error signal rather than the size and sign. It
therefore has a fixed change per unit time, so that its effect is more
predictably modifiable using its gain W. Hence, we may want
to replace, for general use, the classic Proportional controller
that uses α = 1 in the power-rate law with one using, say,
α = 0.5. A combination of the two might also be a good
option. And where there's an integral term, to use sgn(e) in
place of e.
Given the above discussion, we can suggest that improvements to the controller structure of sliding mode control might
be translatable to improved PID control, and vice versa.
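The analogy can be exercised numerically. The following sketch simulates a super-twisting-style PI loop (proportional path on |e|^0.5 sgn(e), integral path on sgn(e)) regulating a hypothetical integrator plant with a constant matched disturbance; the plant, gains, reference, and Euler step are all illustrative assumptions, not the systems shown in the figures:

```python
import math

# Super-twisting-style PI loop on a hypothetical integrator plant
# y_dot = u + d with a constant matched disturbance d.
k1, k2 = 2.0, 1.0              # proportional and integral gains (assumed)
r, d = 1.0, -0.5               # reference and constant disturbance (assumed)
y, z = 0.0, 0.0                # plant output and integral state
dt = 1e-3
sgn = lambda v: (v > 0) - (v < 0)
for _ in range(int(10.0 / dt)):
    e = r - y
    u = k1 * math.sqrt(abs(e)) * sgn(e) + z   # |e|^0.5 sgn(e) proportional path
    z += dt * k2 * sgn(e)                     # integral of sgn(e), not of e
    y += dt * (u + d)                         # Euler step of the plant
# At equilibrium u must cancel d, so z settles near -d = 0.5 while the
# tracking error is driven to (numerically) zero.
print(abs(r - y), z)
```

Note how the integral state absorbs the disturbance, which is the mechanism credited above for the structure's matched-disturbance rejection.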
VI. FURTHER EXAMPLES
The first example is the tracking control of a single link
manipulator. It is representative of a simple robot arm, a
rudder, a vane or similar. The control objective is for the angle
to a reference point to follow a planned path as best as possible
despite the presence of input or matched disturbances, like,
say, undulating gusts of wind.
Because the model is in the controllable canonical form, we
can design a switching function in the states of the system by
specifying a stable linear system. Arbitrarily choosing poles
that are negative would suffice to result in a stable system in
the sliding mode.
A case where this isn't directly feasible for stability in
the sliding mode is explored in the second example, where
we simulate set-point tracking control in a nonlinear ball and
beam system model. This example specifies the required stable
poles directly, without the superfluous step of using the Routh
stability criterion to achieve this, as was done in [14] and [15].
We use a plug-play-tune approach for parameter selection
in a simulation environment, akin to manual PID tuning. This
demonstrates a comparable simplicity in use to PID control;
the extra step, unlike with PID, being to design a dynamic
system for the switching function. Once this is done, and the
controller structure selected, parameters may then be modified
to suit actuator limits and disturbance rejection ability with
similar reasoning to manual PID tuning.
A. Single Link Manipulator
This example shows that sliding mode control may be used
to replace PID control in plants where facility exists, or may
be added, to replace the error signal e with |S|^α sgn(S).

Fig. 7. An Implementation Scheme for PI Control

It also demonstrates a usefulness of this control method for
motion control, and particularly where there is significant input
disturbance present.
The model (33) from [16] is used, with the addition of a
disturbance input signal:

ẋ1 = x2
ẋ2 = -9.8 sin(x1) - 3 x2 + 0.5 (u + d)    (33)

Following from section IV-A, let us define a switching function
S = x2 + 30e which, when S = 0, yields the first-order
system ẋ1 = -30(x1 - r) that has a steady state value of
r, the reference signal. We avoided using S = ė + 30e, as
might have been expected, because it includes a derivative of
the reference signal. This reduces the size of the spike in S,
which would transfer to the control signal given the selected
controller design, if r suddenly changes value. (A better way
may be to shape/filter the input so that there are no sudden
or steep transitions in states, thereby reducing the need to
consider their impact on performance and controller design.)
Additionally, the control signal was saturated at ±35, selected
based on the example design in [16], because real controllers
have limits.
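To see what the switching function encapsulates, a quick numerical check of the sliding dynamics can be made (the step size and horizon below are arbitrary choices):

```python
# On the sliding surface S = x2 + 30 e = 0 (with e = x1 - r and x2 the
# derivative of x1), the closed loop collapses to the first-order system
# x1_dot = -30 (x1 - r), so x1 settles at the reference r with time
# constant 1/30 s.  A forward-Euler check:
r, x1, dt = 1.0, 0.0, 1e-4
for _ in range(int(0.5 / dt)):      # simulate 0.5 s, about 15 time constants
    x1 += dt * (-30.0 * (x1 - r))
print(abs(x1 - r))                  # essentially zero
```

This is the stable dynamic system the controller is then tasked with enforcing by driving S to zero.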
We used a simpler control design in this example; one that
follows from the derivation of (17). The controller designed
in [16] had the structure of (12).
Also, we perturbed the system with a sinusoidal disturbance
signal of amplitude 10 and frequency 0.5. (The magnitude of
this disturbance signal was such that classic PI control and the
classic super-twisting sliding mode control algorithm would
likely not produce good tracking for any parameter selection,
given the control input constraints and the desire to minimise
chattering.)
An implementation structure for PI control was shown in
[11] which effects u = k(e + (1/Ti) ∫e dt). We show it in figure
7, and substitute |S|^α sgn(S) for e as the input for the actual
controller design (34). We effect it using

u = k(u1 + (1/Ti) ∫u1 dt)
u1 = |S|^α sgn(S)    (34)

which is the same structure as the super-twisting algorithm.
Figure 8 shows its performance: good tracking of the reference
signal despite substantial disturbance, the main selling point
for the use of sliding mode control.
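A minimal simulation sketch of this design is given below. The constant reference value, the Euler step, the sign convention for u, and the simple anti-windup measure (freezing the integral while saturated) are assumptions of the sketch, not details taken from [16] or the figures:

```python
import math

k, Ti, alpha = 35.0, 0.2, 0.75     # controller gains as in the figure caption
r, umax, dt = 1.0, 35.0, 1e-3      # reference (assumed), saturation, Euler step
x1, x2, I = 0.0, 0.0, 0.0          # angle, angular velocity, integral state
sgn = lambda v: (v > 0) - (v < 0)
for step in range(int(10.0 / dt)):
    d = 10.0 * math.sin(0.5 * step * dt)   # matched disturbance (0.5 rad/s assumed)
    S = x2 + 30.0 * (x1 - r)               # switching function S = x2 + 30 e
    u1 = abs(S) ** alpha * sgn(S)
    u = -k * (u1 + I / Ti)                 # structure (34); sign chosen to drive S to 0
    usat = max(-umax, min(umax, u))        # saturate the control at +/-35
    if u == usat:                          # assumed anti-windup: freeze I when saturated
        I += dt * u1
    x1 += dt * x2                          # Euler step of model (33)
    x2 += dt * (-9.8 * math.sin(x1) - 3.0 * x2 + 0.5 * (usat + d))
print(abs(x1 - r))                         # small residual tracking error
```

Despite the large sinusoidal input disturbance, the integral of u1 slowly absorbs the disturbance while the power-rate term keeps S small, mirroring the behaviour reported for figure 8.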
The significance of figure 7 is also that it describes a
controller as an engine to dynamically optimise the value of
an input objective function, the optimisation of which achieves
the control objective. (Minimisation is the usual case.)

Fig. 8. Top: Tracking performance; Middle: Input signals; Bottom: Tracking
error. (k = 35, α = 0.75, Ti = 0.2)

We had previously assessed the stability of the PI control
analogue that is the supertwisting algorithm formulation, and
so can get a sense of the contribution of the integral term in
(34) to its overall disturbance rejection performance.

B. Ball and Beam System

This example demonstrates sliding mode control for an underactuated mechanical system, and shows the effect of unmatched disturbances.

The nonlinear ball and beam system model (35) is taken
from [14], which took the model structure from [17] and
parameters from [15]. This model mimics the ball on a see-saw
described earlier, giving the view that sliding mode control is
a natural fit for it. Sliding mode control should be a natural
control option for systems where balancing the output variable
at a point (regulation) is a key feature, as with the inverted
pendulum and flight controls in general.

ẋ1 = x2
ẋ2 = (1/k4) x1 x4^2 - (g/k4) sin(x3)
ẋ3 = x4    (35)
ẋ4 = [1/(m x1^2 + k1)] [k2 u - (2 m x1 x2 + k3) x4
     - (m g x1 + (L/2) M g) cos(x3)] + d

The system states are defined thus: x1, the ball position on
the beam; x2, the ball velocity; x3, the angle of the beam
to the horizontal, with -π/6 < x3 < π/6; and x4, the angular
velocity of the beam. The parameters are k1 = 0.16956 kg m^2,
k2 = 1.00083333 N m/V, k3 = 16.55455208 V/(rad/s),
k4 = 1.4, L = 0.43 m (beam length), g = 9.81 m/s^2
(gravitational acceleration), m = 0.07 kg (ball mass), and
M = 0.15 kg (beam mass). The control input is the voltage
u to the electric motor that drives the beam. The desired
equilibrium state is x2 = x3 = x4 = 0, and x1 = r; that is,
we want to be able to make the ball stationary virtually
anywhere r on the beam.

Even though the system is not in the form (1), we can still
choose the switching function to be S = x4 + p x3 + q x2 + m x1
and assess its suitability for the purpose. If S is driven to zero,
we get

x4 = -p x3 - q x2 - m x1

so that the resulting dynamics is the nonlinear third-order
system

ẋ1 = x2
ẋ2 = (1/k4) x1 x4^2 - (g/k4) sin(x3)
ẋ3 = -p x3 - q x2 - m x1

Its performance and stability around an equilibrium point can
be designed using an appropriate selection of p, q, and m,
despite x3 having a nonlinear relationship with ẋ2, and
therefore also with x1.

It is known that if a linearised system, that is, a linear
approximation of a nonlinear system at an equilibrium point,
is strictly stable, then the associated nonlinear system has that
equilibrium point asymptotically stable. Also, x3 ≈ sin(x3)
where -π/6 < x3 < π/6. We may therefore linearise the above
system around the stationary point x4 = x3 = x2 = x1 = 0
to give

ẋ1 = x2
ẋ2 = -(g/k4) x3    (36)
ẋ3 = -p x3 - q x2 - m x1

From this we get x3 = -(k4/g) ẋ2. The sliding mode equation
can then be rewritten as

x⃛1 + p ẍ1 - (g/k4) q ẋ1 - (g/k4) m x1 = 0    (37)

with which we can do pole placement.

Placing the poles at [-1, -5, -5] gives the characteristic
equation P^3 + 11P^2 + 35P + 25 = 0, in a variable P. Hence,
we have the sliding mode system as

x⃛1 + 11 ẍ1 + 35 ẋ1 + 25 x1 = 0    (38)

Equating (37) and (38) provides the parameters for the switching function, yielding it to be

S = x4 + 11 x3 - 5 x2 - 3.6 x1
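The pole placement arithmetic can be checked numerically. The sketch below builds the characteristic polynomial from the chosen poles with NumPy and matches its coefficients against the sliding-mode equation (37):

```python
import numpy as np

g, k4 = 9.81, 1.4
poles = [-1.0, -5.0, -5.0]          # desired sliding-mode poles
coeffs = np.poly(poles)             # characteristic polynomial: [1, 11, 35, 25]
_, c2, c1, c0 = coeffs
# Matching x1''' + p*x1'' - (g/k4)*q*x1' - (g/k4)*m*x1 = 0
# against x1''' + c2*x1'' + c1*x1' + c0*x1 = 0:
p = c2
q = -c1 * k4 / g
m = -c0 * k4 / g
print(p, round(q, 2), round(m, 2))  # 11.0 -4.99 -3.57
```

The exact values round to the 11, -5, and -3.6 used in the switching function.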
For tracking control, we replace x1 with e = x1 - r.
Figures 9, 10 and 11 show the performance of the system
under different conditions. The control signal used is (34)
with parameters as specified in their respective captions.
Figures 9 and 10, respectively, show the performance without
disturbance, and with matched input disturbance equivalent to
a constant 2V input. Such a disturbance signal might perhaps
be due to the beam being actually lopsided, with a centre of
gravity not at its middle.
The tracking performance shown in figure 10 (with disturbance) is better than in figure 9 (without disturbance) because
the controller parameters were selected with the assumption
that the system would normally operate with disturbance
inputs.
Figure 11 illustrates the response to an unmatched disturbance signal (arbitrarily applied at ẋ3 and chosen to be a
sinusoidal signal of amplitude 0.5 rad/s and period 2). Unmatched disturbance may be caused by some real extraneous
signals, model approximation, and parameter variation, all not
entering the system through the same integrator as the input.
If this is significantly present for the system to be controlled,
then sliding mode control by itself would be unsuitable. The
next section briefly surveys improvements to sliding mode
control as presented in this paper, including attempts to deal
with unmatched disturbance/uncertainty.
VII. IMPROVEMENTS AND ENHANCED IMPLEMENTATIONS

Fig. 9. A PI-Sliding mode control of a ball and beam system. (k = 7,
Ti = 0.2, α = 0.75)

We consider that a key area for enhanced implementation is
in dealing with systems that have unmatched uncertainty. Here
we briefly mention some attempts and proposed solutions.
While we do not consider reaching phase elimination significant for the increased and broader use of sliding mode
control, it nevertheless represents an advancement in sliding
surface design.

We also briefly mention some implementations, and a way
to potentially design improved sliding dynamics compared
with pole placement.
A. Sliding Dynamics

Fig. 10. A PI-Sliding mode control of a ball and beam system with a
simulated disturbance input of 2 Volts. (k = 7, Ti = 0.2, α = 0.75)

Since S determines the sliding mode dynamics, the
paradigm used to design it may lead to better transient performance of the states while in the sliding mode. Bandyopadhyay
et al, in [18], discuss nonlinear sliding surfaces for high
performance tracking and robustness. A nonlinear function
was used in the switching function design such that, while in
the sliding mode, the system has a low initial damping ratio
(for fast rising); this ratio increases over time to a relatively
high value as it approaches steady state. This approach was
said to shorten the transient period while also preventing
overshoot.
Removing (or reducing) the reaching mode implies that
the disturbance rejection property is effective throughout (or
for most of) the system operation. It therefore represents an
improvement in disturbance rejection performance to eliminate
the reaching phase. An approach to doing this uses a dynamically changing switching function. This involves a nominal
or final sliding mode system added to a function to form the
overall sliding mode system. The value of the function is made
to disappear over time in such a way as to maintain a sliding
mode until the nominal sliding mode is achieved.
A way to design nonlinear sliding surfaces that eliminate
the reaching phase by adding a nonlinear term to the nominal
sliding surface was proposed in [18]. The integral sliding mode
approach [3] adds an integral term to the nominal sliding
mode system. In practice, its performance would be adversely
affected by noise in measurements [3] but good design choices
would alleviate this issue [6].
B. Implementations And Dealing With Unmatched Disturbances

Fig. 11. A PI-Sliding mode control of a ball and beam system with
unmatched disturbance signal. (k = 7, Ti = 0.2, α = 0.75)

In [12], the error between the plant and a reference model
was made to exhibit a sliding mode, thereby providing reference model tracking by the system under control. The use of
sliding mode control in model reference control systems was
said to enable simpler reference model tracking while also


dealing with system uncertainties/disturbances [19].
Using a similar concept, sliding mode observers for state
estimation are discussed in [20], where the observer dynamics
are made to exhibit a sliding mode. A benefit of observer
based sliding mode control in disturbance rejection is based
on the fact that it can be used for both state and disturbance
estimation [1].
We had assumed that all the state values required for control
are available (measurable or determinable). See [8] for an
approach to use output feedback, where only the output is
available for control determination.
Some systems have unmatched disturbance inputs. Since
sliding mode control by itself is only suited to matched
disturbances, it may be combined with other suitably designed
controllers to deal with the unmatched case. Castanos and
Fridman in [6] combined what is known as H∞ robust control
with sliding mode control to deal with both matched and
unmatched disturbance inputs.
An alternative approach was attempted in [21] for underactuated mechanical systems, where the switching function was
constructed hierarchically in a similar general philosophy to
the so-called backstepping paradigm of deriving stabilising
controllers. However, as presented, their approach cannot
deal with output disturbance. A backstepped sliding mode
controller with the possibility of dealing with unmatched
inputs at any part of the model was presented in [22]. This
approach is viable because backstepping creates virtual control
signals that attempt to directly control the rate of change of
each state.
For discrete-time systems: [23] presented an investigation
of the implementation of sliding mode control for such systems, touching on the basic theory and design techniques.
And a discrete-time sliding mode control implementation that
handles unmatched disturbances was proposed in [24] for a
discrete-time linear time-invariant system.
VIII. CONCLUSION AND RECOMMENDATIONS
This paper has attempted to present sliding mode control
simply and intuitively, with the aim of providing an introduction to the subject, generating increased interest in the method
and its utility, and demonstrating that it is teachable
and pedagogically useful in an introductory course in control
systems engineering.
We've considered sliding mode control as involving a variable that is an encapsulation of a dynamical system in the
state variables of a model. Using this view, we highlighted
an analogy with PID control, and determined the power-rate
reaching law to be a generalised proportional controller.
The power-rate reaching law has a parameter, α, that can
be chosen such that better performance may be obtained. This
can thus be used to replace classic Proportional control in
industrial controllers.
We've demonstrated that a simplified supertwisting sliding
mode control structure with the sign of the switching function replaced by the switching function itself could create
an enhanced version of it with superior input disturbance

rejection performance. Conversely, for the classic PI controller,


replacing the error signal with its sign produced a superior
version of it. This version could plausibly be used to replace
classic PI control, particularly where robustness to matched
disturbance is a key requirement.
Generally, sliding mode control, as presented here, could
be used as an alternative to, or a replacement for, PID control.
Industrial PID controllers could be designed to accommodate
a generalised view of Proportional-Integral control, to provide
facility for specifying an objective/switching function (the
error signal in the classic case) to be minimised, and for
gains that vary with the switching function. This should
thus facilitate the expanded use of the sliding mode control
perspective, particularly in industries where PID is commonly
used.
The analysis of the stability of the supertwisting sliding
mode control algorithm provided intuition as to the conditions
for stability of the classic PI controller. It demonstrated,
using this analogy, how a PI controlled system may be stable,
highlighting that the relative sizes of the proportional and
integral gains in the PI controller matter to the overall stability
of the controlled system. Discussions of the stability behaviour
of PID controllers appear to be rare despite the controller's
simplicity, the ubiquity of its treatment in controls courses,
and its popularity in industry.
A PI control implementation structure was shown to model
the idea of a controller as an engine to dynamically minimise
an input objective function. That is, that a controller is a
function that dynamically and progressively maps an objective
function variable towards the value zero. This is the view
expressed by sliding mode control, reducing the job of finding
a control signal for any system to determining appropriate
objective functions and an adequate mapping or optimisation
function that sufficiently minimises the objective.
Because reaching laws/conditions are a rephrasing of Lyapunov stability theory, and switching functions encapsulate
stable dynamics, the principles of sliding mode control may be
extended to other areas of knowledge where Lyapunov stability
behaviour has been applied or observed.
We assert that the chattering phenomenon is no longer an
issue in the implementation of sliding mode controllers, given
that there are continuous controllers that can effect the sliding
mode. And chattering has typically not been an issue with
discrete-time controllers.
In regards to pedagogy in introductory controls courses:
Most introductions to control systems engineering seem to
leave Lyapunov stability theory for a second course or advanced course. This paper is of the view that an introductory
course in control engineering ought to include treatment of
Lyapunov stability theory and some treatment of nonlinear
systems in order to present a more complete overview of the
controls landscape. As we have seen, and probably already
know, Lyapunov stability theory provides an intuitive and
useful view of the stability of linear and nonlinear systems,
particularly in relation to controller synthesis.
PID control is a ubiquitous control method that is commonly
included in introductory control engineering courses. Given
the intuitiveness and simplicity of sliding mode control, that

PID control may be extrapolated from it, that it has utility for
both linear and nonlinear systems, and that it can be applied
in much the same way as PID control, there is a case
for its use (or increased use) in first/introductory courses in
controls. It may also be used as a lead-in to related advanced
concepts/coverage/topics in control like feedback linearisation,
and Lie derivatives. The content of this article serves towards
these objectives.
Finally, it is likely the case that there is already literature
related to an application area you might be interested in. The
literature on the subject is large, as a cursory search on the
internet might show. This introduction is like the proverbial
tip of the iceberg.
R EFERENCES
[1] D. Young, et al, A Control Engineer's Guide to Sliding Mode Control,
IEEE Transactions on Control System Technology, vol. 7, no. 3, pp. 328-342, 1999.
[2] G. Bartolini et al., A survey of applications of second-order sliding mode
control to mechanical systems, International Journal of Control, vol. 76,
no 9, pp. 875-892, 2003.
[3] V. Utkin et al., Sliding mode control in electro-mechanical systems, 2d
ed. Philadelphia, PA: CRC/Taylor and Francis, 2009.
[4] W. Gao and J. C. Hung, Variable Structure Control of nonlinear systems:
A new approach, IEEE Transactions on Industrial Electronics, vol. 40,
no. 1, pp. 45 - 55, Feb. 1993.
[5] L. Fridman and A. Levant, Higher Order Sliding Modes, in Sliding
Mode Control in Engineering, W. Perruquetti and J. P. Barbot, Eds. New
York, NY: Marcel Dekker, Inc. 2002, pp. 53 - 101.
[6] F. Castanos and L. Fridman, Analysis and Design of Integral Sliding
Manifolds for Systems With Unmatched Perturbations, IEEE Transactions on Automatic Control, vol. 51, no. 5, pp. 853-858, 2006.
[7] M. Fliess and H. Sira-Ramirez, A Module Theoretic Approach to Sliding
Mode Control in Linear Systems, Proceedings of the 32nd IEEE Conference on Decision and Control, 1993, San Antonio, Texas, pp. 2465-2470 vol. 3, 15-17 December 1993.
[8] C. Edwards and S. K. Spurgeon, Sliding Mode Control: Theory and
Applications, London: Taylor and Francis, 1998.
[9] J. A. Moreno, M. Osorio, A Lyapunov Approach to Second-order Sliding
Mode Controllers and Observers, Proceedings of the 47th IEEE Conference on Decision and Control, 2008, Cancun, Mexico. Pages 2856-2861
vol.3. 9-11 December 2008.
[10] L. Fridman, The Problem of Chattering: an Averaging Approach, Lecture
Notes in Control and Information Sciences 247, Editor: M. Thoma.
[11] K. J. Åström and R. M. Murray. (2012 Sep. 28). Feedback Systems: An
Introduction for Scientists and Engineers. (Electronic version v2.11b)
[Online]. Available: http://www.cds.caltech.edu/~murray/amwiki
[12] S. K. Spurgeon and R. J. Patton, Robust variable structure control of
model reference systems, IEE Proceedings, vol. 137, Pt. D, no. 6, pp
341-348, Nov. 1990.
[13] A. Levant, Higher-Order Sliding Modes, Differentiation And Output-Feedback Control, Int. J. Control, vol. 76, no. 9/10, pp. 924-941, 2003.
[14] O. Imahe, Sliding Mode Control of Nonlinear Systems, M.Sc. thesis,
Control Syst. Center, Univ. of Manchester, Manchester, England, 2010.
[15] N. B. Almutairi and M. Zribi, On The Sliding Mode Control Of A Ball
On A Beam, Nonlinear Dynamics, vol. 59, no. 1-2, pp. 221-238, Jan.
2010.
[16] V. M. Becerra. Lecture notes on advanced nonlinear control. Lecture
eight: sliding mode control. [Online]. Viewed 2010, July, 19. Available:
http://www.personal.rdg.ac.uk/ shs99vmb/notes/anc/
[17] W. Yu, Nonlinear PD Regulation for the Ball and Beam System, Int.
J. of Elect. Eng. Educ. vol. 46, no. 1, pp 59-73, Jan. 2009.
[18] B. Bandyopadhyay et al, High Performance Robust Controller Design
Using Nonlinear Surface, in Sliding Mode Control Using Novel Sliding
Surfaces, Berlin: Springer-Verlag, 2009.
[19] R. J. Stonier and J. Zajaczkowski, Model reference control using sliding
mode with Hamiltonian dynamics, ANZIAM, Australian Mathematical
Society, (E), pp. E1-E40, Dec. 2003.
[20] C. Edwards et al., Sliding Mode Observers, in Mathematical Methods
for Robust and Nonlinear Control, M.C. Turner et al. (Eds.), Berlin:
Springer-Verlag 2007, pp. 221-242.

[21] D. W. Qian et al., Robust Sliding Mode Control for a Class of


Underactuated Systems with Mismatched Uncertainties, Proc. IMechE
Vol. 223, Part I: J. Systems and Control Engineering, pp.785-795, 2009.
[22] W. GU et al., Sliding mode control for an aerodynamic missile based
on backstepping design, Journal of Control Theory and Applications,
vol. 1, pp. 71-75, 2005.
[23] W. Gao et al, Discrete-Time Variable Structure Control Systems, IEEE
Transactions on Industrial Electronics, vol. 42, no. 2, pp. 117-122, 1995.
[24] S. Janardhanan and B. Bandyopadhyay, Discrete Sliding Mode Control
of Systems With Unmatched Uncertainty Using Multirate Output Feedback, IEEE Transactions on Automatic Control, vol. 51, no. 6, pp.10301035, 2006.
