
Rolf Isermann

Digital
Control Systems
Volume 2:
Stochastic Control, Multivariable Control,
Adaptive Control, Applications

Second, revised edition

With 120 Figures

Springer-Verlag
Berlin Heidelberg New York
London Paris Tokyo
Hong Kong Barcelona Budapest
Professor Dr.-Ing. Rolf Isermann
Institut für Regelungstechnik
Technische Hochschule Darmstadt
Schloßgraben 1
D-6100 Darmstadt, West Germany

ISBN-13: 978-3-642-86422-3 e-ISBN-13: 978-3-642-86420-9


DOI: 10.1007/978-3-642-86420-9

Library of Congress Cataloging-in-Publication Data


Isermann, Rolf.
Digital control systems.
Rev. and enl. translation of: Digitale Regelsysteme.
Includes bibliographical references (v. 1, p. [321]-
327) and index.
Contents: v. 1. Fundamentals, deterministic
control - v. 2. Stochastic control, multivariable
control, adaptive control, applications.
1. Digital control systems. I. Title.
TJ213.I64713 1989 629.8'312 88-38730
ISBN 0-387-50266-1 (U.S.: v. 1: alk. paper)

This work is subject to copyright. All rights are reserved, whether the whole or
part of the material is concerned, specifically the rights of translation, reprinting,
re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in
other ways, and storage in data banks. Duplication of this publication or parts
thereof is only permitted under the provisions of the German Copyright Law of
September 9, 1965, in its version of June 24, 1985, and a copyright fee must always
be paid. Violations fall under the prosecution act of the German Copyright Law.
© Springer-Verlag Berlin Heidelberg 1991
Softcover reprint of the hardcover 2nd edition 1991
The use of registered names, trademarks, etc. in this publication does not imply,
even in the absence of a specific statement, that such names are exempt from the
relevant protective laws and regulations and therefore free for general use.
Typesetting: Macmillan India Ltd., Bangalore

61/3020 543210 - Printed on acid-free paper


Preface

The great advances made in large-scale integration of semiconductors and the


resulting cost-effective digital processors and data storage devices determine the
present development of automation.
The application of digital techniques to process automation started in about
1960, when the first process computer was installed. From about 1970 process
computers with cathode ray tube displays became standard equipment for
larger automation systems. Until about 1980 the annual increase in the number of
process computers was about 20 to 30%. Even then, the cost of hardware showed a
tendency to decrease, whereas the relative cost of user software tended to
increase. Because of the high total cost, the first phase of digital process automation
was characterized by the centralization of many functions in a single (though
sometimes in several) process computer. Application was mainly restricted to
medium-size and large processes. Because of the far-reaching consequences of a
breakdown of the central computer, parallel standby computers or parallel back-up
systems had to be provided, which meant a substantial increase in cost. The tendency
to overload the capacity, together with software problems, caused further difficulties.
In 1971 the first microprocessors were marketed which, together with large-scale
integrated semiconductor memory units and input/output modules, can be
assembled into cost-effective microcomputers. These microcomputers differ from
process computers in having fewer but more highly integrated modules and in the
adaptability of their hardware and software to specialized, less comprehensive tasks.
Originally, microprocessors had a shorter word length, slower operating speed and
smaller operational software systems with fewer instructions. From the beginning,
however, they could be used in manifold ways, resulting in larger production
volumes and lower hardware costs, thus permitting their application to small-scale
processes.
By means of these process microcomputers, which exceed the efficiency of former
process computers, decentralized automation systems can be implemented. To do
so, the tasks which up to now have been processed centrally in a process computer
are delegated to various process microcomputers. Together with digital buses and,
possibly, superposed computers, many different hierarchically organized automation
structures can be built up and adapted to the corresponding process. In this way the
high computing load of a central computer is avoided, as are comprehensive and
complex user software and lower reliability. In addition, decentralized systems can
be commissioned more easily, can be provided with mutual redundancy
(lower susceptibility to malfunctions) and can lead to savings in wiring. The second
phase of process automation is thus characterized by decentralization.
Besides their use as substations in decentralized automation systems, micro-
computers have found increasing application in individual elements of automation
systems. Digital controllers and user-programmable sequence control systems,
based on microprocessors, have been on the market since 1975.
Digital controllers can replace several analog controllers. They usually require an
analog-digital converter at the input, because of the wide use of analog sensors,
transducers and signal transmission, and a digital-analog converter at the output
to drive actuators designed for analog techniques. It is to be expected that, in the
long run, digitalization will extend to sensors and actuators. This would not only
save a-d and d-a converters, but would also circumvent certain noise problems,
permit the use of sensors with digital output and allow the preprocessing of signals
in digital measuring transducers (for example, choice of measurement range,
correction of nonlinear characteristics, computation of characteristics not
measurable in a direct way, automatic failure detection, etc.). Actuators with
digital control will be developed as well. Digital controllers are not only able to
replace one or several analog controllers; they can also perform additional
functions previously exercised by other devices, as well as new functions.
Additional functions include programmed sequence control of setpoints, automatic
switching between various controlled and manipulated variables, feedforward
adjustment of controller parameters as functions of the operating point, and
additional monitoring of limit values. Examples of new functions are:
communication with other digital controllers, mutual redundancy, automatic
failure detection and failure diagnosis, various further functions, the possibility of
choosing between different control algorithms and, in particular, selftuning or
adaptive control algorithms. Entire control systems, such as cascade control
systems, multivariable control systems with coupling controllers and control
systems with feedforward control, which can easily be changed by configuration of
the software at commissioning time or later, can be realized with a single digital
controller. Finally, very large ranges of the controller parameters and the sample
time can be realized. It is because of these many advantages that various digital
devices for process automation are presently being developed, either
complementing or replacing analog process control techniques.
As compared to analog control systems, here are some of the characteristics of
digital control systems using process computers or process microcomputers:
- Feedforward and feedback control are realized in the form of software.
- Discrete-time signals are generated.
- The signals are quantized in amplitude through the finite word length in a-d
converters, the central processor unit, and d-a converters.
- The computer can automatically perform the analysis of the process and the
synthesis of the control.
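These characteristics can be illustrated with a minimal sketch of one sampling step of a control algorithm realized in software, with amplitude quantization at the a-d and d-a converters. This example is not from the book; the PI routine, gains, sample time and converter word lengths are illustrative assumptions.

```python
# Sketch of one sampling step of a software control algorithm with
# amplitude quantization in the a-d and d-a converters.
# All names, gains and word lengths are illustrative assumptions.

def quantize(x, q):
    """Round x to the nearest multiple of the quantization unit q,
    modelling the finite word length of a converter."""
    return q * round(x / q)

def pi_step(e, e_sum, kp, ki, T0):
    """One step of a discrete-time (parameter-optimized) PI algorithm."""
    e_sum = e_sum + e * T0          # rectangular integration of the error
    u = kp * e + ki * e_sum         # proportional plus integral action
    return u, e_sum

q_ad = 1.0 / 2**10                  # quantization unit of a 10-bit a-d converter
q_da = 1.0 / 2**8                   # quantization unit of an 8-bit d-a converter
w, y = 1.0, 0.4                     # setpoint and measured controlled variable
e = quantize(w - y, q_ad)           # control deviation, quantized at the input
u, e_sum = pi_step(e, 0.0, kp=2.0, ki=0.5, T0=0.1)
u = quantize(u, q_da)               # manipulated variable, quantized at the output
```

The point is only that the control law itself is a few lines of software executed once per sample time between two converters; chapter 27 analyzes such quantization effects systematically.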
Because of the great flexibility of control algorithms stored in software, one is not
limited, as with analog control systems, to standardized modules with P-, I- and D-
behaviour; one can also use more sophisticated algorithms based on mathematical
process models. Many further functions can be added. It is especially significant
that on-line digital process computers permit the use of process identification,
controller design and simulation methods, thus providing the engineer with new
tools.
Since 1958 several books have been published dealing with the theoretical
treatment and synthesis of linear sampled-data control, based on difference
equations, vector difference equations and the z-transform. Up to 1977, when the
first German edition of this book appeared, no books were available in which the
various methods of designing sampled-data control had been surveyed, compared
and presented so that they could be used immediately to design control algorithms
for various types of processes. Among other things, one must consider the form and
accuracy of the mathematical process models obtainable in practice, the
computational effort of the design and the properties of the resulting control
algorithms, such as the relationship between control performance and manipulation
effort, the behaviour for various processes and various disturbance signals, and the
sensitivity to changes in process behaviour. Finally, the changes effected in control
behaviour through sampling and amplitude quantization, as compared with analog
control, also had to be studied.
Apart from deterministic control systems, the first edition of this book also dealt
with stochastic control, multivariable control and the first results of digital
adaptive control. In 1983 this book was translated into Chinese. In 1981 the
enlarged English version entitled "Digital Control Systems" was published, followed
by a Russian translation in 1984 and another Chinese translation in 1986. In
1987 the 2nd edition appeared in German, now in two volumes. This book
is the 2nd English edition.
As expected, the field of digital control has developed further. While new
results have been worked out in research projects, increased application has
provided richer experience, thus allowing a more profound evaluation of the
various possibilities. Further stimulation on how to treat the material didactically
has been provided by several years of teaching experience and delivering courses in
industry. This makes the second edition a complete revision of the first book,
containing many supplements, especially in chapters 1, 3, 5, 6, 10, 20, 21, 23, 26, 30
and 31. Since, compared with the first edition, the amount of material has increased
significantly, it was necessary to divide the book into two volumes.
Both volumes are directed to students and engineers in industry who wish to be
introduced to the theory and application of digital control systems. Required is only a
basic familiarity with continuous-time (analog) control theory and techniques,
characterized by keywords such as: differential equation, Laplace transform,
transfer function, frequency response, poles, zeros, stability criteria, and basic
matrix calculations. The first volume deals with the theoretical basis of linear
sampled-data control and with deterministic control. Compared with the first
edition, the introduction to the basics of sampled-data control (Part A) has been
considerably extended. Offering various examples and exercises, the introduction
concentrates on the basic relationships required by the subsequent chapters and
necessary for the engineer. This is realized by using the input/output design as well
as the state design. Part B considers control algorithms designed for deterministic
noise signals. Parameter-optimized algorithms, especially those with PID behaviour,
are investigated in detail, being still the ones most frequently used in industry. The
sequel presents general linear controllers (of higher order), cancellation controllers,
and the deadbeat controllers characteristic of sampled-data control. State
controllers, including observers based on different design principles and the
required supplements, are also considered. Finally, various control methods for
processes with deadtime as well as insensitive and robust controllers are described,
and different control algorithms are compared by simulation.
Part C of the second volume is dedicated to control design for stochastic noise
signals, such as minimum variance controllers. The design of interconnected
control systems (cascade control, feedforward control) is described in Part D, while
Part E treats different multivariable control systems, including multivariable state
estimation. Digital adaptive control systems, which have made remarkable progress
during the last ten years, are thoroughly investigated in Part F. Following a general
survey, on-line identification methods, including identification in closed loop, and
various parameter-adaptive control methods are presented. Part G considers more
practical aspects, such as the influence of amplitude quantization, analog and
digital noise filtering, and actuator control. Finally, the computer-aided design of
control with special program systems is described, including various applications
and examples of adaptive and selftuning control methods for different technical
processes. The last chapters show that the control systems and the corresponding
design methods, in combination with the process modelling methods described in
the two volumes, have been compiled into program systems. Most of them were
tried out on our own pilot processes and in industry. Further specification of the
contents is given in chapter 1.
A course "Digital Control Systems" treats the following chapters: 1, 2, 3.1-3.5,
3.7, 4, 5, 6, 7, 3.6, 8, 9, 11. The lecture, three hours weekly plus one hour of
exercises, is given at the Technische Hochschule Darmstadt for students starting in
the sixth semester. For a more rapid grasp of the essentials for applications, the
following sequence is recommended: 2, 3.1 to 3.5 (perhaps excluding 3.2.4, 3.5.3,
3.5.4), 4, 5.1, 5.2.1, 5.6, 5.7, 6.2, 7.1, 11.2, 11.3, with the corresponding exercises.
Many of the described methods, developments and results have been worked out
in a research project funded by the Bundesminister für Forschung und Technologie
(DV 5.505) within the project "Prozeßlenkung mit DV-Anlagen (PDV)" from
1973 to 1981, and in research projects funded by the Deutsche Forschungsgemein-
schaft in the Federal Republic of Germany. The author is very grateful for this
support.
His thanks also go to his coworkers, who had a significant share in the
generation of the results through several years of joint effort, for developing
methods, calculating examples, assembling program packages, performing simu-
lations on digital and on-line computers, doing practical trials with various
processes and, finally, for proofreading.
The book was translated by my wife, Helge Isermann.

Darmstadt, June 1991 Rolf Isermann


Contents

C Control Systems for Stochastic Disturbances

12 Stochastic Control Systems (Introduction) . . . . . . . . 3


12.1 Preliminary Remarks . . . . . . . . . . . . . . . . 3
12.2 Mathematical Models of Stochastic Signal Processes 3
12.2.1 Basic Terms. . . . . . . . . . . . . . . . . 4
12.2.2 Markov Signal Processes . . . . . . . . . 6
12.2.3 Scalar Stochastic Difference Equations. 8

13 Parameter-optimized Controllers for Stochastic Disturbances . . . . . .. 10

14 Minimum Variance Controllers for Stochastic Disturbances ... 13


14.1 Generalized Minimum Variance Controllers for Processes
without Deadtime . . . . . . . . . . . . . . . . . . . . . . . . 13
14.2 Generalized Minimum Variance Controllers for Processes
with Deadtime . . . . . . . . . . . . . . . . . . . . . . 21
14.3 Minimum Variance Controllers for Processes with
Pure Deadtime . . . . . . . . . . . . . . . . . . . . 25
14.4 Minimum Variance Controllers without Offset. 27
14.4.1 Additional Integral Acting Term . . . . . 27
14.4.2 Minimization of the Control Error .. . 28
14.5 Simulation Results with Minimum Variance Controllers 28
14.6 Comparison of Various Deterministic and
Stochastic Controllers . . . . . . . . . . . . . . . . . . . . 32

15 State Controllers for Stochastic Disturbances. . . . . . . . . . . . . . . .. 36


15.1 Optimal State Controllers for White Noise. . . . . . . . . . . . .. 36
15.2 Optimal State Controllers with State Estimation for White Noise. 38
15.3 Optimal State Controllers with State Estimation for External
Disturbances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 40

D Interconnected Control Systems

16 Cascade Control Systems . . 49

17 Feedforward Control. . . . . 56
17.1 Cancellation Feedforward Control . . . . . . . . 56
17.2 Parameter-optimized Feedforward Control . . . 60
17.2.1 Parameter-optimized Feedforward Control without a
Prescribed Initial Manipulated Variable . . . . . . . . . . 60
17.2.2 Parameter-optimized Feedforward Control with Prescribed
Initial Manipulated Variable. . . . . . . . . . . . . . . . . 61
17.2.3 Cooperation of Feedforward and Feedback Control. . . 64
17.3 State Variable Feedforward Control . . . . . .. 65
17.4 Minimum Variance Feedforward Control .. 66

E Multivariable Control Systems


18 Structures of Multivariable Processes . . . . 71
18.1 Structural Properties of Transfer Function Representations. 71
18.1.1 Canonical Structures . . . . . . . . . . . . . . . . . . . . 71
18.1.2 The Characteristic Equation and Coupling Factor. . . .. 75
18.1.3 The Influence of External Signals. . . . . . . . . . . . 78
18.1.4 Mutual Action of the Main Controllers. . 79
18.1.5 The Matrix Polynomial Representation. . 82
18.2 Structural Properties of the State Representation. 82

19 Parameter-optimized Multivariable Control Systems . . . . . . . . . . .. 89


19.1 Parameter Optimization of Main Controllers without
Coupling Controllers . . . . . . . . . . . . . . . . . . . . . . . . . 89
19.1.1 Stability Regions. . . . . . . . . . . . . . . . . . . . . . . . 92
19.1.2 Optimization of the Controller Parameters and Tuning
Rules for Twovariable Controllers . . . . . . . . . . . . . 96
19.2 Decoupling by Coupling Controllers (Non-interaction) . . . . . 99
19.3 Parameter Optimization of the Main and Coupling Controller 103

20 Multivariable Matrix Polynomial Control Systems . . . . . . . . . . . . . 105


20.1 The General Matrix Polynomial Controller. . . . . . . . . . .. 105
20.2 The Matrix Polynomial Deadbeat Controller. . . . . . 105
20.3 Matrix Polynomial Minimum Variance Controllers . . . . . . . . . 107

21 Multivariable State Control Systems . . . . . . . . . . . . . . . . . 109


21.1 Multivariable State Control Systems. . . . . . . . . . . . . 109
21.2 Multivariable Matrix Riccati State Controllers . . . . . . . . . . . . 112

21.3 Multivariable Decoupling State Controllers. . . . . . 113


21.4 Multivariable Minimum Variance State Controllers. 113

22 State Estimation . . . . . . . . . . . . . . . . . . . . 116


22.1 Vector Signal Processes and Assumptions. . 117
22.2 Weighted Averaging of Two Measurements. 119
22.3 Recursive Estimation of Vector States (Kalman Filter) 121

F Adaptive Control Systems

23 Adaptive Control Systems (A Short Review) 127


23.1 Model Reference Adaptive Systems (MRAS). 129
23.1.1 Local Parameter Optimization 130
23.1.2 Ljapunov Design. . . . . . . . . . . . . 132
23.1.3 Hyperstability Design. . . . . . . . . . 133
23.2 Adaptive Controllers with Identification Model (MIAS). 138

24 On-line Identification of Dynamical Processes and


Stochastic Signals. . . . . . . . . . . . . . . . . . . . 141
24.1 Process and Signal Models. . . . . . . . . . . 141
24.2 The Recursive Least Squares Method (RLS). 143
24.2.1 Dynamical Processes . . . . . . . . . . 143
24.2.2 Stochastic Signals . . . . . . . . . . . . 148
24.3 The Recursive Extended Least Squares Method (RELS). 149
24.4 The Recursive Instrumental Variables Method (RIV) . . 150
24.5 A Unified Recursive Parameter Estimation Algorithm. . 152
24.6 Modifications to Recursive Parameter Estimation Algorithms. 154

25 On-line Identification in Closed Loop . . . . . . . 158


25.1 Parameter Estimation without Perturbations. 159
25.1.1 Indirect Process Identification. . . . 160
25.1.2 Direct Process Identification. . . . . 164
25.2 Parameter Estimation with Perturbations. 167
25.3 Methods for Closed Loop Parameter Estimation. 168
25.3.1 Indirect Process Identification without Perturbation 169
25.3.2 Direct Process Identification without Perturbation 169
25.3.3 Direct Process Identification with Perturbation 169

26 Parameter-adaptive Controllers. . . 170


26.1 Design Principles. . . . . . . 170
26.2 Suitable Control Algorithms 175
26.2.1 Deadbeat Control Algorithms 175
26.2.2 Minimum Variance Controllers 176

26.2.3 Parameter-optimized Controllers. . . . . . . . . . . . . . . . 179


26.2.4 General Linear Controller with Pole-assignment (LCPA) . 179
26.2.5 State Controller. . . . . . . . . . . . . . . . . . . . . . . 179
26.3 Suitable Combinations. . . . . . . . . . . . . . . . . 180
26.3.1 Ways of Combinations . . . . . . . . . . . . . . . . . . 180
26.3.2 Stability and Convergence . . . . . . . . . . . . . . . . 182
26.3.3 Choice of the Elements for Parameter-adaptive
Controllers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
26.4 Stochastic Parameter-adaptive Controllers . . . . . . . . . . . . . . 188
26.4.1 Adaptive Minimum Variance Controller (RLS/MV4) ... 188
26.4.2 Adaptive Generalized Minimum Variance Controllers
(RLS/MV3, RELS/MV3) . . . . . . . . . . . . . . . . .. 190
26.5 Deterministic Parameter-adaptive Controllers. . . . . . . . .. 192
26.5.1 Adaptive Deadbeat Controller (RLS/DB) 193
26.5.2 Adaptive State Controller (RLS/SC) . . . . . . . . . . . . . . 196
26.5.3 Adaptive PID-Controllers . . . . . . . . . . . . . . . . . . . . 199
26.6 Simulation Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 207
26.6.1 Stochastic and Deterministic Adaptive Controllers . . . . . 207
26.6.2 Various Processes . . . . . . . . . . . . . . . . . . . . . . . . 208
26.7 Start of Parameter-adaptive Controllers and Choice
of Free Design Parameters . . . . . . . . . . . . . . . . . 208
26.7.1 Preidentification . . . . . . . . . . . . . . . . . . . . . . . . . 212
26.7.2 Choice of Design Parameters . . . . . . . . . . . . . . . . . . 212
26.7.3 Starting Methods . . . . . . . . . . . . . . . . . . . . . . . . . 213
26.8 Supervision and Coordination of Adaptive Controllers. . . . . . . 215
26.8.1 Supervision of Adaptive Controllers . . . . . . . . . . . . . . 215
26.8.2 Coordination of Adaptive Controllers . . . . . . . . . . . . . 217
26.9 Parameter-adaptive Feedforward Control . . . . . . . . . . . . . . 217
26.10 Parameter-adaptive Multivariable Controllers. . . . . . .. . .. 220
26.11 Application of Parameter-adaptive Control Algorithms . . . . . . 221

G Digital Control with Process Computers and Microcomputers


27 The Influence of Amplitude Quantization for Digital Control . . . . . . . 227
27.1 Reasons for Quantization Effects . . . . . . . . . . . . . . . . . . . . 227
27.2 Various Quantization Effects. . . . . . . . . . . . . . . . . . . . . . . 231
27.2.1 Quantization Effects of Variables . . . . . . . . . . . . . . . . 231
27.2.2 Quantization Effects of Coefficients. . . . . . . . . . . . ... 235
27.2.3 Quantization Effects of Intermediate Results . . . . . . . .. 235

28 Filtering of Disturbances . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240


28.1 Noise Sources and Noise Spectra. . . . . . .. . . . . . . . . . . . 240
28.2 Analog Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243

28.3 Digital Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
28.3.1 Low-pass Filters . . . . . . . . . . . . . . . . . . . . . . . 246
28.3.2 High-pass Filters . . . . . . . . . . . . . . . . . . . . . . . 248
28.3.3 Special Filters . . . . . . . . . . . . . . . . . . . . . . . . . 249

29 Combining Control Algorithms and Actuators . . 254

30 Computer-aided Control Algorithm Design. . . . 266


30.1 Program Packages. . . . . . . . . . . . . . . 266
30.1.1 Modelling through Theoretical Modelling
or Identification . . . . . . . . . . . . . . . . . 268
30.1.2 Program Packages for Process Identification. . 269
30.1.3 Program Packages for Control Algorithm Design . 269
30.2 Case Studies . . . . . . . . . . . . . . . . . . . . 274
30.2.1 Digital Control of a Superheater . . . . 274
30.2.2 Digital Control of a Heat Exchanger. . 275
30.2.3 Digital Control of a Rotary Dryer . . . 280

31 Adaptive and Selftuning Control Systems Using Microcomputers
   and Process Computers . . . . . . . . . . . . . . . . . . . . . . . . . 290
   31.1 Microcomputers for Adaptive Control Systems . . . . . . . . . . 290
   31.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
        31.2.1 Adaptive Control of a Superheater (Simulation) . . . . . 293
        31.2.2 Adaptive Control of Air Conditioning Units . . . . . . . 293
        31.2.3 Adaptive Control of the pH-value . . . . . . . . . . . . . 301

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307

Subject Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321


Summary of Contents Volume I

1 Introduction

A Fundamentals
2 Control with Digital Computers (Process Computers,
Microcomputers)
3 Fundamentals of Linear Sampled-data Systems (Discrete-time
Systems)

B Control Systems for Deterministic Disturbances


4 Deterministic Control Systems
5 Parameter-optimized Controllers
6 General Linear Controllers and Cancellation Controllers
7 Controllers for Finite Setting Time
8 State Controller and State Observers
9 Controllers for Processes with Large Deadtime
10 Sensitivity and Robustness with Constant Controllers
11 Comparison of Different Controllers for Deterministic Disturbances
Appendix A: Tables and Test Processes
Appendix B: Problems
Appendix C: Results of the Problems
Graphic Outline of Contents (Volume I)

[Block diagram; columns: Design of Control System Structure, Design of Control
Algorithms, Information on Process and Signals, Realization with Digital Computers]

2 Control with Digital Computers
3 Fundamentals of Linear Sampled-Data Systems
4 Deterministic Control Systems (Survey)
5-11 Single-input/Single-output Control Systems:
     5 Parameter-optimized Controllers (PID)
     6 General Linear and Cancellation Controllers
     7 Deadbeat Controllers
     8 State Controllers and Observers
     9 Controllers for Processes with Large Deadtime
     10 Robust Controllers
     11 Comparison of Control Algorithms
Graphic Outline of Contents (Volume II)

[Block diagram; columns: Design of Control System Structure, Design of Control
Algorithms, Information on Process and Signals, Realization with Digital Computers]

12 Stochastic Control Systems (Survey)
13-15 Single-input/Single-output Control Systems:
      13 Parameter-optimized Stochastic Controllers
      14 Minimum Variance Controllers
      15 State Controllers
16-17 Interconnected Control Systems:
      16 Cascade Control Systems
      17 Feedforward Control
18-21 Multivariable Control Systems:
      18 Structures of Multivariable Processes
      19 Parameter-optimized Multivariable Control Systems
      20 Matrix Polynomial Controllers
      21 State Controllers
22 State Estimation
23 Adaptive Control Systems (Survey)
24/25 Process Identification
26 Parameter-adaptive Control Systems
27 Amplitude Quantization
28 Signal Filtering
29 Actuator Control
30 Computer-Aided Design of Control Algorithms with Process Identification
31 Adaptive Control with Microcomputers


List of Abbreviations and Symbols

This list defines commonly occurring abbreviations and symbols:

a, b parameters of the difference equations of the process
c, d parameters of the difference equations of stochastic signals
d dead time d = Tt/To = 1, 2, ...
e control deviation e = w - y (also ew = w - y); or equation error for
parameter estimation; or the number e = 2.71828 ...
f frequency f = 1/Tp (Tp period); or parameter
g impulse response (weighting function)
h parameter
i integer; or index; or i² = -1
k discrete time unit k = t/To = 0, 1, 2, ...
I integer; or parameter
m order of the polynomials A( ), B( ), C( ), D( )
n disturbance signal (noise)
p parameters of the difference equation of the controller, or integer
p( ) probability density
q parameters of the difference equation of the controller
r weighting factor of the manipulated variable; or integer
s variable of the Laplace transform s = δ + iω; or signal
t continuous time
u input signal of the process, manipulated variable u(k) = U(k) - U∞
v nonmeasurable, virtual disturbance signal
w reference value, command variable, setpoint w(k) = W(k) - W∞
x state variable
y output signal of the process, controlled variable y(k) = Y(k) - Y∞
z variable of the z-transformation z = e^(To s)
a, b parameters of the differential equations of the process
A(s) denominator polynomial of G(s)
B(s) numerator polynomial of G(s)

A(z) denominator polynomial of the z-transfer function of the process model


B(z) numerator polynomial of the z-transfer function of the process model
C(z) denominator polynomial of the z-transfer function of the noise model
D(z) numerator polynomial of the z-transfer function of the noise model
G(s) transfer function for continuous-time signals
G(z) z-transfer function
H( ) transfer function of a holding element
I control performance criterion
K gain
L word length
M integer
N integer or discrete measuring time
P(z) denominator polynomial of the z-transfer function of the controller
Q(z) numerator polynomial of the z-transfer function of the controller
R( ) dynamical control factor
S power density or sum criterion
T time constant
T95 settling time of a step response until 95% of final value
To sample time
Tt dead time
U process input (absolute value)
V loss function
W reference variable (absolute value)
Y process output variable (absolute value)
b control vector
c output vector
k parameter vector of the state controller
n noise vector (r x 1)
u input vector (p x 1)
v noise vector (p x 1)
w reference variable vector (r x 1)
x state variable vector (m x 1)
y output vector (r x 1)
A system matrix (m x m)
B input matrix (m x p)
C output matrix, observation matrix (r x m)
D input-output matrix (r x p), or diagonal matrix
F noise matrix or F = A - BK
G matrix of transfer functions
I unity matrix
K parameter matrix of the state controller
Q weighting matrix of the state variables (m x m)
R weighting matrix of the inputs (p x p); or controller matrix
𝒜(z) denominator polynomial of the z-transfer function of the closed loop
ℬ(z) numerator polynomial of the z-transfer function of the closed loop

ℱ( ) Fourier transform
ℐ information
ℒ( ) Laplace transform
𝒵( ) z-transform
∼( ) correspondence G(s) → G(z)
α coefficient
β coefficient
γ coefficient; or state variable of the reference variable model
δ deviation, or error
ε coefficient
ζ state variable of the noise model
η state variable of the noise model; or noise/signal ratio
κ coupling factor; or stochastic control factor
λ standard deviation of the noise v(k)
μ order of P(z)
ν order of Q(z); or state variable of the reference variable model
π 3.14159...
σ standard deviation, σ² variance; or related Laplace variable
τ time shift
ω angular frequency ω = 2π/Tp (Tp period)
Δ deviation; or change; or quantization unit
Θ parameter
Π product
Σ sum
Ω related angular frequency
ẋ = dx/dt
x₀ exact quantity
x̂ estimated or observed variable
x̃ = x̂ - x₀ estimation error
x̄ average
x∞ value in steady state

Mathematical abbreviations
exp(x) = e^x
E{ } expectation of a stochastic variable
var[ ] variance
cov[ ] covariance
dim dimension, number of elements
tr trace of a matrix: sum of diagonal elements
adj adjoint
det determinant
Indices
P process
Pu process with input u

Pv process with input v


R or C feedback controller, feedback control algorithm, regulator
S or C feedforward controller, feedforward control algorithm
0 exact value
∞ steady state, d.c. value

Abbreviations for controllers or control algorithms (C)


i-PC-j parameter-optimized controller with i parameters and j parameters to
be optimized
DB Deadbeat-controller
LC-PA linear controller with pole assignment
PREC predictor controller
MV minimum variance controller
SC state controller (usually with an observer)

Abbreviations for parameter estimation methods


COR-LS correlation analysis with LS parameter estimation
IV instrumental variables
LS least squares
ML maximum-likelihood
STA stochastic approximation
DSFI discrete square root filter in variance form
DSFC discrete square root filter in covariance form
DUDC discrete UD-factorization in covariance form
The letter R denotes the recursive version of an algorithm, e.g. RIV, RLS, RML.

Abbreviations for signal processes


AR autoregressive
MA moving average
ARMA autoregressive moving average
ARMAX autoregressive moving average with exogenous variable

Other abbreviations
ADC analog-digital converter
CPU central processing unit
DAC digital-analog converter
PRBS pseudo-random binary signal
WL word length

SISO single-input single-output


MIMO multi-input multi-output
MRAS adaptive system with reference model
MIAS adaptive system with identification model

Remarks

In the figures, vectors and matrices are set in roman type and underlined; thus x corresponds to x̲ and K to K̲.
The symbol for the dimension of time t in seconds is usually s and sometimes sec, in order to avoid misinterpretation as the Laplace variable s.
C Control Systems for Stochastic
Disturbances
12 Stochastic Control Systems (Introduction)

12.1 Preliminary Remarks

The controllers treated in Volume I were designed for deterministic disturbances,
that is, for signals which are exactly known a priori and can be described
analytically. Real disturbances, however, are mostly stochastic signals which can be
neither exactly described nor predicted. The deterministic signals used for the design
of control systems are often 'proxies' of real signals. These proxies have simple
shapes to reduce the design complexity and to allow easy interpretation of the
control system output. The resulting control systems are then optimal only for the
chosen proxy signal and the applied criterion. For all other signals the control
system is sub-optimal; in most cases, however, this is not very important. If the
demands on the control performance increase, the controllers must be matched not
only to the dynamic behaviour of the processes but also to the disturbances. To this
the theory of stochastic signals has much to contribute.
In section 12.2 the mathematical models of stochastic signals, required
by the following chapters, are briefly treated. Then three important controllers for
stochastic disturbances are considered. All the parameter-optimized controllers of
chapter 5 can also be matched to stochastic disturbances as shown in chapter 13.
Chapter 14 then gives a detailed treatment of various minimum variance controllers,
which result from the minimization of a quadratic performance criterion and which
are matched with an optimal structure both to the process to be controlled and to
the stochastic disturbances. Finally, chapter 15 treats the state controller, which
also has an optimal structure and which requires estimation of the stochastic state
variables.
The derivation of the state estimation is presented in a special chapter,
chapter 22, which requires a separate introduction because of its complexity.
The theory of stochastic control systems is quite recent, and so far the following
books have been published on discrete time stochastic control systems:
[12.1]-[12.5], [8.3].

12.2 Mathematical Models of Stochastic Signal Processes

This section presents some equations describing signal processes which are re-
quired in the design of stochastic controllers and state estimators. However,
a detailed introduction and derivation cannot be given here, so the reader is

referred to special publications, for example on continuous stochastic signals


[12.6], [12.7], [12.8], and discrete-time stochastic signals [12.9], [12.10], [12.4],
[3.13].

12.2.1 Basic Terms


We consider the discrete-time stochastic signal process
{x(k)}; k = 1,2, ... , N.
The statistical properties of stochastic signals are described by their amplitude
probability density and by all joint probability density functions. If these probabil-
ity densities are functions of time, the stochastic signal is nonstationary. If the
probability densities and the joint probability densities are independent of a time
shift the signal is called stationary (in the narrow sense). Stationary signals are
called ergodic if their ensemble-average can be replaced by their time-average.
A stationary ergodic signal can be described by its expectation (linear average
value)
x̄ = E{x(k)} = lim_{N→∞} (1/N) Σ_{k=1}^{N} x(k)   (12.2.1)

and by its autocorrelation function:


Φ_xx(τ) = E{x(k)x(k+τ)} = lim_{N→∞} (1/N) Σ_{k=1}^{N} x(k)x(k+τ).   (12.2.2)

The autocorrelation function describes the intrinsic relationships of a random


signal. From the definition of the autocorrelation function it is seen that the d.c.
value of the signal influences its value. If only deviations from the average are
considered, one obtains the autocovariance function:

R_xx(τ) = cov[x, τ] = E{[x(k) − x̄][x(k+τ) − x̄]} = Φ_xx(τ) − x̄².   (12.2.3)

For τ = 0 the variance of the signal is obtained as:

R_xx(0) = E{[x(k) − x̄]²} = σ_x².   (12.2.4)

If the signal has a Gaussian amplitude distribution, it is completely described by its


expectation and the autocovariance function. A stochastic signal is stationary in the
wide sense if x̄ and R_xx(τ) are independent of time.
The relationship between different stationary stochastic signals x(k) and y(k) can
be described by the crosscorrelation function:
Φ_xy(τ) = E{x(k)y(k+τ)} = lim_{N→∞} (1/N) Σ_{k=1}^{N} x(k)y(k+τ)   (12.2.5)

or by the crosscovariance function:

R_xy(τ) = cov[x, y, τ] = E{[x(k) − x̄][y(k+τ) − ȳ]} = Φ_xy(τ) − x̄ȳ.   (12.2.6)

Two different stochastic signals are called uncorrelated if:

cov[x, y, τ] = R_xy(τ) = 0.   (12.2.7)

They are orthogonal if additionally x̄ȳ = 0, which means that:

Φ_xy(τ) = 0.   (12.2.8)

For white noise, a current signal value is statistically independent of all past values.
It has no intrinsic relationship, and in the case of a Gaussian amplitude distribution
it is completely described by the average x̄ and the covariance function

cov[x, τ] = σ_x² δ(τ)   (12.2.9)

where δ(τ) is the Kronecker delta function

δ(τ) = 1 for τ = 0, δ(τ) = 0 for |τ| ≠ 0   (12.2.10)

and σ_x² is the variance of x(k).


Hitherto only scalar stochastic signals have been considered. A vector stochastic signal
of order n

x^T(k) = [x₁(k) x₂(k) … x_n(k)]   (12.2.11)

contains n scalar signals. If they are stationary, their average is

x̄^T = E{x^T(k)} = [x̄₁ x̄₂ … x̄_n].   (12.2.12)

The relationship between two (scalar) components is described by the elements of
the covariance matrix:

cov[x, τ] = E{[x(k) − x̄][x(k+τ) − x̄]^T}

              [ R_x1x1(τ)  R_x1x2(τ)  …  R_x1xn(τ) ]
            = [ R_x2x1(τ)  R_x2x2(τ)  …  R_x2xn(τ) ]   (12.2.13)
              [     ⋮           ⋮               ⋮    ]
              [ R_xnx1(τ)  R_xnx2(τ)  …  R_xnxn(τ) ]

On the diagonal are the n autocovariance functions of the individual scalar signals;
all other elements are crosscovariance functions. Note that the covariance matrix is
symmetric for τ = 0.

Example 12.2.1: x₁(k) and x₂(k) are two different white random signals. Then their
covariance matrix is:

cov[x, τ = 0] = [ σ²_x1   0
                  0     σ²_x2 ]

cov[x, τ ≠ 0] = 0.

Covariance or correlation functions are nonparametric models of stochastic
signals; the next two sections describe parametric models of stochastic signal
processes.

12.2.2 Markov Signal Processes


A stochastic signal process is called a first-order Markov signal process (Markov
process) if its conditional probability density function satisfies:
p[x(k)lx(k - 1), x(k - 2), ... , x(O)] = p[x(k)lx(k - 1)] . (12.2.14)
The conditional probability for the event of value x(k) depends only on the last
value x(k - 1) and not on any other past value. Therefore a future value will only
be influenced by the current value. This definition of a Markov signal process
corresponds to a first-order scalar difference equation

x(k+1) = a x(k) + f v(k)   (12.2.15)

for which the future value x(k+1) depends only on the current values of x(k) and
v(k). If v(k) is a statistically independent signal (white noise), then this difference
equation generates a Markov process. If the scalar difference equation has an
order greater than one, for example satisfying

x(k+1) = a₁ x(k) + a₂ x(k−1) + f v(k)   (12.2.16)
then one can transform the process equation by substituting

x(k) = x₁(k)
x(k+1) = x₁(k+1) = x₂(k)   (12.2.17)

into a first-order vector difference equation

[ x₁(k+1) ]   [ 0   1  ] [ x₁(k) ]   [ 0 ]
[ x₂(k+1) ] = [ a₂  a₁ ] [ x₂(k) ] + [ f ] v(k)   (12.2.18)

which becomes in general

x(k+1) = A x(k) + f v(k).   (12.2.19)
Here A and f are assumed to be constant. Then each element of x(k+1) depends
only on the state x(k) and on v(k), i.e. only on current values; x(k+1) is then a first-
order vector Markov signal process. Stochastic signals which depend on finitely many
past values can always be described by vector Markov processes by transformation
into a first-order vector difference equation. Therefore a wide class of stochastic signals

Figure 12.1. Model of a vector Markov signal process x(k); v(k): statistically independent random variable.

can be represented by vector Markov signal processes in a parametric model, as
shown in Figure 12.1. If the parameters of A and f are constant and v̄ = 0, then the
signal is stationary. Nonstationary Markov signals result from varying A(k), f(k)
or v̄(k).
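The equivalence of the second-order scalar equation (12.2.16) and its first-order vector form (12.2.17)/(12.2.18) can be checked by running both recursions on the same noise sequence; the coefficients a₁, a₂, f below are illustrative:

```python
import random

# illustrative coefficients for (12.2.16): x(k+1) = a1 x(k) + a2 x(k-1) + f v(k)
a1, a2, f = 0.5, 0.3, 1.0
random.seed(3)
v = [random.gauss(0.0, 1.0) for _ in range(100)]

# second-order scalar recursion (12.2.16)
xs = [0.0, 0.0]                        # x(0), x(1)
for k in range(1, 99):
    xs.append(a1 * xs[k] + a2 * xs[k - 1] + f * v[k])

# first-order vector recursion (12.2.18) with state (x1, x2) = (x(k-1), x(k))
x1, x2 = 0.0, 0.0
xv = [0.0, 0.0]
for k in range(1, 99):
    x1, x2 = x2, a2 * x1 + a1 * x2 + f * v[k]
    xv.append(x2)

# both descriptions generate the same signal
assert all(abs(a - b) < 1e-12 for a, b in zip(xs, xv))
```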
The covariance matrix X(k+1) of the signal x(k+1)

cov[x(k+1), τ = 0] = E{[x(k+1) − x̄(k+1)][x(k+1) − x̄(k+1)]^T} = X(k+1)   (12.2.20)

is derived for a Markov signal with constant parameters


x(k + 1) = A x(k) + Fv(k) . (12.2.21)
The following values are known:

E{v(k)} = v̄
cov[v(k), τ] = V for τ = 0, 0 for τ ≠ 0
E{x(0)} = x̄(0)   (12.2.22)
cov[x(0), τ = 0] = X(0)
E{[x(k) − x̄][v(k) − v̄]^T} = 0.

Taking the expectation of Eq. (12.2.21) gives:

x̄(k+1) = A x̄(k) + F v̄.   (12.2.23)

Subtracting Eq. (12.2.23) from Eq. (12.2.21) yields:

x(k+1) − x̄(k+1) = A[x(k) − x̄(k)] + F[v(k) − v̄].   (12.2.24)

(12.2.24) is now multiplied from the right by its transpose, and the expectation is
taken. The covariance matrix then obeys:

X(k+1) = A X(k) A^T + F V F^T.   (12.2.25)
If the eigenvalues of the characteristic equation
det [z I - A] = 0

are within the unit circle of the z-plane, and if A and F are constant matrices, then
for k → ∞ a stationary signal process is obtained, whose covariance matrix X can be
calculated recursively using (12.2.25), giving

X = A X A^T + F V F^T.   (12.2.26)
In the following, the expectation of a quadratic term of the form x^T(k) Q x(k) will
be required, where x(k) is a Markov process with covariance matrix X, and both
X and Q are nonnegative definite matrices. Using

x^T Q x = tr[Q x x^T]   (12.2.27)

where the trace operator tr produces the sum of the diagonal elements, it follows for
x̄(k) = 0 that:

E{x^T(k) Q x(k)} = E{tr[Q x(k) x^T(k)]} = tr[Q E{x(k) x^T(k)}] = tr[Q X].   (12.2.28)

If x̄(k) = x̄ ≠ 0, accordingly

E{x^T(k) Q x(k)} = x̄^T Q x̄ + tr[Q X].   (12.2.29)
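The recursion (12.2.25) and its stationary fixed point (12.2.26) can be sketched numerically; A, F and V below are illustrative, with A in the companion form of (12.2.18) for a₁ = 0.5, a₂ = 0.3, and with Q = I the trace identity (12.2.28) gives E{xᵀx} = tr X:

```python
# illustrative stable second-order Markov model in the form (12.2.19)
A = [[0.0, 1.0], [0.3, 0.5]]
F = [[0.0], [1.0]]
V = [[1.0]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(P):
    return [list(row) for row in zip(*P)]

FVFt = matmul(matmul(F, V), transpose(F))   # F V F^T
X = [[0.0, 0.0], [0.0, 0.0]]                # X(0) = 0
for _ in range(200):                        # X(k+1) = A X(k) A^T + F V F^T
    AXAt = matmul(matmul(A, X), transpose(A))
    X = [[AXAt[i][j] + FVFt[i][j] for j in range(2)] for i in range(2)]

# fixed-point check of (12.2.26): X = A X A^T + F V F^T
Y = matmul(matmul(A, X), transpose(A))
Y = [[Y[i][j] + FVFt[i][j] for j in range(2)] for i in range(2)]
assert all(abs(X[i][j] - Y[i][j]) < 1e-9 for i in range(2) for j in range(2))

# with Q = I, Eq. (12.2.28): E{x^T Q x} = tr[Q X] = X_11 + X_22
print(X[0][0] + X[1][1])
```

The iteration converges because both eigenvalues of A lie inside the unit circle, as required by the text.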

12.2.3 Scalar Stochastic Difference Equations


A stochastic difference equation with constant parameters is:

n(k) + c₁ n(k−1) + … + c_m n(k−m) = d₀ v(k) + d₁ v(k−1) + … + d_m v(k−m).   (12.2.30)

Here n(k) is the output of a 'virtual' filter with z-transfer function

n(z)/v(z) = (d₀ + d₁z⁻¹ + … + d_m z⁻ᵐ)/(1 + c₁z⁻¹ + … + c_m z⁻ᵐ) = D(z⁻¹)/C(z⁻¹)   (12.2.31)

and v(k) is a white noise with expectation v̄ = 0 and variance σ_v² = 1. Stochastic
difference equations represent a stochastic signal n(k) as a function of a statistically
independent signal v(k). The scalar stochastic difference equation (12.2.30) results
from the vector difference equation (12.2.19) with:

x^T(k) = [x₁(k) x₂(k) … x_m(k)]
n(k) = c^T x(k) + d₀ v(k)   (12.2.32)
c^T = [(d_m − d₀c_m) (d_{m−1} − d₀c_{m−1}) … (d₁ − d₀c₁)].

One distinguishes the autoregressive process (AR)

n(z) = [d₀/C(z⁻¹)] v(z)   (12.2.33)

the moving average process (MA)

n(z) = D(z⁻¹) v(z)   (12.2.34)

and the mixed autoregressive moving average process (ARMA) of equation
(12.2.31). If the roots of C(z⁻¹) lie within the unit circle of the z-plane these
processes are stationary, but if roots are allowed to lie on the unit circle, for
example in

n(z)/v(z)   (12.2.35)

then nonstationary processes can also be described. For further details see for
example [12.4], [12.9], [12.10].
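Simulating the difference equation (12.2.30) for m = 1 shows how the white input v(k) is turned into a coloured signal n(k); the coefficients c₁, d₀, d₁ and the record length are illustrative:

```python
import random

# illustrative ARMA coefficients for (12.2.30) with m = 1
c1, d0, d1 = -0.7, 1.0, 0.5
random.seed(4)
N = 30000
v = [random.gauss(0.0, 1.0) for _ in range(N)]   # white input, variance 1

n = [d0 * v[0]]
for k in range(1, N):
    # n(k) + c1 n(k-1) = d0 v(k) + d1 v(k-1)
    n.append(-c1 * n[k - 1] + d0 * v[k] + d1 * v[k - 1])

def r(x, tau):
    # autocovariance estimate for a zero-mean signal
    m = len(x) - tau
    return sum(x[k] * x[k + tau] for k in range(m)) / m

# white input: R_vv(1) ~ 0; coloured output: R_nn(1) is clearly nonzero
print(r(v, 1), r(n, 0), r(n, 1))
```

The root of C(z⁻¹) = 1 − 0.7z⁻¹ lies inside the unit circle, so the generated process is stationary, as the text requires.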
13 Parameter-optimized Controllers
for Stochastic Disturbances

The parameter-optimized control algorithms given in chapter 5 can be modified to


include stochastic disturbance signals n(k) by using the quadratic performance
criterion

S_eu = Σ_{k=1}^{M} [e²(k) + r Δu²(k)]   (13.1)

if the disturbance signals are known. When using a process computer, the stochas-
tic noise can first be stored and then used in the optimization of controller
parameters. If the disturbance is stationary, and if it has been measured and stored
for a sufficiently long time, it can then be assumed that the designed controller is
optimal also for future noise and a mathematical noise model is not necessary for
parameter optimization.
In the following some simulation results are presented which show how the
optimized controller parameters change compared with parameters obtained for
step changes of the disturbances and for test processes II and III. A three-parameter
control algorithm

u(k) = u(k−1) + q₀ e(k) + q₁ e(k−1) + q₂ e(k−2)   (13.2)

is used as in (5.2.10). A stochastic disturbance v(k), as in Figure 5.1 acts on the input
of the process, and is considered to be a normally distributed discrete-time white
noise with
E{v(k)} = 0   (13.3)
and standard deviation

(13.4)

Then one has n(z) = Gp(z)v(z). For this disturbance the controller parameters were
determined by minimization of the control performance criterion (13.1) for
M = 240, r = 0 and using the Fletcher-Powell method. Table 13.1 gives the
resulting controller parameters, the quadratic average value of the control error S_e
(control performance), the quadratic average value of the deviation of the manipu-

Table 13.1 Controller parameters, control performance and manipulation effort for stochastic
disturbances v(k)

T₀ = 4 s:
                Process II                              Process III
        S_e,stoch → Min     S_e,w → Min         S_e,stoch → Min     S_e,w → Min
        3 PC-3   3 PC-2    3 PC-3   3 PC-2      3 PC-3   3 PC-2    3 PC-3   3 PC-2
q₀       0.477    1.750     2.332    1.750       3.966    2.500     4.549    2.500
q₁      −0.512   −3.010    −3.076   −2.039      −7.171   −3.983    −7.160   −3.320
q₂       0.014    1.239     1.105    0.591       3.181    1.455     3.030    1.097
K        0.463    0.511     1.227    1.159       0.785    1.045     1.519    1.403
c_D      0.031    2.425     0.901    0.511       4.051    1.392     1.994    0.783
c_I     −0.045   −0.041     0.095    0.261      −0.030   −0.026     0.275    0.198
S_e      0.0346   0.037     0.0435   0.0411      0.0216   0.0213    0.0245   0.0249
S_u      0.0207   0.0604    0.0786   0.0595      0.0572   0.0361    0.0673   0.0438
κ        0.903    0.966     1.13     1.08        0.70     0.71      1.82     0.83

T₀ = 8 s:
                Process II                              Process III
        S_e,stoch → Min     S_e,w → Min         S_e,stoch → Min     S_e,w → Min
        3 PC-3   3 PC-2    3 PC-3   3 PC-2      3 PC-3   3 PC-2    3 PC-3   3 PC-2
q₀       0.913    1.500     1.999    1.500       1.494    2.000     2.437    2.000
q₁      −1.488   −2.154    −2.079   −1.338      −2.565   −3.370    −2.995   −2.280
q₂       0.557    0.652     0.748    0.364       1.052    1.394     1.158    0.784
K        0.356    0.848     1.251    1.136       0.442    0.606     1.279    1.216
c_D      1.564    0.770     0.597    0.321       2.378    2.300     0.905    0.645
c_I     −0.051   −0.002     0.534    0.464      −0.044   −0.040     0.469    0.414
S_e      0.0423   0.0452    0.0528   0.0519      0.0356   0.037     0.0432   0.0431
S_u      0.0387   0.0658    0.0858   0.0663      0.0485   0.0677    0.0807   0.0672
κ        0.90     0.95      1.11     1.09        0.85     0.88      1.02     1.03

lated variable S_u (manipulation effort), and the stochastic control factor

κ = [ ȳ²(k) with controller / ȳ²(k) without controller ]^{1/2}   (13.5)

for two different sample times. These are shown in the columns headed
'S_e,stoch → Min'. The same characteristic values were also calculated for the controller
parameters which were optimized for step changes in the reference variable. They
can be found in Table 13.1 in the columns headed 'S_e,w → Min'. Considering first
the controller parameters for the control algorithm 3 PC-3 optimized for step
changes, the parameters q₀ and K for stochastic disturbances decrease for both

processes, and c_D increases, with the exception of process II, T₀ = 4 s. The integration
factor c_I tends towards zero in all cases, as there is no constant disturbance,
i.e. E{v(k)} = 0. The controller action in most cases becomes weaker, as
the manipulation effort S_u decreases. The control performance is nevertheless
improved, as shown by the values of the stochastic control factor κ. The inferior
control performance and the increased manipulation effort of the controllers
optimized for step changes indicate that the stochastic disturbances excite the
resonance range of the control loop. As the stochastic disturbance n(k) has
a relatively large spectral density at higher frequencies, the κ-values of the
stochastically optimized control loops are only slightly below one. The improvement
in the effective value of the output due to the controller is therefore small
compared with the process without control; this is especially true for process II.
For the smaller sample time T₀ = 4 s, a much better control performance results
for process III than with T₀ = 8 s. For process II the control performance is about
the same in both cases. For the controller 3 PC-2, with a given initial input
u(0) = q₀ and only the two parameters q₁ and q₂ to be optimized, one value
q₀ was prescribed. For process II q₀ was chosen too large, and the control
performance is therefore worse than that of the 3 PC-3 controller. In the case of
process III, for both sample times T₀ = 4 s and T₀ = 8 s, changes of q₀ compared
with 3 PC-3 have little effect on the performance.
These simulation results show that the assumed three-parameter controller, having
PID-like behaviour for step disturbances, tends to proportional-derivative (PD)
action for stationary stochastic disturbances with E{n(k)} = 0. As there is no
constant disturbance, the parameter-optimized controller does not have integral
action. If c_I = 0 in (5.2.18), the pole at z = 1 is cancelled and one obtains
a PD controller with transfer function

G_R(z) = u(z)/e(z) = K[(1 + c_D) − c_D z⁻¹] = q₀ − q₂ z⁻¹   (13.6)

and a control algorithm

u(k) = q₀ e(k) − q₂ e(k−1).   (13.7)
If the disturbance signal n(k) is stationary with E{n(k)} = 0, the parameter-optimized
controller of (13.6) can be assumed. As in practice this is not exactly true,
at least a weak integral action is recommended in general, and therefore the
assumed three-parameter controller of (13.2) or (5.2.10) should be used. For this
controller one calculates K and c_D using parameter optimization, and then one
takes a small value of the integration factor c_I > 0 so that drift components of the
disturbance signal can also be controlled and offsets can be avoided.
The command variable transfer functions G_w(z) = y(z)/w(z) with the PID controllers
optimized for S_e,stoch → Min in Table 13.1 each contain a zero and a pole at
z ≈ 1; hence these cancel approximately. The reason is that, relative to the resulting
PD controller, the PID controller contains a surplus pole and zero at z = 1,
compare (13.6).
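The tendency toward PD action can be illustrated on a simple loop: a first-order test process (not one of the test processes II/III of Table 13.1; its parameters and the hand-tuned PD gains are assumed values) is disturbed by white noise at its input, and the stochastic control factor (13.5), here the ratio of effective (rms) output values, is computed for the PD law (13.7):

```python
import random

# illustrative first-order test process y(k) = a y(k-1) + b [u(k-1) + v(k-1)],
# i.e. white noise v(k) at the process input so that n(z) = G_P(z) v(z);
# process parameters and PD gains are assumed values, not from Table 13.1
a, b = 0.9, 0.1
random.seed(5)
N = 20000
v = [random.gauss(0.0, 1.0) for _ in range(N)]

def rms_output(q0, q2):
    # closed loop with w = 0, e(k) = -y(k), u(k) = q0 e(k) - q2 e(k-1), Eq. (13.7)
    y = u = e_old = 0.0
    s = 0.0
    for k in range(1, N):
        y = a * y + b * (u + v[k - 1])
        e = -y
        u = q0 * e - q2 * e_old
        e_old = e
        s += y * y
    return (s / (N - 1)) ** 0.5       # quadratic average value of y

y_free = rms_output(0.0, 0.0)          # process without controller
y_ctrl = rms_output(4.0, 2.0)          # hand-tuned PD controller
kappa = y_ctrl / y_free                # stochastic control factor, cf. Eq. (13.5)
print(kappa)                           # < 1: the PD law reduces the rms output
```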
14 Minimum Variance Controllers
for Stochastic Disturbances

In the design of minimum variance controllers the variance of the controlled variable
var[y(k)] = E{y2(k)}
is minimized. This criterion was used in [12.4], assuming a noise filter as in
(12.2.31) but with C(z⁻¹) = A(z⁻¹). The manipulated variable u(k) was not
weighted, so that in many cases excessive input changes are produced. A weighting
r on the input was therefore proposed in [14.1], so that the criterion

E{y²(k+i) + r u²(k)};  i = d + 1
is minimized. The noise n(k) can be modelled using a nonparametric model
(impulse response) or a parametric model as in (12.2.31). As a result of the
additional weighting of the input, the variance of the controlled variable is no
longer minimal; instead the variance of a combination of the controlled variable
and the manipulated variable are minimized. Therefore a generalized minimum
variance controller is produced.
The following sections derive the generalized minimum variance controller for
processes with and without dead time; the original minimum variance controller is
then a special case for r = 0. Parametric models are assumed for the noise filters,
as they are particularly suitable for realizing adaptive control algorithms on the
basis of parameter estimation methods.

14.1 Generalized Minimum Variance Controllers


for Processes without Deadtime

It is assumed that the process to be controlled is described by the transfer function

G_P(z) = y(z)/u(z) = B(z⁻¹)/A(z⁻¹) = (b₁z⁻¹ + … + b_m z⁻ᵐ)/(1 + a₁z⁻¹ + … + a_m z⁻ᵐ)   (14.1.1)

and by the noise filter

G_Pv(z) = n(z)/v(z) = λ D(z⁻¹)/C(z⁻¹) = λ[1 + d₁z⁻¹ + … + d_m z⁻ᵐ]/[1 + c₁z⁻¹ + … + c_m z⁻ᵐ].   (14.1.2)

Figure 14.1 Control with minimum variance controllers of processes without deadtime

Here v(k) is a statistically independent signal:

E{v(k) v(k+τ)} = 1 for τ = 0, 0 for τ ≠ 0   (14.1.3)
E{v(k)} = v̄ = 0

(see Figure 14.1).


Now w(k) = 0, i.e. e(k) = −y(k), is assumed. The problem is then to design
a controller which minimizes the criterion:

I(k+1) = E{y²(k+1) + r u²(k)}.   (14.1.4)

The controller must generate an input u(k) such that the errors induced by the noise
process {v(k)} are minimized according to (14.1.4). In the performance criterion,
y(k+1) is taken and not y(k), as u(k) can only influence the controlled variable at
time (k+1) because of the assumption b₀ = 0. Therefore y(k+1) must be predicted
on the basis of the known signal values y(k), y(k−1), … and u(k), u(k−1), ….
Using (14.1.1) and (14.1.2) a prediction of y(k+1) follows from

z y(z) = [B(z⁻¹)/A(z⁻¹)] z u(z) + λ[D(z⁻¹)/C(z⁻¹)] z v(z)   (14.1.5)

and

A(z⁻¹)C(z⁻¹) z y(z) = B(z⁻¹)C(z⁻¹) z u(z) + λ A(z⁻¹)D(z⁻¹) z v(z)   (14.1.6)

or

(1 + a₁z⁻¹ + … + a_m z⁻ᵐ)(1 + c₁z⁻¹ + … + c_m z⁻ᵐ) z y(z)
  = (b₁z⁻¹ + … + b_m z⁻ᵐ)(1 + c₁z⁻¹ + … + c_m z⁻ᵐ) z u(z)
  + λ(1 + a₁z⁻¹ + … + a_m z⁻ᵐ)(1 + d₁z⁻¹ + … + d_m z⁻ᵐ) z v(z).   (14.1.7)

After multiplying out and transforming back into the time domain we obtain:

y(k+1) + (a₁+c₁) y(k) + … + a_m c_m y(k−2m+1)
  = b₁ u(k) + (b₂ + b₁c₁) u(k−1) + … + b_m c_m u(k−2m+1)
  + λ[v(k+1) + (a₁+d₁) v(k) + … + a_m d_m v(k−2m+1)].   (14.1.8)

Therefore the performance criterion of (14.1.4) becomes:

I(k+1) = E{[−(a₁+c₁) y(k) − … − a_m c_m y(k−2m+1)
  + b₁ u(k) + (b₂ + b₁c₁) u(k−1) + … + b_m c_m u(k−2m+1)
  + λ[(a₁+d₁) v(k) + … + a_m d_m v(k−2m+1)]
  + λ v(k+1)]² + r u²(k)}.   (14.1.9)
At time instant k, all signal values are known with the exception of u(k) and
v(k+1). Therefore the expectation has to be taken only over v(k+1). As in addition
v(k+1) is independent of all other signal values:

I(k+1) = [−(a₁+c₁) y(k) − … − a_m c_m y(k−2m+1)
  + b₁ u(k) + (b₂ + b₁c₁) u(k−1) + … + b_m c_m u(k−2m+1)
  + λ[(a₁+d₁) v(k) + … + a_m d_m v(k−2m+1)]]²
  + λ² E{v²(k+1)}
  + 2λ[−(a₁+c₁) y(k) − … + b_m c_m u(k−2m+1)
  + λ[(a₁+d₁) v(k) + … + a_m d_m v(k−2m+1)]] E{v(k+1)}
  + r u²(k).   (14.1.10)
Therefore the condition for the optimal u(k) becomes:

∂I(k+1)/∂u(k) = 2[−(a₁+c₁) y(k) − … − a_m c_m y(k−2m+1)
  + b₁ u(k) + (b₂ + b₁c₁) u(k−1) + … + b_m c_m u(k−2m+1)
  + λ[(a₁+d₁) v(k) + … + a_m d_m v(k−2m+1)]] b₁ + 2r u(k) = 0.   (14.1.11)
Considering that, for the term multiplying b₁, according to (14.1.8)

[ … ] = [y(k+1) − λ v(k+1)]

is valid, it follows using (14.1.11) that:

[z y(z) − λ z v(z)] b₁ + r u(z) = 0.   (14.1.12)
Applying (14.1.5) to predict v(k+1):

λ z v(z) = [C(z⁻¹)/D(z⁻¹)] z y(z) − [B(z⁻¹)C(z⁻¹)/(A(z⁻¹)D(z⁻¹))] z u(z)

one finally obtains the generalized minimum variance controller:

G_RMV1(z) = u(z)/y(z) = Q(z⁻¹)/P(z⁻¹)
  = − A(z⁻¹)[D(z⁻¹) − C(z⁻¹)] z / {z B(z⁻¹)C(z⁻¹) + (r/b₁) A(z⁻¹)D(z⁻¹)}   (14.1.13)

(Abbreviation: MV1)
This controller contains the process model with polynomials A(z⁻¹) and B(z⁻¹)
and the noise model with polynomials C(z⁻¹) and D(z⁻¹). With r = 0, the simple
form of the minimum variance controller is obtained:

( ) __ A(Z-I)[D(z-l) - C(Z-I)]Z
G RMV2 z - ZB(Z-1 )C(Z-I)

= _ ZA(Z-I) [D(Z-I) -
ZB(Z-I) C(Z-I)
1J . (14.1.14)

(Abbreviation: MV2)
This controller is a cancellation controller, with the command variable behaviour of
the closed loop

G_w(z) = z[D(z⁻¹) − C(z⁻¹)] / [z D(z⁻¹)] = 1 − λ/G_Pv(z)

as shown by comparison with (6.2.4).

If C(z⁻¹) = A(z⁻¹), as assumed in [12.4], one obtains:

G_RMV3(z) = − [D(z⁻¹) − A(z⁻¹)] z / {z B(z⁻¹) + (r/b₁) D(z⁻¹)}   (14.1.15)

(Abbreviation: MV3)

and for r = 0

G_RMV4(z) = − [D(z⁻¹) − A(z⁻¹)] z / [z B(z⁻¹)]   (14.1.16)

(Abbreviation: MV4)
After extension with A(z⁻¹) in the numerator and the denominator, and comparison
with (6.2.4), it follows that this controller corresponds to a cancellation
controller with the command variable behaviour:

G_w(z) = z[D(z⁻¹) − A(z⁻¹)] / [z D(z⁻¹)] = 1 − λ/G_Pv(z)

Therefore, for the minimum variance controller with r = 0, the command variable
behaviour of the closed loop depends only on the noise filter. This means that,

apart from the process model G_P(z), the noise model G_Pv(z) has to be known
relatively well, which is of course a practical problem.
These controller equations have the following properties:

a) Controller order

        numerator    denominator
MV1     2m − 1       2m
MV2     2m − 1       2m − 1
MV3     m − 1        m
MV4     m − 1        m − 1

Because of the high order of MV1 and MV2, one should assume C(z⁻¹) = A(z⁻¹)
for modelling the noise and then prefer MV3 or MV4.

b) Cancellation of poles and zeros


Taking into consideration the discussion in chapter 6 on the approximate cancella-
tion of poles and zeros of controller and process, the following can be stated:
MV1: The poles of the process (A(z⁻¹) = 0) are cancelled. The controller should
therefore not be applied to processes whose poles are near the unit circle, or to
unstable processes.
MV2: The poles and zeros of the process (A(z⁻¹) = 0 and B(z⁻¹) = 0) are cancelled.
The controller should therefore be used neither with the processes excluded
for MV1 nor with processes with nonminimum phase behaviour.
MV3: In general no restriction.
MV4: The zeros of the process (B(z⁻¹) = 0) are cancelled. This controller should
therefore not be used with processes with nonminimum phase behaviour.
The most generally applicable controller is therefore MV3.
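For a first-order process with C(z⁻¹) = A(z⁻¹), the MV4 controller (14.1.16) reduces to a proportional law, and the closed-loop output becomes the white noise λv(k) with the minimum variance (14.1.22). A sketch with illustrative parameter values a₁, b₁, d₁, λ:

```python
import random

# illustrative first-order process with C = A: A = 1 + a1 z^-1, B = b1 z^-1,
# and noise filter lam * D/A with D = 1 + d1 z^-1
a1, b1, d1, lam = -0.8, 0.5, 0.4, 0.2
random.seed(6)
N = 5000
v = [random.gauss(0.0, 1.0) for _ in range(N)]   # unit-variance white noise

# for m = 1 the MV4 law (14.1.16) reduces to u(k) = -((d1 - a1)/b1) y(k)
q = (d1 - a1) / b1

y_prev = u_prev = v_prev = 0.0
ys = []
for k in range(N):
    # process with noise: A y = B u + lam D v, i.e.
    y = -a1 * y_prev + b1 * u_prev + lam * (v[k] + d1 * v_prev)
    u = -q * y                       # minimum variance control law MV4
    ys.append(y)
    y_prev, u_prev, v_prev = y, u, v[k]

var_y = sum(s * s for s in ys) / N
print(var_y)    # close to lam**2: y(k) becomes lam * v(k), cf. Eq. (14.1.22)
```

Note that both the zero of D (at z = −0.4) and the process zero lie inside the unit circle, so the stability conditions of section c) below are met.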

c) Stability

It is assumed that the conditions listed under b) are satisfied. Then the characteristic
equation of the closed loop becomes, for the minimum variance controller MV1:

A(z)[(r/b₁) A(z) + z B(z)] D(z) = 0   (14.1.17a)

and for MV3:

[(r/b₁) A(z) + z B(z)] D(z) = 0   (14.1.17b)

The polynomial C(z) is cancelled in both cases. It follows that for closed-loop
stability:

MV1 and MV3 (r ≠ 0):
- The zeros of the noise filter, D(z) = 0, must lie within the unit circle of the z-plane.
- The zeros of

  [(r/b₁) A(z) + z B(z)] = 0

  must lie within the unit circle. The larger the weight r on the process input, the
  nearer these zeros are to the zeros of A(z) = 0, i.e. to the process poles.
- For MV1 the process poles must lie inside the unit circle.

MV2 and MV4 (r = 0):
- The characteristic equation of the closed loop becomes, for r = 0:

  MV2: z A(z) B(z) D(z) = 0;  MV4: z B(z) D(z) = 0   (14.1.17c)

- Therefore the zeros of the process, B(z) = 0, and of the noise filter, D(z) = 0, must
  lie within the unit circle.
- The poles of the noise filter, C(z) = 0 (for MV2), do not influence the characteristic
  equation and can therefore lie anywhere.
d) Dynamic control factor

For the controller MV1 the dynamic control factor of the closed loop is

R(z) = 1/[1 − G_R(z) G_P(z)]
  = {z B(z⁻¹) C(z⁻¹) + (r/b₁) A(z⁻¹) D(z⁻¹)} / {[z B(z⁻¹) + (r/b₁) A(z⁻¹)] D(z⁻¹)}   (14.1.18)

With r = 0, for the controllers MV2 or MV4 it follows that:

R(z) = C(z⁻¹)/D(z⁻¹)   (14.1.19)

Therefore the dynamic control factor for r = 0 is the inverse of the noise filter. It
follows that:

G_v(z) = y(z)/[λ v(z)] = R(z) G_Pv(z) (1/λ) = 1.   (14.1.20)

Hence the closed loop is forced to behave as the reciprocal of the noise filter. Poles
and zeros of the process do not appear in (14.1.19) because they are cancelled by
the controller. With increasing weight r on the process input, however, the poles of
the process increasingly influence the closed-loop behaviour, as can be seen from
(14.1.18).

e) Control variable y(k)

For the disturbance transfer behaviour of the closed loop using controller MV1 one
obtains:

G_v(z) = y(z)/[λ v(z)] = (1/λ) G_Pv(z) / [1 − G_R(z) G_P(z)]
  = {(r/b₁) A(z⁻¹) D(z⁻¹) + z B(z⁻¹) C(z⁻¹)} / {[(r/b₁) A(z⁻¹) + z B(z⁻¹)] C(z⁻¹)}
  = 1 + (r/b₁) A(z⁻¹)[D(z⁻¹) − C(z⁻¹)] / {[(r/b₁) A(z⁻¹) + z B(z⁻¹)] C(z⁻¹)}.   (14.1.21)

The controlled variable y(k) for r ≠ 0 is a mixed autoregressive moving-average
process, of order 2m for MV1 and of order m for MV3. For r → 0, i.e. for MV2 and
MV4, y(z) → λ v(z): the controlled variable becomes a statistically independent
(white noise) process with variance σ_y² = λ² σ_v². The smaller the weight r on
the process input, the smaller is the variance of the controlled variable y(k), and the
controlled variable converges to the white noise signal λ v(k). The smallest variance
which can be attained by a minimum variance controller is therefore:

min var[y(k)] = λ².   (14.1.22)
f) Special case

If D(z⁻¹) = C(z⁻¹), all minimum variance controllers are identically zero. If
a statistically independent noise n(k) = λ v(k) acts directly on the controlled variable,
minimum variance controllers cannot decrease the variance of the controlled
variable; only for coloured noise n(k) can this variance be reduced. The more
'colourful' the noise, i.e. the greater the difference [D(z⁻¹) − C(z⁻¹)], the larger is
the effect of the minimum variance controller.
g) Behaviour of minimum variance controllers for constant disturbances E{v(k)} ≠ 0

From Eq. (14.1.13) it follows that the static behaviour of MV1 satisfies:

G_RMV1(1) = − A(1)[D(1) − C(1)] / {B(1) C(1) + (r/b₁) A(1) D(1)}
  = − Σaᵢ[Σdᵢ − Σcᵢ] / {Σbᵢ Σcᵢ + (r/b₁) Σaᵢ Σdᵢ}   (14.1.23)

Here Σ is read as Σ_{i=0}^{m}. If the process G_P(z) has a proportional action
behaviour, i.e. Σaᵢ ≠ 0 and Σbᵢ ≠ 0, then the controller MV1 in general has
a proportional action static behaviour. For constant disturbances, therefore, offsets
occur. This is also the case for the minimum variance controllers MV2, MV3 and
MV4. To avoid

offsets with minimum variance controllers some modifications must be made;
these are discussed in section 14.4.
h) Choice of the weighting factor r

The influence of the weighting factor r on the manipulated variable can be
estimated by looking at the first input u(0) in the closed loop after a reference
variable step w(k) = 1(k); see Eq. (5.2.30). One then obtains u(0) = q₀ w(0) = q₀.
Therefore q₀ is a measure of the size of the process input. For the controller MV1
(process without deadtime) it follows, if the algorithm is written in the form of
a general linear controller as in Eq. (11.1.1), that

q₀ = (d₁ − c₁) / (b₁ + r/b₁)   (14.1.24)

and for MV3

q₀ = (d₁ − a₁) / (b₁ + r/b₁).   (14.1.25)

Hence there is approximately a hyperbolic relationship between q₀ and r/b₁ for
r/b₁ ≫ b₁. r = 0 leads to MV2 or MV4 with q₀ = (d₁ − c₁)/b₁ or q₀ = (d₁ − a₁)/b₁.
A reduction of this q₀ by one half is obtained by choosing

r = b₁².   (14.1.26)
b₁ can be estimated from the process transient response, as for a process input step
u₀ the relationship b₁ = y(1)/u₀ holds. For a process with deadtime one obtains, for
MV1−d as well as for MV3−d (see section 14.2),

q₀ = … / (b₁ + r/b₁)   (14.1.27)
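The hyperbolic dependence of q₀ on r in (14.1.25) and the halving rule (14.1.26) can be checked directly; the parameter values below are illustrative:

```python
# illustrative parameters for (14.1.25): q0 = (d1 - a1)/(b1 + r/b1) for MV3
a1, b1, d1 = -0.8, 0.5, 0.4

def q0(r):
    return (d1 - a1) / (b1 + r / b1)

print(q0(0.0))        # r = 0: the MV4 value (d1 - a1)/b1
print(q0(b1 ** 2))    # r = b1^2 halves q0, Eq. (14.1.26)
assert abs(q0(b1 ** 2) - 0.5 * q0(0.0)) < 1e-12
```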

i) Summary
Typical properties of the minimum variance controllers are summarized in Table
14.1. The best overall properties are shown by controller MV3. Hence for practical
realization of minimum variance controllers, C(z^-1) = A(z^-1) should be assumed.
Here it should be emphasized once again that minimum variance controllers are
decidedly characterized by the noise filter G_Pv(z). For r = 0, the closed loop
behaviour is only prescribed by the noise filter.
Minimum variance controllers, therefore, require a relatively precise knowledge
of the stochastic noise model which can be expected only in connection with
adaptive estimation methods, see chapter 26.
The larger the weighting factor r, the more the closed loop behaviour is
characterized by the denominator polynomial A(z^-1) of the process, compare (14.1.17).
This leads to a less influential noise filter. As, generally, the process model is known
Table 14.1. Different properties of minimum variance controllers (A⁻ = 0 means: zeros of
A on or outside the unit circle, c.f. chapter 6)

Controller      G_R                               Danger of          Offset disappears for
                                                  instability for    v = 1        w = 1

MV1             − zA[D − C] / (zBC + (r/b1)AD)    A⁻ = 0, D⁻ = 0     -            -

MV2 (r = 0)     − zA(D − C) / (zBC)               A⁻ = 0, B⁻ = 0,    C(1) = 0     C(1) = 0
                                                  D⁻ = 0

MV3 (C = A)     − z[D − A] / (zB + (r/b1)D)       D⁻ = 0             A(1) = 0     -

MV4 (C = A,     − z[D − A] / (zB)                 B⁻ = 0, D⁻ = 0     A(1) = 0     A(1) = 0
     r = 0)

more precisely than the noise model, minimum variance controllers with r > 0 are
recommended for application.
In deriving minimum variance controllers we assumed b0 = 0. If b0 ≠ 0, one
needs only replace b1 by b0 and write

B(z^-1) = b0 + b1·z^-1 + ... + bm·z^-m .

14.2 Generalized Minimum Variance Controllers


for Processes with Deadtime

The process to be controlled may be described by the transfer function with
deadtime

G_P(z) = y(z)/u(z) = [B(z^-1)/A(z^-1)]·z^-d
       = [(b1·z^-1 + ... + bm·z^-m) / (1 + a1·z^-1 + ... + am·z^-m)]·z^-d        (14.2.1)

as shown in Figure 14.2. The disturbance filter, assumed as in (14.1.2) and (14.1.3),
describes the generation of the disturbance signal n(k) from v(k). As the input u(k)
for processes with deadtime d can influence the controlled variable y(k + d + 1) at
the earliest, the performance criterion

I(k + 1) = E{y²(k + d + 1) + r·u²(k)}        (14.2.2)


Figure 14.2 Control with a minimum variance controller for processes with dead time

is used. Corresponding to (14.1.5), one obtains for the prediction of y(k + d + 1):

(14.2.3)

As at the time k, for which u(k) must be calculated, the disturbance signals
v(k + 1), ..., v(k + d + 1) are unknown, this part of the disturbance filter is
separated as follows:

D(z^-1)/C(z^-1) = F(z^-1) + z^-(d+1)·L(z^-1)/C(z^-1) .        (14.2.4)

As can also be seen from Figure 14.2, the disturbance filter is separated into a part
F(z^-1), which describes the parts of n(k) which cannot be controlled by u(k), and
a part z^-(1+d)·L(z^-1)/C(z^-1), describing the part of n(k) in y(k) which can be
influenced by u(k). The corresponding polynomials are:

F(z^-1) = 1 + f1·z^-1 + ... + fd·z^-d        (14.2.5)

L(z^-1) = l0 + l1·z^-1 + ... + l_(m−1)·z^-(m−1) .        (14.2.6)

Their parameters are obtained by equating coefficients in the identity:


D(z^-1) = F(z^-1)·C(z^-1) + z^-(d+1)·L(z^-1) .        (14.2.7)

Example 14.2.1
For m = 3 and d = 1 it follows from (14.2.7)

f1 = d1 − c1
l0 = d2 − c2 − c1·f1
l1 = d3 − c3 − c2·f1
l2 = −c3·f1

and for m = 3 and d = 2

f1 = d1 − c1
f2 = d2 − c2 − c1·f1
l0 = d3 − c3 − c1·f2 − c2·f1
l1 = −c2·f2 − c3·f1
l2 = −c3·f2 .

The coefficients for m = 2 are obtained with c3 = d3 = 0, for m = 1 with c2 = d2 = c3 = d3 = 0.
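The coefficient matching in (14.2.7) can be carried out for general m and d by the recursion behind the example above. A minimal sketch (the function name and the test values are our own, not from the book):

```python
def separate_noise_filter(c, d_poly, d):
    """Solve D(z^-1) = F(z^-1)C(z^-1) + z^-(d+1) L(z^-1), Eq. (14.2.7),
    by equating coefficients of z^-n.  c = [1, c1, ..., cm],
    d_poly = [1, d1, ..., dm]; returns F = [1, f1, ..., fd] and
    L = [l0, ..., l_{m-1}]."""
    m = len(d_poly) - 1
    f = [1.0]
    for n in range(1, d + 1):                 # coefficients of z^-1 ... z^-d
        dn = d_poly[n] if n <= m else 0.0
        f.append(dn - sum(c[j] * f[n - j]
                          for j in range(1, min(n, m) + 1)))
    l = []
    for k in range(m):                        # coefficients of z^-(d+1) ...
        n = d + 1 + k
        dn = d_poly[n] if n <= m else 0.0
        conv = sum(c[j] * f[n - j]
                   for j in range(m + 1) if 0 <= n - j <= d)
        l.append(dn - conv)
    return f, l
```

For m = 3, d = 2 this reproduces the relations of Example 14.2.1, e.g. f2 = d2 − c2 − c1·f1 and l1 = −c2·f2 − c3·f1.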

(14.2.4) now leads to:

A(z^-1)C(z^-1)·z^(d+1)·y(z) = B(z^-1)C(z^-1)·z·u(z)
                              + λF(z^-1)·A(z^-1)C(z^-1)·z^(d+1)·v(z)
                              + λL(z^-1)·A(z^-1)·v(z) .        (14.2.8)

After multiplying out and transforming back into the time-domain, one obtains from
(14.1.7) to (14.1.10) I(k + 1) and from ∂I(k + 1)/∂u(k) = 0 as in (14.1.12):

[z^(d+1)·y(z) − λF(z^-1)·z^(d+1)·v(z)]·b1 + r·u(z) = 0 .        (14.2.9)
Substituting from (14.2.3)

λv(z) = [C(z^-1)/D(z^-1)]·y(z) − [B(z^-1)C(z^-1)/(A(z^-1)D(z^-1))]·z^-d·u(z)

one finally obtains:

G_RMV1d(z) = u(z)/y(z)
           = − A(z^-1)[D(z^-1) − F(z^-1)C(z^-1)]·z^(d+1) / [zB(z^-1)C(z^-1)F(z^-1) + (r/b1)·A(z^-1)D(z^-1)]
           = − A(z^-1)L(z^-1) / [zB(z^-1)C(z^-1)F(z^-1) + (r/b1)·A(z^-1)D(z^-1)]        (14.2.10)

(Abbreviation: MV1 − d)
For r = 0:

G_RMV2d(z) = − A(z^-1)L(z^-1) / [zB(z^-1)C(z^-1)F(z^-1)] .        (14.2.11)

(Abbreviation: MV2 − d)
With C(z^-1) = A(z^-1) and r ≠ 0 it follows that

G_RMV3d(z) = − L(z^-1) / [zB(z^-1)F(z^-1) + (r/b1)·D(z^-1)]        (14.2.12)
24 14 Minimum Variance Controllers for Stochastic Disturbances

(Abbreviation: MV3 − d)
and with r = 0:

G_RMV4d(z) = − L(z^-1) / [zB(z^-1)F(z^-1)] .        (14.2.13)

(Abbreviation: MV4 − d)
The properties of these minimum variance controllers with d ≥ 1 can be sum-
marized as follows:
a) Controller order

- MV1 − d and MV2 − d:  Numerator: 2m − 1
                        Denominator: 2m + d − 1  (d ≥ 1)
- MV3 − d and MV4 − d:  Numerator: m − 1
                        Denominator: m + d − 1  (d ≥ 1)
b) Cancellation of poles and zeros
As for controllers without dead time.
c) Stability

The characteristic equations are

MV1 − d:

[(r/b1)·A(z) + zB(z)]·A(z)·D(z) = 0        (14.2.14a)

MV3 − d:

[(r/b1)·A(z) + zB(z)]·D(z) = 0        (14.2.14b)

MV2 − d:

zA(z)B(z)D(z) = 0        (14.2.15a)

MV4 − d:

zB(z)D(z) = 0 .        (14.2.15b)
They are identical with the characteristic equations for the minimum variance
controllers without deadtime, and therefore one reaches the same conclusions
concerning stability.
d) Dynamic control factor
For MV1 − d one obtains:

R(z) = y(z)/n(z) = [zB(z^-1)C(z^-1)F(z^-1) + (r/b1)·A(z^-1)D(z^-1)] / {[(r/b1)·A(z^-1) + zB(z^-1)]·D(z^-1)}        (14.2.16)

With r = 0 it follows that for controller MV2 − d:

R(z) = C(z^-1)F(z^-1) / D(z^-1) .        (14.2.17)

Again in the dynamic control factor the reciprocal disturbance filter arises, but
it is now multiplied by F(z^-1), which takes into account the disturbances
v(k + 1) ... v(k + d + 1) which cannot be controlled by u(k).

e) Controlled variable
For r = 0, we have for controllers MV2 − d and MV4 − d

y(z)/(λv(z)) = R(z)·G_Pv(z)·(1/λ) = F(z^-1) .        (14.2.18)

y(z) is therefore the moving average process

y(k) = [v(k) + f1·v(k − 1) + ... + fd·v(k − d)]·λ        (14.2.19)

and the variance of y(k) is:

E{y²(k)} = [1 + f1² + ... + fd²]·λ² .        (14.2.20)

The larger the deadtime the larger is the variance of the controlled variable.
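The moving-average relation (14.2.19) with variance (14.2.20) can be checked by a short simulation (a sketch with arbitrarily chosen f-coefficients, not from the book; the sample variance should approach (1 + f1² + ... + fd²)·λ²):

```python
import random

def ma_variance_check(f, lam, n_samples=200000, seed=1):
    """Simulate y(k) = [v(k) + f1 v(k-1) + ... + fd v(k-d)] * lam,
    Eq. (14.2.19), for unit-variance white noise v(k) and return the
    sample variance together with the theoretical value (14.2.20)."""
    rng = random.Random(seed)
    d = len(f)
    v = [rng.gauss(0.0, 1.0) for _ in range(n_samples + d)]
    y = [lam * (v[k] + sum(f[i] * v[k - 1 - i] for i in range(d)))
         for k in range(d, n_samples + d)]
    mean = sum(y) / len(y)
    var = sum((yk - mean) ** 2 for yk in y) / len(y)
    var_theory = lam ** 2 * (1.0 + sum(fi ** 2 for fi in f))
    return var, var_theory
```

With f = [0.5, 0.3] and λ = 0.1, the theoretical variance is 0.01·(1 + 0.25 + 0.09) = 0.0134, and the simulated variance agrees within the statistical accuracy of the sample.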

f) Offset behaviour
In principle, the same disadvantages arise as for d = 0. In order to remove offsets,
one can proceed in the same way as described in section 14.4.

14.3 Minimum Variance Controllers for Processes with


Pure Deadtime

The minimum variance controllers of section 14.2, being structurally optimal for
the process B(z^-1)·z^-d/A(z^-1) and the stochastic disturbance filter D(z^-1)/C(z^-1),
were derived for timelag processes with deadtime. As can be seen from
(14.2.18)–(14.2.20), the controlled variables of the controllers MV2 − d and
MV4 − d are a moving average signal process of order d whose variance increases
strongly with deadtime d.
As in section 9.2.2 the minimum variance controllers for pure deadtime processes
are considered. Based on
(14.3.1)
and B(z^-1) = b1·z^-1 and the deadtime d − 1 as in section 14.2, the following

controllers can be derived (c.f. (9.1.4)):

a) Disturbance filter: G_Pv = D(z^-1)/C(z^-1)

G_RMV1d = − L(z^-1) / [b1·C(z^-1)F(z^-1) + (r/b1)·D(z^-1)]        (14.3.2)

G_RMV2d = − L(z^-1) / [b1·C(z^-1)F(z^-1)]        (14.3.3)

with, from (14.2.5),

F(z^-1) = 1 + f1·z^-1 + ... + f_(d−1)·z^-(d−1)        (14.3.4)

and from (14.2.7) one now has:

D(z^-1) = F(z^-1)·C(z^-1) + z^-d·L(z^-1) .        (14.3.5)

If the order of the polynomial C(z^-1) is m ≥ 1, or that of D(z^-1) is m ≥ d, then there exist
nonzero controllers.
b) Disturbance filter: G_Pv(z^-1) = D(z^-1), i.e. C(z^-1) = A(z^-1) = 1

G_RMV3d(z) = − L(z^-1) / [b1·F(z^-1) + (r/b1)·D(z^-1)]        (14.3.6)

G_RMV4d(z) = − L(z^-1) / [b1·F(z^-1)] .        (14.3.7)

From (14.2.7) it follows that L(z^-1) = 0, and therefore no controller exists, if the
order m of D(z^-1) is m ≤ d − 1. This again illustrates the principle used to derive
the minimum variance controller: the controlled variable y(k + d + 1) is predicted
based on known values of u(k − 1), u(k − 2), ... and v(k − 1), v(k − 2), ..., and the
predicted value is used to compute the input u(k). Here the component of the disturb-
ance signal

y_v(k + d + 1) = [v(k + d) + f1·v(k + d − 1) + ... + f_(d−1)·v(k + 1)]·λ        (14.3.8)

can neither be considered nor controlled (see (14.2.4) and (14.2.19)). If now the order of
D(z^-1) is m = d − 1, then:

y_v(k + d + 1) = [v(k + d) + d1·v(k + d − 1) + ... + d_(d−1)·v(k + 1)]·λ .        (14.3.9)

Then D(z^-1) = F(z^-1), and the disturbance signal consists only of the uncontrollable
part, so that the minimum variance controller cannot lead to any improvement over
the open-loop and is therefore null. Only if m ≥ d can the minimum variance
controller generate a smaller variance of y(k) than the open-loop.

Hence minimum variance controllers lead only to a better control performance,
compared with the uncontrolled pure deadtime process, if the disturbance signal
n(k) acting on y(k) is an autoregressive moving average (coloured noise) process or
a moving average process of order m ≥ d.

14.4 Minimum Variance Controllers without Offset

To avoid offsets of the controlled variable for constant external disturbances or
constant reference value changes, the controller should satisfy, c.f. chapter 4,

lim_{z→1} G_R(z) = ∞ .        (14.4.1)

This is not true for the derived minimum variance controllers in the case of
proportional acting processes and noise filters, since A(1) ≠ 0, B(1) ≠ 0 and
C(1) ≠ 0, D(1) ≠ 0. Therefore the controllers must be suitably modified.

14.4.1 Additional Integral Acting Term


The simplest modification is to add an additional pole at z = 1 to the minimum
variance controller transfer function. For the design of the corresponding control-
ler this pole can be added to the process model at MV3 and MV4. Rather more
freedom in weighting the integral term is obtained by multiplying the minimum
variance controller

G_MV(z) = u'(z)/y(z)        (14.4.2)

by the proportional integral action term:

G_PI(z) = u(z)/u'(z) = 1 + α/(z − 1) = [1 − (1 − α)·z^-1] / [1 − z^-1] .        (14.4.3)

This results in an additional difference equation

u(k) − u(k − 1) = u'(k) − (1 − α)·u'(k − 1)        (14.4.4)

with the special cases

α = 0:  u(k) = u'(k)  (only P-action; no I-action)
α = 1:  u(k) − u(k − 1) = u'(k)  (equal weighting of the P- and I-term)

For α ≠ 0,

lim_{z→1} G_R(z) = lim_{z→1} G_MV(z)·G_PI(z) = ∞

is fulfilled if for controllers MV1 and MV2 D(1) ≠ C(1), and for MV3 and MV4
D(1) ≠ A(1). If these conditions are not satisfied, additional poles at z = 1 can be

assumed:

MV2:  C'(z) = (z − 1)·C(z)
MV3 and MV4:  A'(z) = (z − 1)·A(z)

Only for MV1 is there no other possibility. The insertion of integrations has the
advantage of removing offsets. However, this is accompanied by an increase of the
variance of y(k) for higher frequency disturbance signals v(k), c.f. section 14.5.
Through a suitable choice of α both effects can be weighted against each other.
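The difference equation (14.4.4) is easy to apply as a small post-filter on the minimum variance controller output; the sketch below (our own helper, not from the book) processes a sequence u'(k):

```python
def pi_action(u_prime, alpha):
    """Apply Eq. (14.4.4): u(k) = u(k-1) + u'(k) - (1 - alpha) u'(k-1),
    with u(-1) = u'(-1) = 0.  alpha = 0 gives pure P-action (u = u'),
    alpha = 1 weights the P- and I-terms equally."""
    u, u_last, up_last = [], 0.0, 0.0
    for up in u_prime:
        u_last = u_last + up - (1.0 - alpha) * up_last
        u.append(u_last)
        up_last = up
    return u
```

For alpha = 0 the filter returns u'(k) unchanged; for alpha = 1 it accumulates u'(k), i.e. it acts as a discrete integrator.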

14.4.2 Minimization of the Control Error


The minimum variance controllers treated before were derived for a vanishing
reference variable w(k) = 0 and therefore for y(k) = −e(k). Now the performance
criterion is modified into

I(k + 1) = E{[y(k + d + 1) − w(k)]² + r·[u(k) − u_w(k)]²}        (14.4.5)

so that the variances around the non-zero operating point [w(k); u_w(k)] are
minimized, with

u_w(k) = [A(1)/B(1)]·w(k) = (1/K_P)·w(k)        (14.4.6)

the value of u(k) for y(k) = w(k), the zero-offset case. A derivation corresponding to
section 14.2 then leads to the modified minimum variance controller [14.2]

u(z) = − {A(z^-1)L(z^-1) / [zB(z^-1)C(z^-1)F(z^-1) + (r/b1)·A(z^-1)D(z^-1)]}·y(z)

       + {A(z^-1)D(z^-1) / [zB(z^-1)C(z^-1)F(z^-1) + (r/b1)·A(z^-1)D(z^-1)]}·[1 + (r/b1)·(1/K_P)]·w(z) .        (14.4.7)

This controller removes offsets arising from variations in the reference variable
w(k). Another very simple possibility in connection with closed loop parameter
estimation is shown in section 26.4.

14.5 Simulation Results with Minimum Variance


Controllers

The behaviour of control loops with minimum variance controllers is now shown
using an example. The minimum variance controllers MV3 and MV4 were
simulated for a second-order test process using a digital computer.

Process VII (second-order low-pass process)

A(z^-1) = 1 − 1.036·z^-1 + 0.2636·z^-2
B(z^-1) = 0.1387·z^-1 + 0.0889·z^-2        (14.5.1)
D(z^-1) = 1 + 0.5·z^-1 + 0.25·z^-2

The polynomials A and B are obtained from the transfer function

G(s) = 1 / [(1 + 7.5s)(1 + 5s)]

with a sample time T0 = 4 s.
For the minimum variance controller MV4, (14.1.15), the quadratic mean values
of the disturbance signal n(k), the controlled variable y(k) and the process input u(k)
were determined by simulation for weighting factors on the process input of
r = 0 ... 0.5 and weighting factors on the integral part of α = 0 ... 0.4, applying
(14.1.25). Then the characteristic value (the stochastic control factor)

κ = √(ȳ²(k)) / √(n̄²(k))        (14.5.2)

was determined and shown in Figure 14.3 as a function of the manipulating effort S_u.

Figure 14.3 Stochastic control factor κ as a function of the manipulating effort S_u for process
VII with the minimum variance controller MV3 for different weighting factors r on the
process input and α on the integral part.

Figure 14.4a-c. Signals for process VII with minimum variance controllers MV4 and MV3.
a Stochastic disturbance n(k); b step change in the reference variable w(k), α = 0; c step change
in the reference variable w(k), α = 0.2.

In Figure 14.3 N = 150 samples were used. Figure 14.3 now shows:
- The best control performance (smallest κ) is obtained using r = 0 and α = 0, i.e.
for controller MV4.
- The rather small weighting factors of r = 0.01 or 0.02 reduce the effective value of
the manipulated variable compared with r = 0 by 48% or 60% at the cost of
a relatively small increase in the effective value of the controlled variable by 12%
or 17% (numbers given for α = 0). Only for r ≥ 0.03 does κ become significantly
worse.
- Small values of the integral part α ≤ 0.2 increase the effective value of the
controlled variable by only 3 ... 18%, depending on r. For α > 0.3 the
control performance, however, becomes significantly worse.
Figure 14.4a shows a section of the disturbance signal n(k) for λ = 0.1, the resulting
controlled variable and manipulated variable with the minimum variance controller
MV4 (r = 0)

G_RMV4(z) = − [11.0743 − 0.0981·z^-1] / [1 + 0.6410·z^-1]

and with the controller MV3 for r = 0.02

G_RMV3(z) = − [5.4296 − 0.0481·z^-1] / [1 + 0.5691·z^-1 + 0.1274·z^-2] .
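These controller coefficients follow directly from (14.2.12) and (14.2.13) with d = 0, where (14.2.7) gives F = 1 and L(z^-1) = z[D(z^-1) − A(z^-1)]: G_RMV3 = −z(D − A)/(zB + (r/b1)D), and r = 0 yields MV4. A small sketch (our own helper code) reproducing the printed numbers:

```python
def mv_controller(a, b, d_poly, r):
    """Coefficients (in z^-1, denominator normalized) of the controller
    -z(D - A) / (zB + (r/b1) D) for a second-order process without
    deadtime: MV3 from Eq. (14.2.12); r = 0 gives MV4, Eq. (14.2.13).
    a = [1, a1, a2], b = [0, b1, b2], d_poly = [1, d1, d2]."""
    num = [d_poly[1] - a[1], d_poly[2] - a[2]]        # z(D - A)
    den = [b[1] + (r / b[1]) * d_poly[0],             # zB + (r/b1) D
           b[2] + (r / b[1]) * d_poly[1],
           (r / b[1]) * d_poly[2]]
    return [x / den[0] for x in num], [x / den[0] for x in den]

A = [1.0, -1.036, 0.2636]
B = [0.0, 0.1387, 0.0889]
D = [1.0, 0.5, 0.25]
num4, den4 = mv_controller(A, B, D, r=0.0)     # MV4
num3, den3 = mv_controller(A, B, D, r=0.02)    # MV3, r = 0.02
```

num4/den4 reproduce 11.0743, −0.0981 over 1, 0.6410 and num3/den3 reproduce 5.4296, −0.0481 over 1, 0.5691, 0.1274 (the overall controller sign is the minus written in the text).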

For MV4 it can be seen that the standard deviation of the controlled variable y is
significantly smaller than that of n(k); the peaks of n(k) are especially reduced.
However, this behaviour can only be obtained using large changes in u(k).
A weighting factor of r = 0.02 in the controller MV3 leads to significantly smaller
amplitudes of the manipulated variable and somewhat larger peak values of the controlled variable.
Figure 14.4b shows the responses of the controlled variable to a step change in
the reference value. The controller MV4 produces large input changes which are
only weakly damped; the controlled variable also shows oscillating behaviour and
an offset. By using the deadbeat controller DB(v) the maximal input value would be
u(O) = 1/(b 1 + b2 ) = 4.4; the minimum variance controller MV4, however, leads to
values which are more than double this size. In addition, the offset means that the
resulting closed loop behaviour is unsatisfactory for deterministic step changes of
w(k). The time response of u(k) obtained using controller MV3 and r = 0.02 is
much better. However, the input u(O) is still as high as for DB (v) and the offset is
larger than for MV4.
For α = 0.2, Figure 14.4c, the offset vanishes. The time responses of the
manipulated and the controlled variable are more damped compared with Figure 14.4b.
The transient responses of the various controllers are shown in Figure 14.5.
The simulation results with process III (third order with deadtime) show that
with increasing process order it becomes more and more difficult to obtain
a satisfactory response to a step change in the reference variable. In general,
however, it is possible to systematically find a compromise matched to each special
case by suitable variation of the weighting factors r and α.

Figure 14.5 Transient responses of the controllers MV3 and MV4 and process VII for
different r and α (a: α = 0; b: α = 0.2).

14.6 Comparison of Various Deterministic and


Stochastic Controllers

In order to compare the control behaviour of various control algorithms for
a stochastic and a deterministic noise signal, seven different control algorithms
were simulated with the following process:

G_P(s) = y(s)/u(s) = 1 / [(1 + 3.75s)(1 + 2.5s)]        (14.6.1)

With T0 = 2 s it yields

G_P(z) = y(z)/u(z) = B(z^-1)/A(z^-1) = [0.1387·z^-1 + 0.0889·z^-2] / [1 − 1.036·z^-1 + 0.2636·z^-2]        (14.6.2)
(test process VII, Appendix). This process was disturbed at the input by a reproduc-
ible coloured noise signal, generated by a noise signal generator in such a way that

Figure 14.6a-h. Graph of controlled variable y(k) and manipulated variable u(k) of a sec-
ond-order process with different control algorithms for a stochastic noise signal n(k). a noise
signal n(k) (without control); b MV4; c MV3, r/b1 = 0.144; d MV3-PI, r/b1 = 0.144, α = 0.1;
e DB(v); f DB(v + 1), q0 = 2.158; g 3PC-2, q0 = 4.394; h LCPA, z1 = 0.1, z2 = 0.4,
z3,4 = 0.1 ± 0.1i (linear controller with pole prescription)

the noise signal filter

G_Pv(z) = n(z)/v(z) = D(z^-1)/A(z^-1) = [1 + 0.0500·z^-1 + 0.8000·z^-2] / [1 − 1.036·z^-1 + 0.2636·z^-2]        (14.6.3)

resulted.
Figure 14.6 shows the graph of the controlled and the manipulated variable for
a stochastic noise signal with seven different control algorithms. Figure 14.8

Figure 14.7 Graph of controlled variable y(k) and manipulated variable u(k) for a second-
order process with various control algorithms for a step change in the reference variable
w(k).

Figure 14.8 Mean squared control deviation S_e as a function of the mean squared
input power S_u for the control algorithms of Figure 14.6 for stochastic disturbances;
1: no control.

Figure 14.9 Mean squared control deviation S_e as a function of the mean squared
input power S_u for the control algorithms for step changes in the reference variables shown in
Figure 14.7.

represents the averaged quadratic control deviation

S_e = (1/M)·Σ_{k=0}^{M−1} e²(k)        (14.6.4)

as a function of the mean squared input power

S_u = (1/M)·Σ_{k=0}^{M−1} [u(k) − u(∞)]²        (14.6.5)

for M = 100 [26.14].
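The two performance measures (14.6.4) and (14.6.5) can be computed directly from recorded signal sequences, as the following sketch shows (the signal values are illustrative, not those of Figure 14.8):

```python
def control_deviation(e):
    """Averaged quadratic control deviation S_e, Eq. (14.6.4)."""
    return sum(ek ** 2 for ek in e) / len(e)

def input_power(u, u_inf):
    """Mean squared input power S_u around the final value u(inf), Eq. (14.6.5)."""
    return sum((uk - u_inf) ** 2 for uk in u) / len(u)

# illustrative sequences
e = [1.0, 0.5, 0.25, 0.0]
u = [4.0, 2.0, 1.5, 1.0]
s_e = control_deviation(e)          # (1 + 0.25 + 0.0625 + 0) / 4
s_u = input_power(u, u_inf=1.0)     # (9 + 1 + 0.25 + 0) / 4
```

Plotting S_e over S_u for several controllers, as done in Figures 14.8 and 14.9, then makes the trade-off between control quality and manipulation effort directly visible.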
MV4 provides the smallest S_e yet the largest S_u; compare the strong oscillations of u(k)
in Figure 14.6b. For MV3 and MV3 − PI, S_e is somewhat larger,
compensated by a smaller S_u. For approximately the same S_u as MV3, DB(v),
DB(v + 1) and 3PC − 2 furnish a poorer S_e. The fundamentally better control
performance of the minimum variance controllers for stochastic disturbances is
also demonstrated by the comparison with the LCPA.
Figure 14.7 represents the signal charts of the same control algorithms, and
Figure 14.9 shows S_e and S_u (M = 20) for a step change in the reference value w(k).
DB(v) and LCPA provided the smallest S_e, however with a large S_u. An exceedingly
strong oscillation, with even essentially larger control amplitudes compared with
the deadbeat controllers, arises with the minimum variance controllers, especially
with MV4. This shows that the minimum variance controllers have a very
poor control behaviour for step changes in the reference values. In this case
deadbeat controllers provide the optimal control performance.
15 State Controllers for Stochastic Disturbances

15.1 Optimal State Controllers for White Noise

The process model assumed in chapter 8 for the derivation of the state controller
for deterministic initial values is now excited by a vector stochastic noise signal v(k)
x(k + 1) = Ax(k) + Bu(k) + Fv(k) . (15.1.1)
The components of v(k) are assumed to be normally distributed, statistically
independent signal processes with expectation
E{v(k)} = 0        (15.1.2)

and covariance matrix

cov[v(k); τ = i − j] = E{v(i)·v^T(j)} = V·δ_ij        (15.1.3)

where

δ_ij = 1 for i = j;  δ_ij = 0 for i ≠ j

is the Kronecker delta function. v(k) is assumed to be statistically independent of
x(k). The initial value x(0) is also a normal stochastic process with:

E{x(0)} = 0
cov[x(0)] = E{x(0)·x^T(0)} = X0 .        (15.1.4)

The matrices V and Xo are positive semidefinite.


Required is a controller which generates a process input u(k), based on com-
pletely measurable state variables x(k), such that the control system approaches
the final state x(N) ≈ 0 and the quadratic performance criterion

E{I} = E{ x^T(N)·Q·x(N) + Σ_{k=0}^{N−1} [x^T(k)·Q·x(k) + u^T(k)·R·u(k)] }        (15.1.5)

becomes a minimum. Here Q is assumed to be positive semidefinite and symmetric,
and R to be positive definite and symmetric. As the state variables and input
variables are stochastic, the value of the performance criterion I is also a stochastic
variable. Therefore the expectation of I is to be minimized, (15.1.5). As in section 8.1

the output variable y(k) is not used. Section 15.3 considers the case of nonmeasur-
able state variables x(k) and the use of measurable but disturbed output variables.
The literature on stochastic state controllers started at about 1961, and an extens-
ive treatment can be found in [12.2], [12.3], [12.4], [12.5], [8.3].
The Bellman optimality principle, described in section 8.1, can be used to
calculate the optimal input u(k), giving:

min_{u(k)} E{I} = min_{u(k)} E{ x^T(N)·Q·x(N) + Σ_{k=0}^{N−1} [x^T(k)·Q·x(k) + u^T(k)·R·u(k)] },
                  k = 0, 1, 2, ..., N − 1

                = min_{u(k)} E{ min_{u(N−1)} E{ x^T(N)·Q·x(N)
                  + Σ_{k=0}^{N−1} [x^T(k)·Q·x(k) + u^T(k)·R·u(k)] } } .        (15.1.6)

If I possesses a unique minimum, it is given by ([12.4] p. 260)

min_{u} E{I} = E{ min_{u} I } .        (15.1.7)

Optimization and expectation operations can therefore be commuted. It is there-
fore plausible that one obtains the same equations for stochastic state controllers as
in the deterministic case. This is [12.4]:

u(N − j) = −K_{N−j}·x(N − j)        (15.1.8)

together with (8.1.30) and (8.1.31). For N → ∞ the stationary solution becomes:

u(k) = −K·x(k)        (15.1.9)

i.e. a linear time-invariant state controller. This controller can be interpreted as
follows. From (15.1.1), u(k) can influence x(k + 1) at the earliest. Since x(k + 1), as
well as u(k), depends only on x(k) and v(k), but not on x(k − 1), x(k − 2), ... and
v(k − 1), v(k − 2), ..., and furthermore v(k) is statistically independent of v(k − 1),
v(k − 2), ..., the control law for large N can be restricted to u(k) = f(x(k)) (c.f.
(15.1.9)). For small N both the stochastic initial value x(0) and the stochastic
disturbances v(k) have to be controlled, and the resulting optimal controller is
(15.1.8). As the optimal control of a deterministic initial value x(0) leads to the same
controller, (8.1.33) is an optimal state controller for both deterministic and stochas-
tic disturbances if the same weighting matrices are assumed in the respective
performance criteria.
Now the covariance matrix X(k + 1) of the state variables in the closed loop for the
stationary case is considered. From (15.1.1) and (15.1.9) it follows that

x(k + 1) = [A − BK]·x(k) + F·v(k)        (15.1.10)

and according to Eq. (12.2.25)

X(k + 1) = [A − BK]·X(k)·[A − BK]^T + F·V·F^T        (15.1.11)

and for k → ∞ the covariance matrix becomes:

X̄ = [A − BK]·X̄·[A − BK]^T + F·V·F^T .        (15.1.12)
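The stationary covariance (15.1.12) is a discrete Lyapunov equation; numerically it can be obtained by simply iterating (15.1.11) until convergence. A minimal sketch for the scalar case (our own illustrative numbers), where the fixed point X̄ = f²V/(1 − (a − bK)²) is known in closed form:

```python
def stationary_state_variance(a, b, k_gain, f, v, n_iter=500):
    """Iterate X(k+1) = (a - b*K) X (a - b*K) + f*V*f, Eq. (15.1.11),
    until the stationary value (15.1.12) is reached (scalar case)."""
    a_cl = a - b * k_gain              # closed-loop 'matrix' A - BK
    x = 0.0
    for _ in range(n_iter):
        x = a_cl * x * a_cl + f * v * f
    return x

# illustrative scalar example: a = 0.9, b = 1, K = 0.4 -> A - BK = 0.5
x_bar = stationary_state_variance(0.9, 1.0, 0.4, f=1.0, v=2.0)
```

The iteration converges because the eigenvalue of A − BK lies inside the unit circle; here the closed form gives X̄ = 2/(1 − 0.25) ≈ 2.667.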
The value of the performance criterion which can be attained with the optimal state
controller can be determined as follows. If (15.1.1) instead of (8.1.7) is introduced
into (8.1.6), the calculations of that section follow until Eq. (8.1.19), giving

E{I_{N−1,N}} = E{x^T(N − 1)·P_{N−1,N}·x(N − 1) + v^T(N − 1)·F^T·Q·F·v(N − 1)}

E{I_{N−1}} = E{x^T(N − 1)·P_{N−1}·x(N − 1) + v^T(N − 1)·F^T·Q·F·v(N − 1)}

and

E{I_{N−2}} = E{x^T(N − 2)·P_{N−2}·x(N − 2) + v^T(N − 2)·F^T·Q·F·v(N − 2)
             + v^T(N − 1)·F^T·Q·F·v(N − 1)}

and therefore finally, if v(k) is stationary, for N steps

E{I_0} = E{x^T(0)·P_0·x(0)} + N·E{v^T(k)·F^T·Q·F·v(k)} .        (15.1.13)

In the steady state P_0 = P, and instead of the single initial state x(0) the disturb-
ance signals F·v(k) can be taken. Then

Ī = lim_{N→∞} (1/N)·E{I_0} = tr[F^T·(P + Q)·F·V]        (15.1.14)

using (12.2.28).

15.2 Optimal State Controllers with State Estimation for White Noise

In section 15.1 it was assumed that the state variables x(k) can be measured
exactly, but in practice this is generally untrue and the state variables must be
determined on the basis of measured variables. We now consider the process

x(k + 1) = A·x(k) + B·u(k) + F·v(k)        (15.2.1)

with measurable outputs

y(k) = C·x(k) + n(k)        (15.2.2)

or

y(k + 1) = C·x(k + 1) + n(k + 1) .

Here it is assumed that the output disturbance signal satisfies

E{n(k)} = 0
cov[n(k); τ = i − j] = E{n(i)·n^T(j)} = N·δ_ij        (15.2.3)

i.e. white noise. In chapter 22 it will be shown that the unknown state variables can
be recursively estimated by a state variable filter (Kalman filter) which processes
y(k) and u(k) and applies the algorithm:

x̂(k + 1) = A·x̂(k) + B·u(k)
           + Γ(k + 1)·[y(k + 1) − C·A·x̂(k) − C·B·u(k)] .        (15.2.4)

Here Γ(k + 1), the correction matrix, follows from (22.3.19) and (22.3.21). For
k → ∞ this matrix converges to a constant Γ. For the state estimation N, V and F
have to be known. Replacing the state variables x(k) in the control law Eq. (15.1.9)
by their estimates,

u(k) = −K·x̂(k) ,        (15.2.5)

one again obtains an optimal control system which minimizes the perform-
ance criterion (15.1.5) [12.4]. For the overall system one then obtains:

[x(k + 1); x̂(k + 1)] = [A, −BK; ΓCA, A − BK − ΓCA]·[x(k); x̂(k)]
                       + [F, 0; ΓCF, Γ]·[v(k); n(k + 1)] .        (15.2.6)

Introducing an estimation error, as in section 8.7,

x̃(k) = x(k) − x̂(k)        (15.2.7)

and transforming (15.2.6) by the linear transformation of Eq. (8.7.4) into the
equation system

[x(k + 1); x̃(k + 1)] = [A − BK, BK; 0, A − ΓCA]·[x(k); x̃(k)]
                       + [F, 0; (I − ΓC)F, −Γ]·[v(k); n(k + 1)] .        (15.2.8)

This equation system is identical to the equation system Eq. (8.7.5) with the exception
of the last noise term. Instead of the observer feedback

H·C·x̃(k) ,

here

Γ·C·A·x̃(k) ,

the state filter feedback, influences the modes, as the state filter, unlike the observer
of section 8.6, uses a prediction A·x̂(k) to correct the state estimate. The poles of the

control system with state controller and filter follow from (15.2.8):

det[zI − A*] = det[zI − A + BK]·det[zI − A + ΓCA] = 0 .        (15.2.9)

They consist, in factored form, of the m poles of the control system without state
filter, (15.1.10), and of the m poles of the state filter. Therefore the poles of the
control and the poles of the state filter do not influence each other and can be
independently determined. Stochastic state controllers also satisfy the separation
theorem. The design of the state filter is independent of the weighting matrices
Q and R of the quadratic performance criterion which determine the linear
controller as well as the process parameter matrices A and B. The design of the
controller is also independent of the covariance matrices V and N of the distur-
bance signals and independent of the disturbance matrix F. The only common
parameters are A and B.
As the state controller is the same for both optimally estimated state variables
and exactly measurable state variables, one can speak of a 'certainty equivalence
principle'. This means that the state controller can be designed for exactly known
states, and then the state variables can be replaced by their estimates using a filter
which is also designed on the basis of a quadratic error criterion and which has
minimal variances. Compared with the directly measurable state variables the
control performance deteriorates (15.1.14), because of the time-delayed estimation
of the states and their error variance [12.4].
Note that the certainty equivalence principle is valid only if the controller has no
dual property, that means it controls just the current state, and the manipulated
variable is computed without attempting to influence future state estimates in any
definite way [15.1]. A general discussion of the separation and certainty equival-
ence principles can be found in chapter 26.
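The filter recursion (15.2.4), together with the convergence of Γ(k + 1) to a constant Γ, can be illustrated for a scalar process (all numbers are illustrative; the Riccati recursion used to compute Γ is the one treated in chapter 22):

```python
def stationary_kalman_gain(a, c, q, r_n, n_iter=200):
    """Iterate the scalar filter Riccati recursion until the correction
    factor gamma, cf. Eq. (15.2.4), becomes constant.
    q = var of the process noise F v(k), r_n = var of n(k)."""
    p = 0.0
    gamma = 0.0
    for _ in range(n_iter):
        p_pred = a * p * a + q              # variance before the measurement
        gamma = p_pred * c / (c * p_pred * c + r_n)
        p = (1.0 - gamma * c) * p_pred      # variance after the correction
    return gamma

def filter_step(x_hat, u, y_next, a, b, c, gamma):
    """One step of the state variable filter, Eq. (15.2.4):
    x^(k+1) = a x^ + b u + gamma [y(k+1) - c a x^ - c b u]."""
    pred = a * x_hat + b * u
    return pred + gamma * (y_next - c * pred)

gamma = stationary_kalman_gain(a=0.9, c=1.0, q=0.5, r_n=1.0)
```

The stationary filter pole a(1 − γc) lies inside the unit circle, so the estimation error in (15.2.8) decays; with γ = 0 the filter degenerates to a pure model prediction.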

15.3 Optimal State Controllers with State Estimation for


External Disturbances

In the design of the stochastic state controller of (15.2.5) a white vector noise signal
v(k) was assumed to influence the state vector x(k + 1). As the output signal with
n(k) = 0 satisfies

y(k) = Cx(k)
the internal disturbance v(k) generates an output

(15.3.1)
with
(15.3.2)

The disturbance component Yv( k) can also be generated by an external disturbance



Figure 15.1 Stochastically disturbed process with a disturbance model for the external
disturbance n_ξ(k)

signal ξ(k) with the disturbance signal model

n_ξ(k) = C·η(k)        (15.3.3)
η(k + 1) = A·η(k) + F·ξ(k) ,        (15.3.4)

see Figure 15.1. The covariance matrix of ξ(k) is:

cov[ξ(k); τ = i − j] = Ξ·δ_ij .        (15.3.5)

The generation of this disturbance signal model for external disturbances corres-
ponds to the discussion of sections 8.2 and 8.7.2. If the assumptions on ξ(k)
correspond to the assumptions on v(k), a filter estimates the state variables η(k) of
the disturbance signal model based on measurements of n_ξ(k) or y(k), so that the
components of the disturbance signal ξ(k) are optimally controlled, as are the signals
v(k), using the state controller of Eq. (15.2.5).
Now it will be discussed which disturbance signal filters

n_ξj(z) = G_Pξj(z)·ξ(z) ,  j = 1, 2, ..., r        (15.3.6)

can be realized with the disturbance model of (15.3.3) and (15.3.4). Here we consider
a process with one input and one output. Then, from (3.2.50), for a disturbance
signal n_ξj = n_ξ we have:

n_ξ(z) = c^T·[zI − A]^-1·F·ξ(z) = {c^T·adj[zI − A]·F / det[zI − A]}·ξ(z) .        (15.3.7)

If F is now a diagonal matrix, it follows that

n_ξ(z) = Σ_{i=1}^{m} [D_i(z)/A(z)]·ξ_i(z)        (15.3.8)

and, depending on the choice of fi, one obtains for each ξi(z) a disturbance signal filter

GPξi(z) = Di(z)/A(z),  i = 1, 2, ..., m   (15.3.9)

with

A(z) = det[zI − A]   (15.3.10)
DT(z) = [Dm(z) ... Di(z) ... D1(z)] = cT adj[zI − A]F .   (15.3.11)

Note that the process satisfies

GP(z) = y(z)/u(z) = B(z)/A(z) = cT adj[zI − A]b / det[zI − A] .   (15.3.12)

The denominators of GP(z) and GPξ(z) are therefore identical, and the polynomials Di(z) and B(z) contain common factors. The following example shows possible forms of Di(z) for two canonical state representations (cf. section 3.6).

Example 15.3.1
Consider a second-order process with transfer function

GP(z) = B(z)/A(z) = (b1z⁻¹ + b2z⁻²)/(1 + a1z⁻¹ + a2z⁻²) = (b1z + b2)/(z² + a1z + a2) .

a) Controllable canonical form

With A = [0 1; -a2 -a1], b = [0; 1] and cT = [b2 b1] one obtains

cT adj[zI − A] = [(b2(z + a1) − a2b1)  (b1z + b2)]

cT adj[zI − A]b = b1z + b2 .

With F as a diagonal matrix, F = diag[f2 f1], one obtains

DT(z) = [(b2(z + a1) − a2b1)  (b1z + b2)] diag[f2 f1]
      = [(f2b2z + f2(a1b2 − a2b1))  (f1b1z + f1b2)] .
Therefore, with white disturbances ξi(k) as input, disturbance polynomials

Di(z) = d1iz + d2i

can be generated which satisfy the following conditions on their parameters:

ξ1(k) ≠ 0; ξ2(k) = 0:  f1 ≠ 0; f2 = 0:
d11 = f1b1
d21 = f1b2

ξ1(k) = 0; ξ2(k) ≠ 0:  f1 = 0; f2 ≠ 0:
d12 = f2b2
d22 = f2(a1b2 − a2b1) .

d1i and d2i cannot be chosen arbitrarily because they depend on each other, so that the choice of one parameter fixes the other.
b) Observable canonical form

With A = [0 -a2; 1 -a1], b = [b2; b1] and cT = [0 1] one obtains cT adj[zI − A] = [1  z]. Hence only the following disturbance polynomials can be realized:

D1(z) = d11z = f1z  for ξ1(k) ≠ 0; ξ2(k) = 0
D2(z) = d22 = f2  for ξ1(k) = 0; ξ2(k) ≠ 0 .

Here also d1i and d2i cannot be freely chosen; in each polynomial one of the two parameters is always zero.
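The adjugate computations of this example can be cross-checked symbolically. The following sketch (not part of the book) assumes the standard second-order controllable and observable canonical forms stated above:

```python
# Symbolic cross-check of Example 15.3.1 (a sketch, not the book's code):
# D^T(z) = c^T adj[zI - A] F for the two canonical forms, with F = diag[f2, f1].
import sympy as sp

z, a1, a2, b1, b2, f1, f2 = sp.symbols('z a1 a2 b1 b2 f1 f2')
F = sp.diag(f2, f1)

def dist_polys(A, cT):
    """Entries of D^T(z) = c^T adj[zI - A] F for a 2x2 system matrix A."""
    DT = cT * (z * sp.eye(2) - A).adjugate() * F
    return [sp.expand(DT[0, i]) for i in range(2)]

# a) controllable canonical form
Dc = dist_polys(sp.Matrix([[0, 1], [-a2, -a1]]), sp.Matrix([[b2, b1]]))
# b) observable canonical form
Do = dist_polys(sp.Matrix([[0, -a2], [1, -a1]]), sp.Matrix([[0, 1]]))

print(Dc)   # expanded forms of f2*(b2*z + a1*b2 - a2*b1) and f1*(b1*z + b2)
print(Do)   # [f2, f1*z]
```

The printed polynomials reproduce the parameter conditions listed above for both canonical forms.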

This example shows that, with the assumption of white vector disturbance signals ξ(k) or v(k) with independent disturbance signal components, the parameters of the corresponding disturbance signal polynomials cannot take arbitrary values. This situation changes, however, if the disturbance signal components are equal:

ξ1(k) = ξ2(k) = ... = ξm(k) = ξ(k) .   (15.3.13)

Then F reduces to a vector

fT = [fm ... f2 f1]   (15.3.14)

and in example 15.3.1 we have

D(z) = d1z + d2

where

a) Controllable canonical form:

d1 = f1b1 + f2b2
d2 = f1b2 + f2(a1b2 − a2b1)

b) Observable canonical form:

d1 = f1
d2 = f2 .
The parameters of D can then be chosen independently. The assumption of (15.3.13) means that all elements of the covariance matrix of the disturbance are equal:

Ξ = σξ² [1 1 ... 1]
        [1 1 ... 1]
        [   ...   ]
        [1 1 ... 1] .   (15.3.15)

This covariance matrix is, however, only positive semidefinite for σξ² ≠ 0, so that the assumptions of (15.1.1) are not violated, nor is the assumption of (22.1.4) used in deriving the Kalman filter violated by (15.3.15).
Until now F was assumed to be diagonal. If all its elements are non-zero, that means in the case of example 15.3.1

F = [f22 f21]
    [f12 f11]   (15.3.16)

then arbitrary parameters d1, d2, ... can be realized.


From this discussion, as in the discussions of sections 8.2 and 8.7.2, it follows that the state controller of (15.2.5), designed under the assumption of a white vector disturbance process v(k), also becomes optimal for external correlated disturbances nξ(k) which are generated from ξ(k) through the disturbance filter (15.3.3) and (15.3.4). By the choice of the elements of f, disturbance filters can be given in the form

GPξ(z) = nξ(z)/ξ(z) = D(z)/A(z)   (15.3.17)

with

D(z) = cT adj[zI − A]f = d1z^(m−1) + ... + d(m−1)z + dm   (15.3.18)

where e.g.

ξ(k) = [1 ... 1 1]T ξ(k)   (15.3.19)

or

v(k) = [1 ... 1 1]T v(k)

and

F = f .   (15.3.20)

The parameters of f and Ξ or V, and therefore the parameters of the disturbance polynomial D(z), affect only the design of the state filter, not the design of the state controller. Therefore in the state filter one must either set the covariance matrix according to (15.3.15), see (15.3.21), and F = f from (15.3.20), or all elements of F must be properly chosen, so that the stochastic correlated disturbance signal nξ(k) generated by the noise filter (15.3.17) can be optimally controlled.
D Interconnected Control Systems
Up to now, when considering the design of controllers or control algorithms, it was assumed (with the exception of state controllers) that only the controlled variable y is fed back to determine the process input u. This leads to single control loops. However, in chapter 4 it was mentioned that by connecting additional measurable variables into the single loop, for example auxiliary variables or disturbances, improved control behaviour is possible. These additions to the single loop lead to interconnected control systems. Surveys of common interconnected control systems using analogue control techniques are given, for example, in [5.14], [5.32], [5.33], [16.2], [16.3].
The most important basic schemes use cascade control, auxiliary control variable
feedback or feedforward control.
In cascade control and auxiliary control variable feedback additional (control)
variables of the process, measurable on the signal path from the manipulated
variable to the controlled variable, are fed back to the manipulated variable. The
cascade control scheme uses an inner control loop and therefore involves a second
controller. In the case of the auxiliary variable, the differentiated auxiliary variable
(continuous-time) is usually added to the input or the output of the controller.
Then, instead of a controller only a differentiating element is necessary, which
possibly needs no power amplification. When realising control schemes in digital
computers the hardware cost is a small fraction of the total, so we concentrate here
on the cascade control scheme. This also allows for a more systematic single loop
design, so only cascade control systems (chapter 16) and no other auxiliary variable
feedback scheme is considered. Also of significance is feedforward control (chapter
17). In this case measurable external disturbances of the process are added to the
feedback loop.
16 Cascade Control Systems

The design of an optimal state controller involves the feedback of all the state variables of the process. If not all state variables can be measured, but for example only one state variable between the process input and output, then improvements over single loop systems with, for example, parameter optimized controllers can be obtained by treating this state variable as an auxiliary control variable y2 which is fed back to the manipulated variable via an auxiliary controller, as shown in Figure 16.1. Then the process part Gpu2 and the auxiliary controller GR2 form an auxiliary control loop whose reference value is the output of the main controller GR1.
The main controller forms the control error as for the single loop by subtracting the (main) control variable y1 from the reference value w1. The plant of the main controller is then the inner control loop together with the process part Gpu1. The auxiliary control loop is therefore connected in cascade with the main controller. A cascade control system provides a better control performance than the single loop for the following reasons:
1) Disturbances which act on the process part Gpu2, that means in the input region of the process, are already controlled by the auxiliary control loop before they influence the controlled variable y1.
2) Because of the auxiliary feedback, the effect of parameter changes in the input process part Gpu2 is attenuated (reduction of parameter sensitivity by feedback, chapter 10). For the initial design of controller GR1 only parameter changes in the output process part Gpu1 need to be considered; the small changes in the auxiliary control loop behaviour can be incorporated in the second place.
3) The behaviour of the control variable y1 becomes faster (less inert) if the auxiliary control loop has faster modes than the process part Gpu2.

Figure 16.1 Block diagram of a cascade control system: the auxiliary (minor) controller GR2 controls the process part Gpu2 (output y2, disturbance n2); the main controller GR1 provides the reference value w2 of the auxiliary loop; y1 is the main controlled variable (disturbance n1)
The overall transfer function of a cascade control system can be determined as follows. For the reference value of the auxiliary loop as input one has

Gw2(z) = y2(z)/w2(z) = GR2(z)Gpu2(z) / [1 + GR2(z)Gpu2(z)]   (16.1)

and for the behaviour of its manipulated variable:

u(z)/w2(z) = GR2(z) / [1 + GR2(z)Gpu2(z)] .   (16.2)

With

y1(z) = Gpu1Gpu2(z)u(z) = Gpu(z)u(z)

it follows for the plant of the main controller GR1 that:

G'pu(z) = GR2(z)Gpu(z) / [1 + GR2(z)Gpu2(z)] .   (16.3)

In addition to the plant Gpu(z) of the single loop, the plant of the main controller of the cascade control system now includes a factor which acts as an acceleration term. Therefore a 'faster' plant results. For the closed loop behaviour of a cascade control system one finally obtains:

Gw(z) = y1(z)/w1(z) = GR1(z)G'pu(z) / [1 + GR1(z)G'pu(z)]
      = GR1(z)GR2(z)Gpu(z) / [1 + GR2(z)Gpu2(z) + GR1(z)GR2(z)Gpu(z)] .   (16.4)
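As a quick plausibility check (not part of the original text), the step from (16.3) to (16.4) can be verified symbolically:

```python
# Sketch: closing the main loop around the plant G'pu of Eq. (16.3) must give
# the overall cascade transfer function of Eq. (16.4).
import sympy as sp

GR1, GR2, Gpu2, Gpu = sp.symbols('GR1 GR2 Gpu2 Gpu')

G_prime = GR2 / (1 + GR2 * Gpu2) * Gpu                          # Eq. (16.3)
Gw = sp.cancel(GR1 * G_prime / (1 + GR1 * G_prime))             # main loop closed
Gw_164 = GR1 * GR2 * Gpu / (1 + GR2 * Gpu2 + GR1 * GR2 * Gpu)   # Eq. (16.4)
print(sp.cancel(Gw - Gw_164))   # 0
```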

The design of cascade control systems depends significantly on the location of the
disturbances, so that each cascade control system should be individually treated.
A simple example shows the behaviour of such a cascade control system.

Example 16.1
The process under consideration consists of two parts with the s-transfer functions

Gpu2(s) = 1 / (1 + 7.5s)

Gpu1(s) = 1 / [(1 + 10s)(1 + 5s)] .

For a sample time T0 = 4 s, the z-transfer functions are:

Gpu2(z) = b12z⁻¹ / (1 + a12z⁻¹) = 0.4134z⁻¹ / (1 − 0.5866z⁻¹) = 0.4134 / (z − 0.5866)

Gpu1(z) = (0.1087z⁻¹ + 0.0729z⁻²) / (1 − 1.1197z⁻¹ + 0.3012z⁻²)

Gpu(z) = Gpu2Gpu1(z)
       = (0.0186z⁻¹ + 0.0486z⁻² + 0.0078z⁻³) / (1 − 1.7063z⁻¹ + 0.958z⁻² − 0.1767z⁻³)
       = 0.0186(z + 0.1718)(z + 2.4411) / [(z − 0.5866)(z − 0.6705)(z − 0.4493)]
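The coefficients of Gpu2(z) can be reproduced from the usual zero-order-hold discretization of a first-order lag (a sketch under that assumption, matching the values above):

```python
# Sketch: zero-order-hold discretization of Gpu2(s) = 1/(1 + 7.5 s) with
# sample time T0 = 4 s gives G(z) = (1 - e^(-T0/T)) / (z - e^(-T0/T)).
import math

T0, T = 4.0, 7.5
pole = math.exp(-T0 / T)   # discrete pole, equals -a12
b12 = 1.0 - pole           # numerator coefficient
print(round(pole, 4), round(b12, 4))   # 0.5866 0.4134
```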
Initially a P-controller is assumed as auxiliary controller

GR2(z) = q02

so that the closed loop behaviour of the auxiliary loop is:

Gw2(z) = q02b12z⁻¹ / [1 + (a12 + q02b12)z⁻¹] .

To obtain an asymptotically stable auxiliary loop its pole must lie within the unit circle of the z-plane, giving:

−1 < −(a12 + q02b12) < 1 .

Therefore the gain of the P-controller satisfies:

−(1 + a12)/b12 < q02 < (1 − a12)/b12  or  −1 < q02 < 3.838 .

(Note that, unlike in the continuous-time case, a proportional controller acting on a first-order process can destabilize the discrete-time loop if its gain is too large.) If positive q02 are chosen, then with q02 = 0.7 or q02 = 1.3:

Gw2(z) = 0.2894/(z − 0.2972)  or  0.5374/(z − 0.0492) .
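The quoted pole locations, stability bounds and the dead-beat gain follow directly from the first-order loop; a small numerical sketch (using the example's coefficient values):

```python
# Sketch: auxiliary-loop pole z = -(a12 + q02*b12), its stability bounds, and
# the gain that places the pole at the origin (dead-beat behaviour).
a12, b12 = -0.5866, 0.4134

def pole(q02):
    return -(a12 + q02 * b12)

q_min = (-1 - a12) / b12    # lower bound of the stability range
q_max = (1 - a12) / b12     # upper bound of the stability range
q_db = -a12 / b12           # gain for a pole at the origin
print(round(q_min, 3), round(q_max, 3), round(q_db, 2))   # -1.0 3.838 1.42
print(round(pole(0.7), 4), round(pole(1.3), 4))           # 0.2972 0.0492
```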

The pole moves toward the origin with increasing q02, reaching the origin for q02 = 1.42. This shows that the settling time of the auxiliary control loop can become smaller than that of the process part Gpu2. The resulting closed loop behaviour of the cascade control system compared with that of the single loop becomes better only for q02 > 1.3. If q02 is chosen too small, then the behaviour of the cascade control system becomes too slow because of a smaller loop gain compared with that of the optimized single-loop controller. Notice that the parameters of the main controller were re-optimized whenever the gain of the auxiliary control loop changed. The gain of the auxiliary loop varies for 0 < q02 ≤ 1.3 within 0 < Gw2(1) ≤ 0.54. It therefore makes more sense to use a PI-controller as auxiliary controller

GR2(z) = (q02 + q12z⁻¹) / (1 − z⁻¹)

Figure 16.2 Transients of a control loop with and without cascade auxiliary controller. The main controller has PID-, the auxiliary controller PI-behaviour. ○○○ without auxiliary controller, ●●● with auxiliary controller. a Auxiliary control variable y2; b control variable y1 without main controller; c control variable y1 with main and auxiliary controller

so that Gw2(1) = 1. The closed loop transfer function of the auxiliary loop then becomes:

Gw2(z) = (q02b12z⁻¹ + q12b12z⁻²) / [1 + (a12 + q02b12 − 1)z⁻¹ + (q12b12 − a12)z⁻²] .

With controller parameters q02 = 2.0 and q12 = −1.4 one obtains:

Gw2(z) = 0.8268(z − 0.7000) / [(z − 0.7493)(z − 0.0105)] .

One pole and one zero approximately cancel, and the second pole is near the origin. The settling time of y2(k) becomes smaller than that of the process part Gpu2(z), Figure 16.2a.
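The quoted pole locations can be checked numerically from the closed-loop denominator (a sketch using the example's coefficients):

```python
# Sketch: roots of the auxiliary-loop denominator for q02 = 2.0, q12 = -1.4.
import numpy as np

a12, b12 = -0.5866, 0.4134
q02, q12 = 2.0, -1.4
den = [1.0, a12 + q02 * b12 - 1.0, q12 * b12 - a12]   # z^2 + (.)z + (.)
poles = sorted(np.roots(den).real, reverse=True)
print([round(p, 4) for p in poles])   # close to [0.7493, 0.0105]
```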

Table 16.1 Optimized controller parameters of the main controller. Design criterion (5.2.6), r = 0.1

3-PC3, r = 0.1                    q0       q1        q2      K        cD       cI
without auxiliary control loop    2.895    -4.012    1.407   1.488    0.9456   0.1950
with auxiliary control loop       2.6723   -3.3452   1.036   1.6363   0.6330   0.2219

The overall transfer function of the plant of the main controller is given by Eq. (16.3):

G'pu(z) = 0.0372(z − (0.6433 + 0.0528i))(z − (0.6433 − 0.0528i))(z + 0.1718)(z + 2.4411) /
          [(z − 0.7493)(z − 0.0105)(z − 0.5866)(z − 0.6705)(z − 0.4493)]

The auxiliary control loop introduces the poles of Gw2 and a conjugate complex zero pair in addition to the poles and zeros of the process Gpu. Figure 16.2b shows that the plant of the main controller therefore becomes quicker. Finally, a quicker but well damped overall behaviour is obtained which, of course, requires larger process input changes, Figure 16.2c. Table 16.1 shows that the parameters of the main controller change as follows when the auxiliary controller is added: K larger, cD smaller, cI larger.

The control algorithms which have to be programmed for a PI-controller as auxiliary controller and a PID-controller as main controller are:

e1(k) = w1(k) − y1(k)   (16.5)
w2(k) = w2(k − 1) + q01e1(k) + q11e1(k − 1) + q21e1(k − 2)   (16.6)
e2(k) = w2(k) − y2(k)   (16.7)
u(k) = u(k − 1) + q02e2(k) + q12e2(k − 1) .   (16.8)

The relatively small additional effort for cascade control instead of a single loop consists in measuring the variable y2(k) and in the algorithms of (16.7) and (16.8).
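A minimal sketch of how (16.5)-(16.8) might be programmed per sampling instant (the state dictionary and function interface are choices of this sketch, not from the book):

```python
# Sketch: one sampling instant of the cascade algorithms (16.5)-(16.8):
# PID main controller in velocity form, PI auxiliary controller.
def cascade_step(w1, y1, y2, s, p):
    e1 = w1 - y1                                                          # (16.5)
    w2 = s['w2'] + p['q01']*e1 + p['q11']*s['e1m1'] + p['q21']*s['e1m2']  # (16.6)
    e2 = w2 - y2                                                          # (16.7)
    u = s['u'] + p['q02']*e2 + p['q12']*s['e2m1']                         # (16.8)
    s.update(w2=w2, u=u, e1m2=s['e1m1'], e1m1=e1, e2m1=e2)                # shift memory
    return u

s = dict(w2=0.0, u=0.0, e1m1=0.0, e1m2=0.0, e2m1=0.0)
p = dict(q01=1.0, q11=0.0, q21=0.0, q02=1.0, q12=0.0)   # illustrative values
print(cascade_step(1.0, 0.0, 0.0, s, p))   # 1.0
```

In a real loop, y1 and y2 would be read from the process at each sample, and the parameters taken, e.g., from a design as in Table 16.1.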
All the previously treated controllers for processes with one input and one output are suitable as auxiliary and main controllers, so many combinations are possible. A comprehensive investigation of cascade control with P- and PI-controllers for continuous-time signals is described in [16.2], where it is shown that a P-controller should be used as auxiliary controller and a PI-controller as main controller. Furthermore, for disturbances at the input, the auxiliary variable should be near the disturbance location, and for disturbances distributed equally along the process the process part Gpu2 should have about half the order of the overall process.

In discrete time the gain of the auxiliary controller must be reduced because of the smaller stability region (see example 16.1). The auxiliary control loop therefore becomes slower and its offset larger. In addition, parameter changes of the process part Gpu2 have more influence on the parameter tuning of the main controller. By adding an I-term one always obtains Gw2(1) = 1 as the gain of the auxiliary loop, independently of any parameter change of the process part Gpu2. If the resulting PI-controller is then tuned far enough away from the stability limit, larger parameter changes of the first process part need not be considered when designing the main controller, provided that the dynamics of the auxiliary control loop are much quicker than those of the second process part.
As main controllers, parameter optimized controllers, dead-beat controllers or minimum variance controllers, for example, are suitable. For their design, the process plus the already tuned P- or PI-auxiliary controller can be considered together as one plant, given by Eq. (16.3). Using a state controller, one should either consider the auxiliary variable y2 to be a directly measurable state variable and employ a reduced order observer (see section 8.8), or insert the directly measurable state variable in place of the corresponding observed state variable of a full order observer (see section 8.7.2).
For two measurable auxiliary control variables a double-cascade control system with two auxiliary controllers can be designed [16.1]. If all state variables are measurable, the multi-cascade control system has a structure similar to that of a state controller. From the theory of optimal state control it is known that the single auxiliary controllers have P-behaviour, chapter 8. Cascade control systems with P-controllers can therefore be considered as first steps towards optimal state control.
A particular advantage of the separation into an auxiliary and a main controller is the resulting stepwise parameter adjustment, one controller after the other. This is true both for applying tuning rules and for computer aided design. For cascade control systems, first estimates of the parameters q02 of the auxiliary controller and q01 of the main controller can be obtained simply by prescribing the manipulated variable u(0) for a step in the reference value w1(0). (5.2.31) and (5.2.32) give

u(0) = q02w2(0)
w2(0) = q01w1(0)

and therefore

q01q02 = u(0)/w1(0) .   (16.9)

This relation can in particular be used to choose q01 of a parameter optimized main controller if the initial manipulated variable u(0) must be adjusted to the manipulation range and if the parameter q02 of the auxiliary controller is already fixed.
Cascade control systems can be applied in many situations. If an auxiliary control variable is measurable in the input region of a process, cascade control should always be applied for control systems with higher performance requirements. It can be especially recommended where a valve manipulates a flow. The gain of the valve is non-linear, as it depends, among other things, on the pressure drop across the valve, which can change significantly during operation. An auxiliary control loop with a PI-controller can compensate for these gain changes completely. Cascade control should be applied more frequently in digital control than in analogue control, as the additional effort for the auxiliary controller is small.
17 Feedforward Control

If an external disturbance v of a process can be measured before it acts on the output variable y, then the control performance with respect to this disturbance can often be improved by feedforward control, as shown in Figure 17.1. Here, immediately after a change in the disturbance v, the process input u is manipulated by a feedforward element GS which, unlike feedback control, does not wait until the disturbance has affected the control variable y. Significant improvement in control performance for a restricted manipulation range, however, can only be obtained if the process behaviour Gpu is not slow compared with the disturbance behaviour Gpv.
When designing a control system one should always try to control the effects of measurable disturbances using feedforward, leaving the incompletely feedforward-controlled parts and the effects of unmeasurable disturbances on the controlled variable to feedback control.
As feedforward does not influence the stability of a control loop in the case of
linear processes, feedforward control systems can be added after the feedback
controllers are tuned. In this chapter the following design methods of feedforward
control systems are treated:
If an element GS can be realized such that the disturbance behaviour Gpv is exactly compensated by GSGpu, then after a change in the (deterministic or stochastic) disturbance variable v there is no change in the control variable y. This is ideal feedforward control; its realizability, and other cancellation feedforward control systems, are considered in section 17.1. Section 17.2 describes parameter-optimized feedforward control systems, where the structure of the feedforward element is fixed a priori, and which are suitable for many more processes. Right from the onset the problem is there restricted to nonideal feedforward control. Parameter-optimized feedforward control systems can be designed for both deterministic and stochastic disturbances. State controllers for external disturbances already contain ideal feedforward control for part of the disturbance model. Feedforward control systems for directly measurable state variable disturbances realize the state control concept for external disturbances in the form of state variable feedforward control, section 17.3. Finally, corresponding to minimum variance control, minimum variance feedforward control for stochastic disturbances can also be designed, section 17.4.

Figure 17.1 Feedforward control of a single input/single output process
In the following it is assumed that mathematical models of the process are known, both for the process behaviour

Gpu(z) = y(z)/u(z) = [B(z⁻¹)/A(z⁻¹)]z⁻ᵈ = [(b1z⁻¹ + ... + bmz⁻ᵐ)/(1 + a1z⁻¹ + ... + amz⁻ᵐ)]z⁻ᵈ   (17.1)

and for the disturbance behaviour

Gpv(z) = n(z)/v(z) = D(z⁻¹)/C(z⁻¹) = (d0 + d1z⁻¹ + ... + dqz⁻q)/(1 + c1z⁻¹ + ... + cqz⁻q) .   (17.2)

For state feedback control the state model

x(k + 1) = Ax(k) + Bu(k)   (17.3)
y(k) = Cx(k)   (17.4)

is assumed to be known.

17.1 Cancellation Feedforward Control

For ideal feedforward control one has

GS°(z) = Gpv(z)/Gpu(z)   (17.1.1)

and therefore:

GS°(z) = D(z⁻¹)A(z⁻¹) / [C(z⁻¹)B(z⁻¹)z⁻ᵈ] = H(z⁻¹)/F(z⁻¹) .   (17.1.2)

The process behaviour must therefore be completely cancelled by the feedforward control element [17.1]. The feedforward element becomes simpler, however, if the disturbance filter satisfies C(z⁻¹) = A(z⁻¹). Then

GS°(z) = D(z⁻¹) / [B(z⁻¹)z⁻ᵈ]   (17.1.3)

and only the numerator of the process transfer behaviour has to be cancelled.
If these feedforward controls can be realized and are stable, the influence of the disturbance v(k) on the output y(k) is completely eliminated. One condition for the realizability of (17.1.2) is that if the element h0 is present an element f0 must also be present, and if h1 is present f1 must also be present, etc. This means that for the assumed process model structure of (17.1) and (17.2), d = 0 and d0 = 0 must always be fulfilled. Therefore one can assume d = 0 from the beginning if Gpv(z) has no jumping property and contains a deadtime d' ≥ d. Then only the part B/A is cancelled.
To obtain stable feedforward control the roots zj of the denominator F(z) must satisfy |zj| < 1; that means the zeros of B(z) and C(z) must lie within the unit circle. Therefore ideal feedforward control is impossible for processes with deadtime or with a jumping property, or for processes with zeros of the process or of the disturbance behaviour on or outside the unit circle in the z-plane (e.g. for nonminimum phase behaviour).
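These two conditions can be written as a small check routine (a sketch; the coefficient convention, descending powers of z, and the function name are choices of this sketch):

```python
# Sketch: is ideal cancellation feedforward control possible? It requires no
# process deadtime and all zeros of B(z) and C(z) inside the unit circle.
import numpy as np

def ideal_ff_possible(B, C, d):
    """B, C: polynomial coefficients in descending powers of z; d: deadtime."""
    if d != 0:
        return False   # z^(+d) would require future disturbance values
    inside = lambda poly: len(poly) < 2 or bool(np.all(np.abs(np.roots(poly)) < 1.0))
    return inside(B) and inside(C)

# Processes I and II of Example 17.1.1 (coefficients from Tables 17.1, 17.2):
print(ideal_ff_possible([1.0, 0.5], [1.0, -1.027, 0.264], 0))       # True
print(ideal_ff_possible([-0.102, 0.173], [1.0, -1.027, 0.264], 0))  # False
```

The second call reflects the zero of process II outside the unit circle, discussed in the example below.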

Example 17.1.1
As examples, the feedforward control of three test processes I, II and III with distinct process behaviour but identical disturbance behaviour is considered (see Tables 17.1, 17.2 and appendix Vol. I).

Process I (second order with oscillatory behaviour; model of an oscillator)
From (17.1.2) it follows that:

GS°(z) = [d1 + (a1d1 + d2)z⁻¹ + (a1d2 + a2d1)z⁻² + a2d2z⁻³] / [b1 + (b1c1 + b2)z⁻¹ + (b1c2 + b2c1)z⁻² + b2c2z⁻³]
       = (0.144 − 0.123z⁻¹ − 0.039z⁻² + 0.065z⁻³) / (1.0 − 0.527z⁻¹ − 0.237z⁻² + 0.132z⁻³)
       = 0.144(z + 0.646)(z − (0.750 + 0.369i))(z − (0.750 − 0.369i)) / [(z + 0.494)(z − (0.510 + 0.082i))(z − (0.510 − 0.082i))]

Table 17.1 Parameters of Gpu(z) (process behaviour)

              T0[s]   a1       a2      a3       b1       b2      b3       d
process I     2       -1.5     0.7              1.0      0.5
process II    2       -1.425   0.496            -0.102   0.173
process III   4       -1.5     0.705   -0.100   0.065    0.048   -0.008

Table 17.2 Parameters of Gpv(z) (disturbance behaviour)

                        T0[s]   c1       c2      d1      d2
for processes I and II  2       -1.027   0.264   0.144   0.093
for process III         4       -0.527   0.070   0.385   0.158

Figure 17.2 Time behaviour of the manipulated variable u(k) for process I

This transfer function can be realized and is stable. Figure 17.2 shows the behaviour of the
manipulated variable for a step change in the disturbance v(k). No change in the output
variable arises, as there is complete compensation.
Process II (second order with nonminimum phase behaviour)
For this process the feedforward element has a real pole at z = 1.695, because the zero of Gpu(z) lies outside the unit circle; the feedforward element is therefore unstable. Ideal feedforward control is thus impossible.

Process III (third order with deadtime; model of a low pass process)
The feedforward element given by Eq. (17.1.2) is unrealizable for process III because d ≠ 0. If the deadtime d in (17.1.2) is omitted, that means only the deadtime-free term B/A is compensated, then:

GS°(z) = 0.385(z + 0.441)(z − 0.675)(z − 0.560)(z − 0.264) / [0.065(z + 0.879)(z − 0.140)(z² − 0.527z + 0.07)]
       = (5.923 − 6.448z⁻¹ + 0.526z⁻² + 1.121z⁻³ − 0.243z⁻⁴) / (1 + 0.212z⁻¹ − 0.443z⁻² + 0.117z⁻³ − 0.009z⁻⁴)

This feedforward element is realizable. However, the cancellation in this case involves a large input amplitude, as can be seen from the first value of the feedforward response:

uS(0) = (0.385/0.065)v(0) = 5.923v(0) .

These examples show that ideal feedforward control is often unrealizable or leads to excessive manipulated variables. In this case one can, as in section 6.2, add an additional realizability term G'S(z),

GS(z) = GS°(z)G'S(z)   (17.1.4)

which leads to transient deviations of the output variable y. When designing cancellation feedforward controllers one can alternatively prescribe a suitable overall behaviour

Gv(z) = y(z)/v(z) = Gpv(z) − GS(z)Gpu(z)

and hence calculate the feedforward control

GS(z) = −[Gv(z) − Gpv(z)] / Gpu(z) .

Although this design is computationally simple, this procedure is not recommended, just as with cancellation controllers, because of the arbitrariness in the prescription of Gv(z), the cancellation of poles and zeros, and the untreated intersample behaviour. Therefore other design procedures are considered.

17.2 Parameter-optimized Feedforward Control

When designing parameter-optimized feedforward control one assumes a fixed (realizable) structure, as in the design of parameter-optimized controllers; i.e. the structure and order of the feedforward algorithm are given and the free parameters are adjusted by parameter optimization [17.1]. Here feedforward control structures

GS(z) = uS(z)/v(z) = H(z⁻¹)/F(z⁻¹) = (h0 + h1z⁻¹ + ... + hlz⁻ˡ) / (1 + f1z⁻¹ + ... + flz⁻ˡ)   (17.2.1)

are assumed. Because the structure need not be correct, one does not in general obtain ideal feedforward control, and transient deviations may occur.

17.2.1 Parameter-optimized Feedforward Control without a Prescribed Initial Manipulated Variable

The unknown parameters of the algorithm

aT = [h0 h1 ... hl : f1 f2 ... fl]   (17.2.2)

are determined by minimizing a cost function, e.g.

V = Σ [y²(k) + rΔu²(k)],  summed over k = 0, ..., M   (17.2.3)

hence

dV/da = 0 .   (17.2.4)

Here the disturbance signal can be deterministic or stochastic. For the change in the manipulated variable one sets

Δu(k) = u(k) − ū   (17.2.5)

with

ū = u(∞), the final value, for e.g. step disturbances
ū = E{u(k)}, the expected value, for stochastic disturbances.

In many cases one obtains satisfactory feedforward control performance for l ≤ 2. As the gain of the feedforward element satisfies

GS(1) = Gpv(1)/Gpu(1) = KS = (h0 + h1 + h2)/(1 + f1 + f2)   (17.2.6)

for l = 2 four parameters and for l = 1 only two parameters have to be determined through optimization.

17.2.2 Parameter-optimized Feedforward Control with Prescribed Initial Manipulated Variable

Now the response of u(k) to a step change of the disturbance variable v(k) = 1(k) is considered. For l = 2, (17.2.1) leads to the difference equation:

u(k) = −f1u(k − 1) − f2u(k − 2) + h0v(k) + h1v(k − 1) + h2v(k − 2) .   (17.2.7)

For v(k) = 1(k) we have:

u(0) = h0
u(1) = (1 − f1)u(0) + h1
u(2) = −f1u(1) + (1 − f2)u(0) + h1 + h2   (17.2.8)
u(k) = −f1u(k − 1) − f2u(k − 2) + u(0) + h1 + h2,  k ≥ 3 .

The initial manipulated variable u(0) equals h0 or h0v(0). Therefore h0 can be fixed simply by a suitable choice of u(0), so that a definite manipulation range can easily be taken into account. With u(0) given, the number of parameters to be optimized is reduced to three for l = 2 and to one for l = 1. For l = 1 the equations, together with (17.2.8) and (17.2.6), become:

h0 = u(0)   (17.2.9)
f1 = (u(1) − KS)/(u(0) − KS)   (17.2.10)
h1 = u(1) − u(0)(1 − f1) .   (17.2.11)

Here u(1) can now be chosen as the single independent variable in the parameter optimization, and its value is determined such that:

dV/d[u(1)] = 0 .   (17.2.12)

From stability considerations it follows that

|f1| < 1   (17.2.13)

and therefore from Eq. (17.2.10)

u(1) < u(0)   (17.2.14)

and from (17.2.11)

h1 < f1u(0) .   (17.2.15)

Hence for l = 1 the design of the feedforward element with a prescribed initial manipulated variable u(0) leads to the optimization of a single parameter, taking into account the restriction of (17.2.14). The computational effort for parameter optimization in this case is particularly small. The improved gradient methods described in section 5.4 are recommended as suitable optimization methods when using a digital computer. The truncation criterion must not be selected too small.
A parameter-optimized feedforward control of first order (l = 1) with a pre-
scribed initial manipulated variable is now described for the examples of processes
II and III subjected to a step change in the disturbance v(k).
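The l = 1 design can be sketched in a few lines (illustrative numbers only, not the book's example values; the simulation of the difference equation (17.2.7) verifies that u(0) and u(1) are met):

```python
# Sketch: first-order feedforward element with prescribed u(0), following
# Eqs. (17.2.9)-(17.2.11); u(1) remains the single free design parameter.
def ff_first_order(u0, u1, Ks):
    h0 = u0                          # (17.2.9)
    f1 = (u1 - Ks) / (u0 - Ks)       # (17.2.10)
    h1 = u1 - u0 * (1.0 - f1)        # (17.2.11)
    return h0, f1, h1

def step_response(h0, f1, h1, n):
    """u(k) from (17.2.7) with l = 1 for a unit step v(k) = 1(k)."""
    u, u_prev, v_prev = [], 0.0, 0.0
    for _ in range(n):
        uk = -f1 * u_prev + h0 + h1 * v_prev
        u.append(uk)
        u_prev, v_prev = uk, 1.0
    return u

h0, f1, h1 = ff_first_order(1.5, 0.95, 0.4)   # illustrative values
u = step_response(h0, f1, h1, 12)
print(round(u[0], 3), round(u[1], 3))   # 1.5 0.95
```

The steady state of the recursion is (h0 + h1)/(1 + f1), i.e. the element's static gain, independent of the chosen u(1).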

Example 17.2.1
Process II
Figure 17.3 shows V[u(1)], and Figure 17.4 the corresponding time responses of the manipulated and the controlled variable, for u(0) = 1.5. Because of the nonminimum phase behaviour the initial deviation is increased by the feedforward control; the deviation for k ≥ 3, however, is improved.

Process III
Figure 17.5 shows V[u(1)] for different u(0). The minima are relatively flat for large u(0). Figure 17.6 shows that the feedforward control improves the behaviour for k ≥ 2.

The above method for designing parameter-optimized feedforward control systems is suitable for linear stable processes with either minimum or nonminimum phase behaviour. The computational effort required for the synthesis is, however, larger than that for cancellation feedforward control. The parameter calculation for first or second order elements is in general a simple computer aided design problem.

Figure 17.3 Loss function V[u(1)] for process II

Figure 17.4 Transient responses of the manipulated variable u(k) and the output variable y(k) for process II, with f1 = -0.5; h0 = 1.5; h1 = -1.3

Figure 17.5 Loss function V[u(1)] for process III

Figure 17.6 Transient responses of the manipulated variable u(k) and the output variable y(k) for process III, with f1 = 0.3; h0 = 3.0; h1 = -2.3

17.2.3 Cooperation of Feedforward and Feedback Control


If a feedforward control is designed without taking into account the cooperation with a subsequently connected feedback control system, an additional feedback control action emerges from the unavoidable initial control difference. This control action may lead to a large opposed control difference, compare Figure 17.7. A better control behaviour can be attained if the initial feedforward control action is rapidly reduced, that is, if for proportionally acting processes the stationary process excitation through the disturbance

yv(∞) = GPv(1) v(∞)

is not completely compensated by the feedforward control

yu(∞) = GS(1) GPu(1) v(∞)

hence

(17.2.16)

Thus the proportional action of the feedforward control is weakened in comparison with its differential action. Figure 17.7 shows the significantly better control performance. This example shows that feedforward control should be adapted to the feedback control system. For the design, the described parameter optimization (17.2.2)-(17.2.4) can be applied, however only for the process together with the connected, previously optimized controller. If both feedforward and feedback controllers are to be optimized, a sequential procedure with mutual optimization is recommended, see section 26.11.

Figure 17.7 Time responses of the controlled and the manipulated variable after a step change in the disturbance variable v, for the case that the feedforward control was designed for: —— open loop; ······ closed loop (example: steam superheater control).

17.3 State Variable Feedforward Control

It is assumed that measurable disturbances v(k) influence the state variables x(k + 1) as follows:

x(k + 1) = A x(k) + B u(k) + xv(k)
xv(k) = F v(k)
y(k) = C x(k) .    (17.3.1)

If the state variables x(k) are directly measurable, the state variable deviations xv(k) are acquired by the state controller of (8.1.33)

u(k) = -K x(k)

one sample interval later, so that for state control an additional feedforward control is unnecessary. With indirectly measurable state variables, the measurable disturbances v(k) can be added to the observer. For observers as in Figure 8.7 or Figure 8.8 the feedforward term in the observer is

x̂v(k + 1) = F v(k) .    (17.3.2)
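The statement that a state controller needs no extra feedforward term for directly measurable states can be checked by simulating (17.3.1) in closed loop. The numerical matrices and the gain K below are assumed for this sketch, not taken from the text; the controller sees the disturbed states one sampling interval after the disturbance enters and counteracts them:

```python
import numpy as np

# Hypothetical second-order process (all numbers are assumed for illustration)
A = np.array([[1.1, 0.3],
              [0.0, 0.7]])
B = np.array([[0.0],
              [1.0]])
F = np.array([[0.2],
              [0.1]])        # disturbance distribution: xv(k) = F v(k)
C = np.array([[1.0, 0.0]])
K = np.array([[1.2, 1.0]])   # assumed stabilizing state feedback gain

x = np.zeros((2, 1))
ys = []
for k in range(60):
    v = 1.0                        # measurable step disturbance
    u = -K @ x                     # state controller (8.1.33); it sees the
                                   # disturbed states one interval later
    x = A @ x + B @ u + F * v      # state equation (17.3.1)
    ys.append((C @ x).item())
```

With the assumed numbers the closed loop is stable and the output settles to a finite value determined by (I - A + BK)⁻¹F; a feedforward term would only be needed to remove the remaining stationary offset.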

17.4 Minimum Variance Feedforward Control

Corresponding to minimum variance controllers, a feedforward control with minimum variance of the output variable y(k) can be designed for measurable stochastic disturbances v(k). Here, as in the derivation of the minimum variance controller in chapter 14 for processes without dead time, the quadratic cost function

(17.4.1)

is minimized. Note that the manipulated variable u(k) can influence the output variable y(k + 1) at the earliest, since b0 = 0. The derivation is the same as for minimum variance control, giving (14.1.5) to (14.1.12). The only difference is that v(k) is measurable, so that instead of a control u(z)/y(z) = ..., a feedforward control u(z)/v(z) = ... is of primary interest.
(14.1.12) implies:

z y(z) - λ z v(z) + (r/b1) u(z) = 0 .    (17.4.2)

In this case (14.1.5) is introduced for z y(z), and for the feedforward control it follows that:

GSMV1(z) = u(z)/v(z) = - λ z A(z⁻¹)[D(z⁻¹) - C(z⁻¹)] / [z B(z⁻¹) C(z⁻¹) + (r/b1) A(z⁻¹) C(z⁻¹)] .    (17.4.3)

This will be abbreviated as SMV1.


If r = 0, then:

GSMV2(z) = - λ z A(z⁻¹)[D(z⁻¹) - C(z⁻¹)] / [z B(z⁻¹) C(z⁻¹)] .    (17.4.4)

If C(z⁻¹) = A(z⁻¹), then it follows from (17.4.3) that:

GSMV3(z) = - λ z [D(z⁻¹) - A(z⁻¹)] / [z B(z⁻¹) + (r/b1) A(z⁻¹)]    (17.4.5)

and for r = 0:

GSMV4(z) = - λ z [D(z⁻¹) - A(z⁻¹)] / [z B(z⁻¹)] .    (17.4.6)

The feedforward control elements SMV2 and SMV4 are the same as the minimum variance controllers MV2 and MV4 except for the factor λ. As the discussion of the properties of these feedforward controllers is analogous to that for the minimum variance controllers in chapter 14, only the most important points are summarized in the following.

For SMV1 and SMV2 the roots of C(z⁻¹) = 0 must lie within the unit circle, for SMV3 and SMV4 also the roots of B(z⁻¹) = 0, so that no instability occurs.

The feedforward control SMV1 affects the output variable in the following way:

Gv(z) = y(z)/v(z) = λ D(z⁻¹)/C(z⁻¹) + GS(z) B(z⁻¹)/A(z⁻¹)

      = λ D(z⁻¹)/C(z⁻¹) {1 + [z B(z⁻¹) / (z B(z⁻¹) + (r/b1) A(z⁻¹))] [C(z⁻¹)/D(z⁻¹) - 1]} .    (17.4.7)

C(z⁻¹) = A(z⁻¹) has to be set only for SMV3. When r → ∞, Gv(z) → λ D(z⁻¹)/C(z⁻¹), and the feedforward control is then meaningless. For r = 0, i.e. SMV2 or SMV4, one obtains, however:

Gv(z) = y(z)/v(z) = λ .    (17.4.8)

This means that the effect of the feedforward control is to produce white noise y(z) = λ v(z) with variance λ² at the output. For processes with dead time d the derivation of the minimum variance controller is identical with (14.2.2) to (14.2.9). In (14.2.9) one has only to introduce (14.2.4) to obtain the general feedforward element

GSMV1d(z) = u(z)/v(z) = - λ A(z⁻¹) L(z⁻¹) / [z B(z⁻¹) C(z⁻¹) + (r/b1) A(z⁻¹) C(z⁻¹)]    (17.4.9)

or with r = 0:

GSMV2d(z) = - λ A(z⁻¹) L(z⁻¹) / [z B(z⁻¹) C(z⁻¹)] .    (17.4.10)

If C(z⁻¹) = A(z⁻¹) one obtains

GSMV3d(z) = - λ L(z⁻¹) / [z B(z⁻¹) + (r/b1) A(z⁻¹)]    (17.4.11)

or for r = 0:

GSMV4d(z) = - λ L(z⁻¹) / [z B(z⁻¹)] .    (17.4.12)

The resulting output variable is, for the feedforward controllers SMV2d and SMV4d:

Gv(z) = y(z)/v(z) = λ F(z⁻¹) .    (17.4.13)

Therefore, as with minimum variance feedback control, a moving average process of order d given by (14.2.19) is generated. With increasing dead time the variance of the output variable increases rapidly, as in (14.2.20). The feedforward controller GSMV4(z) was first proposed by [25.9].
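As a numerical check of (17.4.6) and (17.4.8), the following sketch simulates a hypothetical first-order process (the polynomial coefficients and λ are assumed, not from the text) with the SMV4 feedforward, which in the first-order case reduces to a static gain; the output then equals λ v(k) exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed first-order test case (coefficients are illustrative):
# A(z^-1) = 1 + a1 z^-1, B(z^-1) = b1 z^-1, D(z^-1) = 1 + d1 z^-1, and C = A
a1, b1, d1, lam = -0.8, 0.5, -0.4, 1.0

# SMV4 feedforward (17.4.6): u(z)/v(z) = -lam*z*[D(z^-1) - A(z^-1)] / [z*B(z^-1)],
# which for first order reduces to the static gain -lam*(d1 - a1)/b1
g_ff = -lam * (d1 - a1) / b1

v = rng.standard_normal(200)   # measurable white-noise disturbance
y_prev = u_prev = v_prev = 0.0
ys = []
for k in range(200):
    u = g_ff * v[k]            # feedforward action
    # process with C = A:  y(k) = -a1 y(k-1) + b1 u(k-1) + lam [v(k) + d1 v(k-1)]
    y = -a1 * y_prev + b1 * u_prev + lam * (v[k] + d1 * v_prev)
    ys.append(y)
    y_prev, u_prev, v_prev = y, u, v[k]
# with the feedforward, the output is reduced to white noise y(k) = lam * v(k)
```

This reproduces (17.4.8): the feedforward leaves only the unavoidable white noise λ v(k) at the output.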
E Multivariable Control Systems
18 Structures of Multivariable Processes

Part E considers some design methods for linear discrete-time multivariable processes. As shown in Figure 18.1 the inputs ui and outputs yj of multivariable processes influence each other, resulting in mutual interactions of the direct signal paths u1 - y1, u2 - y2, etc. The internal structure of multivariable processes has a significant effect on the design of multivariable control systems. This structure can be obtained by theoretical modelling if there is sufficient knowledge of the process. The structures of technical processes are very different, such that they cannot be described in terms of only a few standardized structures. However, the real structure can often be transformed into a canonical model structure using similarity transformations or simply block diagram conversion rules. The following sections consider special structures of multivariable processes based on the transfer function representation, the matrix polynomial representation and the state representation. These structures are the basis for the designs of multivariable controllers presented in the following chapters.

18.1 Structural Properties of Transfer Function Representations

18.1.1 Canonical Structures


As an example the block diagram of the evaporator and superheater of a steam generator with natural circulation is considered, as shown in Figure 18.2. The controlled variables of this twovariable process are the steam pressure y2 in the drum and the steam temperature y1 at the superheater outlet. The manipulated

Figure 18.1 Multivariable process


Figure 18.2 Block diagram of the evaporator and superheater of a natural circulation steam generator [18.5], [18.6].

variables are the fuel flow u2 and the spray water flow u1. Based on this block diagram the following continuous-time transfer functions can be derived.

Evaporator:
G22(s) = Y2(s)/U2(s) = G10(s) G13(s) G14(s) G15(s)

Coupling superheater-evaporator:
G12(s) = Y2(s)/U1(s) = G1(s) G5(s) G14(s) G15(s)

Coupling evaporator-superheater:
G21(s) = Y1(s)/U2(s) = G10(s) G5(s) G6(s) G4(s)

G11 and G22 are called the 'main transfer elements' and G12 and G21 the 'coupling transfer elements'. Assuming that the input and output signals are sampled synchronously with the same sample time T0, the transfer functions between the samplers are combined before applying the z-transformation, giving:

G11(z) = Y1(z)/U1(z) = G1G2G3G4(z)

(18.1.1)

This example shows that there are common transfer function elements in this input/output representation. The transfer functions can be summarized in a transfer matrix G(z):

[y1(z)]   [G11(z)  G21(z)] [u1(z)]
[y2(z)] = [G12(z)  G22(z)] [u2(z)]

y(z) = G(z) u(z) .    (18.1.2)

In this example the numbers of inputs and outputs are equal, leading to a square transfer matrix. If the numbers of inputs and outputs are different, the transfer matrix becomes rectangular. It should be noted that the transfer function elements describe only the controllable and observable part of the process. The

Figure 18.3a, b. Canonical structures of multivariable processes shown for a twovariable process: a P-canonical structure; b V-canonical structure.

non-controllable and non-observable process parts cannot be represented by transfer functions, as is well known.

The most important canonical structures used to describe the input/output behaviour of multivariable processes are shown in Figure 18.3 [18.1].
In the case of the P-canonical structure each input acts on each output, and the summation points are at the outputs; P-canonical multivariable processes are described by (18.1.2). Changes in one transfer element influence only the corresponding output, and the number of inputs and outputs can be different. The characteristic of the V-canonical structure is that each input acts directly only on one corresponding output and each output acts on the other inputs; this structure is defined only for the same number of inputs and outputs. Changes in one transfer element influence the signals of all other elements. For a twovariable process with V-canonical structure one obtains, in generalized form,

y = GH [u + GK y] .    (18.1.3)

GH is a diagonal matrix which contains the main elements. In GK the coupling elements are summarized; its diagonal elements are zero. As an explicit representation y = f(u) one obtains:

y = [I - GH GK]⁻¹ GH u .    (18.1.4)

The transfer matrix of a V-canonical process is therefore:

G = [I - GH GK]⁻¹ GH .    (18.1.5)

It exists if det[I - GH GK] ≠ 0. Using (18.1.5) a V-canonical structure can be converted to a P-canonical structure. Conversely, a square P-canonical structure can be converted into a V-canonical structure as follows. Here (18.1.2) must be written in a form which corresponds to Eq. (18.1.3) [18.4], by splitting up G(z) into

a matrix GH containing only the diagonal elements, and a matrix GN which contains the remaining elements, yielding:

y = GH u + GN u = GH [u + GH⁻¹ GN u] .

If G is non-singular, one has

u = G⁻¹ y

so that:

y = GH [u + GH⁻¹ GN G⁻¹ y] .    (18.1.6)

Comparing with Eq. (18.1.3) one obtains:

GK = GH⁻¹ GN G⁻¹ .    (18.1.7)

Both canonical forms can therefore be converted into each other, but realizability must be considered. For a twovariable process the calculation of the transfer function elements is given for example in [18.2].
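Equations (18.1.5) and (18.1.7) can be checked numerically for static gains; the 2×2 gain matrix below is an arbitrary assumption chosen only for illustration:

```python
import numpy as np

# Arbitrary assumed 2x2 static gain matrix of a P-canonical process
G = np.array([[2.0, 0.5],
              [-0.3, 1.5]])

GH = np.diag(np.diag(G))   # main elements
GN = G - GH                # coupling elements of the P-structure

# (18.1.7): coupling matrix of the equivalent V-canonical structure
GK = np.linalg.inv(GH) @ GN @ np.linalg.inv(G)

# (18.1.5): converting back must reproduce the P-canonical matrix
G_back = np.linalg.inv(np.eye(2) - GH @ GK) @ GH
```

The round trip P → V → P recovers G exactly, provided GH and I - GH GK are nonsingular as required by (18.1.5).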
If the behaviour of multivariable processes has to be identified on the basis of nonparametric models, as for example using nonparametric frequency responses or impulse responses, then one obtains only the transfer behaviour in a P-canonical structure. If other internal structures are considered, proper parametric models and parameter estimation methods must be used.

The overall structure describes only the signal flow paths. The actual behaviour of multivariable processes is determined by the transfer functions of the main and coupling elements, including both their signs and mutual position. One distinguishes between symmetrical multivariable processes, where

Gii(z) = Gjj(z)    i = 1, 2, ...
Gij(z) = Gji(z)    j = 1, 2, ...

and non-symmetrical multivariable processes, where

Gii(z) ≠ Gjj(z)
Gij(z) ≠ Gji(z) .

With regard to the settling times of the decoupled main control loops, slow process elements Gii can be coupled with fast process elements Gij. With lumped parameter processes signals can only appear at the input or output of energy, mass or momentum storages. The main and coupling elements often contain the same storage components, so that a main transfer element and a coupling transfer element possess some common transfer function terms. Hence Gii ≈ Gij or Gii ≈ Gji can often be observed.

18.1.2 The Characteristic Equation and Coupling Factor


To describe some further structure-conditioned properties of multivariable pro-
cesses, we use as a simple example a twovariable process with a P-structure of
76 18 Structures of Multivariable Processes

(18.1.2), connected with a twovariable controller

[u1(z)]   [-R11(z)     0    ] [y1(z)]
[u2(z)] = [   0     -R22(z) ] [y2(z)]

u(z) = R(z) y(z)    (18.1.8)

which consists of only the two main controllers. The sample time is assumed to be equal and the sampling synchronous for all signals. Furthermore w1 = w2 = 0. Then one obtains:

[I - G(z) R(z)] y(z) = 0    (18.1.9)

or, written out,

[1 + G11R11    G21R22   ] [y1]   [0]
[G12R11      1 + G22R22 ] [y2] = [0] .

After multiplying one gets:

(1 + G11R11) y1 + G21R22 y2 = 0
G12R11 y1 + (1 + G22R22) y2 = 0 .

If the first equation is solved for y2 and introduced into the second equation, one obtains:

[(1 + G11R11)(1 + G22R22) - G12R11G21R22] y1 = 0 .
Therefore the characteristic equation of the twovariable control system becomes:

[1 + G11(z)R11(z)] [1 + G22(z)R22(z)] - G12(z)R11(z)G21(z)R22(z) = 0 .    (18.1.10)

For the characteristic equation it also holds that:

det[I - G(z)R(z)] = 0 .    (18.1.11)

The expressions 1 + G11R11 and 1 + G22R22 are the characteristic polynomials of the uncoupled single control loops formed by the main transfer elements and the main controllers. The term -G12R11G21R22 expresses the influence of the coupling between both single control loops by the coupling elements G12 and G21 on the eigenbehaviour. This term describes the effect of the coupling elements on the characteristic equations of the single loops. If G12 = 0 and/or G21 = 0 the characteristic equations of the single control loops are unchanged.

We now consider another representation of the characteristic equation of the twovariable control system. For this, (18.1.10) is written in the form:

(1 + G11R11)(1 + G22R22) [1 - G12R11G21R22 / ((1 + G11R11)(1 + G22R22))] = 0 .

The transfer functions with the reference variables as inputs are introduced,

Gw1 = G11R11 / (1 + G11R11),  Gw2 = G22R22 / (1 + G22R22)    (18.1.12)

so that:

(1 + G11R11)(1 + G22R22)(1 - κ Gw1 Gw2) = 0 .    (18.1.13)

The term (1 - κ Gw1 Gw2) = 0 contains additional eigenvalues arising from the influence of the couplings, where

κ(z) = G12(z) G21(z) / (G11(z) G22(z))    (18.1.14)

is the dynamic coupling factor. (18.1.13) shows that the eigenvalues of a multivariable system in P-structure consist of the eigenvalues of the single main control loops and additional eigenvalues caused by the couplings G12 and G21. Again, if G12 = 0 and/or G21 = 0 the eigenvalues of the twovariable control system are identical to those of the single uncoupled loops. From Eq. (18.1.10) it follows after division by (1 + G22R22) that

1 + G11R11 (1 - κ Gw2) = 0    (18.1.15a)

or after division by (1 + G11R11):

1 + G22R22 (1 - κ Gw1) = 0 .    (18.1.15b)

Influenced by the coupled control loop, the controlled "process" seen by the main controller changes as follows:

G11 → G11 (1 - κ Gw2)
G22 → G22 (1 - κ Gw1)

(see Figure 18.4a). A second transfer path Gii κ Gwj appears in parallel to the controlled main process element Gii.

Now the change in the gain of the controlled "processes" caused by the coupled neighbouring control loop is considered. For the controller Rii(z) the process gain is Gii(1) in the case of the open loop j, and Gii(1)[1 - κ0 Gwj(1)] in the case of the closed neighbouring loop. The factor [1 - κ0 Gwj(1)] = εii describes the change of the gain through the coupled neighbouring loop. κ0 is called the static coupling factor

κ0 = G12(1) G21(1) / (G11(1) G22(1)) .    (18.1.16)

This coupling factor exists for transfer elements with proportional behaviour, or with integral behaviour if there are two integral elements Gii(z) and Gij(z). In Figure 18.5 the factor εii is shown as a function of κ0. For an open neighbouring loop j, εii = 1 is

Figure 18.4 Resulting controlled "process" for the controller R11.

Figure 18.5 Dependence of the factor εii on the static coupling factor κ0 for twovariable control systems with P-canonical structure.

valid. If the neighbouring loop is closed, the following cases can be distinguished [18.7]:

1) κ0 < 0: negative coupling → εii > 1
2) κ0 > 0: positive coupling
   a) κ0 < 1/Gwj(1) → 0 ≤ εii < 1
   b) κ0 > 1/Gwj(1) → εii < 0 .

Therefore a twovariable process can be divided into negatively and positively coupled processes. In case 1), the gain of the controlled "process" increases by closing the neighbouring loop, so that the controller gain must in general be reduced. In case 2a), the gain of the controlled "process" decreases and the controller gain can be increased. Finally, in case 2b) the gain of the controlled "process" changes such that the sign of the controller Rii must be changed. Near εii ≈ 0 the control of the variable yi is practically not possible.

As the coupling factor κ(z) depends only on the transfer functions of the processes, including their signs, positive or negative coupling is a property of the twovariable process. The path parallel to the main element Gii, see Figure 18.4b, generates an extra signal which is lagged by the coupling elements. If these coupling elements are very slow, then the coupled loop has only a weak effect. For coupling elements G12 and G21 which are not too slow compared with Gii, a fast coupled loop 2 has a stronger effect on y1 than a slow one.
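The classification by the static coupling factor can be sketched numerically. The gains below are assumed example values, not taken from the steam generator example:

```python
# Assumed example gains of main and coupling elements (illustrative only)
K11, K22 = 2.0, 1.5
K12, K21 = 0.6, -0.4

kappa0 = (K12 * K21) / (K11 * K22)   # static coupling factor (18.1.16)

Gwj1 = 0.9                           # assumed closed-loop gain Gwj(1)
eps_ii = 1.0 - kappa0 * Gwj1         # gain-change factor of the "process"

coupling = "negative" if kappa0 < 0 else "positive"
```

With these numbers κ0 < 0 (negative coupling) and hence εii > 1: closing the neighbouring loop increases the effective process gain, so the controller gain would have to be reduced, consistent with case 1) above.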

18.1.3 The Influence of External Signals


The dynamic response of multivariable processes to external disturbances and reference values depends on where these signals enter and whether they change one after another or simultaneously. The following cases can be distinguished, using the example of a twovariable process as in Figure 19.1:

a) The disturbance v acts on both loops

Then one has n1 = Gv1 v and n2 = Gv2 v. This is the case, for example, for changes in the operating point or load, which mostly result in simultaneous changes of energy, mass flows or driving forces. Gv1 and Gv2 can have either the same or different signs.

b) The disturbances n1 and n2 are independent

Both disturbances can either change simultaneously, as for example for statistically independent noise. They can, however, also appear sequentially, as for occasional deterministic disturbances.

c) Reference variables

The reference variables w1 and w2 can be changed simultaneously, with w1(k) = w2(k) or w1(k) ≠ w2(k). They can, of course, also be changed independently.

In the example of the steam generator of Figure 18.2 these cases correspond to the following disturbances:

a) - changes in steam flow following load changes
   - changes in calorific value of the fuel (coal)
   - contamination of the evaporator heating surface

b) n1: - contamination of the superheater surface
       - change in the steam input temperature of the final superheater caused by disturbances of the spray water flow or temperature
   n2: - changes in feedwater flow

c) In the case of load changes the reference variables w1 and w2 can be changed simultaneously, particularly in gliding pressure operation, but single changes can also occur.

The most frequent disturbances in this example act simultaneously on both loops. These disturbances tend to have the largest amplitudes.

18.1.4 Mutual Action of the Main Controllers


Depending on the external excitation and the transfer functions of the main and coupling elements, the main controllers may mainly reinforce or mainly counteract each other [18.7]. For a step disturbance v acting simultaneously on both loops (Figure 19.1), such that Gv1 and Gv2 have the same sign and all main and coupling elements have low-pass behaviour and a P-structure, Table 18.1 shows 4 corresponding groups of sign combinations, derived from inspection of the signal changes in the block diagram for the initial control variable response, where the largest deviations occur in general. The separation of the groups depends on the signs of the quotients

K12/K11 and K21/K22 .

Their product yields the static coupling factor κ0. Therefore for positive coupling κ0 > 0 the groups

I) R11 reinforces R22, R22 reinforces R11
II) R11 counteracts R22, R22 counteracts R11

and for negative coupling κ0 < 0

III) R11 reinforces R22, R22 counteracts R11
IV) R11 counteracts R22, R22 reinforces R11

can be distinguished. If Gv1 and Gv2 have different signs, the sign combinations of groups I and II or of groups III and IV in Table 18.1 must be interchanged. The disturbance transfer function shows that the response of the controlled variable is identical for the different sign combinations within one group. If only one disturbance n1 acts on the output y1 (and n2 = 0), then the action of the neighbouring controller R22 is given in

Table 18.1 Mutual effect of the main controllers as a function of the signs of the main and coupling elements for a step disturbance v acting simultaneously on both loops. Gv1 and Gv2 have the same sign. From [18.7].

coupling          sign conditions                  mutual action of main controllers          group
positive κ0 > 0   K21/K22 > 0 and K12/K11 > 0      mutually reinforcing                       I
positive κ0 > 0   K21/K22 < 0 and K12/K11 < 0      mutually counteracting                     II
negative κ0 < 0   K21/K22 < 0 and K12/K11 > 0      R11 reinforces R22, R22 counteracts R11    III
negative κ0 < 0   K21/K22 > 0 and K12/K11 < 0      R11 counteracts R22, R22 reinforces R11    IV

Each group comprises the four sign combinations of K11, K22, K21, K12 which satisfy the corresponding quotient conditions.

Table 18.2. The controller R22 counteracts the controller R11 for positive coupling and reinforces it for negative coupling.

After comparing all cases
- Gv1 and Gv2 have the same sign
- Gv1 and Gv2 have different signs
- Gv1 = 0; Gv2 ≠ 0 or Gv2 = 0; Gv1 ≠ 0

Table 18.2 Effect of the main controller R22 on the main controller R11 for one disturbance n1 on the controlled variable y1. Sign combinations and groups as in Table 18.1.

coupling          effect of R22 on R11    group
positive κ0 > 0   counteracting           I
                  counteracting           II
negative κ0 < 0   reinforcing             III
                  reinforcing             IV

it follows that there is no sign combination which leads to only reinforcing or only counteracting behaviour in all cases. This means that the mutual effect of the main controllers of a twovariable process always depends on the particular external excitation. Each multivariable control system must be treated individually in this context.

As an example the steam generator in Figure 18.2 is considered again. The disturbance elements have the same sign for a steam flow change, so that Table 18.1 is valid. An inspection of the signs gives the combination - + + +, and we therefore have group IV. The superheater and evaporator are negatively coupled and κ0 = -0.1145. The steam pressure controller R22 reinforces the steam temperature controller R11, c.f. [18.5]. However, R11 counteracts R22 only insignificantly, as the coupling element G5 in Figure 18.2 has a relatively low gain. The calorific value disturbances also act on both outputs with the same sign, so that the same group is involved.

18.1.5 The Matrix Polynomial Representation


An alternative to the transfer function representation of linear multivariable systems is the matrix polynomial representation [18.10]

A(z⁻¹) y(k) = B(z⁻¹) u(k)    (18.1.17)

where A(z⁻¹) and B(z⁻¹) are polynomial matrices in z⁻¹ (18.1.18).

If A(z⁻¹) is a diagonal polynomial matrix, one obtains for a process with two inputs and two outputs:

This corresponds to a P-canonical structure with common denominator polynomials of G11(z) and G21(z), or of G22(z) and G12(z); compare with (18.1.2). More general structures arise if off-diagonal polynomials are introduced into A(z⁻¹).
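A matrix polynomial model with diagonal A(z⁻¹) can be simulated directly as a matrix difference equation. The first-order coefficient matrices below are assumed for illustration:

```python
import numpy as np

# Assumed first-order matrix polynomials:
#   (I + A1 z^-1) y(k) = B1 z^-1 u(k),  A1 diagonal -> P-canonical structure
A1 = np.diag([-0.8, -0.6])
B1 = np.array([[0.5, 0.2],
               [0.1, 0.4]])

y_prev = np.zeros(2)
u_prev = np.zeros(2)
ys = []
for k in range(100):
    u = np.array([1.0, 0.0])         # step on input 1 only
    y = -A1 @ y_prev + B1 @ u_prev   # matrix difference equation
    ys.append(y)
    y_prev, u_prev = y, u
# static gains toward input 1: 0.5/(1-0.8) = 2.5 and 0.1/(1-0.6) = 0.25
```

Because A1 is diagonal, each output has its own denominator and the step on u1 reaches both outputs through the first column of B1, exactly the P-canonical coupling described in the text.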

18.2 Structural Properties of the State Representation

Extending the state representation (3.6.31), (3.6.32) of linear single-input/single-output processes to linear multivariable processes with p inputs u(k) and r outputs y(k), the following equations are obtained:

x(k + 1) = A x(k) + B u(k)    (18.2.1)

y(k) = C x(k) + D u(k) .    (18.2.2)

Here

x(k) is an (m x 1) state vector
u(k) is a (p x 1) control vector
y(k) is an (r x 1) output vector
A is an (m x m) system matrix
B is an (m x p) control matrix
C is an (r x m) output (measurement) matrix
D is an (r x p) input-output matrix.

The state representation of multivariable systems has several advantages over the transfer matrix notation. For example, arbitrary internal structures can be described with a minimal number of parameters, and noncontrollable or nonobservable process parts can also be described. Furthermore, on switching from single-input/single-output processes to multivariable processes only the parameter matrices B, C and D have to be written instead of the parameter vectors b and cᵀ and the parameter d. Therefore the analysis and design of controllers for single-input/single-output processes can easily be extended to multi-input/multi-output processes. However, a larger number of canonical structures exists for multivariable processes in state form. The discovery of an appropriate state structure can be an extensive task.

To get a first view of the forms of the matrices A, B and C and the corresponding structures of the block diagram, three examples of a twovariable process as in section 18.1 are considered.
a) A twovariable process with direct couplings between the state variables of the main transfer elements

Figure 18.6 shows two main transfer elements for which the state variables are directly coupled by the matrices A12 and A21. This means physically that all storages and state variables are parts of the main transfer elements. The coupling

elements have no independent storage or state variable. The state representation is:

(18.2.3)

Figure 18.6 Twovariable process with direct couplings between the state variables of the main elements.

Figure 18.7 Twovariable process with a P-canonical structure.

The matrices A11 and A22 of the main transfer elements become diagonal blocks, and the coupling matrices A12 and A21 nondiagonal blocks, of the overall system matrix A. The main transfer elements can be put into one of the canonical forms of Table 3.3. The coupling matrices then contain only coupling parameters in a corresponding form and zeros.

Observable or controllable multivariable processes of arbitrary structure can be represented in blockwise triangular form by similarity transformation, so that the diagonal blocks A11 and A22 are represented in row companion canonical form or observable canonical form, or in controllable canonical form or column companion canonical form, c.f. [2.19, 18.8].
b) A twovariable process with a P-canonical structure

In analogy to Figure 18.3a, a twovariable process with P-canonical structure is shown in Figure 18.7. Different storages and state variables are assumed for both the main elements and the coupling elements, with no direct couplings between them. The state representation then becomes:

[x11(k+1)]   [A11   0    0    0  ] [x11(k)]   [b11   0 ]
[x12(k+1)] = [ 0   A12   0    0  ] [x12(k)] + [b12   0 ] [u1(k)]
[x21(k+1)]   [ 0    0   A21   0  ] [x21(k)]   [ 0   b21] [u2(k)]
[x22(k+1)]   [ 0    0    0   A22 ] [x22(k)]   [ 0   b22]
                                                             (18.2.5)

Figure 18.8 Twovariable process with a V-canonical structure.

[y1(k)]   [c11ᵀ   0    c21ᵀ   0  ] [x11(k)]
[y2(k)] = [ 0    c12ᵀ   0    c22ᵀ] [x12(k)]
                                   [x21(k)]
                                   [x22(k)]    (18.2.6)

In this case all matrices of the main and coupling elements occur in A as diagonal blocks.

c) A twovariable process with a V-canonical structure

A twovariable process in a V-canonical structure as in Figure 18.8, with different storages and state variables for the various transfer elements, leads to:

[x11(k+1)]   [A11       b11c12ᵀ   0         0      ] [x11(k)]   [b11   0 ]
[x12(k+1)] = [ 0        A12       0         b12c22ᵀ] [x12(k)] + [ 0    0 ] [u1(k)]
[x21(k+1)]   [b21c11ᵀ   0         A21       0      ] [x21(k)]   [ 0    0 ] [u2(k)]
[x22(k+1)]   [ 0        0         b22c21ᵀ   A22    ] [x22(k)]   [ 0   b22]
                                                                     (18.2.7)

[y1(k)]   [c11ᵀ   0   0   0  ] [x11(k)]
[y2(k)] = [ 0     0   0  c22ᵀ] [x12(k)]
                               [x21(k)]
                               [x22(k)]    (18.2.8)

In addition to the matrices of the main and coupling transfer elements in the block diagonal, 4 coupling matrices appear for this V-canonical structure, as for the direct coupling (18.2.3). The matrices B and C are also similar.

If the inner structure of a multivariable process is determined through theoretical modelling (compare section 3.7.2), it is obvious that multivariable processes rarely show the simple structures treated in the previous examples. For the steam generator shown in Figure 18.2, a P-canonical structure according to (18.2.5) results at first. Because of the common elements G1, G4, G10, G14 and G15 (compare Figure 18.2), this structure is transformed into the following minimal realization:

A is a sparse (9 x 9) matrix whose nonzero elements are a11, a32, a33, a41, the last-row elements a95, ..., a99, and ones in certain superdiagonal positions (companion-like structure),    (18.2.9)

B is a (9 x 2) matrix whose nonzero elements are b41, b42 and a single one,    (18.2.10)

C is a (2 x 9) matrix whose nonzero elements are c11, c12, c13, c15, ..., c19 and a single one,    (18.2.11)

with:

a11 = 0.2640        a95 = 0.1836
a32 = -0.3828       a96 = -1.2885
a33 = 1.3140        a97 = 3.6170
a41 = 0.0113        a98 = -5.0765
                    a99 = 3.5625

b41 = 0.001741      b42 = 0.01237

c11 = 2.476 · 10⁻²    c16 = -5.950 · 10⁻⁴
c12 = -1.619 · 10⁻³   c17 = 2.143 · 10⁻³
c13 = 8.998 · 10⁻²    c18 = -1.831 · 10⁻³
c15 = 4.900 · 10⁻⁵    c19 = -1.730 · 10⁻³

This example shows that, in general, mixtures of different special structures occur; the reader is also referred to [18.11].

If the state representation is directly obtained from the transfer functions of the
elements of Figure 18.3 some multiple state variables are introduced if the elements
have common states, as in (18.1.1) for example. Then the parameter matrices have
unnecessary parameters. However, if the state representation is derived taking into
account common states so that they do not appear in double or multiple form,
a state representation with a minimal number of states is obtained. This is called
a minimal realization. A minimal realization is both controllable and observable.
Nonminimal state representations are therefore either incompletely controllable
and/or incompletely observable. Methods for generating minimal realizations are
given for example in [18.9], [18.3].
The definition of observability and controllability of multivariable systems is
analogous to single-input/single-output systems, described in chapter 3, c.f. [2.19],
[18.3]. A multi-input/multi-output system of order m is controllable if

Rank [B, AB, ..., A^(N-1) B] = m

and is observable if

Rank [C, CA, ..., C A^(N-1)]^T = m .

The definition of N causes a certain problem when examining the observability and
controllability of multivariable systems. If N = m, then each state variable is
controllable from each manipulated variable or observable from each output
variable. In most cases, however, only certain state variables are controllable or
observable from one input or one output variable. Then N < m. The controllability
and observability of multivariable systems is treated in more detail in e.g. [2.19,
18.3, 5.17].
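These rank conditions can be checked numerically. The following is a minimal pure-Python sketch for an invented 2nd-order, two-input example; it builds the controllability matrix [B, AB, ..., A^(N-1) B] and computes its rank by Gaussian elimination:

```python
def mat_mul(A, B):
    # plain list-of-lists matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M, tol=1e-9):
    # numeric rank via Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[pivot][c]) < tol:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return r

def controllability_matrix(A, B, N):
    # [B, AB, ..., A^(N-1) B], blocks placed side by side
    blocks, Ak_B = [], B
    for _ in range(N):
        blocks.append(Ak_B)
        Ak_B = mat_mul(A, Ak_B)
    return [sum((blk[i] for blk in blocks), []) for i in range(len(A))]

# hypothetical 2nd-order, two-input example (m = 2, N = 2)
A = [[0.5, 1.0], [0.0, 0.3]]
B = [[0.0, 1.0], [1.0, 0.0]]
Qs = controllability_matrix(A, B, 2)
print(rank(Qs))  # rank m = 2 -> controllable
```

The observability test works the same way on the stacked matrix [C; CA; ...; C A^(N-1)].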
The controllable and observable state model contains m^2 + mp + mr para-
meters. In order to describe the input/output behaviour, however, fewer
parameters are often sufficient. The state model can be transformed by a linear trans-
formation (see section 3.6.3). The transformation matrix T is to be chosen in such
a way that specific canonical state models are generated; hereby as many para-
meters of A as possible should become zero or one.
The resulting models with a minimal number of parameters are significant
particularly in connection with parameter estimation methods. The state model in
row companion canonical form is especially suited for multivariable control
systems [26.43]. It also provides a simple transition to minimally realized and P-
canonical input/output models.
After the discussion of some special structural properties of multivariable processes
in this chapter, the two following chapters will present some methods for the design of
multivariable control systems.
19 Parameter-optimized Multivariable Control Systems

Parameter-optimized multivariable controllers are characterized by a given con-
troller structure and by the choice of free parameters using optimization criteria or
tuning rules. Unlike single variable control systems, the structure of a multivariable
controller consists not only of the order of the different control algorithms but also
of the mutual arrangement of the coupling elements, as in chapter 18. Correspond-
ing to the main and coupling transfer elements of multivariable processes, one
distinguishes main and coupling controllers (cross controllers). The main controllers
Rii are directly dedicated to the main elements Gii of the process and serve to
control the variables yi close to the reference variables wi, see Figure 19.1a. The
coupling controllers Rij couple the single loops on the controller side, Figure
19.1b-d. They can be designed to decouple the loops completely or partially or to
reinforce the coupling. This depends on the process, the acting disturbance and
command signals and on the requirements on the control performance.
The coupling controllers can be structured in P-canonical form, before, parallel
to or behind the main controllers; corresponding arrangements are also possible in V-
canonical form. When realizing with analogue devices, the arrangement of the
coupling controllers depends on the position of the controller's power amplifier. If
one is restricted to one power amplifier per manipulated variable, generally only
the elements arranged before and parallel to the main controller are applied.
However, when implementing control algorithms in process computers, all struc-
tures can be used, since the Rij only describe control algorithms. In the following,
twovariable processes are considered because of the corresponding simplification
and practical relevance. These considerations can be extended easily to include
more than two control variables.

19.1 Parameter Optimization of Main Controllers without Coupling Controllers

Chapter 18 has already shown that there are many structures and combinations of
process elements and signs for twovariable processes. Therefore general investiga-
tions on twovariable processes are known only for certain selected structures and
transfer functions. The control behaviour and the controller parameter settings are

Figure 19.1 Different structures of two-variable controllers. a Main controllers;
b coupling controllers behind the main controllers; c coupling controllers parallel
to the main controllers; d coupling controllers before the main controllers.

described in [19.1], [19.2], [19.3], [19.4], [19.5] and [18.7] for special P-canonical
processes with continuous-time signals. Based on these publications, some results
which have general validity and are also suitable for discrete-time signals are
summarized below.
For twovariable processes with a P-canonical structure, synchronous sampling
and equal sample times for all signals, the following properties of the process are
important for control (see section 18.1):
a) Stability, modes
• transfer functions of the main elements G11, G22 and coupling elements
  G12, G21:
  - symmetrical processes
    G11 = G22
    G12 = G21
  - asymmetric processes
    G11 ≠ G22
    G12 ≠ G21
• coupling factor
  - dynamic    κ(z) = G12(z) G21(z) / [G11(z) G22(z)]
  - static     κ0 = K12 K21 / (K11 K22)

    κ0 < 0: negative coupling
    κ0 > 0: positive coupling
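The static coupling factor and its sign classification follow directly from the four static gains; a one-function sketch with invented gain values:

```python
def static_coupling_factor(K11, K22, K12, K21):
    # static coupling factor kappa0 = (K12 K21) / (K11 K22)
    return (K12 * K21) / (K11 * K22)

# hypothetical static gains of a P-canonical twovariable process
k0 = static_coupling_factor(K11=2.0, K22=1.5, K12=0.6, K21=-1.0)
print(k0, "negative coupling" if k0 < 0 else "positive coupling")
```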
b) Control behaviour, controller parameters
in addition to a):
• influence of disturbances, see Figure 19.1:
  - disturbance v acts simultaneously on both loops (e.g. change of operating
    point or load)
    n1 = Gv1 v and n2 = Gv2 v
    • Gv1 and Gv2 have the same sign
    • Gv1 and Gv2 have different signs
  - disturbances n1 and n2 are independent
    • n1 and n2 act simultaneously
    • n1 and n2 act one after another (deterministic)
• change of reference variables w1 and w2:
  - simultaneously: w1(k) = w2(k) or w1(k) ≠ w2(k)
  - one after another

• mutual action of the main controllers:
  - R11 and R22 reinforce each other
  - R11 and R22 counteract each other
  - R11 reinforces R22, R22 counteracts R11
  - R11 counteracts R22, R22 reinforces R11.
In the case of sampled signals the sample time T0 may be the same in both main
loops or different. Synchronous and nonsynchronous sampling can also be distin-
guished.
The next section describes the stability regions and the choice of controller
parameters for P-canonical twovariable processes. The results have been obtained
mainly for continuous signals, but for relatively small sample times they can be
applied qualitatively to the case of discrete-time signals.

19.1.1 Stability Regions


Good insight into the stability properties of twovariable control systems is ob-
tained by assuming the main controllers to have proportional action and by
considering the stability limits as functions of both gains KR11 and KR22.
For a symmetrical twovariable process with P-canonical structure, continuous-
time signals and transfer functions

Gij(s) = Kij / (1 + Ts)^3    ij = 11, 22, 12, 21    (19.1.1)

the stability limits are shown in Figures 19.2 and 19.3 for positive and negative
values of the coupling factor [19.1].

The controller gains KRii are related to the critical gains KRiiK on the stability limit
of the noncoupled loops, i.e. κ0 = 0. Therefore the stability limit is a square with
KRii/KRiiK = 1 for the noncoupled loops. In the case of increasing magnitude of the
negative coupling κ0 < 0 an increasing region develops in the middle part and also
the peaks at both ends increase, Figure 19.2. For an increasing magnitude of
positive coupling κ0 > 0 the stability region decreases, Figure 19.3, until a triangle
remains for κ0 = 1. If κ0 > 1 the twovariable system becomes monotonically
structurally unstable for main controllers with integral action, as is seen from
Figure 18.4a. Then Gw1(0) = 1 and Gw2(0) = 1, and with κ0 = 1 a positive feedback
results. If κ0 > 1 the sign of one controller must be changed, or other couplings of
manipulated and controlled variables must be taken. Figures 19.2 and 19.3 show
that the stability regions decrease with increasing magnitude of the coupling factor,
if the peaks for negative coupling, which are not relevant in practice, are neglected.
Figure 19.4 shows - for the case of negative coupling - the change of the stability
regions through adding to the P-controller an integral term (→ PI-controller) and
Figure 19.2 Stability regions of a symmetrical twovariable control system with negative
coupling and P-controllers [19.1].

Figure 19.3 Stability regions of a symmetrical twovariable control system with positive
coupling and P-controllers [19.1].

a differentiating term (→ PID-controller). In the first case the stability region
decreases, in the second case it increases.
The stability limits so far have been represented for continuous-time signals. If
sampled-data controllers are used, the stability limits differ little for small sample
times T0/T95 ≤ 0.01. However, the stability regions decrease considerably for
larger sample times, as can be seen from Figure 19.5. In [19.1] the stability limits
Figure 19.4 Stability regions of a symmetrical twovariable system with negative coupling
κ0 = -1 for continuous-time P-, PI- and PID-controllers [19.1]. PI-controller: TI = TP;
PID-controller: TI = TP, TD = 0.2 TP; TP: time period of one oscillation for KRii = KRiiK
(critical gain on the stability limit), see figure in Table 5.6.

Figure 19.5 Stability regions for the same twovariable system as in Figure 19.4, however
with discrete-time P-controllers and different sample times T0.
Figure 19.6 Typical stability regions for twovariable control systems with P-controllers;
Tpi: period of the uncoupled loops at the stability limit [19.1].

have also been given for asymmetrical processes. The time constants in (19.1.1) have
been changed so that the time periods Tpi of the uncoupled loops with P-controllers
satisfy Tp2/Tp1 > 1 at the stability limits. Figure 19.6 shows the resulting typical
forms of stability region. Based on these investigations and those in [18.7], twovari-
able control systems with P-canonical structure and lowpass behaviour show the
following properties:
a) For negative coupling, stability regions with peaks arise. Two peaks appear for
approximately symmetric processes. Otherwise there is only one peak.
b) For positive coupling, large extensions of the stability region arise with increas-
ing asymmetry.
c) With increasing asymmetry, i.e. faster loop 1, the stability limit approaches the
upper side of the square of the uncoupled loops. This means that the stability of
the faster loop is influenced less by the slower loop.

The knowledge of these stability regions is a good basis for developing tuning rules
for twovariable controllers.

19.1.2 Optimization of the Controller Parameters and Tuning Rules for Twovariable Controllers

As in the case of single input/single output control systems, parameter-optimized
control algorithms (5.2.3) are also expedient for the main controllers, i = 1, 2, ...

Rii(z) = ui(z)/ei(z) = Qi(z)/Pi(z) = [q0i + q1i z^-1 + ... + qvi z^-v] / [1 - z^-1]    (19.1.2)
mostly with v = 1 or 2, that means PI- or PID-behaviour. As already discussed in
chapter 4 and section 5.4, the free parameters of these control algorithms can be
designed by numerical parameter optimization through control performance cri-
teria, pole assignment or tuning rules. This is also valid for the main controllers of
the twovariable and multivariable processes which have already been treated. Note,
however, that the individual control variables have to be weighted differently
depending on their significance. If a mathematical model of the multivariable
process is known, control performance criteria for given external disturbances
can be used for optimization through numerical optimization methods (compare
section 5.2.6). They have the following form

Seu = Σ(i=1..p) αi Σ(k=0..M) [ei^2(k) + ri Δui^2(k)]    (19.1.3)

Here, the αi are the weighting factors for the individual controlled and manipulated
variables, with Σ αi = 1. If the criterion has a unique minimum, the condition

dSeu/dq = 0    (19.1.4)

leads to the optimal controller parameters, (19.1.5).
Already with the restriction to v = 2, however, the required computational effort
increases considerably with the number p of controlled variables, approximately
proportionally to n^3, if n is the number of parameters [5.36, p. 186]. Good starting
values of the controller parameters or given parameters q0i, compare section 5.2.2,
can reduce the computational effort. Appropriate starting values for parameter
optimization can be determined through tuning rules, which will be treated in the
following. Note that the results depend very much on the noise or command
signals which act separately or simultaneously on the twovariable system, see the
start of section 19.1.
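A minimal sketch of such a numerical parameter optimization: a hypothetical first-order P-canonical twovariable process under P main controllers is simulated for simultaneous unit reference steps, the criterion (19.1.3) is evaluated, and both gains are picked by a coarse grid search (a simple stand-in for the optimization methods mentioned above; all process coefficients are invented):

```python
def simulate_cost(KR1, KR2, alpha=(0.5, 0.5), r=(0.02, 0.02), M=100):
    # hypothetical twovariable P-canonical process with P main controllers;
    # simultaneous unit reference steps; returns S_eu according to (19.1.3)
    y1 = y2 = u1_old = u2_old = 0.0
    S = 0.0
    for _ in range(M):
        e1, e2 = 1.0 - y1, 1.0 - y2          # control errors
        u1, u2 = KR1 * e1, KR2 * e2          # P main controllers
        S += alpha[0] * (e1 ** 2 + r[0] * (u1 - u1_old) ** 2)
        S += alpha[1] * (e2 ** 2 + r[1] * (u2 - u2_old) ** 2)
        # first-order main elements with static cross couplings (invented)
        y1, y2 = (0.8 * y1 + 0.2 * u1 + 0.05 * u2,
                  0.9 * y2 + 0.1 * u2 + 0.04 * u1)
        u1_old, u2_old = u1, u2
    return S

# coarse grid search over both main-controller gains
best = min((simulate_cost(g1 / 10, g2 / 10), g1 / 10, g2 / 10)
           for g1 in range(1, 40) for g2 in range(1, 40))
print(best)
```

In practice a gradient or Hooke-Jeeves search would replace the grid, and the grid result would serve as the starting value discussed above.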
Tuning rules for parameter-optimized main controllers with P-, PI- or PID-
behaviour have been developed by several authors. However, these rules have been
obtained only for continuous-time signals. Since, at least for small sample times,

they can also be used for discrete-time controllers, some tuning rules given in [18.7,
19.1-19.5] are described.
The tuned controller parameters, of course, have to lie inside the stability region,
sufficiently distant from the stability limits. An additional requirement in practice is
that each of the control loops remains stable if the other is opened. Therefore the
gains must always satisfy KRii/KRiiK < 1 and can only lie within the hatched areas
in Figure 19.7.
Based on the stability regions, the following controller parameter tuning rules
can be derived. The cases a) to d) refer to Figure 19.7.
1. Determination of the stability limits
1.1 Both main controllers are switched to P-controllers.
1.2 Set KR22 = 0 and increase KR11 until the stability limit KR11K is
    reached → point A.
1.3 Set KR11 = 0 and search for KR22K → point B.
1.4 Set KR11 = KR11K and increase KR22 until a new oscillation with constant
    amplitude is obtained → point C for a) and b).
1.5 If no intermediate stability occurs, KR22 is increased for KR11 =
    KR11K/2 → point C' in case c) and d).
1.6 In case a) and b) item 1.4 is repeated for KR22 = KR22K and changing
    KR11 → point D for a).
Now a rough picture of the stability region is known and also which case a) to d)
is appropriate.

Figure 19.7a-d Allowable regions of controller gains for twovariable systems. Negative
coupling: a symmetrical; b asymmetrical. Positive coupling: c symmetrical; d asymmetrical.

2. Choice of the gains KRii(P) for P-controllers
a) If the control performance of y1 is more important:
   KR11 = 0.5 KR11K    KR22 = 0.5 KR22C
   If the control performance of y2 is more important:
   KR22 = 0.5 KR22K    KR11 = 0.5 KR11D
b) The parameters are generally chosen within the broader branch of the
   stability region:
   KR11 = 0.5 KR11K    KR22 = 0.5 KR22C
c) KR11 = 0.25 KR11K   KR22 = 0.5 KR22C
d) KR11 = 0.5 KR11K    KR22 = 0.5 KR22K
3. Choice of the parameters for PI-controllers
   Gain: as for P-controllers
   Integration time:
   a) + b): TIii = (0.8 ... 1.2) TPC or TIii = 0.85 TPiiK
   c) + d): TIii = (0.3 ... 0.8) TPC or TIii = 0.85 TPiiK
   TPC and TPiiK are the time periods of the oscillations at the stability points C,
   or A for i = 1 and B for i = 2.
4. Choice of the parameters for PID-controllers
   KRii(PID) = 1.25 KRii(P)
   TIii(PID) = 0.5 TIii(PI)
   TDii = 0.25 TIii(PID)
These tuning rules can only give rough values for an appropriate tuning of the
controllers. The resulting dynamic response always has to be checked, and in many
cases corrections are required.
These rules have been given for controllers with continuous-time signals. Up to
now, tuning rules for discrete-time twovariable control algorithms are not
known. It can be assumed, however, that they can be used in the same way for
discrete-time signals. At least for relatively small sample times the tuning rules
given for continuous-time signals should furnish good approximate values. The
principle of keeping a suitable distance to the stability region remains unchanged,
also for large sample times.
After the determination of the characteristic values K, TI and TD, the
parameters q0, q1 and q2 of the PID-algorithm (19.1.2) can be calculated using
(5.1.5).
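Steps 2 to 4 can be mechanized once the characteristic values from step 1 are known. A sketch for case a) with y1 more important, using hypothetical characteristic values; the conversion to q0, q1, q2 assumes the common rectangular-rule discretization of the PID algorithm (the exact form of (5.1.5) is not reproduced here):

```python
def tune_case_a(K_R11K, K_R22C, T_PC, T0):
    # steps 2-4 for case a), control performance of y1 more important:
    # P gains from step 2, PI/PID conversion from steps 3 and 4
    KR11_P, KR22_P = 0.5 * K_R11K, 0.5 * K_R22C
    params = []
    for K_P in (KR11_P, KR22_P):
        K = 1.25 * K_P              # KRii(PID) = 1.25 KRii(P)
        TI = 0.5 * (1.0 * T_PC)     # TIii(PID) = 0.5 TIii(PI), TIii(PI) ~ T_PC
        TD = 0.25 * TI              # TDii = 0.25 TIii(PID)
        # conversion to q0, q1, q2 -- rectangular-rule discretization assumed
        q0 = K * (1 + TD / T0)
        q1 = -K * (1 + 2 * TD / T0 - T0 / TI)
        q2 = K * TD / T0
        params.append((q0, q1, q2))
    return params

# hypothetical characteristic values determined in step 1
print(tune_case_a(K_R11K=2.0, K_R22C=1.0, T_PC=8.0, T0=1.0))
```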
The dynamic response of different twovariable control systems with P-canonical
structure has been considered in [18.7]. In the case of simultaneous disturbances
on both controlled variables, the coupling factor κ0, positive or negative, has no
major influence on overshoot, damping, etc. The control behaviour depends much
more on the mutual effect of the main controllers (groups I to IV in Table 18.1). If
the system is symmetric, the control becomes worse in the sequence of groups
I → III → IV → II, and if it is asymmetric in the sequence of groups III → I → IV → II.
The best control resulted for negative coupling if R11 reinforces R22 and

Table 19.1 Comparison of the control performance of unsymmetrically coupled and
decoupled control loops in the case of simultaneous disturbance

group   control variable 1   control variable 2
        (faster)             (slower)

I       equal                better
II      worse                worse
III     equal                better
IV      worse                equal/worse

R22 counteracts R11, and for positive coupling if both controllers reinforce each
other. In both cases the main controller of the slower loop is reinforced. The poorest
control results for negatively coupled processes, where R11 counteracts R22 and
R22 reinforces R11, and especially for positive coupling with counteracting control-
lers. In these cases the main controller of the slower loop is counteracted. This
example also shows that the faster loop is influenced less by the slower loop. It is
the effect of the faster loop on the slower loop which plays a significant role.
A comparison of the control performance of the coupled twovariable system
with the uncoupled loops gives further insight [18.7]. Only small differences occur
for symmetrical processes. For asymmetrical processes, see Table 19.1, it is shown
that the control performance of the slower loop is improved by the coupling, if its
controller or both controllers are reinforced. The loops should then not be
decoupled. The control performance becomes worse if both controllers counteract,
or if the controller of the slower loop is counteracted. Only then should one
decouple, i.e. especially for positively coupled processes with counteracting
controllers.

19.2 Decoupling by Coupling Controllers (Non-interaction)

If the coupled control system has a poor behaviour or if the process requires
decoupled behaviour, decoupling controllers can be designed in addition to the
main controllers. Decoupling is generally only possible for definite signals. A multi-
variable control system as in Figure 19.8 is considered, with dim y = dim u = dim w.
External signals v and w result in

y = [I + Gp R]^-1 Gpv v + [I + Gp R]^-1 Gp R w = Gv v + Gw w    (19.2.1)

whereas for missing external signals the modes are described by:

[I + Gp R] y = 0 .    (19.2.2)
Three types of non-interaction can be distinguished [18.2], [19.6].

Figure 19.8 Multivariable control system

a) Non-interaction for reference signals
The reference variable wi influences only the controlled variable yi but not the
other yj. Then

Gw = [I + Gp R]^-1 Gp R = Dw    (19.2.3)

must be a diagonal matrix.
b) Non-interaction for disturbance signals
A disturbance vi influences only yi, but not the other yj. Then

Gv = [I + Gp R]^-1 Gpv = Dv    (19.2.4)

must be diagonal.
c) Non-interaction of modes
The modes of the single loops do not influence each other if the system has no
external disturbance. Then the elements of y are decoupled and Eq. (19.2.2) leads
to the open loop matrix

G0 = Gp R = D0    (19.2.5)

which must be diagonal. A system which has independent modes is also
non-interacting for reference inputs.
The diagonal matrices can be freely chosen within some limits. The transfer
functions can be given for example in the same way as for the uncoupled loops. Then
the coupling controllers Rij can be calculated and checked for realizability. As a
decoupled system for disturbances is difficult to design and is often unrealizable
[18.2], in the following only the independence of modes, which also leads to
non-interaction for the reference variables, is briefly considered.

(19.2.5) gives:

R = Gp^-1 D0 = [adj Gp / det Gp] D0 .    (19.2.6)

The choice of the elements of D0 and of the structure of R is arbitrary if the
realizability conditions are satisfied and the process inputs are of acceptable size.
Some cases are briefly discussed.

a) P-structure process and P-like structure controller
The process transfer matrix is, see Eq. (18.1.2),

     [G11  G12]
Gp = [G21  G22]

and the controller matrix is

    [R11  R12]
R = [R21  R22] .

Due to (19.2.6) the controller becomes:

R = 1/(G11 G22 - G12 G21) · [ G22 D11   -G12 D22 ]
                            [-G21 D11    G11 D22 ]    (19.2.7)

If D describes the response of the uncoupled loops, D11 = G11 R1 and
D22 = G22 R2, then:

R = 1/(G11 G22 - G12 G21) · [ G11 G22 R1   -G12 G22 R2 ]
                            [-G11 G21 R1    G11 G22 R2 ]    (19.2.8)

If realizability problems occur, D must be changed.


b) P-structure process with V-structure controllers
The decoupling scheme of Figure 19.9 gives, with the main controllers

RH = [R1   0 ]
     [0    R2]

and the coupling controllers

RK = [0    R12]
     [R21  0  ]

Figure 19.9 Non-interaction of a P-canonical process by V-canonical decoupling after the
main controllers

the overall controller

R = [I - RK]^-1 RH .    (19.2.9)

Decoupling of the modes for reference signals is attained, see (19.2.6), if

[I - RK]^-1 RH = Gp^-1 D0

i.e. if

RH = [I - RK] Gp^-1 D0    (19.2.10)

is satisfied. Hence, for a twovariable system with Dii = Gii Ri,

R11 = R1    R22 = R2    (19.2.11)

R12 = -G12/G11    R21 = -G21/G22 .    (19.2.12)

The decoupling is very simple. The main controllers do not require any addi-
tional term and the coupling controllers are independent of the main con-
trollers. R12 and R21 are not realizable if the order of the process main elements
is higher than the order of the coupling elements or if they have zeros outside of
the unit circle of the z-plane. An inspection of the block diagram shows that
the equations of the coupling controllers correspond to ideal feedforward
controllers.
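At steady state the coupling controllers of (19.2.12) reduce to static gains. A minimal sketch with invented process gains, verifying that the static transfer matrix between the main-controller outputs and the controlled variables becomes diagonal:

```python
def v_decoupler_gains(K11, K12, K21, K22):
    # static version of the V-structure coupling controllers (19.2.12);
    # they act like ideal feedforward terms cancelling the cross couplings
    return -K12 / K11, -K21 / K22

# hypothetical static gains of a P-canonical twovariable process
K11, K12, K21, K22 = 2.0, 0.5, 0.8, 1.6
R12, R21 = v_decoupler_gains(K11, K12, K21, K22)

# with u1 = m1 + R12*u2 and u2 = m2 + R21*u1 (m: main-controller outputs),
# the static map from (m1, m2) to (y1, y2) should be diagonal
den = 1 - R12 * R21
T = [[(K11 + K12 * R21) / den, (K11 * R12 + K12) / den],
     [(K21 + K22 * R21) / den, (K21 * R12 + K22) / den]]
print(T)
```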
c) V-structure process with P-structure controller
Decoupling according to Figure 19.10 again leads to simple relationships

R11 = R1    R22 = R2    (19.2.13)

R12 = -G11 G12    R21 = -G22 G21 .    (19.2.14)

No realizability problem occurs.

Figure 19.10 Non-interaction of a V-canonical process by P-canonical decoupling after the
main controllers

The decoupling structures shown in Figures 19.9 and 19.10 are basically also
valid for higher-order multivariable processes [19.6]. Compared with analogue
control systems, these decouplings can easily be realized in process computers
through algorithms as in feedforward control systems.

19.3 Parameter Optimization of the Main and Coupling Controller

Section 19.1 showed that the couplings in a twovariable process may deteriorate or
improve the control compared with uncoupled processes. Should the control
behaviour deteriorate, coupling controllers are to be introduced which act in
a "decoupling" way. The previous section already treated the case of complete
decoupling. If, through the process couplings, the control behaviour improves,
one should examine whether the control performance can be improved even more by
additional reinforcement of the coupling, that means through coupling controllers
acting in a "coupling" way. This has been considered in [18.7].
As shown in Figure 19.1, the coupling controllers can be arranged in P-similar
structure before, behind or parallel to the main controllers. Corresponding
V-similar structures can also be used. The same algorithms which are used for the main
controllers, (19.1.2), can be applied for the coupling controllers. Often pure P-
controllers are sufficient, hence for parallel arrangement in P-structure, e.g.

ui(z) = q0ij ej(z) = KRij ej(z) .

As for the main controllers, a control performance criterion according
to (19.1.3) can be used to optimize the parameters KRij. For numerical parameter
optimization the unknown parameters KRij are taken up in the parameter vector
q^T, (19.1.5).
In [18.7] proportionally acting coupling controllers were examined for various
symmetrical and asymmetrical twovariable processes consisting of 4th-order
proportional elements in P-canonical structure. The results listed in the following
refer to these structures. Both control variables are disturbed at the same time.
For symmetrical processes coupling controllers show no essential improvement
of the control performance of the two control variables. For asymmetrical processes
the quadratic integral criterion can be improved by 10% to 50% for the cases
listed in Table 19.2. This table also shows whether the coupling controllers should act
in a "coupling" or "decoupling" way. Hence, coupling controllers should act
in an additionally "coupling" way if the main controllers reinforce each other,
or if the controller R11 of the faster control loop reinforces the controller
R22 of the slower control loop and the coupling G12 to the slower control loop is
dynamically fast. (In the latter case the coupling controllers act from the
faster to the slower control loop.)
Coupling controllers are to be used in a decoupling way if the main controllers
counteract each other, or if the controller R22 of the slower control loop is counteracted
by the controller R11 of the faster control loop. (In the latter case the decoupling
coupling controllers act from the fast to the slow control loop.)

Table 19.2 Improving effect of coupling controllers for twovariable processes in P-canoni-
cal structure; coupling controllers arranged in P-similar structure behind the main
controllers (Fig. 19.1b), simultaneous disturbance of both control variables

Group  Coupling   Mutual effect of        G11 and G12 fast,       G11 and G21 fast,
                  the main controllers    G22 and G21 slow        G22 and G12 slow

I      positive   reinforcing (with       K21 coupling,           K12 coupling,
       κ0 > 0     highly unsymmetrical    improves control        improves control
                  processes)              performance of          performance of
                                          loop 2                  loops 1 and 2

II                counteracting           K21 decoupling,         K12 decoupling,
                                          improves control        improves control
                                          performance of          performance of
                                          loop 1                  loop 2

III    negative   R11 reinforces R22,     K12 coupling,
       κ0 < 0     R22 counteracts R11     improves control
                                          performance of loop 2

                  R11 counteracts R22,    K12 decoupling,
                  R22 reinforces R11      improves control
                                          performance of
                                          loops 1 and 2

Note that these results are only valid for the indicated process structures. It is
recommended to treat each case specifically and to make sure that no instabili-
ties occur through the opening of single control loops, caused by the additionally
introduced coupling controllers.
20 Multivariable Matrix Polynomial Control Systems

Based on the matrix polynomial representation of multivariable processes de-
scribed in section 18.1.5, the design principles of some single input/single output
controllers can be transferred to the multivariable case with equal numbers of
process inputs and outputs.

20.1 The General Matrix Polynomial Controller

The basic matrix polynomial controller is

P(z^-1) u(z) = Q(z^-1) [w(z) - y(z)] = Q(z^-1) e(z)    (20.1.1)

with the polynomial matrices

P(z^-1) = P0 + P1 z^-1 + ... + Pμ z^-μ
Q(z^-1) = Q0 + Q1 z^-1 + ... + Qν z^-ν .    (20.1.2)

The manipulated variables can be calculated from

u(z) = P^-1(z^-1) Q(z^-1) e(z)    (20.1.3)

if P(z^-1) is nonsingular. Substituting into the process equation

A(z^-1) y(z) = B(z^-1) z^-d u(z)    (20.1.4)

leads to the closed-loop system:

y(z) = [A(z^-1) + B(z^-1) P^-1(z^-1) Q(z^-1) z^-d]^-1
       · B(z^-1) P^-1(z^-1) Q(z^-1) z^-d w(z) .    (20.1.5)
Comparison with Eq. (6.1.3) indicates the formal correspondence with the single
input/single output case.

20.2 The Matrix Polynomial Deadbeat Controller

It is assumed that all polynomials of the process model have order m and that all
inputs are delayed by the same dead time d. A deadbeat controller then results by
requiring a finite settling time of m + d for the process outputs and of
m for the process inputs if step changes of the reference variables w(k) are assumed.
For the SISO case this gave the closed-loop responses, c.f. section 7.1,

y(z)/w(z) = B^-1(1) B(z^-1) z^-d

u(z)/w(z) = B^-1(1) A(z^-1)

and the deadbeat controller

GR(z) = u(z)/e(z) = B^-1(1) A(z^-1) / [1 - B^-1(1) B(z^-1) z^-d] .

A direct analogy leads to the design equation for the multivariable deadbeat
controller (MDB1) [20.1]:

[I - B^-1(1) B(z^-1) z^-d] u(z) = B^-1(1) A(z^-1) e(z) .    (20.2.1)

This controller results in the finite settling time responses

u(z) = B^-1(1) A(z^-1) w(z)    (20.2.2)

y(z) = A^-1(z^-1) B(z^-1) z^-d B^-1(1) A(z^-1) w(z) = R(z^-1) w(z)    (20.2.3)

if R(z^-1) has a finite order of m + d. The controller equation can also be written as:
u(z) = B^-1(1) [A(z^-1) e(z) + B(z^-1) z^-d u(z)] .    (20.2.4)
To decrease the amplitudes of the process inputs the settling times can be increased.
If the settling time is increased by one unit to m + 1 and m + d + 1, the SISO
deadbeat controller becomes, c.f. Eq. (7.2.14),

GR(z) = u(z)/e(z) = q0 [1 - (1/α) z^-1] A(z^-1) / [1 - q0 (1 - (1/α) z^-1) B(z^-1) z^-d]

with 1/α = 1 - 1/(q0 B(1)). q0 can be arbitrarily chosen in the range

1/[(1 - a1) B(1)] ≤ q0 ≤ 1/B(1)

so that

u(1) ≤ u(0) .

The smallest process inputs are obtained for

q0 = 1/[(1 - a1) B(1)]

which means that

1/α = a1 .
The multivariable analogy (MDB2) is

[I - Q0 [I - H z^-1] B(z^-1) z^-d] u(z) = Q0 [I - H z^-1] A(z^-1) e(z)    (20.2.5)

with

H = I - [B(1) Q0]^-1 .    (20.2.6)

Q0 can be chosen arbitrarily in the range

Q0min = B^-1(1) [I - A1]^-1    and    Q0max = B^-1(1)    (20.2.7)

satisfying u(1) = u(0) for Q0min. For the smallest process inputs, u(1) = u(0), this
requires that

Q0 = B^-1(1) [I - A1]^-1    (20.2.8)

yielding

H = A1 .    (20.2.9)
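A numerical sketch of the design choice (20.2.8)/(20.2.9) for an invented 2x2 process; the explicit form H = I - [B(1) Q0]^-1 is assumed here for (20.2.6), which for Q0 from (20.2.8) indeed reproduces H = A1:

```python
def inv2(M):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mul2(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2 = [[1.0, 0.0], [0.0, 1.0]]

# hypothetical process data: static gain matrix B(1) and first
# denominator coefficient matrix A1 of A(z^-1)
B1_static = [[1.0, 0.2], [0.1, 0.8]]
A1 = [[-0.5, 0.0], [0.0, -0.4]]

# (20.2.8): Q0 = B^-1(1) [I - A1]^-1 gives the smallest process inputs
I_minus_A1 = [[I2[i][j] - A1[i][j] for j in range(2)] for i in range(2)]
Q0 = mul2(inv2(B1_static), inv2(I_minus_A1))

# H = I - [B(1) Q0]^-1 (assumed form of (20.2.6)); should equal A1
BQ_inv = inv2(mul2(B1_static, Q0))
H = [[I2[i][j] - BQ_inv[i][j] for j in range(2)] for i in range(2)]
print(H)
```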

20.3 Matrix Polynomial Minimum Variance Controllers

A stochastic matrix polynomial model

A(z^-1) y(z) = B(z^-1) z^-d u(z) + D(z^-1) v(z)    (20.3.1)

is assumed, with:

D(z^-1) = D0 + D1 z^-1 + ... + Dm z^-m .    (20.3.2)

A generalized minimum variance controller is obtained by minimizing the criterion
[20.1]

J(k+d+1) = E{[y(k+d+1) - w(k)]^T [y(k+d+1) - w(k)]
          + [u(k) - uw(k)]^T R [u(k) - uw(k)]}    (20.3.3)

with R = R^T positive semidefinite. uw(k) is the steady-state value of u(k)
belonging to the reference value w(k):

uw(k) = B^-1(1) A(1) w(k) .    (20.3.4)

Corresponding to (14.2.4), the process and signal model is split up into

z^(d+1) y(z) = A^-1(z^-1) [B(z^-1) z u(z) + L(z^-1) v(z)] + F(z^-1) z^(d+1) v(z)    (20.3.5)

where the new matrix polynomials are defined by:

F(z^-1) = I + F1 z^-1 + ... + Fd z^-d    (20.3.6)

L(z^-1) = L0 + L1 z^-1 + ... + L(m-1) z^-(m-1) .    (20.3.7)

Their parameters are determined by:

D(z^-1) = A(z^-1) F(z^-1) + z^-(d+1) L(z^-1) .    (20.3.8)
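In the SISO special case the matrix identity (20.3.8) reduces to a scalar polynomial identity that can be solved by coefficient matching; a sketch with an invented first-order example (A = 1 - 0.9 z^-1, D = 1 + 0.4 z^-1, d = 1):

```python
def solve_fl_identity(a, dpol, d):
    # scalar version of (20.3.8): D(z^-1) = A(z^-1) F(z^-1) + z^-(d+1) L(z^-1)
    # a, dpol: coefficient lists [1, a1, ..., am] and [1, d1, ..., dm];
    # returns f = [1, f1, ..., fd] and l = [l0, ..., l(m-1)]
    m = len(a) - 1
    A = a + [0.0] * (d + 1)       # zero padding for safe indexing
    D = dpol + [0.0] * (d + 1)
    f = [1.0]
    for i in range(1, d + 1):     # match coefficients of z^-1 ... z^-d
        f.append(D[i] - sum(A[j] * f[i - j] for j in range(1, i + 1)))
    l = []
    for i in range(m):            # remaining coefficients define L
        k = d + 1 + i
        l.append(D[k] - sum(A[j] * f[k - j]
                            for j in range(1, m + 1) if 0 <= k - j <= d))
    return f, l

f, l = solve_fl_identity([1.0, -0.9], [1.0, 0.4], d=1)
print(f, l)
```

For the example this matches the hand calculation F = 1 + 1.3 z^-1 and L = 1.17.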
(20.3.5) is now transformed into the time domain and, analogously to (14.1.7) to
(14.1.10), J(k+d+1) is obtained. Then ∂J(k+d+1)/∂u(k) = 0 is computed,

resulting in

B1^T [A^-1(z^-1) [B(z^-1) z u(z) + L(z^-1) v(z)] - w(z)] + R [u(z) - uw(z)] = 0
(20.3.9)

where v(z) can be replaced (reconstructed) by

v(z) = D^-1(z^-1) [A(z^-1) y(z) - B(z^-1) z^-d u(z)] .    (20.3.10)

After introducing (20.3.4), the generalized matrix polynomial minimum variance
controller (MMV1) is found to be

u(z) = [F(z^-1) D^-1(z^-1) B(z^-1) z + (B1^T)^-1 R]^-1
       · {[I + (B1^T)^-1 R B^-1(1) A(1)] w(z)
          - A^-1(z^-1) L(z^-1) D^-1(z^-1) A(z^-1) y(z)} .    (20.3.11)

If R = 0 is set, the minimum variance controller (MMV2) results from (20.3.9) and
(20.3.1), [20.2],

u(z) = B^-1(z^-1) [z^-1 / (1 - z^-(d+1))] {A(z^-1) [w(z) - y(z)]
       + [D(z^-1) - L(z^-1)] v(z)}    (20.3.12)

where v(z) must be reconstructed from Eq. (20.3.10). This controller yields for the
closed-loop system

y(z) = F(z^-1) v(z) + z^-(d+1) w(z) .    (20.3.13)
Examples are given in section 26.11.
21 Multivariable State Control Systems

The state controller for multivariable processes was designed in chapter 8. Therefore only a few additional comments are made in this chapter. The process
equation considered in the deterministic case is
x(k + 1) = A x(k) + Bu(k) (21.1 )
y(k) = Cx(k) (21.2)
with m state variables, p process inputs and r process outputs. The optimal
steady-state controller is then
u(k) = - Kx(k) (21.3)
and possesses p x m coefficients if each state variable acts on each process input.

21.1 Multivariable Pole Assignment State Controllers

For the closed-loop system


x(k + 1) = [A - BK]x(k) = Fx(k) (21.1.1 )
the characteristic equation is

det[zI - A + BK] = det[zI - F] (21.1.2)

c.f. section 8.3. The p x m controller coefficients, however, cannot be determined uniquely by assigning the m coefficients αi of the characteristic equation or by assigning the m poles zi. Therefore additional requirements can be taken into account if the controller design is realized through characteristic equation assignment or pole assignment. Some examples dealing with this matter will be considered in the following.
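This non-uniqueness is easy to see numerically: the closed-loop poles are the eigenvalues of F = A - BK, and with p·m controller coefficients but only m poles there are more unknowns than conditions. A small sketch with invented data:

```python
import numpy as np

A = np.array([[0.8, 0.1],
              [0.0, 0.9]])
B = np.eye(2)                     # two inputs acting on two states
K = np.array([[0.4, 0.1],
              [0.0, 0.6]])        # one of many K giving the same poles
F = A - B @ K                     # closed-loop matrix, cf. (21.1.1)
poles = np.sort(np.linalg.eigvals(F).real)
print(poles)                      # z1 = 0.3, z2 = 0.4
```

Here K has p·m = 4 coefficients while only m = 2 poles are prescribed, so two degrees of freedom remain for additional structural requirements.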
According to the basic structures of two-variable processes treated in section 18.2, a general two-variable process is assumed with the state variables

x^T = [x1^T x2^T x3^T x4^T] (21.1.3)

and manipulated and controlled variables


u^T = [u1 u2]   y^T = [y1 y2] (21.1.4)

with the following matrices:

A = [A11 A12 A13 A14
     A21 A22 A23 A24
     A31 A32 A33 A34
     A41 A42 A43 A44]

B = [b11 b12
     b21 b22
     b31 b32
     b41 b42]

C = [c11^T c12^T c13^T c14^T
     c21^T c22^T c23^T c24^T] (21.1.5)

Note that the index notation, compared with section 18.2, is changed and adapted to the usual matrix representation. The state controller is assumed to be

K = [k11^T k12^T k13^T k14^T
     k21^T k22^T k23^T k24^T] (21.1.6)

and for the closed-loop system

A - BK = F = [F11 F12 F13 F14
              F21 F22 F23 F24
              F31 F32 F33 F34
              F41 F42 F43 F44] (21.1.7)

becomes valid. F is composed of the blocks

Fij = Aij - bi1 k1j^T - bi2 k2j^T,  i, j = 1, ..., 4. (21.1.8)

As shown in section 18.2, the system matrix A can be transformed into a canonical
form (e.g. controllable canonical form or observable canonical form for multivari-
able systems). Matrix F obtains the same form. The coefficients of the characteristic
equation (21.1.2) are determined by the parameters of F. Specifications of the
structures of K and F ease the generation of specific coefficients of the characteristic
equation or ease the determination of the coefficients of K for a prescribed
characteristic equation or prescribed poles. Some simplified structures are there-
fore briefly examined.
If, for example, only the state variables x1 and x4 of the main transfer elements are fed back, then the controller matrix is to be written as follows:

K = [k11^T 0 0 0
     0 0 0 k24^T] (21.1.9)
Table 21.1 Settings of simplified state controllers K for two-input and two-output variables and the basic structures depicted in Figures 18.6, 18.7 and 18.8. (For direct state coupling, P-canonical structure and V-canonical structure the table lists the matrices A, B, C together with examples of simplified state controllers and of state controllers decoupling the process parts.)

and F is simplified considerably. In the case of direct state coupling the number of
controller coefficients equals the number of coefficients of the characteristic equa-
tion, so that the controller coefficients can be determined uniquely provided the
poles are assigned.
For P- and V-canonical structures the characteristic equation has order m = m1 + m2 + m3 + m4, if the dimension of x1 is m1, the dimension of x2 is m2, etc. For
assigned poles the controller coefficients can be determined uniquely, if their
number is also m. This is the case for the simplified state controllers which are listed
in Table 21.1. They were composed such that, except for the state variables of the
main transfer elements, the state variables of the coupling elements act on the
corresponding control variables in the sense of feedforward control.
Simplifications of the state controller can also be recommended if controllers which correspond to specific command variables or control variables are to be switched to manual operation of the corresponding manipulated variable. This should be realized in such a way that the state variables of the corresponding main transfer elements do not directly influence the remaining controls via the controllers. Then for P- or V-structure k14^T = 0 and k21^T = 0 can be set.
Another possibility to simplify F is to parameterize only the diagonal matrices F11, F22, F33, F44 and to set all nondiagonal matrices to zero. This decouples the state vectors x1, x2, x3, x4, so that the eigenoscillations of the individual processes do not disturb each other. Table 21.1 shows the resulting parametrization of K for the case of direct state coupling. Then A41 - b42 k21^T = 0 and A14 - b11 k14^T = 0 must be fulfilled.
This state decoupling of the systems cannot be realized, however, for the P-canonical and V-canonical structures with the assumed controller structure (21.3), since it leads to the requirement K = 0 for the P-structure, or to the coupling matrices A21 = 0, A34 = 0 for the V-structure.
At this point it should be emphasized again that the basic structures, treated here as examples, rarely occur in this pure form; mixtures of these structures mostly prevail. Further methods for determining the controller coefficients via pole assignment are given in [2.19]. Pole assignment for multivariable processes in diagonal form through state controllers (modal control) has already been treated in section 8.4. A basic paper on the decoupling of multivariable state control systems for command variable behaviour was published in [21.1]. Here an additional feedforward control u = Lw and a specific choice of K lead to command autonomy, c.f. [21.2, 21.3].

21.2 Multivariable Matrix Riccati State Controllers

As the above examples showed, the determination of state controller parameters using pole assignment methods can be expedient when specific structural simplifications of the controller are involved. If, however, it is difficult to specify the poles and all coefficients of the controller matrix K may be occupied, then the design of multivariable state controllers according to (8.1.34)

K = (R + B^T P B)^-1 B^T P A (21.2.1)

via a quadratic criterion (8.1.2) and the solution of the recursive matrix Riccati difference equation (8.1.31) is preferred. The free design parameters then occur in the weighting matrices Q and R. By varying these matrices the time histories of the state, manipulated and control variables can be systematically influenced, c.f. section 8.10. Since in most cases not all state variables are measurable, state observers have to be used for control. The corresponding design methods for multivariable processes with initial value disturbances as well as external disturbances have been treated in sections 8.6 and 8.7. Here the direct design using a quadratic performance criterion is again advantageous, c.f. (8.6.15) and (8.6.16). Especially for multivariable state control a computer-aided, iterative design should be chosen.
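Such a computer-aided design can be sketched very compactly: the Riccati difference equation is iterated until P becomes stationary, and K then follows from (21.2.1). The numbers below are invented, and the sign convention u(k) = -Kx(k) of (21.3) is used:

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)                      # state weighting
R = np.array([[0.5]])              # input weighting

P = Q.copy()
for _ in range(500):               # iterate the Riccati difference equation
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print(np.abs(np.linalg.eigvals(A - B @ K)).max())   # < 1: stable closed loop
```

Changing Q and R and re-running the iteration is exactly the iterative trial with the weighting matrices recommended in the text.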

21.3 Multivariable Decoupling State Controllers

A multivariable state control system for which the outputs y do not influence one another,

y(k + 1) = Λ y(k) = Λ C x(k) (21.3.1)
Λ = diag(λi), i = 1, ..., r

(a non-interacting system, c.f. section 19.2), can be obtained by comparison with [21.1]

y(k + 1) = C x(k + 1) = C[A - BK] x(k) (21.3.2)

resulting in

K = [CB]^-1 [CA - ΛC] (21.3.3)

where the parameters λi determine the eigenvalues of the system, (21.1.2).
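A numerical sketch of (21.3.3) with invented matrices; the check verifies that C[A - BK] = ΛC, i.e. y(k + 1) = Λ y(k) with no coupling between the outputs:

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.1, 0.7]])
B = np.array([[1.0, 0.0],
              [0.2, 1.0]])
C = np.eye(2)
Lam = np.diag([0.5, 0.4])                      # desired eigenvalues lambda_i
K = np.linalg.solve(C @ B, C @ A - Lam @ C)    # Eq. (21.3.3)
print(np.allclose(C @ (A - B @ K), Lam @ C))   # True: outputs decoupled
```

The construction requires CB to be invertible; otherwise this simple decoupling law does not exist.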

21.4 Multivariable Minimum Variance State Controllers

In section 15.3 an optimal state controller for stochastic disturbances was discussed which minimizes the performance criterion Eq. (15.1.5) and uses a state variable estimator. The derivation of this state controller was performed according to the state controller for deterministic disturbances in chapter 8. In this section another approach is presented which is based on the minimum variance principle shown in chapter 14, which uses a prediction of the noise and which is especially suitable for multivariable adaptive control [26.33, 26.43]. To derive stochastic minimum variance state controllers the innovations state space model (as suitable for identification methods)

x(k + 1) = A x(k) + B u(k) + F v(k) (21.4.1)
y(k) = C x(k) + v(k) (21.4.2)

is used, where v(k) is a zero-mean Gaussian white noise. The quadratic criterion

J(k + 1) = E{x^T(k + 1) Q x(k + 1) + u^T(k) R u(k)} (21.4.3)

where Q is positive definite and R positive semidefinite, is to be minimized. This criterion is the same as Eq. (8.1.8), the only difference being that the process is disturbed by the noise v(k). Therefore the results of (8.1.9) to (8.1.10) can be directly used to write

∂J(k + 1)/∂u(k) = 2 B^T Q [A x(k) + B u(k) + F v(k)] + 2 R u(k) = 0 (21.4.4)

and the generalized multivariable minimum variance state controller (MSMV1) becomes

u(k) = -(B^T Q B + R)^-1 B^T Q [A x(k) + F v(k)]. (21.4.5)

The noise can be reconstructed by

v(k) = y(k) - C x̂(k) (21.4.6)

where x̂(k) is predicted using (21.4.1). If the deadtime d is not included in the system matrix A, the controller equations are [20.1]

u(k) = -(B^T Q B + R)^-1 B^T Q [A x̂(k + d) + A^d F v(k)] (21.4.7)

with the prediction

x̂(k + d) = E{x(k + d) | k} = A^d x̂(k) + Σ_{i=0}^{d-1} A^(d-i-1) B u(k - d + i). (21.4.8)

Another version which corresponds to the minimum variance controllers discussed in chapters 14 and 20.3 is obtained by using the criterion

J(k + d + 1) = E{y^T(k + d + 1) S y(k + d + 1) + u^T(k) R u(k)}. (21.4.9)

Here the variances of the outputs rather than of all state variables are weighted. Introducing Q = C^T S C in (21.4.3) yields

u(k) = -(B^T C^T S C B + R)^-1 B^T C^T S C [A x̂(k + d) + A^d F v(k)] (21.4.10)

(MSMV2). For R = 0 finally the multivariable minimum variance state controller (MSMV3) becomes

u(k) = -(CB)^-1 C [A x̂(k + d) + A^d F v(k)]. (21.4.11)

The controller equations show that they consist of a deterministic feedback law ux(k) and a stochastic feedforward law uv(k),

u(k) = ux(k) + uv(k). (21.4.12)

The deterministic feedback controller in the generalized minimum variance controller is the matrix Riccati controller if P in (21.2.1) or Eq. (8.1.34) is replaced by Q. And the deterministic controller in the minimum variance controller (21.4.11) is a decoupling state controller, (21.3.3), with Λ = 0 [21.1, 26.43].
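The effect of the minimum variance state controller (21.4.11) can be checked in simulation. In the following sketch (invented numbers, deadtime d = 0, exact state assumed instead of an estimate) the control law cancels C[Ax(k) + Fv(k)] exactly, so that the output y(k + 1) = Cx(k + 1) + v(k + 1) consists of the unpredictable innovation alone:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.eye(2)
C = np.eye(2)
Fm = np.diag([0.5, 0.3])

x = np.array([1.0, -1.0])
for k in range(50):
    v = 0.1 * rng.standard_normal(2)
    # MSMV3 with d = 0: u(k) = -(CB)^-1 C [A x(k) + F v(k)]
    u = -np.linalg.solve(C @ B, C @ (A @ x + Fm @ v))
    x = A @ x + B @ u + Fm @ v
print(np.allclose(C @ x, 0.0))   # True: C x(k+1) is driven to zero each step
```

In a real application x and v would of course be replaced by the estimates (21.4.6) and (21.4.8), so the cancellation holds only in the mean.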
Comment on the multivariable systems treated so far

The multivariable processes and multivariable control systems treated in chapters 18 to 21 presuppose a complete (central) process model. Quite often, however, a complete multivariable process model is not known, and for reliability and simplicity reasons the control algorithms are not desired to be realized in only one (central) computer. This then leads to the design of decentralized control systems for multivariable processes, c.f. section 26.11.
To assume the same sampling time for all input and output variables of multivariable processes is disadvantageous for processes with significantly different settling times. The design of parameter-optimized controllers can of course be directly applied for different sampling times, yet this is not so easy for the structure-optimal multivariable control systems of chapters 20 and 21. One can then attempt to convert the required models to integer multiples of the sampling time. Also possible is the transition to the design of decentralized control systems, c.f. section 26.11.
22 State Estimation

For the realization of state controllers for processes with stochastic disturbances, estimates x̂(k) of the internal state variables are required which are based on the measured input and output signals u(k) and y(k), see chapters 15 and 21. The measurable variables y(k) are influenced not only by u(k) but also by the nonmeasurable noise signals v(k) and n(k). Since, however, only the signal part caused by u(k) is of interest in the state variables x(k), suitable filtering methods have to separate the signal from the noise. Therefore the general problem of how to separate signals from noise is briefly treated first, followed by the derivation of the Kalman filter, explaining the principle of estimation in several steps for both the scalar and the vector case. The state representation allows a direct treatment of multivariable processes.
A signal s(t) is supposed to be separated from a noise n(t), and only y(t) = s(t) + n(t) is measurable. It is assumed that noise and signal are located in the same frequency range, which excludes a simple separation through bandpass filtering and requires the application of estimation methods.
The Wiener filter was developed first in 1940 by Wiener [22.1], compare Figure 22.1. Signal s(t) and noise n(t) are assumed to be uncorrelated continuous-time signals whose spectral densities Sss(ω) and Snn(ω) are known in rational fractional form. The estimation error

s̃(t) = s(t) - ŝ(t)

can be minimized for -∞ < t < ∞ according to the least squares method, and one obtains as transfer function of the Wiener filter [22.2, 22.3]

G_WF(iω) = [Sss(ω)/Syy⁻(ω)]_PR · 1/Syy⁺(ω)

with Syy(ω) = Sss(ω) + Snn(ω) and Syy⁺(ω) as the rational fractional factor of Syy(ω) with poles and zeros in the left half plane. Here only the physically realizable part (poles in the left half plane) of Sss(ω)/Syy⁻(ω) is used, which is marked by "PR".
The calculation of the Wiener filter may cause considerable problems. The factorization of Syy(ω) might be difficult when trying to solve the problem in the frequency domain. The corresponding solution in the time domain requires the

Figure 22.1 Estimation of signal s using the Wiener filter. s(t) signal, n(t) noise, y(t) measured signal, ŝ(t) estimated signal, s̃(t) estimation error.

solution of the Wiener-Hopf integral equation. Furthermore, stationary signals have to be presumed. Finally, in its original form the Wiener filter is not well suited for the use of digital computers.
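The frequency-domain character of the Wiener solution can be illustrated with the simpler noncausal variant, whose gain is Sss(ω)/[Sss(ω) + Snn(ω)]; the causal form discussed in the text additionally requires the spectral factorization of Syy(ω). The spectral densities below are invented:

```python
import numpy as np

w = np.linspace(0.0, 10.0, 201)        # frequency axis
Sss = 1.0 / (1.0 + w**2)               # assumed signal spectral density
Snn = 0.1 * np.ones_like(w)            # assumed white measurement noise
G = Sss / (Sss + Snn)                  # noncausal Wiener gain per frequency
print(round(G[0], 3), round(G[-1], 3)) # near 1 where the signal dominates,
                                       # small where the noise dominates
```

The gain simply weights each frequency by the local signal-to-noise ratio, which is exactly why signal and noise occupying the same frequency range makes the problem hard.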
A considerable extension to filter design was published by Kalman in 1960. Here, instead of stating the equation of the filter explicitly, an algorithm is used which recursively estimates the signal after each new measured value. The Kalman filter is not restricted to stationary stochastic signals but can also be applied to nonstationary signals. It was first derived for discrete-time signals, Kalman [22.4], followed one year later by the version for continuous-time signals, Kalman-Bucy [22.5], the so-called Kalman-Bucy filter.
Another essential difference between the Wiener and the Kalman-Bucy filter is that the Kalman-Bucy filter describes the signal x(t) through a vector differential equation, hence through a parametric model, while the Wiener filter describes the signal only in the form of the spectral density or autocorrelation function of s(t), hence through nonparametric models. By this means it is possible to estimate not only the signal x(t) itself but also all inner state variables, e.g. its derivatives ẋ(t), ẍ(t), etc., and one obtains a state variable filter.
The Wiener filter can thus be classed with the "classical" system theory and the Kalman filter with the "modern" system theory. Modern system theory, which was essentially influenced by digital computer techniques, is predominantly used in the time domain and with parametric models describing the inner structure of the process. A transparent derivation of the Kalman filter is given in the following.

22.1 Vector Signal Processes and Assumptions

It is assumed that a stationary stochastic vector signal can be described by the


Markov process
x(k + 1) = Ax(k) + Fv(k) (22.1.1)
with measurement equation
y(k) = Cx(k) + n(k) or y(k + 1) = Cx(k + 1) + n(k + 1) (22.1.2)
see Figure 22.2 and section 12.2.2. This process will be extended subsequently to

Figure 22.2 Stochastic vector Markov process y(k + 1) (u = 0) or a dynamic process with measurable input u(k), output y(k + 1) and noise n(k).

include a measurable input u(k). Here the following symbols are used:

x(k)  (m x 1) state vector
v(k)  (p x 1) input noise vector, statistically independent, with covariance matrix V
y(k)  (r x 1) measurable output vector
n(k)  (r x 1) output noise vector, statistically independent, with covariance matrix N
A     (m x m) system matrix
F     (m x p) input matrix
C     (r x m) output matrix
A, F and C are assumed to be time invariant.
The objective is to estimate the state vector x(k) based on measurements of the outputs y(k) which are contaminated by the white noise n(k). The following are assumed to be known a priori:

A, C and F

E{v(k)} = v̄(k) = v̄ (22.1.3)
cov[v(k), τ = i - j] = E{[v(i) - v̄][v(j) - v̄]^T} = V δij (22.1.4)
E{n(k)} = 0
cov[n(k), τ = i - j] = E{n(i) n^T(j)} = N δij (22.1.5)

where

δij = 1 for i = j,  δij = 0 for i ≠ j
is the Kronecker delta function. v(k) and n(k) are statistically independent. As the state estimates are time varying in most applications, one is interested in recursive estimation, in which the estimates x̂(k) are calculated after each measurement of y(k). The derivation of the estimation algorithms can be based on several estimation methods, for example

• the principle of orthogonality between estimation error and measurement, E{x̃ y^T} = 0 [22.4], [22.6], [22.2]
• the recursive least squares method [22.7]

• the minimum variance estimation [22.8]
• the maximum likelihood method [22.8]
• the Bayes method [3.12]

The following derivation uses minimum variance recursive estimation. This method is appropriate as an introduction because it is transparent and leads directly to a nice interpretation of the result.

22.2 Weighted Averaging of Two Vector Measurements

The resulting Kalman filter estimation algorithms form a weighted mean of two independent vector estimates. Therefore it is assumed that x̂1 and x̂2 are two statistically independent estimates of an m-dimensional vector x. The weighted mean of these two estimates is

x̂ = x̂1 + K'[x̂2 - x̂1] (22.2.1)

where K' is an (m x m) weighting matrix which is to be chosen such that the variance of the estimate x̂ becomes a minimum. Subsequently a dynamic system is considered where x̂1 is a state vector of dimension m, and instead of x̂2 a measurable output vector y2 of dimension p < m is used. Therefore

y2 = C x̂2 (22.2.2)

is asserted. Then with K' = KC, (22.2.1) becomes

x̂ = x̂1 + KC[x̂2 - x̂1]
  = x̂1 + K[y2 - C x̂1]
  = [I - KC] x̂1 + K y2. (22.2.3)

The (m x m) covariance matrix of x̂1 is

Q = E{(x̂1 - E{x̂1})(x̂1 - E{x̂1})^T} (22.2.4)

and the (p x p) covariance matrix of y2 is

Y = E{(y2 - E{y2})(y2 - E{y2})^T}. (22.2.5)

For the covariance matrix of the estimate x̂, Eq. (22.2.3) gives

P = E{[x̂ - E{x̂}][x̂ - E{x̂}]^T}
  = E{[(I - KC)x̂1 + Ky2 - (I - KC)E{x̂1} - KE{y2}] [(I - KC)x̂1 + Ky2 - (I - KC)E{x̂1} - KE{y2}]^T}
  = (I - KC)Q(I - KC)^T + KYK^T (22.2.6)

as x̂1 and y2 are statistically independent.
Now a value of K is sought which minimizes the variances of the estimation errors (the diagonal elements of P). To find this minimum without differentiation, Eq.

(22.2.6) is modified. Multiplying out, it follows that

P = Q + (KC)Q(KC)^T - (KC)Q - Q(KC)^T + KYK^T
  = Q + K(CQC^T + Y)K^T - K(CQ) - (CQ)^T K^T (22.2.7)

with Q = Q^T, because Q is symmetric. Eq. (22.2.7) can be formed into a complete square in K. Two new matrices R and S are introduced [22.9]:

(KS - R)(KS - R)^T = K(SS^T)K^T - KSR^T - RS^T K^T + RR^T
                   = K(SS^T)K^T - K(SR^T) - (SR^T)^T K^T + RR^T. (22.2.8)

If now one sets

SS^T = CQC^T + Y (22.2.9)
SR^T = CQ (22.2.10)

then

(KS - R)(KS - R)^T = K(CQC^T + Y)K^T - K(CQ) - (CQ)^T K^T + RR^T. (22.2.11)
This equation agrees with (P - Q) except for the term RR^T; see (22.2.7). RR^T can be obtained from (22.2.9) and (22.2.10) as follows:

S^T S R^T = S^T C Q
R^T = (S^T S)^-1 S^T C Q
R = Q C^T S (S^T S)^-1
RR^T = Q C^T S (S^T S)^-1 (S^T S)^-1 S^T C Q = Q C^T W C Q

with W = S(S^T S)^-1 (S^T S)^-1 S^T. Since

S^T W S = (S^T S)(S^T S)^-1 (S^T S)^-1 (S^T S) = I
S S^T W S S^T = S I S^T = S S^T

it follows that W = (SS^T)^-1

and with (22.2.9)

RR^T = Q C^T (C Q C^T + Y)^-1 C Q. (22.2.12)

With (22.2.11) and (22.2.12), (22.2.7) can be written as

P = Q + (KS - R)(KS - R)^T - Q C^T (C Q C^T + Y)^-1 C Q. (22.2.13)
In this equation only the term (KS - R)(KS - R)^T depends on K. The diagonal elements of this term consist of sums of squares of elements of (KS - R) and therefore can only have positive or zero values. The diagonal elements of P become minimal

only if (KS - R) = 0 or, using (22.2.9) and (22.2.10),

KS = R
K = RS^T(SS^T)^-1
K = QC^T(CQC^T + Y)^-1. (22.2.14)

The minimum variance of the estimation error of x̂ is then, using (22.2.7),

P = Q - KCQ (22.2.15)

and the minimum variance estimate is

x̂ = x̂1 + K[y2 - C x̂1] (22.2.16)

with K from (22.2.14).
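A small numerical check of (22.2.14) and (22.2.15) with invented covariances; with C = I the matrix result reduces componentwise to the familiar scalar fusion 1/p = 1/q + 1/y:

```python
import numpy as np

Q = np.diag([1.0, 4.0])                        # covariance of estimate x1
C = np.eye(2)
Y = np.diag([0.5, 0.5])                        # covariance of measurement y2
K = Q @ C.T @ np.linalg.inv(C @ Q @ C.T + Y)   # Eq. (22.2.14)
P = Q - K @ C @ Q                              # Eq. (22.2.15)
print(np.diag(P))    # [1/3, 4/9]: fused variances smaller than both sources
```

The second component, with the larger prior variance 4, is weighted more towards the measurement (K22 = 8/9) than the first (K11 = 2/3), which is exactly the interpretation of the weighted mean.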

22.3 Recursive Estimation of Vector States (Kalman-Filter)

The recursive weighted averaging of two vector variables described in the last section is now applied to the estimation of the state x(k + 1) of the Markov process (22.1.1) and (22.1.2). In (22.2.3) the following are introduced:

x̂1 = x̃(k + 1), the prediction of x(k + 1) based on the last estimate x̂(k)
y2 = y(k + 1), the new measurement

with the prediction

x̃(k + 1) = A x̂(k) + F v̄(k). (22.3.1)

Here the expectation v̄(k) is used, as the exact value v(k) is unknown. The recursive estimation algorithm is then

x̂(k + 1) = x̃(k + 1) + K(k + 1)[y(k + 1) - C x̃(k + 1)]. (22.3.2)

To calculate the correction matrix K(k + 1), the covariance matrices Q(k + 1) of x̃(k + 1) and Y of y(k + 1) have to be known, as in (22.2.14). The covariance matrix of the estimation error of x̂(k) is

P(k) = E{(x̂(k) - E{x̂(k)})(x̂(k) - E{x̂(k)})^T}.

Then from Eq. (22.3.1), with E{x̃(k + 1)} = E{x(k + 1)}, one obtains

Q(k + 1) = E{(x̃(k + 1) - E{x̃(k + 1)})(x̃(k + 1) - E{x̃(k + 1)})^T}
         = A P(k) A^T + F V F^T (22.3.3)

as x̂(k) and v(k) are uncorrelated. Furthermore (22.1.2) gives

Y(k + 1) = E{(y(k + 1) - E{y(k + 1)})(y(k + 1) - E{y(k + 1)})^T}
         = E{n(k + 1) n^T(k + 1)} = N. (22.3.4)

Hence the correction matrix becomes, from (22.2.14),

K(k + 1) = Q(k + 1) C^T [C Q(k + 1) C^T + N]^-1 (22.3.5)

and from (22.2.15) the covariance matrix of the estimation error of x̂(k + 1) becomes

P(k + 1) = Q(k + 1) - K(k + 1) C Q(k + 1). (22.3.6)

If the prediction x̃(k + 1) given by (22.3.1) is inserted in (22.3.2) and v̄(k) = 0 is assumed, one obtains the recursive estimation algorithm of the Kalman filter:

x̂(k + 1)   =   A x̂(k)   +   K(k + 1) [y(k + 1)   -   C A x̂(k)]
new            predicted     correction  new           predicted
estimate       estimate,     matrix      measurement   measurement,
               based on                                based on
               old estimate                            old estimate
(22.3.7)
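The recursion (22.3.1) to (22.3.7) translates almost literally into code. A minimal sketch with invented model data (v̄ = 0; FVF stands for the product F V F^T):

```python
import numpy as np

def kalman_step(xhat, P, y, A, C, FVF, N):
    # One recursion of Eqs. (22.3.1)-(22.3.7), with v_bar(k) = 0.
    xpred = A @ xhat                                  # prediction (22.3.1)
    Q = A @ P @ A.T + FVF                             # (22.3.3)
    K = Q @ C.T @ np.linalg.inv(C @ Q @ C.T + N)      # (22.3.5)
    P = Q - K @ C @ Q                                 # (22.3.6)
    xhat = xpred + K @ (y - C @ xpred)                # (22.3.2)/(22.3.7)
    return xhat, P, K

A = np.array([[0.95, 0.1],
              [0.0, 0.9]])
C = np.array([[1.0, 0.0]])     # only x1 is measured
FVF = 0.01 * np.eye(2)
N = np.array([[0.1]])

rng = np.random.default_rng(1)
x, xhat, P = np.array([1.0, -1.0]), np.zeros(2), np.eye(2)
for k in range(200):
    x = A @ x + rng.multivariate_normal(np.zeros(2), FVF)
    y = C @ x + rng.multivariate_normal(np.zeros(1), N)
    xhat, P, K = kalman_step(xhat, P, y, A, C, FVF, N)
print(K.shape, np.diag(P))     # gain settles; error covariance stays bounded
```

Note that x2 is estimated although only x1 is measured; the coupling term 0.1 in A makes x2 observable through the output.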

Some additional remarks are appropriate:

Starting values
To start the filter algorithm, assumptions on the initial state have to be made. If it is
unknown
x(O) = x(O)
is taken. The initial value of the covariance matrix P(O) must also be assumed. For
properly chosen x(O) and P(O) their influence vanishes quickly with time k, so that
precise knowledge is unnecessary.

The correction matrix


As the correction matrix is independent of the measurements it can be calculated
in advance. After inserting (22.3.6) in (22.3.5) it follows that:
K(k + 1) = P(k + 1)C T N- 1 . (22.3.8)
For a stationary process K(k + 1) reaches a constant value for k --+00. P and Qthen
become constant covariance matrices. They can be calculated from the equation
system
P- 1 = Q- 1 + C T N- 1 C (22.3.9)
Q = APA T + FVF T (22.3.10)
which follows from [3.12].
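A sketch of this precalculation by fixed-point iteration of (22.3.9) and (22.3.10), with invented model data (FVF again stands for F V F^T). At the fixed point the gain from (22.3.8) agrees with the recursive form (22.3.5):

```python
import numpy as np

A = np.array([[0.95, 0.1],
              [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
FVF = 0.01 * np.eye(2)
N = np.array([[0.1]])

P = np.eye(2)
for _ in range(300):                   # fixed-point iteration
    Q = A @ P @ A.T + FVF                                             # (22.3.10)
    P = np.linalg.inv(np.linalg.inv(Q) + C.T @ np.linalg.inv(N) @ C)  # (22.3.9)
K = P @ C.T @ np.linalg.inv(N)         # stationary gain (22.3.8)
print(np.allclose(K, Q @ C.T @ np.linalg.inv(C @ Q @ C.T + N)))  # True
```

This constant K can then be stored, so that the on-line filter needs only the state update (22.3.7) per sample.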

Block diagram

In Figure 22.3 the estimation algorithm is shown for v(k) = O. The Kalman filter
contains the homogeneous part of the process. The measured y(k + 1) and its
model predicted value j(k + 1) are compared and an error
e(k + 1) = y(k + 1) - j(k + 1) = y(k + 1) - Ci(k + 1)
=y(k+1)-CAi(k) (22.3.11)
is formed. This error causes a correction i(k + 1) to the predicted i(k + 1).

Dynamical processes with a measurable input u(k)

If a measurable stochastic or deterministic input u(k) acts on the process via an input matrix B, the prediction becomes

x̃(k + 1) = A x̂(k) + B u(k) + F v̄(k) (22.3.12)

Figure 22.3 Markov process with Kalman filter for the estimation of x̂(k + 1). (The filter forms the predicted estimate x̃(k + 1), the predicted new output C x̃(k + 1), the output error e(k + 1) (residual, innovation) and the correction of the predicted estimate to the new estimate x̂(k + 1).)

and after inserting into (22.3.2)

x̂(k + 1) = A x̂(k) + B u(k) + F v̄(k) + K(k + 1)[y(k + 1) - C A x̂(k) - C B u(k) - C F v̄(k)]. (22.3.13)

Orthogonalities

In the original work of Kalman [22.4] the recursive state estimator was derived by applying the orthogonality condition between the estimation error and the measurements

E{x̃(i) y^T(j)} = 0 for j < i. (22.3.14)

However, other orthogonalities also hold for minimum variance estimates:

E{x̂(i) x̃^T(i)} = 0 (22.3.15)

or, with ŷ(i) = C x̂(i),

E{ŷ(i) x̃^T(i)} = 0. (22.3.16)

From these orthogonalities it follows that the error signal (residual, innovation)

e(k + 1) = y(k + 1) - C A x̂(k)

is statistically independent:

E{e(i) e^T(j)} = 0 for i ≠ j. (22.3.17)

Extensions to the original Kalman filter


Since 1960 many publications have appeared which can be considered as extensions of the original Kalman filter. Among the extensions are:

• input noise v(k) and measurement noise n(k) are correlated
• input noise v(k) is correlated (coloured)
• measurement noise n(k) is correlated
• influence of wrong initial values, covariance matrices and process parameters on the convergence (divergence)
• simultaneous estimation of unknown covariance matrices
• nonlinear filter problems
  - simultaneous state and parameter estimation
  - nonlinear processes

These problems are treated for example in [22.2], [22.3], [22.9], [22.10].
F Adaptive Control Systems
23 Adaptive Control Systems (A Short Review)

The implementation of structure-optimal and precisely adjusted control algorithms presupposes the knowledge of dynamic process models. For stochastic control systems, signal models have to be known additionally. Since both process models and signal models can be obtained through identification and parameter estimation methods using digital computers, a combination with controller design methods is appropriate. If the single problems are to be solved on-line, in real time and in closed loop, this leads to selftuning and adaptive control systems.
There are many more ways in which adaptive controllers or adaptive control algorithms can be realized with process computers and microcomputers than with analog techniques. Digital automation systems enable the implementation of complex control algorithms which would otherwise either not be realizable at all or only at unjustifiable expense using analog techniques. In addition there are some advantages in having process models and controllers in discrete-time form compared with continuous-time form, especially in theoretical development and in computational effort. Furthermore, the progress in the field of process identification and in the synthesis of controllers plays an important role.
For these reasons interest in adaptive control has increased considerably during recent years. Many early papers on adaptive control are given e.g. in [23.1] to [23.5] and in the books [23.6] to [23.11] and [23.16]. Most of these early papers consider continuous-time systems.
Since about 1970 papers dealing with the realization of adaptive control in digital systems are of special relevance. Surveys, e.g. [23.14], [23.17], [23.20], and books [23.21], [23.22], [2.22], [2.23] show that the consideration of adaptive control systems with discrete-time signals has been prevailing ever since.
This chapter gives a short survey of the most important basic structures of adaptive control systems which are suited for digital systems. Many different definitions of the term "adaptive" have been proposed. In [23.15] some of these definitions are summarized and discussed. The ones which are meanwhile most commonly used will be applied in the following.
Unlike fixed control systems, adaptive control systems adapt (adjust) their behaviour to the changing properties of the controlled processes and their signals.
If the process behaviour changes can be observed from measurable signals acting on the process, and if it is known in advance how the controller has to be adjusted

Figure 23.1 Basic schemes of controller adaptation. a Feedforward adaptation (open-loop adaptation, gain scheduling); AS: adaptation algorithm (feedforward); z: measurable external signal. b Feedback adaptation (closed-loop adaptation); AR: adaptation algorithm (feedback)
depending on the signals, then feedforward adaptation can be applied, Figure 23.1a. Here there is no feedback from the internal closed loop to the adaptation.
If the process behaviour changes cannot be observed directly, feedback adaptation has to be applied, Figure 23.1b. The changing process properties can then be determined, for example, by process identification or by determination of the closed-loop behaviour. This results in a second feedback, leading to a closed-loop action with the signal flow path: control loop signals - adaptation algorithm - controller - control loop signals.
The following treats only feedback adaptation, as feedforward adaptation (if applicable) is usually straightforward. For slow process parameter changes adaptive controllers with feedback can be divided into two main groups. Model identification adaptive systems (MIAS) determine a process model using measured input and output signals and identification methods, compare Figure 23.2a. Here the controller parameters are calculated according to a controller design method which has been programmed in advance. This class of adaptive controllers is also referred to as "selftuning regulators".
Model reference adaptive systems (MRAS) try to obtain a closed-loop response close to that of a given reference model for a given input signal. Here an external signal, for example the reference variable, is measured, a difference signal is formed from the signals of the control loop and of the reference model, and this difference changes the controller parameters through an adaptation method, compare Figure 23.2b.
For a continuous adaptation to a changing process behaviour, model identification adaptive systems presuppose a changing process input signal u. Stationary

Figure 23.2a, b. Basic schemes of adaptive controllers with feedback adaptation. a Adaptive controller with identification model (MIAS); b adaptive controller with reference model (MRAS)

stochastic noise signals n may act on the process if appropriate identification methods (especially parameter estimation methods) are applied. The reference variable or other external signals need not necessarily change. For model reference adaptive systems the desired controller adaptation only takes place after changes of the external signals, and noise signals n acting on the process disturb the adaptation. These adaptive controllers are therefore basically meant for servo control systems.
Both methods of feedback adaptation can be designed with parametric as well as with nonparametric models. Furthermore, the applied models and controllers can be defined in continuous time as well as in discrete time. During the last 15 years both methods have been applied mainly with parametric models. Discrete-time signals were mainly used for MIAS, while for MRAS continuous-time as well as discrete-time signals were considered.
The basic principles of both adaptive control systems are given in the following.

23.1 Model Reference Adaptive Systems (MRAS)

Since the reference model methods and the relevant theory are mainly given for
continuous-time signals, continuous time is assumed first. Surveys dealing with this
class of adaptive systems can be found in [23.12, 23.13, 23.21, 23.37].
130 23 Adaptive Control Systems (A Short Review)

With MRAS a reference model is prescribed expressing the desired closed loop
transfer behaviour, for example the command variable behaviour

G_M(s) = y_M(s) / w(s) ,   (23.1.1)

compare Figure 23.2b. The unknown parameters of a controller G_R(s) are then tuned in such a way that an appropriate error signal, e.g. the

output error:  Δy(t) = y_M(t) − y(t)   (23.1.2)
state error:   e(t) = x_M(t) − x(t) ,   (23.1.3)
becomes small. Hereby performance criteria are chosen which are suited for further analytical use, for example quadratic performance criteria such as:

I1 = ∫₀^{t1} Δy²(t) dt   (23.1.4)

I2 = ∫₀^{t1} eᵀ(t) P e(t) dt .   (23.1.5)
Minimizing the performance criterion together with possible additional requirements then results in the adaptation law. Note that the emerging adaptive system is in principle nonlinear and time-variant.

23.1.1 Local Parameter Optimization


The first MRAS methods assume that the parameters r_R to be tuned are already close to their correct values. If a simple gradient method is used for optimization, each parameter is changed proportionally to the gradient of the performance criterion:

Δr_Ri = −k_i grad_{r_Ri} I = −k_i ∂I/∂r_Ri ,   i = 1, 2, … , p .   (23.1.6)

Hereby k_i > 0 are gain factors which have to be chosen appropriately. For the rate of change it follows that

dr_Ri/dt = −k_i (∂/∂t)(∂I/∂r_Ri) = −k_i (∂/∂r_Ri)(∂I/∂t) ,   (23.1.7)

where the time derivative of I is assumed to be small (slow adaptation). If the performance criterion (23.1.4) is introduced, then with (23.1.2) it follows that

dr_Ri/dt = −k_i (∂/∂r_Ri) Δy²(t) = −2 k_i Δy(t) ∂Δy(t)/∂r_Ri
         = 2 k_i Δy(t) ∂y(t)/∂r_Ri .   (23.1.8)

Hence the rate of change is proportional to the product of the error signal and the parameter sensitivity of the output signal. Integration leads to:

r_Ri(t1) = 2 k_i ∫₀^{t1} Δy(t) (∂y(t)/∂r_Ri) dt .   (23.1.9)

This result originates in [23.23], [23.24] and is referred to as the "MIT rule". The parameter sensitivity function has to be determined from a sensitivity model which follows from the command variable behaviour

y(s) = G_w(s) w(s)   (23.1.10)

of the control loop as

∂y(s)/∂r_Ri = (∂G_w(s)/∂r_Ri) w(s) .   (23.1.11)

In this the parameters to be adapted are replaced by their instantaneously tuned values. However, the expense of the sensitivity model can be remarkable, depending on the number of parameters to be adapted. Figure 23.3 shows the resulting adaptation scheme.
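The gradient adaptation above can be made concrete with a minimal numerical sketch: a single feedforward gain r ahead of a static plant gain Kp is adapted so that the output follows a reference model with gain Km. All numerical values (Kp, Km, the adaptation gain and the input signal) are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Sketch of the MIT rule (23.1.8)/(23.1.9): adapt a feedforward gain r so
# that the plant output y = Kp*r*u follows the model output ym = Km*u.
# The sensitivity dy/dr = Kp*u would come from a sensitivity model (23.1.11).
Kp, Km = 2.0, 6.0          # assumed plant and reference model gains
gamma, dt = 0.05, 0.01     # adaptation gain k_i and Euler step (chosen small)
r = 0.0
for k in range(20000):
    u = np.sin(0.01 * k)               # continuously changing input w(t)
    dy = Km * u - Kp * r * u           # error Δy = ym - y
    r += dt * 2 * gamma * dy * Kp * u  # dr/dt = 2*k_i*Δy*(dy/dr), cf. (23.1.8)
print(round(r, 2))   # converges to Km/Kp = 3.0
```

With a small gain the parameter drifts slowly towards Km/Kp; larger gains can destabilize the loop, which motivates the stability-based designs treated next.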
Practical application of this simplest reference model method is restricted by several properties. First of all, the magnitudes of the gains k_i have to be found by trial and error. In order to fulfill the requirements of the derivation the gain factors k_i have to be chosen sufficiently small, so that only a slow adaptation results. Nonlinear behaviour, however, might cause instability for large signal values. Therefore, in the further development of the reference model methods the requirement of global stability was taken into account from the outset when designing the method. This was first accomplished by Ljapunov's design and later by Popov's hyperstability method.

Figure 23.3. Model reference adaptive control system with parameter adjustment through the gradient method (local parameter optimization)

23.1.2 Ljapunov Design


In order to consider global stability right from the beginning, the performance criterion is modified in such a way that a Ljapunov function results for the error equation system [23.25, 23.13, 23.26, 23.21, 23.37]. Here the state space representation is more suitable.
One assumes as reference model

dx_M(t)/dt = A_M x_M(t) + B_M u_r(t)   (23.1.12)

and as control system

dx(t)/dt = A x(t) + B u_r(t) .   (23.1.13)
For the state error it follows that:

e(t) = x_M(t) − x(t) .   (23.1.14)

Subtracting (23.1.13) from (23.1.12) yields the error equation system:

de(t)/dt = A_M e(t) + [A_M − A] x(t) + [B_M − B] u_r(t) .   (23.1.15)

The Ljapunov function for this equation system is now chosen in such a way that the state errors as well as the parameter errors are contained [23.21]:

V = eᵀ P e + tr[(A_M − A)ᵀ F_A⁻¹ (A_M − A)] + tr[(B_M − B)ᵀ F_B⁻¹ (B_M − B)] .   (23.1.16)
Here P, F_A⁻¹ and F_B⁻¹ are positive definite matrices which still have to be determined. For the first derivative of the Ljapunov function one obtains

dV/dt = eᵀ [A_Mᵀ P + P A_M] e
      + 2 tr[(A_M − A)ᵀ (P e xᵀ − F_A⁻¹ dA/dt)]
      + 2 tr[(B_M − B)ᵀ (P e u_rᵀ − F_B⁻¹ dB/dt)] .   (23.1.17)
If A_M (of the reference model) is a Hurwitz matrix (eigenvalues with negative real parts), then

A_Mᵀ P + P A_M = −Q   (23.1.18)

holds with a positive definite matrix Q, which then allows a suitable matrix P to be calculated. The first term in (23.1.17) is negative definite for all e(t) ≠ 0, and the second and third term become zero if

dA/dt = F_A [P e] xᵀ
dB/dt = F_B [P e] u_rᵀ   (23.1.19)

is chosen as adaptation law. Here a transformed error signal

ε(t) = P e(t)

may be introduced additionally.

Figure 23.4. Adaptive system with reference model and parameter adjustment via stability design (without state variable filter)

After integration one then obtains:

A(t1) = ∫₀^{t1} F_A ε(t) xᵀ(t) dt
B(t1) = ∫₀^{t1} F_B ε(t) u_rᵀ(t) dt   (23.1.20)

According to (23.1.17) a globally asymptotically stable system emerges, provided F_A and F_B are (arbitrary) positive definite matrices. Figure 23.4 shows the resulting adaptive system.
Comparing (23.1.20) with (23.1.9) shows that the error signal is multiplied by the states and the input variables instead of by the locally valid sensitivity functions of the output variable.
The assumed system structure (23.1.13) can be used, for example, to develop an adaptive state feedback control (contained in A) and an adaptive feedforward control (contained in B). Note that the derivatives contained in e(t) have to be generated separately (e.g. through state variable filters).
The resulting adaptation law follows directly from the Ljapunov function which, however, was chosen arbitrarily. This shows that the resulting adaptation law is only one among several others. Another disadvantage is the arbitrary choice of the many free parameters of Q, F_A and F_B.
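For a first-order example the Ljapunov adaptation law (23.1.19)/(23.1.20) can be sketched numerically; the reference model, initial values, gains and input below are illustrative assumptions:

```python
import numpy as np

# Scalar Ljapunov adaptation: da/dt = Fa*(p*e)*x, db/dt = Fb*(p*e)*u,
# with state error e = xm - x; am < 0 makes the reference model Hurwitz.
am, bm = -2.0, 2.0          # assumed reference model dxm/dt = am*xm + bm*u
a, b = -1.0, 0.5            # initial adjustable system dx/dt = a*x + b*u
Fa, Fb, p = 5.0, 5.0, 1.0   # assumed positive definite "matrices" (scalars)
xm = x = 0.0
dt = 1e-3
for k in range(200000):     # 200 s of simulated time, Euler integration
    t = k * dt
    u = np.sin(t) + 0.5 * np.sin(0.3 * t)   # sufficiently rich input u_r(t)
    e = xm - x
    xm += dt * (am * xm + bm * u)
    x  += dt * (a * x + b * u)
    a  += dt * Fa * p * e * x               # cf. (23.1.19)
    b  += dt * Fb * p * e * u
print(abs(xm - x) < 0.05)
```

The Ljapunov function (23.1.16) guarantees that the state error stays bounded and decays; with a sufficiently rich input the parameters a and b also drift towards a_M and b_M.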

23.1.3 Hyperstability Design


According to [23.21] a larger number of adaptation laws can be attained by
applying the Popov hyperstability concept to the considered nonlinear and time-
variant system instead of using the Ljapunov method. For this the adaptive

Figure 23.5. Division of the adaptive system into a linear forward path and a nonlinear feedback path

system is divided into

- a linear, time-invariant forward path G_v(s) and
- a nonlinear, time-variant feedback path R_NL,

compare Figure 23.5.


The backward path is supposed to fulfil the Popov integral equation

rJ(td = Jdt)v(t)dt ~ -yJ for all t1 ~ 0 (23.1.21)

with yJ as a finite positive constant factor. The overall system is then (asymp-
totically) hyperstable if the transfer function of the forward path Gv(s) is positive
real [23.27, 23.28, 23.37, 23.39], that means:
a) G_v(s) is real for real s,
b) the poles of G_v(s) satisfy Re(s) < 0,   (23.1.22)
c) for all real ω: Re{G_v(iω)} > 0, −∞ < ω < ∞.

This is the case if for G_v(s) = B_v(s)/A_v(s) it holds that [23.21]:
- A_v(s) and B_v(s) are asymptotically stable
- the pole excess satisfies |deg[A_v(s)] − deg[B_v(s)]| = |m − n| ≤ 1 .   (23.1.23)
Hence, transfer functions are positive real if their maximum phase shift is |φ| ≤ 90°, that means if they show first-order system behaviour. Examples are:

(23.1.24)

It can further be shown that transfer functions are positive real if the system elements are passive, hence involving losses [23.39, 23.40].
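The conditions (23.1.22) can be verified numerically for a concrete example; here the first-order lag G(s) = 1/(1 + Ts) with an arbitrarily chosen time constant:

```python
import numpy as np

# Numerical check of the positive-real conditions (23.1.22) for
# G(s) = 1/(1 + T*s): Re G(iw) > 0 for all real w, i.e. |phase| <= 90 deg.
T = 0.5                               # illustrative time constant
w = np.logspace(-3, 3, 1000)          # frequency grid
G = 1.0 / (1.0 + 1j * T * w)
print(np.all(G.real > 0))                          # condition c) holds
print(np.max(np.abs(np.angle(G))) <= np.pi / 2)    # phase within 90 degrees
```

Both checks print True: the real part 1/(1 + T²ω²) is positive everywhere and the phase lag never exceeds 90°, consistent with the first-order behaviour stated above.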
The system (23.1.12) to (23.1.14) will now be examined further, again using the state representation. (23.1.15) is the equation system relevant for stability. The linear term of this equation is:

(23.1.25)

with u1(t) as an interim variable. For the nonlinear feedback it follows that:

u1(t) = −v(t) = −[A(t) − A_M] x(t) − [B(t) − B_M] u_r(t) .   (23.1.26)
Here A(t) and B(t) are determined through the still unknown adaptation law. For a scalar u1(t) the characteristic equation or the denominator polynomial of the forward path can be written as:

(23.1.27)

In order to make this partial system positive real for n ≥ 1, the associated transfer function has to be extended by a differentiating numerator polynomial of order n − 1, i.e. the filter

(23.1.28)

so that

G_v(s) = ε(s)/u1(s) = D(s)/A_M(s) = D [sI − A_M]⁻¹   (23.1.29)

becomes positive real. In Figure 23.4 this filter corresponds to the block with P.
Now a proportional plus integral acting adaptation law is formed:

A(t1) = ∫₀^{t1} Φ1(ε, t) dt + Φ2(ε, t1) + A(0)
B(t1) = ∫₀^{t1} Ψ1(ε, t) dt + Ψ2(ε, t1) + B(0)   (23.1.30)

Φ1, Φ2, Ψ1 and Ψ2 have to be chosen in such a way that the Popov integral inequality (23.1.21)

η(t1) = ∫₀^{t1} εᵀ(t) v(t) dt ≥ −γ₀²   for all t1 ≥ 0   (23.1.31)

is fulfilled. Hence, after inserting (23.1.26) and (23.1.30),

η(t1) = ∫₀^{t1} εᵀ(t) [∫₀^t Φ1(ε, τ) dτ + Φ2(ε, t) + A(0) − A_M] x(t) dt
      + ∫₀^{t1} εᵀ(t) [∫₀^t Ψ1(ε, τ) dτ + Ψ2(ε, t) + B(0) − B_M] u_r(t) dt
      ≥ −γ₀²   (23.1.32)

has to be valid.

It was shown in [23.21] that this inequality is fulfilled by the following specific solutions:

Φ1 = F_A ε(t) xᵀ(t)
Φ2 = F_A′ ε(t) xᵀ(t)
Ψ1 = F_B ε(t) u_rᵀ(t)
Ψ2 = F_B′ ε(t) u_rᵀ(t)   (23.1.33)

Here F_A, F_A′, F_B and F_B′ are positive definite matrices, for example diagonal ones. For the adaptation laws this yields:
A(t1) = ∫₀^{t1} F_A ε(t) xᵀ(t) dt + F_A′ ε(t1) xᵀ(t1)
B(t1) = ∫₀^{t1} F_B ε(t) u_rᵀ(t) dt + F_B′ ε(t1) u_rᵀ(t1)   (23.1.34)

Consequently, the block diagram of the adaptive system obtains the same structure as already shown in Figure 23.4.
If these adaptation laws are compared with the Ljapunov design (23.1.20) and

ε(t) = D e(t)   (23.1.35)

is observed, it becomes obvious that the main difference lies in the additional proportional term of (23.1.34). This term originates in the quite arbitrary approach (23.1.30). Omitting this term and setting D = P leads to identical adaptation laws according to (23.1.20) and (23.1.34). The Ljapunov equation of the reference model (23.1.18) is fulfilled for D = P. This also means that (23.1.29) is positive real [23.21].
Derivatives of the error signal e(t) are required to realize the adaptation law, which of course is not easy. They can be determined by state variable filters of the form

y_F(s)/y(s) = 1/(c0 + c1 s + … + c_{n−1} s^{n−1})   (23.1.36)

which can, for example, be connected before or after the adjustable system [23.21].
The described reference model design can also be stated for the application of the output error [23.21]:

Δy(t) = cᵀ e(t) .   (23.1.37)
When applying reference model methods to discrete-time signals one has to consider that with adaptive feedback control an additional lag of one sample interval emerges. Therefore globally stable discrete-time systems have to be developed specifically. The hyperstability design basically follows the case of continuous-time signals. If, however, a positive real linear forward path [23.21] has to be realized,

then the hyperstability conditions become more complicated, particularly the choice of the parameters of the cancellation filter D(z) [23.21]. In [23.29, 23.30] a discrete-time adaptive reference model method is described which assumes a measurable output error Δy(k) and at the same time allows process dead times.
A comparison of the Ljapunov design with the hyperstability design shows that both methods can be transformed into each other. The two requirements for the hyperstability of the linear/nonlinear adaptive system correspond with the search for suitable Ljapunov functions. This makes the adaptation laws appear somewhat more systematic. Compared to the reference model method with local parameter optimization (which contains no positive real linear forward path and therefore cannot be globally stable), the global stability of the reference model design through hyperstability comes with the following disadvantages:
a) The adaptive system becomes more sensitive to process disturbances because D(s) has to be added to reduce the pole excess of the reference model.
b) A large number of parameters emerge which in each case have to be determined appropriately:
- polynomial D(s) of the filter
- gain factors F_A, F_A′, F_B, F_B′ of the adaptation
- polynomial C(s) of the state variable filter
With the Ljapunov design the number of free parameters is only somewhat smaller: polynomial C(s) and gain factors F_A, F_B and P or Q.
As is generally expedient for adaptive control, the adaptive reference model methods were described for a parallel arrangement of the reference model and the system to be tuned. However, they also exist in series or series-parallel schemes [23.21].
Finally, the characteristic traits of adaptive control systems with reference model
are once again summarized:
a) For controller parameter adaptation the measurable external input signal w(t)
has to change continuously and has to excite the eigenvalues of the system
sufficiently.
b) On the above assumptions (linear process, precisely known structure of the
transfer behaviour of the process) global stable adaptive systems can be
designed.
c) The design methods consider deterministic signals.
d) Particularly state controllers or low-order parameter-optimized controllers (P-,
PI-controllers) are eligible as controllers.
e) The prescription of an appropriate reference model requires a precise know-
ledge of the process model and the resulting control loop behaviour. The
manipulated variable behaviour has to be particularly observed since it has only
indirect influence. Hence, the required a-priori knowledge is extensive.
f) A given reference model forces the control loop to behave according to the
prescribed model even though a better behaviour could be possible. (Variable
processes actually require variable reference models.)
g) Many free parameters of the filters and gain matrices have to be prescribed. Often this has to be done by trial and error.

h) With constant process parameters the controller parameters are unnecessarily changed if noise signals n(t) act on the process, compare Figure 23.2b.
i) Because of the many degrees of freedom in the adjustable system's basic structure, the various filters and the large number of eligible parameters, tailored solutions are required for specific processes.
j) Some disadvantages might be overcome by additional measures (e.g. output signal analysis, stochastic design methods).
It is because of these properties that adaptive reference model methods are
mainly suited for control systems with continuously changing reference variable
and only small process noise signals. Additionally, the desired closed loop control
system behaviour must be constant and has to be stated precisely. These re-
quirements are fulfilled with e.g. servo control systems and some vehicles.
Up to now adaptive reference model methods have been applied e.g. for D.C.
drives, electrical generators, vessels and satellites, compare e.g. [23.22].
The adaptive reference model method which has been considered so far uses the
reference model for comparison with the closed control loop. It is also referred to as
direct adaptive control with reference model, since the controller parameters are
determined directly. A reference model, however, can also be used for comparison
with the process in order to identify the process. Such adaptive control systems are called indirect adaptive control systems with reference model [23.21], as the controller is designed through an (explicitly determined) process model, that means in an indirect way. This then corresponds to the explicit adaptive control systems with identification model, compare the following section.

23.2 Adaptive Controllers With Identification Model (MIAS)

Adaptive control systems with identification model are based on the identification of a process model in closed loop, compare Figure 23.2a. Here the input signal u(k) and the output signal y(k) are measured. A recursive identification method, for example, determines the process model, and perhaps also a noise signal model, on-line and in real time. Based on this process/noise signal model the controller parameters are calculated using an appropriate controller design method. Then the parameters of the control algorithm are changed.
Hence, these adaptive control methods are composed of two main steps:
1. Process/noise signal identification
2. Controller parameter calculation
They try to attain optimal controller adaptation according to the controller
design method or the control performance criterion. They therefore may be called
self-optimizing controllers.
Development of these adaptive controllers presupposes appropriate recursive
identification methods. In practice they can be realized only with digital computers,
otherwise the computational effort would be intolerable.

The first publication [23.31] combined the recursive least squares method (though in a somewhat unexpedient form) for parameter estimation of the process model with a deadbeat controller for the control. This digital adaptive control method was implemented on a digital computer which was particularly built for that purpose.
The advances in parameter estimation of dynamic processes and in designing stochastic controllers brought forth discrete-time adaptive controllers, also called "selftuning regulators".
In [23.32, 23.33] the recursive least squares method was combined with a minimum variance controller, in [23.34] with an extended minimum variance controller.
Next came e.g. selftuning controllers with prescribed pole design [23.35] and the combination of various parameter estimation and controller design methods. The further development will be treated in chapter 26.
Adaptive controllers with identification model can be designed with non-
parametric or parametric models in continuous or discrete time. Up to now the
emphasis was put on discrete-time parametric models. The resulting adaptive
digital controllers with parameter estimation are also called parameter-adaptive
controllers or selftuning regulators. They can be classified in two main groups,
compare Figure 23.6:
- Explicit parameter-adaptive controllers estimate the process model parameters explicitly, followed by the calculation of the controller parameters. The process model parameters are then available as an interim result.

Figure 23.6a, b. Parameter-adaptive controller. a explicit combination; b implicit combination

- Implicit parameter-adaptive controllers estimate the controller parameters directly, since the process model is implicitly integrated in the adaptive control algorithm.
Adaptive controllers based on identification methods can also be formed with nonparametric models. For this purpose e.g. weighting functions or transfer functions can serve as process models, that means models in the form of nonparametric time responses. The emerging adaptive controllers can be called nonparametric adaptive controllers or time response adaptive controllers. It is their particular advantage that no special model structure with defined order and dead time has to be assumed for identification, e.g. [23.38].
Some characteristic features of adaptive control systems with identification are
listed in the following:
a) For controller parameter adaptation an identification model of adequate accuracy has to be available. This is only the case if the process input signal u(k) changes continuously and if u(k) sufficiently excites the process eigenvalues. Identification in closed loop must also be possible.
b) Stochastic noise signals are admitted when the respective identification methods
are being used.
c) Excitation through external signals is not necessarily required.
d) Many identification methods can be combined with many controller design
methods and controllers. This means that e.g. the controller design through
tuning rules, pole assignment, parameter optimization, Riccati-design, etc. can
be easily applied.
e) The required a-priori knowledge of the process is comparatively small.
f) The number of the free design parameters can be kept small.
g) Adaptive controllers with identification model can be applied for many different
processes and for various operating conditions.
h) They can be carried over directly to multivariable processes and nonlinear processes.
i) For asymptotic stability relatively universally valid conditions are known.
j) Up to now the proof of global stability has only been furnished for some types of parameter-adaptive controllers. This disadvantage, however, can be avoided by additional measures which in practice are necessary anyway.
Because of their many positive properties adaptive controllers based on identi-
fication methods can be applied universally and can be easily adjusted to the
respective conditions by corresponding modifications.
This is why these adaptive controllers will be treated more comprehensively in
the following. Chapter 24 deals with recursive methods of process and signal
identification and chapter 25 with closed loop process identification. This is
followed by a detailed consideration of parameter-adaptive control systems in
chapter 26.
24 On-line Identification of Dynamical Processes and Stochastic Signals

Identification is the experimental determination of the dynamical behaviour of processes and their signals. Measured signals are used to determine the system behaviour within a class of mathematical models. The error between the real process or signal and its mathematical model should be as small as possible [3.12], [3.13]. On-line identification means identification with computers operating on-line with the process. If the measured signals are first stored in blocks or arrays, this is called batch processing. If, however, the signals are processed after each sample instant, this is called real-time processing.
For adaptive control systems, on-line identification in real time (real-time identification) is of primary interest. Furthermore, parametric process and signal models are preferred for controller design. They involve a finite number of parameters and allow the application of advanced controller design procedures with relatively little computational effort for a wide class of processes.
For real time identification recursive parameter estimation methods have been
developed for linear time invariant and time variant processes, for some classes of
nonlinear processes and for stationary and some classes of nonstationary signals.
This chapter reviews some important methods of recursive parameter estimation.
For a more extensive study, particularly for their derivation and their convergence
conditions, the reader is referred to the literature, for example [3.12], [3.13],
[24.20], [24.24] and to the cited references.

24.1 Process and Signal Models

It is assumed that a stable process is time invariant and linearizable, so that it can be described by the linear difference equation

y_u(k) + a1 y_u(k−1) + … + a_m y_u(k−m) = b1 u(k−d−1) + … + b_m u(k−d−m)   (24.1.1)

where

u(k) = U(k) − U₀₀
y(k) = Y(k) − Y₀₀   (24.1.2)

are the deviations of the absolute signals U(k) and Y(k) from the d.c. ('direct current' or steady-state) values U₀₀ and Y₀₀, and d = 0, 1, 2, … is the discrete dead time. From Eq. (24.1.1) the z-transfer function becomes:

G_P(z) = y_u(z)/u(z) = [B(z⁻¹)/A(z⁻¹)] z⁻ᵈ = [(b1 z⁻¹ + … + b_m z⁻ᵐ)/(1 + a1 z⁻¹ + … + a_m z⁻ᵐ)] z⁻ᵈ .   (24.1.3)

Figure 24.1. Process and noise model

The measured output y(k) is assumed to be contaminated by disturbances n(k), Figure 24.1,

y(k) = y_u(k) + n(k) .   (24.1.4)

The disturbance signal n(k) is assumed to be described as an autoregressive moving average (ARMA) signal process, c.f. section 12.2.3,

n(k) + c1 n(k−1) + … + c_p n(k−p) = v(k) + d1 v(k−1) + … + d_p v(k−p)   (24.1.5)

where v(k) is a nonmeasurable, normally distributed, statistically independent noise (discrete white noise) with:

E{v(k)} = 0
cov[v(k), τ] = E{v(k) v(k+τ)} = σ_v² δ(τ)

where σ_v² is the variance and δ(τ) is the Kronecker delta function. The z-transfer function of the noise filter is:

G_v(z) = n(z)/v(z) = D(z⁻¹)/C(z⁻¹) = (1 + d1 z⁻¹ + … + d_p z⁻ᵖ)/(1 + c1 z⁻¹ + … + c_p z⁻ᵖ) .   (24.1.6)

Eqs. (24.1.3) and (24.1.6) yield the combined process and noise model:

y(z) = [B(z⁻¹)/A(z⁻¹)] z⁻ᵈ u(z) + [D(z⁻¹)/C(z⁻¹)] v(z) .   (24.1.7)

The objective of parameter estimation is to estimate the process parameters in the polynomials A(z⁻¹) and B(z⁻¹) and the noise parameters in C(z⁻¹) and D(z⁻¹), based on the measured signals u(k) and y(k). It is assumed that the model orders m and p are known a priori. If this is not the case they can be determined by order search methods [3.13]. The noise n(k) is assumed to be stationary, i.e. the roots of the polynomial C(z⁻¹) lie within the unit circle of the z-plane. The parameter estimation methods described in the following differ, for example, in the assumptions on the structure of the noise filter which must be made for convergent parameter estimates. As well as the general model (24.1.7) one distinguishes in particular two specialized models, the "ML-model", also called the "ARMAX-model" (the ARMA model (24.1.5) with an exogenous variable [23.14]):

A(z⁻¹) y(z) = B(z⁻¹) z⁻ᵈ u(z) + D(z⁻¹) v(z)   (24.1.8)

and the "LS-model":

A(z⁻¹) y(z) = B(z⁻¹) z⁻ᵈ u(z) + v(z) .   (24.1.9)
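The combined model (24.1.7) can be simulated directly from the difference equations (24.1.1), (24.1.4) and (24.1.5); the following sketch assumes m = p = 1, d = 0 and purely illustrative coefficients:

```python
import numpy as np

# Simulate the process part y_u and the ARMA noise part n of (24.1.7),
# then form the measured output y = y_u + n as in (24.1.4).
rng = np.random.default_rng(1)
a1, b1 = -0.7, 1.0        # A(z^-1) = 1 + a1*z^-1, B(z^-1) = b1*z^-1
c1, d1 = -0.5, 0.3        # C(z^-1) = 1 + c1*z^-1, D(z^-1) = 1 + d1*z^-1
N = 500
u = rng.choice([-1.0, 1.0], size=N)       # test signal (PRBS-like)
v = rng.normal(0.0, 0.1, size=N)          # discrete white noise v(k)
yu, n = np.zeros(N), np.zeros(N)
for k in range(1, N):
    yu[k] = -a1 * yu[k-1] + b1 * u[k-1]           # process, (24.1.1)
    n[k] = -c1 * n[k-1] + v[k] + d1 * v[k-1]      # ARMA noise, (24.1.5)
y = yu + n                                        # measured output, (24.1.4)
print(bool(np.all(np.isfinite(y))))
```

Both A(z⁻¹) and C(z⁻¹) are chosen stable here, so y_u and n stay bounded; the stationarity assumption on n(k) corresponds exactly to the roots of C(z⁻¹) lying inside the unit circle.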

24.2 The Recursive Least Squares Method (RLS)

24.2.1 Dynamical Processes


Considering measured signals y(k) and u(k) up to time point k and the process parameter estimates up to time point (k−1), one obtains from (24.1.1):

y(k) + â1(k−1) y(k−1) + … + â_m(k−1) y(k−m) − b̂1(k−1) u(k−d−1) − … − b̂_m(k−1) u(k−d−m) = e(k)   (24.2.1)

where the equation error (residual) e(k) replaces the "0" in (24.1.1). This error arises from the noise contaminated outputs y(k) and from the erroneous parameter estimates. In this equation the following term can be interpreted as a one-step-ahead prediction ŷ(k|k−1) of y(k) at time (k−1):

ŷ(k|k−1) = −â1(k−1) y(k−1) − … − â_m(k−1) y(k−m) + b̂1(k−1) u(k−d−1) + … + b̂_m(k−1) u(k−d−m)
         = ψᵀ(k) Θ̂(k−1)   (24.2.2)

with the data vector

ψᵀ(k) = [−y(k−1) … −y(k−m)  u(k−d−1) … u(k−d−m)]   (24.2.3)

and the parameter vector

Θ̂ᵀ(k) = [â1 … â_m  b̂1 … b̂_m] .   (24.2.4)

For the equation error this gives:

e(k) = y(k) − ŷ(k|k−1)   (24.2.5)

(equation error = new measurement − one-step-ahead prediction by the model).

Now inputs and outputs are measured for k = 1, 2, … , m+d+N. Then N+1 equations of the form

y(k) = ψᵀ(k) Θ̂(k−1) + e(k)

can be represented as a vector equation

y(m+d+N) = Ψ(m+d+N) Θ̂(m+d+N−1) + e(m+d+N)   (24.2.6)

with:

yᵀ(m+d+N) = [y(m+d)  y(m+d+1) … y(m+d+N)]   (24.2.7)

Ψ(m+d+N) =
[ −y(m+d−1)    −y(m+d−2)    …  −y(d)    u(m−1)    u(m−2)    …  u(0)
  −y(m+d)      −y(m+d−1)    …  −y(1+d)  u(m)      u(m−1)    …  u(1)
  ⋮
  −y(m+d+N−1)  −y(m+d+N−2)  …  −y(N+d)  u(m+N−1)  u(m+N−2)  …  u(N) ]   (24.2.8)

eᵀ(m+d+N) = [e(m+d)  e(m+d+1) … e(m+d+N)] .   (24.2.9)

Minimization of the loss function

V = eᵀ(m+d+N) e(m+d+N) = Σ_{k=m+d}^{m+d+N} e²(k) ,   (24.2.10)

i.e.

dV/dΘ |_{Θ=Θ̂} = 0 ,   (24.2.11)

results, with the assumption N ≥ 2m and the abbreviation

P(m+d+N) = [Ψᵀ(m+d+N) Ψ(m+d+N)]⁻¹ ,   (24.2.12)

in the estimate [3.12], [3.13]:

Θ̂(m+d+N−1) = P(m+d+N) Ψᵀ(m+d+N) y(m+d+N) .   (24.2.13)

This is the nonrecursive parameter estimation, as the parameter estimates are obtained only after measuring and storing all signal values. Writing the nonrecursive estimation equations for Θ̂(k+1) and Θ̂(k) and subtracting one from the other results in the recursive parameter estimation algorithm

Θ̂(k+1) = Θ̂(k) + γ(k) [y(k+1) − ψᵀ(k+1) Θ̂(k)]   (24.2.14)

(new estimate = old estimate + correcting vector × [new measurement − one-step-ahead prediction of the new measurement]).

The correcting vector is given by:

γ(k) = P(k+1) ψ(k+1) = [1 / (ψᵀ(k+1) P(k) ψ(k+1) + 1)] P(k) ψ(k+1)   (24.2.15)

and

P(k+1) = [I − γ(k) ψᵀ(k+1)] P(k) .   (24.2.16)

To start the recursive algorithm one sets:

Θ̂(0) = 0  and  P(0) = αI   (24.2.17)

with α large [3.13]. The expectation of the matrix P is proportional to the covariance matrix of the parameter estimates

E{P(k+1)} = (1/σ_e²) cov[ΔΘ̂(k)]   (24.2.18)

with

σ_e² = E{e²(k)}   (24.2.19)

and the parameter error ΔΘ̂(k) = Θ̂(k) − Θ₀. Hence the recursive algorithm also produces the variances of the parameter estimates (diagonal elements of the covariance matrix). (24.2.14) can also be written as:

Θ̂(k+1) = Θ̂(k) + γ(k) e(k+1) .   (24.2.20)
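Equations (24.2.14) to (24.2.17) translate almost line by line into code. The following sketch estimates a1 and b1 of a simulated noise-free first-order process; all numerical values are illustrative assumptions:

```python
import numpy as np

# Recursive least squares (24.2.14)-(24.2.17) for m = 1, d = 0.
rng = np.random.default_rng(0)
a1, b1 = -0.8, 0.5                 # true parameters of the simulated process
N = 2000
u = rng.choice([-1.0, 1.0], size=N)          # PRBS-like excitation
y = np.zeros(N)
for k in range(1, N):
    y[k] = -a1 * y[k-1] + b1 * u[k-1]        # process (24.1.1), noise-free
theta = np.zeros(2)                          # starting value: estimate = 0
P = 1000.0 * np.eye(2)                       # P(0) = alpha*I with alpha large
for k in range(1, N):
    psi = np.array([-y[k-1], u[k-1]])        # data vector (24.2.3)
    gamma = P @ psi / (psi @ P @ psi + 1.0)  # correcting vector (24.2.15)
    theta = theta + gamma * (y[k] - psi @ theta)   # update (24.2.14)
    P = (np.eye(2) - np.outer(gamma, psi)) @ P     # (24.2.16)
print(theta.round(3))   # approaches the true parameters [-0.8, 0.5]
```

In the noise-free case the estimates converge to the true parameters; with disturbances the convergence conditions discussed next apply.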

Convergence Conditions

The general requirements for the performance of parameter estimation methods are that the parameter estimates are unbiased,

E{Θ̂(N)} = Θ₀ ,  N finite   (24.2.21)

and consistent in mean square:

lim_{N→∞} E{Θ̂(N)} = Θ₀   (24.2.22)

lim_{N→∞} E{[Θ̂(N) − Θ₀][Θ̂(N) − Θ₀]ᵀ} = 0 .   (24.2.23)

For the method of least squares, applied to a stable difference equation which is linear in the parameters, the following conditions must therefore be satisfied:
a) The process order m and the dead time d are known.
b) The input signal u(k) = U(k) − U₀₀ must be exactly measurable and U₀₀ must be known.
c) lim_{k→∞} P⁻¹(k) is positive definite. This includes that the input signal u(k) must be persistently exciting at least of order m. Hence the matrix

H_u = {h_ij = Φ_uu(i − j)} ,   i, j = 1, 2, … , m

must be positive definite (det H_u > 0), and

ū = lim_{N→∞} (1/N) Σ_{k=0}^{N−1} u(k)

and

Φ_uu(τ) = lim_{N→∞} (1/N) Σ_{k=0}^{N−1} u(k) u(k + τ)

must exist, see [24.15], [24.16].
d) The output signal y(k) = Y(k) − Y₀₀ may be disturbed by a stationary noise n(k). Y₀₀ has to be known and has to correspond with U₀₀ according to the static process behaviour.
e) The equation error e(k) must be uncorrelated with the elements of the data vector ψᵀ(k). This means that e(k) must be an uncorrelated signal.
f) E{e(k)} = 0.
The convergence of the recursive algorithm also depends on the choice of the starting values P(0) and Θ̂(0).
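Condition c) can be checked numerically by building H_u from estimated autocorrelation values Φ_uu(i − j); the two test signals below (a random binary sequence and a constant) are illustrative:

```python
import numpy as np

# Persistent excitation check: Hu = {h_ij = Phi_uu(i-j)}, i,j = 1..m,
# estimated from finite time averages; det(Hu) > 0 is required.
rng = np.random.default_rng(2)
N, m = 5000, 3
def hu(sig):
    phi = [np.mean(sig[:N - t] * sig[t:]) for t in range(m)]
    return np.array([[phi[abs(i - j)] for j in range(m)] for i in range(m)])
u_rich = rng.choice([-1.0, 1.0], size=N)   # PRBS-like signal: rich excitation
u_poor = np.ones(N)                        # constant signal: only order 1
print(np.linalg.det(hu(u_rich)) > 1e-3)    # True: persistently exciting
print(abs(np.linalg.det(hu(u_poor))) < 1e-9)   # True: Hu is singular
```

The random binary sequence yields a nearly diagonal H_u with positive determinant, while the constant signal gives a singular H_u, so it cannot excite more than one parameter direction.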
The requirement of an uncorrelated error signal e(k) considerably restricts the
applicability of the least squares method for strongly noisy processes. Unbiased
parameter estimation supposes a noise filter Gv(z) = n(z)/v(z) = I/A(z -1) , which
rarely exists. Therefore other methods must be used in general for strongly
disturbed processes. However, despite this fact, the recursive least squares method
is well suited for parameter adaptive control, as will be shown later, compare
section 24.3 to 24.5.
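As an illustration of the recursive algorithm (24.2.14)–(24.2.16), the following sketch implements RLS for a noise-free first-order process; the process coefficients, signals and starting values are made up for the example.

```python
import numpy as np

def rls_identify(u, y, n_params, p0=1000.0):
    """Recursive least squares: Theta(k+1) = Theta(k) + gamma(k) * e(k+1),
    sketched for a first-order model with data vector [-y(k), u(k)]."""
    theta = np.zeros(n_params)                   # Theta_hat(0) = 0
    P = p0 * np.eye(n_params)                    # P(0) = alpha * I, alpha large
    for k in range(1, len(y)):
        psi = np.array([-y[k - 1], u[k - 1]])    # data vector psi(k+1)
        e = y[k] - psi @ theta                   # equation error e(k+1)
        gamma = P @ psi / (1.0 + psi @ P @ psi)  # correcting vector gamma(k)
        theta = theta + gamma * e                # parameter update, c.f. (24.2.20)
        P = (np.eye(n_params) - np.outer(gamma, psi)) @ P
    return theta

# noise-free first-order process y(k) = 0.8*y(k-1) + 0.5*u(k-1), i.e. a = -0.8, b = 0.5
rng = np.random.default_rng(0)
u = rng.standard_normal(500)                     # persistently exciting input
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1]
theta = rls_identify(u, y, 2)
print(theta)                                     # close to [a, b] = [-0.8, 0.5]
```

With noise-free data the estimates converge to the true parameters essentially exactly, in line with condition e) above.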

D.C. Value Estimation

Since parameter estimation has to use the deviations u(k) and y(k) of the
measured signals U(k) and Y(k), the d.c. values U_∞ and Y_∞ must either also be
estimated or be removed. The following methods are available:
a) Differencing
Through differencing

ΔY(k) = Y(k) − Y(k − 1)
      = [y(k) + Y_∞] − [y(k − 1) + Y_∞]
      = y(k) − y(k − 1) = Δy(k)   (24.2.24)

the d.c. value Y_∞ is eliminated. If differencing is applied to both the process
output and the input, only y(k) has to be replaced by Δy(k) and u(k) by Δu(k) in the
data vector (24.2.3). The process parameters can then be estimated in the same way
as when u(k) and y(k) are measured directly. Through differencing, however, the
amplitudes of high-frequency noise components are amplified, so that for
comparatively low-frequency excitation the noise/signal ratio deteriorates.
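That the differencing in (24.2.24) removes an unknown d.c. value can be checked numerically; the signal values and the offset below are made up for illustration.

```python
import numpy as np

# measured signal Y(k) = y(k) + Y_inf with unknown d.c. value Y_inf
y = np.array([0.0, 0.3, 0.5, 0.4, 0.2])   # deviations y(k)
Y = y + 7.0                                # Y_inf = 7.0 (made-up offset)

# first differences of the measured signal equal those of the deviations,
# so the d.c. value drops out: Delta Y(k) = Delta y(k)
dY = np.diff(Y)
dy = np.diff(y)
print(np.allclose(dY, dy))                 # True
```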

b) Implicit estimation of a d.c. value parameter

Inserting (24.1.2) in (24.1.1) yields

Y(k) + a₁Y(k − 1) + … + a_m Y(k − m) − (1 + a₁ + … + a_m)Y_∞
   = b₁U(k − d − 1) + … + b_m U(k − d − m) − (b₁ + … + b_m)U_∞   (24.2.25)

with the abbreviations Y*_∞ = (1 + a₁ + … + a_m)Y_∞ and U*_∞ = (b₁ + … + b_m)U_∞.
Both constants are combined in one d.c. value parameter

K₀ = Y*_∞ − U*_∞   (24.2.26)

and the data vector and the parameter vector (24.2.3), (24.2.4) are extended as
follows:

ψ^T(k) = [1  −Y(k − 1) … −Y(k − m)  U(k − d − 1) … U(k − d − m)]
Θ^T = [K₀  a₁ … a_m  b₁ … b_m] .   (24.2.27)
The equation error thus is

e(k) = Y(k) − Ŷ(k|k − 1) = Y(k) − ψ^T(k)Θ̂(k − 1) .   (24.2.28)

The parameters can then be estimated with the correspondingly extended
matrix Ψ and with

y^T = [Y(m + d) … Y(m + d + N)]   (24.2.29)

according to (24.2.13)

Θ̂ = [Ψ^T Ψ]⁻¹ Ψ^T y   (24.2.30)

or with the corresponding recursive parameter estimation equation (24.2.14).
The implicit d.c. value parameter estimation is of interest, e.g., if U_∞ is
known and Y_∞ is to be determined continuously according to (24.2.26). Then,
however, the d.c. parameter K₀ and the dynamic parameters Θ are coupled via the
estimation equation.
c) Explicit estimation of a d.c. value parameter
The parameters a_i and b_i for the dynamic behaviour and the d.c. value parameter
K₀ can also be estimated separately. First the dynamic parameters are estimated
by differencing according to a). Then it follows from (24.2.25) and (24.2.26) with

L(k) = Y(k) + â₁Y(k − 1) + … + â_m Y(k − m)
       − b̂₁U(k − d − 1) − … − b̂_m U(k − d − m)   (24.2.31)

for the equation error

e(k) = L(k) − K₀   (24.2.32)

and after applying the least squares method to this single parameter

K̂₀(m + d + N) = 1/(N + 1) Σ_{k=m+d}^{m+d+N} L(k) .   (24.2.33)

For large N this estimate converges,

lim_{N→∞} E{K̂₀} = K₀ .   (24.2.34)

If, e.g., Y_∞ is of interest for known U_∞, it can be determined with the
estimated K̂₀ from (24.2.34).
In this case the coupling of Θ̂ and K̂₀ is only one-sided, since Θ̂ is not a function of
K̂₀. A disadvantage, however, may be the deterioration of the noise/signal
ratio through differencing.
The choice of the most expedient method of treating unknown d.c. values thus
depends on the particular case. In closed loop with reference value W(k),
Y_∞ = W(k) can be set. Then controllers without integral action can be used for
proportionally acting processes, see Chapter 26.

24.2.2 Stochastic Signals


The method of recursive least squares can also be used for the parameter estima-
tion of stochastic signal models. A stationary autoregressive moving average
process (ARMA)

y(k) + c₁y(k − 1) + … + c_p y(k − p)
   = v(k) + d₁v(k − 1) + … + d_p v(k − p)   (24.2.35)

is considered, where compared with (24.1.5) the nonmeasurable n(k) has been
replaced by the measurable y(k). According to (24.2.1) to (24.2.5) this is written

y(k) = ψ^T(k)Θ + v(k)   (24.2.36)

where

ψ^T(k) = [−y(k − 1) … −y(k − p)  v(k − 1) … v(k − p)]   (24.2.37)
Θ^T = [c₁ … c_p  d₁ … d_p] .   (24.2.38)

If v(k − 1), … , v(k − p) were known, the RLS method could be used as in
(24.2.14) to (24.2.17), since v(k) in (24.2.36) can be interpreted as the equation
error, which is statistically independent by definition.
Now consider the time just after the measurement of y(k). Here
y(k − 1), … , y(k − p) are known. Assuming that the estimates
v̂(k − 1), … , v̂(k − p) and Θ̂(k − 1) are known, the most recent input signal v(k)
can be estimated via (24.2.36), [24.1], [24.2],

v̂(k) = y(k) − ψ̂^T(k)Θ̂(k − 1)   (24.2.39)

with

ψ̂^T(k) = [−y(k − 1) … −y(k − p)  v̂(k − 1) … v̂(k − p)] .   (24.2.40)

Then also

ψ̂^T(k + 1) = [−y(k) … −y(k − p + 1)  v̂(k) … v̂(k − p + 1)]   (24.2.41)

is determined, such that the recursive algorithms (24.2.14) to (24.2.16) can be used
to estimate Θ̂(k + 1) if ψ^T(k + 1) is replaced there by ψ̂^T(k + 1). Then v̂(k + 1) and
Θ̂(k + 2) are estimated, and so on. For starting the algorithm

v̂(0) = y(0);  Θ̂(0) = 0;  P(0) = αI

can be used. As v(k) is statistically independent, v(k) and ψ^T(k) are uncorrelated,
which results in unbiased estimates, consistent in mean square. As the model
(24.2.39) must be stable, the roots of C(z) = 0 and D(z) = 0 should lie within the
unit circle of the z-plane. The variance of v(k) can be estimated by [3.13]

σ̂_v²(k) = 1/(k + 1 − 2p) Σ_{i=0}^{k} v̂²(i)   (24.2.42)

or by the resulting recursive algorithm

σ̂_v²(k + 1) = σ̂_v²(k) + 1/(k + 2 − 2p) [v̂²(k + 1) − σ̂_v²(k)] .   (24.2.43)
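The recursive variance estimate (24.2.43) reproduces the batch formula (24.2.42) exactly when started where the divisor equals one; a quick numerical check (p = 2 and the residual sequence are made up for illustration):

```python
import numpy as np

def variance_batch(v, p):
    """Eq. (24.2.42): sigma_v^2(k) = 1/(k + 1 - 2p) * sum_{i=0}^{k} v(i)^2."""
    k = len(v) - 1
    return np.sum(v ** 2) / (k + 1 - 2 * p)

def variance_recursive(v, p):
    """Eq. (24.2.43): sigma^2(k+1) = sigma^2(k) + [v(k+1)^2 - sigma^2(k)]/(k + 2 - 2p)."""
    k0 = 2 * p                              # start where the divisor k + 1 - 2p equals 1
    sigma2 = np.sum(v[: k0 + 1] ** 2)
    for k in range(k0, len(v) - 1):
        sigma2 += (v[k + 1] ** 2 - sigma2) / (k + 2 - 2 * p)
    return sigma2

rng = np.random.default_rng(2)
v = rng.standard_normal(200)                # made-up residual sequence v_hat(i)
print(np.isclose(variance_batch(v, p=2), variance_recursive(v, p=2)))  # True
```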

24.3 The Recursive Extended Least Squares Method (RELS)

If instead of the LS model

A(z⁻¹)y(z) − B(z⁻¹)z^(−d)u(z) = e(z)   (24.3.1)

with an uncorrelated error signal e(z), the ARMAX model

A(z⁻¹)y(z) − B(z⁻¹)z^(−d)u(z) = D(z⁻¹)v(z)   (24.3.2)

with a correlated error signal e(z) = D(z⁻¹)v(z) is used, the recursive methods for
dynamical processes and for stochastic signals can be combined to form an
extended least squares method [24.3], [24.2]. Based on

y(k) = ψ^T(k)Θ + e(k)   (24.3.3)

the following extended vectors are introduced:

ψ^T(k) = [−y(k − 1) … −y(k − m) | u(k − d − 1) … u(k − d − m) | v̂(k − 1) … v̂(k − m)]   (24.3.4)
Θ^T = [a₁ … a_m | b₁ … b_m | d₁ … d_m]   (24.3.5)

and the parameters are estimated using

Θ̂(k + 1) = Θ̂(k) + γ(k)[y(k + 1) − ψ^T(k + 1)Θ̂(k)]   (24.3.6)

and equations corresponding to (24.2.14) to (24.2.16). The signal values v̂(k) = ê(k)
in ψ^T(k + 1) are calculated recursively with (24.2.39). Therefore the roots of
D(z) = 0 must lie within the unit circle of the z-plane. The parameter estimates are
unbiased and consistent in mean square if the convergence conditions of the least
squares method, sections 24.2.1 and 24.2.2, are transferred to the model (24.3.3). That
means that the model (24.3.2) has to be valid.
Furthermore,

H(z) = 1/D(z) − 1/2 must be positive real.   (24.3.7)

This means that H(z) is the transfer function of a system which can be realized
with passive elements only (phase angle magnitude ≤ 90°), leading to (c.f. section
23.1.3)

H(z) is stable
Re{H(z)} > 0 for z = e^(iωT₀),  −π < ωT₀ < π .   (24.3.8)

This includes the sufficient condition

|D(e^(iωT₀)) − 1| < 1 .   (24.3.9)
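The extension of RLS to RELS amounts to appending the estimated residuals v̂(k) to the data vector (24.3.4); a minimal sketch for m = 1 with made-up process coefficients follows (the posterior residual v̂(k) = y(k) − ψ^T(k)Θ̂(k) is used as one common variant of (24.2.39)):

```python
import numpy as np

def rels_identify(u, y, p0=1000.0):
    """Extended least squares for y(k) = -a*y(k-1) + b*u(k-1) + v(k) + d*v(k-1).
    The unknown residuals v(k) are replaced by running estimates v_hat(k)."""
    theta = np.zeros(3)                      # [a, b, d]
    P = p0 * np.eye(3)
    vhat = np.zeros(len(y))
    for k in range(1, len(y)):
        psi = np.array([-y[k - 1], u[k - 1], vhat[k - 1]])  # extended data vector
        e = y[k] - psi @ theta
        gamma = P @ psi / (1.0 + psi @ P @ psi)
        theta = theta + gamma * e
        P = (np.eye(3) - np.outer(gamma, psi)) @ P
        vhat[k] = y[k] - psi @ theta         # residual estimate for the next step
    return theta

rng = np.random.default_rng(3)
n = 5000
u, v = rng.standard_normal(n), 0.1 * rng.standard_normal(n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + v[k] + 0.3 * v[k - 1]
theta = rels_identify(u, y)
print(theta)                                 # roughly [-0.8, 0.5, 0.3]
```

Note that D(z⁻¹) = 1 + 0.3 z⁻¹ here satisfies the sufficient condition (24.3.9), since |0.3| < 1; as the text states, the d-parameters converge more slowly than a and b.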

24.4 The Recursive Instrumental Variables Method (RIV)

For convergence of the least squares method the error signal e(k) must be
uncorrelated with the elements of ψ^T(k). The instrumental variables method
bypasses this condition by replacing the data vector ψ^T(k) by an instrumental
vector w^T(k) whose elements are uncorrelated with e(k). This can be achieved if the
instrumental variables are correlated as strongly as possible with the undisturbed
components of ψ^T(k). Therefore an instrumental variables vector

w^T(k) = [−h(k − 1) … −h(k − m)  u(k − d − 1) … u(k − d − m)]   (24.4.1)

is introduced, where the instrumental variables

h(k) = w^T(k)Θ̂_aux(k)   (24.4.2)

are taken as the undisturbed output of an auxiliary model with parameters
Θ̂_aux(k). The resulting recursive estimation algorithms have the same structure as
for RLS, [24.5], [24.6], c.f. Table 24.1. To keep the instrumental variables h(k)
only weakly correlated with e(k), the parameter variations of the auxiliary model
are delayed by a discrete first-order low-pass filter with dead time [24.6]

Θ̂_aux(k) = (1 − β)Θ̂_aux(k − 1) + βΘ̂(k − τ) .   (24.4.3)

During the starting phase RIV is sensitive to inappropriately chosen initial
values of Θ̂(0), P(0) and β. It is therefore recommended that this method is started
with RLS [24.11].
Table 24.1 Unified recursive parameter estimation algorithms for b₀ = 0 and d = 0.
Θ̂(k + 1) = Θ̂(k) + γ(k)e(k + 1);  γ(k) = μ(k + 1)P(k)φ(k + 1);  e(k + 1) = y(k + 1) − ψ^T(k + 1)Θ̂(k).

RLS:  Θ^T = [a₁ … a_m  b₁ … b_m];
      ψ^T(k + 1) = [−y(k) … −y(k − m + 1)  u(k) … u(k − m + 1)];  φ(k + 1) = ψ(k + 1);
      μ(k + 1) = 1/[1 + ψ^T(k + 1)P(k)ψ(k + 1)];
      P(k + 1) = [I − γ(k)ψ^T(k + 1)]P(k);
      unbiased and consistent for noise filter 1/A(z⁻¹).

RIV:  Θ as RLS;
      instrumental vector φ^T(k + 1) = [−h(k) … −h(k − m + 1)  u(k) … u(k − m + 1)];
      μ(k + 1) = 1/[1 + ψ^T(k + 1)P(k)φ(k + 1)];
      P(k + 1) = [I − γ(k)ψ^T(k + 1)]P(k);
      unbiased and consistent for noise filter D(z⁻¹)/C(z⁻¹).

STA:  Θ and ψ as RLS;  γ(k) = ψ(k + 1)/(k + 1) (stochastic approximation, P ≡ I);
      unbiased and consistent for noise filter 1/A(z⁻¹).

RELS: Θ^T = [a₁ … a_m  b₁ … b_m  d₁ … d_m];
      ψ^T(k + 1) = [−y(k) … −y(k − m + 1)  u(k) … u(k − m + 1)  ê(k) … ê(k − m + 1)];
      φ, μ and P as RLS;
      unbiased and consistent for noise filter D(z⁻¹)/A(z⁻¹).

RML:  Θ and ψ as RELS; filtered data vector
      φ^T(k + 1) = [−y′(k) … −y′(k − m + 1)  u′(k) … u′(k − m + 1)  ê′(k) … ê′(k − m + 1)];
      μ(k + 1) = 1/[1 + φ^T(k + 1)P(k)φ(k + 1)];
      P(k + 1) = [I − γ(k)φ^T(k + 1)]P(k);
      unbiased and consistent for noise filter D(z⁻¹)/A(z⁻¹).

The method of instrumental variables results in unbiased and consistent para-
meter estimates if
a) E{n(k)} = 0 and E{u(k)} = const,
   or
   E{n(k)} = const and E{u(k)} = 0
b) E{u(k − τ)n(k)} = 0 for |τ| ≥ 0
c) u(k) = U(k) − U_∞ must be known
d) Y_∞ need not be known if E{u(k)} = 0.
An important advantage of the RIV method is that no special assumptions about
the noise filter have to be made to obtain unbiased parameter estimates. The
polynomials C(z⁻¹) and D(z⁻¹) can therefore be independent of the process
polynomials B(z⁻¹) and A(z⁻¹). The RIV method yields only the process para-
meters a_i and b_i. If the parameters c_i and d_i of the noise model are required,
they can be estimated by RLS (section 24.2.2) using the noise signal estimate

n̂(k) = y(k) − y_u(k) = y(k) − h(k) .   (24.4.4)

24.5 A Unified Recursive Parameter Estimation Algorithm

The recursive parameter estimation algorithms RLS, RELS, RIV, RML and STA
can be represented in a unified way by, compare [2.22], [2.23],

Θ̂(k + 1) = Θ̂(k) + γ(k)e(k + 1)   (24.5.1)
γ(k) = μ(k + 1)P(k)φ(k + 1)   (24.5.2)
e(k + 1) = y(k + 1) − ψ^T(k + 1)Θ̂(k) .   (24.5.3)

They differ only in the parameter vector Θ, the data vector ψ^T(k + 1) and the
correcting vector γ(k). These quantities are summarized in Table 24.1.
Up to now it was assumed that the process parameters to be estimated are
constant, and therefore the measured signals u(k) and y(k) and the equation error
e(k) are weighted equally over the measuring time k = 0, … , N. If the recursive
estimation algorithms are to be able to follow slowly time-varying process para-
meters, more recent measurements must be weighted more strongly than old
ones. The estimation algorithms should therefore have a fading memory.
This can be incorporated in the least squares method by time-dependent weighting
of the squared errors (the method of weighted least squares [3.13])

V = Σ_{k=m+d}^{m+d+N} w(k)e²(k) .   (24.5.4)

By choice of

w(k) = λ^((m+d+N)−k) = λ^(N′−k)  with 0 < λ < 1   (24.5.5)

the errors e(k) are weighted as shown in Table 24.2 for N′ = 50.

Table 24.2 Weighting factors due to (24.5.5) for N′ = 50

k          1     10    20    30    40    47    48    49
λ = 0.99   0.61  0.67  0.73  0.82  0.90  0.97  0.98  0.99
λ = 0.95   0.08  0.13  0.21  0.35  0.60  0.85  0.90  0.95

The weighting then increases exponentially to 1 at k = N′. The recursive estimation algorithms given in


Table 24.1 are modified as follows:
- The 1 in the denominator of μ(k + 1) is replaced by λ. For RIV therefore

μ(k + 1) = 1/[λ + ψ^T(k + 1)P(k)φ(k + 1)] .   (24.5.6)

- P(k + 1) is multiplied by 1/λ:

P(k + 1) = [I − γ(k)φ^T(k + 1)]P(k) · (1/λ) .   (24.5.7)

When choosing the weighting factor λ one has to compromise between better
elimination of the noise and better tracking of time-varying process parameters. It is
recommended to choose λ within the range 0.90 < λ < 0.995. As the RML and
RELS methods exhibit slow convergence during the starting phase due to the
unknown ê(k) = v̂(k), convergence can be improved if the initial error signals are
weighted less and the subsequent error signals are increasingly weighted up to 1.
This can be achieved with a time-varying λ(k) as in [24.13]

λ(k + 1) = λ₀λ(k) + (1 − λ₀)   (24.5.8)

with λ₀ < 1 and λ(0) < 1. For λ₀ = 0.95 and λ(0) = 0.95 one obtains for example
λ(5) = 0.9632,  λ(10) = 0.9715,  λ(20) = 0.9829.
In the limit, lim_{k→∞} λ(k + 1) = 1.
The weightings given by (24.5.8) and (24.5.5) can be combined in the algorithm

λ(k + 1) = λ₀λ(k) + λ(1 − λ₀) .   (24.5.9)

There is a small weighting in the starting phase, depending on λ₀ and λ(0), and for
large k an exponential forgetting given by (24.5.5) is obtained:

lim_{k→∞} λ(k + 1) = λ .
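The start-up weighting (24.5.8) is a first-order linear recursion with the closed form λ(k) = 1 − (1 − λ(0))·λ₀^k, which makes the exponential approach to 1 explicit; a quick numerical check:

```python
def lam_sequence(lam0, lam_init, n):
    """lambda(k+1) = lam0 * lambda(k) + (1 - lam0), eq. (24.5.8)."""
    lam = lam_init
    seq = [lam]
    for _ in range(n):
        lam = lam0 * lam + (1.0 - lam0)
        seq.append(lam)
    return seq

seq = lam_sequence(lam0=0.95, lam_init=0.95, n=50)
# closed-form solution of the linear recursion
closed = [1.0 - (1.0 - 0.95) * 0.95 ** k for k in range(51)]
print(max(abs(a - b) for a, b in zip(seq, closed)))  # essentially zero
print(seq[-1])                                       # approaches 1 for large k
```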

The recursive parameter estimation algorithms have been compared with respect
to the quality of the estimates, the reliability of convergence and the
computational effort, by simulations [24.9], [3.13], [24.10], [24.13], by practical
tests [24.11], [24.12] and theoretically [24.13], [24.17]. The results of these compari-
sons can be summarized as follows:
RLS: Applicable for small noise/signal ratios; otherwise gives biased estimates.
Reliable convergence. Relatively small computational effort.
RELS: Applicable for larger noise/signal ratios if the noise model D/A fits. Slow
convergence in the starting phase. Convergence not always reliable (c.f. RML).
Noise parameters D are estimated; they show a slower convergence than those of B and
A. Somewhat larger computational effort than RLS.
RIV: Good quality of the parameter estimates. To accelerate the initial conver-
gence, starting with RLS is recommended. Larger computational effort than RLS.
RML: High quality of the parameter estimates if the noise model D/A fits. Slow
convergence in the starting phase. More reliable convergence than RELS. Noise
parameters D are estimated, but show slow convergence. Larger computational
effort than RLS, RELS and RIV.
STA: Acceptable quality only for very long identification times. Conver-
gence depends on α. Small computational effort.
For short identification times and larger noise/signal ratios all methods (except
STA) lead to parameter estimates of about the same quality. In general RLS is then
preferred because of its simplicity and its reliable convergence. The superior
quality of the RIV and RML estimates becomes evident only for longer identifica-
tion times.

24.6 Modifications to Recursive Parameter Estimation Algorithms

Parameter estimation methods can be modified in order to improve some of
their properties. This is mainly done to improve the numerical behaviour on
digital computers, to gain access to intermediate results, and to minimize the
influence of initial values. Numerical properties become important for relatively
small word lengths, e.g. for 8-bit and 16-bit microcomputers, or if the input signal
changes become small, as happens in adaptive control systems. In both cases
ill-conditioned systems of equations for the parameter estimation emerge.
The conditioning can be improved, e.g., by not computing as an intermediate
result the matrix P, which contains squares and covariances of the signal values,
but by determining suitable roots of P whose elements are of the original order
of magnitude of the signal values. This leads to square-root filtering or
factorization methods [24.18], [24.19], [24.23], [24.20], [24.25].
Two forms can be distinguished, which start either from the covariance
matrix P or from the information matrix P⁻¹ [26.44]; also compare section 24.2.
In discrete square-root filtering in the covariance form (DSFC) the symmetric
matrix P is factored into two triangular matrices S:

P = S S^T .   (24.6.1)

Here S is called the "square root" of P. The resulting algorithms for the least
squares method are then written

Θ̂(k + 1) = Θ̂(k) + γ(k)e(k + 1)
γ(k) = a(k)S(k)f(k)
f(k) = S^T(k)ψ(k + 1)
S(k + 1) = [S(k) − g(k)γ(k)f^T(k)] · 1/√λ(k)   (24.6.2)
1/a(k) = f^T(k)f(k) + λ(k)
g(k) = 1/[1 + √(λ(k)a(k))]

with the initial value S(0) = √α · I. These equations were stated in similar form for
state estimation [24.18].
the state estimation [24.18].
The discrete square-root filter in the information form (DSFI) results from the
nonrecursive least squares method in the form

P⁻¹(k + 1)Θ̂(k + 1) = Ψ^T(k + 1)y(k + 1) = f(k + 1) ,   (24.6.3)

whose right- and left-hand sides are calculated recursively as follows:

P⁻¹(k + 1) = λ(k + 1)P⁻¹(k) + ψ(k + 1)ψ^T(k + 1)
f(k + 1) = λ(k + 1)f(k) + ψ(k + 1)y(k + 1) .   (24.6.4)

Now the "information matrix" is factored into two triangular matrices S⁻¹:

P⁻¹ = (S⁻¹)^T S⁻¹ .   (24.6.5)

Then Θ̂(k + 1) is determined according to (24.6.3) by back-substitution from

S⁻¹(k + 1)Θ̂(k + 1) = b(k + 1) .   (24.6.6)

This equation follows from (24.2.13) by means of an orthogonal transformation
matrix T with T^T T = I,

T Ψ Θ̂ = T y .   (24.6.7)

Here

T Ψ = [ S⁻¹ ]
      [  0  ]   (24.6.8)

has an upper triangular form, and correspondingly

T y = [ b ]
      [ w ] .   (24.6.9)

From (24.6.7) it follows that

T(k + 1)Ψ(k + 1)Θ̂(k + 1) = T(k + 1)y(k + 1) .   (24.6.10)

This equation is now transformed into recursive form [24.18]:

[ S⁻¹(k + 1) ]              [ √λ S⁻¹(k)  ]
[    0^T     ] = T(k + 1) · [ ψ^T(k + 1) ]   (24.6.11)

[ b(k + 1) ]              [ √λ b(k)  ]
[ w(k + 1) ] = T(k + 1) · [ y(k + 1) ] .   (24.6.12)

Then S⁻¹(k + 1) and b(k + 1) are used to calculate Θ̂(k + 1) according to (24.6.6).
This partly nonrecursive, partly recursive form has the advantage that no initial
values Θ̂(0) have to be assumed and that exactly S⁻¹(0) = 0 is valid. Therefore
convergence is excellent in the initial phase. Furthermore, no matrix inversion is
required. This method is especially expedient if the parameters Θ̂ are not required
at every sampling step; then only S⁻¹ and b have to be updated recursively.
For discrete square-root filtering in the covariance form a further developed method has
been given in [24.23], the so-called U-D factorization (DUDC). Here the covariance
matrix is factored as

P = U D U^T   (24.6.13)

where D is a diagonal matrix and U an upper triangular matrix with ones on the
diagonal. The recursive equation (24.2.16) for the covariance matrix is then written

U(k + 1)D(k + 1)U^T(k + 1) = (1/λ)[U(k)D(k)U^T(k)
   − γ(k)ψ^T(k + 1)U(k)D(k)U^T(k)] .   (24.6.14)

After inserting (24.2.15) and (24.6.13) one obtains for the right-hand side

U D U^T = (1/λ) U(k)[D(k) − (1/α(k)) v(k)f^T(k)D(k)] U^T(k)
        = (1/λ) U(k)[D(k) − v(k)v^T(k)(1/α(k))] U^T(k)   (24.6.15)

with the abbreviations

f(k) = U^T(k)ψ(k + 1)
v(k) = D(k)f(k)   (24.6.16)
α(k) = λ + f^T(k)v(k) .

The correcting vector becomes

γ(k) = U(k)v(k) · (1/α(k)) .   (24.6.17)

If the term [D − v v^T α⁻¹] in (24.6.15) is factorized again, the recursive relations for
the elements of U, D and γ are written as follows [24.23]:

α_j = α_{j−1} + v_j f_j
d_j(k + 1) = d_j(k)α_{j−1}/(α_j λ)
b_j = v_j
p_j = −f_j/α_{j−1}
j = 2, … , 2m   (24.6.18)

with the initial values

α₁ = λ + v₁f₁ ,  d₁(k + 1) = d₁(k)/α₁ ,  b₁ = v₁ .   (24.6.19)

For each j the elements of U are updated by

u_ij(k + 1) = u_ij(k) + p_j b_i
b_i := b_i + u_ij(k)v_j
i = 1, … , j − 1   (24.6.20)

and finally

γ(k) = (1/α_{2m}) b .   (24.6.21)

The parameters are then obtained according to (24.2.14):

Θ̂(k + 1) = Θ̂(k) + γ(k)e(k + 1)
e(k + 1) = y(k + 1) − ψ^T(k + 1)Θ̂(k) .   (24.6.22)

Instead of the original equations (24.2.15) and (24.2.16), now (24.6.21) and (24.6.18),
(24.6.20) are calculated; also compare [5.23].
Unlike the DSFC method, this method does not require square-root
routines. The computational effort is about the same as for the RLS method.
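The factorization (24.6.13) itself can be sketched generically; the routine below computes U and D for a symmetric positive definite P and checks P = U D U^T. This is the plain factorization only, not the recursive measurement update of [24.23], and the test matrix is made up.

```python
import numpy as np

def ud_factorize(P):
    """Factor a symmetric positive definite P as P = U D U^T, with
    U unit upper triangular and D diagonal, c.f. (24.6.13)."""
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    P = P.copy()
    for j in range(n - 1, -1, -1):           # work from the last column backwards
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
    return U, np.diag(d)

A = np.array([[2.0, 1.0, 0.0], [0.0, 1.5, 0.5], [0.3, 0.0, 1.0]])
P = A @ A.T                                  # symmetric positive definite test matrix
U, D = ud_factorize(P)
print(np.allclose(U @ D @ U.T, P))           # True
```

Because U is unit triangular and D diagonal, the recursive update can work on elements of the original signal magnitude, which is the numerical motivation given above.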
In order to reduce the number of calculations per sample, one can construct
"fast" algorithms which exploit certain invariance properties of matrices resulting
from the shifted time arguments [24.21]. However, compared with the usual RLS
method, computation time is only saved for model orders m > 5 [24.22]. There is
also a larger storage requirement and a high sensitivity to the initial values. Also
compare [2.23], section 23.8.
25 On-line Identification in Closed Loop

If the design of adaptive control systems is based on identified process models,
process identification has to be performed in closed loop. There are also other
applications in which dynamic processes have to be identified in closed loop.
Relevant examples are processes which must operate in closed loop for technical
reasons, or integrated and economically optimized processes for which feedback is
an integral part of the overall system. Process identification in closed loop is
therefore of general significance and will not be restricted in this chapter to
applications in adaptive control. It must first be established whether methods
developed for open-loop identification can also be applied in closed loop, taking
into account the various convergence conditions. The problem is quite obvious if
correlation analysis is considered: for convergence of the crosscorrelation
function the input u(k) must not be correlated with the noise n(k); feedback,
however, generates exactly such a correlation. If the method of least squares is
considered for parameter estimation, the error signal e(k) must be uncorrelated
with the elements of the data vector ψ^T(k). It will have to be examined whether
feedback destroys this independence.
Sections 25.1 and 25.2 discuss conditions for closed-loop parameter estimation
without and with external perturbation signals. This then leads to methods which
can also be applied in closed loop, c.f. section 25.3. To treat parameter estimation
in closed loop systematically, the following cases can be distinguished, c.f. Figures
25.1 and 25.2:
Case a: Indirect process identification. A model of the closed loop is identified.
The controller must be known. The process model is calculated from the
closed-loop model.
Case b: Direct process identification. The process model is identified directly, i.e.
not by using a closed-loop model as an intermediate result. The controller
need not be known.
Case c: Only the output y(k) is measured.
Case d: The input u(k) and the output y(k) are measured.
Case e: No external perturbation is applied.
Case f: An external perturbation u_s(k) is applied (nonmeasurable or measurable).
Case g: The external measurable perturbation u_s(k) is used for identification.
As shown in the next sections, the following combinations of cases are possible:
a + c + e and b + d + e (section 25.1)
a + g and b + d + f (sections 25.2 and 25.3.3).

Unless stated otherwise, it is assumed in this chapter that the processes are linear
and the controllers are linear, time-invariant and noise-free.

25.1 Parameter Estimation without Perturbations

Figure 25.1 shows a linear, time-invariant process with z-transfer function

G_P(z) = y_u(z)/u(z) = [B(z⁻¹)/A(z⁻¹)] z^(−d)
       = [(b₁z⁻¹ + … + b_mb z^(−mb))/(1 + a₁z⁻¹ + … + a_ma z^(−ma))] z^(−d)   (25.1.1)

and noise filter

G_Pv(z) = n(z)/v(z) = D(z⁻¹)/A(z⁻¹)
        = (1 + d₁z⁻¹ + … + d_md z^(−md))/(1 + a₁z⁻¹ + … + a_ma z^(−ma))   (25.1.2)

which is to be identified in closed loop. The assumption that C(z⁻¹) = A(z⁻¹) in
the noise filter considerably simplifies parameter estimation without perturbation.
The controller transfer function is

G_R(z) = u(z)/e_w(z) = Q(z⁻¹)/P(z⁻¹)
       = (q₀ + q₁z⁻¹ + … + q_ν z^(−ν))/(1 + p₁z⁻¹ + … + p_μ z^(−μ)) .   (25.1.3)

The signals are

y(z) = y_u(z) + n(z)
e_w(z) = w(z) − y(z) .

In general w(z) = 0 is assumed, i.e. e_w(z) = −y(z). v(z) is assumed to be a non-
measurable, statistically independent noise with E{v(k)} = 0 and variance σ_v².

Figure 25.1 Scheme of the process to be identified in closed loop with no external perturbation signal.

25.1.1 Indirect Process Identification (Case a + c + e)


The transfer function of the closed loop with the noise as input is

y(z)/v(z) = G_Pv(z)/[1 + G_R(z)G_P(z)]
          = D(z⁻¹)P(z⁻¹)/[A(z⁻¹)P(z⁻¹) + B(z⁻¹)z^(−d)Q(z⁻¹)]
          = (1 + β₁z⁻¹ + … + β_r z^(−r))/(1 + α₁z⁻¹ + … + α_t z^(−t)) = ℬ(z⁻¹)/𝒜(z⁻¹) .   (25.1.4)

Therefore the controlled variable y(k) is an autoregressive moving-average
stochastic process (ARMA), generated by v(k) with the closed loop acting as noise
filter. The orders are

t = max[m_a + μ, m_b + ν + d]
r = m_d + μ .   (25.1.5)

If only the output y(k) is known, the ARMA parameters

[α₁ … α_t  β₁ … β_r]   (25.1.6)

can be estimated using the methods given in chapter 24, provided the roots of
𝒜(z⁻¹) = 0 lie within the unit circle of the z-plane and the polynomials D(z⁻¹) and
𝒜(z⁻¹) have no common roots.
The next step is to determine the unknown process parameters

Θ^T = [a₁ … a_ma  b₁ … b_mb  d₁ … d_md]   (25.1.7)

for given α̂_i and β̂_i. In order to determine these parameters uniquely, certain
identifiability conditions must be satisfied.
identifiability conditions must be satisfied.

Parameter Identifiability Conditions

A process (in closed loop) is called parameter-identifiable if the parameter estimates
are consistent when an appropriate parameter estimation method is used. Then

lim_{N→∞} E{Θ̂(N)} = Θ₀   (25.1.8)

holds, with Θ₀ the true parameter vector and N the measuring time. Conditions
for parameter identifiability are now given for the case that only the output y(k)
is measured.

Identifiability Condition 1
In concise notation, the process equation for the input/output behaviour accord-
ing to (25.1.4) is

[A + B(Q/P)]y = D v .

This equation is extended by an arbitrary polynomial S(z⁻¹):

[A + S + B(Q/P) − S]y = D v
[A + S + (B − (P/Q)S)(Q/P)]y = D v
[Q(A + S) + (QB − PS)(Q/P)]y = Q D v
[A* + B*(Q/P)]y = D* v .   (25.1.9)

This shows that the closed loop with the process

B*/A* = (BQ − PS)/(AQ + SQ)  and  D*/A* = DQ/(AQ + SQ)   (25.1.10)

and the controller Q/P has the same input/output behaviour y/v as the process B/A
with noise filter D/A and the same controller. As S is arbitrary, the process cannot be
uniquely determined from the input/output behaviour y/v if the orders of the poly-
nomials B(z⁻¹)z^(−d) and A(z⁻¹) are unknown, even though the controller Q/P is
known [25.1]. Therefore the orders of the process model must be known exactly.

Identifiability Condition 2
(25.1.4) shows that the m_a + m_b unknown process parameters â_i and b̂_i have to be
determined from the t parameters α̂_i. If the polynomials D and 𝒜 have no common
roots, a unique determination of the process parameters requires t ≥ m_a + m_b, or

max[m_a + μ, m_b + ν + d] ≥ m_a + m_b
max[μ − m_b, ν + d − m_a] ≥ 0 .   (25.1.11)

Hence the controller orders have to satisfy

μ ≥ m_b  or  ν ≥ m_a − d .   (25.1.12)

If the process dead time is d = 0, the orders of the controller polynomials must
satisfy either ν ≥ m_a or μ ≥ m_b. If d > 0, either ν ≥ m_a − d or μ ≥ m_b must be
satisfied. Here the dead time d can exist in the process or can be generated in the
controller, see (25.1.4). This means that identifiability condition 2 can also be
satisfied by using a controller with, e.g., d = m_a and ν = 0 and μ = 0.
The parameters d̂_i, (25.1.2), can be calculated uniquely from the β̂_i, (25.1.4),
if r ≥ m_d, i.e.

μ ≥ 0 .   (25.1.13)

Hence the parameters d̂_i can be estimated with any controller, provided D and
𝒜 have no common roots.

If 𝒜(z⁻¹) and D(z⁻¹) have p common roots, these cannot be identified; only
t − p parameters α_i and r − p parameters β_i can be determined. Identifiability
condition 2 for the process parameters â_i and b̂_i then becomes

max[μ − m_b, ν + d − m_a] ≥ p .   (25.1.14)

Note that only the common roots of 𝒜 and D are of interest, and not those of
𝒜 and ℬ, as ℬ = DP and P is known. Therefore the number of common zeros in
the numerator and denominator of

G_id(z) = D(z⁻¹)/𝒜(z⁻¹) = D(z⁻¹)/[A(z⁻¹)P(z⁻¹) + B(z⁻¹)z^(−d)Q(z⁻¹)]   (25.1.15)

is significant. If the controller order is not large enough, parameter estimation in
closed loop can be performed with two different controller parameter sets
[25.2], [25.3]; one then obtains additional equations for determining the parameters.
Some examples illustrate identifiability condition 2.

Example 25.1.1
The parameters of the first-order process (m_a = m_b = m = 1)

y(k) + a y(k − 1) = b u(k − 1) + v(k) + d v(k − 1)

are to be estimated in closed loop. Various controllers are considered.
a) One P-controller: u(k) = −q₀y(k)  (ν = 0; μ = 0). (25.1.4) leads to the ARMA process

y(k) + (a + bq₀)y(k − 1) = v(k) + d v(k − 1)

or

y(k) + α y(k − 1) = v(k) + β v(k − 1) .

Comparison of the coefficients gives

α̂ = â + b̂q₀
β̂ = d̂ .

No unique solution for â and b̂ can be obtained, as

â = a₀ + Δa  and  b̂ = b₀ − Δa/q₀

satisfy these equations for any Δa. The parameters a and b are therefore not identifiable.
According to (25.1.12) it is required that ν ≥ 1 or μ ≥ 1.
b) One PD-controller: u(k) = −q₀y(k) − q₁y(k − 1)  (ν = 1; μ = 0). The ARMA process now
becomes second order:

y(k) + (a + bq₀)y(k − 1) + bq₁y(k − 2) = v(k) + d v(k − 1)
y(k) + α₁y(k − 1) + α₂y(k − 2) = v(k) + β v(k − 1) .

Comparison of coefficients leads to

â = α̂₁ − b̂q₀ ;  b̂ = α̂₂/q₁ ;  d̂ = β̂ .

The process parameters are now identifiable.

c) Two P-controllers: u(k) = −q₀₁y(k) and u(k) = −q₀₂y(k). Following a), two
equations with coefficients

α̂₁₁ = â + b̂q₀₁  and  α̂₁₂ = â + b̂q₀₂

are obtained. Hence

b̂ = (1/q₀₂)[α̂₁₂ − â] .

The process parameters are identifiable if q₀₁ ≠ q₀₂.
In general the process parameter vector Θ is obtained from the ARMA parameters
α̂₁, … , α̂_t via comparison of coefficients in (25.1.4), observing the identifi-
ability conditions 1 and 2. If d = 0 and m_a = m_b = m (and therefore t = 2m), and the
controller polynomial orders are ν = m and μ ≤ m so that (25.1.12) is satisfied, it
follows with p₀ = 1 that

a₁ + b₁q₀ = α₁ − p₁
a₁p₁ + a₂ + b₁q₁ + b₂q₀ = α₂ − p₂
⋮
a₁p_{j−1} + a₂p_{j−2} + … + a_m p_{j−m} + b₁q_{j−1} + … + b_m q_{j−m} = α_j − p_j
⋮   (25.1.16)

or in matrix form

S Θ = α* .   (25.1.17)

Here the first m columns of the 2m × 2m matrix S contain the controller denominator
coefficients p_i (with p₀ = 1 and p_i = 0 for i > μ), shifted downwards column by
column, the last m columns contain the numerator coefficients q_i (with q_i = 0 for
i > ν), shifted in the same way, Θ^T = [a₁ … a_m  b₁ … b_m], and
α*^T = [α₁ − p₁ … α_μ − p_μ  α_{μ+1} … α_{2m}].

As the matrix S is square, the process parameter vector is obtained from

Θ̂ = S⁻¹ α* .   (25.1.18)

For a unique solution of (25.1.17) the matrix S must have rank 2m, i.e. ν ≥ m or
μ ≥ m. If ν > m or μ > m, the overdetermined equation system (25.1.17) can be
solved using the pseudoinverse

Θ̂ = [S^T S]⁻¹ S^T α* .   (25.1.19)

However, as discussed in section 25.3, the process parameters converge only very
slowly with indirect process identification. The advantage of this method is that the
closed-loop identifiability conditions can be derived straightforwardly.
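The coefficient-comparison step (25.1.17)–(25.1.19) is a plain linear solve; a sketch for m = 1 with the PD controller of example 25.1.1b (ν = 1, μ = 0; all numbers made up):

```python
import numpy as np

a_true, b_true = -0.8, 0.5
q0, q1 = 1.0, 0.5                        # PD controller u(k) = -q0*y(k) - q1*y(k-1)

# closed-loop ARMA denominator coefficients, c.f. example 25.1.1b
alpha = np.array([a_true + b_true * q0,  # alpha_1 = a + b*q0
                  b_true * q1])          # alpha_2 = b*q1

# S * Theta = alpha*, eq. (25.1.17); with m = 1 and mu = 0, S is 2 x 2
S = np.array([[1.0, q0],
              [0.0, q1]])
theta = np.linalg.solve(S, alpha)        # (25.1.18); lstsq/pinv would give (25.1.19)
print(theta)                             # recovers [a, b]
```

For an overdetermined S (ν > m or μ > m), `np.linalg.lstsq` computes the pseudoinverse solution (25.1.19) directly.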

25.1.2 Direct Process Identification (Case b + d + e)


In the previous section it was assumed that the output signal y(k) is measurable and
the controller is known. The input signal u(k) is then also known via the controller
equation, so an additional measurement of u(k) provides no further information.
However, if u(k) is used for process identification, the process can be identified
directly, without going through the closed-loop equation, and knowledge of the
controller becomes unnecessary.
If, for closed-loop identification of the process G_P(z) (see Figure 25.1), nonparamet-
ric identification methods such as the correlation method were applied directly to
the measured signals u(k) and y(k), then because of the relationships

u(z)/v(z) = −G_R(z)G_Pv(z)/[1 + G_R(z)G_P(z)]   (25.1.20)

and

y(z)/v(z) = G_Pv(z)/[1 + G_R(z)G_P(z)]   (25.1.21)

a process with transfer function

y(z)/u(z) = [y(z)/v(z)]/[u(z)/v(z)] = −1/G_R(z)   (25.1.22)

would have been obtained, i.e. the negative inverse of the controller transfer function. The
reason is that not the undisturbed process output y_u(k) = y(k) − n(k) is used, but
the disturbed y(k). If y_u(k) were known, the process

y_u(z)/u(z) = [y(z) − n(z)]/u(z) = [y(z)/v(z) − n(z)/v(z)]/[u(z)/v(z)]   (25.1.23)

could be identified. This shows that direct closed-loop identification requires
knowledge of the noise filter n(z)/v(z). Therefore the process and noise
model resulting from (25.1.1) and (25.1.2),

A(z⁻¹)y(z) = B(z⁻¹)z^(−d)u(z) + D(z⁻¹)v(z) ,   (25.1.24)

is used.

The basic model for indirect process identification is the ARMA model, c.f. (25.1.4),
[A(Z-l)P(Z-l) + B(Z-l)Z-dQ(Z-l)]y(Z) = D(Z-l)P(Z-l)V(Z). (25.1.25)
Replacing the controller equation
Q(Z-l)y(Z) = _P(Z-l)U(Z) (25.1.26)
results in
A(Z-l)P(Z-l)y(Z) - B(Z-l)Z- dp(Z-l)U(Z) = D(Z-l)P(Z-l)V(Z) (25.1.27)
and after cancellation of the polynomial p(Z-l) one obtains the equation of the
process model as in open loop, (25.1.23). The difference from the open loop case is,
however, that u(z) or p(Z-l )u(z) depend on y(z) or Q(Z-l )y(z), (25.1.26), and
cannot be freely chosen.
The identifiability conditions for direct process identification in closed loop can
be derived from the condition for a unique minimum of the loss function

V = Σ_k e^2(k) .    (25.1.28)

The process model assumed for the parameter estimation is


A(Z-l)y(Z) - B(Z-l)Z-d U(Z) = D(z-l)e(z) (25.1.29)
see also (25.1.24). In the closed loop u(z) is given by (25.1.26). Hence

[1/D̂(z^-1)] [ Â(z^-1) + B̂(z^-1)z^-d Q(z^-1)/P(z^-1) ] y(z) = e(z) .    (25.1.30)

A unique minimum of the loss function V with regard to the unknown process
parameters requires a unique dependence of the process parameters in
(1/D̂)[ Â + B̂ z^-d Q/P ] = [ Â P + B̂ z^-d Q ] / [ D̂ P ]    (25.1.31)
on the error signal e. This term is identical to the right-hand side of (25.1.4), the
model for indirect process identification, for which the parameters of Â, B̂ and D̂ can
be uniquely determined based on the transfer function y(z)/v(z), provided that the
identifiability conditions 1 and 2 are satisfied. Therefore, in the case of convergence
with e(z) = v(z) the same identifiability conditions must be valid for direct closed
loop identification. Note that the error signal e(k) is determined by the same
equation for both the indirect and the direct process identification, compare (25.1.4)
and (25.1.30, 31). In the case of convergence this gives Â = A, B̂ = B and D̂ = D and
therefore in both cases e(k) = v(k). A second way of deriving identifiability
condition 2 is to consider the basic equation of some nonrecursive
parameter estimation methods. For the least squares method, (24.2.2) gives
y(k) = ψ^T(k)Θ = [ -y(k - 1) ... -y(k - m_a)
u(k - d - 1) ... u(k - d - m_b)]Θ .    (25.1.32)

ψ^T(k) is one row of the matrix Ψ of the equation system (24.2.6). Because of the
feedback (25.1.26), there is a relationship between the elements of ψ^T(k):

u(k - d - 1) = -p_1 u(k - d - 2) - ... - p_μ u(k - μ - d - 1)
- q_0 y(k - d - 1) - ... - q_ν y(k - ν - d - 1) .    (25.1.33)

u(k - d - 1) is therefore linearly dependent on the other elements of ψ^T(k) if
μ ≤ m_b - 1 and ν ≤ m_a - d - 1. Only if μ ≥ m_b or ν ≥ m_a - d does this linear
dependence vanish. This holds also for the actual equation system (24.2.6) for the
LS method. This shows that linearly dependent equations are obtained if
identifiability condition 2 is not satisfied.
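This linear dependence can be made concrete with a small simulation, here for a hypothetical first-order loop under a pure proportional controller (μ = ν = 0), for which condition 2 is violated:

```python
import numpy as np

# Hypothetical first-order loop: y(k) = -a*y(k-1) + b*u(k-1) + v(k),
# u(k) = -K*y(k). Then u(k-1) = -K*y(k-1): the two regressor columns
# of psi^T(k) = [-y(k-1), u(k-1)] are exactly collinear.
rng = np.random.default_rng(1)
a, b, K = -0.8, 0.5, 0.4
N = 500
y = np.zeros(N)
u = np.zeros(N)
for k in range(1, N):
    y[k] = -a*y[k-1] + b*u[k-1] + rng.standard_normal()
    u[k] = -K*y[k]

Psi = np.column_stack([-y[:-1], u[:-1]])   # rows are psi^T(k)
print(np.linalg.matrix_rank(Psi))          # 1: the LS normal equations are singular
```

The rank deficiency means the least squares problem has no unique solution, exactly as derived above.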
Now it remains to consider whether the same identification methods can be used for
direct parameter estimation in closed loop as for open loop. For both the least
squares and the maximum likelihood method the equation error or one step ahead
prediction error is
e(k) = y(k) - ŷ(k|k - 1) = y(k) - ψ^T(k)Θ̂(k - 1)    (25.1.34)
c.f. (24.2.5) and (24.5.5). The convergence condition is that e(k) is statistically
independent of the elements of ψ^T(k). For the LS method this gives

ψ^T(k) = [ -y(k - 1) ... | u(k - d - 1) ... ]

and for the RELS method

ψ^T(k) = [ -y(k - 1) ... | u(k - d - 1) ... | v̂(k - 1) ... ] .

In the case of convergence e(k) = v(k) can be assumed. As v(k) influences only y(k),
y(k + 1), ..., and these signal values do not appear in ψ^T(k), e(k) is certainly
independent of the elements of ψ^T(k). According to (25.1.26), this is also true with
a feedback on u(k) via the controller. The error e(k) is independent of the elements
of ψ^T(k) in closed loop as well. Therefore these two identification methods, which are
based on the one step ahead prediction error e(k) according to (25.1.34) can be
applied in closed loop as in open loop, provided that the identifiability conditions
are satisfied. They can also be applied for the signals u(k) and y(k) measured in
closed loop, without paying attention to the feedback. The application of the other
parameter estimation methods is discussed in section 25.3. An extensive treatment
of closed loop identification is given in [25.2, 25.4]. Nonlinear and time-variant
controllers are also considered there.
The most important results for closed loop identification without external perturbation,
but assuming a linear, time-invariant, noise-free controller, can be summarized
as follows:
1. For indirect process identification (only y(k) is measured) as well as for direct
process identification (y(k) and u(k) are measured) using parameter estimation
methods, identifiability conditions 1 and 2 must be satisfied.
2. Since for indirect process identification a signal process with ℓ ≥ m_a + m_b
parameters in the denominator (and r = m_d + μ parameters in the numerator) of
the transfer function has to be estimated, for direct process identification,

however, only a process with m_a parameters in the denominator and m_b parameters
in the numerator, a better result can be expected for direct process
identification. This holds especially for higher-order processes. Additionally, the
computational effort is smaller.
3. For direct process identification in closed loop, parameter estimation
methods using the prediction error can be applied as in open loop, provided the
identifiability conditions are satisfied. The controller need not be known.
4. If the existing controller does not satisfy identifiability condition 2 because
its order is too low, identifiability can be obtained by:
a) switching between two controllers with different parameters [25.4, 25.5],
b) introducing a dead time d ≥ m_a - ν + p in the feedback,
c) using nonlinear or time-varying controllers.
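Remedy (a) can be illustrated with a hypothetical first-order loop: switching between two proportional gains makes the regressor matrix regain full rank, so direct least squares estimation succeeds again:

```python
import numpy as np

# Hypothetical first-order loop y(k) = -a*y(k-1) + b*u(k-1) + v(k);
# the proportional gain is switched every 10 samples, so u(k-1) is no
# longer proportional to y(k-1) with one fixed factor and the
# regressor matrix regains full rank.
rng = np.random.default_rng(2)
a, b = -0.8, 0.5
K1, K2 = 0.4, 0.9
N = 4000
y = np.zeros(N)
u = np.zeros(N)
for k in range(1, N):
    y[k] = -a*y[k-1] + b*u[k-1] + 0.1*rng.standard_normal()
    K = K1 if (k // 10) % 2 == 0 else K2
    u[k] = -K*y[k]

Psi = np.column_stack([-y[:-1], u[:-1]])
theta_hat, *_ = np.linalg.lstsq(Psi, y[1:], rcond=None)
print(theta_hat)   # approaches [a, b] = [-0.8, 0.5]
```

The switching period used here is only illustrative; as noted in section 25.3, a period of about (5 ... 10)T_0 minimizes the estimate variance.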

25.2 Parameter Estimation with Perturbations

Now an external perturbation u_s(k) is assumed to act on the closed loop as shown
in Figure 25.2. The process input then becomes

u(z) = u_R(z) + u_s(z)    (25.2.1)

with

u_R(z) = -[Q(z^-1)/P(z^-1)] y(z) .    (25.2.2)

The additional signal u_s(k) can be generated by specially filtering a signal s(k):

u_s(z) = G_s(z)s(z) .    (25.2.3)

If G_s(z) = G_R(z) = Q(z^-1)/P(z^-1) then s(k) = w(k) is the reference value. s(k),
however, may also be a noise signal generated in the controller. If a test signal acts
directly on the process input, then G_s(z) = 1 and u_s(k) = s(k).
That means there are several ways to generate the perturbation u_s(k). For
the following it is only important that this perturbation is an external signal which
is uncorrelated with the process noise v(k). At this point the perturbation need not
be measurable.


Figure 25.2 Scheme of the process to be identified in closed loop with an external perturba-
tion s.

Again, the process can be identified either indirectly by measuring only y(k), or
directly by measuring u(k) and y(k). As indirect process identification has no
advantage, only direct process identification is considered in this section. The
output of the closed loop is

y(z) = [DP/(AP + Bz^-d Q)] v(z) + [Bz^-d P/(AP + Bz^-d Q)] u_s(z)    (25.2.4)

resulting in

[AP + Bz^-d Q] y(z) = DPv(z) + Bz^-d P u_s(z) .

Inserting (25.2.1) gives

A(z^-1)P(z^-1)y(z) - B(z^-1)z^-d P(z^-1)u(z) = D(z^-1)P(z^-1)v(z)    (25.2.5)

and after cancellation of the polynomial P(z^-1) one obtains the open loop process
equation

A(z^-1)y(z) = B(z^-1)z^-d u(z) + D(z^-1)v(z) .    (25.2.6)

Unlike (25.1.25), u is generated not only by the controller from y but, according
to (25.2.1), also by the perturbation u_s(k). Therefore the difference equation
following from (25.1.33), (25.2.1) and (25.2.2) is

u(k - d - 1) = -p_1 u(k - d - 2) - ... - p_μ u(k - μ - d - 1)
- q_0 y(k - d - 1) - ... - q_ν y(k - ν - d - 1)
+ u_s(k - d - 1) + p_1 u_s(k - d - 2)
+ ... + p_μ u_s(k - μ - d - 1) .
If u_s(k) ≠ 0, u(k - d - 1) is no longer linearly dependent on the elements of the data
vector ψ^T(k), (25.1.32), for arbitrary controller orders μ and ν. The process described by
(25.2.6) is therefore directly identifiable if the external perturbation u_s(k) excites
the process sufficiently. Note that the perturbation u_s(k) need not be measurable.
Hence, for an external perturbation signal u_s(k), identifiability condition 2
indicated in the last section is not significant in this case. Identifiability
condition 1, however, still has to be satisfied.
As already stated in the previous section, the prediction error parameter estimation
methods as used in open loop identification can be applied provided an
external perturbation signal acts on the process. The controller need not be
known and the perturbation signal need not be measurable. Note that this
result is also valid for any arbitrary noise signal filter D(z^-1)/C(z^-1).
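A minimal numerical sketch (a hypothetical first-order loop with an added random binary test signal) illustrates that the perturbation restores direct identifiability even under a fixed P controller that violates condition 2:

```python
import numpy as np

# Hypothetical first-order loop with a fixed P controller (which by
# itself violates identifiability condition 2) plus an external random
# binary perturbation u_s(k), uncorrelated with the noise v(k):
# u(k) = -K*y(k) + u_s(k). Direct LS estimation now succeeds.
rng = np.random.default_rng(3)
a, b, K = -0.8, 0.5, 0.4
N = 2000
y = np.zeros(N)
u = np.zeros(N)
u_s = rng.choice([-1.0, 1.0], size=N)      # external test signal
for k in range(1, N):
    y[k] = -a*y[k-1] + b*u[k-1] + 0.1*rng.standard_normal()
    u[k] = -K*y[k] + u_s[k]

Psi = np.column_stack([-y[:-1], u[:-1]])
theta_hat, *_ = np.linalg.lstsq(Psi, y[1:], rcond=None)
print(theta_hat)   # close to [a, b] = [-0.8, 0.5]
```

Note that only u(k) and y(k) enter the estimation; the perturbation itself is not measured, in agreement with the statement above.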

25.3 Methods for Closed Loop Parameter Estimation

In this section the on-line parameter estimation methods described in chapter 24


are considered for closed loop application.

25.3.1 Indirect Process Identification without Perturbation


If the process is identified indirectly, that means by measuring only y(k), and if there
is no perturbation signal, then the ARMA parameters α_i and β_i of (25.1.4) can be
estimated by the RLS method for stochastic signals, section 24.2.1. The process
parameters a_i and b_i can then be calculated with (25.1.18) or (25.1.19), if the
identifiability conditions are satisfied, or by the method of recursive correlation and least
squares (RCOR-LS), compare [25.3].
However, the parameter estimates converge very slowly with indirect process
identification because the number of parameters (ℓ + r) to be estimated is large and
the input signal v(k) is unknown. Therefore in general direct process identification
is preferred, provided the process input u(k) can be measured.

25.3.2 Direct Process Identification without Perturbation


As already discussed in section 25.1, the prediction error estimation methods
RLS, RELS and RML based on (25.1.34) can be applied for direct process
identification in closed loop. By measuring u(k) and y(k) they provide
unbiased and consistent estimates, provided identifiability conditions 1 and
2 are satisfied and the noise filter is 1/A for RLS and D/A for RELS and RML. To
obtain unbiased estimates with RIV, the instrumental variables vector w^T(k),
(24.4.1), must be uncorrelated with the error signal e(k) and therefore also uncorrelated
with the noise n(k) [3.13]. However, the input signals u(k - τ) are correlated
with n(k) for τ ≥ 0 because of the feedback. In closed loop RIV therefore furnishes
biased estimates. The correlation between u(k - τ) and e(k) vanishes for τ ≥ 1 only
if e(k) is uncorrelated, i.e. if the noise filter is of the form 1/A, as for the LS method,
c.f. [3.13, p. 66/67].

25.3.3 Direct Process Identification with Perturbation


If, as described in section 25.2, an external perturbation acts on the control
loop, then only identifiability condition 1 has to be considered. If only u(k) and y(k)
are used for parameter estimation, and not the perturbation, the RLS, RELS and
RML methods are suitable. A measurable perturbation can be introduced into the
instrumental variables vector of the RIV method. Then this method can be applied
for the same noise filters as in open loop.
The application of the correlation and least squares (RCOR-LS) method [3.13] in
closed loop is considered in [25.3] for all three cases in this section. This method is
suitable if the parameter estimates are not required after each sampling step but at
larger time intervals.
If identifiability condition 2 is not satisfied by a given controller, closed loop
parameter estimation can be performed by switching between two different controllers.
It has been shown in [25.5] that the variance of the parameter estimates can be
minimized by choosing the switching period to be about (5 ... 10)T_0.
26 Parameter-adaptive Controllers

This chapter treats parameter-adaptive controllers which are based on suitable
parameter estimation methods, controller design methods and control algorithms,
c.f. chapter 23. The relevant parameter estimation methods were discussed in
chapters 24 and 25. This chapter is therefore mainly devoted to the discussion of
combining the identified process model with appropriate controller design
methods, to examining the resulting behaviour, to giving examples of various
parameter-adaptive control systems and their applications, to continuous supervision,
etc.

26.1 Design Principles

Various design principles of parameter-adaptive control systems are considered
first. It is assumed that the process is linear and has either constant or time-varying
parameters. The class of adaptive controllers that is considered can be classified in
terms of (compare Figure 26.1):
- the process model
- the parameter estimation (and perhaps state estimation)
- the information (ℑ) about the process
- the criterion for controller design
- the control algorithm (design method, controller parameter calculation)
- the additional functions.


Figure 26.1 Structure of a parameter-adaptive controller.



Within each of these groups further classifications can be made. The most import-
ant cases, the resulting designations of adaptive controllers and upcoming tasks are
considered in the following.
a) Process models
Section 3.7.1 included a classification of mathematical process models. Here only
parametric process models are of interest:
- Input/output models in the form of (stochastic) difference equations or z-transfer
functions

A(z^-1)y(z) - B(z^-1)z^-d u(z) = D(z^-1)v(z)    (26.1.1)
(ARMAX model)

A(z^-1)y(z) - B(z^-1)z^-d u(z) = v(z)    (26.1.2)
(least squares model -> LS model).

v(k) is a statistically independent stochastic signal with E{v(k)} = 0 and variance
σ_v^2. The parameters Θ^T = [a_1 ... a_m; b_1 ... b_m; d_1 ... d_m] are assumed to be
constant. For further specifications see section 24.1.
- State models in the form of (stochastic) vector difference equations

x(k + 1) = A(Θ)x(k) + B(Θ)u(k) + Fv(k)
y(k) = C(Θ)x(k) + n(k) .    (26.1.3)

In general v(k) and n(k) are statistically independent signal processes with
E{v(k)} = 0, E{n(k)} = 0 and covariance matrices V and N. For more details the
reader is referred to chapters 15 and 22. The parameters Θ can be constant or
modelled by the stochastic process

Θ(k + 1) = ΦΘ(k) + ξ(k) .    (26.1.4)

Here the vector signal process ξ(k) is statistically independent with E{ξ(k)} = 0
and covariance matrix Ξ. The parameter vector Θ(k) can also be contained in an
extended state vector x(k).
The process models given above are valid for stochastic disturbances. Ordinary
difference or vector difference equations result if the disturbances v and
n are either deterministic signals or zero.
The difference equations belonging to (26.1.1) or (26.1.2) can easily be extended
to nonlinear difference equations which are linear in the parameters Θ but at
the same time contain terms such as powers of u(k) and y(k), see [26.59]. Nonlinear state models
can be written in the general form

x(k + 1) = f[x(k), u(k), Θ, v(k)]
y(k) = g[x(k), u(k), Θ, n(k)] .    (26.1.5)

Only linear process models will be considered in the sequel.
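For illustration, the input/output model (26.1.1) can be simulated directly as a difference equation; the helper below is a sketch whose name and interface are illustrative, not from the text:

```python
# Minimal simulator for the ARMAX model (26.1.1),
# A(z^-1)y = B(z^-1)z^-d u + D(z^-1)v, as a difference equation.
def armax_step(y_past, u_past, v_past, v_now, a, b, d_coef, d=0):
    """y(k) = -sum a_i y(k-i) + sum b_i u(k-d-i) + v(k) + sum d_i v(k-i)."""
    yk = v_now
    for i in range(1, len(a) + 1):
        yk += (-a[i-1]*y_past[-i] + b[i-1]*u_past[-(d+i)]
               + d_coef[i-1]*v_past[-i])
    return yk

# Noise-free step response of a first-order example (a1 = -0.5, b1 = 1):
a, b, d_coef = [-0.5], [1.0], [0.0]
y, u, v = [0.0], [1.0], [0.0]          # step input u(k) = 1 from k = 0
for k in range(1, 61):
    y.append(armax_step(y, u, v, 0.0, a, b, d_coef, d=0))
    u.append(1.0)
    v.append(0.0)
print(y[-1])   # steady state b1/(1 + a1) = 2
```

Setting d_coef to nonzero values and feeding white noise as v reproduces the stochastic case of (26.1.1).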



b) Parameter estimation and state estimation


Suitable parameter estimation methods for the closed loop case were considered in
chapter 25. Table 26.3 provides a survey.
For state estimation and state observation see sections 8.6 and 22.3. State and
parameter estimation can also be performed together using an extended Kalman
filter, see e.g. [3.12]. External additional signals can be used to speed up the
parameter estimation; for suitable methods, however, this action is not required.

c) Information about the process


The information ℑ about the process that is acquired by parameter and state
estimation forms the basis of the controller design and of the calculation of the
controller parameters. ℑ can contain the following components:

- Parameter estimation:
- process parameter estimates
ℑ_11 = [Θ̂] = [â_i; b̂_i]^T or [â_i; b̂_i; d̂_i]^T    (26.1.6)
- process parameter estimates and their uncertainty
ℑ_12 = [Θ̂, ΔΘ̂]^T    (26.1.7)

- State estimation:
- state estimates
ℑ_21 = [x̂(k + 1)]    (26.1.8)
- state estimates and their uncertainty
ℑ_22 = [x̂(k + 1), Δx̂(k + 1)]^T    (26.1.9)

- Signal estimation:
If the assumed process model contains a noise signal filter, e.g. D(z^-1)/A(z^-1),
the nonmeasurable noise signals v(k) or n(k) can be estimated by applying
parameter or state estimation methods

(26.1.10)

In addition, future outputs y(k + j), j ≥ 1, can be predicted based on inputs
u(k - l), l ≥ 0, and noise estimates v̂(k - l), l ≥ 0.
Depending on the information used, various types of adaptive controllers can be
distinguished. The following principles are presented for the case of stochastic
control systems. Here stochastic state variables as well as stochastic parameters are
estimated, corresponding to information type (26.1.6) or (26.1.8) or their combination. The
definitions given in the literature are not unique, see [26.1, 26.2, 22.14]. The
following is an attempt to formulate the principles in such a way that they do not
contradict the former definitions.

Control using the separation principle


A stochastic controller follows the separation principle if the parameter or state
estimation is performed separately from the calculation of the controller parameters,
c.f. chapter 15 and [23.14]. The process input, including the directly
measured signals, is then

u(k) = f_S[y(k), y(k - 1), ..., u(k - 1), u(k - 2), ..., Θ̂(k), x̂(k)] .

Control using the certainty equivalence principle


A stochastic controller obeys the certainty equivalence principle if it is designed
assuming exactly known parameters and state variables,

u(k) = f_D[y(k), y(k - 1), ..., u(k - 1), u(k - 2), ..., Θ_0, x_0(k)] ,

and if the parameters Θ_0 and state variables x_0(k) are then replaced by their
estimates:

u(k) = f_D[y(k), y(k - 1), ..., u(k - 1), u(k - 2), ..., Θ̂(k), x̂(k)] .

The certainty equivalence principle is therefore a special case of the separation
principle. Function f_S includes function f_D but not vice versa. Hence, the certainty
equivalence principle presupposes the separation principle.
The certainty equivalence principle is theoretically justified if the process parameters
are known, the state variables of a linear process are to be estimated with
white process and measurement noise v(k) and n(k), and a quadratic performance
criterion is used [26.2]. For unknown stochastic parameters the certainty equivalence
principle is theoretically valid only if the parameters are statistically independent
[26.3, 26.4, 23.14]. For parameter-adaptive stochastic control the certainty
equivalence principle is not satisfied in general. However, it is frequently used as an
ad-hoc design principle [23.14]. Based on these principles two different types of
adaptive controllers emerge [23.14]:

Certainty equivalence controllers


A controller which is designed by making use of the certainty equivalence principle
is called a 'certainty equivalence controller'. It is then assumed, for the purpose of
controller parameter calculation, that the parameter or state estimates are
identical with the actual parameters or states. The information measures ℑ_11 or ℑ_21 are
used. The controller does not take into account the uncertainty of the estimates.

Cautious controllers

A controller which employs the separation principle in the design and uses the
parameter and state estimates together with their uncertainties is called a 'cautious
controller'. Here the information measures ℑ_12 or ℑ_22 are used. Because of the
uncertainty of the estimates the controller applies a cautious action on the process.

d) The criterion for the controller design


- Dual adaptive controllers
The performance of adaptive controllers based on process identification
depends mainly on the performance of the process identification and on the
control algorithm applied, which forms u(k) based on the control deviation
e_w(k) and the information measure ℑ. The process input must be determined such
that two objectives are achieved simultaneously:
- good compensation of current disturbances,
- good future process identification.
This leads to the dual controller of [26.5]. Both requirements may be contradictory.
If, for example, the process parameter estimates are wrong, the
controller should act cautiously, i.e. make small changes in u(k); but to improve the
parameter estimates large changes in u(k) are required. Dual controllers
therefore have to find an appropriate compromise between the objectives.
Hence the controller design criterion has to take into account both the
current control performance and the future information ℑ.
- Nondual adaptive controllers
Nondual controllers use only present and past signals and the current
information ℑ concerning the process. The controller design criteria most
frequently used for nondual controllers have been discussed in previous
chapters. These were mostly quadratic criteria or special criteria such as the
principles of deadbeat control, pole-zero cancellation or pole assignment.
The criteria mostly used for the design of nondual controllers have already been
indicated in chapter 4. For the parameter-adaptive controllers considered here,
exclusively quadratic criteria are eligible, e.g.

I_1 = E{e_w^2(k + d + j) + r u^2(k)};  j ≥ 0

I_2 = E{ (1/(N + 1)) Σ_{k=0}^{N} [e_w^2(k + d + j) + r u^2(k)] };  j ≥ 0

for minimum variance and parameter-optimized controllers for stochastic noise.
Here e_w(k) is the control deviation

e_w(k) = w(k) - y(k) .
For state controllers the following is used, compare (15.1.5).

It is not necessary to use the expectation value when designing for deterministic
signals. Controllers designed according to the cancellation principle, e.g. deadbeat
controllers, do not require a specific performance function since the trajectories of
the controlled and manipulated variables or their settling times are prescribed.

e) Control algorithms
The actual design of a control algorithm is of course performed before implementation
in a digital computer. It then remains to calculate the controller parameters
as functions of the process parameters. Control algorithms for adaptive control should
satisfy:
(1) closed loop identifiability condition 2;
(2) small computing and storage requirements for the controller parameter
calculation;
(3) applicability to many classes of processes and signals.
The next section discusses which control algorithms meet these requirements
for parameter-adaptive control. Within the class of self-optimizing adaptive controllers
based on process identification, nondual methods based on the certainty
equivalence principle and recursive parameter estimation have shown themselves to
be successful both in theory and practice. The resulting methods will be called
parameter-adaptive control algorithms; they are also called self-tuning regulators, e.g.
[27.8, 27.13]. One could imagine a distinction between 'self-tuning' and
'adaptive', as the former appears to imply constant process parameters. However,
there is no sharp boundary between the cases when considering their applicability,
so the distinction is of secondary importance.
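The following sketch combines the pieces into a minimal nondual self-tuning regulator for a hypothetical first-order process: RLS estimation plus a certainty equivalence redesign at every step. The pole-placement design rule and the small dither signal (acting as the external perturbation of chapter 25, keeping the loop identifiable) are illustrative assumptions, not the book's specific algorithm:

```python
import numpy as np

# Nondual certainty-equivalence self-tuner, hypothetical first-order
# process y(k) = -a y(k-1) + b u(k-1) + v(k). RLS estimates (a, b);
# the controller is then redesigned each step from the estimates as if
# they were exact (pole placement at z_pole, an illustrative rule).
rng = np.random.default_rng(4)
a, b = -0.9, 0.8
z_pole = 0.3
theta = np.array([-0.5, 0.5])       # initial estimates [a_hat, b_hat]
P = 100.0*np.eye(2)                 # RLS covariance
y_prev = u_prev = 0.0
for k in range(1500):
    yk = -a*y_prev + b*u_prev + 0.05*rng.standard_normal()
    psi = np.array([-y_prev, u_prev])
    gamma = P @ psi/(1.0 + psi @ P @ psi)          # RLS correcting vector
    theta = theta + gamma*(yk - psi @ theta)
    P = P - np.outer(gamma, psi @ P)
    a_hat, b_hat = theta
    # certainty equivalence: a_hat, b_hat used as if they were the truth;
    # the small dither keeps the closed loop identifiable
    u_now = ((a_hat + z_pole)/b_hat)*yk + 0.1*rng.standard_normal()
    y_prev, u_prev = yk, u_now
print(theta)   # approaches the true values [-0.9, 0.8]
```

With u(k) = ((a + z_pole)/b)·y(k) and exact parameters, the closed-loop pole lies at z_pole; the estimates converge because the time-varying controller and the dither excite the loop.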

26.2 Suitable Control Algorithms

This section examines the structure and design effort of various control algorithms
with regard to their application for parameter-adaptive controllers. For minimal
computational effort in calculating the controller parameters, the following control
algorithms are preferred for parameter-adaptive control:
- deadbeat controller DB(v), DB(v + 1)
- minimum variance controller MV3, MV4
- parameter-optimized controller i-PC-j
(with direct parameter calculation).
More computation is required for:
- general linear controller with pole assignment LC-PA
- state controller SC.
These control algorithms are now considered with regard to identifiability
condition 2, (25.1.14), and the computational effort involved. For control algorithms
which theoretically cancel the process poles or zeros, one must distinguish
whether or not the controller is adjusted exactly to the process.

26.2.1 Deadbeat Control Algorithms


According to (7.1.27) the z-transfer function of DB(v) is

G_R(z) = q_0 Â(z^-1) / [1 - q_0 B̂(z^-1)z^-d] .    (26.2.1)

Its orders are ν = m_a and μ = m_b + d. This means that identifiability condition 2 is
satisfied provided there are no common roots in the transfer behaviour (25.1.15).
For the case of inexactly adjusted controller parameters

y(z)/v(z) = D(z^-1)[1 - q_0 B̂(z^-1)z^-d] / ( A(z^-1)[1 - q_0 B̂(z^-1)z^-d] + q_0 Â(z^-1)B(z^-1)z^-d )    (26.2.2)

becomes valid. As no poles and zeros cancel, identifiability condition 2 is satisfied.
If the process model and the process agree, i.e. Â = A and B̂ = B, the transfer
behaviour changes. This leads to

y(z)/v(z) = D(z^-1)[1 - q_0 B(z^-1)z^-d] / A(z^-1) .    (26.2.3)

Since no common roots occur in the numerator and denominator, the process
remains identifiable. Note that in the control loop model (26.2.2), which is based
on process identification, the denominator polynomial 𝒜(z^-1) has order ℓ = m_a + m_b + d.
In the adjusted state, however, 𝒜(z^-1) = A(z^-1), so that the parameters
(α_{m+1} ... α_{2m+d}) = 0 for e.g. m_a = m_b = m. Hence, in the case of indirect process
identification these vanishing parameters have to be used for the calculation of the
process parameters. According to (7.2.14) the deadbeat controller DB(v + 1) has
orders ν = m_a + 1 and μ = m_b + d + 1. Concerning the identifiability condition
this controller shows the same behaviour as DB(v). It is recommended to apply
deadbeat controllers only with increased order and only for strongly damped low-pass
processes.
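The deadbeat design can be checked numerically: with q_0 = 1/(b_1 + ... + b_m), the controller numerator coefficients q_0 a_i and denominator coefficients q_0 b_i follow directly from the process model, and the exactly tuned loop settles on the reference after m + d steps. A sketch for a hypothetical second-order process:

```python
# DB(v) deadbeat controller (26.2.1), G_R = q0*A(z^-1)/(1 - q0*B(z^-1)z^-d)
# with q0 = 1/(b1 + ... + bm), applied to an exactly known hypothetical
# second-order process; the output settles on the reference after m + d
# steps (the reference step enters at k = 1, so y(k) = w for k >= 3 here).
a = [-1.5, 0.7]            # A(z^-1) = 1 + a1 z^-1 + a2 z^-2
b = [1.0, 0.5]             # B(z^-1) = b1 z^-1 + b2 z^-2, d = 0
m = 2
q0 = 1.0/sum(b)
w = 1.0                    # reference step
N = 12
y, u, e = [0.0]*N, [0.0]*N, [0.0]*N
for k in range(1, N):
    y[k] = sum(-a[i]*y[k-1-i] for i in range(m) if k-1-i >= 0) \
         + sum(b[i]*u[k-1-i] for i in range(m) if k-1-i >= 0)
    e[k] = w - y[k]
    # controller difference equation from G_R = q0*A/(1 - q0*B*z^-d):
    u[k] = q0*(e[k] + sum(a[i]*e[k-1-i] for i in range(m) if k-1-i >= 0)) \
         + q0*sum(b[i]*u[k-1-i] for i in range(m) if k-1-i >= 0)
print(y[3:])   # all remaining samples equal w = 1
```

In the adaptive case the same calculation is simply repeated with the current estimates â_i, b̂_i in place of a_i, b_i.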

26.2.2 Minimum Variance Controllers


Because of the assumed noise filter (25.1.2) and the smaller computational effort,
only MV3 and MV4 are of interest. The z-transfer function of the minimum variance
controller MV3-d is, (14.2.12),

G_R(z) = L̂(z^-1) / [ zB̂(z^-1)F̂(z^-1) + (r/b̂_1)D̂(z^-1) ]    (26.2.4)

with

D̂(z^-1) = F̂(z^-1)Â(z^-1) + z^-(d+1) L̂(z^-1) .    (26.2.5)

The orders are

ν = max[m_d, m_a] - 1
μ = max[m_b - 1, m_d]            for d = 0

ν = max[m_d - d - 1, m_a - 1]
μ = max[m_b + d - 1, m_d]        for d ≥ 1 .
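The identity (26.2.5) amounts to a polynomial division: F̂ collects the first d + 1 terms of the series expansion of D̂/Â, and L̂ is the shifted remainder. A small sketch (function name and interface are hypothetical):

```python
import numpy as np

# Solve (26.2.5), D(z^-1) = F(z^-1)A(z^-1) + z^-(d+1) L(z^-1), by
# polynomial long division: F = first d+1 series terms of D/A,
# L = the shifted remainder.
def mv_identity(A, D, d):
    """A, D: coefficients [1, a1, ..., am] and [1, d1, ..., dm] in z^-1."""
    m = len(A) - 1
    n = d + 1 + m
    rem = np.zeros(n)
    rem[:len(D)] = D
    F = np.zeros(d + 1)
    for i in range(d + 1):                 # division step for z^-i
        F[i] = rem[i]
        for j, aj in enumerate(A):
            rem[i + j] -= F[i]*aj
    return F, rem[d + 1:]                  # F and L = [l0, ..., l_{m-1}]

F, L = mv_identity([1.0, -0.5], [1.0, 0.3], d=1)
print(F, L)   # F = 1 + 0.8 z^-1, L = 0.4
```

One can verify by multiplying out that F·A + z^-(d+1)·L reproduces D exactly, which is the defining property used in the controller (26.2.4).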

According to identifiability condition 2, it follows for d = 0, (25.1.14), that

max[m_d - m_b, m_d - m_a - 1] ≥ p    (26.2.6)

and for d ≥ 1

max[d - 1, m_d - m_b, m_d - m_a - 1] ≥ p .    (26.2.7)

If the controller is not exactly tuned, the closed-loop transfer function y(z)/v(z) is

G_yv(z) = D[zB̂F̂ + (r/b̂_1)D̂] / ( [zB + (r/b̂_1)A]D̂ + zF̂[AB̂ - ÂB] ) .    (26.2.8)

In general no common roots appear, and the identifiability conditions (26.2.6) and
(26.2.7) are satisfied with m_d ≥ m_b or m_d ≥ m_a + 1 for d = 0 or d ≥ 1.
However, if the controller becomes exactly tuned to the process, Â = A, B̂ = B,
D̂ = D,

G_yv(z) = D[zBF + (r/b_1)D] / ( D[zB + (r/b_1)A] )    (26.2.9)

and p = m_d common roots appear, which means that the process is no longer
identifiable for d = 0, (26.2.6). (26.2.7) leads to

max[d - 1, m_d - m_a - 1] ≥ m_d ,    (26.2.10)

resulting in the requirement for MV3-d that

d ≥ m_d + 1    (26.2.11)

which is satisfied only for relatively large dead times. With r = 0 the MV4-d
controller is obtained, (14.2.13),
G_R(z) = L̂(z^-1) / [zB̂(z^-1)F̂(z^-1)] .    (26.2.12)

Identifiability for d = 0 in the untuned case is obtained with m_d ≥ m_a + 1. For
d ≥ 1 the identifiability conditions are always satisfied. When tuning is exact,
p = m_d common roots appear for r = 0, (26.2.9). Then the process is not identifiable
for d = 0. For d ≥ 1 it has to be satisfied that

d ≥ m_d + 1 .    (26.2.13)

If the minimum variance controllers are followed by a proportional-integral acting
term (14.1.25) in order to avoid lasting control deviations, the orders ν and μ are
increased by one and no common roots of D and 𝒜 occur for α ≠ 0. Hence the
same identifiability conditions are valid for these modified minimum variance
controllers MV3-d-PI or MV4-d-PI as for inexactly tuned MV3 and MV4 controllers.
Therefore minimum variance controllers often satisfy the identifiability conditions,
see Table 26.1. It is recommended to use minimum variance controllers only
for distinctly stochastic disturbance signals.
Table 26.1 Properties of different control algorithms with respect to application for parameter-adaptive control.

control            identifiability   computational effort     danger of      evaluation for
algorithm          condition 2       parameter    operation   instability    parameter-adaptive control
                   satisfied         calculation              for *)

deadbeat contr.    yes               very small   medium      A(z) = 0       suitable for asymptotically
DB(v), DB(v + 1)                                                             stable processes

min. variance      d ≥ m_d + 1       small        medium      D(z) = 0       suitable for stochastic
contr. MV3-d                                                                 disturbances

min. variance      d ≥ m_d + 1       small        medium      B(z) = 0,      suitable for processes with
contr. MV4-d                                                  D(z) = 0       zeros inside the unit circle
                                                                             and stochastic disturbances

parameter-         ν ≥ m_a - d       medium       small                      suitable, dependent on
optimized                                                                    proper design
contr. i-PC-j
(ν = 2)

linear contr.      ν = m_a,          large        medium                     suitable, if pole placement
LC-PA              μ = m_b + d                                               is no problem

state contr.       yes               medium/      large                      suitable, if computational
with observer                        large                                   expense is no problem

*) compare Table 11.1 and Table 14.1.


26.2.3 Parameter-optimized Controllers


The deadbeat and minimum variance controllers considered up to now are only
suitable for special processes. For most processes users desire PID controllers.
A parameter-optimized controller (5.2.3)

G_R(z) = (q_0 + q_1 z^-1 + ... + q_ν z^-ν) / (1 - z^-1)    (26.2.14)

satisfies the identifiability condition (25.1.12) if its order is related to the process by
ν ≥ m_a - d and the parameters q_i are chosen in such a way that no common roots
appear in (25.1.15). Hence PID controllers (3PC) are suitable for processes with
m_a ≤ 2 + d. If the process possesses no dead time, the allowable process order is
m = 2. For d = 1 the order m = 3 may be chosen, for d = 2, m = 4, etc.
This somewhat restricts the application of PID controllers. (However, good results
can also be accomplished for higher-order processes, as will be shown later.)
For the PID control algorithm design the following methods can be used
(chapter 5):
- numerical parameter optimization
- approximation of controllers which can be easily designed
- simulation of tuning rules.
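As an illustration of direct parameter calculation, one common discretization (an assumption here, not the only correspondence used in chapter 5) maps the continuous PID parameters K, T_I, T_D and sample time T_0 onto the coefficients of the velocity-form algorithm u(k) = u(k-1) + q_0 e(k) + q_1 e(k-1) + q_2 e(k-2):

```python
# One common discretization of the PID law in velocity form (assumed
# here for illustration; other correspondences exist):
# u(k) - u(k-1) = K[e(k) - e(k-1) + (T0/TI)*e(k)
#                  + (TD/T0)*(e(k) - 2e(k-1) + e(k-2))]
def pid_params(K, TI, TD, T0):
    q0 = K*(1.0 + T0/TI + TD/T0)
    q1 = -K*(1.0 + 2.0*TD/T0)
    q2 = K*TD/T0
    return q0, q1, q2

q0, q1, q2 = pid_params(K=2.0, TI=10.0, TD=1.0, T0=1.0)
# integral action check: for constant e, u grows by
# (q0 + q1 + q2)*e = K*(T0/TI)*e per step
print(q0 + q1 + q2)   # -> 0.2
```

Because the q_i are explicit functions of K, T_I, T_D, they can be recomputed at every adaptation step with negligible effort, which is why the 3PC entry in Table 26.1 shows medium parameter calculation but small operational effort.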

26.2.4 General Linear Controller with Pole-assignment (LCPA)


The general linear controller (6.1.1) is considered using the pole-assignment design
according to section 6.1. For ℓ given poles (6.1.8) the controller parameters can be
determined uniquely provided the controller polynomials have orders

ν = m_a and μ = m_b + d ,

see (6.1.13). If no roots cancel in (25.1.15), identifiability condition 2 is satisfied
for d ≥ 0. A drawback is the relatively large computational effort.

26.2.5 State Controller


The derivation of the identifiability conditions in chapter 25 is based on
input/output controllers. The results can therefore only be transferred to state
controllers with state observers or state estimators if the control algorithms can be
put into an input/output representation, c.f. (8.7.18). The characteristic equation
then has order ℓ ≤ 2m, (8.7.19). Therefore identifiability condition 2 is
satisfied if no common roots appear in D(z^-1). The state controller can be
designed either by pole assignment or by recursive solution of the matrix Riccati
equation, c.f. section 8.1.
Table 26.1 summarizes some properties of the different control algorithms. Table
26.2 shows the computing and storage requirements. Further details concerning
the computational effort will be given in chapter 31.
180 26 Parameter-adaptive Controllers

Table 26.2 Computational effort and storage requirement for different control algorithms
[25.16].

control      controller parameter calculation     control algorithm
algorithm    FORTRAN        storage               algebraic                 shift
             statements     [bytes]               operations                operations

DB(ν)            12           171                 4m+2                      2m+d
DB(ν+1)          19           328                 4m+6                      2m+d+2
MV4              32           508                 4m+2d+2                   2m+d+2
MV3              39           612                 max[4m; 4m+2d-2]          max[2m-1; 2m+2d-2]
3PC-2            34           631                 8, (7.4.11)               3, (7.4.12)
LCPA             78          1187                 4m+2d-2                   2m+d

26.3 Suitable Combinations

As already shown in section 26.1, parameter-adaptive control systems result from


the combination of the following methods:
1. Recursive parameter estimation methods for:
a) dynamic behaviour
b) static behaviour
2. Controller design procedures
3. Control algorithms
a) with integral term
b) without integral term and special ways for avoiding offsets.
In this section different possibilities of combinations are given.

26.3.1 Ways of combination


The basic approaches of parameter-adaptive controllers have already been de-
scribed in section 26.1. In the following the separation principle is applied, which
means that parameter estimation and controller parameter calculation are treated
separately, see also chapters 15 and 26.1. In addition, the certainty equivalence
principle is applied, which means that the parameter estimates are assumed to be
identical with the real parameters.
Combining parameter estimation, controller design and control algorithms the
following cases can be distinguished:

Explicit and implicit combination


The process model is assumed to be, (24.2.2),

y(k) = ψ^T(k)Θ(k - 1) (26.3.1)

and the control algorithm

u(k) = ζ^T(k)Γ(k - 1) (26.3.2)

ζ^T(k) = [ew(k) ew(k - 1) ... ew(k - ν)  - u(k - 1) ... - u(k - μ)] (26.3.3)

Γ^T = [q0 q1 ... qν p1 ... pμ] (26.3.4)

In the case of the explicit combination the process parameters are estimated explicitly

Θ̂(k) = Θ̂(k - 1) + γ(k - 1)[y(k) - ψ^T(k)Θ̂(k - 1)] (26.3.5)

and stored as intermediate results. Then the controller parameters are calculated

Γ(k) = f[Θ̂(k)] (26.3.6)

and the new process input u(k + 1) follows from (26.3.2). For an implicit combina-
tion the controller parameter calculation (26.3.6) is inserted into the process model
(26.3.1), hence

y(k) = ζ^T(k)Γ(k - 1) (26.3.7)

The controller parameters are then estimated with a recursive equation

Γ̂(k) = Γ̂(k - 1) + γΓ(k - 1)[y(k) - ζ^T(k)Γ̂(k - 1)] (26.3.8)

which follows from inserting (26.3.6) into (26.3.5). The process parameters are then
not accessible. See Figure 23.6.
The implicit combination has the advantage that some computing time may be
saved. In addition, theories for the convergence of parameter estimation methods
can be applied directly. However, the number of controller parameters may
increase, and only some of the parameter-adaptive controllers can be brought into
an implicit form. The first proposals for stochastic parameter-adaptive controllers
were implicit combinations [23.32-23.34]. The explicit combinations give more
freedom for the design. They allow the combination of any suitable parameter
estimation and controller design method, modular programming, and access to
intermediate results, especially for supervision, see sections 26.4 and 26.5. There-
fore the explicit combination is more generally applicable.

Synchronous and asynchronous combination


The original parameter-adaptive controllers use the same sampling time for para-
meter estimation, controller parameter calculation and the control algorithm. This
may be called synchronous combination. Asynchronous combinations, however,
can also be provided:
a) Different sampling times. The control algorithm and the parameter estimation
may use different sampling times. For example, the control algorithm can operate
with a small sampling time T0C in order to obtain a good control performance,
and the parameter estimation with a larger sampling time T0P (T0P = x T0C,
x = 2, 3, ...) to improve the numerical properties. For fast processes two different
microprocessors can be used for control and parameter estimation. A small
sampling time for parameter estimation and a large sampling time for control may
be used for an automatic on-line search for a suitable, later synchronous sampling
time [26.36]. In the case of computing time problems the controller design can also
be distributed over several sampling intervals.
b) Conditional controller design. The calculation or change of the controller para-
meters may depend on certain conditions, for example the transgression of
thresholds by process parameter changes, the excitation by input signals, or
the result of a closed-loop simulation.
Hence, several possibilities exist for combining the elements of parameter-adaptive
controllers. Their selection depends, for example, on the process, the acting signals,
the necessity of adaptation and the computer capacity.

26.3.2 Stability and Convergence


For the choice of the parameter estimation method, the controller design method
and the control algorithm it is of interest under which conditions a stable and
convergent behaviour of the adaptive control system can be reached. In this
context stability means that the signals remain bounded, and convergence means
that the desired control performance is reached asymptotically [26.63].
The parameter-adaptive system is nonlinear and time-variant. Therefore stability
investigations are difficult [26.38]. The problem is therefore divided into single
steps which can be solved more easily. In this way candidates for stable overall
systems are obtained.
The following conditions for stability can be given directly:

Stability condition 1 (necessary)


The closed loop is stable with the fixed controller whose parameters have the exact
values Γ0 or values in the vicinity Γ0 + ΔΓ.

Stability condition 2 (sufficient)


The parameter estimates converge to such values

lim_{k→∞} E{Θ̂(k)} = Θ∞ (26.3.9)

that the controller parameters converge to the exact values Γ0

lim_{k→∞} E{Γ(k)} = Γ0 . (26.3.10)

(asymptotic stability)
To satisfy the first condition, the pole-zero cancellation problems, which depend
on the structure of the process and the controller, have to be taken into account.
The second condition implies the close connection between convergence and
stability. The convergence can be subdivided into several phases:
- convergence at the beginning
- convergence far from the convergence point
- convergence close to the convergence point (asymptotic convergence)

The stability condition 2 is related only to the asymptotic behaviour. The adaptive
system is in general stable if the process model parameters reach the true values

lim_{k→∞} E{Θ̂(k)} = Θ0 . (26.3.11)

The parameter estimation methods RLS, RELS and RML satisfy this condition if
the following convergence conditions (see section 24.2) are met:
a) process: stable and identifiable
b) process order m and deadtime d known
c) lim_{k→∞} (1/k) P^-1(k) positive definite (persistent excitation of order m)
d) e(k) not correlated with u(k)
e) e(k) not correlated and E{e(k)} = 0
f) RELS, RML: H(z) = 1/D(z^-1) - 1/2 positive real. This means H(z) is stable
and Re{H(z)} > 0, or Re{D(z^-1)} < 2
g) identifiability conditions in closed loop satisfied (chapter 25).
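Condition f) can be checked numerically. Positive realness of H(z) = 1/D(z^-1) - 1/2 on the unit circle is equivalent to |D(e^{-jω}) - 1| < 1 for all ω (the form quoted again in section 26.4.1), of which Re{D} < 2 is a necessary consequence. A minimal grid check, with example coefficients assumed for illustration:

```python
import math

def h_positive_real(d_coeffs, n=720):
    """True if |D(e^{-jw}) - 1| < 1 on a frequency grid, i.e. if
    H(z) = 1/D(z^-1) - 1/2 is positive real (D assumed stable)."""
    for i in range(n + 1):
        w = math.pi * i / n
        z_inv = complex(math.cos(w), -math.sin(w))
        D = sum(c * z_inv ** k for k, c in enumerate(d_coeffs))
        if abs(D - 1.0) >= 1.0:
            return False
    return True

print(h_positive_real([1.0, 0.5]))       # True:  |0.5 e^{-jw}| < 1 everywhere
print(h_positive_real([1.0, 0.9, 0.9]))  # False: D(1) - 1 = 1.8 at w = 0
```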
For some parameter-adaptive controllers these conditions can be weakened:
- biased parameter estimates can be tolerated or merely yield asymptotic convergence
(RLS/MV4)
- the process may be unstable
- the identifiability conditions can be circumvented by assuming some controller
parameters to be known (e.g. MV).
The stability and convergence of parameter-adaptive controllers were investigated
with different approaches. A survey is given in [26.38] and [26.37].
At first the convergence properties of recursive parameter estimators are of
interest. The ODE method (analysis via approximation by ordinary differential
equations) due to Ljung (1977) [26.17], the analysis via Lyapunov functions,
de Larminat (1979) [26.41], or the application of martingale theory, Kumar,
Moore (1978), Solo (1979), result in conditions for the asymptotic or even global
convergence [26.39]. Those results can be directly transferred to some implicit
parameter-adaptive controllers, Egardt (1979) [26.40], Gawthrop (1980) [26.63],
e.g. for RLS/MV4 due to Ljung [26.37, 26.40, 26.63].
The convergence of a large class of explicit parameter-adaptive controllers can
be investigated as shown by de Larminat (1980) [26.42]. For the case of determin-
istic signals (no disturbances) the procedure is as follows, [26.42], [26.43].
For the process it is

y(k) = ψ^T(k)Θ(k) + e(k) (26.3.12)

and for the controller with w(k) = 0 or ew(k) = - y(k)

u(k) = - ψ^T(k)Γ1(k) - q0(k)y(k) (26.3.13)

After inserting (26.3.12) one obtains

u(k) = - ψ^T(k)[Γ1(k) + q0(k)Θ(k)] - q0(k)e(k) (26.3.14)

Now a model loop is formed, consisting of the time-variant process model, the
present controller and the measured signals. In state space representation it is

ψ(k + 1) = Φ(k)ψ(k) + h(k)e(k) (26.3.15)

where Φ(k) is a companion-type matrix containing - Θ^T(k) in the row generating
y(k + 1) and - [Γ1(k) + q0(k)Θ(k)]^T in the row generating u(k + 1), with shifted
unit entries elsewhere, and where h(k) has the entries 1 and - q0(k) in these two
rows and zeros elsewhere (26.3.16).

If the signals in ψ^T(k), the process parameters Θ(k) and the controller
parameters Γ(k) are bounded, all other elements of Φ(k) and h(k) are also bounded.
If the process parameters converge towards final values for k → ∞, then there exists
a finite time k1 after which Φ(k) has only stable eigenvalues. For k > k1 the system
approaches more and more a time-invariant form. As the elements of ψ(k)
approach zero for k → ∞, the model control loop and also the parameter-adaptive
control loop become asymptotically stable. This method can also be transferred to
stochastic disturbances, applying e.g. the RELS estimation method, Schumann
(1982) [26.43].
The investigation shows that asymptotic stability of an explicit parameter-
adaptive control system can be reached if
a) the convergence conditions of the parameter estimation method are satisfied
(items a to g), and
b) the manipulated variables u(k) of the process are bounded.
This convergence consideration can be classified as "somewhat far" from the
convergence point. Simulations and applications have shown that generally con-
vergence "far from the convergence point" can be reached if the above mentioned

conditions are met. To assure an "initial convergence", special means can be
applied, see section 26.7.
For the explicit parameter-adaptive RLS-DB controller, global stability could be
proved for the deterministic case [26.45]. With sufficient excitation, and therefore
P^-1(k) → ∞ for k → ∞, the parameter-adaptive controller converges to the exact
DB-controller.
The identifiability conditions in closed loop for parameter estimation methods
were derived for fixed and exactly tuned controllers, see chapter 25. In the course
of adaptation, however, the controller is time-variant. Therefore some additions
are needed with respect to the convergence of the parameter estimation.
With excitation by stochastic signals v(k), minimum variance controllers are
suited which satisfy identifiability condition 2

max[μ - mb, ν + d - ma] ≥ p (26.3.17)

for ma = mb = md = m, and d ≥ 0 for MV3 and d ≥ 1 for MV4, as p = 0, (26.2.6-7).
If the controllers are time-variant, each controller satisfies the identifiability condi-
tion. Therefore in both cases the parameter estimates with the methods RELS,
RML converge to the true values

Â(z^-1) → A(z^-1); B̂(z^-1) → B(z^-1); D̂(z^-1) → D(z^-1) . (26.3.18)
If, however, the control algorithms are tuned exactly, p = md common zeros
appear in (25.1.15) and the identifiability is lost. If the parameter
estimation were to start at that instant, convergence to the true values would not be
possible. If, however, the parameter estimation starts earlier, the transient
phase 0 < k < ∞ is also included. Then a convergence to the true values can be
observed, as this is the common solution for the transient phase 0 < k < ∞ and the
finally tuned phase k → ∞. Therefore the identifiability condition for these control-
lers can be enlarged [26.16]

(26.3.19)

(The identifiability condition (26.3.17) is only valid for fixed controllers.) Figure
26.2 shows an example of the convergence behaviour. The closed-loop para-
meter estimates do not converge with the precisely tuned and fixed controller.
However, for the parameter-adaptive controller a good convergence is obtained.
Now a reference value w(k) is assumed which changes either deterministically or
stochastically and which is persistently exciting at the same time. As w(k) is an
external signal acting between y(k) and u(k), identifiability condition 2 need not be
satisfied. Hence the parameter estimates converge to the true values

Â(z^-1) → A(z^-1), B̂(z^-1) → B(z^-1)

with all controllers and, for example, RLS.

26.3.3 Choice of the Elements for Parameter-adaptive Controllers


The multitude of possibilities makes it advisable to summarize the most important
viewpoints for the choice of the methods mentioned at the beginning of section 26.3

Figure 26.2a, b. Parameter estimation values for a first-order process in closed loop
(a1 = -0.8; b1 = 0.2; d1 = 0.5) with a an exactly tuned control algorithm MV3 (r = 0.05);
b parameter-adaptive controller RML/MV3 (λ0 = 0.99, λ(0) = 0.95, r = 0.05).

especially with regard to


- stability and convergence
- acting signals
- behaviour in steady-state
- computational expense.
a) Choice of the parameter estimation for the dynamic behaviour and of the control
algorithm
Considering stability, convergence and the acting signals, all combinations are
possible if some special assumptions for the design are taken into account, see
Table 26.3.

Table 26.3 Possible combinations for parameter-adaptive controllers.

                        Control algorithm
Parameter     Stochastic           Deterministic
estimation    MV4     MV3     DB      3PC (PID)   SC      LCPA

RLS           x^a     x^a     x       x           x       x
RELS          x       x       x       x           x       x
RML           x       x       x^b     x^b         x^b     x^b

a D(z^-1) = 1 for controller design
b D(z^-1) not used for controller design

b) Choice of the estimation method for the static behaviour

A special discussion is required for the d.c. value estimation of the signals U(k) and
Y(k); compare the methods described in section 24.2.1.
The simplest way is differencing the signals and using the increments ΔU(k)
and ΔY(k) for the parameter estimation. However, this can only be recommended
for low-frequency noise. For high-frequency noise the implicit d.c. value estimation
through the d.c. parameter C (see (24.2.26) and (24.2.27)) is better. But then Ĉ and
Θ̂ are coupled. If the d.c. values change frequently but the dynamic parameters do
not, the explicit d.c. value estimation should be used; then, however, the noise
should not contain high-frequency components. Hence the selection of the d.c.
value estimation depends on the individual case and must also be viewed with
respect to the compensation of offsets and the supervision of the adaptive loop.
c) Compensation of offsets
The behaviour in the steady state for convergent parameter estimates and piece-
wise constant reference variables w(k) is usually determined by the integral action
of the controller. For controllers with integral behaviour (DB, PID) in general no
problems exist. For the controllers MV, SC, LCPA, however, special measures
have to be taken to avoid offsets. For this, the individual ways to insert integral
action described for the various controllers can be used. Another way is to add a
pole z1 = 1 to the identified model by multiplying the model with β/(z - 1) and
designing a controller for the extended model. However, this does not necessarily
result in the best control performance. A further possibility is the replacement of
y(k) by [y(k) - w(k)] and of u(k) by Δu(k) = u(k) - u(k - 1), both in the parameter
estimation and in the control algorithm [26.9]. But this results in unnecessary
changes of the parameter estimates after reference value changes. If the implicit or
explicit d.c. value estimation is used, one can set Y∞ = W(k). Then the d.c. value
U∞ is calculated such that offsets vanish [26.15]. The steady-state behaviour then

depends on the quality of the parameter estimates. Therefore a d.c. value correction
according to (26.5.16) is recommended. In general an integral term in the controller
is to be preferred.
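Adding the pole z1 = 1 to the identified model amounts to a polynomial multiplication of the model denominator with (1 - z^-1); a small sketch with assumed, invented coefficients:

```python
def poly_mul(p, q):
    """Coefficient convolution for polynomials in z^-1."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

# assumed identified model denominator A(z^-1) (coefficients invented)
A_hat = [1.0, -1.5, 0.7]
# extended model with the added integral pole z1 = 1:
# denominator becomes A(z^-1) (1 - z^-1)
A_ext = poly_mul(A_hat, [1.0, -1.0])
print([round(c, 6) for c in A_ext])   # [1.0, -2.5, 2.2, -0.7]
```

The controller is then designed for this extended model, which guarantees integral action but, as noted above, not necessarily the best control performance.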
d) Choice of the controller design method
The choice of the method for the controller parameter calculation is mainly
determined by the computational effort, see section 26.2.

26.4 Stochastic Parameter-adaptive Controllers

Some parameter-adaptive controllers will now be described in more detail. The
resulting signals are mostly shown for the same process, so that immediate
comparisons are possible.

26.4.1 Adaptive minimum variance controller RLS/MV4


One of the first proposals goes back to [26.7-26.9]. For the process model
D(z^-1) = 1 is assumed, so that

A(z^-1)y(z) - B(z^-1)z^-d u(z) = v(z) (26.4.1)

The corresponding minimum variance controller MV4 is, due to (14.2.13),

GR(z) = u(z)/y(z) = Q(z^-1)/P(z^-1) = - L(z^-1)/[z B(z^-1) F(z^-1)] (26.4.2)

The parameters of L(z^-1) and F(z^-1) follow with (14.2.5) to (14.2.7) by comparison
of the coefficients from

1 = A(z^-1)F(z^-1) + z^-(d+1) L(z^-1) (26.4.3)

as shown in example 14.2.1, using however d1, ..., dm = 0.
With controller (26.4.2) and md = 0, according to (26.2.13) all process parameters
are identifiable for d ≥ 1. This is also shown by the following consideration.
According to (25.1.5), (25.1.4) and p = md (section 26.2.2),

max[ma + μ; mb + ν + d] - md (26.4.4)

parameters can be estimated; for md = 0 these are

ma + mb + d - 1 (26.4.5)

parameters, using ν = ma - 1 and μ = mb + d - 1. If the method RLS with the
recursive algorithm (24.2.14) is used for parameter estimation of model (26.4.1),
all ma + mb parameters can be estimated only for d ≥ 1. For d = 0, however, one
parameter has to be assumed as known.
To reduce the calculation effort for the controller parameters according to
(26.4.3), the parameter estimation was performed for a modified model in

[26.8, 26.9], such that an implicit combination results; compare section 26.3.1.
(26.4.1) is multiplied with F(z^-1)

F A y - B F z^-d u = F v (26.4.6)

and inserted into the controller design equation (26.4.3)

y(z) = L(z^-1)z^-(d+1)y(z) + B(z^-1)F(z^-1)z^-d u(z) + F(z^-1)v(z) . (26.4.7)

With (26.4.2) it follows that

y(z) = - Q(z^-1)z^-(d+1)y(z) + P(z^-1)z^-(d+1)u(z) + F(z^-1)v(z) . (26.4.8)

Thus the modified model directly contains the controller parameters qi and pi.
They can be estimated by applying the RLS method to (26.4.8) and can be used for
the calculation of u(k). The corresponding difference equation becomes, with
ν = ma - 1 and μ = mb + d - 1,

y(k) = - q0 y(k - d - 1) - ... - qν y(k - d - ma)
     + p0 u(k - d - 1) + ... + pμ u(k - d - 1 - μ)
     + ε(k - d - 1) (26.4.9)

Here ε(z) = F(z^-1)v(z) is an MA process of order d. (26.4.9) contains ma para-
meters qi and mb + d parameters pi, i.e. altogether ma + mb + d parameters have to
be estimated. Since, however, according to (26.4.5) only ma + mb + d - 1 para-
meters can be estimated, in principle one parameter has to be assumed as known
when using the modified model. If for example p0 = b1 is known, it follows
from (26.4.9) that

y(k) = ψ^T(k - d)Θ + p0 u(k - d - 1) + ε(k - d - 1) (26.4.10)

with

Θ^T = [q0 ... qν p1 ... pμ] (26.4.11)

ψ^T(k - d) = [- y(k - d - 1) ... - y(k - d - ma)
             u(k - d - 2) ... u(k - d - 1 - μ)] . (26.4.12)


The RLS method is applied to (26.4.10)

Θ̂(k + 1) = Θ̂(k) + γ(k)[y(k + 1) - p0 u(k - d) - ψ^T(k - d + 1)Θ̂(k)] . (26.4.13)

The parameter estimates are inserted into the control algorithm (26.4.2) and the new
manipulated variable is calculated

u(k + 1) = (1/p0)[q̂0 y(k + 1) + ... + q̂ν y(k - ma + 2)
          - p̂1 u(k) - ... - p̂μ u(k - mb - d + 2)] . (26.4.14)

Some special properties of this adaptive controller are:

a) The application of RLS to the modified model (26.4.8) only yields unbiased
estimates if d = 0. For d ≥ 1 the error signal ε(k) is a moving-average (MA) process
of order d.
b) If the controller designed for D(z^-1) = 1 is applied for

D(z^-1) = 1 + d1 z^-1 + ... + dmd z^-md

biased parameter estimates result. Despite this, the controller (26.4.14) converges
towards the optimal minimum variance controller, provided the parameter esti-
mates converge to steady-state values, compare [26.8]. The output then becomes
an MA process of order d, see (14.2.19).
c) An exact knowledge of the parameter p0 = b1 is not necessary, [26.8], [26.11].
d) The RLS/MV4 controller can only be applied to processes with zeros inside the
unit circle, compare Table 26.1.
e) Offsets of the control error can be avoided by inserting a pole z = 1 in the
controller. For this, the pole is added to the process model for parameter estimation,
and in (26.4.10) to (26.4.13)

y(k) is replaced by ew(k) = y(k) - w(k)

u(k) is replaced by Δu(k) = u(k) - u(k - 1) .

The control algorithm then becomes

Δu(k + 1) = (1/p0)[q̂0 ew(k + 1) + ... + q̂ν ew(k - ma + 2)
           - p̂1 Δu(k) - ... - p̂μ Δu(k - mb - d + 2)] (26.4.15)

and the manipulated variable is calculated from

u(k + 1) = u(k) + Δu(k + 1) (26.4.16)

compare [26.9].
The convergence properties of this adaptive control algorithm are given in
[26.10]. An advantage of this implicit controller is that the convergence analysis
for recursive parameter estimation can be directly applied [26.17]. In [26.12] it was
shown for a first-order process that the adaptive controller is globally stable for
0 < b1/p0 < 2. In [26.17] it was shown for d = 0 that H(z) = 1/D(z^-1) - 1/2 must be
positive real, compare (24.3.7). This includes |D(e^{iωT0}) - 1| < 1, which means that
the errors caused by the assumption D(z^-1) = 1 should have a frequency response
inside the unit circle, thus causing no amplification at any frequency. A summary
of applications of this adaptive controller is given in [26.11, 26.12, 26.22, 26.38].

26.4.2 Adaptive generalized minimum variance controllers
(RLS/MV3, RELS/MV3)

One drawback of RLS/MV4 is the relatively large control actions, Figure 14.6.
These can be avoided by using the minimum variance controller with weighting of
the manipulated variables [26.13]. If a PI-element is connected in series with MV3,
the identifiability conditions are always met, so that no parameters have to be
assumed as known. A further advantage is the applicability to processes with zeros
outside the unit circle. Application of RLS presupposes D(z^-1) = 1 and con-
sequently model (26.4.1). (14.1.15) with (14.3.2) and (26.2.5) have to be applied
for the controller calculation.
The implicit combination RLS/MV3 was proposed in [26.13] and analysed
in [26.18], [26.19]. Explicit combinations RML/MV3 and RELS/MV3 were
investigated in [26.15], [26.16], including the estimation of D(z^-1).

Example 26.4.1:
Equations for programming an explicit stochastic parameter-adaptive controller
1. Estimation of d.c. values through differencing
ΔU(k) = U(k) - U(k - 1)
ΔY(k) = Y(k) - Y(k - 1)
u(k) = ΔU(k); y(k) = ΔY(k)
2. Parameter estimation (RLS, RELS, RML), compare Table 24.1
a) e(k) = y(k) - ψ^T(k)Θ̂(k - 1)
b) Θ̂(k) = Θ̂(k - 1) + γ(k - 1)e(k)
c) Inserting y(k) and u(k - d) in ψ^T(k + 1) and φ(k + 1)
d) γ(k) = P(k)φ(k + 1)/[λ + φ^T(k + 1)P(k)φ(k + 1)]
e) P(k + 1) = [I - γ(k)φ^T(k + 1)]P(k) (1/λ)

3. Controller parameter calculation (MV3)

The parameters of the generalized minimum variance controller result from (26.2.4) and
(26.2.5)
4. Calculation of the new manipulated variable
a) New control variable: Y(k + 1)
b) New control difference: ew(k + 1) = W(k + 1) - Y(k + 1)
c) New d.c. value of Y: Y∞(k + 1) = W(k + 1)
d) New d.c. value of U:
U∞(k + 1) = [Â(1)/B̂(1)] Y∞(k + 1)
          = [(1 + Σ_{i=1}^{m} âi) / (Σ_{i=1}^{m} b̂i)] Y∞(k + 1)
e) New manipulated variable: U(k + 1) = U∞ + p̂1 u(k) + p̂2 u(k - 1) + ...
                                        + p̂_{m+d-1} u(k - m - d + 2)
                                        - q̂0 ew(k + 1) - q̂1 ew(k) - ...
                                        - q̂_{m-1} ew(k - m + 2).
5. Cycle
a) Replace Y(k + 1) by Y(k) and u(k + 1) by u(k).
b) Go to 1.
Note that the old parameters Θ̂(k) are used to calculate the process input u(k + 1) in order
to save computing time between steps 4a) and 4e).
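Steps 1 and 4c/4d of Example 26.4.1 can be sketched numerically. The signal samples and model coefficients below are assumed, already-converged estimates rather than the output of a full RELS/RML run:

```python
def difference(U, Y):
    """Step 1: remove d.c. values by differencing the measured signals."""
    u = [U[k] - U[k - 1] for k in range(1, len(U))]
    y = [Y[k] - Y[k - 1] for k in range(1, len(Y))]
    return u, y

def u_steady_state(a_hat, b_hat, y_inf):
    """Step 4d: U_inf = [A(1)/B(1)] Y_inf = [(1 + sum a_i)/(sum b_i)] Y_inf."""
    return (1.0 + sum(a_hat)) / sum(b_hat) * y_inf

# assumed measured signal histories (invented values)
U = [0.0, 1.0, 1.0, 1.0]
Y = [0.0, 0.2, 0.5, 0.7]
u, y = difference(U, Y)
print([round(v, 3) for v in u], [round(v, 3) for v in y])

a_hat, b_hat = [-1.5, 0.7], [1.0, 0.5]   # assumed estimates for m = 2
W = 1.0                                   # new reference, so Y_inf = W (step 4c)
print(round(u_steady_state(a_hat, b_hat, W), 4))   # 0.1333
```

The steady-state value U∞ obtained this way zeroes the offset provided the estimates Â(1), B̂(1) are accurate.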

Figure 26.3 shows the stochastic noise signal for the process with m = 2 (de-
scribed in section 26.6), the signals for a fixed, exactly tuned controller MV3 and
for the adaptive controller RML/MV3. The adaptive controller was switched on at
k = 0 with initial parameters Θ̂(0) = 0 (without pre-identification). After initially large

Figure 26.3a-c. Control variable y(k) and manipulated variable u(k) for test process VII
(m = 2). a Noisy output y(k) = n(k), no control; b fixed controller MV3 (r = 0.01);
c adaptive controller RML/MV3 (r = 0.01).

controller actions and about 10 sampling steps, almost the same control perform-
ance is obtained as with the exactly tuned, fixed MV3. Further examples are given
in section 26.6.

26.5 Deterministic Parameter-adaptive controllers

Deterministic controllers are often designed for step changes of the reference value,
which is e.g. the natural excitation for servo-control systems. They are also used as
fixed controllers with constant reference values because they are easy to under-
stand and simple to verify experimentally. In the following, different deterministic
adaptive controllers are briefly described. For some adaptive controllers, control
signals are shown for test process VI (Appendix, Vol. I)

GP(s) = 1 / [(1 + 10s)(1 + 7.5s)(1 + 5s)]

Before the adaptive controller is switched on, in some cases a pre-identification
with a PRBS test signal is performed in open loop; also compare section 26.7. The
adaptive behaviour is also demonstrated for step changes of the dominant time
constant from T1 = 10 s to T1 = 2.5 s. If not indicated otherwise, the results were
obtained with the microcomputer DMR-16 (which has been especially developed for
adaptive control) and an analog simulated process, [26.44].

26.5.1 Adaptive deadbeat controller (RLS/DB)


An especially simple adaptive controller results from combining the recursive method
of least squares RLS with the deadbeat controller DB(ν), or better, with DB(ν + 1),
[23.31, 23.36, 26.16]. It requires a rather short computation time but may only be
used for asymptotically stable processes which are well damped.
An explicit combination results by combining the parameter estimation equa-
tions from example 26.4.1 with the controller design equations according to
(7.1.15), (7.2.9), (7.2.10) or (26.2.1).
An implicit combination can also be realized [26.37]. To do so, the controller is
written in the form

G_DBν(z) = u(z)/ew(z) = A(z^-1) / [(1/q0) - B(z^-1)z^-d] (26.5.1)

with

(26.5.2)

This yields

(26.5.3)

Inserting (26.5.3) in the process equation

A(z^-1)y(z) - B(z^-1)z^-d u(z) = v(z) (26.5.4)

yields

A(z^-1)y(z) + B*(z^-1)(1 - z^-1)u(z) - b̄1 u(z) = v(z) (26.5.5)

This equation contains 2m controller parameters which can be estimated directly
by applying RLS, provided the following equation is used

y(k) = ζ^T(k)Γ(k - 1) + e(k) (26.5.6)

Figure 26.4a, b. Adaptive deadbeat controller RLS/DB. Process VI. (Design parameters:
m = 3; T0 = 10 s; λ = 0.95). a DB(ν); b DB(ν + 1) (q0 = q0min).

with

ζ^T(k) = [- y(k - 1) ... - y(k - m)  u(k - d - 1) - Δu(k - d - 1) ...
         - Δu(k - m - d + 1)] (26.5.7)

Γ^T(k - 1) = [â1 ... âm b̄1 ... b̄m] (26.5.8)

Δu(k) = u(k) - u(k - 1)
Δu(k - 1) = u(k - 1) - u(k - 2), etc. (26.5.9)

The control algorithm then uses the parameter estimates Γ̂

u(k) = u(k - d - 1) + (1/b̄1)[- b̄2 Δu(k - d - 1) - ...
     - b̄m Δu(k - d - m + 1) + ew(k)
     + â1 ew(k - 1) + ... + âm ew(k - m)] . (26.5.10)

This means that no calculation time can be saved by applying the implicit
algorithm instead of the explicit one. Therefore the explicit version, which is also
more transparent, is recommended.
In [26.45] it was shown that this adaptive deadbeat controller is globally
asymptotically stable; also compare section 26.3.2.
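Assuming the pre-identification has converged to the exact parameters, the explicit DB(ν) design and its deadbeat property can be verified numerically for m = 2, d = 0. The process coefficients are invented for illustration, and the control law is written in the q0A(z^-1)/(1 - q0B(z^-1)z^-d) form of the chapter-7 design:

```python
# assumed estimates (here equal to the true, well-damped process)
a = [-1.2, 0.35]           # a1, a2: poles at 0.7 and 0.5
b = [0.4, 0.2]             # b1, b2
q0 = 1.0 / sum(b)          # deadbeat gain q0 = 1 / (b1 + b2)

w = 1.0                    # reference step at k = 0
y_hist = [0.0, 0.0]        # y(k-1), y(k-2)
u_hist = [0.0, 0.0]        # u(k-1), u(k-2)
e_hist = [0.0, 0.0]        # ew(k-1), ew(k-2)
out = []
for k in range(8):
    yk = -a[0]*y_hist[0] - a[1]*y_hist[1] + b[0]*u_hist[0] + b[1]*u_hist[1]
    ew = w - yk
    # DB(v) law: u(k) = q0 [ew(k) + a1 ew(k-1) + a2 ew(k-2)]
    #                 + q0 [b1 u(k-1) + b2 u(k-2)]
    uk = q0 * (ew + a[0]*e_hist[0] + a[1]*e_hist[1]) \
       + q0 * (b[0]*u_hist[0] + b[1]*u_hist[1])
    out.append(yk)
    y_hist = [yk, y_hist[0]]
    u_hist = [uk, u_hist[0]]
    e_hist = [ew, e_hist[0]]
print([round(v, 4) for v in out])   # settles exactly at w = 1.0 from k = 2 on
```

With exact parameters the output reaches the reference after m + d = 2 samples and stays there; with estimated parameters the same code realizes the adaptive RLS/DB loop once a, b are replaced by the current estimates.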
Figure 26.4 shows control signals. After a pre-identification over 12 samples in
open loop, the exact deadbeat behaviour results already for the first step change of
the reference variable. The deadbeat controller with increased order DB(ν + 1) shows
considerably smaller control amplitudes than DB(ν). After changing the
dominant time constant T1, Figure 26.5, the fixed DB-controller shows a remark-
ably oscillating behaviour, while the adaptive DB-controller is better tuned after

Figure 26.5. Adaptive deadbeat controller RLS/DB for a step change of the time constant
T1. Process VI. (m = 3; T0 = 8 s; λ = 0.93). —— adaptive controller; - - - fixed controller

each step change in the reference value and is almost exactly adapted after the third
step.
Since for the deadbeat controller design the process model has to be known rather
precisely, the parameter-adaptive deadbeat controller is more suited for applica-
tion than the fixed deadbeat controller. The parameter sensitivity which is ob-
served for some processes can be reduced considerably through the automatically
tracked process model. Several applications have shown that the parameter-
adaptive deadbeat controller with increased order gives good results for well-
damped processes and not too small a sampling time. The significance of the
adaptive deadbeat controller, however, is mainly as a simple standard type for
simulations, comparisons, first experimental trials and theoretical investigations.

26.5.2 Adaptive state controller (RLS/SC)


As already shown in chapters 8 and 11, state controllers can yield the best control
performance for processes which are difficult to control, compared with all
other controllers. In general, however, the advantages only show up when
exact process models are known and when the selection of the many free para-
meters is simplified. These requirements can be met by adding a parameter
adaptation. The adaptive state controller then obtains the state variables from an
adaptive observer.
For example, the following methods can be used to build up a deterministic state
controller
1. Recursive parameter estimation:
a) dynamic behaviour:
RLS or RELS or modified versions (DSFC, DSFI, DUDC)
b) static behaviour
- differencing
- implicit or explicit d.c.-value estimation
2. State controller:
a) state controller calculation
- recursive matrix-Riccati-equation
- pole assignment
b) state variable calculation:
- state observer
- state reconstruction
3. Compensation of offsets:
- integrator in the observer
- integrator parallel to the state controller
- integrator before or after the process model
- setting of Y∞ = W(k) and implicit or explicit d.c. value estimation as in 1b)
Hence, many possibilities result which cannot be described here at full length.
Figure 26.6 shows the general scheme of an adaptive state controller.

Figure 26.6. General scheme of a parameter-adaptive state controller including parameter-
adaptive state calculation.

In the following, a parameter-adaptive state controller for proportionally acting
processes is briefly explained, which has been realized on a microcomputer and
has been practically applied several times [26.44]. This controller is composed of:
1. a) DSFI
b) Implicit d.c. value estimation
2. a) Recursive Matrix Riccati equation
b) State reconstruction
3. Setting of Y∞ = W(k).
The performance criterion for the controller design is, as (8.1.2), of the form

I = x^T(N) Q x(N) + Σ_{k=0}^{N-1} [x^T(k) Q x(k) + r Kp^2 u^2(k)]    (26.5.11)

By including the process gain Kp, the influence of the weighting of the manipulated
variable becomes independent of the current Kp. The state controller parameters
k^T are calculated by the recursive solution of the matrix Riccati equation (8.1.31).
Mostly 10 recursion steps are sufficient to obtain a good approximation of the
steady-state solution P. Then, according to (8.1.34),

k^T = [r Kp^2 + b^T P b]^(-1) b^T P A    (26.5.12)

can be determined. The required computational effort is reasonable, even for
microcomputers.
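The recursive Riccati calculation leading to (26.5.12) can be sketched as follows. This is a minimal illustration under assumed names (riccati_gain, with r_eff playing the role of r·Kp^2; the example matrices are hypothetical), not the realized microcomputer implementation:

```python
import numpy as np

def riccati_gain(A, b, Q, r_eff, steps=10):
    """Iterate the discrete matrix Riccati recursion towards the
    steady-state solution P and return the state-feedback gain
    k^T = [r_eff + b^T P b]^(-1) b^T P A as in (26.5.12)."""
    P = Q.copy()
    for _ in range(steps):
        denom = float(r_eff + b.T @ P @ b)            # scalar r_eff + b^T P b
        P = A.T @ P @ A - (A.T @ P @ b @ b.T @ P @ A) / denom + Q
    k = (b.T @ P @ A) / float(r_eff + b.T @ P @ b)    # row vector k^T
    return k, P

# Illustrative second-order example (a double integrator, hypothetical data)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
b = np.array([[0.0], [1.0]])
k, P = riccati_gain(A, b, Q=np.eye(2), r_eff=0.2, steps=50)
```

The resulting closed loop A − b·k^T is asymptotically stable; with the roughly 10 recursion steps recommended in the text the gain is usually already close to its steady-state value.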

Q is chosen as a diagonal matrix. Feasible values are

1 ≤ qii ≤ 3;  0.05 ≤ r ≤ 0.5.
A state reconstruction according to section 8.9 was used to avoid the design effort
of an observer. If (8.9.10) is applied, then for processes with deadtime d

u(k) = -k^T x(k + d).    (26.5.13)

A d.c.-value estimation according to section 26.2 yields the constant K0. Then the
manipulated variable in the steady state becomes, see (24.2.34),

(26.5.14)

K0 is determined implicitly, see (24.2.27).


By setting

Y∞ = W(k)    (26.5.15)

in (26.1.14), offsets of the control variable are zeroed in the steady state. Then no
special integrator is required. This, however, is only valid for an exact estimate K̂0.
To avoid this drawback, a d.c.-value correction is introduced:

K̂0 := K̂0 + κ [K0 - K̂0]
K0 = Y(k) - ψ*^T(k) Θ̂*(k)    (26.5.16)
κ = 0.1.
For the calculation of the deviations x(k) of the state variables X(k) from their
steady-state values X∞ according to the state reconstruction equations, the
following (mean) quantities have to be used, cf. section 8.9,

y_m(k) = Y_m(k) - W;  u_m(k) = U_m(k) - U∞    (26.5.17)

and the manipulated variable for the process is

U(k) = U∞ + u(k).    (26.5.18)
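One plausible reading of the correction (26.5.16) and the composition (26.5.18), with hypothetical function and variable names (kappa corresponds to κ = 0.1):

```python
def dc_value_correction(K0_hat, y_k, psi_k, theta_k, kappa=0.1):
    """D.c.-value correction: blend the running estimate K0_hat with the
    instantaneous value K0 = Y(k) - psi*^T(k) theta*(k), step size kappa."""
    K0_inst = y_k - sum(p * t for p, t in zip(psi_k, theta_k))
    return K0_hat + kappa * (K0_inst - K0_hat)

def absolute_input(U_inf, u_k):
    """Manipulated variable for the process, (26.5.18): U(k) = U_inf + u(k)."""
    return U_inf + u_k
```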
If, in the case of small sampling times, the calculations for the single tasks cannot
be performed within one sampling interval, they can be distributed over several
sampling intervals by applying appropriate truncation criteria [26.44]. Up to now
only a few publications exist on parameter-adaptive state controllers
[26.46, 26.47].
A considerable advantage of an adaptive state controller is that the many free
design parameters, e.g. Q and r, can be changed on-line during control action.
Therefore the resulting control behaviour can be judged immediately; also compare
section 26.7.
The control signals for the described adaptive state controller DSFI/SC, Figure 26.7,
show a relatively large initial manipulated variable change for the small
manipulated-variable weighting r = 0.2 and after a short preidentification period.
The further signals show the very well damped behaviour which is typical
Figure 26.7 Adaptive state controller DSFI/SC. Process VI. Design parameters: m = 3; T0 = 5 s; λ = 0.95; Q = I; r = 0.2.

for state controllers. During the first settling towards the new reference value the
state controller is approximately adapted; after the second step it is completely
adapted. After changing the time constant T1 and after two changes of the
reference variable, the adaptive state controller is well adapted, see Figure 26.8.
The fixed state controller shows a somewhat oscillating behaviour which, however,
is significantly less distinct than for the DB-controller and also in comparison to
the PID-controller (Figure 26.12).

26.5.3 Adaptive PID-controllers


Because of the structure difference between PID-controllers and general linear
controllers of order m > 2, a direct calculation of the controller parameters is more
difficult than for the structure-optimal controllers. Furthermore, problems with
identifiability in closed loop may arise, see section 26.2.3. On the other hand, P-,
PD-, PI- and PID-controllers are the most important for general applicability.
In the following, different possibilities for the design of parameter-adaptive
PID-controllers are considered.
Figure 26.8 Adaptive state controller DSFI/SC for a step change of the time constant T1. Process VI. (m = 3; T0 = 5 s; λ = 0.95; Q = I; r = 0.2.) —— adaptive controller; ----- fixed controller.

Approximation of other controllers


As already indicated in chapter 5, first a structure-optimized controller which can
be simply calculated may be determined and then approximated by a PID-controller.
This was applied via the deadbeat controller in [26.6, 2.22, 26.16]. The
selection of the sampling time is, however, restricted by the large changes of the
manipulated variable for too small sampling times. In addition, the applicability
is limited to low-pass processes with small deadtime.

Design as cancellation controller


As shown in chapter 5, PID-controllers can be simply designed as cancellation
controllers by assuming a model order m = 2 and further special assumptions.
According to this principle, adaptive PID-controllers are proposed in [5.22, 26.48,
26.50]. However, they can only be used for special processes.

Design with tuning rules


The tuning rules described in chapter 5 may also be used, at least for once-applied
selftuning PID-controllers. The following cases can be distinguished:
a) Tuning experiment: the tuning procedure is performed with the process, as for
manual tuning, but automatically.
b) Tuning simulation: the tuning procedure is performed by simulation, that
means not with the real process.

The first selftuning controllers which appeared on the market mainly use a), for
example measurement of an impulse response [26.51] or changing characteristic
values of the settling behaviour of the closed loop [26.52].
In [26.53] the tuning rule of Ziegler-Nichols was applied after an oscillation
experiment. To determine the critical gain Kcrit and the critical period Tp, first an
on-off controller instead of a P-controller is inserted into the loop. Kcrit and Tp are
determined through parameter estimation of the basic oscillation, and the
controller parameters are calculated using prescribed amplitude and phase margins.
Hence, the methods a) with an active tuning experiment are simple and easy to
understand. However, they require a considerable disturbance of the process by the
experiment and may only be applied for low process noise. They are furthermore
limited to one-time or occasionally repeated tuning.
If the process model is determined by parameter estimation, then test signals of
small amplitude can be used, and considerable noise and also closed loop operation
may be allowed. Furthermore, the different tuning rules may be applied to simulated
"experiments" in the computer, i.e. method b). In [26.54, 26.55] the critical gain
Kcrit and the period Tp are calculated for a P-controller by using a stability
criterion. Then modified Ziegler-Nichols rules are used as shown in Table 5.8. In
order to improve the resulting signals, the gain can be modified iteratively until the
simulated control behaviour shows a given overshoot. This method can be applied
rather generally, also for continuously operating adaptive controllers.
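The tuning-simulation approach of method b) can be sketched as follows: the critical gain and period are computed from the estimated discrete model instead of a plant experiment, and classical Ziegler-Nichols rules (not the modified rules of Table 5.8) are then applied. All function names are assumptions; the bisection presumes the loop is stable at K = 0, unstable at K_max, and that the critical root is oscillatory, and it expects len(a) == len(b) + 1:

```python
import math
import numpy as np

def critical_point(b, a, T0, K_max=100.0, tol=1e-6):
    """Find the critical P-gain K_crit for the model B(z^-1)/A(z^-1) by
    bisection on the spectral radius of A(z) + K*B(z), and the
    oscillation period T_p from the angle of the critical root."""
    a = np.asarray(a, float)                 # [1, a1, ..., am]
    b = np.asarray(b, float)                 # [b1, ..., bm]
    def roots_at(K):
        return np.roots(a + K * np.r_[0.0, b])
    lo, hi = 0.0, K_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.max(np.abs(roots_at(mid))) < 1.0:
            lo = mid
        else:
            hi = mid
    K_crit = 0.5 * (lo + hi)
    z_c = max(roots_at(K_crit), key=abs)     # root on the unit circle
    T_p = 2.0 * math.pi * T0 / abs(np.angle(z_c))
    return K_crit, T_p

def zn_pid(K_crit, T_p):
    """Classical Ziegler-Nichols settings (K, Ti, Td) for a PID-controller."""
    return 0.6 * K_crit, 0.5 * T_p, 0.125 * T_p

# Test process VII from section 26.6 (T0 = 2 s)
K_crit, T_p = critical_point(b=[0.1387, 0.0889],
                             a=[1.0, -1.0360, 0.2636], T0=2.0)
```

For this model the stability boundary is reached when the magnitude of the closed-loop polynomial's constant term 0.2636 + 0.0889 K reaches 1, i.e. near K ≈ 8.3.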

Figure 26.9 Selftuning PID-controller with RLS and design via tuning rules and tuning simulation [26.54]. Process VI. Design parameters: m = 3; T0 = 4 s; λ = 0.99; κ = 10%, 1%, 0%; κ = overshoot.

Figure 26.9 shows the signals for this tuning method. The parameter estimation
is performed for two step responses. A fixed PID-controller is designed, and the
overshoot is reduced for the next closed-loop responses.

Design with numerical parameter optimization


For the application of a numerical parameter optimization, a performance criterion
such as

S = Σ_{k=0}^{M} [e^2(k) + r Kp^2 Δu^2(k)]    (26.5.19)

is minimized by a numerical optimization method, see section 5.4. In this way the
PID-parameters can be determined for arbitrary linear processes. A drawback
seems at first to be the relatively large calculation time.
In [26.44, 26.56] an optimization procedure was developed on the basis of the
Hooke-Jeeves search method which distributes the optimization time over several
sampling intervals, if required.
This stepwise parameter optimization is realized in two program parts:
1. Real-time program:
control variable sampling → calculation of the manipulated variable → output of
the manipulated variable → parameter estimation
2. Design program:
starting values q(n) → parameter optimization → intermediate values q(n + 1) →
interruption by 1. → continuation of 2., etc.
As starting values q(0), the parameters q* obtained by approximation of the
deadbeat controller are used, see section 7.4. The control performance is calculated
by applying Parseval's equation (5.4.5) in the z-domain, provided the control loop
of the model is asymptotically stable. Otherwise, a simulation of the required
signals in the time domain is performed. This rather generally applicable method
converges after only a few samples.
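A simplified sketch of such a design by numerical optimization: the criterion (26.5.19) is evaluated by time-domain simulation of the model loop for a setpoint step (with the weighting r here taken to include Kp^2), and a pattern search in the spirit of Hooke-Jeeves — here a plain coordinate search, not the implementation of [26.44, 26.56] — minimizes it over the PID parameters q0, q1, q2:

```python
import numpy as np

def simulate_cost(q, b, a, r=0.08, N=80, w=1.0):
    """Criterion (26.5.19): S = sum e^2(k) + r*du^2(k) for a setpoint
    step w, with the PID algorithm in velocity form
    u(k) = u(k-1) + q0*e(k) + q1*e(k-1) + q2*e(k-2)
    acting on the model y(k) = -a1*y(k-1) - ... + b1*u(k-1) + ..."""
    q0, q1, q2 = q
    na, nb = len(a) - 1, len(b)
    y = np.zeros(N); u = np.zeros(N); e = np.zeros(N)
    S = 0.0
    for k in range(N):
        y[k] = (-sum(a[i] * y[k - i] for i in range(1, na + 1) if k - i >= 0)
                + sum(b[i - 1] * u[k - i] for i in range(1, nb + 1) if k - i >= 0))
        e[k] = w - y[k]
        du = (q0 * e[k] + (q1 * e[k - 1] if k >= 1 else 0.0)
              + (q2 * e[k - 2] if k >= 2 else 0.0))
        u[k] = (u[k - 1] if k >= 1 else 0.0) + du
        S += e[k] ** 2 + r * du ** 2
    return S

def coordinate_search(q_start, cost, step=0.2, shrink=0.5, iters=25):
    """Plain coordinate (pattern) search: probe each parameter in both
    directions, keep improvements, halve the step when no probe helps."""
    q, best = list(q_start), cost(q_start)
    for _ in range(iters):
        improved = False
        for i in range(len(q)):
            for s in (step, -step):
                trial = list(q); trial[i] += s
                c = cost(trial)
                if c < best:
                    q, best, improved = trial, c, True
        if not improved:
            step *= shrink
    return q, best

# Example on test process VII (coefficients from section 26.6)
b7, a7 = [0.1387, 0.0889], [1.0, -1.036, 0.2636]
q_opt, S_opt = coordinate_search([1.0, 0.0, 0.0],
                                 lambda q: simulate_cost(q, b7, a7))
```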

Figure 26.10 Adaptive PID-controller RLS/PID with parameter optimization. Process VI. Design parameters: m = 3; T0 = 8 s; λ = 1; r = 0.08.

A first example, Figure 26.10, shows complete adaptation with well-damped
behaviour already for the first reference variable step change, after a short
preidentification (k = 15). In this case the duration of the preidentification was not
fixed, but dependent on the performance of the identified model, see section 26.7,
with automatic switching to the closed loop. Figure 26.11 shows the same run with
superimposed stochastic noise n(k) at the process output (normal amplitude probability

Figure 26.11 Adaptive PID-controller RLS/PID with parameter optimization. Process VI with stochastic noise. Design parameters: m = 3; T0 = 8 s; λ = 1; r = 0.08.

Figure 26.12 Adaptive PID-controller RLS/PID for a step change in the time constant T1. Process VI. (m = 3; T0 = 8 s; λ = 0.9; r = 0.05.) —— adaptive controller; ----- fixed controller.
Figure 26.13a-e. Control variable y(k) and manipulated variable u(k) for a stochastic noise signal. y(k) is drawn stepwise. a disturbed output signal y(k) = n(k) without control; b fixed controller MV3 (r = 0.01); c adaptive controller RML/MV3 (r = 0.01); d fixed controller DB(ν); e adaptive controller RML/DB(ν).

Figure 26.14a-e. Control variable y(k) and manipulated variable u(k) for: a change of the reference value w(k); b fixed controller MV3 (r = 0.025; D(z^-1) = 1); c adaptive controller RLS/MV3 (r = 0.025); d fixed controller DB(ν); e adaptive controller RLS/DB(ν).
Figure 26.15a, b. Parameter estimates ai(k) and bi(k) and gain factor K(k) for the adaptive controllers with deterministic disturbances according to Figure 26.14. a RML/MV3 (r = 0.025); b RLS/DB(ν).

density; standard deviation 0.4 V). The preidentification now lasts longer (k = 39).
The PID-controller adapts quickly and with good accuracy also under stochastic
noise. With the changed time constant T1, the adaptive PID-controller is completely
adapted after two steps of the reference value, Figure 26.12. The fixed controller
shows a distinctly less damped behaviour.
Chapter 31 will show the computational effort and the storage capacity required
for the various adaptive controllers.

26.6 Simulation examples

In order to further discuss the basic behaviour, signals are shown for different
adaptive controllers and processes. The adaptive controllers were programmed on
an HP21MX process computer and operated together with processes which were
simulated on analog computers.

26.6.1 Stochastic and deterministic adaptive controllers


The simulated linear process has the transfer function

GP(s) = y(s)/u(s) = 1/[(1 + 3.75s)(1 + 2.5s)]    (26.6.1)

With the sampling time T0 = 2 s, the z-transfer function becomes

GP(z) = y(z)/u(z) = B(z^-1)/A(z^-1)
      = (0.1387 z^-1 + 0.0889 z^-2)/(1 - 1.0360 z^-1 + 0.2636 z^-2)    (26.6.2)

(test process VII, see Appendix, Vol. I). This process was disturbed at the input by
a reproducible coloured noise signal which was produced by a noise signal
generator. This corresponds to a noise signal filter

GPv(z) = n(z)/v(z) = D(z^-1)/A(z^-1)
       = (1 + 0.0500 z^-1 + 0.8000 z^-2)/(1 - 1.0360 z^-1 + 0.2636 z^-2)    (26.6.3)

driven by white noise.
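The coefficients of (26.6.2) can be checked by zero-order-hold discretization of (26.6.1). The following derivation from the sampled step response is a sketch under the assumption T0 = 2 s; the function name is hypothetical:

```python
import math

def zoh_second_order(T1, T2, T0):
    """Zero-order-hold discretization of G(s) = 1/((1 + T1*s)(1 + T2*s))
    (unit gain, T1 != T2), derived from the sampled step response
    y(k*T0) = 1 - c1*z1**k + c2*z2**k with c1 = T1/(T1 - T2),
    c2 = T2/(T1 - T2), z1 = exp(-T0/T1), z2 = exp(-T0/T2)."""
    z1, z2 = math.exp(-T0 / T1), math.exp(-T0 / T2)
    c1, c2 = T1 / (T1 - T2), T2 / (T1 - T2)
    a1, a2 = -(z1 + z2), z1 * z2
    b1 = 1.0 - (z1 + z2) + c1 * z2 - c2 * z1
    b2 = z1 * z2 - c1 * z2 + c2 * z1
    return (b1, b2), (a1, a2)

# Test process VII: G(s) = 1/((1 + 3.75s)(1 + 2.5s)), sampled with T0 = 2 s
(b1, b2), (a1, a2) = zoh_second_order(3.75, 2.5, 2.0)
```

The result reproduces the coefficients of (26.6.2) to four decimals, and the static gain (b1 + b2)/(1 + a1 + a2) equals 1 exactly.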


Table 26.4 shows the parameter-adaptive controllers which were examined in
[26.15, 26.16]. Figure 26.13 represents some signals for stochastic disturbances. The
results for all twelve parameter-adaptive controllers can be summarized as follows:
- After 20 sampling steps the control performance of the adaptive stochastic
controllers is almost the same as for the exactly tuned controllers;
- The control performance of the adaptive deterministic controllers even improves
compared with the exactly adapted controllers. (This shows that through adaptation
these deterministic controllers, which are not designed for stochastic noise
signals, adapt even better to stochastic signals);
- The convergence of the parameters di of the noise signal filter is considerably
slower than the convergence of the process parameters ai and bi. The fastest
adaptation is therefore achieved with RLS;
- The control performance could not be improved by external test signals.

Table 26.4 Examined parameter-adaptive controllers.

Parameter estimation methods: RLS, RML.
Control algorithms:
- stochastic: MV4, MV3
- deterministic: DB(ν), DB(ν+1), 3PC-3 (PID), LCPA

a D(z^-1) = 1 for controller design
b D(z^-1) not used for controller design
c To compensate offsets: implicit d.c. estimation with Y∞ = W(k)
d Design through approximation of the DB-controller

Figure 26.14 shows the signals and Figure 26.15 the parameter estimates for step
changes of the reference value. These and other simulations show:
- After closing the control loop, small changes in the manipulated variable are
generated which lead to an approximate model. This model then provides
a reasonable control performance after the first step of the reference value;
- After the second step of the reference variable the parameter estimates differ
only little from the exact values;
- After the second step of the reference value the performance of the adaptive
controllers is about the same as for exactly tuned fixed controllers.

26.6.2 Various processes


Figure 26.16 represents a collection of the initial control behaviour for various
stable and unstable, proportionally and integral-acting processes with minimum
and nonminimum phase behaviour, as indicated in Table 26.5. For all proportionally
acting stable processes RLS/DB provides a rapid and exact adaptation. As
expected, a critically stable behaviour resulted for the integral-acting process, for
which DB-controllers are not to be used. The adaptive controller RLS/MV3
showed a good control behaviour for integral-acting processes and a reasonable
one for unstable processes. Good and exact adaptation could be reached with
RLS/LCPA for all processes, provided appropriate poles were prescribed.

26.7 Start of parameter-adaptive controllers and choice of free design parameters

The considered parameter-adaptive controllers can be used as selftuning controllers
or as adaptive controllers, see section 26.11. This has to be taken into account both
for the selection of combinations and for the design parameters.
Table 26.5 Various simulated processes.

1. G1(s) = 1/[(1 + 2.5s)(1 + 3.75s)]; T0 = 2.0 s;
G1(z) = (0.1387 z^-1 + 0.0889 z^-2)/(1 - 1.036 z^-1 + 0.2636 z^-2);
low-pass behaviour

2. G2(s) = 1/[(1 + 2.5s)(1 + 3.75s)(1 + 5s)]; T0 = 2.0 s;
G2(z) = (0.0186 z^-1 + 0.0486 z^-2 + 0.0078 z^-3)/(1 - 1.7063 z^-1 + 0.9580 z^-2 - 0.1767 z^-3);
low-pass behaviour; one zero outside the unit circle of the z-plane

3. G3(s) = (1 + 2s)/[(1 + 3s)(25s^2 + 5s + 1)]; T0 = 3.0 s;
G3(z) = (0.1098 z^-1 + 0.0792 z^-2 - 0.0229 z^-3)/(1 - 1.654 z^-1 + 1.022 z^-2 - 0.2019 z^-3);
damped oscillatory behaviour

4. G4(s) = G3(s)·e^(-9s); T0 = 3.0 s; G4(z) = G3(z)·z^-3; process with time delay

5. G5(s) = (1 - 4s)/(…); T0 = 2.0 s;
G5(z) = (-0.102 z^-1 + 0.173 z^-2)/(1 - 1.425 z^-1 + 0.496 z^-2);
all-pass behaviour

6. G6(s) = 1/[s(1 + 5s)]; T0 = 0.3 s;
G6(z) = (0.0088 z^-1 + 0.0086 z^-2)/(1 - 1.9418 z^-1 + 0.9418 z^-2);
integral behaviour

7. G7(s) = (s + 0.03)/[(1 + 2s)(s^2 - 0.35s + 0.15)]; T0 = 0.5 s;
G7(z) = (0.1964 z^-1 + 0.0001 z^-2 - 0.1892 z^-3)/(1 - 2.930 z^-1 + 2.866 z^-2 - 0.9277 z^-3);
oscillating unstable behaviour (model of a helicopter)

8. G8(s) = 1/[(1 + 5s)(1 - 2s)]; T0 = 0.5 s;
G8(z) = (-0.0132 z^-1 - 0.0139 z^-2)/(1 - 2.1889 z^-1 + 1.1618 z^-2);
unstable behaviour
Figure 26.16 Parameter-adaptive control of various processes according to Table 26.5.

26.7.1 Preidentification
As the parameter-adaptive controllers are based on process parameter estimation,
it has to be ensured that the parameter estimation method and its free parameters
are chosen properly. For an unknown process it is therefore recommended first to
perform a process identification. For stable processes this can be done in open
loop, for unstable processes in closed loop with a fixed controller. For this, a test
signal (e.g. a PRBS) is introduced, and after a sufficiently long identification time
a model verification is performed. This shows how well the model agrees with the
process, see e.g. [3.13, 3.18]. This includes the determination of an appropriate
sampling time T0, model order m and deadtime d. Since process identification is an
iterative procedure, this also holds for the initial phase of an adaptive controller.

26.7.2 Choice of design parameters


To start the parameter-adaptive control algorithms the following parameters must
be specified initially:
To sample time
m process model order
d process model deadtime
λ forgetting factor
r process input weighting factor.
When applying parameter-optimized controllers, it is seen that the control is not
very sensitive to the choice of the sample time T0. For proportionally acting
processes good control can be obtained with PID-controllers within the range

(26.7.1)

where T95 is the 95% settling time of the process step response. With respect to
control, the sampling time should be as small as possible (exceptions: DB- and
MV-controllers), while for parameter estimation the sampling time should be
neither too small (numerical problems) nor too large (poor model performance).
Hence, suitable compromises have to be found; also see section 26.8.
Simulations and practical experience have shown that adaptive control is
insensitive to the model order m0 ≥ 3 if it is chosen within the range

(26.7.2)

Adaptive control algorithms, however, can be sensitive to a wrong choice of the
deadtime d, especially in combinations with minimum variance controllers. This
holds in particular for a too large assumed deadtime, since for a too small assumed
deadtime and sufficiently large order m the correct deadtime is automatically taken
into account, as the parameters b1, b2, ... become approximately zero in the
parameter estimation. Continuous deadtime determination is recommended for
varying deadtimes [26.57].

For the selection of the forgetting factor λ the following holds; also see
section 24.5:
- rapid process parameter changes: λ small
- large model order m: λ large
- large noise signal amplitudes: λ large.
For adaptive control the following values have proved to be efficient:
- constant or very slowly time-varying processes: λ = 0.99
- slowly time-varying processes with stochastic disturbances: 0.95 ≤ λ ≤ 0.99
- stepwise reference variable changes: 0.85 ≤ λ ≤ 0.90.
The smaller values are valid for lower model orders (m = 1, 2) and the larger values
for higher orders. For the selftuning of controllers that are afterwards kept fixed,
0.95 ≤ λ ≤ 1 can be taken.
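The effect of λ can be made concrete with a minimal recursive least squares update and a noise-free demo (function and variable names are assumptions; the recursion itself is the standard exponentially weighted least squares):

```python
import numpy as np

def rls_step(theta, P, psi, y, lam=0.99):
    """One recursive least-squares update with forgetting factor lam:
    small lam discounts old data quickly (time-varying processes),
    lam near 1 averages over a long horizon (noisy, constant processes)."""
    e = y - psi @ theta                       # a priori equation error
    gamma = P @ psi / (lam + psi @ P @ psi)   # correction vector
    theta = theta + gamma * e
    P = (P - np.outer(gamma, psi) @ P) / lam
    return theta, P, e

# Demo: estimate y(k) = 0.8*y(k-1) + 0.5*u(k-1) from noise-free data,
# i.e. theta = [a1, b1] = [-0.8, 0.5] with psi^T(k) = [-y(k-1), u(k-1)]
rng = np.random.default_rng(1)
u = rng.choice([-1.0, 1.0], size=200)         # PRBS-like excitation
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1]
theta, P = np.zeros(2), 1000.0 * np.eye(2)
for k in range(1, 200):
    psi = np.array([-y[k - 1], u[k - 1]])
    theta, P, _ = rls_step(theta, P, psi, y[k])
```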
The choice of the controller design parameters, e.g. r, Q etc., is described in the
corresponding chapters which treat controller design.
A considerable advantage of parameter-adaptive controllers is that all free
design parameters can be changed in on-line operation. Therefore the result of
changing the design parameters can be observed immediately. This holds especially
for the final adjustment of the parameters λ, r and Q, which mostly cannot be
specified exactly in advance.

26.7.3 Starting methods


After selection of the design parameters and the starting values of the parameter
estimation, the parameter-adaptive control can simply be switched on. However,
an unpredictable time behaviour with increasing and decreasing amplitudes may
then result, depending on the disturbances. Figure 26.17a represents an example;
also compare Figures 26.3, 26.14 and 26.15. To avoid possibly large amplitudes,
upper and lower limits Umax and Umin can be set. Figure 26.13 shows an example.
(Compare with Figure 26.3.)
A better start can be obtained by proper excitation of the process input using
a well-defined test signal over a definite time period (e.g. 10 to 20 samples), such as
a PRBS. Stable processes can operate in open loop, critically stable or unstable
processes in closed loop (e.g. with a fixed P-controller). Then a good starting model
for the design of the first controller is obtained after switching to the closed
adaptive loop. This corresponds to a short preidentification. A relatively short
identification time may be sufficient, as the model improves in closed loop
operation. Figure 26.17b shows an example of this start with preidentification.
This method was used for starting most of the parameter-adaptive controllers
whose examples were given in section 26.5. After the first adaptation it is
recommended to test the obtained control performance. This can be done through
external disturbance signals, e.g. command signal changes. If necessary, design
parameters may be improved on-line.
The switching from preidentification to closed adaptive loop can be made
dependent on the performance of the process model and other conditions and
Figure 26.17a, b. Starting methods for parameter-adaptive control. RLS/DB (m = 3; d = 0; T0 = 2 s; λ = 0.95). a Without preidentification and closed adaptive loop from the beginning; b with preidentification and excitation with a PRBS in open loop, and closed adaptive loop for k ≥ 13.

thereby automated. Suitable criteria are [26.44]:
- at least 2m + d + 1 samples
- convergence of the parameter estimation method:
tr P^-1(k + 2(m + 1)) - tr P^-1(k) < χ1
- comparison of estimated and measured d.c. values U∞ or Y∞
- comparison of the model and process outputs:
(1/N) Σ_{k=1..N} [y(k) - ŷ(k)] < χ2.
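The listed criteria can be combined into one automatic switching test; the thresholds chi1, chi2, the squared-error form of the output comparison, and all names here are illustrative assumptions:

```python
import numpy as np

def ready_to_close_loop(y, y_model, tr_Pinv, m, d, chi1=1.0, chi2=0.05):
    """Heuristic start test combining the criteria:
    1. at least 2m + d + 1 samples are available,
    2. tr P^-1 has almost stopped growing over the last 2(m + 1) steps,
    3. the mean squared deviation between model and process output is small."""
    k = len(y)
    if k < 2 * m + d + 1:                      # criterion 1
        return False
    w = 2 * (m + 1)
    if len(tr_Pinv) > w and tr_Pinv[-1] - tr_Pinv[-1 - w] >= chi1:
        return False                           # criterion 2 violated
    mse = float(np.mean((np.asarray(y) - np.asarray(y_model)) ** 2))
    return mse < chi2                          # criterion 3
```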

These and perhaps other verifications and the appropriate actions allow an
automatic start with a sufficiently good starting model.

26.8 Supervision and coordination of adaptive controllers


26.8.1 Supervision of adaptive controllers
Many simulations and applications together with stability and convergence
investigations have shown that parameter-adaptive control functions as expected,
provided all a priori assumptions are met. When controlling real processes with the
basic scheme according to Figure 26.1, however, deviations from the normal
behaviour may be observed. In most cases the reason is wrong assumptions about:
- process model structure
- parameter estimation
- controller design
- design parameters.
Therefore the basic functions of the parameter-adaptive control loop should be
continuously supervised and suitably controlled. In addition to the adaptation
level, a third feedback level is introduced in which supervision is realized [26.36,
26.58], see Figure 26.18. The tasks of this supervision are:
- detection of faulty behaviour
- diagnosis of the causes
- initiation of remedial measures.
The following briefly describes how the supervision of the basic functions can be
realized.
a) Parameter estimation
Possible violations of the assumptions for a convergent parameter estimation are:
- no persistent excitation
- nonstationary noise signals, such as step changes or outliers
- too fast changes of the static or dynamic process behaviour
- wrong model structure parameters (m, d), wrong sample time T0, wrong
forgetting factor λ.
These effects may result in poor control performance or even cause unstable
behaviour. Various supervisory measures were examined in [26.59, 26.58].
For on-line supervision in real time the following quantities are especially suited:
- the a priori equation error e(k): its mean and variance
- the parameter estimates: their variances
- the information matrix H(k) = P^-1(k): tr H(k)
- the control variable: its mean and variance
- the eigenvalues of the parameter estimator [26.66], [26.67].
If these quantities surpass certain limit values, the following measures can be taken:
- stopping the dynamic model parameter estimation for M steps
- stopping the signal d.c.-value estimation for M steps
- restarting the parameter estimation or resetting the covariance matrix
- automatic search of model order and deadtime.
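One of these measures, resetting the covariance matrix when the equation error indicates an outlier or a sudden process change, might be sketched as follows (thresholds and names are illustrative assumptions, not the scheme of [26.58, 26.59]):

```python
import numpy as np

def supervise_estimator(e_window, P, sigma_limit=3.0, reset_value=100.0):
    """If the newest a priori equation error leaves a +-sigma_limit band
    of the recent error standard deviation (outlier or sudden process
    change), restart adaptation by resetting the covariance matrix."""
    e = np.asarray(e_window, dtype=float)
    sigma = e[:-1].std() + 1e-12          # scale from the recent past
    if abs(e[-1]) > sigma_limit * sigma:
        return reset_value * np.eye(P.shape[0]), True
    return P, False
```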

Figure 26.18 Parameter-adaptive control loop with supervision and coordination level.

Low-pass filtering of both the signals u(k) and y(k) and the parameter estimates
Θ̂(k) proved to be successful, resulting in an improved control performance and
supervision.
b) Controller design
Problems might occur through, e.g.:
- cancellation of unstable poles (DB) or zeros (MV4), compare Table 26.1;
- sample time T0 too large or too small;
- wrong design parameters.
Therefore, for the corresponding controllers, e.g. the poles and zeros can be
calculated before the controller synthesis.
c) Closed control loop
A possible malfunction is indicated, e.g., by:
- the control difference ew(k) = w(k) - y(k) increasing monotonically
- the actuator position staying at a limit
- oscillating unstable behaviour.
Should this behaviour be observed despite all supervision measures of a) and b),
a fixed and robust back-up controller, which can be designed and stored already
during the preidentification, can be used instead of the adaptive controller.
Examples in [26.58, 26.59] show that a considerable improvement of the overall
behaviour and robustness can be reached by applying these supervision measures;
also compare [26.44, 26.60]. The additional computational effort amounts to about
15%.

26.8.2 Coordination of adaptive control


The considerations up to now concerning start-up and supervision of parameter-adaptive
controllers have shown that adaptive control can be considerably improved
by attaching additional elements and back-up systems on a higher level. This
makes the overall structure of adaptive control more complex and sometimes even
variable. The switching on and off of further elements and the choice of some free
parameters, which make the adaptive control structure dependent on the operating
conditions such as start-up, normal operation, or rapid changes of the operating
point, can be considered as tasks of a coordination and can be realized in a third
feedback level, see Figure 26.18.
This coordination allows the basic structure of a parameter-adaptive control loop
according to Figure 26.1, as well as the combination of the individual elements, to
be varied according to further viewpoints.
Examples of tasks of this coordination level are:
- start-up of the adaptive control (preidentification, choice of the design
parameters, switching to closed loop)
- on-line search of the model structure (e.g. structure parameters m and d)
- design of a fixed back-up controller
- asynchronous combination of parameter estimation and controller design (see
section 26.3.1)
• different sampling times
• conditional controller design
- filtering of the parameter estimates
- choice of different control algorithms.
For the conditional controller design the new controller parameters are only used
if simulating the control loop leads to an improved control behaviour.
By correlating the determined process models and the adaptive controllers with
measurable quantities of the process, and storing them correspondingly, a learning
control system results.

26.9 Parameter-adaptive feedforward control

The same principle as for parameter-adaptive feedback control can be applied to
(certainty equivalent) feedforward control, Figure 26.19.
It is assumed that the process is described by

y(z) = GP(z) u(z) + Gv(z) v(z)    (26.9.1)

where v(z) is a measurable disturbance signal and

GP(z) = y(z)/u(z) = B(z^-1)/A(z^-1) z^-dp
      = (b1 z^-1 + ... + bmp z^-mp)/(1 + a1 z^-1 + ... + amp z^-mp) z^-dp    (26.9.2)

with the disturbance transfer function Gv(z) = y(z)/v(z) defined correspondingly.    (26.9.3)

Figure 26.19 Block diagram for parameter-adaptive feedforward control.

The feedforward controller has the transfer function


u(z) S(Z-l) SO+SlZ-l+···+svz-v
Gs(z)=-=--= . (26.9.4)
v(z) R(Z-l) 1 + rlz-1 + ... + r/lz-/l
(26.9.1) is given the structure
A(z-l)y(z) = B(Z-l)Z-d PU (Z) + D(Z-l)Z-dvV (Z) (26.9.5)
with
A(z^-1) = 1 + α1 z^-1 + ... + αn z^-n
B(z^-1) = β1 z^-1 + ... + βn z^-n          (26.9.6)
D(z^-1) = δ1 z^-1 + ... + δn z^-n
A(z^-1) is the common denominator of Gp(z) and Gv(z), and B(z^-1) and D(z^-1) are the corresponding extended numerators. As all signals of (26.9.5) are measurable, the parameters αi, βi and δi can be estimated by recursive least squares (RLS) using

Θ^T = [α1 ... αn  β1 ... βn  δ1 ... δn]   (26.9.7)

ψ^T(k) = [−y(k − 1) ... −y(k − n)  u(k − dp − 1) ... u(k − dp − n)  v(k − dv − 1) ... v(k − dv − n)].   (26.9.8)
Also in this case an identifiability condition must be satisfied as u(k) and v(k) are
correlated. Here the second way of deriving the identifiability condition 2 in section
25.1 can be used, see (25.1.32) and (25.1.33). The feedforward control algorithm is
u(k − dp − 1) = −r1 u(k − dp − 2) − ... − rμ u(k − dp − μ − 1)
               + s0 v(k − dp − 1) + ... + sν v(k − dp − ν − 1)   (26.9.9)

and the elements of (26.9.8) then become linearly independent only if

max[μ; ν + (dp − dv)] ≤ n   for dp − dv ≥ 0
max[μ + (dv − dp); ν] ≤ n   for dp − dv ≤ 0.   (26.9.10)
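As an illustration, the estimation of (26.9.7) can be sketched in a few lines of recursive least squares. This is only a sketch: it assumes n = 1, zero dead times and hypothetical parameter values, and excites the model (26.9.5) with independent signals u(k) and v(k) so that the identifiability condition is not an issue.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
u = rng.normal(size=N)                       # manipulated variable: independent test signal
v = rng.normal(size=N)                       # measurable disturbance signal
theta_true = np.array([-0.8, 0.5, 0.3])      # [alpha1, beta1, delta1], hypothetical values

# simulate the model (26.9.5) with n = 1 and d_p = d_v = 0:
# y(k) = -alpha1*y(k-1) + beta1*u(k-1) + delta1*v(k-1) + e(k)
y = np.zeros(N)
for k in range(1, N):
    psi = np.array([-y[k-1], u[k-1], v[k-1]])
    y[k] = psi @ theta_true + 0.01 * rng.normal()

# recursive least squares estimate of Theta, cf. (26.9.7)/(26.9.8)
theta = np.zeros(3)
P = 1e4 * np.eye(3)                          # covariance of the estimate
for k in range(1, N):
    psi = np.array([-y[k-1], u[k-1], v[k-1]])
    gain = P @ psi / (1.0 + psi @ P @ psi)   # RLS correction gain
    theta = theta + gain * (y[k] - psi @ theta)
    P = P - np.outer(gain, psi) @ P

print(theta)     # converges towards theta_true
```

With persistently exciting, mutually independent u(k) and v(k), the estimate converges quickly; with correlated signals, condition (26.9.10) decides whether it can converge at all.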

Figure 26.20a, b. Parameter-adaptive feedforward control for a low-pass process with order m = 3 and a disturbance filter with m = 2. a no feedforward control, v(k): steps; b parameter-adaptive feedforward control with RLS-DB.

Based on the model (26.9.5) and the parameter estimates Θ̂, feedforward control algorithms can be designed using pole-zero cancellation, minimum variance, deadbeat or parameter optimization. The resulting adaptive algorithms are described in [26.29]. They show rapid adaptation. An example (with somewhat large manipulated variable changes) is shown in Figure 26.20. The combination of RLS with MV4 was proposed in [26.9].

Since feedforward controller parameters often cannot be accurately tuned, methods for their selftuning may lead to improvement.

26.10 Parameter-adaptive multivariable controllers

Methods for selftuning and adaptive control systems are of special interest for applications in which the process shows difficult behaviour and the control systems are more complex. If a complete (centralized) process model is used for parameter estimation, the principle of parameter-adaptive control can be extended to multivariable processes.
This section briefly describes the development of parameter-adaptive (centralized) control loops for multivariable systems. Extensions of the RLS/MV4 controller to multivariable systems have been made in [26.30, 26.32] using matrix polynomial models.
In [26.33, 26.34, 26.61] a variety of combinations is given. There the following process models are used, with p inputs, r outputs and stochastic noise signals:
- p-canonical model

  A_ii(z^-1) y_i(z) = Σ(j=1..p) B_ij(z^-1) z^-d_ij u_j(z) + Σ(j=1..r) D_ij(z^-1) v_j(z),   i = 1, ..., r   (26.10.1)
- matrix polynomial model (section 18.1.5)

A(z^-1) y(z) = B(z^-1) z^-d u(z) + D(z^-1) v(z)   (26.10.2)

- innovation state model (21.4.1) and (21.4.2)

x(k + 1) = A x(k) + B u(k) + G v(k)
y(k) = C x(k) + v(k)          (26.10.3)
If the state model is represented in the observable canonical form, state reconstruction according to section 8.9 can be used so that an observer design can be omitted.
The parameters of these models can be estimated via

y_i(k) = ψ^T(k) Θ_i + v_i(k),   i = 1, 2, ..., r   (26.10.4)

e.g. by the RLS or RELS method. Combining these parameter estimation methods with the multivariable controllers of chapters 20 and 21 results in the following parameter-adaptive multivariable controllers:

Matrix polynomial controllers


- matrix polynomial deadbeat controllers (MDB1, MDB2)
- matrix polynomial minimum variance controllers (MMV1, MMV2)

Figure 26.21a, b. Twovariable process. a basic structure; b P-canonical form.

Multivariable state controllers
- the multivariable pole assignment state controller (MSPA)
- the multivariable matrix Riccati state controller (MSR)
- the multivariable decoupling state controller (MSD)
- multivariable minimum variance state controllers (MSMV1, MSMV2, MSMV3)
Simulations of the parameter-adaptive control of a twovariable process shown in
Figure 26.21 are presented in Figure 26.22 for step changes in the reference values
and in Figure 26.23 for stochastic disturbances. In both cases the parameter-
adaptive controllers are tuned after about 20 to 30 samples and the expected
control behaviour is achieved [26.43].

26.11 Application of parameter-adaptive control algorithms


The described methods for parameter-adaptive control can be applied e.g. as
follows:
1. Selftuning controllers:
The controller parameter adaptation is performed once, in order to tune the subsequently fixed controller automatically to the process. Thus, a precise controller parameter


Figure 26.22 Parameter-adaptive control of the twovariable test process of Figure 26.21 for step changes of w1(k) and w2(k). RLS/MDB. T0 = 4 s, m1 = 3, m2 = 5. Restricted input signals −2 ≤ ui ≤ 2 for 0 ≤ k ≤ 20.

tuning is achieved in a short time with only small test signals. This also holds for processes subject to stronger disturbances. The method is useful not only for digital controllers but also for analogue controllers, provided small sampling times are chosen [26.16, 26.44]. The application as a selftuning controller is especially

Figure 26.23 Parameter-adaptive control of the twovariable test process of Figure 26.21 for stochastic noise signals ni(k). RELS/MMV1. T0 = 4 s; m1 = 3; m2 = 5; R = 0.005 I, S = I. Restricted input signals −5 ≤ ui ≤ 5 for 0 ≤ k ≤ 20.

recommended for parameter-optimized controllers (PID) or state controllers which have sufficiently robust properties for later process changes.
Selftuning can also be applied at various operating points. A controller with feedforward-adapted parameters can thus be realized after storing the corresponding controller parameters. Selftuning controllers can also be applied for tuning decentralized controllers for complex processes. Here the tuning function is switched successively from controller to controller: from the lower to the higher levels, from faster to slower process dynamics, from weak to strong couplings. Examples have shown that sequential selftuning leads to rapid convergence, also for multivariable control systems [26.62]. Here the couplings are taken into account automatically and need not be formulated in process models.
2. Adaptive controllers:
The controller parameter adaptation is performed continuously in order to automatically control a (slowly) timevariant process with the best control performance possible. Note, however, that stability and convergence conditions have to be met. As a rule, adaptive controllers should only be applied if neither a fixed controller nor a feedforward adaptive controller is sufficient. This holds e.g. for timevariant and nonlinear processes. Good results can be reached with parameter-adaptive controllers which were designed for linear processes, provided the time variance or the nonlinearity is not too strong. As shown in this chapter, a variety of control algorithms can be implemented.

Applications to real processes will be given in chapter 31.


G Digital Control with Process
Computers and Microcomputers

As well as choosing appropriate control algorithms and tuning them to the process, several other aspects must be considered in order to obtain good control with digital computers. Amplitude quantization in the A/D converter, in the central processor unit and in the D/A converter is discussed in chapter 27 with regard to the resulting effects and the required word length. Another requirement is suitable filtering of disturbances which cannot be reduced by the control algorithms. Therefore the filtering of high and medium frequency signals with analog and digital filters is considered in chapter 28. The combination of control algorithms and various actuators is treated in chapter 29. The linearization of constant speed actuators and the problem of windup are both considered there. Chapter 30 deals with the computer aided design of control algorithms based on process identification. Case studies are demonstrated for the digital control of a superheater and a heat exchanger. Then the application to digital control of a rotary drier is shown.
In the last chapter adaptive control with microcomputers and process computers is described, and applications are shown for the digital adaptive control of an air heater, an air conditioning unit and a pH-value process.
27 The Influence of Amplitude Quantization
on Digital Control

In the previous chapters the treatment of digital control systems was based on
sampled, i.e. discrete-time signals only. Any amplitude quantization was assumed to
be so fine that the amplitudes could be considered as quasi continuous. This
assumption is justified for large signal changes in current process computers.
However, for small signal changes and for digital controllers with small word
lengths the resulting effects have to be considered and compared with the continu-
ous case.

27.1 Reasons for Quantization Effects

Quantization of amplitudes arises at several places in process computers and digital controllers. If the sensors provide digital signals, this itself implies a quantization. A second quantization occurs in the central processor unit (CPU), and a third in the digital or analog output device. With analog sensors and transducers, quantization also arises at three places. This case is treated below.
1. Analog Input
In the analog input device, standard voltages (0 ... 10 V) or currents (0 ... 20 mA or 4 ... 20 mA) are typically sampled by the analog/digital converter (ADC) and digitized. The signal value is generally represented in fixed point form. The quantization unit Δ (= resolution) is then given by the word length WL (with no sign bit). The decimal numerical range NR of a word with length WL [bits] is for one polarity

NR = 2^WL − 1.   (27.1.1)

Hence, the quantization unit becomes

Δ = 1/NR = 1/(2^WL − 1) ≈ 2^−WL.   (27.1.2)

The quantization units of an ADC with WL = 7 ... 15 bits are shown in Table 27.1. Two examples are given as illustration:
If the largest numerical value is the voltage 10 V = 10000 mV, for word lengths of 7 ... 15 bits the smallest representable unit is Δ = 78.7 ... 0.305 mV. If a temperature of 100 °C is considered, this gives Δ = 0.787 ... 0.003 °C.
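The numbers above follow directly from (27.1.1) and (27.1.2); a quick check for the word lengths and full-scale values used in the text:

```python
# Quantization unit of an ADC with word length WL (no sign bit), cf. (27.1.1)-(27.1.2)
for WL in (7, 8, 10, 12, 15):
    NR = 2**WL - 1                  # decimal numerical range, (27.1.1)
    delta = 1.0 / NR                # quantization unit (resolution), (27.1.2)
    print(WL, NR,
          round(delta * 10000, 3),  # smallest step in mV for 10 V = 10000 mV full scale
          round(delta * 100, 4))    # smallest step in deg C for 100 deg C full scale
```

For WL = 7 this reproduces 78.7 mV and 0.787 °C, and for WL = 15 it reproduces 0.305 mV and 0.003 °C, as quoted above.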
228 27 The Influence of Amplitude Quantization on Digital Control

Table 27.1 Quantization units as functions of the word length with no sign bit

word length [bits]          7         8         10        12        15
numerical range NR          127       255       1023      4095      32767
quantization unit Δ         0.00787   0.00392   0.00098   0.00024   0.00003
quantization unit Δ [%]     0.787     0.392     0.098     0.024     0.003

Analog/digital converters count the integer multiple L of quantization units Δ which corresponds to the analog voltage Y

YQ = L Δ,   L = 0, 1, 2, ..., NR.   (27.1.3)

The remainder δY < Δ is either rounded up or down to the next integer L, or simply truncated. Both cases give

Y = YQ + δY   (27.1.4)

with quantization error δY
- for rounding

  −Δ/2 ≤ δY ≤ Δ/2   (27.1.5)

- for truncation

  0 ≤ δY < Δ.   (27.1.6)
Amplitude quantization therefore introduces a first nonlinearity, see Figure 27.1.
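Both quantizer types and their error bounds (27.1.5), (27.1.6) can be modelled in a few lines (a sketch; the unit Δ and the test values are arbitrary):

```python
import math

DELTA = 0.01   # quantization unit

def quantize(y, mode="round"):
    """Return Y_Q = L*DELTA, cf. (27.1.3), by rounding or truncation."""
    if mode == "round":
        L = math.floor(y / DELTA + 0.5)   # round to nearest level
    else:
        L = math.floor(y / DELTA)         # truncate towards the lower level
    return L * DELTA

for y in (0.1234, 0.5678, 0.9999):
    for mode in ("round", "trunc"):
        err = y - quantize(y, mode)       # delta_Y in (27.1.4)
        if mode == "round":
            assert abs(err) <= DELTA / 2  # bound (27.1.5)
        else:
            assert 0 <= err < DELTA       # bound (27.1.6)
```

The assertions confirm that the rounding error never exceeds half a quantization unit, while the truncation error is biased and can approach a full unit.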
2. The Central Processor Unit
The ADC-discretized signal (yQ)AD is transferred to the CPU and is there mainly represented using a larger word length, the word length WLN of the number representation. For a linear control algorithm the following computations are made:
- calculation of the control deviation

  eQ(k) = (yQ(k))AD − wQ(k)   (27.1.7)

- calculation of the manipulated variable

  uQ(k) = −p1Q uQ(k − 1) − ... − pμQ uQ(k − μ) + q0Q eQ(k) + ... + qνQ eQ(k − ν).   (27.1.8)
In the CPU new quantization errors are added because of the limited word length WLCP:
- reference variable wQ(k)
- manipulated variables uQ(k − i), i = 1, 2, ...
- parameters piQ, qiQ, i = 0, 1, 2, ...
- products piQ uQ(k − i), qiQ eQ(k − i)
- sum of the products uQ(k).
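The recursion (27.1.8), without any quantization yet, can be evaluated directly over a stored error sequence; a sketch (the PI-type coefficients in the demonstration are arbitrary):

```python
def control_sequence(p, q, e):
    """Evaluate the control algorithm (27.1.8) over an error sequence e(0), ..., e(N-1):
    u(k) = -p1*u(k-1) - ... - p_mu*u(k-mu) + q0*e(k) + ... + q_nu*e(k-nu),
    with all values before k = 0 taken as zero (no quantization here)."""
    u = []
    for k in range(len(e)):
        uk = -sum(p[i - 1] * u[k - i] for i in range(1, len(p) + 1) if k - i >= 0)
        uk += sum(q[i] * e[k - i] for i in range(len(q)) if k - i >= 0)
        u.append(uk)
    return u

# PI-type example: u(k) = u(k-1) + 2.0*e(k) - 1.5*e(k-1)
print(control_sequence([-1.0], [2.0, -1.5], [1.0, 1.0, 1.0, 1.0]))   # [2.0, 2.5, 3.0, 3.5]
```

In a fixed-point CPU, every product and the final sum in this loop would additionally be rounded, which is exactly where the errors listed above enter.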
Figure 27.1 Simplified block diagram of the nonlinearities in a digital closed loop, caused by amplitude quantization (rounding in the A/D converter, product rounding in the control algorithm of the central processing unit, rounding and hold in the D/A converter, process).

For fixed point representation the quantization units shown for the ADC hold if
8 bits or 16 bits word length CPUs are used. The quantization can be decreased by
the use of double length working.
In the case of floating point representation for process computers with 16 bit word length, two or more words are often used. The floating point number

L = M · 2^E   (27.1.9)

for example can be represented using two words of 16 bits each, with 7 bits for the exponent E (point after the lowest digit) and 23 bits for the mantissa M (point after the largest digit), within a numerical range of

−0.8388608 · 2^−128 ≤ L ≤ 0.8388607 · 2^127
−0.24651902 · 10^−39 ≤ L ≤ 0.14272476 · 10^39.

Therefore the smallest representable unit is

Δ ≈ 10^−38
which is negligible for digital control. If fixed point representation with a small word length is used, quantization errors can arise from the products, which introduce nonlinearities, Figure 27.1.
The quantization of the reference variable and the controller parameters cause
only deviations from their nominal values and do not introduce nonlinearities into
the loop.

3. Analog Output
With analog controlled actuators the quantized manipulated variable uQ(k) is
transferred to a digital/analog converter (DAC) followed by a holding element. The
quantization interval of the DAC depends on its word length. As shown in Figure
27.1, the DAC introduces a further nonlinear multiple point characteristic.
The above discussions have shown the various places where nonlinearities arise. Since even the effect of a single nonlinearity on the dynamic and static behaviour of a control loop is hard to treat theoretically, the combined effects of all the quantizations are difficult to analyze. The known publications assume either statistically uniformly distributed quantization errors or the maximal possible quantization error (worst case) [27.1] to [27.6], [2.17]. The method of describing functions [5.14], [2.19] and the direct method of Ljapunov [5.17] can be used to analyze stability. Simulation is probably the only feasible way to investigate several quantizations and nontrivial processes and control algorithms, see for example [27.3].
The following sections consider the effects of quantization using simple examples. The principal causes can be summarized as follows:
- quantization of variables (rounding of the controlled or manipulated variables in the ADC, DAC or CPU)
- quantization of coefficients (rounding of the controller parameters)
- quantization of intermediate results in the control algorithm (rounding of products, Eq. (27.1.8)).

In digital control systems the effect of these quantizations is of interest when considering the behaviour of the closed loop, which is assumed to be asymptotically stable without these nonlinearities. The following effects are to be observed:
a) The control loop remains approximately asymptotically stable, as the quantization effects are negligible. After an initial change the control deviation becomes

lim (k→∞) e(k) ≈ 0.

b) The control loop does not return to the zero steady-state position, as offsets occur:

lim (k→∞) e(k) ≠ 0.

c) An additional stochastic signal, the quantization noise or rounding noise, is generated if the loop is persistently excited.
d) A limit cycle with period M arises:

lim (k→∞) e(k) = lim (k→∞) e(k + M) ≠ 0.

27.2 Various Quantization Effects

27.2.1 Quantization Effects of Variables


One multiple point characteristic with quantization unit Δ for the ADC is assumed within the loop, as drawn in Figure 27.1. The possible quantization errors δ are then given by (27.1.5) and (27.1.6) for rounding and truncation.

Quantization noise
If a variable changes stochastically such that different quantization levels are crossed, it can be assumed that the quantization errors δ(k) are statistically independent. As the δ(k) can attain all values within their definition intervals (27.1.5) and (27.1.6), a uniform distribution can be assumed, Figure 27.2. The digitized signal yQ then consists of the analog signal value y and a superimposed noise value δ. Eq.


Figure 27.2 Probability density of the quantization error for a rounding; b truncation.

(27.1.4) gives

yQ(k) = y(k) − δ(k).   (27.2.1)

The expectation of the quantization noise then becomes

- rounding:   E{δ(k)} = ∫(−∞..∞) p(δ) δ dδ = 0   (27.2.2)

- truncation: E{δ(k)} = Δ/2   (27.2.3)

and the variance in both cases is

σδ² = ∫(−∞..∞) [δ − E{δ(k)}]² p(δ) dδ = Δ²/12.   (27.2.4)
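The moments (27.2.2) to (27.2.4) are easy to confirm numerically (a sketch; the quantization unit and the signal are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
DELTA = 0.01
y = rng.uniform(0.0, 1.0, size=200_000)          # signal crossing many quantization levels

err_round = y - np.round(y / DELTA) * DELTA      # rounding error
err_trunc = y - np.floor(y / DELTA) * DELTA      # truncation error

print(err_round.mean())      # ~ 0,         cf. (27.2.2)
print(err_trunc.mean())      # ~ DELTA/2,   cf. (27.2.3)
print(err_round.var())       # ~ DELTA**2/12, cf. (27.2.4)
```

The sample moments agree with the uniform-distribution values to within the Monte-Carlo accuracy, which supports the independence and uniformity assumptions for signals that cross many levels.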

If this white quantization noise is generated in the ADC it acts as a white noise n(k)
on the controlled variable, and its variance cannot be decreased by any control.
This leads to undesirable changes of the manipulated variable which can be larger
than one quantization unit of the DAC, as shown by the next example.

Example 27.1. Effect of the ADC quantization error on the manipulated variable.
The process is assumed to have low-pass behaviour. Parameter-optimized controllers then tend to have PD-behaviour, chapter 13. With e(k) = −y(k) the control algorithm becomes

u(k) = −q0 y(k) − q1 y(k − 1).

The superimposed quantization noise becomes

uδ(k) = −q0 δ(k) − q1 δ(k − 1).

If uδ(k) is filtered by the low-pass process such that the resulting output component yδ(k) ≈ 0, the variance of the moving average signal process uδ(k) is

σuδ² ≈ [q0² + q1²] σδ².

With the controller parameters q0 = 3 and q1 = −1.5 the standard deviation is, using Eq. (27.2.4),

σuδ ≈ 3.35 σδ = (3.35/√12) Δ = 0.97 Δ.

The quantization noise in the ADC therefore generates a standard deviation in the manipulated variable about 3 times larger than that of the quantization noise itself.
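The factor 0.97 Δ can be checked by passing white uniform noise through the PD-type algorithm of the example (a sketch; the uniform noise stands in for the ADC rounding error):

```python
import numpy as np

rng = np.random.default_rng(0)
DELTA = 1.0
q0, q1 = 3.0, -1.5                                  # controller parameters from the example

# white quantization noise with variance DELTA^2/12, cf. (27.2.4)
delta = rng.uniform(-DELTA / 2, DELTA / 2, 100_000)

# noise component transferred to the manipulated variable
u_delta = -q0 * delta[1:] - q1 * delta[:-1]

print(u_delta.std())   # ~ sqrt(q0**2 + q1**2) * DELTA / sqrt(12) = 0.97 * DELTA
```

The empirical standard deviation matches sqrt(11.25)/sqrt(12) ≈ 0.97 Δ, confirming the moving-average variance formula used in the example.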

Because of the nonzero expectation of the quantization error, rounding must be preferred to truncation.

Offsets and limit cycles


Two examples are used to illustrate the quantization effects if a deterministic signal
acts on the loop.

Example 27.2 Offsets due to quantization in the ADC
A first order process

y(k + 1) = −a1 y(k) + b1 u(k)   with a1 = −0.5867 and b1 = 0.4133

and a P-controller are assumed, cf. Example 16.1. The control deviation is

e(k) = w(k) − yQ(k).

In the ADC the measured analog signal y(k) is rounded to the second place after the decimal point, resulting in yQ(k). The response of the signals without and with rounding to a reference value step w(k) = 1(k), with the initial conditions y(k) = 0 and u(k) = 0 for k < 0 and the gain q0 = 1.3, is shown in Table 27.2.
The quantization unit is Δ = 0.01. The rounded controlled variable stops at yQ = 0.56. This results in an offset of Δy = 0.003, which is negligible.

Example 27.3 Limit cycle due to quantization in the ADC
In the loop of Example 27.2 the controller gain is increased to q0 = 2.0. The same assumptions on rounding are made. Table 27.3 shows the effects. A constant amplitude oscillation with period M = 3 arises, i.e. a limit cycle with amplitude |Δy| ≈ 0.003, which is very small. The amplitude of the manipulated variable becomes |Δu| = 0.01, which is the same size as the quantization unit of the controlled variable.

These examples have shown that the resulting amplitudes of quantization noise,
offsets or limit cycles in the controlled variable are at least one quantization unit of
the ADC. Limit cycles arise particularly with strongly acting control algorithms.
They can disappear if the controller gain is reduced. The simplest investigation of
these quantization effects is obtained by simulation. This is true particularly if
quantizations occur at more than one location.
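Such a simulation needs little more than the loop recursion with quantizers inserted at the ADC and DAC positions. The sketch below uses the first-order process of Example 27.2 with a hypothetical choice of quantization units, not the third-order test process of Example 27.4:

```python
def quant(x, d):
    """Round x to the nearest multiple of the quantization unit d."""
    return round(x / d) * d

a1, b1 = -0.5867, 0.4133           # first-order process (as in Example 27.2)
q0, w = 2.0, 1.0                   # P-controller gain and reference value
dy, du = 0.01, 0.01                # ADC and DAC quantization units (assumed)

y, traj = 0.0, []
for k in range(60):
    yq = quant(y, dy)              # ADC quantization of the controlled variable
    u = quant(q0 * (w - yq), du)   # control algorithm followed by DAC quantization
    y = -a1 * y + b1 * u           # process step
    traj.append(yq)

print(traj[-10:])                  # tail of the quantized output: offset or limit cycle
```

Inspecting the tail of the trajectory directly shows whether the loop settles with an offset or enters a limit cycle; additional quantizers (e.g. product rounding in the CPU) can be inserted in the same way.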

Example 27.4 Simulation of ADC and DAC quantization effects
A third order process, the test process VI (see Appendix), was simulated together with a P-controller having q0 = 4. The controlled variable was quantized in the ADC with quantization unit Δy = 0.1 and the manipulated variable with Δu = 0.3 in the DAC. In Figure 27.3 the response is shown to the initial condition y(0) = 2.2 and the reference variable w0 = 3.5. A limit cycle occurs with amplitudes |Δy| ≈ Δy and |Δu| ≈ 3Δu.

Table 27.2 Effect of rounding in the ADC. q0 = 1.3.

         without rounding        with rounding
k        u(k)      y(k)          u(k)      y(k)      yQ(k)
0        1.3000    0             1.3000    0
1        0.6015    0.5373        0.5980    0.5373    0.54
2        0.5670    0.5638        0.5720    0.5640    0.56
3        0.5653    0.5651        0.5720    0.5649    0.56
4        0.5652    0.5652        0.5720    0.5649    0.56
5                  0.5652                  0.5649    0.56

Table 27.3 Effect of rounding in the ADC. q0 = 2.0.

         without rounding        with rounding
k        u(k)      y(k)          u(k)      y(k)      yQ(k)
0        2.0000    0             2.0000    0         0
1        0.3468    0.8266        0.3400    0.8266    0.83
2        0.7434    0.6283        0.7400    0.6254    0.63
3        0.6482    0.6759        0.6600    0.6727    0.67
4        0.6711    0.6644        0.6600    0.6675    0.67
5        0.6656    0.6672        0.6800    0.6644    0.66
6        0.6669    0.6665        0.6600    0.6708    0.67
7        0.6667                  0.6600    0.6663    0.67
8                                0.6600    0.6664    0.67
9                                0.6800    0.6637    0.66
10                               0.6600    0.6705    0.67
11                               0.6600    0.6661    0.67
12                               0.6800    0.6636    0.66
13                               0.6600    0.6703    0.67
14                               0.6600    0.6661    0.67
15                               0.6800    0.6636    0.66


Figure 27.3 Response of the controlled variable y and the manipulated variable u for quantization in the ADC and DAC with quantization units Δy = 0.1 and Δu = 0.1. Third order process (test process VI). P-controller: q0 = 4. T0 = 4 sec.

The describing function or the direct method of Ljapunov can be used in stability investigations for the detection of limit cycles. To determine the describing function of one multiple point characteristic, for example, two three-point characteristics can be connected in parallel to obtain a five-point characteristic, etc. [5.14, chapter 52]. A limit cycle results if there is an intersection of the negative inverse locus −1/G(iω) of the remaining linear loop with the describing function.
To apply Ljapunov's method it is assumed that the linear open loop of Figure 27.1 with the transfer function y(z)/(yQ(z))AD can be described by

x(k + 1) = A x(k) + b (yQ(k))AD
y(k) = c^T x(k).          (27.2.5)

As in Eq. (27.2.1), only the solution for the superimposed quantization error δy as input is considered, and the stability of

x(k + 1) = A x(k) + b δy(k);   |δy|max = Δ/2   (27.2.6)

is analyzed. Further details are given in [5.17, chapter 12]. After defining a Ljapunov function

V(k) = x^T(k) Y x(k);   A^T Y A − Y = −I

the maximal possible errors Δy in the output can be obtained as functions of the quantization error Δ.

27.2.2 Quantization Effects of Coefficients


The influence of rounding effects in the controller parameters can usually be
neglected, even in the case of fixed point representation. This becomes obvious if
the quantization errors of these coefficients are compared with the process model
parameter errors which influence the controller design.

27.2.3 Quantization Effects of Intermediate Results


Offsets and limit cycles
Intermediate results within control algorithms are products of coefficients and variables, as in (27.1.8). In general both the factors and the products are rounded. The resulting product error can be estimated as follows. Let the product be qe. Then

q = QΔ + δq;   e = EΔ + δe   (27.2.7)

qe = QEΔ² + QΔ δe + EΔ δq + δq δe   (27.2.8)

where the last term δq δe ≈ 0 can be neglected.
If the rounding errors δq and δe are statistically independent and have variance σδ² = Δ²/12, the product error obtained by rounding the factors has variance

σr² ≈ (Q² + E²) Δ² σδ².   (27.2.9)

To this one must add the rounding error of the product itself,

(qe)Q = qe + δQE   (27.2.10)

with variance σQE² = σδ². Hence, for statistically independent δq, δe and δQE, the variance of the overall error becomes

σ² ≈ [(Q² + E²) Δ² + 1] σδ².   (27.2.11)
This shows that with increasing values of the factors q and e, the factor rounding
mainly determines the overall error.
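A Monte-Carlo check of (27.2.11) (a sketch; the factor levels Q, E and the unit Δ are chosen arbitrarily, large enough that the factor rounding dominates):

```python
import numpy as np

rng = np.random.default_rng(0)
DELTA = 0.01
Q, E = 500, 300                  # integer levels of the factors q = Q*DELTA, e = E*DELTA
N = 200_000

# independent uniform rounding errors of the two factors and of the product
d_q, d_e, d_qe = rng.uniform(-DELTA / 2, DELTA / 2, (3, N))

# error terms of (27.2.8) plus the product rounding (27.2.10), with d_q*d_e neglected
err = Q * DELTA * d_e + E * DELTA * d_q + d_qe

var_pred = ((Q**2 + E**2) * DELTA**2 + 1) * DELTA**2 / 12   # (27.2.11)
print(err.var(), var_pred)
```

For these levels (Q² + E²)Δ² = 34 ≫ 1, so the factor rounding contributes almost the entire variance, as stated above.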
The quantization effects differ according to whether the rounding is performed for each product or only for their sum. If each product is rounded, the resulting error in the manipulated variable for the control algorithm (27.1.8), with quantization errors δpui and δqei of the products pi u(k − i) and qi e(k − i), becomes

δu(k) = −δpu1(k − 1) − ... − δpuμ(k − μ) + δqe0(k) + ... + δqeν(k − ν).   (27.2.12)

The estimation of the resulting error depends largely on the assumptions about the mutual dependence of the quantization errors. For stochastic signals they can be assumed to be statistically independent. Then the variance of δu is

σδu² = Σ(i=1..μ) σδpui² + Σ(i=0..ν) σδqei².   (27.2.13)

This increases with the number of products. A statistical analysis for control loops
with quantization in the ADC and rounding of products has been made in [27.3].
The resulting error standard deviations in the output signal decrease by the factor
3 per 1 bit increase of the word length. They also depend on the programming - see
also [27.1], [27.2]. A simple example shows the effect for a deterministic input
signal.

Example 27.5 Limit cycle due to quantization of the products in the control algorithm
The same control loop is assumed as in Example 27.2. The factors and the product in the control algorithm are rounded to the second decimal place such that the quantization unit is Δ = 0.01. The results are shown in Table 27.4. A limit cycle with period M = 3 arises, as with quantization in the ADC. The amplitude is also about the same: |Δy| ≈ 0.0034 and |Δu| = 0.01, though there is only one product.

Dead band
If the parameters of feedforward control algorithms or digital filters lie within certain ranges, offsets in the output variable can arise from product rounding which are multiples of the quantization units of the products.

Table 27.4 Effect of rounding the product in the control algorithm. q0 = 2.0.

         without rounding        with rounding
k        u(k)      y(k)          uQ(k)     y(k)
0        2.0000    0             2.00      0
1        0.3468    0.8266        0.34      0.8266
2        0.7434    0.6283        0.74      0.6255
3        0.6482    0.6759        0.66      0.6728
4        0.6711    0.6644        0.66      0.6675
5        0.6656    0.6672        0.68      0.6644
6        0.6669    0.6665        0.66      0.6708
7        0.6667                  0.66      0.6664
8                                0.68      0.6637
9                                0.66      0.6705
10                               0.66      0.6661
11                               0.68      0.6636
12                               0.66      0.6704
13                               0.66      0.6661
14                               0.68      0.6636
15                               0.66      0.6704

Example 27.6
Consider a first order feedforward control algorithm

u(k + 1) = −a1 u(k) + b1 v(k)

with a1 = −0.9 and b1 = 0.1. As the gain is K = b1/(1 + a1) = 1, in the ideal case u(∞) = 1 is attained for v(k) = 1. With rounding of the products to the second decimal place (i.e. Δ = 0.01) one obtains for various initial values u(0) the final values of u(k) given in Table 27.5.
Depending on the initial values, the following final values are attained:

u(0) ≤ 0.9640:  lim (k→∞) u(k) = 0.96
u(0) ≥ 1.0450:  lim (k→∞) u(k) = 1.05.

For k ≥ 1 all initial values 0.9639 ≤ u(0) ≤ 1.0449 give a nearby rounded steady state value within the range 0.97 ≤ uQ ≤ 1.04. The region 0.96 ≤ uQ ≤ 1.05 is called a dead band [27.6], which lies around the steady state value for a constant process input. If, starting with u(0) = 0.96, the input v(k) = 0 is applied, the signal u(k) approaches u(k) = 0.05 for k ≥ 24. The dead band always lies around the new steady state.
If the same difference equation is used as a process and a P-controller with q0 = 2.0 is used as feedback, with rounding as in this example with Δ = 0.01, a limit cycle arises with period M = 3 and |Δu| = 0.005 and |Δy| ≈ 0.0005. Because of the feedback there is no large dead band over several quantization units.

Table 27.5 Effects of rounding the product of a feedforward control algorithm for v(k) = 1 and various initial values u(0).

k   u(k)    uQ(k)      k   u(k)    uQ(k)      k   u(k)    uQ(k)
0   0.9000  0.90       0   1.1000  1.10       0   0.9800  0.98
1   0.9100  0.91       1   1.0900  1.09       1   0.9820  0.98
2   0.9180  0.92       2   1.0810  1.08       2   0.9820  0.98
3   0.9280  0.93       3   1.0720  1.07
4   0.9370  0.94       4   1.0630  1.06
5   0.9460  0.95       5   1.0540  1.05
6   0.9550  0.96       6   1.0450  1.05
7   0.9640  0.96       7   1.0450  1.05
8   0.9640  0.96
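The final values of Table 27.5 can be reproduced with integer arithmetic in units of Δ = 0.01 (a sketch; half-up rounding of the product is assumed, so the intermediate values differ slightly from the table's unrounded u(k) column, while the final values agree):

```python
DELTA = 0.01

def next_u(u_int):
    """One step of u(k+1) = [0.9*u(k)]_Q + 0.1 for v(k) = 1, with u held as an
    integer multiple of DELTA and the product rounded half-up to the grid."""
    return (9 * u_int + 5) // 10 + 10       # 0.9*u rounded half-up, plus 0.1 = 10 units

def final_value(u0):
    u = round(u0 / DELTA)                   # convert to integer units of DELTA
    for _ in range(50):
        u = next_u(u)
    return u * DELTA

print(final_value(0.90))   # 0.96 (first column of Table 27.5)
print(final_value(1.10))   # 1.05 (second column)
print(final_value(0.98))   # 0.98 (third column: start inside the dead band)
```

Every initial value is pulled to the edge of, or trapped inside, the region 0.96 ... 1.05, which is exactly the dead band described above.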

The dead zone for a first-order difference equation can be approximately calculated as follows. It is assumed that in the difference equation of Example 27.6 the signals u(k) and v(k) and the parameters a1 and b1 are already rounded according to the quantization unit Δ.
Considering only the rounding of the product a1 u(k) yields

u(k + 1) = −[a1 u(k)]Q + b1 v(k) = −a1 u(k) − δa1u(k) + b1 v(k)   (27.2.14)

and in the steady state with v(∞) = v

uQ(1 + a1) − b1 v = −δa1u.   (27.2.15)

Since |δa1u| ≤ Δ/2,

|uQ(1 + a1) − b1 v| ≤ Δ/2.   (27.2.16)

For Example 27.6 this leads, with b1 = 1 + a1, to

|uQ − v| ≤ Δ/(2|1 + a1|) = 0.05.   (27.2.17)

The same relation holds for the rounding of the product b1 v(k). For several rounding errors either the worst case has to be assumed, that is several errors δi = Δ/2 superimposed additively, or statistical assumptions have to be made for the single errors.
(27.2.17) also indicates the range of the parameter a1 for which, for processes with K = 1, dead zones occur with width |uQ − v| = jΔ, j = 1, 2, 3, ... The following is valid:

|1 + a1| ≤ 1/(2j).   (27.2.18)

Hence it must also be a1 < 0, and for j = 1; 2; 3; 4; 5 the conditions

a1 ≤ −0.5; −0.75; −0.83; −0.875; −0.9 follow. The closer the pole z1 = −a1 gets to the stability limit, the larger the dead zone becomes.
A dead zone can sometimes be avoided by superimposing a small noise signal on the input signal (dithering) [27.7].
Based on these examples, the following conclusions can be drawn as to how
undesired quantization effects in digital control loops can be avoided:
1. The word lengths of the ADC and DAC and the numerical range of the CPU
must be sufficiently large and coordinated.
2. The word lengths or dynamic range at all quantization locations must be
utilized as much as possible, by appropriate scaling of variables. The word
length of the ADC should be chosen such that its quantization error is smaller
than the static and dynamic errors of the sensors. A word length of 10 bits
(resolution 0.1 %) is usually sufficient. The word length of the DAC must be
coordinated with that of the ADC. For digital control it can be taken such that
one quantization unit of the manipulated variable arises, after transfer through
the process, in about one quantization unit of the ADC.
3. To avoid excessive quantization errors of factors and products, the CPU word
length for fixed point calculations must be significantly larger than that of the
ADC (for example double word).
4. If limit cycles arise for a given digital controller, the controller parameters
should be modified to give a weaker controller action (detuned).
5. For feedforward control algorithms and digital filters one must take care of the
dead band effect around the steady state.
28 Filtering of Disturbances

Some control systems and many measurement techniques require the determination
of signals which are contaminated by noise. It is assumed that a signal s(t) is
contaminated additively by noise n(t) and that only
y(t) = s(t) + n(t)
is measurable.
If the frequency spectra of the signal and the noise lie in different ranges, they can
be separated by suitable bandpass filters, Figure 28.1. If, however, the frequency
spectra lie in the same frequency range, estimation methods have to be used to
determine the signal. In this case it is not possible to determine the signal without
error. The influence of the noise can only be minimized. This was discussed in
chapter 22.

Figure 28.1 Separation of a signal s and a noise n by a bandpass filter. y: measured signal; sF: filtered signal.

This chapter considers bandpass filtering with regard to its application in
control systems. Section 28.1 treats the noise sources and noise spectra which
usually contaminate control systems. Various filters and their application are
then described: analog filters in section 28.2 and digital filters in section 28.3.

28.1 Noise Sources and Noise Spectra

The graph of the dynamic control factor |R(z)|, Figure 11.5, shows that high
frequency disturbances n(k) with frequencies in range III, ω > ωII where |R(z)| ≈ 1,
cannot be influenced by the control system. They only cause undesired actuator
changes and should therefore be eliminated by suitable filters. High frequency noise
signals in general consist of the following components:

a) high frequency disturbances of the process;


b) high frequency measurement noise (for example turbulent flow, vibrations,
instrument noise);
c) electrical disturbances during signal transmission.
The components a) mostly cannot be changed. Occasionally components b) can
be reduced. For components c) the following can be noted:
In amplitude modulated d.c. signal transmission, noise components emerge
because of galvanic, inductive and capacitive coupling to other electric
sources and often consist of high and low frequency components. The high
frequency noise does not generally influence the function of analog control
devices significantly, because of their natural low-pass behaviour. However, in
the case of digital signal processing the noise is sampled and transmitted.
Therefore it must be reduced at source and filtered at the digital computer input.
Frequency modulated or digital signal transmission is essentially more
advantageous with regard to noise transmission.
One distinguishes noise signals which act in opposite directions on the two wires
and those which act in the same direction. The noise can be avoided or diminished
by the following actions: proper installation with sufficient spacing between cables,
twisting as protection against inductive coupling, proper earthing of the computer,
potential separation of measurement cable and analog input [28.1]. Despite these
techniques there generally remains some residual high frequency noise.

Noise spectra
Analog and digital filtering can avoid or diminish the various noncontrollable high
frequency noise signals provided the signals required for the control are not
affected. In order to design those filters, at first the frequency spectra have to be
considered which are generated by noise signals and sampling.
The continuous measurement signal is described by

y(t) = s(t) + n(t)   (28.1.1)

with s(t) the undisturbed signal and n(t) the noise. y(t) is sampled with sample time
T0, i.e. at a sampling frequency ω0 = 2π/T0. By this sampling the Fourier transform of
a deterministic signal becomes periodic in ω0

y*(iω) = y*(i(ω + νω0))   ν = 0, 1, 2, ...   (28.1.2)

c.f. chapter 3. The power density spectrum of a stochastic signal is also periodic

Syy*(ω) = Syy*(ω + νω0) .   (28.1.3)

As well as the basic spectrum (ν = 0), side spectra (side bands) at distance ω0
appear for ν = ±1, ±2, ... These are shown in Figure 28.2 a) for the signal s(t)
and in b) for the noise n(t) for ν = +1.
If ωmax is the maximum frequency of the signal which is of interest for control
then, if ωmax > ω0/2, the basic and side spectra overlap. The continuous signal

Figure 28.2a-d Power density spectra S(ω) for the signal s(k), the noise n(k) and their
low-pass filtering. a) signal; b) noise; c) low-pass filter; d) filtered signals. ω0: sampling
frequency; ωs = ω0/2: Nyquist frequency.

cannot then be reconstituted without error using ideal bandpass filters. To
reconstitute a signal with limited frequency spectrum ω ≤ ωmax from the sampled
signal, Shannon's sampling theorem states that ωmax < ω0/2 = ωs must be satisfied,
c.f. chapter 3. Hence,

T0 < π/ωmax .   (28.1.4)

If high frequency noise n(t) with Snn(ω) ≠ 0 for ω > ωs is contained in the
measured signal y(t), side spectra Snn(ω + νω0) are generated which are super-
imposed on the basic spectrum Snn(ω), forming Snn*(ω), see Figure 28.2 b). High
frequency noise with (angular) frequency ωs < ω1 < ω0 generates after sampling
with ω0 a low frequency component with frequency

ω2 = ω0 - ω1   (28.1.5)

with the same amplitude. To illustrate this so-called aliasing effect, Figure 28.3
shows a sinusoidal oscillation with period Tp = 12T0/10.5 and therefore

Figure 28.3 The aliasing effect: generation of a low frequency signal n(k) with frequency ω2
by sampling of a high frequency signal n(t) with frequency ω1 > ω0/2 at sampling
frequency ω0 > ω1.

ω1 = 2π/Tp = 14π/8T0, sampled with frequency ω0 = 2π/T0 = 16π/8T0. This re-
sults in a low frequency component with the same amplitude and with frequency
ω2 = ω0 - ω1 = 2π/8T0 [28.2]. Noise components with ω1 ≈ ω0 therefore gener-
ate very low frequency noise ω2. This is the reason why high frequency noise with
significant spectral densities for ω > ωs = π/T0 has to be filtered before it is
sampled. This is shown in Figure 28.2 c) and d). Analog filters are effective for this
purpose.
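The aliasing relation (28.1.5) can be checked numerically. The following sketch
(with the illustrative example ω1 = 14π/8T0 from Figure 28.3) samples a high
frequency sine and verifies that the samples coincide with those of the low
frequency alias ω2 = ω0 - ω1:

```python
import math

T0 = 1.0                       # sample time (arbitrary units)
w0 = 2 * math.pi / T0          # sampling frequency
w1 = 14 * math.pi / (8 * T0)   # noise frequency above the Nyquist frequency w0/2
w2 = w0 - w1                   # alias frequency according to (28.1.5)

# Samples of the fast sine and of its slow alias agree at every sampling instant:
for k in range(16):
    n_k = math.sin(w1 * k * T0)
    alias = -math.sin(w2 * k * T0)   # sin(w1 k T0) = sin((w0 - w2) k T0) = -sin(w2 k T0)
    assert abs(n_k - alias) < 1e-9
print("alias frequency w2 =", round(w2, 4))
```

The sampled fast oscillation is thus indistinguishable from a slow one, which is
why the filtering must take place before the sampler.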

28.2 Analog Filtering

Analog low-pass filters

Using analog techniques, broad band noise with ω > ωs = π/T0 can be filtered. For
filtering of noise before sampling, low-pass filters must be used which have sufficient
attenuation at ω = ωs = ω0/2 of about 1/10 ... 1/100 or -20 ... -40 dB, de-
pending on the noise amplitudes. For the design of frequency responses of low-pass
filters there are the following possibilities [28.4].
Simple low-pass filters are obtained by connecting first-order lags in series

GF(iω) = 1/(1 + iωT)^n = 1/(1 + iΩc)^n   n = 1, 2, 3, ...   (28.2.1)

with normalized frequency Ωc = ω/ωc = ωT. ωc = 1/T is called the corner or
break-point frequency (Bode plot). In filtering, the normalized frequency is usually
related to the limiting frequency ωg for which the amplitudes are decreased to
-3 dB = 0.708. Then

GF(iΩg) = 1/(1 + iαΩg)^n   with α = √(2^(1/n) - 1) .   (28.2.2)

In this representation the time constant T changes with the order n. Higher order
low-pass filters with n ≥ 2 can be designed differently. Here compromises must be
made with regard to a flat pass-band, a sharp cut-off and a small overshoot of
the resulting step response, c.f. Figure 28.4. Such special low-pass filters are for

Figure 28.4 Frequency response magnitudes of various low-pass filters of order n = 4
[28.4]. 1: simple low-pass due to (28.2.2); 2: Butterworth low-pass; 3: Bessel low-pass;
4: Tschebyscheff low-pass (±1.5 dB pass-band oscillations).

example Butterworth, Bessel and Tschebyscheff filters. They have the transfer
function

GF(s) = 1/(1 + c1 s + c2 s^2 + ... + cn s^n)   (28.2.3)

with magnitude

|GF(iω)| = 1/|1 + c1 iω + ... + cn (iω)^n|   (28.2.4)

and phase shift

φ(ω) = arc GF(iω)   (28.2.5)

c.f. [5.14, p. 86].


Butterworth filters are characterized by the amplitudes

|GF(iΩg)| = 1/√(1 + Ωg^(2n)) .   (28.2.6)

They have a flat pass-band and a rapid transition to the asymptote |GF| = 1/Ωg^n.
However, the step response shows an overshoot, which is 12% for n = 4.
Bessel filters have a phase shift proportional to the frequency

φ(Ωg) = -cΩg .   (28.2.7)

The time delay caused by the phase shift is then Δt = -φ/ω = -φ/(Ωg ωg) = c/ωg
and is therefore independent of the frequency. This results in a step response with
little overshoot. The amplitude does not descend as quickly to the asymptote 1/Ωg^n
as for the Butterworth filter.

The frequency response magnitude of Tschebyscheff filters

|GF(iΩg)| = 1/√(1 + ε^2 Sn^2(q))   (28.2.8)

contains Tschebyscheff polynomials (q = βΩg)

S1(q) = q; S2(q) = 2q^2 - 1; S3(q) = 4q^3 - 3q; etc.

These filters have particularly rapid transitions to the stop-band asymptotes,
which is paid for by oscillations in the amplitude response for ω < ωg and by large
overshoots of the step response. ε determines the pass-band oscillations.
Unlike simple low-pass filters, these special filters have conjugate complex poles.
They can be built with active elements, especially operational amplifiers together
with RC-networks [28.4], [28.5]. Simple low-pass filters with passive elements are
cheap for the high frequency range fg > 5 Hz or ωg > 31.4 1/sec. If the filter
should have 20 dB damping at ωs = ω0/2 = π/T0 and an order n = 2, the limiting
frequency is ωg ≈ 0.3 ωs. Therefore passive RC-filters can be used for sample
times T0 < 0.15/fg = 0.03 sec. For lower frequency noise 0.1 Hz < fg < 5 Hz or
0.6 1/sec < ωg < 31.4 1/sec, active low-pass filters are appropriate. This corresponds
to sample times of 1.5 sec > T0 > 0.03 sec.
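The quoted 20 dB damping can be reproduced with the Butterworth magnitude
(28.2.6). A small numerical sketch (illustrative, for ωg = 0.3 ωs and n = 2):

```python
import math

def butterworth_mag(omega_over_wg, n):
    # |G_F(i Omega_g)| = 1 / sqrt(1 + Omega_g^(2n)), eq. (28.2.6)
    return 1.0 / math.sqrt(1.0 + omega_over_wg ** (2 * n))

# With limiting frequency w_g = 0.3 w_s and order n = 2, the damping at the
# Nyquist frequency w_s (i.e. Omega_g = 1/0.3) is about -21 dB:
damping_db = 20 * math.log10(butterworth_mag(1 / 0.3, 2))
print(round(damping_db, 1))
```

This confirms that a second order filter with ωg ≈ 0.3 ωs meets the -20 dB
requirement at the Nyquist frequency.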

28.3 Digital Filtering

As analog filters for frequencies fg < 0.1 Hz become expensive, such low fre-
quency noise should be filtered by digital methods. This section first considers
digital low-pass filters. Then digital high-pass filters and some special digital filtering
algorithms are reviewed.
It is assumed that the sampled signal s(k) is contaminated by noise n(k), so that

y(k) = s(k) + n(k)


is measurable. If the spectra of s(k) and n(k) lie in different frequency ranges, the
signal s(k) can be separated by a bandpass filter which generates sF(k) at its output,
c.f. Figure 28.1. Linear filters are described by difference equations of the form

sF(k) + a1 sF(k - 1) + ... + am sF(k - m)
= b0 y(k) + b1 y(k - 1) + ... + bm y(k - m)   (28.3.1)

or by the z-transfer function

GF(z) = sF(z)/y(z) = (b0 + b1 z^-1 + ... + bm z^-m)/(1 + a1 z^-1 + ... + am z^-m) .   (28.3.2)
Some simple discrete-time filters are considered now.



28.3.1 Low-pass Filters


The z-transfer function of a first order low-pass filter with s-transfer function

GF(s) = sF(s)/y(s) = 1/(1 + Ts)   (28.3.3)

with no holding element follows from the z-transformation table as

GF1(z) = sF(z)/y(z) = b0/(1 + a1 z^-1)   (28.3.4)

and with a zero-order hold, due to example 3.8, as

GF2(z) = sF(z)/y(z) = b1 z^-1/(1 + a1 z^-1) .   (28.3.5)

The parameters are

a1 = -e^(-T0/T); b0 = 1/T; b1 = 1 + a1

and the gains become

GF1(1) = 1/[T(1 + a1)] and GF2(1) = 1.

As GF2(z)/GF1(z) = z^-1 b1/b0, the filter GF2(z) gives, compared with GF1(z), the
filtered signals with a dead time d = 1 but with unity gain. Therefore GF1(z) is
preferred in general. So that GF1(1) = 1 is obtained, b0 must be replaced by
b0' = 1 + a1

GF1'(z) = b0'/(1 + a1 z^-1) .   (28.3.6)

The frequency response of the first order low-pass filter using z = e^(T0 iω) is

GF1'(iω) = b0'/(1 + a1 e^(-T0 iω))
         = b0'[(1 + a1 cos ωT0) + i a1 sin ωT0] / [(1 + a1 cos ωT0)^2 + (a1 sin ωT0)^2] .   (28.3.7)

This gives the amplitudes

|GF1'(iω)| = b0'/√(a1(a1 + 2 cos ωT0) + 1)   (28.3.8)

with |GF1'| = 1 for ωT0 = 0, 2π, 4π, ...
In Figure 28.5 the magnitudes of a discrete-time and a continuous-time filter are
shown for T0/T = 4/7.5 = 0.533. There is good agreement in the low frequency
range. At ωT0 = 1 the difference is about 4%. Unlike the continuous filter, the
discrete filter shows a first minimum of the amplitudes at the Nyquist frequency

Figure 28.5 Frequency response magnitude of first order low-pass filters. —— discrete
filter; - - - continuous filter.

ωT0 = π with the magnitude

|GF1'(iω)| = (1 + a1)/(1 - a1) .   (28.3.9)

This is followed by a maximum at ωT0 = 2π with |GF1'| = 1, a minimum at
ωT0 = 3π, a maximum at ωT0 = 4π, etc. The discrete filter cannot therefore
effectively filter signals with frequencies higher than the Nyquist frequency. For
frequencies ω > π/T0 continuous filters must be used.
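The discrete first order low-pass (28.3.6) is a one-line difference equation. A
minimal sketch (the parameter values T0 = 4 and T = 7.5 are the illustrative
ones from Figure 28.5):

```python
import math

def first_order_lowpass(y, T0, T):
    """Discrete low-pass G'_F1(z) = b0'/(1 + a1 z^-1), eq. (28.3.6), unity gain."""
    a1 = -math.exp(-T0 / T)
    b0 = 1.0 + a1
    sf, out = 0.0, []
    for yk in y:
        sf = -a1 * sf + b0 * yk      # s_F(k) = -a1 s_F(k-1) + b0' y(k)
        out.append(sf)
    return out

# A constant input converges to the same constant, since G'_F1(1) = 1:
filtered = first_order_lowpass([1.0] * 50, T0=4.0, T=7.5)
print(round(filtered[-1], 6))
```

The recursion needs only one stored value sF(k - 1), which makes it well suited
for implementation in the control computer.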
A second order low-pass filter with

GF(s) = 1/(1 + Ts)^2   (28.3.10)

follows from the z-transform table (without hold) as

GF'(z) = b1' z^-1/(1 + a1 z^-1)^2   (28.3.11)

with the coefficients

a1 = -e^(-T0/T); b1' = (1 + a1)^2

(the latter normalized so that GF'(1) = 1).
For noise filtering in control systems for frequencies fg < 0.1 Hz, digital low-pass
filters should be applied. They can filter the noise in the range ωg < ω < ωs.
Noise with ω > ωs must be reduced with analog filters. The design of the digital
filter, of course, depends much on the further application of the signals. In the case
of noise pre-filtering for digital control, the location of the Nyquist frequency
ωs = π/T0 within the graph of the dynamic control factor |R(z)|, section 11.4, is
crucial, c.f. Figure 28.6. If ωs lies within range III, for which a reduction of the noise

Figure 28.6a, b Location of the Nyquist frequency ωs = π/T0 within the dynamic control
factor. a) ωs in range III, small sample time; b) ωs in range II, large sample time.

components is not possible by feedback, a discrete low-pass filter of first or second
order (or an analog filter) with limiting frequency ωg ≈ ωII can be used. The
controller parameters need not be changed in this case. If ωs lies close to ωII or
within range II, an effective low-pass filter becomes a significant part of the process
to be controlled, implying that the controller must be detuned. The graph of the
dynamic control factor would change, leading possibly to a loss of control
performance in regions I and II. Any improvement that can be obtained by the
low-pass filter depends on the noise spectrum and must be analyzed in each case.
The case of Figure 28.6 a) arises if the sample time T0 is relatively small, and the
case of Figure 28.6 b) if T0 is relatively large.

28.3.2 High-pass Filters


The z-transfer function of the continuous first order high-pass filter

GF(s) = T2 s/(1 + T1 s)   T1 < T2   (28.3.12)

with zero-order hold is

GF(z) = (b0 + b1 z^-1)/(1 + a1 z^-1) = b0(1 - z^-1)/(1 + a1 z^-1)   (28.3.13)

with parameters

a1 = -e^(-T0/T1); b0 = -b1 = T2/T1 .

In this case a hold is required, as a z-transfer function with corresponding
input/output behaviour is desired, see example 3.8. The first order high-pass filter
has a zero at z = 1. The transmission range follows from the corner frequency:

(1 - e^(-T0/T1)) ≲ ωT0 ≤ π .

In the high frequency range |GF(iω)| = 0 holds for ωT0 = νπ with ν = 2, 4, ... For low
frequencies the behaviour is as for the continuous filter.
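For illustration, the high-pass (28.3.13) can be realized directly from its
difference equation. A sketch with illustrative time constants T1 = 2 and T2 = 4
(the zero at z = 1 removes the d.c. component):

```python
import math

def first_order_highpass(y, T0, T1, T2):
    # G_F(z) = b0 (1 - z^-1)/(1 + a1 z^-1), eq. (28.3.13)
    a1 = -math.exp(-T0 / T1)
    b0 = T2 / T1
    sf, y_prev, out = 0.0, 0.0, []
    for yk in y:
        sf = -a1 * sf + b0 * (yk - y_prev)
        out.append(sf)
        y_prev = yk
    return out

# A step input passes at first (amplitude b0 = T2/T1) and then decays to zero:
step = first_order_highpass([1.0] * 60, T0=1.0, T1=2.0, T2=4.0)
print(round(step[0], 3), round(step[-1], 6))
```

The decay of the step response reflects the complete suppression of the constant
(d.c.) signal component.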

Some simulation examples of various discrete filters can be looked up in [2.20].
As well as the above simple low-order filters, many other more complex discrete-
time filters can be designed; the reader is referred for example to [28.6, 28.7].

28.3.3 Special Filters


This subsection considers discrete-time filters for special tasks, such as for recursive
averaging and for filtering of bursts.

Recursive averaging
For some tasks only the current average value of the signals is of interest, i.e. the
very low frequency component. An example is the d.c. value estimation in recursive
parameter estimation, chapter 24. The following algorithms can be applied.
a) Averaging with infinite memory
It is assumed that a constant value s is superimposed on the noise n(k), with
E{n(k)} = 0, and the measured signal is given by

y(k) = s + n(k) .   (28.3.14)

The least squares method with the loss function

V = Σ(k=1..N) e^2(k) = Σ(k=1..N) [y(k) - ŝ]^2   (28.3.15)

yields with dV/dŝ = 0 the well-known estimate

ŝ(N) = (1/N) Σ(k=1..N) y(k) .   (28.3.16)

The corresponding recursive estimate results from subtraction of ŝ(N - 1)

ŝ(k) = ŝ(k - 1) + (1/k)[y(k) - ŝ(k - 1)] .   (28.3.17)

This algorithm is suitable for a constant s. With increasing k the errors e(k), and
therefore the new measurements, are weighted increasingly less. However, if s(k)
is slowly time-variant and the current average is to be estimated, other algo-
rithms should be used.
b) Averaging with a constant correcting factor
If the correcting factor is frozen by setting k = k1, the new measurements y(k)
always give equally weighted contributions

ŝ(k) = ŝ(k - 1) + (1/k1)[y(k) - ŝ(k - 1)] = ((k1 - 1)/k1) ŝ(k - 1) + (1/k1) y(k) .   (28.3.18)

The z-transfer function of this algorithm is

ŝ(z)/y(z) = b0/(1 + a1 z^-1)   (28.3.19)

with a1 = -(k1 - 1)/k1 and b0 = 1/k1. Hence, this algorithm is the same as the
discrete first order low-pass filter, Eq. (28.3.6).
c) Averaging with limited memory

Only the N past measurements are averaged with equal weight

ŝ(k) = (1/N) Σ(i=k-N+1..k) y(i) .   (28.3.20)

Subtraction of ŝ(k - 1) gives recursive averaging with limited memory

ŝ(k) = ŝ(k - 1) + (1/N)[y(k) - y(k - N)]   (28.3.21)

with the z-transfer function

G(z) = ŝ(z)/y(z) = (1/N)(1 - z^-N)/(1 - z^-1) .   (28.3.22)
d) Averaging with fading memory
The method of weighted least squares is used, c.f. [3.13], with exponential
weighting of past measurements

V = Σ(k=1..N) λ^(N-k) e^2(k)   0 < |λ| < 1 .   (28.3.23)

The older the measurement, the less its weight. dV/dŝ = 0 yields for large N

ŝ(N) = (1 - λ) Σ(k=1..N) λ^(N-k) y(k) = (1 - λ) {Σ(k=1..N-1) λ^(N-k) y(k) + y(N)} ,   (28.3.24)

using the approximation

Σ(k=1..N) λ^(N-k) = 1 + λ + λ^2 + ... + λ^(N-1) ≈ [1 - λ]^-1 .

Subtraction of

ŝ(N - 1) = (1 - λ) Σ(k=1..N-1) λ^(N-k-1) y(k)   (28.3.25)

gives a recursive average with fading memory

ŝ(k) = λ ŝ(k - 1) + (1 - λ) y(k) .   (28.3.26)

Figure 28.7 Magnitudes of the frequency response of various recursive algorithms for
averaging of slowly time varying signals. 1: frozen correcting factor k1 = 20 and fading
memory λ = 0.95; 2: limited memory N = 20.

This algorithm has the same form as that with the frozen correcting factor (28.3.18)
and therefore the same z-transfer function as in (28.3.19), with a1 = -λ and
b0 = (1 - λ). Hence, k1 = 1/(1 - λ) holds.
As these averaging algorithms track low frequency components of s(k) and
eliminate high frequency components n(k), they can be considered as special
low-pass filters. Their frequency responses are shown in Figure 28.7.
The recursive algorithm with a frozen correcting factor or fading memory has the
same frequency response as a discrete low-pass filter with T0/T = ln(1/(-a1)).
Noise with ωT0 > π cannot be filtered; there the variance even increases through
averaging.
The frequency response of the recursive algorithm with limited memory becomes
zero at ωT0 = νπ/N with ν = 2, 4, 6, ... Noise with these frequencies is eliminated
completely, as with the integrating A/D converter. The frequency response magni-
tude has a maximum at ωT0 = νπ/N with ν = 1, 3, 5, ... Therefore noise with
ωT0 > 2π/N cannot be effectively filtered.
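The averaging algorithms (28.3.21) and (28.3.26) can be compared in a few lines
of program code. A sketch with the illustrative values λ = 0.95 and N = 20
matching Figure 28.7:

```python
def avg_fading(y, lam):
    # s(k) = lam * s(k-1) + (1 - lam) * y(k), eq. (28.3.26)
    s = 0.0
    for yk in y:
        s = lam * s + (1.0 - lam) * yk
    return s

def avg_limited(y, N):
    # s(k) = s(k-1) + (1/N) [y(k) - y(k-N)], eq. (28.3.21), with a ring buffer
    s, buf = 0.0, [0.0] * N
    for k, yk in enumerate(y):
        s += (yk - buf[k % N]) / N
        buf[k % N] = yk
    return s

data = [2.0] * 100
print(round(avg_fading(data, 0.95), 3))   # approaches the true mean 2.0
print(round(avg_limited(data, 20), 3))    # equals the mean once N samples have passed
```

Both algorithms need only a few stored values and are therefore suitable for
on-line use in the control computer.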

Integrating A/D Converters


For the integrating A/D converter the measurement signal is averaged over
a certain time period Δt. Harmonic oscillations with periods n Tp = Δt, n = 1, 2, 3, ...
are then almost completely eliminated; compare the frequency response for dis-
crete averaging with limited memory, Figure 28.7. Hence, a noise with f = 50 Hz
can be eliminated by an averaging time of Δt = 1/50 s = 20 ms. Integrating A/D
converters are thus filters for the specific elimination of certain periodic noise
signals with discrete frequencies.
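This elimination can be verified numerically: averaging a 50 Hz sine over
Δt = 20 ms, i.e. one full period, yields practically zero. A small sketch,
approximating the integration by a fine Riemann sum:

```python
import math

def integrated_average(f_noise, dt, n_steps=2000):
    # mean of sin(2 pi f t) over the integration interval dt
    h = dt / n_steps
    total = sum(math.sin(2.0 * math.pi * f_noise * i * h) * h for i in range(n_steps))
    return total / dt

# 50 Hz disturbance, 20 ms averaging time: the mean is practically zero.
print(abs(integrated_average(50.0, 0.020)) < 1e-6)
```

If the averaging time does not match a multiple of the disturbance period, a
residual component remains, as the frequency response in Figure 28.7 indicates.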

Filtering of Outliers
Up to now high frequency, quasistationary, stochastic noise signals were con-
sidered, which cannot be damped by a control algorithm. A similar problem arises
if the measurement contains a so-called "outlier". Such measured values can
sometimes be totally wrong and lie far away from the normal values. They may be
caused by avoidable or unavoidable disturbances of the sensor or of the transmis-
sion line. As they do not correspond to a real control deviation, they should be
ignored by the controller. Contrary to analog control systems, digital control
provides simple means to filter these types of disturbance.
It is assumed that the normal signal y(k) consists of the signal s(k) and the noise
n(k)

y(k) = s(k) + n(k) .   (28.3.27)
In this signal the outliers are to be detected. The following methods can be used:
a) - estimation of the mean value ȳ = E{y(k)}
   - estimation of the variance σy^2 = E{[y(k) - ȳ]^2}
b) - estimation of the signal ŝ(k)
     (signal parameter estimation as in section 24.2.2, then Kalman filtering as in
     chapter 22)
   - estimation of the variance σs^2 = E{[s(k) - s̄]^2}
c) - estimation of the parametric signal model as in section 24.2.2
   - prediction of ŷ(k|k - 1)
   - estimation of the variance σy^2.
Here only the simplest method a) is briefly described. Estimation of the mean value
can be performed by recursive averaging

ŷ(k + 1) = ŷ(k) + (1/(k + 1))[y(k + 1) - ŷ(k)] ,   (28.3.28)

compare (28.3.17). For slowly time varying signals an averaging with
a "frozen" correcting factor is recommended

ŷ(k + 1) = ŷ(k) + K[y(k + 1) - ŷ(k)]   (28.3.29)

with K a suitably chosen constant, e.g. K = 1/(1 + k1), see (28.3.18). (28.3.29)
represents a first-order difference equation with constant parameters and can be
taken as a low-pass filter. With

Δy(k + 1) = y(k + 1) - ŷ(k + 1)

the variance in the recursive estimator becomes

σy^2(k + 1) = σy^2(k) + K(k)[(y(k + 1) - ŷ(k + 1))^2 - σy^2(k)]   (28.3.30)

with K(k) = 1/k or, better, K = const. To detect outliers, knowledge of the prob-
ability density p(y) is required. Then it can be assumed that measured signals with

|Δy(k + 1)| = |y(k + 1) - ŷ(k + 1)| > κ σy(k + 1)   (28.3.31)

are outliers, with for example κ ≈ 3 for a normal distribution p(y).
The detected value y(k + 1) is finally replaced by y(k + 1) = ŷ(k + 1) and used as
the estimate for control.
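Method a) can be sketched in a few lines of program code. The constants K and κ
below are illustrative choices (κ ≈ 3 corresponds to a normal distribution), and
the variance is initialized crudely, which a practical implementation would handle
more carefully:

```python
import math

def detect_outliers(y, K=0.05, kappa=3.0):
    """Recursive mean (28.3.29) and variance (28.3.30) tracking with test (28.3.31)."""
    mean, var = y[0], 1.0          # crude initialization
    flags = []
    for yk in y[1:]:
        dev = yk - mean
        is_outlier = abs(dev) > kappa * math.sqrt(var)
        flags.append(is_outlier)
        if is_outlier:
            dev = 0.0              # replace the outlier by the current estimate
        mean += K * dev                        # (28.3.29)
        var += K * (dev * dev - var)           # (28.3.30)
    return flags

flags = detect_outliers([1.0, 1.1, 0.9, 1.0, 8.0, 1.1, 1.0])
print(flags)       # only the gross value 8.0 is flagged
```

Replacing the flagged value by the current estimate prevents the outlier from
corrupting both the mean and the variance estimates.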
29 Combining Control Algorithms and Actuators

Within a control system, actuators have to be moved to a certain absolute position
U(k). After linearization around an operating point, linear controllers determine
relative positions u(k) = ΔU(k) = U(k) - U00 with respect to the operating point
U00, which depends on the command variable. In digital control systems
recursive algorithms are used to determine u(k). Programming can be performed
such that the first difference Δu(k) = u(k) - u(k - 1) appears at the digital control-
ler output. (For the PID-control algorithm, section 5.1, this is called the "velocity
algorithm".)
The following shows how, for various types of actuators and corresponding
actuator control, the desired position U(k) can be attained using the treated control
algorithms. The following groupings can be made:

Actuator control
At the digital computer output the required manipulated variable or its change is
only briefly present as a digitized value. For controlling the actuator this digital
manipulated variable has to be transformed into a matching signal using
a corresponding interface. Different control signals are required depending on the
type of actuator. One distinguishes between e.g. actuator feedforward control with
amplitude modulated, pulse width modulated and impulse number modulated
analog control signals, see Figure 29.1 and, for example, [5.33]. An absolute analog
manipulated variable UR is generally required for proportional and integral ac-
tuators with e.g. pneumatic, hydraulic or electrical auxiliary energy. For older
process computers often only one D/A converter was available for several ac-
tuators, and for each actuator one analog holding element had to be provided.
Today a D/A converter, preceded by a data register as intermediate storage, is
usually assigned to each analog output, Figure 29.1. Corresponding interface
modules are available as integrated circuits. The analog manipulated variable UR
is then transmitted as a d.c. voltage (e.g. 0 ... 10 V) or as an impressed current
(e.g. 0 ... 20 mA or 4 ... 20 mA) to the corresponding actuator. This is followed
by an analog signal conversion and a power amplification, leading to a suitable
quantity for controlling the pneumatic, hydraulic or electrical drive.
Integral actuators with constant speed are controlled through incremental analog
signals ΔUR in the form of pulse width modulated signals of a certain amplitude

Figure 29.1a-c Various schemes for the control of actuators. a) absolute analog value
output with register and DAC; b) incremental analog value output with register and counter
(pulse width modulated signal); c) incremental analog value output with register and
impulse transformer (impulse number modulated analog signal).

and with different signs. Figure 29.1 b) represents a frequently used scheme. A data
register holds the digital output of the computer. The following counter counts the
register value down towards zero with a fixed clock frequency, in positive or
negative direction. The analog output signal continues until the counter has reached
zero. In this case a DAC is not required.
Quantizing actuators, as for example step motors, are controlled by brief pulses
and therefore need at the input incremental analog signals ΔUR in the form of pulse
number modulated signals of determined amplitude and pulse duration, with
different signs. Figure 29.1 c) depicts a realization using a data register and an
impulse transformer. This is called "direct digital actuator control". Also in this
case a DAC is not required.

Response of actuators
Table 29.1 summarizes some properties of frequently used actuators. Here,
pneumatic, hydraulic and electrical actuators are considered because of their
significance in industrial applications. Because of the great variety available, only
a selection of types can be considered here.
Table 29.1 Properties of frequently used actuators

Pneumatic actuator:
- construction: membrane with spring; input signal energy: air pressure 0.2 ... 1.0 bar; D/A-conversion by: D/A-converter; analog transmitter: electrical/pneumatic; time behaviour: proportional with time lag (group I); power [mN]: 0.1 ... 2000; rising time: 1 ... 10 sec

Hydraulic actuator:
- piston without mechanical feedback; input signal energy: oil pressure; D/A-conversion by: D/A-converter; analog transmitter: electrical/hydraulic; time behaviour: integral (group II); power [mN]: 100 ... 750000; rising time: 1 ... 10 sec
- piston with mechanical feedback; time behaviour: proportional with time lag; power [mN]: 100 ... 750000; rising time: 1 ... 10 sec

Electromechanical actuator:
- d.c. shunt electromotor; input: d.c. voltage; D/A-conversion by: D/A-converter; analog transmitter: amplifier; time behaviour: integral with variable speed (group II); power [mN]: 10 ... 4000; rising time: 0.01 ... 60 sec
- a.c. two-phase electromotor; input: a.c. voltage; control unit with relays; time behaviour: three-point integral with constant speed (group III); power [mN]: 10 ... 4000; rising time: 1 ... 60 sec
- step motor; input: pulses; time behaviour: stepwise proportional (group IV); rising time: 0.02 ... 60 sec

With respect to the dynamic response the following grouping can be made, see
Table 29.1:
Group I: Proportional actuators
- proportional behaviour with lags of first (or higher) order
- pneumatic or hydraulic actuators with mechanical feedback
Group II: Integral actuators with varying speed
- linear integral behaviour
- hydraulic actuators without feedback,
  electrical actuators with speed controlled d.c. motors
Group III: Integral actuators with constant speed
- nonlinear integral behaviour
- electrical actuators with a.c. motors and three-way switches
Group IV: Actuators with quantization
- integral or proportional behaviour
- electrical stepping motors
Within the control range

UAmin ≤ UA ≤ UAmax   (29.1)

the actuators of groups I, II and IV behave approximately linearly. Feedback from
the actuator load, hysteresis effects and dead zones may, however, also lead to
nonlinear behaviour. The actuators of group III generally show nonlinear behavi-
our, which, however, can be linearized for small signal changes, as will be shown
later.

Position feedforward control and position feedback control


Methods for controlled positioning based on the output of a control algorithm
are considered for the case that u(k) is given by the control algorithm. In order to
attain uA(k) = u(k) the following control schemes exist, which are shown in Figure
29.2:

a) Position feedforward control
The output uR(k) of the DAC directly controls the actuator.

b) Analog position feedback control
UR acts as reference value on an analog position controller (positioner, fre-
quently found with pneumatic actuators).

c) Digital position feedback control
The position change uA(k) of the actuator is fed back to the CPU (by analog
measurement through the ADC) and the position deviation

ue(k) = u(k) - uA(k)

is formed. Mostly a P-controller is sufficient as a position control algorithm.

Figure 29.2a-d Various possibilities for actuator control (shown for an analog controlled
actuator). a) feedforward position control; b) analog feedback position control; c) digital
feedback position control; d) position feedback to the process control algorithm.

A PI-algorithm can be applied for proportional actuators. UR then controls the
actuator (with analog control via a DAC).

d) Position feedback to the process control algorithm
The position uA(k) of the actuator is fed back to the CPU. The recursive control
algorithm for the process calculates the present manipulated variable u(k) by
using the past effective positions

u(k) = -p1 uA(k - 1) - p2 uA(k - 2) - ... + q0 e(k) + q1 e(k - 1) + ...   (29.2)

Process and actuator have the same sampling time; uR(k) = u(k) is given to the
actuator.
Scheme a) is the simplest, but gives no feedback of the actuator response.
Therefore the agreement between required and effective position can be lost
with time. Schemes b), c) and d) presume position feedback uA(k). b) and c) have
the known advantages of a positioner, which acts such that the required position is
really attained [5.14]. c) in general requires a smaller sample time than that of the
process, which is an additional burden on the CPU. Scheme d) avoids the use of
a special position control algorithm. The calculation of u(k) is

based on the real positions of the actuator uA(k - 1), uA(k - 2), .... This is an
advantage for integral acting control algorithms if the lower or the upper position
constraint is reached: then no wind-up of the manipulated variable occurs.
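The wind-up avoidance of scheme d) can be illustrated with a small simulation.
The controller parameters, the saturation limits and the actuator model below are
illustrative, not taken from the text:

```python
def controller_output(e_hist, uA_hist, q, p):
    # u(k) = -p1 uA(k-1) - ... + q0 e(k) + q1 e(k-1) + ..., cf. (29.2)
    return (sum(qi * ei for qi, ei in zip(q, e_hist))
            - sum(pi * ui for pi, ui in zip(p, uA_hist)))

def actuator(u, u_min=-1.0, u_max=1.0):
    # saturating actuator: the actually achieved position uA(k)
    return min(max(u, u_min), u_max)

# PI-like controller (p1 = -1 realizes the integration via the fed-back position).
# With a persistent error the position sticks at the limit instead of winding up:
e_hist, uA_hist = [5.0, 5.0], [0.0, 0.0]
for _ in range(10):
    u = controller_output(e_hist, uA_hist, q=[0.8, -0.5], p=[-1.0])
    uA_hist = [actuator(u)] + uA_hist[:1]
print(uA_hist[0])
```

Because the controller state is built from the achieved positions uA, removing the
error lets the output leave the limit immediately, without discharging an
accumulated integrator first.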
Some special features of the actuators of groups I to IV will be briefly considered
in the following.

Proportional actuators
For the pneumatic and hydraulic proportional acting actuators (group I), the
change of the manipulated variable, u(k) or u'(k), calculated by the process or
position control algorithm can be used directly to control the actuator, scheme
a) in Figure 29.2. In the case of actuator position control, the schemes of Figure
29.2 b) or d) are applicable. Figure 29.3 explains the symbols used in Figure 29.2.

Integral actuators with varying speed


Integral actuators (group II) can be controlled by u(k) according to scheme a) if the
integral behaviour of the actuator is included in the process model used in the
design. Position control schemes b) and c) give proportional behaviour of the
position loop, so that u(k) can be applied as with proportional actuators. Direct
feedforward control as in a) can also be achieved by programming Δu(k). A dis-
crete PID-control algorithm then becomes

Δu(k) = q0 e(k) + q1 e(k - 1) + q2 e(k - 2)   (29.3)

or

GR(z) = Δu(z)/e(z) = q0 + q1 z^-1 + q2 z^-2 .

The integral actuator with transfer function G(s) = 1/(Ts) and zero-order hold
results in the z-transfer function

GSA(z) = uA(z)/Δu(z) = (T0/T) z^-1/(1 - z^-1) .   (29.4)

[Figure 29.3 Characteristic of a proportional acting actuator around the operating
point. UR: controller (DAC) output; UA: actuator position; ΔUR: change of the
controller output; ΔUA: change of the actuator position; position range
UAmin ... UAmax.]

Then control algorithm and actuator together yield the PID-transfer function with
dead time d = 1:

GR(z)·GSA(z) = uA(z)/e(z) = (T0/T) · (q0 + q1 z⁻¹ + q2 z⁻²) z⁻¹/(1 − z⁻¹).   (29.5)

The actuator then becomes part of the controller. Its integration time T has to be
taken into account when determining the controller parameters. (Note, for math-
ematical treatment of the control loop no sampler follows the actuator.) This
method also avoids a wind-up when reaching the constraint.
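The interplay of the algorithm (29.3) with the integrating actuator (29.4) can be illustrated by a minimal numerical sketch; the parameters q0, q1, q2, T and T0 below are illustrative assumptions, not values from the text:

```python
# Sketch: velocity-form PID increments (29.3) driving an integrating actuator (29.4).
# All parameter values are illustrative assumptions.

q0, q1, q2 = 2.0, -3.0, 1.2   # assumed PID parameters of (29.3)
T, T0 = 10.0, 1.0             # assumed actuator integration time and sample time

def pid_increment(e, e1, e2):
    """Delta-u of the velocity-form PID algorithm (29.3)."""
    return q0 * e + q1 * e1 + q2 * e2

def actuator_step(uA, du):
    """Discrete integrator (29.4): the actuator accumulates the increments."""
    return uA + (T0 / T) * du

# Drive with a constant control deviation e = 1 and watch uA ramp up:
uA, e1, e2 = 0.0, 0.0, 0.0
for k in range(5):
    du = pid_increment(1.0, e1, e2)
    uA = actuator_step(uA, du)
    e1, e2 = 1.0, e1
```

For a constant control deviation the actuator position keeps growing by (T0/T)(q0 + q1 + q2) per sample, i.e. the pair behaves like a controller with integral action, as expressed by (29.5).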

Integral actuators with constant speed


Actuators with integral behaviour and constant speed (group III) must be switched
by a three-way switch to give right-hand rotation, left-hand rotation or stop. The
first possibility of feedforward position control consists in connecting u(k) directly to
the three-way switch. The actuator then moves to the right-hand or the left-hand
side during the sample interval if |u(k)| > URt, where URt is the deadband, or stops if
|u(k)| < URt. The actuator speed must then be relatively small to avoid too large
a change. This may result in a poor control performance. To attain the same
position changes u(k) in a shorter time, the actuator speed must be increased and
the switch durations TA < To must be calculated and stored. To save computing
time in the CPU this is often performed in special actuator control devices [1.11].
This actuator control device can also be interpreted as a special D/A-converter
outputting rectangular pulses with quantized duration T A , see Figure 29.1 b. Figure
29.4 shows a simplified block diagram of the transfer behaviour of the actuator-
control device and the actuator.
Integral actuators with constant speed are described by a three-way switch and
a linear integrator, Table 29.1. If Ts is the settling time of the actuator, i.e. the
running time for the whole position range,

Ts = ΔUAmax / |dUA/dt|,   (29.6)

it follows for the position speed that UR = +UR0, UR = −UR0 or UR = 0 depending
on the three-way switch position, c.f. Figure 29.4,

dUA/dt = ΔUA/Δt = (ΔUAmax/Ts) · UR0/|UR0|.   (29.7)

Hence the position change per switch duration TA is

ΔUA(TA) = UA(TA) − UA(0) = ∫₀^TA (dUA/dt) dt = ΔUAmax · (TA/Ts) · UR0/|UR0|.   (29.8)

The three-way switch introduces a nonlinear behaviour. If the dead band
−URt ≤ UR ≤ URt of the switch is large enough, no limit cycle appears from this
nonlinearity and a stable steady state can be attained, c.f. [5.14, chapter 52]. To
[Figure 29.4 Simplified block diagram of integral acting actuators with constant
speed: actuator control device, three-way switch, integrator, position constraint.]

generate the position changes u(k) calculated by the control algorithm, the ac-
tuator control device has to produce pulses with amplitudes UR0, 0, −UR0 and the
switch duration TA(k), i.e. pulse modulated ternary signals, see Figure 29.4. This
introduces a further nonlinearity. The smallest realizable switch duration TA0 de-
termines the quantization unit ΔA of the actuator position

ΔA = ΔUAmax · TA0/Ts.   (29.9)

It is recommended to choose this as the quantization unit of a corresponding DAC
for position changes, ΔA = ΔDA, i.e. about 6 ... 8 bit. The smallest switch duration
must be large enough such that the motor does actuate. The required switch
duration TA(k) follows for the required position change from one sample point to
another,

Δu(k) = u(k) − u(k − 1) = j(k)·ΔA,   j = 1, 2, 3, ...,

from (29.8) as

TA(k) = j(k)·TA0 = Ts · (|UR0|/UR0) · Δu(k)/ΔUAmax,   (29.10)

which is for example transmitted as a pulse number j to the actuator control device.
The largest position change per sample time T0 is

ΔuAmax(T0) = ΔUAmax · T0/Ts.

Therefore position changes

Δu(k) ≤ ΔUAmax · T0/Ts   (29.11)

with quantization unit ΔA can be realized within one sample time. They result in
the ramps shown in Figure 29.4.
As these actuators with constant speed introduce nonlinearities into the loop, the
next section briefly discusses when the behaviour can be linearized.
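The conversion of a required position change Δu(k) into a quantized switch duration TA(k) according to (29.9) and (29.10) can be sketched as follows; the numerical values (Ts, TA0, ΔUAmax) are assumptions chosen only for illustration:

```python
# Sketch of (29.9)-(29.11): quantized switch durations for a constant-speed actuator.
# Ts, TA0 and dUA_max are assumed example values.
Ts = 30.0          # settling time for the whole position range [s]
TA0 = 0.0585       # smallest realizable switch duration [s]
dUA_max = 100.0    # whole position range [%]

# (29.9): position quantization unit produced by TA0
dA = dUA_max * TA0 / Ts

def switch_duration(du):
    """Round the requested change du to j steps of dA; return (j, TA) per (29.10)."""
    j = round(abs(du) / dA)
    return j, j * TA0

j, TA = switch_duration(5.0)   # request a 5 % position change
```

The actuator control device then outputs a pulse of amplitude UR0 (sign according to the direction of Δu) with the computed duration TA.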

a) Method of 'small time constants'


The rampwise step responses of the actuator with three-way switch and control
device can be described approximately by first-order time lags with the amplitude
dependent time constant

Tsm ≈ TA = Ts · ΔuA/ΔUAmax.   (29.12)

If these time constants are negligible compared with the process time constants,
a proportional action element without lag can be assumed and therefore a lin-
earized actuator. Process model simplification by the neglection of small time
constants was investigated in [3.4], [3.5]. For the case of continuous-time PID-
controllers, small time constants Tsm can be neglected for processes with equal time
constants T of order n = 2, 4, 6 or 8, assuming an error of ≤ 20 % of the quadratic
performance index for r = 0 if

Tsm/TΣ ≤ 0.015, 0.045, 0.083 or 0.13   (29.13)

where TΣ = nT is the sum of time constants, c.f. section 3.7.3. (29.13) and (29.12) give
position changes for which the actuator can be linearized:

ΔuA ≤ ΔUAmax · Tsm/Ts.   (29.14)

b) Method of 'amplitude density'


Another possibility is to estimate negligible small actuator action times from the
ratio of the amplitude densities of a ramp and a step function

κ = sin(ωTA/2) / (ωTA/2)   (29.15)

with TA the switch duration of the ramp function, c.f. [3.11]. If differences of
5 ... 20 %, i.e. κ = 0.95 ... 0.80, are allowed for the maximum frequency ωmax of
interest, it follows that

TA·ωmax ≤ 1.1 ... 2.25.   (29.16)

In general ωmax = ωs = π/T0 (Nyquist frequency). Hence

TA/T0 ≤ 0.35 ... 0.72   (29.17)

or with (29.14)

ΔuA/ΔUAmax ≤ (0.35 ... 0.72) · T0/Ts.   (29.18)
This leads to a "rule of thumb", using (29.17):
Actuators with constant speed can be linearized if the maximum switch
duration TA is about half of the sample time To.
Note, for the application of this rule the sample time has to be chosen suitably
such that ωmax = ωs.
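The limits quoted in (29.16) and (29.17) follow numerically from (29.15), as the following short check shows:

```python
import math

def kappa(w_TA):
    """Amplitude-density ratio of ramp vs. step, eq. (29.15), argument w*TA."""
    x = w_TA / 2.0
    return math.sin(x) / x

# 5 % and 20 % deviation levels, eq. (29.16):
assert abs(kappa(1.1) - 0.95) < 0.01
assert abs(kappa(2.25) - 0.80) < 0.01

# With w_max = pi/T0 (Nyquist frequency) this yields (29.17):
ratio_low, ratio_high = 1.1 / math.pi, 2.25 / math.pi   # about 0.35 ... 0.72
```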
Two examples in Table 29.2 show how large the changes in the manipulated
variables can be for a linearization of the nonlinear behaviour of the actuators.
Both methods, of course, give different values for the linearizable position ranges
which are relatively large.
If good control performance is required, the settling times of the actuators always
have to be adapted to the time characteristics of the control system. This leads to
small sampling times, so that for small changes around the operating point
constant speed actuators may be linearized for designing the control algorithm.
These considerations have referred to feedforward controlled integral actuators
with constant speed. However, analog or digital feedback control or a position
feedback to the control algorithm due to (29.2) can also be applied to electrical
actuators with constant speed.

Table 29.2 Examples for estimation of the linearizable manipulating range for constant
speed actuators.

                                                       Injection valve     Steam flow valve
                                                       (superheater)       (heat exchanger)
positional range ΔUAmax                          %     100 (= 50 °C)       100 (= 10 °C)
quantisation unit ΔA                                   0.00195             0.00787
                                                       (WL = 9 bit)        (WL = 7 bit)
settling time T95                                s     720                 50
sampling time T0                                 s     T95/24 = 30         T95/17 ≈ 3
rise time Ts                                     s     30                  5
quantisation time TA0 = Ts·ΔA                    ms    58.5                39
max. position change per T0:
  ΔuA(T0) = ΔUAmax·T0/Ts                         %     100                 60
method a): negligible time constant Tsm          s     ≈ 0.045 TΣ = 10     ≈ 0.03 TΣ = 0.6
  linearisable range ΔuA ≤ ΔUAmax·Tsm/Ts         %     33                  12
method b): linearisable range
  ΔuA ≤ 0.5·ΔUAmax·T0/Ts                         %     50                  30
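The derived entries of Table 29.2 follow directly from the preceding relations; the sketch below reproduces the injection valve (superheater) column from its basic data:

```python
# Reproducing the derived superheater entries of Table 29.2.
dUA_max = 100.0   # positional range [%]
dA = 0.00195      # quantisation unit (word length 9 bit, about 2**-9)
Ts = 30.0         # rise time [s]
T0 = 30.0         # sampling time [s]

TA0_ms = Ts * dA * 1000.0           # smallest switch duration from (29.9): 58.5 ms
du_per_T0 = dUA_max * T0 / Ts       # max. position change per sample, (29.11): 100 %
du_lin_b = 0.5 * dUA_max * T0 / Ts  # method b) linearisable range, (29.18): 50 %
```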

Proportional actuators with quantization (stepping motors)


Actuators with quantization (group IV) such as electrical stepping motors are well
suited to process computers or digital controllers. They have a behaviour which
is proportional to the pulse count and need no DAC. An amplifier transforms the
low energy pulses of the digital computer to the output to higher energy pulses
which excite stepwise the stator windings. The angle rotation per step varies from
1 degree to 240 degrees. The smaller the step angle the more coils are required and
the smaller the torque. Both single steps and stepping frequencies up to the
kHz-range are attainable. For low frequencies the stepping motor can be stopped
within one step. However, this is not possible at the higher frequencies for which
the motor can be regarded as a synchronous machine, because of the mechanical
inertia. If exact positioning is required, as for example in the case of feedforward
actuator control, the moment of inertia of the actuator and the step frequency must
be kept small.
Positioning can be accelerated by using digital feedback [2.18]. When connected
to digital controllers and process computers, stepping motors are advantageous,
especially for small actuator power requirements; no position feedback is needed in
the case of exact positioning.

Remarks on nonlinear control


In the previous chapters small signal changes were presupposed for the design of
control algorithms, so that in many cases the control system could be considered as
being linear. As applications show, the linear control algorithms gained in this way
can also be used successfully for moderately nonlinear control systems which have unique

static characteristics. An even more distinctly nonlinear behaviour of the control
loop results if adaptive control algorithms according to chapter 26 are applied.
Hence, mainly the strong nonlinearities in the control loop have to be taken into
account when designing control systems. This includes nonlinear static character-
istics such as backlash, hysteresis and saturations which may occur with some
actuators, and distinctly nonlinear processes as for example chemical reactors or
pH-control systems.
These nonlinear control problems cannot be treated in this volume, since they
cannot be solved without detailed process knowledge and they comprise
many individual cases, see e.g. [29.1, 29.3]. Adaptive control systems for nonlinear
processes are treated in [29.5].
Here, only one last remark will be made concerning the nonlinearities which
occur in all control systems with large signal changes: saturations of signals and
signal speed. In digital control systems signal saturations or limited signal ranges
may occur at the following places: measured value; finite word length in the ADC,
in the CPU and in the DAC; actuator; process. Often the signal constraint first
appears at the manipulated variable:
U min ~ U ~ U max .

Saturations of signal velocities, that means limited signal velocities, mainly occur in
actuators:

|dUA/dt| ≤ (dUA/dt)max.
Stability is not affected by these constraints, provided the control loop is stable
without restrictions, since the describing function N(iω) of the constraint charac-
teristic always satisfies |N(iω)| ≤ 1, [5.14, 29.4].
For control algorithms with integral action special measures are necessary to
avoid the wind-up of control deviations when reaching a signal saturation (con-
straint of manipulated variables). If the sign of the control deviation changes it
needs a relatively long time to restore the integrator during which the loop remains
at saturation.
To avoid this wind-up the manipulating range can be considered in the control
program and inserted in the recursive control algorithm

u(k) = −p1 u(k − 1) − p2 u(k − 2) − ... + q0 e(k) + q1 e(k − 1) + ...

by using, when the constraint uAmax or uAmin is reached, the true positions uAmax or
uAmin instead of the calculated u(k − 1), u(k − 2), .... This, in general, presupposes at
least an approximate agreement of the programmed and the real manipulated range.
Another possibility for the feedback of the real actuator position was already
given in (29.2). Also compare section 5.8.
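The described measure, storing the saturated (true) position in place of the computed one in the recursion, can be sketched for a PI algorithm; the parameters and the manipulating range below are illustrative assumptions:

```python
# Sketch: anti-windup by storing the saturated manipulated variable in the recursion.
# PI parameters and the manipulating range are assumed example values.
q0, q1 = 1.0, -0.8           # PI numerator parameters
p1 = -1.0                    # denominator parameter (integral action)
u_min, u_max = 0.0, 100.0    # manipulating range

def pi_step(u1, e, e1):
    """One step u(k) = -p1 u(k-1) + q0 e(k) + q1 e(k-1); the returned value is
    saturated, and the caller stores it as u(k-1), so no wind-up occurs."""
    u = -p1 * u1 + q0 * e + q1 * e1
    return min(max(u, u_min), u_max)

u, e1 = 0.0, 0.0
for _ in range(10):               # a large deviation drives u into the upper limit
    u = pi_step(u, 50.0, e1)
    e1 = 50.0
u_after = pi_step(u, -10.0, e1)   # sign change: u leaves the constraint at once
```

Without storing the saturated value, u(k − 1) would continue to grow beyond the constraint, and after the sign change of the control deviation the loop would remain at the limit for many samples.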
30 Computer-aided Control Algorithm Design

30.1 Program Packages

Conventionally, analog and digital control algorithms of PID-type are in practice
designed and tuned by trial and error, supported by rules of thumb and sometimes
by simulation studies. For processes with
- little knowledge of the internal behaviour
- difficult dynamic behaviour
- strong couplings in multivariable systems
- large dimension
- long settling times
- high control performance requirements
this procedure is generally quite time consuming and rarely results in the best
possible control performance. Better control in a shorter time can be achieved by
using computer-aided design methods. The use of digital computers then allows the
application of modern design methods, which again leads to extended or even new
control methods. Since the control design problem mostly cannot be solved in one
calculation step but only after comparison of several alternatives, a computer
batch process design is not expedient. An efficient application of modern design
methods, which takes into account the necessary pre-conditions, can be attained by
an interactive dialogue with graphic representation. Instead of individual calcu-
lation programs, program packages which are easy to learn and not too extensive
are of particular interest.
If a better control performance than with conventional methods is desired, a more
profound knowledge of the process behaviour in the form of mathematical process
models is of course needed. Since the determination of these process models can be
quite complex for the processes mentioned above, computer-aided control
algorithm design becomes even more attractive if the process models are also
provided by computer-aided procedures. Furthermore, the form and accuracy of the
process models are mutually connected with the control algorithm. Process models
and control algorithms are therefore considered to be a coherent overall system.
This is why this chapter treats the computer-aided process modelling (using process
identification methods) as well as the computer-aided design of digital control
systems.

[Figure 30.1 Design steps for control systems with high performance requirements
and relevant process modelling. The design steps are: 1. Requirement specification
(operating range, main disturbances, control variables, block behaviour, control
performance); 2. Specification of devices (sensors/actuators, man/machine interface,
automation devices, cabling); 3. Design of control structure (controlled/manipulated
variables, control loops SISO/MISO, multi-level control); 4. Design of control
algorithms (lower level, higher level); 5. Parameter tuning. Column a: required
model performance; columns b, c: reachable model performance for good and for
small process knowledge.]

For a given process the design of control systems is of course performed in several
steps, five of which are represented in Figure 30.1.
For the first two steps, which are the definition of the requirements and the
specification of the devices, rough model outlines are generally sufficient. For the
control structure design, models are required which describe the process statics and

the process dynamics at least in a quantitative way. The design of the feedforward
and feedback control algorithms presupposes quantitative models which have to be
rather precise at least for tuning the parameters. As the control design progresses,
the required performance of the process models also increases, see Figure 30.1,
column a.

30.1.1 Modelling through theoretical modelling or identification


There are two possibilities to obtain mathematical process models: theoretical
modelling or experimental modelling (identification), section 3.7. For this either
known physical laws or measured signals are used. Theoretical modelling presup-
poses the knowledge of the essential physical laws of the process. Here, it is of
significant advantage that modelling is already feasible during the planning period
and that the influence of the process design parameters can be made out. For
process identification the process has to be ready for operation and appropriate
devices of signal recording and suitable identification methods have to be available.
In order to point out the fields of application of the two possibilities mentioned
above, the attainable model accuracy and the required effort will be regarded more
closely [30.9]. The intended use of the models is the design of control systems.
For theoretical modelling the knowledge of the internal structure has to be quite
profound in order to obtain a satisfying model performance. Applying identifica-
tion methods, however, less knowledge of the internal structure is needed (at least
for linearizable processes) to attain models with the same or better control per-
formance. Three classes of processes can be distinguished:
I: Processes with good knowledge of the internal processes (e.g. mechanical and
electrical processes)
II: Processes with less knowledge of the internal processes (e.g. processes in
power-engineering)
III: Processes with poor knowledge of the internal processes (e.g. processes in
chemical engineering)
By applying process identification, a higher model performance can often be reached,
especially for linearizable processes in power and chemical engineering. If re-
quired, this is also possible for mechanical and electrical processes. In industrial
application the invested effort plays of course an important role, since it has an
essential impact on the final costs.
If a better model performance is desired for processes in power and chemical
engineering, process identification is often the most efficient method. During the last
ten to twenty years research has, of course, continuously broadened the knowledge of
these processes. However, there are still many processes with relatively little quanti-
tative information on the internal processes (e.g. drying, grinding, vulcanizing,
extruding, chemical reactions, metallurgical processes, multi-phase processes, con-
vective flows, etc.). These processes can be classified as difficult or complicated
processes. For larger plants, moreover, the number of process parts and their
couplings also have to be taken into account, hence the dimension (e.g. order number,
number of inputs and outputs).

These considerations lead to the fields of application of the two methods of
modelling during the controller design, Figure 30.1, columns b and c. For both
good and poor process knowledge the first formulations of theoretical process
modelling are sufficient for items 1 and 2 (definition of the independent and
dependent variables, signal flow charts, balance equations). The control structure
design can be effected through theoretical models, provided the process knowledge
is good. From this point on, difficult processes may require identification methods.
The design of feedforward and feedback control algorithms using theoretical
modelling is generally only possible for processes with good process knowledge. For
most processes, process identification leads to the most precise parameter tuning
results. Hence, if the control design is performed consistently, both methods of
modelling are applied and complement each other. If a very good control
performance is required, one generally starts with theoretical modelling, and during
the control design it is recommended to turn sooner or later to process
identification.
In the following the last three items of the control design will be treated, for
which, at least for difficult processes, the combination with process identification
methods is expedient.

30.1.2 Program packages for process identification


The flow of a process identification from test signal generation to the final model
comprises a series of procedures which can be summarized in a program package.
Figure 30.2 depicts some tasks which occur in the program package OLID, using
a single-input/single-output process as an example [30.2, 30.4, 30.9]. Here, the
interactive dialogue between operator and computer during all phases of the
operation is of significant importance. The operator is able to view, sometimes even
during the measurement operation, the developing models and have them graphi-
cally represented. The operator can change the test signal parameters, apply
various parameter estimation methods, select order and dead time automatically
or by himself, verify the model performance, etc. The basis of this is a continuous
question-answer catalogue.
Only about 20 % of the memory capacity is occupied by the actual
parameter estimation methods. The program packages are structured in such
a way that they can be quickly handled and details of the mathematical models
need not necessarily be known. This also holds for program packages for linear
multivariable and nonlinear processes [30.9].
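The numerical core of such a package, the recursive least squares parameter estimation of a difference equation model, can be sketched in a few lines; this is a minimal version without the correlation function preprocessing of RCOR-LS, and the model structure and test data are purely illustrative:

```python
import numpy as np

def rls_identify(u, y, m=1, lam=1.0):
    """Recursive least squares for y(k) + a1 y(k-1) + ... = b1 u(k-1) + ...
    Returns the parameter vector [a1..am, b1..bm]."""
    theta = np.zeros(2 * m)
    P = 1e4 * np.eye(2 * m)                  # large initial covariance
    for k in range(m, len(y)):
        phi = np.concatenate([-y[k - m:k][::-1], u[k - m:k][::-1]])
        e = y[k] - phi @ theta               # equation error
        g = P @ phi / (lam + phi @ P @ phi)  # gain vector
        theta = theta + g * e
        P = (P - np.outer(g, phi @ P)) / lam
    return theta

# Verify on data generated by a known model y(k) = 0.9 y(k-1) + 0.5 u(k-1):
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1]
a1, b1 = rls_identify(u, y, m=1)
```

With noise-free data the estimates converge to a1 = −0.9 and b1 = 0.5, the true parameters of the generating model.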

30.1.3 Program packages for control algorithm design


Based on process models, various control system structures and control algorithms
can be designed with computer aid if a computer in on-line operation is used:
1. assumption of a control scheme
- feedforward and feedback control systems
- single loop, cascaded loops, multiple loops
[Figure 30.2 Organization of an on-line identification with a process computer with
interactive dialogue (program package OLID). Phases of the interactive dialogue:
experiment preparation (input/output selection, sample time, test signal parameters,
drift elimination, identification method); measurement (test signal generation,
recursive parameter estimation during the run); data processing (automatic or
manual order and dead time selection); model verification (measured and calculated
correlation functions, error signal, model changes).]
[Figure 30.3 Organization of a computer-aided design of control algorithms with
interactive dialogue (CADCA). Phases of the interactive dialogue: selection of
control structure (configuration, controller type, e.g. deadbeat, PI, PID or state
controller); synthesis (calculation or optimization of the controller parameters,
weighting of the controls, Riccati equation); simulation (performance measures,
reference and disturbance responses, modification); on-line operation (selected
controller configuration, constraints, reference values, performance measures).]
272 30 Computer-aided Control Algorithm Design

2. Transfer of process models to the program package for the controller design
- from the process computer resident identification program package
- from other sources (e.g. theoretical modelling)
3. Design of various control algorithms, e.g.
- parameter-optimized controllers (PI, PID, PID2)
- deadbeat controller
- state controller with observer
- minimum variance controller
4. Simulation of the system behaviour
5. Modification of the control algorithms and final selection
6. Control algorithm implementation in the process computer
- the control algorithms and their parameters are transferred to the real-time
operating system of the process computer
7. Setting of operation conditions
- restrictions on the manipulated variables
- reference variables
8. Closed-loop operation
- Command for closed-loop operation with start-up conditions
- The closed-loop operation is supervised and compared with the simulated
behaviour
- algorithms can be modified, if required
For off-line design, items 1. to 5. are sufficient.
This computer-aided design method has the following advantages:
- Automation of the design and the start-up of digital control.
- Simulation of the control system with various control schemes and control
algorithms without disturbing the process.
- Saving of implementation and start-up time, especially for processes with large
settling time, complicated behaviour or strong interactions.
- Improvement of the control performance by better-tuned simple algorithms or
more sophisticated control algorithms.
- Determination of the dependence of controller parameters on the operating
point. Therefore feedforward adaptive controllers can be quickly designed.
It is expedient to summarize the individual design tasks in program packages.
Figure 30.3 shows some tasks for single-input/single-output processes using the
program package CADCA [30.5, 30.7] as an example. With this program about ten
different control algorithms can be designed. The process model is put in as
a difference or vector difference equation. With this the smallest sampling time is
defined. After the program's question-answer dialogue, the selected control algo-
rithms are designed in an interactive dialogue. Then various design parameters have
to be chosen, e.g. manipulated variable weighting factor, manipulated variable
limits, state variable weighting factors. The course of the manipulated and control-
led variable is simulated and put out for selected input signals (reference value).
Various control performance measures and the location of poles and zeros can be
observed. After modification and comparison the operator can decide on one control

[Figure 30.4 Signal flow during process identification (a) and with four computer-
aided designed control algorithms (b): state controller with observer SC, PID-
controller 3PC-2, deadbeat controller DB(v), and deadbeat controller of increased
order DB(v+1). Process: Gp(s) = 1.2/[(1 + 4.2s)(1 + s)(1 + 0.9s)], T0 = 2 s.]

algorithm and can test it on-line with the same program package. A special analysis
program is available for examining the behaviour with the real process.
Figure 30.4 shows the signal flow during the identification with OLID and for the
closed control loop after application of CADCA for a simulated analog process.
The advantages of computer-aided design are especially evident if interconnected
control systems and multivariable control systems are to be designed. This is
described e.g. in [30.8, 30.10, 30.13].
An advantage of the separate identification, design and control phases is that
arbitrary identification and control methods can be combined, provided the
process is time invariant. In particular, the process behaviour is supposed not to
change essentially between identification and control action. Otherwise self-tuning
control algorithms are to be used, see chapter 31.

30.2 Case Studies

30.2.1 Digital Control of a Superheater


The application of identification methods and of the computer-aided controller
design will be demonstrated on the control of a super-heater of a simulated Benson
steam generator.
Figure 30.5 shows the schematic representation of this steam generator which
was simulated with simplified models using analog computers. Here, only the
superheater final temperature control is considered, which is a cascade control.
First, the slave injection cooler control is designed. Figure 30.6a depicts the
identification of the process with OLID, consisting of an electrical valve and an
injection cooler. Figure 30.6b shows the control using a CADCA-designed PI-
controller. This is followed by the identification of the control system for the main
controller which is composed of the injection cooler control and the final super-
heater, Figure 30.7a. Here, stochastic steam flow disturbances were assumed which
significantly disturb the final temperature. After approximately 50 minutes of
identification time, a third-order model (m = 3; d = 0) was obtained which was
used for the design of a state controller with observer. Figure 30.7b represents the

[Figure 30.5 Schematic representation of the temperature and feed water control of
a Benson steam generator (N = 180 MW; MD = 550 t/h; p = 176 bar; ϑ = 540 °C).]

[Figure 30.6a, b Slave injection cooler control (controller R31). Simulation without
noise signals. a Process identification with OLID (T0 = 3 s; m = 2; d = 0; PRBS
clock time: 2); b PI-control for a change in the reference variable, CADCA design,
r = 2; q0 = 0.66; q1 = −0.50.]

control behaviour attained for a reference value step with noise signals. Figure 30.7c
shows the same, however without noise signals.
Multivariable control systems can be designed using the same principles. [30.10,
2.23] describe an example with simultaneous excitation of two manipulated vari-
ables (injection water and fuel flow) used for identification and state control of
steam outlet temperature and steam pressure.

30.2.2 Digital Control of a Heat Exchanger


Figure 30.8 is the schematic representation of a steam-heated heat exchanger. The
process input is the position U of a pneumatically driven steam valve, and the
process output Y is the water temperature as measured with thermocouples. Figure
30.9 shows the manipulated and control variable during on-line identification with
a pseudo-random binary signal (PRBS) as a test signal. As on-line identification
method, the "recursive correlation and least squares parameter estimation"
(RCOR-LS) of the program package OLID was used. The model order search pro-
gram resulted in m = 3, d = 1, see [30.1, 30.2]. The identified model followed as
Gp(z) = y(z)/u(z) = (−0.0274 z⁻¹ − 0.0692 z⁻² − 0.0218 z⁻³) /
        (1 − 1.2329 z⁻¹ + 0.478 z⁻² − 0.1276 z⁻³) · z⁻¹
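A model of this form can be checked by simulating its difference equation. The sketch below is generic; note that the denominator values 0.478 and −0.1276 used here are a reading of the ambiguously printed coefficients, chosen to be consistent with the deadbeat parameters of Figure 30.10b:

```python
def simulate(b, a, u, d=0):
    """Simulate y(k) = -a1 y(k-1) - ... - am y(k-m)
                       + b1 u(k-d-1) + ... + bm u(k-d-m)
    for Gp(z) = y(z)/u(z) = B(z^-1)/A(z^-1) * z^-d,
    with a = [a1..am] and b = [b1..bm]."""
    y = [0.0] * len(u)
    for k in range(len(u)):
        acc = 0.0
        for i, ai in enumerate(a, start=1):
            if k - i >= 0:
                acc -= ai * y[k - i]
        for j, bj in enumerate(b, start=1):
            if k - d - j >= 0:
                acc += bj * u[k - d - j]
        y[k] = acc
    return y

# Step response of the identified heat exchanger model (d = 1):
b = [-0.0274, -0.0692, -0.0218]
a = [-1.2329, 0.478, -0.1276]   # assumed reading of the printed denominator
y = simulate(b, a, [1.0] * 100, d=1)
```

The simulated step response settles towards the static gain B(1)/A(1), which is about −1.01 for these coefficients.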
[Figure 30.7a-c Superheater final temperature control (R32) with slave injection
cooler temperature control (simulation with steam flow disturbances). a Process
identification with OLID (T0 = 30 s; m = 3; d = 0; PRBS clock time: 1); b state
control for a step change in the reference value, CADCA design; c as b, however
with switched-off noise signal.]

[Figure 30.8 Steam-heated heat exchanger: steam valve and steam pipe, condensate
outlet, water valve, cold water inlet and warm water outlet. L = 2.5 m, tube
diameter d = 25 mm. Input: change ΔU of the steam valve. Output: change ΔY of
the water temperature.]

[Figure 30.9 Process input and output signal during on-line identification. PRBS:
period N = 31, clock time λ = 1, water flow 3100 kg/h. Sample time: T0 = 3 s.]

In Figure 30.10 the closed loop response to steps in the reference value is shown for
various control algorithms designed with CADCA. Because of the nonlinear
behaviour of the valve and the heat exchanger, the closed loop response depends on
the direction of the step change. However, satisfactory agreement (on average)
between the simulated and the real response is obtained.
Figure 30.10a Closed-loop response for four CADCA-designed control algorithms based on an identified process model:
parameter-optimized controller 3PC-3 (PID) and state controller with observer SC. Reference variable
steps in both directions. --- measured response; ... simulated response (during design phase).

        k1        k2        k3        k4      k5      r
SC      -3.9622   -3.4217   -2.7874   0.4472  1.3372  0.04

        q0        q1       q2        p0      p1       r
3PC-3   -1.7125   2.3578   -0.7781   1.0000  -1.0000  0.01
Figure 30.10b Closed-loop response for four CADCA-designed control algorithms based on an identified process model:
deadbeat controller DB(v) and deadbeat controller of increased order DB(v+1). Reference variable
steps in both directions. --- measured response; ... simulated responses (during design phase).
          q0       q1       q2       q3       q4      q5      p0      p1      p2       p3       p4       p5
DB(v)     -8.4448  10.4119  -4.0384   1.0775  0.0000          1.0000  0.0000  -0.2317  -0.5841  -0.1842
DB(v+1)   -3.8612   0.1770   3.8048  -1.6993  0.5848  0.0000  1.0000  0.0000  -0.1059  -0.3928  -0.4013  -0.1000
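The DB(v) row of this table can be reproduced from the identified heat-exchanger model with the deadbeat design rule of Volume 1: q0 = 1/(b1+...+bm), q_i = q0·a_i, and p_(i+d) = -q0·b_i with p0 = 1. A sketch of this check, with the model's a2 and a3 reconstructed as 0.4782 and -0.1276 (the printed digits are garbled):

```python
# Deadbeat controller DB(v): G_R(z) = q0*A(z^-1) / (1 - q0*B(z^-1)*z^-d),
# i.e. q_i = q0*a_i with q0 = 1/sum(b_i), and p_(i+d) = -q0*b_i (p0 = 1).
# Model coefficients are a reconstruction consistent with the table above.
def deadbeat_v(a, b, d):
    q0 = 1.0 / sum(b)
    q = [q0] + [q0 * ai for ai in a]                  # numerator q0*A(z^-1)
    p = [1.0] + [0.0] * d + [-q0 * bi for bi in b]    # 1 - q0*B(z^-1)*z^-d
    return q, p

a = [-1.2329, 0.4782, -0.1276]
b = [-0.0274, -0.0692, -0.0218]
q, p = deadbeat_v(a, b, d=1)
# q is approximately [-8.446, 10.413, -4.039, 1.078]
# p is approximately [1, 0, -0.231, -0.584, -0.184]
```

The computed values agree with the tabulated DB(v) parameters to within the rounding of the printed model coefficients.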
280 30 Computer-aided Control Algorithm Design

30.2.3 Digital Control of a Rotary Dryer


In sugar production sugar beet cosettes (pulp) are a by-product which is used as
cattle fodder. This pulp has to be dried thermally in a rotary dryer. Properly dried
pulp should contain about 10% moisture or 90% dry substance. Below 86%
dangerous internal heating occurs during storage and the nutrients decompose.
With the dry substance exceeding 90% the overdried pulp becomes too brittle and
the payback falls because of higher fuel consumption and loss in weight. The goal is
therefore to keep the dry substance within a tolerance range of about ± 1%.
Figure 30.11 shows the schematic of a rotary dryer. The oven is heated by oil.
Flue gases from a steam boiler are mixed with the combustion gases to cool parts of
the oven. An exhaust fan sucks the gases through the dryer. The wet pulp (pressed
pulp with about 75-85% moisture content) is fed in by a screw conveyor with
variable speed. The drum is fitted inside with cross-shaped shelves so as to
distribute the pulp within the drum. Because of the rotation of the drum (about
1.5 rpm) the pulp drops from one shelf to another. At the end of the drum another
screw conveyor transports the dried pulp to an elevator. The heat transmission is
performed mainly by convection. Three sections of drying can be distinguished. In
the first section the evaporation of water takes place at the surface of the pulp, in
the second section the evaporation zone changes to the inner parts of the cosettes
and in the third section the vapour pressure within the cosettes becomes less than
the saturated vapour pressure because of the pulp's hygroscopic properties.
The control of the drying process is difficult because of its nonminimum phase
behaviour with dead times of several minutes, large settling time (about 1 hour),
large variations of the water content of the wet pulp and unmeasurable changes of
the drying properties of the pulp. Because of these reasons the rotary dryers are
mostly controlled manually partly supported by analog control of some temperat-
ures. However, the control performance achieved is unsatisfactory with tolerances
of more than ±2.5%, c.f. Figure 30.12. Figure 30.12 shows a block diagram of the
plant. The main controlled variable is the dry substance of the dried pulp. The gas
temperatures at the oven outlet, in the middle of the drum and in the dryer exhaust
can be used as auxiliary controlled variables. The main manipulated variable is the
fuel flow. The speed of the wet pulp screw conveyor can be used as an auxiliary
manipulated variable. The water content of the pressed pulp is the main disturb-
ance variable.
The goal was to improve the control performance using a computer. Because of
the complicated dynamical behaviour and the large settling time, computer aided
design of the control system was preferred. The required mathematical models of
the plant cannot be obtained by theoretical model building as the knowledge of the
physical laws describing the heat and mass transfer and the pulp motion is
insufficient. Therefore a better way is process identification. Because of the strong
disturbances step response measurements give hardly any information on the
process dynamics. Hence parameter estimation using PRBS as input was applied
[30.11], [30.13]. Special digital data processing equipment based on a microcom-
puter was used, generating the PRBS, sampling and digitizing the data for storage.
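Generating such a PRBS is straightforward with a linear feedback shift register; the following sketch produces a maximum-length sequence of period N = 2^5 - 1 = 31, the period used in Figure 30.9 (amplitude scaling and the clock interval λ are applied separately, and the register length is an illustrative choice):

```python
# Maximum-length PRBS from a 5-stage linear feedback shift register
# (feedback polynomial x^5 + x^3 + 1); period N = 2**5 - 1 = 31.
def prbs(n_stages=5, taps=(5, 3), length=31):
    reg = [1] * n_stages                   # any nonzero initial state
    out = []
    for _ in range(length):
        out.append(1 if reg[-1] else -1)   # map 0/1 to symmetric +/-1 amplitude
        feedback = reg[taps[0] - 1] ^ reg[taps[1] - 1]
        reg = [feedback] + reg[:-1]
    return out

test_signal = prbs(length=62)
# the sequence repeats with period 31: test_signal[:31] == test_signal[31:]
```

Because the feedback polynomial is primitive, every nonzero register state is visited once per period, which gives the PRBS its white-noise-like correlation properties that make it a good test signal under strong disturbances.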
Figure 30.11 Schematic of the rotary dryer (Sueddeutsche Zucker AG, Werk Plattling).
Drum diameter D_D = 4.6 m; drum length L_D = 21.0 m; oil mass flow M_Fmax ≈ 4.5 t/h;
wet pulp mass flow M_PSmax ≈ 50 t/h; flue gas mass flow M_KGmax ≈ 5000 Nm³/h;
temperatures ϑ_O ≈ 1050 °C, ϑ_M ≈ 140-210 °C, ϑ_A ≈ 110-155 °C. Measured variables:
mass flow of the fuel oil M_F, gas temperature at the oven outlet, revolutions of the screw
conveyor, water content of the pressed pulp, gas temperature in the middle of the drum,
gas temperature at the exhaust, dry substance percentage.

Figure 30.12 Block diagram of the rotary dryer. Inputs: mass flow of the fuel M_F;
revolutions of the screw conveyor n; water content of the pressed pulp. Outputs:
gas temperature at the oven outlet ϑ_O; gas temperature in the middle of the drum ϑ_M;
gas temperature at the plant outlet ϑ_A; dry substance ψ_TS.

The evaluation of the data was performed off-line using the parameter estimation
method RCOR-LS of the program package OLID-SISO. The initial identification
experiments have shown that the following values are suitable:
sample time T0 = 3 min
clock interval λ = 4
fuel amplitude ΔM_F = 0.25 t/h
speed amplitude Δn = 1 rpm
The required identification times varied between 112 T0 and 335 T0, which is 5.6 to
16.8 hours. Figure 30.13 shows one example of an identification experiment. The
step responses of the identified models are presented in Figure 30.14. The settling
time is shortest for the oven outlet temperature and increases considerably for the
Figure 30.13 Data records of an identification experiment with fuel flow changes
(recorded signals: ϑ_O, ϑ_M, ϑ_A and ψ_TS).

gas temperatures in the middle and at the end of the dryer. The dry substance has,
with fuel flow as input, a dead time of 6 min, an all-pass behaviour with undershoot
which lasts for about 30 min, and a 95% settling time of about 2.5 h. This behaviour
is one of the main reasons for the control problem. With the screw conveyor speed
as input the dry substance shows a dead time of 18 min. The estimated model
orders and the dead times are given in Figure 30.14.

Figure 30.14 Step responses of the identified models. n = 13 rpm; T0 = 3 min. a change of
the fuel flow ΔM_F = 1 t/h; b change of the screw conveyor speed Δn = 1 rpm. The estimated
model orders and dead times indicated in the figure range from m = 1, d = 1 for the gas
temperatures to m = 5, d = 2 (fuel flow input) and m = 5, d = 6 (screw speed input) for the
dry substance.

Based on the identified process models various control systems were designed
using the program package CADCA [30.1], [30.3]. The manipulated variable is the
fuel flow and the main controlled variable the dry substance. If only the dry
substance is fed back, control is poor; feedback of the gas temperatures ϑ_M and ϑ_A
improves it considerably. Figure 30.15 shows the simulated responses to step
changes of the screw conveyor's speed for a double cascade control system with
3 PID-control algorithms and a state controller with observer. The better control
(better damped and with fewer oscillations) is obtained with state control. Control
can be improved considerably using a second-order feedforward control algorithm
G_F1, measuring the speed n and manipulating the fuel flow, Figure 30.16. For
practical reasons the cascaded control system was finally implemented on a
SIEMENS 310K process computer (easy transfer to other dryers, transparency for
the operator, computer manufacturer's program package SIMAT C). A block
diagram of the implemented control system is shown in Figure 30.16 which also
shows the positional control algorithms for the actuators, a feedforward control
Gn for the case where a reliable water content measurement of the wet pulp is
possible, a feedforward controller GF8 to change the speed of the screw conveyor
such that the total water mass flow is kept constant and a feedforward controller
GF7 of differential type to reduce the nonminimum phase behaviour of the dry
substance by changing the boiler flue gases such that the gas flow through the dryer
is initially kept constant after a fuel flow change. Figure 30.17a shows signal
records for manual/analog control (original status) and Figure 30.17b for digital
cascaded control with feedforward control G_F1. Although the pulp mass flow M_PS
is fairly constant, the dry substance oscillates within a tolerance of about ±2.5%
for manual/analog control. With digital control the tolerance is reduced to about
±1% for larger pulp mass flow disturbances than in Figure 30.17a, or ±0.5% for
periods with fairly constant pulp mass flow. This shows a significant improvement
in performance using digital control.
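The rule behind the feedforward controller G_F8 described above can be illustrated with a static sketch (function name and numbers are hypothetical, not taken from the plant): the water mass flow entering the dryer is roughly proportional to conveyor speed times water content, so the speed is scaled inversely to the measured water content.

```python
# Static sketch of the G_F8 feedforward idea: keep the water mass flow of the
# wet pulp constant by scaling the screw conveyor speed with the inverse of
# the measured water content (all names and numbers are illustrative).
def screw_speed(n_nominal, w_nominal, w_measured):
    # water mass flow ~ conveyor speed * water content
    return n_nominal * w_nominal / w_measured

n = screw_speed(n_nominal=12.0, w_nominal=0.80, w_measured=0.75)
# the water content dropped, so the speed is raised (here to 12.8 rpm)
```

A dynamic implementation would additionally filter the water content measurement; the static proportionality is only the core idea.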
A report of the practical experience with the digital control of three rotary dryers
with one process computer has shown that the fuel saving because of better control
was about 2.5%, which is about 329 tons of oil annually [30.14]. This rotary dryer
is a typical example of a process with complicated internal behaviour and large
settling time for which manual tuning of controller parameters did not result in
satisfactory control. The process identification and computer aided design of
various control systems led to a good insight into the process' behaviour and
allowed the simulation and comparison of various control systems. As the rotary
dryer generally operates at full load, fixed control algorithms are suitable and an
adaptive algorithm is not required.
The described methods of computer-aided design with process identification
have also been successfully applied to other processes, see [30.9]:
- drying plants
- plastic tube extrusion
- material test machine
- motor test bench
Figure 30.15a, b Simulated control behaviour of the rotary dryer for step changes of the screw conveyor speed of Δn = 1 rpm, measuring the
dry substance and the flue gas temperatures ϑ_M and ϑ_A. T0 = 3 min. a without feedforward control; b with feedforward control G_F1.
Signals shown: ΔM_F, Δϑ_M, Δϑ_A and Δψ_TS for cascade control, state control and without control.
Figure 30.16 Block diagram of the cascaded control system implemented on a process computer.
Figure 30.17a, b Signal records of the rotary dryer. Signals are defined in Figure 30.11; M_M:
molasses mass flow. a manual control; b digital control with cascaded control system and
feedforward controller G_F1.

Final remarks on the application of the computer-aided control design with process
identification.
Process identification with subsequent computer-aided control design is especially
recommended for complicated or complex processes which still require initial basic
research on the choice of the controller structure and the type of control
algorithms. Various process identification and control design methods can be
combined by using this separated procedure. The a-priori knowledge of the process
may be little and can be restricted to the basic behaviour, e.g. linearizable or
strongly nonlinear, proportional or integral-acting behaviour. The process,
however, should not essentially change its behaviour during the design phase, that
means it should be time-invariant.
If the principal process behaviour and the control structure are fixed, then the
transition to selftuning control algorithms is suggested. Their application is
considered in the following chapter. The starting action of the selftuning control
algorithms with pre-identification corresponds to the computer-aided control
design treated above. In this respect the transition from the methods treated in this
chapter to the methods of selftuning control systems is smooth.
31 Adaptive and Selftuning Control Systems Using
Microcomputers and Process Computers

As already described in Chapter 26, parameter-adaptive control systems are
generated if recursive parameter estimation methods and controller design methods
are combined. The following briefly reports on their implementation with
microcomputers and on various applications.
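The estimation half of such a combination is typically recursive least squares (RLS) with exponential forgetting. A minimal sketch of one update step (not the code of the implementations cited here):

```python
import numpy as np

# One recursive least-squares step with forgetting factor lam: theta holds the
# model parameter estimates, P the (scaled) covariance, phi the regressor of
# past inputs and outputs, y the new measurement.  A minimal sketch only.
def rls_update(theta, P, phi, y, lam=0.95):
    phi = phi.reshape(-1, 1)
    gain = P @ phi / (lam + (phi.T @ P @ phi).item())
    error = y - (phi.T @ theta).item()       # a-priori prediction error
    theta = theta + gain * error             # parameter update
    P = (P - gain @ phi.T @ P) / lam         # forgetting keeps P from vanishing
    return theta, P
```

After each update the current estimates are handed to the controller design step (deadbeat, minimum variance, PID or state controller), which closes the parameter-adaptive loop.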

31.1 Microcomputers for adaptive control systems

For testing the adaptive digital control systems several microcomputers were set
up [26.25, 26.44, 26.60]. They consist of single-board computers with memory
enlargement, input/output unit, console processor and operating elements, see
Figure 31.1. A special feature are the console processors which organize the
communication between the microcomputer and the operator. They allow a variety
of keyboard inputs, information transmission inside and outside of the system,

Figure 31.1 Hardware structure of the microcomputer DMR-16 [26.44].



operator guidance by dialogue, representation on alphanumerical displays and the


identification of unpermitted inputs.
Some technical data of the three microcomputers are summarized in Table 31.1.
The 8-bit computers use assembler, the 16-bit computer the higher-level language PL/M.
Table 31.2 shows a comparison of the computing times per sampling step for the
same adaptive control algorithm. Only the faster clock time of the 8085 processor

Table 31.1 Data of the microcomputers.

Microcomputer             DMR-2 (1980)   DMR-4 (1981)   DMR-16 (1983)
wordlength                8 bit          8 bit          16 bit
processor                 8080-A         8085-A         8086
single chip computer      SBC 80/10      MPC 85         iSBC 86/12
console processor         8748           8748           8085
analog input/output       RTI 1200       HDAS 8         AD 363
                                         DAC 336-12     DAC 336-12
A/D channels              8              8              8
D/A channels              2              4              6
memory enlargement
- random access memory    RAM 2114A                     RAM 6116
- program storage         EPROM 2708     EPROM 2716     EPROM 2732
                          MBB 80
arithmetic processor      AM 9511        AM 9511        8087 (AM 9511)
user software storage     12 EPROM       16 EPROM       64 EPROM
in kbyte                  4 RAM          4 RAM          28 RAM
                                                        90 BUBBLE
user program language     Assembler      Assembler      PL/M-86
                          ASM-80         ASM-80
power consumption in W    30             13             65

Table 31.2 Comparison of computing times for various adaptive control
algorithms RLS/DB or RLS/MV3 [26.44]

Microcomputer                   DMR-2   DMR-4   DMR-16
Code size in kbyte              3.3     3.3     4.8
Data size in kbyte              1.2     1.2     1.9
Computing time with floating-point arithmetic in ms
- software                      660     330     -
- arithmetic processor 9511     180     90      130
- arithmetic processor 8087     -       -       30

Table 31.3 Comparison of computing times for various adaptive and fixed control
algorithms on microcomputer DMR-16 with arithmetic processor 8087. Parameter
estimation with RLS

                            Adaptive control               Fixed control
                            single variable   multivariable   single  multi
in- and outputs             1     1     1     2     2     3    1       2
model order (per channel)   2     3     3     2     2     2    2       2
controller                  PID   DB    SC    DB    SC    DB   PID     PID
code size in kbyte          12    4.8   6.7   8.9   9.5   8.9  0.5     0.9
data size in kbyte          2.8   1.9   5     2.4   8     7.2  0.08    0.4
computing time in ms        40    30    130   75    400   120  1       128

* with arithmetic processor 9511 and assembler.

makes the difference between the two 8-bit computers. Using the higher-level
language for the 16-bit computer causes an about 50% increase in memory storage.
This is initially also the case for the computing time. The computing time of the
16-bit computer becomes significantly smaller if the arithmetic processor 8087 is
used. Since the share of arithmetic operations is relatively large, the performance
of the arithmetic processor is of special significance. For model order m = 3 about
half of the computing time is needed for parameter estimation (about 16 ms). If
programmed in FORTRAN IV on a process computer HP 21 MX-E, the same
adaptive control algorithm requires about 500 ms. Table 31.3 shows the required
computational effort for various parameter-adaptive and fixed control algorithms
with one and two control variables.
As expected, the adaptive controllers require significantly more memory storage
(factor 10 to 25) and more computing time (factor 40) than the fixed controllers.
The required memory storage for adaptive single variable control systems for
RLSjDB is about half of RLSjPID and RLSjSC. For RLSjPID and RLSjDB the
computing times are almost equal, for RLSjSC, however, they are 4-times larger.
For adaptive multivariable control systems these differences, however, become
smaller.
It should be noted, that these performance data are valid for microcomputer
prototypes. The goal was a general testing of the functioning of adaptive control
methods implemented on microcomputers. The programs therefore include many
additional functions for performance analysis and supervision. Storage require-
ments as well as computing times can still be reduced. Concerning the computing
time, the lower limit for the applied 16-bit computer seems to be about 10 ms.
Control algorithm sampling times of about 2 ms can be realized if the controller
is not adapted anew after each sampling step; the recursive parameter estimation
calculation then has to be spread over several sampling intervals.
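One way to realize this scheduling is to keep the control law in the fast path at every sample and to re-estimate and redesign only every N-th sample. A toy sketch, in which a static plant and a proportional design rule stand in for the real estimation and design steps (all names and numbers are illustrative):

```python
# Toy sketch of spreading the adaptation load: the controller output is
# computed at every sample, but the (expensive) estimation and redesign step
# runs only every N_ADAPT-th sample.  The static plant and the simple gain
# design rule are stand-ins for the RLS + controller design of the text.
N_ADAPT = 5

def adaptive_loop(n_steps, plant_gain=2.0, w=1.0):
    est_gain = 1.0                    # initial model guess
    Kc = 0.5 / est_gain               # design rule: target loop gain 0.5
    y = 0.0
    ys = []
    for k in range(n_steps):
        u = Kc * (w - y)              # fast path: every sample
        y = plant_gain * u            # static toy plant
        if k % N_ADAPT == 0 and u != 0.0:
            est_gain = y / u          # slow path: re-estimate ...
            Kc = 0.5 / est_gain       # ... and redesign the controller
        ys.append(y)
    return ys, est_gain, Kc

ys, est_gain, Kc = adaptive_loop(40)
```

In a real implementation the slow path would itself be split over several intervals (one RLS regressor update per sample, one design pass per N samples), but the separation of fast control and slow adaptation is the essential point.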

31.2 Examples

Various case studies have already shown the applicability of parameter-adaptive
control algorithms to industrial and pilot processes. Early examples (1975-1979)
apply the implicit version of RLS/MV4 for moisture control of a paper machine
[26.12], for the heading control of a tanker [26.22] and for control of the titanium
dioxide content in a kiln [26.23]. The application of implicit RLS/MV3 with
microcomputers is described in [26.24]. Explicit RLS/DB and RLS/MV3 with
microcomputers was used to control an air heater [26.25, 26.26]. [26.27] describes
the application of RLS/MV4 and RLS/SC to a pH-process. A survey of further
applications after 1979 is given e.g. in [23.20, 23.19, 23.22, 30.9].
Some examples of application are described more closely in the following.

31.2.1 Adaptive Control of a Superheater (Simulation)


In order to enable a direct comparison with separated identification and controller
design for the same process as in Section 30.2.1, the selftuning control of
a final superheater with cascaded injection cooler temperature control according
to Figure 30.6 is considered first.
Figure 31.2a shows a brief pre-identification (RLS) for the case of no steam flow
disturbances. As soon as the process model reaches a satisfactory performance,
the state controller is designed automatically, the PRBS test signal is switched
off and the control loop is closed. Then the control system operates as a selftuning
control system until it reaches a certain control performance (here after a step
change in the reference value) and finally keeps the selftuned controller parameters.
Figure 31.2b depicts the same for the case with steam flow disturbances. It can be
recognized that selftuning control yields a good control performance even with
relatively large noise signals. Note that the controller parameters are already tuned
after 18 min and that the time requirement, including two responses to steps in the
reference value, is only 50 min, compared with 110 min using separated
identification and control according to Figure 30.4. [30.9] shows an example of
selftuning two-variable state control.
This means that already for a single-input/single-output process 50% of the time
can be saved, which is of interest for processes with larger settling times [30.9].

31.2.2 Adaptive Control of Air Conditioning Units


a) Adaptive control of an air heater

Figure 31.3 shows the scheme of an air-conditioning plant which was constructed
from standard components. The air temperature can be changed by changing the
position of the three-way valve, which changes the water flow through the air
heater. The air humidity is controlled by changing the spray water flow in the air
humidifier. This is a strongly coupled two-variable system which shows a distinctly
nonlinear behaviour and which is also especially dependent on the load (air flow).
Figure 31.2a, b Selftuning superheater final temperature control as in Figure 30.7 (simulation).
a selftuning state control (RLS/SC) with brief pre-identification, without noise signal; b as
a, however with steam flow disturbances.
Figure 31.3 Scheme of an air-conditioning plant (pilot process). ϑ air temperature; φ relative
humidity; M air flow rate; U1 position of the inlet water valve; U2 position of the spray
water valve.

Figure 31.4 Gain K11 = Δϑ_A/ΔU1 as a function of the valve position U1 and air flow
rate M.
Figure 31.5 Adaptive deadbeat control of the air heater with RLS/DB. m = 3; d = 0;
T0 = 18 s; λ = 0.93; r' = 0.83. U1 = U.

Figure 31.4 depicts the gain factor of the temperature control system. Within the
considered operating range it changes by about 1:10, the settling times by 1:2.
In the following some signal records are shown for the adaptive control of the
air temperature, which were obtained for the operating point M = 300 m³/h;
U00 = 4.3 V; ϑ_A = 47 °C. For the adaptive control algorithms T0 = 18 s; m = 3;
d = 0; λ = 0.93 were chosen, and for the pre-identification a PRBS with amplitude
1.25 V and 17 sampling steps [26.44].
Figure 31.5 shows the control behaviour for the parameter-adaptive
RLS/DB(v+1). After a brief pre-identification the loop is closed and stabilized.
The controller parameters are adapted anew for each step change of the set point.
With increasing temperature the settling behaviour is more damped because of the
gain increase. If the temperature decreases, the settling behaviour shows an
overshoot because of the gain decrease. The adaptive state and PID controllers
(parameter-optimized) show a similar behaviour, Figures 31.6 and 31.7.
In Figure 31.8 the PID-control algorithm was fixed after the pre-identification. In
the vicinity of this operating point the expected control behaviour is attained. The
loop, however, reaches the stability limit (oscillating behaviour) if the reference
value is moved towards a larger process gain.
Figure 31.9 shows the behaviour with DSFI/PID (square root filter in information
form) for an especially large range of process parameter changes. The signals
demonstrate the corresponding adaptation.
Figure 31.6 Adaptive state control of the air heater with RLS/SC. Q = I; r = 0.5.

Figure 31.7 Adaptive PID-control of the air heater with RLS/PID. r = 0.08.
Figure 31.8 One-time selftuning PID-control of the air heater with RLS/PID.
Figure 31.9 Adaptive PID-control of the air heater with DSFI/PID. λ = 0.8.

Figure 31.10 Scheme of a high-pressure air-conditioning plant of a building (max. air flow
rate: 12 300 m³/h).

The results of further experiments, including comparisons with analog
PID-controllers, are presented in [26.44]. A good adaptation is also obtained for
air flow disturbances. For the adaptive state and PID-controllers a substantial
improvement of the control performance and of the adaptation velocity can be
reached by decreasing the sample time to T0 = 5 s (T0 = 18 s was chosen in the
experiments mentioned above in order to enable comparisons with the adaptive
deadbeat controller). Good results were also obtained using adaptive control of
the humidity [26.60].
b) Adaptive control of an air-conditioning unit
Figure 31.10 shows a high-pressure air-conditioning plant of a building. The
results with adaptive state control for the two-input two-output system (with 20
parameters) are shown in Figure 31.11. The controlled variables are the air
temperature and the air humidity. The adaptive controller was implemented in the
16-bit microcomputer controller DMR-16 [26.44]. After a short pre-identification
of 20 samples the first values of the manipulated variables showed too large
changes (also caused by disturbances). Therefore the weight of the temperature in
the quadratic performance criterion of the state controller was increased. For
k > 55 a good
Figure 31.11 Adaptive two-variable control with RLS and state controller. ϑ = air
temperature, φ = relative air humidity, T0 = 45 s.

control behaviour resulted. This is an example of on-line adjustment of design
parameters during the initial adaptation.

31.2.3 Adaptive Control of the pH-value


Figure 31.12 shows the scheme of the pilot process of a wastewater neutralization
plant. Water and acid are fed into a stirred tank and represent an acid wastewater.
Then base is added. The base volume flow can be manipulated by a piston pump
with adjustable stroke. The controlled variable is the pH-value in the outlet of the

Figure 31.12 a Scheme of a pH-value process; b measured titration characteristic.
tank. It can be measured as a galvanic voltage between a glass electrode and


a reference electrode.
Figure 31.13 shows the nonlinear titration characteristic and the resulting gain,
which changes strongly with the process input. In Figure 31.14a the adaptive
control with RLS/DB(v+1) is shown. For t = 0 the loop is closed with Θ(0) = 0
and P(0) = 500·I without extra pre-identification. After about 2 min the adaptive
Figure 31.13 Titration characteristic and gain factor of the pH-value process.

loop stabilizes at the reference value pH = 7. The transients after different reference
value steps show relatively short settling times and only small or even no
overshoots. A stepwise acid disturbance at t = 14 min is also compensated nicely.
The gain estimates changed during this experiment between 0.023 for t = 5 min
and 0.144 for t = 10 min, i.e. by a ratio of 1:6.
Figure 31.14b shows the adaptive control with RLS/MV3. In this case a preiden-
tification with a PRBS was performed for 2 min before closing the loop. The better
starting model results in smaller changes of the process input. The further behavi-
our shows good control also for disturbances of the acid and water flow. A com-
parison with a fixed analog PI-controller demonstrates a considerable better
control performance with the parameter adaptive control [26.20, 26.28].
A further improvement of the pH-value control can be obtained by a nonlinear
adaptive control system, consisting of a nonlinear Volterra model of the process and
nonlinear state and minimum variance controllers [26.59, 26.64]. Once the nonlinear
controller is adapted, considerably fewer parameter adaptations are required than
with adaptive linear controllers. This leads to a well-damped behaviour of the
process input. This could also be demonstrated for an adaptive air heater control
[26.59].
Further application examples of parameter-adaptive control are [30.9]:
- seed humidity in a rotary drier
- dry edge process
- fluidized bed drier [26.20].
Figure 31.14a,b. Adaptive control of the pH-value. Ma: acid flow; Mb: base flow; Mw: water flow. a RLS/DB(ν + 1); r' = 0.5; T0 = 15 s; m = 3; λ = 0.88.

Final Remarks on the Application of Parameter-adaptive Digital Control Systems


The described examples, as well as many others, have shown that parameter-
adaptive control systems give good results if the requirements for their application
are met, see section 26.8.
Thus the parameter-adaptive control methods can also be applied to moderately
nonlinear processes or slowly time-variant processes. Supervision and, if
necessary, coordination of the basic functions are suggested if the method is to be
applied in continuously operating adaptive control systems, see section 26.8. For
first applications it is recommended to employ the parameter-adaptive control
algorithms initially as selftuning controllers, with subsequent use of fixed control
algorithms, see section 26.11.
For many processes adaptive PID-controllers are sufficient. An especially good
control performance can be achieved using parameter-adaptive state controllers
for higher-order processes, processes with larger dead times, and multivariable
processes. The weighting factors of the control performance criterion can often
not be selected in a direct way, but this is possible during on-line operation.
Parameter-adaptive minimum variance controllers should only be used for
processes with strong stochastic noise, and parameter-adaptive deadbeat con-
trollers only for well-damped low-pass processes.
All applications have shown that a certain a priori knowledge of the process is
always required, at least for the first applications of a new process type.
Summing up, it can be said that parameter-adaptive control algorithms repres-
ent multi-purpose tools which lead to essentially improved control performance
and which minimize personnel expenditure.
References

Chapter 12

12.1 Aoki, M.: Optimization of stochastic systems. New York: Academic Press 1967
12.2 Meditch, J.S.: Stochastic optimal linear estimation and control. New York:
McGraw-Hill 1969
12.3 Bryson, A.E.; Ho, Y.C.: Applied optimal control. Waltham: Ginn and Co. 1969
12.4 Astrom, K.J.: Introduction to stochastic control theory. New York: Academic Press
1970
12.5 Kushner, H.J.: Introduction to stochastic control. New York: Holt, Rinehart and
Winston 1971
12.6 Schlitt, H.; Dittrich, F.: Statistische Methoden der Regelungstechnik. Mannheim:
Bibliographisches Inst. 1972 Nr. 526
12.7 Davenport, W.; Root, W.: An introduction to the theory of random signals and
noise. New York: McGraw-Hill 1958
12.8 Bendat, J.S.; Piersol, A.G.: Random data: analysis and measurement procedures.
New York: Wiley Interscience 1971
12.9 Box, G.E.P.; Jenkins, G.M.: Time series analysis, forecasting and control.
San Francisco: Holden Day 1970
12.10 Jazwinski, A.H.: Stochastic processes and filtering theory. New York: Academic
Press 1970
12.11 Hänsler, E.: Grundlagen der Theorie statistischer Signale. Berlin: Springer-Verlag
1983

Chapter 14

14.1 Clarke, M.A.; Hastings-James, R.: Design of digital controllers for randomly dis-
turbed systems. Proc. lEE, 118 (1971) 1503-1506
14.2 Schumann, R.: Various multivariable computer control algorithms for parameter-
adaptive control systems. IFAC Symp. on Computer Aided Design of Control
Systems, Zurich 1979. Oxford: Pergamon Press

Chapter 15
15.1 Bar-Shalom, Y; Tse, E.: Dual effect, certainty equivalence and separation in stochas-
tic control. IEEE Trans. Autom. Control AC 19 (1974) 494-500

Chapter 16
16.1 Benennungen für Steuer- und Regelschaltungen. VDI/VDE-Richtlinie 3526,
VDI/VDE-Handbuch Regelungstechnik. Berlin: Beuth 1972
16.2 Preßler, G.: Regelungstechnik. Mannheim: Bibliographisches Inst. 1967
16.3 Leonhard, W.: Einführung in die Regelungstechnik. Braunschweig: Vieweg 1972

Chapter 17
17.1 Isermann, R.; Bauer, H.: Entwurf von Steueralgorithmen für Prozeßrechner. ETZ-A
36 (1975) 242-245

Chapter 18
18.1 Mesarovic, M.D.: The control of multivariable systems. New York: Wiley 1960
18.2 Schwarz, H.: Mehrfachregelungen. 1. Bd. Berlin: Springer-Verlag 1967
18.3 Schwarz, H.: Mehrfachregelungen. 2. Bd. Berlin: Springer-Verlag 1971
18.4 Thoma, M.: Theorie linearer Regelsysteme. Braunschweig: Vieweg 1973
18.5 Isermann, R.: Die Berechnung des dynamischen Verhaltens der Dampftemperatur-
regelung von Dampferzeugern. Regelungstechnik 14 (1966) 469-475, 519-522
18.6 Isermann, R.; Baur, U.; Blessing, P.: Test case C for the comparison of different
identification methods. Boston: Proc. 6th IFAC-Congress 1975
18.7 Schramm, H.: Beiträge zur Regelung von Zweigrößensystemen am Beispiel eines
Verdampfers. Fortschr.-Ber. VDI Z. Reihe 8 (1976) Nr. 24
18.8 Freund, E.: Zeitvariable Mehrgrößensysteme. Lecture Notes in Operations Re-
search and Mathematical Systems. Berlin: Springer-Verlag 1971
18.9 Sinha, N.K.: Minimal realization of transfer function matrices - a comparative
study of different methods. Int. J. Control 22 (1975) 627-639
18.10 Kucera, V.: Discrete linear control. Prague: Academia 1979
18.11 Fisher, D.G.; Seborg, D.E.: Multivariable computer control. Amsterdam: North
Holland 1976

Chapter 19
19.1 Kraemer, W.: Grenzen und Möglichkeiten nicht entkoppelter, linearer Zweifach-
regelungen. Fortschr.-Ber. VDI Z. Reihe 8 (1968) Nr. 10
19.2 Muckli, W.: Analyse und Optimierung nicht entkoppelter Zweifachregelkreise.
Aachen: Diss. TH 1968
19.3 Muckli, W.; Kraemer, W.: Reglereinstellung an nicht entkoppelten Zweigrößensys-
temen. Regelungstechnik 20 (1972) 155-163
19.4 Zietz, H.: Stabilitätsbetrachtungen und Reglerentwurf bei nicht entkoppelten
Zweigrößenregelungen. Mess. Steuern Regeln 16 (1973) 84-88
19.5 Niederlinski, A.: A heuristic approach to the design of linear multivariable inter-
acting control systems. Automatica 7 (1971) 691-701
19.6 Engel, W.: Grundlegende Untersuchungen über die Entkopplung von Mehrfach-
regelkreisen. Regelungstechnik 14 (1966) 562-568

Chapter 20

20.1 Schumann, R.: Various multivariable computer control algorithms for parameter-
adaptive control systems. IFAC Symp. on Computer Aided Design of Control
Systems, Zurich 1979. Oxford: Pergamon Press
20.2 Keviczky, L.; Hetthessy, J.: Self-tuning minimum variance control of MIMO discrete
systems. Autom. Control Theory and Applic. 5 (1977)

Chapter 21
21.1 Falb, P.L.; Wolovich, W.A.: Decoupling in the design and synthesis of multivariable
control systems. IEEE Trans. AC 12 (1967) 651-659
21.2 Gilbert, E.G.: The decoupling of multivariable systems by state feedback. SIAM
Control 7 (1969) 50-63
21.3 Schwarz, H.: Optimale Regelung linearer Systeme. Mannheim: Bibliographisches
Inst. 1976

Chapter 22
22.1 Wiener, N.: Extrapolation, interpolation and smoothing of time series with engineer-
ing applications. New York: Wiley 1949
22.2 Sage, A.P.; Melsa, J.L.: Estimation theory with applications to communications and
control. New York: McGraw-Hill 1971
22.3 Nahi, N.E.: Estimation theory and applications. New York: Wiley 1969
22.4 Kalman, R.E.: A new approach to linear filtering and prediction problems. Trans.
ASME, Series D, 83 (1960) 35-45
22.5 Kalman, R.E.; Bucy, R.S.: New results in linear filtering and prediction theory.
Trans. ASME, Series D, 83 (1961) 95-108
22.6 Deutsch, R.: Estimation theory. Englewood Cliffs: Prentice Hall 1965
22.7 Bryson, A.E.; Ho, Y.C.: Applied optimal control. Waltham: Ginn (Blaisdell) 1969
22.8 Jazwinski, A.H.: Stochastic processes and filtering theory. New York: Academic
Press 1970
22.9 Theory and Applications of Kalman Filtering. AGARDograph Nr. 139 (1970).
Zentralstelle für Luftfahrtdokumentation, München. ESRO/ELDO Space 114,
Neuilly-sur-Seine/Frankreich
22.10 Brammer, K.; Siffling, G.: Kalman-Bucy-Filter. München: Oldenbourg 1975

Chapter 23

23.1 Aseltine, J.A.; Mancini, A.R.; Sature, C.W.: A survey of adaptive control systems.
IRE Trans. Autom. Control 6 (1958) 102
23.2 Stromer, P.R.: Adaptive and self-optimizing control systems - a bibliography. IRE
Trans. Autom. Control 4 (1959) 65
23.3 Truxal, J.: Adaptive control - a survey. Proc. 2nd IFAC-Congress, Basel 1963
23.4 Donaldsson, D.P.; Kishi, F.H.: Review of adaptive control systems theories and
techniques. Modern Control Systems Theory. (ed) Leondes. New York: McGraw
1965

23.5 Tsypkin, Y.Z.: Adaptation, training and self organization in automatic systems.
Autom. Remote Control 27 (1971) 16
23.6 Mishkin, E.; Braun, L.: Adaptive control systems. New York: McGraw-Hill 1961
23.7 Eveleigh, V.W.: Adaptive control and optimization technique. New York:
McGraw-Hill 1967
23.8 Mendel, J.M.; Fu, K.S.: Adaptive, learning and pattern recognition systems.
New York: Academic Press 1970
23.9 Tsypkin, Y.Z.: Adaptation and learning in automatic systems. New York: Academic
Press 1971
23.10 Weber, W.: Adaptive Regelungssysteme. Bd. I und II. München: Oldenbourg 1971
23.11 Maslov, E.P.; Osovskii, L.M.: Adaptive control systems with models. Autom. Re-
mote Control 27 (1966) 1116
23.12 Landau, 1.0.: A survey of model reference adaptive techniques-theory and applica-
tions. Automatica 10 (1974) 353-379
23.13 Lindorff, D.P.; Carroll, R.L.: Survey of adaptive control using Ljapunov design. Int.
J. Control 18 (1973) 897
23.14 Wittenmark, B.: Stochastic adaptive control methods: a survey. Int. J. Control 21
(1975) 705-730
23.15 Saridis, G.N.; Mendel, J.M.; Nicolic, Z.J.: Report on definitions of self-organizing
control processes and learning systems. IEEE Control System Soc. Newsletters 48
(1973) 8-13
23.16 Gibson, J.: Nonlinear automatic control. New York: McGraw-Hill 1962
23.17 Astrom, K.J.; Borisson, U.; Ljung, L.; Wittenmark, B.: Theory and applications of
selftuning regulators. Automatica 13 (1977) 457-476
23.18 Asher, R.B.; Adrisani, D.; Dorato, P.: Bibliography on adaptive control systems.
IEEE Proc. 64 (1976) 1126
23.19 Isermann, R.: Parameter adaptive control algorithms - a tutorial. Automatica 18
(1982) 513-528
23.20 Astrom, K.J.: Theory and applications of adaptive control- a survey. Automatica 19
(1983) 471-486
23.21 Landau, 1.0.: Adaptive control-the model reference approach. New York: M.
Dekker 1979
23.22 Harris, c.J.; Billings, S.A. (Eds): Self-tuning and adaptive control- theory and
applications. London: P. Peregrinus 1981
23.23 Whitaker, H.P.; Yamron, J.; Kezer, A.: Design of model-reference adaptive systems
for aircraft. Report R-I64, Instrumentation Laboratory MIT, Cambridge 1958
23.24 Osborn, P.V.; Whitaker, H.P.; Kezer, A.: New developments in the design of model
reference adaptive control systems, lAS-paper No 61-39, Inst. of Aeronautical
Sciences, 29th Annual Meeting, New York, 1961
23.25 Parks, P.C.: Lyapunov redesign of model reference adaptive control systems. IEEE
Trans. Autom. Control AC 11 (1966) 362-367
23.26 Narendra, K.S.; Kudva, P.: Stable adaptive schemes for systems identification and
control, Parts I, II, IEEE Trans. Syst. Man Cybern 4 (1974) 542-560
23.27 Desoer, C.A.; Vidyasagar, M.: Feedback systems: Input-Output properties. New
York: Academic Press 1975
23.28 Chen, C.T.: Introduction to linear system theory. New York: Holt, Rinehart,
Winston 1970
23.29 Schmid, Chr.: Ein Beitrag zur Realisierung adaptiver Regelungssysteme mit dem
Prozeßrechner. Diss. Ruhr-Univ. Bochum 1979

23.30 Unbehauen, H.: Systematic design of discrete model reference adaptive systems. In
'Self-tuning and adaptive control'. IEE Control Eng. Series 15. Stevenage (UK):
Peregrinus
23.31 Kalman, R.E.: Design of a self-optimizing control systems. Trans. ASME 80 (1958)
468-478
23.32 Peterka, V.: Adaptive digital regulation of noisy systems. 2nd IFAC-Symp. on
Identification. Preprints. Prague: Academia 1970
23.33 Astrom, K.J.; Wittenmark, B.: On self-tuning regulators. Automatica 9 (1973)
185-199
23.34 Clarke, D.W.; Gawthrop, P.J.: A self-tuning controller. IEE Proc. 122 (1975)
929-934
23.35 Wellstead, P.E.; Prager, D.; Zanker, P.: Pole assignment self-tuning regulator. IEE
Proc. 126 (1979) 781-787
23.36 Kurz, H.; Isermann, R.; Schumann, R.: Experimental comparison and application of
various parameter adaptive control algorithms. 7th IFAC-Congress, Helsinki 1978,
and Automatica 16 (1980) 117-133
23.37 Parks, P.C.: Stability and convergence of adaptive controllers - continuous systems.
In [23.22]
23.38 Fromme, G.; Haverland, M.: Selbsteinstellende Digitalregler im Zeitbereich. Re-
gelungstechnik 31 (1983) 338-345
23.39 Guillemin, E.A.: Synthesis of passive networks. New York: Wiley 1957
23.40 Anderson, B.D.O.: A simplified viewpoint of hyperstability. IEEE Trans. Autom.
Control AC 13 (1968) 292-294
23.41 Goodwin, G.C.; Sin, K.S.: Adaptive filtering, prediction and control. Englewood
Cliffs, N.J.: Prentice Hall 1984

Chapter 24

24.1 Panuska, V.: A stochastic approximation method for identification of linear systems
using adaptive filtering. Proc. JACC 1968
24.2 Panuska, V.: An adaptive recursive least squares identification algorithm. Proc.
IEEE Symp. on Adaptive Processes, Decision and Control 1969
24.3 Young, P.C.: The use of linear regression and related procedures for the identifica-
tion of dynamic processes. Proc. 7th IEEE Symp. on Adaptive Processes. New York:
IEEE 1968
24.4 Young, P.C.; Shellswell, S.H.; Neethling, C.G.: A recursive approach to time-series
analysis. Report CUED/B-Control/TR16, Univ. of Cambridge 1971
24.5 Wong, K.Y.; Polak, E.: Identification of linear discrete time systems using the
instrumental variable method. IEEE Trans. Autom. Control AC 12 (1967) 707-718
24.6 Young, P.C.: An instrumental variable method for real-time identification of a noisy
process. IFAC-Automatica 6 (1970) 271-287
24.7 Fuhrt, B.P.; Carapic, M.: On-line maximum likelihood algorithm for the identifica-
tion of dynamic systems. 4th IFAC-Symp. on Identification, Tbilisi 1976
24.8 Soderstrom, T.: An on-line algorithm for approximate maximum likelihood identifi-
cation of linear dynamic systems. Report 7308. Dept. of Automatic Control, Lund
Inst. of Technology 1973
24.9 Isermann, R.; Baur, U.; Bamberger, W.; Kneppo, P.; Siebert, H.: Comparison of six
on-line identification and parameter estimation methods. IFAC-Automatica 10
(1974) 81-103

24.10 Saridis, G.N.: Comparison of six on-line identification algorithms. IFAC-Auto-
matica 10 (1974) 69-79
24.11 Baur, U.: On-line Parameterschätzverfahren zur Identifikation linearer dynamischer
Prozesse mit Prozeßrechnern. Diss. Univ. Stuttgart. Karlsruhe: Ges. f. Kernfor-
schung, Ber. KFK-PDV 65 (1976)
24.12 Baur, U.; Isermann, R.: On-line identification of a heat exchanger with a process
computer. IFAC-Automatica 13 (1977)
24.13 Soderstrom, T.; Ljung, L.; Gustavsson, I.: A comparative study of recursive identifi-
cation methods. Report 7427 Dept. of Automatic Control, Lund Inst. of Technology,
1974
24.14 Hannan, E.J.: Multiple time series. New York: Wiley 1970
24.15 Astrom, K.J.; Bohlin, T.: Numerical identification of linear system dynamics from
normal operating records. IF AC-Symp. on Theory of self adaptive control systems,
Teddington. New York: Plenum Press 1966
24.16 Isermann, R.: Practical aspects of process identification. IFAC-Automatica 16 (1980)
24.17 Ljung, L.: Analysis of recursive stochastic algorithms. IEEE Trans. AC 22 (1977)
551-575
24.18 Kaminski, P.G.; Bryson, A.E.; Schmidt, S.F.: Discrete square root filtering. A survey
of current techniques. IEEE Trans. AC 16 (1971) 727-735
24.19 Peterka, V.: A square root filter for real time multivariate regression. Kybernetika 11
(1975) 53-67
24.20 Strejc, V.: Least squares parameter estimation. Automatica 16 (1980)
24.21 Ljung, L.; Morf, M.; Falconer, D.: Fast calculation of gain matrices for recursive
estimation schemes. Int. J. Control 17 (1978) 1-19
24.22 Müncher, H.: Vergleich verschiedener Rekursionsalgorithmen für die Methode der
kleinsten Quadrate. TH Darmstadt: Diplomarbeit Inst. für Regelungstechnik 1980
24.23 Biermann, G.J.: Factorization methods for discrete sequential estimation.
New York: Academic Press 1977
24.24 Kofahl, R.: Verfahren zur Vermeidung numerischer Fehler bei Parameterschätzung
und Optimalfilterung. Automatisierungstechnik 34 (1986) 421-431

Chapter 25

25.1 Bohlin, T.: On the problem of ambiguities in maximum likelihood identification.


Automatica 7 (1971) 199-210
25.2 Gustavsson, I.; Ljung, L.; Soderstrom, T.: Identification of linear, multivariable
process dynamics using closed loop experiments. Report 7401, Dept. of Automatic
Control, Lund Inst. of Technology 1974
25.3 Kurz, H.; Isermann, R.: Methods for on-line process identification in closed loop. 6th
IFAC-Congress, Boston 1975
25.4 Gustavsson, I.; Ljung, L.; Soderstrom, T.: Identification of processes in closed
loop - identifiability and accuracy aspects. Preprints 4th IFAC-Symp. on Identifica-
tion, Tbilisi 1976 and Report 7602, Dept. of Automatic Control, Lund Inst. of
Technology 1976
25.5 Kurz, H.: Recursive process identification in closed loop with switching regulators.
Proc. 4th IFAC-Symp. on Identification. Amsterdam: North Holland 1977

Chapter 26

26.1 Patchell, J.W.; Jacobs, O.L.R.: Separability, neutrality and certainty equivalence. Int.
J. Control 13 (1971) 337-342
26.2 Bar-Shalom, Y.; Tse, E.: Dual effect, certainty equivalence and separation in stochas-
tic control. IEEE Trans. Autom. Control AC 19 (1974) 494-500
26.3 Tou, J.T.: Optimum design of digital control systems. New York: Academic Press
1963
26.4 Gunckel, T.L.; Franklin, G.F.: A general solution for linear sampled-data control. J.
Basic Eng. 85 (1963) 197
26.5 Feldbaum, A.A.: Optimal control systems. New York: Academic Press 1965
26.6 Kurz, H.; Isermann, R.; Schumann, R.: Development, comparison and application of
various parameter-adaptive digital control algorithms. 7th IFAC-Congress Helsinki
1978
26.7 Peterka, V.: Adaptive digital regulation of noisy systems. 2nd IFAC-Symp. on
Identification, Prague 1970
26.8 Astrom, K.J.; Wittenmark, B.: On self tuning regulators. IFAC-Automatica 9 (1973)
185-199
26.9 Wittenmark, B.: A self tuning regulator. Report 7311, Dept. of Automatic Control,
Lund Inst. of Technology 1973
26.10 Ljung, L.; Wittenmark, B.: Asymptotic properties of self tuning regulators. Report
7404, Dept. of Automatic Control, Lund Inst. of Technology 1974
26.11 Borisson, U.: Self tuning regulators - industrial application and multivariable
theory. Report 7513, Dept. of Automatic Control, Lund Inst. of Technology 1975
26.12 Astrom, K.J.; Borisson, U.; Ljung, L.; Wittenmark, B.: Theory and applications of
adaptive regulators based on recursive parameter estimation. 6th IFAC-Congress.
Paper 50.1 Boston 1975
26.13 Clarke, D.W.; Gawthrop, P.J.: Self tuning controller. Proc. IEE 122 (1975) 929-934
26.14 Kurz, H.; Isermann, R.: Feedback control algorithms for parameter adaptive con-
trol-comparison and identifiability aspects. Joint Automatic Control Conference,
San Francisco 1977
26.15 Kurz, H.; Isermann, R.; Schumann, R.: Experimental comparison and application of
various parameter-adaptive control algorithms. Automatica 16 (1980) 117-133
26.16 Kurz, H.: Digitale adaptive Regelung auf der Grundlage rekursiver Para-
meterschätzung. Diss. TH Darmstadt. Karlsruhe: Ges. f. Kernforschung, Ber. KFK-
PDV 188 (1980)
26.17 Ljung, L.: On positive real transfer functions and the convergence of some recur-
sions. IEEE Trans. AC 22 (1977) 539
26.18 Gawthrop, P.J.: Some interpretations of the self tuning controller. Proc. IEE 124
(1977) 889-894
26.19 Clarke, D.W.; Gawthrop, P.J.: Self tuning control. Proc. IEE 126 (1979)
633-640
26.20 Egardt, B.: Stability of adaptive controllers. Lecture Notes in Control and Informa-
tion Sciences. Berlin: Springer-Verlag 1979
26.21 Kurz, H.: Digital parameter adaptive control of processes with unknown constant or
timevarying dead time. 5th IFAC Symp. on Identification and System Parameter
Estimation. Darmstadt 1979

26.22 Källström, C.G.; Astrom, K.J.; Thorell, N.E.; Eriksson, J.; Sten, L.: Adaptive auto-
pilots for large tankers. 7th IFAC-Congress Helsinki 1978
26.23 Dumont, G.A.; Belanger, R.R.: Self tuning control of a titanium dioxide kiln. IEEE
Trans. AC 23 (1978) 532-538
26.24 Clarke, D.W.; Gawthrop, P.J.: Implementation and application of microprocessor
based self tuners. 5th IFAC Symp. on Identification and System Parameter Estima-
tion. Darmstadt 1979
26.25 Bergmann, S.; Radke, F.; Isermann, R.: Ein universeller digitaler Regler mit
Mikrorechner. Regelungstech. Prax. 20 (1978) 289-294, 322-325
26.26 Bergmann, S.; Schumann, R.: Digitale adaptive Regelung einer Lüftungsanlage.
Regelungstech. Prax. 22 (1980) 280-286
26.27 Buchholt, F.; Kummel, M.: Self tuning control of a pH-neutralization process.
Automatica 15 (1979) 665-671
26.28 Bergmann, S.; Lachmann, K.-H.: Digital parameter adaptive control of a pH
process. San Francisco: Joint Automatic Control Conference 1980
26.29 Schumann, R.; Christ, H.: Adaptive feedforward controllers for measurable disturb-
ances. Denver: Joint Automatic Control Conference 1979
26.30 Peterka, V.; Astrom, K.J.: Control of multivariable systems with unknown but
constant parameters. 3rd IFAC Symp. on Identification and System Parameter
Estimation. The Hague 1973. Oxford: Pergamon Press
26.31 Keviczky, L.; Hetthessy, J.: Selftuning minimum variance control of MIMO discrete
time systems. Automatic Control Theory and Applic. 5 (1977)
26.32 Borisson, U.: Self tuning regulators for a class of multivariable systems. 4th IFAC
Symp. on Identification and System Parameter Estimation. Tbilisi 1976
26.33 Schumann, R.: Identification and adaptive control of multivariable stochastic linear
systems. 5th IFAC Symp. on Identification and System Parameter Estimation.
Darmstadt 1979
26.34 Blessing, P.: Identification of the input-output and noise dynamics of linear multi-
variable systems. 5th IFAC Symp. on Identification and System Parameter Estima-
tion. Darmstadt 1979
26.35 Schumann, R.: Digital parameter-adaptive control of an air conditioning plant. 6th
IFAC/IFIP Conference on Digital Computer Applications to Process Control.
Düsseldorf 1980
26.36 Schumann, R.; Lachmann, K.H.; Isermann, R.: Towards applicability of parameter
adaptive control algorithms. Proc. 8th IFAC-Congress, Kyoto 1981. Oxford:
Pergamon Press
26.37 Isermann, R.: Parameter adaptive control algorithms-a tutorial. Automatica 18
(1982) 513-528
26.38 Astrom, K.J.: Theory and applications of adaptive control- a survey. Automatica 19
(1983) 471-486
26.39 Matko, D.; Schumann, R.: Comparative stochastic convergence analysis of seven
recursive estimation methods. Proc. 6th IFAC-Symp. on Identification, Washington.
Oxford: Pergamon Press 1982
26.40 Egardt, B.: Stability of adaptive controllers. Lecture Notes Nr. 20, Berlin: Springer-
Verlag 1979
26.41 de Larminat, Ph.: On overall stability of certain adaptive control systems. Proc. 5th
IFAC-Symp on Identification, Darmstadt, Oxford: Pergamon Press 1979
26.42 de Larminat, Ph.: Unconditional stabilizers for nonminimum phase systems.

Methods and applications in adaptive control, Lecture Notes Nr. 24, Berlin:
Springer-Verlag 1980
26.43 Schumann, R.: Digitale parameteradaptive Mehrgrößenregelung. Diss. TH
Darmstadt. Karlsruhe: Ges. f. Kernforschung, Ber. PDV 217 (1982)
26.44 Radke, F.: Ein Mikrorechnersystem zur Erprobung parameteradaptiver Regelver-
fahren. Diss. TH Darmstadt. Fortschr.-Ber. VDI-Z. Reihe 8, Nr. 77. Düsseldorf:
VDI-Verlag 1984
26.45 Matko, D.; Schumann, R.: Selftuning deadbeat controllers. Int. J. Control 40 (1984)
393-402
26.46 Buchholt, F.; Kümmel, M.: Self-tuning control of a pH-neutralization process.
Automatica 15 (1979) 665-671
26.47 Clarke, D.W.: Introduction to self-tuning controllers. In [23.22]
26.48 Banyasz, Cs.; Keviczky, L.: Direct methods for self-tuning PID-regulators. Proc. 6th
IFAC Symp. on Identification, Washington 1982. Oxford: Pergamon Press
26.49 Ortega, R.; Kelly, R.: PID self-tuners: some theoretical and practical aspects. IEEE
Trans. Ind. Electron Control Instrum. 31 (1984) 332-338
26.50 Banyasz, Cs.; Hetthessy, J.; Keviczky, L.: An adaptive PID-regulator dedicated for
microprocessor based compact controllers. Proc. 7th IFAC-Symp. on Identification,
New York 1985. Oxford: Pergamon Press
26.51 Andreiev, N.: A new dimension: A self-tuning controller that continually optimizes
PID constants. Control Eng. Aug. (1981) 84-85
26.52 Kraus, T.W.; Myron, T.J.: Self-tuning PID controller based on a pattern recognition
approach. Control Eng. June (1984)
26.53 Astrom, K.J.; Hagglund, T.: Automatic tuning of simple regulators with specifica-
tions on phase and amplitude margins. Automatica 20 (1984) 645-651
26.54 Kofahl, R.; Isermann, R.: A simple method for automatic tuning of PID-controllers
based on process parameter estimation. American Control Conference. Boston 1985
26.55 Kofahl, R.: Selbsteinstellende digitale PID-Regler - Grundlagen und neue Entwick-
lungen. VDI-Ber. 550. Düsseldorf: VDI-Verlag 1985
26.56 Radke, F.; Isermann, R.: A parameter-adaptive PID-controller with stepwise para-
meter-optimization. Proc. 9th IFAC-Congress, Budapest. Oxford: Pergamon Press
1984 and Automatica 23 (1987)
26.57 Kurz, H.; Goedecke, W.: Digital parameteradaptive control of processes with un-
known constant or time-varying deadtime. Automatica 17 (1981) 245-252
26.58 Isermann, R.; Lachmann, K.H.: Parameteradaptive control with configuration aids
and supervision functions. Automatica 21 (1985) 625-638
26.59 Lachmann, K.H.: Parameteradaptive Regelalgorithmen für bestimmte Klassen
nichtlinearer Prozesse mit eindeutigen Nichtlinearitäten. Diss. TH Darmstadt.
VDI-Fortschr.-Ber. Reihe 8 Nr. 66. Düsseldorf: VDI-Verlag 1983
26.60 Bergmann, S.: Digitale parameteradaptive Regelung mit Mikrorechner. Diss. TH
Darmstadt. VDI-Fortschr.-Ber. Reihe 8 Nr. 55. Düsseldorf: VDI-Verlag 1983
26.61 Schumann, R.: Design and application of multivariable selftuning controllers. Proc.
6th IFAC-Symp. on Identification, York. Oxford: Pergamon Press 1985
26.62 Isermann, R.; Hensel, H.: Sequential design of decentralized controllers with identifi-
cation and selftuning control. Proc. 3rd IFAC-Symp. on Computer Aided Design in
Control, Copenhagen. Oxford: Pergamon Press 1985
26.63 Gawthrop, P.J.: On the stability and convergence of a self-tuning controller. Int.
J. Control 31 (1980) 973-998

26.64 Lachmann, K.H.: Selbsteinstellende nichtlineare Regelalgorithmen für eine be-
stimmte Klasse nichtlinearer Prozesse. Automatisierungstechnik 33 (1985) 210-218
26.65 Lachmann, K.H.: Regelung verschiedener nichtlinearer Prozesse mit nichtlinearen
parameteradaptiven Regelverfahren. Automatisierungstechnik 33 (1985) 280-284,
318-321
26.66 Kofahl, R.: Robuste parameteradaptive Regelungen. Fachbericht Nr. 19. Berlin:
Springer-Verlag 1988
26.67 Knapp, T.; Isermann, R.: Supervision and coordination of parameter-adaptive
controllers, American Control Conference, San Diego, USA 1990.

Chapter 27

27.1 Bertram, J.E.: The effect of quantization in sampled feedback systems. AIEE Trans.
Applic. and Industry 77 (1958) 177-182
27.2 Tsypkin, Y.Z.: An estimate of the influence of amplitude quantization on processes in
digital control systems. Avtomat. i Telemekh. 21 (1960) 195
27.3 Knowles, J.B.; Edwards, R.: Effect of a finite word length computer in a sampled-
data-feedback system. Proc. IEE 112 (1965) 1197-1207, 2376-2384
27.4 Biondi, E.; Debenedetti, A.; Rotolni, P.: Error determination in quantized sampled-
data-systems. 3rd IFAC-Congress London 1966
27.5 Koivo, A.J.: Quantization error and design of digital control systems. IEEE Trans.
Autom. Control. AC 14 (1969) 55-58
27.6 Scheel, K.H.: Der Einfluß des Rundungsfehlers beim Einsatz des Prozeßrechners.
Regelungstechnik 19 (1971) 326, 329-331, 389-392
27.7 Blackman, R.B.: Linear data-smoothing and prediction in theory and practice.
Reading, Mass.: Addison-Wesley 1965

Chapter 28

28.1 Lauber, R.: Prozeßautomatisierung I. Berlin: Springer-Verlag 1976


28.2 Goff, K.: A systematic approach to DDC design. ISA J. (1966) 44-54
28.3 Welfonder, E.: Vergleich analoger und digitaler Filterung beim Einsatz von
Prozeßrechnern. Regelungstechnik (1975) 84-91
28.4 Schenk, Ch.; Tietze, U.: Aktive Filter. Elektronik 19 (1970) 329-334, 379-382,
421-424
28.5 Berthold, W.; Lamer, U.: Aktive Hoch- und Tiefpaßfilter mit handelsüblichen
Bauelementen. Elektronik 25 (1976) 73-75
28.6 Gold, B.; Rader, Ch.M.: Digital processing of signals. New York: McGraw-Hill 1969
28.7 Schüßler, H.W.: Digitale Systeme zur Signalverarbeitung. Berlin: Springer-Verlag
1973

Chapter 29

29.1 Föllinger, O.: Nichtlineare Regelungen. München: Oldenbourg Bd. I 1969, Bd. II
1980
29.2 Leonhard, W.: Einführung in die Regelungstechnik. Nichtlineare Regelvorgänge.
Braunschweig: Vieweg 1970
29.3 Glattfelder, A.H.: Regelungssysteme mit Begrenzungen. München: Oldenbourg 1974
29.4 Preßler, G.: Regelungstechnik. Mannheim: Bibliographisches Inst. 1967

Chapter 30
30.1 Baur, U.; Isermann, R.: On-line identification of a heat exchanger - a case study.
IFAC-Automatica 13 (1977)
30.2 Baur, U.: On-line Parameterschätzverfahren zur Identifikation linearer dynamischer
Prozesse mit Prozeßrechnern. Karlsruhe: Ges. f. Kernforschung, Ber. KFK-PDV 65
(1976)
30.3 Blessing, P.; Baur, U.: On-line-Identifikation von Ein- und Zweigrößenprozessen
mit den Programmpaketen OLID. VDI-Ber. 276 'Prozeßmodelle 1977'. Düsseldorf:
VDI-Verlag
30.4 Mann, W.: 'OLID-SISO'. Ein Programm zur On-line-Identifikation dynamischer
Prozesse mit Prozeßrechnern - Benutzeranleitung. Karlsruhe: Ges. f. Kern-
forschung, Ber. E-PDV 114 (1978)
30.5 Dymschiz, E.; Isermann, R.: Computer aided design of control algorithms based on
identified process models. 5th IFAC/IFIP-Conference on Digital Computer Ap-
plications to Process Control. Den Haag 1977
30.6 Dymschiz, E.: Rechnergestützter Entwurf von Regelungen mit Prozeßrechnern und
dem Programmpaket CADCA. VDI-Ber. 276 'Prozeßmodelle 1977'. Düsseldorf:
VDI-Verlag
30.7 Hensel, H.: CADA/CAFCA. Ein Programmpaket zum rechnergestützten Entwurf
von Regelalgorithmen. Karlsruhe: Ges. f. Kernforschung PDV-E 117 (1983)
30.8 Dymschiz, E.: A process computer program package for interactive computer aided
design of multivariable control systems. 2nd IFAC/IFIP Symp. on Software for
Computer Control. Prague 1979
30.9 Isermann, R.: Rechnerunterstützter Entwurf digitaler Regelungen mit Prozeßidenti-
fikation. Regelungstechnik 32 (1984) 179-189, 227-234
30.10 Isermann, R.: Digital control methods for power station plants based on identified
process models. Proc. IF AC-Symp. on Automatic Control in Power Generation,
Pretoria 1980. Oxford: Pergamon Press
30.11 Mann, W.: Identifikation und digitale Regelung eines Trommeltrockners. Diss. TH
Darmstadt. Karlsruhe: Ges. f. Kernforschung, PDV-Ber. 189 (1980)
30.12 Mann, W.: Digital control of a rotary dryer in the sugar industry. 6th IFAC/IFIP
Conference on Digital Computer Applications. Düsseldorf 1980. Automatica 19
(1983) 131-148
30.13 Mann, W.: Identifikation und digitale Regelung eines Trommeltrockners für
Zuckerrübenschnitzel. Regelungstechnik 29 (1981) 263-269, 305-311
30.14 Mosel, P.; Feuerstein, E.; Peters, P.; Scholze, G.: Führung einer
Trommeltrockneranlage für Preßschnitzel mit einem Prozeßrechner. Zuckerindustrie 105 (1980)
554-561
30.15 Hensel, H.; Isermann, R.; Schmidt-Mende, P.: Experimentelle Identifikation und
rechnergestützter Regler-Entwurf bei technischen Prozessen. Chem.-Ing. Tech. 58
(1986) 875-887
Subject Index

actuators -, infinite memory 249


-, and control algorithms 254 -, limited memory 250
-, control 258 -, recursive 249
-, integral 257 -, vectors 119
-, properties 256
-, proportional 257
-, quantization 262 back-up controller 216
actuator constraints 265 Bessel filter 244
actuator speed bias 145
-, constant 260 Butterworth filter 244
-, varying 259
adaption law 129, 131, 132, 136
adaptive control 127,224 cancellation feedforward control 57
-, applications 290 cascade-control 49
-, cautious 173 certainty equivalence 173
-, certainty equivalence 173 -, controller 173
-, definition 127 -, principle 40, 173
-, direct 138 comparison
-, dual 174 -, deterministic and stochastic controllers 32
-, feedforward 128 -, minimum variance controllers 32
-, indirect 138 consistent 145
-, multivariable -, in mean square 145
-, parameter adaptive 170 control difference
-, with feedback 128 -, offset 212,257
-, with identification model 129, 138 control factor
-, with microcomputers 290 -, dynamic 18
-, with reference model 129 -, stochastic 11, 30
air-conditioning control systems
-, adaptive control 293 -, adaptive 127,224, 306
aliasing effect 242 -, cascade 49
amplitude quantization 228 -, interconnected 48
analog digital conversion -, learning 217
-, integrating 251 -, multivariable 71, 89
-, quantization 227 -, selftuning 138,221
ARMA model 9, 148, 160 -, servo 129
ARMAX model 149, 171 -, state 36
autocorrelation 4 -, stochastic 3
autocovariance 4 controller
autoregressive process 9 -, adaptive 127
auxiliary variable -, deadbeat 175, 193
-, control 48 -, main 79
auxiliary controller 49 -, matrix-polynomial 105
averaging -, minimum variance 13, 176
-, fading memory 250 -, parameter optimized 89, 179

-, PD- 12 feedforward control


-, PID- 10, 199 -, cancellation 57
-, state 36 -, minimum variance 66
convergence -, parameter-adaptive 217
-, adaptive control 182 -, parameter optimized 60
-, Martingale theory 183 -, state variable 65
-, ODE-method 183 filter
-, parameter estimation 145 -, analog 243
coordination -, band pass 240
-, adaptive control 215,217 -, Bessel 244
correction matrix 122 -, Butterworth
correlation function -, digital 245
-, auto- 4 -, high pass 248
-, cross- 4 -, Kalman 117, 121
coupling controller 99 -, low pass 243, 246
coupling elements 72 -, special 249
coupling factor -, Tschebyscheff 245
-, dynamic 76 -, Wiener 117
-, negative 78 filter algorithms
-, positive 78 -, averaging 249
-, static 77 -, recursive 249
covariance function filtering
-, auto 4 -, alias 242
-, cross 4 -, analog 243
covariance matrix 5, 145 -, digital 245
cross correlation 4 -, disturbances 240
cross covariance function 4 -, noise 240
-, outlier 252
fixed point representation 230
d.c. value estimation 146, 191
dead band 236
deadbeat controller 175 gradient method 130
-, multivariable 105
decoupling 99
describing function 235 heat exchanger
design -, digital control 275
-, computer aided 266 high pass filter 248
-, control systems 266 hyperstability design 133
design parameters
-, adaptive control 212
decentralized control identifiability conditions
-, selftuning 224 -, closed loop 160
difference equation -, open loop (convergence) 145
-, scalar 6, 8 identification
-, stochastic 5, 8 -, batch processing 141
-, vector 5 -, closed loop 158
digital analog converter (DAC) -, direct 164
-, quantization 230 -, on-line 141
dithering 239 -, processes 268
-, program packages 269
-, real time 141
equation error 143 identification model 138
estimation -, adaptive control with 138
-, constant values 249 -, explicit 139
-, parameter 143 -, implicit 139
-, recursive 144 -, nonparametric 139
-, state variables 119 -, parametric 139
-, time variant values 152, 250 innovation 124
-, vectors 121 innovations state model 114
expectation value 4 instrumental variable method 150

integral term -, parameter-optimized 89, 103


-, adaptive controller 180, 196 -, stability regions 92
-, state variable 109
-, tuning rules 96
Kalman-Bucy-Filter 117 -, two-input two-output 96
Kalman-Filter 39, 117, 121 multivariable processes 71
Kronecker delta function 36 -, canonical structure 73
-, matrix polynomial representation 82
-, state representation 82
limit cycle 232 -, transfer function representation 71
limiting frequency 243
linearization
-, actuator 262 noise
Ljapunov design 132 -, quantization 231
Ljapunov method 235 -, white 5
loss function Non-interaction
-, parameter estimation 144 -, disturbances 100
low pass filter -, models 100
-, analog 243 -, reference signals 100
-, digital 245 nonlinearities
-, actuators 255
-, through digitalization 230
main controller 79, 90 numerical range 227
Markov signal process 6, 117
matrix polynomial controller
-, deadbeat 105 observer
-, general 105 -, state 196
-, minimum variance 107 one-step ahead prediction 143
memory (estimation algorithms) optimization
-, fading 250 -, parameter 103, 202
-, infinite 249 order selection 212
-, limited 250 orthogonality
microcomputer -, state estimation 124
-, adaptive control 290 outlier
-, computing times 292 -, filtering 252
-, digital control 290
minimum variance
-, feedback control 13 parameter-adaptive controller 139
-, feedforward control 66 -, applications 293
MIT-rule 131 -, asynchronous 181
model -, convergence 182
-, mathematical 13 -, coordination 215
-, process 13 -, deadbeat 175, 193
-, signal 3 -, design parameters 212
modelling -, design principles 170
-, experimentally 268 -, deterministic 171,192
-, theoretically 268 -, explicit combination 180
model reference adaptive systems -, feedforward 217
-, direct 138 -, implicit combination 180
-, hyperstability design 133 -, minimum variance 176
-, indirect 138 -, multivariable 220
-, Ljapunov design 132 -, parameter-optimization 179
-, local parameter optimization 130 -, parameter optimized 202
moving average process 9 -, PID 199
multivariable control 89 -, pole assignment 179
-, deadbeat 105 -, stability 182
-, decoupled 99, 113 -, start 208, 213
-, matrix polynomial 105 -, state-space 179, 196
-, minimum variance 107, 113 -, stochastic 188
-, parameter adaptive 220 -, supervision 215

parameter-adaptive controller (Contd.) -, offset 232


-, synchronous 181 -, products 236
-, tuning rules 200 -, unit 228
parameter estimation -, variables 230
-, closed loop 158
-, convergence 145
-, non recursive 144 reference model 128, 129
-, processes 142 residual 124, 143
rotary drier
-, recursive 149
-, digital control 280
-, signals 142, 148
rounding
-, time variant processes 153
-, analog-digital-conversion 227
-, unbiased 145
parameter estimation method
-, correlation and least squares 169
-, exponential weighting 152 sampling theorem
-, Shannon 242
-, extended least squares 149
-, instrumental variables 150 selftuning controller 138, 221
sensitivity model 131
-, least squares 143
-, square root filtering 154 separation principle
-, adaptive control 173
-, UD-factorization 156
-, stochastic state control 40
-, unified algorithms 152
Shannon's sampling theorem 242
parameter identifiability 160
parameter optimization signals
-, Hooke-Jeeves 202 -, deterministic 3
-, multivariable controller 89 -, discrete-time 4
-, Markov 6
-, stepwise 202
-, orthogonal 5
P-canonical structure 73
-, stationary 4
pH-value process
-, stochastic 3
-, adaptive control 301
-, uncorrelated 5
PID-controller
-, adaptive 199 -, vector 5
spectrum
-, stability regions 92
-, basic 242
-, tuning rules 96, 200
-, side 242
Popov-hyperstability concept 133
position algorithm 254 superheater
-, adaptive control 293
position control 257
positive real 133, 183 -, digital control 274
prediction supervision
-, one step-ahead 144 -, adaptive control 215
pre identification 213 stability
process computer -, adaptive control systems 132, 182
stability regions 92
-, digital control 280
state controller
program packages
-, adaptive 196
-, computer aided design of control
-, for external disturbances 40
algorithms 266
-, matrix Riccati 197
-, identification 269
-, minimum variance 113
-, multivariable 109
-, pole assignment 109
quantization -, stochastic disturbances 36
-, actuator 262 -, white noise 36
-, A/D converter 227 -, with state estimation 38
-, amplitude 227 state variable estimator 116
-, coefficients 235 -, method of least squares 118
-, D/A converter 230 -, minimum variance principle 119, 124
-, dead band 236 -, orthogonality principle 119, 124
-, effects 227, 231 state variable observer 196
-, error 231 stationarity
-, limit cycle 233, 235 -, narrow sense 4
-, noise 231 -, wide sense 4

step motor 257 UD-factorization 156

transfer matrix 73 variance 4


truncation error 228 V-canonical structure 73
Tschebyscheff filter 245
tuning rules
-, multivariable controller 96 white noise 5
-, parameter optimized controller 200 Wiener filter 117
-, selftuning controller 200 wind-up
twovariable controller 75, 89 -, control deviation 260, 265
twovariable process 73, 89 word length 227
