
Because the shape of the whole universe is most perfect and, in fact,
designed by the wisest Creator, nothing in all of the world will occur
in which no maximum or minimum rule is somehow shining forth.

Leonhard Euler, 1744

Dedication

My deceased parents who shaped my life

Desineni Rama Naidu

Desineni Subbamma

and

My teacher who shaped my education

Buggapati Audi Chetty



Preface

Many systems, physical, chemical, and economic, can be modeled
by mathematical relations, such as deterministic and/or stochastic
differential and/or difference equations. These systems then change with
time or any other independent variable according to the dynamical
relations. It is possible to steer these systems from one state to another
state by the application of some type of external inputs or controls.
If this can be done at all, there may be different ways of doing the
same task, and among these there may be one best way. This best way
may mean minimum time to go from one state to another, or maximum
thrust developed by a rocket engine. The input given to the system
corresponding to this best situation is called the optimal control. The
measure of the "best way," or performance, is called the performance index
or cost function. Thus, we have an optimal control system when a
system is controlled in an optimum way satisfying a given performance
index. The theory of optimal control systems enjoyed a flourishing
period for nearly two decades after the dawn of the so-called modern
control theory around the 1960s. The interest in theoretical and practical
aspects of the subject has been sustained by its applications to such
diverse fields as electrical power, aerospace, chemical plants, economics,
medicine, biology, and ecology.
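To make the notion of a performance index concrete, one widely used form, treated at length later in the book for linear quadratic regulator problems, is the quadratic cost functional (the symbols F, Q, and R here follow common convention rather than anything stated in this preface):

```latex
% Quadratic performance index for a linear system \dot{x} = A x + B u:
% a terminal-state penalty plus an integral of state and control costs.
J = \frac{1}{2}\,\mathbf{x}^{T}(t_f)\,\mathbf{F}\,\mathbf{x}(t_f)
    + \frac{1}{2}\int_{t_0}^{t_f}
      \left[\mathbf{x}^{T}(t)\,\mathbf{Q}\,\mathbf{x}(t)
          + \mathbf{u}^{T}(t)\,\mathbf{R}\,\mathbf{u}(t)\right] dt
```

Here F and Q are positive semidefinite weighting matrices on the state and R is positive definite on the control; minimizing J trades off regulation error against control effort.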

Aim and Scope


In this book we are concerned essentially with the control of physical
systems which are dynamic and hence described by ordinary differential
or difference equations, in contrast to static systems, which are
characterized by algebraic equations. Further, our focus is on
deterministic systems only.
The development of optimal control theory in the sixties revolved
around the maximum principle proposed by the Soviet mathematician
L. S. Pontryagin and his colleagues, whose work was published in
English in 1962. Further contributions are due to R. E. Kalman of the
United States. Since then, many excellent books on optimal control
theory of varying levels of sophistication have been published.
This book is written with the student in mind and is intended to provide
the student a simplified treatment of the subject, with an appropriate
dose of mathematics. Another feature of this book is to assemble all the
topics which can be covered in a one-semester class. A special feature
of this book is the presentation of the procedures in the form of a summary
table designed in terms of a statement of the problem and a step-by-step
solution of the problem. Further, MATLAB and SIMULINK,
including the Control System and Symbolic Math Toolboxes, have been
incorporated into the book. The book is ideally suited for a one-semester,
second-level, graduate course in control systems and optimization.

Background and Audience


This is a second-level graduate textbook, and as such the background
material required for using this book is a first course on control systems,
state space analysis, or linear systems theory. It is suggested that
the student review the material in Appendices A and B given at the
end of the book. This book is aimed at graduate students in Electrical,
Mechanical, Chemical, and Aerospace Engineering and Applied Math-
ematics. It can also be used by professional scientists and engineers
working in a variety of industries and research organizations.

Acknowledgments
This book has grown out of my lecture notes prepared over many years
of teaching at the Indian Institute of Technology (IIT), Kharagpur, and
Idaho State University (ISU), Pocatello, Idaho. As such, I am indebted
to many of my teachers and students. In recent years at ISU, there
are many people whom I would like to thank for their encouragement
and cooperation. First of all, I would like to thank the late Dean Hary
Charyulu for his encouragement of graduate work and research, which
kept me active in the area of optimal control. Also, I would like to mention
a special person, Kevin Moore, whose encouragement and cooperation
made my stay at ISU a very pleasant and scholarly productive one for
many years during 1990-98. During the last few years, Dean Kunze
and Associate Dean Stuffle have been of great help in providing the
right atmosphere for teaching and research work.
Next, my students over the years were my best critics, providing
many helpful suggestions. Among the many, special mention must be
made of Martin Murillo, Yoshiko Imura, and Keith Fisher, who made
several suggestions on my manuscript. In particular, Craig Rieger (of
the Idaho National Engineering and Environmental Laboratory (INEEL))
deserves special mention for having infinite patience in writing and testing
programs in MATLAB to obtain analytical solutions to matrix
Riccati differential and difference equations.

MATLAB and SIMULINK are registered trademarks of The MathWorks, Inc., Natick, MA, USA.
The camera-ready copy of this book was prepared by the author
using LaTeX with PCTeX32, Version 4.0. The figures were drawn
using CorelDRAW and exported into the LaTeX document.
Several people at the publishing company CRC Press deserve mention.
Among them, special mention must be made of Nora Konopka,
Acquisition Editor, Electrical Engineering, for her interest, understanding,
and patience with me to see that this book project was completed.
Also, thanks are due to Michael Buso, Michelle Reyes, Helena Redshaw,
and Judith Simon Kamin. I would like to make a special mention of
Sean Davey, who helped me with many issues regarding LaTeX; without
his extremely prompt responses this book project would have been
delayed much longer. Any corrections and suggestions are welcome via
email to naiduds@isu.edu.
Finally, it is my pleasant duty to thank my wife, Sita and my daugh-
ters, Radhika and Kiranmai who have been a great source of encour-
agement and cooperation throughout my academic life.

Desineni Subbaram Naidu


Pocatello, Idaho
June 2002

LaTeX is a registered trademark of Personal TeX, Inc., Mill Valley, CA.
CorelDRAW is a registered trademark of Corel Corporation or Corel Corporation Limited.

ACKNOWLEDGMENTS

The permissions given by

1. Prentice Hall for D. E. Kirk, Optimal Control Theory: An Introduction,
Prentice Hall, Englewood Cliffs, NJ, 1970,

2. John Wiley for F. L. Lewis, Optimal Control, John Wiley & Sons,
Inc., New York, NY, 1986,

3. McGraw-Hill for M. Athans and P. L. Falb, Optimal Control:
An Introduction to the Theory and Its Applications, McGraw-Hill
Book Company, New York, NY, 1966, and

4. Springer-Verlag for H. H. Goldstine, A History of the Calculus of
Variations, Springer-Verlag, New York, NY, 1980,

are hereby acknowledged.



AUTHOR'S BIOGRAPHY
Desineni Subbaram Naidu received B.E. degree in Electrical Engineering
from Sri Venkateswara University, Tirupati, India, and M.Tech. and PhD
degrees in Control Systems Engineering, from Indian Institute of Technology
(IIT), Kharagpur, India. He held various positions with the Department of
Electrical Engineering at IIT. Dr. Naidu was a recipient of a Senior National
Research Council (NRC) Associateship of the National Academy of Sciences,
Washington, DC, tenable at NASA Langley Research Center, Hampton, Vir-
ginia, during 1985-87 and at the U. S. Air Force Research Laboratory (AFRL)
at Wright-Patterson Air Force Base (WPAFB), Ohio, during 1998-99. During
1987-90, he was an adjunct faculty member in the Department of Electrical
and Computer Engineering at Old Dominion University, Norfolk, Virginia.
Since August 1990, Dr. Naidu has been a professor at Idaho State University.
At present he is Director of the Measurement and Control Engineering Re-
search Center; Coordinator, Electrical Engineering program; and Associate
Dean of Graduate Studies in the College of Engineering, Idaho State Univer-
sity, Pocatello, Idaho.
Dr. Naidu has over 150 publications including a research monograph, Sin-
gular Perturbation Analysis of Discrete Control Systems, Lecture Notes in
Mathematics, Springer Verlag, Berlin, Germany, 1985; a book, Singular Per-
turbation Methodology in Control Systems, IEE Control Engineering Series,
Peter Peregrinus Limited, London, UK, 1988; and a research monograph enti-
tled, Aeroassisted Orbital Transfer: Guidance and Control Strategies, Lecture
Notes in Control and Information Sciences, Springer-Verlag, London, UK,
1994.
Dr. Naidu is (or has been) a member of the Editorial Boards of the IEEE
Transactions on Automatic Control, New York (1993-99); the International
Journal of Robust and Nonlinear Control, John Wiley & Sons, New York, NY
(1996-present); the International Journal of Control-Theory and Advanced
Technology (C-TAT), Tokyo, Japan (1992-1996); and a member of the Editorial
Advisory Board of Mechatronics: The Science of Intelligent Machines,
An International Journal, Oxford, UK (1992-present).
Professor Naidu is an elected Fellow of The Institute of Electrical and Electronics
Engineers (IEEE), a Fellow of the World Innovation Foundation (WIF),
an Associate Fellow of the American Institute of Aeronautics and Astronautics
(AIAA), and a member of several other organizations such as SIAM and ASEE.
Dr. Naidu was a recipient of the Idaho State University Outstanding
Researcher Award for 1993-94 and 1994-95 and the Distinguished Researcher
Award for 1994-95. Professor Naidu's biography is listed (multiple years) in
Who's Who Among America's Teachers, the Silver Anniversary 25th Edition
of Who's Who in the West, Who's Who in Technology (WWT), The
International Directory of Distinguished Leadership, and other biographical
sources.
Contents
1 Introduction 1
1.1 Classical and Modern Control . . . . . . . . . . . . . . . 1
1.2 Optimization . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Optimal Control . . . . . . . . . . . . . . . . . . . . . . 6
1.3.1 Plant . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.2 Performance Index . . . . . . . . . . . . . . . . . 6
1.3.3 Constraints . . . . . . . . . . . . . . . . . . . . . 9
1.3.4 Formal Statement of Optimal Control System . . 9
1.4 Historical Tour . . . . . . . . . . . . . . . . . . . . . . . 11
1.4.1 Calculus of Variations . . . . . . . . . . . . . . . 11
1.4.2 Optimal Control Theory . . . . . . . . . . . . . . 13
1.5 About This Book . . . . . . . . . . . . . . . . . . . . . . 15
1.6 Chapter Overview . . . . . . . . . . . . . . . . . . . . . 16
1.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2 Calculus of Variations and Optimal Control 19


2.1 Basic Concepts . . . . . . . . . . . . . . . . . . . . . . . 19
2.1.1 Function and Functional . . . . . . . . . . . . . . 19
2.1.2 Increment . . . . . . . . . . . . . . . . . . . . . . 20
2.1.3 Differential and Variation . . . . . . . . . . . . . 22
2.2 Optimum of a Function and a Functional . . . . . . . . 25
2.3 The Basic Variational Problem . . . . . . . . . . . . . . 27
2.3.1 Fixed-End Time and Fixed-End State System . . 27
2.3.2 Discussion on Euler-Lagrange Equation . . . . . 33
2.3.3 Different Cases for Euler-Lagrange Equation . . 35
2.4 The Second Variation . . . . . . . . . . . . . . . . . . . 39
2.5 Extrema of Functions with Conditions . . . . . . . . . . 41
2.5.1 Direct Method . . . . . . . . . . . . . . . . . . . 43
2.5.2 Lagrange Multiplier Method . . . . . . . . . . . . 45
2.6 Extrema of Functionals with Conditions . . . . . . . . . 48
2.7 Variational Approach to Optimal Control Systems . . . 57


2.7.1 Terminal Cost Problem . . . . . . . . . . . . . . 57


2.7.2 Different Types of Systems . . . . . . . . . . . . 65
2.7.3 Sufficient Condition . . . . . . . . . . . . . . . . 67
2.7.4 Summary of Pontryagin Procedure . . . . . . . . 68
2.8 Summary of Variational Approach . . . . . . . . . . . . 84
2.8.1 Stage I: Optimization of a Functional . . . . . . 85
2.8.2 Stage II: Optimization of a Functional with
Condition . . . . . . . . . . . . . . . . . . . . . . 86
2.8.3 Stage III: Optimal Control System with
Lagrangian Formalism . . . . . . . . . . . . . . . 87
2.8.4 Stage IV: Optimal Control System with
Hamiltonian Formalism: Pontryagin Principle . . 88
2.8.5 Salient Features . . . . . . . . . . . . . . . . . . . 91
2.9 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 96

3 Linear Quadratic Optimal Control Systems I 101


3.1 Problem Formulation . . . . . . . . . . . . . . . . . . . . 101
3.2 Finite-Time Linear Quadratic Regulator . . . . . . . . . 104
3.2.1 Symmetric Property of the Riccati Coefficient
Matrix . . . . . . . . . . . . . . . . . . . . . . . . 109
3.2.2 Optimal Control . . . . . . . . . . . . . . . . . . 110
3.2.3 Optimal Performance Index . . . . . . . . . . . . 110
3.2.4 Finite-Time Linear Quadratic Regulator:
Time-Varying Case: Summary . . . . . . . . . . 112
3.2.5 Salient Features . . . . . . . . . . . . . . . . . . . 114
3.2.6 LQR System for General Performance Index . . 118
3.3 Analytical Solution to the Matrix
Differential Riccati Equation . . . . . . . . . . . . . . . 119
3.3.1 MATLAB Implementation of Analytical
Solution to Matrix DRE . . . . . . . . . . . . . . 122
3.4 Infinite-Time LQR System I . . . . . . . . . . . . . . . . 125
3.4.1 Infinite-Time Linear Quadratic Regulator: Time-
Varying Case: Summary . . . . . . . . . . . . . . 128
3.5 Infinite-Time LQR System II . . . . . . . . . . . . . . . 129
3.5.1 Meaningful Interpretation of Riccati Coefficient . 132
3.5.2 Analytical Solution of the Algebraic
Riccati Equation . . . . . . . . . . . . . . . . . . 133
3.5.3 Infinite-Interval Regulator System:
Time-Invariant Case: Summary . . . . . . . . . . 134
3.5.4 Stability Issues of Time-Invariant Regulator . . . 139

3.5.5 Equivalence of Open-Loop and Closed-Loop


Optimal Controls . . . . . . . . . . . . . . . . . . 141
3.6 Notes and Discussion . . . . . . . . . . . . . . . . . . . . 144
3.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 147

4 Linear Quadratic Optimal Control Systems II 151


4.1 Linear Quadratic Tracking System: Finite-Time Case . 152
4.1.1 Linear Quadratic Tracking System: Summary . . 157
4.1.2 Salient Features of Tracking System . . . . . . . 158
4.2 LQT System: Infinite-Time Case . . . . . . . . . . . . . 166
4.3 Fixed-End-Point Regulator System . . . . . . . . . . . . 169
4.4 LQR with a Specified Degree of Stability . . . . . . . . . 175
4.4.1 Regulator System with Prescribed Degree of
Stability: Summary . . . . . . . . . . . . . . . . 177
4.5 Frequency-Domain Interpretation . . . . . . . . . . . . . 179
4.5.1 Gain Margin and Phase Margin . . . . . . . . . . 181
4.6 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 188

5 Discrete-Time Optimal Control Systems 191


5.1 Variational Calculus for Discrete-Time
Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
5.1.1 Extremization of a Functional . . . . . . . . . . . 192
5.1.2 Functional with Terminal Cost . . . . . . . . . . 197
5.2 Discrete-Time Optimal Control Systems . . . . . . . . . 199
5.2.1 Fixed-Final State and Open-Loop Optimal
Control . . . . . . . . . . . . . . . . . . . . . . . 203
5.2.2 Free-Final State and Open-Loop Optimal Control 207
5.3 Discrete-Time Linear State Regulator
System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
5.3.1 Closed-Loop Optimal Control: Matrix Difference
Riccati Equation . . . . . . . . . . . . . . . . . . 209
5.3.2 Optimal Cost Function . . . . . . . . . . . . . . 213
5.4 Steady-State Regulator System . . . . . . . . . . . . . . 219
5.4.1 Analytical Solution to the Riccati Equation . . . 225
5.5 Discrete-Time Linear Quadratic Tracking System . . . . 232
5.6 Frequency-Domain Interpretation . . . . . . . . . . . . . 239
5.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 245

6 Pontryagin Minimum Principle 249


6.1 Constrained System . . . . . . . . . . . . . . . . . . . . 249
6.2 Pontryagin Minimum Principle . . . . . . . . . . . . . . 252
6.2.1 Summary of Pontryagin Principle . . . . . . . . . 256
6.2.2 Additional Necessary Conditions . . . . . . . . . 259
6.3 Dynamic Programming . . . . . . . . . . . . . . . . . . 261
6.3.1 Principle of Optimality . . . . . . . . . . . . . . 261
6.3.2 Optimal Control Using Dynamic Programming . 266
6.3.3 Optimal Control of Discrete-Time Systems . . . 272
6.3.4 Optimal Control of Continuous-Time Systems . . 275
6.4 The Hamilton-Jacobi-Bellman Equation . . . . . . . . . 277
6.5 LQR System Using H-J-B Equation . . . . . . . . . . . 283
6.6 Notes and Discussion . . . . . . . . . . . . . . . . . . . . 288

7 Constrained Optimal Control Systems 293


7.1 Constrained Optimal Control . . . . . . . . . . . . . . . 293
7.1.1 Time-Optimal Control of LTI System . . . . . . 295
7.1.2 Problem Formulation and Statement . . . . . . . 295
7.1.3 Solution of the TOC System . . . . . . . . . . . 296
7.1.4 Structure of Time-Optimal Control System . . . 303
7.2 TOC of a Double Integral System . . . . . . . . . . . . . 305
7.2.1 Problem Formulation and Statement . . . . . . . 306
7.2.2 Problem Solution . . . . . . . . . . . . . . . . . . 307
7.2.3 Engineering Implementation of Control Law . . . 314
7.2.4 SIMULINK Implementation of Control Law . . . 315
7.3 Fuel-Optimal Control Systems . . . . . . . . . . . . . . . 315
7.3.1 Fuel-Optimal Control of a Double Integral System 316
7.3.2 Problem Formulation and Statement . . . . . . . 319
7.3.3 Problem Solution . . . . . . . . . . . . . . . . . . 319
7.4 Minimum-Fuel System: LTI System . . . . . . . . . . . 328
7.4.1 Problem Statement . . . . . . . . . . . . . . . . . 328
7.4.2 Problem Solution . . . . . . . . . . . . . . . . . . 329
7.4.3 SIMULINK Implementation of Control Law . . . 334
7.5 Energy-Optimal Control Systems . . . . . . . . . . . . . 334
7.5.1 Problem Formulation and Statement . . . . . . . 335
7.5.2 Problem Solution . . . . . . . . . . . . . . . . . . 339
7.6 Optimal Control Systems with State
Constraints . . . . . . . . . . . . . . . . . . . . . . . . . 351
7.6.1 Penalty Function Method . . . . . . . . . . . . . 352
7.6.2 Slack Variable Method . . . . . . . . . . . . . . . 358

7.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 361

A Vectors and Matrices 365


A.1 Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
A.2 Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
A.3 Quadratic Forms and Definiteness . . . . . . . . . . . . 376

B State Space Analysis 379


B.1 State Space Form for Continuous-Time Systems . . . . . 379
B.2 Linear Matrix Equations . . . . . . . . . . . . . . . . . . 381
B.3 State Space Form for Discrete-Time Systems . . . . . . 381
B.4 Controllability and Observability . . . . . . . . . . . . . 383
B.5 Stabilizability, Reachability and Detectability . . . . . . 383

C MATLAB Files 385


C.1 MATLAB for Matrix Differential Riccati Equation . . . 385
C.1.1 MATLAB File lqrnss.m . . . . . . . . . . . . . . 386
C.1.2 MATLAB File lqrnssf.m . . . . . . . . . . . . . . 393
C.2 MATLAB for Continuous-Time Tracking System . . . . 394
C.2.1 MATLAB File for Example 4.1 (example4 1.m) . 394
C.2.2 MATLAB File for Example 4.1 (example4 1p.m) 397
C.2.3 MATLAB File for Example 4.1 (example4 1g.m) 397
C.2.4 MATLAB File for Example 4.1 (example4 1x.m) 397
C.2.5 MATLAB File for Example 4.2 (example4 2.m) . 398
C.2.6 MATLAB File for Example 4.2 (example4 2p.m) 400
C.2.7 MATLAB File for Example 4.2 (example4 2g.m) 400
C.2.8 MATLAB File for Example 4.2 (example4 2x.m) 401
C.3 MATLAB for Matrix Difference Riccati Equation . . . . 401
C.3.1 MATLAB File lqrdnss.m . . . . . . . . . . . . . 401
C.4 MATLAB for Discrete-Time Tracking System . . . . . . 409

References 415

Index 425
List of Figures
1.1 Classical Control Configuration . . . . . . . . . . . . . . 1
1.2 Modern Control Configuration . . . . . . . . . . . . . . 3
1.3 Components of a Modern Control System . . . . . . . . 4
1.4 Overview of Optimization . . . . . . . . . . . . . . . . . 5
1.5 Optimal Control Problem . . . . . . . . . . . . . . . . . 10

2.1 Increment Δf, Differential df, and Derivative ḟ of a
Function f(t) . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2 Increment ΔJ and the First Variation δJ of the Func-
tional J . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3 (a) Minimum and (b) Maximum of a Function f (t) . . . 26
2.4 Fixed-End Time and Fixed-End State System . . . . . . 29
2.5 A Nonzero g(t) and an Arbitrary x(t) . . . . . . . . . . 32
2.6 Arc Length . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.7 Free-Final Time and Free-Final State System . . . . . . 59
2.8 Final-Point Condition with a Moving Boundary θ(t) . . 63
2.9 Different Types of Systems: (a) Fixed-Final Time and
Fixed-Final State System, (b) Free-Final Time and Fixed-
Final State System, (c) Fixed-Final Time and Free-Final
State System, (d) Free-Final Time and Free-Final State
System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
2.10 Optimal Controller for Example 2.12 . . . . . . . . . . . 72
2.11 Optimal Control and States for Example 2.12 . . . . . . 74
2.12 Optimal Control and States for Example 2.13 . . . . . . 77
2.13 Optimal Control and States for Example 2.14 . . . . . . 81
2.14 Optimal Control and States for Example 2.15 . . . . . . 84
2.15 Open-Loop Optimal Control . . . . . . . . . . . . . . . . 94
2.16 Closed-Loop Optimal Control . . . . . . . . . . . . . . . 95

3.1 State and Costate System . . . . . . . . . . . . . . . . . 107


3.2 Closed-Loop Optimal Control Implementation . . . . . . 117


3.3 Riccati Coefficients for Example 3.1 . . . . . . . . . . . 125


3.4 Closed-Loop Optimal Control System for Example 3.1 . 126
3.5 Optimal States for Example 3.1 . . . . . . . . . . . . . . 127
3.6 Optimal Control for Example 3.1 . . . . . . . . . . . . . 127
3.7 Interpretation of the Constant Matrix P . . . . . . . . . 133
3.8 Implementation of the Closed-Loop Optimal Control:
Infinite Final Time . . . . . . . . . . . . . . . . . . . . . 135
3.9 Closed-Loop Optimal Control System . . . . . . . . . . 138
3.10 Optimal States for Example 3.2 . . . . . . . . . . . . . . 140
3.11 Optimal Control for Example 3.2 . . . . . . . . . . . . . 141
3.12 (a) Open-Loop Optimal Controller (OLOC) and (b) Closed-
Loop Optimal Controller (CLOC) . . . . . . . . . . . . 145

4.1 Implementation of the Optimal Tracking System . . . . 157


4.2 Riccati Coefficients for Example 4.1 . . . . . . . . . . . 163
4.3 Coefficients g1(t) and g2(t) for Example 4.1 . . . . . . . 164
4.4 Optimal States for Example 4.1 . . . . . . . . . . . . . . 164
4.5 Optimal Control for Example 4.1 . . . . . . . . . . . . . 165
4.6 Riccati Coefficients for Example 4.2 . . . . . . . . . . . 167
4.7 Coefficients g1(t) and g2(t) for Example 4.2 . . . . . . . 168
4.8 Optimal Control and States for Example 4.2 . . . . . . 168
4.9 Optimal Control and States for Example 4.2 . . . . . . 169
4.10 Optimal Closed-Loop Control in Frequency Domain . . 180
4.11 Closed-Loop Optimal Control System with Unity Feed-
back . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
4.12 Nyquist Plot of Go(jω) . . . . . . . . . . . . . . . . . . 185
4.13 Intersection of Unit Circles Centered at Origin and −1+j0 186

5.1 State and Costate System . . . . . . . . . . . . . . . . . 205


5.2 Closed-Loop Optimal Controller for Linear Discrete-Time
Regulator . . . . . . . . . . . . . . . . . . . . . . . . . . 215
5.3 Riccati Coefficients for Example 5.3 . . . . . . . . . . . 219
5.4 Optimal Control and States for Example 5.3 . . . . . . . 220
5.5 Optimal Control and States for Example 5.3 . . . . . . . 221
5.6 Closed-Loop Optimal Control for Discrete-Time Steady-
State Regulator System . . . . . . . . . . . . . . . . . . 223
5.7 Implementation of Optimal Control for Example 5.4 . . 226
5.8 Implementation of Optimal Control for Example 5.4 . . 227
5.9 Riccati Coefficients for Example 5.5 . . . . . . . . . . . 231
5.10 Optimal States for Example 5.5 . . . . . . . . . . . . . . 232

5.11 Optimal Control for Example 5.5 . . . . . . . . . . . . . 233


5.12 Implementation of Discrete-Time Optimal Tracker . . . 239
5.13 Riccati Coefficients for Example 5.6 . . . . . . . . . . . 240
5.14 Coefficients g1(t) and g2(t) for Example 5.6 . . . . . . . 241
5.15 Optimal States for Example 5.6 . . . . . . . . . . . . . . 241
5.16 Optimal Control for Example 5.6 . . . . . . . . . . . . . 242
5.17 Closed-Loop Discrete-Time Optimal Control System . . 243

6.1 (a) An Optimal Control Function Constrained by a
Boundary (b) A Control Variation for Which u(t) is Not
Admissible [79] . . . . . . . . . . . . . . . . . . . . . . . 254
6.2 Illustration of Constrained (Admissible) Controls . . . . 260
6.3 Optimal Path from A to B . . . . . . . . . . . . . . . . . 261
6.4 A Multistage Decision Process . . . . . . . . . . . . . . 262
6.5 A Multistage Decision Process: Backward Solution . . . 263
6.6 A Multistage Decision Process: Forward Solution . . . 265
6.7 Dynamic Programming Framework of Optimal State Feed-
back Control . . . . . . . . . . . . . . . . . . . . . . . . 271
6.8 Optimal Path from A to B . . . . . . . . . . . . . . . . . 290

7.1 Signum Function . . . . . . . . . . . . . . . . . . . . . . 299


7.2 Time-Optimal Control . . . . . . . . . . . . . . . . . . . 299
7.3 Normal Time-Optimal Control System . . . . . . . . . . 300
7.4 Singular Time-Optimal Control System . . . . . . . . . 301
7.5 Open-Loop Structure for Time-Optimal Control System 304
7.6 Closed-Loop Structure for Time-Optimal Control System 306
7.7 Possible Costates and the Corresponding Controls . . . 309
7.8 Phase Plane Trajectories for u = +1 (dashed lines) and
u = -1 (dotted lines) . . . . . . . . . . . . . . . . . . . . 310
7.9 Switch Curve for Double Integral Time-Optimal Control
System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
7.10 Various Trajectories Generated by Four Possible Control
Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . 313
7.11 Closed-Loop Implementation of Time-Optimal Control
Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
7.12 SIMULINK Implementation of Time-Optimal Control
Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
7.13 Phase-Plane Trajectory for γ+ : Initial State (2,-2) and
Final State (0,0) . . . . . . . . . . . . . . . . . . . . . . 317
7.14 Phase-Plane Trajectory for γ− : Initial State (-2,2) and
Final State (0,0) . . . . . . . . . . . . . . . . . . . . . . 317
7.15 Phase-Plane Trajectory for R+ : Initial State (-1,-1) and
Final State (0,0) . . . . . . . . . . . . . . . . . . . . . . 318
7.16 Phase-Plane Trajectory for R− : Initial State (1,1) and
Final State (0,0) . . . . . . . . . . . . . . . . . . . . . . 318
7.17 Relations Between λ2(t) and |u*(t)| + u*(t)λ2(t) . . . . 322
7.18 Dead-Zone Function . . . . . . . . . . . . . . . . . . . . 323
7.19 Fuel-Optimal Control . . . . . . . . . . . . . . . . . . . 323
7.20 Switching Curve for a Double Integral Fuel-Optimal Con-
trol System . . . . . . . . . . . . . . . . . . . . . . . . . 324
7.21 Phase-Plane Trajectories for u(t) = 0 . . . . . . . . . . . 325
7.22 Fuel-Optimal Control Sequences . . . . . . . . . . . . . 326
7.23 ε-Fuel-Optimal Control . . . . . . . . . . . . . . . . . . 327
7.24 Optimal Control as Dead-Zone Function . . . . . . . . . 330
7.25 Normal Fuel-Optimal Control System . . . . . . . . . . 331
7.26 Singular Fuel-Optimal Control System . . . . . . . . . . 332
7.27 Open-Loop Implementation of Fuel-Optimal Control Sys-
tem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
7.28 Closed-Loop Implementation of Fuel-Optimal Control
System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
7.29 SIMULINK Implementation of Fuel-Optimal Control
Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
7.30 Phase-Plane Trajectory for γ+ : Initial State (2,-2) and
Final State (0,0) . . . . . . . . . . . . . . . . . . . . . . 336
7.31 Phase-Plane Trajectory for γ− : Initial State (-2,2) and
Final State (0,0) . . . . . . . . . . . . . . . . . . . . . . 336
7.32 Phase-Plane Trajectory for R1 : Initial State (1,1) and
Final State (0,0) . . . . . . . . . . . . . . . . . . . . . . 337
7.33 Phase-Plane Trajectory for R3 : Initial State (-1,-1) and
Final State (0,0) . . . . . . . . . . . . . . . . . . . . . . 337
7.34 Phase-Plane Trajectory for R2 : Initial State (-1.5,1) and
Final State (0,0) . . . . . . . . . . . . . . . . . . . . . . 338
7.35 Phase-Plane Trajectory for R4 : Initial State (1.5,-1) and
Final State (0,0) . . . . . . . . . . . . . . . . . . . . . . 338
7.36 Saturation Function . . . . . . . . . . . . . . . . . . . . 343
7.37 Energy-Optimal Control . . . . . . . . . . . . . . . . . . 344
7.38 Open-Loop Implementation of Energy-Optimal Control
System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345

7.39 Closed-Loop Implementation of Energy-Optimal Con-


trol System . . . . . . . . . . . . . . . . . . . . . . . . . 346
7.40 Relation Between Optimal Control u*(t) vs (a) q(t) and
(b) 0.5λ(t) . . . . . . . . . . . . . . . . . . . . . . . . . 348
7.41 Possible Solutions of Optimal Costate λ*(t) . . . . . . . 349
7.42 Implementation of Energy-Optimal Control Law . . . . 351
7.43 Relation Between Optimal Control u*(t) and Optimal
Costate λ2(t) . . . . . . . . . . . . . . . . . . . . . . . . 358
List of Tables
2.1 Procedure Summary of Pontryagin Principle for Bolza
Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

3.1 Procedure Summary of Finite-Time Linear Quadratic


Regulator System: Time-Varying Case . . . . . . . . . . 113
3.2 Procedure Summary of Infinite-Time Linear Quadratic
Regulator System: Time-Varying Case . . . . . . . . . . 129
3.3 Procedure Summary of Infinite-Interval Linear Quadratic
Regulator System: Time-Invariant Case . . . . . . . . . 136

4.1 Procedure Summary of Linear Quadratic Tracking System 159


4.2 Procedure Summary of Regulator System with Prescribed
Degree of Stability . . . . . . . . . . . . . . . . . . . . . 178

5.1 Procedure Summary of Discrete-Time Optimal Control


System: Fixed-End Points Condition . . . . . . . . . . . 204
5.2 Procedure Summary for Discrete-Time Optimal Control
System: Free-Final Point Condition . . . . . . . . . . . 208
5.3 Procedure Summary of Discrete-Time, Linear Quadratic
Regulator System . . . . . . . . . . . . . . . . . . . . . . 214
5.4 Procedure Summary of Discrete-Time, Linear Quadratic
Regulator System: Steady-State Condition . . . . . . . . 222
5.5 Procedure Summary of Discrete-Time Linear Quadratic
Tracking System . . . . . . . . . . . . . . . . . . . . . . 238

6.1 Summary of Pontryagin Minimum Principle . . . . . . . 257


6.2 Computation of Cost During the Last Stage k = 2 . . . 269
6.3 Computation of Cost During the Stage k = 1, 0 . . . . . 270
6.4 Procedure Summary of Hamilton-Jacobi-Bellman (HJB)
Approach . . . . . . . . . . . . . . . . . . . . . . . . . . 280


7.1 Procedure Summary of Optimal Control Systems with


State Constraints . . . . . . . . . . . . . . . . . . . . . . 355
