
ROBUST REDUCED ORDER CONTROL

FOR NONLINEAR DISTRIBUTED SYSTEMS


OF BURGERS CLASS

by

Matthias Schmid
Department of Mechanical & Aerospace Engineering
State University of New York at Buffalo
Buffalo, New York 14260

A thesis submitted to the


Faculty of the Graduate School of
the State University of New York at Buffalo
in partial fulfillment of the requirements for the degree of

Master of Science

November 2008
© Copyright by
Matthias Schmid
2008

In Memory of Hedi Schneider

Acknowledgement

I would like to express my sincerest gratitude to my adviser, Dr. John Crassidis, and to my committee
members, Dr. Tarunraj Singh and Dr. Puneet Singla. I would especially like to thank my adviser for
his immense patience and for the opportunity to resume my research after some years of interruption.
I am also grateful for the warm welcome I received from my comrades at the Advanced Navigation
and Control Laboratory, and for the many inspiring discussions we shared.

I would not have had the motivation and persistence to finish this work without the tremen-
dous support of my friends and my family. Especially, many thanks to Dr. Michael Herty, Andreas
Jung, Dr. Christoph Barbian, Lutz Meyer, and Dr. Kok-Lam Lai. Also, I truly appreciate the help
of Dr. Barbian, who provided me with the necessary resources and working environment by sharing
his office with me for nearly a year.

I would also like to thank my parents: without them I would never have had the chance to
pursue my path in life as freely as I have been able to. Also, special thanks go to my godparents for
supporting me in countless situations.

Contents

Preamble iv

Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv

List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x

List of Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv

1 Introduction 1

1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.2 Previous and Related Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.3 Outline and Nomenclature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2 Navier-Stokes 16

2.1 General Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.2 Identification of the Friction Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

3 Burgers’ Equation 24

3.1 Derivation and Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

3.1.1 Derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

3.1.1.1 From Navier-Stokes . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

3.1.1.2 Original Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . 27


3.1.1.3 From Traffic Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3.1.2 Classification and Benchmark Problem . . . . . . . . . . . . . . . . . . . . . . 32

3.2 Analytical Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

3.2.1 General Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

3.2.2 Shock Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

3.2.3 Steady-State Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

3.3 Finite-Element Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

4 Robust Nonlinear Control 54

4.1 Control Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

4.2 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

4.2.1 Linear Analysis: Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

4.2.2 Nonlinear Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

4.2.2.1 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

4.2.2.2 An Attempt Towards Controllability . . . . . . . . . . . . . . . . . . 66

4.3 Control Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

4.3.1 Nominal Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

4.3.1.1 Linear Quadratic Regulator . . . . . . . . . . . . . . . . . . . . . . . 68

4.3.1.2 Inverse Dynamics (Feedback Linearization) . . . . . . . . . . . . . . 71

4.3.2 Estimator: Extended Kalman Filter . . . . . . . . . . . . . . . . . . . . . . . 74

4.3.2.1 Standard Linear Kalman Filter (Continuous and Discrete) . . . . . 75

4.3.2.2 Continuous-Discrete and Extended Kalman Filter . . . . . . . . . . 77

4.4 Model-Error Control Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

4.4.1 Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

4.4.2 One-Step Ahead Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

4.4.3 Realization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

5 Numerical Simulation 89

5.1 Parameters and Simulation Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

5.2 Full-Order Model Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

5.3 Reduced-Order Model Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

6 Conclusions 105

6.1 Summary and Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

6.2 Outlook and Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

Appendix 111

Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

Source Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

Bibliography 127
List of Figures

2.1 Shear Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

2.2 Components of the Friction Tensor for Newtonian Fluids . . . . . . . . . . . . . . . . 21

3.1 Channel Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.2 Analytical Solution of the Viscous Burgers’ Equation, Initial Sine . . . . . . . . . . 39

3.3 Graphic Shock Solution (Quasilinear) . . . . . . . . . . . . . . . . . . . . . . . . . . 41

3.4 Intersecting Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

3.5 Analytical Solution of the Inviscid Burgers’ Equation, Initial Sine . . . . . . . . . . 43

3.6 Basis Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

3.7 Integral Constellations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

3.8 FE Evaluation for Different Numbers of Nodes . . . . . . . . . . . . . . . . . . . . . 51

3.9 Comparison of FE Solutions of Burgers’ Equation . . . . . . . . . . . . . . . . . . . . 52

3.10 Error Distribution in Space and Time . . . . . . . . . . . . . . . . . . . . . . . . . . 53

4.1 General Closed-Loop Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

4.2 Distributed Control and Basis Functions . . . . . . . . . . . . . . . . . . . . . . . . . 58

4.3 Pole-Constellation for the Linearized System; x∗ = 1/π, N = 21, κ = 0.002 . . . . . . . 62

4.4 Pole-Constellation for the Linearized System; x∗ = 1/π, N = 21, κ = 0.002 . . . . . . . 62

4.5 Feedback Gains of the Nominal Controller . . . . . . . . . . . . . . . . . . . . . . . . 71

4.6 Model-Error Prediction, Forward Realization . . . . . . . . . . . . . . . . . . . . . . 84


4.7 Model-Error Prediction, Backward Realization . . . . . . . . . . . . . . . . . . . . . 85

4.8 Model-Error Prediction with Estimator and Slow Measurements . . . . . . . . . . . 86

5.1 Open-Loop Simulation Including Disturbance; N = 101, κ = 0.001 . . . . . . . . . . 91

5.2 Full-Order Control; no Noise, no Model-Error Correction, N = 101, κ = 0.001 . . . . 93

5.3 Full-Order Control; Noise, no Model-Error Correction, N = 101, κ = 0.001 . . . . . . 93

5.4 Cost Functions for the Kalman Filter Optimization . . . . . . . . . . . . . . . . . . . 95

5.5 Full-Order Control; no Noise, with Model-Error Correction, Nm = 101, κ = 0.001 . . 96

5.6 Full-Order Control; Noise, with Model-Error Correction, Nm = 101, κ = 0.001 . . . . 96

5.7 Full-Order Control; Noise, with Model-Error Correction, Kalman Filter, Nm = 101,
κ = 0.001 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

5.8 Spatial Mean of the Model-Error Correction Term, Nm = 101, κ = 0.001 . . . . . . . 98

5.9 Reduced-Order Control; no Noise and no Model-Error Correction, Nm = 21, κ = 0.001 99

5.10 Reduced-Order Control; Noise, Kalman Filter, Nm = 21, κ = 0.001 . . . . . . . . . . 101

5.11 Reduced-Order Control; no Noise, with ‘Filtered’ Model-Error Correction, Nm = 21,
κ = 0.001 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

5.12 Reduced-Order Control; Noise, with ‘Filtered’ Model-Error Correction, Kalman Fil-
ter, Nm = 21, κ = 0.001 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
List of Tables

1.1 Comprehensive Control Approach to the Motivating Problem . . . . . . . . . . . . . 4

2.1 Navier-Stokes Dynamics for Different Fluids . . . . . . . . . . . . . . . . . . . . . . . 23

5.1 Control Performance, Full-Order LQR without MECS . . . . . . . . . . . . . . . . . 92

5.2 Control Performance, Full-Order LQR with MECS . . . . . . . . . . . . . . . . . . . 95

5.3 Control Performance, Reduced-Order LQR without MECS . . . . . . . . . . . . . . . 99

5.4 Control Performance, Reduced-Order LQR with MECS . . . . . . . . . . . . . . . . 101

5.5 Control Performance, Reduced-Order LQR with ‘Linearized’ Model . . . . . . . . . . 102

5.6 Control Performance, Reduced-Order LQR with ‘Bounded’ MECS . . . . . . . . . . 103

5.7 Control Performance, Reduced-Order LQR with ‘Filtered’ MECS . . . . . . . . . . . 103

List of Symbols

Physical Terms

n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Normal Vector
L . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . General Continuity
Q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Source Term (where indicated)
v . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Velocity Vector
ρ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Generic Density
b . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sum of Body Forces
σ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (Cauchy) Stress Tensor
Π . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Symmetric Friction Tensor
p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Static Pressure
γ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shear Angle
η . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Effective Dynamic Viscosity
τ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shear Stress (where indicated)
q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Generic Flow
R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Reynolds Number

E . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Distortion Tensor
ζ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ‘Volume’ Viscosity

Mathematical Terms

∇ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nabla Operator
∆ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Laplace Operator
Ω . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Spatial Domain
δΩ. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Boundary of the Spatial Domain


D/Dt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Substantial (Convective) Derivative
H p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sobolev Space of Order p
C p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Continuous Function Space of Order p
Lp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lebesgue Space of Order p

F . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Forcing Term (where indicated)


= . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Generic Differential Operator
G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Green Function (where indicated)

δ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dirac Distribution
δK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kronecker Delta
I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Identity Matrix or Tensor (where indicated)
Lpf (g) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pth Lie Derivative of g along f

z(x̂, ∆t) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lie Derivative Expansion


Λ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Coefficient Matrix, Taylor Series Expansion
S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sensitivity Matrix, Taylor Series Expansion

Terms Related to Burgers’ Equation

w(t, x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Solution to Burgers’ Equation


wN (t) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . FEM Solution to Burgers’ Equation
wst (t, x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Steady-State Solution to Burgers’ Equation
κ. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Viscosity (Inverse Reynolds Number Analogue)
f (t, x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nonhomogeneous Forcing Term (Control Input)
pi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . FE Basis Functions

M . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Mass-Matrix of the FE Model


K . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Linear Part (Stiffness Matrix) of the FE Model
N(x̂). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nonlinear Part of the FE Model
M . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Forcing Term Distribution Matrix of the FE Model
K . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Accumulated Linear Part of the FE Model
N . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Accumulated Nonlinear Part of the FE Model
φ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Cole-Hopf Transform (where indicated)

Terms Related to Control Engineering

x(t) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . True State Vector


x̂(t) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Estimated State Vector
ŷ(t) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Estimated Output Vector

ỹk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Discrete Measurements


Cp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Output Matrix, Plant
Cm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Output Matrix, Model
f (x̂(t)) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nonlinear Model
g(x(t)) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nonlinear Control Input Matrix
u(t) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Accumulated Control
û(t). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Model-Error Correction
ū(t) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nominal Control
Bp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Control Input Matrix, Plant
Bm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Control Input Matrix, Model
d(t) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Process Disturbance Vector
D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Process Disturbance Covariance Matrix
v(t) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Measurement Noise (AWGN)
V . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Measurement Noise Covariance Matrix
n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . System Order
l . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Number of Control Inputs
m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Number of Outputs
r . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (Partial) Relative Degree

LL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . LQR Gain Matrix


HL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . LQR Final State Weighting Matrix
QL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . LQR State Weighting Matrix
RL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . LQR Control Weighting Matrix

Π . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Solution of the Riccati Equation (LQR)


LK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kalman Filter Gain Matrix
QK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kalman Filter Process Noise Weighting Matrix

RK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kalman Filter Measurement Noise Weighting Matrix



P . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Estimation Error Covariance Matrix


ek . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Estimation Error (where indicated)
ē . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sample Mean of ek

φ, φk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Transition Matrix (where indicated)


Γ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Discrete System Matrix
Υ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Discrete Control Input Matrix
Ĝ(x̂) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Model-Error Distribution Matrix

Gc (x̂) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Control Distribution Matrix (in Context of MECS)


Ge (x̂) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . External Disturbance Distribution Matrix
WE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . MECS: Correction Weighting Matrix
RE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . MECS: Measurement Noise Covariance
h . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Spatial Discretization Interval (where indicated)
h . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Optimization Interval of MECS (where indicated)
Np . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Number of Gridpoints, Plant
Nc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Number of Gridpoints, Model
Nt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Number of Discretization Points in Time
e . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Performance Measure (where indicated)
tset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Settling Time
Abstract

In the presented application of control engineering to fluid flow dynamics, the fundamental
motivation arises from laminar flow control in aerodynamics. The governing physical model is
identified as the Navier-Stokes equations. These equations are reviewed and derived in a general
approach so that the presented techniques and results can be related to a variety of continuity
problems. The key issues associated with this class of problems, distributed systems governed
by nonlinear partial differential equations, are identified, and a mathematical benchmark problem
reflecting those properties (Burgers’ equation with periodic boundary conditions and a non-homogeneous
forcing term) is created. Thereby, Burgers’ equation is linked by several means to
distributed nonlinear systems: as an approximation of a two-dimensional channel flow problem; as
the decisive factor in the creation of turbulence (original motivation); and as a modeling equation
for traffic flow.

The benchmark problem is stated and classified as a continuous partial differential equa-
tion subject to periodic boundary conditions (of first and second order) and a non-homogeneous
distributed forcing term (the control input). This setting has not yet been adequately addressed in
previous research. A viscosity parameter κ, an analogue to the inverse Reynolds number, is
incorporated. In the following, the viscous problem is solved analytically for a certain class of initial

conditions while the general solution to the inviscid case, as well as the steady-state solution, are
derived without limitations on the initial condition.

In order to provide a suitable formulation for control engineering, a semi-discretization (in


space) is performed using a Galerkin finite element method. This results in a large-scale ODE
system which is tested for consistency with the previously derived analytical solution. The resulting
state-space formulation is expanded to an unprecedented ‘real world’ control loop design, including
process disturbance, measurement noise, model-error, and model-reduction. Also, the benchmark


problem is analyzed in control terms for stability and controllability. Thereby, a Lyapunov-based
proof of exponential stability of the origin, for certain initial conditions, can be established. The
exponential rate of decay is identified as depending only on the viscosity parameter κ. It can
be shown that a feedback law with positive semi-definite gain even improves exponential stability.

An argument toward the controllability of constant equilibria, previously identified as being the
only steady-state solutions, is made. For the nominal control, as well as for the required estimator,
the linear quadratic regulator and the extended Kalman filter are briefly reviewed and derived for

the benchmark problem. Additionally, model-error control synthesis is introduced in its one-step
ahead prediction formulation for nonlinear distributed systems. This provides a computationally
fast correction to cope with model-error and process disturbances. The implementation in previous
research is briefly presented while a modification for application in this work is suggested. The
derived and introduced techniques are subject to extensive numerical evaluation, and detailed results
are given. Thereby, the combination of the linear quadratic regulator with model-error control
synthesis reveals itself to be a powerful control tool, resulting in a fast attenuation of an initial
distribution as well as a robust correction of process disturbances. Results hold in the face of noisy
measurements (additive white Gaussian noise) if the extended Kalman filter is added to the system.
Due to the differentiating character of the predictive filter, the model-error correction has to be
reprocessed when a reduced-order model is applied. Then, the results prove to be as powerful as in
the full-order model case. A comprehensive Matlab code is provided.

The problem is approached from a ‘worst case’ point of view, where the applied disturbance
and noise far exceed ‘real world’ magnitudes. The reduced-order model is based on a coarse
truncation of a linear global Galerkin finite element method; even better results are expected if refined

techniques are applied. A brief review of previous research on PDE problems in control engineering,
as well as a detailed reference to publications on Burgers’ equation, is presented. Especially, results
are compared to previous work at the Virginia Polytechnic Institute and State University. An
outlook on future research and topics to be addressed concludes this thesis.
Chapter 1

Introduction

1.1 Motivation

On April 18, 1963, an experimental airplane - called the Northrop X-21 - took off for its first flight
at Edwards Air Force Base in California in order to investigate the effects of laminar flow and the
possibilities of its control. This was part of NASA’s efforts in their search for what some called
“the holy grail of Aerodynamics,” namely the “analysis, prediction and control of boundary layer
transition” [1].

This more or less solemn opening serves as a teaser for the fundamental motivation of this
thesis: every discipline that is concerned with viscous fluids has to cope with characteristic frictional
effects, and a very good example of these impacts is provided by aerodynamics. The aerodynamic
drag, also known as skin-friction drag, essentially takes place along a thin boundary layer attached

to the surface (of a wing). Thereby, three different types of flow can be present in the boundary
layer: laminar flow (smooth flowing layers called laminae), turbulent flow (superimposition of a
secondary random motion on the principal flow), and - by nature - transitional flow. The turbulent
flow, which has to be distinguished from the flow separation at the airfoil surface in stall condition, is

accompanied by a dramatic increase in frictional force, such that half of the fuel consumption of an
airplane during cruising condition is due to skin-friction [1]. Among other factors, such as geometry,
surface disturbances, and speed, the gradient of the static pressure encountered by the flow across

the surface is a pivotal factor: if the static pressure increases in flow direction, amplification and
creation of turbulence is the consequence, whereas a decrease in static pressure leads to damping.
The general tendency towards the creation of turbulence is expressed by an artificial number - the
Reynolds number1 - assigned to each specific fluid. In order to influence the flow to achieve improvement
or to realize desired dynamics, there exist several principal approaches:

• Natural laminar flow (NLF): Passive solution techniques are applied, like high altitude cruising,
composite wing structure, et cetera.

• Laminar Flow Control (LFC): Active devices are used to delay or possibly eliminate turbulent
flow, through either surface cooling or the suction of air through slots and porous surfaces
respectively.
• Hybrid Flow: This method reduces the level of system requirements by combining NLF and
LFC, creating, for example, a geometry for favorable pressure gradients and air suction in the
leading edge region of the wing.

While NLF has its natural limits and has already been applied to actual aircraft design, LFC is
predicted to dramatically reduce fuel consumption by 30 percent for long-range flights in transport-
type aircraft. The 1980s Langley 8-foot transonic pressure tunnel tests even achieved full-chord
laminar flow from 0.4 to 0.85 Mach and a drag reduction of about 60 percent.2 However, the mechanical
complexity of LFC, auxiliary power use, extensive ductwork, increased weight, and maintenance
penalties, to mention a few, lead to many challenges in design, affecting many different disciplines,
including system integration, operation, and economic issues. During operation, the most obvious
problem for laminar flow appears: the environmental contamination of the sensitive wing leading-
edge region caused by insects, rain, ice, or other debris.3 This provoked the cancellation of the X-21
program, although, by the program’s end, laminar flow over 95 percent of the intended area could
be achieved. System failure was a daily problem: additionally, the oscillatory creation and melting
of ice crystals and, therefore, oscillatory interruption of laminar flow, occurred in certain conditions.

Nevertheless, the investigation of laminar flow and turbulence has been a key point of

interest since the establishment of modern fluid dynamics. Published already in 1948, [3] offers an
excellent review on the fundamental mechanics of laminar flow in boundary layers, and includes a
detailed bibliography (starting with Ludwig Prandtl’s descriptions of 1904). In this paper, it is also
mentioned that, as early as World War II, boundary layer suction was being investigated. The fact

that no large-scale implementation of laminar flow control has yet taken place can be considered an
1 It reflects the relative probability of the existence of laminar and turbulent flow, respectively.
2 See page 135 in [2].
3 See page 129 in [2].

indicator of the complexity of the problem. For further reference, [4] provides a historical overview
of suction-type laminar flow control, while [1] and [2] present state-of-the-art approaches and current
challenges such as the B757 flight evaluation program.

But how is this motivation connected to the topic of this thesis? First of all, LFC
might use the term ‘control’, but there it is meant in a sense of causality, referring to a general
technical interaction with physical processes rather than to the well-established definition used in
control engineering. Fluid dynamics provides an extremely demanding
environment, requiring advanced and sophisticated control engineering. Results may be expanded
to include a considerable variety of actual problems. Therefore, one of the objective targets should
be to create a benchmark problem reflecting the key-features of fluid dynamics (and hence the
laminar flow challenge), such as:

• Distributed parameter (Governed by partial differential equations)


• Inherent nonlinearity (unintentional)
• Recursively coupled states
• Unknown dynamics (model-error)
• Gaussian measurement noise
• Deterministic process disturbances (e. g. oscillatory)

As a matter of fact, model-error is not only created by neglected dynamics; because the system is
distributed, a semi-discretization in space is needed, yielding models of a tremendous order
and thus leading to computational difficulties. And yet, it is necessary to employ low (reduced) order
models for the control design, so that the discretization error adds to the unmodeled dynamics.
This issue is addressed as a key factor in this work, especially when discussing robust control design
for fluid-like systems. So, the necessary techniques associated with fluid-like control problems have

to incorporate:

• Distributed systems (semi-discretization of PDE’s)


• State-feedback control
• Model-error detection and compensation

• Robustness
• Nonlinear noise filtering
• Lyapunov based stability analysis
• Nonlinear control synthesis

Table 1.1: Comprehensive Control Approach to the Motivating Problem

  ‘Real World’          →  Motivation: Laminar Flow Problem, §1.1
        ↓
  Physical Model        →  Generalization: Navier-Stokes Equations, §2
                           Identification of Key Properties
        ↓
  Mathematics           →  Simplified Model Problem, §3.1.1
                           Analytical Solution, §3.2
                           Finite-Element Approximation, §3.3
        ↓
  ODE System (State Space)
        ↓
  Control Engineering   →  Expansion to Control Loop Design, §4.1
                           Analysis by Control Terms, §4.2
                           Controller Design and Evaluation, §4.3 and §5

The circumstances surrounding the transition from laminar to turbulent flow should thereby be of
particular interest when defining a quality measure for the regulator systems to be designed. In
order to address the presented motivating problem, a comprehensive control approach requires sev-
eral steps to create a test or benchmark problem resembling the mentioned key features. Therefore,
Table 1.1 gives an overview of this work’s structure, which is outlined in detail in the following section.

1.2 Previous and Related Research

Previous research related to this work can be subdivided into three different categories: nonlinear and
general PDE control techniques, Burgers’ equation as a control problem, and numerical approaches
to the solution of Burgers’ equation.

On General PDE and Nonlinear Control: For the standard nonlinear control tech-
niques in the following chapters, the reader is referred to the comprehensive works of Sastry and
Isidori in any of their books [5], [6] or [7]. Additionally, in [8], Slotine offers an introductory overview
on advanced nonlinear control.

In [9], Byrnes, Gilliam and He present a fairly comprehensive extension of the root-locus
method in classical control from linear finite dimensional systems to distributed parameter problems.
Thereby, the system class under investigation is that of boundary control problems of parabolic
systems employing a proportional error feedback law (for certain input and output operators). Fur-
thermore, the systems can be specified as ‘quite general, non-constant coefficient, even order ordinary

differential operators on a finite interval with boundary conditions whose highest-order terms are
separated.’4 Special attention is given to ‘high frequency’ behavior, i.e., very high (infinity) values
of the feedback gains. It can be shown that the infinitely many branches of the root-locus vary from

the open-loop poles to the open-loop zeros as the gain is increased to (plus or minus) infinity.5 The
state-space model of the closed-loop system need not be self-adjoint, and the transfer function
does not have to be meromorphic at infinity.

Main research topics on the controllability of partial differential equations are presented by Zuazua
in [10]. Thereby, emphasis lies on understanding the limiting behavior of the controllability properties
as the size of the (related) finite-dimensional system of ODE’s tends to infinity. Controllability of
the PDE model is recovered by this approach. For illustration purposes, the linear one-dimensional
wave equation is analyzed with rapidly oscillating coefficients, while the approximate and null con-
trollability of the constant coefficient heat equation are investigated. Also, some semi-linear variants
are addressed.

An interesting extension to the feedback linearization for spatially distributed systems is


introduced by Kazantzis and Demetriou in [11]: the development is based on a lumped-parameter non-
linear dynamic model obtained via any kind of appropriate spatial discretization of the distributed
parameter system (distributed control and mixed type boundary conditions are applied). Then, dif-
ferential topology is employed to synthesize a control law that renders a certain manifold invariant
in state-space. Thereby, the restriction of the system dynamics on that specific manifold of interest
is (a priori) given by the desired dynamics of the controlled system, i.e., the target dynamics are
explicitly assigned by the control law. So far this includes the already known case of feedback lin-
earization, but [11] extends results to account for nonlinear target dynamics in the continuous-time
domain of the controlled system. Unfortunately, this very elegant approach requires some stringent
prerequisites on controllability which are very hard to prove for higher order systems.6 The solution
4 See page 1365 in [9].
5 The sign of the gain plays a more important role than in a finite dimension since differences in the asymptotic
behavior reveal themselves to be far more exotic.
6 High order ODE systems emerge from the semi-discretization of PDE’s.

is then acquired by expansion into a multivariate Taylor series and equating the Taylor coefficients
of the same order for both sides of the invariance PDE.

On Burgers’ Equation: In the last 20 years, interest in Burgers’ equation as a control
problem has been picking up. Apparently, existing work is centered around J. A. Burns, C. I.
Byrnes, M. Krstić and B. B. King.
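
For orientation, the works cited below all concern variants of the same underlying model. In the notation of the symbol list (and anticipating the precise benchmark statement of Chapter 3), the forced viscous Burgers’ equation takes roughly the form

wt(t, x) + w(t, x) wx(t, x) = κ wxx(t, x) + f(t, x),    x ∈ Ω, t > 0,

where subscripts denote partial derivatives, κ > 0 is the viscosity parameter (inverse Reynolds number analogue), f(t, x) is the distributed forcing (control input) where applicable, and the boundary conditions on δΩ are Dirichlet, Neumann, or periodic depending on the cited work; the inviscid case corresponds to κ = 0.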

Burns and Kang themselves consider [12] as ‘a first step in the development of rigorous and
practical computational algorithms for control of those nonlinear partial differential equations that
describe physically interesting problems of this nature.’7 Burgers’ equation on a finite domain with
Dirichlet boundary conditions is considered: a bounded input operator is assumed, i.e., distributed
control. The nonlinear system is stabilized with a prescribed exponential decay rate via a linear
quadratic feedback law based on its linearization. This regulator approach enhances the stability
of the solution, and is able to smooth out steep gradients, too. Well-posedness (existence, uniqueness)
and stability under an appropriate selection of input and output operators is proven, while the gain
functions strongly depend on the output map. Nevertheless, analysis is limited to small perturbations
due to the linearization. A finite element approach and simulation results are shown.
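
To make this mechanism concrete, the following minimal Matlab sketch illustrates the general idea of designing a linear quadratic feedback on a semi-discretized linearization; it is purely illustrative (neither the thesis code nor the scheme of [12]), all names and parameter values are placeholders, and the lqr call assumes the Control System Toolbox.

N     = 21;                      % number of interior grid points
h     = 1/(N+1);                 % grid spacing on (0,1) with zero Dirichlet boundaries
kappa = 0.01;                    % viscosity parameter
e     = ones(N,1);
A     = kappa/h^2 * full(spdiags([e -2*e e], -1:1, N, N));  % diffusion term (linearization at w = 0)
B     = eye(N);                  % fully distributed (bounded) control input
Q     = h*eye(N);                % state weighting, approximating the L2 inner product
R     = 0.1*eye(N);              % control weighting
K     = lqr(A, B, Q, R);         % gain of the linear feedback law u = -K*w

The gain K is then applied to the nonlinear plant as u = -K*w, which is precisely the ‘design on the linearization’ philosophy described above and the reason why the resulting guarantees are only local.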

Kang, Ito and Burns extend the results of [12] in [13]: they restate the well-posedness of
the passive feedback law via the linear quadratic regulator while homogeneous Dirichlet boundary
conditions are applied. This bounded input and unbounded operator approach is compared with a
Dirichlet boundary control on one end, employing a regulator also derived via a LQR cost functional.
Numerical solutions based on a finite element semi-discretization are presented while considerations
are still limited to small initial perturbations (i.e., to local results).

Byrnes’ and Gilliam’s research in [14] parallels the results in [12] with the following dif-
ferences: in their approach the uncontrolled problem is asymptotically stabilized (in contrast to
Burns and Kang) by applying zero Neumann (flux) conditions at both ends. The control is effected
through forced flux based on a sensor at the left end. The function’s value8 is proportionally fed
back as flux. For the control derivation, the linearization about zero (resulting in the heat equation)
is considered. Stability for different values of the viscosity coefficient (nonzero, positive) on the
subspace of the ‘zero dynamics’ is shown.9 The L2 -norm of the nonlinear part is estimated and the
7 Here, it is referred to systems governed by the Navier-Stokes equations.
8 In different research, this is also referred to as ‘temperature.’
9 ‘Zero dynamics’ result from constraining the output to be zero.

existence of classical solutions is shown. In this sense [14] parallels the results of Burns and Kang.
Also, an extension to the multi-input multi-output case is performed by adding flux controllers with
a proportional feedback from functions values at both ends (decoupled).

Ito and Kang suggest a dissipative feedback control synthesis in [15]. They employ a nonlin-

ear dynamic programming technique for enhancing energy dissipation, also resulting in a distributed
control. But the feedback law is derived as the nonlinear solution of the Hamilton-Jacobi-Bellman
equation. Again, the regulation to an equilibrium is the objective. Thereby, the technique is de-
veloped for a general semi-linear dynamic problem.10 The solution is applied to and simulated for
Burgers’ equation and the two-dimensional Navier-Stokes equations.

Gilliam, Lee, Martin and Shubov examine Burgers’ equation with Neumann boundary con-
ditions in the context of bifurcation in [16]. They show that a feedback radiation boundary control
(another expression for flux control at the boundary) only asymptotically stabilizes Burgers’ equa-
tion (with Neumann conditions) for sufficiently small initial data (in L2). The existence of a local
attractor (of the closed-loop dynamics) for each pair of positive gain parameters11 is disclosed.
This makes it possible to show the existence of additional multiple stationary solutions for a certain (critical)
range of values and for a special class of disturbances. This bifurcative property of a single global
equilibrium of the zero dynamics is called turbulent behavior.

The comprehensive work of Ly, Mease and Titi in [17] removes the restrictions on the
size of the initial data (as in previous works): for Dirichlet boundary conditions they analytically
demonstrate open-loop stability as well as global existence and uniqueness of the closed-loop system

for any initial data in L2 . This is also augmented for distributed control via the linearization at the
zero state (resulting in the heat equation). For the same arbitrary set of initial data in L2 , existence,
uniqueness and regularity of the solution for the open-loop problem with Neumann homogeneous

boundary conditions is presented (stability of the uncontrolled system). Also, a closed-loop setting
with stabilization via controlled radiation boundary conditions at each end (as previously suggested
by Burns et al.) is considered: well-posedness is revealed. The work briefly discusses nonlinear
boundary conditions as well, and exhibits a very detailed and comprehensive approach.

Likewise, Krstić also addresses the problem of global asymptotic stabilization for large initial
10 d/dt x(t) + Ax(t) + F(x(t)) = Bu(t) + f(t)
11 The control law is formulated as wx(0) − k0 w(0) and wx(L) − k1 w(L), respectively.

conditions12 in [18]. He derives boundary control laws for global asymptotic stability for both the
viscous and inviscid Burgers’ equation as well as for pure Neumann and pure Dirichlet boundary
conditions. Uncertain viscosity leading to the stochastic Burgers’ equation is considered. The
control laws are derived in an optimal sense: regulating to a constant while the function values

remain bounded at all times.

Byrnes and Gilliam turn again to the Neumann problem with radiation flux control at the
domain’s endpoints in [19]. They extend the proof for stability by showing that the open-loop is
semi-globally stabilizable not only in L2 -norm, but also in H 1 - and L∞ -norm by the suggested laws
(even for a sufficiently small external disturbance).

In [20], Balogh and Krstić introduce cubic Neumann boundary feedback control.13 Thereby,
global asymptotic stability in the L2-norm and semi-global asymptotic stability in the H1-norm
are revealed. The suggested control allows set point regulation, i.e., attenuation to a constant
equilibrium while the function values remain bounded for all t; but this kind of flux control imposes
multiple stationary solutions. The different norms addressed are important insofar as it is a
delicate question with respect to which norm one wants to stabilize a PDE.14

A backstepping technique for the previously mentioned cubic boundary control law is es-
tablished by Krstić and Liu in [21]. They prove via an elaborate Lyapunov analysis the (sufficient)
regularity of control systems and show that the closed-loop system including the boundary dynam-
ics15 is globally H 3 stable and well-posed (unique global classical solution). The presented approach
is a hybrid system consisting of two ODE’s and one PDE. It is suggested that the necessary inte-

grators for the control should be thought of as being part of the actuator dynamics (preventing direct
actuation via boundary values).

Liu and Krstić face in [22] the problem of adaptation for an unknown viscosity (the lower
bound on the viscosity parameter does not have to be known). Closed-loop dynamics are presented
incorporating a boundary flux control law in combination with a parameter estimator as a dynamic
component. The overall setting is shown to be globally H 1 stable and well-posed. Also, the suggested
control is decentralized since it only uses measurements at the corresponding end. This has been
applied in all previous boundary flux regulators, but here, the estimate of the viscosity coefficient
12 Then, the convective term dominates the dynamics.
13 wx(0, t) = k[w(0, t)³ + w(0, t)]
14 Although it is irrelevant for finite-dimensional systems, where all vector norms are equivalent.
15 wxt|x=0 = u0(t), et cetera.

is determined at each end individually. The design is presented in a comprehensive way, together
with proofs via Lyapunov methods. Not only energy boundedness, but also pointwise boundedness
(H 1 stability), even in the absence of adaptation, is verified.

Burns and Zietsman tackle the (classic) Dirichlet boundary control problem again in [23]

to significantly reduce the energy in the flow near the boundary. This is a slight variation of the
previous Dirichlet approach: a linear quadratic regulator is derived for a boundary control at one end.
Thereby, a large penalty is put on the corresponding boundary. Additionally, the cost functional is
extended with a damping factor e^(αt) (α > 0). Functional gains are suggested as a practical approach
to sensor location and controller reduction. The system is realized via a Galerkin-type approximation
and simulated with an additional noisy disturbance. Convergence is not proven via theory.

So far, the cited research has tackled Burgers’ equation as a control problem in a very
theoretical and mathematically comprehensive way. The following references address more
application-oriented approaches:

In recent years, N. Smaoui has published research addressing Burgers’ equation itself
and control of it: in [24], Smaoui and Belgacem transform Burgers’ equation into a reaction-diffusion
system via the Kwak transformation and are able to show the long-term dynamics together with the
existence of an inertial manifold. They prove the coincidence of the steady-state solutions of Burgers’
equation and the reaction-diffusion system. In [25], Smaoui addresses the forced Burgers’ equation
and shows connections to the convective-diffusion equation with drift describing population dispersal
(a compact attractor and an inertial manifold are verified). The control problem is faced by Smaoui

for the first time in [26], utilizing both boundary and distributed control. The boundary control laws for
Neumann conditions mimic those previously introduced by Krstić, but for the distributed control
the Karhunen-Loève decomposition16 is applied. The derived basis functions are incorporated in

a Galerkin approximation yielding only two nonlinear ODE’s. This very coarse projection is then
feedback linearized in a certain sense by simply subtracting the nonlinear part via the fully-actuated
(two) control input. The same very crude approximation has been used by Smaoui, Zribi and
16 Throughout this thesis, the expressions Karhunen-Loève decomposition and proper orthogonal decomposition are

used synonymously. The same technique is known under many different names corresponding to the field to which
this technique has been applied: principal component analysis in signal processing, empirical component analysis
in statistical weather prediction, Hotelling analysis as a statistical tool for data analysis and compression, empirical
eigenfunction decomposition and factor analysis techniques in biology and economics, proper orthogonal decomposition
when used in voice and image recognition (for the special case of transport mechanisms in turbulent flow it becomes
singular value decomposition).

Almulla to derive a sliding mode regulator (static and dynamic) in [27] for the forced generalized
Burgers’ equation with periodic boundary conditions. They perform a ‘reduce-then-design’ approach
by projecting the closed-loop system onto the two most energetic eigenfunctions arising from the open-loop
system via simulated data snapshots. So far, no model disturbances or noise have been included.

The resulting (fully-actuated) two ODE system is only simulated forward and results are not applied
to a full (or high) order plant, making comparison to this research particularly difficult.
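
For clarity, the cancellation idea used in [26] and [27] can be sketched in generic terms (the notation here is illustrative and not the authors’): after the Galerkin projection the reduced dynamics take the form

da/dt = K a + N(a) + B u,

where a collects the two modal coefficients. Because both modes are directly actuated, B is square and invertible, so the choice u = B⁻¹(−N(a) + v) removes the nonlinear term exactly and leaves the linear dynamics da/dt = K a + v to be handled by a standard linear or sliding-mode law.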

Further intensive research on Burgers’ equation has recently been performed at the Virginia
Polytechnic Institute and State University (Blacksburg, Virginia), centered around J. A. Burns, B. B. King et alii.
Thereby, the attention has been turned to practical control problems of PDE’s in general and
Burgers’ equation in particular. In [28], King presents theorems for integral representations of
the LQR feedback operator for hyperbolic PDE control problems (infinite dimensional problems).
These systems are considered in their abstract formulation, and hence, are assumed to contain fully
distributed control. The solution to LQR problems can then be expressed via integral kernels (here
called functional gains). Smoothness and existence of the LQR solution is proven and holds also
for damped systems (forming an analytic semigroup). The example used is the hybrid nonlinear
distributed parameter cable-mass problem. These functional gains have been implemented into the
sensor location problem in [29]. Although MEMS17 or piezoceramic actuators allow nearly truly
distributed sensors, the vast amounts of data are infeasible for real-time control. Hence, the integral
transformation of the control law18 is exploited to identify regions where accurate knowledge of
the state-estimate is necessary (large functional gain). The optimal sensor locations are found by
applying centroidal Voronoi tessellations with the functional gains as densities.

In [30], Chambers et alii introduce the Karhunen-Loève expansion19 as a computationally


efficient way to solve Burgers’ equation with Dirichlet boundary conditions at each end (zero) and a
random forcing term. The Karhunen-Loève decomposition is a generalized Fourier expansion using
a set of orthogonal basis functions, chosen to minimize mean-square error. Thereby, the orthonor-
mal basis is created through a set of data (mostly samples or snapshots from either theoretical,
experimental or computed sources) by extracting the eigenfunctions which contain the highest en-
ergy. Hence, the problem can (potentially) be reduced significantly to a subset containing a certain
17 Micro Electromechanical Systems
18 w(t, ξ) = −[kwc(t)](ξ) = −∫Ω k(ξ, s) wc(t, s) ds
19 See [31] for details.

required energy level. Kunisch and Volkwein use that characteristic property to derive a reduced-
order approach to Burgers’ equation in [32]. The objective is distributed control via linear quadratic
regulation with Dirichlet boundary conditions (zero). The set of data used for the computation of
the basis functions was constructed from the dynamics of the uncontrolled equation solved by the

Newton method. Suboptimal control techniques are implemented due to expensive computational
demands.
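
As a minimal illustration of the snapshot-based Karhunen-Loève (proper orthogonal decomposition) procedure described above, the following Matlab fragment extracts an orthonormal basis from a matrix of snapshots via the singular value decomposition; the synthetic data and all variable names are placeholders and do not reproduce the computations of [30] or [32].

Nx = 101; Nt = 200;
x  = linspace(0, 2*pi, Nx).';
W  = zeros(Nx, Nt);                          % snapshot matrix, one column per time sample
for k = 1:Nt
    W(:,k) = exp(-0.01*k) * sin(x - 0.02*k); % synthetic decaying traveling wave, illustration only
end
[U, S, ~] = svd(W, 'econ');                  % left singular vectors are the orthonormal POD modes
energy    = diag(S).^2;                      % 'energy' captured by each mode
r         = find(cumsum(energy)/sum(energy) >= 0.99, 1);  % smallest basis capturing 99 percent
Phi       = U(:, 1:r);                       % reduced orthonormal basis
a         = Phi.' * W;                       % modal coefficients of the snapshots

A reduced-order model then follows by Galerkin projection of the semi-discretized dynamics onto the span of Phi, which is exactly the reduction step exploited in the references above.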

Atwell and King apply that very same technique to parabolic equations, especially the heat
equation, and combine it with an estimator to form a linear quadratic Gaussian control in [33]. The
presented approach still embeds a ‘design-then-reduce’ philosophy where different input collections
are used for the generation of the Karhunen-Loève basis functions: the basis functions of the full-
order finite element discretization as well as an augmentation with four time snapshots are employed.
The presented LQG design together with the one detailed in [34] gives reason for some criticism as
exhibited in chapter 5.

In [35], Atwell and King focus their interest on reduced-order controllers for spatially dis-
tributed systems, incorporating the findings from [28]. Discretization of PDEs leads to large-scale
problems, which raises the question at what point reduction should take place. Atwell and King
feature the 'design-then-reduce' approach, claiming to yield robust low-order systems: a high-order
model should be used for controller design, followed by an 'intelligent' reduction technique. Their
weapon of choice turns out to be the proper orthogonal decomposition, again. The main difference
to previous approaches (see also [26] and [27]) is the inclusion of the controller dynamics into the
reduction method. Thereby, guesswork is avoided by choosing the LQR functional gains (kernels)
as the input collection (set of data). This design philosophy is applied by Atwell and King in [35]
to the linear heat equation on a two dimensional domain (unit square) with control on parts of
the boundary: an LQG regulator-filter combination is posed in the abstract (operator) formulation,
yielding an optimal control law in terms of functional gains. Subsequent semi-discretization to a
high-order ODE system and computation of the necessary matrices results in a set of functional
gains. The latter is used to perform a reduction onto a reduced-order basis, claiming to retain more
important aspects of the full-order control than an inverted technique. Comparisons with two bases
from snapshot data are shown, and the new approach is claimed to be more objective (since guesswork
is avoided) as well as more efficient and robust.

This very same technique is finally extended by Atwell, Borggaard and King to Burgers’
equation in [34], especially because ‘the physics of the [previously addressed] heat equation dominate
the control input in such a way, that it becomes difficult to conclude if the results are truly significant
and widely applicable.’ Burgers’ equation is combined with periodic boundary conditions and a

forcing term as (distributed) control input. A MinMax regulator-filter method is developed for the
problem in abstract form, followed by a linear Galerkin finite-element approximation and subsequent
order reduction via proper orthogonal decomposition. Thereby, the MinMax controller is compared

to the LQG approach. As already mentioned, this publication is met with criticism in the second
part of this thesis.

On Numerical Issues: For completeness, a few references are cited as examples of nu-
merical approaches to the solution of Burgers’ equation. There exist so many numerical techniques
that it is beyond the extent of this work as well as its intention to give a complete overview. As
far as general efficient numerical schemes are concerned, [36] and [37] give some ideas while the
latter explicitly addresses Burgers’ equation. Also, [38] exhibits a variety of schemes for Burgers’
equation, focused especially on discontinuous data. These schemes are only usable if one is inter-
ested in a numerical solution for already given conditions (including a given static feedback law),
i.e., for realizing the plant dynamics in simulation. To put Burgers’ equation in a setting for control
engineering, a semi-discretization by any kind of finite element method has to be performed, result-
ing in a state-space (large scale ODE) system. In order to cope with numerical instabilities arising
under certain conditions when a standard Galerkin method is applied, Atwell and King propose
a Galerkin-Least-Squares method in [39]. Thereby, the standard Galerkin setting is formulated as
a calculus of variation problem and a stabilizing weighted least squares term on each element is

added to the functional (depending on the viscosity, the function value and the grid size). But this
formulation loses the physical meaning when compared to the standard Galerkin approach. It is
also shown that results differ insignificantly for the closed-loop system and the range of viscosity
applied in this thesis. The stabilized FEM has been slightly refined by King and Krueger in [40]

through replacement of the linear B-splines used in [39] by cubic ones. However, the tremendously
increased complexity for implementation does not justify (or even disqualifies) the application
of these techniques for investigations of real-time control applications at the current stage.

In the presented control approach, a nominal state-feedback controller will be used (the
linear-quadratic regulator resulting from optimal control), accompanied by the frequently applied
extended Kalman filter for state estimation. The resulting system is simulated to serve as a refer-
ence for the robust control approach, using model-error control-synthesis. This method adopts the
predictive filter, used before by Crassidis in [41] for assessing the model-error in nonlinear systems

from measurements. It is based on a predictive controller by Lu in [42], implemented as a predictive


error-estimator for the nonlinear tracking problem (given a desired response history). Based on
the predictive filter design, Crassidis utilized the error-estimate for a signal synthesis of the control

input in [43] as a general approach to robust control problems. The application of this technique to
linear systems has been analyzed by Kim for stability in [44], and extended to a receding horizon
approach of the model-error calculation in [45]; [46] summarizes the model-error control synthesis,
in its various versions, in detail.

A different technique to tackle the underlying model-error and process disturbance problems
would be disturbance accommodating control as described in [47]. Although this method is definitely
worth investigating for its application to flow control, there are some potential drawbacks.
Disturbance accommodating control uses the same (Kalman) filter for simultaneous state estimation
and model-error (or disturbance, respectively) prediction. Thereby, the ODE system to be integrated
is augmented by a prediction part in the dimension of the model. Since the time-varying Kalman
filter in its use as a pure estimator already needs n(n+1)/2 equations, where n is the model's order, this
combined approach would require n(2n+1) ones. Obviously, the computational load is tremendously
increased. As it will be seen later, the required semi-discretization of distributed parameter systems
(governed by partial differential equations) results in very large order ODE systems. This clearly
disqualifies disturbance accommodating control for real-time implementation. Another weak
point is revealed by the fact that the extended Kalman filter as a state-estimator for nonlinear
systems is not directly derived from an optimality condition.20 Hence, an augmentation utilized for
disturbance accommodating control pushes that technique further away from optimality, especially
if small perturbations around the equilibrium are not guaranteed. Despite the difficulties associated
with the realization of model-error control synthesis, its underlying scheme is still directly derived
from an optimality condition.
20 Though, experience has shown satisfying performance for many systems.
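As a rough illustration of these counts (a sketch only, assuming the covariance matrix of an m-state filter contributes m(m+1)/2 scalar differential equations; the model order n = 64 is an arbitrary example, not a value used later in this thesis):

def covariance_odes(m):
    # number of distinct entries of a symmetric m-by-m covariance (Riccati) matrix
    return m * (m + 1) // 2

n = 64                                    # illustrative model order
print(covariance_odes(n))                 # estimator alone:  n(n+1)/2 = 2080
print(covariance_odes(2 * n))             # augmented model:  n(2n+1)  = 8256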

1.3 Outline and Nomenclature

This thesis consists of five principal chapters. Besides the introductory part and the summary, the
main matter is divided into two parts, one concentrating on the analytical definition of a benchmark
problem, and the other concerned with the associated control problem. The introduction presents
the underlying motivation and gives an overview of existing related research. Chapter 2 sketches

a derivation of the Navier-Stokes equations, in their general form, before refining them to special
applications such as compressible and incompressible Newtonian fluids. The latter establish the
fundamentals of the motivating problem.

The first half of the main matter (chapter 3) attempts to derive and to define a mathematical,
simplified benchmark problem resembling the key-features associated with fluid dynamics. In doing
so, different approaches are drafted, each resulting in Burgers’ equation as an analytical model.
Furthermore, those key-features are brought to such a level of abstraction that Burgers’ equation
should also serve as a generalized benchmark for certain classes of nonlinear distributed parameter
problems requiring robust control.21 Therefore, the problem will be classified in section 3.1.2 before
an analytical solution is sought. That solution will serve as a verification of the following finite-
element formulation or approximation respectively, which is necessary in order to set up a control
problem. This will conclude the first part of the main matter.

The second part continues by stating the associated control problem. The benchmark prob-
lem is augmented with constraints evolving from real control environments, such as measurement
dynamics (sampling, missing full-state information, white noise), additional process disturbances
(colored or deterministic), finite computing speed, and model uncertainty. Nonlinear system analy-
sis (stability, controllability) is performed before different closed-loop control circuits are designed.
The main concept of model-error control synthesis will be derived before the previously designed con-
trol circuit is augmented. A discussion of the numerical results will then complete the main section.

The following summary outlines the results and contributions of this thesis, provides an overview of
existing work on the addressed problems, and concludes with an outlook on future research.

In this thesis, the use of variables and symbols should be clear and consistent. An overview of
variables and abbreviations can be found in the list of symbols (preamble). An emphasis is placed on
21 This refers to conservative systems governing continuities, represented by second order partial differential
equations (involving diffusion and nonlinear convection).



the non-ambiguous assignment of symbols throughout the text. Certain principles of nomenclature
are maintained throughout: the notation most commonly applied in the literature is adopted whenever
possible. A general distinction is made as follows: bold lower case letters are used for vectors
and vectorial functions, regular printed letters for scalar functions, lower case Greek letters for
constants or coefficients, capital regular printed letters for matrices, and capital bold letters for
tensors. The sole exception to this nomenclature occurs in section 3.1.1.2, as the formulae used are
based on the already historic publication [48] of J. M. Burgers; his original notation has been applied.


Chapter 2

Navier-Stokes

This chapter gives an overview of a general derivation of the Navier-Stokes equations. But why
should this be necessary, since these equations have already been known for a long time, and detailed
derivations can be found in many textbooks? First, different controllers have to be evaluated, and in
order to define a meaningful measure of the quality of control, it is of great value - if not inevitable - to
have knowledge about the underlying physics. Secondly, it is a goal of this thesis to establish Burgers’
equation as a benchmark problem for a wide variety of nonlinear control problems. Therefore, a very
general approach is used in order to demonstrate a connection to many of the problems of classical
physics.

2.1 General Form

A vast subdivision of physical problems dealing with continuum systems and continuities (e.g., fluids)
respectively involves obeying the principles of conservation laws. When modeling such a system, the
approach is often the same: identifying the appropriate conservation, establishing the corresponding
equations, relating the extensive properties (mostly quantities) to the intensive ones (mass to density,
et cetera), and closing the system by proposing relationships between fluxes and densities. Therefore,
the governing equations can often be regarded as expansions of general continuity, which is derived
in the following: one postulates that the change inside a control volume equals the flux through
the boundaries plus the feed and the drain of sources and sinks inside the control volume; in
mathematical terms, this leads to

d/dt ∫Ω L dΩ = − ∮δΩ Lv · n dδΩ + ∫Ω Q dΩ    (2.1)


where L is the continuity of interest (e.g., fluid), v is the associated velocity (flux), and δΩ is the
boundary of the control volume Ω. Q denotes sources and sinks. Applying Gauss' theorem yields

d/dt ∫Ω L dΩ = − ∫Ω ∇ · (Lv) dΩ + ∫Ω Q dΩ

Note that L is assumed to be continuous and continuously differentiable. Interchanging integration
and differentiation results in

∫Ω ( ∂L/∂t + ∇ · (Lv) + Q ) dΩ = 0    (2.2)

Since eq. (2.2) has to be valid for any control volume, it can be reduced to

∂L/∂t + ∇ · (Lv) + Q = 0    (2.3)

Equation (2.3) is called the local form of the general continuity equation, versus the global form in
eq. (2.2), and is applicable to all intensive properties which form a continuity. Now, a general form
of Newton’s second law of motion will be derived by applying eq. (2.3) to momentum and mass.
Conservation of momentum (denoted by components), where sources and sinks are represented by
the sum of body forces b, is expressed by equating the change in time of the momentum with the
sum of surface and body forces:


∂(ρvi)/∂t + ∇ · (ρvi v) = bi

Applying rules of differentiation and vector calculus:


   
vi ( ∂ρ/∂t + ∇ · (ρv) ) + ρ ( ∂vi/∂t + v · ∇vi ) = bi    (2.4)

Equation (2.3) additionally yields, for the conservation of mass:

∂ρ/∂t + ∇ · (ρv) = 0    (2.5)

Incorporating eq. (2.5) into eq. (2.4) reduces eq. (2.4) to

ρ Dv/Dt = b    (2.6)

Equation (2.6) constitutes a general expansion of Newton's second law of motion by resembling
'Force = mass × acceleration,' where Dv/Dt is the convective or substantial derivative of the velocity,
respectively.

Some remarks: Equation (2.5) reduces for incompressible fluids to ∇ · v = 0 which indeed
is the conservation of volume. The convective derivative is due to the fact that (infinitesimal) small
particles in the governing differential equations of continuities are considered. These particles move
along certain paths or trajectories, so that the change in velocity consists of the change due to the

field at the current position of the particle, and of the movement of the particle itself. Put in formal
terms, the convective derivative Dv/Dt is obtained by taking and rearranging the total differential of v(x(t), t).
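For reference, writing this step out (a brief expansion, using the notation above): for v = v(x(t), t), the total differential gives dv/dt = ∂v/∂t + (dx/dt · ∇)v, and identifying dx/dt with the velocity v of the particle yields Dv/Dt = ∂v/∂t + (v · ∇)v.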

To finally describe the movements of a fluid, the forces acting on such a small cube or
particle are balanced by introducing a general stress tensor σ (total of internal forces, such as
internal friction) - called the Cauchy stress tensor - and the summation of all other body forces f
(as gravity):
ρ Dv/Dt = ∇ · σ + f

Since pressure is, in most cases, a variable of major interest, it is common to separate the static
pressure (scalar) from the global stress tensor so that the most general form of the Navier-Stokes
equations results:

ρ Dv/Dt = −∇p + ∇ · Π + f    (2.7)
Equation (2.7) might not be the most familiar form of the Navier-Stokes equations, but it is the
most general one, governing every fluid which obeys the conservation principles; or, in other words,
can be considered a continuity.1 Even when accompanied by the additional conservation principles
- the mass conservation of eq. (2.5) and, in some cases, energy conservation - eq. (2.7) is still an
under-determined system. Equations (2.7) and (2.5) result in four equations (three spatial
directions plus mass conservation), while still containing 11 unknown variables: the density ρ, the

velocity vector v, the static pressure gradient ∇p, and the symmetric friction tensor Π containing 6
unknowns.2 Therefore, additional constraints for 7 variables (when neglecting temperature effects)
are needed for an (at least principally) integrable system of equations. Depending on the kind of
fluid, different assumptions are made, leading to more or less complex versions of the friction tensor

Π. These additional assumptions are summarized in the accompanying constitutive equations.3

When constituting the governing equations, two different interpretations have to be dis-
tinguished: compressible and incompressible fluids. When considering compressible fluids, the
1 More precisely: if an external source is present, eq. (2.7) is called a balance instead of a conservation law.
2 See also [49].
3 A concept also found in several other fields, e.g., the Maxwell equations governing electrodynamics.

density ρ and the velocity v are interpreted as independent variables, whereas the pressure and the
friction tensor are the dependent variables:

p = p̂(ρ)

Π = Π̂(v, ∇v, ρ)

If, on the other hand, incompressible fluids are concerned, the density ρ is known and constant,
as a matter of course, so that the number of unknowns is reduced to 10. While the static pressure
p and the velocity vector v remain independent, the friction tensor results in

Π = ΠR (v, ∇v)

2.2 Identification of the Friction Tensor

The basis of consideration for determining the friction tensor, and also the physical motivation for
the Navier-Stokes equations themselves, includes different shear tests where, for example, one plate
slides onto another with a fluid-film between them. The observed quantities are shear angle γ, shear
velocity γ̇, shear stress τ , and effective dynamic viscosity η. The general relations of the observed
quantities in an experimental setting according to figure 2.1 are as follows. Let vh be the constant
velocity of the plate in the horizontal direction; the vertical velocity distribution inside the fluid is
assumed to be linear, v(y) = (vh/h)·y. Hence, the shear velocity becomes γ̇ = dv/dy, which is also related to
the shear stress and to the effective dynamic viscosity (the resistance of the fluid against the motion)
by γ̇ = τ/η(τ²). These connections result from observations in simple controllable experiments, and

are brought to a general level in the subsequent discussion.

Reiner-Rivlin fluids: The broadest class of fluids with known constitutive equations is
formed by the so-called Reiner-Rivlin fluids. A general derivation of these equations from continuum
theory would by far exceed the limits of this thesis, so only the general idea will be presented. For

further information, the reader is referred to [49]. The basic assumption is that the friction tensor
Π only depends on the velocity gradient S = ∇v, or more precisely, only on its symmetric part
D = (1/2)(S^T + S). The constitutive equations for Π become

Π = ΠR (D, ρ) = λ(tr D)I + 2ηD + µD2 (2.8)

where λ, η and µ are newly introduced and depend on D and ρ again.



Figure 2.1: Shear Test

Newtonian fluids: The simplest kind of Reiner-Rivlin fluids, taking internal friction into
account, are Newtonian fluids. They are of such importance that they form the main topic of classical
fluid dynamics. Sometimes fluid dynamics automatically implies the use of Newtonian fluids. Shear
tests suggest proportional behaviour between the needed force and the velocity: F ∼ Av/d, with
A being the surface area, v the velocity, and d the distance between two plates; written in terms of
shear stress, τ = η dv/dy. Therefore, η is independent of the velocity, η = η̂(ρ, T). For Π (as defined in
eq. (2.8)) being linear in D, λ also has to be independent of the velocity, λ = λ̂(ρ, T ); additionally,
µ ≡ 0. This simple relation was first described in Newton’s Principia in 1687 for pure shear (hence
the name of these fluids) before Claude-Louis Navier and George Gabriel Stokes brought it to its
3-dimensional form:4
Π = λ(∇ · v)I + 2ηD (2.9)

But this expression can be reformulated so as to be more useful for physical interpretation. Let’s
first consider the simple shear consisting of pure laminar flow, i.e., a velocity field parallel to a
reference plane, spanned by the w1 - and w2 -direction and varying only orthogonal to that plane (as
illustrated by figure 2.2(a)):

w1 = w1 (z, t), w2 = w2 (z, t), w3 = 0

Then, the friction in a Newtonian fluid is linear to the velocity with the η as factor of proportionality.5
The only non-trivial entries in Π, according to eq. (2.9), result in

Πyz = η ∂vy/∂z        Πxz = η ∂vx/∂z
4 Note that (tr D) = ∇ · v for the above mentioned constraints.
5 Hence its name shear viscosity.

The second special situation is an isotropic expansion or compression, respectively, i.e., a movement
purely radial to or from a center point. Hence, D is set proportional to the identity tensor and gives
D = ε̇I. Only isotropic normal stress appears: ε̇ = ∂w1/∂x = ∂w2/∂y = ∂w3/∂z; figure 2.2(b) of an isotropically
compressed cube illustrates the absence of shear stress. Hence, ∇ · v = 3ε̇ applied to eq. (2.9) yields

Π = (3λ + 2η)ε̇I

Stretching an infinitesimally small cube in each coordinate direction by ε̇ per unit time accumulates
to a rate of volume change of V̇ = 3ε̇. Therefore, for an isotropic expansion or compression of a
Newtonian fluid, the only non-trivial friction entries are isotropic normal stresses with a viscosity
proportional to the rate of volume change. Introducing a 'volume' viscosity defined by ζ = λ + (2/3)η
and a distortion tensor defined by E = D − (1/3)(∇ · v)I, the friction tensor for Newtonian fluids,
eq. (2.9), becomes
Π = ζ(∇ · v)I + 2ηE (2.10)

But why is this expression more favorable? It provides an alternative formulation of the constitutive
law that separates the effects of ‘volume’ and ‘shear’ viscosity. So the ‘volume’ viscosity does not
appear at all for incompressible fluids, (∇ · v) = 0. Therefore, certain assumptions can be made for
nearly incompressible fluids or, for example, gases (ζ = 0).

(a) Pure Laminar Flow    (b) Isotropic Expansion
Figure 2.2: Components of the Friction Tensor for Newtonian Fluids

Table 2.1 finally provides an overview of the presented fluids and the different manifestations of the
Navier-Stokes equations. The following discussions will be focused on incompressible Newtonian
fluids, i.e., the very last configuration in table 2.1, where ν = η/ρ is called the kinematic viscosity.
The introduced Laplace operator applied to
a vector is interpreted as the Laplacian of each component, namely ∆v := (∆vx , ∆vy , ∆vz ), whereby
the Laplacian itself shall be the abbreviation of div(grad(vx )), et cetera. By using the vector identity
∆v = − rot rot v + grad div v the connection to potential flow characterized by its irrotational

property (therefore rot v = 0) can be shown. The discussion of very rare fluids, and special topics
like shear thickening and pseudo-ductile fluids, shall be omitted. For a more experimentally motivated
and demonstrative derivation of the Navier-Stokes equations for incompressible fluids, the reader is

referred to any physical compilation like [50].



Table 2.1: Navier-Stokes Dynamics for Different Fluids

General:
    ρ Dv/Dt = −∇p + ∇ · Π + f

Reiner-Rivlin fluid:
    Π = λ(tr D)I + 2ηD + µD²
    D = (1/2)(S + S^T),    S = ∇v
    λ = λ(D, ρ),    η = η(D, ρ),    µ = µ(D, ρ)

Newtonian fluid:
    µ ≡ 0
    Π = λ(∇ · v)I + 2ηD = ζ(∇ · v)I + 2ηE
    E = D − (1/3)(∇ · v)I

    compressible:
        ρ Dv/Dt = −∇p + ∇(ζ∇ · v) + 2∇ · (ηE) + ρf
        for ζ, η constant:
        ρ Dv/Dt = −∇p + ζ∇(∇ · v) + 2η∇ · E + ρf

    incompressible:
        ∇ · v = 0,    E = D
        ρ Dv/Dt = −∇p + ν∆v + f
        ν = η/ρ,    ∆v := (∆vx, ∆vy, ∆vz)
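As a small numerical illustration of the Newtonian entries collected in table 2.1 (the velocity gradient and the viscosity values below are arbitrary sample numbers, not tied to any particular fluid):

import numpy as np

eta, zeta = 1.0e-3, 0.0                        # shear and 'volume' viscosity (sample values)
S = np.array([[0.0, 0.2, 0.0],                 # sample velocity gradient S = grad(v)
              [0.0, 0.0, 0.0],
              [0.1, 0.0, 0.0]])

D = 0.5 * (S + S.T)                            # rate-of-strain tensor (symmetric part)
div_v = np.trace(D)                            # tr D = div(v)
E = D - div_v / 3.0 * np.eye(3)                # distortion tensor
Pi = zeta * div_v * np.eye(3) + 2.0 * eta * E  # friction tensor, eq. (2.10)
print(Pi)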
Chapter 3

Burgers’ Equation

The Navier-Stokes equations, as briefly derived in chapter 2, form the fundamental physical model
for the motivation in chapter 1. But for the immediate design of prospective control and estimator
combinations, these are far too complicated in simulation. Hence, a simplified test environment
containing the key issues arising from Newtonian flow has to be created. Such a problem is found
in the one-dimensional Burgers’ equation: although simple in appearance, it reflects many of the
(mathematical) difficulties associated with nonlinear flow (as well as with other nonlinear continuity
problems).

In previous research on Burgers' equation, there has often been a lack of a comprehensive
embedding in the context of fluid flow. Here, an illustrative example of channel flow is presented
yielding Burgers’ equation as a one-dimensional simplification, before the original motivation of

[48] is briefly reviewed. Traffic flow serves as one example of the many other continuity problems
governed or approximated by Burgers’ equation. The benchmark problem is defined and classified
in terms of a partial differential equation (section 3.1.2) and has to be transformed into a suitable
form for control engineering: this is done via the Galerkin finite-element approximation performed

in section 3.3. In order to verify results, an analytical solution for the viscous and inviscid case is
derived, as well as the steady-state solution.


3.1 Derivation and Classification


3.1.1 Derivation
3.1.1.1 From Navier-Stokes

In chapter 2, the Navier-Stokes equations (and constitutive equations) governing the motivating

problem from 1.1 have been derived both in their general form and in the form applicable to New-
tonian fluids. As a 3-dimensional distributed system with nonlinear coupling, they are complicated
to simulate (even before control) and lead to numerical instabilities. But fundamental research in
control engineering often requires - and often is satisfied by - a benchmark formulation represent-
ing key-features of a class of problems in order to test or develop different approaches. Such a
benchmark problem has already been formulated in 1948 by Johannes Martinus Burgers in [48], and
named Burgers’ equation.

Together with the appropriate boundary condition, the finite-element representation of this
equation is the subject matter from chapter 4 onward. But the justification for Burgers’ equation as
the right choice can be achieved by several means. When considering the introductory motivation
from 1.1, an experimental setup - realized at a later stage of a systematic control approach to fluid
dynamics - might include the regulation of flow in a channel (e.g., preventing the flow from becoming
turbulent). J. A. Atwell uses this point of departure in her work on Reduced-Order Control ([34]
and [35]) when interpreting Burgers’ equation as a 1-dimensional simplification of a 2-dimensional
channel flow problem. Her approach might be somewhat too crude and short, but surely provides,

at first, a demonstrative way to show the connection to the Navier-Stokes equations: let’s consider
an experimental setup according to figure 3.1. Let v(x, y) = (v1(x, y)  v2(x, y))^T be a velocity vector,
p(x, y) the static pressure, R the Reynolds number of the used fluid, and f1 and f2 control inputs
(e.g., jets or valves) at the boundary. The problem on the domain Ω = [0, L]×[−1, 1] for incompressible
Newtonian fluids can be stated:

∇ · v = 0    (3.1a)
Dv/Dt = ∂v/∂t + (v · ∇)v = −∇p + (1/R)∆v    (3.1b)

subject to the boundary conditions1

v(t, 0, y) = v(t, L, y)

v1 (t, x, 1) = v1 (t, x, −1) = 0

v2 (t, x, 1) = f1 (t, x)

v2 (t, x, −1) = f2 (t, x)

Figure 3.1: Channel Flow

The above boundary conditions form the mathematical model corresponding to a channel of infinite
length by using a finite computational domain with periodicity. Everything which leaves the domain
at one end enters it again at the other end. Additionally, the channel consists of massive walls,
prohibiting any flow in the x-direction directly at the walls due to friction and, also, prohibiting
any flow in the y-direction through the walls, except as prescribed by the controls (forcing vertical
velocity). Writing the impulse conservation of eq. (3.1) by components instead of vectorial notation
leads to

∂v1/∂t + v1 ∂v1/∂x + v2 ∂v1/∂y = −∂p/∂x + (1/R)( ∂²v1/∂x² + ∂²v1/∂y² )
∂v2/∂t + v1 ∂v2/∂x + v2 ∂v2/∂y = −∂p/∂y + (1/R)( ∂²v2/∂x² + ∂²v2/∂y² )

Again, simulation of the above equations in general is rather complicated. A one-dimensional math-
ematical nonlinear model containing the same basic properties is preferred: that is a first-order
1 The applied boundary conditions are so-called no-slip conditions, where bounce-back conditions are also widely
used. No-slip conditions state that due to friction there cannot be a movement in x-direction at the channel
wall. Also, massive walls prohibit any movement in y-direction (here a control input is allowed through the walls).
Bounce-back conditions assume that a moving fluid particle is reflected at the wall, meaning that v1− = v1+ and
v2− = −v2+ , where the superscript denotes shortly before and after the collision, respectively. Obviously, these
conditions cannot be stated as simply as no-slip conditions. In order to comply with the motivation in [51], no-slip
conditions are preferred.

nonlinearity (quasi-linearity), i.e., the function coupled with its first-order spatial derivative (due
to the convective character of the motion), and, the inclusion of the second-order spatial derivative
multiplied by a (small-valued) parameter (which allows shifting from pure advective to hyperbolic
character). When reducing eq. (3.1) to a one-dimensional model problem, the forcing terms at

the boundary are not sustainable anymore. A direct (nonhomogeneous) forcing term should be
introduced, and the static pressure gradient will be neglected. Also, the mass conservation, in com-
bination with the periodic boundary conditions (v(0, t) = v(L, t)), transforms into periodicity of the
flux (∂v/∂x(0, t) = ∂v/∂x(L, t)), so that this crudely derived model could be stated:

∂v/∂t(t, x) + v(t, x) ∂v/∂x(t, x) = κ ∂²v/∂x²(t, x) + F(t, x)

3.1.1.2 Original Motivation

Besides the more or less crude analogies shown in the previous subsection, Burgers’ equation is
related by different means to fluid dynamics (so that the original motivation may be sketched).
When Johannes Martinus Burgers published the article “A Mathematical Model Illustrating the
Theory of Turbulence” in 1948 ([48]), computers were not yet in academic use. Therefore, the main
object of his paper was concerned with an analytical mathematical discussion of the phenomenon
of turbulence, and the equation - nowadays entitled as ‘his’ equation - could have been considered a
collateral outcome. “The complicated geometrical character of the hydrodynamic equations (vecto-
rial character of the velocity, condition imposed by the equation of continuity, properties of vortex
motion)”,2 the additional presence of nonlinear terms containing first order derivatives, and the very

small value of the coefficient of viscosity (large Reynolds number) are responsible for the fact that
discussions in fluid dynamics are particularly difficult to this day.

An existence and smoothness proof of the solution to the Navier-Stokes equations in three
dimensions is still one of the seven millennium problems declared by the Clay Mathematics Institute.3
This emphasizes the background for J. M. Burgers’ publication in which he introduces “a system of
mathematical equations much simpler than those of hydrodynamics” in order to elucidate the char-
acteristics associated with the problem of hydrodynamics.4 Those are in detail: “the characteristics
2 See page 171 in [48].
3 www.claymath.org/millennium/, solved in 2D.
4 It
shall be noted that J. M. Burgers limited his model to hydrodynamics, but the limitation has been due to
the use of incompressible fluids. Without loss of generality, incompressible Navier-Stokes equations can be used
synonymously.

of turbulence, among which are prominent those connected with the balance of energy and with the
appearance of dissipation layers” - which represent the main regions where energy is dissipated -
“together with the properties of the spectrum of the turbulent motion [...] the transfer of energy
through the spectrum, and the appearance of a practical limit to this spectrum”.5 This system

of equations appears as follows, where the original notation of the 1948 paper has been kept (i.e.,
this section is considered out of the nomenclature in the rest of this work, and symbols as well as
variables should not be mistaken):

b dU/dt = P − νU/b − (1/b) ∫₀ᵇ v² dy    (3.2)
∂v/∂t = Uv/b + ν ∂²v/∂y² − 2v ∂v/∂y    (3.3)

Although it was the intention of J. M. Burgers to illustrate a pure mathematical model, the terms
and expressions used are analogous to the variables of hydrodynamics: U (t) represents the mean or
primary motion of a liquid flowing through a channel, accompanied by v(y, t), the secondary motion
expressing the phenomenon of turbulence. Note that U is independent of position while v depends
on both. As long as v 6= 0, there is turbulence in the system. As channel flow is concerned, the
independent variable y represents the ‘cross-dimension’ of the channel with associated boundary
conditions at the ends 0 and b; here v has to vanish at both points. P functions as an exterior
force acting on the primary motion, while the remaining ν is the analogue of a friction parameter.
Thereby, the friction, in case of the primary motion, is proportional to U/b and, in case of the secondary
motion, proportional to the second partial derivative of v. It is not necessary to use the same friction
coefficient for both; however, nothing is lost or gained by not doing so. Another key-feature of this
system reveals: the terms of second degree in v are chosen in such a way that the representation of
the balance of energy can easily be found. Kinetic energy (for constant mass) is proportional to the
squared velocity; therefore, the left hand side of both equations can express the change of kinetic
energy by expanding with U and v, respectively. So eq. (3.2) is multiplied by U , while eq. (3.3)
is multiplied by v. Integrating both on the domain with respect to the boundary condition, and
adding the resulting expressions gives the energy distribution:

d/dt [ bU²/2 + ∫₀ᵇ v²/2 dy ] = PU − νU²/b − ν ∫₀ᵇ (∂v/∂y)² dy    (3.4)

Despite the energy increase P U due to the work of the exterior force, the balanced energy equation
5 See page 172 in [48].

does not show the energy transmission from the primary to the secondary motion as an interior
process, hence a compensating term in eq. (3.2). It shall be noted that the first term on the right-hand
side of eq. (3.3) is responsible for this interchange in energy, and differs from the hydrodynamical
equations in that it is dependent only on U (and not on its gradient, as in hydrodynamics).

Burgers performs sequential studies on particular solutions of this system, namely the lami-
nar solution (U = Pb/ν, v = 0) and stationary turbulent solutions (U assumed constant), together with
corresponding stability and detailed spectral analyses before he addresses the pivotal nonstationary
solutions. There, a central quality of his system is revealed: under the additional prerequisite of
U being treated as a constant - which does not interfere since turbulence is the point of interest -
and based on the spectral analysis of the stationary turbulent solution, the domain can be divided
in two regions: one comparatively broad region where ν ∂²v/∂y² can be neglected and several extremely
narrow regions in which Uv/b can be omitted (in comparison to the other two terms). So eq. (3.3),
for the secondary motion v, reduces to the following two cases:6

∂v/∂t + 2v ∂v/∂y − Uv/b = 0    (3.5)
∂v/∂t + 2v ∂v/∂y − ν ∂²v/∂y² = 0    (3.6)

Equation (3.5), being quasi-linear, can be solved by the method of characteristics (trajectories
determined by dy/dt = 2v) and holds in regions where ∂v/∂y > 0. But whenever the solutions tend
to generate discontinuities (in regions where ∂v/∂y < 0), eq. (3.6) has to be applied. J. M. Burgers also
presents an approximate solution to eq. (3.6), which will be omitted here since it can be solved by
means of the heat equation (for certain boundary conditions). Therefore, a transformation developed
separately by Julian D. Cole and Eberhard Hopf is needed (as shown in section 3.2). The nonlinear
term 2v ∂v/∂y of eq. (3.3) is identified as being responsible for potentially creating discontinuities for
large Reynolds numbers, i.e., small ν, while the diffusion term ν ∂²v/∂y² prevents the formation of a

true discontinuity by producing a dissipation region. Also, besides the channel model, the equations

presented by J. M. Burgers can be expanded to an infinite domain flow problem. Then, when U = 0,
the resulting expression illustrates the phenomenon of free turbulence that is not activated by energy
transmission from a primary motion.

Burgers himself states that his original quantities of interest have been the amplitudes ξ of
 
6 The corresponding stationary turbulent solutions are v = Uy/(2b) + const and v = −A tanh(A(y − B)/ν), respectively.

the components that together constitute the spectrum of the system. He was able to conclude that
these amplitudes are governed by an ordinary differential equation:

dξn/dt = ( U/b − νπ²n²/b² ) ξn + fn

Thereby, the first term on the right-hand side is expressing an exponential increase, while the second
term introduces damping. The remaining fn represent coupling between the different ξn , and are of
second degree in ξ: Σ_{n=1}^∞ ξn fn = 0, and hence are orthogonal. He was not able to perform a direct

investigation on the ξn , so his considerations were limited to mean values. However, he came to the
conclusion that the combination of ν ∂²v/∂y² and 2v ∂v/∂y is most decisive in giving rise to the appearance

of dissipation layers. Therefore, they characterize the peculiar mechanism operative in producing
turbulence and determine the statistical relations governing the transfers of energy. He identifies
the terms
∂u/∂t + u ∂u/∂x − ν ∂²u/∂x²
as being the closest analogy in hydrodynamics to his model terms, while also stating that the same
expressions are decisive in determining the appearance of shock waves in the supersonic motion of
a gas.7 Hence, these terms mimic the basic dynamical relations.

3.1.1.3 From Traffic Flow

A third motivation for the use of Burgers' equation originates in a completely different field, namely
traffic flow: fluid has been the archetype for traffic. The continuity hypothesis cannot be maintained
for traffic itself (too few cars to justify an intensive property of traffic), but interpreting traffic as
fluid flow has already proven to be quite useful. In the previous derivations, the physical analogies

of the appearing terms have come from fluid flow; hence, the dynamical behaviour of velocity is
described. In this third approach a time-and-space-varying function of density will be the center of
interest. Although traffic flow is the generating problem, the expressions and derivations shall be
considered in terms as general as possible. Let ρ(t, x) and q(t, x) be a 1-dimensional density and a 1-
dimensional flow of an entity so that the physical dimensions become [unit/area]t,x and [unit/time]t,x (in case
of traffic flow cars per mile and cars per hour, respectively). General conservation principles (e.g.,
the conservation of cars) state that the rate of change in time of the density equals the difference of
7 Then, densities become the quantities of interest instead of velocities.

the flow at both ends of the domain (the sign of drain and feed defined in a corresponding sense):
d/dt ∫ₐᵇ ρ(t, x) dx = q(t, a) − q(t, b) = − ∫ₐᵇ ∂q/∂x(t, x) dx    (3.7)

Since the above statement has to be true for any domain, the integrands on both sides have to be
equal (local formulation versus global form). If ρ is assumed to be sufficiently smooth, the order of
differentiation and integration can be interchanged, resulting in

∂ρ/∂t + ∂q/∂x = 0

Interpreting flow as the multiplication of density and velocity, q = ρ · v, leads to

∂ρ/∂t + v ∂ρ/∂x + ρ ∂v/∂x = 0

Often, the general assumption is made that the velocity only depends on the density, v = v̂(ρ).
Hence, the above equation becomes

∂ρ/∂t + v ∂ρ/∂x + ρ (dv/dρ) ∂ρ/∂x = 0

where q′(ρ) = v(ρ) + ρ dv/dρ. Thereby, the relation of velocity and density has to be declared in a
meaningful way. Initially, there is a maximum velocity a car can achieve (average value) and a
maximum density corresponding to a traffic jam. The mean velocity can be assumed to decrease
linearly with the density:

V(ρ) = vmax (1 − ρ/ρmax)
Sometimes velocity is thought to be subject to the change of density. In other words, if a driver
notices a rapid increase in density, he reduces speed even more (and vice versa). This additional
constraint is integrated into the velocity expression (ν being a constant whose determination is not
the subject here):

v(ρ) = V(ρ) − (ν/ρ) ∂ρ/∂x
Hence, the overall equation governing one-dimensional traffic flow obeying conservation principles
can be formulated:
∂ρ/∂t + vmax ∂ρ/∂x − 2ρ (vmax/ρmax) ∂ρ/∂x = ν ∂²ρ/∂x²    (3.8)
Equation (3.8) is known as the LWR model for traffic flow, attributed independently to Lighthill
and Whitham in 1955 and to Richards in 1956. Although eq. (3.8) incorporates an additional linear

term in ∂ρ/∂x, some might still refer to eq. (3.8) as 'Burgers' equation' (for example see page 583 in
[52]). The additional term does not alter the underlying dynamics, but in order to formally omit
this, an unrealistic assumption about velocity-density relations would have to be made. However,
Burgers’ equation can be derived directly from eq. (3.8) by setting vmax = ρmax = 1 and performing

a variable substitution of u(t, x) = ρ(t, x) − 1/2.8 For an overview on modeling traffic flow the reader
is referred to [53].
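A short check of this claim (a verification sketch in the notation of eq. (3.8)): for vmax = ρmax = 1, eq. (3.8) reads ρt + (1 − 2ρ)ρx = ν ρxx; inserting u = ρ − 1/2 gives ut − 2u ux = ν uxx, and the further rescaling w = −2u (equivalently w = 1 − 2ρ) recovers the standard form wt + w wx = ν wxx.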

3.1.2 Classification and Benchmark Problem

Although some properties and classifications of Burgers’ equation have been mentioned in the pre-
vious sections, a recapitulatory and extended characterization is regarded as reasonable. Since the
solution to Burgers’ equation is subject to its boundary conditions (as for any differential equation),
an appropriate set has to be defined. For use as a benchmark problem in control engineering an
infinite domain is not suitable; therefore, a finite domain formulation reflecting some ‘infinite’ prop-
erties has to be found. This is done by choosing periodic boundary conditions for function value
and first derivatives (flux), meaning that everything which leaves the domain at one end enters it
at the other end. When, for example, an initial sinusoidal local disturbance is considered in the
infinite domain, the disturbance travels (wave-like) on the domain while changing its shape. If the
same disturbance is applied to a domain with periodic boundary condition, the benchmark problem
mimics an observer window moving along with the disturbance. Thereby, only changes of shape
(and some phase shifts if the initial disturbance is not symmetric to the domain’s center) appear, as
will be seen later.

As stated in the previous sections, Burgers’ equation can adopt certain key-features as an
analytical model of different physical problems with varying analogies of the appearing terms. Hence,

a generic function variable w : [0, ∞) × [0, L] → R; w(., x)|[0,∞) ∈ C 1 ∀ x ∈ [0, L] and w(t, .)|[0,L] ∈
8 Note that the nonlinear term in Burgers’ equation is due to the derivative of u2 while the one in the LWR model

is due to the derivative of u(1 − u).



C 2 ∀ t ∈ [0, ∞) and a generic coefficient κ is used when stating the general benchmark problem:

∂w/∂t(t, x) + w(t, x) ∂w/∂x(t, x) = κ ∂²w/∂x²(t, x) + f(t, x)    (3.9)
w(0, x) = w0(x)    0 ≤ x ≤ L
w(t, 0) = w(t, L)    0 ≤ t ≤ ∞
∂w/∂x(t, 0) = ∂w/∂x(t, L)    0 ≤ t ≤ ∞

Problem (3.9) is called the forced viscous Burgers' equation and exhibits a nonlinear, inhomogeneous
second order partial differential equation. It is of mixed form, containing a diffusion term κ ∂²w/∂x²(t, x)
and a nonlinear advection term w(t, x) ∂w/∂x(t, x). Since the time-derivative is also involved, eq. (3.9)
is a hybrid form of a parabolic and a hyperbolic partial differential equation, where it is parabolic
for κ > 0 and degenerates for κ = 0 to a hyperbolic equation. The nonlinear advection term tends
to create discontinuities while the diffusion term rounds off steep descents so that a pure discontinuity
does not appear as long as κ ≠ 0. The smaller κ is chosen, the closer the solution comes to forming
discontinuities, and in the case of κ = 0 the equation reduces to quasi-linear first-order advection
generating shock waves.
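For illustration only, a minimal explicit finite-difference simulation of the homogeneous benchmark problem (3.9) with periodic boundary conditions exhibits this behaviour; the grid, the time step and the value of κ below are ad hoc choices, and the thesis itself uses the Galerkin finite-element discretization of section 3.3 instead:

import numpy as np

L, kappa = 1.0, 0.1
nx, dt, nt = 100, 2.0e-4, 5000                     # final time 1 s
x = np.linspace(0.0, L, nx, endpoint=False)        # periodic grid
dx = L / nx
w = np.sin(2 * np.pi * x / L)                      # initial disturbance

for _ in range(nt):
    wp, wm = np.roll(w, -1), np.roll(w, 1)         # periodic neighbours
    advection = w * (wp - wm) / (2.0 * dx)         # nonlinear term w * w_x
    diffusion = kappa * (wp - 2.0 * w + wm) / dx**2
    w = w + dt * (diffusion - advection)           # explicit Euler step

print(w.mean(), np.abs(w).max())                   # the spatial mean is conserved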

The equation can be considered a 1-dimensional model of impulse conservation in solenoidal


vector fields as well as an approximation of Euler’s equation.9 The nonlinear advection term mimics
the nonlinearity due to the convective derivative in the Navier-Stokes equations and provides a
one-dimensional approximation of the channel flow of an incompressible Newtonian fluid without a
pressure gradient (but including a forcing term). For disambiguation, it should be noted that the
term convection is not used consistently in literature: in context of heat and mass transfer it refers to
the sum of advective and diffusive transfer, and not to the convective derivative. In this first sense,
Burgers’ equation could be denoted as a convective equation. Furthermore, the benchmark problem
provides even more sophisticated behavior of channel flow by functioning as the decisive part in J.
M. Burgers’ mathematical model for the creation of turbulence in incompressible fluids. Thereby,
it must be noted that this model emphasizes the energy dissipation between primary motion and
secondary (turbulent) motion, while the benchmark problem does not dissipate energy due to the
periodic boundary conditions. As will be shown later, the steady-state solution is constant. In a

different interpretation, Burgers’ equation describes the behaviour of traffic flow and - for κ = 0 -
9 It should be recalled that the conservation property only appears due to the periodic Neumann boundary condi-
tions. Otherwise, parabolic equations - and hence Burgers' equation - do not show a conservation property.

the one-dimensional pressure distribution of a compressible fluid obeying conservation principles


and neglecting internal friction. Hence, it is also the decisive part in the motion of a nonviscous
compressible gas described by the Euler equation.

The presented derivation and characterization of the benchmark problem gives enough in-

sight to define a meaningful control purpose and quality measure: as an approximation of channel
flow, attenuation of an initial disturbance is reasonable; whereas, from Burgers’ point of view, tur-
bulence should be eliminated. Either gives reason to drive the solution to zero; for example, when
stated as traffic flow, the prevention of a ‘shock’ - symbolizing a traffic jam - is the control target.
As mentioned before, energy or momentum, respectively, is conserved by the periodic boundary
conditions; therefore, it will not be possible to drive the solution to zero without draining the total
energy or impulse by means of the control input. But this is only an artificial property imposed by
stating a solvable control problem (the state-space to be created has to be finite); the elimination of
turbulence is also sufficiently achieved by driving the solution to its steady-state constant, without
the loss of generality. Hence, the control problem will be the attenuation of an initial distur-
bance to the steady-state constant, while the quality measure of the control can be defined by
the average deviation from the target equilibrium, for example, or the time in which a certain region
surrounding the steady-state is reached. A combination with the absolute peak function value, or
the integral of the deviation from the steady-state, is also conceivable.
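A possible realization of such measures (a sketch only; the band width and the specific combination of measures are illustrative assumptions, not a definition adopted later in this thesis):

import numpy as np

def control_quality(w, t, w_ss, band=0.05):
    # w: (n_t, n_x) closed-loop response samples; t: sample times; w_ss: steady-state constant
    dev = np.mean(np.abs(w - w_ss), axis=1)        # spatially averaged deviation per time step
    avg_dev = np.mean(dev)                         # time-averaged deviation
    peak = np.abs(w - w_ss).max()                  # absolute peak deviation
    inside = dev <= band
    t_band = t[np.argmax(inside)] if inside.any() else np.inf   # first entry into the band
    return avg_dev, peak, t_band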

3.2 Analytical Solution

In section 3.1.2, the model problem - the viscous Burgers’ equation with periodic boundary conditions
- has been stated in eq. (3.9). For the sake of simplicity, the differential operator will be omitted in
the following, and the derivative will be denoted by a subindex (∂w/∂x(t, x) = wx(t, x)). Although there
is no general concept like the Picard-Lindeloef theorem of ODEs for initial-value problems governed by
partial differential equations, the existence and uniqueness of solutions for certain classes of PDEs with
certain - well-behaved - boundary conditions has been shown in the literature and is therefore guaranteed (see [54]).

3.2.1 General Approach

Because of its nonlinearity, problem (3.9) can only be solved directly for κ = 0 (non-viscous Burgers’
equation) by the method of characteristics, which will be referred to as the ‘shock solution’ (section

3.2.2). Fortunately, the Cole-Hopf transformation, independently developed by J. D. Cole in [55]
and E. Hopf in [56], helps by converting eq. (3.9), in two steps, to a linear heat equation. Let w = cx:

∂cx/∂t + cx ∂cx/∂x = κ ∂²cx/∂x² + f

If c is postulated to be smooth, the order of differentiation can be interchanged:

ctx + cx · cxx = κcxxx + f

Integration yields10

ct + (1/2) cx cx = κ cxx + g

where
∫ˣ f(t, x′) dx′ = g(t, x)

The second transformation step, c(t, x) = −2κ ln(φ), leads to further simplification:11

−2κ (φt/φ)(t, x) = −2κ² (φxx/φ)(t, x) + g(t, x)

When g(t, x) is omitted (that is, the homogeneous problem is considered) the equation becomes

φt(t, x) = κ φxx(t, x)    0 ≤ x ≤ L,  0 ≤ t ≤ ∞

The performed transformation amounts to the following rule:

w(t, x) = cx(t, x) = −2κ (φx/φ)(t, x)    (3.10)
φ(t, x) = e^( −(1/2κ) ∫₀ˣ w(t, x′) dx′ )    (3.11)
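A small symbolic spot check of this rule may be helpful (a sketch using sympy; the single-mode periodic heat-equation solution below is merely an assumed example and anticipates nothing beyond φt = κφxx):

import sympy as sp

t, x, kappa, L, a1, b1, b0 = sp.symbols('t x kappa L a1 b1 b0', positive=True)
k1 = 2 * sp.pi / L
phi = b0 / 2 + sp.exp(-k1**2 * kappa * t) * (a1 * sp.sin(k1 * x) + b1 * sp.cos(k1 * x))

# phi satisfies the heat equation phi_t = kappa * phi_xx ...
print(sp.simplify(sp.diff(phi, t) - kappa * sp.diff(phi, x, 2)))      # -> 0

# ... and w = -2*kappa*phi_x/phi then satisfies the homogeneous viscous Burgers' equation
w = -2 * kappa * sp.diff(phi, x) / phi
residual = sp.diff(w, t) + w * sp.diff(w, x) - kappa * sp.diff(w, x, 2)
print(sp.simplify(residual))                                          # -> 0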

But what happens to the initial and boundary conditions? The initial condition transforms
straightforwardly with eq. (3.11) to φ0(x), and substituting the periodicity into eq. (3.10) leads to

(φx/φ)(t, 0) = (φx/φ)(t, L)
(φxx/φ)(t, 0) = (φxx/φ)(t, L)

This needs further attention since neither expression can really be interpreted as homogeneous

boundary conditions (applying φ(x, t) ≡ 0 leads to a marginal condition). Additionally, there is


10 Integration by parts of the second term: ∫ˣ cx cxx dx′ = (1/2)[cx cx]ˣ.
11 The second term vanishes.

a lack of information on the values of φ at both ends of the domain so that, up to this point,
the above expressions do not result in a useful relation. This is not a sudden development, since
through integration the transformation has moved the problem ‘one level upward;’ that is, additional
knowledge about φ is needed.

One might think that eq. (3.11) would help in gathering boundary information: at first
glance, eq. (3.11) provides a function value at the left boundary, φ(t, 0) = 1, but this is extremely
misleading. Not only is there no additional information on the value of ∫₀ᴸ w(t, x′) dx′ and, there-
fore, on the function values of φ at the right boundary, the overall problem remains unsolvable by
conventional techniques. But it should also appear suspicious that this condition for φ(t, 0) is inde-
pendent of the original boundary conditions, i.e., it would have to appear for any original problem
formulation. This does not make sense from a physical point of view: although Burgers’ equation is
only a mathematical problem or an analogy, respectively, to some physical phenomena, it still results
from conservation laws. In other words, by the transformation approach of eq. (3.11), φ(t, 0) = 1 is
always fulfilled and does not represent an additional condition.

But since the Cole-Hopf transformation performs an integration of problem (3.9), it gives
reason to consider the integral of the PDE on the entire domain, resulting in
∂/∂t ∫₀ᴸ w(t, x) dx = −(1/2)[w²(t, x)]₀ᴸ + κ[wx(t, x)]₀ᴸ

Applying the boundary conditions and the transformation rule leads to12


(c(t, L) − c(t, 0)) = 0
∂t

Since there is no change in time, the difference of c(t, L) and c(t, 0) has to be constant at any time;
so using φ instead of c yields

ln φ(t, L) − ln φ(t, 0) = const

And hence, the relation of both ends of the domain results in

φ(t, 0) = econst · φ(t, L)

where the constant can be obtained from the initial condition w0 (x) via
const = −(1/2κ) ∫₀ᴸ w0(x) dx
12 Note that the problem has been reduced to finding a primitive.

The constant may always be forced to be 1 by a transformation of variables,13 but the original PDE
is usually not preserved. Hence, the remaining considerations of an analytical solution are limited
to initial value distributions obeying ∫₀ᴸ w0(x) dx = 0. The new problem statement becomes14

φt (t, x) = κφxx (t, x) (3.12)

φ(0, x) = φ0 (x) 0≤x≤L

φx (t, 0) = φx (t, L) 0≤t≤∞

φxx (t, 0) = φxx (t, L) 0≤t≤∞

Problem (3.12) can be solved by utilizing the general solution for the heat equation on a finite
domain using Green’s functions (where G(x, t; x0 , t0 ) denotes the problem-specific Green function
itself):
φ(x, t) = ∫₀ᵗ ∫₀ᴸ G(x, t; x0, t0) · Q(x0, t0) dx0 dt0
        + ∫₀ᴸ G(x, t; x0, 0) φ(x0, 0) dx0
        + κ ∫₀ᵗ [ G(x, t; x0, t0) ∂c/∂x0(x0, t0) − c(x0, t0) ∂G/∂x0(x, t; x0, t0) ]₀ᴸ dt0

The first term represents the influence of sources depending on observer position, source position,
and time; the second term propagates the initial condition. Both terms are essentially convolution
integrals and can be computed directly (at least numerically), where the sources have to be replaced
by the forcing term F (t, x). The third part summarizes the influence of the boundary conditions.
The Green’s function for this problem can have different representations, where

G(x, t; x0, t0) = Σ_{n=1}^∞ (2/L) sin(nπx/L) sin(nπx0/L) e^( −κ(nπ/L)²(t−t0) )

results from the method of ‘eigenfunction expansion’ and



G(x, t; x0, t0) = ( 1/√(4πκ(t − t0)) ) Σ_{n=−∞}^∞ { e^( −(x−x0−2Ln)²/(4κ(t−t0)) ) − e^( −(x+x0−2Ln)²/(4κ(t−t0)) ) }

can be derived by the ‘method of images’.15 But, in order to avoid numerical integration, a more

closed-form solution is sought-after by the separation of variables. Let φ(t, x) = θ(t) · ψ(x), then
13 For example by w∗(t, x) = w(t, x) − (1/L) ∫₀ᴸ w0(x′) dx′.
14 The condition φ(t, 0) = 1 has been omitted for the above reasons.
15 For details, see [52].

eq. (3.12) becomes

θ′(t) · ψ(x) = κ θ(t) · ψ″(x)
⇔ (1/κ)(1/θ)(dθ/dt) = (1/ψ)(d²ψ/dx²) = −λ
⇔ dθ/dt = −λκθ   and   d²ψ/dx² = −λψ

The partial differential equation has been converted into two ordinary differential equations, coupled
by the eigenvalue λ. The solution of the time-domain ODE only depends on λ:

θ(t) = const · e−λκt

The spatial domain ODE yields the eigenfunctions and corresponding eigenvalues by applying the
periodic boundary conditions. The general solution, i.e., the eigenfunctions, are

ψ(x) = c1 eϑ1 ·x + c2 eϑ2 ·x

Thereby, the ϑi are the solutions of the characteristic polynomial ϑ2 = −λ. But different constella-
tions have to be distinguished:
1. λ > 0 : ϑ = ±i√λ → ψ(x) = c1 sin(√λ x) + c2 cos(√λ x)
2. λ = 0 : ϑ = 0, 0 → ψ(x) = c3 + c4 x
3. λ < 0 : ϑ = ±√(−λ) → ψ(x) = c5 sinh(√(−λ) x) + c6 cosh(√(−λ) x)
4. λ itself non-real

The last case will be omitted since only real solutions are considered.16 The implementation of the
boundary conditions results in the following eigenvalues (n ∈ N):

1. λ > 0 ; λ = (2nπ/L)² ; ψ(x) = c1 sin(2nπx/L) + c2 cos(2nπx/L)

2. λ = 0 ; ψ(x) = c3 + c4 x

3. λ < 0 ; contradiction to the boundary conditions

Therefore, the general solution for φ(t, x) becomes


φ(t, x) = Σ_{n=0}^∞ e^( −(2nπ/L)²κt ) · [ an sin(2nπx/L) + bn cos(2nπx/L) ]

The coefficients an and bn can be computed from the given initial values by using the orthogonality
16 w : [0, ∞) × [0, L] → R; w(., x)|[0,∞) ∈ C 1 ∀ x ∈ [0, L] and w(t, .)|[0,L] ∈ C 2 ∀ t ∈ [0, ∞)

of sines and cosines:


an = (2/L) ∫₀ᴸ φ0(x) · sin(2nπx/L) dx
bn = (2/L) ∫₀ᴸ φ0(x) · cos(2nπx/L) dx

And with the transformation rule of eq. (3.10), the general solution of the viscous Burgers’ equation
with periodic boundary conditions and given initial values (eq. (3.9)) results in
w(t, x) = −2κ [ Σ_{n=1}^∞ (2nπ/L) e^( −(2nπ/L)²κt ) · ( an cos(2nπx/L) − bn sin(2nπx/L) ) ]
          / [ b0/2 + Σ_{n=1}^∞ e^( −(2nπ/L)²κt ) · ( an sin(2nπx/L) + bn cos(2nπx/L) ) ]    (3.13a)

an = (2/L) ∫₀ᴸ e^( −(1/2κ) ∫₀ˣ w0(x∗) dx∗ ) · sin(2nπx/L) dx    (3.13b)
bn = (2/L) ∫₀ᴸ e^( −(1/2κ) ∫₀ˣ w0(x∗) dx∗ ) · cos(2nπx/L) dx    (3.13c)

This solution is only valid for κ ≠ 0; otherwise, the shock solution of the following section comes into operation. It shall be noted that - under the mentioned prerequisites - eq. (3.13) is always valid; however, if evaluated numerically, it might lead to difficulties for small κ, since the inverse of κ appears in the argument of the exponential function. This may lead to very large negative exponents in the transformation rule of eq. (3.11), as well as in the numerator and denominator of the analytical solution (eq. (3.13)). Therefore, figure 3.2 shows a plot of the analytical solution for κ = 0.5 and κ = 0.1, respectively, with an initial sine wave of one period on the domain [0, 1] and between 0 and 2 seconds. The influence of κ is already apparent: the smaller it is, the steeper the solution becomes and the less damped the system is.

(a) κ = 0.5, initial sine wave   (b) κ = 0.1, initial sine wave

Figure 3.2: Analytical Solution of the Viscous Burgers' Equation, Initial Sine Wave (surface plots of w(t, x) over space x and time t [sec])
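To make the numerical evaluation of eq. (3.13) concrete, the following sketch computes the coefficients a_n, b_n and evaluates the truncated series for the initial sine wave. It is written in Python rather than the Matlab code used for the simulations in this work, and the truncation order, grid resolution, and trapezoidal quadrature are assumptions made only for illustration.

```python
import numpy as np

# Parameters chosen to match the plot above (assumptions)
kappa, L = 0.1, 1.0
N_modes = 50                       # series truncation
x = np.linspace(0.0, L, 201)

# Initial condition and its running integral int_0^x w0 dx* (trapezoidal rule)
w0 = np.sin(2.0 * np.pi * x / L)
W0 = np.concatenate(([0.0], np.cumsum(0.5 * (w0[1:] + w0[:-1]) * np.diff(x))))
phi0 = np.exp(-W0 / (2.0 * kappa))         # Cole-Hopf transformed initial data

def coeffs(n):
    """Fourier coefficients a_n, b_n of eq. (3.13b/c) via trapezoidal quadrature."""
    a_n = 2.0 / L * np.trapz(phi0 * np.sin(2 * n * np.pi * x / L), x)
    b_n = 2.0 / L * np.trapz(phi0 * np.cos(2 * n * np.pi * x / L), x)
    return a_n, b_n

def w(t):
    """Evaluate the series solution (3.13a) at time t on the grid x."""
    num = np.zeros_like(x)
    den = 0.5 * coeffs(0)[1] * np.ones_like(x)      # b_0 / 2
    for n in range(1, N_modes + 1):
        a_n, b_n = coeffs(n)
        k_n = 2 * n * np.pi / L
        decay = np.exp(-k_n**2 * kappa * t)
        num += k_n * decay * (a_n * np.cos(k_n * x) - b_n * np.sin(k_n * x))
        den += decay * (a_n * np.sin(k_n * x) + b_n * np.cos(k_n * x))
    return -2.0 * kappa * num / den

print(w(0.5)[:5])   # solution snapshot at t = 0.5 s, first few grid points
```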

3.2.2 Shock Solution

If κ is set to zero, the equation reduces to the quasi-linear non-viscous Burgers’ equation, representing
pure nonlinear advection and becoming the standard model of a nonlinear wave equation:

\[
\frac{\partial w}{\partial t}(t,x) + c(w,t,x)\, \frac{\partial w}{\partial x}(t,x) = Q(w,t,x) \tag{3.14}
\]

This type of nonlinear PDE can be solved by the method of characteristics reducing it to two ordinary

differential equations, one defining the so-called characteristic, and the other describing the change
of function value along such a trajectory. For the above equation, c(w, t, x) assigns the velocity of a
moving observer and is therefore called either the characteristic, local wave, or density velocity:17

\[
\frac{dx}{dt}(t,x) = c(w,t,x) \tag{3.15}
\]

Along a trajectory defined by eq. (3.15), eq. (3.14) reduces to

\[
\frac{dw}{dt}(t,x) = Q(w,t,x) \tag{3.16}
\]

The analytical considerations in this work are limited to the homogeneous case for the non-viscous
Burgers’ equation; hence, Q(w, t, x) = 0 and c(w, t, x) = w(t, x) hold, resulting in

w(x(t)) = const = w0 (x0 ) along x(t) = f (t, x0 ) = w0 (x0 )t + x0 (3.17)

This means that a characteristic trajectory - here a straight line - starts from each point of the spatial
domain. Along each characteristic the function w(t, x) remains constant, but different characteristics
yield different constant velocities. In order to find the function value at a certain time and place, w(tp, xp), one has to identify the characteristic running through the point xp at time tp and determine its origin x0 at t = 0. In other words, the equation xp(tp) = f(tp, x0) has to be solved for x0. When initial values w0(x) are provided, the desired function value is then given by w0(x0). This can be interpreted graphically: the initial distribution is propagated through time and space along the specific characteristic, as illustrated by figure 3.3(a).18 Sometimes an explicit expression
x0 (tp , xp ) = f −1 (xp (tp )) is possible, so that the overall solution can be expressed in closed form:

w(t, x) = w0 (x0 (t, x)) = w0 (f −1 (x(t)))

But for most initial distributions - as for the ones used in this work - this is not possible. Yet,
17 The background of this consideration is the comparison with the total differential dv = (∂v/∂x) dx + (∂v/∂t) dt. Hence, dv/dt = ∂v/∂t + v ∂v/∂x with v = dx/dt.
18 This also provides a way to solve quasi-linear partial differential equations graphically.

(a) Propagation of Initial Condition   (b) Multiple Valued Solution

Figure 3.3: Graphic Shock Solution (Quasilinear)

Figure 3.4: Intersecting Characteristics

the presented technique is not valid for all situations. Figure 3.4 shows the problem of intersecting characteristics: the solution becomes multi-valued, which might be correct from a purely mathematical point of view, but does not represent the actual physics. Hence, the underlying partial differential equation becomes invalid as a model for the physical problem. In order to comply with physics, a jump discontinuity representing the phenomenon of a shock wave is introduced, where different characteristics (and hence solutions) hold at the left side x_s^- and the right side x_s^+ of the discontinuity, respectively (see figure 3.3(b)). In practice, eq. (3.17) holds on the whole domain until a shock is initiated; then the position and velocity of the shock are computed. After that initiation, eq. (3.17) is evaluated for each side of the shock separately, and the position

of the shock is propagated19 according to


\[
\frac{dx_s}{dt}(t) = \frac{w(x_s^+,t)\, c(x_s^+,t) - w(x_s^-,t)\, c(x_s^-,t)}{w(x_s^+,t) - w(x_s^-,t)}
\]

If the local wave velocity is an explicit function of w only (c = c(w(x, t))), as applies for the non-viscous Burgers' equation, one can define a 'flow' function q with dq(w)/dw = c(w), so that the shock velocity reduces to
\[
\frac{dx_s}{dt} = \frac{q(x_s^+,t) - q(x_s^-,t)}{w(x_s^+,t) - w(x_s^-,t)} = \frac{[q]}{[w]} \tag{3.18}
\]
The initial time and position of the shock must still be determined: even for adjacent characteristics,
it takes a finite time unequal to zero until the intersection. Although not evident from a pure
mathematical point of view, physics reveals that there is a limit to the speed of propagation. Hence,
the initiation of a shock is given by20

\[
t_s{}^{21} = \frac{-1}{\displaystyle\min_{x_0}\, \frac{dc}{dx_0}(w_0, 0, x_0)} \tag{3.19}
\]

If the attention is drawn back to the problem of the inviscid Burgers' equation, it should be noted that all previous results can be extended to a periodic boundary value problem by keeping in mind that characteristics leaving one side of the domain enter the other again. The above considerations for initial values given by a sine wave (w₀(x) = sin(2πx/L)) result in
\[
t_{s,\mathrm{initial}} = \frac{L}{2\pi}, \qquad x_s(t_{s,\mathrm{initial}}) = \frac{L}{2}, \qquad \frac{dx_s}{dt} = \frac{1}{2}\left( w(t, x_s^+) + w(t, x_s^-) \right)
\]

This initial condition yields the beneficial property of a non-moving shock, simplifying the exact
solution shown in figure 3.5 for the domain [0, 1] between 0 and 2 seconds.
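As a quick numerical cross-check of eq. (3.19) and the values quoted above, the following short Python sketch (the grid resolution is an arbitrary assumption) evaluates the shock initiation time and location for the sinusoidal initial condition:

```python
import numpy as np

# Shock initiation, eq. (3.19), for w0(x) = sin(2*pi*x/L) on [0, L].
# For the inviscid Burgers' equation c(w) = w, so dc/dx0 = w0'(x0).
L = 1.0
x0 = np.linspace(0.0, L, 100001)
dw0_dx0 = (2.0 * np.pi / L) * np.cos(2.0 * np.pi * x0 / L)

t_s = -1.0 / dw0_dx0.min()              # eq. (3.19)
x_shock = x0[dw0_dx0.argmin()]          # steepest descent -> shock location

print(f"t_s = {t_s:.4f} s   (analytical: L/(2*pi) = {L/(2*np.pi):.4f} s)")
print(f"shock forms at x = {x_shock:.3f}   (expected L/2 = {L/2:.3f})")
```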

3.2.3 Steady-State Solution

Another analytical approach to the benchmark problem (3.9) is its steady-state solution. As will
become evident in section 4.2.2, the steady-state solution is of great value when approaching the
problem in control terms. By direct inspection, eq. (3.9) reveals that any constant function fulfills
the partial differential equation and the periodic boundary conditions; obviously, a constant function
19 The theory of shock waves and quasi-linear equations is motivated from problems where the variable of interest is a density. Hence, the assumption is made that the flow into the shock has to equal the flow out of the shock (which is represented graphically by cutting the same area left and right of the shock as shown by figure 3.3(b)). If w(x, t) is a density, the flow becomes density times velocity, whereby the overall velocity results in the local wave velocity minus the shock velocity: w(x_s^-, t)[c(x_s^-, t) − dx_s/dt] = w(x_s^+, t)[c(x_s^+, t) − dx_s/dt].
20 For further reading, please refer to [52].
21 If t_s < 0, then there is no shock.

Figure 3.5: Analytical Solution of the Inviscid Burgers' Equation, Initial Sine Wave (surface plot of w over space and time)

does not change in time and therefore is a steady-state solution. But, do other steady-state solutions
exist?22 In order to find those, the partial derivative in time has to be set to zero:


\[
\frac{\partial}{\partial t}\, w_{st}(t,x) \equiv 0
\]

Thus, a nonlinear ordinary differential equation of second order results:

\[
w_{st}(x)\, \frac{d w_{st}(x)}{dx} = \kappa\, \frac{d^2 w_{st}(x)}{dx^2}
\]

This equation can be put into a more favorable form:

\[
\frac{d}{dx}\left( \frac{1}{2}\, w_{st}^2(x) \right) = \kappa\, \frac{d^2 w_{st}(x)}{dx^2}
\]

Since constant functions are already known to be steady-state solutions, they can be eliminated by integrating both sides and neglecting the resulting constant of integration. The remaining
22 Previous research on Burgers' equation has shown many different steady-state solutions for different types of boundary conditions, while periodic ones have hardly been addressed.



first-order differential equation can then be solved in a straightforward manner:
\[
\frac{1}{2}\, w_{st}^2(x) = \kappa\, \frac{d w_{st}(x)}{dx}
\quad\Rightarrow\quad
\frac{1}{2}\, x = -\frac{\kappa}{w_{st}(x)} + \text{const}
\quad\Rightarrow\quad
w_{st}(x) = \frac{\kappa}{\text{const} - \frac{x}{2}}
\]

But the steady-state solutions have to obey the periodic boundary conditions. The Dirichlet condition
\[
w_{st}(0) = \frac{\kappa}{\text{const}} \overset{!}{=} w_{st}(L) = \frac{\kappa}{\text{const} - \frac{L}{2}}
\]

can only be fulfilled for L = 0, contradicting the problem statement. Hence, it has been proven by contradiction that constant functions are the only steady-state solutions of problem (3.9). Furthermore, eq. (3.12) has already shown that the integral of w(t, x) over the whole domain does not change in time (conservation principle):
\[
\frac{\partial}{\partial t} \int_0^L w(t,x)\, dx = 0
\]

Thus, the constant of the steady-state solution is related to the initial distribution w0 (x) by
\[
w_{st}(x) = \text{const} = \frac{1}{L} \int_0^L w_0(x)\, dx \tag{3.20}
\]
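As a brief worked example (using the full-period sinusoidal initial disturbance shown in the open-loop plots above), eq. (3.20) immediately yields the zero steady state:
\[
w_0(x) = \sin\!\left(\frac{2\pi x}{L}\right) \quad\Rightarrow\quad w_{st} = \frac{1}{L}\int_0^L \sin\!\left(\frac{2\pi x}{L}\right) dx = 0,
\]
which is consistent with the later regulation objective of driving the state to the origin.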

3.3 Finite-Element Approximation

In order to provide a suitable formulation of Burgers’ equation for control purposes, a Galerkin finite-
element approximation will be applied. Thereby, the partial differential equation is semi-discretized
in the spatial domain, resulting in a system of coupled first-order ordinary differential equations in
time (i.e., a state-space representation).23

Galerkin method: In general, the Galerkin approach is a weighted residual method designed to

transform a continuous operator problem onto a discrete one. If a partial differential equation is
considered, the generic problem can be stated as follows:

\[
\mathcal{D}\,\Theta = f
\]
23 The control approach is not limited to Galerkin approximations; in fact, any method resulting in an abstract form can be utilized.

where \(\mathcal{D}\) is a differential operator, Θ an unknown quantity, and f a forcing function; additional conditions on the boundary Γ enclosing the domain have to be given. If an approximate solution Θ̃ of the exact solution is regarded, the residual r can be nonzero:
\[
r = \mathcal{D}\,\tilde{\Theta} - f \neq 0
\]

In the Galerkin approach the residual is weighted and integrated with different weighting functions,
whereby the resulting integral has to be minimized:
\[
R_i = \int_\Omega w_i\, r\, d\Omega \to 0
\]

The finite-element method appears as a special result when the weighting functions are chosen to be the basis, or carrier functions, of the expansion of the approximate solution (wᵢ = vᵢ):
\[
\tilde{\Theta} = \sum_{j=1}^{N} c_j\, v_j
\]

Since the Galerkin method results in a bilinear form, it is equivalent to converting the equation into its weak form by applying the calculus of variations, as employed in the further development.

Bilinear form Before stating the weak form of eq. (3.9), the problem has to be embedded into
the right solution space: since in the FE approach jump discontinuities may appear (finite jumps),
the concept of weak derivatives24 is applied. Therefore, the solution space becomes a Sobolev space
H k,p , which denotes that subset of Lp , whose functions, and their derivatives up to the order of k,
have a finite Lp -norm (for p ≥ 1). In the FEM the L2 -norm is considered.25 Since problem (3.9)
is a second order partial differential equation, only first-order weak derivatives will be considered,

resulting in k = 1 and p = 2. The Sobolev embedding theorem states
\[
H^{k,p} \hookrightarrow C^{j,\beta}, \qquad \Omega \subset \mathbb{R}^n, \qquad k - j - \beta > \frac{n}{p}
\]

As eq. (3.9) is a one-dimensional (n = 1) problem, j = 0 and β = 0 follow; therefore, H 1 ,→ C 0 .


Hence, the Neumann boundary condition (periodicity of flux) has to be omitted. Only the periodic
Dirichlet condition will be valid (and necessary) for the FE approximation.
24 By integrating the entire equation, the requirements on the solution functions are less stringent, i.e., continuous differentiability is demanded for one order less than in the original problem. Hence, the requirements have become 'weaker.'
25 L²(Ω, B) := {f : Ω → Rᵐ : ∫_Ω ‖f(x)‖² dx < ∞}, f measurable with respect to the Lebesgue measure.

The weak form of the equation is obtained by multiplication with a testfunction which complies with the remaining boundary conditions (v ∈ H¹(0, L), v(0) = v(L)), and subsequent integration:
\[
\int_0^L \left[ v\, w_t + v\, w\, w_x - \kappa\, v\, w_{xx} \right] dx = \int_0^L f \cdot v\, dx
\]

Integration by parts of the viscous term yields
\[
\int_0^L -\kappa\, v\, w_{xx}\, dx = -\kappa \left[ v\, w_x \right]_0^L + \kappa \int_0^L v_x\, w_x\, dx
\]

If w is a strong solution and v fulfills the boundary conditions, [v w_x]₀ᴸ equals zero,26 and the weak problem becomes
\[
\text{Find } w \in \tilde{H}^1 := \{ w \in H^1 : w(t,0) = w(t,L) \} \tag{3.21}
\]
\[
\text{with } a(\dot{w}, v) = \langle f, v \rangle \qquad \forall\, v \in \tilde{H}^1 = \{ v \in H^1 : v(t,0) = v(t,L) \}
\]
\[
\text{where } a(\dot{w}, v) = \int_0^L \left( v\, w_t + v\, w\, w_x + \kappa\, v_x\, w_x \right) dx,
\qquad
\langle f, v \rangle = \int_0^L (f \cdot v)\, dx
\]

Semi-Discretization: In the process of the FE approximation, the spatial domain is discretized


while the time domain remains continuous. Therefore, the solution space becomes H̃n1 , a finite-
dimensional space with dimension n, where H̃n1 ⊂ H̃ 1 and H̃n1 → H̃ 1 for n → ∞ holds. If p0 , ..., pn−1 ,
with x 7→ pi (x) ∈ H 1 and fulfilled boundary conditions pi (0) = pi (L), generate a basis of H̃n1 , the
approximative solution can be stated as a linear combination:
\[
w_h(t,x) = \sum_{j=0}^{n-1} w_j(t)\, p_j(x) \tag{3.22}
\]

26 Even if w ∈ C² ∩ H̃¹, ∫₀ᴸ [v w_t + v w w_x + κ v_x w_x − f v] dx − κ v(L) w_x(L) + κ v(0) w_x(0) = 0 holds for all v ∈ H̃¹. In particular, all v ∈ C^∞ with v(0) = v(L) = 0 are elements of H̃¹ and the integrand has to be zero for all x ∈ (0, L). Hence, v(L) w_x(L) − v(0) w_x(0) = 0 ∀ v ∈ H̃¹; since v(L) = v(0), the Neumann boundary condition w_x(L) = w_x(0) follows automatically. Therefore, the FE space has been chosen appropriately.

where w_h ∈ H̃¹_n, since w_h(t, 0) = Σ_{j=0}^{n−1} w_j(t) p_j(0) = Σ_{j=0}^{n−1} w_j(t) p_j(L) = w_h(t, L). Substituting eq. (3.22) into eq. (3.21) leads to the discrete problem:
\[
\text{Find } w_h \in \tilde{H}^1_n = \Bigl\{ w_h(t,x) : w_h(t,x) = \sum_{j=0}^{n} w_j(t)\, p_j(x) \;\wedge\; w_h(t,0) = w_h(t,L) \Bigr\} \tag{3.23}
\]
\[
\text{where } a(\dot{w}_h, v_h) = \langle f, v_h \rangle \qquad
\forall\, v_h \in \tilde{H}^1_n = \Bigl\{ v_h(t,x) : v_h(t,x) = \sum_{i=0}^{n} v_i(t)\, p_i(x),\; v_h(t,0) = v_h(t,L) \Bigr\}
\]
\[
\text{and } \tilde{H}^1_n \subset \tilde{H}^1
\]

As described in the Galerkin approach, the testfunctions v - which are equivalent to the weight-
ing functions - will be substituted by the basis functions in order to identify the unknown node
parameters wj :
\[
\int_0^L \Bigl[ p_i(x) \sum_{j=0}^{n} \dot{w}_j(t)\, p_j(x) + p_i(x) \sum_{k=0}^{n}\sum_{l=0}^{n} w_k w_l\, p_k(x)\, \frac{dp_l}{dx}(x) + \kappa\, \frac{dp_i}{dx}(x) \sum_{q=0}^{n} w_q\, \frac{dp_q}{dx}(x) \Bigr] dx = \int_0^L f(t,x)\, p_i(x)\, dx
\]

Obvious algebraic manipulation gives


\[
\sum_{j=0}^{n} \dot{w}_j \int_0^L p_i(x)\, p_j(x)\, dx + \sum_{k=0}^{n}\sum_{l=0}^{n} w_k w_l \int_0^L \frac{dp_l}{dx}(x)\, p_k(x)\, p_i(x)\, dx + \kappa \sum_{q=0}^{n} w_q \int_0^L \frac{dp_i}{dx}(x)\, \frac{dp_q}{dx}(x)\, dx = \int_0^L f(t,x)\, p_i(x)\, dx
\]

Rewriting in matrix notation yields

M ẇN (t) = −N(wN (t)) − κ K wN (t) + b (3.24)

Arranging for ẇN (t) results in a system of first order ODE’s:

ẇN (t) = −M −1 N(wN (t)) − κM −1 KwN (t) + M −1 b (3.25)

ẇN (t) = N(wN (t)) + AwN (t) + Mb

where w_N ∈ R^{(n+1)}, M, K : R^{(n+1)} → R^{(n+1)}, and N(w_N) ∈ R^{(n+1)} are defined as
\[
M_{i,j} = \int_0^L p_i(x)\, p_j(x)\, dx \tag{3.26a}
\]
\[
K_{i,j} = \int_0^L p_i'(x)\, p_j'(x)\, dx \tag{3.26b}
\]
\[
N(w_N)_i = \sum_{k=0}^{n} \sum_{l=0}^{n} w_k w_l \int_0^L p_l'\, p_k\, p_i\, dx \tag{3.26c}
\]
\[
b_i = \int_0^L f(t,x)\, p_i(x)\, dx \tag{3.26d}
\]

Linear basis functions: Equation (3.26) holds for any choice of basis functions fulfilling the discussed prerequisites (and for any way of discretizing the spatial domain). For the purposes of this work, linear basis functions on an equidistant spatial grid will be sufficient. This is only partially due to the desire for simplicity; furthermore, this choice results from the properties of the FEM:

the art of placing the nodes is based upon the knowledge of where the discretized problem is well-
suited and where it needs further refinement. This information can be achieved through physical
investigation, or by testing different settings. A close look at Burgers’ equation suggests that areas

around very steep descent have to be regarded carefully, due to the tendency of the equation to
create shocks: one should refine the grid in those areas.

The control problem in chapter 4 will be the attenuation of an initial wave or disturbance,
respectively. Since this wave travels across the complete domain, every region requires the same
attention. Therefore, only an adaptive algorithm would be reasonable; however, a state space
representation cannot be achieved in that way. Secondly, higher order basis functions tend to
‘round off’ and might converge faster to the exact solution, but this will occur only in regions
where the solution is smooth. Again, the shock nature requires the capability to incorporate steep
discontinuities, and constitutes the challenging area. The behavior of any basis function in the
neighbourhood of a shock is very hard to predict, and therefore, reliable conclusions on the preference
of a particular method are very difficult to generate. As will be shown later, linear basis functions also converge quite fast and show small errors in smooth areas, so the use of equally distributed linear basis functions appears absolutely sufficient. Thus, the spatial domain is discretized:

Figure 3.6: Basis Functions (equidistant nodes x₀ = 0, x₁, ..., xₙ = L with hat functions p₀, ..., pₙ)
\[
x_0 = 0, \qquad x_j = x_0 + h \cdot j, \qquad h = \frac{L}{n}, \qquad x_n = L
\]

The linear basis functions according to figure 3.6 are defined as:
\[
p_0(x) = \begin{cases} \dfrac{x_1 - x}{h} & 0 \le x \le x_1 \\[4pt] 0 & x_1 < x \le L \end{cases}
\qquad
p_j(x) = \begin{cases} 0 & 0 \le x < x_{j-1} \\[2pt] \dfrac{x - x_{j-1}}{h} & x_{j-1} \le x < x_j \\[4pt] \dfrac{x_{j+1} - x}{h} & x_j \le x \le x_{j+1} \\[2pt] 0 & x_{j+1} < x \le L \end{cases}
\qquad
p_n(x) = \begin{cases} 0 & 0 \le x < x_{n-1} \\[2pt] \dfrac{x - x_{n-1}}{h} & x_{n-1} \le x \le L \end{cases}
\]

Figure 3.7: Integral Constellations (element integrals of the linear basis functions, evaluating to h/3, h/6, and 2h/3)

At this point it has to be noted that the above choice of basis functions does not fulfill the prerequisites; in particular, p₀, pₙ ∉ H̃¹_n, since p₀ and pₙ do not obey the boundary conditions. So they cannot be used for constructing a basis of the finite-element space. However, this problem can be solved by introducing p₀ and pₙ as one combined basis function with
\[
(p_0, p_n)(x) = \begin{cases} p_0(x) & 0 \le x \le x_1 \\ p_n(x) & x_{n-1} \le x \le L \\ 0 & \text{otherwise} \end{cases}
\]

Thus, the problem is reduced by one degree of freedom, as is to be expected with (Dirichlet) periodic
boundary conditions. The same discretization could be achieved by ignoring the above fact, stating
the corresponding matrix system, replacing w0 with wn and adding the first to the last row of the

system. According to figure 3.7, the integral evaluations reduce to a few distinguishable cases; the
following matrices and vector, respectively, result:
 2 1
0 . . . 61

3h 6h 0
 1h 2h 1h 0 . . . 0 
 6 3 6
 0 1h 2h 1h . . . 0 

6 3 6
M =  . (3.27a)
 
.. .. .. .. .. 
 .. . . . . . 
 
 0 . . . 0 1h 2h 1h 
6 3 6
1
6 ... 0 0 16 h 23 h
 2
− h1 . . . − h1

h 0 0
 −1 2
− h1 0 ... 0 
 h h
 0 −1 2 1

h h −h . . . 0 
K =  . (3.27b)
 
.. .. .. .. .. 
 .. . . . . . 
 1 2

 0 ... 0 −h h − h1 
− h1 . . . 0 0 − h1 2
h
− 16 wn2 − 61 wn w0 + 0 + 61 w1 w0 + 16 w12
 
 .. 
 . 
N(wN ) = 
 1 2 1 1 1 2

− −
 6 i−1 6 i−1 i 0 + 6 wi+1 wi + 6 wi+1
w w w + 
 (3.27c)
 .. 
 . 
− 16 wn−1
2
− 16 wn−1 wn + 0 + 16 w0 wn + 16 w02
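A minimal sketch of how these periodic linear-element matrices and the nonlinear vector could be assembled and integrated in time is given below. It is written in Python/NumPy/SciPy rather than the Matlab implementation used for the simulations in this work; the grid size, viscosity, and time span are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def burgers_fem(n=80, L=1.0, kappa=0.01):
    """Assemble the periodic linear-FE system of eqs. (3.25)/(3.27) and return its RHS."""
    h = L / n
    idx = np.arange(n)                       # periodic nodes x_0 ... x_{n-1} (x_n == x_0)

    # Mass matrix M and stiffness matrix K, eq. (3.27a/b), with periodic wrap-around
    M = np.zeros((n, n)); K = np.zeros((n, n))
    for i in idx:
        ip, im = (i + 1) % n, (i - 1) % n
        M[i, i] = 2.0 * h / 3.0; M[i, ip] = M[i, im] = h / 6.0
        K[i, i] = 2.0 / h;       K[i, ip] = K[i, im] = -1.0 / h
    Minv = np.linalg.inv(M)

    def Nvec(w):
        """Nonlinear convection vector, eq. (3.27c)."""
        wp, wm = np.roll(w, -1), np.roll(w, 1)
        return (-wm**2 - wm * w + wp * w + wp**2) / 6.0

    def rhs(t, w, f=None):
        b = np.zeros(n) if f is None else f(t)
        return Minv @ (-Nvec(w) - kappa * (K @ w) + b)

    return rhs, idx * h

rhs, x = burgers_fem()
w0 = np.sin(2.0 * np.pi * x)                 # initial sine wave
sol = solve_ivp(rhs, (0.0, 1.0), w0, t_eval=np.linspace(0.0, 1.0, 101))
print(sol.y[:5, -1])                         # open-loop FE solution at t = 1 s
```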

Evaluation: In order to verify the proposed finite element approximation, it is tested in comparison with the analytical solution. Therefore, the error
\[
e = \frac{1}{N_t N_x} \sum_{k=1}^{N_t} \sum_{n=1}^{N_x} \left| w_{N_x}(t_k, x_n) - w(t_k, x_n) \right|
\]

is computed for different quantities of nodes, where Nt and Nx represent the number of gridpoints
in time and space, respectively; wNx (tk , xn ) denotes the FE solution of order Nx and w(tk , xn ) the
exact solution evaluated at the corresponding points. The expected decrease of the error for refined

grids is shown in figure 3.8(a). Also, finite element approximations of order Nx are compared to the approximations resulting from twice the number of nodes; hence, the error
\[
e = \frac{1}{N_t N_x} \sum_{k=1}^{N_t} \sum_{n=1}^{N_x} \left| w_{N_x}(t_k, x_n) - w_{2N_x}(t_k, x_n) \right|
\]

is regarded in figure 3.8(b). From both diagrams it can be assessed that the proposed finite element
approximation converges even for (reasonably) small κ. The error between refined grids exponentially
decreases; additionally, the limiting function is the analytical solution at least for large κ. Both

evaluations have been executed on the spatial domain [0, 1]. The comparison to the analytical solution used an ending time Tend of 1 second with 101 discrete time-steps, while figure 3.8(b) is based on an ending time Tend of 2 seconds with 201 discrete time-steps. Furthermore, smaller values of κ lead - not surprisingly - to larger errors in general, since the nonlinear part of the equation prevails over the diffusion term; hence, the solution is 'less smooth,' leading to numerical difficulties.27 To

(a) Averaged Error per Gridpoint between FEM and Analytical Solution (κ = 0.5, 0.1)   (b) Averaged Error per Gridpoint between w_N and w_2N (κ = 0.01, 0.002, 0.001); both plotted over the number of FE nodes N

Figure 3.8: FE Evaluation for Different Numbers of Nodes

provide an impression of the behaviour of the finite element approximation, figure 3.9 shows open-
loop simulations for different values of κ (0.01, 0.002 and 0.001) for the Galerkin solution of order
N = 81 (number of gridpoints) on the spatial domain [0, 1] and between 0 and 1 second. Both a
three-dimensional plot and a two-dimensional diagram of several discrete snapshots are presented. For comparative reasons, these values have been chosen to comply with the ones used for the Galerkin approximation in [51]. The outcome of these simulations is that the area around the 'shock'28 creates difficulties for the Galerkin approach. Those tendencies become more pronounced the more weight the nonlinear term gains. The numerical instabilities (oscillations and overshooting similar to the Gibbs phenomenon) also depend on the number of gridpoints (or, to be more precise, on the spatial discretization interval h). The smaller the value of h, the better the approximation
the spatial discretization interval h). The smaller the value of hx , the better the approximation
can handle small values of κ and therefore the shock tendency. But this finding can be considered
trivial since a finer grid leads to a higher order of the resulting ODE system, while its convergence

has been discussed before. For the following control problems, one should only keep in mind that
there are many factors which can be manipulated: a change of the spatial domain to L = 2π, due
to simplicity, alters h and hence the stability of the Galerkin approximation, for example. Such a
27 Note that the solution is less damped for smaller κ, so that the same behaviour is expected for the error.
28 Although it has been stated that there is no shock created as long as the diffusion term is present, the dissipation
area in which the slope is steepened over time (due to the nonlinear term in Burgers’ equation) shall - non-physically
- be referred to as the ‘shock’ area.

variation also affects the time of the shock initiation, as can be seen from eq. (3.19). In other words,
the problem parameters should be chosen in such a way that the effects to be addressed indeed
appear. Regarding only the average error on the space and time domain might be misleading.

(a) κ = 0.01, N = 81   (b) κ = 0.01, N = 81   (c) κ = 0.002, N = 81   (d) κ = 0.002   (e) κ = 0.001, N = 81   (f) κ = 0.001

Figure 3.9: Comparison of FE Solutions of Burgers' Equation



Additionally, figure 3.10 shows the error between a low-order model of Nx = 41 and a high-order model of Nx = 401, distributed in space and time. A similar configuration will be used to address the reduced-order model control problem in chapter 4. Clearly, figure 3.10 identifies the areas around a 'shock' as being problematic, while the low-order solution provides very good results in smooth areas.

Figure 3.10: Error Distribution in Space and Time


Chapter 4

Robust Nonlinear Control

4.1 Control Problem Statement

For the remainder of this work, the benchmark problem, derived and defined in the previous section
3.1.2, has to be reformulated in terms of a realistic control statement. Therefore, additional con-
straints on measurement dynamics, measurement availability, time-delays, disturbance, and noise
will have to be made. In order to comply with common notation, the functions wN (t) of eq. (3.25)
(the time-dependent part of the finite element approximation) are replaced by the functions x(t),
the conventional notation used for state-space systems. The forcing term (or control input, respec-
tively) is denoted by u(t); the output is denoted by y(t). Figure 4.1 illustrates the total control
loop utilized for further considerations and simulations; the optional model-error compensation is
already included.

The exact equations governing the plant ought to remain unknown, as even the full Navier-Stokes equations are only a theoretical model and do not embrace the complete physical truth. For simulation purposes, the use of a general analytical solution of Burgers' equation would
be ideal; but since a closed-form solution depends on the a priori known analytical function of the

forcing term, it is not feasible for the control problem. Section 3.3 gives reason for the assumption
that the finite-element-approximation converges to the exact solution for large N . Hence, a high-
definition mesh (N = 101, as seen later in chapter 5) is applied for implementing the plant in

Matlab, while the nominal (model) equations are based on a mesh of N = 21 for the reduced
model-order approach. Thus, some model error is introduced into the system. The state-space


Figure 4.1: General Closed-Loop Setting (the plant/truth model is the high-resolution FEM driven through Bp, with disturbance d(t) and measurement noise v(t); sampled outputs ỹk(t) feed the estimator, whose state estimate x̂(t) drives the linear/nonlinear controller and the optional model-error prediction û(t − τ))

representation of a general nonlinear system is given by

ẋ(t) = f (x(t)) + g(x(t))u(t) (4.1a)

y(t) = h(x) (4.1b)

Thereby, the order of the ODE system is n (x ∈ Rn and f : Rn → Rn ), the number of inputs is
given by l (g : Rl → Rn ) and the number of outputs is m (y ∈ Rm and h : Rn → Rm ). This generic
formulation of a multi-input, multi-output system is referred to when different control and filter
approaches are derived in general. Again, the plant is treated as consisting of an unknown vector
function of the states combined with a constant control coefficient Bp and an unknown disturbance
d(t) when theoretical considerations are performed. The output function reduces to the one-to-one
reproduction of certain states (representing sensor locations):

ẋ(t) = f (x(t)) + Bp u(t) + d(t) (4.2a)

y(t) = Cp x(t) (4.2b)

The nominal (model) equations for the estimated states and the estimated outputs become (based

on the finite element approximation)

\[
\dot{\hat{x}}(t) = -M^{-1} N(\hat{x}(t)) - \kappa M^{-1} K \hat{x}(t) + M^{-1} b(u(t)) \tag{4.3a}
\]
\[
\hat{y}(t) = C_m\, \hat{x}(t) \tag{4.3b}
\]

The control loop - and hence the estimator - are regarded as a continuous-time system, but the
measurements are sampled and only available (and updated) at certain intervals. Furthermore,
white Gaussian noise is added. It should be noted that it does not matter if the noise is added
before - as shown in figure 4.1 - or after the sampler. As the remaining system is continuous in time,
a mathematically correct representation of the sampled (and hold) measurements is given by
\[
\tilde{y}_k(t) = \int_{-\infty}^{+\infty} \left( y(\xi) + v(\xi) \right) \cdot \delta\!\left( \xi - \left\lfloor \frac{t}{\Delta t} \right\rfloor \Delta t \right) d\xi
\]
\[
E\{v(t)\} = 0, \qquad E\{v(t)\, v^T(t-\xi)\} = V(t)\, \delta(t - \xi)
\]

For the sake of simplicity, the above expressions will be abbreviated and formulated in a discrete
version:

\[
\tilde{y}_k = y_k + v_k \tag{4.4a}
\]
\[
E\{v_k\} = 0 \tag{4.4b}
\]
\[
E\{v_i\, v_j^T\} = V_k\, \delta_{ij} \tag{4.4c}
\]
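A small sketch of how such sampled, noisy measurements could be generated in simulation is shown below; the sampling interval, noise covariance, and sensor selection matrix are assumptions made only for illustration (the actual values used in this work are defined later).

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete, noisy measurements of eq. (4.4): y_k = C_p x(t_k), y~_k = y_k + v_k,
# with E{v_k} = 0 and E{v_i v_j^T} = V_k * delta_ij.
dt_sample = 0.01                       # sampling interval Delta t (assumption)
V = (0.05**2) * np.eye(3)              # measurement noise covariance V_k (assumption)

def sample_measurement(x, Cp):
    """Return one noisy measurement y~_k for the current state vector x."""
    y = Cp @ x
    v = rng.multivariate_normal(np.zeros(len(y)), V)
    return y + v

# Example: three sensors picking out individual states of a 21-state model
Cp = np.zeros((3, 21)); Cp[0, 3] = Cp[1, 10] = Cp[2, 17] = 1.0
x_now = np.sin(2 * np.pi * np.arange(21) / 21)
print(sample_measurement(x_now, Cp))
```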

Some remarks: In chapter 1 airfoil flow was one of the possible applications of laminar flow
control. Hence, the underlying control idea could be the design of an area with pinholes which inject
or suck flow from either a centralized control valve or through independently controlled injectors.

The first version corresponds to a (spatially distributed) single-input system, while the second one
obviously constitutes a multi-input system. But a pinhole area forms a discrete control setup, so
how can this be realized in mathematical terms? In control engineering, Delta functions (Dirac or
Kronecker) are often used in such cases. But the reader should be warned against careless use of
Dirac distributions! This is even more important because the forcing term’s integral of eq. (3.26)

allows it to be formally evaluated for Dirac distributions:1
\[
b_i = \int_0^L f(t,x)\, p_i(x)\, dx = \int_0^L \Bigl[ \sum_{j=1}^{N_C} \delta(x - j \cdot h_c)\, u_j(t) \Bigr] p_i(x)\, dx
= \sum_{j=1}^{N_C} u_j(t) \int_0^L \delta(x - j \cdot h_c)\, p_i(x)\, dx
= \begin{cases} u_j(t) & \text{for } i = j\, \frac{h_c}{h} \\ 0 & \text{otherwise} \end{cases}
\]

where Nc denotes the number of control inputs or actuator locations, while hc is the associated
spatial distance. It has been assumed without loss of generality that the actuator locations coincide

with the FE gridpoints. But this operation is not allowed at all in the context of the presented finite element approximation. Formally, this can be substantiated by looking at the FE problem
formulation in eq. (3.21): the used functional space for the basis- and testfunctions (and hence, for
the solution) is the Sobolev Space, here H̃ 1 . Thus, it is required that any participating function,
including the forcing function, lies in the same functional space. The Dirac delta (even being a
distribution and not a function) does not. In the FE formulation, every function is mapped on
the testfunctions. Although they can be chosen freely, they have to obey the requirements stated
in (3.23), i.e., being elements of H̃n1 . Clearly, a function (or distribution) cannot be mapped onto
basis functions which are not in the same functional space. There is also a more illustrative way
for the previous discussion: let the forcing integral be formally evaluated as shown above. Then,
the multiplying matrix would be identity for a fully actuated system and would not incorporate the
gridpoint interval. But the finite element method should always assign a weight to every participating
expression based on that interval. This property is lost due to the sampling characteristic of the
Dirac’s integral. Hence, when the forcing term is multiplied with the inverse of the mass matrix M
1
(as shown in eq. (3.25)), h appears as a factor and is not canceled by the forcing term’s matrix. This
results in a divergence of the whole FE system for N → ∞ since the control gain goes to infinity.

Since the basis functions cover the whole domain, the used finite element method assumes
the control (forcing term) to be distributed, even if discrete pulses are applied in the originating

PDE. Hence, for the purposes of this work, a distributed control using stepfunctions in space is
applied from the start:
\[
f(x,t) = \sum_{j=1}^{N_C} u_j(t) \cdot u_{\square}(x - j \cdot h_c)
\qquad \text{where} \quad
u_{\square}(x) = \begin{cases} 1 & \text{for } -\frac{h_c}{2} < x \le \frac{h_c}{2} \\ 0 & \text{otherwise} \end{cases}
\]

1 Obviously, the use of a Kronecker delta function makes no sense since the integral neglects perturbations at

singular points.

Thereby, the number of control inputs is still limited, but they are equally distributed on the
corresponding subdomain or interval. Figure 4.2 displays an example of a possible control input at
a particular time and shows how it is mapped on the basis for both the fully-actuated model and
the under-actuated plant.

(a) Low Order Basis Functions   (b) High Order Basis Functions

Figure 4.2: Distributed Control and Basis Functions
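The mapping of such step-function actuators onto the FE forcing vector (eq. (3.26d)) can be sketched as follows. The Python code below is illustrative only: the grid size, number of actuators, actuator width, and the simple Riemann-sum quadrature are assumptions, not the configuration used for the simulations in this work.

```python
import numpy as np

# Input matrix B mapping step-function commands u_j(t) onto the FE forcing vector b
# of eq. (3.26d): b = B u, with B_ij = integral of the j-th boxcar times p_i(x).
def input_matrix(n=21, L=1.0, n_act=5):
    h = L / n                                  # FE element length
    hc = L / n_act                             # actuator spacing / boxcar width
    x = np.linspace(0.0, L, 2001)              # fine quadrature grid
    dx = x[1] - x[0]

    def hat(i, xv):
        """Periodic linear basis function p_i centred at node i*h."""
        d = np.minimum(np.abs(xv - i * h), L - np.abs(xv - i * h))
        return np.clip(1.0 - d / h, 0.0, None)

    def boxcar(j, xv):
        """Unit step input centred at the j-th actuator location."""
        d = np.minimum(np.abs(xv - j * hc), L - np.abs(xv - j * hc))
        return (d <= hc / 2.0).astype(float)

    B = np.zeros((n, n_act))
    for i in range(n):
        for j in range(n_act):
            B[i, j] = np.sum(hat(i, x) * boxcar(j, x)) * dx
    return B

B = input_matrix()
print(B.sum(axis=0))   # each column integrates to roughly hc (the boxcar area)
```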

The system’s outputs are the exact reproductions of certain states, i.e., the solution of
Burgers’ equation at specific sensor locations. Cp and Cm have been chosen so that the spatial
points coincide with the model ones.2 The viscosity coefficient κ is varied between 0.01, 0.002,
2 E.g.:
\[
B_p = \begin{pmatrix}
1 & 1 & 1 & .5 & 0 & 0 & \cdots & .5 \\
0 & 0 & 0 & .5 & 1 & 1 & \cdots & 0 \\
\vdots & & & & \ddots & & & \vdots \\
.5 & 0 & 0 & 0 & 0 & \cdots & 1 & 1
\end{pmatrix}^{T}
\qquad \text{and} \qquad
C_p = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & \cdots & 0 \\
0 & 0 & 0 & 1 & 0 & \cdots & 0 \\
\vdots & & & & \ddots & & \vdots \\
0 & 0 & 0 & 0 & 0 & \cdots & 1
\end{pmatrix}
\]

and 0.001 in order to provide nonlinear effects for the range and speed of the motions considered.
Thereby, the objective is asymptotic stabilization or regulation around an equilibrium point (contrary
to tracking); or, in technical terms: to find u(t) so that x → 0 for t → ∞ from anywhere in Ω.
Since a system’s response to a certain command does not reflect its response to another command

in a nonlinear world, the accuracy and speed of responses has to be evaluated for ‘typical’ motions
of the system in the region of operation (mostly via computer simulation). Therefore, as already
stated in the problem classification in section 3.1.2 and further processed in the following section

3.2, the control goal is to drive an initial disturbance (here a sinusoidal half-wave) to an equilibrium.
Robustness to disturbances, measurement noise, and unmodeled dynamics are revealed as key issues.
Especially unmodeled dynamics appear due to model reduction (limiting the amount of computation
and reducing the FEM grid) and additive process disturbances. Additionally, the number of sensors
and actuators is limited. The system is treated as being continuous in time since nonlinear systems
(in particular when resulting from PDE’s) are continuous in nature. The analysis and design of
linear systems can be performed continuously in time when high sampling rates are applied. But
this property does not hold in the nonlinear case. Since the measurements are available at a sample
interval, a continuous-discrete model results. Additional measurement dynamics are neglected.

Only discrete and noisy measurements are possible, therefore necessitating the presence
of an observer. The control approach in the following sections separates the controller from the
estimator problem. The problem-specific controllers are designed assuming full state-knowledge
disregarding any output function or noise (i.e., deterministic controller). At the same time, the filter
(or estimator) is developed, providing an, in a certain sense, optimal estimation of the states based on
measurements, the nominal model, and noise assumptions. It shall be noted that a general separation

principle3 does not hold, in general, for nonlinear systems as it does for LTI systems. For example,
stability cannot be concluded by combining a separately stable observer with a stable feedback law.
However, breaking the system into two separate parts has shown to be a reasonable approach to
stochastic nonlinear control problems; furthermore, this is the only way to perform a necessary

facilitation for conventional control. Therefore, the following sections derive, first, a deterministic
(and optimal) controller that assumes full and perfect state-knowledge before an ‘optimal’ filter
design is performed which provides the necessary state estimates.
3 The problem of designing an optimal feedback controller for a linear stochastic system can be solved by designing

an optimal observer for the state of the system, which feeds into an optimal deterministic controller for the system.
This is also known as the certainty equivalence principle and detailed in [57].

4.2 Analysis

At first, the dynamic system shall be analyzed in control terms. Since f(x(t)) = −M⁻¹N(x(t)) − κM⁻¹Kx(t) does not explicitly depend on time, the ODE system in eq. (3.25) resulting from the
benchmark problem in eq. (3.9) is called an autonomous system of N th order. It shall be noted that
autonomous systems are always an idealized notion because physical systems are non-autonomous

by nature; slight variations in time of the viscosity κ, for example, would represent non-autonomous
behavior. Also, the use of certain controllers - with time-depending control laws - changes the
system’s characterization, so that even LTI systems become non-autonomous when combined with
adaptive control. If, in the following, a particular value of the state-vector x(t) is considered, it shall
be referred to as a ‘point.’ An important property to consider is the existence of, and the behavior
around, equilibrium points. Hence, a formal definition of such a point is necessary.

Definition (Equilibrium). A state x∗ is an equilibrium state (or equilibrium point) of the system
if once x(t) is equal to x∗ , it remains equal to x∗ for all future time.

While a linear time-invariant system has a single equilibrium point (the origin) if its system
matrix A is nonsingular,4 the steady-state solution of Burgers’ equation reveals every constant dis-
tribution to be an equilibrium. The finite-element approximation should (and indeed does) resemble
that behavior. As it has been mentioned before in section 3.2.3, every open-loop equilibrium point
corresponds to a certain initial distribution containing the same energy or impulse, respectively. As
also stated earlier, the control objective is the attenuation of disturbances to an equilibrium, which
in this application is zero. Since theoretical considerations are performed, at most, with respect to
the origin as the equilibrium point, 0 = f (x∗ ), it shall be mentioned that by state substitution this
should occur without loss of generality. Before difficult nonlinear analysis is applied, the linearized
system is studied using familiar techniques from linear algebra.

4.2.1 Linear Analysis: Stability

The justification for using linear techniques when analyzing nonlinear systems comes from Lya-
punov’s linearization method: it is a matter of consistency that for very small perturbations (or for

very small neighborhoods Ω of an equilibrium point), linearizations have to hold by continuity.5 The
4 Otherwise, it has an infinity of equilibrium points.
5 For further detail, see [8].

Taylor series expansion of the system in eq. (3.25) at a point x∗ with respect to x is given by
 
\[
\dot{x}(t) = -M^{-1}\left[ \left.\frac{\partial N(x(t))}{\partial x(t)}\right|_{x^*} + \kappa K \right] x(t) + B_m u(t) + \text{h.o.t.} \tag{4.5}
\]

where f(x^*) = 0, and h.o.t. denotes higher-order terms of the nonlinear part N. The Jacobian \(\left.\frac{\partial N(x(t))}{\partial x(t)}\right|_{x^*}\) is given by
\[
\frac{1}{6}\begin{pmatrix}
-x_n^* + x_2^* & x_1^* + 2x_2^* & 0 & \cdots & 0 & -2x_n^* - x_1^* \\
-2x_1^* - x_2^* & -x_1^* + x_3^* & x_2^* + 2x_3^* & \cdots & 0 & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & -2x_{n-2}^* - x_{n-1}^* & -x_{n-2}^* + x_n^* & x_{n-1}^* + 2x_n^* \\
x_n^* + 2x_1^* & 0 & \cdots & 0 & -2x_{n-1}^* - x_n^* & -x_{n-1}^* + x_1^*
\end{pmatrix}
\]

For x^* being a constant equilibrium, this matrix may be further simplified. The constant can be factored out, allowing one to draw conclusions on any constant equilibrium. Note that, at the origin, the Jacobian matrix vanishes, allowing the first-order Taylor series expansion to equal the 'linear' part in the original state-space system: −κM⁻¹Kx(t). The linearization (ẋ = Ax) enables local stability analysis by theorem (3.1) in [8]: if the linearized system is strictly stable, the nonlinear system is (locally) asymptotically stable; if the linearized system is unstable, the same is (locally) true for the nonlinear system.6 Marginal stability of the linearized system does not allow conclusions to be drawn on the nonlinear system, since the neglected high-order terms may play a decisive role. Unfortunately, the latter applies to the system considered, as figure 4.3(a) reveals a pole at the origin in a typical pole constellation (x^* = 1/π in this example). This is not surprising, since the system demonstrates a conserving nature due to the periodic boundary conditions: the non-tridiagonal entries in the matrices reflect these boundary conditions, so if they are set to zero (corresponding to Dirichlet boundary conditions) the linearized system indeed loses its pole at zero, as shown in figure 4.3(b).

Another interesting insight into the system can be achieved by factoring out the finite element’s
order:7
\[
\dot{x}(t) = -\frac{1}{h}(M^*)^{-1} N^*(x(t)) - \kappa \frac{6}{h^2} (M^*)^{-1} K^* x(t) + B_p u(t) \tag{4.6}
\]
Equation (4.6) shows the tendency of the FE approximation to put more weight on the linear part for
higher N . Again, the system is linearized, and a computation of the eigenvalues for both parts of the
system (the linearized and the truly linear) separately reveals that (at least for small neighborhoods
in which the linearization holds) −κ(M ∗ )−1 K ∗ contributes with pure real eigenvalues (poles), and
6 An exact definition of stability will be given in the Nonlinear Analysis section 4.2.2.
7 Equation (4.6) obviously holds only for the applied linear basis functions and not for eq. (3.25) in general. The matrices M^*, K^* and the vector N^*(x(t)) arise by merely factoring out h/6, 1/h, and 1/6, respectively.


−(M^*)^{-1} \(\left.\frac{\partial N^*(x(t))}{\partial x(t)}\right|_{x^*}\) with pure imaginary poles (as shown in figure 4.4). Once more, the pole at zero is due to the stringent coupling of both ends of the domain. But, besides the coarse initial view,
(a) System with Periodic Boundary Conditions   (b) System with Free Boundaries (Neumann Conditions)

Figure 4.3: Pole-Constellation for the Linearized System; x^* = 1/π, N = 21, κ = 0.002

(a) Contribution of −κM⁻¹K^*   (b) Contribution of the Linearized −M⁻¹N(x(t))   (markers: x single pole, o double pole)

Figure 4.4: Pole-Constellation for the Linearized System; x^* = 1/π, N = 21, κ = 0.002

considering the originating PDE problem (Burgers’ equation) and its analytical open-loop solution
from section 3.2 allows for more thorough analysis, as shown in the following nonlinear section.
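The pole constellations discussed above can be reproduced numerically. The following Python sketch (model order, viscosity, and equilibrium value are assumptions matching the figures; the simulations in this work use Matlab) assembles the linearized system matrix and computes its eigenvalues:

```python
import numpy as np

# Eigenvalues of the FE system linearized at a constant equilibrium x* = const.
n, L, kappa, x_star = 21, 1.0, 0.002, 1.0 / np.pi
h = L / n
idx = np.arange(n)
ip, im = (idx + 1) % n, (idx - 1) % n         # periodic neighbours

# Mass and stiffness matrices, eq. (3.27a/b)
M = np.zeros((n, n)); K = np.zeros((n, n))
M[idx, idx] = 2 * h / 3; M[idx, ip] = M[idx, im] = h / 6
K[idx, idx] = 2 / h;     K[idx, ip] = K[idx, im] = -1 / h

# Jacobian of N(w) at a constant state (see eq. (4.5)): the diagonal vanishes,
# the sub-/super-diagonals become -(3/6)x* and +(3/6)x*, with periodic wrap-around.
J = np.zeros((n, n))
J[idx, im] = -3 * x_star / 6
J[idx, ip] = 3 * x_star / 6

A_lin = np.linalg.solve(M, -(J + kappa * K))   # linearized system matrix
poles = np.linalg.eigvals(A_lin)
print(poles.real.max(), poles.real.min())      # one pole at ~0, the rest stable
```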

4.2.2 Nonlinear Analysis


4.2.2.1 Stability

Before stability issues of the benchmark problem are addressed, it should be noted that nonlinear
analysis differs considerably from linear systems. All theorems and observations are principally

applicable only locally and can only be extended for special cases. Also, the center of interest can be
the stability of a motion instead of an equilibrium point, so that the so-called nominal motion may
be considered.8 However, stabilization is - as already stated - the issue of the benchmark problem;

hence, global propositions must be considered. The three different types, or concepts, of stability for
nonlinear systems have to be reviewed first: let B_R be a spherical region in the state-space, limited by ‖x‖ < R, and let S_R be the sphere itself, meaning ‖x‖ = R. Then, stability in the sense of Lyapunov
is defined by

Definition (Lyapunov stability). The equilibrium state x = 0 is said to be stable if, for any R > 0, there exists r > 0 such that if ‖x(0)‖ < r, then ‖x(t)‖ < R for all t ≥ 0. Otherwise, the equilibrium point is unstable.

Colorfully spoken: instability of a nonlinear system means 'blowing up.' The corresponding concept to the marginal stability of linear systems is given by

Definition (Asymptotic stability). An equilibrium point 0 is asymptotically stable if it is stable and if, in addition, there exists some r > 0 such that ‖x(0)‖ < r implies that x(t) → 0 as t → ∞.

The strongest concept, and hence the analogue of the asymptotic stability of linear systems, is

Definition (Exponential stability). An equilibrium point 0 is exponentially stable if there exist two strictly positive numbers α and λ such that ∀ t > 0, ‖x(t)‖ ≤ α‖x(0)‖ e^{−λt} in some ball B_R around the origin.

The above definitions are given in their local form; if they hold for any initial states, global
stability is implied. The immediate application of these concepts does not reveal itself easily. But
the so-called direct method of Lyapunov makes them applicable to dynamic systems: this method is

motivated by the physical idea, which assumes that if one can show the strictly monotonic decrease
8 Nominal motion: ė = f (x∗ + e, t) − f (x∗ , t) = f (e, t).

of the energy, the system must be stable. This concept has been expanded to a general ‘energy-
like’ scalar Lyapunov function V (x). One has to first choose a candidate Lyapunov function and
then determine its variation in time (i.e., its derivative). For a candidate function V (x) to be a
Lyapunov function in a ball BR0 , V (x) has to be scalar, continuous, uniquely defined, and positive

definite9 with continuous partial derivatives.10 Its time-derivative along any state trajectory11 has
to be negative semi-definite.12 If such a Lyapunov function exists in a ball BR0 around 0, then
this equilibrium point is stable. If V̇ (x) is even negative definite, then the stability is asymptotic.13

The conclusion on stability can be extended to a global proposition if additionally V (x) → ∞ for
kxk → ∞.

The difficulty in proving stability via Lyapunov's direct method lies in finding an adequate function that covers a larger area of attraction and allows for conclusions on general movements. A conventional first trial uses quadratic functions in order to provide the necessary positive definiteness. For a state-space model, V(x(t)) = ½ x^T(t) x(t) would be such a function. But proving that the time-derivative V̇(x) = x^T ẋ is negative (semi-)definite for any x(t) can be a tedious task. A nonlinear system's stability analysis is in general limited to certain initial conditions and along
system trajectories; but for Burgers’ equation it has already been shown that every constant is
an equilibrium. Thus, every initial condition should finally converge to its correspondent constant
steady-state solution. Hence, it would be preferable to verify global stability via the above method.14

Fortunately, the fact that the state-space system is the finite element approximation of a partial differential equation allows us to focus on the originating PDE. Let the expected system
behavior be reviewed: the analytical solution given by eq. (3.13) suggests that the open-loop solution
of an initial disturbance actually decays exponentially in time. It has already been stated that any
initial disturbance converges to a corresponding steady-state solution. Since the used benchmark

problem has conserving properties, each initial distribution corresponds to a certain final constant,
and is linked by the integral ∫₀ᴸ w₀(x) dx = const (the system is not losing impulse or energy). This
9 V(0) = 0; ∀ x ∈ B_{R0}: x ≠ 0 ⇒ V(x) > 0. If this is true for all x, it implies global positive definiteness.
10 V(x) ∈ C¹
11 V̇(x) = (∂V/∂x) ẋ = (∂V/∂x) f(x)
12 V̇(x) ≤ 0
13 A simple proof of this theorem can be found in [8], pages 62-63.
14 One might think that Krasovskii's theorem (page 84 in [8]) could be a way out, requiring F = ∂f/∂x|_{x0} to be negative definite in a neighborhood of the equilibrium. But besides the fact that F already becomes semi-definite for any constant x, the proof of the (negative) definiteness of F requires showing (without loss of generality) that the eigenvalues of the matrix are strictly negative for any given x. Considering the system's dimension and recursive structure, finding an analytic closed-form expression for the eigenvalues may be impossible.

fact makes it particularly difficult to use energy-based Lyapunov techniques. However, without loss of generality, only initial distributions are regarded which fulfill ∫₀ᴸ w₀(x) dx = 0.15 The steady state is then expected to be the origin (w_e(t, x) = 0). Let
\[
V(t) = \frac{1}{2} \int_0^L w^2(t,x)\, dx \tag{4.7}
\]

be a Lyapunov function. Then, taking the time derivative and subsequent substitution of the original
partial differential equation leads to

\[
\dot{V}(t) = \frac{d}{dt}\, \frac{1}{2}\int_0^L w^2(t,x)\, dx = \frac{1}{2}\int_0^L \frac{\partial}{\partial t} w^2(t,x)\, dx
= \int_0^L w(t,x) \cdot w_t(t,x)\, dx
= \int_0^L w \left[ \kappa w_{xx} - w w_x + f \right] dx
= \kappa \int_0^L w\, w_{xx}\, dx - \int_0^L w^2 w_x\, dx + \int_0^L f\, w\, dx
\]

Integration by parts, and applying the boundary conditions to the first term, yields
\[
\dot{V}(t) = -\kappa \int_0^L w_x^2\, dx - \int_0^L w^2 w_x\, dx + \int_0^L f\, w\, dx
\]

The second term can be eliminated by using ∫₀ᴸ w² w_x dx = ∫₀ᴸ (∂/∂x)(⅓ w³) dx = [⅓ w³]₀ᴸ = 0, so that
\[
\dot{V}(t) = -\kappa \int_0^L w_x^2\, dx + \int_0^L f\, w\, dx
\]

This already reveals that the nonlinear part does not contribute to this particular Lyapunov function
(for the given boundary conditions),16 and that the choice of κ = 0 (the shock case) renders this
Lyapunov function dependent only on the forcing term (or control input). Application of the
Poincaré inequality17 to the first term on the right-hand side leads to

\[
\| w(t,x) \|_{L^2(\Omega)} \le C\, \| w_x(t,x) \|_{L^2(\Omega)}
\quad\Rightarrow\quad
\int_0^L w^2\, dx \le C^2 \int_0^L w_x^2(t,x)\, dx
\]
\[
\dot{V}(t) \le -\kappa \left(\frac{\pi}{L}\right)^2 \int_0^L w^2(t,x)\, dx + \int_0^L f(t,x)\, w(t,x)\, dx \tag{4.8}
\]
15 In the case of a transformation of variables, the consistency of the presented stability results (forcing different initial values to obey the integral condition) has not yet been investigated.
16 This could have been expected since the nonlinear convection part does not dissipate energy.
17 Let 1 ≤ p ≤ ∞ and Ω be a bounded open subset of n-dimensional Euclidean space Rⁿ having a Lipschitz boundary. Then there exists a constant C, depending only on Ω and p, such that for every function f in the Sobolev space W^{1,p}(Ω): ‖f − f_Ω‖_{L^p(Ω)} ≤ C‖∇f‖_{L^p(Ω)}, with the average value of f over Ω given by f_Ω = (1/|Ω|) ∫_Ω f(y) dy, where |Ω| is the Lebesgue measure of the domain Ω.

The fact has been used that C, for p = 2 and Ω bounded and convex, can be computed as d/π (d being the diameter of Ω). Here, the above limitation is useful since the correction by the average in Poincaré's inequality vanishes. A different approach, as used by [26] for boundary control, would be the implementation of the Sobolev inequality,18 but in doing so, a boundary term is always incorporated in the inequality. This might be useful for boundary control, but it cannot be applied to distributed control and, especially, periodic boundary conditions. So far, there are three main outcomes of eq. (4.8):

1. For the uncontrolled (open-loop) system, eq. (4.8) leads to V̇(t) ≤ −αV(t), where α = 2κπ²/L², so that V(t) converges exponentially to zero for t → ∞. Since the Lyapunov function has been chosen to be the continuous correspondent to the norm used in the definition of exponential stability,19 global exponential stability of the equilibrium at the origin under the discussed prerequisites has been shown. Furthermore, the rate of the exponential convergence is directly proportional to the viscosity factor κ (a numerical check of this decay is sketched after this list).
2. Equation (4.8) becomes V̇(t) ≤ −αV(t) − ∫₀ᴸ [ξ(x) w(t,x)] w(t,x) dx if a pure feedback control law (in continuous representation) is applied. At least for a strictly positive kernel or functional gain ξ(x), the system is stable and the rate of convergence (to the origin) is higher than in the open-loop case. If the kernel is not strictly positive, closer investigation is required.

3. In a shock situation (κ very close or equal to zero), the closed-loop system can always be
stabilized by requiring the feedback gain to be strictly positive, although the open-loop system
is only asymptotically stable for κ = 0.
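The decay bound of outcome 1 can be checked numerically. The sketch below uses a simple periodic finite-difference discretization as a stand-in for the FE model (an assumption made only for this sanity check; grid size and viscosity are likewise illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Open-loop benchmark with a zero-mean initial distribution: verify that
# V(t) = 1/2 * int w^2 dx decays at least as fast as exp(-alpha t),
# with alpha = 2*kappa*pi^2/L^2 from eq. (4.8).
n, L, kappa = 100, 1.0, 0.1
h = L / n
x = np.arange(n) * h
w0 = np.sin(2 * np.pi * x / L)                 # zero-mean initial disturbance

def rhs(t, w):
    wp, wm = np.roll(w, -1), np.roll(w, 1)
    wx = (wp - wm) / (2 * h)                   # central first derivative
    wxx = (wp - 2 * w + wm) / h**2             # central second derivative
    return -w * wx + kappa * wxx

sol = solve_ivp(rhs, (0.0, 1.0), w0, t_eval=np.linspace(0, 1, 11), rtol=1e-8)
V = 0.5 * np.sum(sol.y**2, axis=0) * h         # Lyapunov function V(t)
alpha = 2 * kappa * np.pi**2 / L**2
print(np.all(V <= V[0] * np.exp(-alpha * sol.t) + 1e-9))   # expected: True
```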

4.2.2.2 An Attempt Towards Controllability

The problem setting employed in simulations has been defined in section 4.1. The model’s input
gain matrix will be identity, while the control is assumed to consist of continuous pulses in space (see
figure 4.2). Therefore, the control has to be expanded to several gridpoints when operated on the
higher-order plant. First, the model is considered: the extension of the concept of controllability to
a nonlinear system is a tedious task, heavily involving differential topology and differential geometry,
18 In short form: ‖f^{(i)}‖₀² = ∫_a^b |f^{(i)}|² dx ≤ Σ_{j=i}^{m−1} |f^{(j)}(a)|² + ‖f^{(m)}‖₀²; for further detail the reader is referred to any advanced textbook on functional analysis.
19 Euclidean norm of the vector space versus L²-norm of functions.

especially Lie algebra. Formally, local accessibility20 of system (4.1) about a point x0 is given if,
and only if, the accessibility distribution

\[
C = \begin{bmatrix} g & \mathrm{ad}_f\, g & \ldots & \mathrm{ad}_f^{\,n-1} g \end{bmatrix}
\]

spans n space, where the ad denotes the Lie bracket (as defined in the appendix). Note that this
expression reduces to the familiar controllability matrix for linear systems. As stated earlier, the
computation of this accessibility distribution is completely impractical for higher-order systems,

whereas ‘higher-order’ includes the dimension of the nominal model used in this work. Fortunately,
the controllability of a nonlinear system of form (4.1) is related to that of its linearization:21 if
the linearization about x0 is controllable, then the nonlinear system is accessible at x0 . As already
experienced with stability, the reverse does not hold. This attempt regards only controllability with
respect to constant equilibria. Thereby, it is divided into two parts, the first addressing x0 being
any constant unequal zero, and the second addressing the controllability of the origin itself:

1. x0 = const ≠ 0: The general eigenvalue constellation has been shown in section 4.2.1 and especially in figure 4.3(a). As long as the equilibrium differs from zero, there are only non-repeated, or distinct, eigenvalues. Hence, the minimum polynomial equals the characteristic equation and, therefore, the matrices A^i for i = 0 ... (n − 1) are row- and column-wise linearly independent. Since the control input matrix is identity, the controllability matrix C_c = [B AB A²B ... A^{n−1}B] spans n-space, and the linearized system is controllable. Consequently, the nonlinear system is locally accessible at x0 = const ≠ 0.

2. x0 = 0: The eigenvalues are not distinct, so it cannot be guaranteed that the matrix powers

are linearly independent. However, this issue can be approached by incorporating the stability
analysis of the previous section and the general definition of controllability:22

Definition (Controllable). The nonlinear system in eq. (4.1) is said to be controllable if, for

any two points x0 , x1 , there exists a time T and an admissible control defined on [0, T ] such
that for x(0) = x0 we have x(T ) = x1 .
20 The concept for nonlinear systems distinguishes between reachability, accessibility, and controllability. Only if
a nonlinear system is locally accessible everywhere and it additionally consists of free dynamics (f (x(t)) = 0), it is
called controllable. For further reading see [8] or [5].
21 Proposition 11.2 in [5].
22 Definition 11.1 in [5].

It has already been shown that the system converges to x0 = 0 for at least a state-feedback
control law with strictly positive functional gains, so the above definition of controllability is
truly fulfilled.

It remains to be seen if the higher order plant is controllable under the assumption of the given

control gain matrix (expanded control impulses in space, section 4.1). Again, the fundamental
behavior of the originating PDE helps: clearly, every steady-state’s constant value depends only on
the impulse (or energy) contained in the initial distribution and the impulse (or energy) drained

or added by the control. As far as the controllability of equilibria is concerned, the system will
finally settle on the one determined by the overall control feed. So, from a practical point of view, it may be justified to argue that even the higher-order model is controllable about any constant equilibrium (under the assumption of distributed control).
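The rank test invoked in item 1 above can also be carried out numerically for the nominal model. The following Python sketch (model order, viscosity, and the constant equilibrium are assumptions) builds the controllability matrix of the linearization and checks its rank:

```python
import numpy as np

# Rank of C_c = [B, AB, ..., A^(n-1)B] for the model linearized about x* = const != 0.
n, L, kappa, x_star = 21, 1.0, 0.002, 1.0 / np.pi
h = L / n
idx = np.arange(n)
ip, im = (idx + 1) % n, (idx - 1) % n

M = np.zeros((n, n)); K = np.zeros((n, n)); J = np.zeros((n, n))
M[idx, idx] = 2 * h / 3; M[idx, ip] = M[idx, im] = h / 6
K[idx, idx] = 2 / h;     K[idx, ip] = K[idx, im] = -1 / h
J[idx, im] = -3 * x_star / 6                 # Jacobian of N(w) at the constant state
J[idx, ip] = 3 * x_star / 6                  # (the diagonal vanishes)

A = np.linalg.solve(M, -(J + kappa * K))     # linearized system matrix
B = np.eye(n)                                # identity input matrix of the model

Cc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
print(np.linalg.matrix_rank(Cc), "of", n)    # full rank -> controllable linearization
```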

4.3 Control Design

At first, this section applies standard techniques to the benchmark problem to create the nominal
controller used for the subsequent combination with the model-error corrector. In doing so, the linear
quadratic regulator has been chosen: it is a state-feedback technique based on the theory of optimal
control, and is applicable to linear (or linearized) systems; the LQR has been selected because of its
widespread use and relatively simple implementation. Once the optimal gain has been computed,
the control input simply feeds back the (observed) states, multiplied by a constant coefficient. Also,
advanced nonlinear control techniques are addressed when feedback linearization is discussed as
nominal control in section 4.3.1.2. The benchmark problem provides an excellent example of the difficulties associated with those methods, and illustrates how they might fail. Furthermore, due to the sampled output and the additive white Gaussian noise, the regulator has to be provided with state estimates. Hence, section 4.3.2 briefly introduces estimator theory and reviews the popular Kalman filter, in both its standard and extended versions.

4.3.1 Nominal Controller


4.3.1.1 Linear Quadratic Regulator

The gain for the state-feedback in closed-loop systems can be determined in different ways: it immediately affects the closed-loop dynamics and can be utilized for pole-placement (via Ackermann’s formula). But the question remains: which pole locations are optimal? The answer requires tremendous experience with the specific system and might work for low-order systems, but direct pole-placement soon reaches its limits when higher-order systems are regarded.23 Fortunately, there are more analytical ways by which to identify an optimal eigenvalue configuration, and hence a feedback gain.

For example, the linear quadratic regulator approach is based on optimal control theory. Optimal control theory is largely based on the calculus of variations. Also other parts of this work, namely the Kalman filter and model-error control synthesis, rely on it. For a review, the reader is referred to [58]; the applied terms and definitions may also be found there.

The linear quadratic regulator is a state-feedback controller for linear systems whose feed-
back gain is determined by minimizing a performance measure that incorporates the squared (and
weighted) states and control inputs. Hence, the general LQR problem unfolds to

Minimize   J(u) = ½ x^T(t_f) H_L x(t_f) + ½ ∫_{t₀}^{t_f} [ x^T(t) Q_L x(t) + u^T(t) R_L u(t) ] dt      (4.9)
subject to   ẋ(t) = A(t)x(t) + B u(t)
over all   u ∈ L²([t₀, t_f); ℝ)

The final time tf is (initially) fixed and HL and QL are symmetric positive semi-definite weighting
matrices (self-adjoint), while RL is real symmetric positive definite. The final state x(tf ) is free
and the states and control are unbounded. The solution via the Hamiltonian formulation (using
costates) for this problem is

ẋ* = ∂H/∂p,     ṗ* = −∂H/∂x,     0 = ∂H/∂u

With H = ½(x^T Q_L x + u^T R_L u) + p^T[Ax + Bu], the control law becomes

u(t) = −R_L^{−1} B^T p*(t)

accompanied by the state and costate equations:


[ ẋ*(t) ]     [ A       −B R_L^{−1} B^T ] [ x*(t) ]
[ ṗ*(t) ]  =  [ −Q_L    −A^T            ] [ p*(t) ]      (4.10)

The additional boundary conditions (final time fixed, final state free) give

p∗ (tf ) = HL x∗ (tf )
23 One possible backdoor for such systems could consist of assigning each eigenvalue, except for two, a large negative real part and using the remaining two poles to determine some desired second-order characteristics.

while the solution to eq. (4.10) using a transition matrix φ(t_f, t) can be written as

[ x*(t_f) ]               [ x*(t) ]
[ p*(t_f) ]  =  φ(t_f, t) [ p*(t) ]

Partitioning the transition matrix, using the above boundary condition, and an appropriate substitution leads to the solution

p*(t) = [ φ₂₂(t_f, t) − H_L φ₁₂(t_f, t) ]^{−1} [ H_L φ₁₁(t_f, t) − φ₂₁(t_f, t) ] x*(t)

The required inverse exists24 for [t₀, t_f], and hence the costates can be expressed by p*(t) = Π(t)x*(t), leading to an overall control law:

u*(t) = −R_L^{−1}(t) B^T(t) Π(t) x(t)

But in order to utilize this linear, though time-varying regulator, one has to determine the feedback
gain; therefore, the transition matrix of eq. (4.10) is needed. This may be achieved by evaluating
an inverse Laplace transform,25 but this procedure might become very extensive. Fortunately, the
matrix Π(t) fulfills the so-called Riccati equation (matrix differential equation) with Π(tf ) = HL :

Π̇(t) = −Π(t)A(t) − A^T(t)Π(t) − Q_L(t) + Π(t)B(t)R_L^{−1}(t)B^T(t)Π(t)      (4.11)

The solution is computed by numerically integrating backward in time (starting at t = tf and


ending at t = 0). Then, the stored Π(t) can be used to determine the gain matrix. However, in
many cases the final time is not fixed, or a permanent regulator with an interval of infinite duration
is needed. Thus, the above approach can no longer be applied; either a time-dependent law is
known a priori, or the optimal feedback gain has to be stationary. This occurs under the following
prerequisites: the plant is completely controllable, HL = 0 (by nature), and A, B, RL , and QL are
constant.26 Experience shows that, in some cases, a fixed control law can even lead to a satisfactory
performance for finite duration processes. The necessary constant gain matrix can be determined
by either integrating backward in time until a steady-state solution is found, or by directly posing
the steady-state condition Π̇(t) = 0 and solving the algebraic equation

0 = −ΠA − A^TΠ − Q_L + ΠBR_L^{−1}B^TΠ      (4.12)

Different numerical methods exist to solve eq. (4.12) for Π; within the scope of this work, however,
the built-in Matlab function lqr is used to identify the optimal gain. Figure 4.5 exhibits the
24 Forfurther reference, see [58], page 209.
25 See[58], page 212 for further detail.
26 According to [58], Kalman has shown that Π(t) → Π for t_f → ∞ under the mentioned conditions.

functional gains (or the gain matrix, respectively) for the linearized benchmark problem. Thereby, the linearization has been performed at the origin, resulting in only the linear (diffusion) part of the original equation. The diagonally dominant structure of the system’s matrices results in a similar, but more washed-out appearance of the functional gains. On the one hand, the strong dependency on the states in the neighborhood of the corresponding control input yields a certain decoupling of the feedback law (giving rise to potential computational benefits); on the other hand, the expansion to several states results in an integrative smoothing property, capable of robustly coping with noisy state estimates (as demonstrated in chapter 5).
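As an illustration of this gain computation, the following Matlab sketch assembles an assumed linear-spline finite element approximation of the diffusion part (the actual matrices of eq. (3.27), including the boundary treatment, may differ) and calls lqr; the weighting matrices are chosen here as identity for simplicity, whereas chapter 5 uses a spatially weighted Q_L.

% Sketch of the nominal LQR gain computation (assumed FE matrices, illustrative weights)
N     = 101;                 % number of spatial gridpoints
L     = 1;  hx = L/(N-1);    % domain length and element size
kappa = 0.001;               % viscosity coefficient
e     = ones(N,1);
M  = hx/6 * full(spdiags([e 4*e e], -1:1, N, N));    % assumed FE mass matrix (linear splines)
K  = 1/hx * full(spdiags([-e 2*e -e], -1:1, N, N));  % assumed FE stiffness matrix
A  = -kappa * (M \ K);       % linearization at the origin: only the diffusion part remains
B  = eye(N);                 % fully actuated input matrix
QL = eye(N);  RL = eye(N);   % illustrative weighting matrices
LL = lqr(A, B, QL, RL);      % steady-state solution of eq. (4.12) and optimal feedback gain
% nominal control law: u = -LL*xhat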

Figure 4.5: Feedback Gains of the Nominal Controller (functional gain plotted over control input and spatial gridpoint)

4.3.1.2 Inverse Dynamics (Feedback Linearization)

A more sophisticated extension to the nominal control problem is feedback linearization in combi-
nation with the linear quadratic regulator. In doing so, an inner nonlinear loop may be designed

in such a way that the overall closed-loop system shows linear behavior. Then, an outer linear
quadratic regulator can take care of the resulting linear dynamics. There are two sub-categories
in design: input-state and input-output linearization. The objective of input-state linearization is

the stabilization of the states around an equilibrium.27 The control is designed so that a (usually) nonlinear feedback yields a linear relationship for the state equations (the ODE system); i.e., a fully linear system results. By contrast, the input-output linearization only creates a linear relationship between the input and the output. Hence, nonlinear internal dynamics may still exist. While input-output linearization (up to the relative degree of the system) can always be performed as long as the resulting internal (zero) dynamics are stable, true state-linearization is only applicable (or even possible) when the relative degree of the system equals its order.28 However, the states are the matter of interest for the benchmark problem; the output relation reflects the direct reproduction of certain states. Hence, only input-state linearization is addressed. Therefore, the following
issues have to be dealt with: Can the presented benchmark problem be input-state linearized for
both the fully and the under-actuated situation? How does the choice of the controller gain matrix
affect input-linearization? One of the most fundamental statements of feedback linearization theory
and a definitive answer to the first question is given by theorem 6.2 in [8]:

Theorem. The nonlinear system (4.1), with f (x) and g(x) being smooth vector fields, is input-state
linearizable if, and only if, there exists a region Ω such that the following conditions hold:

• the vector fields { g, ad_f g, . . . , ad_f^{n−1} g } are linearly independent in Ω;

• the set { g, ad_f g, . . . , ad_f^{n−2} g } is involutive in Ω.29

The above theorem demonstrates how elegant and definitive nonlinear control design could be using
sophisticated mathematical tools; but this also exhibits how useless it could be at the same time:

although the above statement forms a clear and direct analytical proposition, it is nearly impossible
to evaluate this for even the low-order models of the benchmark problem. Even for an order of
n = 5, the required computation of the 4th order Lie bracket is an extremely challenging task, not
to mention the difficulty in showing linear independence and involutivity. Supposing that input-state linearization is possible, control design as described in chapter 6 of [8] would additionally require the solution of several (nonlinear) partial differential equations on the order of n − 1. But there is a way out: in the following, the non-existence of input-state linearization for the under-actuated benchmark problem, as given by eq. (4.3), is proven by contradiction:


27 Find u(t) such that x(t) → 0 for t → ∞ from anywhere in Ω; the case of x − x_d, where x_d is a given reference, is included.
28 Then, the input-output linearization leads to no internal dynamics and coincides with input-state linearization.
29 For the definition, see appendix.

Assume that the system (4.3) is input-state linearizable. Then, there exists a function z1 (x)
such that the system’s input-output linearization with z1 (x) as output function has relative degree
n (no zero dynamics).30 Let us now consider multi-input multi-output systems with the additional
requirement that the number of outputs equals the number of inputs m (the system being square),

without loss of generality. Then, the partial relative degrees r_i for each output are defined as the smallest integers such that at least one of the inputs appears in y_i^{(r_i)}:31

y_i^{(r_i)}(x) = L_f^{r_i} h_i(x) + Σ_{j=1}^{m} L_{g_j} L_f^{r_i−1} h_i(x) u_j(t)

with L_{g_j} L_f^{r_i−1} h_i(x) ≠ 0 for at least one j, in a neighborhood of the point x0. The system is said to have the vector relative degree (r₁, . . . , r_m). The scalar r = r₁ + · · · + r_m is called the total relative
degree of the system at x0 . As stated above, in case of the total relative degree equaling the order
of the system n, there are no internal dynamics and the system is state-linearizable. However, it can
be shown that the maximum partial relative degree of the FE system in eq. (3.25) is always one (for
linear splines as basis functions). A single line of the ODE system in eq. (3.25) with the matrices
given by eq. (3.27) reveals the system to be recursively coupled:

ẇi (t) = fi (wi−1 (t), wi (t), wi+1 (t), ẇi−1 (t), ẇi+1 (t)) + bi u(t)

So every state’s derivative depends again on the derivative of the previous, and the following state.
Successive replacement with the corresponding expressions can be performed until at least one
element of u(t) appears:
ẇi (t) = fi∗ (wN (t), u(t))

Since the system matrices are symmetric, it is guaranteed that the terms will not cancel. Thus, the relative degree vector consists exactly of ones, quite independently of the choice of the output function, so that the total relative degree results in r = m. The connection between n = r and the ability to perform input-state linearization of the system is necessary and sufficient, so that only a fully actuated system can be state-linearized.32 With the control gain matrix (or function) being invertible, the inner-loop design becomes quite obvious; the nonlinear part of the model can directly be canceled by the feedback law

u_lin = g^{−1}(x(t)) M^{−1} N(x(t)) = B_p^{−1} M^{−1} N(x(t))      (4.13)


30 Lemma 6.3 in [8].
31 Here, L_f^{r_i} h denotes the r_i-th Lie derivative of h with respect to the vector field f for a system as given in eq. (4.1) (the definition of the Lie derivative can be found in the appendix).
32 The additional assumption of the system being square does not necessarily constrain that statement.
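A minimal Matlab sketch of this inner/outer loop synthesis is given below; all quantities are illustrative placeholders (in particular, Nfun only stands in for the actual nonlinear finite element term N(x)), and the outer loop uses a precomputed LQR gain.

% Inner-loop cancellation of eq. (4.13) with an outer LQR (illustrative placeholders)
n    = 5;  xhat = 0.1*ones(n,1);
M    = eye(n);  Bp = eye(n);  LL = 0.5*eye(n);   % placeholder mass, input, and gain matrices
Nfun = @(x) 0.5*x.^2;                            % placeholder for the quadratic convection term
ulin   = Bp \ (M \ Nfun(xhat));                  % cancels the nonlinearity, eq. (4.13)
uouter = -LL * xhat;                             % outer linear quadratic regulator
u      = ulin + uouter;                          % synthesized control input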

This section has been included to give a counter-example to the suggested control design for
distributed systems in the work of Atwell and King in [34] and [33]. Therein, a design-then-reduce
approach is preferred to the reduce-then-design technique, meaning that a high-order model is used
for design, while the resulting regulator is then reduced in order. By doing so, “vital information or

physics contained in the higher order model [should be] available for control design if reduction is
postponed.”33 Hence, the focus should lie on sophisticated reduction methods, while proper orthog-
onal decomposition results as the favored technique in [34] and [35]. The suggested approach might

work (and may be advantageous) for certain classes of PDE systems, but conditions may exist where
this procedure reaches its limits, as when an advanced nonlinear nominal controller for the regarded
benchmark problem is used: as mentioned before, the mathematical tools provided for feedback
linearization (or other sophisticated tools like sliding mode control) can only be implemented for
very low order systems that are way below n = 10. Thus, there is no way around model reduction
before control design. Furthermore, in the case of the presented finite element system, input-state
linearization can only be performed for fully actuated systems,34 while the number of control inputs
is normally given beforehand (or by external circumstances).

Hence, in case of advanced nonlinear control applied to PDE systems of Burgers’ class,
this work suggests a different procedure: the nominal model-order should be chosen to equal the
number of actuators, so that a fully-actuated system results. The nominal controller is then formu-
lated, although the model might be coarse. An additional stabilization loop (taking care of model
error and robustness issues) should be added to the overall closed-loop system. As stated in section
4.1, the featured method is model-error control synthesis as derived and detailed in section 4.4.
Burgers’ equation only serves as a one-dimensional model for many problems of distributed param-

eters; thus, model-reduction, truncation, and coarse approximation become even more of a necessity
when larger and higher dimensional settings are considered.

4.3.2 Estimator: Extended Kalman Filter

As stated at the beginning of this chapter, the closed-loop simulation will incorporate process distur-

bances and measurement noise. Therefore, the measurements have to be estimated or filtered. While
the later presented model-error control synthesis is thought to cope with the process disturbances
33 Page 1312 in [34].
34 The same is true for sliding mode control.

and (obviously) the model-error, the Kalman filter is used to provide state estimates free of additive
white Gaussian noise.

4.3.2.1 Standard Linear Kalman Filter (Continuous and Discrete)

The Kalman filter is an optimality solution to the general observer problem, where state estimates

are corrected by the measurements. The linear observer problem is posed by

x̂˙(t) = A(t)x̂(t) + B(t)u(t) + L_K(t) [ ỹ(t) − C_p(t)x̂(t) ]
ŷ(t) = C_p(t)x̂(t)
ỹ(t) = C_p(t)x(t) + v(t)

where v is a vector of additive white Gaussian noise, as previously defined. The measurement noise
covariance is given by V (t)δ(t − τ ). Also, the Kalman filter assumes additive white process noise
(covariance given by D(t)δ(t − τ )):

ẋ(t) = A(t)x(t) + B(t)u(t) + d(t)

The process disturbance and the measurement noise are uncorrelated, i.e., the cross-covariance is zero (E{v(t)d(t − τ)} = 0). The objective is to determine the correction gain L_K(t) in an optimal way.35 The Kalman filter statement constitutes the analogue, or dual problem, of the linear quadratic regulator. The functional to be minimized with respect to L_K(t) becomes

J(L_K(t)) = E{ e^T(t) Q_K e(t) }      (4.14)

where e(t) = x̂(t) − x(t) is the estimation error. From there, the expectation of the quadratic error, weighted via a positive semi-definite matrix Q_K, should be minimized.36 Although there is no direct closed-form expression available for eq. (4.14), the expression on the right-hand side is related to the symmetric estimation-error covariance matrix:

Minimize  J(L_K(t)) = E{ e^T(t) Q_K e(t) }    ⟺    Minimize  tr P(t) = tr E{ e(t) e^T(t) }

Thus, an expression for the symmetric estimation-error covariance has to be found. The time-

derivative of P is regarded:
Ṗ(t) = E{ ė(t) e^T(t) + e(t) ė^T(t) }
35 Values too high lead to scattering since the model follows the erroneous measurements while values too low neglect

useful information from the measurements.


36 Note that the functional becomes a scalar.

Expanding this equation with the already known expressions, and using a convolution relation, e(t) = e^{(A−L_K C_p)t} e₀ + ∫₀ᵗ e^{(A−L_K C_p)(t−τ)} (d(τ) − L_K v(τ)) dτ, leads to37

Ṗ = (A − LK Cp )P + P (AT − CpT LTK ) + D + LK V LTK

In the process, ∫₀ᵗ δ(t − τ) dτ = ½ has been used. Note that the above expression depends on L_K.
Now, the functional in eq. (4.14) can be minimized, subject to the constraint given by Ṗ , and
resulting in

LK = P CpT V −1 (4.15)

Ṗ = AP + P AT + D − P CpT V −1 Cp P (4.16)

For the used lemmas and for a detailed derivation of eq. (4.15), the reader is referred to [59]. A matrix Riccati equation similar to that of the linear quadratic regulator problem has been derived; also, a similar ‘control’ law appears. In case of the LQR, the matrix Riccati equation has to be integrated backwards in time, making only the steady-state solution suitable for an infinite time-domain problem. Here, the optimal Kalman filter solution requires the forward integration of eq. (4.16), and is immediately applicable to continuous filtering problems.38 The linear closed-loop estimation setting remains stable, as long as the plant is fully state-observable.39

As aforementioned, the extended nonlinear benchmark problem (the simulation setting ac-
cording to figure 4.1) requires a continuous-discrete problem formulation: although the model and
plant equation are integrated forward, continuously in time, measurements are only available at
discrete points. In order to perform a continuous-discrete synthesis, the Kalman filter for discrete
systems is briefly sketched, and given a discrete system as follows:

x_{k+1} = φ_k x_k + Γ_k u_k + d_k
ỹ_k = C_{p_k} x_k + v_k

Two stages have to be distinguished: propagation and update. In the first stage, the error covariance and the states are propagated to the next time-step via the dynamics of the system. While
37 The dependency on time of each matrix has been omitted in notation due to simplicity.
38 Even so a steady-state solution can be derived and utilized similar to the LQR problem.
39 (A − L_K C_p) has to be stable.

eq. (4.17) can be applied directly to the state estimates x̂k , the error covariance arises as


P_{k+1}^− = E{ x̃_{k+1}^− x̃_{k+1}^{−T} } = φ_k P_k^+ φ_k^T + Q_{K_k}
P_0^− = E{ x̃_0^− x̃_0^{−T} }
x̃_{k+1}^− = φ_k x̃_k^+ + d_k

Once a measurement is available, the update is performed according to


x̃_k^+ = (I − L_{K_k} C_{p_k}) x̃_k^− + L_{K_k} v_k
P_k^+ = (I − L_{K_k} C_{p_k}) P_k^− (I − L_{K_k} C_{p_k})^T + L_{K_k} V_k L_{K_k}^T

Again, the error covariance P_k is related to the optimization problem in eq. (4.14) by minimizing the trace: min J(L_{K_k}) ⇔ min tr P_k^+. Using trace identities, subsequent minimization and substitution, as described in [59], leads to the linear discrete Kalman filter solution:

L_{K_k} = P_k^− C_{p_k}^T [ C_{p_k} P_k^− C_{p_k}^T + V_k ]^{−1}
P_k^+ = [ I − L_{K_k} C_{p_k} ] P_k^−

4.3.2.2 Continuous-Discrete and Extended Kalman Filter

The consolidation to a continuous-discrete application is more or less straight forward. Within the
measurement interval, the states are propagated according to the continuous model:

x̂˙(t) = A(t)x̂(t) + B(t)u(t)

On the other hand, the update equation for the gain matrix and the state estimates resemble the
discrete ones:

x̂_k^+ = x̂_k^− + L_{K_k} [ ỹ_k − C_{p_k} x̂_k^− ]
P_k^+ = [ I − L_{K_k} C_{p_k} ] P_k^−

The error covariance P (t) is also integrated within the measurement interval, but this varies slightly
from the purely continuous case. The application of the theory of discrete-time systems and approx-

imation, by using first order terms, yields for small ∆t:

φk ≈ (I + ∆tA(t))

Dk ≈ ∆t · D(t)

These expressions can be substituted in the discrete error covariance propagation:40

Pk+1 = (I + ∆tA(t))Pk (I + ∆tA(t))T − (I + ∆tA(t))LKk Cpk Pk (I + ∆tA(t))T + ∆t QK

Collecting terms for (P_{k+1} − P_k)/∆t, using the discrete expression for L_{K_k}, the fact that V_k = V/∆t, and subsequent evaluation of the limit ∆t → 0 yields the propagation for the error covariance:

Ṗ (t) = A(t)P (t) + P (t)AT (t) + D(t) (4.17)

But, in case of the present nonlinear system, a Gaussian input does not necessarily result in
a Gaussian output.41 However, it can be assumed by continuity that the linearization holds for small
perturbations (see also section 4.2). Therefore, the above prerequisites remain valid as long as the
true states are sufficiently close to the estimates. The system could be linearized about an a priori
known nominal state, as performed for the linear quadratic regulator at the equilibrium. But this
so-called linearized Kalman filter generally does not perform as well as the extended version: here,
the system’s matrices are (continuously) linearized at every available state estimate by a first-order
Taylor series expansion. Hence, the system matrix A(t) in the above described propagation phase
becomes

A(x̂(t), t) ≡ ∂f/∂x |_{x̂(t)}
In the update, both relations are used: the Kalman gain and the covariance are corrected via the
above relations. In doing so, the output matrix results from the Jacobian of the (nonlinear) output
function:

C_{p_k}(x̂_k^−) ≡ ∂h/∂x |_{x̂_k^−}

The estimates, though, are propagated and updated incorporating the full nonlinear knowledge:

x̂˙(t) = f(x̂(t)) + g(x̂(t))u(t)
x̂_k^+ = x̂_k^− + L_{K_k} [ ỹ_k − h(x̂_k^−) ]

Note that this approach is not precisely derived from an optimality solution, i.e., from the minimization of a functional subject to nonlinear differential equation constraints; even so, experience has shown its successful application for many years. Since the estimates have to stay sufficiently
40 Again, the Kalman filter’s development presented here should only be a sketch; the reader is referred to [59] for

derivations in detail.
41 The probability density function is altered.

close to the true states, tuning via the weighting matrices (as performed in chapter 5 on numerical simulation) may be required. This is especially necessary for highly nonlinear systems, and for a non-Gaussian process disturbance d(t). For deeper insight into this roughly sketched derivation,
the reader is again referred to [59].
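A compact Matlab sketch of one propagation/update cycle of this continuous-discrete extended Kalman filter is given below. The dynamics, Jacobians, and noise levels are placeholders (not the benchmark model), and the error covariance is propagated here by a single Euler step instead of the integration performed in chapter 5.

% One propagation/update cycle of the continuous-discrete EKF (illustrative placeholders)
n    = 3;   dt = 0.01;
f    = @(x) -x + 0.1*x.^2;             % placeholder nonlinear dynamics
dfdx = @(x) diag(-1 + 0.2*x);          % Jacobian of f
gfun = @(x) eye(n);                    % control distribution
hfun = @(x) x;    Cfun = @(x) eye(n);  % output function and its Jacobian
D    = 1e-3*eye(n);   V = 0.05^2*eye(n);
xhat = 0.5*ones(n,1); P = 1e-2*eye(n); u = zeros(n,1);
% propagation over one measurement interval
[~, X] = ode23(@(t,x) f(x) + gfun(x)*u, [0 dt], xhat);
xhat   = X(end,:).';
A      = dfdx(xhat);
P      = P + dt*(A*P + P*A.' + D);     % Euler step of the covariance propagation, eq. (4.17)
% update with a sampled (here synthetic) measurement
ytilde = hfun(xhat) + sqrt(V)*randn(n,1);
C      = Cfun(xhat);
LK     = (P*C.')/(C*P*C.' + V);        % Kalman gain
xhat   = xhat + LK*(ytilde - hfun(xhat));
P      = (eye(n) - LK*C)*P;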

4.4 Model-Error Control Synthesis


4.4.1 Concept

The so-called model-error control synthesis uses a real-time nonlinear estimator to provide robustness
(and better performance) in the presence of model uncertainties or unmodeled dynamics. Therefore,
an adaptive correction is synthesized with the control signal. The design is built upon the predictive
filter, so as to provide an estimate of the model error present in the system. This prediction is then
utilized as an additional control input (and phase-shifted by 180 degrees) to correct the nominal
control signal. Although the motivation of this work has been outlined several times, it shall be
noted again that real-time implementation is an objective. Therefore, a fast controller is needed.
This is achieved by a one-step ahead prediction (OSAP) of the model-error. The reader is referred to
the extensive work of Kim in [46] and [45] to review different approaches to the predictive estimate.
The model-error control synthesis assumes the real plant to be of the following extended form of
eq. (4.1):
ẋ(t) = f (x(t)) + Gc (x(t))u(t) + Ge (x(t))d(t) (4.18)

Note that Gc (x(t)) is the control input distribution matrix, hence Gc (x(t)) = g(x(t)) in eq. (4.1), and
Ge (x(t)) is the distribution matrix associated with the external disturbance d(t). The expressions
for the output and the state measurements remain the same as in eq. (4.1) and eq. (4.4),42 as does
the assumed model denoted by eq. (4.3). Let

f(x(t)) ≡ f̂(x(t)) + ∆f(x(t))
G_c(x(t)) ≡ Ĝ_c(x(t)) + ∆G_c(x(t))


42 The measurement noise is applied as previously defined in eq. (4.4).

where ∆f (x(t)) and ∆Gc (x(t)) are the assumed model errors in the corresponding terms. Then, the
system can be rewritten in model-error form:

ẋ(t) = f̂ (x(t)) + Gc (x(t))u(t) + Ĝ(x(t))û(t)

Ĝ(x(t))û(t) ≡ ∆f (x(t)) + ∆Gc (x(t))u(t) + Ge (x(t))d(t)

Here û(t) is the model-error associated with the corresponding model-error distribution matrix Ĝ(x(t)). Note that the expression Ĝ(x(t))û(t) reflects the accumulation of the error in the nominal
open-loop model, the error in the control-distribution matrix, and the external disturbance. The
basic idea of model-error control synthesis is quite simple: one tries to predict the future model-
error by applying any kind of predictive filter,43 then feeding that model-error (phase-shifted by 180
degrees) back into the system. Hence, the control signal synthesis becomes

u(t) = ū(t) − ûc (t − τ ) (4.19)

where ū(t) is the nominal controller’s output at time t and û(t − τ ) is the delayed estimated model-
error vector. The delay is always necessary because of computational requirements44 before the
model-error (at the current time) can be predicted. Since most real systems are under-actuated, or
the number of actuators is less than or equal to the dimension of external disturbances, the term ûc
is used for correction instead of û. Then, ûc can be determined via a pseudo-inverse:45

Ĝ_c(x̂(t))u(t) + Ĝ(x̂(t))û(t) = Ĝ_c(x̂(t))ū(t)      (required)

û_c(t) = V^T (V V^T)^{−1} (U^T U)^{−1} U^T [ Ĝ(x̂(t)) û(t) ]

where: Ĝ_{c, n×l}(x̂(t)) = U_{n×q} V_{q×l}

Here, q is the rank of Ĝc (x̂(t)) and U and V are valid decompositions.46 In case of independent
actuators for each component, i.e., when the control distribution matrix has full rank, ûc (t) equals

û(t).
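In Matlab, the correction term can be obtained directly with the pseudo-inverse function pinv; the small matrices below are purely illustrative (three states, two actuators), and for a full-rank square control distribution the result reduces to û_c = û.

% Model-error correction through the available actuators via the pseudo-inverse (sketch)
Gc   = [1 0; 0 1; 0 0];         % illustrative control distribution (3 states, 2 actuators)
G    = eye(3);                  % illustrative model-error distribution
uhat = [0.2; -0.1; 0.05];       % illustrative model-error estimate
uc   = pinv(Gc) * (G * uhat);   % correction term mapped onto the actuator space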
43 The original approach uses the so-called one-step ahead prediction while in [45] and [46] model-error control

synthesis has been augmented to incorporate certain kinds of receding-horizon techniques.


44 The implication and the implementation will be shown in the following section.
45 Moore-Penrose inverse.
46 In case of Ĝ_c(x̂(t)) having full column rank, this reduces to (Ĝ_c(x̂(t)))^+ = (Ĝ_c^T(x̂(t))Ĝ_c(x̂(t)))^{−1}Ĝ_c^T(x̂(t)), or to (Ĝ_c(x̂(t)))^+ = Ĝ_c^T(x̂(t))(Ĝ_c(x̂(t))Ĝ_c^T(x̂(t)))^{−1} for full row rank. Note that ^+ denotes the pseudo-inverse.

4.4.2 One-Step Ahead Prediction

For the purpose of this work, the original one-step ahead prediction approach (see [41] and [43]) is
chosen, since it provides the capability of being implemented in real-time. The output estimate can
be expanded into a multi-dimensional Taylor series:

ŷ(t + h) ≈ ŷ(t) + z(x̂(t), h) + Λ(h)S(x̂(t))u(t) (4.20)

Let pi be the relative degree of each output,47 i.e., the lowest order of differentiation in which any
component of the input u(t) first appears.48 Then, the elements of the Taylor series expansion
become
z_i(x̂(t), h) = Σ_{k=1}^{p_i} (h^k / k!) L_f̂^k (c_i)

here: z(x̂(t), h) = h · C_m · [ f̂(x̂(t)) + B_m u(t) ]

Here z(x̂(t), h) is a vector with z ∈ R^{m×1}, and L_f̂^k(c_i) is the k-th order Lie derivative of c_i(x̂(t)) with respect to f̂, as defined in the appendix. The generalized sensitivity matrix S(x̂(t)) ∈ R^{m×l} consists of the following rows:

s_i = { [ L_{g_1} L_f̂^{p_i−1}(c_i) ], . . . , [ L_{g_q} L_f̂^{p_i−1}(c_i) ] }

here: S = C_m · B_m

And the diagonal matrix Λ(h) ∈ R^{m×m} has elements given by

λ_ii = h^{p_i} / p_i!

here: Λ = h · I_{m×m}

Now, a cost functional can be formulated: the weighted sum square of the measurement-minus-
estimate residuals49 plus the weighted sum square for the model correction term (part of the control
input) shall be minimized:

J(û(t)) = ½ {ỹ(t + h) − ŷ(t + h)}^T R_E^{−1} {ỹ(t + h) − ŷ(t + h)} + ½ û^T(t) W_E û(t)      (4.21)

Thereby, two weighting matrices are introduced: WE , being positive semidefinite, determines the
effort of the correction term (meaning the more WE decreases the more model correction is added). In
47 i= 1, 2, ...m, where m is the number of outputs.
48 For the system regarded in this work, the partial relative degree for each output becomes 1, as previously discussed
in section 4.3.1.2.
49 The usual term of ‘error’ does not appear suitable to avoid confusion between estimation- and model-error.

the original predictive filter approach, RE is assumed to be the measurement error covariance matrix
caused by the additive white Gaussian noise. Therefore, [43] includes a rule to determine the sample
measurement covariance from a recursive relationship ‘on-the-fly,’ based on a test for whiteness.
Hence, the filter dynamics become variant, and need a certain time to converge to a stochastic

steady-state. However, when the predictive filter is extended to model-error control, experience has
shown high sensitivity against measurement noise. Thus, this work will neglect the model-error
approach for compensating the additive white Gaussian noise. Instead, the extended Kalman filter

is applied and the model-error prediction will be built on top of the resulting estimates.50 The
general optimal solution to eq. (4.21) can be derived as

û(t) = −M(t) [ z(x̂(t), h) − ỹ(t + h) + ŷ(t) ]      (4.22)

where: M(t) = { [Λ(h)S(x̂(t))]^T R_E^{−1} Λ(h)S(x̂(t)) + W_E }^{−1} [Λ(h)S(x̂(t))]^T R_E^{−1}

here: M(t) = (h²I + W_E)^{−1} h

Note that the resulting gain matrix for the benchmark problem (obtained by using the system’s
matrices and, additionally, the assumption of RE = I for above reasons) has been expressed inde-
pendent of time.
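For the benchmark form of this gain (S = I, Λ = hI, R_E = I), the prediction of eq. (4.22) reduces to a few Matlab lines; all quantities below are illustrative placeholders rather than the benchmark model.

% One-step ahead model-error prediction, benchmark form of eq. (4.22) (illustrative sketch)
n      = 4;   h = 0.002;
WE     = 1e-10*eye(n);                    % correction weighting (cf. chapter 5)
fhat   = @(x) -x + 0.1*x.^2;              % placeholder nominal dynamics
xhat   = 0.3*ones(n,1);   ubar = zeros(n,1);
yhat   = xhat;                            % output estimate (identity output map)
ytilde = yhat + 0.01*randn(n,1);          % placeholder sampled measurement
Mgain  = (h^2*eye(n) + WE) \ (h*eye(n));  % M = (h^2*I + W_E)^(-1)*h
z      = h*(fhat(xhat) + ubar);           % first-order prediction increment z(xhat,h)
uhat   = -Mgain*(z - ytilde + yhat);      % estimated model error, eq. (4.22)
u      = ubar - uhat;                     % corrected control signal, eq. (4.19)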

4.4.3 Realization

So far, eq. (4.22) is the exact and analytically derived solution to the stated model-error correction
problem using the one-step ahead prediction approach. But when implementation of this technique
is considered, some problematic key properties reveal themselves immediately: first, eq. (4.22) is
an implicit equation, since the model-error correction û(t) is appearing in the control term u(t)

on the right-hand side in z(x̂(t), h); it is not guaranteed that an explicit relation can be found.51
Second, for computing the current model-error prediction (or correction), knowledge about a future
measurement ỹ(t + h) is required, disobeying causality. To cope with these issues, the model-error
solution has to be time-shifted, leading to a certain delay.

Until now, three different time-related parameters, or coefficients, have been introduced:
the measurement sampling time ∆t, the time-delay effect of the model-error control synthesis τ , and
the optimization interval h. The importance and distinction of these parameters can be interpreted
50 Also, the model-error control without the Kalman filter is implemented and evaluated in the numerical section so

as to create a reference.
51 This becomes even more complicated if G (x̂) 6= G (x̂) and the pseudo-inverse has to be used.
c e

in different ways. This raises some important questions. The measurement sampling interval is
an external factor which has to be taken for granted. The extent of the optimization interval h,
however, is more or less up to the user: in a previous approach to tracking-error prediction by
Lu ([42]), the state equations and also the a priori known reference trajectory are expanded using

Taylor series,52 so that the right-hand side only depends on t, h, and x(t). Then, the optimization
interval h truly becomes a design parameter, simply expressing how far in time the tracking error
is projected. However, when model-error control synthesis is regarded, those pure design properties

of h are lost.

Before the resulting implications are illustrated, two remaining key issues are addressed:
since a future measurement is needed, the model error obviously cannot be computed directly, so
a time-shift appears due to the availability of all necessary information. This time-shift indeed
influences the overall time-delay, but does not coincide with it. The overall time-shift depends on
the specific incorporation of the future measurement problem. For the purposes of this work, and
for illustration, a simplified system is considered: measurements are available at every step in time.
Additionally, the controller is supposedly realized by a numerical integrator that computes much
faster than the applied integration interval. Then, all three time-related parameters are linked to
the integration step-size dt given by the applied numerical method (for example Runge-Kutta):

1. Forward Realization: Let a measurement ỹ(t), an output estimate ŷ(t), and the applied
control input u(t) be given at the current time t. Then, the current measurement is assumed to
remain constant and to equal ỹ(t + h). The output’s estimate ŷ(t + h) is predicted by applying
a (mostly) truncated Taylor series. Hence, eq. (4.22) directly weights the difference between
that predicted estimate and the assumed measurement, so as to provide the model-error at the current time t. But all these computations take place after the current measurement has become available, so that the newly computed control input cannot be applied at that instantaneous moment: a finite computing time is always needed and represented by τ.53 But, in this approach, the overall time-delay is minimized, only consisting of τ; h can be chosen arbitrarily, although the option h > ∆t(= dt) does not make sense (additional information is
available through the next measurement). The downside of this implementation method lies in
52 In[42], the expression for the error evaluation becomes [e(t) + ζ(x(t), h) − ξ(x(t), h)], where ζ and ξ are the first
order terms of the Taylor series expansion of the state equation or the reference trajectory, respectively.
53 Note that the continuous-discrete implementation (used for the simulation in this work) actually neglects this

time-delay since every update is performed at once before the integration routine is restarted.

the fact that additional deviation is introduced by the assumption of the current measurement
remaining constant. Although used by Kim in [46], this relation indeed violates the optimal
solution in eq. (4.22). Figure 4.6 illustrates the update relation; in short, notation of this
approach becomes

û(t) = −M (t − τ ) [z(x̂(t − τ ), h) − ỹ(t − τ ) + ŷ(t − τ )] (4.23)

2. Backward Realization: Given a current measurement ỹ(t), a previous output estimate

ŷ(t − dt), and the previously applied control u(t − dt), then, the previous model-error û(t − dt)
is computed at the current time. It is applied at the next possible numerical moment; again τ
reflects the purely computational time-delay. Since there is no estimate available within one
integration interval, h automatically has to equal dt(= ∆t), and the overall time-delay is the
addition of τ and h (or dt). Again, this can be brought to a short notation and is illustrated
in figure 4.7:

û(t) = −M (t − τ − h) [z(x̂(t − τ − h), h) − ỹ(t − τ ) + ŷ(t − τ − h)] (4.24)

Figure 4.6: Model-Error Prediction, Forward Realization

In contrast to assuming that the measurement remains constant, the advantage of the second approach lies in the fact that the optimal solution rule of eq. (4.22) is not violated. But this advantage has to be paid for by a significantly larger time lag of the correction term. However, if the integration step-size is quite small, and thus the system quite fast, this becomes negligible; the second implementation

Figure 4.7: Model-Error Prediction, Backward Realization

should be the ‘weapon of choice’ for this work. The exact optimal solution is thereby only delayed,
and not violated, as with the forward realization.

In the case of ‘continuous-discrete’ systems where measurements are sampled at a slower


rate than the integration step-size, it is open to interpretation as how to apply model-error control
synthesis. It is valid to hold the model correction term constant within the sampling interval, since
there is no additional information available before the next measurement. Both presented procedures
can then be extended straightforwardly, while the computational time-delay τ remains independently designated: the forward realization is applied exactly as before, whenever a measurement becomes available. Note that h is a design parameter which can be chosen to coincide with the measurement interval, but does not necessarily have to. Also, the backward realization is augmented straightforwardly; here, h equals ∆t. Note that the overall time-delay turns out to be varying in the case of
slower sampled measurements.54 This is simply due to the fact that the model-error correction is
held constant within the measurement interval.55

However, a third method might be considered when the model-error prediction is only

used as an additional regulator, and not as an estimator. Then, the estimates resulting from the
(potentially nonlinear) observer might be more precise than the truncated Taylor series (depending
on the partial relative degrees). Thus, the prediction can be chosen to be based on the last available
54 Between τ and ∆t + τ for the forward realization and between ∆t + τ and 2∆t + τ for the backward procedure.
55 Note that this augmented backward realization is applied in chapter 5.

Figure 4.8: Model-Error Prediction with Estimator and Slow Measurements

output estimate ŷ(t − dt) before the measurement, resulting in h = dt.56 Figure 4.8 exhibits this
policy. τ still expresses the computational need for the control update. Nevertheless, this intuitive
idea has not yet been tested, and optimality and stability issues would still have to be addressed.
Thus, the reader should regard this only as a suggestion for future work.

In [46] Kim discusses the effect of the optimization interval h and its relation to the weighting matrix W_E very briefly: for h → 0 the smallest eigenvalue of Λ(h) obviously approaches zero, so that W_E has to be positive definite. He also states, by intuition, that the weighting for the model-error has to become larger the more h is decreased (to avoid a correction signal with

a high gain and frequency).

The specific model-error corrector design resulting from its application to the benchmark problem provides a neat illustration of this intuition. The limits of the time-delay effect are investigated; thereby, the overall time-shift is regarded, so it should be immaterial which numerical technique is applied. First, the truly optimal and analytical solution of eq. (4.21) is studied, i.e., τ
is assumed to be zero. As already stated, this presents an implicit expression since the model-error
prediction is also appearing in the cumulative control signal on the right-hand side. Adopting the
56 Obviously, h can adopt any integration interval based value within dt and ∆t.

system’s matrices and identity for R_E to eq. (4.21) yields

û(t) = −(h²I + W_E)^{−1} h [ h(f(x̂(t)) + ū(t)) + hû(t) − ỹ(t + h) + ŷ(t) ]

Collecting the terms for û(t) on the left-hand side results in

{ I − (h²I + W_E)^{−1} h² } · û(t) = −(h²I + W_E)^{−1} h [ h(f(x̂(t)) + ū(t)) − ỹ(t + h) + ŷ(t) ]

Clearly, the total gain multiplying the approximated current model-error becomes

{ I + [ −I − W_E/h² ]^{−1} }^{−1} (h²I + W_E)^{−1} h

Using an inversion identity and rearranging this expression leads to

[ (h²I + W_E) W_E (h²I + W_E) ]^{−1}
Obviously, the gain approaches infinity for W_E → 0. If the optimization interval h is significantly larger than the square root of the maximum element of W_E, the gain can be approximated by (1/h⁴) W_E^{−1}.57 If, on the other hand, the optimization interval is chosen to be extremely small, the gain reaches a limit of W_E^{−3} for h → 0, potentially creating extreme amplifications. If the opposed limit case of the time-delay is considered (a very long delay), u(t − τ) could, in the limit, be interpreted as independent of the right-hand side:

û(t − τ) = (h²I + W_E)^{−1} h [ h(f(x̂(t)) + u(t)) − ỹ(t + h) + ŷ(t) ]

Hence, this simplifies for WE → 0 to

û(t − τ) = (1/h) [ h(f(x̂(t)) + u(t)) − ỹ(t + h) + ŷ(t) ]

Since ŷ(t) + h (f (x̂(t)) + u(t)) is a first order approximation of ŷ(t + h), this can be rewritten:

û(t − τ) ≈ [ ŷ(t + h) − ỹ(t + h) ] / h

Providing a system without measurement noise, as well as applying identity as the output matrix,
leads to the following approximation:

û(t − τ) ≈ [ x̂(t + h) − x(t + h) ] / h
57 For example, h being 2 milliseconds and W_E chosen to be 10^{−10} · I, as applied in chapter 5.

If one assumes the previous model-error to be compensated completely (which also induces the
decoupling condition), for small h, the above expression becomes an estimate of a derivative:

û(t − τ) ≈ d/dt ( x̂(t) − x(t) ) = x̂˙(t) − ẋ(t)

Certainly, the simplifications and assumptions carried out are coarse and far from a detailed analytical derivation; nevertheless, they provide an illustration of the fundamental behavior of model-error control synthesis: it incorporates a differential predictor. Also, the relationship between the opti-

mization interval and the weighting matrix can be identified.


Chapter 5

Numerical Simulation

5.1 Parameters and Simulation Setting

The previously introduced control loop setting (together with the presented control techniques) is
subject to numerical evaluation in this chapter. The system is implemented in Matlab (the used
code is provided in the appendix). The time-delay in the model-error correction term requires a con-
stant integration step-size. Hence, the system is assumed to be continuous-discrete: the simulation
is divided into a propagation and an update phase. Within the integration (propagation) interval dt,1 the states, the estimates, and (if applicable) the covariance matrix are propagated forward in time using the Matlab function ode23 (with a setting of 10^{−6} for relative and 10^{−8} for absolute
tolerance). In the process, the nominal control input and the model-error compensation are held
constant, whereas the linearization necessary for the covariance’s propagation is updated within the
interval. There are no measurements available within dt. Once every variable has been propagated
to the next time-step, a measurement is generated from the propagated states. Then, the nomi-
nal control input, the state estimate, the error covariance, and the model-error compensation are
updated for use in the next interval.

The control objective is the attenuation of an initial distribution, where the target equilibrium is the origin (x_final = 0). The initial disturbance is given in terms of w(t, x) by

w(0, x) = w₀(x) = 0.5 · sin(2πx/L)  for 0 < x ≤ 0.5,     w₀(x) = 0  for 0.5 < x ≤ 1

Note that the spatial domain is taken to have the length L = 1 (Ω = [0, 1]). Additionally, a

deterministic disturbance d(t) = 0.75 cos(10t) is added as process noise (representing, for example,
1 Also referred to as the numerical integration step-size.


unmodeled dynamics or vibrations). The initial condition of the state estimate is chosen to equal the
true initial state. So as to provide comparability between the different control designs, a performance
measure has to be introduced: here, the absolute value of the deviation from zero is added for every
(discrete) point in space and time:2
e = (1 / (N_t N_p)) Σ_{i=1}^{N_p} Σ_{j=1}^{N_t} | x_i(j · dt) |      (5.1)

Also, the settling time can be regarded; i.e., the minimal time from which the states remain bounded:

t_set = min_{t₀} { t₀ : ∀ t > t₀, x_i(t) ∈ [l_l, l_u] }      (5.2)

The limits are chosen to be lu = 0.055 for the upper bound and ll = −0.055 for the lower bound.3
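Both measures can be evaluated directly from a stored state history; the Matlab sketch below assumes an Np-by-Nt array X with X(i,j) = x_i(j·dt), filled here with placeholder data.

% Performance measure (5.1) and settling time (5.2) from a stored trajectory (sketch)
Np = 101;  Nt = 5000;  dt = 0.002;
X  = 0.1*randn(Np, Nt);                   % placeholder state history, X(i,j) = x_i(j*dt)
e  = sum(abs(X(:))) / (Nt*Np);            % performance measure, eq. (5.1)
ll = -0.055;  lu = 0.055;                 % lower and upper settling bounds
inside = all(X >= ll & X <= lu, 1);       % 1 where every state lies within the bounds
k = find(~inside, 1, 'last');             % last time index violating the bounds
if isempty(k), tset = 0; else, tset = k*dt; end   % eq. (5.2): states remain bounded afterwards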
As a reference, the open-loop simulation from 0 to 10 seconds is shown in figure 5.1. The associated performance measures become e = 0.1589, e = 0.1608 and e = 0.1612 for κ = 0.01, κ = 0.002 and κ = 0.001, respectively. Note that the settling time criterion is not met within the simulation interval.

As described in section 4.1, both the plant and the model are realized by means of a finite element approximation with different orders: N_p for the plant and N_m for the model. The model is fully controlled, as well as fully observed, so B_m = C_m = I. The input and output matrices for the plant turn out to be identity (B_p = I, C_p = I) in case of the full model-order control, and are given as in section 4.1 in case of the reduced-order control.4 Two different simulation setups are conducted: a full-order model run with N_m = N_p = 101 and a reduced-order model run with N_p = 101 and N_m = 21. Both realize the following propagation rule within the integration interval dt (x_k^− and x̂_k^− at k · dt provided by the propagation):

[ ẋ^{N_p}(t)  ]     [ A^{N_p}    0       ] [ x^{N_p}(t)  ]     [ N^{N_p}(x^{N_p}(t))  ]     [ d(t) ]     [ C_p ]
[ x̂˙^{N_m}(t) ]  =  [ 0          A^{N_m} ] [ x̂^{N_m}(t) ]  +  [ N^{N_m}(x̂^{N_m}(t)) ]  +  [ 0    ]  +  [ C_m ] (ū_k − û_k)

The measurements are provided as aforementioned in section 4.1:

ỹk = Cp xk + vk , vk ∼ N (0, Vk )

Once the next time-step has been reached via propagation, different update relations hold according
to the implemented control design, and as detailed in the following.
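This propagate/update cycle can be summarized by the following Matlab skeleton; the low-order dynamics, gains, and weights are placeholders only (the code actually used for the results of this chapter is listed in the appendix), and the update shown combines direct measurements with the backward realization of the model-error prediction.

% Skeleton of the continuous-discrete closed-loop simulation (illustrative placeholders)
n = 4;  dt = 0.002;  Nt = 10;  t = 0;
fplant = @(tt,xx,u) -xx + 0.1*xx.^2 + u + 0.75*cos(10*tt)*ones(n,1);  % plant incl. d(t)
fmodel = @(xx) -xx;                                                   % nominal model dynamics
LL = 0.5*eye(n);  V = 0.05^2*eye(n);  WE = 1e-10*eye(n);
x = 0.5*ones(n,1);  xhat = x;  ubar = -LL*xhat;  uhat = zeros(n,1);
for k = 1:Nt
    u = ubar - uhat;                      % synthesized control, held constant over [t, t+dt]
    xhatprev = xhat;  yhatprev = xhat;    % stored for the backward realization
    [~,Xp] = ode23(@(tt,xx) fplant(tt,xx,u), [t t+dt], x);     x    = Xp(end,:).';
    [~,Xm] = ode23(@(tt,xx) fmodel(xx) + u,  [t t+dt], xhat);  xhat = Xm(end,:).';
    t = t + dt;
    ytilde = x + sqrt(V)*randn(n,1);      % sampled measurement at the new time-step
    Mg   = (dt^2*eye(n) + WE) \ (dt*eye(n));
    z    = dt*(fmodel(xhatprev) + u);
    uhat = -Mg*(z - ytilde + yhatprev);   % backward realization, eq. (4.24), with h = dt
    xhat = ytilde;                        % direct-measurement update (EKF could be used instead)
    ubar = -LL*xhat;                      % nominal LQR update
end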
2 N_t being the number of discretization points in time with the associated integration interval dt, and N_p being the number of spatial gridpoints for the plant.
3 The introduced process disturbance will create an oscillation around the origin; it is required that this oscillation

is damped by the control.


4 Exact representation of the output at the gridpoints of the model and expanded step-functions on the whole

domain as control inputs are applied.



Figure 5.1: Open-Loop Simulation Including Disturbance; N = 101, κ = 0.001

5.2 Full-Order Model Control

The linear quadratic regulator, as described in section 4.3.1, is employed as the nominal controller, so that the associated control law

ū_k = −L_L x̂_k^+

contains the gain L_L obtained from the steady-state Riccati equation. The functional gain is computed beforehand based on the linearization of the system.5 The solution only depends on the ratio of
the weighting matrices QL and RL . Due to the symmetric and diagonal dominant structure of the
system, these have been chosen to be weighted diagonal matrices: RL = I and QL = q(x)I, with
q(x) = 10 on 0.7 ≤ x ≤ 0.9 and q(x) = 1 on the rest of the domain. For demonstration purposes, a
higher weight is put on the interval [0.7, 0.9]. As already stated, the nominal control is only updated
at each integration interval dt, and remains constant during function evaluations performed by
5 The system’s linearization is not updated for the linear quadratic regulator. Since the system is linearized at the origin, the Jacobian ∂N(x(t))/∂x(t)|_{x*} actually equals zero and only the linear part κM^{−1}K comes into action.

Table 5.1: Control Performance, Full-Order LQR without MECS

                               Viscosity Coefficient
                       κ = 0.01          κ = 0.002         κ = 0.001
                       e       tset      e       tset      e       tset
  vk = 0
    Direct Meas.       0.0562  -         0.0567  -         0.0567  -
  vk ~ N(0, 0.05)
    Direct Meas.       0.0563  -         0.0567  -         0.0567  -
    Kalman Filter      0.0571  -         0.0575  -         0.0576  -

the integrator (ode23). Different update relations are necessary, depending on either the kind of
measurement or the filter technique used. Simple direct measurements can be implemented by

x̂_k^+ = ỹ_k

while the more sophisticated extended Kalman filter approach uses the following corrections:

L_{K_k} = P_k^− C_m^T [ C_m P_k^− C_m^T + R_K ]^{−1}
x̂_k^+ = x̂_k^− + L_{K_k} [ ỹ_k − C_m x̂_k^− ]
P_k^+ = [ I − L_{K_k} C_m ] P_k^−

Additionally, the error covariance matrix P has to be propagated via

Ṗ = A(x̂(t), t)P (t) + P (t)AT (x̂(t), t) + Q(t)



A(x̂(t), t) ≡ ∂f̂/∂x |_{x̂(t)}

Note that the required linearization of the system is performed at every function call within the in-
tegration interval. Table 5.1 indicates the resulting performance measures for different values of
κ and different noise and filter combinations applied to the full-order system without model-error
correction.

The linear quadratic regulator with direct measurements performs well in terms of atten-
uation of the initial disturbance, but it cannot cope with process disturbance (here, the harmonic

oscillations). Since the system’s damping depends on the viscosity parameter κ, one would expect
a worse response for smaller values. But interestingly the performance value only changes slightly:
this is due to the fact that the performance measure is an averaged quantity. The initial disturbance is still attenuated quickly in comparison to the simulation interval, so that the shock tendency is not covered by the performance measure. A slight shock tendency is nevertheless still depicted in figure 5.2 for κ = 0.001.

(a) True States (b) Nominal Control

Figure 5.2: Full-Order Control; no Noise, no Model-Error Correction, N = 101, κ = 0.001

It could already be seen from the integral representation via functional gains or kernels, respectively (section 4.3.1), that the linear quadratic regulator has an integrative property. This makes it especially robust against (white Gaussian) measurement noise and numerical instabilities. Hence, the efficiency of the noisy case exhibited in table 5.1 is indistinguishable from the noise-free situation. The linear quadratic regulator ‘averages’ the weighted sum of several states, so that part of the noise cancels out (as displayed in figure 5.3). But the downside of this integrative property

(a) Measurements (b) Nominal Control

Figure 5.3: Full-Order Control; Noise, no Model-Error Correction, N = 101, κ = 0.001

is given by the fact that areas of steep descent are not recognized or smoothed as readily as by a differentiating controller (for example). Surprisingly, the addition of a Kalman filter is not

improving the control quality (in the noisy case); it is indeed slightly degraded. The underlying
reason could be the introduction of piecewise linearized estimates by the Kalman filter, neglecting
parts of the dynamics. The reader is also reminded that the extended Kalman filter approach is not
directly derived from an optimality condition.

Furthermore, the process noise in the system under investigation is not additive white Gaussian noise; rather, it is deterministic. Therefore, the weighting matrices somewhat lose their meaning and become merely tuning parameters. Nevertheless, the desired behavior can be defined to consist of solely filtering out the measurement noise and closely following the truth. There is no reason for not choosing R_K to equal the real (or applied) measurement noise covariance matrix used for simulation: here R_K = (0.05)² I_{Nm×Nm}.6 But Q_K still has to be optimized for the desired behavior: too high values cause the estimator to closely follow the model and to neglect the disturbance,7 while too low values carry over measurement noise into the estimates.8 Choosing the discrete summation in space and time of the truth-minus-estimate as a loss function, a parameter optimization can be performed:
J(Q_K) = Σ_{i=1}^{N_p} Σ_{j=1}^{N_t} | x_i(j · dt) − x̂_i(j · dt) |

The system’s matrices being diagonally dominant, and especially consisting of the same entries along the main and each secondary diagonal,9 gives reason to choose Q_K of the form scalar times identity: Q_K = s · I_{Nm×Nm}. The resulting one-parameter loss function is shown in figure 5.4(a) for the open-loop system with a viscosity of κ = 0.002 and also for the LQR controlled feedback loop with κ = 0.001 in figure 5.4(b). Both graphs reveal distinctive minima that lie quite close to each other. Since the loss function is not varying significantly around those parameters, the weighting matrix for the simulation purposes in this work is selected to be Q_K = 0.025 · I_{Nm×Nm}.

One of this work’s purposes is to demonstrate the robustness and superiority of model-error
control synthesis as a computationally efficient compensation technique for process disturbance in
nonlinear distributed systems. Hence, the nominal linear quadratic regulator is combined with an
6 Although the Kalman filter is the dual problem to the linear quadratic regulator, the Kalman gain matrix does
not only depend on the ratio of the two weighting matrices, but also on the chosen absolute values. In comparison
to ‘real world’ sensor quality, the measurement variance has been chosen to be a worst case condition: the standard
deviation (here, the mean of the absolute value) is 10 percent of the initial distribution.
7 Then, the estimates are close to zero and do not contain model-error anymore. So when the model-error corrector

is fed with the estimates, the resulting predictions are way too low.
8 This potentially causes high chattering in the differentiating model-error predictor.
9 The recursive coupling in space arising from the semi-discretization is the cause for this appearance.

Figure 5.4: Cost Functions for the Kalman Filter Optimization: loss function (accumulated estimate minus truth) over the process noise weighting s (Q_kal = s · I); panels: (a) Open-Loop System with κ = 0.002, (b) Closed-Loop (LQR) with κ = 0.001; minima at approximately (0.0245, 2324) and (0.028, 2100)

Table 5.2: Control Performance, Full-Order LQR with MECS

                               Viscosity Coefficient
                       κ = 0.01          κ = 0.002         κ = 0.001
                       e       tset      e       tset      e       tset
  vk = 0
    Direct Meas.       0.0166  1.93      0.0165  2.11      0.0165  2.15
    Kalman Filter      0.0211  2.00      0.0212  2.08      0.0212  2.10
  vk ~ N(0, 0.05)
    Direct Meas.       0.0474  -         0.0477  -         0.0488  -
    Kalman Filter11    0.0218  2.25      0.0229  2.93      0.0234  3.09

additional model-error correction loop as illustrated in figure 4.1. Table 5.2 shows the simulation results for different viscosity, noise and filter combinations. At this point it has to be noted that no efficiency measurement presented in this work has been obtained from just one simulation run. Especially when stochastic noise generation is involved, the result has been obtained via the mean of several iterations.10

As table 5.2 exhibits, there is a tremendous improvement when the model-error corrector is added in the case of direct measurements in the absence of noise. Actually, it reveals the best performance within the evaluations of this work, with complete attenuation of the process disturbance, as depicted by figure 5.5 (true states and resulting model-error correction).

But as already illustrated in section 4.4, the model-error control synthesis is essentially a
10 Thereby, the variance as an indicator of consistency has been found to be negligible.
11 Since noise is carried into the states through the model-error corrector the upper and lower bound of the settling
criteria are sometimes exceeded by stochastic noise peaks. Hence the settling time showed a maximal deviation of
1.58.

(a) True States (b) Model-Error Correction Term

Figure 5.5: Full-Order Control; no Noise, with Model-Error Correction, Nm = 101, κ = 0.001

differentiating control. Thus, it has to cope with the difficulties every numerical differentiator has
to face. It is not astonishing that there is a severe kickback in the presence of noise or numerical
errors. The harsh noise covariance12 applied in this work even causes the system to become unstable
(within the simulation interval), as illustrated by figure 5.6. Although thought to cope with both

(a) True States (b) Model-Error Correction Term

Figure 5.6: Full-Order Control; Noise, with Model-Error Correction, Nm = 101, κ = 0.001

noise and model-error, experience has shown instabilities and several problems of the model-error
control synthesis in the presence of noise. Once more, the benchmark problem depicts that behavior.
However, for the sake of completeness it has to be mentioned that the original approach did not just
take direct measurements as an input collection: it still included a decoupled estimator for forward
integration of the model. Thereby, the model-error predictor has been used for both estimator
12 Again, in comparison to the quality of ‘real world’ sensors, the measurement noise applied in this work is quite
high.

update and control correction.

But in the case of noise, the addition of the tuned Kalman filter comes in handy. Figure 5.7
exhibits the resulting marked improvement. There is still a disturbance residue

(a) True States (b) Measurements

(c) Estimates (d) Model-Error Correction Term

Figure 5.7: Full-Order Control; Noise, with Model-Error Correction, Kalman Filter, Nm = 101,
κ = 0.001

left in the states, and the model-error corrector carries some noise into the system. But in the face
of the rough conditions applied, the system’s performance is remarkable. Figures 5.7(c) and 5.7(d)
additionally show the estimates and the resulting model-error prediction. Figure 5.7(d) displays a
lot of chattering, and, at first sight, one might interpret it as noise. But there is still a considerable
harmonic contained in the signal. The knowledge that the process disturbance is equally applied
to each state can be exploited to reveal this quite simply: the mean of the model-error correction
in space can be computed for each instant in time by adding every value and dividing the result by
Nm. Thereby, zero-mean noise contributions are roughly averaged out, and figure 5.8 depicts the resulting

harmonic.
[Figure: averaged model error (range approximately −2 to 2) versus time (0 to 10 s).]

Figure 5.8: Spatial Mean of the Model-Error Correction Term, Nm = 101, κ = 0.001
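This averaging can be reproduced directly from the simulation output; the following is a minimal sketch using the variable names of the appendix code, where u_mc holds the model-error correction with one column per time instant and space_t is the time grid.

% Spatial mean of the model-error correction at every time step; the zero-mean
% noise contributions roughly cancel, exposing the harmonic shown in figure 5.8
u_mc_mean = mean(u_mc, 1);                 % 1 x nt, one value per time instant
plot(space_t, u_mc_mean);
xlabel('Time'); ylabel('Averaged Model Error');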

5.3 Reduced-Order Model Control

It is also one objective to evaluate the robustness and performance of control based on reduced-order
models. The benchmark problem in eq. (3.9) represents only a small one-dimensional domain of a
simplified problem, yet a high order is already needed when the distributed PDE is replaced by a
system of ODEs. One can easily imagine the tremendous computational demand that the application
to a spread-out multidimensional problem, governed by the Navier-Stokes equations, would create.
In order to meet these challenges, the order of the control design has to be reduced. This objective
has partially been addressed in previous research (see section 1.2). J. Atwell et alii place their focus
on ‘intelligent’ techniques of model reduction. Furthermore, they suggest first designing the controller
(and also the estimator) for the high-order model, and then reducing the order of the augmented
system (instead of reducing the model and then designing the controller). The proposed reduction technique is

Karhunen-Loève decomposition. As mentioned before, the Karhunen-Loève decomposition requires


an input collection to construct the basis functions. By choosing the functional gains of the linear
quadratic regulator as such a collection, Atwell et alii were able to incorporate the closed-loop
(controller) dynamics into the reduction process. But the short sketch of the nonlinear control layout
in section 4.3.1.2 has already revealed the immense requirement of sophisticated mathematical tools
for advanced control design; so even a comparatively low order (e.g., an order of 10) might not be
addressable. Here, the emphasis lies not on well-developed reduction methods but on the design of

Table 5.3: Control Performance, Reduced-Order LQR without MECS

                          Viscosity Coefficient
                    κ = 0.01           κ = 0.002          κ = 0.001
                    e        tset      e        tset      e        tset
vk = 0
  Direct Meas.      0.0562   -         0.0567   -         0.0568   -
vk ∼ N (0, 0.05)
  Direct Meas.      0.0563   -         0.0567   -         0.0568   -
  Kalman Filter     0.0571   -         0.0576   -         0.0577   -

robust controllers with sufficient performance in the face of coarse models. Therefore, the ‘reduce-
then-design’ approach chosen in this work is based on a very crude model reduction, which is arrived
at by simply truncating the order of the linear B-spline finite-element basis, neglecting any additional
knowledge. This gives reason to expect that a significantly higher performance could be achieved if
‘intelligent’ methods were to be applied. The model-error control synthesis is the preferred choice for
coping with the process disturbances.

Table 5.3 reveals the efficiency of the same controller-filter combinations as in table 5.1 (for
the full-order control). Similar performance and behavior is shown: the linear quadratic regulator
exhibits surprisingly good efficiency in damping the initial distribution, while, as expected, the
process disturbance obviously remains unaddressed (figure 5.9). The difficulty associated with the reduced-

(a) True States (b) Nominal Control

Figure 5.9: Reduced-Order Control; no Noise and no Model-Error Correction, Nm = 21, κ = 0.001

order model for Burgers’ equation as applied in this work is the appearance of numerical (model)
instabilities: section 3.3 has already illustrated the creation of over-shooting and chattering in
areas of steep descent (jump discontinuities or shocks, respectively). This becomes even worse for

less damped systems, i.e., for lower values of viscosity. But the integrative property of the linear
quadratic regulator helps again by disregarding parts of the discontinuities (a fundamental property of
the integral). In contrast to the full-order setting, the addition of a Kalman filter improves the dynamics
for both the noisy and the noise-free case. This could be explained by the additional ‘smoothing’
capability of the estimator (in the presence of numerical instabilities).

Some Remarks: The displayed results differ tremendously from the ones exhibited by
Atwell and King in [34]. Despite several trials, their results could not be reproduced, especially not
their conclusions on the poor performance in both cases, the full-order and the reduced-order LQG
control. It has to be mentioned that they use the steady-state Kalman filter in the linear quadratic
Gaussian approach, derived from the steady-state linear Riccati equation. The fact that the proof
of the existence of the steady-state solution is limited to a certain class of linear systems (being
both stable and fully observable) has been ignored; one cannot be sure that the estimation error
covariance matrix converges in the case of the nonlinear Burgers’ equation. Furthermore, their use of
the weighting matrices is more than confusing: it has been pointed out that the weighting matrix
of the process disturbance, WK, has to be optimized (in a certain sense) for the non-Gaussian case.
Despite the fact that the applied disturbance, 0.75 cos(10t), is not of unit magnitude, the product
of the disturbance gain matrix has been used as the weighting factor (being the identity matrix). Also,
their test setting did not contain any measurement noise, questioning the use of an estimator at
all. Even if not every state has been measured (which is unfortunately not clearly specified in their
work), a regular Luenberger observer should have been applied. Nevertheless, the LQG approach
has been implemented without measurement noise; but instead of choosing the measurement noise
covariance to be of very small magnitude, the identity matrix was again employed. These facts make
it particularly difficult to reproduce their results and to follow their conclusions. A similarly confusing
implementation of LQG control was described in the previous work of King and Atwell in [33]. For
comparative reasons, the true states and the nominal control for the noisy LQG control are depicted
in figure 5.10.

The addition of model-error correction suffers from the instabilities of the reduced-order
model: the differentiating character of the model-error control synthesis cannot cope with the
appearing overshoots and chattering, so that the closed-loop system becomes unstable.13 Only the
13 This appears for direct measurements in case of any of the applied viscosities.

(a) True States (b) Nominal Control

Figure 5.10: Reduced-Order Control; Noise, Kalman Filter, Nm = 21, κ = 0.001

Table 5.4: Control Performance, Reduced-Order LQR with MECS

                          Viscosity Coefficient
                    κ = 0.01           κ = 0.002          κ = 0.001
                    e        tset      e        tset      e        tset
vk = 0
  Kalman Filter     0.0207   1.99      0.0538   3.26      0.0785   -
vk ∼ N (0, 0.05)
  Kalman Filter14   0.0244   -         0.0568   -         0.0807   -

addition of the Kalman filter with its ‘smoothing property’ helps to stabilize the control. But this
only holds for more highly damped systems (κ = 0.01) where no shock region is formed (illustrated by
table 5.4). But, as has been mentioned, the test setting in this work is a worst-case approach
with one of the coarsest model truncation techniques possible. We strongly believe that the
model-error control synthesis shows superior performance (even for a much lower model order) as
soon as any advanced reduction method is applied. Actually, a further simplification of the model
could lead to a solution: the instability is due to numerical overshooting caused by the estimator’s
nonlinear reduced model. When this model is replaced by its linearization,15 no numerical chattering
should appear in the estimates, and, hence, the model-error predictor should be stabilized.

Another quite simple approach could consist of bounding the predictor’s output by an
upper and lower limit, for example based on the full-order experience (±2.5). Also, the output
of the model-error predictor could be filtered (Fourier analyzed). Again, the knowledge of the
process disturbance can be exploited: the disturbance is equally applied to every state, so that the
14 Here, a larger deviation in the performance measure between different simulation runs was observed.
15 Here, the linearization is applied at the origin, leaving only the linear part of the system in eq. (3.25) active.

Table 5.5: Control Performance, Reduced-Order LQR with ‘Linearized’ Model

                          Viscosity Coefficient
                    κ = 0.01           κ = 0.002          κ = 0.001
                    e        tset      e        tset      e        tset
vk = 0
  Kalman Filter     0.0211   2.01      0.0516   3.47      0.0826   -
vk ∼ N (0, 0.05)
  Kalman Filter     0.0247   8.46      0.0527   -         0.0741   -

previously performed summation and division by Nm constitutes a coarse filter and could possibly
lead to an improvement. Tables 5.5 through 5.7 reveal the performance results obtained for each of the three
modifications. These suggestions should only serve as first ideas within the scope of this work; many
other refinements, filter alterations, or combinations are possible. Nevertheless, the reader
should be reminded that computational efficiency is a key issue in coping with distributed parameter
systems; thus, expensive, sophisticated adaptations might work in theory but could turn out to be
infeasible.
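The ‘bounded’ and ‘filtered’ modifications can be written as one-line changes to the model-error update inside rk45dp.m (the ‘linearized’ variant instead replaces the estimator’s nonlinear reduced model by the constant matrix from alin.m, evaluated at the origin). The sketch below shows both changes; the bound of ±2.5 is taken from the full-order experience and is an assumption rather than a tuned value.

% 'Bounded' MECS: saturate the predictor output element-wise
u_mc(:,i+1) = min(max(u_mc(:,i+1), -2.5), 2.5);

% 'Filtered' MECS: the disturbance acts equally on every state, so the
% prediction is replaced by its spatial mean (the coarse filter described above)
u_mc(:,i+1) = (1/nc)*sum(u_mc(:,i+1))*ones(nc,1);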

Surprisingly, the approach via the linearized model turns out to reproduce nearly the same
results as the unmodified model-error correction: instability in the face of direct measurements and
very poor performance (partially unstable) for lower values of the viscosity. Although the bounded
version performs slightly better by being applicable to direct measurements, it shows amplifications
(instabilities) for the first few seconds before the system starts to settle. Also, previously absent
disturbances are introduced over the whole domain, creating a wavy landscape.

Tables 5.5, 5.6, and 5.7 reveal the averaging approach to be by far the best suggested refinement.
Hence, figure 5.11 exhibits the true states and the correction term for this approach in the case
of noise-free direct measurements. Additionally, figure 5.12 shows the true states, measurements,
estimates, and model-error correction for the reduced-order linear quadratic Gaussian controller in
the presence of noise. Note that the oscillations in figure 5.11 only appear at the boundaries, while
the interior of the domain is highly damped; the reasons for this effect have yet to be investigated.
Obviously, the capability to compensate individual model-errors appearing at different spatial locations
has been lost by averaging. If one wishes to preserve that property, moving averages in time could
be considered.

Table 5.6: Control Performance, Reduced-Order LQR with ‘Bounded’ MECS

                          Viscosity Coefficient
                    κ = 0.01           κ = 0.002          κ = 0.001
                    e        tset      e        tset      e        tset
vk = 0
  Direct Meas.      0.0296   -         0.0754   -         0.1549   -
  Kalman Filter     0.0207   1.99      0.0414   4.56      0.0638   -
vk ∼ N (0, 0.05)
  Direct Meas.      0.0545   -         0.0576   -         0.0594   -
  Kalman Filter     0.0334   -         0.0378   -         0.0387   -

Table 5.7: Control Performance, Reduced-Order LQR with ‘Filtered’ MECS

                          Viscosity Coefficient
                    κ = 0.01           κ = 0.002          κ = 0.001
                    e        tset      e        tset      e        tset
vk = 0
  Direct Meas.      0.0168   1.92      0.0172   2.09      0.0174   2.12
  Kalman Filter     0.0209   2.00      0.0212   2.07      0.0212   2.08
vk ∼ N (0, 0.05)
  Direct Meas.      0.0223   2.50      0.0229   2.93      0.0229   3.37
  Kalman Filter     0.0210   2.03      0.0217   2.06      0.0214   2.09

(a) True States (b) Model-Error Correction Term

Figure 5.11: Reduced-Order Control; no Noise, with ‘Filtered’ Model-Error Correction, Nm = 21, κ = 0.001

(a) True States (b) Measurements

(c) Estimates (d) Model-Error Correction Term

Figure 5.12: Reduced-Order Control; Noise, with ‘Filtered’ Model-Error Correction, Kalman Filter,
Nm = 21, κ = 0.001
Chapter 6

Conclusions

6.1 Summary and Contributions

A control approach to the problem of fluid flow has been presented. The motivating problem of flow
over a wing’s airfoil has served as the point of departure; the Navier-Stokes equations have been
identified as the corresponding physical model. Furthermore, it has been verified that the Navier-
Stokes equations arise as a special case of general continuity and the conservation principle.
Mathematical key issues have been extracted, and Burgers’ equation has been related by several
means as a one-dimensional mimicry of the associated dynamics (not only for channel flow but also
in regard to other conservation problems like turbulence and traffic flow). A benchmark problem
has been created incorporating Burgers’ equation, followed by an analytical discussion. This has
included the analytical solution for a certain class of initial conditions, as well as the general shock
and steady-state solutions.

By applying a Galerkin linear finite element method, the benchmark problem has been
converted into an ODE system. This state-space representation has been expanded into a test setting
additionally addressing process disturbance and noise. The subsequent analysis, in terms of control
engineering, has revealed key features of the system. Standard control and filter techniques have
been reviewed briefly, while model-error control synthesis has been introduced as a sophisticated
approach to robustness. Different possibilities for the implementation of model-error prediction have
been put forward. The derived methods and introduced settings have been tested numerically, and
the expected system behavior has been affirmed. In detail, the contributions of this work are:

1. Key issues of fluid flow control are identified and extended to general continuity problems.


These crucial points involve: nonlinear convection with associated dissipation, diffusion, energy
or impulse conservation, and distributed parameters.

2. Burgers’ equation is once more established in detail as a one-dimensional model equation,

reflecting the identified features of fluid flow or general continuity problems. Therefore, it is
shown that Burgers’ equation is linked to more than one specific problem. It is connected to
a variety of phenomena. Its roots in modeling turbulence, one-dimensional channel flow, and
traffic flow are reviewed.

3. A general analytical solution for Burgers’ equation with periodic boundary conditions (periodicity
in function value and flux) is derived in detail. The presented solution is limited to initial
distributions fulfilling the integral (energy) constraint $\int_0^L w_0(x)\,dx = 0$. Additionally,
the steady-state and the inviscid solutions for any type of initial value are exposed.

4. A benchmark, or model problem, is created using periodic boundary conditions (conservation)


and distributed control. Neither have been adequately addressed in previous research. The
model is embedded in an unprecedented and augmented ‘real world’ test setting, including
considerations for measurement noise (filtering challenge) and external process disturbances
(robustness). Additionally, model-order reduction is addressed.

5. Burgers’ equation (with periodic boundary conditions and distributed control) is the subject of
a detailed analysis in control terms. Exponential stability for the open-loop (unforced) case is
proven using a Lyapunov function and the Poincaré inequality (limited to the origin as target
equilibrium and initial conditions following the above integral constraint). Also, conditions

for the stability of feedback control are presented. It is shown that a feedback with a strictly
positive kernel improves the rate of convergence. An argument towards controllability is
given.

6. A counter-example to the suggested ‘Design-Then-Reduce’ approach, featured by research


at the Virginia Polytechnic Institute and State University, is revealed:1 advanced nonlinear
control techniques, such as feedback linearization, cannot be realized (technically) for high-

order models. Furthermore, the Galerkin approximation of Burgers’ class systems is shown
1 For detail, see [34].

to be applicable for input-state linearization only if fully-actuated. Hence, a ‘Reduce-Then-


Design’ sequence, with an additional robustness element, is featured.

7. Model-Error Control Synthesis is introduced for nonlinear distributed systems of Burgers’

class: it provides robustness and performance improvements in the case of model-uncertainties,


unmodeled dynamics, process disturbances, and, in its modified version, reduced-order models.
Different implementation methods are pointed out. The model-error correction is not used in
its original configuration, since it is both a regulator and an estimator. Rather, it is combined

with the extended Kalman filter, efficiently taking care of measurement noise.

8. Extensive numerical simulations are performed to validate the suggested control techniques
and approaches. Therefore, the standard linear quadratic regulator is utilized as the nominal
controller. A comprehensive Matlab code is provided for future research.

9. The model problem is tackled from a worst-case point of view: the nominal controller is a
standard, simple (state) feedback law, neglecting the nonlinear model parts in its underlying linearization.
The model-order reduction is coarsely performed by truncating a linear Galerkin finite-element
approximation. Hence, even better results are expected if more sophisticated reduction pro-
cedures are applied. The measurement noise covariance, the process disturbance’s amplitude,
and the bounds on the performance measure are chosen to exceed ‘real world’ demands.

6.2 Outlook and Future Work

Nonlinear design and fluid control are still in their infancy, with many issues still needing to be discussed
and investigated. The preceding chapters only represent a first attempt at solving (or approaching)
a certain class of nonlinear problems. Therefore, the research has been limited to Burgers’ equation as
a one-dimensional mimicry of mathematical key properties associated with general continuity
tasks (especially fluid flow). This nonlinear sub-area alone provides far too many challenges to be
addressed in this summary on future work. But even the featured topics provide many details that
may be studied more closely. Hence, this work concludes with a small outlook on related future
research.

• On Simulation Issues:
Although the numerical evaluations of the discussed techniques have been performed quite

comprehensively, they can and should be extended: different implementations of model-error
control synthesis, as introduced in section 4.4, should be compared for performance and stability.
The degree of realism could be improved by assuming a slower measurement sampling
rate, so that measurements are not available at every time-step; in a real test setting, the
computing hardware will have to be much faster than the sensor dynamics. Also, the propagation
interval has been quite small, and the integration interval should be decoupled from
the sampling interval: the time-delay of the model-error correction then becomes variable, as
previously discussed. So far, there is no stability proof for this case.

Although different initial distributions and different integration intervals not mentioned in this
work have been tested for the consistency of the presented results, an additional comprehensive
documentation is preferable. It has to be verified to what extent the optimal weighting
matrices of the Kalman filter are affected by different initial disturbances, as well as by changes
of the process noise and the nominal regulator. So as to make the design approach more suitable for
real-time application, the system should be implemented in a discrete formulation obeying the
Nyquist theorem. In this way, the implementation also serves for investigations on computational
cycles, accuracy, and load. Incorporating more sophisticated reduction methods, like
the proper orthogonal decomposition (also known as Karhunen-Loève decomposition), might
contribute to the effectiveness.

So far, the extended Kalman filter has been used; in the process, the covariance matrix P
was non-constant and had to be integrated forward in time. A simpler alternative, the linear
Kalman filter (based on the same constant linearization as the linear quadratic regulator) with
a converged steady-state covariance P, might be considered for further investigations beyond
those in chapter 5; a minimal sketch is given below.
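A minimal sketch of this alternative, assuming the Control System Toolbox function lqe and taking the process-noise input matrix to be the identity; the variable y_tilde_k, denoting the current measurement, is introduced here only for illustration.

% Constant-gain (steady-state) Kalman filter based on the same constant
% linearization Alin used by the LQR design; lqe returns the steady-state
% gain and the converged error covariance
[L_kal, P_ss] = lqe(Alin, eye(nsm), Cm, Q_kal, R_kal);
% Constant-gain observer propagation (continuous form)
x_hat_dot = Alin*x_hat + bm*u + L_kal*(y_tilde_k - Cm*x_hat);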

• On MECS and Control Design for Nonlinear Problems:


The control methodologies applied in this work partially still have to be expanded and verified.
Although experience in different research has shown that the separation of control and estimation
works sufficiently well for nonlinear systems, a comprehensive proof is still lacking. Especially
due to the complexity of nonlinear systems, it can be a very tedious task to derive
a combined estimation-regulation technique based on optimality. Thus, designing an optimal
control law separately from an optimal filter might be the only way at present to handle a

global system setting. For linear systems it has been consistently proven that this break-down
indeed leads to an overall optimal solution. This is known as the separation or certainty-
equivalence principle.2 One might argue that, by continuity, this principle has to be valid
for nonlinear systems in the case of small perturbations (around the point of linearization);

but there is still the lack of an overall derivation or proof. Small perturbations are hardly
guaranteed when operating nonlinear systems; also, higher order terms may become crucial
when the linearization reveals a pole at the origin.

In the presented work, as well as in previous research on model-error control synthesis, the
states and the control input are assumed to be unbounded. But in real systems there are
always bounds and constraints on both, whether it be saturation of the control input
or excesses of the mechanical load, et cetera. Hence, a derivation of the optimal model-error
correction from a cost functional subject to constraints is a necessity.

Advanced mathematical tools from differential topology have recently become the center of interest.
But incorporating these tools is very demanding in both derivation and design pro-
cedure, as well as in their implementation and realization; the extraordinary computational
need for distributed systems demands simple or efficient design if these tools are to be ap-
plied in real time. Therefore, a comprehensive comparison of differential topology design with
reduced-order design (for certain classes of system) would be recommended.

• On Fluid Control and Burgers’ Equation:


Although advanced nonlinear techniques, such as sliding mode, H∞, and MinMax control,
should be investigated, the complexity and order of spatially spread parameter systems might
resist: the objective is still real-time implementation; thus, the computational speed is an
issue. Sophisticated a priori reduction procedures (such as the Karhunen-Loève decomposition)
employed by previous research can partially address the computational load. Also, the functional
gain resulting from the linear quadratic regulator’s solution (shown in figure 4.5) reveals
that the states (spatial locations) have somehow been decoupled: each control input mostly
depends on its current location, while the gain associated with the other states decreases
exponentially. This gives reason to introduce a purely decoupled control where each actuator
location would be realized by an independent PID (proportional-integral-derivative) controller.


2 For a detailed approach, see [57].

Hence, the beauty of parallel computing could be harnessed. The model-error control synthesis
might be powerful enough to cope with the residual error and the stability issue; to this end, the
one-step-ahead prediction is quite computationally efficient.3 When advancing, the nominal
controller should also be examined, i.e., an extended LQR approach would be the next step.
The same principle as in the extended Kalman filter can be applied: the linearization is computed
at each available estimate, together with the corresponding steady-state matrix Riccati solution
(a minimal sketch is given below).
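A minimal sketch of this extension (not implemented in this work), reusing alin.m and the LQR weighting matrices from plant.m; the gain is recomputed from a fresh linearization about every new estimate.

% Re-linearize about the current estimate and recompute the steady-state
% Riccati solution; the resulting gain replaces the constant LQR gain
A_hat  = alin(nsm, width, kappa, x_hat_plus);
gain   = lqr(A_hat, bm, Q_lqr, R_lqr);
u(:,i) = -gain*x_hat_plus;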

A controllability condition for the under-actuated case might still hold for several Galerkin
approaches (other than the one used in this work). Also, an observability proof is still lacking,
as is a correct mathematical representation of discrete control.
The stabilization objective has been tackled based on the preservation of, or the attenuation
to, equilibria (steady-state solutions). If it were possible to sustain unstable modes of the
open-loop system, the possibilities of imposing some desired dynamics would be tremendously
augmented. The system has been assumed to be autonomous, but in reality the Reynolds
number is not steady; hence, a non-autonomous control setting, with a varying coefficient κ,
has to be covered.

The X-21 program did not succeed due to the clogging of the suction rails, but it is conceivable
that pure blowing might prevent this failure. Comprehensive and advanced control design,
based on the dynamic equations, could offer a solution: blowing at specific, regulated points in
space could possibly create the same desired effects as suction somewhere else. In order to
approach this idea, the first step to consider would include a constrained model-error
correction accompanied by a bounded nominal control.

In a longer time frame, the advancement to an experimental setup of real airfoil flow should be
the ultimate goal. Therefore, research has to be developed that advances from a one-dimensional
mathematical model to (at least) the two-dimensional Navier-Stokes equations. Such a proposal
would require the efforts of different fields (e.g., MEMS or piezo technology for the
actuators; parallel computing; hybrid flow techniques; feasible control algorithms; et cetera).
Thus, as it has been pointed out at the very beginning of this work, the search for the ‘holy
grail of Aerodynamics’ turns out (not surprisingly) to be a crusade.

3 As implemented here: five matrix multiplications, five matrix additions, and five vector multiplications, each in

the order of the system.


Appendix

Definitions

Definition (Lie Derivative). Let h : Rn → R be a smooth scalar function, and f : Rn → Rn be a


smooth vector field on Rn , then the Lie derivative of h with respect to f is a scalar function defined
by Lf h = ∇h f .

Definition (Lie Bracket). Let f and g be two vector fields on Rn . The Lie bracket of f and g is a
third vector field defined by
[f , g] = ∇g f − ∇f g

Definition (Involutive). A linearly independent set of vector fields {f1 , f2 , . . . , fm } is said to be


involutive if, and only if, there are scalar functions αijk : Rn → R such that

[fi , fj ] (x) = Σ_{k=1}^{m} αijk (x) fk (x)    ∀ i, j
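For concreteness, the first two definitions can be evaluated directly; the short sketch below assumes the Symbolic Math Toolbox, and the fields f, g and the function h are arbitrary illustrative examples, not objects taken from this work.

% Lie derivative and Lie bracket following the definitions above
syms x1 x2 real
x = [x1; x2];
f = [x2; -sin(x1)];                              % example smooth vector field
g = [0; 1];                                      % example smooth vector field
h = x1^2 + x2^2;                                 % example smooth scalar function
Lf_h    = jacobian(h, x)*f;                      % L_f h = (dh/dx) f
bracket = jacobian(g, x)*f - jacobian(f, x)*g;   % [f, g] = (dg/dx) f - (df/dx) g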

Source Code

Matlab Program 1: main.m

function [X,perf,tset,u,y_tilde,u_mc]=main(control,filter,mecs,inp,...
noise,std,kappa)

%__________________________________________________________________________
%
% Master-File for the (Burgers) Benchmark Control Problem
%__________________________________________________________________________
%
% The function’s call is [X,perf,tset,u,y_tilde,u_mc] = main(control,


% filter,mecs,inp,noise,std,kappa) where the parameters are defined as


% follows:
% control: ’opn’ - open loop simulation
% ’lqr’ - linear quadratic regulator
% filter: ’dir’ - direct state measurements (no filter)
% ’kal’ - extended kalman filter for state estimates
% mecs: 0 - no model error compensation
% 1 - model error compensation using MECS
% inp: 0 - full-order control design using identity matrices
% for both, control input and measurement output
% 1 - reduced model order control with continuous
% distributed control inputs using stepfunctions
% noise: 0 - no measurement noise (perfect measurements)
% 1 - additive white gaussian measurement noise
% std: - measurement noise standard deviation (scalar)
% kappa: - viscosity (inverse Reynolds number analogue)
%
% The function returns plots of the States, Measurements, Estimates,
% Model-Error Correction and Nominal Control. Additionally the following
% values and matrices, respectively, are provided:
% X - Matrix of the true states and estimates in space and
% time; each column consist of the plant’s states
% followed by the estimators output at a specific time
% u - Matrix of the nominal control in space (column) and
% time (row)
% u_mc - Matrix of the Model-Error Correction in space (column)
% and time (row)
% y_tilde - Matrix of the measurements in space (column) and time
% (row)
% perf - Performance index (averaged deviation from zero)
% tset - Settling time (states remain within +/- 0.055)

% Functions:
% - Initializing all states
% - Setting variables for time and domain
% - Specifying number of gridpoints plant, number of gridpoints estimator,
% number of measurement points, number of control points (inputs),
% values of kappa
% - Specifying initial condition
% - Calling system matrix computation (matrices.m)
% - Executing simulation dynamics (plant.m)
% - Result summary and output

clear global

disp(’Initializing...’)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Specifiying Space and Time Domain %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

global width time nt nsp nsm ns nc


width = 1; % space-domain length
time = 10; % time-domain interval (seconds)
nt = 5001; % number of gridpoints in time-domain,
% here, intervals of 2 milli-seconds
switch inp
case {0}
nsp = 101; % number of spatial gridpoints, plant
nsm = 101; % number of spatial gridpoints, model
ns = 101; % number of sensor locations
nc = 101; % number of control inputs
case {1}
nsp = 101; % number of spatial gridpoints, plant
nsm = 21; % number of spatial gridpoints, model
ns = 21; % number of sensor locations
nc = 21; % number of control inputs
end
% The number of gridpoints, sensor locations and control inputs has to
% be chosen in such a way, that one is an subset of the other, so that
% interpolation is avoided (spatial points have to coincide).

%%%%%%%%%%%%%%%%%%%%%
% Initial Condition %
%%%%%%%%%%%%%%%%%%%%%

global space_p space_m space_t


space_p = linspace(0,width,nsp)’; % spatial domain plant
space_m = linspace(0,width,nsm)’; % spatial domain estimator
x = zeros(nsp,1); % initialize
x_hat = zeros(nsm,1);

% Initial disturbances, no initial perturbation for estimates !


x(1:(0.5*(nsp-1)),1) = 0.5*sin(space_p(1:(0.5*(nsp-1)))*2*pi);
x_hat(1:(0.5*(nsm-1)),1) = 0.5*sin(space_m(1:(0.5*(nsm-1)))*2*pi);

space_t = linspace(0,time,nt); % time domain


equ = zeros(nsm,1); % equilibrium for the linearized
% system

disp(’Done.’)

disp(’Computing System Matrices...’)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Computing System Matrices for Plant and Estimator %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Calling the subroutine matrices.m for the computation of the FE


% system matrices, as well as alin.m for the linearized FE matrices
global Mp_inv Mm_inv Ap Kp Np Am Km Nm bp bm Cp Cm
[Kp,Mp,Np] = feval(’matrices’,nsp,width);

[Km,Mm,Nm] = feval(’matrices’,nsm,width);
Mp_inv = inv(Mp);
Mm_inv = inv(Mm);
Ap = -Mp_inv*Kp;
Am = -Mm_inv*Km;
Alin=alin(nsm,width,kappa,equ);

% Computing the input and output matrices for full-order and reduced-
% order model, respectively
switch inp
case {0}
bp = eye(nsp);
bm = eye(nsm);
Cp = eye(nsp);
Cm = eye(nsm);
case {1}
bm = eye(nsm);
bp = zeros(nsp,nc);
fac = (nsp-1)/(ns-1);
for i = 1:nc-1
bp(((i-1)*fac)+2:i*fac,i)=ones(4,1);
bp((i-1)*fac+1,i)= 0.5;
bp((i*fac)+1,i)=0.5;
end
Cm = eye(nsm);
Cp = zeros(ns,nsp);
for i = 0:(ns-1)
Cp(i+1,(i*fac)+1) = 1;
end
end

disp(’Done.’)

%%%%%%%%%%%%%%%%%%%%%%
% Case Determination %
%%%%%%%%%%%%%%%%%%%%%%

global Flag_mec Flag_noise

% Processing of the passed input parameters (controller type, estimator


% type, etc.)
if filter==’dir’
dec2 = 1;
elseif filter==’kal’
dec2 = 2;
else
disp(’Error: Unknown Estimator’);
end
if control==’opn’
dec1 = 1;
dec2 = 0;

elseif control==’lqr’
dec1 = 2;
elseif control==’lin’
dec1 = 3;
else
disp(’Error: Unknown Controller’);
end
switch mecs
case {0}
Flag_mec = 0;
case {1}
Flag_mec = 1;
otherwise
disp(’Error: Wrong Syntax’);
end
switch noise
case {0}
Flag_noise = 0;
case {1}
Flag_noise = std;
otherwise
disp(’Error: Wrong Syntax’);
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% A Priori Weighting Matrices, Kalman Filter %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

global R_kal Q_kal

R_kal=0.0025*eye(nsm);
Q_kal=0.025*eye(nsm);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Starting Dynamic System Routine %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

disp(’Starting Dynamic System Routine’)

[X,y_tilde,u,u_mc]=plant(kappa,x,x_hat,Alin,dec1,dec2,inp);

disp(’Simulation Finished, Processing Output’)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Performance Indices and Output %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

perf = (1/(nsp*nt))*sum(sum(abs(X(1:nsp,:))));

tset = time;
check = sum((X(1:nsp,:) > 0.055) + (X(1:nsp,:) < -0.055));

for i = 1:nt
if sum(check(i:end)) == 0
tset = (i-1)*(time/(nt-1));
break
end
end

switch dec1
case {1}
figure(1)
mesh(space_p,space_t,X(1:nsp,:)’)
xlabel(’Space’)
ylabel(’Time’)
zlabel(’True States’)
view([37.5 40])
otherwise
figure(1)
subplot(2,2,1)
mesh(space_p,space_t,X(1:nsp,:)’)
xlabel(’Space’);
ylabel(’Time’);
title(’Kappa = 0.001’);
view([37.5 40]);
subplot(2,2,2)
mesh(space_m,space_t,y_tilde’)
xlabel(’Space’);
ylabel(’Time’);
title(’Measurements’);
view([37.5 40]);
subplot(2,2,3)
mesh(space_m,space_t,X(nsp+1:nsp+nsm,:)’)
xlabel(’Space’);
ylabel(’Time’);
title(’Estimates’);
view([37.5 40]);
subplot(2,2,4)
mesh(space_m,space_t(2:end),u_mc(:,2:end)’)
xlabel(’Space’);
ylabel(’Time’);
title(’Model Error’);
view([37.5 40]);
figure(2)
mesh(space_m,space_t,u’)
xlabel(’Space’);
ylabel(’Time’);
zlabel(’Nominal Control’);
view([37.5 40]);
end

disp(’Performance Measure’)
perf

disp(’Settling Time’)
tset

Matlab Program 2: matrices.m

function [K,M,N] = matrices(n,L)

%__________________________________________________________________________
%
% Computation of the Finite Element System Matrices
%__________________________________________________________________________

% This function provides the stiffness and mass matrices for the semi-
% discret Galerkin finite element approximation of Burgers’ equation, as
% well as the cofactors of the nonlinear part

% n := number of discretization gridpoints of the spatial domain


% L := length of the spatial domain

h = L/n; % here, n has to be used instead of (n-1) since one degree


% of freedom was lost due to the periodic boundary
% conditions

M = zeros(n,n);
K = zeros(n,n);
N = zeros(3,5);

N = [-1/6 -1/6 0 1/6 1/6];

M(1,1) = 2/3*h;
M(1,2) = 1/6*h;
M(1,n) = 1/6*h;
M(n,n) = 2/3*h;
M(n,n-1) = 1/6*h;
M(n,1) = 1/6*h;

K(1,1) = 2/h;
K(1,2) = -1/h;
K(1,end) = -1/h;
K(n,n) = 2/h;
K(n,n-1) = -1/h;
K(n,1) = -1/h;

for i = 2:n-1
M(i,i) = 2/3*h;
M(i,i-1) = 1/6*h;

M(i,i+1) = 1/6*h;
K(i,i) = 2/h;
K(i,i-1) = -1/h;
K(i,i+1) = -1/h;
end;

Matlab Program 3: alin.m

function [Alin]=alin(n,L,Kappa,w)

%__________________________________________________________________________
%
% Computation of the Linearized Finite Element System Matrices
%__________________________________________________________________________

h = L/n; % n instead of (n-1) has to be used since one degree of


% freedom was lost due to the periodic boundary conditions

M = zeros(n,n);
K = zeros(n,n);
Nlin = zeros(n,n);

M(1,1) = 2/3*h;
M(1,2) = 1/6*h;
M(1,n) = 1/6*h;
M(n,n) = 2/3*h;
M(n,n-1) = 1/6*h;
M(n,1) = 1/6*h;

K(1,1) = 2/h;
K(1,2) = -1/h;
K(1,end) = -1/h;
K(n,n) = 2/h;
K(n,n-1) = -1/h;
K(n,1) = -1/h;

Nlin(1,1) = -(1/6)*w(n)+(1/6)*w(2);
Nlin(1,2) =(1/6)*w(1)+(1/3)*w(2);
Nlin(1,end) =-(1/3)*w(n)-(1/6)*w(1);
Nlin(n,n) =-(1/6)*w(n-1)+(1/6)*w(1);
Nlin(n,n-1) = -(1/3)*w(n-1)-(1/6)*w(n);
Nlin(n,1) = (1/6)*w(n)+(1/3)*w(1);

for i = 2:n-1
M(i,i) = 2/3*h;
M(i,i-1) = 1/6*h;

M(i,i+1) = 1/6*h;
K(i,i) = 2/h;
K(i,i-1) = -1/h;
K(i,i+1) = -1/h;
Nlin(i,i) = -(1/6)*w(i-1)+(1/6)*w(i+1);
Nlin(i,i+1) = (1/6)*w(i)+(1/3)*w(i+1);
Nlin(i,i-1) = -(1/3)*w(i-1)-(1/6)*w(i);
end;

Alin=-inv(M)*Nlin-Kappa*inv(M)*K;

Matlab Program 4: plant.m

function [X,y_tilde,u,u_mc]=plant(kappa,x,x_hat,Alin,dec1,dec2,inp)

%__________________________________________________________________________
%
% Auxiliary Function for the Overall (Closed) Test Setting
%__________________________________________________________________________

global nt nsp nsm ns nc time


global space_p space_m space_t
global bp bm Cp Cm

% Auxiliary masterfunction to distinguish the different simulation settings


% (open- vs. closed-loop, controller, estimator) as well as to compute
% necessary weighting matrices for the LQR and the MECS. Values are passed
% to the systems integrator rk45dp.m

switch dec1
% _________________________________________________________________________
%
% Open loop computation
% _________________________________________________________________________

case {1}
disp(’Type: Open Loop Simulation’)

X_start = x;
gain = zeros(nsm);
[X] = rk45dp(space_t,X_start,Alin,kappa,dec1,dec2,gain);
y_tilde = X;
u = zeros(nc,nt);
u_mc = zeros(nc,nt);

% _________________________________________________________________________

%
% LQR Controller
% _________________________________________________________________________

case {2}
disp(’Type: Linear Quadratic Regulator’)

% LQR Gain Matrix Computation


Q_lqr = eye(nsm);
switch inp
case{0}
Q_lqr(71:90,71:90)= 10*eye(20);
case{1}
Q_lqr(14:18,14:18)=10*eye(5);
end
R_lqr = eye(nc);
gain = lqr(Alin,bm,Q_lqr,R_lqr);

% Model-Error Control Gain Matrix Computation


W_mecs = 10^(-10)*eye(ns);
R_mecs = eye(ns);
global del_t Mult
del_t=time/(nt-1);
Mult = -inv(del_t^2*inv(R_mecs)+W_mecs)*del_t*inv(R_mecs);

switch dec2

%%%%%%%%%%%%%%%%%%%%%%%%%
% Direct measurements %
%%%%%%%%%%%%%%%%%%%%%%%%%

case {1};
disp(’Direct Measurements’)

X_start = [x; x_hat];


[X,y_tilde,u,u_mc] = rk45dp(space_t,X_start,Alin,kappa,...
dec1,dec2,gain,inp);

%%%%%%%%%%%%%%%%%%%%%%%%%%
% Extended Kalman Filter %
%%%%%%%%%%%%%%%%%%%%%%%%%%

case {2}
disp(’Estimates via Kalman Filter’)

X_start = [x; x_hat];


[X,y_tilde,u,u_mc] = rk45dp(space_t,X_start,Alin,kappa,...
dec1,dec2,gain,inp);
end
end

Matlab Program 5: rk45dp.m

function [X,y_tilde,u,u_mc] = rk45dp(space_t,X_start,Alin,kappa,dec1,...


dec2,gain,inp)

%__________________________________________________________________________
%
% Main System File
%__________________________________________________________________________

% Dynamic system master file, calling the ode23 integrator for the
% propagation of the system’s dynamics (integration) within the time
% discretization intervals; it produces measurements at the given time
% intervals using the provided measurement error standard deviation; the
% resulting update relations for the nominal control, the estimates, the
% model-error correction and the estimation error covariance matrix are
% computed if indicated;
% the integrator calls the right-hand side computation in dynamics.m

global time nt nsp nsm ns nc


global Cp Cm bm
global Mult del_t R_kal
global Flag_mec Flag_noise

h_t=time/(nt-1); % Integration stepsize


X(:,1)=X_start; % Processing initial values

wait=waitbar(0,’Please wait’);

% Tolerance settings for the numerical integrator


options = odeset(’RelTol’,1e-6,’AbsTol’,1e-8);

switch dec2
case{0}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Open Loop Integration %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

loop = length(space_t)-1;

for i=1:loop
% Loop starting at t=0 and going to t=time-h_t; in the last iteration
% the entry for t=time will be computed
t=space_t(1)+(i-1)*h_t;

% Forward integration
[T,Y]=ode23(’dynamics’,[t t+h_t],X(:,i),options,kappa,dec1,dec2);
X(:,i+1) = Y(end,:)’;
end

case{1}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Direct Measurements %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

x_hat=X(nsp+1:nsp+nsm,1); % temporary storage variable


y_hat=Cm*x_hat; % initial output estimate
y_tilde(:,1)=Cp*X_start(1:nsp,1); % initial measurement
u=zeros(nc,nt); % initialize nominal control
u_mc=zeros(nc,nt); % initialize model-error correction

loop = length(space_t)-1;

for i=1:loop
% Loop starting at t=0 and going to t=time-h_t; in the last iteration
% the entry for t=time will be computed
t=space_t(1)+(i-1)*h_t;

% Nominal control
u(:,i) = - gain*x_hat;

% Forward integration
[T,Y]=ode23(’dynamics’,[t t+h_t],X(:,i),options,kappa,dec1,dec2,...
u(:,i),u_mc(:,i));
X(:,i+1) = Y(end,:)’;

% Creating measurements
y_tilde(:,i+1) = Cp*X(1:nsp,i+1) + Flag_noise*randn(ns,1);

% Model-Error correction
f_current = feval(@dynamics,t,X(:,i),[],kappa,dec1,dec2,u(:,i),...
u_mc(:,i));
u_mc(:,i+1) = Flag_mec*Mult*(del_t*f_current(nsp+1:nsm+nsp) - ...
y_tilde(:,i+1) + x_hat);

% Averaging of the model-error for the reduced-order model


if inp == 1
u_mc(:,i+1) = (1/nc)*sum(u_mc(:,i+1))*ones(nc,1);
end

% Estimate update
x_hat= y_tilde(:,i+1);

% Storage
X(nsp+1:nsp+nsm,i+1) = x_hat;

end

case {2}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Extended Kalman Filter %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

x_hat_plus=X(nsp+1:nsp+nsm,1); % initial state estimate


x_hat_minus=X(nsp+1:nsp+nsm,1); % initial state estimate
y_hat_minus=Cm*x_hat_plus; % initial output estimate
y_tilde(:,1)=Cp*X(1:nsp,1); % initial measurement
u=zeros(nc,nt); % initialize nominal control
u_mc=zeros(nc,nt); % initialize model-error correction

P_minus=eye(nsm)*10^(-5); % initialize estimation error


% covariance matrix
Kal_gain=zeros(ns,nc); % initialize Kalman gain
Kal_gain=P_minus*Cm’*inv(Cm*P_minus*Cm’+R_kal);

p_vec = reshape(P_minus,nsm*nsm,1);

X_current = [X_start; p_vec]; % temporary storage variable

loop = length(space_t)-1;

for i=1:loop
% Loop starting at t=0 and going to t=time-h_t; in the last iteration
% the entry for t=time will be computed
t=space_t(1)+(i-1)*h_t;

% Nominal control
u(:,i) = - gain*x_hat_plus;

% Forward integration
[T,Y]=ode23(’dynamics’,[t t+h_t],X_current,options,kappa,dec1,...
dec2,u(:,i),u_mc(:,i));
X_new = Y(end,:)’;

% Extracting the propagation


x_hat_minus = X_new(nsp+1:nsp+nsm);
P_minus = reshape(X_new(nsp+nsm+1:nsp+nsm+nsm*nsm),nsm,nsm);

% Update of Kalman gain and estimation error covariance


Kal_gain = P_minus*Cm’*inv(Cm*P_minus*Cm’+R_kal);
P_plus = (eye(nsm)-Kal_gain*Cm)*P_minus;

% Creating measurement
y_tilde(:,i+1) = Cp*X_new(1:nsp) + Flag_noise*randn(ns,1);

% Estimate update
x_hat_plus = x_hat_minus + Kal_gain*(y_tilde(:,i+1)-...
Cm*x_hat_minus);

% Storage and loop update


X_current = [ X_new(1:nsp) ; x_hat_plus ; ...
reshape(P_plus,nsm*nsm,1) ];
X(:,i+1) = [ X_new(1:nsp) ; x_hat_plus ];

% Model-Error correction
f_current = feval(@dynamics,t,[X(:,i); zeros(nsm*nsm,1)],[],...
kappa,dec1,dec2,u(:,i),u_mc(:,i));
u_mc(:,i+1) = Flag_mec*Mult*(del_t*f_current(nsp+1:nsm+nsp) -...
Cm * X(nsp+1:nsp+nsm,i+1) + Cm* X(nsp+1:nsp+nsm,i));

% Averaging of the model-error for the reduced-order model


if inp == 1
u_mc(:,i+1) = (1/nc)*sum(u_mc(:,i+1))*ones(nc,1);
end

end
end

close(wait)

Matlab Program 6: dynamics.m

function [dyn_new]=dynamics(t,dyn,FLAG,kappa,dec1,dec2,u,u_mc)

%__________________________________________________________________________
%
% Plant and Estimator Dynamics (Right-Hand Side of the ODE System)
%__________________________________________________________________________

global width nsp nsm ns nc time


global Mp_inv Mm_inv Ap Np Am Nm bp bm
global Q_kal

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Plant Dynamics %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

x = dyn(1:nsp);
nonlin_x(:,1) = [x(end);x(1:nsp-1)].*[x(end);x(1:nsp-1)];
nonlin_x(:,2) = [x(end);x(1:nsp-1)].*x;
nonlin_x(:,3) = x.*x;

nonlin_x(:,4) = x.*[x(2:nsp);x(1)];
nonlin_x(:,5) = [x(2:nsp);x(1)].*[x(2:nsp);x(1)];

nonlinterm = nonlin_x*Np’;
nonlinterm(1) = nonlin_x(1,:)*Np’;
nonlinterm(nsp) = nonlin_x(nsp,:)*Np’;

switch dec1
%%%%%%%%%%%%%%%%%%%%%%%%
% Open Loop Simulation %
%%%%%%%%%%%%%%%%%%%%%%%%

case {1}
x_new = kappa*Ap*x - Mp_inv*nonlinterm + 0.75*cos(10*t);
dyn_new = x_new;
waitbar(t/time);

otherwise

x_hat = dyn(nsp+1:nsp+nsm);
nonlin_x_hat(:,1) = [x_hat(end);x_hat(1:nsm-1)].*...
[x_hat(end);x_hat(1:nsm-1)];
nonlin_x_hat(:,2) = [x_hat(end);x_hat(1:nsm-1)].*x_hat;
nonlin_x_hat(:,3) = x_hat.*x_hat;
nonlin_x_hat(:,4) = x_hat.*[x_hat(2:nsm);x_hat(1)];
nonlin_x_hat(:,5) = [x_hat(2:nsm);x_hat(1)].*...
[x_hat(2:nsm);x_hat(1)];

nonlinterm_x_hat = nonlin_x_hat*Nm’;
nonlinterm_x_hat(1) = nonlin_x_hat(1,:)*Nm’;
nonlinterm_x_hat(nsm) = nonlin_x_hat(nsm,:)*Nm’;

switch dec2
%%%%%%%%%%%%%%%%%%%%%%%%
% Direct Measurements %
%%%%%%%%%%%%%%%%%%%%%%%%
case {1}
x_new = kappa*Ap*x - Mp_inv*nonlinterm + bp*u +...
0.75*cos(10*t) - bp*u_mc;
x_hat_new = kappa*Am*x_hat - Mm_inv*nonlinterm_x_hat +...
bm*u - bm*u_mc;
dyn_new = [x_new; x_hat_new];
waitbar(t/time);

%%%%%%%%%%%%%%%%%
% Kalman Filter %
%%%%%%%%%%%%%%%%%
case {2}
x_new = kappa*Ap*x - Mp_inv*nonlinterm + bp*u +...
0.75*cos(10*t) - bp*u_mc;
x_hat_new = kappa*Am*x_hat - Mm_inv*nonlinterm_x_hat +...

bm*u - bm*u_mc;

F_lin = alin(nsm,width,kappa,x_hat);
P = reshape(dyn(nsp+nsm+1:nsp+nsm+nsm*nsm),nsm,nsm);
P_new = F_lin*P + P*F_lin’ + Q_kal;
p_new = reshape(P_new,nsm*nsm,1);

dyn_new = [x_new ; x_hat_new ; p_new];


waitbar(t/time);
end
end

Bibliography

[1] Joslin, R. D., “Overview of Laminar Flow Control,” Tech. Rep. NASA/TP-1998-208705, NASA,
October 1998.

[2] Chambers, J. R., “Innovation in Flight: Research of the NASA Langley Research Center on
Revolutionary Advanced Concepts for Aeronautics,” Tech. Rep. NASA SP-2005-4539, NASA,
August 2005.

[3] Dryden, H. L., “Recent Advances in the Mechanics of Boundary Layer Flow,” Advances in
Applied Mechanics, Vol. 1, 1948, pp. 1–40.

[4] Braslow, A., A History of Suction-Type Laminar-Flow Control with Emphasis on Flight Re-
search, No. 13 in Monographs in Aerospace History, NASA History Division, Washington,
1999.

[5] Sastry, S., Nonlinear Systems: Analysis, Stability and Control , Vol. 10 of Interdisciplinary
Applied Mathematics, Springer, New York, 1999.

[6] Isidori, A., Nonlinear Control Systems, Communication & Control Engineering Series, Springer,
London, 3rd ed., 1995.

[7] Isidori, A., Nonlinear Control Systems II , Springer, London, 1999.

[8] Slotine, J. E. and Li, W., Applied Nonlinear Control , Prentice Hall, Upper Saddle River, NJ,
1991.

[9] Byrnes, C. I., Gilliam, D. S., and He, J., “Root-Locus and Boundary Feedback Design for a
Class for Distributed Parameter Systems,” SIAM Journal on Control and Optimization, Vol. 32,
No. 5, 1994, pp. 1364–1427.


[10] Zuazua, E., “Controllability of Partial Differential Equations and its Semi-Discrete Approxima-
tions,” Discrete and Continuous Dynamical Systems, Vol. 8, No. 2, 2002, pp. 469–513.

[11] Kazantzis, N. and Demetriou, M. A., “Singular Control-Invariance PDEs for Nonlinear Sys-
tems,” SIAM: Multiscale Modelling and Simulation, Vol. 3, No. 4, 2005, pp. 731–748.

[12] Burns, J. A. and Kang, S., “A Control Problem for Burgers’ Equation with Bounded In-
put/Output,” Nonlinear Dynamics, Vol. 2, No. 4, 1991, pp. 235–262.

[13] Kang, S., Ito, K., and Burns, J. A., “Unbound Observation and Boundary Control Problems
for Burgers’ Equation,” Proceedings of the 30th Conference on Decision and Control, Brighton,
England , IEEE, December 1991, pp. 2687–2692.

[14] Byrnes, C. I. and Gilliam, D. S., “Boundary Feedback Stabilization of a Controlled Viscous
Burgers’ Equation,” Proceedings of the 31st Conference on Decision and Control, Tucson, AZ ,
IEEE, December 1992, pp. 803–808.

[15] Ito, K. and Kang, S., “A Dissipative Feedback Control Synthesis for Systems Arising in Fluid
Dynamics,” SIAM Journal on Control and Optimization, Vol. 32, No. 3, 1994, pp. 831–854.

[16] Gilliam, D. S., Lee, D., Martin, C. F., and Shubov, V. I., “Turbulent Behaviour for a Boundary
Controlled Burgers’ Equation,” Proceedings of the 33rd Conference on Decision and Control,
Lake Buena Vista, FL, IEEE, December 1994, pp. 311–315.

[17] Ly, H. V., Mease, K. D., and Titi, E. S., “Distributed and Boundary Control of the Viscous
Burgers’ Equation,” Numerical Functional Analysis and Optimization, Vol. 18, No. 1-2, 1997,
pp. 143–188.

[18] Krstić, M., “On Global Stabilization of Burgers’ Equation by Boundary Control,” Systems and
Control Letters, Vol. 37, 1999, pp. 123–141.

[19] Byrnes, C. I., Gilliam, D. S., and Shubov, V. I., “Semiglobal Stabilization of a Boundary
Controlled Viscous Burgers’ Equation,” Proceedings of the 38th Conference on Decision and

Control, Phoenix, AZ , IEEE, December 1999, pp. 680–681.

[20] Balogh, A. and Krstić, M., “Burgers’ Equation with Nonlinear Boundary Feedback: H 1 Stabil-

ity, Well-Posedness and Simulation,” Mathematical Problems in Engineering, Vol. 6, No. 2-3,
2000, pp. 189–200.

[21] Liu, W.-J. and Krstić, M., “Backstepping Boundary Control of Burgers’ Equation with Actuator
Dynamics,” Systems and Control Letters, Vol. 41, 2000, pp. 291–303.

[22] Liu, W.-J. and Krstić, M., “Adaptive Control of Burgers’ Equation with Unknown Viscosity,”

International Journal of Adaptive Control and Signal Processing, Vol. 15, 2001, pp. 745–766.

[23] Burns, J. A., Zietsman, L., and Myatt, J. H., “Boundary Layer Control for the Viscous Burg-
ers’ Equation,” Proceedings of the International Conference on Control Applications, Glasgow,

Scotland , IEEE, September 2002, pp. 548–553.

[24] Smaoui, N., “Analyzing the Dynamics of the Forced Burgers’ Equation,” Journal of Applied
Mathematics and Stochastic Analysis, Vol. 13, No. 3, 2000, pp. 269–285.

[25] Smaoui, N. and Belgacem, F., “Connections between the Convective Diffusion Equation and the
Forced Burgers’ Equation,” Journal of Applied Mathematics and Stochastic Analysis, Vol. 15,
No. 1, 2002, pp. 53–69.

[26] Smaoui, N., “Boundary and Distributed Control of the Viscous Burgers’ Equation,” Journal of
Computational and Applied Mathematics, Vol. 182, 2005, pp. 91–104.

[27] Smaoui, N., Zribi, M., and Almulla, A., “Sliding Mode Control of the Forced Generalized
Burgers’ Equation,” IMA Journal of Mathematical Control and Information, Vol. 23, 2006,
pp. 301–323.

[28] King, B. B., “Representation of Feedback Operators for Hyperbolic Partial Differential Equation
Control Problems,” Computation and Control IV, Birkhäuser, Boston, MA, 1995, pp. 57–74.

[29] Faulds, A. L. and King, B. B., “Sensor Location in Feedback Control of Partial Differential
Equation Systems,” Proceedings of the International Conference on Control Applications, An-
chorage, AK , IEEE, September 2000, pp. 536–541.

[30] Chambers, D. H., Adrian, R. J., Moin, P., Stewart, D. S., and Sung, H. J., “Karhunen-Loève
Expansion of Burgers’ Model of Turbulence,” Physics of Fluids, Vol. 31, No. 9, 1988, pp. 2573–
2582.

[31] Chatterjee, A., “An Introduction to the Proper Orthogonal Decomposition,” Current Science,
Vol. 78, No. 7, 2000, pp. 808–817.

[32] Kunisch, K. and Volkwein, S., “Control of the Burgers Equation by a Reduced-Order Approach
Using Proper Orthogonal Decomposition,” Journal of Optimization Theory and Applications,
Vol. 102, No. 2, 1999, pp. 345–371.

[33] Atwell, J. A. and King, B. B., “Proper Orthogonal Decomposition for Reduced Basis Feedback
Controllers for Parabolic Equations,” Mathematical and Computer Modelling, Vol. 33, 2001,
pp. 1–19.

[34] Atwell, J. A., Borggaard, J. T., and King, B. B., “Reduced Order Controllers for Burgers’ Equa-
tion with a Nonlinear Observer,” International Journal of Applied Mathematics and Computer
Science, Vol. 11, No. 6, 2001, pp. 1311–1330.

[35] Atwell, J. A. and King, B. B., “Reduced Order Controllers for Spatially Distributed Systems
via Proper Orthogonal Decomposition,” SIAM Journal on Scientific Computing, Vol. 26, No. 1,
2004, pp. 128–151.

[36] Shampine, L. F., “Implementation of Rosenbrock Methods,” ACM Transactions on Mathemat-


ical Software, Vol. 8, No. 2, 1982, pp. 93–113.

[37] Gorguis, A., “A Comparison between Cole-Hopf Transformation and the Decomposition Method
for Solving Burgers’ Equation,” Applied Mathematics and Computation, Vol. 173, 2006, pp. 126–
136.

[38] Cuesta, C. M. and Pop, I. S., “Numerical Schemes for a Pseudo-Parabolic Burgers Equation:
Discontinuous Data and Long-Time Behaviour,” Journal of Computational and Applied Math-

ematics, Vol. 3, 2008, in press: doi:10.1016/j.cam.2008.05.001.

[39] Atwell, J. A. and King, B. B., “Stabilized Finite Element Methods and Feedback Control
for Burgers’ Equation,” Proceedings of the American Control Conference, Chicago, IL, Vol. 4,
IFAC, June 2000, pp. 2745 – 2749.

[40] King, B. B. and Krueger, D. A., “Burgers’ Equation: Galerkin Least-Squares Approximations
and Feedback Control,” Mathematical and Computer Modelling, Vol. 38, 2003, pp. 1075–1085.

[41] Crassidis, J. L. and Markley, F. L., “Predictive Filtering for Nonlinear Systems,” Journal of
Guidance, Control, and Dynamics, Vol. 20, No. 3, 1997, pp. 566–572.

[42] Lu, P., “Nonlinear Predictive Controllers for Continuous Systems,” Journal of Guidance, Con-
trol, and Dynamics, Vol. 17, No. 3, 1994, pp. 553–560.

[43] Crassidis, J. L., “Robust Control of Nonlinear Systems Using Model-Error Control Synthesis,”

AIAA Journal of Guidance, Control, and Dynamics, Vol. 22, No. 4, 1999, pp. 595–601.

[44] Kim, J. and Crassidis, J. L., “Linear Stability Analysis of Model Error Control Synthesis,”
Guidance, Navigation, and Control Conference and Exhibit, AIAA, August 2000.

[45] Kim, J.-R. and Crassidis, J. L., “Model-Error Control Synthesis using Approximate Receding-
Horizon Control Laws,” Guidance, Navigation, and Control Conference and Exhibit, AIAA,
August 2001.

[46] Kim, J.-R., Model-Error Control Synthesis: A new Approach to Robust Control , Ph.D. thesis,
Texas A&M University, College Station, TX, August 2002.

[47] George, J., Singla, P., and Crassidis, J. L., “Stochastic Disturbance Accomodating Control
Using a Kalman Estimator,” Guidance, Navigation, and Control Conference and Exhibit, Hon-
olulu, HI , AIAA, August 2008.

[48] Burgers, J. M., “A Mathematical Model Illustrating the Theory of Turbulence,” Advances in
Applied Mechanics, Vol. 1, 1948, pp. 171–199.

[49] Hutter, K., Fluid- und Thermodynamik: Eine Einführung, Springer-Verlag, Berlin, 1995.

[50] Vogel, H., Gerthsen Physik , Springer-Verlag, Berlin, 18th ed., 1995.

[51] Atwell, J. A., Proper Orthogonal Decomposition for Reduced Order Control of Partial Differen-
tial Equations, Ph.D. thesis, Virginia Polytechnic Institute and State University, Blacksburg,
VA, April 2000.

[52] Haberman, R., Elementary Applied Partial Differential Equations: with Fourier Series and
Boundary Value Problems, Prentice Hall, Upper Saddle River, NJ, 3rd ed., 1998.

[53] Rosales, R. R., “Simplest Car Following Traffic Flow Model,” Lecture Notes, MIT, March 1999.

[54] Evans, L. C., Partial Differential Equations, Vol. 19 of Graduate Studies in Mathematics, Amer-
ican Mathematical Society, Providence, RI, 3rd ed., 1998.

[55] Cole, J. D., “On a Quasi-Linear Parabolic Equation Occurring in Aerodynamics,” Quarterly of
Applied Mathematics, Vol. 9, No. 3, 1951, pp. 225–236.

[56] Hopf, E., “The Partial Differential Equation ut + uux = µuxx ,” Communications on Pure and

Applied Mathematics, Vol. 3, 1950, pp. 201–230.

[57] de Water, H. V. and Willems, J. C., “The Certainty Equivalence Property in Stochastic Control
Theory,” IEEE Transactions on Automatic Control , Vol. 26, No. 5, 1981, pp. 1080–1087.

[58] Kirk, D. E., Optimal Control Theory, Prentice Hall, Upper Saddle River, NJ, 1970.

[59] Crassidis, J. L. and Junkins, J. L., Optimal Estimation of Dynamic Systems, Chapman & Hall,
Boca Raton, FL, 2004.
