$\bar d > 0$. Therefore the extended state $(x, u)$ satisfies
\[
V(x^+, u^+) - V(x, u) \le -\bar c\, \lvert (x, u) \rvert^2 \qquad \forall (x, u) \in X_N \times U^N \tag{2.3}
\]
in which $\bar c > 0$ depends only on $c$ and $\bar d$. Together with (2.2), (2.3) establishes that $V(\cdot)$ is a Lyapunov function of the extended state $(x, u)$ for all $x \in X_N$ and $u \in U^N$. Hence for all $(x(0), u(0)) \in X_N \times U^N$ and $k \ge 0$, we have
\[
\lvert (x(k), u(k)) \rvert \le \delta\, \lvert (x(0), u(0)) \rvert\, \lambda^k
\]
in which $\delta > 0$ and $0 < \lambda < 1$. Notice that $X_N \times U^N$ is forward invariant for the extended system (2.1).
Finally, we remove the input sequence and establish that the origin is exponentially stable for the closed-loop system. We have for all $x(0) \in X_N$ and $k \ge 0$
\[
\lvert \phi(k; x(0)) \rvert = \lvert x(k) \rvert \le \lvert (x(k), u(k)) \rvert \le \delta\, \lvert (x(0), u(0)) \rvert\, \lambda^k \le \delta \bigl( \lvert x(0) \rvert + \lvert u(0) \rvert \bigr) \lambda^k \le \delta (1 + \bar d)\, \lvert x(0) \rvert\, \lambda^k = \gamma\, \lvert x(0) \rvert\, \lambda^k
\]
in which $\gamma = \delta (1 + \bar d) > 0$, and we have established exponential stability of the origin by observing that $X_N$ is forward invariant for the closed-loop system $\phi(k; x(0))$.
Remark 1. For Lemma 2, we use the fact that $U$ is compact. For unbounded $U$, exponential stability may instead be established by compactness of $X_N$.
3. Cooperative model predictive control
We now show that cooperative MPC is a form of suboptimal
MPC and establish stability. To simplify the exposition and proofs,
in Sections 3–5 we assume that the plant consists of only two
subsystems. We then establish in Section 6 that the results extend
to any finite number of subsystems.
3.1. Definitions
3.1.1. Models
We assume for each subsystem $i$ that there exists a collection of linear models denoting the effects of the inputs of subsystem $j$ on the states of subsystem $i$, for all $(i, j) \in \mathbb{I}_{1:2} \times \mathbb{I}_{1:2}$:
\[
x_{ij}^+ = A_{ij} x_{ij} + B_{ij} u_j
\]
in which $x_{ij} \in \mathbb{R}^{n_{ij}}$, $u_j \in \mathbb{R}^{m_j}$, $A_{ij} \in \mathbb{R}^{n_{ij} \times n_{ij}}$, and $B_{ij} \in \mathbb{R}^{n_{ij} \times m_j}$. For a discussion of identification of this model choice, see [20]. The Appendix shows how these subsystem models are related to the centralized model. Considering subsystem 1, we collect the states to form
\[
\begin{bmatrix} x_{11} \\ x_{12} \end{bmatrix}^+ =
\begin{bmatrix} A_{11} & \\ & A_{12} \end{bmatrix}
\begin{bmatrix} x_{11} \\ x_{12} \end{bmatrix} +
\begin{bmatrix} B_{11} \\ 0 \end{bmatrix} u_1 +
\begin{bmatrix} 0 \\ B_{12} \end{bmatrix} u_2
\]
which denotes the model for subsystem 1. To simplify the notation, we define the equivalent model
\[
x_1^+ = A_1 x_1 + \bar B_{11} u_1 + \bar B_{12} u_2
\]
for which
\[
x_1 = \begin{bmatrix} x_{11} \\ x_{12} \end{bmatrix} \qquad
A_1 = \begin{bmatrix} A_{11} & \\ & A_{12} \end{bmatrix} \qquad
\bar B_{11} = \begin{bmatrix} B_{11} \\ 0 \end{bmatrix} \qquad
\bar B_{12} = \begin{bmatrix} 0 \\ B_{12} \end{bmatrix}
\]
in which $x_1 \in \mathbb{R}^{n_1}$, $A_1 \in \mathbb{R}^{n_1 \times n_1}$, and $\bar B_{1j} \in \mathbb{R}^{n_1 \times m_j}$ with $n_1 = n_{11} + n_{12}$. Forming a similar model for subsystem 2, the
plantwide model is
\[
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}^+ =
\begin{bmatrix} A_1 & \\ & A_2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} +
\begin{bmatrix} \bar B_{11} \\ \bar B_{21} \end{bmatrix} u_1 +
\begin{bmatrix} \bar B_{12} \\ \bar B_{22} \end{bmatrix} u_2 .
\]
We further simplify the plantwide model notation to
\[
x^+ = A x + B_1 u_1 + B_2 u_2
\]
for which
\[
x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \qquad
A = \begin{bmatrix} A_1 & \\ & A_2 \end{bmatrix} \qquad
B_1 = \begin{bmatrix} \bar B_{11} \\ \bar B_{21} \end{bmatrix} \qquad
B_2 = \begin{bmatrix} \bar B_{12} \\ \bar B_{22} \end{bmatrix} .
\]
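As a rough numerical sketch of the block assembly above, the following Python snippet builds $A$, $B_1$, and $B_2$ from subsystem blocks; the dimensions $n_{ij}$, $m_j$ and the random matrices are hypothetical placeholders, not values from this paper.

```python
import numpy as np
from scipy.linalg import block_diag

# Hypothetical dimensions (placeholders, not from the paper): n_ij states
# model the effect of input u_j on subsystem i; m_j is the input dimension.
n = {(1, 1): 2, (1, 2): 1, (2, 1): 1, (2, 2): 2}
m = {1: 1, 2: 1}
rng = np.random.default_rng(0)
A_sub = {k: rng.standard_normal((n[k], n[k])) for k in n}
B_sub = {(i, j): rng.standard_normal((n[(i, j)], m[j])) for (i, j) in n}

def Bbar(i, j):
    # Bbar_ij stacks B_ij into block j of x_i = (x_i1, x_i2), zeros elsewhere.
    return np.vstack([B_sub[(i, j)] if jj == j else np.zeros((n[(i, jj)], m[j]))
                      for jj in (1, 2)])

# Subsystem models x_i+ = A_i x_i + Bbar_i1 u_1 + Bbar_i2 u_2 ...
A_i = {i: block_diag(A_sub[(i, 1)], A_sub[(i, 2)]) for i in (1, 2)}

# ... and the plantwide model x+ = A x + B1 u1 + B2 u2.
A = block_diag(A_i[1], A_i[2])
B1 = np.vstack([Bbar(1, 1), Bbar(2, 1)])
B2 = np.vstack([Bbar(1, 2), Bbar(2, 2)])
```

The zero padding in each $\bar B_{ij}$ records that input $u_j$ enters subsystem $i$ only through the states $x_{ij}$.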
3.1.2. Objective functions
Consider subsystem 1, for which we define the quadratic stage cost and terminal penalty, respectively,
\[
\ell_1(x_1, u_1) = \tfrac{1}{2} \bigl( x_1' Q_1 x_1 + u_1' R_1 u_1 \bigr) \tag{3.1a}
\]
\[
V_{1f}(x_1) = \tfrac{1}{2}\, x_1' P_{1f} x_1 \tag{3.1b}
\]
in which $Q_1 \in \mathbb{R}^{n_1 \times n_1}$, $R_1 \in \mathbb{R}^{m_1 \times m_1}$, and $P_{1f} \in \mathbb{R}^{n_1 \times n_1}$. We define the objective function for subsystem 1:
\[
V_1\bigl( x_1(0), u_1, u_2 \bigr) = \sum_{k=0}^{N-1} \ell_1\bigl( x_1(k), u_1(k) \bigr) + V_{1f}\bigl( x_1(N) \bigr) .
\]
Notice $V_1$ is implicitly a function of both $u_1$ and $u_2$ because $x_1$ is a function of both $u_1$ and $u_2$. For subsystem 2, we similarly define an objective function $V_2$. We define the plantwide objective function
\[
V\bigl( x_1(0), x_2(0), u_1, u_2 \bigr) = \rho_1 V_1\bigl( x_1(0), u_1, u_2 \bigr) + \rho_2 V_2\bigl( x_2(0), u_1, u_2 \bigr)
\]
in which $\rho_1, \rho_2 > 0$ are relative weights. For notational simplicity, we write $V(x, u)$ for the plant objective.
3.1.3. Constraints
We require that the inputs satisfy
\[
u_1(k) \in U_1 \qquad u_2(k) \in U_2 \qquad k \in \mathbb{I}_{0:N-1}
\]
in which $U_1$ and $U_2$ are compact and convex such that $0$ is in the interior of $U_i$, $i \in \mathbb{I}_{1:2}$.
Remark 2. The constraints are termed uncoupled because the feasible region of $u_1$ is not affected by $u_2$, and vice versa.
3.1.4. Assumptions
For every $i \in \mathbb{I}_{1:2}$, let $A_i = \operatorname{diag}(A_{1i}, A_{2i})$ and $B_i = \begin{bmatrix} B_{1i} \\ B_{2i} \end{bmatrix}$. The following assumptions are used to establish stability.
Assumption 3. For all $i \in \mathbb{I}_{1:2}$:
(a) The systems $(A_i, B_i)$ are stabilizable.
(b) The input penalties $R_i > 0$.
(c) The state penalties $Q_i \ge 0$.
(d) The systems $(A_i, Q_i)$ are detectable.
(e) $N \ge \max_{i \in \mathbb{I}_{1:2}} (n_i^u)$, in which $n_i^u$ is the number of unstable modes of $A_i$, i.e., the number of $\lambda \in \operatorname{eig}(A_i)$ such that $\lvert \lambda \rvert \ge 1$.

Assumption 3(e) is required so that the horizon $N$ is sufficiently large to zero the unstable modes.
3.1.5. Unstable modes
For an unstable plant, we constrain the unstable modes to be zero at the end of the horizon to maintain closed-loop stability. To construct this constraint, consider the real Schur decomposition of $A_{ij}$ for each $(i, j) \in \mathbb{I}_{1:2} \times \mathbb{I}_{1:2}$:
\[
A_{ij} = \begin{bmatrix} S^s_{ij} & S^u_{ij} \end{bmatrix}
\begin{bmatrix} A^s_{ij} & \\ & A^u_{ij} \end{bmatrix}
\begin{bmatrix} S^s_{ij} & S^u_{ij} \end{bmatrix}' \tag{3.2}
\]
in which $A^s_{ij}$ is stable and $A^u_{ij}$ has all unstable eigenvalues.
3.1.6. Terminal penalty
Given the definition of the Schur decomposition (3.2), we define
the matrices
Please cite this article in press as: B.T. Stewart, et al., Cooperative distributed model predictive control, Systems & Control Letters (2010),
doi:10.1016/j.sysconle.2010.06.005
\[
S^s_i = \operatorname{diag}(S^s_{i1}, S^s_{i2}) \qquad A^s_i = \operatorname{diag}(A^s_{i1}, A^s_{i2}) \qquad i \in \mathbb{I}_{1:2} \tag{3.3a}
\]
\[
S^u_i = \operatorname{diag}(S^u_{i1}, S^u_{i2}) \qquad A^u_i = \operatorname{diag}(A^u_{i1}, A^u_{i2}) \qquad i \in \mathbb{I}_{1:2} . \tag{3.3b}
\]
Lemma 4. The matrices (3.3) satisfy the Schur decompositions
\[
A_i = \begin{bmatrix} S^s_i & S^u_i \end{bmatrix}
\begin{bmatrix} A^s_i & \\ & A^u_i \end{bmatrix}
\begin{bmatrix} S^s_i & S^u_i \end{bmatrix}' \qquad i \in \mathbb{I}_{1:2} .
\]
Let $\Sigma_1$ and $\Sigma_2$ denote the solutions of the Lyapunov equations
\[
A^{s\prime}_1 \Sigma_1 A^s_1 - \Sigma_1 = -S^{s\prime}_1 Q_1 S^s_1 \qquad
A^{s\prime}_2 \Sigma_2 A^s_2 - \Sigma_2 = -S^{s\prime}_2 Q_2 S^s_2 . \tag{3.4}
\]
We then choose the terminal penalty for each subsystem to be the cost to go under zero control, such that
\[
P_{1f} = S^s_1 \Sigma_1 S^{s\prime}_1 \qquad P_{2f} = S^s_2 \Sigma_2 S^{s\prime}_2 . \tag{3.5}
\]
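The construction in (3.2)–(3.5) can be sketched numerically. The snippet below is a minimal illustration with a hypothetical two-state subsystem (not the paper's example): it uses an ordered real Schur decomposition to separate stable and unstable modes, then solves the discrete Lyapunov equation for the terminal penalty.

```python
import numpy as np
from scipy.linalg import schur, solve_discrete_lyapunov

# Hypothetical subsystem data (placeholders, not the paper's example).
A1 = np.array([[1.2, 1.0],
               [0.0, 0.5]])     # one stable mode (0.5), one unstable (1.2)
Q1 = np.eye(2)

# Ordered real Schur form: eigenvalues inside the unit circle sorted first.
T, Z, ns = schur(A1, sort='iuc')
As, Ss = T[:ns, :ns], Z[:, :ns]   # stable block A^s and Schur vectors S^s
Su = Z[:, ns:]                    # unstable Schur vectors S^u

# Solve  As' Sigma As - Sigma = -Ss' Q1 Ss  for Sigma, cf. (3.4).
Sigma = solve_discrete_lyapunov(As.T, Ss.T @ Q1 @ Ss)

# Terminal penalty, cf. (3.5): cost to go of the stable modes under zero control.
P1f = Ss @ Sigma @ Ss.T
```

With the unstable mode zeroed by the terminal constraint, only the stable modes contribute to the cost to go, which is why $P_{1f}$ is built from the stable Schur vectors alone.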
3.1.7. Cooperative model predictive control algorithm
Let $u^0$ be the initial condition for the cooperative MPC algorithm (see Section 3.2 for the discussion of initialization). At each iterate $p \ge 0$, the following optimization problem is solved for subsystem $i$, $i \in \mathbb{I}_{1:2}$:
\[
\min_{u_i}\; V(x_1(0), x_2(0), u_1, u_2) \tag{3.6a}
\]
subject to
\[
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}^+ =
\begin{bmatrix} A_1 & \\ & A_2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} +
\begin{bmatrix} \bar B_{11} \\ \bar B_{21} \end{bmatrix} u_1 +
\begin{bmatrix} \bar B_{12} \\ \bar B_{22} \end{bmatrix} u_2 \tag{3.6b}
\]
\[
u_i \in U_i^N \tag{3.6c}
\]
\[
S^{u\prime}_{ji}\, x_{ji}(N) = 0 \qquad j \in \mathbb{I}_{1:2} \tag{3.6d}
\]
\[
\lvert u_i \rvert \le d_i \sum_{j \in \mathbb{I}_{1:2}} \bigl\lvert x_{ji}(0) \bigr\rvert \quad \text{if } x_{ji}(0) \in \mathcal{B}_r,\; j \in \mathbb{I}_{1:2} \tag{3.6e}
\]
\[
u_j = u_j^p \qquad j \in \mathbb{I}_{1:2} \setminus i \tag{3.6f}
\]
in which we include the hard input constraints, the stabilizing constraint on the unstable modes, and the Lyapunov stability constraint. We denote the solutions to these problems as
\[
u_1^*\bigl( x_1(0), x_2(0), u_2^p \bigr) \qquad u_2^*\bigl( x_1(0), x_2(0), u_1^p \bigr) .
\]
Given the prior, feasible iterate $(u_1^p, u_2^p)$, the next iterate is defined to be
\[
(u_1^{p+1}, u_2^{p+1}) = w_1 \bigl( u_1^*(u_2^p),\, u_2^p \bigr) + w_2 \bigl( u_1^p,\, u_2^*(u_1^p) \bigr) \tag{3.7}
\]
\[
w_1 + w_2 = 1, \qquad w_1, w_2 > 0
\]
for which we omit the state dependence of $u_1^*$ and $u_2^*$ to reduce notation. This distributed optimization is of the Gauss–Jacobi type (see [21], pp. 219–223). At the last iterate $\bar p$, we set $u \leftarrow (u_1^{\bar p}, u_2^{\bar p})$
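The Gauss–Jacobi iteration (3.7) can be illustrated on a small unconstrained problem. The sketch below uses a hypothetical strongly convex quadratic as a stand-in for the MPC cost (not the paper's objective): each "subsystem" minimizes over its own variable with the other held fixed, and the convex combination of the two solutions yields a nonincreasing cost that converges to the centralized optimum.

```python
import numpy as np

# Hypothetical strongly convex quadratic V(u) = 0.5 u'Hu + g'u, standing in
# for the MPC objective with u = (u1, u2) and the states eliminated.
H = np.array([[3.0, 1.0],
              [1.0, 2.0]])
g = np.array([1.0, -1.0])
V = lambda u: 0.5 * u @ H @ u + g @ u

u = np.zeros(2)            # feasible initial iterate u^0
w1 = w2 = 0.5              # convex weights, w1 + w2 = 1
for p in range(60):
    # Each subsystem minimizes V over its own input, the other held at u^p.
    u1_star = -(g[0] + H[0, 1] * u[1]) / H[0, 0]
    u2_star = -(g[1] + H[1, 0] * u[0]) / H[1, 1]
    u_next = w1 * np.array([u1_star, u[1]]) + w2 * np.array([u[0], u2_star])
    assert V(u_next) <= V(u) + 1e-12   # cost is nonincreasing
    u = u_next

u_centralized = np.linalg.solve(H, -g)  # Pareto (centralized) optimum
```

The convex combination is what distinguishes this cooperative scheme from a pure Jacobi sweep: it guarantees the cost decrease without any coordinator.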
The following properties follow immediately.
Lemma 5 (Feasibility). Given a feasible initial guess, the iterates satisfy
\[
(u_1^p, u_2^p) \in U_1^N \times U_2^N
\]
for all $p \ge 1$.
Lemma 6 (Convergence). The cost $V(x(0), u^p)$ is nonincreasing for each iterate $p$ and converges as $p \to \infty$.

Lemma 7 (Optimality). As $p \to \infty$ the cost $V(x(0), u^p)$ converges to the optimal value $V^0(x(0))$, and the iterates $(u_1^p, u_2^p)$ converge to $(u_1^0, u_2^0)$, in which $u^0 = (u_1^0, u_2^0)$ is the Pareto (centralized) optimal solution.
The proofs are in the Appendix.
Remark 3. This paper presents the distributed optimization algorithm with subproblem (3.6) and iterate update (3.7) so that Lemmas 5–7 are satisfied. This choice is nonunique, and other optimization methods may exist satisfying these properties.
3.2. Stability of cooperative model predictive control
We define the steerable set $X_N$ as the set of all $x$ such that there exists a $u \in U^N$ satisfying (3.6d).

Assumption 8. Given $r > 0$, for all $i \in \mathbb{I}_{1:2}$, $d_i$ is chosen large enough such that there exists a $u_i \in U_i^N$ satisfying
\[
\lvert u_i \rvert \le d_i \sum_{j \in \mathbb{I}_{1:2}} \bigl\lvert x_{ij} \bigr\rvert \; \ldots
\]

[...] the cost can be written as $V(x, u) = (1/2)\, x' Q x + (1/2)\, u' R u + x' S u$. Defining
\[
H = \begin{bmatrix} Q & S \\ S' & R \end{bmatrix} > 0,
\]
$V(\cdot)$ satisfies (2.2a) by choosing $a = (1/2) \min_i (\lambda_i(H))$ and $b = (1/2) \max_i (\lambda_i(H))$. Next we show that $V(\cdot)$ satisfies (2.2b). Using the warm start at the next sample time, we have the following cost:
\[
\begin{aligned}
V(x^+, u^+) = V(x, u) &- \rho_1 \ell_1(x_1, u_1) - \rho_2 \ell_2(x_2, u_2) \\
&+ \tfrac{1}{2} \rho_1\, x_1(N)' \bigl( A_1' P_{1f} A_1 - P_{1f} + Q_1 \bigr) x_1(N) \\
&+ \tfrac{1}{2} \rho_2\, x_2(N)' \bigl( A_2' P_{2f} A_2 - P_{2f} + Q_2 \bigr) x_2(N) . 
\end{aligned} \tag{3.8}
\]
Using the Schur decomposition defined in Lemma 4, and the constraints (3.6d) and (3.5), the last two terms of (3.8) can be written as
\[
\begin{aligned}
\tfrac{1}{2} \rho_1\, x_1(N)'\, S^s_1 \bigl( A^{s\prime}_1 \Sigma_1 A^s_1 - \Sigma_1 + S^{s\prime}_1 Q_1 S^s_1 \bigr) S^{s\prime}_1 x_1(N) \hspace{2em} \\
{} + \tfrac{1}{2} \rho_2\, x_2(N)'\, S^s_2 \bigl( A^{s\prime}_2 \Sigma_2 A^s_2 - \Sigma_2 + S^{s\prime}_2 Q_2 S^s_2 \bigr) S^{s\prime}_2 x_2(N) = 0 .
\end{aligned}
\]
These terms are zero because of (3.4). Using this result and applying the iteration of the controllers gives
\[
V(x^+, u^+) \le V(x, u) - \rho_1 \ell_1(x_1, u_1) - \rho_2 \ell_2(x_2, u_2) .
\]
Because $\ell_i$ is quadratic in both arguments, there exists a $c > 0$ such that
\[
V(x^+, u^+) - V(x, u) \le -c\, \lvert (x, u) \rvert^2 .
\]
The Lyapunov stability constraint (3.6e) for $x_{11}, x_{12}, x_{21}, x_{22} \in \mathcal{B}_r$ implies for $(x_1, x_2) \in \mathcal{B}_r$ that $\lvert (u_1, u_2) \rvert \le 2 \bar d\, \lvert (x_1, x_2) \rvert$, in which $\bar d = \max(d_1, d_2)$, satisfying (2.2c). Therefore the closed-loop system satisfies Lemma 2. Hence the closed-loop system is exponentially stable.
4. Output feedback
We now consider the stability of the closed-loop system with
estimator error.
4.1. Models
For all $(i, j) \in \mathbb{I}_{1:2} \times \mathbb{I}_{1:2}$:
\[
x_{ij}^+ = A_{ij} x_{ij} + B_{ij} u_j \tag{4.1a}
\]
\[
y_i = \sum_{j \in \mathbb{I}_{1:2}} C_{ij} x_{ij} \tag{4.1b}
\]
in which $y_i \in \mathbb{R}^{p_i}$ is the output of subsystem $i$ and $C_{ij} \in \mathbb{R}^{p_i \times n_{ij}}$.
Consider subsystem 1. As above, we collect the states to form
\[
y_1 = \begin{bmatrix} C_{11} & C_{12} \end{bmatrix}
\begin{bmatrix} x_{11} \\ x_{12} \end{bmatrix}
\]
and use the simplified notation $y_1 = C_1 x_1$ to form the output model for subsystem 1.
Assumption 10. For all $i \in \mathbb{I}_{1:2}$, $(A_i, C_i)$ is detectable.
4.2. Estimator
We construct a decentralized estimator. Consider subsystem 1, for which the local measurement $y_1$ and both inputs $u_1$ and $u_2$ are available, but $x_1$ must be estimated. The estimate satisfies
\[
\hat x_1^+ = A_1 \hat x_1 + \bar B_{11} u_1 + \bar B_{12} u_2 + L_1 \bigl( y_1 - C_1 \hat x_1 \bigr)
\]
in which $\hat x_1$ is the estimate of $x_1$ and $L_1$ is the Kalman filter gain. Defining the estimate error as $e_1 = x_1 - \hat x_1$, we have $e_1^+ = (A_1 - L_1 C_1) e_1$. By Assumptions 3 and 10 there exists an $L_1$ such that $(A_1 - L_1 C_1)$ is stable, and therefore the estimator for subsystem 1 is stable. Defining $e_2$ similarly, the estimate error for the plant evolves as
\[
\begin{bmatrix} e_1 \\ e_2 \end{bmatrix}^+ =
\begin{bmatrix} A_{L1} & \\ & A_{L2} \end{bmatrix}
\begin{bmatrix} e_1 \\ e_2 \end{bmatrix}
\]
in which $A_{Li} = A_i - L_i C_i$. We collect the estimate error of each subsystem together and write $e^+ = A_L e$.
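The decentralized estimator construction can be sketched numerically. The snippet below uses pole placement as a stand-in for the Kalman filter gain $L_1$ (the subsystem matrices are hypothetical placeholders) and verifies that the error dynamics $e^+ = (A_1 - L_1 C_1) e$ decay.

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical subsystem-1 model (placeholders, not the paper's example).
A1 = np.array([[1.1, 0.2],
               [0.0, 0.9]])
C1 = np.array([[1.0, 0.0]])

# Pole placement stands in here for the Kalman filter gain L1: choose L1 so
# that (A1 - L1 C1) has its eigenvalues inside the unit circle.
L1 = place_poles(A1.T, C1.T, [0.3, 0.4]).gain_matrix.T
AL1 = A1 - L1 @ C1

# Error dynamics e+ = (A1 - L1 C1) e decay from any initial error.
e = np.array([1.0, -1.0])
for _ in range(60):
    e = AL1 @ e
```

Because each subsystem uses only its own output $y_i$, the plantwide error matrix $A_L$ is block diagonal and stability of each $(A_i - L_i C_i)$ implies stability of $A_L$.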
4.3. Reinitialization
We define the reinitialization step required to recover feasibility of the warm start for the perturbed state terminal constraint. For each $i \in \mathbb{I}_{1:2}$, define
\[
h_i^+(e) = \arg\min_{u_i} \Bigl\{ \bigl\lVert u_i - u_i^+ \bigr\rVert^2_{\bar R_i} \;:\; F_{ji} \bigl( u_i - u_i^+ \bigr) = f_{ji}\, e_j,\; j \in \mathbb{I}_{1:2},\; u_i \in U_i^N \Bigr\}
\]
in which $\bar R_i = \operatorname{diag}(R_i, \ldots, R_i)$, $F_{ji} = S^{u\prime}_{ji} \bar C_{ji}$, $f_{ji} = S^{u\prime}_{ji} A^N_{ji} L_{ji}$, and $\bar C_{ji} = \begin{bmatrix} B_{ji} & A_{ji} B_{ji} & \cdots & A^{N-1}_{ji} B_{ji} \end{bmatrix}$ for all $i, j \in \mathbb{I}_{1:2}$. We use $h_i^+(e)$ as the initial condition for the control optimization (3.6) for all $i \in \mathbb{I}_{1:2}$.
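Ignoring the input-set constraint $u_i \in U_i^N$, the reinitialization $h_i^+$ is an equality-constrained least-squares problem and can be solved through its KKT system. The sketch below uses hypothetical placeholder matrices, not the $F_{ji}$, $f_{ji}$ defined above.

```python
import numpy as np

# Hypothetical reinitialization problem (placeholders, not the F_ji, f_ji of
# the text): minimize |d|^2_R subject to F d = b, where d = u_i - u_i+.
rng = np.random.default_rng(1)
nu, nc = 4, 2                      # decision and constraint dimensions
R = np.eye(nu)                     # plays the role of Rbar_i
F = rng.standard_normal((nc, nu))  # assumed full row rank
b = rng.standard_normal(nc)        # plays the role of f_ji e_j

# KKT conditions of the equality-constrained quadratic program:
#   [2R  F'] [d  ]   [0]
#   [F   0 ] [lam] = [b]
KKT = np.block([[2.0 * R, F.T],
                [F, np.zeros((nc, nc))]])
sol = np.linalg.solve(KKT, np.concatenate([np.zeros(nu), b]))
d = sol[:nu]      # minimum-weighted-norm correction of the warm start
```

The correction $d$ is the smallest (in the $\bar R_i$ norm) perturbation of the warm start that restores the terminal constraint after the estimate-error disturbance.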
Proposition 11. The reinitialization $h^+(\cdot) = (h_1^+(\cdot), h_2^+(\cdot))$ is Lipschitz continuous on bounded sets.

Proof. The proof follows from Prop. 7.13 of [12, p. 499].
4.4. Stability with estimate error
We consider the stability properties of the extended closed-loop system
\[
\begin{bmatrix} \hat x \\ u \\ e \end{bmatrix}^+ =
\begin{bmatrix} F(\hat x, u) + L e \\ g_p(\hat x, u, e) \\ A_L e \end{bmatrix} \tag{4.2}
\]
in which $F(\hat x, u) = A \hat x + B_1 u_1 + B_2 u_2$ and $L = \operatorname{diag}(L_1 C_1, L_2 C_2)$. The function $g_p$ includes the reinitialization step. Because $A_L$ is stable, there exists a Lyapunov function $J(\cdot)$ with the following properties:
\[
\tilde a\, \lvert e \rvert^\sigma \le J(e) \le \tilde b\, \lvert e \rvert^\sigma \qquad
J(e^+) - J(e) \le -\tilde c\, \lvert e \rvert^\sigma
\]
in which $\sigma > 0$ and $\tilde a, \tilde b, \tilde c > 0$.

Assumption 12. There exist sets $\hat X_N$ and $\mathcal{E}$, both containing the origin, such that the following conditions hold: (i) $\hat X_N \oplus \mathcal{E} \subseteq X_N$, where $\oplus$ indicates the Minkowski sum; (ii) for each $\hat x(0) \in \hat X_N$ and $e(0) \in \mathcal{E}$, the solution of the extended closed-loop system (4.2) satisfies $\hat x(k) \in X_N$ for all $k \ge 0$.
Consider the sum of the two Lyapunov functions
\[
W(\hat x, u, e) = V(\hat x, u) + J(e) .
\]
We next show that $W(\cdot)$ is a Lyapunov function for the perturbed system and establish exponential stability of the extended state origin $(\hat x, e) = (0, 0)$. From the definition of $W(\cdot)$ we have
\[
a\, \lvert (\hat x, u) \rvert^2 + \tilde a\, \lvert e \rvert \le W(\hat x, u, e) \le b\, \lvert (\hat x, u) \rvert^2 + \tilde b\, \lvert e \rvert
\]
\[
\bar a \bigl( \lvert (\hat x, u) \rvert^2 + \lvert e \rvert \bigr) \le W(\hat x, u, e) \le \bar b \bigl( \lvert (\hat x, u) \rvert^2 + \lvert e \rvert \bigr) \tag{4.3}
\]
in which $\bar a = \min(a, \tilde a) > 0$ and $\bar b = \max(b, \tilde b) > 0$. Next we compute the cost change:
\[
W(\hat x^+, u^+, e^+) - W(\hat x, u, e) = V(\hat x^+, u^+) - V(\hat x, u) + J(e^+) - J(e) .
\]
The Lyapunov function $V$ is quadratic in $(\hat x, u)$ and, hence, Lipschitz continuous on bounded sets. By Proposition 11,
\[
V\bigl( F(\hat x, u) + Le,\; h^+(e) \bigr) - V\bigl( F(\hat x, u) + Le,\; u^+ \bigr) \le L_h L_V^u\, \lvert e \rvert
\]
\[
V\bigl( F(\hat x, u) + Le,\; u^+ \bigr) - V\bigl( F(\hat x, u),\; u^+ \bigr) \le L_V^x\, \lvert Le \rvert
\]
in which $L_h$, $L_V^u$, and $L_V^x$ are Lipschitz constants for $h^+$ and the first and second arguments of $V$, respectively. Combining the above inequalities:
\[
V\bigl( F(\hat x, u) + Le,\; h^+(e) \bigr) - V\bigl( F(\hat x, u),\; u^+ \bigr) \le \bar L_V\, \lvert e \rvert
\]
in which $\bar L_V = L_h L_V^u + L_V^x \lvert L \rvert$. Using the system evolution we have
\[
V\bigl( \hat x^+, h^+(e) \bigr) \le V\bigl( F(\hat x, u), u^+ \bigr) + \bar L_V\, \lvert e \rvert
\]
and by Lemma 6
\[
V(\hat x^+, u^+) \le V\bigl( F(\hat x, u), u^+ \bigr) + \bar L_V\, \lvert e \rvert .
\]
Subtracting $V(\hat x, u)$ from both sides and noting that $u^+$ is a stabilizing input sequence for $e = 0$ gives
\[
V(\hat x^+, u^+) - V(\hat x, u) \le -c\, \lvert (\hat x, u(0)) \rvert^2 + \bar L_V\, \lvert e \rvert
\]
\[
\begin{aligned}
W(\hat x^+, u^+, e^+) - W(\hat x, u, e) &\le -c\, \lvert (\hat x, u(0)) \rvert^2 + \bar L_V\, \lvert e \rvert - \tilde c\, \lvert e \rvert \\
&\le -c\, \lvert (\hat x, u(0)) \rvert^2 - (\tilde c - \bar L_V)\, \lvert e \rvert
\end{aligned}
\]
\[
W(\hat x^+, u^+, e^+) - W(\hat x, u, e) \le -\bar c \bigl( \lvert (\hat x, u(0)) \rvert^2 + \lvert e \rvert \bigr) \tag{4.4}
\]
in which we choose $\tilde c > \bar L_V$ and $\bar c = \min(c, \tilde c - \bar L_V) > 0$. This choice is possible because $\tilde c$ can be chosen arbitrarily large. Notice this step is what motivated the choice of $\sigma = 1$. Lastly, we require the constraint
\[
\lvert u \rvert \le \bar d\, \lvert \hat x \rvert \qquad \hat x \in \mathcal{B}_r . \tag{4.5}
\]
Theorem 13 (Exponential Stability of the Perturbed System). Given Assumptions 3, 10 and 12, for each $\hat x(0) \in \hat X_N$ and $e(0) \in \mathcal{E}$, there exist constants $\gamma > 0$ and $0 < \lambda < 1$ such that the solution of the perturbed system (4.2) satisfies, for all $k \ge 0$,
\[
\lvert (\hat x(k), e(k)) \rvert \le \gamma\, \lvert (\hat x(0), e(0)) \rvert\, \lambda^k . \tag{4.6}
\]
Proof. Using the same arguments as for Lemma 2, we write
\[
W(\hat x^+, u^+, e^+) - W(\hat x, u, e) \le -\bar c \bigl( \lvert (\hat x, u) \rvert^2 + \lvert e \rvert \bigr) \tag{4.7}
\]
in which $\bar c > 0$. Therefore $W(\cdot)$ is a Lyapunov function for the extended state $(\hat x, u, e)$ with mixed norm powers. The standard exponential stability argument can be extended to the mixed norm power case to show that the origin of the extended closed-loop system (4.2) is exponentially stable [12, p. 420]. Hence, for all $k \ge 0$,
\[
\lvert (\hat x(k), u(k), e(k)) \rvert \le \delta\, \lvert (\hat x(0), u(0), e(0)) \rvert\, \lambda^k
\]
in which $\delta > 0$ and $0 < \lambda < 1$. Notice that Assumption 12 implies that $u(k)$ exists for all $k \ge 0$ because $\hat x(k) \in X_N$. We have, using the same arguments used in Lemma 2,
\[
\lvert (\hat x(k), e(k)) \rvert \le \gamma\, \lvert (\hat x(0), e(0)) \rvert\, \lambda^k
\]
in which $\gamma = \delta (1 + \bar d) > 0$.
Corollary 14. Under the assumptions of Theorem 13, for each $x(0)$ and $\hat x(0)$ such that $e(0) = x(0) - \hat x(0) \in \mathcal{E}$ and $\hat x(0) \in \hat X_N$, the solution of the closed-loop state $x(k) = \hat x(k) + e(k)$ satisfies
\[
\lvert x(k) \rvert \le \bar\gamma\, \lvert x(0) \rvert\, \lambda^k \tag{4.8}
\]
for some $\bar\gamma > 0$ and $0 < \lambda < 1$.
5. Coupled constraints
In Remark 2, we commented that the constraint assumptions imply uncoupled constraints, because each input is constrained by a separate feasible region, so that the full feasible space is defined by $(u_1, u_2) \in U^N = U_1^N \times U_2^N$. This assumption, however, is not always practical. Consider two subsystems sharing a scarce resource for which we control the distribution. There then exists an availability constraint spanning the subsystems. This constraint is coupled because each local resource constraint depends upon the amount requested by the other subsystem.
Remark 5. For plants with coupled constraints, implementing
MPC problem (3.6) gives exponentially stable, yet suboptimal,
feedback.
In this section, we relax the assumption so that $(u_1, u_2) \in U^N$ for any $U$ compact, convex, and containing the origin in its interior. Consider the decomposition of the inputs $u = (u_{U1}, u_{U2}, u_C)$ such that there exist $U_{U1}$, $U_{U2}$, and $U_C$ for which
\[
U = U_{U1} \times U_{U2} \times U_C
\]
and
\[
u_{U1} \in U_{U1}^N \qquad u_{U2} \in U_{U2}^N \qquad u_C \in U_C^N
\]
for which $U_{U1}$, $U_{U2}$, and $U_C$ are compact and convex. We denote by $u_{Ui}$ the uncoupled inputs for subsystem $i$, $i \in \mathbb{I}_{1:2}$, and by $u_C$ the coupled inputs.
Remark 6. $U_{U1}$, $U_{U2}$, or $U_C$ may be empty, and therefore such a decomposition always exists.
We modify the cooperative MPC problem (3.6) for the above decomposition. Define the augmented inputs $(\tilde u_1, \tilde u_2)$:
\[
\tilde u_1 = \begin{bmatrix} u_{U1} \\ u_C \end{bmatrix} \qquad
\tilde u_2 = \begin{bmatrix} u_{U2} \\ u_C \end{bmatrix} .
\]
The implemented inputs are
\[
u_1 = \tilde E_1 \tilde u_1 \qquad u_2 = \tilde E_2 \tilde u_2, \qquad
\tilde E_1 = \begin{bmatrix} I & \\ & I_1 \end{bmatrix} \qquad
\tilde E_2 = \begin{bmatrix} I & \\ & I_2 \end{bmatrix}
\]
in which $(I_1, I_2)$ are diagonal matrices with either 0 or 1 diagonal entries and satisfy $I_1 + I_2 = I$. For simplicity, we summarize the previous relations as $u = \tilde E \tilde u$ with $\tilde E = \operatorname{diag}(\tilde E_1, \tilde E_2)$. The objective function is
\[
\tilde V\bigl( x_1(0), x_2(0), \tilde u_1, \tilde u_2 \bigr) = V\bigl( x_1(0), x_2(0), \tilde E_1 \tilde u_1, \tilde E_2 \tilde u_2 \bigr) . \tag{5.1}
\]
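The augmented-input bookkeeping can be sketched as follows; the dimensions and the split $(I_1, I_2)$ are hypothetical, and the block-diagonal form assumed for $\tilde E_i$ is one concrete reading of the construction above.

```python
import numpy as np
from scipy.linalg import block_diag

# Hypothetical split (placeholders): 2 uncoupled inputs per subsystem and
# 2 coupled inputs; I1 + I2 = I assigns each coupled input to one subsystem.
nU, nC = 2, 2
I1 = np.diag([1.0, 0.0])
I2 = np.diag([0.0, 1.0])

# Assumed structure of E1, E2: identity on the uncoupled block, I_i on u_C.
E1 = block_diag(np.eye(nU), I1)
E2 = block_diag(np.eye(nU), I2)

# Augmented inputs share the coupled block u_C.
uU1, uU2, uC = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
u1 = E1 @ np.concatenate([uU1, uC])   # implemented input of subsystem 1
u2 = E2 @ np.concatenate([uU2, uC])   # implemented input of subsystem 2
```

Because $I_1 + I_2 = I$, the two subsystems' contributions to the coupled block reassemble the full coupled input $u_C$.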
We solve the augmented cooperative MPC problem for $i \in \mathbb{I}_{1:2}$:
\[
\min_{\tilde u_i}\; \tilde V\bigl( x_1(0), x_2(0), \tilde u_1, \tilde u_2 \bigr) \tag{5.2a}
\]
subject to
\[
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}^+ =
\begin{bmatrix} A_1 & \\ & A_2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} +
\begin{bmatrix} \bar B_{11} \\ \bar B_{21} \end{bmatrix} \tilde E_1 \tilde u_1 +
\begin{bmatrix} \bar B_{12} \\ \bar B_{22} \end{bmatrix} \tilde E_2 \tilde u_2 \tag{5.2b}
\]
\[
\tilde u_i \in U_{Ui}^N \times U_C^N \tag{5.2c}
\]
\[
S^{u\prime}_{ji}\, x_{ji}(N) = 0 \qquad j \in \mathbb{I}_{1:2} \tag{5.2d}
\]
\[
\lvert \tilde u_i \rvert \le d_i\, \lvert x_i(0) \rvert \quad \text{if } x_i(0) \in \mathcal{B}_r \tag{5.2e}
\]
\[
\tilde u_j = \tilde u_j^p \qquad j \in \mathbb{I}_{1:2} \setminus i . \tag{5.2f}
\]
The update (3.7) is used to determine the next iterate.
Lemma 15. As $p \to \infty$ the cost $\tilde V(x(0), \tilde u^p)$ converges to the optimal value $V^0(x(0))$, and the iterates $(\tilde E_1 \tilde u_1^p, \tilde E_2 \tilde u_2^p)$ converge to the Pareto optimal centralized solution $u^0 = (u_1^0, u_2^0)$.

Therefore, problem (5.2) gives optimal feedback and may be used for plants with coupled constraints.
6. M Subsystems
In this section, we show that the stability theory of cooperative control extends to any finite number of subsystems. For $M > 0$ subsystems, the plantwide variables are defined
\[
x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_M \end{bmatrix} \qquad
u = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_M \end{bmatrix} \qquad
B_i = \begin{bmatrix} \bar B_{1i} \\ \bar B_{2i} \\ \vdots \\ \bar B_{Mi} \end{bmatrix} \qquad i \in \mathbb{I}_{1:M}
\]
\[
V(x, u) = \sum_{i \in \mathbb{I}_{1:M}} \rho_i V_i(x_i, u_i) \qquad
A = \operatorname{diag}(A_1, \ldots, A_M) .
\]
Each subsystem solves the optimization
\[
\min_{u_i}\; V(x(0), u)
\]
subject to
\[
x^+ = A x + \sum_{i \in \mathbb{I}_{1:M}} B_i u_i
\]
\[
u_i \in U_i^N
\]
\[
S^{u\prime}_{ji}\, x_{ji}(N) = 0 \qquad j \in \mathbb{I}_{1:M}
\]
\[
\lvert u_i \rvert \le d_i \sum_{j \in \mathbb{I}_{1:M}} \bigl\lvert x_{ji}(0) \bigr\rvert \quad \text{if } x_{ji}(0) \in \mathcal{B}_r,\; j \in \mathbb{I}_{1:M}
\]
\[
u_j = u_j^p \qquad j \in \mathbb{I}_{1:M} \setminus i .
\]
The controller iteration is given by
\[
u^{p+1} = \sum_{i \in \mathbb{I}_{1:M}} w_i \bigl( u_1^p, \ldots, u_i^*, \ldots, u_M^p \bigr)
\]
in which $u_i^* = u_i^*\bigl( x(0);\; u_j^p,\; j \in \mathbb{I}_{1:M} \setminus i \bigr)$. After $\bar p$ iterates, we set $u \leftarrow u^{\bar p}$ and inject $u(0)$ into the plant.
The warm start is generated by purely local information:
\[
u_i^+ = \bigl\{ u_i(1), u_i(2), \ldots, u_i(N-1), 0 \bigr\} \qquad i \in \mathbb{I}_{1:M} .
\]
The plantwide cost function then satisfies for any $p \ge 0$
\[
V(x^+, u^+) \le V(x, u) - \sum_{i \in \mathbb{I}_{1:M}} \rho_i\, \ell_i(x_i, u_i)
\]
\[
\lvert u \rvert \le \bar d\, \lvert x \rvert \qquad x \in \mathcal{B}_r .
\]
Generalizing Assumption 3 to all $i \in \mathbb{I}_{1:M}$, we find that Theorem 9 applies, and cooperative MPC of $M$ subsystems is exponentially stable.
Moreover, expressing the $M$ subsystem outputs as
\[
y_i = \sum_{j \in \mathbb{I}_{1:M}} C_{ij} x_{ij} \qquad i \in \mathbb{I}_{1:M}
\]
and generalizing Assumption 10 for $i \in \mathbb{I}_{1:M}$, cooperative MPC for $M$ subsystems satisfies Theorem 13. Finally, for systems with coupled constraints, we can decompose the feasible space such that $U = \bigl( \prod_{i \in \mathbb{I}_{1:M}} U_{Ui} \bigr) \times U_C$. Hence, the input augmentation scheme of Section 5 is applicable to plants of $M$ subsystems. Notice that, in general, this approach may lead to augmented inputs for each subsystem that are larger than strictly necessary to achieve optimal control. The most parsimonious augmentation scheme is described elsewhere [22].
7. Example
Consider a plant consisting of two reactors and a separator. A
stream of pure reactant A is added to each reactor and converted
to the product B by a first-order reaction. The product is lost by a
parallel first-order reaction to side product C. The distillate of the
separator is split and partially redirected to the first reactor (see
Fig. 1).
Fig. 1. Two reactors in series with separator and recycle.
The model for the plant is
\[
\begin{aligned}
\frac{dH_1}{dt} &= \frac{1}{\rho A_1} \bigl( F_{f1} + F_R - F_1 \bigr) \\
\frac{dx_{A1}}{dt} &= \frac{1}{\rho A_1 H_1} \bigl( F_{f1} x_{A0} + F_R x_{AR} - F_1 x_{A1} \bigr) - k_{A1} x_{A1} \\
\frac{dx_{B1}}{dt} &= \frac{1}{\rho A_1 H_1} \bigl( F_R x_{BR} - F_1 x_{B1} \bigr) + k_{A1} x_{A1} - k_{B1} x_{B1} \\
\frac{dT_1}{dt} &= \frac{1}{\rho A_1 H_1} \bigl( F_{f1} T_0 + F_R T_R - F_1 T_1 \bigr) - \frac{1}{C_p} \bigl( k_{A1} x_{A1} \Delta H_A + k_{B1} x_{B1} \Delta H_B \bigr) + \frac{Q_1}{\rho A_1 C_p H_1} \\
\frac{dH_2}{dt} &= \frac{1}{\rho A_2} \bigl( F_{f2} + F_1 - F_2 \bigr) \\
\frac{dx_{A2}}{dt} &= \frac{1}{\rho A_2 H_2} \bigl( F_{f2} x_{A0} + F_1 x_{A1} - F_2 x_{A2} \bigr) - k_{A2} x_{A2} \\
\frac{dx_{B2}}{dt} &= \frac{1}{\rho A_2 H_2} \bigl( F_1 x_{B1} - F_2 x_{B2} \bigr) + k_{A2} x_{A2} - k_{B2} x_{B2} \\
\frac{dT_2}{dt} &= \frac{1}{\rho A_2 H_2} \bigl( F_{f2} T_0 + F_1 T_1 - F_2 T_2 \bigr) - \frac{1}{C_p} \bigl( k_{A2} x_{A2} \Delta H_A + k_{B2} x_{B2} \Delta H_B \bigr) + \frac{Q_2}{\rho A_2 C_p H_2} \\
\frac{dH_3}{dt} &= \frac{1}{\rho A_3} \bigl( F_2 - F_D - F_R - F_3 \bigr) \\
\frac{dx_{A3}}{dt} &= \frac{1}{\rho A_3 H_3} \bigl( F_2 x_{A2} - (F_D + F_R) x_{AR} - F_3 x_{A3} \bigr) \\
\frac{dx_{B3}}{dt} &= \frac{1}{\rho A_3 H_3} \bigl( F_2 x_{B2} - (F_D + F_R) x_{BR} - F_3 x_{B3} \bigr) \\
\frac{dT_3}{dt} &= \frac{1}{\rho A_3 H_3} \bigl( F_2 T_2 - (F_D + F_R) T_R - F_3 T_3 \bigr) + \frac{Q_3}{\rho A_3 C_p H_3}
\end{aligned}
\]
in which for all $i \in \mathbb{I}_{1:3}$
\[
F_i = k_{vi} H_i \qquad
k_{Ai} = k_A \exp\!\left( \frac{-E_A}{R T_i} \right) \qquad
k_{Bi} = k_B \exp\!\left( \frac{-E_B}{R T_i} \right) .
\]
The recycle flow and weight percents satisfy
\[
F_D = 0.01\, F_R \qquad
x_{AR} = \frac{\alpha_A x_{A3}}{\bar x_3} \qquad
x_{BR} = \frac{\alpha_B x_{B3}}{\bar x_3}
\]
\[
\bar x_3 = \alpha_A x_{A3} + \alpha_B x_{B3} + \alpha_C x_{C3} \qquad
x_{C3} = 1 - x_{A3} - x_{B3} .
\]
The output and input are denoted, respectively,
\[
y = \begin{bmatrix} H_1 & x_{A1} & x_{B1} & T_1 & H_2 & x_{A2} & x_{B2} & T_2 & H_3 & x_{A3} & x_{B3} & T_3 \end{bmatrix}'
\]
\[
u = \begin{bmatrix} F_{f1} & Q_1 & F_{f2} & Q_2 & F_R & Q_3 \end{bmatrix}' .
\]
We linearize the plant model around the steady state defined by Table 1 and derive the following linear discrete-time model with a sampling time of 0.1 s:
\[
x^+ = A x + B u \qquad y = x .
\]
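The linearization and discretization step can be sketched with SciPy's zero-order-hold conversion; the continuous-time Jacobians below are hypothetical placeholders, not the actual reactor and separator linearization.

```python
import numpy as np
from scipy.signal import cont2discrete
from scipy.linalg import expm

# Hypothetical continuous-time linearization (placeholder Jacobians, not the
# actual reactor-separator model).
Ac = np.array([[-0.5, 0.1],
               [0.0, -0.2]])
Bc = np.array([[1.0],
               [0.5]])
C = np.eye(2)                # full state measurement, y = x
D = np.zeros((2, 1))

dt = 0.1                     # sampling time used in the example (seconds)
Ad, Bd, Cd, Dd, _ = cont2discrete((Ac, Bc, C, D), dt, method='zoh')
```

Under a zero-order hold, the discrete state matrix equals the matrix exponential of the continuous one over a sample period, $A = e^{A_c \Delta t}$.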
Fig. 2. Performance of reactor and separator example.
Table 1
Steady states and parameters.

Parameter   Value   Units      Parameter      Value   Units
$H_1$       29.8    m          $A_1$          3       m$^2$
$x_{A1}$    0.542   wt(%)      $A_2$          3       m$^2$
$x_{B1}$    0.393   wt(%)      $A_3$          1       m$^2$
$T_1$       315     K          $\rho$         0.15    kg/m$^3$
$H_2$       30      m          $C_p$          25      kJ/kg K
$x_{A2}$    0.503   wt(%)      $k_{v1}$       2.5     kg/m s
$x_{B2}$    0.421   wt(%)      $k_{v2}$       2.5     kg/m s
$T_2$       315     K          $k_{v3}$       2.5     kg/m s
$H_3$       3.27    m          $x_{A0}$       1       wt(%)
$x_{A3}$    0.238   wt(%)      $T_0$          313     K
$x_{B3}$    0.570   wt(%)      $k_A$          0.02    1/s
$T_3$       315     K          $k_B$          0.018   1/s
$F_{f1}$    8.33    kg/s       $E_A/R$        100     K
$Q_1$       10      kJ/s       $E_B/R$        150     K
$F_{f2}$    0.5     kg/s       $\Delta H_A$   40      kJ/kg
$Q_2$       10      kJ/s       $\Delta H_B$   50      kJ/kg
$F_R$       66.2    kg/s       $\alpha_A$     3.5
$Q_3$       10      kJ/s       $\alpha_B$     1.1
                               $\alpha_C$     0.5
7.1. Distributed control
In order to control the separator and each reactor independently, we partition the plant into three subsystems by defining
\[
y_1 = \begin{bmatrix} H_1 & x_{A1} & x_{B1} & T_1 \end{bmatrix}' \qquad
u_1 = \begin{bmatrix} F_{f1} & Q_1 \end{bmatrix}'
\]
\[
y_2 = \begin{bmatrix} H_2 & x_{A2} & x_{B2} & T_2 \end{bmatrix}' \qquad
u_2 = \begin{bmatrix} F_{f2} & Q_2 \end{bmatrix}'
\]
\[
y_3 = \begin{bmatrix} H_3 & x_{A3} & x_{B3} & T_3 \end{bmatrix}' \qquad
u_3 = \begin{bmatrix} F_R & Q_3 \end{bmatrix}' .
\]
Following the distributed model derivation in Appendix B, we form the distributed model for the plant.
7.2. Simulation
Consider the performance of distributed control with the partitioning defined above. The tuning parameters are
\[
Q_{yi} = \operatorname{diag}(1, 0, 0, 0.1) \quad i \in \mathbb{I}_{1:2} \qquad
Q_{y3} = \operatorname{diag}(1, 0, 10^3, 0)
\]
\[
Q_i = C_i' Q_{yi} C_i + 0.001 I \qquad R_i = 0.01 I \qquad i \in \mathbb{I}_{1:3} .
\]
The input constraints are defined in Table 2. We simulate a setpoint change in the output product weight percent $x_{B3}$ at $t = 0.5$ s. In Fig. 2, the performance of the distributed control strategies is compared to the centralized control benchmark. For this example, noncooperative control is an improvement over decentralized control (see Table 3). Cooperative control with only a single iteration is significantly better than noncooperative control, however, and approaches centralized control as more iteration is allowed.
8. Conclusion
In this paper we present a novel cooperative distributed controller in which the subsystem controllers optimize the same objective function in parallel without the use of a coordinator. The
Table 2
Input constraints.
Parameter Lower bound Steady state Upper bound Units
F
f 1
0 8.33 10 kg/s
Q
1
0 10 50 kJ/s
F
f 2
0 0.5 10 kg/s
Q
2
0 10 50 kJ/s
F
R
0 66.2 75 kg/s
Q
3
0 10 50 kJ/s
Table 3
Performance comparison.
Cost Performance loss (%)
Centralized MPC 1.76 0
Cooperative MPC (10 iterates) 1.94 9.88
Cooperative MPC (1 iterate) 3.12 76.8
Noncooperative MPC 4.69 166
Decentralized MPC 185 1.04 e+04
control algorithm is equivalent to a suboptimal centralized controller, allowing the distributed optimization to be terminated at any iterate before convergence. At convergence, the feedback is Pareto optimal. We establish exponential stability for the nominal case and for perturbation by a stable state estimator. For plants with sparsely coupled constraints, the controller can be extended by repartitioning the decision variables to maintain Pareto optimality.
We make no restrictions on the strength of the dynamic coupling in the network of subsystems, offering flexibility in plantwide control design. Moreover, the cooperative controller can improve performance of plants over traditional decentralized control and noncooperative control, especially for plants with strong open-loop interactions between subsystems. A simple example is given showing this performance improvement.
Acknowledgements
The authors gratefully acknowledge the financial support of
the industrial members of the Texas-Wisconsin-California Control
Consortium, and NSF through grant #CTS-0456694. The authors
would like to thank Prof. D.Q. Mayne for helpful discussion of the
ideas in this paper.
Appendix A. Further proofs
Proof of Lemma 5. By assumption, the initial guess is feasible. Because $U_1$ and $U_2$ are convex, the convex combination (3.7) with $p = 0$ implies $(u_1^1, u_2^1)$ is feasible. Feasibility for $p > 1$ follows by induction.
Proof of Lemma 6. For every $p \ge 0$, the cost function satisfies the following:
\[
\begin{aligned}
V(x(0), u^{p+1}) &= V\bigl( x(0),\; w_1 (u_1^*, u_2^p) + w_2 (u_1^p, u_2^*) \bigr) \\
&\le w_1 V\bigl( x(0), (u_1^*, u_2^p) \bigr) + w_2 V\bigl( x(0), (u_1^p, u_2^*) \bigr) \\
&\le w_1 V\bigl( x(0), (u_1^p, u_2^p) \bigr) + w_2 V\bigl( x(0), (u_1^p, u_2^p) \bigr) \\
&= V(x(0), u^p) .
\end{aligned} \tag{A.1}
\]
The first equality follows from (3.7). The first inequality (A.1a) follows from convexity of $V(\cdot)$. The next inequality (A.1b) follows from the optimality of $u_i^*$, $i \in \mathbb{I}_{1:2}$, and the final line follows from $w_1 + w_2 = 1$. Because the cost is bounded below, it converges.
Proof of Lemma 7. We give a proof that requires only closedness (not compactness) of $U_i$, $i \in \mathbb{I}_{1:2}$. From Lemma 6, the cost converges, say to $\bar V$. Since $V$ is quadratic and strongly convex, its sublevel sets $\operatorname{lev}_a(V)$ are compact and bounded for all $a$. Hence, all iterates belong to the compact set $\operatorname{lev}_{V(x(0), u^0)}(V) \cap U^N$, so there is at least one accumulation point. Let $\bar u$ be any such accumulation point, and choose a subsequence $\mathcal{P} \subseteq \{1, 2, 3, \ldots\}$ such that $\{u^p\}_{p \in \mathcal{P}}$ converges to $\bar u$. We obviously have that $V(x(0), \bar u) = \bar V$, and moreover that
\[
\lim_{p \in \mathcal{P},\, p \to \infty} V(x(0), u^p) = \lim_{p \in \mathcal{P},\, p \to \infty} V(x(0), u^{p+1}) = \bar V . \tag{A.2}
\]
By strict convexity of $V$ and compactness of $U_i$, $i \in \mathbb{I}_{1:2}$, the minimizer of $V(x(0), \cdot)$ is attained at a unique point $u^0 = (u_1^0, u_2^0)$. By taking limits in (A.1) as $p \to \infty$ for $p \in \mathcal{P}$, and using $w_1 > 0$, $w_2 > 0$, we deduce directly that
\[
\lim_{p \in \mathcal{P},\, p \to \infty} V\bigl( x(0), (u_1^*(u_2^p), u_2^p) \bigr) = \bar V \tag{A.3a}
\]
\[
\lim_{p \in \mathcal{P},\, p \to \infty} V\bigl( x(0), (u_1^p, u_2^*(u_1^p)) \bigr) = \bar V . \tag{A.3b}
\]
We suppose for contradiction that $\bar V \ne V(x(0), u^0)$ and thus $\bar u \ne u^0$. Because $V(x(0), \cdot)$ is convex, we have
\[
\nabla V(x(0), \bar u)' (u^0 - \bar u) \le \Delta \bar V := V(x(0), u^0) - V(x(0), \bar u) < 0
\]
where $\nabla V(x(0), \bar u)$ denotes the gradient of $V(x(0), \cdot)$ at $\bar u$. It follows immediately that either
\[
\nabla V(x(0), \bar u)' \begin{bmatrix} u_1^0 - \bar u_1 \\ 0 \end{bmatrix} \le (1/2) \Delta \bar V \quad \text{or} \tag{A.4a}
\]
\[
\nabla V(x(0), \bar u)' \begin{bmatrix} 0 \\ u_2^0 - \bar u_2 \end{bmatrix} \le (1/2) \Delta \bar V . \tag{A.4b}
\]
Suppose first that (A.4a) holds. Applying Taylor's theorem to $V$:
\[
\begin{aligned}
V\bigl( x(0), (u_1^p + \epsilon (u_1^0 - u_1^p), u_2^p) \bigr)
&= V(x(0), u^p) + \epsilon\, \nabla V(x(0), u^p)' \begin{bmatrix} u_1^0 - u_1^p \\ 0 \end{bmatrix} \\
&\quad + \frac{\epsilon^2}{2} \begin{bmatrix} u_1^0 - u_1^p \\ 0 \end{bmatrix}' \nabla^2 V \begin{bmatrix} u_1^0 - u_1^p \\ 0 \end{bmatrix} \\
&\le \bar V + \epsilon (1/4) \Delta \bar V + \epsilon^2 \bar\sigma
\end{aligned} \tag{A.5}
\]
in which the Hessian is evaluated at an intermediate point between $u^p$ and $(u_1^p + \epsilon (u_1^0 - u_1^p), u_2^p)$. (A.5) applies for all $p \in \mathcal{P}$ sufficiently large, for some $\bar\sigma$ independent of $\epsilon$ and $p$. By fixing $\epsilon$ to a suitably small value (certainly less than 1), we have both that the right-hand side of (A.5) is strictly less than $\bar V$ and that $u_1^p + \epsilon (u_1^0 - u_1^p) \in U_1^N$. By taking limits in (A.5) and using (A.3) and the fact that $u_1^*(u_2^p)$ is optimal for $V(x(0), (\cdot, u_2^p))$ in $U_1^N$, we have
\[
\bar V = \lim_{p \in \mathcal{P},\, p \to \infty} V\bigl( x(0), (u_1^*(u_2^p), u_2^p) \bigr)
\le \lim_{p \in \mathcal{P},\, p \to \infty} V\bigl( x(0), (u_1^p + \epsilon (u_1^0 - u_1^p), u_2^p) \bigr)
< \bar V
\]
giving a contradiction. By identical logic, we obtain the same contradiction from (A.4b). We conclude that $\bar V = V(x(0), u^0)$ and thus $\bar u = u^0$. Since $\bar u$ was an arbitrary accumulation point of the sequence $\{u^p\}$, and since this sequence is confined to a compact set, we conclude that the whole sequence converges to $u^0$.
Proof of Corollary 14. We first note that $\lvert x(k) \rvert \le \lvert \hat x(k) \rvert + \lvert e(k) \rvert \le 2\, \lvert (\hat x(k), e(k)) \rvert$. From Theorem 13 we can write
\[
\lvert x(k) \rvert \le 2 \gamma\, \lvert (\hat x(0), e(0)) \rvert\, \lambda^k \le \bar\gamma\, \bigl\lvert \hat x(0) + e(0) \bigr\rvert\, \lambda^k
\]
with $\bar\gamma = 2 \gamma$, which concludes the proof by noticing that $x(0) = \hat x(0) + e(0)$.
Proof of Lemma 15. Because $\tilde V(\cdot)$ is convex and bounded below, the proof follows from Lemma 7 and from noticing that the point $u^0 = (\tilde E_1 \tilde u_1^0, \tilde E_2 \tilde u_2^0)$, with $\tilde u_i^0 = \lim_{p \to \infty} \tilde u_i^p$, $i \in \mathbb{I}_{1:2}$, is Pareto optimal.
Appendix B. Deriving the distributed model
Consider the possibly nonminimal centralized model
\[
x^+ = A x + \sum_{j \in \mathbb{I}_{1:M}} B_j u_j \tag{B.1}
\]
\[
y_i = C_i x \qquad i \in \mathbb{I}_{1:M} .
\]
For each input/output pair $(u_j, y_i)$ we transform the triple $(A, B_j, C_i)$ into its Kalman canonical form [23, p. 270]:
\[
\begin{bmatrix} z^{oc}_{ij} \\ z^{\bar o c}_{ij} \\ z^{o \bar c}_{ij} \\ z^{\bar o \bar c}_{ij} \end{bmatrix}^+ =
\begin{bmatrix}
A^{oc}_{ij} & 0 & A^{oc,\, o\bar c}_{ij} & 0 \\
A^{\bar o c,\, oc}_{ij} & A^{\bar o c}_{ij} & A^{\bar o c,\, o\bar c}_{ij} & A^{\bar o c,\, \bar o\bar c}_{ij} \\
0 & 0 & A^{o \bar c}_{ij} & 0 \\
0 & 0 & A^{\bar o \bar c,\, o\bar c}_{ij} & A^{\bar o \bar c}_{ij}
\end{bmatrix}
\begin{bmatrix} z^{oc}_{ij} \\ z^{\bar o c}_{ij} \\ z^{o \bar c}_{ij} \\ z^{\bar o \bar c}_{ij} \end{bmatrix} +
\begin{bmatrix} B^{oc}_{ij} \\ B^{\bar o c}_{ij} \\ 0 \\ 0 \end{bmatrix} u_j
\]
\[
y_{ij} = \begin{bmatrix} C^{oc}_{ij} & 0 & C^{o \bar c}_{ij} & 0 \end{bmatrix}
\begin{bmatrix} z^{oc}_{ij} \\ z^{\bar o c}_{ij} \\ z^{o \bar c}_{ij} \\ z^{\bar o \bar c}_{ij} \end{bmatrix}
\qquad
y_i = \sum_{j \in \mathbb{I}_{1:M}} y_{ij} .
\]
The vector $z^{oc}_{ij}$ captures the modes of $A$ that are both observable by $y_i$ and controllable by $u_j$. The distributed model is then
\[
A_{ij} \equiv A^{oc}_{ij} \qquad B_{ij} \equiv B^{oc}_{ij} \qquad C_{ij} \equiv C^{oc}_{ij} \qquad x_{ij} \equiv z^{oc}_{ij} .
\]
References

[1] D.Q. Mayne, J.B. Rawlings, C.V. Rao, P.O.M. Scokaert, Constrained model predictive control: stability and optimality, Automatica 36 (6) (2000) 789–814.
[2] S.J. Qin, T.A. Badgwell, A survey of industrial model predictive control technology, Control Eng. Pract. 11 (7) (2003) 733–764.
[3] S. Huang, K.K. Tan, T.H. Lee, Decentralized control design for large-scale systems with strong interconnections using neural networks, IEEE Trans. Automat. Control 48 (5) (2003) 805–810.
[4] N.R. Sandell Jr., P. Varaiya, M. Athans, M. Safonov, Survey of decentralized control methods for large scale systems, IEEE Trans. Automat. Control 23 (2) (1978) 108–128.
[5] H. Cui, E.W. Jacobsen, Performance limitations in decentralized control, J. Process Control 12 (2002) 485–494.
[6] R.A. Bartlett, L.T. Biegler, J. Backstrom, V. Gopal, Quadratic programming algorithms for large-scale model predictive control, J. Process Control 12 (7) (2002) 775–795.
[7] G. Pannocchia, J.B. Rawlings, S.J. Wright, Fast, large-scale model predictive control by partial enumeration, Automatica 43 (2007) 852–860.
[8] E. Camponogara, D. Jia, B.H. Krogh, S. Talukdar, Distributed model predictive control, IEEE Control Syst. Mag. (2002) 44–52.
[9] W.B. Dunbar, Distributed receding horizon control of dynamically coupled nonlinear systems, IEEE Trans. Automat. Control 52 (7) (2007) 1249–1263.
[10] T. Başar, G.J. Olsder, Dynamic Noncooperative Game Theory, SIAM, Philadelphia, 1999.
[11] J.B. Rawlings, B.T. Stewart, Coordinating multiple optimization-based controllers: new opportunities and challenges, J. Process Control 18 (2008) 839–845.
[12] J.B. Rawlings, D.Q. Mayne, Model Predictive Control: Theory and Design, Nob Hill Publishing, Madison, WI, ISBN: 978-0-9759377-0-9, 2009, 576 pages.
[13] A.N. Venkat, I.A. Hiskens, J.B. Rawlings, S.J. Wright, Distributed MPC strategies with application to power system automatic generation control, IEEE Control Syst. Tech. 16 (6) (2008) 1192–1206.
[14] P.O.M. Scokaert, D.Q. Mayne, J.B. Rawlings, Suboptimal model predictive control (feasibility implies stability), IEEE Trans. Automat. Control 44 (3) (1999) 648–654.
[15] E.M.B. Aske, S. Strand, S. Skogestad, Coordinator MPC for maximizing plant throughput, Comput. Chem. Eng. 32 (2008) 195–204.
[16] J. Liu, D. Muñoz de la Peña, P.D. Christofides, Distributed model predictive control of nonlinear process systems, AIChE J. 55 (5) (2009) 1171–1184.
[17] J. Liu, D. Muñoz de la Peña, B.J. Ohran, P.D. Christofides, J.F. Davis, A two-tier architecture for networked process control, Chem. Eng. Sci. 63 (22) (2008) 5394–5409.
[18] R. Cheng, J. Forbes, W. Yip, Price-driven coordination method for solving plant-wide MPC problems, J. Process Control 17 (5) (2007) 429–438.
[19] A.N. Venkat, J.B. Rawlings, S.J. Wright, Distributed model predictive control of large-scale systems, in: Assessment and Future Directions of Nonlinear Model Predictive Control, Springer, 2007, pp. 591–605.
[20] R.D. Gudi, J.B. Rawlings, Identification for decentralized model predictive control, AIChE J. 52 (6) (2006) 2198–2210.
[21] D.P. Bertsekas, J.N. Tsitsiklis, Parallel and Distributed Computation, Athena Scientific, Belmont, Massachusetts, 1997.
[22] G. Pannocchia, S.J. Wright, B.T. Stewart, J.B. Rawlings, Efficient cooperative distributed MPC using partial enumeration, in: ADCHEM 2009, International Symposium on Advanced Control of Chemical Processes, Istanbul, Turkey, 2009.
[23] P.J. Antsaklis, A.N. Michel, Linear Systems, McGraw-Hill, New York, 1997.