,
where $y \in \mathbb{R}^m$ and $s \in E$.
Conic Linear Optimization and Appl. MS&E314 Lecture Note #03 3
Semidefinite Programming (SDP)

$$\text{(SDP)} \quad \inf \; C \bullet X \quad \text{subject to} \quad A_i \bullet X = b_i, \; i = 1, 2, \ldots, m, \quad X \succeq 0,$$

where $C, A_i \in \mathcal{S}^n$.
Then, the dual problem to (SDP) is:
$$\text{(SDD)} \quad \sup \; b^T y \quad \text{subject to} \quad \sum_{i=1}^{m} y_i A_i + S = C, \quad S \succeq 0,$$

where $y \in \mathbb{R}^m$ and $S \in \mathcal{S}^n$.
More Operator Notations
$$\mathcal{A}X = \begin{pmatrix} A_1 \bullet X \\ A_2 \bullet X \\ \vdots \\ A_m \bullet X \end{pmatrix}, \qquad \mathcal{A}^T y = \sum_{i=1}^{m} y_i A_i .$$

Thus, the constraints in (SDP) and (SDD) can be written as

$$\mathcal{A}X = b, \; X \succeq 0, \quad \text{and} \quad \mathcal{A}^T y + S = C, \; S \succeq 0.$$
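The operator $\mathcal{A}$ and its adjoint can be checked numerically. Below is a minimal numpy sketch with toy data of our own (random symmetric $A_i$ on $\mathcal{S}^3$ with $m = 2$; the names `op_A` and `op_AT` are ours), verifying the adjoint identity $(\mathcal{A}X)^T y = X \bullet (\mathcal{A}^T y)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sym(M):
    """Return the symmetric part of a square matrix."""
    return (M + M.T) / 2

# Toy data: m = 2 symmetric matrices A_i acting on S^3 (our own example).
A_mats = [sym(rng.standard_normal((3, 3))) for _ in range(2)]

def op_A(X):
    """The operator A: X -> (A_1 • X, ..., A_m • X)."""
    return np.array([np.tensordot(Ai, X) for Ai in A_mats])

def op_AT(y):
    """The adjoint A^T: y -> sum_i y_i A_i."""
    return sum(yi * Ai for yi, Ai in zip(y, A_mats))

# Adjoint identity: (A X)^T y = X • (A^T y) for all symmetric X and all y.
X = sym(rng.standard_normal((3, 3)))
y = rng.standard_normal(2)
assert np.isclose(op_A(X) @ y, np.tensordot(X, op_AT(y)))
```

Here `np.tensordot(Ai, X)` computes the Frobenius inner product $A_i \bullet X$.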
Weak Duality Theorem for CLP
Theorem 1 Let $F_p$ and $F_d$ denote the feasible regions of (CLP) and (CLD), respectively, and be non-empty. Then,

$$c \bullet x \ge b^T y \quad \text{where } x \in F_p, \; (y, s) \in F_d .$$

$$c \bullet x - b^T y = (c - \mathcal{A}^T y) \bullet x = s \bullet x \ge 0.$$

This quantity is called the duality or complementarity gap.
Corollary 1 i) If (LP) or (LD) is feasible but unbounded (its objective value is unbounded), then the other is infeasible.
ii) If a pair of feasible solutions to the primal and dual problems with equal objective values can be found, then they are both optimal.
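The gap identity $c^T x - b^T y = s^T x \ge 0$ is easy to verify numerically. A minimal sketch on a tiny LP of our own choosing (the data $A$, $b$, $c$ and the feasible pair are assumptions for illustration, not from the notes):

```python
import numpy as np

# Tiny LP: min c^T x s.t. Ax = b, x >= 0, with dual max b^T y s.t. A^T y + s = c, s >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([2.0, 1.0, 1.0])

x = np.array([0.2, 0.3, 0.5])      # primal feasible: Ax = b, x >= 0
y = np.array([0.5])                # a dual feasible choice
s = c - A.T @ y                    # dual slack, here (1.5, 0.5, 0.5) >= 0

gap = c @ x - b @ y
assert np.isclose(gap, s @ x)      # c^T x - b^T y = s^T x
assert gap >= 0                    # weak duality
```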
Strong Duality Theorem for LP
Theorem 2 i) If (LP) or (LD) has no feasible solution, then the other is either
unbounded or has no feasible solution.
ii) If (LP) or (LD) is feasible and bounded, then the other is feasible.
iii) If (LP) and (LD) both have feasible solutions, then both problems have optimal solutions and their optimal objective values are equal; that is, optimal solutions for both (LP) and (LD) exist and there is no duality gap.
A case in which neither (LP) nor (LD) is feasible:
$$c = (1;\; 0), \quad A = (0,\; 1), \quad b = 1.$$
Proof of LP Strong Duality Theorem
i) Let $F_d$ be empty. Then, from the Farkas lemma, there is a $d$ such that

$$c^T d < 0 \quad \text{and} \quad Ad = 0, \; d \ge 0.$$

Thus, if (LP) is feasible, say $x \in F_p$, then $x + \alpha d \in F_p$ for any $\alpha > 0$. But its objective value $c^T x + \alpha (c^T d)$ goes to $-\infty$ as $\alpha$ goes to $\infty$.

ii) Let (LP) be feasible and bounded. Then, there is no $d$ such that

$$c^T d < 0 \quad \text{and} \quad Ad = 0, \; d \ge 0.$$

Thus, from the Farkas lemma, (LD) is feasible.

iii) We would like to prove that there is a solution to

$$Ax = b, \quad A^T y + s = c, \quad c^T x - b^T y \le 0, \quad (x, s) \ge 0,$$

given that both (LP) and (LD) are feasible.
Suppose not. From the Farkas lemma, we must have an infeasibility certificate $(x', \tau, y')$ such that

$$Ax' - \tau b = 0, \quad A^T y' - \tau c \le 0, \quad (x'; \tau) \ge 0,$$

and

$$b^T y' - c^T x' = 1.$$

If $\tau > 0$, then we have

$$0 \ge (y')^T (Ax' - \tau b) + (x')^T (A^T y' - \tau c) = \tau (b^T y' - c^T x') = \tau,$$

which is a contradiction.

If $\tau = 0$, then the weak duality theorem also leads to a contradiction, since it leads to one of (LP) and (LD) being unbounded: if $c^T x' < 0$, the ray $x'$ makes (LP) unbounded, and if $b^T y' > 0$, the ray $y'$ makes (LD) unbounded, while both problems are assumed feasible.
Strong Duality Theorem for CLP?
Example 1:

$$C = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad A_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad b = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad K = \mathcal{S}^2_+ .$$
Example 2:

$$C = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad A_1 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 2 \end{pmatrix}$$

and

$$b = \begin{pmatrix} 0 \\ 2 \end{pmatrix}; \quad K = \mathcal{S}^3_+ .$$
Strong Duality Theorem for CLP
The key is that, in these examples, the Farkas lemma does not hold.
Theorem 3 i) Let (CLP) or (CLD) be infeasible, and furthermore let the other be feasible and have an interior. Then the other is unbounded.
ii) Let (CLP) and (CLD) both be feasible, and furthermore let one of them have an interior. Then there is no duality gap between (CLP) and (CLD).
iii) Let (CLP) and (CLD) both be feasible and have interiors. Then both have optimal solutions with no duality gap.
Proof of CLP Strong Duality Theorem
i) Suppose $F_d$ is empty and $F_p$ is feasible and has an interior; that is, there are $\hat{x} \in \operatorname{int} K$ and $\hat{\tau} > 0$ such that $A\hat{x} - b\hat{\tau} = 0$, $(\hat{x}, \hat{\tau}) \in \operatorname{int}(K \times \mathbb{R}_+)$.

Then, for any $z^* < 0$, consider the pair of alternative systems

$$Ax - b\tau = 0, \quad c \bullet x - z^* \tau < 0, \quad (x, \tau) \in K \times \mathbb{R}_+,$$

and

$$A^T y + s = c, \quad b^T y - \bar{s} = z^*, \quad (s, \bar{s}) \in K^* \times \mathbb{R}_+ .$$

But the latter is infeasible (since $F_d$ is empty), so the former has a feasible solution for any $z^*$; the Farkas alternative applies here because of the interior point $(\hat{x}, \hat{\tau})$. At such a solution, if $\tau > 0$, we have $c \bullet (x/\tau) < z^*$, and since $z^* < 0$ is arbitrary, (CLP) is unbounded; if $\tau = 0$, we have $\bar{x} + \alpha x$, where $\bar{x}$ is any feasible solution for (CLP), being feasible for (CLP), and its objective value goes to $-\infty$ as $\alpha$ goes to $\infty$.

ii) Let $F_p$ be feasible and have an interior, and now let $z^*$ be the infimum of (CLP); consider the same pair of systems

$$Ax - b\tau = 0, \quad c \bullet x - z^* \tau < 0, \quad (x, \tau) \in K \times \mathbb{R}_+,$$

and

$$A^T y + s = c, \quad b^T y - \bar{s} = z^*, \quad (s, \bar{s}) \in K^* \times \mathbb{R}_+ .$$

But the former is now infeasible, since a solution with $\tau > 0$ would have objective value below the infimum $z^*$, and one with $\tau = 0$ would make (CLP) unbounded; so we have a solution for the latter. From the Weak Duality theorem, $b^T y \le z^*$, while $b^T y = z^* + \bar{s} \ge z^*$; thus we must have $\bar{s} = 0$, that is, we have a solution $(y, s)$ such that

$$A^T y + s = c, \quad b^T y = z^*, \quad s \in K^* .$$

iii) We only need to prove that there exists a solution $x \in F_p$ such that $c \bullet x = z^*$, that is, the infimum of (CLP) is attainable. But this is just the other side of the proof, given that $F_d$ is feasible and has an interior and $z^*$ is also the supremum of (CLD).
SDP Example with Zero-Duality Gap but not Attainable
$$C = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad A_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \text{and} \quad b_1 = 2.$$
The primal has an interior, but the dual does not.
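Non-attainment can be seen concretely: the family $X(\epsilon) = \begin{pmatrix} \epsilon & 1 \\ 1 & 1/\epsilon \end{pmatrix}$, $\epsilon > 0$, is our own illustration (not from the slide) of feasible points whose objective $C \bullet X = \epsilon$ approaches the infimum $0$, which no feasible $X$ attains. A quick numpy check:

```python
import numpy as np

# Slide data: C = [[1,0],[0,0]], A_1 = [[0,1],[1,0]], b_1 = 2.
C = np.array([[1.0, 0.0], [0.0, 0.0]])
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])

# X(eps) = [[eps, 1], [1, 1/eps]] is feasible for every eps > 0
# (A_1 • X = 2, X is PSD since det X = 0 and trace X > 0),
# yet C • X = eps, so the infimum 0 is never attained.
for eps in [1.0, 1e-2, 1e-4]:
    X = np.array([[eps, 1.0], [1.0, 1.0 / eps]])
    assert np.isclose(np.tensordot(A1, X), 2.0)       # A_1 • X = b_1
    assert np.all(np.linalg.eigvalsh(X) >= -1e-9)     # X is PSD
    assert np.isclose(np.tensordot(C, X), eps)        # objective -> 0
```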
LP Optimality Conditions
$$\left\{ (x, y, s) \in (\mathbb{R}^n_+, \mathbb{R}^m, \mathbb{R}^n_+) : \begin{array}{l} c^T x - b^T y = 0 \\ Ax = b \\ A^T y + s = c \end{array} \right\},$$
which is a system of linear inequalities and equations. Now it is easy to verify
whether or not a pair (x, y, s) is optimal.
Since both $x$ and $s$ are nonnegative, $x^T s = 0$ implies that $x_j s_j = 0$ for all $j = 1, \ldots, n$, where we say $x$ and $s$ are complementary to each other.

$$x \circ s = 0, \quad Ax = b, \quad A^T y + s = c.$$

This system has in total $2n + m$ unknowns and $2n + m$ equations, including $n$ nonlinear equations.
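Because the system is just equations and sign constraints, a candidate triple can be certified optimal by direct substitution. A minimal sketch on a toy LP of our own (the data and the candidate $(x, y, s)$ are assumptions for illustration):

```python
import numpy as np

# Toy LP of ours: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])

x = np.array([1.0, 0.0])   # candidate primal solution
y = np.array([1.0])        # candidate dual multiplier
s = c - A.T @ y            # slack s = (0, 1)

assert np.allclose(x * s, 0)              # componentwise complementarity
assert np.allclose(A @ x, b)              # primal feasibility
assert np.all(x >= 0) and np.all(s >= 0)  # sign constraints
# All conditions hold, so (x, y, s) is optimal with c^T x = b^T y.
```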
Solution Support for LP
Let $x^*$ and $s^*$ be any pair of optimal solutions. By complementarity, $\operatorname{supp}(x^*) \cap \operatorname{supp}(s^*) = \emptyset$, so that $|\operatorname{supp}(x^*)| + |\operatorname{supp}(s^*)| \le n$.

There are optimal $x^*$ and $s^*$ such that the supports of $x^*$ and $s^*$ are maximal, respectively.

There are optimal $x^*$ and $s^*$ such that the supports of $x^*$ and $s^*$ are minimal, respectively.

If there is an optimal $s^*$ with $|\operatorname{supp}(s^*)| = n - d$, then the support size of every optimal $x^*$ is bounded by $d$.
LP strict complementarity theorem
Theorem 4 If (LP) and (LD) are both feasible, then there exists a pair of strictly complementary solutions $x^* \in F_p$ and $(y^*, s^*) \in F_d$ such that

$$x^* \circ s^* = 0 \quad \text{and} \quad |\operatorname{supp}(x^*)| + |\operatorname{supp}(s^*)| = n.$$

Moreover, the supports

$$P^* = \{ j : x^*_j > 0 \} \quad \text{and} \quad Z^* = \{ j : s^*_j > 0 \}$$

are invariant for all strictly complementary solution pairs.
Given (LP) or (LD), the pair $P^*$ and $Z^*$ is called the strict complementarity partition. The set $\{x : A_{P^*} x_{P^*} = b, \; x_{P^*} \ge 0, \; x_{Z^*} = 0\}$ is called the primal optimal face, and the set $\{y : c_{Z^*} - A^T_{Z^*} y \ge 0, \; c_{P^*} - A^T_{P^*} y = 0\}$ is called the dual optimal face.
An LP Example
Consider the primal LP problem:
$$\begin{array}{ll} \text{minimize} & 2x_1 + x_2 + x_3 \\ \text{subject to} & x_1 + x_2 + x_3 = 1, \\ & (x_1, x_2, x_3) \ge 0, \end{array}$$

where $P^* = \{2, 3\}$ and $Z^* = \{1\}$.
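The partition for this example can be verified with a strictly complementary pair; the particular pair $x^* = (0, 1/2, 1/2)$, $y^* = 1$ is our own choice of witness (any point in the relative interior of the optimal face works):

```python
import numpy as np

# The slide's LP: min 2x1 + x2 + x3  s.t.  x1 + x2 + x3 = 1, x >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([2.0, 1.0, 1.0])

x = np.array([0.0, 0.5, 0.5])   # our witness: primal optimal
y = np.array([1.0])             # dual optimal
s = c - A.T @ y                 # s = (1, 0, 0)

assert np.allclose(A @ x, b) and np.all(x >= 0)   # primal feasible
assert np.all(s >= 0)                             # dual feasible
assert np.allclose(x * s, 0)                      # complementary, hence optimal
assert np.all((x > 0) ^ (s > 0))                  # strictly complementary
# Supports (1-indexed): P* = {2, 3}, Z* = {1}, matching the slide.
```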
Proof of the LP strict complementarity theorem
For any given index $1 \le j \le n$, consider

$$z^*_j := \begin{array}[t]{ll} \text{minimize} & -x_j \\ \text{subject to} & Ax = b, \\ & c^T x \le z^*, \\ & x \ge 0; \end{array}$$

(where $z^*$ denotes the common optimal objective value of (LP) and (LD)) and its dual

$$\begin{array}{ll} \text{maximize} & b^T y - z^* \lambda \\ \text{subject to} & A^T y - \lambda c + s = -e_j, \\ & s \ge 0, \; \lambda \ge 0. \end{array}$$

If $z^*_j < 0$, then we have an optimal solution for (LP) such that $x_j > 0$. On the other hand, if $z^*_j = 0$, from the LP strong duality theorem, we have a solution
$(y, s, \lambda)$ such that

$$b^T y - z^* \lambda = 0, \quad A^T y - \lambda c + s = -e_j, \quad (s, \lambda) \ge 0.$$

In the solution, if $\lambda > 0$, then we have

$$b^T (y/\lambda) - z^* = 0, \quad A^T (y/\lambda) + (s + e_j)/\lambda = c,$$

which is an optimal solution for the dual with slack $s^*_j > 0$, where

$$y^* = y/\lambda, \quad s^* = (s + e_j)/\lambda .$$

(If $\lambda = 0$, then $A^T y + (s + e_j) = 0$ and $b^T y = 0$, so adding $(y, s + e_j)$ to any dual optimal solution again gives a dual optimal solution with $j$-th slack positive.)

Thus, for each given $1 \le j \le n$, there is an optimal solution pair $(x^j, s^j)$ such that either $x^j_j > 0$ or $s^j_j > 0$. Let an optimal solution pair be

$$x^* = \frac{1}{n} \sum_j x^j \quad \text{and} \quad s^* = \frac{1}{n} \sum_j s^j .$$

Then it is a strictly complementary solution pair.
Let $(x^1, s^1)$ and $(x^2, s^2)$ be two strictly complementary solution pairs. Note that we still have

$$0 = (x^1)^T s^2 = (x^2)^T s^1$$

from the Strong Duality theorem. This indicates that they must have the same strict complementarity partition since, otherwise, we would have a $j$ such that $x^1_j > 0$ and $s^2_j > 0$, so that $(x^1)^T s^2 > 0$.
Optimality Conditions for SDP with zero duality gap
$$\left\{ \begin{array}{l} C \bullet X - b^T y = 0 \\ \mathcal{A}X = b \\ \mathcal{A}^T y + S = C \\ X, S \succeq 0, \end{array} \right. \tag{1}$$

or

$$\left\{ \begin{array}{l} XS = 0 \\ \mathcal{A}X = b \\ \mathcal{A}^T y + S = C \\ X, S \succeq 0. \end{array} \right. \tag{2}$$
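System (2) is again checkable by substitution. A minimal sketch on a diagonal SDP of our own (min $C \bullet X$ s.t. $I \bullet X = 1$, $X \succeq 0$ with $C = \operatorname{diag}(1, 2)$; the candidate solution is our assumption):

```python
import numpy as np

# Toy SDP of ours: min C • X  s.t.  I • X = 1, X PSD, with C = diag(1, 2).
# Candidate: X* = diag(1, 0), y* = 1, S* = C - y* I = diag(0, 1).
C = np.diag([1.0, 2.0])
A1 = np.eye(2)
b = np.array([1.0])

X = np.diag([1.0, 0.0])
y = np.array([1.0])
S = C - y[0] * A1

assert np.allclose(X @ S, 0)                     # XS = 0
assert np.isclose(np.tensordot(A1, X), b[0])     # AX = b
assert np.all(np.linalg.eigvalsh(X) >= 0)        # X PSD
assert np.all(np.linalg.eigvalsh(S) >= 0)        # S PSD
assert np.isclose(np.tensordot(C, X), b @ y)     # zero duality gap
```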
Solution Rank for SDP
Let $X^*$ and $S^*$ be any pair of optimal solutions. By complementarity, $X^* S^* = 0$, so that $\operatorname{rank}(X^*) + \operatorname{rank}(S^*) \le n$.

There are optimal $X^*$ and $S^*$ such that the ranks of $X^*$ and $S^*$ are maximal, respectively.

There are optimal $X^*$ and $S^*$ such that the ranks of $X^*$ and $S^*$ are minimal, respectively.

If there is an optimal $S^*$ with $\operatorname{rank}(S^*) = n - d$, then the rank of every optimal $X^*$ is bounded by $d$.
SDP strict complementarity?
Given a pair (SDP) and (SDD) for which complementary solutions exist, is there a solution pair such that

$$\operatorname{rank}(X^*) + \operatorname{rank}(S^*) = n?$$

$$C = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad A_1 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 2 \end{pmatrix}$$

and

$$b = \begin{pmatrix} 0 \\ 0 \end{pmatrix}; \quad K = \mathcal{S}^3_+ .$$
Uniqueness Theorem for CLP: LP case
Given an optimal solution $x^*$, when is it the unique optimal solution? We first consider the LP case.

Theorem 5 An LP optimal solution $x^*$ is unique if and only if the support of $x^*$ is maximal among all optimal solutions and the columns of $A_{\operatorname{supp}(x^*)}$ are linearly independent.

It is easy to see both conditions are necessary, since otherwise one can find an optimal solution with a different support size. To see sufficiency, suppose there is another optimal solution $y^*$ such that $x^* - y^* \ne 0$. We must have $\operatorname{supp}(y^*) \subseteq \operatorname{supp}(x^*)$. Then we see

$$0 = Ax^* - Ay^* = A(x^* - y^*) = A_{\operatorname{supp}(x^*)} (x^* - y^*)_{\operatorname{supp}(x^*)},$$

which implies that the columns of $A_{\operatorname{supp}(x^*)}$ are linearly dependent.
Uniqueness Theorem for CLP: SDP case
Given an SDP optimal and complementary solution $X^*$, when is it the unique optimal solution?

Theorem 6 An SDP optimal and complementary solution $X^*$ is unique if and only if the rank of $X^*$ is maximal among all optimal solutions and the matrices $V^* A_i (V^*)^T$, $i = 1, \ldots, m$, are linearly independent, where $X^* = (V^*)^T V^*$, $V^* \in \mathbb{R}^{r \times n}$, and $r$ is the rank of $X^*$.

It is easy to see why the rank of $X^*$ needs to be maximal. For any optimal dual slack matrix $S^*$, we have

$$S^* \bullet (V^*)^T V^* = 0,$$

which implies that $S^* (V^*)^T = 0$. Consider any matrix

$$X = (V^*)^T U V^*, \quad \text{where } U \in \mathcal{S}^r_+$$

and

$$b_i = A_i \bullet (V^*)^T U V^* = V^* A_i (V^*)^T \bullet U, \quad i = 1, \ldots, m.$$
One can see that $X$ remains an optimal SDP solution for any such $U \in \mathcal{S}^r_+$, since it is feasible and remains complementary to any optimal dual slack matrix. If $V^* A_i (V^*)^T$, $i = 1, \ldots, m$, are not linearly independent, then one can find $W$ such that

$$V^* A_i (V^*)^T \bullet W = 0, \quad i = 1, \ldots, m, \qquad 0 \ne W \in \mathcal{S}^r .$$

Now consider

$$X(\alpha) = (V^*)^T (I + \alpha W) V^* ,$$

and then we can choose $\alpha \ne 0$ such that $X(\alpha) \succeq 0$ is another optimal solution.

To see sufficiency, suppose there is another optimal solution $Y^*$ such that $X^* - Y^* \ne 0$. We must have $Y^* = (V^*)^T U V^*$ for some $I \ne U \in \mathcal{S}^r_+$. Then we see

$$V^* A_i (V^*)^T \bullet (I - U) = 0, \quad i = 1, \ldots, m,$$

which contradicts that they are linearly independent.
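The linear-independence test on the matrices $V^* A_i (V^*)^T$ reduces to a rank computation after flattening each matrix into a vector. A minimal sketch with toy data of our own ($n = 3$, $r = 2$, $m = 2$; all matrices are assumptions for illustration):

```python
import numpy as np

# Toy data of ours: V* in R^{r x n} with X* = V*^T V*, and two constraint matrices.
V = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
A_list = [np.diag([1.0, 2.0, 0.0]),
          np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])]

# Flatten each V A_i V^T into a row; independence <=> full row rank.
M = np.stack([(V @ Ai @ V.T).ravel() for Ai in A_list])
independent = np.linalg.matrix_rank(M) == len(A_list)
# Here V A_1 V^T = diag(1, 2) and V A_2 V^T = [[0, 1], [1, 0]] are
# linearly independent, so the test passes.
assert independent
```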