
International Journal of Mathematics and Computer Applications Research (IJMCAR)
ISSN(P): 2249-6955; ISSN(E): 2249-8060
Vol. 5, Issue 4, Aug 2015, 83-92
© TJPRC Pvt. Ltd.

CONDITION FOR FEASIBLE SOLUTION TO PRIMAL AND DUAL NONLINEAR PROGRAMMING PROBLEM

PRASHANT CHAUHAN
S. L. Education Institute, Moradabad, Uttar Pradesh, India

ABSTRACT
In this paper, we review some results that develop a duality theory for nonlinear programming problems. These results are closely parallel to those of the linear programming case.

KEYWORDS: Nonlinear Programming, Duality, Lagrangian Function


INTRODUCTION
The general nonlinear programming maximization problem is
Maximize $\Pi(X) = f(X)$

Subject to $h(X) \le 0$ (1)

And $X \ge 0$,

where, as usual, $h(X) = [h_1(X), \ldots, h_m(X)]'$. The associated Lagrangian function, used primarily in the development of the Kuhn-Tucker [18] conditions and in the discussion of saddle points, is

$L(X, Y) = f(X) - Y'[h(X)],$ (2)

where $Y = [y_1, \ldots, y_m]'$. The gradient of $L(X, Y)$ with respect to the $y$'s is just $\nabla L_Y = -h(X)$, and so $f(X)$ in (1) can be expressed as

$f(X) = L(X, Y) + Y'[h(X)] = L(X, Y) - Y'\nabla L_Y.$
In addition, the requirement $h(X) \le 0$ can be expressed as $\nabla L_Y \ge 0$, so the general nonlinear programming maximization problem in (1) can also be written

Maximize $L(X, Y) - Y'\nabla L_Y$

Subject to $\nabla L_Y \ge 0$ (3)

And $X \ge 0$.

This appears only to add complexity to the expression in (1), but it does suggest a symmetric problem:


Minimize $L(X, Y) - X'\nabla L_X$

Subject to $\nabla L_X \le 0$ (4)

And $Y \ge 0$.

From $L(X, Y)$ in (2), and using the Jacobian matrix

$$J = \begin{bmatrix} [\nabla h_1]' \\ \vdots \\ [\nabla h_m]' \end{bmatrix},$$

we have $\nabla L_X = \nabla f - J'Y$, and so (4) in more detail is

Minimize $\Delta(X, Y) = f(X) - Y'[h(X)] - X'[\nabla f - J'Y]$

Subject to $J'Y - \nabla f \ge 0$ (5)

And $Y \ge 0$.

Written out in more detail this is

Minimize $f(X) - \sum_{i=1}^{m} y_i h_i(X) - \sum_{j=1}^{n} x_j \Big( f_j - \sum_{i=1}^{m} y_i h_{ij} \Big)$

Subject to $\sum_{i=1}^{m} y_i h_{ij} - f_j \ge 0 \quad (j = 1, \ldots, n)$ (6)

And $y_i \ge 0 \quad (i = 1, \ldots, m),$

where $f_j = \partial f / \partial x_j$ and $h_{ij} = \partial h_i / \partial x_j$.

When $f(X)$ in the maximization problem is concave and each constraint function is convex or quasiconvex, so that the Kuhn-Tucker conditions are both necessary and sufficient for a maximum of the nonlinear programming problem, (6) [or (5)] is taken to be the dual to (1) [or (3)]. This is primarily because (i) a pair of dual linear programs corresponds precisely to (3) and (4), and (ii) a set of theorems parallel to those in duality theory for linear programs can be derived.
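To make the pair (1) and (5) concrete, the following Python sketch evaluates both objectives for a small assumed problem; the particular f and h below are made-up illustrations, not taken from the paper. With f concave and h convex, the dual value should bound the primal value from above, consistent with the limiting property noted in the Conclusions.

```python
import numpy as np

# Assumed example: maximize f(X) = -(x1-2)^2 - (x2-3)^2 + 10  (concave)
# subject to h(X) = x1 + x2 - 4 <= 0  (convex) and X >= 0.
def f(x):
    return -(x[0] - 2.0) ** 2 - (x[1] - 3.0) ** 2 + 10.0

def grad_f(x):
    return np.array([-2.0 * (x[0] - 2.0), -2.0 * (x[1] - 3.0)])

def h(x):                      # m = 1 constraint
    return np.array([x[0] + x[1] - 4.0])

def jac_h(x):                  # Jacobian J; row i is the gradient of h_i
    return np.array([[1.0, 1.0]])

def dual_obj(x, y):            # Delta(X, Y) from (5)
    return f(x) - y @ h(x) - x @ (grad_f(x) - jac_h(x).T @ y)

x_p = np.array([1.0, 2.0])                         # primal feasible: h <= 0, X >= 0
x_d, y_d = np.array([1.5, 2.5]), np.array([1.0])   # dual feasible: J'Y - grad f >= 0, Y >= 0
assert np.all(h(x_p) <= 0) and np.all(jac_h(x_d).T @ y_d - grad_f(x_d) >= 0)
print(f(x_p), "<=", dual_obj(x_d, y_d))            # prints 8.0 <= 9.5
```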
Regarding the linear programming connection, the general linear programming maximization problem is

Maximize $P'X$

Subject to $AX \le B$

And $X \ge 0$.

The associated Lagrangian function, as in (2), would be

$L(X, Y) = P'X - Y'(AX - B),$ (7)

and so $\nabla L_Y = -[AX - B]$ and $\nabla L_X = [P - A'Y]$. (The latter gradient is defined as a column vector to conform to the convention of expressing gradients as columns.) The primal linear program can now be written in the style of (3) as

Maximize $P'X - Y'[AX - B] + Y'[AX - B]$

Subject to $AX - B \le 0$ or $AX \le B$ (8)

And $X \ge 0$.

Corresponding to (4) we have, for this linear program,

Minimize $P'X - Y'[AX - B] - X'[P - A'Y]$

or, since $P'X = X'P$, $Y'B = B'Y$, and $Y'AX = X'A'Y$,

Minimize $B'Y$

Subject to $-P + A'Y \ge 0$ or $A'Y \ge P$

And $Y \ge 0$.

Clearly, (8) and this last program are exactly a pair of dual linear programs.
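The equality of optimal objective values for such a pair can be checked numerically. The sketch below uses scipy.optimize.linprog with made-up A, B, and P values; since linprog minimizes, the primal is solved by negating P, and the dual constraints $A'Y \ge P$ are flipped into $\le$ form.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data for: maximize P'X subject to AX <= B, X >= 0
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
B = np.array([8.0, 9.0])
P = np.array([3.0, 2.0])

primal = linprog(-P, A_ub=A, b_ub=B, bounds=[(0, None)] * 2)

# Dual: minimize B'Y subject to A'Y >= P, Y >= 0
dual = linprog(B, A_ub=-A.T, b_ub=-P, bounds=[(0, None)] * 2)

print("primal optimum:", -primal.fun)   # 12.0
print("dual optimum:  ", dual.fun)      # 12.0
```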
We now explore a set of primal-dual theorems for the nonlinear programming problems in (1) and (5); these have obvious parallels to the corresponding results for linear programs.

CONDITION FOR THE PRIMAL AND DUAL PROBLEM TO HAVE FEASIBLE SOLUTION


Theorem 1

Feasible solutions to the primal and dual problems are optimal if and only if

$\Pi(X^p) = \Delta(X^d, Y^d).$

Clearly, if $\Pi = \Delta$, then both objective functions have reached their limits and so the solutions are optimal. An important outcome of the proof is that, given an optimal $X^*$ for the primal, a vector $Y^*$ can be found such that $(X^*, Y^*)$ is an optimal solution to the dual.
Theorem 2

A pair of feasible solutions has $\Pi(X^*) = \Delta(X^*, Y^*)$ if and only if (i) $(Y^*)'[h(X^*)] = 0$ and (ii) $(X^*)'[\nabla f^* - (J^*)'Y^*] = 0$.

Note that $-h(X^*)$ in (i) is the vector of slack variables for the constraints in (1), and $[\nabla f^* - (J^*)'Y^*]$ plays the same role for the constraints in (5). So this theorem describes a kind of complementary slackness for optimal solutions that is parallel to that in the linear programming case. This property of the optimal solutions to the pair of dual nonlinear programs shows the equivalence of the dual variables (Y) and the Lagrange multipliers in the development of the Kuhn-Tucker conditions and in the saddle-point connection.
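Although Theorem 2 is stated for the nonlinear pair, it specializes to the familiar complementary slackness of linear programming, where the slacks are $B - AX^*$ and $A'Y^* - P$. A sketch checking (i) and (ii) on the same made-up LP data used above:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [3.0, 1.0]])
B = np.array([8.0, 9.0])
P = np.array([3.0, 2.0])

x_star = linprog(-P, A_ub=A, b_ub=B).x      # optimal primal solution
y_star = linprog(B, A_ub=-A.T, b_ub=-P).x   # optimal dual solution

print("(i) ", y_star @ (B - A @ x_star))    # Y*' (primal slacks) ~ 0
print("(ii)", x_star @ (A.T @ y_star - P))  # X*' (dual slacks)   ~ 0
```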
Theorem 3

Under certain conditions, $\partial \Pi(X^*) / \partial b_i = y_i^*$, where $b_i$ is the right-hand side of the i-th constraint when (1) is written as $h_i(X) \le b_i$. This marginal valuation property of the dual variables at the optimum held for linear programs, subject to the qualification that the appropriate derivatives existed. The same sort of requirement is needed here. Under those conditions, $y_i^*$ is a measure of the impact on the optimum value of the primal objective function of a marginal change in $b_i$.
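In the linear case this marginal valuation can be observed directly: resolving the primal after a small change in one right-hand-side entry $b_i$ should move the optimum by roughly $y_i^*$. A sketch with the same made-up data (this assumes the optimal basis does not change under the small perturbation):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [3.0, 1.0]])
B = np.array([8.0, 9.0])
P = np.array([3.0, 2.0])

def primal_opt(b):
    # optimal value of: maximize P'X subject to AX <= b, X >= 0
    return -linprog(-P, A_ub=A, b_ub=b).fun

y_star = linprog(B, A_ub=-A.T, b_ub=-P).x   # optimal dual variables
eps = 1e-4
for i in range(len(B)):
    b_pert = B.copy()
    b_pert[i] += eps
    fd = (primal_opt(b_pert) - primal_opt(B)) / eps   # finite difference
    print(f"dPi/db_{i+1} ~ {fd:.4f}   y*_{i+1} = {y_star[i]:.4f}")
```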

The Solution of Nonlinear Equations


Many numerical methods exist for locating roots; we present one simple method here, called Newton's method, which is motivated as follows. We assume that f has continuous second derivatives and that some estimate $x_1$ of a solution to

$f(x) = 0$ (9)

is available. If no such estimate is known, $x_1$ is chosen at random. If $x_1$ is a reasonably good estimate, the Taylor series expansion of f about $x_1$ can be approximated as

$f(x) \approx f(x_1) + (x - x_1) f'(x_1).$

Hence if x is a solution to (9),

$0 = f(x_1) + (x - x_1) f'(x_1),$ so that $x = x_1 - f(x_1)/f'(x_1).$ (10)

Now unless f is linear, x will not in general be an exact solution to (9). However, x can be used as an improved estimate. Indeed, (10) can be looked upon as the first equation in a family which generates successive improved estimates of a solution to (9). The family has the following general form:

$x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)}, \qquad n = 1, 2, \ldots$ (11)
Once an estimate is finally found which is sufficiently close to a root, a new starting point can be selected in an
effort to find a new root. This procedure is repeated until all roots are found.
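A minimal sketch of iteration (11); the example function, starting point, tolerance, and iteration cap are arbitrary choices for illustration.

```python
def newton(f, fprime, x1, tol=1e-10, max_iter=100):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |f(x_n)| is small."""
    x = x1
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)   # eq. (11)
    return x                     # best estimate after max_iter steps

# Example: a root of f(x) = x^2 - 2, starting from x1 = 1
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)   # ~ 1.41421356...
```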
Multidimensional Optimization with Equality Constraints
This problem can be stated in general as follows:

Maximize $f(X)$ (12)

Subject to $g_j(X) = 0, \quad j = 1, 2, \ldots, m,$ (13)

where $X = (x_1, x_2, \ldots, x_n)^T$.

Consider the problem:

Maximize $f(X) = x_1^2 + 2(x_2 - 4)^2 + 8$

Subject to $x_1 = x_2^2 - 4$.

Substituting the constraint into the objective, we are left with the following unconstrained problem in one dimension:

Maximize $f(x_2) = (x_2^2 - 4)^2 + 2(x_2 - 4)^2 + 8,$

which is easier to solve. Of course, this approach of elimination will be successful in reducing the number of variables in the problem only if it is possible to express a solution for one or more of the variables explicitly. Often, however, this cannot be done.
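For the eliminated one-dimensional problem above, stationary points can be located numerically by applying the Newton iteration (11) to the derivative of the substituted objective. The derivative below was simplified by hand, $f'(x_2) = 2(x_2^2 - 4)(2x_2) + 4(x_2 - 4) = 4x_2^3 - 12x_2 - 16$; classifying the stationary point as a maximum or minimum would then use the second derivative.

```python
def df(x):
    return 4.0 * x ** 3 - 12.0 * x - 16.0   # f'(x2) of the substituted objective

def d2f(x):
    return 12.0 * x ** 2 - 12.0             # f''(x2)

x = 2.0                                     # arbitrary starting guess
for _ in range(50):                         # Newton iteration (11) on f'(x2) = 0
    x = x - df(x) / d2f(x)
print("stationary point: x2 =", x, " f'(x2) =", df(x))
```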
The Jacobian Method
We now present a method which solves the problem (12), (13). It is assumed that f and the $g_j$, $j = 1, 2, \ldots, m$, have continuous second derivatives. The strategy is to find a suitable expression for the first derivatives of f at all points which satisfy (13). The feasible stationary points of f are the ones among these for which

$\dfrac{\partial f}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n.$ (14)

The maximum points are identified among those satisfying (14).


These ideas are now placed on a firm mathematical basis. Consider any point X which satisfies (13). In any neighborhood of X there will exist at least one point X + h which satisfies (13), because X is on the boundary of the region defined by (13). Expanding f and $g_j$, $j = 1, 2, \ldots, m$, in a Taylor series about X, we get

$f(X + h) = f(X) + \nabla f(X)^T h + \tfrac{1}{2} h^T H_f(\theta X + (1 - \theta)(X + h))\, h,$

$g_j(X + h) = g_j(X) + \nabla g_j(X)^T h + \tfrac{1}{2} h^T H_{g_j}(\theta X + (1 - \theta)(X + h))\, h, \quad j = 1, 2, \ldots, m,$

for some $\theta$, $0 < \theta < 1$. As X + h approaches X, the second-order terms vanish and we get

$f(X + h) \approx f(X) + \nabla f(X)^T h,$

$g_j(X + h) \approx g_j(X) + \nabla g_j(X)^T h, \quad j = 1, 2, \ldots, m.$

Therefore, writing $\partial f(X) = f(X + h) - f(X)$, $\partial g_j(X) = g_j(X + h) - g_j(X)$, and $\partial X = h$,

$\partial f(X) \approx \nabla f(X)^T \partial X,$

$\partial g_j(X) \approx \nabla g_j(X)^T \partial X, \quad j = 1, 2, \ldots, m.$

Using (13) we get $\partial g_j(X) = 0$, $j = 1, 2, \ldots, m$. Thus we can state, to within a first-order approximation,

$\nabla g_j(X)^T \partial X = 0, \quad j = 1, 2, \ldots, m.$ (15)

Now, as $\nabla f(X)$ and $\nabla g_j(X)$, $j = 1, 2, \ldots, m$, consist of known constants, (15) together with the expression for $\partial f(X)$ constitutes a set of (m + 1) linear equations in (n + 1) unknowns, $\partial x_1, \partial x_2, \ldots, \partial x_n, \partial f(X)$. If the equations are linearly dependent, one discards the smallest number whose removal leaves an independent set. Hence we can assume that there are no more equations than variables, i.e.,

$m \le n.$

Now $m = n$ leads to the unique solution

$\partial X = 0,$

which implies that there are no feasible points other than X in any neighborhood of X. That is, the set of feasible points is discrete. Hence, we can assume that

$m < n.$
We redefine $X = (x_1, x_2, \ldots, x_n)^T$ as

$X = (w_1, w_2, \ldots, w_m, y_1, y_2, \ldots, y_{n-m})^T.$ (16)

The variables $w_i$, $i = 1, 2, \ldots, m$, are called state variables and the variables $y_i$, $i = 1, 2, \ldots, (n - m)$, are called decision variables. Now (15) can be rewritten using (16), as follows:


$\sum_{i=1}^{m} \dfrac{\partial f(X)}{\partial w_i}\, \partial w_i + \sum_{i=1}^{n-m} \dfrac{\partial f(X)}{\partial y_i}\, \partial y_i = \partial f(X),$ (17)

$\sum_{i=1}^{m} \dfrac{\partial g_j(X)}{\partial w_i}\, \partial w_i + \sum_{i=1}^{n-m} \dfrac{\partial g_j(X)}{\partial y_i}\, \partial y_i = 0, \quad j = 1, 2, \ldots, m.$ (18)

Suppose now that the $\partial y_i$, $i = 1, 2, \ldots, (n - m)$, are given arbitrary values. When these are substituted into (18), unique values for the $\partial w_i$, $i = 1, 2, \ldots, m$, can be found which keep X + h inside the feasible region. One can then use all these values in (17) to see if

$\partial f(X) > 0,$

i.e., whether the new point X + h is an improvement over X.
We now state the explicit steps needed to carry this out using vector notation. The matrix

$$J = \begin{bmatrix}
\partial g_1/\partial w_1 & \partial g_1/\partial w_2 & \cdots & \partial g_1/\partial w_m \\
\partial g_2/\partial w_1 & \partial g_2/\partial w_2 & \cdots & \partial g_2/\partial w_m \\
\vdots & \vdots & & \vdots \\
\partial g_m/\partial w_1 & \partial g_m/\partial w_2 & \cdots & \partial g_m/\partial w_m
\end{bmatrix}$$

is called the Jacobian matrix, and the matrix

$$C = \begin{bmatrix}
\partial g_1/\partial y_1 & \partial g_1/\partial y_2 & \cdots & \partial g_1/\partial y_{n-m} \\
\partial g_2/\partial y_1 & \partial g_2/\partial y_2 & \cdots & \partial g_2/\partial y_{n-m} \\
\vdots & \vdots & & \vdots \\
\partial g_m/\partial y_1 & \partial g_m/\partial y_2 & \cdots & \partial g_m/\partial y_{n-m}
\end{bmatrix}$$

is called the control matrix. It is important in defining the state and decision variables that the left-hand sums in (17) and (18) be linearly independent. It is always possible to choose which $x_i$'s become state variables so that this holds, because we have assumed that the equations in (15) are linearly independent. The implication of this is that J is nonsingular. Now let

W = ( w1 , w2 ,..., wm )T
www.tjprc.org

editor@tjprc.org

90

Prashant Chauhan

Y = ( y1 , y 2 ,.... y n m ) T
Then

w f T W + w f T Y = f (W , Y )

(19)

And

JW + CY = 0

(20)

Respectively. As J is nonsingular, we can multiply (20) by J

W = J 1CY

(21)

It can be seen, that if the elements in

Y are given values, W can be calculated using (21). Substituting this

into (19) yields

f (W , Y ) = ( y f T w f T J 1C )Y

(22)

From (22) we can form what is known as the constrained gradient of f with respect to Y, which is

cy f =

c f ( w, y )
= y f T w f T J 1C
c
y

Each element of

cy f , namely

rate of change of f resulting from perturbing

c f
, i = 1,2,...(n m)
c yi

(23)

is called a constrained derivative. It represents the

xi from yi (all other xi s being held constant) to feasible points.

When constrained derivatives are used, i.e. X* is a feasible maximum it is necessary that

cy f ( X *) = 0

(24)

Equation (24) can be used identify all the stationary points; it remains to find which one is the global maximum.
With the modification that H is the matrix of constrained second derivatives with respect to the independent variables

y1 , y 2 ,..., y n m only, and not w1 , w2 ,..., wm . The complete method will be illustrated with a numerical example.
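A minimal numerical sketch along those lines, for an assumed problem (the particular f and g are made-up illustrations): with $w = x_1$ as the state variable and $y = x_2$ as the decision variable, the constrained gradient (23) is assembled from finite-difference gradients and vanishes at a constrained stationary point.

```python
import numpy as np

# Assumed problem: maximize f(X) = x1 * x2 subject to g(X) = x1 + x2^2 - 2 = 0
def f(x):
    return x[0] * x[1]

def g(x):
    return np.array([x[0] + x[1] ** 2 - 2.0])

def num_grad(func, x, eps=1e-6):
    """Central-difference derivatives; rows index outputs, columns index variables."""
    cols = []
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = eps
        cols.append((np.asarray(func(x + e)) - np.asarray(func(x - e))) / (2 * eps))
    return np.array(cols).T

def constrained_gradient(x, m=1):
    Gf = num_grad(lambda z: np.array([f(z)]), x)    # 1 x n gradient of f
    Gg = num_grad(g, x)                             # m x n Jacobian of the g_j
    J, C = Gg[:, :m], Gg[:, m:]                     # state / decision split, eq. (16)
    grad_w, grad_y = Gf[:, :m], Gf[:, m:]
    return grad_y - grad_w @ np.linalg.solve(J, C)  # eq. (23)

x = np.array([2.0 - 2.0 / 3.0, np.sqrt(2.0 / 3.0)])  # feasible point: g(x) = 0
print("g(x) =", g(x), " constrained gradient =", constrained_gradient(x))
# the constrained gradient is ~0 here, so x is a constrained stationary point
```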
CONCLUSIONS

There are two principal reasons for interest in nonlinear duality at this point: (1) for any feasible solutions to a pair of primal and dual nonlinear programming problems, the dual objective function value provides a limit on the value of the primal objective function (as with linear programs), and (2) for a pair of optimal solutions, the values of the dual variables may have the same kind of shadow-price interpretation that we associate with the linear programming case, giving a possible marginal valuation to resources that are used up in the optimal solution and a value of zero to those resources that are in excess supply at an optimal solution.


REFERENCES
1. Andreani, R., and Martínez, J. M., On the Solution of Mathematical Programming Problems with Equilibrium Constraints, Mathematical Methods of Operations Research, Vol. 54, pp. 345-358, 2001.
2. Bertsekas, D. P., Nonlinear Programming, 2nd Edition, Athena Scientific, Belmont, Massachusetts, 1999.
3. Bigi, G., and Pappalardo, M., Regularity Conditions in Vector Optimization, Journal of Optimization Theory and Applications, Vol. 102, pp. 83-96, 1999.
4. Boggs, P. T., and Tolle, J. W., A Strategy for Global Convergence in a Sequential Quadratic Programming Algorithm, SIAM Journal on Numerical Analysis, Vol. 26, pp. 600-623, 1989.
5. Castellani, M., Separation of Sets and Optimality Conditions, Rendiconti del Circolo Matematico di Palermo, Series II, Vol. 48, pp. 27-38, 1997.
6. Dennis, J. E., El-Alem, M., and Maciel, M. C., A Global Convergence Theory for General Trust-Region-Based Algorithms for Equality Constrained Optimization, SIAM Journal on Optimization, Vol. 5, pp. 177-207, 1995.
7. Dien, P. H., Mastroeni, G., Pappalardo, M., and Quang, P. H., Regularity Conditions for Constrained Extremum Problems in Image Space: The Nonlinear Case, Journal of Optimization Theory and Applications, Vol. 80, pp. 19-37, 1994.
8. Facchinei, F., Jiang, H., and Qi, L., A Smoothing Method for Mathematical Programs with Equilibrium Constraints, Mathematical Programming, Vol. 85, pp. 107-134, 1999.
9. Favati, P., and Pappalardo, M., On the Reciprocal Vector Optimization Problem, Journal of Optimization Theory and Applications, Vol. 47, pp. 181-193, 1985.
10. Fernandes, L., Friedlander, A., Guedes, M., and Júdice, J. J., Solution of Generalized Linear Complementarity Problems Using Smooth Optimization and Its Application to Bilinear Programming and LCP, Applied Mathematics and Optimization, Vol. 43, pp. 1-19, 2001.
11. Fiacco, A. V., and McCormick, G. P., Nonlinear Programming, John Wiley, New York, NY, 1968.
12. Giannessi, F., General Optimality Conditions in a Separation Scheme, Algorithms for Continuous Optimization, Edited by E. Spedicato, Kluwer, Dordrecht, Netherlands, pp. 1-23, 1994.
13. Giannessi, F., and Rapcsak, T., Images, Separation of Sets, and Extremum Problems, Recent Trends in Optimization Theory and Applications, Edited by R. P. Agarwal, Series in Applicable Analysis, World Scientific, Singapore, pp. 79-106, 1996.
14. Giannessi, F., On Connections among Separation, Penalization, and Regularization for Variational Inequalities with Point-to-Set Operators, Rendiconti del Circolo Matematico di Palermo, Series II, Vol. 48, pp. 137-145, 1997.
15. Giannessi, F., Mastroeni, G., and Pellegrini, L., On the Theory of Vector Optimization and Variational Inequalities: Image Space Analysis and Separation, Vector Variational Inequalities and Vector Equilibria, Mathematical Theories, Edited by F. Giannessi, Kluwer, Dordrecht, Netherlands, pp. 153-216, 2000.
16. Giannessi, F., and Pellegrini, L., Image Space Analysis for Vector Optimization and Variational Inequalities: Scalarization, Advances in Combinatorial and Global Optimization, Edited by A. Migdalas, P. Pardalos, and R. Burkard, World Scientific, River Edge, pp. 982-1121, 1999.
17. John, F., Extremum Problems with Inequalities as Subsidiary Conditions, Studies and Essays, Courant Anniversary Volume, Wiley-Interscience, New York, pp. 187-204, 1948.
18. Kuhn, H. W., and Tucker, A. W., Non-Linear Programming, Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, pp. 481-492, 1951.
19. Mangasarian, O. L., and Fromovitz, S., The Fritz John Necessary Optimality Conditions in the Presence of Equality and Inequality Constraints, Journal of Mathematical Analysis and Applications, Vol. 17, pp. 37-47, 1967.
20. Martínez, J. M., Two-Phase Model Algorithm with Global Convergence for Nonlinear Programming, Journal of Optimization Theory and Applications, Vol. 96, pp. 397-436, 1998.
21. Martínez, J. M., and Pilotta, E. A., Inexact Restoration Algorithms for Constrained Optimization, Journal of Optimization Theory and Applications, Vol. 104, pp. 135-163, 2000.
22. Mond, B., and Weir, T., Generalized Concavity and Duality, in Generalized Concavity in Optimization and Economics, Edited by S. Schaible and W. T. Ziemba, Academic Press, pp. 263-279, 1981.
23. Pappalardo, M., On Stationarity in Vector Optimization Problems, Rendiconti del Circolo Matematico di Palermo, Series II, Supplemento 48, pp. 195-201, 1997.
24. Pappalardo, M., Vector Optimization, Encyclopedia of Optimization, Edited by P. Pardalos and C. Floudas, Kluwer, Dordrecht, Netherlands (to appear).
25. Psiaki, M. L., and Park, S., Augmented Lagrangian Nonlinear Programming Algorithm that Uses SQP and Trust Region Techniques, Journal of Optimization Theory and Applications, Vol. 86, pp. 311-326, 1995.
26. Rockafellar, R. T., Lagrange Multipliers and Optimality, SIAM Review, Vol. 35, pp. 183-238, 1993.
27. Spellucci, P., An SQP Method for General Nonlinear Programs Using Only Equality Constrained Subproblems, Mathematical Programming, Vol. 82, pp. 413-447, 1998.
28. Vicente, L. N., and Calamai, P., Bilevel and Multilevel Programming: A Bibliography Review, Journal of Global Optimization, Vol. 5, pp. 291-306, 1994.
29. Zoutendijk, G., Methods of Feasible Directions, American Elsevier, New York, 1960.
30. Zoutendijk, G., Nonlinear Programming: A Numerical Survey, SIAM Journal on Control, Vol. 4, No. 1, Feb 1966.
31. Zoutendijk, G., Nonlinear Programming: Computational Methods, in Integer and Nonlinear Programming, Edited by J. Abadie, North-Holland, Amsterdam, 1970.
