
Linear programming (LP, or linear optimization) is a mathematical method for determining a way to achieve the best outcome (such as maximum profit or lowest cost) in a given mathematical model for some list of requirements represented as linear relationships. Linear programming is a specific case of mathematical programming (mathematical optimization).

More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polyhedron, which is a set defined as the intersection of finitely many half spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine function defined on this polyhedron. A linear programming algorithm finds a point in the polyhedron where this function has the smallest (or largest) value, if such a point exists. Linear programs are problems that can be expressed in canonical form:

\[ \text{maximize } \mathbf{c}^T \mathbf{x} \quad \text{subject to } A\mathbf{x} \le \mathbf{b} \text{ and } \mathbf{x} \ge \mathbf{0} \]

where x represents the vector of variables (to be determined), c and b are vectors of (known) coefficients, A is a (known) matrix of coefficients, and (·)^T is the matrix transpose. The expression to be maximized or minimized is called the objective function (c^T x in this case). The inequalities Ax ≤ b are the constraints which specify a convex polytope over which the objective function is to be optimized. In this context, two vectors are comparable when they have the same dimensions; if every entry in the first is less than or equal to the corresponding entry in the second, then the first vector is less than or equal to the second vector.

Linear programming can be applied to various fields of study. It is used in business and economics, but can also be utilized for some engineering problems. Industries that use linear programming models include transportation, energy, telecommunications, and manufacturing. It has proved useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design.
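As a concrete illustration of the canonical form, the following minimal sketch solves a small instance with SciPy's linprog; all numbers are invented for illustration, and since linprog minimizes by default, the objective vector is negated to maximize.

```python
import numpy as np
from scipy.optimize import linprog

# A made-up canonical-form instance: maximize c^T x subject to Ax <= b, x >= 0.
c = np.array([3.0, 2.0])          # objective coefficients (illustrative values)
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])        # constraint matrix
b = np.array([4.0, 5.0])          # constraint right-hand sides

# linprog minimizes, so negate c to maximize; bounds enforce x >= 0.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
print(res.x, -res.fun)            # optimizer and maximum objective value
```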
Contents

1 History
2 Uses
3 Standard form
 3.1 Example
4 Augmented form (slack form)
 4.1 Example
5 Duality
 5.1 Example
 5.2 Another example
6 Covering-packing dualities
 6.1 Examples
7 Complementary slackness
8 Theory
 8.1 Existence of optimal solutions
 8.2 Optimal vertices (and rays) of polyhedra
9 Algorithms
 9.1 Basis exchange algorithms
  9.1.1 Simplex algorithm of Dantzig
  9.1.2 Criss-cross algorithm
  9.1.3 Conic sampling algorithm of Serang
 9.2 Interior point
  9.2.1 Ellipsoid algorithm, following Khachiyan
  9.2.2 Projective algorithm of Karmarkar
  9.2.3 Path-following algorithms
 9.3 Comparison of interior-point methods versus simplex algorithms
10 Open problems and recent work
11 Integer unknowns
12 Integral linear programs
13 Solvers and scripting (programming) languages
14 See also
15 Notes
16 References
17 Further reading
18 External links

History

[Image: Leonid Kantorovich]

The problem of solving a system of linear inequalities dates back at least as far as Fourier, after whom the method of Fourier-Motzkin elimination is named. Linear programming itself was first developed by Leonid Kantorovich in 1939.[1] Kantorovich formulated the earliest linear programming problems in 1939 for use during World War II, to plan expenditures and returns in order to reduce costs to the army and increase losses to the enemy. The method was kept secret until 1947, when George B. Dantzig published the simplex method and John von Neumann developed the theory of duality as a linear optimization solution and applied it in the field of game theory. Postwar, many industries found its use in their daily planning.

The linear-programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough in the field came in 1984, when Narendra Karmarkar introduced a new interior-point method for solving linear-programming problems.

Dantzig's original example was to find the best assignment of 70 people to 70 jobs. The computing power required to test all the permutations to select the best assignment is vast; the number of possible configurations exceeds the number of particles in the observable universe. However, it takes only a moment to find the optimum solution by posing the problem as a linear program and applying the simplex algorithm. The theory behind linear programming drastically reduces the number of possible optimal solutions that must be checked.

Uses

Linear programming is a considerable field of optimization for several reasons. Many practical problems in operations research can be expressed as linear programming problems. Certain special cases of linear programming, such as network flow problems and multicommodity flow problems, are considered important enough to have generated much research on specialized algorithms for their solution. A number of algorithms for other types of optimization problems work by solving LP problems as sub-problems. Historically, ideas from linear programming have inspired many of the central concepts of optimization theory, such as duality, decomposition, and the importance of convexity and its generalizations. Likewise, linear programming is heavily used in microeconomics and company management, for planning, production, transportation, technology, and other issues. Although modern management issues are ever-changing, most companies would like to maximize profits or minimize costs with limited resources. Therefore, many issues can be characterized as linear programming problems.

Standard form

Standard form is the usual and most intuitive form of describing a linear programming problem. It consists of the following three parts:

A linear function to be maximized, e.g.
\[ f(x_1, x_2) = c_1 x_1 + c_2 x_2 \]

Problem constraints of the following form, e.g.
\[ a_{11} x_1 + a_{12} x_2 \le b_1, \quad a_{21} x_1 + a_{22} x_2 \le b_2, \quad a_{31} x_1 + a_{32} x_2 \le b_3 \]

Non-negative variables, e.g.
\[ x_1 \ge 0, \quad x_2 \ge 0 \]

The problem is usually expressed in matrix form, and then becomes:

\[ \max \{ \mathbf{c}^T \mathbf{x} \mid A\mathbf{x} \le \mathbf{b}, \ \mathbf{x} \ge \mathbf{0} \} \]

Other forms, such as minimization problems, problems with constraints on alternative forms, as well as problems involving negative variables, can always be rewritten into an equivalent problem in standard form.

Example

Suppose that a farmer has a piece of farm land, say L km², to be planted with either wheat or barley or some combination of the two. The farmer has a limited amount of fertilizer, F kilograms, and insecticide, P kilograms. Every square kilometer of wheat requires F1 kilograms of fertilizer and P1 kilograms of insecticide, while every square kilometer of barley requires F2 kilograms of fertilizer and P2 kilograms of insecticide. Let S1 be the selling price of wheat per square kilometer, and S2 be the price of barley. If we denote the area of land planted with wheat and barley by x1 and x2 respectively, then profit can be maximized by choosing optimal values for x1 and x2. This problem can be expressed with the following linear programming problem in the standard form:

Maximize: \( S_1 x_1 + S_2 x_2 \)  (maximize the revenue; revenue is the "objective function")

Subject to:
\[ 0 \le x_1 + x_2 \le L \]  (limit on total area)
\[ 0 \le F_1 x_1 + F_2 x_2 \le F \]  (limit on fertilizer)
\[ 0 \le P_1 x_1 + P_2 x_2 \le P \]  (limit on insecticide)
\[ x_1 \ge 0, \ x_2 \ge 0 \]  (cannot plant a negative area)
This, in matrix form, becomes:

\[ \text{maximize } \begin{bmatrix} S_1 & S_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \quad \text{subject to } \begin{bmatrix} 1 & 1 \\ F_1 & F_2 \\ P_1 & P_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \le \begin{bmatrix} L \\ F \\ P \end{bmatrix}, \quad \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \ge \begin{bmatrix} 0 \\ 0 \end{bmatrix} \]
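The symbols L, F, P, F1, F2, P1, P2, S1 and S2 are left abstract in the text; the sketch below plugs in made-up values and solves the resulting program with SciPy's linprog (a minimal illustration, not part of the original example).

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 10 km^2 of land, 300 kg fertilizer, 100 kg insecticide.
L_land, F_fert, P_ins = 10.0, 300.0, 100.0
F1, F2 = 40.0, 25.0   # fertilizer per km^2 of wheat / barley (made-up)
P1, P2 = 10.0, 10.0   # insecticide per km^2 of wheat / barley (made-up)
S1, S2 = 7.0, 5.0     # selling price per km^2 of wheat / barley (made-up)

A = np.array([[1.0, 1.0],     # x1 + x2         <= L
              [F1, F2],       # F1*x1 + F2*x2   <= F
              [P1, P2]])      # P1*x1 + P2*x2   <= P
b = np.array([L_land, F_fert, P_ins])

res = linprog(-np.array([S1, S2]), A_ub=A, b_ub=b, method="highs")
print("areas:", res.x, "revenue:", -res.fun)
```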

Augmented form (slack form)

Linear programming problems must be converted into augmented form before being solved by the simplex algorithm. This form introduces non-negative slack variables to replace inequalities with equalities in the constraints. The problem can then be written in the following block matrix form:

Maximize Z:
\[ \begin{bmatrix} 1 & -\mathbf{c}^T & 0 \\ 0 & A & I \end{bmatrix} \begin{bmatrix} Z \\ \mathbf{x} \\ \mathbf{x}_s \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{b} \end{bmatrix}, \quad \mathbf{x}, \mathbf{x}_s \ge 0 \]

where x_s are the newly introduced slack variables, and Z is the variable to be maximized.

Example

The example above is converted into the following augmented form:

Maximize: \( S_1 x_1 + S_2 x_2 \)  (objective function)

Subject to:
\[ x_1 + x_2 + x_3 = L \]  (augmented constraint)
\[ F_1 x_1 + F_2 x_2 + x_4 = F \]  (augmented constraint)
\[ P_1 x_1 + P_2 x_2 + x_5 = P \]  (augmented constraint)
\[ x_1, x_2, x_3, x_4, x_5 \ge 0 \]

where x3, x4, x5 are (non-negative) slack variables, representing in this example the unused area, the amount of unused fertilizer, and the amount of unused insecticide. In matrix form this becomes:

Maximize Z:
\[ \begin{bmatrix} 1 & -S_1 & -S_2 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 \\ 0 & F_1 & F_2 & 0 & 1 & 0 \\ 0 & P_1 & P_2 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} Z \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix} = \begin{bmatrix} 0 \\ L \\ F \\ P \end{bmatrix}, \quad x_1, \ldots, x_5 \ge 0 \]
Duality

Main article: Duality (optimization)

Every linear programming problem, referred to as a primal problem, can be converted into a dual problem, which provides an upper bound to the optimal value of the primal problem. In matrix form, we can express the primal problem as:

Maximize c^T x subject to Ax ≤ b, x ≥ 0; with the corresponding symmetric dual problem, Minimize b^T y subject to A^T y ≥ c, y ≥ 0.

An alternative primal formulation is:

Maximize c^T x subject to Ax ≤ b; with the corresponding asymmetric dual problem, Minimize b^T y subject to A^T y = c, y ≥ 0.

There are two ideas fundamental to duality theory. One is the fact that (for the symmetric dual) the dual of a dual linear program is the original primal linear program. Additionally, every feasible solution for a linear program gives a bound on the optimal value of the objective function of its dual. The weak duality theorem states that the objective function value of the dual at any feasible solution is always greater than or equal to the objective function value of the primal at any feasible solution. The strong duality theorem states that if the primal has an optimal solution, x*, then the dual also has an optimal solution, y*, such that c^T x* = b^T y*.

A linear program can also be unbounded or infeasible. Duality theory tells us that if the primal is unbounded, then the dual is infeasible by the weak duality theorem. Likewise, if the dual is unbounded, then the primal must be infeasible. However, it is possible for both the dual and the primal to be infeasible (see also Farkas' lemma).

Example

Revisit the above example of the farmer who may grow wheat and barley with the set provision of some L land, F fertilizer and P insecticide. Assume now that unit prices for each of these means of production (inputs) are set by a planning board. The planning board's job is to minimize the total cost of procuring the set amounts of inputs while providing the farmer with a floor on the unit price of each of his crops (outputs), S1 for wheat and S2 for barley. This corresponds to the following linear programming problem:

Minimize: \( L y_L + F y_F + P y_P \)  (minimize the total cost of the means of production as the "objective function")

Subject to:
\[ y_L + F_1 y_F + P_1 y_P \ge S_1 \]  (the farmer must receive no less than S1 for his wheat)
\[ y_L + F_2 y_F + P_2 y_P \ge S_2 \]  (the farmer must receive no less than S2 for his barley)
\[ y_L \ge 0, \ y_F \ge 0, \ y_P \ge 0 \]  (prices cannot be negative)

This, in matrix form, becomes:

Minimize: \( \begin{bmatrix} L & F & P \end{bmatrix} \begin{bmatrix} y_L \\ y_F \\ y_P \end{bmatrix} \)

Subject to: \( \begin{bmatrix} 1 & F_1 & P_1 \\ 1 & F_2 & P_2 \end{bmatrix} \begin{bmatrix} y_L \\ y_F \\ y_P \end{bmatrix} \ge \begin{bmatrix} S_1 \\ S_2 \end{bmatrix}, \quad \begin{bmatrix} y_L \\ y_F \\ y_P \end{bmatrix} \ge 0 \)
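To see weak and strong duality numerically, the sketch below solves both the primal and the dual of the farmer problem with the same made-up data as before; the dual's ≥ constraints are negated because linprog only accepts ≤ constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Same made-up farmer data as in the earlier snippets.
A = np.array([[1.0, 1.0], [40.0, 25.0], [10.0, 10.0]])
b = np.array([10.0, 300.0, 100.0])   # L, F, P
c = np.array([7.0, 5.0])             # S1, S2

# Primal: maximize c^T x subject to Ax <= b, x >= 0.
primal = linprog(-c, A_ub=A, b_ub=b, method="highs")

# Dual: minimize b^T y subject to A^T y >= c, y >= 0
# (expressed for linprog as -A^T y <= -c).
dual = linprog(b, A_ub=-A.T, b_ub=-c, method="highs")

# Strong duality: the optimal objective values coincide.
print(-primal.fun, dual.fun)   # both should print the same number
```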

The primal problem deals with physical quantities. With all inputs available in limited quantities, and assuming the unit prices of all outputs are known, what quantities of outputs should be produced so as to maximize total revenue? The dual problem deals with economic values. With floor guarantees on all output unit prices, and assuming the available quantity of all inputs is known, what input unit pricing scheme should be set so as to minimize total expenditure?

To each variable in the primal space corresponds an inequality to satisfy in the dual space, both indexed by output type. To each inequality to satisfy in the primal space corresponds a variable in the dual space, both indexed by input type. The coefficients that bound the inequalities in the primal space are used to compute the objective in the dual space (input quantities in this example). The coefficients used to compute the objective in the primal space bound the inequalities in the dual space (output unit prices in this example).

Both the primal and the dual problems make use of the same matrix. In the primal space, this matrix expresses the consumption of physical quantities of inputs necessary to produce set quantities of outputs. In the dual space, it expresses the creation of the economic values associated with the outputs from set input unit prices. Since each inequality can be replaced by an equality and a slack variable, each primal variable corresponds to a dual slack variable, and each dual variable corresponds to a primal slack variable. This relation allows us to speak about complementary slackness.

Another example
Sometimes, one may find it more intuitive to obtain the dual program without looking at the program matrix. Consider the following linear program:

\[ \text{minimize } \sum_{i=1}^{m} c_i x_i \quad \text{subject to } \sum_{i=1}^{m} a_{i,j} x_i \ge b_j \ (1 \le j \le n), \quad f_i x_i \ge g_i \ (1 \le i \le m), \quad x_i \ge 0 \]

We have m + n conditions and all variables are non-negative. We shall define m + n dual variables: y_j (one per constraint of the first kind) and s_i (one per constraint of the second kind). Since this is a minimization problem, we would like to obtain a dual program that is a lower bound of the primal. In other words, we would like the sum of all right-hand sides of the constraints to be maximal under the condition that, for each primal variable, the sum of its coefficients does not exceed its coefficient in the objective function. For example, x_1 appears in n + 1 constraints. If we sum its constraints' coefficients we get a_{1,1} y_1 + a_{1,2} y_2 + ... + a_{1,n} y_n + f_1 s_1. This sum must be at most c_1. As a result, we get:

\[ \text{maximize } \sum_{j=1}^{n} b_j y_j + \sum_{i=1}^{m} g_i s_i \quad \text{subject to } \sum_{j=1}^{n} a_{i,j} y_j + f_i s_i \le c_i \ (1 \le i \le m), \quad y_j \ge 0, \ s_i \ge 0 \]

Note that we assume in our calculation steps that the program is in standard form. However, any linear program may be transformed to standard form, and it is therefore not a limiting factor.
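A small numeric check of this construction, under the indexing used above: the primal constraint matrix stacks the a_{i,j} rows and the diagonal f_i rows, and the dual is read off by transposing it. The data here are randomly generated and purely illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 3, 4
A = rng.uniform(1, 2, (m, n))   # a_{i,j}, made-up positive data
b = rng.uniform(1, 2, n)
c = rng.uniform(5, 9, m)
f = rng.uniform(1, 2, m)
g = rng.uniform(0, 1, m)

# Primal: min c^T x s.t. A^T x >= b (n rows), f_i x_i >= g_i (m rows), x >= 0.
M = np.vstack([A.T, np.diag(f)])
r = np.concatenate([b, g])
primal = linprog(c, A_ub=-M, b_ub=-r, method="highs")

# Dual read off as in the text: max r^T u s.t. M^T u <= c, u >= 0,
# where u stacks the y_j and the s_i.
dual = linprog(-r, A_ub=M.T, b_ub=c, method="highs")
print(primal.fun, -dual.fun)    # equal, by strong duality
```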

Covering-packing dualities
Covering problems        Packing problems
Minimum set cover        Maximum set packing
Minimum vertex cover     Maximum matching
Minimum edge cover       Maximum independent set

A covering LP is a linear program of the form:

Minimize: b^T y,
Subject to: A^T y ≥ c, y ≥ 0,

such that the matrix A and the vectors b and c are non-negative.

The dual of a covering LP is a packing LP, a linear program of the form:

Maximize: c^T x,
Subject to: Ax ≤ b, x ≥ 0,

such that the matrix A and the vectors b and c are non-negative.

Examples

Covering and packing LPs commonly arise as a linear programming relaxation of a combinatorial problem and are important in the study of approximation algorithms.[2] For example, the LP relaxations of the set packing problem, the independent set problem, and the matching problem are packing LPs. The LP relaxations of the set cover problem, the vertex cover problem, and the dominating set problem are also covering LPs.

Finding a fractional coloring of a graph is another example of a covering LP. In this case, there is one constraint for each vertex of the graph and one variable for each independent set of the graph.
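As a small worked instance, the sketch below builds the LP relaxation of vertex cover on a 5-cycle, where the fractional optimum (2.5, with every variable at 1/2) is strictly better than any integral cover (value 3); the graph is chosen only for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# LP relaxation of vertex cover on a 5-cycle: one constraint per edge,
# x_u + x_v >= 1, minimizing the total weight sum(x).
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]
A = np.zeros((len(edges), n))
for row, (u, v) in enumerate(edges):
    A[row, u] = A[row, v] = 1.0

res = linprog(np.ones(n), A_ub=-A, b_ub=-np.ones(len(edges)),
              bounds=[(0, 1)] * n, method="highs")
print(res.x, res.fun)   # all-1/2 vector, value 2.5 < integral optimum 3
```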

Complementary slackness

It is possible to obtain an optimal solution to the dual when only an optimal solution to the primal is known, using the complementary slackness theorem. The theorem states:

Suppose that x = (x1, x2, ..., xn) is primal feasible and that y = (y1, y2, ..., ym) is dual feasible. Let (w1, w2, ..., wm) denote the corresponding primal slack variables, and let (z1, z2, ..., zn) denote the corresponding dual slack variables. Then x and y are optimal for their respective problems if and only if x_j z_j = 0 for j = 1, 2, ..., n, and w_i y_i = 0 for i = 1, 2, ..., m.

So if the i-th slack variable of the primal is not zero, then the i-th variable of the dual is equal to zero. Likewise, if the j-th slack variable of the dual is not zero, then the j-th variable of the primal is equal to zero.

This necessary condition for optimality conveys a fairly simple economic principle. In standard form (when maximizing), if there is slack in a constrained primal resource (i.e., there are "leftovers"), then additional quantities of that resource must have no value. Likewise, if there is slack in the dual (shadow) price non-negativity constraint requirement, i.e., the price is not zero, then there must be scarce supplies (no "leftovers").
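The theorem can be checked numerically: reusing the made-up farmer data from the earlier snippets, the sketch below solves the primal and the dual and verifies that every product x_j z_j and w_i y_i vanishes (up to floating-point error).

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0], [40.0, 25.0], [10.0, 10.0]])  # made-up data again
b = np.array([10.0, 300.0, 100.0])
c = np.array([7.0, 5.0])

primal = linprog(-c, A_ub=A, b_ub=b, method="highs")
dual = linprog(b, A_ub=-A.T, b_ub=-c, method="highs")

w = b - A @ primal.x        # primal slacks w_i
z = A.T @ dual.x - c        # dual slacks z_j
print(primal.x * z)         # x_j * z_j == 0 for every j
print(dual.x * w)           # y_i * w_i == 0 for every i
```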

Theory

Existence of optimal solutions


Geometrically, the linear constraints define the feasible region, which is a convex polyhedron. A linear function is a convex function, which implies that every local minimum is a global minimum; similarly, a linear function is a concave function, which implies that every local maximum is a global maximum.

An optimal solution need not exist, for two reasons. First, if two constraints are inconsistent, then no feasible solution exists: for instance, the constraints x ≥ 2 and x ≤ 1 cannot be satisfied jointly; in this case, we say that the LP is infeasible. Second, when the polytope is unbounded in the direction of the gradient of the objective function (where the gradient of the objective function is the vector of the coefficients of the objective function), then no optimal value is attained.

Optimal vertices (and rays) of polyhedra

Otherwise, if a feasible solution exists and if the (linear) objective function is bounded, then the optimum value is always attained on the boundary of an optimal level set, by the maximum principle for convex functions (alternatively, by the minimum principle for concave functions): recall that linear functions are both convex and concave. However, some problems have distinct optimal solutions: for example, the problem of finding a feasible solution to a system of linear inequalities is a linear programming problem in which the objective function is the zero function (that is, the constant function taking the value zero everywhere). For this feasibility problem with the zero function for its objective function, if there are two distinct solutions, then every convex combination of the solutions is a solution.

The vertices of the polytope are also called basic feasible solutions. The reason for this choice of name is as follows. Let d denote the number of variables. Then the fundamental theorem of linear inequalities implies (for feasible problems) that for every vertex x* of the LP feasible region, there exists a set of d (or fewer) inequality constraints from the LP such that, when we treat those d constraints as equalities, the unique solution is x*. Thereby we can study these vertices by means of looking at certain subsets of the set of all constraints (a discrete set), rather than the continuum of LP solutions. This principle underlies the simplex algorithm for solving linear programs.
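This suggests a brute-force illustration: enumerate every choice of d constraints, solve them as equalities, keep the feasible solutions, and take the best. The sketch below does this for the two-variable farmer data used earlier; the enumeration is exponential in general, so it is for toy instances only.

```python
import itertools
import numpy as np

# Brute-force the vertices (basic feasible solutions) of {x : Ax <= b, x >= 0}
# by treating every choice of d constraints as equalities, as in the text.
A = np.array([[1.0, 1.0], [40.0, 25.0], [10.0, 10.0]])  # made-up data
b = np.array([10.0, 300.0, 100.0])
c = np.array([7.0, 5.0])

# Include the non-negativity constraints -x_j <= 0 in the constraint list.
G = np.vstack([A, -np.eye(2)])
h = np.concatenate([b, np.zeros(2)])

best, best_val = None, -np.inf
for rows in itertools.combinations(range(len(G)), 2):   # d = 2 variables
    sub, rhs = G[list(rows)], h[list(rows)]
    if abs(np.linalg.det(sub)) < 1e-12:
        continue                       # constraints not independent
    x = np.linalg.solve(sub, rhs)
    if np.all(G @ x <= h + 1e-9):      # feasible vertex
        val = c @ x
        if val > best_val:
            best, best_val = x, val
print(best, best_val)                  # the optimum is attained at a vertex
```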

Algorithms

[Figure: A series of linear constraints on two variables produces a region of possible values for those variables. Solvable problems will have a feasible region in the shape of a simple polygon.]

Basis exchange algorithms

Simplex algorithm of Dantzig


The simplex algorithm, developed by George Dantzig in 1947, solves LP problems by constructing a feasible solution at a vertex of the polytope and then walking along a path on the edges of the polytope to vertices with non-decreasing values of the objective function until an optimum is reached. In many practical problems, "stalling" occurs: many pivots are made with no increase in the objective function.[3][4] In rare practical problems, the usual versions of the simplex algorithm may actually "cycle".[4] To avoid cycles, researchers developed new pivoting rules.[5][6][3][4][7][8]

In practice, the simplex algorithm is quite efficient and can be guaranteed to find the global optimum if certain precautions against cycling are taken. The simplex algorithm has been proved to solve "random" problems efficiently, i.e. in a cubic number of steps,[9] which is similar to its behavior on practical problems.[3][10]

However, the simplex algorithm has poor worst-case behavior: Klee and Minty constructed a family of linear programming problems for which the simplex method takes a number of steps exponential in the problem size.[3][6][7] In fact, for some time it was not known whether the linear programming problem was solvable in polynomial time, i.e. of complexity class P.
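For illustration, here is a compact tableau implementation of the method for problems already in the form max c^T x, Ax ≤ b, x ≥ 0 with b ≥ 0, using Dantzig's most-negative-reduced-cost pivoting rule. It omits the anti-cycling safeguards discussed above, so it is a teaching sketch rather than a robust solver.

```python
import numpy as np

def simplex(c, A, b):
    """Minimal tableau simplex for max c^T x s.t. Ax <= b, x >= 0, b >= 0."""
    m, n = A.shape
    # Tableau: [A | I | b] with the objective row [-c | 0 | 0] at the bottom.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
    T[-1, :n] = -c
    basis = list(range(n, n + m))          # start from the slack basis
    while True:
        j = int(np.argmin(T[-1, :-1]))     # most negative reduced cost
        if T[-1, j] >= -1e-9:
            break                          # optimal
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-9 else np.inf
                  for i in range(m)]
        i = int(np.argmin(ratios))
        if ratios[i] == np.inf:
            raise ValueError("unbounded")
        T[i] /= T[i, j]                    # pivot on (i, j)
        for k in range(m + 1):
            if k != i:
                T[k] -= T[k, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    for i, j in enumerate(basis):
        x[j] = T[i, -1]
    return x[:n], T[-1, -1]                # optimizer and optimal value

print(simplex(np.array([3.0, 2.0]),
              np.array([[1.0, 1.0], [2.0, 1.0]]),
              np.array([4.0, 5.0])))       # -> (array([1., 3.]), 9.0)
```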

Criss-cross algorithm
Like the simplex algorithm of Dantzig, the criss-cross algorithm is a basis-exchange algorithm that pivots between bases. However, the criss-cross algorithm need not maintain feasibility, but can pivot rather from a feasible basis to an infeasible basis. The criss-cross algorithm does not have polynomial time-complexity for linear programming. Both algorithms visit all 2^D corners of a (perturbed) cube in dimension D, the Klee-Minty cube, in the worst case.[8][11]

Conic sampling algorithm of Serang


Like other basis-exchange algorithms, Serang's conic sampling algorithm moves between vertices; but where the simplex algorithm moves along edges by removing and adding one basis at a time, the conic sampling method exchanges multiple bases at a time, and is not restricted to moving along edges of the polytope.[12] Starting at a current vertex, the conic sampling method chooses a random vector that improves the objective value without violating any adjacent constraints. The algorithm then travels along this vector until a limiting constraint is encountered. From this point, the algorithm projects the objective vector orthogonal to this limiting constraint, and moves along this orthogonal projection until a new constraint is reached. This advancement and projection is repeated until a vertex is reached. Then a new random vector is chosen. This process is repeated until no vector exists that can improve the objective without violating any local constraints, implying optimality.

Essentially, the conic sampling method can be thought of as a vertex sampling method that randomly samples from the collection of vertices with improved objective value. If the vertices with superior objective value are sampled in a roughly uniform manner, then the expected runtime is logarithmic in the number of vertices (and thus polynomial). Sampling the vertices in this manner can permit large, beneficial jumps through the interior, and yield a substantial runtime improvement over the simplex method, especially when the number of constraints, and thus the number of potential vertices, is large; however, the tightest existing upper bound on the worst-case complexity of the conic sampling method is still exponential.

Interior point
Ellipsoid algorithm, following Khachiyan


This is the first worst-case polynomial-time algorithm for linear programming. To solve a problem which has n variables and can be encoded in L input bits, this algorithm uses O(n^4 L) pseudo-arithmetic operations on numbers with O(L) digits. The long-standing question of whether linear programming admits a polynomial-time algorithm was resolved by Leonid Khachiyan in 1979 with the introduction of the ellipsoid method. The convergence analysis has (real-number) predecessors, notably the iterative methods developed by Naum Z. Shor and the approximation algorithms by Arkadi Nemirovski and D. Yudin.

Projective algorithm of Karmarkar
Khachiyan's algorithm was of landmark importance for establishing the polynomial-time solvability of linear programs. The algorithm was not a computational breakthrough, as the simplex method is more efficient for all but specially constructed families of linear programs. However, Khachiyan's algorithm inspired new lines of research in linear programming. In 1984, N. Karmarkar proposed a projective method for linear programming. Karmarkar's algorithm improved on Khachiyan's worst-case polynomial bound (giving O(n^3.5 L)). Karmarkar claimed that his algorithm was much faster in practical LP than the simplex method, a claim that created great interest in interior-point methods.[13]

Path-following algorithms

In contrast to the simplex algorithm, which finds an optimal solution by traversing the edges between vertices on a polyhedral set, interior-point methods move through the interior of the feasible region. Since Karmarkar's discovery, many interior-point methods have been proposed and analyzed. Early successful implementations were based on affine scaling variants of the method. For both theoretical and practical purposes, barrier function or path-following methods have been the most popular since the 1990s.[14]
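A toy sketch of the path-following idea, assuming a strictly feasible starting point: maximize the log-barrier objective c^T x + mu * sum(log(b - Ax)) by Newton steps, then shrink mu so the iterates follow the central path toward the optimum. This is illustrative only; real implementations use primal-dual formulations and careful step control.

```python
import numpy as np

def barrier_lp(c, A, b, x0, mu=1.0, shrink=0.5, tol=1e-8):
    """Toy path-following sketch: maximize c^T x s.t. Ax <= b by Newton
    steps on the log-barrier, shrinking mu. x0 must be strictly feasible."""
    x = x0.astype(float)
    while mu > tol:
        for _ in range(50):                      # centering Newton steps
            s = b - A @ x                        # slacks, kept positive
            grad = c - mu * A.T @ (1.0 / s)
            hess = -mu * A.T @ np.diag(1.0 / s**2) @ A
            step = np.linalg.solve(hess, -grad)
            t = 1.0
            while np.any(b - A @ (x + t * step) <= 0):
                t *= 0.5                         # backtrack to stay interior
            x += t * step
            if np.linalg.norm(grad) < 1e-10:
                break
        mu *= shrink                             # follow the central path
    return x

A = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 5.0, 0.0, 0.0])               # includes x >= 0
print(barrier_lp(np.array([3.0, 2.0]), A, b, np.array([0.5, 0.5])))
```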

Comparison of interior-point methods versus simplex algorithms

The current opinion is that the efficiency of good implementations of simplex-based methods and interior-point methods is similar for routine applications of linear programming.[14] However, for specific types of LP problems, it may be that one type of solver is better than another (sometimes much better). LP solvers are in widespread use for optimization of various problems in industry, such as optimization of flow in transportation networks.[15]

Open problems and recent work

List of unsolved problems in computer science: Does linear programming admit a strongly polynomial-time algorithm?

There are several open problems in the theory of linear programming, the solution of which would represent fundamental breakthroughs in mathematics and potentially major advances in our ability to solve large-scale linear programs.

- Does LP admit a strongly polynomial-time algorithm?
- Does LP admit a strongly polynomial algorithm to find a strictly complementary solution?
- Does LP admit a polynomial algorithm in the real number (unit cost) model of computation?

This closely related set of problems has been cited by Stephen Smale as among the 18 greatest unsolved problems of the 21st century. In Smale's words, the third version of the problem "is the main unsolved problem of linear programming theory." While algorithms exist to solve linear programming in weakly polynomial time, such as the ellipsoid methods and interior-point techniques, no algorithms have yet been found that allow strongly polynomial-time performance in the number of constraints and the number of variables. The development of such algorithms would be of great theoretical interest, and perhaps allow practical gains in solving large LPs as well.

Although the Hirsch conjecture was recently disproved for higher dimensions, it still leaves the following questions open.

- Are there pivot rules which lead to polynomial-time simplex variants?
- Do all polytopal graphs have polynomially bounded diameter?

These questions relate to the performance analysis and development of simplex-like methods. The immense efficiency of the simplex algorithm in practice, despite its exponential-time theoretical performance, hints that there may be variations of simplex that run in polynomial or even strongly polynomial time. It would be of great practical and theoretical significance to know whether any such variants exist, particularly as an approach to deciding if LP can be solved in strongly polynomial time.

The simplex algorithm and its variants fall in the family of edge-following algorithms, so named because they solve linear programming problems by moving from vertex to vertex along edges of a polytope. This means that their theoretical performance is limited by the maximum number of edges between any two vertices on the LP polytope. As a result, we are interested in knowing the maximum graph-theoretical diameter of polytopal graphs. It has been proved that all polytopes have subexponential diameter. The recent disproof of the Hirsch conjecture is the first step to prove whether any polytope has superpolynomial diameter. If any such polytopes exist, then no edge-following variant can run in polynomial time. Questions about polytope diameter are of independent mathematical interest.

Simplex pivot methods preserve primal (or dual) feasibility. On the other hand, criss-cross pivot methods do not preserve (primal or dual) feasibility; they may visit primal feasible, dual feasible or primal-and-dual infeasible bases in any order. Pivot methods of this type have been studied since the 1970s. Essentially, these methods attempt to find the shortest pivot path on the arrangement polytope under the linear programming problem. In contrast to polytopal graphs, graphs of arrangement polytopes are known to have small diameter, allowing the possibility of a strongly polynomial-time criss-cross pivot algorithm without resolving questions about the diameter of general polytopes.[8]

Integer unknowns


If all of the unknown variables are required to be integers, then the problem is called an integer programming (IP) or integer linear programming (ILP) problem. In contrast to linear programming, which can be solved efficiently in the worst case, integer programming problems are in many practical situations (those with bounded variables) NP-hard. 0-1 integer programming or binary integer programming (BIP) is the special case of integer programming where variables are required to be 0 or 1 (rather than arbitrary integers). This problem is also classified as NP-hard, and in fact the decision version was one of Karp's 21 NP-complete problems.

If only some of the unknown variables are required to be integers, then the problem is called a mixed integer programming (MIP) problem. These are generally also NP-hard because they are even more general than ILP problems.

There are, however, some important subclasses of IP and MIP problems that are efficiently solvable, most notably problems where the constraint matrix is totally unimodular and the right-hand sides of the constraints are integers, or, more generally, where the system has the total dual integrality (TDI) property.

Advanced algorithms for solving integer linear programs include:

- cutting-plane method
- branch and bound
- branch and cut
- branch and price
- if the problem has some extra structure, it may be possible to apply delayed column generation.

Such integer-programming algorithms are discussed by Padberg and in Beasley.
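For a concrete instance, the sketch below solves a tiny 0-1 knapsack, a binary integer program, with SciPy's milp interface (which runs a branch-and-bound based solver under the hood); the values and weights are made up.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# A tiny 0-1 knapsack as a binary integer program: maximize value subject
# to a weight limit, with every variable forced to be 0 or 1.
values = np.array([10.0, 13.0, 7.0, 5.0])
weights = np.array([5.0, 7.0, 4.0, 3.0])

res = milp(-values,                                   # milp minimizes
           constraints=LinearConstraint(weights[np.newaxis, :], ub=10.0),
           integrality=np.ones(4),                    # 1 = integer variable
           bounds=Bounds(0, 1))                       # binary via [0, 1]
print(res.x, -res.fun)                                # chosen items and value
```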

Integral linear programs

A linear program in real variables is said to be integral if it has at least one optimal solution which is integral. Likewise, a polyhedron is said to be integral if for all bounded feasible objective functions c, the linear program has an optimum with integer coordinates. As observed by Edmonds and Giles in 1977, one can equivalently say that the polyhedron is integral if for every bounded feasible integral objective function c, the optimal value of the linear program is an integer.

Integral linear programs are of central importance in the polyhedral aspect of combinatorial optimization since they provide an alternate characterization of a problem. Specifically, for any problem, the convex hull of the solutions is an integral polyhedron; if this polyhedron has a nice/compact description, then we can efficiently find the optimal feasible solution under any linear objective. Conversely, if we can prove that a linear programming relaxation is integral, then it is the desired description of the convex hull of feasible (integral) solutions.

Note that terminology is not consistent throughout the literature, so one should be careful to distinguish the following two concepts:

- in an integer linear program, described in the previous section, variables are forcibly constrained to be integers, and this problem is NP-hard in general;
- in an integral linear program, described in this section, variables are not constrained to be integers, but rather one has proven somehow that the continuous problem always has an integral optimal value (assuming c is integral), and this optimal value may be found efficiently since all polynomial-size linear programs can be solved in polynomial time.

One common way of proving that a polyhedron is integral is to show that it is totally unimodular; a brute-force check for small matrices is sketched below. There are other general methods including the integer decomposition property and total dual integrality. Other specific well-known integral LPs include the matching polytope, lattice polyhedra, submodular flow polyhedra, and the intersection of two generalized polymatroids/g-polymatroids (see e.g. Schrijver 2003).

A bounded integral polyhedron is sometimes called a convex lattice polytope, particularly in two dimensions.
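The brute-force total unimodularity check mentioned above: every square submatrix must have determinant -1, 0 or +1. The bipartite incidence matrix used here is a classic TU example, and the exponential enumeration is for illustration only.

```python
import itertools
import numpy as np

def is_totally_unimodular(A):
    """Brute-force TU test: every square submatrix must have determinant
    in {-1, 0, +1}. Exponential, so for small illustrative matrices only."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True

# Incidence matrix of a bipartite graph: a classic TU example.
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]])
print(is_totally_unimodular(A))   # True
```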

Solvers and scripting (programming) languages


Free open-source permissive licenses:

Name          License   Brief info
JOptimizer    ASL       Java convex optimization library, open source
OpenOpt       BSD       universal cross-platform numerical optimization framework; see its LP page and other problems involved
OPTI Toolbox  BSD       MATLAB toolbox for solving linear, nonlinear, continuous and discrete optimization problems; see the OPTI LP Examples page for several examples

Free open-source copyleft (reciprocal) licenses:

Name                         License   Brief info
Cassowary constraint solver  LGPL      an incremental constraint solving toolkit that efficiently solves systems of linear equalities and inequalities
glpk                         GPL       GNU Linear Programming Kit, a free LP/MILP solver; uses the GNU MathProg modelling language
Qoca                         GPL       a library for incrementally solving systems of linear equations with various goal functions
CLP                          CPL       an LP solver from COIN-OR
R Project                    GPL       a programming language and software environment for statistical computing and graphics

MINTO (Mixed Integer Optimizer, an integer programming solver which uses a branch and bound algorithm) has publicly available source code[16] but is not open source.

Proprietary:

Name          Brief info
AIMMS
AMPL          a popular modeling language for large-scale linear, mixed integer and nonlinear optimisation, with a free student version available
APMonitor     API to MATLAB and Python; solve an example Linear Programming (LP) problem through a web interface
CPLEX         popular solver with an API for several programming languages; also has a modelling language and works with AIMMS, AMPL, GAMS, MPL, OpenOpt, OPL Development Studio, and TOMLAB; free for academic use
EXCEL Solver Function
FinMath       a .NET numerical library containing dense and sparse versions of a primal-dual interior-point solver
FortMP
GAMS
Gurobi        solver with parallel algorithms for large-scale linear programs, quadratic programs and mixed-integer programs; free for academic use
IMSL Numerical Libraries  collections of math and statistical algorithms available in C/C++, Fortran, Java and C#/.NET; optimization routines in the IMSL Libraries include unconstrained, linearly and nonlinearly constrained minimizations, and linear programming algorithms
LiPS          free optimization package intended for solving linear, integer and goal programming problems
MATLAB        a general-purpose and matrix-oriented programming language for numerical computing; linear programming in MATLAB requires the Optimization Toolbox in addition to the base MATLAB product; available routines include BINTPROG and LINPROG
Mathcad       a WYSIWYG math editor with functions for solving both linear and nonlinear optimization problems
Mathematica   a general-purpose programming language for mathematics, including symbolic and numerical capabilities
MOSEK         a solver for large-scale optimization with APIs for several languages (C++, Java, .NET, MATLAB and Python)
NAG Numerical Library  a collection of mathematical and statistical routines developed by the Numerical Algorithms Group for multiple programming languages (C, C++, Fortran, Visual Basic, Java and C#) and packages (MATLAB, Excel, R, LabVIEW); the Optimization chapter of the NAG Library includes routines for linear programming problems with both sparse and non-sparse linear constraint matrices, together with routines for the optimization of quadratic, nonlinear, and sums of squares of linear or nonlinear functions with nonlinear, bounded or no constraints; the NAG Library has routines for both local and global optimization, and for continuous or integer problems
NMath Stats   a general-purpose .NET statistical library containing a simplex solver[17]
OptimJ        a Java-based modeling language for optimization with a free version available[18][19]
SAS/OR        a suite of solvers for linear, integer, nonlinear, derivative-free, network, combinatorial and constraint optimization; the algebraic modeling language OPTMODEL; and a variety of vertical solutions aimed at specific problems/markets, all of which are fully integrated with the SAS System
SCIP          a general-purpose constraint integer programming solver with an emphasis on MIP; compatible with the Zimpl modelling language; free for academic use and available in source code
SuanShu       a Java-based math library that supports linear programming and other kinds of numerical optimization[20]
UFFLP         an easy API for mixed, integer and linear programming[21]
VisSim        a visual block diagram language for simulation of dynamical systems
