
APPENDIX B

Introduction to Linear Programming
Many important problems involve linear inequalities rather than linear equations. For
example, a condition on the variables x and y might take the form of an inequality
2x + 3y ≤ 4 rather than that of an equality 2x + 3y = 4. Linear programming is a
method of finding a solution to a system of such inequalities that maximizes a function
of the form p = ax + by, where a and b are constants. The general method of solving
such problems (called the simplex method) involves Gaussian elimination techniques, so
it is natural to include a discussion of it here. However, the proofs of the main theorems
are omitted. The interested reader should consult a text on linear programming. [For
example, S. I. Gass, Linear Programming, 4th ed. (New York: McGraw-Hill, 1975) gives a
thorough treatment. J. G. Kemeny, J. L. Snell, and G. L. Thompson, Introduction to Finite
Mathematics (Englewood Cliffs, N.J.: Prentice Hall, 1974) gives a more elementary treatment and relates linear programming to the theory of games.]

B.1 GRAPHICAL METHODS

[Figure B.1: the line 2x1 + 3x2 = 5, with intercepts (0, 5/3) and (5/2, 0) and the point (1, 1) on it; points below the line satisfy 2x1 + 3x2 < 5 and points above it satisfy 2x1 + 3x2 > 5.]
When only two variables are present, there is a geometric method of solution that,
although it is not useful as a practical tool when more variables are involved, is useful
in illustrating how solutions to these problems arise. Before giving an example, we must
clarify what an inequality of the form
2x1 + 3x2 ≤ 5
means in geometric terms. (We use x1 and x2 in place of x and y to conform to later
notations.) Of course, the graph of the corresponding equation 2x1 + 3x2 = 5 is well
known; it is a line and consists of all points P(x1, x2) in the plane whose coordinates
x1 and x2 satisfy the equation. The lines parallel to this one all have equations
2x1 + 3x2 = c for some value of c, so the points P(x1, x2) whose coordinates satisfy the
inequality 2x1 + 3x2 < 5 are just those points lying on one side of the line 2x1 + 3x2 = 5
(points on the other side are those satisfying 2x1 + 3x2 > 5). The situation is illustrated in
Figure B.1. The points P(x1, x2) satisfying 2x1 + 3x2 ≤ 5 are just the points on or below
the line. In general, the region on or to one side of a straight line is called a half-plane.
We shall loosely speak of the half-plane ax1 + bx2 ≤ c (where a, b, and c are constants).
The line ax1 + bx2 = c will be called the boundary of the half-plane.
When two or more inequalities are given, the set of points that satisfies all of them
is the region common to all the half-planes involved, and such regions are important
in linear programming. Here is an example.
Example 1

Determine the region in the plane given by

x1 ≥ 0
x1 − x2 ≤ 0
x1 + x2 ≤ 4

SOLUTION

The half-plane x1 ≥ 0 consists of all points on or to the right of the X2 axis, the half-plane x1 − x2 ≤ 0 consists of all points on or above the line x2 = x1, and the half-plane x1 + x2 ≤ 4 consists of all points on or below the line x2 = −x1 + 4. The lines in question are plotted in the diagram, and the region common to all these half-planes is just the shaded portion.

[Figure: the lines x1 − x2 = 0 and x1 + x2 = 4 meet at (2, 2); the shaded region common to the three half-planes lies between them, to the right of the X2 axis.]
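Checking membership in a region like this is mechanical, and a short script can confirm the picture. The sketch below is ours (the helper in_region is not part of the text); it simply tests the three inequalities of Example 1 at a few points.

# Python: test whether (x1, x2) lies in the feasible region of Example 1.
def in_region(x1, x2):
    return x1 >= 0 and x1 - x2 <= 0 and x1 + x2 <= 4

print(in_region(1, 2))   # True: all three half-planes contain (1, 2)
print(in_region(2, 2))   # True: (2, 2) lies on two of the boundary lines
print(in_region(3, 1))   # False: x1 - x2 = 2 > 0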
The general linear programming problem (with two variables) can now be stated:
Suppose a region in the plane (called the feasible region) is given as in Example 1
by a set of linear inequalities (called constraints) in two variables x1 and x2 (so the
feasible region consists of all points common to all the corresponding half-planes).
The problem is to find all points in this region at which a linear function of the form
p = ax1 + bx2 is as large as possible. This function p is called the objective function,
and these points are said to maximize p over the feasible region. In applications, p
might be the profit in some commercial venture, or some other quantity that is desired
to be large. The precise nature of p plays no part in the solution, except that it should be
a linear function of the variables x1 and x2 (hence the name linear programming). The
following example illustrates how the method works.
Example 2

Find the point (or points) P(x1, x2) in the region

4x1 + x2 ≤ 16
x1 + x2 ≤ 6
x1 + 3x2 ≤ 15
x1 ≥ 0
x2 ≥ 0

for which the quantity

p = 2x1 + 3x2

is as large as possible.

SOLUTION

The lines 4x1 + x2 = 16, x1 + x2 = 6, and x1 + 3x2 = 15 are plotted in the first diagram (see figure), and the corresponding half-planes lie below these lines. Hence the feasible region in question is the shaded part. The second diagram exhibits this region again (but with a larger scale) and also shows the line 2x1 + 3x2 = p, plotted for various values of p. Because p has the same value at any point on one of these lines, they are sometimes called level lines for p. The aim is to find a point in the shaded region at which p has as large a value as possible. These values increase as the level lines rise, so the vertex (3/2, 9/2), the intersection of the lines x1 + 3x2 = 15 and x1 + x2 = 6, clearly gives the largest value of p. This value is

p = 2(3/2) + 3(9/2) = 16.5

and it is the desired maximal value.

[First figure: the lines 4x1 + x2 = 16, x1 + x2 = 6, and x1 + 3x2 = 15, with intercepts (0, 16), (0, 6), (0, 5), (4, 0), (6, 0), and (15, 0); the feasible region is shaded. Second figure: the feasible region with vertices (0, 5), (3/2, 9/2), (10/3, 8/3), and (4, 0), together with the level lines p = 0, 3, 6, 9, 12, 15.]
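Readers who want to confirm the arithmetic can hand the same data to an off-the-shelf LP solver. The snippet below uses scipy.optimize.linprog, which is our choice of tool rather than anything the text assumes; since linprog minimizes, the objective is negated.

# Python: verify Example 2 with scipy's LP solver (it minimizes, so negate p).
from scipy.optimize import linprog

c = [-2, -3]              # maximize p = 2x1 + 3x2  <=>  minimize -2x1 - 3x2
A_ub = [[4, 1],           # 4x1 +  x2 <= 16
        [1, 1],           #  x1 +  x2 <= 6
        [1, 3]]           #  x1 + 3x2 <= 15
b_ub = [16, 6, 15]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)    # approximately [1.5 4.5] 16.5, the vertex (3/2, 9/2)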


It is quite clear that the method used in Example 2 will work in a variety of similar
situations. However, before we attempt to say anything in general, consider the following
example.

Example 3

Maximize p = 2x1 + x2 over the region

−x1 + x2 ≤ 3
x1 + x2 ≥ 3
x1 + 2x2 ≥ 0

SOLUTION

The region in question is sketched in the diagram, where the main difference from the previous example emerges: The feasible region in this case is unbounded. Three of the level lines (corresponding to p = 6, 9, and 12) are also plotted, and it is clear that there are points in the feasible region at which the objective function p is as large as we please. Consequently p has no maximum over the feasible region in question.

[Figure: the unbounded feasible region, with the boundary lines x1 + x2 = 3 and x1 + 2x2 = 0 and the level lines p = 6, 9, and 12.]

With this example in mind, the reader might guess that, in general, the objective
function has no maximum whenever the feasible region is unbounded. But this is not
the case. Exercise 2 gives a situation wherein the feasible region is unbounded but the
objective function does indeed have a maximum (the reader should try to construct
such an example before working this exercise).
What is true is that if the feasible region is bounded (that is, can be contained in
some circle), then the objective function p has a maximum value. In fact, the level lines
for p form a set of parallel lines, each corresponding to a fixed value of p. If p were to
increase continuously, the corresponding level line would move continuously in a direction perpendicular to itself. As this moving level line crosses the feasible region, it is
clear that a largest value of p can be found so that the corresponding line intersects the
feasible region. Because the edges of the feasible region are line segments, even more
can be said: Either this level line corresponding to the largest value of p will intersect
the feasible region at a vertex or, possibly, the intersection will consist of an entire edge
of the feasible region. Either way, the maximum value of p will be achieved at a vertex
of the feasible region. This is good news. There are at most a finite number of vertices
(there are only finitely many constraints), so only a finite number of feasible points (the
vertices) need be looked at. If p is evaluated at each of them, the vertex yielding the
largest value of p will be the desired feasible point.
Finally, the same argument shows that p achieves a minimum value over a bounded
feasible region, and that minimum is found at a vertex. The following theorem summarizes this discussion.
THEOREM 1

Let p be a linear function of two variables x1 and x2:


p = ax1 + bx2
If a finite set of linear inequalities in x1 and x2 determines a bounded feasible region,
then there is a point in this feasible region (in fact, a vertex point) that maximizes p,
and there is a vertex point that minimizes p.
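For two variables, Theorem 1 translates directly into a brute-force procedure: intersect every pair of boundary lines, discard the intersections that violate some constraint, and evaluate p at the surviving vertices. The sketch below is our own illustration of this idea (the function best_vertex is not from the text); it assumes the region is written as Ax ≤ b, with the sign constraints folded into A, and it is shown on the data of Example 2.

# Python: maximize c.x over {x : A x <= b} by evaluating p at every vertex.
import itertools
import numpy as np

def best_vertex(A, b, c):
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    best = None
    for i, j in itertools.combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:        # parallel boundary lines: no vertex
            continue
        x = np.linalg.solve(M, b[[i, j]])        # intersection of boundaries i and j
        if np.all(A @ x <= b + 1e-9):            # keep only feasible vertices
            value = c @ x
            if best is None or value > best[1]:
                best = (x, value)
    return best

# Example 2 again, with x1 >= 0 and x2 >= 0 rewritten as -x1 <= 0 and -x2 <= 0.
A = [[4, 1], [1, 1], [1, 3], [-1, 0], [0, -1]]
b = [16, 6, 15, 0, 0]
x, p = best_vertex(A, b, [2, 3])
print(x, p)   # the vertex (1.5, 4.5) with p = 16.5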

Example 4
A manufacturer wants to make two types of toys. The large toy requires 4 square feet of
plywood and 50 milliliters (ml) of paint, whereas the smaller toy requires 3 square feet
of plywood and only 20 ml of paint. There are 1800 square feet of plywood and 16 liters
of paint available. If the large toys sell for a profit of $21 each, and each of the small
toys yields an $18 profit, determine the number of toys of each size that the manufacturer must make to maximize the total profit.
SOLUTION

Let x1 and x2 denote the number of large and small toys,
respectively, to be made. These variables satisfy the following constraints:
4x1 + 3x2 ≤ 1800 (plywood)
5x1 + 2x2 ≤ 1600 (paint)
x1 ≥ 0
x2 ≥ 0
The feasible region corresponding to these inequalities is
plotted in the diagram. The four vertices have coordinates


(0, 0), (0, 600), (1200/7, 2600/7), and (320, 0). The total profit p of the enterprise is

p = 21x1 + 18x2

and the values of p at each of the vertices are

p = 0 at (0, 0)
p = 10,800 at (0, 600)
p = 10,285.71 at (1200/7, 2600/7)
p = 6,720 at (320, 0)

[Figure: the feasible region with vertices (0, 0), (0, 600), (1200/7, 2600/7), and (320, 0).]

Hence the manufacturer will maximize profits by producing 600 small toys and no large
toys at all.
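The vertex search sketched after Theorem 1 reproduces this answer; the call below reuses the hypothetical best_vertex helper defined there, so it is a continuation of that sketch rather than standalone code.

# Python (continuing the earlier sketch): Example 4 with the sign constraints folded in.
A = [[4, 3], [5, 2], [-1, 0], [0, -1]]
b = [1800, 1600, 0, 0]
x, p = best_vertex(A, b, [21, 18])
print(x, p)   # the vertex (0, 600) with p = 10800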
Theorem 1 can be extended. First of all, the objective function p may very well have
a maximum over the feasible region even if that region is unbounded. Moreover, the
argument leading to Theorem 1 can be modified to show that, if the objective function
p has a maximum over the feasible region, that maximum will be attained at a vertex.
Second, Theorem 1 can be extended to more than two variables x1, x2, . . . , xn.
A function of the form:
p = c1x1 + c2x2 + . . . + cnxn
is called a linear function of these variables, and a condition of the form
a1x1 + a2x2 + . . . + anxn ≤ b
is called a linear constraint on the variables. The set of n-tuples (x1, x2, . . . , xn) satisfying a finite number of such linear constraints is called the feasible region determined
by these constraints, and the n-tuples themselves are called feasible points. Consider
the equation
a1x1 + a2x2 + . . . + anxn = b
which was obtained from the foregoing constraint by replacing the inequality by equality.
The set of all n-tuples (x1, x2, . . . , xn) satisfying this equation is called a bounding
hyperplane for the feasible region. In the case of two variables these are lines; they are
actual planes when n = 3. By analogy with the two-variable situation, an n-variable
feasible point is called an extreme point (or corner point) of the feasible region if it lies
on n (or more) bounding hyperplanes.


The general linear programming problem is to find a feasible point such that the
objective function p is as large as possible, in which case the point is said to maximize
p. (Similarly, we could seek a feasible point that minimizes p.) The extended theorem is
stated below. The proof (though similar in spirit to that of Theorem 1) is omitted. The
feasible region is said to be bounded if there exists a number M such that |xi| ≤ M
holds for every feasible point (x1, x2, . . . , xn) and each i = 1, 2, . . . , n.
THEOREM 2

Let p be a linear function of the variables x1, x2, . . . , xn:


p = c1x1 + c2x2 + . . . + cnxn
and consider the feasible region determined by a finite number of linear constraints on
these variables.
1. If p has a maximum value in the feasible region, then that maximum occurs at
an extreme point.
2. If p has a minimum value in the feasible region, then that minimum occurs at an
extreme point.
3. If the feasible region is bounded, then p has both a maximum and a minimum.

Example 5

Find the maximum and minimum value of

p = 4x1 − 3x2 + 7x3

subject to the following constraints:

5x1 + 2x2 + 4x3 ≤ 20
x1 ≥ 0
x2 ≥ 0
x3 ≥ 0

SOLUTION

These constraints are sufficiently simple that a picture of the feasible region can easily be drawn. The bounding hyperplanes in this case are ordinary planes. In the diagram, x1 ≥ 0 represents the region in front of the X2X3-plane, x2 ≥ 0 gives the region to the right of the X1X3-plane, and x3 ≥ 0 yields the region above the X1X2-plane. The fourth bounding hyperplane is

5x1 + 2x2 + 4x3 = 20

and the intersections of this plane with the X1, X2, and X3 axes are plotted as P1, P2, and P3, respectively. The constraint

5x1 + 2x2 + 4x3 ≤ 20

determines the region below this plane, so the feasible region is the tetrahedron with vertices P1, P2, P3, and the origin. Hence these are the extreme points. If the objective function p is evaluated at the extreme points, the results are

p = 0 at the origin
p = 16 at P1(4, 0, 0)
p = −30 at P2(0, 10, 0)
p = 35 at P3(0, 0, 5)

Because the feasible region is bounded, p has both a maximum and a minimum; they are p = 35 [at P3(0, 0, 5)] and p = −30 [at P2(0, 10, 0)].

[Figure: the tetrahedral feasible region with vertices at the origin, P1(4, 0, 0), P2(0, 10, 0), and P3(0, 0, 5).]
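Both values can be checked with a solver as well. The snippet below again relies on scipy.optimize.linprog (our assumption about available tools): the minimum is computed directly, and the maximum by negating the objective.

# Python: verify Example 5's maximum and minimum with scipy.
from scipy.optimize import linprog

A_ub = [[5, 2, 4]]                 # 5x1 + 2x2 + 4x3 <= 20
b_ub = [20]
bounds = [(0, None)] * 3           # x1, x2, x3 >= 0
c = [4, -3, 7]                     # p = 4x1 - 3x2 + 7x3

mn = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
mx = linprog([-v for v in c], A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(mn.fun, mn.x)     # -30.0 at (0, 10, 0)
print(-mx.fun, mx.x)    # 35.0 at (0, 0, 5)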


The procedure we just used has two drawbacks. First, it is not always easy to determine whether a maximum exists. But even when this is known (if the feasible set is
bounded, for example), the number of extreme points can be very large and the amount
of computation required to find them can be excessive, even for a computer. This is why
the method is not pursued here.
A much more efficient procedure exists that reduces the number of extreme points
that must be examined. The idea is quite simple. To get some insight into how it works,
consider the general linear programming problem with three variables. Then the
bounding hyperplanes are real planes, and the edges of the feasible region are the lines
of intersection of pairs of bounding planes. Now suppose the objective function p is
evaluated at some extreme point. Choose an edge emanating from this point along
which the function p increases (or decreases if a minimum is desired). If no such edge
exists, it can be shown that the extreme point gives the maximum. Otherwise there are
two possibilities: (1) We encounter another extreme point on that edge at which p is
larger and then repeat the process. (2) There is no other vertex along this edge so p
increases without bound, and no maximum exists. At each stage we either discover
there is no maximum, or we find the maximum, or we are led to another extreme point
at which p is larger. Clearly the same extreme point cannot be encountered twice in this
fashion (p increases), and so, because there are only finitely many extreme points, the
process is effective: It either shows that no maximum exists, or, if there is a maximum, it
leads to an extreme point yielding that maximum. Furthermore, in the types of problems usually found in practice, the method converges quickly.
This description of the algorithm is geometric in nature. However, the whole thing
can be cast in algebraic form. This will be described in the next section.

Exercises B.1
1. In each case, find the maximum and minimum values of p by finding the feasible region and examining p at the vertex points.
a) p = 3x1 + 2x2
b) p = 4x1 + 3x2
x2 5
3x1 + x2 9
2x1 + 5x2 10
x1 + x2 8
x1 0
x1 x2 2
x2 0
x1 0
x2 0
c) p = 2x1 x2
d) p = x2 x1
3x1 + 2x2 12
x1 + x2 2
x1 + 2x2 14
x2 x1 6
x1 + x2 4
2x1 + 3x2 38
x1 6
x1 10
x1 0
x1 0
x2 0
x2 0
2. Consider the problem of maximizing p = x2 − 2x1 subject to:

x1 + x2 ≥ 6
x1 − 2x2 ≥ 0
x1 ≥ 0
x2 ≥ 0
Show that the feasible region is unbounded but that p still
has a maximum.

3. Show that there are no points in the following feasible region.

x1 + x2 ≤ 4
x2 − x1 ≥ 4
3x1 + x2 ≥ 12

4. In Example 4, assume that small toys continue to earn a

profit of $18 per toy but that profits for large toys increase.
Find the number of toys that should be produced to maximize profits in each of the following cases.
a) Large-toy profit is $23 per toy.
b) Large-toy profit is $24 per toy.
c) Large-toy profit is $25 per toy.
5. A man wishes to invest a portion of $100,000 in two stocks

A and B. He feels that at most $70,000 should go into A


and at most $60,000 should go into B. If A and B pay 10%
and 8% dividends, respectively, how much should he invest
in each stock to maximize his dividends?
6. Repeat Exercise 5 where A pays 8% and B pays 10%.
7. A vitamin pill manufacturer uses two ingredients P and Q. The amounts of vitamins A, C, and D per gram of ingredient are given in the table. The ingredients are mixed with at least 85 grams of filler to make batches of 100 grams that are then pressed into pills. The law requires that each batch contain at least 12 units of vitamin A, at least 12 units of vitamin C, and at least 10 units of vitamin D. If P costs $5 per gram and Q costs $2 per gram, how many grams of each should be used per batch to minimize the cost?

VITAMIN      P          Q
   A       2 units    1 unit
   C       5 units    2 units
   D       4 units    1 unit

8. Repeat Exercise 7 where P costs $2 per gram and Q costs $3 per gram.

9. An oil company produces two grades of heating oil, grade 1 and grade 2, and makes a profit of $8 per barrel on grade 1 oil and $5 per barrel on grade 2 oil. The refinery operates 100 hours per week. Grade 1 oil takes 1/4 hour per barrel to produce, whereas grade 2 oil takes only 1/8 hour per barrel. The pipeline into the refinery can supply only enough crude to make 500 barrels of oil (either grade). Finally, warehouse constraints dictate that no more than 400 barrels of either type of oil can be produced per week. What production levels should be maintained to maximize profit?

10. Repeat Exercise 9 where the profits are $7 on grade 1 oil and $6 on grade 2 oil.

11. A small bakery makes white bread and brown bread in batches, and it has the capacity to make 8 batches per day. Each batch of white bread requires 1 unit of yeast, but the brown bread takes 2 units of yeast per batch. On the other hand, white bread costs $30 per day for marketing, whereas brown bread only costs $10 per day. If $180 per day are available for marketing, and if 13 units of yeast are available per day, find the number of batches of each type that the bakery should make per day to maximize profits if it makes $300 profit per batch of white bread and $200 profit per batch of brown bread.

B.2 THE SIMPLEX ALGORITHM


The simplex algorithm is a simple, straightforward method for solving linear programming problems that was discovered in the 1940s by George Dantzig. The idea is to identify
certain basic feasible points and to prove that the maximum value (if it exists) of the
objective function p occurs at one of these points. Then the algorithm proceeds roughly
as follows: If a basic feasible point is at hand, a procedure is given for deciding whether
it yields the maximum value of the objective function and, if not, for finding a basic
feasible point that produces a larger value of the objective function. The process continues until a maximum is reached (or until it is established that there is no maximum).
We will develop the algorithm only for the standard linear programming problem:
Maximize the linear objective function
p = c1x1 + c2x2 + . . . + cnxn

of the variables x1, x2, . . . , xn subject to a finite collection of constraints:

a11x1 + a12x2 + . . . + a1nxn ≤ b1
a21x1 + a22x2 + . . . + a2nxn ≤ b2
. . .
am1x1 + am2x2 + . . . + amnxn ≤ bm

Furthermore, the variables xi and the constants bj are all required to be nonnegative:

xi ≥ 0 for i = 1, 2, . . . , n
bj ≥ 0 for j = 1, 2, . . . , m
The requirement that p be a linear function of the variables is vital (nonlinear programming is much more difficult), but the condition that the xi be nonnegative and the fact
that we are maximizing p (not minimizing) are not severe restrictions. On the other hand,
the requirement that the constants bj be nonnegative is a serious restriction (although it
is satisfied in many practical applications). We refer the reader to texts on linear programming for ways in which the algorithm can be used in the non-standard case.


The various steps in the algorithm are best explained by working a specific example
in detail.

PROTOTYPE EXAMPLE
Maximize p = 2x1 + 3x2 − x3 subject to:

x1 + 2x2 + 2x3 ≤ 6
3x1 − x2 + x3 ≤ 9
2x1 + 3x2 + 5x3 ≤ 20
xi ≥ 0 for i = 1, 2, 3
The first step in the procedure is to convert the constraints from inequalities to equalities.
This is achieved by introducing new variables x4, x5, and x6 (called slack variables), one for
each constraint. The new problem is to maximize p = 2x1 + 3x2 − x3 + 0x4 + 0x5 + 0x6
subject to:

x1 + 2x2 + 2x3 + x4 = 6
3x1 − x2 + x3 + x5 = 9
2x1 + 3x2 + 5x3 + x6 = 20
xi ≥ 0 for i = 1, 2, 3, 4, 5, 6
The claim is that if (x1, x2, x3, x4, x5, x6) is a solution to this problem, then (x1, x2, x3) is a solution to the original problem. In fact, the constraints are satisfied (because x4 ≥ 0, x5 ≥ 0, and x6 ≥ 0), so (x1, x2, x3) is a feasible solution for the original problem. Moreover, if (x1′, x2′, x3′) were another feasible point for the original problem yielding a larger value of p, then taking

x4′ = 6 − x1′ − 2x2′ − 2x3′
x5′ = 9 − 3x1′ + x2′ − x3′
x6′ = 20 − 2x1′ − 3x2′ − 5x3′

would give a feasible point (x1′, x2′, x3′, x4′, x5′, x6′) for the new problem yielding a larger value of p, which is a contradiction.
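The role of the slack variables is easy to see numerically. The lines below are an illustration of ours (not part of the prototype): they compute x4, x5, x6 for a chosen point, and the original constraints hold exactly when all three slacks come out nonnegative.

# Python: slack variables for the prototype example's three constraints.
def slacks(x1, x2, x3):
    x4 = 6 - x1 - 2*x2 - 2*x3
    x5 = 9 - 3*x1 + x2 - x3
    x6 = 20 - 2*x1 - 3*x2 - 5*x3
    return x4, x5, x6

print(slacks(1, 1, 1))   # (1, 6, 10): all nonnegative, so (1, 1, 1) is feasible
print(slacks(2, 2, 2))   # (-4, 3, 0): x4 < 0, so (2, 2, 2) violates the first constraint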
So it suffices to solve the new problem. To do so, write the relationship p = 2x1 + 3x2 − x3 as a fourth equation to get

x1 + 2x2 + 2x3 + x4 = 6
3x1 − x2 + x3 + x5 = 9
2x1 + 3x2 + 5x3 + x6 = 20
−2x1 − 3x2 + x3 + p = 0

This amounts to considering p as yet another variable. The augmented matrix (Section 1.1) for this system of equations is

 x1   x2   x3   x4   x5   x6   p
  1    2    2    1    0    0   0    6
  3   −1    1    0    1    0   0    9
  2    3    5    0    0    1   0   20
 −2   −3    1    0    0    0   1    0

This is called the initial simplex tableau for the problem. The idea is to use elementary
row operations to create a sequence of such tableaux (keeping p in the bottom row) that
will lead to a solution. This is analogous to the modification of the augmented matrix
in Gaussian elimination, except that here we allow only feasible solutions.


Note that the columns corresponding to the slack variables x4, x5, and x6 all consist
of zeros and a single 1 and that the 1s are in different rows (the way these variables were
introduced guarantees this). These will be called basic columns, and the slack variables
are called the basic variables in the initial tableau. (They are indicated by a box.)
Because of this, there is one obvious solution to the equations: Set all the nonbasic variables equal to zero and solve for the basic variables: x4 = 6, x5 = 9, x6 = 20 (and p = 0).
In other words,
(0, 0, 0, 6, 9, 20) is a feasible solution yielding p = 0
Such a solution (with all nonbasic variables zero) is called a basic feasible solution.
Note the role of the last column in all this: The numbers 6, 9, and 20 are the constants
in the original constraints. It is the fact that these are positive that makes (0, 0, 0, 6, 9, 20)
a feasible solution. Also, the bottom entry in the last column is the value of p at the basic
feasible solution (0 in this case).
The key to the whole algorithm is the following theorem. The proof is not difficult
but would require some preliminary discussion of convex sets. Hence we omit it and
refer the reader to texts on linear programming, such as S. I. Gass, Linear Programming,
4th ed. (New York: McGraw-Hill, 1975).
THEOREM 1

If a standard linear programming problem has a solution, then there is a basic feasible
solution that yields the maximum value of the objective function. (Such basic feasible
solutions are called optimal.)
Hence our goal is to find an optimal basic feasible point. Our construction using slack
variables guarantees an initial basic feasible solution; the next step is to see whether it is
optimal.
The bottom row of the initial tableau gives p in terms of the nonbasic variables (the
original expression for p in this case):
p = 2x1 + 3x2 − x3
The fact that some of the coefficients here are positive suggests that this value of p is not
optimal because increasing x1 or x2 at all will increase p. In fact, it would seem better to
try to increase x2; it has the larger of the two positive coefficients (equivalently the most
negative entry in the last row of the tableau). This in turn suggests that we try to modify
the tableau so that x2 becomes a new basic variable. For this reason, x2 is called the
entering variable. Its column is called the pivot column.
This is accomplished by doing elementary row operations to convert the pivot column into a basic column. The question is where to locate the 1. We do not put the 1 in
the last row because we do not want to disturb p, but it can be placed at any other location in the pivot column where the present entry is nonzero (they all qualify in this
example). The entry chosen is called the pivot, and it is chosen as follows:
1. The pivot entry must be positive.
2. Among the positive entries available, the pivot is the one that produces the

smallest ratio when divided into the right-most entry in its row.
These are chosen so that the basic feasible solution in the tableau we are creating will
indeed be feasible. (The situation where no unique pivot entry is determined by conditions 1 and 2 will be discussed later).
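These two rules are mechanical enough to code directly. The helper below is our sketch (the name choose_pivot_row is not from the text): given the pivot column and the right-most column of a tableau, with the bottom row already excluded, it returns the index of the pivot row, or None when the column has no positive entry (the unbounded case treated in step 3 of the algorithm later in this section).

# Python: rules 1 and 2 for selecting the pivot entry within a pivot column.
def choose_pivot_row(pivot_col, rhs):
    best_row, best_ratio = None, None
    for i, (t, d) in enumerate(zip(pivot_col, rhs)):
        if t > 0:                              # rule 1: the pivot must be positive
            ratio = d / t                      # rule 2: compare ratios
            if best_ratio is None or ratio < best_ratio:
                best_row, best_ratio = i, ratio
    return best_row

# In the initial tableau of the prototype, the pivot column x2 is (2, -1, 3)
# and the right-most column is (6, 9, 20); the smallest ratio 6/2 = 3 picks row 0.
print(choose_pivot_row([2, -1, 3], [6, 9, 20]))   # 0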
Returning to the prototype example, we rewrite the initial tableau and circle the
pivot entry. The ratios corresponding to the two positive entries in the pivot column

(column 2 here) are shown at the right. No ratio is computed for row 2, because the
corresponding entry in the pivot column is negative. Hence the pivot is 2 (circled).
 x1   x2   x3   x4   x5   x6   p
  1    2    2    1    0    0   0    6     ratio: 6/2 = 3
  3   −1    1    0    1    0   0    9
  2    3    5    0    0    1   0   20     ratio: 20/3 = 6.7
 −2   −3    1    0    0    0   1    0

Now do elementary row operations to convert the pivot to 1 and all other entries in its
column to 0. The result is
 x1    x2   x3    x4    x5   x6   p
 1/2    1    1    1/2    0    0   0    3
 7/2    0    2    1/2    1    0   0   12
 1/2    0    2   −3/2    0    1   0   11
−1/2    0    4    3/2    0    0   1    9

Note that the former basic variable x4 is no longer basic (this is because it had a 1 in the
same row as the pivot), and it is sometimes called the departing variable. The new basic
variables are x2, x5, and x6, and the new basic feasible solution (taking the new nonbasic
variables equal to zero) is x2 = 3, x5 = 12, x6 = 11, and p = 9. In other words,
(0, 3, 0, 0, 12, 11) is the feasible solution yielding p = 9
This is better than before; p has increased from 0 to 9.
Now repeat the process. The last row here yields
p = 9 + (1/2)x1 − 4x3 − (3/2)x4
so there is still hope of increasing p by making x1 basic (it has a positive coefficient).
Hence the first column is the pivot column and all three entries (above the bottom row)
are positive. The tableau is displayed once more, with the ratios given and the next pivot
(with the smallest ratio) circled.
 x1    x2   x3    x4    x5   x6   p
 1/2    1    1    1/2    0    0   0    3     ratio: 3/(1/2) = 6
 7/2    0    2    1/2    1    0   0   12     ratio: 12/(7/2) = 24/7
 1/2    0    2   −3/2    0    1   0   11     ratio: 11/(1/2) = 22
−1/2    0    4    3/2    0    0   1    9

Row operations give the third tableau with x1, x2, and x6 as basic variables.
 x1   x2   x3     x4     x5    x6   p
  0    1   5/7    3/7   −1/7    0   0    9/7
  1    0   4/7    1/7    2/7    0   0   24/7
  0    0  12/7  −11/7   −1/7    1   0   65/7
  0    0  30/7   11/7    1/7    0   1   75/7

The corresponding basic feasible solution (setting x3 = x4 = x5 = 0) is x1 = 24/7, x2 = 9/7, x6 = 65/7, and p = 75/7. In other words,

(24/7, 9/7, 0, 0, 0, 65/7) is a feasible solution yielding p = 75/7

However, we claim that this is optimal. The last row of the third tableau gives

p = 75/7 − (30/7)x3 − (11/7)x4 − (1/7)x5

so, because x3, x4, and x5 are nonnegative, p can be no greater than 75/7. The preceding solution achieves p = 75/7, so it must be optimal. This completes the solution of the prototype example.
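The whole computation above can be automated. The following sketch is our own Python rendering of the procedure (names such as simplex_standard are ours, and it assumes the standard form described earlier, with nonnegative right-hand sides): it builds the initial tableau with slack variables and repeats the pivot steps until the bottom row has no negative entries, or reports that p is unbounded. Run on the prototype data it reproduces p = 75/7 at x1 = 24/7, x2 = 9/7, x3 = 0.

# Python: a tableau-based simplex sketch for standard problems
# (maximize c.x subject to A x <= b, x >= 0, assuming every b[j] >= 0).
from fractions import Fraction

def simplex_standard(c, A, b):
    m, n = len(A), len(c)
    # Initial tableau: [A | I | 0 | b] with the objective row [-c | 0 | 1 | 0] at the bottom.
    T = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(int(i == k)) for k in range(m)]
         + [Fraction(0), Fraction(b[i])] for i in range(m)]
    T.append([Fraction(-cj) for cj in c] + [Fraction(0)] * m + [Fraction(1), Fraction(0)])
    basis = [n + i for i in range(m)]                 # the slack variables start out basic

    while True:
        bottom = T[-1][:-1]
        col = min(range(len(bottom)), key=lambda j: bottom[j])
        if T[-1][col] >= 0:                           # test for optimality
            x = [Fraction(0)] * (n + m)
            for i, var in enumerate(basis):
                x[var] = T[i][-1]
            return T[-1][-1], x[:n]
        rows = [i for i in range(m) if T[i][col] > 0]
        if not rows:                                  # no positive entry: p is unbounded
            return None, None
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])   # smallest-ratio rule
        piv = T[row][col]
        T[row] = [t / piv for t in T[row]]            # make the pivot entry 1
        for i in range(m + 1):
            if i != row and T[i][col] != 0:           # clear the rest of the pivot column
                f = T[i][col]
                T[i] = [t - f * s for t, s in zip(T[i], T[row])]
        basis[row] = col

p, x = simplex_standard(c=[2, 3, -1],
                        A=[[1, 2, 2], [3, -1, 1], [2, 3, 5]],
                        b=[6, 9, 20])
print(p, [str(v) for v in x])   # 75/7 ['24/7', '9/7', '0']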
Note that the test for optimality is clear: If the last row of a tableau has no negative entries (other than possibly the last), then the corresponding basic feasible solution is optimal. If not, the column
corresponding to the most negative entry in the last row is the pivot column, the corresponding variable is the entering variable, and a new tableau is constructed with that
variable as a new basic variable.
Of course not every standard linear programming problem has a solution (see
Exercise 1). It can happen that feasible points can be found that make the objective
function p as large as we like. In this case p has no maximum and we say p is unbounded.
The simplex algorithm provides a way to determine whether this is the case (step 3 in
the flow chart that follows).
The algorithm works in exactly the same way for any standard linear programming
problem. Suppose that
p = c1x1 + c2x2 + . . . + cnxn

is to be maximized subject to the following m constraints:

a11x1 + a12x2 + . . . + a1nxn ≤ b1
a21x1 + a22x2 + . . . + a2nxn ≤ b2
. . .
am1x1 + am2x2 + . . . + amnxn ≤ bm

xi ≥ 0 for i = 1, 2, . . . , n
bj ≥ 0 for j = 1, . . . , m
Introduce m slack variables xn+1, . . . , xn+m to make the inequalities into equalities. The new problem is to maximize

p = c1x1 + c2x2 + . . . + cnxn + 0xn+1 + . . . + 0xn+m

subject to:

a11x1 + a12x2 + . . . + a1nxn + xn+1 = b1
a21x1 + a22x2 + . . . + a2nxn + xn+2 = b2
. . .
am1x1 + am2x2 + . . . + amnxn + xn+m = bm

xi ≥ 0 for i = 1, 2, . . . , n + m
bj ≥ 0 for j = 1, 2, . . . , m

Any optimal solution (x1, . . . , xn, xn + 1, . . . , xn + m) to this new problem yields an optimal
solution (x1, . . . , xn) to the original problem (the argument in the prototype example
works), so we solve the new problem. Because the constraints are now equations, some
of the methods of Gaussian elimination apply. For convenience, write the expression for
p as another equation:
−c1x1 − c2x2 − . . . − cnxn + p = 0
The augmented matrix for this larger system of equations is
  x1    x2   . . .   xn    xn+1  xn+2  . . .  xn+m   p
 a11   a12   . . .  a1n      1     0   . . .    0    0    b1
 a21   a22   . . .  a2n      0     1   . . .    0    0    b2
 . . .
 am1   am2   . . .  amn      0     0   . . .    1    0    bm
 −c1   −c2   . . .  −cn      0     0   . . .    0    1    0

and this is called the initial simplex tableau for the problem. The slack variables are
called basic variables because their columns are basic columns (all entries are zero
except for a single one). The fact that bj ≥ 0 holds for each j means that we can obtain
a feasible solution by setting all the nonbasic variables equal to zero. The result:
(0, 0, . . . , 0, b1, b2, . . . , bm) is a feasible solution yielding p = 0
This is the initial basic feasible solution. Of course, it may not be optimal.


Now the algorithm starts. Suppose a tableau has been constructed with m basic variables (that is, m variables corresponding to basic columns in the tableau) such that the
last column contains no negative entries (for example, the initial tableau). Then, if the
nonbasic variables are set equal to zero, the values of the basic variables are determined
(they are just the entries of the last column, the value of p being the last entry). Hence
this gives the basic feasible solution corresponding to the tableau.
The actual execution of the algorithm is best described in the following steps. For
convenience, we display them first as a flow chart.

THE SIMPLEX ALGORITHM

Step 0. Prepare the initial tableau.
Step 1. Test for optimality. If optimal, STOP; otherwise go to Step 2.
Step 2. Choose the pivot column.
Step 3. Test for an unbounded objective function. If unbounded, STOP; otherwise go to Step 4.
Step 4. Choose the pivot entry.
Step 5. Make the pivot column basic, and return to Step 1.

The details of the steps are as follows. As before, we assume that there are n variables
and m constraints.
STEP 0. Prepare the initial tableau.
Introduce slack variables, one for each constraint, and convert each
inequality into an equation. Then write the expression for p as another
equation (as before). The augmented matrix for the resulting system
of m + 1 equations (the equation for p last) is the initial tableau.
STEP 1. Test for optimality.
Given a tableau, the corresponding basic feasible solution is optimal if
no entry in the last row (except the last) is negative. (The argument in
the prototype example works.) In this case, stop; the maximum value of p is the lower right entry. Otherwise go on to step 2.
STEP 2. Choose the pivot column.
This is the column (not the last) whose bottom entry is the most
negative (the worst offender, as it were). If there is a tie, choose
either possibility.
STEP 3. Test for unbounded objective function.
This occurs if no entry in the pivot column is positive. (We omit the
proof.) In this case, stop; the objective function has no maximum.
Otherwise go on to step 4.

STEP 4. Choose the pivot entry.
Among the positive entries in the pivot column, choose the one that has the smallest ratio when divided into the last entry in its row. If two ratios are equal, choose either. (This may lead to cycling; see Remark 2, which follows Example 1.)

STEP 5. Make the pivot column basic.
Use elementary row operations to make the pivot entry 1 and every other entry in the pivot column (including the last) zero.

Example 1

Maximize p = 3x1 + x2 + 2x3 subject to:

2x1 − x2 + 3x3 ≤ 2
3x1 + x2 + x3 ≤ 5
xi ≥ 0 for i = 1, 2, 3

SOLUTION

Introduce slack variables x4 and x5, and rewrite the equation for p.
2x1 − x2 + 3x3 + x4 = 2
3x1 + x2 + x3 + x5 = 5
−3x1 − x2 − 2x3 + p = 0

xi ≥ 0 for i = 1, 2, 3, 4, 5
Hence the initial tableau (with the basic variables boxed) is
 x1   x2   x3   x4   x5   p
  2   −1    3    1    0   0   2     ratio: 2/2 = 1
  3    1    1    0    1   0   5     ratio: 5/3
 −3   −1   −2    0    0   1   0

The basic feasible solution here is not optimal (the last row has negative entries), and the pivot column is the first (−3 is the most negative). The ratios are computed as before and the pivot is circled. Hence row operations give the next tableau (the new basic variables are boxed).
 x1    x2    x3    x4   x5   p
  1   −1/2   3/2   1/2   0   0   1
  0    5/2  −7/2  −3/2   1   0   2
  0   −5/2   5/2   3/2   0   1   3

This is still not optimal, and the pivot column is the second. Here the pivot is the only
positive entry, so no ratios need to be computed. Row operations give
 x1   x2   x3    x4    x5   p
  1    0   4/5   1/5   1/5   0   7/5
  0    1  −7/5  −3/5   2/5   0   4/5
  0    0   −1     0     1    1    5

Again this is not optimal, but p has increased to 5 (lower right entry). The next tableau is
 x1   x2   x3    x4    x5   p
 5/4    0    1   1/4   1/4   0    7/4
 7/4    1    0  −1/4   3/4   0   13/4
 5/4    0    0   1/4   5/4   1   27/4

Hence p has a maximum of 27/4 when x1 = 0, x2 = 13/4, and x3 = 7/4.
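As a quick check, the simplex_standard sketch given after the prototype example reproduces this result (under the same assumptions stated there).

# Python (continuing the earlier sketch): Example 1 of this section.
p, x = simplex_standard(c=[3, 1, 2],
                        A=[[2, -1, 3], [3, 1, 1]],
                        b=[2, 5])
print(p, [str(v) for v in x])   # 27/4 ['0', '13/4', '7/4']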

We conclude with two remarks on the algorithm.


Remark 1: Rationale for selecting the pivot entry (step 4).


Suppose the rth column is the pivot column. Write its entries, from top to bottom, as

t1r, . . . , tqr, . . . , tmr, sr

where sr < 0 (from step 2) and at least one other entry is positive. Suppose we decide to
take tqr as the pivot. Write its row as
(tq1, tq2, . . . , tqr, . . . , 0, dq)
where the last entry dq ≥ 0 (because the tableau produced a feasible solution when all
nonbasic variables were set equal to zero, so dq is the value of one of the basic variables).
We want to divide this row by tqr (so tqr ≠ 0), and the last entry of the new row, dq/tqr,
is required to be nonnegative too (we want a new tableau). Hence the pivot tqr must be
positive.
Then we convert each other entry tir in the pivot column to zero by subtracting tir
times the pivot row. If the right entry of row i is di, the new right entry is

di − tir(dq/tqr) = tir(di/tir − dq/tqr)

This is clearly positive if tir is negative or zero (use the left side). If tir > 0, it is positive
if the ratio dq/tqr for the pivot is less than the ratio for tir. This shows why we choose the
pivot with minimal ratio (in step 4).
Remark 2: Degeneracy and cycling.
If two ratios in step 4 are equal, the argument in Remark 1 shows that in the next
tableau, some entry in the last column will be zero (so some basic variable will take the
value 0). In this case the algorithm is said to degenerate, and it may lead to cycling (that
is, the sequence of basic feasible solutions we are creating may contain the same solution
twice and so continue to loop indefinitely). This is rare in practical problems (computer
round-off error tends to eliminate it), and algorithms exist to deal with it.

Exercises B.2
1. Consider the following standard linear programming

problem: Maximize p = x1 + x2 subject to:


−x1 + x2 ≤ 1
x1 − 2x2 ≤ 2
x1 ≥ 0, x2 ≥ 0
a) Using the methods of Section B.1, show that p is

unbounded over the feasible region.


b) Use the simplex algorithm to arrive at the same

conclusion.
2. In each case, maximize p subject to the given constraints, and find values of the xi that yield the maximum. Assume xi ≥ 0 for all i.

a) p = x1 + 2x2 + 3x3
   3x1 − x2 − x3 ≤ 3
   x1 + x2 + 2x3 ≤ 2
b) p = 2x1 + x2 + x3
   3x1 + x2 + 2x3 ≤ 2
   x1 + x2 + 3x3 ≤ 5
c) p = x1 + 2x2
   3x1 + x2 ≤ 4
   x1 + 2x2 ≤ 3
   2x1 + 3x2 ≤ 5
d) p = 3x1 + 2x2
   4x1 + 3x2 ≤ 5
   x1 + x2 ≤ 2
   3x1 + 4x2 ≤ 4
e) p = 3x1 + x2 + 2x3
   2x1 + x2 − x3 ≤ 3
   x1 + x2 + x3 ≤ 4
   x1 + 2x2 + x3 ≤ 5
f) p = 2x1 + 3x2 + 2x3
   2x1 + 3x2 − x3 ≤ 4
   x1 − x2 + x3 ≤ 2
   3x1 + 4x2 + x3 ≤ 5
g) p = x1 + x2 + 2x3 + x4
   3x1 + x2 − x3 + 2x4 ≤ 5
   x1 + 2x2 + x3 − x4 ≤ 4
h) p = 2x1 + x2 + 3x3 + 2x4
   3x1 + 4x2 + x3 − 2x4 ≤ 6
   2x1 + 3x2 − x3 + 3x4 ≤ 5

3. Can the maximum of the objective function ever be negative

in a standard linear programming problem? Explain.


4. Suppose a standard linear programming problem has more

variables than constraints. If the objective function has a


maximum, show that it must have at least one optimal solution in which at least one of the original variables is zero.
5. An automobile company makes three types of cars: compact,

sports, and full-size; the profits per unit are $500, $700,
and $600, respectively. Transportation costs per vehicle are
$300, $400, and $500, respectively. And labour costs are
$500, $500, and $400, respectively. If the total transportation cost is not to exceed $40,000 and the total labour cost
is not to exceed $30,000, find the maximum profit.
6. A short-order restaurant sells three dinners (regular, diet, and super) on which it makes profits of $1.00, $2.00, and $1.50, respectively. The restaurant cannot serve more
than 300 dinners daily. The three dinners require 2, 4,
and 2 minutes to prepare, and at most 1000 minutes of

preparation time are available per day. The dinners require


2, 0, and 3 minutes to cook, and 1000 minutes of cooking
time are available daily. Finally, the dinners require 50, 300,
and 100 grams of fresh produce, and this commodity is
limited to 45 kilograms daily. Find the numbers of dinners
of each type that will maximize profits.
7. A trucking company has 100 trucks, which are dispatched

from three locations: A, B, and C. Each truck at A, B and C


uses 40, 30, and 30 units of fuel daily, and 2500 units per
day are available. The costs of labour to operate and maintain each truck are $70, $80, and $70 per truck per day at
the three locations, and $8000 per day is the maximum
that the company can pay for labour. How many trucks
should be allocated to each location if the daily profits per
truck are $300, $250, and $200 at locations A, B, and C?
8. A lawn mower company makes three models: standard, deluxe, and super. The construction of each mower involves three stages: motor construction, frame construction, and final assembly. The following table gives the number of hours of labour required per mower for each stage and the total number of hours of labour available per week for each stage. It also gives the profit per mower. Find the weekly production schedule that maximizes profit.

            Standard   Deluxe   Super   Hours Available
motor           1         1       2          2500
frame           1         2       2          2000
assembly        1         1       1          1800

PROFIT         $30       $40     $55
