APPENDIX B

Introduction to Linear Programming
Many important problems involve linear inequalities rather than linear equations. For
example, a condition on the variables x and y might take the form of an inequality
2x - 3y ≤ 4 rather than that of an equality 2x - 3y = 4. Linear programming is a
method of finding a solution to a system of such inequalities that maximizes a function
of the form p = ax + by, where a and b are constants. The general method of solving
such problems (called the simplex method) involves Gaussian elimination techniques, so
it is natural to include a discussion of it here. However, the proofs of the main theorems
are omitted. The interested reader should consult a text on linear programming. [For
example, S. I. Gass, Linear Programming, 4th ed. (New York: McGraw-Hill, 1975) gives a
thorough treatment. J. G. Kemeny, J. L. Snell, and G. L. Thompson, Introduction to Finite
Mathematics (Englewood Cliffs, N.J.: Prentice Hall, 1974) gives a more elementary treatment and relates linear programming to the theory of games.]
B.1 GRAPHICAL METHODS

[Figure B.1: the line 2x1 + 3x2 = 5; the half-plane 2x1 + 3x2 ≤ 5 consists of the points on or below the line.]
When only two variables are present, there is a geometric method of solution that,
although it is not useful as a practical tool when more variables are involved, is useful
in illustrating how solutions to these problems arise. Before giving an example, we must
clarify what an inequality of the form
2x1 + 3x2 ≤ 5
means in geometric terms. (We use x1 and x2 in place of x and y to conform to later
notations.) Of course, the graph of the corresponding equation 2x1 + 3x2 = 5 is well
known; it is a line and consists of all points P(x1, x2) in the plane whose coordinates
x1 and x2 satisfy the equation. The lines parallel to this one all have equations
2x1 + 3x2 = c for some value of c, so the points P(x1, x2) whose coordinates satisfy the
inequality 2x1 + 3x2 < 5 are just those points lying on one side of the line 2x1 + 3x2 = 5
(points on the other side are those satisfying 2x1 + 3x2 > 5). The situation is illustrated in
Figure B.1. The points P(x1, x2) satisfying 2x1 + 3x2 ≤ 5 are just the points on or below
the line. In general, the region on or to one side of a straight line is called a half-plane.
We shall loosely speak of the half-plane ax1 + bx2 ≤ c (where a, b, and c are constants).
The line ax1 + bx2 = c will be called the boundary of the half-plane.
When two or more inequalities are given, the set of points that satisfies all of them
is the region common to all the half-planes involved, and such regions are important
in linear programming. Here is an example.
Example 1

Determine the region in the plane given by

x1 ≥ 0
x1 - x2 ≤ 0
x1 + x2 ≤ 4
SOLUTION

[Figure: the lines x1 - x2 = 0 and x1 + x2 = 4, which meet at (2, 2). The region consists of the points lying on or to the right of the x2-axis, on or above the line x1 - x2 = 0, and on or below the line x1 + x2 = 4.]
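Region-membership tests like the one in Example 1 are easy to check numerically. Here is a minimal Python sketch; the inequality directions x1 ≥ 0, x1 - x2 ≤ 0, x1 + x2 ≤ 4 are those reconstructed for Example 1:

```python
# Membership test for the region of Example 1:
#   x1 >= 0,  x1 - x2 <= 0,  x1 + x2 <= 4
def in_region(x1, x2):
    return x1 >= 0 and x1 - x2 <= 0 and x1 + x2 <= 4

print(in_region(2, 2))   # True: (2, 2) is where the two boundary lines meet
print(in_region(3, 2))   # False: violates x1 - x2 <= 0
```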
The general linear programming problem (with two variables) can now be stated:
Suppose a region in the plane (called the feasible region) is given as in Example 1
by a set of linear inequalities (called constraints) in two variables x1 and x2 (so the
feasible region consists of all points common to all the corresponding half-planes).
The problem is to find all points in this region at which a linear function of the form
p = ax1 + bx2 is as large as possible. This function p is called the objective function,
and these points are said to maximize p over the feasible region. In applications, p
might be the profit in some commercial venture, or some other quantity that is desired
to be large. The precise nature of p plays no part in the solution, except that it should be
a linear function of the variables x1 and x2 (hence the name linear programming). The
following example illustrates how the method works.
Example 2

Maximize p = 2x1 + 3x2 subject to

4x1 + x2 ≤ 16
x1 + x2 ≤ 6
x1 + 3x2 ≤ 15
x1 ≥ 0, x2 ≥ 0

SOLUTION

The lines 4x1 + x2 = 16, x1 + x2 = 6, and x1 + 3x2 = 15 are plotted in the first diagram (see figure), and the corresponding half-planes lie below these lines. Hence the feasible region in question is the shaded part. The second diagram exhibits this region again (but with a larger scale) and also shows the line 2x1 + 3x2 = p, plotted for various values of p. Because p has the same value at any point on one of these lines, they are sometimes called level lines for p. The aim is to find a point in the shaded region at which p has as large a value as possible. These values increase as the level lines rise, so the vertex (3/2, 9/2), the intersection of the lines x1 + 3x2 = 15 and x1 + x2 = 6, clearly gives the largest value of p. This value is

p = 2(3/2) + 3(9/2) = 16.5

and it is the desired maximal value.

[First diagram: the three lines with intercepts (0, 16), (0, 6), (0, 5), (4, 0), (6, 0), and (15, 0). Second diagram: the feasible region with vertices including (10/3, 8/3) and (3/2, 9/2), and the level lines p = 0, 3, 6, 9, 12, 15.]
It is quite clear that the method used in Example 2 will work in a variety of similar
situations. However, before we attempt to say anything in general, consider the following
example.
Example 3

SOLUTION

[The statement of this example and the text of its solution were lost in extraction. The surviving figure shows an unbounded feasible region together with level lines p = 6, p = 9, and p = 12 for the objective function.]
With this example in mind, the reader might guess that, in general, the objective
function has no maximum whenever the feasible region is unbounded. But this is not
the case. Exercise 2 gives a situation wherein the feasible region is unbounded but the
objective function does indeed have a maximum (the reader should try to construct
such an example before working this exercise).
What is true is that if the feasible region is bounded (that is, can be contained in
some circle), then the objective function p has a maximum value. In fact, the level lines
for p form a set of parallel lines, each corresponding to a fixed value of p. If p were to
increase continuously, the corresponding level line would move continuously in a direction perpendicular to itself. As this moving level line crosses the feasible region, it is
clear that a largest value of p can be found so that the corresponding line intersects the
feasible region. Because the edges of the feasible region are line segments, even more
can be said: Either this level line corresponding to the largest value of p will intersect
the feasible region at a vertex or, possibly, the intersection will consist of an entire edge
of the feasible region. Either way, the maximum value of p will be achieved at a vertex
of the feasible region. This is good news. There are at most a finite number of vertices
(there are only finitely many constraints), so only a finite number of feasible points (the
vertices) need be looked at. If p is evaluated at each of them, the vertex yielding the
largest value of p will be the desired feasible point.
Finally, the same argument shows that p achieves a minimum value over a bounded
feasible region, and that minimum is found at a vertex. The following theorem summarizes this discussion.
THEOREM 1

Let p = ax1 + bx2 be a linear objective function defined on a bounded feasible region. Then p attains a maximum value and a minimum value over the region, and each of these values is attained at a vertex of the region.
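Theorem 1 turns maximization into a finite search. As an illustration, the vertices of Example 2's region can be found by intersecting its boundary lines pairwise and keeping the feasible intersections. A Python sketch in exact rational arithmetic (the constraints are those assumed for Example 2):

```python
from fractions import Fraction as F
from itertools import combinations

# Boundary lines a1*x1 + a2*x2 = b of Example 2's feasible region,
# including the coordinate axes x1 = 0 and x2 = 0.
lines = [((4, 1), 16), ((1, 1), 6), ((1, 3), 15), ((1, 0), 0), ((0, 1), 0)]

def intersect(line1, line2):
    # Solve the 2x2 system by Cramer's rule; None if the lines are parallel.
    (a1, a2), b = line1
    (c1, c2), d = line2
    det = a1 * c2 - a2 * c1
    if det == 0:
        return None
    return (F(b * c2 - a2 * d, det), F(a1 * d - b * c1, det))

def feasible(pt):
    x1, x2 = pt
    return (4*x1 + x2 <= 16 and x1 + x2 <= 6 and x1 + 3*x2 <= 15
            and x1 >= 0 and x2 >= 0)

vertices = [pt for l1, l2 in combinations(lines, 2)
            if (pt := intersect(l1, l2)) is not None and feasible(pt)]

# Evaluate p = 2*x1 + 3*x2 at every vertex; the largest value is the maximum.
best = max(vertices, key=lambda v: 2*v[0] + 3*v[1])
print(best, 2*best[0] + 3*best[1])   # the vertex (3/2, 9/2) gives p = 33/2 = 16.5
```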
Example 4
SOLUTION
A manufacturer wants to make two types of toys. The large toy requires 4 square feet of
plywood and 50 milliliters (ml) of paint, whereas the smaller toy requires 3 square feet
of plywood and only 20 ml of paint. There are 1800 square feet of plywood and 16 liters
of paint available. If the large toys sell for a profit of $21 each, and each of the small
toys yields an $18 profit, determine the number of toys of each size that the manufacturer must make to maximize the total profit.
Let x1 and x2 denote the number of large and small toys, respectively, to be made. These variables satisfy the following constraints:

4x1 + 3x2 ≤ 1800 (plywood)
5x1 + 2x2 ≤ 1600 (paint)
x1 ≥ 0
x2 ≥ 0

The feasible region corresponding to these inequalities is plotted in the diagram. The four vertices have coordinates (0, 0), (0, 600), (1200/7, 2600/7), and (320, 0). The total profit p of the enterprise is

p = 21x1 + 18x2

and the values of p at each of the vertices are

p = 0 at (0, 0)
p = 10,800 at (0, 600)
p = 10,285.71 at (1200/7, 2600/7)
p = 6,720 at (320, 0)

[Figure: the feasible region with vertices (0, 0), (0, 600), (1200/7, 2600/7), and (320, 0).]
Hence the manufacturer will maximize profits by producing 600 small toys and no large
toys at all.
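The vertex evaluation in Example 4 is easy to reproduce in exact arithmetic; a minimal Python sketch:

```python
from fractions import Fraction as F

# The four vertices of Example 4's feasible region and the profit p = 21*x1 + 18*x2.
vertices = [(F(0), F(0)), (F(0), F(600)), (F(1200, 7), F(2600, 7)), (F(320), F(0))]
profit = {v: 21*v[0] + 18*v[1] for v in vertices}

best = max(profit, key=profit.get)
print(best, profit[best])   # (0, 600) with p = 10800

# The interior vertex gives 72000/7, roughly 10285.71, confirming the table above.
print(float(profit[(F(1200, 7), F(2600, 7))]))
```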
Theorem 1 can be extended. First of all, the objective function p may very well have
a maximum over the feasible region even if that region is unbounded. Moreover, the
argument leading to Theorem 1 can be modified to show that, if the objective function
p has a maximum over the feasible region, that maximum will be attained at a vertex.
Second, Theorem 1 can be extended to more than two variables x1, x2, . . . , xn.
A function of the form

p = c1x1 + c2x2 + ⋯ + cnxn
is called a linear function of these variables, and a condition of the form
a1x1 + a2x2 + ⋯ + anxn ≤ b
is called a linear constraint on the variables. The set of n-tuples (x1, x2, . . . , xn) satisfying a finite number of such linear constraints is called the feasible region determined
by these constraints, and the n-tuples themselves are called feasible points. Consider
the equation

a1x1 + a2x2 + ⋯ + anxn = b

which was obtained from the foregoing constraint by replacing the inequality by equality. The set of all n-tuples (x1, x2, . . . , xn) satisfying this equation is called a bounding
hyperplane for the feasible region. In the case of two variables these are lines; they are
actual planes when n = 3. By analogy with the two-variable situation, an n-variable
feasible point is called an extreme point (or corner point) of the feasible region if it lies
on n (or more) bounding hyperplanes.
The general linear programming problem is to find a feasible point such that the
objective function p is as large as possible, in which case the point is said to maximize
p. (Similarly, we could seek a feasible point that minimizes p.) The extended theorem is
stated below. The proof (though similar in spirit to that of Theorem 1) is omitted. The feasible region is said to be bounded if there exists a number M such that |xi| ≤ M holds for every feasible point (x1, x2, . . . , xn) and each i = 1, 2, . . . , n.
THEOREM 2

Let p be a linear objective function defined on a bounded feasible region in the variables x1, x2, . . . , xn. Then p attains a maximum value and a minimum value over the region, and each of these values is attained at an extreme point of the region.
Example 5

SOLUTION

[The statement and worked solution of this example were lost in extraction; the procedure is to find the extreme points of the feasible region and evaluate p at each of them.]
The procedure we just used has two drawbacks. First, it is not always easy to determine whether a maximum exists. But even when this is known (if the feasible set is
bounded, for example), the number of extreme points can be very large and the amount
of computation required to find them can be excessive, even for a computer. This is why
the method is not pursued here.
A much more efficient procedure exists that reduces the number of extreme points
that must be examined. The idea is quite simple. To get some insight into how it works,
consider the general linear programming problem with three variables. Then the
bounding hyperplanes are real planes, and the edges of the feasible region are the lines
of intersection of pairs of bounding planes. Now suppose the objective function p is
evaluated at some extreme point. Choose an edge emanating from this point along
which the function p increases (or decreases if a minimum is desired). If no such edge
exists, it can be shown that the extreme point gives the maximum. Otherwise there are
two possibilities: (1) We encounter another extreme point on that edge at which p is
larger and then repeat the process. (2) There is no other vertex along this edge so p
increases without bound, and no maximum exists. At each stage we either discover
there is no maximum, or we find the maximum, or we are led to another extreme point
at which p is larger. Clearly the same extreme point cannot be encountered twice in this
fashion (p increases), and so, because there are only finitely many extreme points, the
process is effective: It either shows that no maximum exists, or, if there is a maximum, it
leads to an extreme point yielding that maximum. Furthermore, in the types of problems usually found in practice, the method converges quickly.
This description of the algorithm is geometric in nature. However, the whole thing
can be cast in algebraic form. This will be described in the next section.
Exercises B.1
1. In each case, find the maximum and minimum values of p subject to the given constraints. [The constraint sets for this exercise were lost in extraction.]

2. Consider p = x2 - 2x1 subject to

x1 + x2 ≥ 6
x1 - 2x2 ≥ 0
x1 ≥ 0
x2 ≥ 0

Show that the feasible region is unbounded but that p still has a maximum.

3. Determine the region in the plane given by

x1 + x2 ≤ 4
x2 - x1 ≤ 4
3x1 + x2 ≤ 12

4. Suppose, in Example 4, that the small toys still yield a profit of $18 per toy but that profits for large toys increase. Find the number of toys that should be produced to maximize profits in each of the following cases.
a) Large-toy profit is $23 per toy.
b) Large-toy profit is $24 per toy.
c) Large-toy profit is $25 per toy.

5. A man wishes to invest a portion of $100,000 in two stocks. [The remainder of this exercise was lost in extraction.]

[An exercise on vitamin pills, partly lost in extraction:] The amounts of vitamins A, C, and D per gram of ingredient are given in the table. The ingredients are mixed with at least 85 grams of filler to make batches of 100 grams that are then pressed into pills. The law requires that each [...]. The surviving table entries are 2 units, 1 unit, 5 units, 2 units, 4 units, and 1 unit, with a cost of $3 per gram.

9. An oil company produces two grades of heating oil, grade [the remainder of this exercise was lost in extraction].
B.2 THE SIMPLEX ALGORITHM
The various steps in the algorithm are best explained by working a specific example
in detail.
PROTOTYPE EXAMPLE

Maximize p = 2x1 + 3x2 - x3 subject to:

x1 + 2x2 + 2x3 ≤ 6
3x1 - x2 + x3 ≤ 9
2x1 + 3x2 + 5x3 ≤ 20
xi ≥ 0 for i = 1, 2, 3
The first step in the procedure is to convert the constraints from inequalities to equalities.
This is achieved by introducing new variables x4, x5, and x6 (called slack variables), one for
each constraint. The new problem is to maximize p = 2x1 + 3x2 - x3 + 0x4 + 0x5 + 0x6 subject to:

x1 + 2x2 + 2x3 + x4           =  6
3x1 - x2 + x3        + x5     =  9
2x1 + 3x2 + 5x3           + x6 = 20
xi ≥ 0 for i = 1, 2, 3, 4, 5, 6
The claim is that if (x1, x2, x3, x4, x5, x6) is a solution to this problem, then (x1, x2, x3) is a solution to the original problem. In fact, the constraints are satisfied (because x4 ≥ 0, x5 ≥ 0, and x6 ≥ 0), so (x1, x2, x3) is a feasible solution for the original problem. Moreover, if (x1, x2, x3) were another feasible point for the original problem yielding a larger value of p, then taking

x4 = 6 - x1 - 2x2 - 2x3
x5 = 9 - 3x1 + x2 - x3
x6 = 20 - 2x1 - 3x2 - 5x3

would give a feasible point (x1, x2, x3, x4, x5, x6) for the new problem yielding a larger value of p, which is a contradiction.
So it suffices to solve the new problem. To do so, write the relationship p = 2x1 + 3x2 - x3 as a fourth equation to get

x1 + 2x2 + 2x3 + x4            =  6
3x1 - x2 + x3        + x5      =  9
2x1 + 3x2 + 5x3           + x6 = 20
-2x1 - 3x2 + x3           + p  =  0

This amounts to considering p as yet another variable. The augmented matrix (Section 1.1) for this system of equations is

x1   x2   x3   x4   x5   x6   p
[ 1    2    2    1    0    0   0 |  6 ]
[ 3   -1    1    0    1    0   0 |  9 ]
[ 2    3    5    0    0    1   0 | 20 ]
[-2   -3    1    0    0    0   1 |  0 ]
This is called the initial simplex tableau for the problem. The idea is to use elementary
row operations to create a sequence of such tableaux (keeping p in the bottom row) that
will lead to a solution. This is analogous to the modification of the augmented matrix
in Gaussian elimination, except that here we allow only feasible solutions.
Note that the columns corresponding to the slack variables x4, x5, and x6 all consist
of zeros and a single 1 and that the 1s are in different rows (the way these variables were
introduced guarantees this). These will be called basic columns, and the slack variables are called the basic variables in the initial tableau.
Because of this, there is one obvious solution to the equations: Set all the nonbasic variables equal to zero and solve for the basic variables: x4 = 6, x5 = 9, x6 = 20 (and p = 0).
In other words,
(0, 0, 0, 6, 9, 20) is a feasible solution yielding p = 0
Such a solution (with all nonbasic variables zero) is called a basic feasible solution.
Note the role of the last column in all this: The numbers 6, 9, and 20 are the constants
in the original constraints. It is the fact that these are positive that makes (0, 0, 0, 6, 9, 20)
a feasible solution. Also, the bottom entry in the last column is the value of p at the basic
feasible solution (0 in this case).
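Reading the basic feasible solution off the initial tableau is mechanical; a small Python sketch for the prototype example:

```python
# Initial simplex tableau for the prototype example (three constraint rows
# with slack variables x4, x5, x6, then the p-row last).
# Columns: x1 x2 x3 x4 x5 x6 p | rhs
tableau = [
    [ 1,  2, 2, 1, 0, 0, 0,  6],
    [ 3, -1, 1, 0, 1, 0, 0,  9],
    [ 2,  3, 5, 0, 0, 1, 0, 20],
    [-2, -3, 1, 0, 0, 0, 1,  0],
]

# The basic columns x4, x5, x6 each contain a single 1; setting the nonbasic
# variables to zero reads the basic feasible solution off the last column.
basic = {3: 0, 4: 1, 5: 2}          # variable index -> its constraint row
solution = [0] * 6
for var, row in basic.items():
    solution[var] = tableau[row][-1]
print(solution, tableau[-1][-1])    # [0, 0, 0, 6, 9, 20] and p = 0
```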
The key to the whole algorithm is the following theorem. The proof is not difficult
but would require some preliminary discussion of convex sets. Hence we omit it and
refer the reader to texts on linear programming, such as S. I. Gass, Linear Programming,
4th ed. (New York: McGraw-Hill, 1975).
T HEOREM 1
If a standard linear programming problem has a solution, then there is a basic feasible
solution that yields the maximum value of the objective function. (Such basic feasible
solutions are called optimal.)
Hence our goal is to find an optimal basic feasible point. Our construction using slack
variables guarantees an initial basic feasible solution; the next step is to see whether it is
optimal.
The bottom row of the initial tableau gives p in terms of the nonbasic variables (the
original expression for p in this case):
p = 2x1 + 3x2 - x3
The fact that some of the coefficients here are positive suggests that this value of p is not
optimal because increasing x1 or x2 at all will increase p. In fact, it would seem better to
try to increase x2; it has the larger of the two positive coefficients (equivalently the most
negative entry in the last row of the tableau). This in turn suggests that we try to modify
the tableau so that x2 becomes a new basic variable. For this reason, x2 is called the
entering variable. Its column is called the pivot column.
This is accomplished by doing elementary row operations to convert the pivot column into a basic column. The question is where to locate the 1. We do not put the 1 in
the last row because we do not want to disturb p, but it can be placed at any other location in the pivot column where the present entry is nonzero (they all qualify in this
example). The entry chosen is called the pivot, and it is chosen as follows:
1. The pivot entry must be positive.
2. Among the positive entries available, the pivot is the one that produces the
smallest ratio when divided into the right-most entry in its row.
These are chosen so that the basic feasible solution in the tableau we are creating will
indeed be feasible. (The situation where no unique pivot entry is determined by conditions 1 and 2 will be discussed later).
Returning to the prototype example, we rewrite the initial tableau and mark the pivot entry (shown here in parentheses). The ratios corresponding to the two positive entries in the pivot column (column 2 here) are shown at the right. No ratio is computed for row 2, because the corresponding entry in the pivot column is negative. Hence the pivot is the 2 in row 1.

x1   x2   x3   x4   x5   x6   p
[ 1   (2)   2    1    0    0   0 |  6 ]   ratio: 6/2 = 3
[ 3   -1    1    0    1    0   0 |  9 ]
[ 2    3    5    0    0    1   0 | 20 ]   ratio: 20/3 = 6.7
[-2   -3    1    0    0    0   1 |  0 ]
Now do elementary row operations to convert the pivot to 1 and all other entries in its column to 0. The result is

x1     x2   x3    x4    x5   x6   p
[ 1/2    1    1    1/2    0    0   0 |  3 ]
[ 7/2    0    2    1/2    1    0   0 | 12 ]
[ 1/2    0    2   -3/2    0    1   0 | 11 ]
[-1/2    0    4    3/2    0    0   1 |  9 ]
Note that the former basic variable x4 is no longer basic (this is because it had a 1 in the
same row as the pivot), and it is sometimes called the departing variable. The new basic
variables are x2, x5, and x6, and the new basic feasible solution (taking the new nonbasic
variables equal to zero) is x2 = 3, x5 = 12, x6 = 11, and p = 9. In other words,
(0, 3, 0, 0, 12, 11) is the feasible solution yielding p = 9
This is better than before; p has increased from 0 to 9.
Now repeat the process. The last row here yields
p = 9 + (1/2)x1 - 4x3 - (3/2)x4
so there is still hope of increasing p by making x1 basic (it has a positive coefficient).
Hence the first column is the pivot column and all three entries (above the bottom row)
are positive. The tableau is displayed once more, with the ratios given and the next pivot
(with the smallest ratio) circled.
x1     x2   x3    x4    x5   x6   p
[ 1/2    1    1    1/2    0    0   0 |  3 ]   ratio: 3/(1/2) = 6
[(7/2)   0    2    1/2    1    0   0 | 12 ]   ratio: 12/(7/2) = 24/7
[ 1/2    0    2   -3/2    0    1   0 | 11 ]   ratio: 11/(1/2) = 22
[-1/2    0    4    3/2    0    0   1 |  9 ]
Row operations give the third tableau with x1, x2, and x6 as basic variables.

x1   x2    x3     x4     x5    x6   p
[ 0    1    5/7    3/7   -1/7   0   0 |  9/7 ]
[ 1    0    4/7    1/7    2/7   0   0 | 24/7 ]
[ 0    0   12/7  -11/7   -1/7   1   0 | 65/7 ]
[ 0    0   30/7   11/7    1/7   0   1 | 75/7 ]

Hence x1 = 24/7 and x2 = 9/7, so

(24/7, 9/7, 0, 0, 0, 65/7) is a feasible solution yielding p = 75/7

However, we claim that this is optimal. The last row of the third tableau gives

p = 75/7 - (30/7)x3 - (11/7)x4 - (1/7)x5

so, because x3, x4, and x5 are nonnegative, p can be no greater than 75/7. The preceding solution achieves p = 75/7, so it must be optimal. This completes the solution of the prototype example.
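As a cross-check on the prototype example, the extreme points can be enumerated directly (each is the intersection of three bounding hyperplanes, in the sense of Section B.1) and p evaluated at each. A Python sketch in exact arithmetic:

```python
from fractions import Fraction as F
from itertools import combinations

# The six bounding hyperplanes a.x = b of the prototype example:
# the three constraints plus x1 = 0, x2 = 0, x3 = 0.
planes = [((1, 2, 2), 6), ((3, -1, 1), 9), ((2, 3, 5), 20),
          ((1, 0, 0), 0), ((0, 1, 0), 0), ((0, 0, 1), 0)]

def solve3(eqs):
    # Gauss-Jordan elimination on a 3x3 system in exact arithmetic;
    # returns None when the three planes have no unique common point.
    m = [[F(a) for a in coeffs] + [F(b)] for coeffs, b in eqs]
    for col in range(3):
        piv = next((r for r in range(col, 3) if m[r][col] != 0), None)
        if piv is None:
            return None
        m[col], m[piv] = m[piv], m[col]
        m[col] = [v / m[col][col] for v in m[col]]
        for r in range(3):
            if r != col and m[r][col] != 0:
                m[r] = [v - m[r][col] * w for v, w in zip(m[r], m[col])]
    return [m[r][3] for r in range(3)]

def feasible(x):
    x1, x2, x3 = x
    return (x1 >= 0 and x2 >= 0 and x3 >= 0
            and x1 + 2*x2 + 2*x3 <= 6
            and 3*x1 - x2 + x3 <= 9
            and 2*x1 + 3*x2 + 5*x3 <= 20)

extreme = []
for trio in combinations(planes, 3):
    x = solve3(trio)
    if x is not None and feasible(x):
        extreme.append(x)

best = max(extreme, key=lambda x: 2*x[0] + 3*x[1] - x[2])
print(best, 2*best[0] + 3*best[1] - best[2])   # maximum p = 75/7 at (24/7, 9/7, 0)
```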
Note that the test for optimality is clear: If the last row of a tableau has no negative entries (except possibly the last), then the corresponding basic feasible solution is optimal. If not, the column corresponding to the most negative entry in the last row is the pivot column, the corresponding variable is the entering variable, and a new tableau is constructed with that variable as a new basic variable.
Of course not every standard linear programming problem has a solution (see
Exercise 1). It can happen that feasible points can be found that make the objective
function p as large as we like. In this case p has no maximum and we say p is unbounded.
The simplex algorithm provides a way to determine whether this is the case (step 3 in
the flow chart that follows).
The algorithm works in exactly the same way for any standard linear programming
problem. Suppose that
p = c1x1 + c2x2 + ⋯ + cnxn

is to be maximized subject to the following m constraints:

a11x1 + a12x2 + ⋯ + a1nxn ≤ b1
a21x1 + a22x2 + ⋯ + a2nxn ≤ b2
⋮
am1x1 + am2x2 + ⋯ + amnxn ≤ bm

xi ≥ 0 for i = 1, 2, . . . , n
bj ≥ 0 for j = 1, . . . , m
Introduce m slack variables xn+1, . . . , xn+m to make the inequalities into equalities. The new problem is to maximize

p = c1x1 + c2x2 + ⋯ + cnxn + 0xn+1 + ⋯ + 0xn+m

subject to:

a11x1 + a12x2 + ⋯ + a1nxn + xn+1                = b1
a21x1 + a22x2 + ⋯ + a2nxn        + xn+2         = b2
⋮
am1x1 + am2x2 + ⋯ + amnxn               + xn+m = bm

xi ≥ 0 for i = 1, 2, . . . , n + m
bj ≥ 0 for j = 1, 2, . . . , m
Any optimal solution (x1, . . . , xn, xn + 1, . . . , xn + m) to this new problem yields an optimal
solution (x1, . . . , xn) to the original problem (the argument in the prototype example
works), so we solve the new problem. Because the constraints are now equations, some
of the methods of Gaussian elimination apply. For convenience, write the expression for
p as another equation:
-c1x1 - c2x2 - ⋯ - cnxn + p = 0
The augmented matrix for this larger system of equations is

x1    x2   ⋯   xn   xn+1  xn+2  ⋯  xn+m   p
[ a11   a12  ⋯  a1n    1     0    ⋯    0    0 | b1 ]
[ a21   a22  ⋯  a2n    0     1    ⋯    0    0 | b2 ]
[  ⋮     ⋮        ⋮     ⋮     ⋮          ⋮    ⋮ |  ⋮ ]
[ am1   am2  ⋯  amn    0     0    ⋯    1    0 | bm ]
[ -c1   -c2  ⋯  -cn    0     0    ⋯    0    1 |  0 ]
and this is called the initial simplex tableau for the problem. The slack variables are called basic variables because their columns are basic columns (all entries are zero except for a single 1). The fact that bj ≥ 0 holds for each j means that we can obtain a feasible solution by setting all the nonbasic variables equal to zero. The result:
(0, 0, . . . , 0, b1, b2, . . . , bm) is a feasible solution yielding p = 0
This is the initial basic feasible solution. Of course, it may not be optimal.
Now the algorithm starts. Suppose a tableau has been constructed with m basic variables (that is, m variables corresponding to basic columns in the tableau) such that the
last column contains no negative entries (for example, the initial tableau). Then, if the
nonbasic variables are set equal to zero, the values of the basic variables are determined
(they are just the entries of the last column, the value of p being the last entry). Hence
this gives the basic feasible solution corresponding to the tableau.
The actual execution of the algorithm is best described in the following steps. For
convenience, we display them first as a flow chart.
Step 1. Test for optimality. If the tableau is optimal: STOP. Otherwise:
Step 2. Choose the pivot column.
Step 3. Test for unbounded objective function. If p is unbounded: STOP. Otherwise:
Step 4. Choose the pivot entry.
Step 5. Make the pivot column basic, and return to step 1.
The details of the steps are as follows. As before, we assume that there are n variables
and m constraints.
STEP 0. Prepare the initial tableau.
Introduce slack variables, one for each constraint, and convert each
inequality into an equation. Then write the expression for p as another
equation (as before). The augmented matrix for the resulting system
of m + 1 equations (the equation for p last) is the initial tableau.
STEP 1. Test for optimality.
Given a tableau, the corresponding basic feasible solution is optimal if no entry in the last row (except the last) is negative. (The argument in the prototype example works.) In this case, stop: the maximum value of p is the lower right entry. Otherwise go on to step 2.
STEP 2. Choose the pivot column.
This is the column (not the last) whose bottom entry is the most
negative (the worst offender, as it were). If there is a tie, choose
either possibility.
STEP 3. Test for unbounded objective function.
This occurs if no entry in the pivot column is positive. (We omit the proof.) In this case, stop: the objective function has no maximum. Otherwise go on to step 4.
STEP 4. Choose the pivot entry.
Among the positive entries of the pivot column, the pivot is the one producing the smallest ratio when divided into the right-most entry of its row (as in the prototype example).
STEP 5. Make the pivot column basic.
Use elementary row operations to convert the pivot entry to 1 and every other entry of the pivot column to 0, then return to step 1.
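Steps 1 through 5 can be collected into a short program. The following Python sketch is an illustration rather than a production implementation: it uses exact Fraction arithmetic, breaks ratio ties arbitrarily, and has no anti-cycling safeguard (see Remark 2):

```python
from fractions import Fraction as F

def simplex(tableau):
    """Run steps 1-5 on an initial tableau (a list of rows, the p-row
    last, right-hand sides in the last column). Returns the final
    tableau, or None if the objective function is unbounded."""
    t = [[F(v) for v in row] for row in tableau]
    m = len(t) - 1                        # number of constraint rows
    while True:
        # Step 1: optimal when the last row has no negative entry.
        last = t[-1][:-1]
        if min(last) >= 0:
            return t
        # Step 2: pivot column = most negative entry in the last row.
        col = last.index(min(last))
        # Step 3: unbounded if no entry in the pivot column is positive.
        rows = [r for r in range(m) if t[r][col] > 0]
        if not rows:
            return None
        # Step 4: pivot row = smallest ratio (rhs / pivot-column entry).
        row = min(rows, key=lambda r: t[r][-1] / t[r][col])
        # Step 5: make the pivot column basic by row operations.
        t[row] = [v / t[row][col] for v in t[row]]
        for r in range(len(t)):
            if r != row and t[r][col] != 0:
                t[r] = [v - t[r][col] * w for v, w in zip(t[r], t[row])]

# The prototype example: columns x1 x2 x3 x4 x5 x6 p | rhs.
final = simplex([
    [ 1,  2, 2, 1, 0, 0, 0,  6],
    [ 3, -1, 1, 0, 1, 0, 0,  9],
    [ 2,  3, 5, 0, 0, 1, 0, 20],
    [-2, -3, 1, 0, 0, 0, 1,  0],
])
print(final[-1][-1])   # 75/7
```

Run on the prototype example it performs the same two pivots as the hand computation and stops with the maximum p = 75/7 in the lower right entry.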
Example 1

Maximize p = 3x1 + x2 + 2x3 subject to:

2x1 - x2 + 3x3 ≤ 2
3x1 + x2 + x3 ≤ 5
xi ≥ 0 for i = 1, 2, 3

SOLUTION

Introduce slack variables x4 and x5, and rewrite the equation for p.

2x1 - x2 + 3x3 + x4      = 2
3x1 + x2 + x3       + x5 = 5        xi ≥ 0 for i = 1, 2, 3, 4, 5
-3x1 - x2 - 2x3     + p  = 0

Hence the initial tableau (with basic variables x4 and x5) is

x1   x2   x3   x4   x5   p
[(2)  -1    3    1    0   0 | 2 ]   ratio: 2/2 = 1
[ 3    1    1    0    1   0 | 5 ]   ratio: 5/3
[-3   -1   -2    0    0   1 | 0 ]

The basic feasible solution here is not optimal (the last row has negative entries), and the pivot column is the first (-3 is the most negative). The ratios are computed as before and the pivot (in parentheses) is the 2 in row 1. Hence row operations give the next tableau (the new basic variables are x1 and x5).

x1    x2    x3    x4   x5   p
[ 1  -1/2   3/2   1/2   0   0 | 1 ]
[ 0  (5/2) -7/2  -3/2   1   0 | 2 ]
[ 0  -5/2   5/2   3/2   0   1 | 3 ]

This is still not optimal, and the pivot column is the second. Here the pivot 5/2 is the only positive entry, so no ratios need to be computed. Row operations give

x1   x2    x3    x4    x5   p
[ 1    0  (4/5)  1/5   1/5   0 | 7/5 ]
[ 0    1  -7/5  -3/5   2/5   0 | 4/5 ]
[ 0    0   -1     0     1    1 |  5  ]

This is still not optimal, but p has increased to 5 (lower right entry). The next tableau is

x1    x2   x3   x4    x5   p
[ 5/4   0    1   1/4   1/4   0 |  7/4 ]
[ 7/4   1    0  -1/4   3/4   0 | 13/4 ]
[ 5/4   0    0   1/4   5/4   1 | 27/4 ]

Now the last row has no negative entries, so this is optimal: the maximum is p = 27/4, attained when x1 = 0, x2 = 13/4, and x3 = 7/4.
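The final answer of Example 1 can be verified directly in exact arithmetic; a brief Python check:

```python
from fractions import Fraction as F

# The optimum read off the final tableau of Example 1.
x1, x2, x3 = F(0), F(13, 4), F(7, 4)

# Both constraints hold (in fact with equality, since the slacks x4 and
# x5 are nonbasic, hence zero), and the objective value is 27/4.
assert 2*x1 - x2 + 3*x3 == 2
assert 3*x1 + x2 + x3 == 5
print(3*x1 + x2 + 2*x3)   # 27/4
```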
Remark 1: Choosing the pivot.
Suppose the pivot column of a tableau is column r, with entries

t1r, t2r, . . . , tmr, sr

where sr < 0 (from step 2) and at least one other entry is positive. Suppose we decide to take tqr as the pivot. Write its row as

(tq1, tq2, . . . , tqr, . . . , 0, dq)

where the last entry dq ≥ 0 (because the tableau produced a feasible solution when all nonbasic variables were set equal to zero, so dq is the value of one of the basic variables). We want to divide this row by tqr (so tqr ≠ 0), and the last entry of the new row, dq/tqr, is required to be nonnegative too (we want a new tableau). Hence the pivot tqr must be positive.
Then we convert each other entry tir in the pivot column to zero by subtracting tir times the pivot row. If the right entry of row i is di, the new right entry is

di - tir(dq/tqr) = tir(di/tir - dq/tqr)

This is clearly nonnegative if tir is negative or zero (use the left side). If tir > 0, it is nonnegative provided that the ratio dq/tqr for the pivot does not exceed the ratio di/tir. This shows why we choose the pivot with minimal ratio (in step 4).
Remark 2: Degeneracy and cycling.
If two ratios in step 4 are equal, the argument in Remark 1 shows that in the next
tableau, some entry in the last column will be zero (so some basic variable will take the
value 0). In this case the algorithm is said to degenerate, and it may lead to cycling (that
is, the sequence of basic feasible solutions we are creating may contain the same solution
twice and so continue to loop indefinitely). This is rare in practical problems (computer
round-off error tends to eliminate it), and algorithms exist to deal with it.
Exercises B.2
1. Consider the following standard linear programming problem. [Most of this exercise was lost in extraction; see the remark above that not every standard problem has a solution.]

2. In each case, maximize p subject to the given constraints. [Part (a) was lost in extraction.]

b) p = 2x1 + x2 + x3
3x1 + x2 + 2x3 ≤ 2
x1 + x2 + 3x3 ≤ 5

c) p = x1 + 2x2
3x1 + x2 ≤ 4
x1 + 2x2 ≤ 3
2x1 + 3x2 ≤ 5

d) p = 3x1 + 2x2
4x1 + 3x2 ≤ 5
x1 + x2 ≤ 2
3x1 + 4x2 ≤ 4

e) p = 3x1 + x2 + 2x3
2x1 + x2 - x3 ≤ 3
x1 + x2 + x3 ≤ 4
x1 + 2x2 + x3 ≤ 5

f) p = [lost in extraction]
2x1 + 3x2 - x3 ≤ 4
x1 - x2 + x3 ≤ 2
3x1 + 4x2 + x3 ≤ 5

g) p = x1 + x2 + 2x3 + x4
3x1 + x2 - x3 + 2x4 ≤ 5
x1 + 2x2 + x3 - x4 ≤ 4

h) p = 2x1 + x2 + 3x3 + 2x4
3x1 + 4x2 + x3 - 2x4 ≤ 6
2x1 + 3x2 - x3 + 3x4 ≤ 5
[The beginning of this exercise was lost in extraction; it concerns three types of vehicles:] ...sports, and full-size. The profits per unit are $500, $700, and $600, respectively; transportation costs per vehicle are $300, $400, and $500, respectively; and labour costs are $500, $500, and $400, respectively. If the total transportation cost is not to exceed $40,000 and the total labour cost is not to exceed $30,000, find the maximum profit.
6. A short-order restaurant sells three dinners: regular, [the remainder of this exercise was lost in extraction].

[A manufacturing exercise whose statement was lost in extraction; its table of hours per unit survives only in part. The columns are Deluxe, Super, and Hours Available (a first model's column heading is missing); the rows are motor, frame, and assembly. The surviving cell values are 1, 1, 1, 1, 2, 1, 2, 2, 1; the hours available are 2500, 2000, and 1800; and the profits per unit are $30, $40, and $55.]