
OPERATIONS RESEARCH

College of Business & Economics

Bahir Dar University

Lecture Notes: Graphical Technique for Linear Optimization

Instructor: Anteneh E.

Date: March 2017

Overview

There are two major techniques for solving an LP problem: the simplex method and the interior-point method. Of the two, we will focus in this course on the simplex method. However, in the case of simple models with two decision variables, we can also use the graphic method to solve linear programs. Though the graphic method doesn't help solve real-life problems with several variables, it illustrates how the algebraic simplex method works to find a solution. Therefore, a brief study of the method sharpens your understanding of the underlying logic of the simplex method. This section introduces the graphic technique and illustrates its application to simple two-variable problems. Several useful concepts will also be discussed in the process.

The graphic method

To see how we can use the graphic method, let's consider a simple example. Suppose a company produces metal doors and windows using three plants (1, 2, 3). For a given week the company can operate plant 1 up to 4 hours, plant 2 up to 12 hours, and plant 3 up to 18 hours. Producing a metal door requires 1 hour of processing in plant 1, 0 hours in plant 2, and 3 hours in plant 3. Producing a window requires no processing in plant 1, but 2 hours of processing in each of the other plants. The profits per unit of door and window are 3 thousand and 5 thousand birr respectively. The table below shows a summary of the relevant data.

Table: data for the problem


               Hours per unit produced
Plant           Doors   Windows    Hours available
Plant 1           1        0              4
Plant 2           0        2             12
Plant 3           3        2             18
Unit profit
(thousands)       3        5
We first need to develop the mathematical LP model of the problem based on the data provided
above. We proceed as follows.

Decision variables

The decision to be made regards the number of doors and windows to be produced. So, let

x1 = number of doors

x2 = number of windows

The Objective function

The objective is to maximize the profit from the sale of the two products: doors and windows.
Mathematically,

Maximize Z = 3x1 + 5x2

Constraints

We have constraints on the available machine hours for each of the three plants. These constraints
are:

x1 ≤ 4 Plant 1 capacity constraint

2x2 ≤ 12 Plant 2 capacity constraint

3x1 + 2x2 ≤ 18 Plant 3 capacity constraint

Non-negativity Restrictions

Let's impose a non-negativity requirement on the decision variables. Thus, we have the additional constraints:

x1 ≥ 0 and x2 ≥ 0

Model summary

The LP model for this problem is then the following:

Maximize Z = 3x1 + 5x2

Subject to

x1 ≤ 4 Plant 1 capacity constraint

2x2 ≤ 12 Plant 2 capacity constraint

3x1 + 2x2 ≤ 18 Plant 3 capacity constraint

and x1 ≥ 0, x2 ≥ 0

Solving the problem using the graphic method gives the results: x1 = 2, x2 = 6, Z=36.
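The reported optimum can be verified by substituting it into the model. A minimal check in Python (the constraint data come from the table above):

```python
# Verify the reported optimum x1 = 2, x2 = 6 for the door/window model.
x1, x2 = 2, 6

# Each functional constraint as (coef1, coef2, right-hand side), all "<=".
constraints = [(1, 0, 4),    # plant 1:  x1       <= 4
               (0, 2, 12),   # plant 2:       2x2 <= 12
               (3, 2, 18)]   # plant 3: 3x1 + 2x2 <= 18

assert x1 >= 0 and x2 >= 0                                  # non-negativity
assert all(a*x1 + b*x2 <= rhs for a, b, rhs in constraints) # feasibility

Z = 3*x1 + 5*x2
print(Z)   # 36, matching the value stated above
```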

This very small problem has only two decision variables and therefore only two dimensions, so a graphical procedure can be used to solve it. This procedure involves constructing a graph with x1 and x2 as the axes. The first step is to identify the values of (x1, x2) that are permitted by the restrictions. This is done by drawing each line that borders the range of permissible values for one restriction. To begin, note that the non-negativity restrictions x1 ≥ 0 and x2 ≥ 0 require (x1, x2) to lie on the positive side of each axis (including actually on either axis), i.e., in the first quadrant. Similarly, we can draw the graphs for the other constraints in the model. The resulting region of permissible values of (x1, x2) is called the feasible region.

Steps of the graphic method

We can summarize the steps in using the graphic method to solve simple LP problems as follows.

Step 1: Draw the constraint boundary

First, treat all constraints as equations. Then draw the line for each constraint equation, including the non-negativity constraint equations.

Definition: A constraint boundary is a line that forms the boundary of what is permitted by the corresponding constraint.

The constraint boundary equation for any constraint is obtained by replacing its ≤, =, or ≥ sign by an = sign. The form of a constraint boundary equation is

a1x1 + a2x2 = b

for functional constraints and

xj = 0

for non-negativity constraints.


Each such equation defines a line in two-dimensional space. This line forms the constraint boundary
for the corresponding constraint. When the constraint has an = sign, only the points on the
constraint boundary satisfy the constraint.

Next, take these lines as the boundaries of the respective constraints and shade the relevant regions
as suggested by the direction of the inequalities of the constraints.
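To draw each boundary line by hand, it is enough to know where the line crosses the two axes. A small sketch for the door/window model:

```python
# Axis intercepts of each constraint boundary line a*x1 + b*x2 = rhs for the
# door/window model; a zero coefficient means the line is parallel to that axis.
boundaries = [(1, 0, 4), (0, 2, 12), (3, 2, 18)]

intercepts = []
for a, b, rhs in boundaries:
    x1_cut = rhs / a if a else None   # where the line crosses the x1 axis
    x2_cut = rhs / b if b else None   # where the line crosses the x2 axis
    intercepts.append((x1_cut, x2_cut))

print(intercepts)
# [(4.0, None), (None, 6.0), (6.0, 9.0)]
# i.e. the vertical line x1 = 4, the horizontal line x2 = 6, and the line
# through (6, 0) and (0, 9).
```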

Step 2: Identify the feasible region

When we shade the relevant region for each of the constraints in the problem, we will find a region that satisfies all the constraints simultaneously. This region is called the feasible region. It is from the points in this region that we choose the one that optimizes our objective function. In passing, note that it is possible for a problem to have no feasible region at all. The feasible region consists of feasible solutions, defined as follows.

Definition: A feasible solution is a solution for which all the constraints are satisfied. An infeasible
solution is a solution for which at least one constraint is violated.

In other words, a feasible solution is a solution that is possible given the restrictions imposed in the
model. The feasible region consists of all possible (feasible) solutions to the model.

Definition: The feasible region is the collection of all feasible solutions.

The boundary of the feasible region contains just those feasible solutions that satisfy one or more of
the constraint boundary equations.
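The definition of feasibility translates directly into a membership test. A minimal sketch for the door/window model (the small tolerance guards against rounding error):

```python
# A point is feasible iff it satisfies every constraint of the model.
def is_feasible(x1, x2, tol=1e-9):
    return (x1 >= -tol and x2 >= -tol          # non-negativity
            and x1 <= 4 + tol                  # plant 1
            and 2*x2 <= 12 + tol               # plant 2
            and 3*x1 + 2*x2 <= 18 + tol)       # plant 3

print(is_feasible(2, 6))   # True  -- on the boundary of the region
print(is_feasible(4, 6))   # False -- violates 3x1 + 2x2 <= 18
```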

Step 3: Determine the optimal point

Finally, we will choose the point in the feasible region that will give us the optimal value for the
objective function.

Definition: An optimal solution is a feasible solution that has the most favorable value of the
objective function. The most favorable value is the largest value if the objective function is to be
maximized, whereas it is the smallest value if the objective function is to be minimized.

Determination of the optimal point can be done in two alternative ways.

Alternative 1: The first alternative involves the following two steps

1. Plot the objective function by assigning an arbitrary value to Z; then move this line in the
direction that Z increases (or decreases, if the objective is minimization) to locate the
optimal solution point. The optimal solution point in this case is the last point the objective
function touches as it leaves the feasible solution area.
2. Solve the simultaneous equations at the solution point to find the optimal solution values.

Alternative 2: The second alternative involves the following two steps

1. Solve the simultaneous equations at each corner point to find the solution values at each
corner point
2. Substitute these values into the objective function to find the set of values that results in the
optimum Z value
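Alternative 2 can be sketched directly in code for the door/window model: form every pair of boundary equations, solve each 2×2 system, discard the infeasible intersections, and pick the corner with the best Z:

```python
from itertools import combinations

# Boundaries stored as (a, b, rhs) for the line a*x1 + b*x2 = rhs; the last
# two rows are the non-negativity boundaries x1 = 0 and x2 = 0.
lines = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (1, 0, 0), (0, 1, 0)]

def intersect(l1, l2):
    (a1, b1, r1), (a2, b2, r2) = l1, l2
    det = a1*b2 - a2*b1
    if det == 0:
        return None                   # parallel boundaries: no corner point
    # Cramer's rule for the 2x2 system.
    return ((r1*b2 - r2*b1) / det, (a1*r2 - a2*r1) / det)

def feasible(x1, x2):
    return (x1 >= -1e-9 and x2 >= -1e-9
            and all(a*x1 + b*x2 <= r + 1e-9 for a, b, r in lines[:3]))

corners = []
for l1, l2 in combinations(lines, 2):
    p = intersect(l1, l2)
    if p is not None and feasible(*p):
        corners.append(p)

best = max(corners, key=lambda p: 3*p[0] + 5*p[1])
print(best)   # (2.0, 6.0), with Z = 3*2 + 5*6 = 36
```

This is exactly the enumeration the simplex method avoids doing exhaustively; it works here because the model is tiny.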

What the second alternative suggests is that, in the process of finding an optimal solution for an LP problem, one can simply check the corner points of the feasible region. This is because of three fundamental properties that we will discuss below. The first property is that for any LP problem with a bounded feasible region, if there is an optimal solution, it must be a corner-point feasible solution. Second, there are a finite number of corner-point feasible solutions. The third property serves as an optimality check and says that for any linear programming problem that possesses at least one optimal solution, if a CPF solution has no adjacent CPF solutions that are better (as measured by Z), then it must be an optimal solution.

Corner points

The points of intersection of the constraint equations in an LP problem are the corner-point solutions of the problem.

Definition: A corner point in two-dimensional space is a point obtained as the simultaneous solution of two constraint boundary equations.

We can generalize this definition to models with more than two decision variables. For any
linear programming problem with n decision variables, each CP solution lies at the
intersection of n constraint boundaries; i.e., it is the simultaneous solution of a system of n
constraint boundary equations. For instance, if the problem consists of three decision
variables, then a corner point in this model is obtained as the simultaneous solution of three
constraint equations.

Corner-point feasible (CPF) solutions

Note that a corner point may be either in the feasible region or outside of it (infeasible). The corner points that lie in the feasible region are called corner-point feasible (CPF) solutions. A corner-point feasible (CPF) solution is a feasible solution that does not lie on any line segment connecting two other feasible solutions. Formally, we define a CPF solution as follows.

Definition: A CPF solution is a solution that lies at a corner of the feasible region. It is a point that cannot be expressed as a convex combination of two other points in the feasible region.

As this definition implies, a feasible solution that does lie on a line segment connecting two other feasible solutions is not a CPF solution. In this connection it may be useful to give a formal definition of convex combination.

Definition: A point x in a convex set C (the feasible region) is said to be an extreme point (corner point feasible solution) of C if there are no two distinct points x1 and x2 in C such that x = λx1 + (1 − λ)x2 for some 0 < λ < 1. An extreme point (corner point feasible solution) is thus a point that does not lie strictly within a line segment connecting two other points of the feasible set.

The above is not to say that every set of n constraint boundary equations chosen from the n + m constraints (n non-negativity and m functional constraints) yields a CPF solution. In particular, the simultaneous solution of such a system of equations might violate one or more of the other m constraints not chosen, in which case it is a corner-point infeasible solution.
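For the door/window model this split can be counted mechanically: the five boundary lines form ten pairs, two of which are parallel; of the eight resulting corner points, five are CPF solutions and three are corner-point infeasible. A sketch:

```python
from itertools import combinations

# Classify every pairwise boundary intersection of the door/window model as
# a CPF solution or a corner-point infeasible solution.
lines = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (1, 0, 0), (0, 1, 0)]

def intersect(l1, l2):
    (a1, b1, r1), (a2, b2, r2) = l1, l2
    det = a1*b2 - a2*b1
    if det == 0:
        return None
    return ((r1*b2 - r2*b1) / det, (a1*r2 - a2*r1) / det)

cpf, infeasible = [], []
for l1, l2 in combinations(lines, 2):
    p = intersect(l1, l2)
    if p is None:
        continue                      # parallel pair: no corner point at all
    x1, x2 = p
    ok = (x1 >= -1e-9 and x2 >= -1e-9
          and all(a*x1 + b*x2 <= r + 1e-9 for a, b, r in lines[:3]))
    (cpf if ok else infeasible).append(p)

print(len(cpf), len(infeasible))      # 5 CPF solutions, 3 infeasible corners
```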

Adjacent CPF solutions

Consider any linear programming problem with n decision variables and a bounded feasible region.
A CPF solution lies at the intersection of n constraint boundaries (and satisfies the other constraints
as well). An edge of the feasible region is a feasible line segment that lies at the intersection of n -1
constraint boundaries, where each endpoint lies on one additional constraint boundary (so that these
endpoints are CPF solutions). Two CPF solutions are adjacent if the line segment connecting them
is an edge of the feasible region. Emanating from each CPF solution are n such edges, each one
leading to one of the n adjacent CPF solutions.

Definition: For any linear programming problem with n decision variables, two CPF solutions are
adjacent to each other if they share n - 1 constraint boundaries. The two adjacent CPF solutions are
connected by a line segment that lies on these same shared constraint boundaries. Such a line segment is
referred to as an edge of the feasible region.

When you shift from a geometric viewpoint to an algebraic one, the intersection of constraint boundaries becomes the simultaneous solution of constraint boundary equations. The n constraint boundary equations yielding (defining) a CPF solution are its defining equations; deleting one of these equations yields a line whose feasible segment is an edge of the feasible region.

Properties of CPF Solutions

The following three properties of CPF solutions hold for any linear programming problem that has
feasible solutions and a bounded feasible region.

Property 1

The following set of properties is very useful in this and later sections.

Any LP problem with feasible solutions and a bounded feasible region must possess CPF
solutions.
Any LP problem with feasible solutions and a bounded feasible region must possess at least
one optimal solution.
If there is an optimal solution to an LP problem, the solution point is always on the boundary of the feasible region, because the boundary contains the points farthest from the origin (corresponding to the greatest value of the objective function). This reduces the number of possible solution points considerably, from all points in the feasible region to just those points on the boundary.
The number of possible solutions is reduced even more by another characteristic of LP
problems: if there is exactly one optimal solution, then it must be a CPF solution. The
intuition here is that for any problem having just one optimal solution, it always is possible
to keep raising the objective function line until it just touches one point (the optimal
solution) at a corner of the feasible region.
If there are multiple optimal solutions (and hence a bounded feasible region), then at least
two must be adjacent CPF solutions.

The real significance of Property 1 is that it greatly simplifies the search for an optimal solution
because now only CPF solutions need to be considered. The magnitude of this simplification is
emphasized in Property 2.

Property 2

The main idea in this property is that there are only a finite number of CPF solutions for any LP
problem with a bounded feasible region. Formally, this is stated as:

Property 2: The feasible set K corresponding to an LP problem possesses at most a finite number of corner-point feasible solutions (extreme points).

To see why the number is finite in general, recall that each CPF solution is the simultaneous solution of a system of n out of the m + n constraint boundary equations. The number of different combinations of m + n equations taken n at a time is

(m + n)! / (m! n!)

which is a finite number. This number, in turn, is an upper bound on the number of CPF solutions.
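For the door/window model, n = 2 variables and m = 3 functional constraints, so the bound is C(5, 2) = 10 candidate corner points. In Python:

```python
import math

# Upper bound on the number of CPF solutions: choose which n of the m + n
# constraint boundaries intersect. Door/window model: n = 2, m = 3.
n, m = 2, 3
bound = math.comb(m + n, n)
print(bound)   # 10 -- at most ten candidate corner points
```

Not all ten are CPF solutions: some pairs of boundaries are parallel and some intersections are infeasible, which is why the bound is only an upper bound.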

Since a problem having n variables and m constraints has a finite number of CPF solutions, this property together with Property 1 implies that we only need to search among CPF solutions to obtain the optimal solution for any LP problem. Property 2 suggests that, in principle, an optimal solution can be obtained by exhaustive enumeration; i.e., find and compare all the finitely many CPF solutions. Since the number of feasible solutions generally is infinite, reducing the number of solutions that need to be examined to a small finite number is a tremendous simplification.

Property 3
Property 3 is useful as an optimality check, especially later in the simplex method. It says that

Property 3: If a CPF solution has no adjacent CPF solutions that are better (as measured by Z), then
there are no better CPF solutions anywhere.

Therefore, such a CPF solution is guaranteed to be an optimal solution (by Property 1), assuming
only that the problem possesses at least one optimal solution (guaranteed if the problem possesses
feasible solutions and a bounded feasible region). The basic reason that Property 3 holds for any
linear programming problem is that the feasible region always has the property of being a convex
set. For two-variable linear programming problems, this convexity means that the angle inside the feasible region at every CPF solution is less than 180°. A nonconvex feasible region can never occur in linear programming problems; the feasible region must be a convex set. With this property, if a given CPF solution is better than all of its adjacent CPF solutions, then it must be the only optimal solution in the entire feasible set.
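Convexity can be illustrated numerically: pick two feasible points of the door/window model and sample random convex combinations; none of them leaves the region. A sketch (this is a spot check, not a proof):

```python
import random

# Numerical check of convexity for the door/window model: every convex
# combination of two feasible points is again feasible.
def feasible(x1, x2, tol=1e-9):
    return (x1 >= -tol and x2 >= -tol and x1 <= 4 + tol
            and 2*x2 <= 12 + tol and 3*x1 + 2*x2 <= 18 + tol)

p, q = (4, 3), (0, 6)                 # two CPF solutions of the model
for _ in range(1000):
    lam = random.random()             # a weight strictly between 0 and 1
    x1 = lam*p[0] + (1 - lam)*q[0]
    x2 = lam*p[1] + (1 - lam)*q[1]
    assert feasible(x1, x2)           # never fails: the region is convex
print("all sampled convex combinations are feasible")
```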

Minimization problems

Solving minimization problems with two variables is essentially the same as solving maximization problems of similar dimension. The only difference is the direction in which we move the objective function line when searching the feasible region (if there is one) for an optimal point.

Consider the following model

Minimize Z = 40x1 + 50x2

Subject to the restrictions

2x1 + 3x2 ≥ 12

x1 + x2 ≥ 30

2x1 + x2 ≥ 20

and

x1 ≥ 0, x2 ≥ 0

Use the graphical method to solve this model.

Mixed constraints

An LP model may come with a set of mixed constraints: some less-than-or-equal-to constraints, some greater-than-or-equal-to constraints, and some equality constraints. Again, the graphic method can be used to solve such models if there are only two decision variables.
Example: Consider the following model

Maximize Z = 10x1 + 12x2

Subject to the restrictions

x1 + 4x2 = 40

x1 + x2 ≥ 12

x2 ≤ 8

and

x1 ≥ 0, x2 ≥ 0

Special cases

Non-unique optimal solutions

Sometimes a problem has no unique optimal solution. Any problem having multiple optimal
solutions will have an infinite number of them, each with the same optimal value of the objective
function.

Consider the following model

Maximize Z = 2x1 + 3x2

Subject to the restrictions

2x1 + 3x2 ≤ 12

x1 + x2 ≤ 30

2x1 + x2 ≤ 20

and

x1 ≥ 0, x2 ≥ 0
Solve the model using the graphic method

No optimal solution

Another possibility is that a problem has no optimal solution. This occurs only if (1) it has no feasible solutions, or (2) the constraints do not prevent improving the value of the objective function (Z) indefinitely in the favorable direction (positive or negative), i.e., an unbounded solution. The latter case is referred to as having an unbounded Z.

Example: Consider the following model

Maximize Z = 2x1 + 3x2

Subject to the restrictions

2x1 + 3x2 ≤ 12

2x1 + x2 ≥ 20

and

x1 ≥ 0, x2 ≥ 0

Solve the model using the graphic method

Example: Consider the following model

Maximize Z = 2x1 + 3x2

Subject to the restrictions

2x1 + 3x2 ≥ 12

2x1 + x2 ≥ 20

and

x1 ≥ 0, x2 ≥ 0

Solve the model using the graphic method


Example: Consider the following model

Maximize Z = 4x1 + 2x2

Subject to the restrictions

x1 ≥ 4

x2 ≥ 2

and

x1 ≥ 0, x2 ≥ 0

Solve the model using the graphic method

Introduction to sensitivity analysis

One of our assumptions thus far is the certainty of the parameters of the model. In reality, managers do not know the exact values of the parameters. At best, the model parameters are reasonable guesses that are subject to change. It is therefore important to evaluate the effect of changes in the parameter values on the optimal solution. Sensitivity analysis deals with the evaluation of the effects of changes in model parameters on the optimal solution of an LP problem. We will discuss sensitivity analysis at length in a later chapter, after presenting the simplex method. But some insights can be gained by raising the issue at this point, so let's have a brief discussion based on the following examples.
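The effect of a parameter change can be seen by re-running the corner enumeration with a perturbed objective. In the sketch below the door profit of the door/window model is raised from 3 to 10 (an illustrative change, not part of the original problem); the CPF solutions stay put, but the optimal corner moves:

```python
from itertools import combinations

# Door/window model boundaries a*x1 + b*x2 = rhs, axes included.
lines = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (1, 0, 0), (0, 1, 0)]

def corners():
    # Yield every CPF solution of the model.
    for (a1, b1, r1), (a2, b2, r2) in combinations(lines, 2):
        det = a1*b2 - a2*b1
        if det == 0:
            continue                               # parallel boundaries
        x1 = (r1*b2 - r2*b1) / det
        x2 = (a1*r2 - a2*r1) / det
        if (x1 >= -1e-9 and x2 >= -1e-9
                and all(a*x1 + b*x2 <= r + 1e-9 for a, b, r in lines[:3])):
            yield x1, x2

results = {}
for c1, c2 in [(3, 5), (10, 5)]:      # original and perturbed objectives
    results[(c1, c2)] = max(corners(), key=lambda p: c1*p[0] + c2*p[1])
print(results)
# {(3, 5): (2.0, 6.0), (10, 5): (4.0, 3.0)} -- the optimum shifts corners
```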

Example: Consider the following model

Minimize Z = 40x1 + 50x2

Subject to the restrictions

2x1 + 3x2 ≥ 12

x1 + x2 ≥ 30

2x1 + x2 ≥ 20

and
x1 ≥ 0, x2 ≥ 0

(a) Use the graphical method to solve this model.

(b) How does the optimal solution change if the objective function is changed to Z = 40x1 + 70x2?

(c) How does the optimal solution change if the third functional constraint is changed to 2x1 + x2 ≥ 15?

Example: Consider the following model

Maximize Z = 3x1 + 2x2

Subject to the restrictions

x1 + x2 ≤ 6

2x1 + x2 ≤ 8

and

x1 ≥ 0, x2 ≥ 0

Find the optimal solution for the problem. Then:

1. Solve the problem after replacing the objective function by a new objective function Z = 2x1 + 2x2
2. Solve the problem with the first constraint's upper bound changed to 4
3. Solve the problem with the coefficient of x1 in the second constraint changed to 3

Example: Consider the following model

Maximize Z = 40x1 + 50x2

Subject to the restrictions

x1 + 2x2 ≤ 40

4x1 + 3x2 ≤ 120

and

x1 ≥ 0, x2 ≥ 0

a) Solve the model using the graphic method


b) How would the optimal solution change if the coefficient of x1 in the objective function changed to 100?
c) Evaluate the effect of changing the coefficient of x2 in the objective function from 50 to 100
d) Evaluate the effect of changing the right hand side value of the first constraint from 40 to 60
e) Evaluate the effect of changing the value of the coefficient of x1 in the second constraint
from 4 to 2.

Introduction to Duality

Consider the following two problems

Maximize Z = 3x1 + 2x2

Subject to the restrictions

x1 + x2 ≤ 6

2x1 + x2 ≤ 8

and

x1 ≥ 0, x2 ≥ 0

Minimize Z = 6y1 + 8y2

Subject to the requirement

y1 + 2y2 ≥ 3

y1 + y2 ≥ 2

and

y1 ≥ 0, y2 ≥ 0

Find the optimal solutions for each of the problems and compare the results. What lessons do you learn from the results of the two models? How do the two models relate to each other?
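Assuming the stripped inequality signs are ≤ in the maximization problem and ≥ in the minimization problem (the standard primal/dual pairing), corner enumeration shows that the two optimal values coincide, which is the key lesson of duality. A sketch:

```python
from itertools import combinations

# Solve a two-variable LP by corner enumeration.  sense = +1 maximizes over
# "<=" rows; sense = -1 minimizes over ">=" rows.
def solve(rows, c, sense):
    lines = rows + [(1, 0, 0), (0, 1, 0)]      # add the axes as boundaries
    best = None
    for (a1, b1, r1), (a2, b2, r2) in combinations(lines, 2):
        det = a1*b2 - a2*b1
        if det == 0:
            continue                           # parallel: no corner point
        x = (r1*b2 - r2*b1) / det
        y = (a1*r2 - a2*r1) / det
        if x < -1e-9 or y < -1e-9:             # violates non-negativity
            continue
        if not all(sense*(a*x + b*y) <= sense*r + 1e-9 for a, b, r in rows):
            continue                           # violates a functional row
        z = c[0]*x + c[1]*y
        if best is None or sense*z > sense*best:
            best = z
    return best

primal = solve([(1, 1, 6), (2, 1, 8)], (3, 2), +1)   # maximize 3x1 + 2x2
dual   = solve([(1, 2, 3), (1, 1, 2)], (6, 8), -1)   # minimize 6y1 + 8y2
print(primal, dual)   # 14.0 14.0 -- the two optimal values are equal
```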

Conclusion

The graphic method provides us with highly valuable insights that will be of great service in our discussion of the simplex method in the next section. But the graphic approach is applicable only when we have two decision variables. When we have many decision variables, the graphical method doesn't help solve the problem; in that case we need an algebraic solution method. In the remaining sections we will study a widely used algorithm, known as the simplex algorithm, that can be used to solve problems with any number of variables.

Exercises

Problem 1: Use the graphic method to solve the following two LP problems.

Maximize Z = 2x1 + x2

Subject to

x2 ≤ 10

2x1 + 5x2 ≤ 60

x1 + x2 ≤ 18

3x1 + x2 ≤ 44

and x1 ≥ 0, x2 ≥ 0

Maximize Z = 10x1 + 20x2

Subject to

-x1 + 2x2 ≤ 15

x1 + x2 ≤ 12

5x1 + 3x2 ≤ 45

and x1 ≥ 0, x2 ≥ 0

Problem 2: Consider the following problem, where the value of k has not yet been ascertained.

Maximize Z = x1 + 2x2

Subject to
-x1 + x2 ≤ 2

x2 ≤ 3

kx1 + x2 ≤ 2k + 3, where k ≥ 0

and x1 ≥ 0, x2 ≥ 0

The solution currently being used is x1 = 2, x2=3. Use graphical analysis to determine the values of k
such that this solution actually is optimal.

Problem 3: An oil refinery has two sources of crude oil: a light crude that costs $35/barrel and a
heavy crude that costs $30/barrel. The refinery produces gasoline, heating oil, and jet fuel from
crude in the amounts per barrel indicated in the following table:

Data for refinery problem


              Gasoline   Heating oil   Jet fuel
Light crude     0.3          0.2         0.3
Heavy crude     0.3          0.4         0.2

The refinery has contracted to supply 900,000 barrels of gasoline, 800,000 barrels of heating oil, and
500,000 barrels of jet fuel. The refinery wishes to find the amounts of light and heavy crude to
purchase so as to be able to meet its obligations at minimum cost. Formulate this problem as a linear
program.

