
Available online at www.sciencedirect.com

Computers & Industrial Engineering 55 (2008) 94–109


www.elsevier.com/locate/caie

A comparative study of heuristic algorithms on Economic Lot Scheduling Problem

Asif S. Raza, Ali Akgunduz *

Department of Mechanical and Industrial Engineering, Concordia University, 1455 de Maisonneuve Blvd., West,
Montréal Que., Canada H3G 1M8

Received 14 December 2006; received in revised form 20 July 2007; accepted 4 December 2007
Available online 14 December 2007

Abstract

The Economic Lot Scheduling Problem (ELSP) has been well-researched for more than 40 years. As the ELSP has been
generally seen as NP-hard, researchers have focused on the development of efficient heuristic approaches. In this paper, we
consider the time-varying lot size approach to solve the ELSP. A computational study of the existing solution algorithms (Dobson's heuristic, the Hybrid Genetic algorithm, Neighborhood Search heuristics and Tabu Search) and the newly proposed Simulated Annealing algorithm is presented. The reviewed methods are first tested on two well-known problems: Bomberger's [Bomberger, E. E. (1966). A dynamic programming approach to a lot size scheduling problem. Management Science, 12, 778–784] and Mallya's [Mallya, R. (1992). Multi-product scheduling on a single machine: A case study. OMEGA: International Journal of Management Science, 20, 529–534]. We show that the Simulated Annealing algorithm finds the best known solution to both problems. A similar comparison study is performed on various problem sets
previously suggested in the literature. The results show that the Simulated Annealing algorithm outperforms Dobson’s
heuristic, Hybrid Genetic algorithm and Neighborhood search heuristics on these problem sets. The Simulated Annealing
algorithm also shows faster convergence than the best known Tabu Search algorithm, yet results in solutions of a similar
quality. Finally, we report the results of a design-of-experiments study which compares the robustness of the aforementioned meta-heuristic techniques.
Crown copyright © 2007 Published by Elsevier Ltd. All rights reserved.

Keywords: Economic Lot Scheduling Problem; Time-varying lot size; Simulated annealing; Tabu search; Neighborhood search

1. Introduction

The Economic Lot Scheduling Problem (ELSP) deals with the production assignment of several different
products on a given single production facility to minimize the total cost. A typical ELSP as described in
Maxwell (1964) has the following features:

* Corresponding author. Tel.: +1 514 848 2424; fax: +1 514 848 3175.
E-mail addresses: asif_s@encs.concordia.ca (A.S. Raza), akgunduz@encs.concordia.ca (A. Akgunduz).

0360-8352/$ - see front matter Crown copyright © 2007 Published by Elsevier Ltd. All rights reserved.
doi:10.1016/j.cie.2007.12.004

1. Only one product can be produced at a time on the machine.
2. Each product has deterministic and constant demand and production rates.
3. The set-up cost and set-up times are independent of the production sequence.
4. The production facility is assumed to be capable of satisfying demand predicted during the planning
horizon.
5. The inventory holding cost is directly proportional to the amount of inventory.

With the ELSP generally viewed as NP-hard, the focus of most research efforts has been to generate near
optimal repetitive schedule(s). To date, several heuristic solutions have been proposed using any one of the
common cycle, basic period, or time-varying lot size approaches.
The common cycle approach always produces a feasible schedule and is the simplest to implement; however, in some cases the solution is of poor quality when compared to the Lower Bound (LB). Unlike the common cycle approach, the basic period approach allows different cycle times for different products, but the cycle times must be integer multiples of a basic period. Although the basic period approach generally produces a better solution to the ELSP than the common cycle approach, finding a feasible schedule is NP-hard (Bomberger, 1966). Lastly, the time-varying lot size approach is more flexible than the other two approaches, allowing different lot sizes for the different products within a cycle. Dobson (1987) showed that the time-varying lot size approach always produces a feasible schedule and generally gives a better quality solution.

2. Literature review

For more than four decades there have been a variety of articles published on the ELSP. Earlier works
include Eilon (1959), Hanssmann (1962), Rogers (1958) and Maxwell (1964). In these earlier works, the LB
was calculated using an independent solution approach which ignored the sharing constraint and the machine
capacity issues. An improved LB approach was developed by Bomberger (1966), in which the Karush-Kuhn-Tucker conditions were applied to the ELSP while considering only the capacity constraint. Several researchers also
re-discovered this LB (Moon, Giri, & Choi, 2002). The bulk of the ELSP research has focused on cyclic sched-
ules which satisfy the Zero-Switch-Rule (ZSR), meaning an item is produced only if its inventory depletes to a
zero level. There are some rare examples, such as Maxwell (1964) and Delporte and Thomas (1978) where the
ZSR was not considered.
As noted earlier, there are three approaches to solve the ELSP. The common cycle approach provides
an upper bound to the optimal solution and very good results to the ELSP under certain conditions
(Jones & Inman, 1989; Gallego, 1990). The heuristic methods developed using the basic period approach
first selected a frequency for each item (i.e., the number of times an item is produced in a production
cycle). After the frequency was determined, a basic time period to satisfy this frequency was then deter-
mined. Earlier works using the basic period approach include Bomberger (1966), Doll and Whybark
(1973), while Elmaghraby (1978) provided a comprehensive review. Hsu (1983) showed that solving the ELSP using the basic period approach is NP-hard, and that the problem becomes harder as the utilization ratio increases. Unlike the basic period approach, the time-varying lot size approach does not require equal production runs. It was first researched by Maxwell (1964) and Delporte and Thomas (1978), while Dobson (1987) developed an efficient heuristic showing that, given enough time for production and set-up, any production sequence can be converted into a feasible production schedule, although the production times and lot sizes need not be equal. Gallego and Shaw (1997) showed that the time-varying lot size approach to the ELSP is generally NP-hard. Dobson's (1987) heuristic can be integrated with Zipkin's (1991) algorithm to find a near optimal schedule (Moon
et al., 2002). Several extensions have also been made to the time-varying lot size approach. Dobson (1992)
reviewed the ELSP when set-up time is sequence dependent and proposed a heuristic solution. Wagner
and Davis (2002) also studied the ELSP with sequence dependent set-up times and proposed a heuristic
procedure capable of determining a range of optimal solutions. The proposed heuristic is particularly use-
ful in precarious manufacturing environments. Gallego and Roundy (1992) considered back-orders in the
ELSP. Silver (1993) offered several extensions to the existing quantitative models that can better support
managerial decisions. He pointed out that in many practical situations, commonly known parameters to

the model could be in fact unknown. He also described some improvements that could be incorporated
within manufacturing systems, such as set-up time/cost reduction, quality improvement, controllable pro-
duction rates, reduced lead times, etc. There has been an increase in research considering common param-
eters as decision variables in manufacturing systems modeling and Silver, Pyke, and Peterson (1998) have
documented a variety of such issues. Researchers considering the production rate as a controllable variable include Silver (1990), Moon, Gallego, and Simchi-Levi (1991), Gallego (1993), Khouja (1997), and Moon
and Christy (1998). They concluded that decreasing the production rate of an under-utilized facility was
profitable. Silver (1995) and Viswanathan and Goyal (1997) considered the shelf life constraint on a single
production facility producing multiple products in repetitive cycles. Allen (1990) developed a graphical
method to determine the production rate and the cycle time for a production facility producing only
two products. Faaland, Schmitt, and Arreola-Risa (2004) introduced a new modeling framework for
the ELSP that allows for lost sales leading to higher profits. They also considered the set-up times in
the model without an investment option and showed that lost sales can be profitable, even with determin-
istic production and demand rates.
Gallego and Moon (1992) studied the trade-off between a reduction in the set-up time and an increase in
the set-up cost. They considered a factory producing multiple items, and assumed that the set-up time
could be reduced by externalizing the set-up operations with an increase in set-up cost, showing that a
higher utilization of the production facility can provide a significant improvement. Moon (1994) considered
a reduction in set-up time through a one-time investment in an ELSP model. Hwang, Kim, and Kim (1993) and Moon (1994) developed ELSP models considering both set-up time reduction and quality improvement through investment. Moon, Hahm, and Lee (1998) also considered stabilizing the
production rate as a concept for the ELSP. In a later study, Moon et al. (2002) considered an unreliable production facility in the ELSP, where production starts in an "In-Control" state but may shift to an "Out-of-Control" state, resulting in the production of non-conforming items while the facility operates in that state. Mathematical models have been developed using the common cycle and time-varying lot
size approaches. Giri, Moon, and Yun (2003) also considered preventive and corrective maintenance, as
well as the related costs.
Silver (2004) in his overview covered a wide range of heuristic solution methods that can be of interest
to researchers and practitioners. Aytug, Khouja, and Vergara (2003) presented a survey of literature about
the use of the Genetic algorithm in production and operation management. Khouja, Michalewicz, and
Wilmot (1998) successfully applied the Genetic algorithm to the ELSP while considering the basic period
approach, where the deviation of the Genetic algorithm from LB at a higher machine utilization reached
over 85%. Moon et al. (2002) proposed a hybrid Genetic Algorithm (GA) for the ELSP using the time-varying lot size approach. The GA outperformed the previously best known method, Dobson's (1987) heuristic (DH).
Feldmann and Biskup (2003) reported the successful implementation of a simulated annealing based
meta-heuristic for solving a single machine scheduling problem with earliness and tardiness penalties.
The use of the Genetic algorithm to solve the ELSP with deteriorating items is discussed in Yao and Huang (2005). Gaafar (2006) also described solving the dynamic lot sizing problem with batch ordering using a Genetic algorithm. Raza, Akgunduz, and Chen (2006) proposed a Tabu Search (TS) algorithm and
Neighborhood Search heuristics (NSa , NSb ) to solve the same problem. The proposed solution methods
were able to outperform the two best known heuristic methods, DH and GA, while the TS algorithm
provided the best solution.
The recent success of meta-heuristics such as GA, NSa , NSb and TS in solving the ELSP using the time-
varying lot size approach has motivated the development of the Simulated Annealing (SA) algorithm to solve
the ELSP. The SA algorithm is a well-known meta-heuristic which has been successfully used to solve many
complex combinatorial optimization problems. In this paper a new meta-heuristic based on the SA algorithm
is presented to solve the ELSP. We present a computational study to compare the SA algorithm with the exist-
ing solution methods. The remainder of this paper is organized as follows: in Section 3 Dobson’s (1987) ELSP
model is briefly described; Section 4 presents a brief description of existing heuristic algorithms and the pro-
posed SA algorithm; a statistical study of the SA algorithm design of experiments is presented in Section 5.
Section 6 reports the computational experience with the proposed and existing solution methods; while con-
cluding remarks and research agenda are discussed in Section 7.

3. ELSP model

We present Dobson’s (1987) ELSP model which is based on the time-varying lot size approach. Typically
the problem is stated as a single production facility on which m distinct products are to be produced with the
following assumptions.

1. Items (products) do not have any precedence over each other. They compete for the same production
facility.
2. Back-orders are not allowed.
3. The production facility is assumed to be failure free and to always produce perfect quality products.

The data for the problem is:


i    product index
m    total number of products
p_i  production rate of item i, \forall i = 1, 2, \ldots, m
d_i  demand rate of item i, \forall i = 1, 2, \ldots, m
h_i  inventory holding cost of item i ($ per unit per day), \forall i = 1, 2, \ldots, m
A_i  set-up cost of item i ($), \forall i = 1, 2, \ldots, m
s_i  set-up time of item i (days), \forall i = 1, 2, \ldots, m
T    length of the production cycle
K    machine idleness, K = 1 - \sum_{i=1}^{m} d_i / p_i

A typical production sequence f with n production runs is written f = \{f_1, f_2, \ldots, f_n\}, with f_j \in \{1, 2, \ldots, m\}. For a production sequence f, the production times t = \{t^1, t^2, \ldots, t^n\} and idle times u = \{u^1, u^2, \ldots, u^n\} are determined such that the production sequence remains executable in the chosen cycle length T, which can be repeated indefinitely, demand is satisfied, and the total cost of inventory and set-up is minimized.
Subscripts refer to the data of the ith part: p_i, d_i, h_i, A_i, s_i. In a production cycle f of n indexes, n \ge m, superscripts refer to the data of the part produced at the jth position in the sequence: p^j, d^j, h^j, A^j, s^j; that is, p^j = p_{f_j}, \ldots, s^j = s_{f_j}. Let F represent the set of all possible finite sequences of the parts.
Consider the ith product, produced at the jth position in the production sequence. Its production involves a production time t^j, a set-up time s^j and an idle time u^j. The part will not be produced again until the remaining products have been produced. The total number of parts produced at the jth position is p^j t^j. These parts satisfy the demand for the product over the period [0, v], where v = p^j t^j / d^j. The highest inventory level is (p^j - d^j) t^j, and the total inventory holding cost of the product produced at the jth position in the sequence is \frac{1}{2} h^j (p^j - d^j)(p^j / d^j)(t^j)^2. Let J_i be the set of positions in a given sequence where part i is produced, that is, J_i = \{j \mid f_j = i\}. Let L_k represent the set of positions from position k up to, but excluding, the position where the product produced at k is produced again in the same cycle. The ELSP that minimizes the total cost can be formulated as:

\inf_{f \in F} \ \min_{t \ge 0,\, u \ge 0,\, T \ge 0} \ \frac{1}{T} \left[ \sum_{j=1}^{n} \frac{1}{2} h^j (p^j - d^j) \frac{p^j}{d^j} (t^j)^2 + \sum_{j=1}^{n} A^j \right]   (1)
subject to

\sum_{j \in J_i} p_i t^j = d_i T, \quad i = 1, 2, \ldots, m   (2)

\sum_{j \in L_k} (t^j + s^j + u^j) = \frac{p^k}{d^k} t^k, \quad k = 1, 2, \ldots, n   (3)

\sum_{j=1}^{n} (t^j + s^j + u^j) = T   (4)

The first constraint, Eq. (2), states that enough of item i must be produced to fulfill its demand, which occurs at rate d_i, over a cycle of length T. Eq. (3) ensures that the lot produced at position k is large enough to satisfy demand until the item is produced again. Eq. (4) requires that the sum of the production, set-up and idle times over the sequence equals the cycle time T.
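To make the objective concrete, Eq. (1) can be evaluated for a given sequence f, production times t and cycle length T. The sketch below is our illustration, not the authors' code, and the two-item data values are hypothetical:

```python
def elsp_cost(f, t, T, p, d, h, A):
    """Average cost per unit time, Eq. (1): run-by-run holding costs plus
    set-up costs over the cycle, divided by the cycle length T."""
    holding = sum(0.5 * h[i] * (p[i] - d[i]) * (p[i] / d[i]) * tj ** 2
                  for i, tj in zip(f, t))
    setup = sum(A[i] for i in f)
    return (holding + setup) / T

# Hypothetical two-item cycle: f = [0, 1], t = [0.4, 0.6], T = 2.0
c = elsp_cost([0, 1], [0.4, 0.6], 2.0,
              p=[10, 10], d=[2, 3], h=[1.0, 1.0], A=[5.0, 5.0])
```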

4. Solution methodologies

A solution to the ELSP is a combination of production sequence f and production times t. For a given pro-
duction sequence f the production times t must satisfy Eqs. (2)–(4). The cost of a solution is determined using Eq. (1) for a combination of f and t. Currently, five heuristic algorithms exist in the literature
to solve the ELSP using the time-varying lot size approach. Each of these existing heuristics uses the following
solution procedure:

• Step 1: Determine the LB estimate of the optimal cycle length T_i for each item using the LB computation procedure given by Bomberger (1966).
• Step 2: Assume that T_i is the optimal cycle length of item i, and let z_i represent the optimal production frequency of item i. z_i is determined as follows:
z_i = \frac{\max_j \{T_j\}}{T_i}, \quad \forall i = 1, 2, \ldots, m.   (5)
The frequencies obtained are then either rounded to the nearest integer, or rounded using the power-of-two rounding algorithm suggested in Roundy (1989).
• Step 3: For a given production sequence f, obtained following the production frequency rounding-off rule mentioned in Step 2, determine the production times t. Assume zero idle times, i.e., u = 0, and solve Eqs. (2)–(4) to estimate the production times t. This approximation works well for a highly loaded production facility. Moon et al. (2002) termed this approximation the quick and dirty heuristic.
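With u = 0, Eq. (3) reduces to a linear system in the production times t, and T then follows from Eq. (4). A minimal sketch of this quick and dirty computation (our illustration; the instance data are hypothetical):

```python
import numpy as np

def quick_and_dirty_times(f, p, d, s):
    """Solve Eq. (3) with u = 0 for the production times t, then obtain T
    from Eq. (4). f is the production sequence; p, d, s are per-item
    production rates, demand rates and set-up times."""
    n = len(f)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for k in range(n):
        A[k, k] += p[f[k]] / d[f[k]]   # (p^k / d^k) t^k term of Eq. (3)
        j = k
        while True:                    # walk through L_k cyclically
            A[k, j] -= 1.0             # -t^j for each position j in L_k
            b[k] += s[f[j]]            # set-up times accumulate on the RHS
            j = (j + 1) % n
            if f[j] == f[k]:           # next run of the same item: stop
                break
    t = np.linalg.solve(A, b)
    T = t.sum() + sum(s[i] for i in f)  # Eq. (4) with u = 0
    return t, T

# Hypothetical two-item instance, one run per item in the cycle.
t, T = quick_and_dirty_times([0, 1], p=[10, 10], d=[2, 3], s=[0.5, 0.5])
```

For this instance t = [0.4, 0.6] and T = 2, and the demand balance of Eq. (2) holds exactly (e.g. p_0 t^0 = 4 = d_0 T); checking Eq. (2) is a useful sanity test for any implementation.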

Once the production frequencies are known a heuristic can be employed in Step 3 to find a solution. Next,
we briefly describe the existing heuristics which include DH and meta-heuristics NSa , NSb , TS and GA, fol-
lowed by a new meta-heuristic based on the SA algorithm. All of these heuristics, with the exception of DH,
are iterative and thus repeat Step 3 until a stopping criterion is reached. With the exception of GA, the other
meta-heuristics have similar basic working principles. Each of these meta-heuristics starts with a seed solution.
This seed solution is recorded as the best solution and becomes the current solution in that iteration. Depend-
ing upon the meta-heuristic requirements, either one or several neighborhoods of this current solution are gen-
erated. A common practice observed in each of these meta-heuristics is to accept a solution which is found to
be better than the best visited during the search and select it as the current solution for the next iteration in the
search. The selection of a neighborhood may vary from one meta-heuristic to another. The comparative sum-
mary of the selection procedures for each meta-heuristics analyzed in this work is given in Table 1. The table
does not include the GA, as it uses a different search scheme, as previously discussed. The search processes of NSa, NSb, TS and SA, however, closely resemble one another, as identified in Table 1. The table also shows that the
algorithms were tested on comparable operating conditions. Now we briefly discuss existing meta-heuristics.

4.1. Dobson’s heuristic

The DH is a single pass heuristic which applies the previously mentioned solution steps only once. To determine production frequencies, DH uses the rounding off to the power of 2 algorithm. Once the production frequencies are known, the production sequence is determined using a bin packing heuristic. Finally, the quick and dirty heuristic is used to determine the production times t.
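The power-of-two rounding used by DH can be sketched as follows; rounding in the log domain is our reading of the rule, not a detail spelled out in the paper:

```python
import math

def round_to_power_of_two(z):
    """Round a production frequency z > 0 to the nearest power of 2
    (nearest in the log domain, as in power-of-two policies)."""
    return 2 ** round(math.log2(z))

# e.g. fractional frequencies derived from Eq. (5)
freqs = [1.0, 1.9, 3.2, 6.5]
rounded = [round_to_power_of_two(z) for z in freqs]
```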

4.2. Neighborhood search heuristics

A neighborhood search heuristic (French, 1982; Sait & Youssef, 1999) explores the neighborhood of a given
solution.

Table 1
A comparison of meta-heuristics

Feature | Existing meta-heuristics (TS, NSa, NSb) | Proposed meta-heuristic (SA)
Production frequency determination scheme | TS and NSa use rounding off to the nearest integer; NSb uses the rounding off to the power of 2 algorithm | Rounding off to the nearest integer
Seed solution | Randomly generated feasible production sequence | Randomly generated feasible production sequence
Neighborhood generation scheme | Random swap of two entities in a production sequence | Random swap of two entities in a production sequence
Stopping criterion | 1000 iterations of no improvement | 1000 iterations of no improvement
Miscellaneous | For TS, candidate list size = 20, tabu list size = 7 | For SA, Markov chain length = 20, α = 0.99

The NS is a greedy heuristic in that it only accepts those solutions that are superior to the best solution visited so far. There are two neighborhood search heuristics, NSa and NSb, reported in the literature.
These two heuristics differ in the production frequency rounding off scheme. NSa uses the production frequen-
cies rounded off to the nearest integers. On the other hand, the NSb rounds the production frequencies using a
rounding off to the power of 2 algorithm. The implementation issues such as seed solution, neighborhood solu-
tion generation scheme and stopping criterion are discussed as follows:

4.2.1. Seed solution

A seed solution is also called the initial solution. A seed solution in this study is a randomly generated feasible production sequence, f, which follows either of the two production frequency rounding-off schemes noted earlier in Step 2. A seed solution is considered feasible if it does not contain the same item at two adjacent positions, nor the same item at both the start and the end of the production sequence f.
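A seed generator consistent with this feasibility rule might look like the following sketch (the retry-until-feasible strategy is our assumption, not a detail from the paper):

```python
import random

def is_feasible(f):
    """No identical items at adjacent positions, including the cyclic
    wrap-around from the last position back to the first."""
    return all(f[j] != f[(j + 1) % len(f)] for j in range(len(f)))

def random_seed_sequence(freqs, rng):
    """Build a random feasible sequence where item i appears freqs[i] times."""
    items = [i for i, z in enumerate(freqs) for _ in range(z)]
    while True:
        rng.shuffle(items)
        if is_feasible(items):
            return list(items)

# Hypothetical frequencies: item 1 produced twice, items 0 and 2 once each.
seq = random_seed_sequence([1, 2, 1], rng=random.Random(42))
```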

4.2.2. Neighborhood generation scheme

A neighborhood production sequence f' for a given production sequence f is randomly generated by exchanging the item at the ith position with the item at the kth position in f. Only feasible neighborhood solutions are considered in the search.
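The swap-based neighborhood move described above can be sketched as:

```python
import random

def random_neighbor(f, rng):
    """Generate f' by swapping the items at two random positions i != k,
    retrying until the result satisfies the adjacency feasibility rule."""
    n = len(f)
    while True:
        i, k = rng.sample(range(n), 2)
        g = list(f)
        g[i], g[k] = g[k], g[i]
        # feasible: no identical items adjacent, cyclically
        if all(g[j] != g[(j + 1) % n] for j in range(n)):
            return g

# Hypothetical sequence with item 1 produced twice per cycle.
nbr = random_neighbor([0, 1, 2, 1], rng=random.Random(7))
```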

4.2.3. Stopping criterion

The search is terminated once 1000 consecutive iterations pass without improvement.

4.3. Tabu search algorithm

Glover (1989, 1990) introduced the TS algorithm. It is an iterative heuristic for solving combinatorial opti-
mization problems such as the ELSP. The TS algorithm is a generalization of a local search. At each step, the
local neighborhood of a current solution is explored and thus a number of candidate solutions are generated.
This list is known as the candidate list. The best candidate solution in that neighborhood qualifies for the next
iteration in the search. Unlike the local search, which stops when no improved solution is found in the current
neighborhood, TS continues to explore new solutions even if the new ones are worse than the current best
solution. To prevent cycling, the information pertaining to the most recently visited solution is recorded in
a list called the Tabu list. The tabu status of a solution is overridden when the aspiration criterion is satisfied.
A detailed discussion on the basic structure of the TS algorithm can be found in Glover, Glover, and Laguna
(1998), Sait and Youssef (1999). The performance of the TS algorithm can be improved by the incorporation
of some additional features such as search intensification and diversification (Glover et al., 1998; Ben-Daya & Al-Fawzan, 1998). As with NSa, TS rounds production frequencies off to the nearest integer.
The TS algorithm also uses the same seed solution, neighborhood generation scheme and stopping criterion
suggested for NSa . We now discuss some new features and control parameters of the TS algorithm designed
for solving the ELSP:

4.3.1. Tabu restriction

This is a control logic used to avoid cycling back to previously visited solutions, achieved by declaring selected attributes of a move tabu (forbidden). The attributes of a move are recorded in the tabu list. In this study, the attributes of a randomly generated neighbor of the production sequence f are the two randomly selected positions in f whose exchange results in f'.

4.3.2. Aspiration criterion

The aspiration criterion is used to override the tabu status of a solution when a new solution is better than the current best solution.

4.3.3. Search intensification and diversification

Search intensification is employed to explore regions of good solutions in the search space with more intensity. Previously explored neighborhoods are revisited to avoid the possibility of missing any good solutions due to temporal locking. Search diversification is the opposite of intensification, where
the search changes its direction to a new region that has not yet been explored by the algorithm. A statistical
analysis that uses design of experiments (DOE) is carried out in Raza et al. (2006) and the parameters selected
are reported as follows:

• The candidate list size is 20; the best solution among the candidate neighbors is selected.
• The tabu list size is 7, and the list is updated using the first-in-first-out rule.
• Intensification and diversification schemes are employed. Intensification is invoked after 250 iterations without improvement in the current search; similarly, diversification is invoked after 500 iterations without improvement.
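Under these parameter choices, the overall TS loop can be sketched as follows. This is a simplified illustration: `cost` stands in for the Eq. (1) evaluation, the toy cost function below is only for demonstration, and intensification/diversification are omitted:

```python
import random
from collections import deque

def tabu_search(seed, cost, rng, candidates=20, tabu_size=7, patience=1000):
    """Simplified TS: evaluate a candidate list of random swap moves, pick the
    best non-tabu one (or a tabu one that beats the best cost: aspiration),
    and stop after `patience` consecutive non-improving iterations."""
    current = list(seed)
    best, best_cost = list(seed), cost(seed)
    tabu = deque(maxlen=tabu_size)   # FIFO tabu list of swapped position pairs
    n, stall = len(seed), 0
    while stall < patience:
        moves = []
        for _ in range(candidates):
            i, k = sorted(rng.sample(range(n), 2))
            g = list(current)
            g[i], g[k] = g[k], g[i]
            moves.append((cost(g), (i, k), g))
        moves.sort(key=lambda m: m[0])
        for c, attr, g in moves:
            if attr not in tabu or c < best_cost:   # aspiration criterion
                current = g
                tabu.append(attr)
                if c < best_cost:
                    best, best_cost, stall = list(g), c, -1
                break
        stall += 1
    return best, best_cost

# Toy demonstration: minimize a position-weighted sum over a permutation.
rng = random.Random(0)
toy_cost = lambda f: sum(j * f[j] for j in range(len(f)))
best, c = tabu_search([3, 1, 2, 0, 4], toy_cost, rng, patience=200)
```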

4.4. Hybrid genetic algorithm

A genetic algorithm is a stochastic search algorithm based on the mechanism of natural selection and nat-
ural genetics. It has been widely used in various areas for three decades. The algorithm differs from conventional search techniques in that it starts with an initial set of (random) solutions called a population.
Each individual in the population is identified as a chromosome, representing a possible solution to the prob-
lem at hand. The chromosomes evolve through successive iterations, called generations. During each genera-
tion, the chromosomes are evaluated, using some measures of fitness. The algorithm is well suited to problems
that are complex and have a large search space, making them impossible to search exhaustively. It is generally
accepted that any genetic algorithm to solve a problem must have certain basic components. These basic com-
ponents are: (i) solution representation and initialization, (ii) objective and fitness function, (iii) reproduction,
crossover and mutation, and (iv) fitness scaling. A production sequence f is directly used to represent the solution (chromosome) instead of the more commonly practised binary representation. Once f is known, Eqs. (2)–(4) can be solved using the quick and dirty heuristic to determine t. For any solution (chromosome) with t known, the cost (objective function) can be calculated using Eq. (1). The reproduction operator may be imple-
mented in an algorithmic form in a number of ways, however a biased roulette wheel is perhaps the easiest
where a roulette wheel slot is sized in proportion to its fitness. The implementation of the GA in Moon
et al. (2002) used the stochastic tournament method. In this method, the selection probabilities were calculated
and successive pairs of individuals were drawn using the roulette wheel selection. A pair was drawn and the
solution with the higher fitness was declared the winner. This process continued until the population was full.
The crossover operator took two chromosomes and swapped a part of their genetic information to produce new chromosomes; that is, crossover merged two solutions (chromosomes) into one. In the implementation of the GA by Moon et al. (2002), a partially matched crossover
(PMX) was used. This crossover scheme maintained the feasibility criterion, such as the production frequency
of the resulting solution. Mutation is a background operator, which produces spontaneous random changes in
various chromosomes. A simple way to achieve mutation is to alter one or more genes. In genetic algorithms,
mutation serves the crucial role of replacing the genes lost from the population during the selection process so
that they can be tried in a new context, or providing the genes that were not present in the initial population.

Moon et al. (2002) suggested a very simple mutation function to randomly swap the two indexes of the pro-
duction sequence such that the resulting production sequence remained feasible. The fitness scaling is done to
keep appropriate levels of competition throughout a search. Without scaling, early on there is a tendency for a
few super individuals to dominate the selection process. The literature suggests various scaling methods such as linear scaling, sigma (σ) truncation, power law scaling, logarithmic scaling, etc. Moon et al. (2002) chose sigma truncation, developed by Forrest (1985), which uses the population standard deviation (σ) as variation information to preprocess raw fitness values prior to scaling. In the computational study by Moon et al.
(2002) using the GA, the following parameters were selected:

• Population size: 100
• Elitist strategy (the best individual was always kept from generation to generation).
• Termination condition was to stop the algorithm when the number of generations reached 1000 (actually
most runs of the GA converged before 300 generations) or the best individual did not improve in over 150
consecutive generations.
• Crossover rate: 0.9
• Mutation rate: 1/(string length of chromosome).
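The sigma truncation scaling described above can be written as f' = max(f - (mean - cσ), 0); a minimal sketch (the constant c = 2 is a common choice, not a value specified in the paper):

```python
import statistics

def sigma_truncation(raw_fitness, c=2.0):
    """Sigma truncation: f' = max(f - (mean - c * sigma), 0). Damps the
    advantage of early 'super individuals' while keeping fitness
    values non-negative."""
    mean = statistics.mean(raw_fitness)
    sigma = statistics.pstdev(raw_fitness)   # population standard deviation
    return [max(f - (mean - c * sigma), 0.0) for f in raw_fitness]

# Hypothetical raw fitness values, one outlier "super individual".
scaled = sigma_truncation([10.0, 12.0, 11.0, 30.0])
```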

4.5. Simulated annealing

The SA algorithm is an efficient combinatorial optimization technique. The SA algorithm follows an anal-
ogy from the annealing of metals, where during the search process SA not only accepts the better solutions
(Downhill move) but also accepts the bad solutions (Uphill move) with some probability. This feature of
SA enables it to escape from a local minimum. More general details on the SA algorithm may be found in
Kirkpatrick, Gelatt, and Vecchi (1983) and Eglese (1990). A typical SA algorithm application requires a seed
solution, a cooling schedule, an acceptance probability function and a stopping criterion. The SA algorithm uses the same seed solution, neighborhood generation scheme and stopping criterion as TS. The proposed SA
algorithm is employed in Step 3 to determine the best production sequence. Now we discuss the following
issues more specific to the SA algorithm: (i) Cooling schedule, (ii) Markov chain length, and (iii) Acceptance
probability function.

4.5.1. Cooling schedule

A typical SA cooling schedule includes three parameters: (i) initial temperature, (ii) temperature decrement rule, and (iii) final temperature.

4.5.1.1. Initial temperature. With the SA algorithm, the initial temperature should be sufficiently high to avoid
a premature convergence. In practice, the SA algorithm starts with an initial temperature where most of the
moves are accepted. The initial temperature Y0 can be estimated using the probability of initial acceptance (p0). A higher p0 is desired, which results in a higher Y0. We adopted the Y0 estimation method proposed by White (1984). In this approach Y0 is considered acceptable if Y0 ≫ σ, where σ is the standard deviation of the cost function at the initial temperature Y0. For a given seed solution, 100 neighborhood solutions are generated and the standard deviation of their costs is determined. It is assumed that the variation in the cost function is normally distributed, and it is expected that at Y0 a neighbor whose cost is 3σ worse than the seed solution will be accepted with probability p0. The following equation expresses this criterion:
3
k¼ ð6Þ
ln p0
Thus the temperature is determined using the following equation:
Y 0 ¼ kr ð7Þ
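A minimal sketch of this estimation (in Python rather than the authors' MATLAB; `neighbor` and `cost` are hypothetical stand-ins for the problem-specific neighborhood scheme and cost function):

```python
import math
import statistics

def initial_temperature(seed, neighbor, cost, p0=0.90, samples=100):
    """White's (1984) estimate: sample neighbors of the seed, take the
    standard deviation sigma of their costs, and set Y0 = k * sigma with
    k = -3 / ln(p0), so a move 3*sigma worse is accepted with prob. p0."""
    costs = [cost(neighbor(seed)) for _ in range(samples)]
    sigma = statistics.pstdev(costs)      # std. dev. of the cost function
    k = -3.0 / math.log(p0)               # Eq. (6); ln(p0) < 0, so k > 0
    return k * sigma                      # Eq. (7): Y0 = k * sigma
```

Note that ln(p0) is negative for p0 < 1, so k and hence Y0 are positive; a higher p0 pushes ln(p0) toward zero and raises Y0, as stated above.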

4.5.1.2. Temperature decrement rule. Most SA implementations employ a geometric temperature decrement rule. If the temperature at the kth iteration is Yk, then the temperature at the (k+1)th iteration is given by:

Yk+1 = αYk    (8)

The typical value of α lies in the range 0.8 ≤ α ≤ 0.99 in most SA applications (Sait & Youssef, 1999).

4.5.1.3. Final temperature. The temperature is kept unchanged once it reaches the lowest temperature limit. In our implementation the lowest allowable temperature is set at 10⁻³.
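The decrement rule and the temperature floor combine into a single update; a sketch under the settings used here (α = 0.99, floor 10⁻³):

```python
def next_temperature(Y_k, alpha=0.99, floor=1e-3):
    """Geometric decrement, Eq. (8): Y_{k+1} = alpha * Y_k, held at the
    lowest allowable temperature (1e-3) once the schedule reaches it."""
    return max(alpha * Y_k, floor)
```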

4.5.2. Markov chain length


The Markov chain length describes the number of randomly generated solutions that are visited at a given temperature to attain quasi-equilibrium (Aarts & Laarhoven, 1985); it is also called the Metropolis loop (Eglese, 1990). The acceptance of a randomly visited solution depends on an acceptance criterion. In this implementation of the SA algorithm, we used a probabilistic acceptance function.
The acceptance function is embedded into the Metropolis loop. We used the acceptance probability function suggested in Lyu, Gunasekaran, and Ding (1996). At a given temperature Yi, the acceptance probability pa of a randomly generated solution f' is given by:

pa = 1               if cost(f') < cost(f)
pa = exp(−Δ/Yi)      if cost(f') ≥ cost(f)    (9)

where Δ = cost(f') − cost(f).
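Eq. (9) translates directly into a single acceptance test; the sketch below (again in Python, not the authors' MATLAB) takes an injectable random source for reproducibility:

```python
import math
import random

def accept(cost_f, cost_f_new, Y_i, rng=random.random):
    """Acceptance rule of Eq. (9): an improving move is always taken;
    a non-improving move is taken with probability exp(-delta / Y_i),
    where delta = cost(f') - cost(f) >= 0."""
    if cost_f_new < cost_f:
        return True                        # downhill move: p_a = 1
    delta = cost_f_new - cost_f
    return rng() < math.exp(-delta / Y_i)  # uphill move: p_a = exp(-delta/Y_i)
```

As the temperature Y_i falls, exp(−Δ/Y_i) shrinks toward zero, so uphill moves become increasingly rare late in the search.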

5. A statistical study with SA

A statistical DOE is used to analyze the performance of the SA algorithm. The study is important for improving performance by identifying the influential control parameters of the SA algorithm. The results of the DOE enable us to investigate the robustness of the SA algorithm. Earlier studies have shown the SA algorithm to be very sensitive to its control parameters, although the sensitivity is problem dependent (Lyu et al., 1996).
In order to study the effect of some of these control parameters, we suggested a general factorial DOE.
Three factors related to the cooling schedule were considered in the experimentation and each factor was given
three levels. These level values are identified in Table 2. For each combination of the three factors under study
ten randomly selected problems were solved and responses were measured. The problems were selected from data set 3, as this set contains the most difficult problems of the three (see Dobson, 1987). Three performance measures were considered: the ratio of the cost of the SA solution to the cost of the LB (SA/LB), the ratio of the cost of the DH solution to the cost of the SA solution (DH/SA), and CPU time. The first two measures relate the quality of the SA algorithm to the LB and the DH; the last measure relates to the convergence speed of the SA
algorithm. Tables 3–5 show the analysis of variance (ANOVA) table with SA/LB, DH/SA and CPU time as
performance measures, respectively.
The DOE demonstrates that the SA algorithm is quite robust in convergence to a near optimal solution.
Although CPU time increases with Metropolis loop size, as noted in Table 5, it is considered to be negligible
as most of the problems converged in less than 30 s. We recommend an acceptance ratio p0 of 0.90, a Metropolis loop size of M = 20 and a temperature decrement of α = 0.99. Using a 0.90 acceptance ratio starts the SA algorithm at a higher temperature. The Metropolis loop size plays a role similar to the candidate list size, a parameter of TS; since the candidate list size in TS is 20, the Markov chain length is also selected as 20.

Table 2
Levels of DOE
Factor Label Levels
Acceptance ratio (p0 ) at initial temperature A 0.60 0.80 0.90
Metropolis loop size (M) B 5 20 50
Temperature decrement (a) C 0.60 0.80 0.99

Table 3
3 × 3 × 3 factorial design; response is SA/LB
Source DF Sum of squares F Prob. of larger F
A 2 5.668 × 10⁻⁵ 0.021 0.9793
B 2 1.625 × 10⁻³ 0.599 0.5500
C 2 3.492 × 10⁻⁴ 0.129 0.8792
A·B 4 1.121 × 10⁻⁵ 0.002 1.0000
A·C 4 3.386 × 10⁻⁵ 0.006 0.9999
B·C 4 2.994 × 10⁻⁴ 0.055 0.9943
A·B·C 8 5.735 × 10⁻⁵ 0.005 1.0000

Table 4
3 × 3 × 3 factorial design; response is DH/SA
Source DF Sum of squares F Prob. of larger F
A 2 5.290 × 10⁻⁵ 0.020 0.9798
B 2 1.591 × 10⁻³ 0.613 0.5426
C 2 2.877 × 10⁻⁴ 0.111 0.8951
A·B 4 1.183 × 10⁻⁵ 0.002 1.0000
A·C 4 3.152 × 10⁻⁵ 0.006 0.9999
B·C 4 2.830 × 10⁻⁴ 0.055 0.9944
A·B·C 8 5.019 × 10⁻⁵ 0.005 1.0000

Table 5
3 × 3 × 3 factorial design; response is CPU time
Source DF Sum of squares F Prob. of larger F
A 2 347.7 0.025 0.9754
B 2 199552.6 14.318 0.0000
C 2 73.8 0.005 0.9947
A·B 4 50.7 0.002 1.0000
A·C 4 589.5 0.021 0.9991
B·C 4 849.2 0.030 0.9982
A·B·C 8 2641.0 0.047 1.0000

This ensures the comparability of the two algorithms in the computational study, while α = 0.99 allows the SA algorithm to find better solutions.
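Putting the recommended settings together (p0 = 0.90, M = 20, α = 0.99, temperature floor 10⁻³), the Metropolis loop sits inside the cooling loop. The sketch below is illustrative Python, not the authors' MATLAB implementation; `neighbor` and `cost` are placeholders for the problem-specific operators:

```python
import math
import random

def simulated_annealing(seed, neighbor, cost, Y0, alpha=0.99, M=20,
                        floor=1e-3, max_iters=10_000, rng=random.random):
    """Sketch of the SA search: M moves (the Metropolis loop) are tried
    at each temperature, then the geometric decrement is applied, with
    the temperature held at the floor once it is reached."""
    current = best = seed
    Y = Y0
    for _ in range(max_iters // M):
        for _ in range(M):                       # Metropolis loop
            candidate = neighbor(current)
            delta = cost(candidate) - cost(current)
            # downhill move, or uphill move accepted with prob. exp(-delta/Y)
            if delta < 0 or rng() < math.exp(-delta / Y):
                current = candidate
                if cost(current) < cost(best):
                    best = current               # track the incumbent
        Y = max(alpha * Y, floor)                # cooling with the floor
    return best
```

For example, minimizing x² over the integers with a ±1 neighborhood converges quickly to values near zero under this schedule.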

6. Computational results

In this section, we present a comparative computational study of the existing solution methods and the
newly developed SA algorithm. We compare the performance of the SA algorithm with DH, GA, NSa ,
NSb and TS algorithms. All heuristics with the exception of GA were coded using MATLAB and tested on
an Intel Pentium 4, 2.40 GHz processor with 256 MB RAM. For comparison with GA, we used the compar-
ison format and values provided in Moon et al. (2002). The test problems provided in Mallya (1992) and Bomberger (1966) are considered benchmark problems in the ELSP literature; hence, we tested the aforementioned solution methodologies on these two problems. The minimum cost found by each algorithm is given in Table 6. We note that the SA algorithm finds the best solution to Mallya's problem, with a cost of $60.76, when the production frequencies are rounded off to powers of 2. When the production frequencies are rounded to the nearest integer, the performance of SA is not as good; the GA proposed in Moon et al. (2002) resulted in an average daily cost of $60.91. It is important to note that, in addition to finding the best known solution to this problem, the SA algorithm also found an alternative solution at the same cost. The production schedules found by the tested algorithms are presented in Tables 7 and 8.
Similarly, for Bomberger's (1966) problem at 99% utilization, the SA algorithm finds the best known solution and outperforms all other methods, including the TS algorithm.

Table 6
Comparison on Bomberger and Mallya’s problem
Problem type LB Existing solutions Proposed solution
DH GA TS NSa NSb SA
Mallya 57.726 60.874 60.911 60.911 60.911 60.782 60.911
Mallya* – – – 60.782 – – 60.782
Bomberger 122.945 128.339 126.12 125.31 125.754 130.346 125.135
* Both TS and SA algorithms use production frequencies rounded off to the power of 2.

Table 7
Production frequencies found by heuristic algorithms on Mallya’s problem
Heuristic Details
DH f ¼ f3; 4; 3; 1; 2; 3; 4; 3; 1; 5g
t ¼ f2:5047; 16:07; 4:4432; 12:774; 15:743; 2:0777; 13:263; 3:5533; 12:32; 10:546g
TS f ¼ f2; 3; 5; 4; 1; 3; 2; 4; 3; 1; 4g
t ¼ f9:192; 5:615; 12:919; 11:607; 11:647; 3:412; 10:093; 11:596; 6:382; 19:094; 12:730g
NSa f ¼ f3; 1; 4; 2; 3; 5; 4; 1; 3; 2; 4g
t ¼ f6:382; 19:094; 12:730; 9:192; 5:615; 12:919; 11:607; 11:647; 3:412; 10:093; 11:596g
NSb f ¼ f2; 3; 1; 4; 3; 5; 1; 3; 4; 3g
t ¼ f15:743; 3:8749; 10:551; 14:329; 3:8914; 10:546; 14:543; 2:3425; 15:004; 2:4701g
TS f ¼ f1; 4; 3; 5; 1; 3; 4; 3; 2; 3g
t ¼ f10:5512; 14:3294; 3:89142; 10:546; 14:5431; 2:34248; 15:0036; 2:47009; 15:7427; 3:87493g
SA f ¼ f4; 2; 3; 5; 4; 1; 3; 2; 4; 3; 1g
t ¼ f12:7299; 9:19219; 5:61506; 12:9188; 11:6074; 11:6471; 3:41226; 10:0926; 11:5956; 6:38185; 19:0935g
SA f ¼ f5; 1; 3; 4; 3; 2; 3; 1; 4; 3g
t ¼ f10:546; 14:543; 2:3425; 15:004; 2:4701; 15:743; 3:8749; 10:551; 14:329; 3:8914g

In the literature both DH and GA were tested using the data sets given in Dobson (1987). These data sets specify the criteria for randomly generating ELSP test cases. The generated problems are characterized by high machine utilization (over 90%), which suits the comparative study in this work well. Previous ELSP studies, such as Dobson (1987), Moon et al. (2002) and Raza et al. (2006), also used the same data sets. A total of fifty test problems were randomly generated for each of the three data sets shown in Table 9. The deviation from the LB, i.e., the ratio of the cost of the solution method to the cost of the LB, is used as the performance measure. For further detailed analysis, we also compared the algorithms based on the deviation from DH, CPU time, and the number of tested problems for which an algorithm found a solution superior to that of the DH.
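The performance measures above can be sketched as follows (illustrative Python; the cost lists are hypothetical inputs, not the study's generated problems):

```python
def comparison_stats(costs_alg, costs_lb, costs_dh):
    """Per-problem performance measures used in the study:
    deviation from the lower bound (alg/LB), deviation from Dobson's
    heuristic (DH/alg), and the number of problems on which the
    algorithm's cost is at most that of DH."""
    dev_lb = [a / lb for a, lb in zip(costs_alg, costs_lb)]
    dev_dh = [dh / a for dh, a in zip(costs_dh, costs_alg)]
    n_not_worse = sum(1 for r in dev_dh if r >= 1.0)  # alg beats or ties DH
    return dev_lb, dev_dh, n_not_worse
```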
We discuss our findings for each of the three experiments as follows. Table 10 reports the computational results for 50 randomly generated problems using data set 1. It is evident that the SA algorithm outperformed the DH, NSa and NSb, and is also very competitive with the TS algorithm. The SA algorithm also outperformed the GA: the GA was only 1.1% better than the DH (see Moon et al., 2002), whereas the SA algorithm achieved a 2.67% improvement.
A similar study was conducted on the problems generated using data sets 2 and 3 from Table 9. Since the computational results of the GA were not provided in Moon et al. (2002) for these two sets, the SA algorithm is compared only with the other methods. Data set 2 contains problems very similar to those in data set 1; the related computational study with 50 randomly generated problems is shown in Table 11. The SA algorithm produced solutions of superior quality and also achieved the highest number of superior solutions in this comparison. Data set 3 contains the relatively harder problems (Dobson, 1987). The results given in Table 12 show that the SA algorithm is the only heuristic that performs better than or equal to the TS algorithm. An in-depth comparison of the two best-performing methods, the TS and SA algo-

Table 8
Production frequencies found by heuristic algorithms on Bomberger’s problem
Heuristic Details
DH f = {8, 4, 5, 8, 9, 8, 4, 10, 6, 8, 3, 2, 1, 8, 4, 5, 8, 9, 8, 4, 10, 7, 8, 3, 2, 8, 4, 5, 8, 9, 8, 4, 10, 6, 8, 3, 2, 8, 4, 5, 8, 9, 8, 4, 10, 7, 8, 3, 2}
t = {35.8256, 62.313, 22.8308, 40.0922, 95.293, 37.4462, 61.6508, 13.9507, 13.5698, 40.1746, 42.6083, 25.2726, 27.8604,
30.4963, 53.124, 19.1867, 34.6237, 82.1248, 30.5236, 48.4096, 13.1926, 10.1493, 28.2923, 42.1434, 25.11, 32.3382, 56.4482,
20.298, 36.5595, 86.7862, 33.3795, 51.2624, 13.8256, 14.2906, 29.3761, 43.702, 26.1615, 33.6866, 58.7276, 21.2655, 38.316,
91.0158, 33.6701, 53.8305, 14.7519, 10.746, 31.6915, 47.5066, 27.9323}
TS f = {4, 8, 5, 9, 8, 6, 7, 3, 2, 8, 4, 5, 10, 8, 9, 8, 4, 3, 8, 5, 2, 1, 4, 8, 9, 8, 4, 6, 5, 3, 8, 2, 10, 4, 8, 9, 5, 8, 4, 3, 2, 8}
t = {93.1766, 36.0186, 14.2169, 70.7668, 39.1481, 12.2983, 17.6698, 36.6966, 25.4799, 39.3299, 57.1912, 14.1954, 22.0708,
31.5664, 74.7627, 36.6643, 46.959, 40.4543, 46.3969, 14.6485, 22.5563, 23.5598, 49.5853, 32.9021, 77.9791, 44.8312, 46.0915,
11.2615, 13.6737, 35.3029, 40.6085, 18.7135, 25.0488, 53.1489, 38.4438, 76.8786, 13.9449, 37.2705, 30.8039, 36.3449, 21.5995,
38.9538}
NSa f = {4, 2, 8, 5, 9, 6, 8, 4, 1, 3, 8, 2, 5, 4, 8, 9, 7, 8, 4, 3, 5, 2, 8, 10, 4, 6, 8, 9, 5, 8, 4, 3, 2, 8, 4, 8, 5, 9, 8, 10, 3, 8}
t = {68.1266, 20.3381, 44.3999, 14.2791, 80.086, 10.551, 44.1094, 43.7623, 23.5598, 37.8943, 39.9535, 20.8636, 15.1851,
58.9104, 41.2746, 79.4703, 17.6698, 45.9338, 39.5632, 36.6152, 12.0651, 20.8661, 36.4121, 18.1524, 55.52, 13.0088, 36.6449,
73.3055, 13.1861, 37.678, 29.8507, 33.5974, 26.2814, 33.9898, 81.2233, 35.398, 15.9639, 67.5254, 29.2912, 28.9671, 40.6917,
37.0489}
NSb f = {8, 4, 2, 8, 10, 3, 5, 8, 4, 7, 8, 9, 8, 1, 4, 2, 8, 5, 3, 6, 8, 10, 4, 2, 8, 9, 8, 4, 8, 3, 5, 8, 10, 4, 8, 9, 8, 2, 7, 4, 8, 6, 3, 8, 5, 4, 8, 9, 10}
t = {37.1454, 59.1409, 29.5561, 36.199, 18.409, 47.7908, 19.5933, 34.2828, 68.5239, 12.4048, 40.4191, 12.1963, 51.3413,
32.9175, 32.0596, 75.9503, 20.2406, 48.1149, 28.5767, 44.7419, 22.8214, 34.6986, 20.0756, 62.7295, 36.3913, 86.3812, 35.3577,
29.7402, 8.49047, 45.1615, 26.0659, 15.6664, 46.1008, 33.1044, 20.4657, 58.1256, 43.1214, 97.4225, 5.03985}
SA f = {8, 4, 3, 10, 8, 2, 4, 6, 8, 5, 9, 8, 4, 3, 2, 8, 5, 4, 8, 9, 8, 10, 3, 5, 8, 4, 2, 6, 8, 7, 9, 8, 4, 5, 3, 8, 1, 2, 4, 8, 9, 5}
t = {39.3604, 35.9223, 36.8779, 20.9805, 36.6372, 18.5442, 57.0423, 11.6366, 37.0452, 12.362, 75.0937, 42.524, 37.8421,
35.3653, 28.1914, 41.3358, 13.8737, 84.5389, 31.5848, 74.8069, 35.1145, 26.1391, 40.3426, 16.6999, 40.7355, 64.6932, 20.4757,
11.9232, 38.9956, 17.6698, 73.9825, 39.305, 43.2727, 13.7869, 36.2127, 41.2029, 23.5598, 21.1378, 53.645, 38.2932, 76.504,
13.9568}

Table 9
Distribution for randomly generated data for the test problems
Parameters Set 1 Set 2 Set 3
Number of items (units) [5, 15] [5, 15] [5, 15]
Production rate (units/unit time) [2000, 20000] [4000, 20000] [1500, 30000]
Demand rate (units/unit time) [1500, 2000] [1000, 2000] [500, 2000]
Set-up time (time/unit) [1, 4] [1, 4] [1, 8]
Set-up cost ($) [50, 100] [50, 100] [10, 350]
Holding cost ($) [1/240, 6/240] [1/240, 5/240] [5/240000, 5/240]
K ≤ 0.1

rithms shows that, out of 150 randomly generated problems, the SA algorithm finds solutions that are better than or equal to those of DH about 83.3% of the time. The corresponding figures for TS, NSa and NSb are approximately 78.6%, 74% and 47.3%, respectively.
In Table 13, a summary of computational results is presented. The findings are the percentage average devi-
ation from LB, improvement over DH, and the CPU time taken for all the data sets tested during the com-
putational experiment. While considering the quality of the solution obtained, both the TS and SA outperform
the other heuristic algorithms tested. The average % improvement of TS and SA over DH is 3.13 and 3.11,
whereas the CPU time measure are 14.87 and 13.17 s, respectively. Next, a statistical analysis based on a
paired comparison t-test (Kutner, Nachtsheim, Neter, & Li, 2005) is also reported in Table 14. The performance measure is the percentage relative deviation from the LB. It is inferred that the SA algorithm is competitive with the best known TS algorithm, and that it yields a statistically significant improvement in solution quality over NSa and NSb when tested at the 5% significance level.
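The paired comparison statistic behind Table 14 can be sketched as follows (a standard paired t-test on per-problem deviations, not the authors' statistical package):

```python
import math
import statistics

def paired_t(x, y):
    """Paired-comparison t statistic: for per-problem deviations x_i and
    y_i of two algorithms, d_i = x_i - y_i and t = mean(d) / (s_d / sqrt(n)),
    with n - 1 degrees of freedom."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    se = statistics.stdev(d) / math.sqrt(n)  # standard error of the mean
    return statistics.mean(d) / se, n - 1
```

With 150 problems this gives the df of 149 in Table 14; e.g., the SA-TS row's t of 0.755 is its mean difference 0.018 divided by its standard error 0.024.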
In an analysis of CPU time for convergence, reported in Fig. 1, we found that the SA algorithm had a much
faster convergence to the optimal solution when compared to TS.

Table 10
Comparison of algorithms on randomly generated problems using Set 1
Parameters Comparison with Lower Bound Comparison with Dobson heuristic
DH/LB SA/LB TS/LB NSa/LB NSb/LB DH/SA DH/TS DH/NSa DH/NSb

Mean 1.0517 1.0243 1.0242 1.0250 1.0484 1.0267 1.0268 1.0260 1.0031
Min. 1.0114 1.0074 1.0074 1.0074 1.0114 0.9957 0.9956 0.9919 0.9919
Max. 1.2216 1.1098 1.1100 1.1192 1.2176 1.1056 1.1056 1.1055 1.0283
Avg. CPU time (s) – – – – – 8.3641 11.0777 0.3828 0.4081
Best time (s) – – – – – 0.6713 3.2662 0.0288 0.0447
Nbr. of problems
with ratio ≤ 1 0 0 0 0 0 7 8 8 27

Table 11
Comparison of algorithms on randomly generated problems using Set 2
Parameters Comparison with Lower Bound Comparison with Dobson heuristic
DH/LB SA/LB TS/LB NSa/LB NSb/LB DH/SA DH/TS DH/NSa DH/NSb

Mean 1.0503 1.0274 1.0272 1.0278 1.0491 1.0225 1.0227 1.0220 1.0011
Min. 1.0070 1.0051 1.0051 1.0051 1.0070 0.9754 0.9758 0.9722 0.9904
Max. 1.2336 1.0713 1.0708 1.0748 1.2054 1.2034 1.2034 1.1967 1.0234
Avg. CPU time (s) – – – – – 8.7303 11.3887 0.4297 0.4088
Best time (s.) – – – – – 0.8628 4.5018 0.0528 0.0590
Nbr. of problems
with ratio ≤ 1 0 0 0 0 0 8 12 13 31

Table 12
Comparison of algorithms on randomly generated problems using Set 3
Parameters Comparison with Lower Bound Comparison with Dobson heuristic
DH/LB SA/LB TS/LB NSa/LB NSb/LB DH/SA DH/TS DH/NSa DH/NSb

Mean 1.2550 1.1594 1.1592 1.1745 1.2301 1.0440 1.0443 1.0311 1.0089
Min. 1.0193 1.0111 1.0107 1.0123 1.0192 0.9272 0.9304 0.9304 0.9502
Max. 8.1570 5.3715 5.3720 5.4625 7.3938 1.5186 1.5184 1.4933 1.1032
Avg. CPU time (s) – – – – – 22.4303 22.1410 2.4722 5.0455
Best time (s) – – – – – 11.8694 14.5094 2.2650 4.8128
Nbr. of problems
with ratio ≤ 1 0 0 0 0 0 13 12 18 21

Table 13
Algorithm performance summary table
Heuristic Algorithm % Average deviation from LB % Average improvement over DH CPU time (s)
DH 11.90
SA 7.04 3.11 13.17
TS 7.02 3.13 14.87
NSa 7.58 2.64 1.09
NSb 10.92 0.44 1.95

Finally, we compared all the heuristics based on the average number of iterations used prior to converging to the best solution, as reported for the GA in Moon et al. (2002). The average number of iterations taken by each meta-heuristic to converge to the best solution is reported in Table 15. The results show that the number of iterations taken by each meta-heuristic is quite comparable to that of the GA, which requires about 300 iterations.

Table 14
A statistical paired comparison
Comparison Paired differences t df Sig.
Mean Std. deviation Std. error mean 95% CI
Lower Upper
SA-TS 0.018 0.297 0.024 −0.030 0.066 0.755 149 0.4517
SA-NSa −0.544 1.743 0.142 −0.826 −0.263 −3.825 149 0.0002
SA-NSb −3.884 16.979 1.386 −6.624 −1.145 −2.802 149 0.0058

[Fig. 1. Convergence comparison: time to best solution (s) for the SA and TS algorithms on data sets 1–3.]

Table 15
Iterations to converge
Meta-heuristics Iterations to best solution
NSa 66
NSb 90
TS 541
SA 301

7. Conclusions and research agenda

In this research, we presented the development of a new heuristic based on the Simulated Annealing (SA) algorithm to solve the ELSP. In a series of numerical experiments we showed that the algorithm is robust in its performance and converges faster than the best known Tabu Search algorithm for this problem. The robustness of the SA algorithm was tested using a statistical DOE under three distinct performance measures; the algorithm was found to be quite robust in maintaining its performance with respect to its control parameters. Numerical experiments were also carried out on the two well-known problems reported in Bomberger (1966) and Mallya (1992). The SA algorithm found the best known solution to Bomberger's (1966) problem and also determined an alternative solution to Mallya's (1992) problem of quality similar to that of the TS algorithm reported in Raza et al. (2006). Furthermore, following the experimental setup presented in Dobson (1987), an extensive computational experiment was carried out comparing the proposed SA algorithm with the existing algorithms DH, GA, NSa, NSb and TS. The computational results clearly show that the SA algorithm outperforms DH, GA, NSa and NSb, and is very competitive with the TS algorithm; moreover, the SA algorithm converged faster than TS on most test problems. Finally, a statistical analysis based on multiple paired comparisons was reported, which establishes evidence that the performance of SA is competitive with TS and superior to the rest of the heuristics tested in this study.
As previously noted, the ELSP has been well researched, yet many aspects of the problem related to uncertainty remain unexplored. An important extension of this study would be to examine the ELSP under the stochastic circumstances often observed in a manufacturing environment, such as random demand, machine failures and random production rates. Real-time lot scheduling and parallel-machine lot sizing would also be interesting areas for further investigation.

Acknowledgement

The authors would like to thank Carol-Ann Tetrault Sirsly for her careful editing of our paper.

References

Aarts, E. H. L., & Laarhoven, P. J. N. V. (1985). Statistical cooling: A general approach to combinatorial optimization problems. Philips
Journal of Research, 40, 193–226.
Allen, S. J. (1990). Production rate planning for two products sharing a single process facility: A real world case study. Production and
Inventory Management, 31, 24–29.
Aytug, H., Khouja, M., & Vergara, F. E. (2003). Use of genetic algorithms to solve production and operations management problems: A
review. International Journal of Production Research, 41, 3955–4009.
Ben-daya, M., & Al-Fawzan, M. (1998). A tabu search approach for the flow shop scheduling problem. European Journal of Operational
Research, 109(1), 88–95.
Bomberger, E. E. (1966). A dynamic programming approach to a lot size scheduling problem. Management Science, 12, 778–784.
Delporte, C., & Thomas, L. (1978). Lot sizing and sequencing for N products on one facility. Management Science, 23, 1070–1079.
Dobson, G. (1987). The economic lot-scheduling problem: Achieving feasibility using time-varying lot sizes. Operations Research, 35,
764–771.
Dobson, G. (1992). The cyclic lot scheduling problem with sequence-dependent setups. Operations Research, 40, 736–749.
Doll, C. L., & Whybark, D. C. (1973). An iterative procedure for the single-machine multi-product lot scheduling problem. Management
Science, 20, 50–55.
Eglese, R. W. (1990). Simulated annealing: A tool for operational research. European Journal of Operational Research, 46, 271–281.
Eilon, S. (1959). Economic batch-size determination for multi-product scheduling. Operations Research, 10, 217–227.
Elmaghraby, S. (1978). The economic lot scheduling problem (ELSP): Review and extension. Management Science, 24, 587–598.
Faaland, B. H., Schmitt, T. G., & Arreola-Risa, A. (2004). Economic lot scheduling with lost sales and setup times. IIE Transactions, 36,
629–640.
Feldmann, M., & Biskup, D. (2003). Single-machine scheduling for minimizing earliness and tardiness penalties by meta-heuristic
approaches. Computers & Industrial Engineering, 44, 307–323.
Forrest, S. (1985). Documentation for PRISONERS DILEMMA and NORMS programs that Use the Genetic Algorithm. Ann Arbor:
University of Michigan.
French, S. (1982). Sequencing and Scheduling: An Introduction to the Mathematics of the Job-Shop. Ellis Horwood Limited.
Gaafar, L. (2006). Applying genetic algorithms to dynamic lot sizing with batch ordering. Computers & Industrial Engineering, 51,
433–444.
Gallego, G. (1990). An extension to the class of easy economic lot scheduling problems. IIE Transactions, 22, 189–190.
Gallego, G. (1993). Reduced production rates in the economic lot scheduling problem. International Journal of Production Research, 31,
1035–1046.
Gallego, G., & Roundy, R. (1992). The economic lot scheduling problem with finite backorder costs. Naval Research Logistics, 39,
729–739.
Gallego, G., & Shaw, X. (1997). Complexity of the ELSP with general cyclic schedules. IIE Transactions, 29, 109–113.
Gallego, G., & Moon, I. (1992). The effect of externalizing setups in the economic lot scheduling problem. Operations Research, 40,
614–619.
Giri, B. C., Moon, I., & Yun, W. Y. (2003). Scheduling economic lot sizes in deteriorating production systems. Naval Research Logistics,
50, 650–661.
Glover, F. (1989). Tabu Search – Part I. ORSA Journal on Computing, 1, 190–206.
Glover, F. (1990). Tabu Search – Part II. ORSA Journal on Computing, 2, 4–32.
Glover, F., & Laguna, M. (1998). Tabu Search. Kluwer Academic Publishers.
Hanssmann, F. (1962). Operations Research in Production Planning and Control. New York: John Wiley.
Hsu, W. (1983). On the general feasibility test for scheduling lot sizes for several products on one machine. Management Science, 29,
93–105.

Hwang, H., Kim, D., & Kim, Y. (1993). Multiproduct economic lot size models with investment costs for set-up reduction and quality
improvement. International Journal of Production Research, 31, 691–703.
Jones, P., & Inman, R. (1989). When is the economic lot scheduling problem easy? IIE Transactions, 21, 11–20.
Khouja, M. (1997). The economic lot scheduling problem under volume flexibility. International Journal of Production Research, 48, 73–86.
Khouja, M., Michalewicz, Z., & Wilmot, M. (1998). The use of genetic algorithms to solve the economic lot size scheduling problem.
European Journal of Operational Research, 110, 509–524.
Kirkpatrick, S., Gelatt, C., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220, 671–680.
Kutner, M. H., Nachtsheim, C. J., Neter, J., & Li, W. (2005). Applied Linear Statistical Models. McGraw-Hill/Irwin.
Lyu, J., Gunasekaran, A., & Ding, J. H. (1996). Simulated annealing algorithm for solving the single machine early/tardy problem.
International Journal of Systems Science, 27, 605–610.
Mallya, R. (1992). Multi-product scheduling on a single machine: A case study. OMEGA: International Journal of Management Science,
20, 529–534.
Maxwell, W. L. (1964). The scheduling of economic lot sizes. Naval Research Logistics Quarterly, 11, 89–124.
Moon, D., & Christy, D. (1998). Determination of optimal production rates on a single facility with dependent mold lifespan. International
Journal of Production Economics, 54, 29–40.
Moon, I. (1994). Multiproduct economic lot size models with investment costs for setup reduction and quality improvements: Reviews and
extensions. International Journal of Production Research, 32, 2795–2801.
Moon, I., Gallego, G., & Simchi-Levi, D. (1991). Controllable production rates in a family production context. IIE Transactions, 30,
2459–2470.
Moon, I., Giri, B., & Choi, K. (2002). Economic lot scheduling problem with imperfect production processes and setup times. Journal of
Operational Research Society, 53, 620–629.
Moon, I., Hahm, J., & Lee, C. (1998). The effect of the stabilization period on the economic lot scheduling problem. IIE Transactions, 30,
1009–1017.
Raza, S. A., Akgunduz, A., & Chen, M. Y. (2006). A tabu search algorithm for solving economic lot scheduling problem. Journal of
Heuristics, 12, 413–426.
Rogers, J. (1958). A computational approach to the economic lot scheduling problem. Management Science, 4, 264–291.
Roundy, R. (1989). Rounding off to powers of two in continuous relaxation of capacitated lot sizing problems. Management Science, 35,
1433–1442.
Sait, S. M., & Youssef, H. (1999). Iterative Computer Algorithms with Applications in Engineering. IEEE Computer Society.
Silver, E. (1990). Deliberately slowing down output in a family production context. International Journal of Production Research, 28,
17–27.
Silver, E. (1993). Perspectives in operations management: Essays in honor of E.S. Buffa. In Modelling in support of continuous
improvements towards achieving world class operations (pp. 23–44). Dordrecht: Kluwer Academic Publishers.
Silver, E. (1995). Dealing with shelf life constraint in cyclic scheduling by adjusting both cycle time and production rate. International
Journal of Production Research, 33, 623–629.
Silver, E. A. (2004). An overview of heuristic solution methods. Journal of Operational Research Society, 55, 936–956.
Silver, E., Pyke, D., & Peterson, R. (1998). Inventory Management and Production Planning and Scheduling (3rd ed.). New York: John
Wiley.
Viswanathan, S., & Goyal, S. K. (1997). Optimal cycle time and production rate in a family production context with shelf life
considerations. International Journal of Production Research, 35, 1703–1711.
Wagner, B. J., & Davis, D. J. (2002). A search heuristic for the sequence-dependent economic lot scheduling. European Journal of
Operational Research, 141, 133–146.
White, S. R. (1984). Concepts of scale in simulated annealing. In Proceedings of the IEEE international conference on computer design (pp. 646–651).
Yao, M.-J., & Huang, J.-X. (2005). Solving the economic lot scheduling problem with deteriorating items using genetic algorithms. Journal
of Food Engineering, 70, 309–322.
Zipkin, P. H. (1991). Computing optimal lot sizes in the economic lot scheduling problem. Operations Research, 39, 56–63.
