Department of Control and Automation Engineering, Pontifical Catholic University of Paraná,
Imaculada Conceição 1155, Curitiba, PR, 80215-901 - Brazil
Paulo César Ribas
Marechal Hermes, 600 33, Curitiba, PR, 80530-230 - Brazil
Abstract
Searching for the absolute optimal solution of a Master Production Scheduling problem may demand an
effort most industries are not willing to bear, especially now that being agile has become mandatory in the
worldwide market. The use of a heuristic that generates a good solution in reasonable time therefore becomes
an attractive alternative. However, configuring the parameters of such techniques or heuristics is also not a
trivial task, since they involve a great number of usually conflicting objectives. Depending on its
configuration, a heuristic can generate a solution in short computer time but with poor quality or, on the
contrary, good solutions in unacceptable time. The use of statistical methods that facilitate the set-up of the
heuristic's main parameters therefore becomes necessary. Knowing which parameters are more important, that is,
which ones most affect the quality of the generated solution and which ones are irrelevant, is important for the
performance of the chosen technique. This paper presents how a statistical method called fractional factorial
analysis can be applied to the configuration of simulated annealing for the optimization of Master Production
Scheduling problems. Two production scheduling scenarios illustrate the use of the proposed method.
Keywords: Fractional factorial analysis, simulated annealing optimization, master production scheduling.
f(x) = c_1 · (EI_x / EI_max) + c_2 · (RNM_x / RNM_max) + c_3 · Σ_{i=1}^{r} Σ_{j=1}^{p} max(CU_ij / AC_ij - 1; 0) + c_4 · (BSS_x / BSS_max)
where:
x : current solution
r : number of resources
p : number of periods
k : number of products
th : total horizon span
EI_x: average ending inventory level for a solution x:

EI_x = Σ_{i=1}^{k} EI_i    (2)

EI_max: biggest EI found during the warm-up, from the initial population created.
RNM_x: average requirement not met for a solution x:

RNM_x = ( Σ_{i=1}^{k} Σ_{j=1}^{p} RNM_ij ) / th    (3)

RNM_max: biggest RNM found during the warm-up, from the initial population created.
CU_ij: capacity used at resource i during period j.
AC_ij: available capacity at resource i during period j.
BSS_x: average below-safety-stock level for a solution x:

BSS_x = ( Σ_{i=1}^{k} Σ_{j=1}^{p} max(SS_ij - EI_ij, 0) ) / th    (4)

BSS_max: biggest BSS found during the warm-up, from the initial population created.
SS_ij: safety stock for product i at period j.
The c_1, c_2, c_3 and c_4 coefficients are used to set the importance of each factor to the MPS quality.
Therefore, the appropriate definition of these coefficients is very important and depends on each company's
goals.
With this objective function, there are only four adjustable parameters (c_1, c_2, c_3 and c_4), which
facilitates its use and, at the same time, allows one to apply different policies by varying the combination of
parameters.
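As an illustration of how the four terms combine, the following is a minimal Python sketch of the objective function above. The dictionary layout of the solution, and the treatment of EI_i as the ending-inventory total of product i, are assumptions of this example, not the authors' implementation.

```python
def objective(sol, c, maxima, AC, SS, th):
    """Weighted cost of an MPS solution (lower is better); a sketch of the
    objective described above, not the authors' exact code.

    sol["EI"][i][j]  : ending inventory of product i in period j
    sol["RNM"][i][j] : requirement of product i not met in period j
    sol["CU"][i][j]  : capacity used at resource i in period j
    c                : weights (c1, c2, c3, c4)
    maxima           : {"EI", "RNM", "BSS"} maxima found during warm-up
    """
    k, p = len(sol["EI"]), len(sol["EI"][0])
    r = len(sol["CU"])

    EI = sum(sum(row) for row in sol["EI"])                      # eq. (2)
    RNM = sum(sum(row) for row in sol["RNM"]) / th               # eq. (3)
    BSS = sum(max(SS[i][j] - sol["EI"][i][j], 0)
              for i in range(k) for j in range(p)) / th          # eq. (4)
    overload = sum(max(sol["CU"][i][j] / AC[i][j] - 1, 0)
                   for i in range(r) for j in range(p))          # capacity excess

    return (c[0] * EI / maxima["EI"] + c[1] * RNM / maxima["RNM"]
            + c[2] * overload + c[3] * BSS / maxima["BSS"])
```

Increasing one weight c_i raises the relative penalty of its term only, which is how the different company policies mentioned above are expressed.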
3.2 Input information
For the master scheduling optimization, the proposed system takes into consideration as many parameters as
possible among those found in industrial environments:
Number and description of products;
Number and description of productive resources (production lines, workstations, machines, production
cells);
Number and description of time periods and duration of each period (periods with different durations
are allowed);
Initial (on-hand) inventories - product quantities at the beginning of the planning horizon;
Gross requirements - needed quantity per product per period, estimated from forecasts and
customers' orders;
Batch sizes - standard production lot sizes per product per period;
Safety inventory level per product per period;
Production rate - the quantity of a product a resource can manufacture per time unit;
Setup time per product, independent of the operation sequence;
Available capacity per resource per period.
3.3 Initial solution creation
Radhakrishnan and Ventura [15] emphasized the importance of using an initial solution of good quality for the
simulated annealing to converge rapidly and efficiently, decreasing computer effort and increasing the
probability of reaching a near-optimal solution.
With a good initial solution, the simulated annealing algorithm reaches better results in less computer time,
since there are fewer chances of wasting time searching solutions that are too far from the optimal. In this
study, two approaches to create the initial solution are considered: (a) it can use a given (external) MPS; (b) it
can create the initial solution from an internal heuristic based on the SPT (shortest processing time) rule,
which prioritizes product quantities requiring less processing time. The pseudo-code for this heuristic can be
found in [18].
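The authors' pseudo-code is in [18] and is not reproduced here; purely as an illustration of the SPT idea, the sketch below greedily fills a single resource, scheduling whole batches of the cheapest-to-produce products first. The data layout and the per-batch processing-time model are assumptions of this example.

```python
def spt_initial_solution(demand, proc_time, capacity, batch):
    """Greedy SPT-style seed plan (illustrative only; the heuristic in
    [18] handles multiple resources and richer data).

    demand[i][j] : units required of product i in period j
    proc_time[i] : hours to produce one batch of product i
    capacity[j]  : available hours in period j
    batch[i]     : batch size of product i
    Returns plan[i][j] = scheduled units (multiples of batch[i]).
    """
    k, p = len(demand), len(demand[0])
    order = sorted(range(k), key=lambda i: proc_time[i])  # SPT rule
    plan = [[0] * p for _ in range(k)]
    for j in range(p):
        left = capacity[j]
        for i in order:
            # schedule whole batches while demand and capacity remain
            while plan[i][j] < demand[i][j] and left >= proc_time[i]:
                plan[i][j] += batch[i]
                left -= proc_time[i]
    return plan
```

Because short-processing-time products are allocated first, the seed tends to satisfy as much demand as the capacity allows, which is the property that helps the SA converge faster.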
3.4 Factors that alter temperature
According to how the temperature is changed, two types of simulated annealing can be used:
homogeneous and non-homogeneous [17]. The homogeneous algorithm runs N iterations at a certain
temperature before changing it. In the non-homogeneous SA, the algorithm changes its temperature at every
accepted (new) solution found.
After a few initial experiments, it quickly became evident that the non-homogeneous annealing approach,
despite running much faster than the homogeneous one, does not present satisfactory results when
applied to the type of MPS problems considered. Therefore, only homogeneous SA was considered.
By using the homogeneous annealing process, one faces the problem of defining an appropriate value for the
parameter N. With a small N, the ability to escape from local optima is reduced; on the other hand, a large N
demands long computer time.
In this study N is proportional to the problem complexity and is calculated as
N = K x R x P (5)
where:
K: number of products
R: number of resources
P: number of periods
After N iterations, search restarts from the best solution found.
3.5 The initial temperature
In the literature, one can find different criteria for the selection of the initial temperature. In this work, one
hundred feasible solutions are randomly created, respecting each resource maximum available capacity, and
the initial temperature is considered the standard deviation calculated from these solutions.
Since at the beginning k = 1 and ΔE = T = s, from equation (1):

P(ΔE) = e^(-ΔE/kT) = e^(-s/s) = e^(-1) = 0.3679

Therefore, the way the initial temperature is calculated corresponds to a chance of about 37% of accepting a
worse solution at the initial temperature (this percentage decreases as the temperature cools off). This permits
the search method to quickly escape from local optima, moving to a smaller search space as the temperature
decreases.
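A minimal sketch of this warm-up procedure, assuming only that some sample_cost function returns the objective value of one randomly created feasible solution:

```python
import math
import random
import statistics

def initial_temperature(sample_cost, n=100, seed=0):
    """T0 as described above: the standard deviation of the objective over
    n randomly created feasible solutions (sample_cost is an assumed
    callable producing one random feasible plan's objective value)."""
    rng = random.Random(seed)
    costs = [sample_cost(rng) for _ in range(n)]
    return statistics.pstdev(costs)

# With T0 = s, a move that worsens the objective by exactly s is accepted
# with probability exp(-s / s) = exp(-1), matching the derivation above.
acceptance = math.exp(-1)
```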
3.6 Neighborhood
Choosing the neighborhood is an important aspect of simulated annealing. If the chosen neighbor solution is
too far from (too different from) the original, there is a risk of turning the method into a random solution
generator (performing, therefore, a random search). If it is too close, on the other hand, there is the risk of
getting trapped in a local optimum. In this work, a neighbor solution is created by randomly altering a
product quantity allocated to a resource at a random time period, adding or subtracting one production
batch size.
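The move described above can be sketched as follows; the plan[i][r][j] layout is an assumption of the example.

```python
import random

def neighbor(plan, batch, rng=None):
    """One SA move as described above: randomly pick a product i,
    resource r and period j, then add or subtract one batch of product i
    (clamping at zero).  plan[i][r][j] holds the quantity of product i
    produced on resource r in period j."""
    rng = rng or random.Random()
    # deep copy so the current solution is left untouched
    new = [[list(periods) for periods in resources] for resources in plan]
    i = rng.randrange(len(plan))
    r = rng.randrange(len(plan[0]))
    j = rng.randrange(len(plan[0][0]))
    step = batch[i] if rng.random() < 0.5 else -batch[i]
    new[i][r][j] = max(new[i][r][j] + step, 0)
    return new
```

Each move touches a single cell of the plan, so candidate solutions stay close to the current one, which is the "near neighborhood" later shown to perform best.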
3.7 Temperature change
Temperature change allows the heuristic to escape from local optimum points and to continuously
reach better solutions.
It is important that the temperature cools off smoothly for the method to work properly. There are two
commonly used ways to do this: logarithmic formulae specifically written for this purpose, or the use of a
decreasing factor close to one hundred percent.
Initial tests showed that a decreasing factor greater than 85% gave, for the type of problem
considered, better results than the logarithm-based formulae. Therefore, the change in temperature was
implemented as:
T_{n+1} = α · T_n    (with cooling coefficient α = 0.85, 0.90 or 0.98, depending on the scenario considered)    (6)
3.8 Reheating
As with the physical annealing process, to avoid premature solidification, one should sporadically reheat the
system. Here, after M unsuccessful trials to accept a solution, the temperature is raised, increasing the
probability of accepting a worse solution, which, as said previously, allows the system to escape from a local
optimum.
T_{n+1} = β · T_n    (with reheating coefficient β = 2 or 10)    (7)
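Putting Sections 3.4 to 3.8 together, a homogeneous SA skeleton with geometric cooling, reheating and restart-from-best might look like the sketch below. Parameter names and default values are illustrative assumptions, not the authors' code.

```python
import math
import random

def homogeneous_sa(x0, cost, neighbor, t0, n_iter=50, alpha=0.98,
                   beta=10.0, max_fail=30, max_reheats=7, t_min=1e-3,
                   rng=None):
    """Homogeneous SA sketch: n_iter moves per temperature level,
    cooling T <- alpha*T (eq. 6), reheating T <- beta*T after max_fail
    consecutive rejections (eq. 7), restarting each level from the best
    solution found, and stopping after max_reheats reheatings or when
    the temperature becomes negligible."""
    rng = rng or random.Random(0)
    x = best = x0
    t, fails, reheats = t0, 0, 0
    while t > t_min and reheats <= max_reheats:
        for _ in range(n_iter):                # N iterations at this level
            cand = neighbor(x, rng)
            delta = cost(cand) - cost(x)
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                x, fails = cand, 0             # accept the move
                if cost(x) < cost(best):
                    best = x
            else:
                fails += 1
                if fails == max_fail:
                    t *= beta                  # reheat (eq. 7)
                    reheats += 1
                    fails = 0
                    break
        else:
            t *= alpha                         # cool (eq. 6)
            x = best                           # restart from best found
    return best
```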
3.9 Stopping criteria
The system stops running when it reaches the best possible solution or when it reaches a local optimum it
cannot escape from.
The best possible solution is regarded as the solution whose objective function value is minimal, a criterion
quite difficult to meet for most MPS problems because of the conflicting goals composing the objective
function (see Section 3.1).
The other stopping criterion regards reaching the maximum number of consecutive reheatings without
improvement of the objective function.
4 2^k fractional factorial design
The 2^k factorial experiment analysis is based on the variation of k factors, whose increase or decrease results
in 2^k different experiment configurations. Analyzing the results allows one to quantify the influences of the
different parameters, separately or in association [19].
The number of experiment combinations grows exponentially with the value of k. Alternatively, to reduce the
total number of combinations, one can use 2^k fractional factorial experiments, suggested when a large
number of factors affecting the response exists [20].
In the 2^k factorial experiments, all first order influences (influence of factor A, B, C, etc.), second order
influences (influence of A and B, called AB, influence AC, BC, etc.), and third order influences (ABC, ABD,
BCD, etc.) are calculated with their means and standard deviations. However, in the 2^k fractional factorial
design, one calculates the low order effects added to the high order effects (for instance, A + BCDEF). Since
an influence usually lowers exponentially with its order, it is possible to estimate which factors have greater
or lesser influence on the results [19].
This type of statistical analysis can be used in several different fields. Mulligan and Mackworth [21], for
instance, used 2^k fractional factorial design for the analysis and evaluation of a robotic system applied to
intelligent tasks. Santos and Ludermir [22] used 2^k factorial experiment analyses to identify the main
parameters of a neural network, as a first step to the optimization of the net. Hardy et al. [23] applied 2^k
fractional factorial design as a tool to determine factors that influence the preparation of (Bi,La)4Ti3O12 in
chemical reactions. Kleijnen [24] considered 2^k fractional factorial design for the validation, optimization
and analysis of simulation models. Wang and Lin [25] applied this analysis to the identification of relevant
factors that can improve industrial processes.
In this study, 2^k fractional factorial design is used to identify the SA parameters that affect the solution
(MPS) quality.
5 Experiments performed
For the analysis of the influence of the SA parameters on the optimization of master production scheduling
problems through 2^k fractional factorial design, the parameters (factors) were classified as follows:
Parameter A: Reheating occurrence;
Parameter B: Maximum number of reheatings;
Parameter C: Reheating coefficient;
Parameter D: Type of the initial solution: use of an internal heuristic or use of a given external solution;
Parameter E: Neighborhood type: near or distant (jumping step);
Parameter F: Cooling coefficient; and
Parameter G: Maximum number of iterations correction factor.
Parameters B and C only make sense when reheating takes place (parameter A). The parameters were
classified this way so that it would be possible to vary them using only two levels, which characterizes the
experiment as a 2^k factorial design.
The analysis was conducted on two problems (or scenarios). In the first scenario, the simplest one (from this
point on simply called Problem 1), a master plan had to be developed for four products, four productive
resources (machines, production lines, production cells, etc.) and a planning horizon of seven periods (days,
weeks, months, or any combination). A more complex scenario (Problem 2) considered twenty products, four
productive resources, and thirteen time buckets. For the sake of conciseness, detailed input data are not shown.
5.1 Problem 1 experiments
For this scenario, the factor levels used are presented in Table 2. For each configuration, 50 runs
(executions) were made (this number of runs was considered enough for the 95% confidence level (CL)
adopted). Since the number of factors is reasonably large, a fractional experiment of order 2^(7-2) was
chosen. This resulted in 32 different factor combinations (also called design points [20]), shown in Table 3.
[TABLE 2 ABOUT HERE.]
Combinations for the experiments were taken from Werkema and Aguiar [19], where, for a 2^(7-2) fractional
factorial design, F = ABCD and G = ABDE. With these combinations, first order effects are compounded with
higher order effects. These effects are deduced as follows.
A^2 = B^2 = C^2 = D^2 = E^2 = F^2 = G^2 = I

F = ABCD, hence I = F^2 = ABCDF
G = ABDE, hence I = G^2 = ABDEG

I = I^2 = ABCDF · ABDEG = A^2 B^2 C D^2 E F G = CEFG

Therefore:

I = ABCDF = ABDEG = CEFG    (8)

And:

A · I = A · ABCDF = A · ABDEG = A · CEFG

This leads to:

A = BCDF = BDEG = ACEFG    (9)

So, when the A effect is calculated, the effects of BCDF, BDEG and ACEFG are, in fact, also calculated. This
method is valid since the third (or higher) order effects usually have negligible values compared to the first
order effects. Applying the same idea to the other main effects, it can be shown that:

B = ACDF = ADEG = BCEFG    (10)
C = ABDF = ABCDEG = EFG    (11)
D = ABCF = ABEG = CDEFG    (12)
E = ABCDEF = ABDG = CFG    (13)
F = ABCD = ABDEFG = CEG    (14)
G = ABCDFG = ABDE = CEF    (15)
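The alias relations above can be generated mechanically: multiplying effect labels, with every squared letter cancelling, is just a parity (mod-2) product over the letters. A small sketch:

```python
def mod2_product(*words):
    """Multiply effect labels under the rule X*X = I: a letter survives
    only if it appears an odd number of times across all words."""
    counts = {}
    for word in words:
        for ch in word:
            counts[ch] = counts.get(ch, 0) + 1
    return "".join(sorted(ch for ch, n in counts.items() if n % 2 == 1))

def aliases(effect, defining_words):
    """Effects confounded with `effect`, given the defining words of the
    design (ABCDF and ABDEG for the 2^(7-2) design used here)."""
    # build the defining-relation subgroup: all products of the words
    subgroup = {""}
    for word in defining_words:
        subgroup |= {mod2_product(w, word) for w in subgroup}
    return sorted(mod2_product(effect, w) for w in subgroup if w)
```

For example, aliases("A", ["ABCDF", "ABDEG"]) reproduces equation (9), returning BCDF, BDEG and ACEFG.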
[TABLE 3 ABOUT HERE.]
Processing the results of the 50 replications for each experiment shown in Table 3, the mean and variance of
the objective function can be calculated. Table 4 presents these results for the 95% confidence level (CL) used.
[TABLE 4 ABOUT HERE.]
To calculate the effect of each parameter (A, B, C, D, E, F and G), it is necessary to bring in the concept of
contrast. A contrast is a linear combination of the average totals of the response variable obtained from the
experiment and is given by:

Contrast_P = Σ_{i=1}^{a} c_i · ȳ_i    (16)
where:
Contrast_P: contrast for factor P;
a: number of combinations of the 2^(k-j) experiment;
ȳ_i: average total of the response variable obtained from the i-th combination (design point);
c_i: coefficient indicating whether ȳ_i is added or subtracted for the i-th combination, according to
Table 5 (based on Table 3). In case one wishes to calculate contrasts of higher orders, one just needs
to multiply the coefficient signs of the lower orders.
From the contrast, one can calculate the expected value of the effect of factor P (E_P) as:

E_P = (2 / 2^(k-j)) · Contrast_P    (17)

where:
2^(k-j): number of used combinations.
Werkema and Aguiar [19] also indicate a way to estimate the combined variance value for this case and the
respective calculation of the standard deviation:

s^2 = (1 / 2^(k-j)) · Σ_{i=1}^{2^(k-j)} s_i^2    (18)

SD(Effect) = 2 · sqrt( s^2 / (2^(k-j) · n) )    (19)
where:
s^2: estimate of the combined variance;
s_i^2: variance of the i-th experiment combination;
SD(Effect): effect standard deviation;
n: number of executions for each experiment combination;
2^(k-j): number of used combinations.
The confidence interval at a 95% confidence level for a main effect or for a compound effect can be
approximated by E_P ± 2·SD(Effect).
[TABLE 5 ABOUT HERE.]
The main effect results were calculated using equations (16), (17), (18) and (19) and are shown in Table 6.
[TABLE 6 ABOUT HERE.]
Within a confidence level of 95%, an effect is considered influential if its minimum and maximum values have
the same sign, that is, one can affirm with 95% confidence that the effect caused by the parameter is not null,
since the interval does not include zero. This is a minimization problem; therefore, if the effect is negative, the
parameter at value 1 produces better results than at value 0. A positive effect means the opposite: value 0
produces better (smaller) results than value 1.
Results were shown to be sensitive to all parameters (last column of Table 6). The effect of the neighborhood
parameter was positive, which means that working with a close neighborhood (0) gives better solutions than
a distant one (1); the correction factor for the maximum number of iterations had a slightly positive
influence, meaning that deeper exploration at a given temperature does not improve the final result (a
greater number of iterations at a certain temperature does not mean a better global result). The effects of
the remaining coefficients were negative, which leads to the following conclusions:
(a) The use of reheating is favorable in the search for a solution, which means it should be included in
the SA heuristic when applied to MPS optimization problems.
(b) Making a reasonable number of reheatings is better than few reheatings (in this study, up to seven
reheatings is preferable to four).
(c) Using a high reheating coefficient generates better results than a low coefficient (in the example, a
coefficient of value ten influences more than one of value two).
(d) The use of an internal heuristic to create the initial solution was shown to generate better solutions
than the use of a given external initial solution.
(e) Configurations with a cooling coefficient of 0.98 produced better solutions, that is, with slower
cooling the SA reaches better solutions.
These results are in accordance with the definitions of the annealing process and with the simulated annealing
theory, which state that slow cooling with sporadic reheatings generates solutions of higher quality.
5.2 Problem 2 experiments
For this scenario, the ranges for the cooling coefficient (parameter F) and for the maximum number of
iterations correction factor (parameter G) were modified; they are shown in Table 7.
[TABLE 7 ABOUT HERE.]
A further reduction of the original fractional experiment was also considered: a 2^(7-3) design was used
instead of a 2^(7-2). This resulted in only 16 different configurations instead of the 32 previously used. Also,
25 replications were used for each of these configurations instead of 50 (to speed up the analysis process).
Combinations for the experiments were taken from Werkema and Aguiar [19], where, for a 2^(7-3) fractional
factorial design, E = ABC, F = BCD and G = ACD, generating the combinations shown in Table 8.
[TABLE 8 ABOUT HERE.]
The identity for the 2^(7-3) experiment is obtained as follows:

E = ABC, hence I = E^2 = ABCE
F = BCD, hence I = F^2 = BCDF
G = ACD, hence I = G^2 = ACDG

I = ABCE · BCDF = A B^2 C^2 D E F = ADEF
I = ABCE · ACDG = A^2 B C^2 D E G = BDEG
I = BCDF · ACDG = A B C^2 D^2 F G = ABFG
I = ABCE · BCDF · ACDG = A^2 B^2 C^3 D^2 E F G = CEFG

I = ABCE = BCDF = ACDG = ADEF = BDEG = ABFG = CEFG    (20)

The higher order effects compounded with the first order influences are deduced as follows:

A · I = A · ABCE = A · BCDF = A · ACDG = A · ADEF = A · BDEG = A · ABFG = A · CEFG
A = BCE = ABCDF = CDG = DEF = ABDEG = BFG = ACEFG    (21)

B = ACE = CDF = ABCDG = ABDEF = DEG = AFG = BCEFG    (22)
C = ABE = BDF = ADG = ACDEF = BCDEG = ABCFG = EFG    (23)
D = ABCDE = BCF = ACG = AEF = BEG = ABDFG = CDEFG    (24)
E = ABC = BCDEF = ACDEG = ADF = BDG = ABEFG = CFG    (25)
F = ABCEF = BCD = ACDFG = ADE = BDEFG = ABG = CEG    (26)

G · I = G · ABCE = G · BCDF = G · ACDG = G · ADEF = G · BDEG = G · ABFG = G · CEFG    (27)
G = ABCEG = BCDFG = ACD = ADEFG = BDE = ABF = CEF    (28)
After running the experiments, the mean and variance were obtained (Table 9).
[TABLE 9 ABOUT HERE.]
The coefficient values for this problem are shown in Table 10, and the effect results, obtained from
equations (16), (17), (18) and (19), are shown in Table 11.
[TABLE 10 ABOUT HERE.]
[TABLE 11 ABOUT HERE.]
These results confirmed the effects of the simulated annealing parameters already observed in Problem 1:
Parameters related to the reheating procedure improve the solution quality.
There is a strong dependence on the initial solution, which means it is quite difficult to search the
whole solution space.
Using a close neighborhood brings better results.
Slower cooling improves the solution quality.
And, regarding the correction of the maximum number of iterations, once again, it does not
seem to improve the solution quality.
6 Conclusions
This study showed how fractional factorial design can be applied to the parameterization of the simulated
annealing meta-heuristic applied to the optimization of master production scheduling problems. The main SA
configuration parameters considered were: occurrence of reheating, maximum number of reheatings,
reheating coefficient, type of initial solution, type of neighborhood (near or far), cooling coefficient, and the
correction factor for the maximum number of iterations.
Two production scenarios were used for the experiments and, for this type of SA application, the results
showed that the SA parameters considered significantly affect the meta-heuristic performance when applied to
MPS problems:
Reheating and its associated parameters, such as the number of reheatings and the reheating rate.
The cooling-off process, which confirmed that working with slow temperature cooling
produces better solutions.
The neighborhood: in this case, it was also confirmed that jumping among close solutions, making
the search more concentrated, improves the result quality.
The initial solution used, which affects the quality and the search space region of the solutions,
contributing to a convergence of better quality.
From these results, one should focus on such parameters to develop further improvements to the use of
simulated annealing for the optimization of master production scheduling problems. Other parameters can
probably be fixed at reasonable values and omitted from further detailed consideration; this needs further
investigation.
The method used was considered adequate for the SA configuration when applied to this type of problem;
however, one can suppose that its application to other types of production planning and control optimization
problems is possible, although this needs investigation. In fact, production scheduling has already been solved
using SA. Other artificial intelligence techniques, such as genetic algorithms, can also consider a
similar approach for their configuration.
Acknowledgement
The authors would like to thank the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES,
Brazil) for funding this research project (grant # 40003019010P1).
References
[1] American Production and Inventory Control Society (APICS) dictionary. 9th Edition, 1998.
[2] FERNANDES, C. A. O.; CARVALHO, M. F. H.; and FERREIRA, P. A. V. Planejamento
multiobjetivo da produção da manufatura através de programação alvo. 13º Congresso Brasileiro de
Automática, Vol. 1, 67-72, Florianópolis, Brasil, 2000.
[3] BONOMI, E.; and LUTTON, J. The n-city travelling salesman problem: statistical mechanics and the
metropolis algorithm. SIAM Review, 26, 551-568, 1984.
[4] GAREY, M.; and JOHNSON, D. Computers, complexity and intractability. A guide to the theory of
NP-completeness. Freeman, San Francisco, USA, 1979.
[5] FRANÇA, P. M.; ARMENTANO, V. A.; BERRETTA, R. E.; and CLARK, A. R. A heuristic for lot
sizing in multi-stage systems. Unicamp, Campinas, Brasil, 1997.
[6] RODRIGUES, L. F.; and BERRETTA, R. E. Evolutionary meta-heuristics for capacitated lot-sizing
problems in multi-stage systems. Anais do XXXII Simpósio Brasileiro de Pesquisa Operacional, p.
817-834, Viçosa, MG, 2000.
[7] CLARK, A. R. Approximate combinatorial optimization models for large-scale production lot sizing
and scheduling with sequence-dependent setup times. IV ALIO/EURO Workshop on Applied
Combinatorial Optimization, Pucón, Chile, 2002.
[8] ARAÚJO, S. A.; and CLARK, A. R. Um problema de programação da produção numa fundição. 23º
Simpósio Brasileiro de Pesquisa Operacional, Campos do Jordão, Brasil, 2001.
[9] STAGGEMEIER, A. T.; and CLARK, A. R. A survey of lot sizing and scheduling models. 23º
Simpósio Brasileiro de Pesquisa Operacional, Campos do Jordão, SP, Brasil, 2001.
[10] METROPOLIS, N.; ROSENBLUTH, A.; ROSENBLUTH, M.; TELLER, A.; and TELLER, E.
Equations of state calculations by fast computing machines. J. Chemical Physics, vol. 21, 1087-1091,
1953.
[11] KIRKPATRICK, S.; GELATT, C. D. JR.; and VECCHI, M. P. Optimization by simulated annealing.
Science, vol. 220, no. 4598, 671-680, 1983.
[12] BONOMI, E.; and LUTTON, J. The n-city travelling salesman problem: statistical mechanics and the
metropolis algorithm. SIAM Review, 26, 551-568, 1984.
[13] MCLAUGHLIN, M. P. Simulated annealing. Dr. Dobb's Journal, 26-37, 1989.
[14] CONNOLLY, D. T. An improved annealing scheme for the quadratic assignment problem. European
Journal of Operational Research, vol. 46, 93-100, 1990.
[15] RADHAKRISHNAN, S.; and VENTURA, J. A. Simulated annealing for parallel machine scheduling
with earliness-tardiness penalties and sequence-dependent setup times. International Journal of
Production Research, vol. 38, no. 10, 2233-2252, 2000.
[16] MOCCELLIN, J. V.; DOS SANTOS, M. O.; and NAGANO, N. S. Um método heurístico busca tabu -
simulated annealing para flowshops permutacionais. 23º Simpósio Brasileiro de Pesquisa
Operacional, Campos do Jordão, SP, Brasil, 2001.
[17] ZOLFAGHARI, S.; and LIANG, M. Comparative study of simulated annealing, genetic algorithms
and tabu search for solving binary and comprehensive machine-grouping problems. International
Journal of Production Research, vol. 40, no. 9, 2141-2158, 2002.
[18] VIEIRA, G. E.; and RIBAS, P. C. A new multi-objective optimization method for master production
scheduling problems using simulated annealing. Accepted for publication at the International Journal
of Production Research, 2004.
[19] WERKEMA, M. C. C.; and AGUIAR, S. Planejamento e análise de experimentos: como identificar e
avaliar as principais variáveis influentes em um processo. Belo Horizonte: Fundação Christiano Ottoni,
1996.
[20] LAW, A. M.; and KELTON, W. D. Simulation modeling & analysis. New York, USA: McGraw-Hill,
1991.
[21] MULLIGAN, J.; and MACKWORTH, A. K. Experimental task analysis. Proceedings of the 1997
IEEE International Conference on Robotics and Automation, Albuquerque, New Mexico, USA, 1997.
[22] SANTOS, M. S.; and LUDERMIR, T. B. Using factorial design to optimise neural networks.
International Joint Conference on Neural Networks 1999, Washington, 1999.
[23] HARDY, A.; VANHOYLAND, G.; VAN BAEL, M.; MULLENS, J.; and VAN POUCKE, L. A
statistical approach to the identification of determinant factors in the preparation of phase pure
(Bi,La)4Ti3O12 from an aqueous citrate gel. Journal of the European Ceramic Society, no. 24,
2575-2581, 2004.
[24] KLEIJNEN, J. P. C. An overview of the design and analysis of simulation experiments for sensitivity
analysis. European Journal of Operational Research, 2004. In press.
[25] WANG, P. C.; and LIN, D. F. Dispersion effects in signal-response data from fractional factorial
experiments. Computational Statistics & Data Analysis, vol. 38, 95-111, 2001.
Table 1 - An example of MPS for product A
Per 1 Per 2 Per 3 Per 4
On Hand 100
Initial Inventory 100 75 125 0
Batch Size 100 100 100 100
Gross Requirements 225 250 350 200
Safety Stock 100 100 100 100
Net Requirements 300 300 400 200
MPS Resource 1 150 150 200 150
MPS Resource 2 50 150 0 100
Total MPS 200 300 200 250
Ending Inventory 75 125 0 50
Requirements met 225 250 325 200
Requirements not met 0 0 25 0
Service level 1,00 1,00 0,93 1,00
Below safety stock 25 0 100 50
Resource Capacity (Hours)
Resource 1 30 40 50 40
Resource 2 30 40 0 35
Table 2 - Parameter values for Problem 1
Parameter Symbol Used Value 0 Value 1
Reheating occurrence A No reheating With reheating
Number of reheatings B 4 7
Reheating coefficient C 2 10
Type of the initial solution D External Internal heuristic
Neighborhood type E Near Distant
Cooling coefficient F 0.90 0.98
Number of iterations correction factor G 1 2
Table 3 - Experiment combinations for Problem 1
Experiment (design point) A B C D E F=ABCD G=ABDE
fg 0 0 0 0 0 1 1
a 1 0 0 0 0 0 0
b 0 1 0 0 0 0 0
abfg 1 1 0 0 0 1 1
cg 0 0 1 0 0 0 1
acf 1 0 1 0 0 1 0
bcf 0 1 1 0 0 1 0
abcg 1 1 1 0 0 0 1
d 0 0 0 1 0 0 0
adfg 1 0 0 1 0 1 1
bdfg 0 1 0 1 0 1 1
abd 1 1 0 1 0 0 0
cdf 0 0 1 1 0 1 0
acdg 1 0 1 1 0 0 1
bcdg 0 1 1 1 0 0 1
abcdf 1 1 1 1 0 1 0
ef 0 0 0 0 1 1 0
aeg 1 0 0 0 1 0 1
beg 0 1 0 0 1 0 1
abef 1 1 0 0 1 1 0
ce 0 0 1 0 1 0 0
acefg 1 0 1 0 1 1 1
bcefg 0 1 1 0 1 1 1
abce 1 1 1 0 1 0 0
deg 0 0 0 1 1 0 1
adef 1 0 0 1 1 1 0
bdef 0 1 0 1 1 1 0
abdeg 1 1 0 1 1 0 1
cdefg 0 0 1 1 1 1 1
acde 1 0 1 1 1 0 0
bcde 0 1 1 1 1 0 0
abcdefg 1 1 1 1 1 1 1
Table 4 - Summary of the Problem 1 results
Experiment Objective Function (Mean) Objective Function (Variance)
fg 123,07 64,27
a 128,85 102,75
b 144,85 83,18
abfg 113,10 57,74
cg 147,28 77,31
acf 113,89 55,81
bcf 122,71 72,80
abcg 120,54 93,35
d 50,77 5,96
adfg 39,38 3,97
bdfg 40,86 4,49
abd 40,70 6,51
cdf 41,74 6,57
acdg 39,95 5,85
bcdg 50,20 4,31
abcdf 37,73 4,35
ef 140,07 62,38
aeg 146,66 84,79
beg 157,62 47,51
abef 121,73 73,30
ce 158,01 52,44
acefg 118,78 68,23
bcefg 139,23 45,16
abce 125,26 76,65
deg 54,32 1,92
adef 41,29 9,30
bdef 48,65 5,73
abdeg 49,36 11,76
cdefg 48,41 5,51
acde 44,83 12,64
bcde 54,20 3,50
abcdefg 38,76 4,71
Table 5 - Coefficients for the calculation of Problem 1 contrasts
Experiment A B C D E F=ABCD G=ABDE
fg - - - - - + +
a + - - - - - -
b - + - - - - -
abfg + + - - - + +
cg - - + - - - +
acf + - + - - + -
bcf - + + - - + -
abcg + + + - - - +
d - - - + - - -
adfg + - - + - + +
bdfg - + - + - + +
abd + + - + - - -
cdf - - + + - + -
acdg + - + + - - +
bcdg - + + + - - +
abcdf + + + + - + -
ef - - - - + + -
aeg + - - - + - +
beg - + - - + - +
abef + + - - + + -
ce - - + - + - -
acefg + - + - + + +
bcefg - + + - + + +
abce + + + - + - -
deg - - - + + - +
adef + - - + + + -
bdef - + - + + + -
abdeg + + - + + - +
cdefg - - + + + + +
acde + - + + + - -
bcde - + + + + - -
abcdefg + + + + + + +
Table 6 - Main effects for Problem 1
Effect Mean Combined Variance Minimum Value (CL=95%) Maximum Value (CL=95%) Is the effect significant (influent)?
A -12,57 37,96 -13,09 -12,06 yes
B -1,99 37,96 -2,50 -1,47 yes
C -2,49 37,96 -3,00 -1,97 yes
D -87,53 37,96 -88,05 -87,01 yes
E 8,22 37,96 7,71 8,74 yes
F -11,50 37,96 -12,02 -10,98 yes
G 0,77 37,96 0,25 1,28 yes
Table 7 - Parameter values for Problem 2
Parameter Symbol Used Value 0 Value 1
Reheating occurrence A No reheating With reheating
Number of reheatings B 4 7
Reheating coefficient C 2 10
Type of the initial solution D External Internal heuristic
Neighborhood type E Near Distant
Cooling coefficient F 0.85 0.95
Number of iterations correction factor G 0.1 0.2
Table 8 - Experiment combinations for Problem 2
Experiment A B C D E=ABC F=BCD G=ACD
(1) 0 0 0 0 0 0 0
aeg 1 0 0 0 1 0 1
bef 0 1 0 0 1 1 0
abfg 1 1 0 0 0 1 1
cefg 0 0 1 0 1 1 1
acf 1 0 1 0 0 1 0
bcg 0 1 1 0 0 0 1
abce 1 1 1 0 1 0 0
dfg 0 0 0 1 0 1 1
adef 1 0 0 1 1 1 0
bdeg 0 1 0 1 1 0 1
abd 1 1 0 1 0 0 0
cde 0 0 1 1 1 0 0
acdg 1 0 1 1 0 0 1
bcdf 0 1 1 1 0 1 0
abcdefg 1 1 1 1 1 1 1
Table 9 - Summary of the Problem 2 results
Experiment Objective Function (Mean) Objective Function (Variance)
(1) 255,38 779,13
aeg 330,27 32,23
bef 327,16 19,24
abfg 56,43 1,66
cefg 324,71 50,11
acf 55,50 0,69
bcg 247,62 411,13
abce 254,89 1.247,35
dfg 59,19 1,73
adef 57,55 2,86
bdeg 58,13 0,62
abd 52,17 8,56
cde 58,25 0,63
acdg 49,17 2,30
bcdf 59,65 1,50
abcdefg 47,18 3,41
Table 10 - Coefficients for the calculation of Problem 2 contrasts
Experiment A B C D E=ABC F=BCD G=ACD
(1) - - - - - - -
aeg + - - - + - +
bef - + - - + + -
abfg + + - - - + +
cefg - - + - + + +
acf + - + - - + -
bcg - + + - - - +
abce + + + - + - -
dfg - - - + - + +
adef + - - + + + -
bdeg - + - + + - +
abd + + - + - - -
cde - - + + + - -
acdg + - + + - - +
bcdf - + + + - + -
abcdefg + + + + + + +
Table 11 - Main effects for Problem 2
Effect Mean Combined Variance Minimum Value (CL=95%) Maximum Value (CL=95%) Is the effect significant (influent)?
A -60,87 160,19 -63,03 -58,70 yes
B -10,85 160,19 -13,01 -8,68 yes
C -12,41 160,19 -14,58 -10,25 yes
D -176,33 160,19 -178,50 -174,17 yes
E 77,88 160,19 75,71 80,04 yes
F -39,81 160,19 -41,98 -37,65 yes
G 6,52 160,19 4,35 8,68 yes