
A comparison of several heuristic algorithms for solving high dimensional optimization problems


Emmanuel Karlo Nyarko
J. J. Strossmayer University of Osijek,
Faculty of Electrical Engineering, Department of Automation and Process Computing
Kneza Trpimira 2B, 31000 Osijek, Croatia
nyarko@etfos.hr
Robert Cupec
J. J. Strossmayer University of Osijek,
Faculty of Electrical Engineering, Department of Automation and Process Computing
Kneza Trpimira 2B, 31000 Osijek, Croatia
robert.cupec@etfos.hr
Damir Filko
J. J. Strossmayer University of Osijek,
Faculty of Electrical Engineering, Department of Automation and Process Computing
Kneza Trpimira 2B, 31000 Osijek, Croatia
damir.filko@etfos.hr

Abstract: The number of heuristic optimization algorithms has exploded over the last decade, with new methods being proposed constantly. A recent overview of existing heuristic methods has listed over 130 algorithms. The majority of these optimization algorithms have been designed and applied to solve real-parameter function optimization problems, each claiming to be superior to other methods in terms of performance. In this paper, three heuristic algorithms are systematically analyzed and tested in detail for real-parameter optimization problems, especially those involving a large number of parameters. Three traditional methods, i.e., genetic algorithms (GA), particle swarm optimization (PSO) and differential evolution (DE), are compared in terms of accuracy and runtime, using several high dimensional standard benchmark functions and real world problems.

Keywords: heuristic optimization, high dimensional optimization, optimization techniques, nature-inspired algorithms

1. INTRODUCTION

Optimization is the process of minimizing or maximizing a goal (or goals) while taking existing constraints into consideration. Optimization algorithms are basically iterative in nature and, as such, the quality of an optimization algorithm is determined by the quality of the result obtained in a finite amount of time. Global optimization algorithms can generally be divided into two categories: deterministic and probabilistic algorithms [1]. The main difference between the two categories is that deterministic algorithms are designed in such a way that the optimal solution is always found in a finite amount of time. Thus, deterministic algorithms can only be implemented in situations where the search space can be explored efficiently. In situations where the search space cannot be explored efficiently, e.g. a high dimensional search space, implementing a deterministic algorithm might result in an exhaustive search, which would be infeasible due to time constraints. In such situations, probabilistic algorithms are used. Probabilistic algorithms generally optimize a problem by iteratively trying to improve a candidate solution with respect to a given measure of quality. They make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, probabilistic algorithms provide no guarantee of an optimal solution being found, only a good solution in a finite amount of time.

Examples of deterministic optimization algorithms include the pattern search or direct search by Hooke and Jeeves [2], the Nelder-Mead method [3] and the Branch and Bound algorithm [4], while examples of probabilistic algorithms include Genetic Algorithms (GA) [5,6], Differential Evolution (DE) [7,8], Particle Swarm Optimization (PSO) [9,10] and Ant Colony Optimization (ACO) [11,12], to name just a few.

Heuristics used in global optimization are functions or methods that help decide which solution candidate is to be examined or tested next, or how the next solution candidate can be produced. Deterministic algorithms usually employ heuristics in order to define the processing order of the solution candidates. Probabilistic methods, on the other hand, may only consider those elements of the search space in further computations that have been selected by the heuristic [1]. In this paper, the term heuristic algorithms refers to probabilistic algorithms employing heuristic methods.

Real-world optimization problems are often very challenging to solve, and are often NP-hard. Thus, heuristic algorithms are often employed, and many heuristic algorithms using various optimization techniques have been developed to deal with these challenging problems. The number of heuristic optimization algorithms has exploded over the last decade, with new methods being proposed constantly. A recent overview of existing heuristic methods has listed over 130 algorithms [13]. These algorithms can be classified into four main groups: biology-, physics-, chemistry- and mathematics-based algorithms, depending on the source of inspiration for the researchers. The largest group of heuristic optimization algorithms is biology-based, i.e., bio-inspired. Two of the most important subsets of heuristic algorithms, which coincidentally are bio-inspired, are Evolutionary Algorithms (EA) and Swarm Intelligence (SI). GA and DE are the most well-known evolutionary algorithms, while PSO is a well-known swarm intelligence algorithm.

The main focus of this paper is to analyze, test and compare GA, DE and PSO in detail in solving high dimensional real-parameter optimization problems. All tests and analyses were conducted using Matlab. The rest of this paper is structured as follows. In Section 2, a description of the optimization problem is provided. An overview of the heuristic algorithms GA, DE and PSO is provided in Section 3, while the test results obtained while solving three examples of high dimensional real-parameter optimization problems are given and analyzed in Section 4. Finally, the paper is concluded in Section 5.

2. PROBLEM DESCRIPTION

The problem considered in this paper can be formulated as follows. Given an objective function

    f : ℝ^D → Y,    (1)

where D ∈ ℕ and Y ⊆ ℝ, one has to estimate the optimal parameter vector x* such that

    f(x*) = max_{x ∈ ℝ^D} f(x),    (2)

where x = (x_1, ..., x_D) represents a vector of real parameters of dimension D. Since min f(x) = −max(−f(x)), the restriction to maximization is without loss of generality. The domains of the real parameters are defined by their lower and upper bounds: [low_j, up_j], j = 1, 2, ..., D. In practice, no a priori knowledge of the objective function exists, and it can generally be assumed that the objective function is nonlinear and may have multiple local minima. In this paper, the objective function will also be referred to as the fitness function or the quality of the parameter vector. The quality or fitness of a candidate solution x_i is defined by

    f_i = f(x_i).    (3)

3. AN OVERVIEW OF GA, DE AND PSO

GA, DE and PSO are population-based algorithms and, as such, always work on a set of candidate solutions X = (x_1, ..., x_N)^T during each iteration of the algorithm. N represents the number of candidate solutions and is usually kept constant during the execution of the algorithm.

3.1. GENETIC ALGORITHMS (GA)

Concepts from biology, genetics and evolution are freely borrowed to describe GA. The element x_j, j = 1, ..., D, is referred to as a gene; a candidate solution x_i, i = 1, ..., N, is referred to as an individual; the set of candidate solutions X is referred to as a population; and an iteration of the algorithm is referred to as a generation. During the execution of the algorithm, a candidate solution, or parent, is modified in a particular way to create a new candidate solution, or child.

The basic evolutionary computation algorithm first constructs an initial population, then iterates through three procedures. It first assesses the quality or fitness of all the individuals in the population. Then, it uses this fitness information to reproduce a new population of children. Finally, it merges the parents and children in some fashion to form a new next-generation population, and the cycle continues. This procedure is outlined in Algorithm 1 [14]. kmax denotes the maximum number of iterations to be performed, while xBEST represents the best solution found by the EA. All the algorithms analyzed herein generate the initial population or set of candidate solutions randomly according to equation (4):

    X_{i,j} ~ U(low_j, up_j),    (4)

for i = 1, ..., N and j = 1, ..., D, where X_{i,j} denotes the j-th element, x_j, of the i-th vector, x_i. U(low_j, up_j) is a random number in [low_j, up_j] drawn according to a uniform distribution, and the symbol ~ denotes sampling from a given distribution.
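As an illustration, the random initialization of equation (4) can be sketched in a few lines (a Python sketch, since the paper's own Matlab code is not given; the population size, dimension and bounds below are example values, not the paper's):

```python
import random

def init_population(n, bounds):
    """Build an initial set X of n candidate solutions.

    Each element X[i][j] is drawn uniformly from [low_j, up_j],
    mirroring equation (4); `bounds` holds one (low_j, up_j) pair
    per dimension j.
    """
    return [[random.uniform(low, up) for (low, up) in bounds]
            for _ in range(n)]

# Example: N = 50 individuals in a D = 10 dimensional search space
# with every parameter bounded by [-5, 5].
X = init_population(50, [(-5.0, 5.0)] * 10)
```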
Algorithm 1: An abstract Evolutionary Algorithm (EA)
Input: N, kmax
Output: xBEST
 1: xBEST := ∅, fBEST := 0
 2: Build an initial population X
 3: k := 0
 4: repeat
 5:   k := k + 1
 6:   for each individual xi in X
 7:     Calculate fi
 8:     if xBEST = ∅ or fi > fBEST then
 9:       xBEST := xi
10:       fBEST := fi
11:     end if
12:   end for
13:   X := Merge(X, Reproduce(X))
14: until xBEST is the ideal solution or k > kmax

Evolutionary algorithms differ from one another largely in how they perform the Reproduce and Merge operations. The Reproduce operation usually has two parts: selecting parents from the old population, then mutating or recombining them in some way to generate new individuals, or children. The Merge operation usually either completely replaces the parents with the children, or includes fit parents along with their children to form the next generation [14].

The stopping condition of the algorithm is often defined in a few ways, such as: 1) limiting the execution time of the algorithm, normally done either by defining the maximum number of iterations, as shown in Algorithm 1, or by limiting the maximum number of fitness function evaluations; 2) stopping when fBEST does not change appreciably over successive iterations; 3) attaining a pre-specified objective function value.

One of the first EAs is GA, invented by John Holland in 1975 [5]. The standard GA consists of three genetic operators: selection, crossover and mutation. During each generation, parents are selected using the selection operator. The selection operator selects individuals in such a way that individuals with better fitness values have a greater chance of being selected. Then new individuals, or children, are generated using the crossover and mutation operators. The Reproduce operation used in Algorithm 1 consists of these three operators. There are many variants of GA due to the different selection, crossover and mutation operators proposed, some of which can be found in [1, 5-6, 14-17]. The GA analyzed in this paper is available in the Global Optimization Toolbox of Matlab R2010a. The implemented genetic operators used in this study are defined as follows.

Selection

The selection function used in this paper is the Stochastic Universal Sampling (SUS) method [17]. Parents are selected in a biased, fitness-proportionate way such that fit individuals get picked at least once. This method can be explained with the aid of Fig. 1, which shows an array of all individuals sized by their fitness values (N = 7). It can be noticed that f4 > f7 > f2 > f1 > f6 > f3 > f5. The total fitness range f is initially determined using equation (5):

    f = Σ_{i=1..N} f_i.    (5)

Then, the sampling length f/NS is determined, where NS denotes the number of individuals that need to be selected from the entire population.

Fig. 1. Array of individual ranges, initial search range, and chosen positions in Stochastic Universal Sampling (NS = 6 in this example).

A random position is generated between 0 and f/NS, and the individual covering this position is selected as the first individual. The value f/NS is then added to this initial position to determine the second position and, thus, the second individual. Hence, each subsequent individual is selected by adding the value f/NS to the previous position. This process is performed until NS individuals have been selected.

Crossover

The representation of an individual in GA determines the type of crossover and mutation operators that can be implemented. By far, the most popular representation of an individual in GA is the vector representation. Depending on the problem, the individual can be defined using a boolean vector, an integer vector or a real-valued vector, as is the case in this paper.

The crossover operator used in this paper is the Scattered or Uniform crossover method. Assuming the parents x_i and x_k have been selected, a random binary vector, or mask, is generated. The children x_{i,new} and x_{k,new} are then formed by combining genes of both parents. This recombination is defined by equations (6) and (7):

    x_{i,new}(j) = x_i(j) if mask(j) = 1, and x_k(j) otherwise,    (6)

    x_{k,new}(j) = x_k(j) if mask(j) = 1, and x_i(j) otherwise.    (7)

The number of children to be formed by the crossover operator is provided by a user-defined parameter, Pcrossover, which represents the fraction of the population involved in crossover operations.

The crossover operator tends to improve the overall quality of the population, since better individuals are involved. As a result, the population will eventually converge, often prematurely, to copies of the same individual. In order to introduce new information, i.e., to move to unexplored areas of the search space, the mutation operator is needed.
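The SUS selection and scattered crossover steps described above can be sketched as follows (a Python sketch assuming non-negative fitness values; function and variable names are illustrative, not taken from the Matlab toolbox):

```python
import random

def sus_select(fitness, ns):
    """Stochastic Universal Sampling: choose ns indices using
    equally spaced pointers over the total fitness range (eq. (5))."""
    total = sum(fitness)                 # total fitness range f
    step = total / ns                    # sampling length f/NS
    start = random.uniform(0, step)      # single random initial position
    chosen, cum, i = [], 0.0, 0
    for p in (start + k * step for k in range(ns)):
        # advance to the individual whose fitness range covers pointer p
        while cum + fitness[i] < p:
            cum += fitness[i]
            i += 1
        chosen.append(i)
    return chosen

def scattered_crossover(xi, xk):
    """Uniform (scattered) crossover of equations (6) and (7):
    a random binary mask decides which parent supplies each gene."""
    mask = [random.randint(0, 1) for _ in xi]
    xi_new = [a if m == 1 else b for m, a, b in zip(mask, xi, xk)]
    xk_new = [b if m == 1 else a for m, a, b in zip(mask, xi, xk)]
    return xi_new, xk_new
```

Because the pointers are equally spaced, any individual whose fitness exceeds the sampling length f/NS is guaranteed to be picked at least once, which is exactly the bias toward fit individuals mentioned above.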
Mutation

The Uniform mutation operator is used in this paper. Uniform mutation is a two-step process. Assuming an individual has been selected for mutation, the algorithm first selects a fraction of the vector elements for mutation; each element has the same probability, Rmutation, of being selected. Then, the algorithm replaces each selected element by a random number drawn uniformly from the domain of that element. For example, assuming the element x_j of the individual x_i has been selected for mutation, the value of element x_j is changed by generating a random number from U(low_j, up_j).

In order to guarantee convergence of GA, an additional feature, elitism, is used. Elitism ensures that at least one of the best individuals of the current generation is passed on to the next generation. This is often a user-defined value, Nelite, which indicates the top Nelite individuals, ranked according to their fitness values, that are copied to the next generation directly.

3.2. DIFFERENTIAL EVOLUTION (DE)

DE is a very powerful yet simple real-parameter optimization algorithm proposed by Storn and Price about 20 years ago [7,8]. As with GA, a lot of variants of the basic algorithm with improved performance have been proposed [18,19]. The evolutionary operations of classical DE can be summarized as follows [8].

Mutation

The mutation of a given individual x_i is defined by

    v_i = x_k + F·(x_m − x_n),    (8)

where i, k, m, n ∈ {1, ..., N} are mutually different and F > 0 is the mutation scale factor used to control the differential variation d_i = x_m − x_n.

Crossover

The crossover operator is defined by equation (9):

    u_i(j) = v_i(j) if U(0,1) ≤ CR, and x_i(j) otherwise,    (9)

where CR ∈ [0,1] is the crossover rate and controls how many elements of an individual are changed. u_i is the new individual generated by recombining the mutated individual v_i and the original individual x_i. This operator is basically the Uniform crossover ((6) or (7)), except that only one child is generated.

Selection

The selection operator is defined by equation (10):

    x_{i,new} = u_i if f(u_i) ≥ f(x_i), and x_i otherwise.    (10)

Thus, the individual x_i is replaced by the new individual u_i only if u_i represents a better solution. Based on these equations, it can be noticed that DE has three main control parameters: F, CR and N, which are problem-dependent. Storn and Price [8] recommended N to be chosen between 5·D and 10·D, and F to be between 0.4 and 1. A lot of papers and research work have been published indicating methods to improve the performance of DE by tuning its control parameters [20-22]. In this paper, a variant of the types of DE discussed in [19] is used, where the mutation scale factor and the crossover rate are generated randomly from continuous uniform distributions.

3.3. PARTICLE SWARM OPTIMIZATION (PSO)

PSO belongs to the set of swarm intelligence algorithms. Even though there are some similarities to EA, it is not modeled after evolution but after swarming and flocking behaviors in animals and birds. It was initially proposed by Kennedy and Eberhart in 1995 [9], and a lot of variations and modifications of the basic algorithm have been proposed ever since [23,24]. A candidate solution in PSO is referred to as a particle, while a set of candidate solutions is referred to as a swarm. A particle i is defined completely by three vectors: its position x_i, its velocity v_i, and its personal best position x_{i,Best}. The particle moves through the search space according to a few simple formulae. Its movement is determined by its own best known position, x_{i,Best}, as well as the best known position of the whole swarm, xBEST. First, the velocity of the particle is updated using equation (11):

    v_{i,new} = c0·v_i + c1·r1·(x_{i,Best} − x_i) + c2·r2·(xBEST − x_i),    (11)

then the position is updated using equation (12):

    x_{i,new} = x_i + v_{i,new},    (12)

where r1 and r2 are random numbers generated from U(0,1), c0 is the inertia weight, and c1 and c2 are the cognitive and social acceleration weights, respectively.

Modern versions of PSO, such as the one analyzed in this paper, do not use the global best solution xBEST in equation (11), but rather the local best solution x_{i,LBest} [23,25]. Hence, the velocity update equation is given by

    v_{i,new} = c0·v_i + c1·r1·(x_{i,Best} − x_i) + c2·r2·(x_{i,LBest} − x_i).    (13)

The local best solution of a given individual is determined by the best-known position within that particle's neighborhood. Different ways of defining the neighborhood of a particle can be found in [23, 25-28]. The PSO algorithm analyzed in this paper uses an adaptive random topology, where each particle randomly informs K particles and itself (the same particle may be chosen several times), with K usually set to 3. In this topology, the connections between particles randomly change when the global optimum shows no improvement [23, 25].
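Putting equations (8)-(10) together, one DE generation can be sketched as follows (a Python sketch for the maximization setting of equation (2); F and CR are fixed here for clarity, whereas the variant used in this paper draws them from uniform distributions):

```python
import random

def de_step(X, f, F=0.8, CR=0.9):
    """One generation of classical DE, maximizing the fitness f.

    For each target x_i: a mutant v_i = x_k + F*(x_m - x_n) is built
    (eq. (8)), recombined gene-by-gene with x_i using crossover rate
    CR (eq. (9)), and the trial u_i replaces x_i only if it is at
    least as fit (eq. (10))."""
    n, d = len(X), len(X[0])
    X_new = []
    for i in range(n):
        # three mutually different indices, all different from i
        k, m, nn = random.sample([j for j in range(n) if j != i], 3)
        v = [X[k][j] + F * (X[m][j] - X[nn][j]) for j in range(d)]
        u = [v[j] if random.random() <= CR else X[i][j] for j in range(d)]
        X_new.append(u if f(u) >= f(X[i]) else X[i])
    return X_new
```

The greedy selection of equation (10) means no individual ever gets worse between generations, which is easy to check on, e.g., a negated sphere function.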
4. EXPERIMENTAL ANALYSIS

Three standard optimization test functions were used in performing the analyses: the Ackley, Rastrigin and Rosenbrock functions.

Ackley's function, (14), in its 2D form is characterized by a nearly flat outer region, a lot of local minima, and a large hole at the center (Fig. 2):

    f(x) = 20 + e − 20·exp(−0.2·sqrt((1/D)·Σ_{i=1..D} x_i²)) − exp((1/D)·Σ_{i=1..D} cos(2π·x_i)).    (14)

The domain is defined on the hypercube x_i ∈ [−5, 5], i = 1, ..., D, with the global minimum f(x*) = 0 at x* = (0, ..., 0).

Fig. 2. Ackley function for D = 2.

Rastrigin's function, (15), also has several local minima and is highly multimodal. The 2D form is shown in Fig. 3:

    f(x) = 10·D + Σ_{i=1..D} (x_i² − 10·cos(2π·x_i)).    (15)

The domain is defined on the hypercube x_i ∈ [−5.12, 5.12], i = 1, ..., D, with the global minimum also f(x*) = 0, at x* = (0, ..., 0).

Fig. 3. Rastrigin function for D = 2.

On the other hand, the Rosenbrock function, (16), which is a popular test problem for gradient-based optimization algorithms, is unimodal, and the global minimum lies in a narrow, parabolic valley. Even though this valley is easy to find, convergence to the minimum is difficult [29]. The 2D plot is shown in Fig. 4:

    f(x) = Σ_{i=1..D−1} [100·(x_{i+1} − x_i²)² + (x_i − 1)²].    (16)

The domain is defined on the hypercube x_i ∈ [−5, 5], i = 1, ..., D, with the global minimum f(x*) = 0 at x* = (1, ..., 1).

Fig. 4. Rosenbrock function for D = 2.

GA, DE and PSO were tested on these three functions for D = 2, 5, 10, 50 and 100. All analyses were performed in Matlab. The algorithm-specific control parameter values are given in Table 1. The common control parameters for the heuristic algorithms are:

- the size of the solution set, N = 50;
- the maximum number of iterations, kmax = 3000.

All experiments were performed 100 times. Details of the results are presented in Figs. 6 and 7. Analyzing the results displayed in Figs. 6 and 7, it can be noticed that:

- for optimization problems with D ≤ 10, all three algorithms had comparable results;
- for D ≥ 50, GA and PSO performed better than DE, especially for the Rastrigin and Rosenbrock functions; however, GA showed degraded performance with the Ackley function;
- the runtime of PSO rapidly increases with the dimensionality of the problem, while that of GA and DE remains relatively low.

Table 1. Algorithm-specific control parameter values used in the experiments

    GA:  Nelite = 2; Pcrossover = 0.8; Rmutation = 0.01
    DE:  F ~ U(0.5, 2); CR ~ U(0.2, 0.9)
    PSO: c0 = 1/(2·ln 2); c1 = c2 = 0.5 + ln 2
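For reference, the three benchmark functions (14)-(16) can be written compactly as follows (a Python sketch; the experiments themselves were run in Matlab):

```python
import math

def ackley(x):        # equation (14)
    d = len(x)
    s1 = sum(v * v for v in x) / d
    s2 = sum(math.cos(2.0 * math.pi * v) for v in x) / d
    return 20.0 + math.e - 20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2)

def rastrigin(x):     # equation (15)
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v)
                               for v in x)

def rosenbrock(x):    # equation (16)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))
```

All three attain their global minimum value 0: Ackley and Rastrigin at the origin, and Rosenbrock at (1, ..., 1).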
It can be concluded that PSO, in general, has better accuracy for high dimensional problems, but very poor runtime performance. If runtime is the main concern, then GA is a better optimization tool. However, care should be taken if the optimization problem is similar to the Ackley function.

Fig. 6. Accuracy performance of the heuristic algorithms for 100 trials for a) Rastrigin, b) Rosenbrock and c) Ackley function.

Fig. 7. Runtime performance of the heuristic algorithms for 100 trials for the Ackley function (similar results are obtained for the Rastrigin and Rosenbrock functions).

5. CONCLUSION

In this paper, three heuristic algorithms were systematically analyzed and tested in detail for high dimensional real-parameter optimization problems. These algorithms are GA, DE and PSO. An overview of the implemented algorithms was provided, and the algorithms were tested on three standard optimization functions, namely the Rastrigin, Rosenbrock and Ackley functions. For lower dimensional problems, i.e. problems involving at most 10 parameters, all three algorithms had comparable results. However, for higher dimensional problems, PSO outperformed the other algorithms in terms of accuracy but had very poor runtime performance. On the other hand, the runtime performances of GA and DE did not change much with an increase in problem dimensionality.

6. REFERENCES

[1] T. Weise, Global Optimization Algorithms: Theory and Applications, e-book, available online at http://www.it-weise.de/, accessed 20.08.2014.
[2] R. Hooke, T. A. Jeeves, Direct search solution of numerical and statistical problems, Journal of the ACM, Vol. 8, No. 2, 1961, pp. 212-229. doi:10.1145/321062.321069.
[3] J. A. Nelder, R. Mead, A simplex method for function minimization, Computer Journal, Vol. 7, 1965, pp. 308-313. doi:10.1093/comjnl/7.4.308.
[4] A. H. Land, A. G. Doig, An automatic method of solving discrete programming problems, Econometrica, Vol. 28, No. 3, 1960, pp. 497-520. doi:10.2307/1910129.
[5] J. H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, The University of Michigan Press, Ann Arbor, 1975. ISBN: 0-4720-8460-7. Reprinted by MIT Press, April 1992.
[6] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1989. ISBN: 0-2011-5767-5.
[7] R. Storn, On the usage of differential evolution for function optimization, Biennial Conference of the North American Fuzzy Information Processing Society (NAFIPS), 1996, pp. 519-523.
[8] R. Storn, K. Price, Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization, Vol. 11, 1997, pp. 341-359. doi:10.1023/A:1008202821328.
[9] J. Kennedy, R. Eberhart, Particle Swarm Optimization, Proceedings of the IEEE International Conference on Neural Networks, Vol. 4, 1995, pp. 1942-1948. doi:10.1109/ICNN.1995.488968.
[10] Y. Shi, R. C. Eberhart, A modified particle swarm optimizer, Proceedings of the IEEE International Conference on Evolutionary Computation, 1998, pp. 69-73.
[11] M. Dorigo, Optimization, Learning and Natural Algorithms, PhD thesis, Politecnico di Milano, Italy, 1992.
[12] M. Dorigo, V. Maniezzo, A. Colorni, The ant system: Optimization by a colony of cooperating agents, IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, Vol. 26, No. 1, 1996, pp. 29-41.
[13] B. Xing, W. J. Gao, Innovative Computational Intelligence: A Rough Guide to 134 Clever Algorithms, Intelligent Systems Reference Library, Vol. 62, Springer International Publishing, 2014.
[14] S. Luke, Essentials of Metaheuristics, Lulu, 2nd edition, available online at http://cs.gmu.edu/~sean/book/metaheuristics/, 2013, accessed 20.08.2014.
[15] M. Srinivas, L. Patnaik, Adaptive probabilities of crossover and mutation in genetic algorithms, IEEE Transactions on Systems, Man and Cybernetics, Vol. 24, No. 4, 1994, pp. 656-667.
[16] J. Zhang, H. Chung, W. L. Lo, Clustering-based adaptive crossover and mutation probabilities for genetic algorithms, IEEE Transactions on Evolutionary Computation, Vol. 11, No. 3, 2007, pp. 326-335.
[17] J. E. Baker, Reducing bias and inefficiency in the selection algorithm, Genetic Algorithms and Their Applications: Proceedings of the Second International Conference on Genetic Algorithms (ICGA), 1987, pp. 14-21.
[18] Z. Yang, K. Tang, X. Yao, Differential evolution for high-dimensional function optimization, IEEE Congress on Evolutionary Computation (CEC 2007), 2007, pp. 3523-3530. doi:10.1109/CEC.2007.4424929.
[19] S. Das, P. N. Suganthan, Differential Evolution: A survey of the state-of-the-art, IEEE Transactions on Evolutionary Computation, Vol. 15, No. 1, 2011, pp. 4-31. doi:10.1109/TEVC.2010.2059031.
[20] J. Brest, S. Greiner, B. Boskovic, M. Mernik, V. Zumer, Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems, IEEE Transactions on Evolutionary Computation, Vol. 10, 2006, pp. 646-657.
[21] J. Zhang, A. C. Sanderson, JADE: adaptive differential evolution with optional external archive, IEEE Transactions on Evolutionary Computation, Vol. 13, 2009, pp. 945-958.
[22] A. K. Qin, P. N. Suganthan, Self-adaptive differential evolution algorithm for numerical optimization, IEEE Transactions on Evolutionary Computation, Vol. 13, 2005, pp. 1785-1791.
[23] M. Zambrano-Bigiarini, M. Clerc, R. Rojas, Standard Particle Swarm Optimisation 2011 at CEC-2013: A baseline for future PSO improvements, IEEE Congress on Evolutionary Computation (CEC), 2013, pp. 2337-2344. doi:10.1109/CEC.2013.6557848.
[24] J. J. Liang, A. K. Qin, P. N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Transactions on Evolutionary Computation, Vol. 10, 2006, pp. 281-295.
[25] M. Clerc, Standard Particle Swarm Optimisation, Particle Swarm Central, Technical Report, 2012, http://clerc.maurice.free.fr/pso/SPSO_descriptions.pdf, accessed 24.08.2014.
[26] J. Kennedy, Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance, Proceedings of the 1999 Congress on Evolutionary Computation, Vol. 3, 1999.
[27] J. Kennedy, R. Eberhart, Y. Shi, Swarm Intelligence, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2001. ISBN: 1-55860-595-9.
[28] J. Kennedy, R. Mendes, Population structure and particle swarm performance, Proceedings of the 2002 Congress on Evolutionary Computation (CEC '02), May 2002, pp. 1671-1676.
[29] V. Picheny, T. Wagner, D. Ginsbourger, A benchmark of kriging-based infill criteria for noisy optimization, Structural and Multidisciplinary Optimization, Vol. 48, No. 3, 2013, pp. 607-626.