
WCCI 2012 IEEE World Congress on Computational Intelligence
June 10-15, 2012, Brisbane, Australia
IEEE CEC

An Adaptive Memetic Algorithm Using a Synergy of
Differential Evolution and Learning Automata

Abhronil Sengupta, Tathagata Chakraborti, Amit Konar
Department of Electronics and Telecommunication Engineering
Jadavpur University
Kolkata, India
senguptaabhronil@gmail.com, tathagata.net@live.com, konaramit@yahoo.co.in

Eunjin Kim
Department of Computer Science
University of North Dakota
Grand Forks, United States
ejkim@cs.und.edu

Atulya K. Nagar
Department of Computer and Math Sciences
Liverpool Hope University
Liverpool, England
nagara@hope.ac.uk

Abstract - In recent years there has been a growing trend in the application of Memetic Algorithms to numerical optimization problems. They are population-based search heuristics that integrate the benefits of natural and cultural evolution. In this paper, we propose an Adaptive Memetic Algorithm, named LA-DE, which employs a competitive variant of Differential Evolution for global search and Learning Automata as the local search technique. During evolution, stochastic automata learning helps to balance the exploration and exploitation capabilities of DE, resulting in local refinement. The proposed algorithm has been evaluated on a test-suite of 25 benchmark functions provided by the CEC 2005 special session on real parameter optimization. Experimental results indicate that LA-DE outperforms several existing DE variants in terms of solution quality.

Keywords - Numerical Optimization, Memetic Algorithm, Evolutionary Algorithm, Differential Evolution, Learning Automata.

I. INTRODUCTION

The term Memetic Algorithm covers a broad category of population-based meta-heuristics that incorporate strategies for individual learning. The word "meme" was coined by R. Dawkins [1] and is the counterpart of the gene in the context of cultural evolution. A basic characterization of MAs is the utilization of problem domain knowledge, which is processed and enhanced by the communicating parts.
Natural evolution, realized by Evolutionary Algorithms (EAs), works on the Darwinian principle of the struggle for existence and aims at determining the global optima in a given search landscape. Traditional EAs usually take an excessively long time to locate a sufficiently precise solution because of their inability to exploit local information. Cultural evolution, on the other hand, is capable of local refinement. An MA captures the power of global search through its evolutionary component and of local search through its cultural component, and has consequently outperformed conventional EAs in various scientific and engineering fields.
The earliest research in this field can be traced back to the
work of Moscato [2]. Our research falls in the domain of


Adaptive Memetic Algorithms (AMAs), which involve adaptive selection of memes from a meme pool. The adaptive selection is governed by the ability of a meme to perform local improvement. Several variants of AMAs are found in the literature [3, 4].
This paper proposes an MA which employs Differential Evolution (DE) [5-9] as the global search tool and Learning Automata [10-12] for local refinement. In recent years DE has emerged as one of the most powerful EAs. However, it has been unable to overcome the problems of premature convergence and stagnation. The performance of DE is highly dependent on its control parameters, namely the scaling factor F and the crossover ratio Cr. Of these, the parameter Cr is highly sensitive to the choice of problem, while the parameter F effectively controls the evolution process of a particular gene. In the proposed method we incorporate Learning Automata to choose an appropriate control parameter F from the meme pool for each population member during successive generations. The algorithm offers several advantages:
(a) A member with good fitness should search in its local neighborhood, whereas a poorly performing member should participate in the global search. Thus the scaling factor F should decrease for fitter individuals relative to the others, so that fitter genes participate in local search and the remaining ones in global search [5]. This is realized in the paper by means of Learning Automata. The utility of Learning Automata lies in the assumption that no a priori information about the control parameters is available, and in their ability to adapt to unexpected changes in the environment. It thereby becomes possible to implement local search for the fitter individuals and global search for the poorer ones.
(b) The proposed AMA employs a Roulette-Choice function based hyper-heuristic scheme for adaptive selection of memes for the individual members before participation in the DE process. This helps in maintaining the diversity of the meme population.

(c) It evaluates the total reward/penalty to be given to the evolved members based on their immediate reward, measured by the improvement in fitness resulting from the selection of suitable memes prior to evolution. This is achieved by one stage of the Learning Automata.
The rest of the paper is organized as follows. Sections II and III provide overviews of the classical DE algorithm and Learning Automata. Our proposed approach is described in Section IV. Extensive experimental results and convergence curves comparing the LA-DE algorithm with several existing DE variants are presented in Section V. Experiments have also been undertaken to compare the performance of the classical Self-adaptive Differential Evolution (SaDE) [14] algorithm with its extended counterpart realized with the proposed adaptation mechanism of the scale factor. The results confirm that the extended SaDE, referred to as LA-SaDE, outperforms the classical SaDE on most of the benchmark functions suggested at the CEC-2005 conference. Section VI concludes the paper and outlines future research directions.
II. THE DIFFERENTIAL EVOLUTION ALGORITHM

Like any other evolutionary algorithm, DE starts with a population of NP D-dimensional parameter vectors representing the candidate solutions. We shall denote subsequent generations in DE by G = 0, 1, ..., G_max and represent the i-th vector of the population at the current generation as X_{i,G} = [x_{1,i,G}, x_{2,i,G}, ..., x_{D,i,G}].
The initial population (at G = 0) should cover the entire search space as much as possible by uniformly randomizing individuals within the search space constrained by the prescribed minimum and maximum bounds X_min = [x_{1,min}, x_{2,min}, ..., x_{D,min}] and X_max = [x_{1,max}, x_{2,max}, ..., x_{D,max}]. Hence we may initialize the j-th component of the i-th vector as

x_{j,i,0} = x_{j,min} + rand_{i,j}[0,1] * (x_{j,max} - x_{j,min}),   (1)

where rand_{i,j}[0,1] is a uniformly distributed random number lying between 0 and 1, instantiated independently for each component of the i-th vector. The following steps are taken next: mutation, crossover, and selection (in that order), which are explained in the following subsections.
A. Mutation
After initialization, DE creates a donor vector V_{i,G} corresponding to each population member X_{i,G} in the current generation through mutation, and sometimes using arithmetic recombination too. It is the method of creating the donor vector that differentiates one DE scheme from another. The five most frequently referred strategies, implemented in the public-domain DE codes for producing the donor vectors (available online at http://www.icsi.berkeley.edu/storn/code.html), are listed below:

DE/rand/1:
V_{i,G} = X_{r1,G} + F * (X_{r2,G} - X_{r3,G}).   (2)

DE/best/1:
V_{i,G} = X_{best,G} + F * (X_{r1,G} - X_{r2,G}).   (3)

DE/current-to-best/1:
V_{i,G} = X_{i,G} + F * (X_{best,G} - X_{i,G}) + F * (X_{r1,G} - X_{r2,G}).   (4)

DE/best/2:
V_{i,G} = X_{best,G} + F * (X_{r1,G} - X_{r2,G}) + F * (X_{r3,G} - X_{r4,G}).   (5)

DE/rand/2:
V_{i,G} = X_{r1,G} + F * (X_{r2,G} - X_{r3,G}) + F * (X_{r4,G} - X_{r5,G}).   (6)

The indices r1, r2, r3, r4, and r5 are mutually exclusive integers randomly chosen from the range [1, NP], all different from the base index i. The scaling factor F is a positive control parameter for scaling the difference vectors. X_{best,G} is the vector with the best fitness in the population at generation G. The general convention used for naming the various mutation strategies is DE/x/y/z, where DE stands for differential evolution, x is a string denoting the vector to be perturbed, y is the number of difference vectors considered for perturbation of x, and z stands for the type of crossover being used.
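To make the five strategies concrete, the following Python sketch implements Eqs. (2)-(6) directly; the index-drawing helper, the function names, and the 0-based indexing are implementation choices of ours, not prescribed by the paper.

import numpy as np

def distinct_indices(pop_size, i, k, rng):
    # k mutually exclusive indices from the population, all different from i
    return rng.choice([j for j in range(pop_size) if j != i], size=k, replace=False)

def donor(pop, i, i_best, F, rng, strategy="rand/1"):
    # pop: (NP, D) array of target vectors; i_best: index of the fittest member
    r = distinct_indices(len(pop), i, 5, rng)
    x, xb = pop[i], pop[i_best]
    if strategy == "rand/1":             # Eq. (2)
        return pop[r[0]] + F * (pop[r[1]] - pop[r[2]])
    if strategy == "best/1":             # Eq. (3)
        return xb + F * (pop[r[0]] - pop[r[1]])
    if strategy == "current-to-best/1":  # Eq. (4)
        return x + F * (xb - x) + F * (pop[r[0]] - pop[r[1]])
    if strategy == "best/2":             # Eq. (5)
        return xb + F * (pop[r[0]] - pop[r[1]]) + F * (pop[r[2]] - pop[r[3]])
    if strategy == "rand/2":             # Eq. (6)
        return pop[r[0]] + F * (pop[r[1]] - pop[r[2]]) + F * (pop[r[3]] - pop[r[4]])
    raise ValueError("unknown strategy: " + strategy)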
B. Crossover
To increase the potential diversity of the population, a
crossover operation comes into play after generating the donor
vector through mutation. The donor vector exchanges its components with the target vector X_{i,G} under this operation to form the trial vector U_{i,G} = [u_{1,i,G}, u_{2,i,G}, ..., u_{D,i,G}]. In this paper we consider binomial crossover, in which the donor vector exchanges its components with the target vector X_{i,G} for each of the D variables whenever a randomly picked number between 0 and 1 is less than or equal to the Cr value. In this case, the number of parameters inherited from the donor has a (nearly) binomial distribution. The scheme may be outlined as

u_{j,i,G} = v_{j,i,G}  if rand_{i,j}[0,1] <= Cr or j = j_rand,
u_{j,i,G} = x_{j,i,G}  otherwise,   (7)

where rand_{i,j}[0,1] is a uniformly distributed random number lying between 0 and 1, instantiated independently for each j-th component of the i-th vector, and j_rand in {1, 2, ..., D} is a randomly chosen index which ensures that U_{i,G} gets at least one component from V_{i,G}.
C. Selection
To keep the population size constant over subsequent generations, the next step of the algorithm calls for selection to determine whether the target or the trial vector survives to the next generation, i.e., at G = G + 1. The selection operation is described as

X_{i,G+1} = U_{i,G}  if f(U_{i,G}) <= f(X_{i,G}),
X_{i,G+1} = X_{i,G}  otherwise,   (8)

where f(.) is the function to be minimized. So if the new trial vector yields an equal or lower value of the objective function, it replaces the corresponding target vector in the next generation; otherwise the target is retained in the population. Hence the population either improves (with respect to minimization of the objective function) or retains the same fitness status, but never deteriorates.
The above three processes are repeated until the maximum number of generations is reached.
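Putting Sections II-A through II-C together, here is a minimal, runnable Python sketch of classical DE/rand/1/bin under the definitions above; the parameter values (NP = 50, F = 0.5, Cr = 0.9) follow the comparison settings used later in Section V, and the sphere function at the end is just an illustrative objective.

import numpy as np

def de_rand_1_bin(f, lo, hi, NP=50, F=0.5, Cr=0.9, gens=200, seed=0):
    # Classical DE/rand/1/bin: initialization (1), mutation (2),
    # binomial crossover (7), greedy selection (8).
    rng = np.random.default_rng(seed)
    D = lo.size
    pop = lo + rng.random((NP, D)) * (hi - lo)      # Eq. (1)
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(gens):
        for i in range(NP):
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])   # Eq. (2)
            mask = rng.random(D) <= Cr
            mask[rng.integers(D)] = True            # trial inherits >= 1 donor component
            u = np.where(mask, v, pop[i])           # Eq. (7)
            fu = f(u)
            if fu <= fit[i]:                        # Eq. (8)
                pop[i], fit[i] = u, fu
    k = int(fit.argmin())
    return pop[k], fit[k]

# Example: 10-dimensional sphere function
best_x, best_f = de_rand_1_bin(lambda x: float(np.sum(x * x)),
                               np.full(10, -100.0), np.full(10, 100.0))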
III. PRELIMINARIES ON LEARNING AUTOMATA

Learning Automata constitute one of the most popular reinforcement learning schemes. In reinforcement learning, the learner adapts its parameters by determining the status (reward/punishment) of the feedback signal from its environment. The major advantage of the Learning Automata scheme is that it requires no assumptions about the system operations and uncertainties, and hence can efficiently operate and adapt itself under unanticipated conditions. The basic operation of the Learning Automata is outlined in the following modules.
A. The Stochastic Automaton
The Automaton is characterized by a sextuple {Phi, alpha, beta, P, A, G}, where Phi = {phi_1, phi_2, ..., phi_s} is a set of internal states, alpha = {alpha_1, alpha_2, ..., alpha_r} is the output set with r <= s, beta is the input set, P is the state transition probability vector governing the choice of state at each instant t, A is the reinforcement scheme which generates P(t+1) from P(t), and G: Phi -> alpha is the stochastic or deterministic output function. The states of the stochastic Automaton represent the states of a discrete-state, discrete-parameter Markov process.

Figure 1. The Learning Automaton

The learning cycle commences with the generation of a stimulus from the environment. The Automaton, on receiving the stimulus, generates a response to the environment. The environment receives and evaluates the response and offers a new stimulus to the Automaton. The learner then automatically adjusts its parameters based on the environment's evaluation of the Automaton's last response. The need for learning and adaptation in systems arises because the environment changes as time progresses.
The basic operation performed by the Learning Automaton is the updating of action probabilities on the basis of environment responses. If no a priori information is available, the actions are chosen with equal probability, i.e., at random. The most crucial factor affecting the performance of the Automaton is the Reinforcement Scheme responsible for updating the action probabilities. In this paper we consider a Linear Reinforcement Scheme, described below. If action alpha_i is selected at time t and the environment returns the non-penalty response (beta = 0), the action probabilities are updated as

p_i(t+1) = p_i(t) + a(1 - p_i(t)),
p_j(t+1) = (1 - a) p_j(t)  for all j != i.   (9)

If the penalty response (beta = 1) occurs, then

p_i(t+1) = (1 - b) p_i(t),
p_j(t+1) = b/(r - 1) + (1 - b) p_j(t)  for all j != i.   (10)

The parameter a in (0, 1) is associated with the reward response, while the parameter b in (0, 1) is associated with the penalty response. Here, we employ the linear reward-penalty scheme, where a = b.
The reinforcement scheme provides the learning ability of the Automaton. The basic idea behind it is rather simple. If the Automaton selects a particular action alpha_i at time t and a non-penalty input occurs, then the action probability p_i(t) is increased and all other components are decreased. In the case of a penalty input, p_i(t) is decreased and all other components are increased. Based on the average values of penalties, several definitions of behavior, namely expediency, optimality, and absolute expediency, are found in the literature [10]. An Automaton is absolutely expedient if the expected value of the average penalty at one iteration step is less than it was at the previous step, for all steps. Necessary and sufficient conditions for design are available only for absolutely expedient learning schemes.
B. The Environment
The Environment is represented by the triple {alpha, c, beta}, where alpha = {alpha_1, alpha_2, ..., alpha_r} is the input set and beta is the output or response set. Here, we consider a P-model environment in which the responses are binary {0, 1}, with zero representing the non-penalty response and one the penalty response. c = {c_1, c_2, ..., c_r} denotes the penalty probability set and depends on the input set. It is assumed that the penalty probabilities are initially unknown and may vary with t.
C. The Learning Automata
A deterministic Automaton represents a control mechanism devised to follow a predetermined sequence of instructions. In our context the term "stochastic" emphasizes the adaptive nature of the Automaton, which adapts itself to changes in its environment rather than following predetermined rules, by virtue of the learning mechanism described in this module. The scheme is outlined in Fig. 1, which represents a stochastic Automaton and environment connected in feedback.

IV. THE PROPOSED LA-DE ALGORITHM

The proposed algorithm employs a synergy of Differential Evolution and Learning Automata to realize a Memetic Algorithm achieving superior performance on global optimization problems. The global search mechanism is accomplished by successive generations of Differential Evolution. A meme pool for the parameter F is maintained in order to select the control parameters for the individual members of the population. The state transition probability matrix plays a vital role in the meme selection process. After each stage of evolution, the population members are ranked according to their fitness. The row indices of the matrix, corresponding to the states of the stochastic automaton, denote the members ranked in order of increasing fitness value. The column indices, which represent the actions performed by the automaton at a particular state, correspond to uniformly quantized values of
the control parameter in the range [0, 2]. For example, let the
parameter under consideration be F, with possible quantized values {F_1, F_2, ..., F_20}. Then p_ij represents the action probability given to a member at state S_i for selecting F = F_j. The Roulette-Choice strategy is used to select a particular value of F from the meme pool {F_1, F_2, ..., F_20}, using the probabilities p_ij, j = 1, 2, ..., 20, for the individual member located at state S_i. The state transition probability matrix is updated after each
evolutionary stage using the Reinforcement Scheme described
in the previous section.
The basic steps of the algorithm are outlined in the following subsections:
A. Initialization
The algorithm employs a population of NP D-dimensional parameter vectors representing the candidate solutions. The initial population (at G = 0) should cover the entire search space as much as possible by uniformly randomizing individuals within the search space constrained by the prescribed minimum and maximum bounds. Thus the j-th component of the i-th population member is initialized according to (1), as mentioned in Section II.
The state transition probability matrix is initialized with equal and small values. This is in accordance with the unavailability of a priori information about the environment, all actions being assumed equally likely at a particular stage.
B. Adaptive Selection of Memes
The proper choice of the Reinforcement Learning Scheme facilitates the adaptive selection of memes from the meme pool. We employ fitness-proportionate selection, also known as Roulette-Wheel selection, for the selection of potentially useful memes. A basic advantage of this selection mechanism is that the diversity of the meme population can be maintained. Although fitter memes enjoy a much higher probability of selection, the memes with poorer fitness do manage to survive and may contribute some components as evolution continues. Mathematically, the selection commences with the choice of a random number r in the range [0, 1] for each population member. Consider the selection from the F meme pool for a member at state S_i. The next step selects the smallest index j such that the cumulative probability of selecting F_1 through F_j is at least r, i.e.,

sum_{k=1}^{j-1} p_ik < r <= sum_{k=1}^{j} p_ik.

The Roulette-Choice function thereby selects F for every population member before the mutation process, as sketched below.
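A sketch of this Roulette-Choice step in Python follows; the particular quantization grid for F (20 evenly spaced values up to 2.0) is an assumption for illustration, since the paper only states that the values are uniformly quantized in [0, 2].

import numpy as np

def select_meme(p_row, memes, rng):
    # p_row: row of the state transition probability matrix for state S_i
    # returns the chosen F and its index j (used later for the LA update)
    r = rng.random()
    cum = np.cumsum(p_row)                            # cumulative selection probabilities
    j = min(int(np.searchsorted(cum, r)), p_row.size - 1)  # smallest j with cum[j] >= r
    return memes[j], j

rng = np.random.default_rng(0)
memes = np.linspace(0.1, 2.0, 20)                     # assumed quantization of F
F, j = select_meme(np.full(20, 0.05), memes, rng)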

Step 1. Set the generation number G = 0 and randomly initialize a population of NP individuals P_G = {X_{1,G}, X_{2,G}, ..., X_{NP,G}} with X_{i,G} = [x_{1,i,G}, x_{2,i,G}, ..., x_{D,i,G}], each individual uniformly distributed in the range [X_min, X_max], where X_min = [x_{1,min}, ..., x_{D,min}] and X_max = [x_{1,max}, ..., x_{D,max}], i = 1, 2, ..., NP.
Initialize the state transition probability matrix: p_ij = 0.05 for i = 1, ..., NP and j = 1, ..., 20.
Initialize the learning parameters a = b = 0.01.
Step 2. Evaluate the population.
Step 3. WHILE the stopping criterion is not reached, DO
  Step 3.1 /* Adaptive selection of memes */
    FOR i = 1 to NP
      Select F = F_j by Roulette-Wheel selection.
    END FOR
  Step 3.2 /* Mutation */
    FOR i = 1 to NP
      Generate a donor vector V_{i,G} corresponding to the target vector X_{i,G} via strategy (4).
    END FOR
  Step 3.3 /* Crossover */
    FOR i = 1 to NP
      Generate the trial vector U_{i,G} for the i-th target vector X_{i,G} using (7).
    END FOR
  Step 3.4 /* Selection */
    FOR i = 1 to NP
      Evaluate the trial vector U_{i,G}.
      IF f(U_{i,G}) <= f(X_{i,G}) THEN
        X_{i,G+1} = U_{i,G}
        /* Learning Automata */
        Update the state transition matrix according to (9).
      ELSE
        X_{i,G+1} = X_{i,G}
        Update the state transition matrix according to (10).
      END IF
    END FOR
  Step 3.5 /* State assignment */
    FOR i = 1 to NP
      Evaluate f(X_{i,G+1}).
    END FOR
    Rank the members in increasing order of fitness and assign the corresponding states.
  Step 3.6 G = G + 1
Step 4. END WHILE

Algorithm 1. Pseudo-code of LA-DE


C. Differential Evolution
In this paper we utilize the DE/current-to-best/1 strategy of (4) for performing the mutation, followed by the recombination and selection processes discussed earlier. The parameter F is obtained by selection from the meme pool.
D. Update of the State Transition Probability Matrix
Let a member at state S_m, on selection of F_j from the meme pool, produce a trial vector after mutation and recombination. If the objective value of the trial vector decreases (i.e., its fitness improves), the state transition probabilities are updated according to (9) for state S_m; otherwise equation (10) is used. The process is repeated for all the population members.
E. State Assignment
The population members are now ranked in increasing order of fitness and assigned corresponding states. For example, a member of rank k is assigned the state S_k.
Steps B-E are repeated until the maximum number of iterations is reached.
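Pulling subsections A-E together, the following self-contained Python sketch mirrors Algorithm 1 under the stated settings (p_ij = 0.05, a = b = 0.01, DE/current-to-best/1); the F grid and minor implementation details are assumptions, not the authors' code.

import numpy as np

def la_de(f, lo, hi, NP=50, Cr=0.9, a=0.01, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    D = lo.size
    memes = np.linspace(0.1, 2.0, 20)            # assumed quantization of F in [0, 2]
    P = np.full((NP, 20), 0.05)                  # state transition probability matrix
    pop = lo + rng.random((NP, D)) * (hi - lo)   # Eq. (1)
    fit = np.apply_along_axis(f, 1, pop)
    state = fit.argsort().argsort()              # rank in increasing fitness = state
    for _ in range(gens):
        for i in range(NP):
            s = state[i]
            j = min(int(np.searchsorted(np.cumsum(P[s]), rng.random())), 19)  # Roulette-Choice
            F = memes[j]
            ib = fit.argmin()                    # current best member
            r1, r2 = rng.choice([k for k in range(NP) if k != i], 2, replace=False)
            v = pop[i] + F * (pop[ib] - pop[i]) + F * (pop[r1] - pop[r2])     # Eq. (4)
            mask = rng.random(D) <= Cr
            mask[rng.integers(D)] = True
            u = np.where(mask, v, pop[i])        # Eq. (7)
            fu = f(u)
            others = np.arange(20) != j
            if fu <= fit[i]:                     # improvement: reward, Eq. (9)
                pop[i], fit[i] = u, fu
                P[s, j] += a * (1.0 - P[s, j])
                P[s, others] *= 1.0 - a
            else:                                # no improvement: penalty, Eq. (10), b = a
                P[s, j] *= 1.0 - a
                P[s, others] = a / 19.0 + (1.0 - a) * P[s, others]
        state = fit.argsort().argsort()          # state assignment after each generation
    k = int(fit.argmin())
    return pop[k], fit[k]

best_x, best_f = la_de(lambda x: float(np.sum(x * x)),
                       np.full(10, -100.0), np.full(10, 100.0))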
V. NUMERICAL EXPERIMENTS AND RESULTS

A. Experimental Setup
We evaluate the performance of the proposed LA-DE algorithm on the test-suite of 25 benchmark functions provided by the CEC-2005 special session on real parameter optimization [8]. The functions have varying degrees of complexity: the first five are unimodal while the remaining are multimodal. Detailed descriptions are available in [8].
In the next section we compare the performance of the LA-DE algorithm with three state-of-the-art variants of DE, namely DE/rand/1, DE/best/2, and DE/current-to-best/1. For comparison purposes we set F = 0.5 and Cr = 0.9 for these algorithms. No further tuning of the parameters was allowed. Results are presented for the 10- and 30-dimensional versions of all the benchmark functions. The population size is NP = 50. The learning rate for the LA-DE algorithm was set to 0.01.
The latter part of the experiment attempts to improve the performance of a popular variant of DE, called Self-adaptive Differential Evolution (SaDE) [14], by incorporating LA in the algorithm for adaptation of the scale factor. SaDE focuses on adaptation of the crossover ratio and the mutation strategies of DE. The motivation behind SaDE is to resolve the dilemma that the crossover ratio Cr and the mutation strategies involved in DE are often highly problem dependent. SaDE adopts four DE mutation strategies and introduces a probability to determine the right one to use; this probability is gradually adapted according to its learning experience. Additionally, the crossover ratio Cr is self-adapted by recording the Cr values that allow trial vectors to successfully enter the next generation. The scope for performance enhancement of SaDE due to adaptation of the scale factor by LA is examined through a comparative study that utilizes SaDE as the competing evolutionary algorithm for the 10-, 30-, and 50-dimensional versions of the benchmark functions in Table III. The mean and standard deviation of the best-of-run values over 25 independent runs of each algorithm are presented in the subsequent sections.
B. Simulation Results
In order to make the comparisons as fair as possible, we allowed each of the algorithms to run for a specified number of function evaluations, namely D*1e+04, where D is the dimension of the problem. After each run the best fitness value of the population was recorded. The mean value and standard deviation (within parentheses) of the error in fitness value were evaluated over 25 independent runs of each algorithm.
Since all the algorithms start with the same initial population on each problem instance, we used paired t-tests to compare the means of the results produced by the best and the second-best algorithm (with respect to their final accuracies) for each benchmark function. The t-tests are quite popular among researchers in evolutionary computation, and they are fairly robust to violations of the Gaussian assumption with a sample size as large as 25 [13]. We report the statistical significance level of the difference of means of the two algorithms in the respective columns of Tables I, II, and III. The best performance is highlighted in each row. A marker indicates whether the t value with 49 degrees of freedom is significant at a 5% level by a two-tailed test, or non-significant.
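For readers reproducing this analysis, the comparison can be scripted along the following lines; the arrays here are placeholder data rather than the paper's results, and scipy's ttest_rel implements the paired two-tailed test described above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
best = rng.lognormal(-2.0, 0.5, 25)      # placeholder: 25 best-of-run errors, algorithm A
second = rng.lognormal(-1.5, 0.5, 25)    # placeholder: 25 best-of-run errors, algorithm B
t, p = stats.ttest_rel(best, second)     # paired t-test, two-tailed by default
print(f"t = {t:.3f}, p = {p:.4f}, significant at 5%: {p < 0.05}")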
A few characteristic median convergence curves have also
been presented to illustrate the faster convergence speed of our
proposed approach.
Figure 2. Convergence graph of f3 (error on log scale vs. FEs; LA-DE, SaDE, LA-SaDE)
Figure 3. Convergence graph of f6 (error on log scale vs. FEs; LA-DE, SaDE, LA-SaDE)
Figure 4. Convergence graph of f9 (error on log scale vs. FEs; LA-DE, SaDE, LA-SaDE)
Figure 5. Convergence graph of f10 (error on log scale vs. FEs; LA-DE, SaDE, LA-SaDE)

TABLE I. RESULTS FOR 10D PROBLEMS (mean error; standard deviation in parentheses)

Func   DE/rand/1                DE/best/2                DE/current-to-best/1     LA-DE
f1     6.838e+03 (1.230e+02)    9.792e+02 (3.273e+02)    8.187e+02 (2.484e+02)    5.684e-14 (4.198e-14)
f2     7.211e+03 (4.091e+02)    5.399e+03 (7.384e+02)    2.388e+03 (2.399e+02)    3.411e-13 (8.311e-12)
f3     2.112e+07 (7.523e+03)    5.384e+06 (3.474e+05)    4.874e+06 (3.920e+05)    5.339e+02 (2.489e+00)
f4     1.293e+02 (5.782e+01)    5.267e+01 (2.482e+00)    8.921e+00 (3.878e-01)    0.000e+00 (0.000e+00)
f5     1.748e+04 (6.389e+03)    7.390e+03 (2.939e+02)    4.889e+03 (1.329e+02)    4.939e-13 (1.299e-14)
f6     4.544e+08 (2.803e+06)    1.382e+08 (2.233e+06)    9.388e+07 (5.970e+06)    1.136e-13 (1.211e-20)
f7     1.277e+03 (8.618e+02)    9.267e+02 (7.525e+02)    1.084e+03 (8.299e+02)    3.299e-01 (2.488e-01)
f8     2.071e+01 (2.193e-02)    2.043e+01 (3.377e-02)    2.035e+01 (7.913e-02)    2.018e+01 (3.828e-02)
f9     4.235e+01 (3.738e+00)    5.289e+01 (2.489e+00)    2.281e+01 (4.711e+00)    3.979e+00 (6.298e+00)
f10    3.336e+01 (7.380e+00)    3.518e+01 (4.183e+00)    2.683e+01 (5.231e+00)    9.964e+00 (1.843e+00)
f11    8.151e+00 (4.568e-01)    8.980e+00 (6.465e-01)    8.826e+00 (3.454e-01)    4.647e+00 (2.469e-01)
f12    2.511e+04 (2.388e+03)    2.791e+04 (6.213e+03)    2.256e+04 (5.717e+03)    2.714e+03 (2.371e+01)
f13    6.811e+00 (2.881e-01)    2.912e+00 (9.371e-01)    2.112e+00 (4.312e-01)    5.581e-01 (2.712e-01)
f14    3.748e+00 (3.184e-01)    3.027e+00 (3.981e-01)    3.396e+00 (2.471e-01)    3.621e+00 (6.271e-01)
f15    8.247e+02 (1.276e+01)    6.808e+02 (9.362e+01)    4.289e+02 (7.511e+01)    3.054e+02 (2.832e+01)
f16    2.116e+02 (1.539e+01)    1.834e+02 (5.762e+01)    1.753e+02 (7.934e+01)    1.183e+02 (6.886e+01)
f17    2.034e+02 (4.678e+01)    2.014e+02 (1.740e+01)    1.372e+02 (1.678e+01)    1.118e+02 (1.448e+01)
f18    1.240e+03 (3.490e+01)    1.182e+03 (3.588e+01)    8.895e+02 (3.577e+02)    8.470e+02 (2.737e+02)
f19    1.167e+03 (6.181e+01)    9.299e+02 (1.671e+01)    1.231e+03 (1.281e+01)    7.157e+02 (1.198e+01)
f20    1.255e+03 (4.017e+01)    9.591e+02 (5.052e+01)    8.140e+02 (1.648e+02)    6.211e+02 (2.560e+02)
f21    1.106e+03 (8.269e+00)    1.074e+03 (1.191e+01)    1.193e+03 (1.154e+02)    9.012e+02 (1.810e+01)
f22    9.899e+02 (5.298e+01)    8.617e+02 (3.493e+01)    8.912e+02 (5.820e+01)    8.841e+02 (2.894e+01)
f23    1.109e+03 (3.129e+00)    1.095e+03 (9.377e+00)    1.021e+03 (1.022e+01)    8.881e+02 (1.283e+02)
f24    1.219e+03 (2.300e+01)    1.012e+03 (8.388e+01)    9.366e+02 (7.954e+00)    9.661e+02 (9.885e+00)
f25    4.828e+02 (4.278e+00)    4.029e+02 (3.228e+00)    4.781e+02 (8.197e+00)    3.812e+02 (1.828e+01)

TABLE II. RESULTS FOR 30D PROBLEMS (mean error; standard deviation in parentheses)

Func   DE/rand/1                DE/best/2                DE/current-to-best/1     LA-DE
f1     7.229e+04 (2.599e+03)    3.882e+04 (8.737e+03)    1.452e+04 (1.380e+03)    4.206e-12 (6.366e-13)
f2     7.176e+04 (6.289e+03)    6.991e+04 (1.628e+03)    9.271e+03 (1.281e+02)    8.697e-11 (8.281e-13)
f3     6.286e+08 (5.271e+07)    8.381e+08 (7.639e+07)    9.810e+07 (3.589e+07)    2.303e+05 (3.083e+04)
f4     4.389e+02 (1.865e+01)    7.239e+02 (1.335e+01)    5.377e+02 (8.289e+01)    1.310e+02 (3.829e+01)
f5     2.747e+04 (2.577e+03)    2.553e+04 (1.385e+03)    5.483e+03 (6.748e+02)    2.307e+03 (1.472e+02)
f6     3.281e+10 (2.744e+09)    7.381e+09 (8.438e+09)    5.382e+06 (3.893e+06)    4.521e-02 (1.218e-01)
f7     2.836e+03 (4.899e+02)    1.395e+03 (9.272e+02)    9.373e+01 (1.573e+01)    2.291e-02 (8.836e-03)
f8     2.115e+01 (4.436e-02)    2.096e+01 (3.478e-02)    2.098e+01 (5.372e-02)    2.097e+01 (6.341e-02)
f9     2.321e+02 (2.987e+01)    3.119e+02 (2.748e+01)    2.034e+02 (3.772e+00)    7.838e+01 (2.749e+01)
f10    5.287e+02 (4.731e+01)    4.672e+02 (4.313e+01)    2.436e+02 (1.191e+01)    1.012e+02 (1.266e+01)
f11    3.663e+01 (1.393e+00)    3.182e+01 (1.376e+00)    3.356e+01 (1.039e+00)    2.421e+01 (3.746e+00)
f12    9.825e+05 (1.281e+05)    1.066e+06 (1.247e+05)    9.698e+05 (1.622e+05)    3.069e+05 (1.181e+05)
f13    5.973e+02 (1.385e+02)    1.998e+02 (1.105e+02)    1.754e+01 (1.244e+00)    6.834e+00 (1.354e+00)
f14    1.453e+01 (1.121e-01)    1.415e+01 (7.263e-02)    1.365e+01 (1.131e-01)    1.317e+01 (2.281e-01)
f15    8.832e+02 (2.281e+01)    9.075e+02 (2.122e+01)    8.534e+02 (1.780e+01)    5.071e+02 (7.395e+01)
f16    7.567e+02 (6.405e+01)    5.737e+02 (6.102e+01)    4.481e+02 (1.287e+01)    2.829e+02 (1.930e+01)
f17    6.546e+02 (7.213e+01)    5.163e+02 (6.122e+01)    5.391e+02 (1.381e+01)    3.148e+02 (8.049e+01)
f18    1.017e+03 (2.785e+01)    9.893e+02 (1.605e+01)    8.228e+02 (2.304e+00)    9.138e+02 (2.460e+01)
f19    1.495e+03 (2.745e+02)    1.636e+03 (1.802e+01)    1.344e+03 (2.892e+01)    9.191e+02 (1.957e+01)
f20    1.297e+03 (2.523e+01)    1.262e+02 (1.932e+01)    1.024e+03 (3.783e+01)    9.781e+02 (7.548e+01)
f21    1.126e+03 (2.747e+01)    1.086e+03 (1.832e+01)    9.813e+02 (8.382e+01)    5.000e+02 (0.000e+00)
f22    1.163e+03 (9.299e+01)    1.371e+03 (1.729e+01)    9.726e+02 (4.278e+01)    9.563e+02 (6.410e+00)
f23    1.131e+03 (2.046e+01)    1.072e+03 (2.233e+01)    8.192e+02 (1.417e+00)    8.305e+02 (3.652e+01)
f24    1.638e+03 (2.519e+01)    1.106e+02 (7.321e+01)    9.527e+02 (6.395e+01)    9.621e+02 (9.453e+01)
f25    3.536e+02 (5.875e+01)    2.822e+02 (6.643e+01)    3.106e+02 (4.297e+01)    2.115e+02 (1.918e+01)

VI. CONCLUSIONS

This paper presents a novel approach to global optimization that employs Learning Automata to control the scaling factor participating in Differential Evolution. The global and local search techniques are combined to produce an extremely robust memetic algorithm.
The algorithm has been compared with three variants of the DE algorithm over a test-suite of 25 benchmark functions. The results confirm the efficacy of the proposed improvements. The convergence curves depicted for a few of the benchmark functions provide further support for our claim. In contrast to the conventional algorithms, our approach overcomes the problems of stagnation and premature convergence to a much greater extent, even in high-dimensional optimization scenarios. The proposed approach has also been integrated with SaDE, an extremely popular DE variant, and the results clearly indicate that the hybrid algorithm, LA-SaDE, outperforms the traditional SaDE algorithm as well as other conventional DE variants in terms of solution quality.
REFERENCES
[1] R. Dawkins, The Selfish Gene. Oxford University Press, 1976.
[2] P. Moscato, "On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms," Caltech Concurrent Computation Program, Report 826, 1989.
[3] Y.-S. Ong, M.-H. Lim, N. Zhu, and K.-W. Wong, "Classification of Adaptive Memetic Algorithms: A Comparative Study," IEEE Trans. on Systems, Man and Cybernetics, vol. 36, no. 1, February 2006.
[4] G. Kendall, P. Cowling, and E. Soubeiga, "Choice function and random hyperheuristics," in Proc. 4th Asia-Pacific Conference on Simulated Evolution and Learning, Singapore, Nov. 2002, pp. 667-671.
[5] R. Storn and K. V. Price, "Differential Evolution - a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341-359, 1997.
[6] R. Storn, K. V. Price, and J. Lampinen, Differential Evolution - A Practical Approach to Global Optimization. Springer-Verlag, 2005.
[7] J. Lampinen and I. Zelinka, "On stagnation of the differential evolution algorithm," in Proc. MENDEL 2000, 6th Int. Mendel Conf. Soft Computing, Brno, Czech Republic, Jun. 2000, pp. 76-83.
[8] P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y.-P. Chen, A. Auger, and S. Tiwari, "Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization," Technical Report, Nanyang Technological University, Singapore.
[9] J. Ronkkonen, S. Kukkonen, and K. V. Price, "Real-parameter optimization with differential evolution," in Proc. IEEE Congr. Evol. Comput. (CEC-2005), vol. 1, Piscataway, NJ: IEEE Press, pp. 506-513.
[10] K. S. Narendra and M. A. L. Thathachar, "Learning Automata - A survey," IEEE Trans. on Systems, Man and Cybernetics, vol. 4, no. 4, July 1974.
[11] M. R. Meybodi and H. Beigy, "A Note on Learning Automata Based Schemes for Adaptation of BP Parameters," Neurocomputing, vol. 48, no. 4, pp. 957-974, 2002.
[12] C. Unsal, P. Kachroo, and J. S. Bay, "Multiple Stochastic Learning Automata for Vehicle Path Control in an Automated Highway System," IEEE Trans. on Systems, Man and Cybernetics, Part A, vol. 29, no. 1, January 1999.
[13] B. Flury, A First Course in Multivariate Statistics. Springer, 1997.
[14] A. K. Qin, V. L. Huang, and P. N. Suganthan, "Differential Evolution algorithm with strategy adaptation for global numerical optimization," IEEE Trans. on Evolutionary Computation, vol. 13, no. 2, April 2009.

TABLE III. COMPARISON OF SADE AND LA-SADE (mean error; standard deviation in parentheses)

D = 10
Func   SaDE                     LA-SaDE
f1     0.000e+00 (0.000e+00)    0.000e+00 (0.000e+00)
f2     5.187e-14 (2.580e-13)    0.000e+00 (0.000e+00)
f3     6.019e+01 (2.095e+00)    8.061e-02 (4.291e-02)
f4     0.000e+00 (0.000e+00)    0.000e+00 (0.000e+00)
f5     6.141e+00 (2.389e-01)    6.478e-02 (2.300e-02)
f6     3.881e-01 (5.020e-01)    0.000e+00 (3.990e-15)
f7     5.428e-02 (7.291e-02)    1.400e-01 (6.232e-01)
f8     2.052e+01 (2.115e-03)    2.031e+01 (1.852e-03)
f9     0.000e+00 (2.188e-16)    0.000e+00 (3.010e-17)
f10    1.211e+01 (5.218e+00)    8.853e+00 (4.299e+00)
f11    7.191e+00 (1.781e-02)    6.172e+00 (3.858e-03)
f12    1.415e+03 (3.821e+01)    1.222e+03 (1.624e+01)
f13    6.281e-01 (3.121e-02)    5.451e-01 (2.981e-01)
f14    3.631e+00 (7.288e-01)    3.224e+00 (3.771e-01)
f15    9.921e+01 (8.289e+00)    8.191e+01 (2.718e+00)
f16    1.322e+02 (1.139e+01)    1.131e+02 (3.125e+01)
f17    1.872e+02 (2.638e-02)    1.547e+02 (1.377e-01)
f18    8.023e+02 (1.278e+01)    4.223e+02 (1.256e+00)
f19    9.031e+02 (2.186e+01)    8.012e+02 (2.621e+01)
f20    8.287e+02 (4.299e+01)    8.521e+02 (2.102e+01)
f21    8.560e+02 (4.376e+01)    8.285e+02 (5.285e+01)
f22    7.899e+02 (2.399e+01)    8.221e+02 (4.209e+01)
f23    7.291e+02 (1.211e+02)    6.239e+02 (1.009e+02)
f24    2.000e+02 (4.872e-18)    2.000e+02 (1.345e-17)
f25    2.000e+02 (6.279e+00)    2.000e+02 (5.181e+00)

D = 30
Func   SaDE                     LA-SaDE
f1     2.167e-12 (2.788e-10)    5.684e-14 (2.180e-15)
f2     5.281e-06 (2.481e-07)    1.023e-12 (6.172e-07)
f3     1.986e+05 (4.825e+04)    1.271e+05 (8.175e+04)
f4     4.847e+00 (3.828e+00)    1.764e+00 (4.924e+00)
f5     2.013e+03 (7.291e+02)    1.910e+03 (5.297e+02)
f6     1.872e-02 (6.726e-03)    1.728e-04 (5.201e-03)
f7     1.271e-03 (4.382e-04)    7.259e-04 (9.248e-03)
f8     2.044e+01 (4.761e-02)    2.064e+01 (3.762e-02)
f9     8.361e-08 (5.731e-08)    5.684e-14 (7.391e-12)
f10    5.333e+01 (7.496e-02)    6.581e+01 (4.819e-02)
f11    8.401e+00 (5.272e-02)    6.943e+00 (1.922e-02)
f12    7.525e+04 (2.389e+02)    4.291e+04 (8.992e+02)
f13    1.612e+00 (1.118e-01)    1.309e+00 (7.824e-01)
f14    1.273e+01 (1.050e-05)    1.331e+01 (7.451e-01)
f15    4.210e+02 (5.383e+01)    3.281e+02 (1.211e+01)
f16    1.531e+02 (3.847e+00)    1.461e+02 (5.490e+00)
f17    2.717e+02 (5.382e+01)    2.219e+02 (4.290e+01)
f18    8.382e+02 (6.392e+01)    7.215e+02 (9.296e+01)
f19    8.149e+02 (3.471e+01)    7.672e+02 (8.822e+01)
f20    8.371e+02 (3.820e+01)    8.141e+02 (8.439e+01)
f21    8.201e+02 (9.372e+01)    5.152e+02 (6.482e+01)
f22    7.820e+02 (1.410e+01)    5.158e+02 (1.900e+01)
f23    1.081e+03 (6.281e+01)    5.581e+02 (3.292e+01)
f24    7.912e+02 (2.881e+01)    2.000e+02 (0.000e+00)
f25    2.000e+02 (0.000e+00)    2.000e+02 (0.000e+00)

D = 50
Func   SaDE                     LA-SaDE
f1     2.671e-10 (5.823e-09)    6.739e-14 (8.023e-12)
f2     1.467e-02 (3.781e-02)    4.679e-05 (3.216e-08)
f3     5.175e+05 (5.278e+04)    4.315e+05 (7.589e+04)
f4     2.773e+04 (1.722e+03)    8.194e+03 (5.185e+02)
f5     5.572e+03 (6.438e+02)    1.817e+03 (2.392e+02)
f6     5.812e+00 (5.181e-01)    4.122e+00 (7.011e-01)
f7     1.275e-03 (8.269e-02)    7.153e-12 (5.292e-10)
f8     2.062e+01 (2.781e-02)    2.079e+01 (2.366e-02)
f9     1.728e-06 (3.918e-08)    1.347e-07 (2.391e-07)
f10    1.887e+02 (4.544e+01)    2.819e+02 (4.342e+01)
f11    6.739e+01 (4.101e+01)    6.138e+01 (4.342e+01)
f12    6.247e+05 (9.344e+03)    4.733e+05 (6.775e+03)
f13    7.240e+01 (8.739e+00)    4.994e+01 (2.379e+00)
f14    2.211e+01 (4.391e-01)    2.199e+01 (6.629e-01)
f15    3.981e+02 (6.793e+01)    5.629e+02 (6.281e+01)
f16    1.683e+02 (8.622e+01)    2.126e+02 (5.017e+01)
f17    2.068e+02 (6.628e+01)    2.135e+02 (2.184e+01)
f18    8.329e+02 (4.399e+01)    8.671e+02 (4.618e+01)
f19    8.652e+02 (4.123e+01)    8.152e+02 (9.061e+01)
f20    8.189e+02 (5.391e+01)    8.015e+02 (4.810e+01)
f21    8.040e+02 (2.126e+01)    6.170e+02 (6.395e+01)
f22    7.626e+02 (4.248e+01)    8.295e+02 (7.459e+01)
f23    1.184e+03 (1.775e+01)    8.914e+02 (3.600e+01)
f24    2.000e+02 (0.000e+00)    2.000e+02 (0.000e+00)
f25    2.000e+02 (0.000e+00)    2.000e+02 (0.000e+00)
