IEEE CEC
Eunjin Kim
Department of Computer Science
University of North Dakota
Grand Forks, United States
ejkim@cs.und.edu
Atulya K. Nagar
Department of Computer and Math Sciences
Liverpool Hope University
Liverpool, England
nagara@hope.ac.uk

Keywords—Memetic Algorithm, Evolution, Learning

I. INTRODUCTION
DE begins with a population of NP D-dimensional parameter vectors $\vec{X}_{i,G} = [x_{1,i,G}, x_{2,i,G}, \dots, x_{D,i,G}]$, $i = 1, 2, \dots, NP$, which denote the candidate solutions at the current generation G. We assume the search space is bounded by $\vec{X}_{\min} = [x_{1,\min}, x_{2,\min}, \dots, x_{D,\min}]$ and $\vec{X}_{\max} = [x_{1,\max}, x_{2,\max}, \dots, x_{D,\max}]$. The j-th component of the i-th vector is initialized as

$x_{j,i,0} = x_{j,\min} + rand_{i,j}[0,1] \cdot (x_{j,\max} - x_{j,\min})$, (1)

where $rand_{i,j}[0,1]$ is a uniformly distributed random number lying between 0 and 1 and is instantiated independently for each component of the i-th vector. The following steps are taken next: mutation, crossover, and selection (in that order), which are explained in the following subsections.
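Equation (1) can be sketched in NumPy as follows; the function name and the use of `numpy.random.Generator` are illustrative choices, not from the paper:

```python
import numpy as np

def initialize_population(NP, D, x_min, x_max, rng=None):
    # Eq. (1): x_{j,i,0} = x_{j,min} + rand_{i,j}[0,1] * (x_{j,max} - x_{j,min}),
    # drawn independently for every component of every vector.
    rng = np.random.default_rng(rng)
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    return x_min + rng.random((NP, D)) * (x_max - x_min)

pop = initialize_population(NP=50, D=10,
                            x_min=-100 * np.ones(10),
                            x_max=100 * np.ones(10), rng=0)
print(pop.shape)  # (50, 10)
```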
A. Mutation
After initialization, DE creates a donor vector $\vec{V}_{i,G}$ corresponding to each target vector $\vec{X}_{i,G}$ through mutation. The most frequently used mutation strategies are:

DE/rand/1: $\vec{V}_{i,G} = \vec{X}_{r_1,G} + F \cdot (\vec{X}_{r_2,G} - \vec{X}_{r_3,G})$. (2)

DE/best/1: $\vec{V}_{i,G} = \vec{X}_{best,G} + F \cdot (\vec{X}_{r_1,G} - \vec{X}_{r_2,G})$. (3)

DE/current-to-best/1: $\vec{V}_{i,G} = \vec{X}_{i,G} + F \cdot (\vec{X}_{best,G} - \vec{X}_{i,G}) + F \cdot (\vec{X}_{r_1,G} - \vec{X}_{r_2,G})$. (4)

DE/best/2: $\vec{V}_{i,G} = \vec{X}_{best,G} + F \cdot (\vec{X}_{r_1,G} - \vec{X}_{r_2,G}) + F \cdot (\vec{X}_{r_3,G} - \vec{X}_{r_4,G})$. (5)

DE/rand/2: $\vec{V}_{i,G} = \vec{X}_{r_1,G} + F \cdot (\vec{X}_{r_2,G} - \vec{X}_{r_3,G}) + F \cdot (\vec{X}_{r_4,G} - \vec{X}_{r_5,G})$. (6)
The indices $r_1$, $r_2$, $r_3$, $r_4$ and $r_5$ are mutually exclusive integers randomly chosen from the range [1, NP], and all are different from the base index i. The scaling factor F is a positive control parameter for scaling the difference vectors. $\vec{X}_{best,G}$ is the vector with the best fitness in the population at
generation G. The general convention used for naming the
various mutation strategies is DE/x/y/z, where DE stands for
differential evolution, x represents a string denoting the vector
to be perturbed, y is the number of difference vectors
considered for perturbation of x, and z stands for the type of
crossover being used.
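The five strategies (2)-(6) can be written as one dispatch function. This is a minimal sketch; the function name and the index-sampling details are assumptions, not taken from the paper:

```python
import numpy as np

def mutate(pop, best_idx, i, F, strategy, rng):
    """Build the donor vector V_i for target i under the named strategy."""
    NP = len(pop)
    # r1..r5: mutually exclusive indices, all different from the base index i
    r1, r2, r3, r4, r5 = rng.choice([r for r in range(NP) if r != i],
                                    size=5, replace=False)
    best = pop[best_idx]
    if strategy == "DE/rand/1":
        return pop[r1] + F * (pop[r2] - pop[r3])
    if strategy == "DE/best/1":
        return best + F * (pop[r1] - pop[r2])
    if strategy == "DE/current-to-best/1":
        return pop[i] + F * (best - pop[i]) + F * (pop[r1] - pop[r2])
    if strategy == "DE/best/2":
        return best + F * (pop[r1] - pop[r2]) + F * (pop[r3] - pop[r4])
    if strategy == "DE/rand/2":
        return pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5])
    raise ValueError(f"unknown strategy: {strategy}")
```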
B. Crossover
To increase the potential diversity of the population, a
crossover operation comes into play after generating the donor
vector through mutation. The donor vector exchanges its
components with the target vector $\vec{X}_{i,G}$ under this operation to form the trial vector $\vec{U}_{i,G} = [u_{1,i,G}, u_{2,i,G}, \dots, u_{D,i,G}]$.
In our paper we consider binomial crossover where the donor
vector exchanges its components with the target vector $\vec{X}_{i,G}$ for
each of the D variables whenever a randomly picked number
between 0 and 1 is less than or equal to the Cr value. In this
case, the number of parameters inherited from the donor has a
(nearly) binomial distribution. The scheme may be outlined as
$u_{j,i,G} = v_{j,i,G}$, if $rand_{i,j}[0,1] \le Cr$ or $j = j_{rand}$,
$u_{j,i,G} = x_{j,i,G}$, otherwise, (7)

where $rand_{i,j}[0,1]$ is a uniformly distributed random number lying between 0 and 1 and is instantiated independently for each j-th component of the i-th vector. $j_{rand} \in \{1, 2, \dots, D\}$ is a randomly chosen index, which ensures that $\vec{U}_{i,G}$ gets at least one component from $\vec{V}_{i,G}$.
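Binomial crossover as in (7) can be sketched as follows (names are illustrative):

```python
import numpy as np

def binomial_crossover(target, donor, Cr, rng):
    """Eq. (7): take the donor component when rand <= Cr or at the forced
    index j_rand; otherwise keep the target component."""
    D = len(target)
    j_rand = rng.integers(D)          # guarantees >= 1 donor component
    mask = rng.random(D) <= Cr
    mask[j_rand] = True
    return np.where(mask, donor, target)
```

With Cr = 0.9, roughly 90% of the trial components come from the donor, so the number inherited is (nearly) binomially distributed, as noted above.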
C. Selection
To keep the population size constant over subsequent
generations, the next step of the algorithm calls for selection to
determine whether the target or the trial vector survives to the
next generation, i.e., at G = G + 1. The selection operation is described as follows:

$\vec{X}_{i,G+1} = \vec{U}_{i,G}$, if $f(\vec{U}_{i,G}) \le f(\vec{X}_{i,G})$,
$\vec{X}_{i,G+1} = \vec{X}_{i,G}$, otherwise, (8)

where $f(\cdot)$
is the function to be minimized. So if the new
trial vector yields an equal or lower value of the objective
function, it replaces the corresponding target vector in the next
generation; otherwise the target is retained in the population.
Hence the population either gets better (with respect to the
minimization of the objective function) or remains the same in
fitness status, but never deteriorates.
The above three processes are repeated till the maximum number of generations is reached.
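The greedy selection of (8) is a one-liner; the sphere function below is just an illustrative objective, not one of the benchmarks used later:

```python
def select(target, trial, f):
    """Keep the trial if its objective value is equal or lower (ties go
    to the trial); otherwise retain the target."""
    return trial if f(trial) <= f(target) else target

sphere = lambda x: sum(v * v for v in x)   # toy minimization objective
print(select([2, 2], [1, 1], sphere))  # [1, 1]
```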
III.
B. The Environment
The Environment is represented by the triple $\langle \alpha, c, \beta \rangle$, where $\alpha = \{\alpha_1, \alpha_2, \dots, \alpha_r\}$ is the input set and $\beta$ is the output or response set. Here, we consider a P-model environment where the responses are binary {0,1}, with zero representing the non-penalty response and one the penalty response. $c = \{c_1, c_2, \dots, c_r\}$ denotes the penalty probability set and is dependent on the input set. It is assumed that the penalty probabilities are unknown initially and may vary with variation of t.
C. The Learning Automata
A deterministic Automaton represents a control mechanism
devised to follow a predetermined sequence of instructions. In
our context the term "stochastic" emphasizes the adaptive nature of the Automaton, which adapts itself to changes in its environment rather than following predetermined rules, by virtue of the learning mechanism described in this module. The
scheme is outlined in the following figure which represents a
stochastic Automaton and environment connected in feedback.
[Figure: a stochastic Automaton and its environment connected in feedback.]

Let $p_{mj}(n)$ denote the probability of selecting meme $F_j$ ($j = 1, \dots, 20$) at state $S_m$ during stage n. On a non-penalty response ($\beta = 0$) the probabilities are updated as

$p_{mj}(n+1) = p_{mj}(n) + a(1 - p_{mj}(n))$,
$p_{mk}(n+1) = (1 - a)\,p_{mk}(n)$ for $k \ne j$, (9)

and on a penalty response ($\beta = 1$) as

$p_{mj}(n+1) = (1 - b)\,p_{mj}(n)$,
$p_{mk}(n+1) = b/(r - 1) + (1 - b)\,p_{mk}(n)$ for $k \ne j$, with r = 20, (10)

for the individual member located at state $S_m$. The state transition probability matrix is updated after each evolutionary stage using the Reinforcement Scheme described in the previous section.
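Assuming the standard linear reward-penalty form of (9) and (10), one probability row of the state transition matrix could be updated as below; the function name is an assumption:

```python
import numpy as np

def update_probs(p, j, reward, a=0.01, b=0.01):
    """Update the meme-selection probabilities of one state after meme j
    was tried: Eq. (9) on a non-penalty response, Eq. (10) on a penalty."""
    p = np.asarray(p, dtype=float).copy()
    r = len(p)
    others = np.arange(r) != j
    if reward:                                  # Eq. (9)
        p[others] *= (1 - a)
        p[j] += a * (1 - p[j])
    else:                                       # Eq. (10)
        p[j] *= (1 - b)
        p[others] = b / (r - 1) + (1 - b) * p[others]
    return p                                    # still sums to 1
```

Both branches preserve the total probability mass of 1, so the row remains a valid distribution over the memes.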
The basic algorithm is outlined in the following sections:
A. Initialization
The algorithm employs a population of NP D-dimensional
parameter vectors representing the candidate solutions. The
initial population (at G = 0) should cover the entire search
space as much as possible by uniformly randomizing
individuals within the search space constrained by the
prescribed minimum and maximum bounds. Thus the j-th
component of the i-th population member is initialized
according to (1) as mentioned in section 2.
The state transition probability matrix is initialized with
equal and small values. This is in accordance with the principle
of unavailability of a priori information about the environment
and assuming all actions to be equally likely at a particular
stage.
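The equal-probability initialization can be written directly; treating the matrix as one row of 20 meme probabilities per population member is an assumption based on the pseudocode:

```python
import numpy as np

NP, N_MEMES = 50, 20
# No a priori information about the environment, so every meme is
# initially equally likely: 1/20 = 0.05 per entry.
P = np.full((NP, N_MEMES), 1.0 / N_MEMES)
print(P[0, 0])  # 0.05
```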
B. Adaptive Selection of Memes
The proper choice of Reinforcement Learning Scheme
facilitates the adaptive selection of memes from the meme
pool. We employ Fitness proportionate selection, also known
as Roulette-Wheel selection, for the selection of potentially
useful memes. A basic advantage of this selection mechanism
is that diversity of the meme population can be maintained.
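Roulette-wheel (fitness-proportionate) selection of a scale factor from the meme pool can be sketched as below; the pool of 20 evenly spaced F values in the usage example is hypothetical:

```python
import numpy as np

def roulette_select(meme_pool, probs, rng):
    """Pick a meme (scale factor F) with probability proportional to the
    current state's probability row; returns the value and its index."""
    idx = rng.choice(len(meme_pool), p=probs)
    return meme_pool[idx], idx

rng = np.random.default_rng(3)
memes = np.linspace(0.1, 2.0, 20)      # hypothetical meme pool of F values
F, idx = roulette_select(memes, np.full(20, 0.05), rng)
```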
Although fitter memes would enjoy a much higher probability of selection, weaker memes still retain a non-zero chance of being chosen.
Step 1 Initialize the population $P_G = \{\vec{X}_{1,G}, \vec{X}_{2,G}, \dots, \vec{X}_{NP,G}\}$
with $\vec{X}_{i,G} = [x_{1,i,G}, x_{2,i,G}, \dots, x_{D,i,G}]$, i = [1, 2, ..., NP].
Initialize the state transition probability matrices:
$p_{ij} = 0.05$, $i = 1, \dots, NP$, $j = 1, \dots, 20$.
Initialize learning parameter a = b = 0.01.
Step 2 Evaluate the population.
Step 3 WHILE stopping criterion is not reached, DO
Step 3.1 /*Adaptive Selection of memes*/
FOR i=1 to NP
Select F=Fj by Roulette-Wheel
Selection.
END FOR
Step 3.2/* Mutation Step*/
FOR i=1 to NP
Generate a donor vector $\vec{V}_{i,G}$
corresponding to the target vector
$\vec{X}_{i,G}$ via strategy (4)
END FOR
Step 3.3/*Crossover Step*/
FOR i=1 to NP
Generate trial vector $\vec{U}_{i,G}$ for the i-th
target vector $\vec{X}_{i,G}$ using (7).
END FOR
Step 3.4/*Selection*/
FOR i=1 to NP
Evaluate trial vector $\vec{U}_{i,G}$.
IF $f(\vec{U}_{i,G}) \le f(\vec{X}_{i,G})$
THEN
$\vec{X}_{i,G+1} = \vec{U}_{i,G}$
/*Learning Automata*/
Update state transition matrix
according to (9).
ELSE
$\vec{X}_{i,G+1} = \vec{X}_{i,G}$
Update state transition matrix
according to (10).
END IF
END FOR
Step 3.5/*State Assignment*/
FOR i=1 to NP
Evaluate $f(\vec{X}_{i,G+1})$.
END FOR
Rank members in increasing order of
fitness and assign corresponding ranks.
Step 3.6 G=G+1
Step 4 END WHILE
Algorithm 1. Pseudo-code of LA-DE
C. Differential Evolution
In our paper we utilize the DE/current-to-best/1 strategy
for performing the mutation, recombination and selection
processes as discussed in (4). The parameter F is obtained by
selection from the meme pool.
D. Update of State Transition Probability Matrix
Let a member at state Sm, on selection of Fj from the meme pool, produce a trial vector after mutation and recombination. If the objective value of the trial vector decreases (i.e., its fitness improves), the state transition probabilities for state Sm are updated according to (9); otherwise equation (10) is used for the updating stage. The process is repeated for all the population members.
E. State Assignment
The population members are now ranked in increasing order
of fitness and assigned corresponding states. For example, a
member of rank k is assigned the state Sk.
Sections B-E are repeated till the maximum number of iterations is reached.
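The rank-to-state mapping of this step can be sketched as follows (0-based states for illustration):

```python
import numpy as np

def assign_states(fitness):
    """Rank members in increasing order of fitness; the member of rank k
    receives state S_k (returned as 0-based state indices)."""
    order = np.argsort(fitness)            # member indices sorted best-first
    states = np.empty_like(order)
    states[order] = np.arange(len(fitness))
    return states

print(assign_states(np.array([3.0, 1.0, 2.0])))  # [2 0 1]
```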
V.
A. Experimental Setup
We evaluate the performance of our proposed LA-DE
algorithm on a test-suite of 25 benchmark functions provided
by CEC-2005 special session on real parameter optimization
[8]. It includes 25 functions with varying degrees of
complexity. Among them, the first five functions are unimodal
while the remaining are multimodal. Detailed descriptions are
available in [8].
In the next section we compare the performance of the LA-DE algorithm with three state-of-the-art variants of DE, namely,
DE/rand/1, DE/best/2 and DE/current-to-best/1. For
comparison purposes we set F=0.5 and Cr=0.9 for these
algorithms. No further tuning of the parameters was allowed.
Results have been presented for 10 and 30 dimensions of all the
benchmark functions. The population size NP = 50. The
learning rate for the LA-DE algorithm was made equal to 0.01.
The latter part of the experiment attempts to improve the
performance of a popular variant of DE, called Self-adaptive
Differential Evolution (SaDE) [14] by incorporating LA in the
algorithm for adaptation of the scale factor. SaDE focuses on
adaptation for crossover ratio and mutation strategies of DE.
[Figure: convergence characteristics of LA-DE, SaDE and LA-SaDE; error (log scale) vs. FEs (x 10^4).]

TABLE I.  MEAN ERROR VALUES (STANDARD DEVIATIONS IN PARENTHESES), D = 10

Func No.  DE/rand/1              DE/best/2              DE/current-to-best/1   LA-DE
f1        6.838e+03 (1.230e+02)  9.792e+02 (3.273e+02)  8.187e+02 (2.484e+02)  5.684e-14 (4.198e-14)
f2        7.211e+03 (4.091e+02)  5.399e+03 (7.384e+02)  2.388e+03 (2.399e+02)  3.411e-13 (8.311e-12)
f3        2.112e+07 (7.523e+03)  5.384e+06 (3.474e+05)  4.874e+06 (3.920e+05)  5.339e+02 (2.489e+00)
f4        1.293e+02 (5.782e+01)  5.267e+01 (2.482e+00)  8.921e+00 (3.878e-01)  0.000e+00 (0.000e+00)
f5        1.748e+04 (6.389e+03)  7.390e+03 (2.939e+02)  4.889e+03 (1.329e+02)  4.939e-13 (1.299e-14)
f6        4.544e+08 (2.803e+06)  1.382e+08 (2.233e+06)  9.388e+07 (5.970e+06)  1.136e-13 (1.211e-20)
f7        1.277e+03 (8.618e+02)  9.267e+02 (7.525e+02)  1.084e+03 (8.299e+02)  3.299e-01 (2.488e-01)
f8        2.071e+01 (2.193e-02)  2.043e+01 (3.377e-02)  2.035e+01 (7.913e-02)  2.018e+01 (3.828e-02)
f9        4.235e+01 (3.738e+00)  5.289e+01 (2.489e+00)  2.281e+01 (4.711e+00)  3.979e+00 (6.298e+00)
f10       3.336e+01 (7.380e+00)  3.518e+01 (4.183e+00)  2.683e+01 (5.231e+00)  9.964e+00 (1.843e+00)
f11       8.151e+00 (4.568e-01)  8.980e+00 (6.465e-01)  8.826e+00 (3.454e-01)  4.647e+00 (2.469e-01)
f12       2.511e+04 (2.388e+03)  2.791e+04 (6.213e+03)  2.256e+04 (5.717e+03)  2.714e+03 (2.371e+01)
f13       6.811e+00 (2.881e-01)  2.912e+00 (9.371e-01)  2.112e+00 (4.312e-01)  5.581e-01 (2.712e-01)
f14       3.748e+00 (3.184e-01)  3.027e+00 (3.981e-01)  3.396e+00 (2.471e-01)  3.621e+00 (6.271e-01)
f15       8.247e+02 (1.276e+01)  6.808e+02 (9.362e+01)  4.289e+02 (7.511e+01)  3.054e+02 (2.832e+01)
f16       2.116e+02 (1.539e+01)  1.834e+02 (5.762e+01)  1.753e+02 (7.934e+01)  1.183e+02 (6.886e+01)
f17       2.034e+02 (4.678e+01)  2.014e+02 (1.740e+01)  1.372e+02 (1.678e+01)  1.118e+02 (1.448e+01)
f18       1.240e+03 (3.490e+01)  1.182e+03 (3.588e+01)  8.895e+02 (3.577e+02)  8.470e+02 (2.737e+02)
f19       1.167e+03 (6.181e+01)  9.299e+02 (1.671e+01)  1.231e+03 (1.281e+01)  7.157e+02 (1.198e+01)
f20       1.255e+03 (4.017e+01)  9.591e+02 (5.052e+01)  8.140e+02 (1.648e+02)  6.211e+02 (2.560e+02)
f21       1.106e+03 (8.269e+00)  1.074e+03 (1.191e+01)  1.193e+03 (1.154e+02)  9.012e+02 (1.810e+01)
f22       9.899e+02 (5.298e+01)  8.617e+02 (3.493e+01)  8.912e+02 (5.820e+01)  8.841e+02 (2.894e+01)
f23       1.109e+03 (3.129e+00)  1.095e+03 (9.377e+00)  1.021e+03 (1.022e+01)  8.881e+02 (1.283e+02)
f24       1.219e+03 (2.300e+01)  1.012e+03 (8.388e+01)  9.366e+02 (7.954e+00)  9.661e+02 (9.885e+00)
f25       4.828e+02 (4.278e+00)  4.029e+02 (3.228e+00)  4.781e+02 (8.197e+00)  3.812e+02 (1.828e+01)
TABLE II.  MEAN ERROR VALUES (STANDARD DEVIATIONS IN PARENTHESES), D = 30

Func No.  DE/rand/1              DE/best/2              DE/current-to-best/1   LA-DE
f1        7.229e+04 (2.599e+03)  3.882e+04 (8.737e+03)  1.452e+04 (1.380e+03)  4.206e-12 (6.366e-13)
f2        7.176e+04 (6.289e+03)  6.991e+04 (1.628e+03)  9.271e+03 (1.281e+02)  8.697e-11 (8.281e-13)
f3        6.286e+08 (5.271e+07)  8.381e+08 (7.639e+07)  9.810e+07 (3.589e+07)  2.303e+05 (3.083e+04)
f4        4.389e+02 (1.865e+01)  7.239e+02 (1.335e+01)  5.377e+02 (8.289e+01)  1.310e+02 (3.829e+01)
f5        2.747e+04 (2.577e+03)  2.553e+04 (1.385e+03)  5.483e+03 (6.748e+02)  2.307e+03 (1.472e+02)
f6        3.281e+10 (2.744e+09)  7.381e+09 (8.438e+09)  5.382e+06 (3.893e+06)  4.521e-02 (1.218e-01)
f7        2.836e+03 (4.899e+02)  1.395e+03 (9.272e+02)  9.373e+01 (1.573e+01)  2.291e-02 (8.836e-03)
f8        2.115e+01 (4.436e-02)  2.096e+01 (3.478e-02)  2.098e+01 (5.372e-02)  2.097e+01 (6.341e-02)
f9        2.321e+02 (2.987e+01)  3.119e+02 (2.748e+01)  2.034e+02 (3.772e+00)  7.838e+01 (2.749e+01)
f10       5.287e+02 (4.731e+01)  4.672e+02 (4.313e+01)  2.436e+02 (1.191e+01)  1.012e+02 (1.266e+01)
f11       3.663e+01 (1.393e+00)  3.182e+01 (1.376e+00)  3.356e+01 (1.039e+00)  2.421e+01 (3.746e+00)
f12       9.825e+05 (1.281e+05)  1.066e+06 (1.247e+05)  9.698e+05 (1.622e+05)  3.069e+05 (1.181e+05)
f13       5.973e+02 (1.385e+02)  1.998e+02 (1.105e+02)  1.754e+01 (1.244e+00)  6.834e+00 (1.354e+00)
f14       1.453e+01 (1.121e-01)  1.415e+01 (7.263e-02)  1.365e+01 (1.131e-01)  1.317e+01 (2.281e-01)
f15       8.832e+02 (2.281e+01)  9.075e+02 (2.122e+01)  8.534e+02 (1.780e+01)  5.071e+02 (7.395e+01)
f16       7.567e+02 (6.405e+01)  5.737e+02 (6.102e+01)  4.481e+02 (1.287e+01)  2.829e+02 (1.930e+01)
f17       6.546e+02 (7.213e+01)  5.163e+02 (6.122e+01)  5.391e+02 (1.381e+01)  3.148e+02 (8.049e+01)
f18       1.017e+03 (2.785e+01)  9.893e+02 (1.605e+01)  8.228e+02 (2.304e+00)  9.138e+02 (2.460e+01)
f19       1.495e+03 (2.745e+02)  1.636e+03 (1.802e+01)  1.344e+03 (2.892e+01)  9.191e+02 (1.957e+01)
f20       1.297e+03 (2.523e+01)  1.262e+02 (1.932e+01)  1.024e+03 (3.783e+01)  9.781e+02 (7.548e+01)
f21       1.126e+03 (2.747e+01)  1.086e+03 (1.832e+01)  9.813e+02 (8.382e+01)  5.000e+02 (0.000e+00)
f22       1.163e+03 (9.299e+01)  1.371e+03 (1.729e+01)  9.726e+02 (4.278e+01)  9.563e+02 (6.410e+00)
f23       1.131e+03 (2.046e+01)  1.072e+03 (2.233e+01)  8.192e+02 (1.417e+00)  8.305e+02 (3.652e+01)
f24       1.638e+03 (2.519e+01)  1.106e+02 (7.321e+01)  9.527e+02 (6.395e+01)  9.621e+02 (9.453e+01)
f25       3.536e+02 (5.875e+01)  2.822e+02 (6.643e+01)  3.106e+02 (4.297e+01)  2.115e+02 (1.918e+01)
VI. CONCLUSIONS
[8] P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y.-P. Chen, A. Auger, and S. Tiwari, "Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization," Nanyang Technological University, Singapore, Tech. Rep., 2005.
[14] A. K. Qin and P. N. Suganthan, "Self-adaptive differential evolution algorithm for numerical optimization," in Proc. IEEE Congress on Evolutionary Computation (CEC 2005), 2005, pp. 1785-1791.
TABLE III.  MEAN ERROR VALUES OF SaDE AND LA-SaDE (STANDARD DEVIATIONS IN PARENTHESES)

D = 10
Func No.  SaDE                   LA-SaDE
f1        0.000e+00 (0.000e+00)  0.000e+00 (0.000e+00)
f2        5.187e-14 (2.580e-13)  0.000e+00 (0.000e+00)
f3        6.019e+01 (2.095e+00)  8.061e-02 (4.291e-02)
f4        0.000e+00 (0.000e+00)  0.000e+00 (0.000e+00)
f5        6.141e+00 (2.389e-01)  6.478e-02 (2.300e-02)
f6        3.881e-01 (5.020e-01)  0.000e+00 (3.990e-15)
f7        5.428e-02 (7.291e-02)  1.400e-01 (6.232e-01)
f8        2.052e+01 (2.115e-03)  2.031e+01 (1.852e-03)
f9        0.000e+00 (2.188e-16)  0.000e+00 (3.010e-17)
f10       1.211e+01 (5.218e+00)  8.853e+00 (4.299e+00)
f11       7.191e+00 (1.781e-02)  6.172e+00 (3.858e-03)
f12       1.415e+03 (3.821e+01)  1.222e+03 (1.624e+01)
f13       6.281e-01 (3.121e-02)  5.451e-01 (2.981e-01)
f14       3.631e+00 (7.288e-01)  3.224e+00 (3.771e-01)
f15       9.921e+01 (8.289e+00)  8.191e+01 (2.718e+00)
f16       1.322e+02 (1.139e+01)  1.131e+02 (3.125e+01)
f17       1.872e+02 (2.638e-02)  1.547e+02 (1.377e-01)
f18       8.023e+02 (1.278e+01)  4.223e+02 (1.256e+00)
f19       9.031e+02 (2.186e+01)  8.012e+02 (2.621e+01)
f20       8.287e+02 (4.299e+01)  8.521e+02 (2.102e+01)
f21       8.560e+02 (4.376e+01)  8.285e+02 (5.285e+01)
f22       7.899e+02 (2.399e+01)  8.221e+02 (4.209e+01)
f23       7.291e+02 (1.211e+02)  6.239e+02 (1.009e+02)
f24       2.000e+02 (4.872e-18)  2.000e+02 (1.345e-17)
f25       2.000e+02 (6.279e+00)  2.000e+02 (5.181e+00)

D = 30
Func No.  SaDE                   LA-SaDE
f1        2.167e-12 (2.788e-10)  5.684e-14 (2.180e-15)
f2        5.281e-06 (2.481e-07)  1.023e-12 (6.172e-07)
f3        1.986e+05 (4.825e+04)  1.271e+05 (8.175e+04)
f4        4.847e+00 (3.828e+00)  1.764e+00 (4.924e+00)
f5        2.013e+03 (7.291e+02)  1.910e+03 (5.297e+02)
f6        1.872e-02 (6.726e-03)  1.728e-04 (5.201e-03)
f7        1.271e-03 (4.382e-04)  7.259e-04 (9.248e-03)
f8        2.044e+01 (4.761e-02)  2.064e+01 (3.762e-02)
f9        8.361e-08 (5.731e-08)  5.684e-14 (7.391e-12)
f10       5.333e+01 (7.496e-02)  6.581e+01 (4.819e-02)
f11       8.401e+00 (5.272e-02)  6.943e+00 (1.922e-02)
f12       7.525e+04 (2.389e+02)  4.291e+04 (8.992e+02)
f13       1.612e+00 (1.118e-01)  1.309e+00 (7.824e-01)
f14       1.273e+01 (1.05e-05)   1.331e+01 (7.451e-01)
f15       4.210e+02 (5.383e+01)  3.281e+02 (1.211e+01)
f16       1.531e+02 (3.847e+00)  1.461e+02 (5.490e+00)
f17       2.717e+02 (5.382e+01)  2.219e+02 (4.290e+01)
f18       8.382e+02 (6.392e+01)  7.215e+02 (9.296e+01)
f19       8.149e+02 (3.471e+01)  7.672e+02 (8.822e+01)
f20       8.371e+02 (3.820e+01)  8.141e+02 (8.439e+01)
f21       8.201e+02 (9.372e+01)  5.152e+02 (6.482e+01)
f22       7.82e+02 (1.41e+01)    5.158e+02 (1.90e+01)
f23       1.081e+03 (6.281e+01)  5.581e+02 (3.292e+01)
f24       7.912e+02 (2.881e+01)  2.000e+02 (0.000e+00)
f25       2.000e+02 (0.000e+00)  2.000e+02 (0.000e+00)

D = 50
Func No.  SaDE                   LA-SaDE
f1        2.671e-10 (5.823e-09)  6.739e-14 (8.023e-12)
f2        1.467e-02 (3.781e-02)  4.679e-05 (3.216e-08)
f3        5.175e+05 (5.278e+04)  4.315e+05 (7.589e+04)
f4        2.773e+04 (1.722e+03)  8.194e+03 (5.185e+02)
f5        5.572e+03 (6.438e+02)  1.817e+03 (2.392e+02)
f6        5.812e+00 (5.181e-01)  4.122e+00 (7.011e-01)
f7        1.275e-03 (8.269e-02)  7.153e-12 (5.292e-10)
f8        2.062e+01 (2.781e-02)  2.079e+01 (2.366e-02)
f9        1.728e-06 (3.918e-08)  1.347e-07 (2.391e-07)
f10       1.887e+02 (4.544e+01)  2.819e+02 (4.342e+01)
f11       6.739e+01 (4.101e+01)  6.138e+01 (4.342e+01)
f12       6.247e+05 (9.344e+03)  4.733e+05 (6.775e+03)
f13       7.240e+01 (8.739e+00)  4.994e+01 (2.379e+00)
f14       2.211e+01 (4.391e-01)  2.199e+01 (6.629e-01)
f15       3.981e+02 (6.793e+01)  5.629e+02 (6.281e+01)
f16       1.683e+02 (8.622e+01)  2.126e+02 (5.017e+01)
f17       2.068e+02 (6.628e+01)  2.135e+02 (2.184e+01)
f18       8.329e+02 (4.399e+01)  8.671e+02 (4.618e+01)
f19       8.652e+02 (4.123e+01)  8.152e+02 (9.061e+01)
f20       8.189e+02 (5.391e+01)  8.015e+02 (4.810e+01)
f21       8.040e+02 (2.126e+01)  6.170e+02 (6.395e+01)
f22       7.626e+02 (4.248e+01)  8.295e+02 (7.459e+01)
f23       1.184e+03 (1.775e+01)  8.914e+02 (3.600e+01)
f24       2.000e+02 (0.000e+00)  2.000e+02 (0.000e+00)
f25       2.000e+02 (0.000e+00)  2.000e+02 (0.000e+00)