Differential Evolution Using MATLAB

Submitted by,
Prof. A. Vasan
CERTIFICATE
This is to certify that the project entitled "Differential Evolution Using MATLAB" has been carried out under the supervision of
Prof. A. Vasan
(Department of Civil Engineering)
Birla Institute of Technology and Science, Pilani, Hyderabad Campus
2013-14
ACKNOWLEDGMENT
Firstly, I would like to thank BITS Pilani for giving me the opportunity to take up this study-oriented project.
I sincerely appreciate Prof. P. N. Rao for giving me the opportunity to work on this topic. I am grateful for his support and guidance, which have helped me expand my horizons of thought and expression.
I am also thankful to my friends for their help in sharing information about various aspects of this project.
Student: Mandar P. Ganbavale (ID No. 2013H143013H)
Supervisor: Dr. A. Vasan
Qualification: Ph.D.
Designation: Professor
Organization: Birla Institute of Technology and Science, Pilani, Hyderabad Campus
Project topic: Differential Evolution using MATLAB
Introduction
Optimization problems are everywhere in academic research and in real-world applications in engineering, finance, and the sciences. Wherever resources such as space, time, and cost are limited, optimization problems arise. Researchers and practitioners therefore need an efficient and robust optimization approach for problems of widely different characteristics; at the same time, applying it to a complex optimization problem should not itself be difficult. In addition, an optimization algorithm should be able to reliably converge to the true optimum for a variety of problems, and the computing resources spent on the search should not be excessive. Thus, a useful optimization method should be easy to use, reliable, and efficient in reaching satisfactory solutions.
Differential Evolution (DE) is an optimization approach that addresses these requirements.
- The differential evolution algorithm was introduced by Storn and Price in 1995.
- Differential evolution has been shown to be a simple yet efficient optimization approach for a variety of benchmark problems as well as many real-world applications.
- Differential evolution, together with evolution strategies (ES) and evolutionary programming (EP), can be categorized into a class of population-based, derivative-free methods known as evolutionary algorithms.
- All these approaches mimic Darwinian evolution and evolve a population of individuals from one generation to the next through evolutionary operations such as mutation, crossover, and selection.
Steps involved in the differential evolution algorithm (flowchart): Initialization → Mutation → Crossover → Selection → Convergence check (if not converged, return to Mutation; otherwise stop).
1. Sample the search space at multiple, randomly chosen initial points, i.e., a population of individual vectors.
2. Since differential evolution is by nature a derivative-free continuous function optimizer, it encodes parameters as floating-point numbers and manipulates them with simple arithmetic operations such as addition, subtraction, and multiplication, generating new points that are perturbations (mutations) of existing points. To this end, differential evolution mutates a (parent) vector in the population with a scaled difference of other, randomly selected individual vectors.
3. The resulting mutation vector is crossed over with the corresponding parent vector to generate a trial or offspring vector.
4. Then, in a one-to-one selection between each pair of offspring and parent vectors, the one with the better fitness value survives and enters the next generation.
This procedure repeats for each parent vector, and the survivors of all parent-offspring pairs become the parents of the new generation in the evolutionary search cycle. The search stops when the algorithm converges to the true optimum or when a termination criterion, such as a maximum number of generations, is reached. A compact MATLAB sketch of this loop is given below.
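The following is a minimal MATLAB sketch of the four steps above as a reusable function, assuming a DE/rand/1/bin configuration with the midpoint bound repair described later in Section 2.1. The function name de_min, the termination rule, and all parameter names are illustrative placeholders, not code from this project.

```matlab
function [xbest, fbest] = de_min(f, xmin, xmax, NP, F, CR, Gmax)
% Minimal DE/rand/1/bin sketch. f: objective handle, xmin/xmax: 1-by-D
% bounds, NP: population size, F: mutation factor, CR: crossover
% probability, Gmax: number of generations (termination criterion).
D = numel(xmin);
% 1) Initialization: NP points sampled uniformly within the bounds
X  = repmat(xmin,NP,1) + rand(NP,D).*repmat(xmax-xmin,NP,1);
fX = zeros(NP,1);
for i = 1:NP, fX(i) = f(X(i,:)); end
for g = 1:Gmax
    for i = 1:NP
        % 2) Mutation (DE/rand/1): base vector plus scaled difference
        idx = randperm(NP); idx(idx==i) = []; r = idx(1:3);
        V = X(r(1),:) + F*(X(r(2),:) - X(r(3),:));
        % Repair out-of-bound components (midpoint rule, Section 2.1)
        lo = V < xmin; V(lo) = (xmin(lo) + X(i,lo))/2;
        hi = V > xmax; V(hi) = (xmax(hi) + X(i,hi))/2;
        % 3) Binomial crossover with the parent vector
        jrand = randi(D);
        mask = rand(1,D) <= CR; mask(jrand) = true;
        U = X(i,:); U(mask) = V(mask);
        % 4) One-to-one selection: the better of parent and trial survives
        fU = f(U);
        if fU < fX(i), X(i,:) = U; fX(i) = fU; end
    end
end
[fbest, ibest] = min(fX); xbest = X(ibest,:);
end
```

As a usage example, the call [x, fval] = de_min(@(x) sum(x.^2), -5*ones(1,3), 5*ones(1,3), 50, 0.5, 0.9, 200) minimizes a 3-dimensional sphere function; the settings here are arbitrary illustrations.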
1. Evolutionary algorithms
Evolutionary algorithms (EAs) follow the principle of natural evolution and survival of the fittest described by Darwinian theory.
The major applications of evolutionary algorithms are in optimization, although they have also been used to conduct data mining, generate learning systems, and build experimental frameworks for validating theories about biological evolution and natural selection.
EAs differ from traditional optimization techniques in that they usually evolve a population of solutions, i.e., individual points in the search space of decision variables, instead of starting from a single point. At each iteration, an evolutionary algorithm generates new solutions, also called offspring, by mutating and/or recombining current (parent) solutions, and then conducts a competitive selection to weed out poor solutions.
In comparison with traditional optimization techniques, such as calculus-based nonlinear programming methods, evolutionary algorithms are usually more robust and achieve a better balance between exploration and exploitation of the search space when optimizing many real-world problems.
Several main streams of evolutionary algorithms have evolved over the past forty years. The majority of current implementations descend from three strongly related but independently developed branches:
o Evolution strategies
o Genetic algorithms, and
o Evolutionary programming.
These approaches are closely related in their underlying principles, while their exact operations and typical problem domains differ from one approach to another.
Genetic algorithms are better suited for discrete optimization because the decision variables are originally encoded as bit strings and are modified by logical operators.
Evolution strategies and evolutionary programming, by contrast, concentrate on mutation, although evolution strategies may also incorporate crossover (recombination) as an operator.
Evolution strategies are continuous function optimizers in nature because they encode parameters as floating-point numbers and manipulate them by arithmetic operations.
Although differences exist among these evolutionary algorithms, they all rely on the concept of a population of individuals or solutions, which undergo probabilistic operations such as mutation, crossover, and selection to evolve toward solutions of better fitness in the search space of decision variables.
- Mutation introduces new information into the population by randomly generating variations of existing individuals.
- Crossover, or recombination, typically performs an information exchange between different individuals in the current population.
- Selection imposes a driving force towards the optimum by preferring individuals of better fitness.
- The fitness value may reflect the objective function value and/or the level of constraint satisfaction.
- These operations compose a loop, and evolutionary algorithms usually execute a number of generations until the best-so-far solution is satisfactory or another termination criterion is fulfilled.
2. Differential evolution
Similar to other evolutionary algorithms, differential evolution is a population-based, derivative-free function optimizer. It usually encodes decision variables as floating-point numbers and manipulates them with simple arithmetic operations such as addition, subtraction, and multiplication.
The initial population $\{X_{i,0} = (x_{1,i,0}, x_{2,i,0}, \dots, x_{D,i,0}) \mid i = 1, 2, \dots, NP\}$ is randomly generated according to a normal or uniform distribution; in the uniform case,
$$x_{j,i,0} = x_j^{\min} + \mathrm{rand}_j(0,1)\,(x_j^{\max} - x_j^{\min}), \qquad j = 1, 2, \dots, D$$
Where,
$NP$ = population size,
$D$ = dimension of the problem,
$x_j^{\min}$ = lower limit of the j-th vector component,
$x_j^{\max}$ = upper limit of the j-th vector component.
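A small MATLAB sketch of this uniform initialization follows; the values of NP, D, and the bounds are illustrative assumptions.

```matlab
% Uniform initialization: x(j,i,0) = xmin(j) + rand*(xmax(j)-xmin(j))
NP = 30; D = 4;                 % assumed population size and dimension
xmin = [0 0 0 0];               % lower limits x_j^min (assumed)
xmax = [1 2 5 10];              % upper limits x_j^max (assumed)
X0 = repmat(xmin,NP,1) + rand(NP,D).*repmat(xmax-xmin,NP,1);
```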
After initialization, DE enters a loop of evolutionary operations: mutation, crossover
and selection.
2.1. Mutation
At each generation g, this operation creates mutation vectors $V_{i,g}$ based on the current parent population $\{X_{i,g} = (x_{1,i,g}, x_{2,i,g}, \dots, x_{D,i,g}) \mid i = 1, 2, \dots, NP\}$. The following mutation strategies are frequently used in the literature:
DE/rand/1:
$$V_{i,g} = X_{r_0,g} + F_i\,(X_{r_1,g} - X_{r_2,g})$$
DE/current-to-best/1:
$$V_{i,g} = X_{i,g} + F_i\,(X_{best,g} - X_{i,g}) + F_i\,(X_{r_1,g} - X_{r_2,g})$$
DE/best/1:
$$V_{i,g} = X_{best,g} + F_i\,(X_{r_1,g} - X_{r_2,g})$$
Where,
$r_0, r_1, r_2$ = distinct integers uniformly chosen from the set $\{1, 2, \dots, NP\} \setminus \{i\}$,
$X_{r_1,g} - X_{r_2,g}$ = difference vector used to mutate the parent,
$X_{best,g}$ = best vector at the current generation,
$F_i$ = the mutation factor, which usually ranges on the interval (0, 1+).
In classic DE algorithms, $F_i = F$ is a single parameter used for the generation of all mutation vectors, while in many adaptive DE algorithms each individual $i$ is associated with its own mutation factor $F_i$. The above mutation strategies can be generalized by using multiple difference vectors instead of the single difference $X_{r_1,g} - X_{r_2,g}$; the resulting strategy is named DE/·/k according to the number $k$ of difference vectors adopted. The mutation operation may generate vectors whose components violate the predefined boundary constraints. Possible remedies include resetting schemes, penalty schemes, etc. A simple method is to set a violating component to the midpoint between the violated bound and the corresponding component of the parent individual, i.e.,
$$v_{j,i,g} = \begin{cases} \dfrac{x_j^{\min} + x_{j,i,g}}{2}, & \text{if } v_{j,i,g} < x_j^{\min} \\[4pt] \dfrac{x_j^{\max} + x_{j,i,g}}{2}, & \text{if } v_{j,i,g} > x_j^{\max} \end{cases}$$
where $v_{j,i,g}$ and $x_{j,i,g}$ are the j-th components of the mutation vector $V_{i,g}$ and the parent vector $X_{i,g}$ at generation g, respectively.
This method performs especially well when the optimal solution is located near or on the boundary.
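The sketch below illustrates the three mutation strategies and the midpoint bound repair for a single target index i; the population X, the fitness used to pick the best vector, F, and the bounds are illustrative assumptions.

```matlab
% Mutation strategies and midpoint bound repair for one target index i
NP = 30; D = 4; F = 0.8; i = 1;          % assumed settings
xmin = zeros(1,D); xmax = ones(1,D);     % assumed bounds
X = rand(NP,D);                          % assumed current population
[~, b] = min(sum(X.^2,2));               % best index under an assumed fitness
idx = randperm(NP); idx(idx==i) = [];    % distinct r0, r1, r2, all ~= i
r0 = idx(1); r1 = idx(2); r2 = idx(3);

V   = X(r0,:) + F*(X(r1,:) - X(r2,:));                        % DE/rand/1
Vcb = X(i,:) + F*(X(b,:) - X(i,:)) + F*(X(r1,:) - X(r2,:));   % DE/current-to-best/1
Vb  = X(b,:) + F*(X(r1,:) - X(r2,:));                         % DE/best/1

% Midpoint repair applied to the DE/rand/1 mutant: violating components
% are reset halfway between the violated bound and the parent component
lo = V < xmin; V(lo) = (xmin(lo) + X(i,lo))/2;
hi = V > xmax; V(hi) = (xmax(hi) + X(i,hi))/2;
```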
2.2. Crossover
After mutation, a binomial crossover operation forms the final trial vector $U_{i,g} = (u_{1,i,g}, u_{2,i,g}, \dots, u_{D,i,g})$:
$$u_{j,i,g} = \begin{cases} v_{j,i,g}, & \text{if } \mathrm{rand}_j(0,1) \le CR_i \text{ or } j = j_{rand} \\ x_{j,i,g}, & \text{otherwise} \end{cases}$$
Where,
$\mathrm{rand}_j(a, b)$ = uniform random number on the interval (a, b), newly generated for each j,
$j_{rand} = \mathrm{randint}(1, D)$ = integer randomly chosen from 1 to D, newly generated for each i.
The crossover probability $CR_i \in [0, 1]$ roughly corresponds to the average fraction of vector components inherited from the mutation vector. In classic DE, $CR_i = CR$ is a single parameter used to generate all trial vectors, while in many adaptive DE algorithms each individual $i$ is associated with its own crossover probability $CR_i$.
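A short MATLAB sketch of binomial crossover for one parent/mutant pair; D, CR, and the vectors themselves are illustrative assumptions.

```matlab
% Binomial crossover producing the trial vector U_i,g
D = 4; CR = 0.9;                % assumed dimension and crossover probability
Xi = rand(1,D);                 % parent vector X_i,g (assumed)
Vi = rand(1,D);                 % mutation vector V_i,g (assumed)
jrand = randi(D);               % ensures at least one component comes from V_i,g
mask = rand(1,D) <= CR;         % rand_j(0,1) <= CR, drawn anew for each j
mask(jrand) = true;
Ui = Xi; Ui(mask) = Vi(mask);   % trial vector U_i,g
```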
2.3. Selection
The selection operation selects the better of the parent vector $X_{i,g}$ and the trial vector $U_{i,g}$ according to their fitness values $f(\cdot)$. For a minimization problem, the selected vector is given by
$$X_{i,g+1} = \begin{cases} U_{i,g}, & \text{if } f(U_{i,g}) < f(X_{i,g}) \\ X_{i,g}, & \text{otherwise.} \end{cases}$$
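A minimal MATLAB sketch of this one-to-one selection step; the objective f and the two vectors are illustrative assumptions.

```matlab
% One-to-one selection for a minimization problem
f  = @(x) sum(x.^2);            % assumed objective function
Xi = [0.4 0.7];                 % parent vector X_i,g (assumed)
Ui = [0.3 0.6];                 % trial vector U_i,g (assumed)
if f(Ui) < f(Xi)
    Xnext = Ui;                 % trial has better fitness and survives
else
    Xnext = Xi;                 % parent survives into generation g+1
end
```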
For Problem 1, we see that the value obtained for F = 0.5 and CR = 0.95 is the lowest, i.e., $f_{\min} = 1.7251$.
Problem 2
Multi-objective optimization
The optimal design of the three-bar truss shown in the figure is considered using two different objectives, with the cross-sectional areas of members 1 (and 3) and 2 as the design variables. The value of H is assumed to be 1.
Results
The results shown below are for 250 iterations and a population size of 2000.
Hence, it is seen that the values of $f_{\min}$ for both objective functions change very little even as the value of CR changes.
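As a hedged sketch of how such runs could be set up, the driver below minimizes each objective in a separate DE run, reusing the de_min function from the earlier sketch. The objective handles f1 and f2 and the bounds are hypothetical placeholders, not the truss formulas of this report; only the settings NP = 2000, 250 generations, F = 0.5, and CR = 0.8 are taken from the text.

```matlab
% Hypothetical driver: one DE run per objective (placeholder objectives)
f1 = @(A) 2*sqrt(2)*A(1) + A(2);   % placeholder objective 1 (assumed)
f2 = @(A) 1/(A(1) + A(2));         % placeholder objective 2 (assumed)
lb = [0.1 0.1]; ub = [5 5];        % assumed bounds on the two areas
objs = {f1, f2};
for k = 1:2
    [Abest, fmin] = de_min(objs{k}, lb, ub, 2000, 0.5, 0.8, 250);
    fprintf('Objective %d: fmin = %.4f at A = [%s]\n', k, fmin, num2str(Abest));
end
```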
Figure 3: The figure shows the results for F = 0.5 and CR = 0.8 for both objective functions.