Adaptive Parameter Selection Scheme for PSO: A Learning Automata Approach
Ali B. Hashemi and M.R. Meybodi
Computer Engineering and Information Technology Department, Amirkabir University of
Technology, Tehran, Iran
{a_hashemi, mmeybodi}@aut.ac.ir
Abstract
PSO, like many stochastic search methods, is very sensitive to its parameter settings: modifying a single parameter can have a large effect on performance. In this paper, we propose a new learning automata based approach for adaptive PSO parameter selection. In this approach, three learning automata are utilized to determine the value of each parameter of the velocity update, namely the inertia weight and the cognitive and social components. Experimental results show that, compared to other schemes such as SPSO, PSO-IW, PSO-TVAC, PSO-LP, DAPSO, GPSO, and DCPSO, the proposed algorithms have the same or even higher ability to find good local minima. In addition, the proposed algorithms converge to the stopping criteria significantly faster than most of the other PSO algorithms.
1. Introduction
The particle swarm optimization (PSO) algorithm was introduced by Kennedy and Eberhart [1] based on the social behavior metaphor. In particle swarm optimization, a group of particles, without quality and volume, fly through a D-dimensional space, adjusting their positions in the search space according to their own experience and that of their neighbors. The ith particle is represented as P_i = (p_i1, p_i2, ..., p_iD) in the D-dimensional space. The velocity of particle i is represented as V_i = (v_i1, v_i2, ..., v_iD), which is usually clamped to a maximum velocity V_max specified by the user. In each time step t, the particles are manipulated according to the following equations:
v_i(t+1) = w v_i(t) + c1 r1 (pbest_i - p_i(t)) + c2 r2 (gbest - p_i(t))    (1)

p_i(t+1) = p_i(t) + v_i(t+1)    (2)

where w is the inertia weight, c1 and c2 are the cognitive and social acceleration coefficients, r1 and r2 are random numbers uniformly distributed in [0, 1], pbest_i is the best position found so far by particle i, and gbest is the best position found so far by the swarm.
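As a concrete illustration, the following minimal Python sketch implements equations (1) and (2) for one time step; the swarm layout and the default coefficient values here are illustrative assumptions, not settings prescribed by this paper.

import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.49, c2=1.49, vmax=1.0):
    """One PSO update following equations (1) and (2).
    pos, vel, pbest: arrays of shape (N, D); gbest: array of shape (D,)."""
    n, d = pos.shape
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)  # equation (1)
    vel = np.clip(vel, -vmax, vmax)  # clamp to the maximum velocity Vmax
    pos = pos + vel                  # equation (2)
    return pos, vel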
2. Related Work
PSO parameter selection schemes introduced in the literature can be classified into three categories. In the first category, all parameters of PSO are selected empirically. For example, assigning different values to c1 and c2 sometimes leads to faster convergence and improved performance [15]. Typically, these are set to a value of 2.0 [1], but experimental results indicate that alternative configurations, depending on the problem, may produce superior performance. An extended study of the acceleration coefficients in the first version of PSO is given in [16]. Although this approach may yield good results for a single problem with constant parameters, it cannot yield optimal parameters in general, because the optimal values differ across the stages of the search, e.g. more exploration than exploitation is needed at the beginning. To address this problem, algorithms in the second category choose time-varying parameter values.
Generally, in population-based optimization methods, considerably high diversity is necessary during the early stages of the search to allow use of the full range of the search space. On the other hand, during the latter stages of the search, when the algorithm is converging to the optimal solution, fine-tuning of the solution is important in order to find the global optimum efficiently. Considering these concerns, Shi and Eberhart [4], [3], [17] found a significant improvement in the performance of the PSO method with a linearly varying inertia weight over the generations. They showed experimentally that it is better to initially set the inertia weight to a large value, in order to promote global exploration of the search space, and to gradually decrease it to obtain more refined solutions [4], [3]. Moreover, Ratnaweera et al. [7] proposed a PSO with time-varying acceleration coefficients (PSO-TVAC), in which c1 decreases linearly over time while c2 increases linearly. In their method, when no improvement occurs, the velocity is mutated, with a mutation probability, to the maximum allowed velocity multiplied by the mutation step size. Chatterjee and Siarry [5] proposed DAPSO, a PSO with a nonlinear variation of the inertia weight for dynamic adaptation [5]. A sketch of such time-varying schedules is given below.
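The following Python sketch illustrates these schedules: a linearly decreasing inertia weight and linearly varying acceleration coefficients. The endpoint values (0.9 to 0.4 for w, 2.5 to 0.5 for c1 and the reverse for c2) are the ones commonly reported for these methods, used here as an assumed configuration rather than this paper's settings.

def time_varying_parameters(t, t_max, w_max=0.9, w_min=0.4,
                            c1_start=2.5, c1_end=0.5, c2_start=0.5, c2_end=2.5):
    """Linearly varying PSO parameters at iteration t out of t_max."""
    frac = t / t_max
    w = w_max - (w_max - w_min) * frac           # large w early (exploration), small w late (exploitation)
    c1 = c1_start + (c1_end - c1_start) * frac   # cognitive component decreases over time
    c2 = c2_start + (c2_end - c2_start) * frac   # social component increases over time
    return w, c1, c2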
3. Learning Automata
The automata approach to learning can be considered as the determination of an optimal action from a set of actions. An automaton can be regarded as an abstract object that has a finite number of actions. It selects an action from its finite set of actions and applies it to an environment. The environment evaluates the applied action and sends a reinforcement signal to the automaton (i.e. favorable or unfavorable in a P-model environment). The response of the environment is used by the learning algorithm of the automaton to update its internal state. By continuing this process, the automaton gradually learns to select the optimal action. For example, in a linear reward-penalty scheme, when the selected action i receives an unfavorable response, the action probabilities are updated as follows:
p_j(k+1) = (1 - b) p_j(k)                     if i = j
p_j(k+1) = b / (r - 1) + (1 - b) p_j(k)       if i ≠ j    (4)

where b is the penalty parameter and r is the number of actions of the automaton.
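A minimal Python sketch of such a learning automaton follows. The penalty branch implements equation (4); the reward branch is the standard linear reward update, and the learning rates a and b are assumed values, not the paper's configuration.

import numpy as np

class LearningAutomaton:
    """Finite-action learning automaton with a linear reward-penalty update."""
    def __init__(self, r, a=0.1, b=0.05):
        self.r, self.a, self.b = r, a, b
        self.p = np.full(r, 1.0 / r)   # action probability vector, initially uniform

    def select(self):
        return np.random.choice(self.r, p=self.p)

    def update(self, i, favorable):
        if favorable:   # reward: p_i grows by a(1 - p_i), the others shrink by factor (1 - a)
            self.p = (1 - self.a) * self.p
            self.p[i] += self.a
        else:           # penalty: equation (4)
            p_i = (1 - self.b) * self.p[i]
            self.p = self.b / (self.r - 1) + (1 - self.b) * self.p
            self.p[i] = p_i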
4. Proposed Algorithm
In the proposed algorithms, we use three learning automata to set the three parameters of the PSO velocity update equation, namely w, c1, and c2. To adjust a parameter value, we utilize both an adventurous and a conservative approach. In the first approach, a learning automaton directly selects the value of each parameter. In the second approach, the learning automaton of each parameter decides how to change the value of the parameter, i.e. whether to increase it, decrease it, or leave it unchanged.
PSO-APLA1 uses three learning automata, LA_w, LA_c1, and LA_c2, for adjusting the values of the swarm parameters w, c1, and c2, respectively. Each automaton has N_P actions, where each action corresponds to a value in the range [P_min, P_max], and P_min and P_max are the boundaries of parameter P. At each iteration, the learning automaton of each of the parameters w, c1, and c2 selects an action, and the parameters are then set to the values selected by their learning automata. The pseudocode of this algorithm is presented in Figure 1.
Procedure PSO-APLA1
begin
  Initialize a D-dimensional PSO
  repeat
    for each parameter θ ∈ {w, c1, c2} do
      Select an action of LA_θ as α_θ, where α_θ ∈ {0, 1, ..., N_θ}
      θ(t) = θ_min + α_θ (θ_max - θ_min) / N_θ
    end
    for each particle p_i, i ∈ [1..N] do
      ...
end
Figure 1. Pseudocode of PSO-APLA1

In the conservative approach (PSO-APLA2), the learning automaton of each parameter θ has three actions, increase, no-change, and decrease, and the parameter is changed by a fixed step δ_θ:

θ(t+1) = θ(t) + δ_θ    if α_θ = increase
θ(t+1) = θ(t)          if α_θ = no-change    (5)
θ(t+1) = θ(t) - δ_θ    if α_θ = decrease

Both approaches are sketched in code below.
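To make the two approaches concrete, the following Python sketch (building on the LearningAutomaton class above) drives a single parameter, here w, by one automaton under each approach. The boundaries, number of actions, step size, and reinforcement rule are illustrative assumptions, since the paper's exact settings are not reproduced here.

# Adventurous approach (PSO-APLA1): the automaton picks the parameter value directly.
w_min, w_max = 0.4, 0.9              # assumed boundaries of w
N_w = 10                             # assumed number of intervals; actions 0..N_w
la_w = LearningAutomaton(r=N_w + 1)
alpha = la_w.select()
w = w_min + alpha * (w_max - w_min) / N_w   # discretized value, as in Figure 1

# Conservative approach (PSO-APLA2): the automaton picks a direction of change, equation (5).
la_w2 = LearningAutomaton(r=3)       # actions: 0 = increase, 1 = no-change, 2 = decrease
delta_w = 0.05                       # assumed fixed step size
action = la_w2.select()
w = w + (delta_w if action == 0 else -delta_w if action == 2 else 0.0)
w = min(max(w, w_min), w_max)        # keep w inside its boundaries

# After evaluating the swarm, the automata receive a reinforcement signal,
# e.g. favorable when the global best improved:
# la_w.update(alpha, favorable=gbest_improved)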
5. Experimental Results
The proposed PSO algorithms were tested on a suite of benchmark functions used in the literature and compared with well-known PSO algorithms: SPSO [22], PSOIW [4], GPSO [19], PSO-LP [18], DCPSO [23], DAPSO [20], and PSO-TVAC [7].
Table 1 Benchmark functions

Rosenbrock: f1(x) = Σ_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2]
  n = 30, range [-30, 30]^n, initial range [15, 30]^n, goal 100

Sphere: f2(x) = Σ_{i=1}^{n} x_i^2
  n = 30, range [-100, 100]^n, initial range [50, 100]^n, goal 0.01

Rastrigin: f3(x) = Σ_{i=1}^{n} [x_i^2 - 10 cos(2π x_i)] + 10n
  n = 30, range [-5.12, 5.12]^n, initial range [2.56, 5.12]^n, goal 100

Ackley: f4(x) = -20 exp(-0.2 √((1/n) Σ_{i=1}^{n} x_i^2)) - exp((1/n) Σ_{i=1}^{n} cos(2π x_i)) + 20 + e
  n = 30, range [-32, 32]^n, initial range [16, 32]^n, goal 0.1

Griewank: f5(x) = (1/4000) Σ_{i=1}^{n} x_i^2 - Π_{i=1}^{n} cos(x_i / √i) + 1
  n = 30, range [-600, 600]^n, initial range [300, 600]^n, goal 0.1
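For reference, the five benchmark functions can be written directly in Python; a minimal NumPy sketch matching the formulas above:

import numpy as np

def rosenbrock(x):   # f1
    return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1)**2)

def sphere(x):       # f2
    return np.sum(x**2)

def rastrigin(x):    # f3
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x)) + 10 * x.size

def ackley(x):       # f4
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def griewank(x):     # f5
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1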
Table 2 Average best fitness value (mean ± standard deviation) for 10-D functions, MaxIteration=5000

Algorithm           f1                  f2                    f3                  f4                  f5
SPSO, c1,2=1.49     7.60E-01±1.78E-01   5.42E-273±0.00E+00    7.13E+00±4.35E-01   3.19E-01±2.46E-01   7.42E-02±4.61E-03
SPSO, c1,2=2.0      3.97E+00±3.85E-01   6.60E-35±5.16E-35     5.52E-01±8.43E-02   3.68E-15±7.60E-17   7.62E-02±4.00E-03
SPSO, c1,2=2.05     5.72E+00±1.27E+00   1.59E-19±9.46E-20     4.67E-01±8.65E-02   1.56E-10±1.20E-10   1.04E-01±6.81E-03
PSOIW, c1,2=1.49    6.46E-01±1.61E-01   0.00E+00±0.00E+00     2.10E+00±1.41E-01   3.56E-15±4.04E-17   6.23E-02±3.34E-03
PSOIW, c1,2=2.0     1.36E+00±1.84E-01   1.17E-130±1.28E-130   1.43E+00±1.22E-01   3.53E-15±3.29E-17   5.89E-02±2.98E-03
PSOIW, c1,2=2.05    2.05E+00±2.37E-01   7.59E-107±5.25E-107   1.48E+00±1.19E-01   3.54E-15±4.04E-17   6.53E-02±3.30E-03
DCPSO               1.55E+01±3.19E+00   8.45E-10±6.54E-10     1.37E+00±1.62E-01   1.54E+00±6.08E-01   6.67E-02±4.03E-03
GPSO                3.39E+00±4.45E-01   1.68E-18±1.06E-19     5.92E-18±1.17E-17   1.51E-09±5.10E-11   5.16E-02±2.64E-03
DAPSO               1.70E+02±3.20E+01   4.01E-01±2.88E-02     6.55E+00±2.34E-01   1.96E+00±6.69E-02   6.71E-01±1.80E-02
PSO-LP              7.66E-01±1.71E-01   0.00E+00±0.00E+00     2.63E+01±1.58E+00   1.95E+01±2.56E-01   1.78E-01±1.96E-01
PSO-TVAC            4.43E-01±1.40E-01   0.00E+00±0.00E+00     1.83E+00±1.36E-01   3.60E-15±5.69E-17   5.36E-02±2.83E-03
PSO-APLA11          1.75E+00±2.89E-01   1.02E-28±1.97E-28     1.19E-01±4.22E-02   1.94E-12±3.78E-12   5.84E-02±2.93E-03
PSO-APLA12          5.60E-01±1.43E-01   7.35E-46±1.41E-45     1.19E+00±1.84E-01   3.78E-15±9.85E-17   6.46E-02±3.68E-03
PSO-APLA2           3.06E+00±4.69E-01   9.25E-12±1.82E-11     9.33E-02±5.83E-03   1.79E-01±6.57E-02   2.20E-81±3.95E-81
Table 3 Success rate and average number of FEs (mean ± standard deviation) to reach the goal for 10-D functions, MaxIteration=5000

Success rate:
Algorithm           f1      f2      f3      f4      f5
SPSO, c1,2=1.49     100%    100%    100%    96.7%   79.3%
SPSO, c1,2=2.0      100%    100%    100%    100%    77.0%
SPSO, c1,2=2.05     99.3%   100%    100%    100%    58.0%
PSOIW, c1,2=1.49    100%    100%    100%    100%    90.0%
PSOIW, c1,2=2.0     100%    100%    100%    100%    94.3%
PSOIW, c1,2=2.05    100%    100%    100%    100%    87.0%
DCPSO               97.7%   100%    100%    92.3%   85.0%
GPSO                100%    100%    100%    100%    98.0%
DAPSO               53.0%   0.0%    100%    0.0%    0.0%
PSO-LP              100%    100%    100%    1.0%    74.0%
PSO-TVAC            100%    100%    100%    100%    95.3%
PSO-APLA11          100%    100%    100%    100%    95.0%
PSO-APLA12          100%    100%    100%    100%    86.7%
PSO-APLA2           100%    100%    100%    100%    66.7%

Average number of FEs to reach the goal:
Algorithm           f1             f2             f3           f4              f5
SPSO, c1,2=1.49     4,388±32       4,375±302      477±23       9,853±1,725     4,450±519
SPSO, c1,2=2.0      18,569±1,526   22,499±323     904±53       29,977±561      104,425±7,092
SPSO, c1,2=2.05     26,160±1,889   33,642±583     865±89       28,884±189      46,505±2,099
PSOIW, c1,2=1.49    26,923±801     30,112±167     828±46       74,422±282      107,723±2,757
PSOIW, c1,2=2.0     70,421±1,214   76,688±260     2,904±283    82,052±332      117,227±2,917
PSOIW, c1,2=2.05    78,260±1,264   84,351±297     3,133±313    20,199±342      84,316±6,470
DCPSO               21,169±3,981   4,988±187      417±14       9,541±2,085     49,122±5,660
GPSO                10,637±622     16,945±214     531±28       15,225±269      24,082±2,152
DAPSO               85,809±6,452   n/a            1,347±97     n/a             n/a
PSO-LP              7,124±1,447    4,785±45       2,555±91     18,840±54,993   23,852±5,696
PSO-TVAC            33,340±701     36,708±159     3,746±234    35,881±173      52,917±1,665
PSO-APLA11          12,942±1,940   11,322±741     812±55       14,628±845      46,805±4,536
PSO-APLA12          7,355±1,146    5,682±364      629±41       10,046±1,363    28,341±4,391
PSO-APLA2           8,434±1,028    8,190±406      1,017±82     12,490±1,609    23,920±4,825

(n/a: the goal was never reached.)
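For clarity, the following sketch shows how the success rate and the number of function evaluations (FEs) to reach a goal can be measured for one run, reusing the pso_step and benchmark sketches above. The swarm size and velocity bound are assumptions; the paper's exact experimental setup is not reproduced here.

import numpy as np

def run_trial(f, goal, init_lo, init_hi, dim=10, n=20, max_iter=5000):
    """One PSO run; returns (reached_goal, function evaluations used)."""
    pos = np.random.uniform(init_lo, init_hi, (n, dim))  # asymmetric initialization
    vel = np.zeros((n, dim))
    pbest, pbest_val = pos.copy(), np.array([f(x) for x in pos])
    fes = n
    for _ in range(max_iter):
        gbest = pbest[np.argmin(pbest_val)]
        pos, vel = pso_step(pos, vel, pbest, gbest)      # sketch from Section 1
        vals = np.array([f(x) for x in pos])
        fes += n
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if pbest_val.min() <= goal:
            return True, fes                             # counts toward the success rate
    return False, fes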
Table 4 Average best fitness value (mean ± standard deviation) for 30-D functions, MaxIteration=5000

Algorithm           f1                  f2                    f3                  f4                  f5
SPSO, c1,2=1.49     8.55E+00±1.74E+00   2.24E-106±1.72E-106   1.62E+02±4.87E+00   1.74E+01±7.79E-01   2.07E-02±4.30E-03
SPSO, c1,2=2.0      5.16E+03±7.32E+02   9.67E+00±1.46E+00     1.11E+02±4.62E+00   6.52E+00±9.09E-01   1.07E+00±1.80E-02
SPSO, c1,2=2.05     1.99E+05±3.47E+04   3.71E+02±3.41E+01     1.76E+02±4.44E+00   9.57E+00±6.84E-01   4.60E+00±3.23E-01
PSOIW, c1,2=1.49    1.48E+01±3.06E+00   2.83E-112±5.58E-112   7.94E+01±2.55E+00   1.09E+01±1.31E+00   1.74E-02±2.85E-03
PSOIW, c1,2=2.0     4.45E+01±4.98E+00   6.86E-29±6.57E-29     3.60E+01±1.47E+00   4.87E+00±1.15E+00   1.44E-02±2.19E-03
PSOIW, c1,2=2.05    5.14E+01±5.59E+00   1.00E-20±3.97E-21     3.43E+01±1.38E+00   4.29E+00±1.12E+00   1.67E-02±2.83E-03
DCPSO               1.71E+02±2.43E+01   6.54E-04±1.29E-04     4.47E+01±2.09E+00   1.25E+01±1.13E+00   4.34E-02±5.03E-03
GPSO                7.12E+01±5.15E+00   4.90E-06±9.17E-07     5.33E-04±2.43E-04   5.51E-04±4.91E-05   3.38E-02±3.69E-03
DAPSO               8.28E+03±6.51E+02   1.11E+02±3.99E+00     1.39E+02±4.21E+00   1.72E+01±6.34E-01   1.96E+00±3.37E-02
PSO-LP              8.22E+00±2.38E+00   4.33E-97±8.51E-97     1.12E+02±4.51E+00   1.98E+01±1.29E-02   4.56E-02±7.24E-03
PSO-TVAC            1.10E+01±2.42E+00   2.01E-86±2.73E-86     7.71E+01±2.04E+00   6.34E+00±9.41E-01   3.55E-02±4.17E-03
PSO-APLA11          5.59E+01±5.90E+00   4.15E-06±6.78E-06     1.53E+01±9.23E-01   1.08E+01±1.11E+00   2.02E-02±2.64E-03
PSO-APLA12          6.05E+01±7.89E+01   1.04E+02±1.06E+02     3.18E+01±2.86E+00   1.47E+01±9.42E-01   5.44E-02±2.78E-02
PSO-APLA2           3.40E+01±3.94E+00   4.09E-18±6.35E-18     6.88E+00±3.84E-01   1.51E+01±8.54E-01   2.90E-02±3.69E-03
Table 5 Success rate and average number of FEs to reach goal for 30-D functions, MaxIteration=5000

PSO-APLA2: success rate 97.0% (f1), 100% (f2), 100% (f3), 12.7% (f4), 95.7% (f5); average FEs 57,497±4,094 (f1), 40,822±1,078 (f2), 48,014±1,579 (f3), 132,405±18,088 (f4), 36,481±1,169 (f5).
[The remaining rows of Table 5 are too garbled in the source to be recovered reliably.]
Table 6 Average best fitness value (mean ± standard deviation) for 30-D functions, MaxIteration=10,000

Algorithm           f1                  f2                    f3                  f4                  f5
SPSO, c1,2=1.49     3.45E+00±1.27E+00   1.07E-217±0.00E+00    1.58E+02±4.81E+00   1.70E+01±8.47E-01   2.52E-02±6.06E-03
SPSO, c1,2=2.0      4.89E+02±8.08E+01   6.59E-02±2.95E-02     7.11E+01±3.61E+00   1.80E+00±6.63E-01   2.69E-01±3.29E-02
SPSO, c1,2=2.05     2.98E+04±7.25E+03   6.95E+01±8.36E+00     1.35E+02±4.14E+00   5.39E+00±4.20E-01   1.62E+00±7.49E-02
PSOIW, c1,2=1.49    7.47E+00±1.99E+00   2.43E-237±0.00E+00    5.37E+01±2.13E+00   8.69E+00±1.32E+00   1.85E-02±2.86E-03
PSOIW, c1,2=2.0     2.80E+01±4.24E+00   4.52E-63±3.15E-63     2.62E+01±1.06E+00   2.00E+00±7.95E-01   1.74E-02±2.95E-03
PSOIW, c1,2=2.05    3.66E+01±4.88E+00   6.11E-45±8.38E-45     2.50E+01±9.78E-01   1.36E+00±6.73E-01   1.62E-02±2.68E-03
DCPSO               1.40E+02±2.91E+01   7.15E-05±1.72E-05     3.16E+01±2.16E+00   1.31E+01±1.37E+00   3.63E-02±5.19E-03
GPSO                5.30E+01±4.56E+00   1.49E-13±3.96E-14     7.95E-12±3.09E-12   1.08E-07±2.20E-08   2.80E-02±3.76E-03
DAPSO               1.47E+03±1.34E+02   1.89E+01±8.97E-01     8.77E+01±3.05E+00   1.12E+01±9.70E-01   1.16E+00±7.62E-03
PSO-LP              1.49E+00±7.86E-01   8.28E-241±0.00E+00    9.54E+01±5.52E+00   1.98E+01±1.12E-02   5.96E-02±1.01E-02
PSO-TVAC            2.55E+00±8.59E-01   5.45E-183±0.00E+00    5.09E+01±1.96E+00   4.64E+00±1.09E+00   3.38E-02±4.66E-03
PSO-APLA11          3.02E+01±4.36E+00   2.69E-14±4.30E-14     3.20E+00±3.81E-01   8.29E+00±1.34E+00   1.87E-02±3.17E-03
PSO-APLA12          9.84E+00±3.92E+00   3.46E+01±6.31E+01     1.16E+01±1.33E+00   1.11E+01±1.33E+00   3.30E-02±1.06E-02
PSO-APLA2           1.12E+01±2.71E+00   3.67E-42±6.26E-42     2.49E+00±3.96E-01   6.64E-01±4.30E-01   2.40E-02±4.17E-03
Number of other algorithms to which each algorithm is superior or inferior, considering all benchmark functions:

                      Dim=10, MaxIter=5000   Dim=30, MaxIter=5000   Dim=30, MaxIter=10,000
Algorithm             superior   inferior    superior   inferior    superior   inferior
SPSO, c1,2=1.49       4          8           6          5           4          6
SPSO, c1,2=2.0        3          6           2          10          2          10
SPSO, c1,2=2.05       2          10          0          12          0          12
PSOIW, c1,2=1.49      9          1           8          1           8          1
PSOIW, c1,2=2.0       9          0           12         0           9          0
PSOIW, c1,2=2.05      7          5           8          1           9          1
DCPSO                 1          11          3          8           4          8
GPSO                  4          7           6          4           6          4
DAPSO                 0          13          1          11          1          11
PSO-LP                1          7           3          8           4          8
PSO-TVAC              12         0           6          2           6          2

For the proposed algorithms, only the Dim=10 superiority count of PSO-APLA11 (superior to 10) is legible; the remaining entries are too garbled in the source to be recovered reliably.
6. Conclusion
In this paper, we proposed learning automata based algorithms to adjust the parameters of PSO. Two approaches for adjusting the values of the PSO parameters were adopted. In the first approach, the value of a parameter is selected from a finite set, while in the second approach, the value of a parameter either changes conservatively by a fixed amount or does not change at all. Experimental results show that, while reaching the same error as (and sometimes less than) SPSO, PSOIW, DCPSO, DAPSO, GPSO, PSO-LP, and PSO-TVAC on all benchmark functions, the proposed algorithms can reach the goal of the test functions faster than most of the other PSO algorithms, with the same or even better success rate. Moreover, considering all benchmark functions, the proposed algorithms are only defeated by PSO-TVAC in terms of error, yet they converge significantly faster than it. Also, when more computational power is available, the proposed algorithm PSO-APLA2 performs better than all tested PSO algorithms considering all benchmark functions. Although the proposed algorithms do not need any a priori knowledge about the search space, they require some parameter tuning, like GPSO and PSO-TVAC.
7. References
[1] J. Kennedy and R. C. Eberhart, "Particle Swarm
Optimization," in IEEE International Conference on
Neural Networks, Piscataway, NJ, 1995, pp. 1942-1948.
[2] A. Carlisle and G. Dozier, "An Off-the-Shelf PSO," in Workshop on Particle Swarm Optimization, 2001, pp. 1-6.
[3] Y. Shi and R. C. Eberhart, "Parameter Selection in Particle Swarm Optimization," in 7th International Conference on Evolutionary Programming VII, Lecture Notes in Computer Science, vol. 1447, 1998, pp. 591-600.
[4] Y. Shi and R. C. Eberhart, "A Modified Particle Swarm
Optimizer," in IEEE International Conference on
Evolutionary Computation, Anchorage, Alaska, USA,
1998, pp. 69-73.
[5] A. Chatterjee and P. Siarry, "Nonlinear Inertia Weight
Variation for Dynamic Adaptation in Particle Swarm
Optimization," Computers & Operations Research, vol.
33, no. 3, pp. 859-871, 2006.
[6] M. Clerc and J. Kennedy, "The Particle Swarm: Explosion, Stability, and Convergence in a Multidimensional Complex Space," IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58-73, 2002.
[7] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-Organizing Hierarchical Particle Swarm Optimizer with Time-Varying Acceleration Coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240-255, 2004.