
Abstract: Particle Swarm Optimization (PSO) has become a popular choice for solving complex problems that are otherwise difficult to solve by traditional methods. One of the drawbacks of PSO is premature convergence, i.e. trapping in local optima. This paper attempts to avoid premature convergence by modifying the velocity update function in PSO. Variants based on inertia weight, chaotic operators, neighbourhood selection and self-adaptation of the inertia weight through three methods are introduced into the velocity update function to avoid convergence at local optima. When tested on five datasets for mining association rules (AR), the variants avoid premature convergence, thereby enhancing the predictive accuracy of the rules mined.

Keywords: Particle Swarm Optimization, Premature Convergence, Inertia weight, Chaotic

operator, Neighbourhood selection, Association Rule.

1. Introduction

Association rule mining is a data mining task that discovers associations among items in a

large database. Association rules have been extensively studied in the literature for their

usefulness in many application domains such as recommender systems, diagnosis decisions

support, telecommunication, intrusion detection, etc. Efficient discovery of such rules has

been a major focus in data mining research.

The Apriori algorithm is the most widely used algorithm for association rule mining. Many modifications have been made to this algorithm, focusing on improving its efficiency and accuracy. However, two parameters, minimal support and confidence, must be determined by the decision-maker or by trial and error, which makes the algorithm lack both objectiveness and efficiency. Traditional methods for rule mining, namely decision trees, Bayesian classifiers and statistical methods, are usually accurate, but their computational complexity can be very high.

Metaheuristic optimization algorithms have been a popular choice for solving complex and intricate problems that are otherwise difficult to solve by traditional methods [1]. The Particle Swarm Optimization (PSO) algorithm is an evolutionary computation technique that has become an important heuristic algorithm in recent years. PSO mimics the social behaviour of animals such as fish schooling and bird flocking. A potential solution to the problem at hand is represented by a particle (individual). Each particle adjusts its position by flying with some velocity in the search space, and this velocity depends both on the particle's own experience and on its neighbours' experience.

Despite having several attractive features, it has been observed that PSO algorithms do not

always perform as per expectations. Particle swarm optimization algorithms can easily get

trapped in local optima when solving complex multimodal problems. The success of the PSO algorithm depends to a large extent on carefully balancing two conflicting goals: exploration (diversification) and exploitation (intensification). Exploration ensures that every part of the solution domain is searched enough to provide a reliable estimate of the global optimum; exploitation concentrates the search effort around the best solutions found so far, searching their neighbourhoods to reach better solutions [2]. Accelerating convergence and avoiding local optima have become the two most important and appealing goals in PSO research.

Since PSO was proposed, investigations have been made theoretically and experimentally to

analyze and improve PSO. Clerc and Kennedy [7] explored how PSO works from a mathematical perspective, introduced a constriction factor χ to guarantee the convergence of PSO, and analyzed the trajectory of a single particle in both discrete and continuous time. Van den Bergh and Engelbrecht [8] analyzed how the inertia weight and acceleration constants affect the trajectories of particles and provided theoretical findings on the dynamics of PSO systems. These studies provided theoretical support for research on the improvement of PSO. To achieve a good balance between exploitation capability and exploration capability, neighborhood topologies designed for particles have been studied. Four neighborhood topologies comprising circles, wheels, stars and random edges were tested in [9].

Eberhart and Shi [17] proposed a Random Inertia Weight strategy and experimentally found

that this strategy increases the convergence of PSO in early iterations of the algorithm. In

Global-Local Best Inertia Weight [18], the Inertia Weight is based on the function of local

best and global best of the particles in each generation. It neither takes a constant value nor a

linearly decreasing time-varying value. Using the merits of chaotic optimization, Chaotic

Inertia Weight has been proposed by Feng et al. [19]. A novel rule-based classifier design method [10] was constructed using an improved simple swarm optimization to mine a thyroid gland dataset from the University of California Irvine repository. An elite concept was added to the proposed method to improve solution quality, and close-interval encoding was added to represent the rule structure efficiently.

Yang Shi et al. [6] propose a cellular particle swarm optimization, hybridizing cellular automata and particle swarm optimization (PSO) for function optimization. In their method, a cellular automata mechanism is integrated into the velocity update to modify the trajectories of particles and avoid trapping in local optima. To prevent PSO from premature convergence, many researchers have proposed adaptive or self-adaptive strategies, such as the adaptive variable population size method of Chen and Zhao [20], the self-adaptive method for generating particle velocities of Jin et al. [21], and the adaptive inertia weight method of Nickabadi et al. [22].

This paper analyzes various methods for avoiding premature convergence at local optima, thereby improving the predictive accuracy of the mined rules. The rest of this paper is organized as follows. Section 2 describes the framework of PSO. Section 3 discusses the variations introduced into PSO for avoiding premature convergence. Section 4 compares the results of these variants when applied to association rule mining, followed by the conclusion in Section 5.

2. Preliminaries

This section will briefly present the general backgrounds of association rule mining and the

particle swarm optimization method, respectively.

2.1 Association Rule Mining

Among data mining techniques, association rule mining is one of the most broadly discussed methods. It is capable of finding interesting associative and relative characteristics in commercial transaction records and helping decision-makers formulate business strategy.

The concept of association rule mining was first proposed by Agrawal et al. [4] in 1993. Let I = {i1, i2, ..., im} be a set of m distinct attributes, T be a transaction containing a set of items such that T ⊆ I, and D be a database of transaction records T. An association rule is an implication of the form X ⇒ Y, where X, Y ⊂ I are sets of items called itemsets and X ∩ Y = ∅. X is called the antecedent and Y the consequent; the rule means X implies Y.

An association rule must satisfy two user-specified thresholds: minimum support and minimum confidence.

Support is the percentage of transactions that contain X ∪ Y relative to the total number of records in the database. The support count does not take the quantity of an item into account.

Support(X ⇒ Y) = |{T ∈ D : X ∪ Y ⊆ T}| / |D|    (1)

Confidence is the percentage of transactions that contain X ∪ Y relative to the number of records that contain X. If this percentage exceeds the confidence threshold, an interesting association rule X ⇒ Y can be generated.

Confidence(X ⇒ Y) = |{T ∈ D : X ∪ Y ⊆ T}| / |{T ∈ D : X ⊆ T}|    (2)
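As a concrete illustration, support and confidence can be computed over a set-of-items transaction database as follows (a minimal Python sketch; the item names and the `transactions` variable are hypothetical examples, not data from the paper):

```python
def support(itemset, transactions):
    """Equation (1): fraction of transactions containing every item of the itemset."""
    items = set(itemset)
    return sum(items <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Equation (2): fraction of transactions containing X that also contain Y."""
    x, xy = set(antecedent), set(antecedent) | set(consequent)
    n_x = sum(x <= t for t in transactions)
    return (sum(xy <= t for t in transactions) / n_x) if n_x else 0.0

# Hypothetical toy database of four transactions
transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"bread", "milk", "butter"}, {"milk"}]
s = support({"bread", "milk"}, transactions)       # 2 of 4 transactions
c = confidence({"bread"}, {"milk"}, transactions)  # 2 of the 3 bread transactions
```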

2.2 Particle Swarm Optimization

Particle Swarm Optimization algorithm was inspired by the social behaviour of biological

organisms, specifically the ability of groups of some species of animals to work as a whole in

locating desirable positions in a given area, e.g. birds flocking to a food source. This seeking

behaviour is associated with that of an optimization search for solutions to non-linear

equations in a real-valued search space.

In PSO there is a set of particles, called a swarm [5], that are possible solutions for the problem. These particles move through an n-dimensional search space based on their neighbours' best positions and on their own best position. To achieve this, in each generation the position and velocity of each particle are updated based on the best position obtained by that particle and the global best position obtained from all particles in the swarm. The best particles are identified using the fitness function, which is the problem's objective function.

Each particle i, at some iteration t, has a position xi(t) and a displacement velocity vi(t). The particle's best (pBest) position pi(t) and the global best (gBest) position g(t) are stored in the associated memory. The velocity and position are updated using equations 3 and 4 respectively:

vi(t+1) = vi(t) + c1 · rand() · (pBesti − xi(t)) + c2 · rand() · (gBest − xi(t))    (3)

xi(t+1) = xi(t) + vi(t+1)    (4)

where
vi is the velocity of the ith particle
xi is the position of the ith (current) particle
i is the particle index
d is the dimension of the search space
rand() is a random number in (0, 1)
c1 is the individual (cognitive) factor
c2 is the societal (social) factor
pBest is the particle best
gBest is the global best

Both c1 and c2 are set to 2 in all the works analyzed, and the same values are adopted here. The velocity vi of each particle is clamped to a maximum velocity vmax specified by the user; vmax determines the resolution with which regions between the present position and the target position are searched.

The pseudo code for the PSO algorithm is given below:

For each particle
    Initialize particle position and velocity
End
Repeat
    For each particle
        Calculate fitness value
        If the fitness value is better than its personal best
            Set current value as the new pBest
    End
    Choose the particle with the best fitness value of all as gBest
    For each particle
        Calculate particle velocity according to equation (3)
        Update particle position according to equation (4)
    End
Until maximum number of iterations or minimum error criterion is met
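The pseudo code can be sketched as a short Python program (an illustrative sketch only, not the authors' implementation; the inertia weight w and the vmax clamp described above are included, and the objective function in the usage line is a made-up example):

```python
import random

def pso(fitness, dim, n_particles=20, n_iter=60, w=0.4, c1=2.0, c2=2.0, vmax=1.0):
    """Minimal PSO for maximization, following the pseudo code above."""
    x = [[random.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                      # personal best positions
    pbest_fit = [fitness(xi) for xi in x]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]     # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                # velocity update with inertia weight, clamped to [-vmax, vmax]
                v[i][d] = (w * v[i][d]
                           + c1 * random.random() * (pbest[i][d] - x[i][d])
                           + c2 * random.random() * (gbest[d] - x[i][d]))
                v[i][d] = max(-vmax, min(vmax, v[i][d]))
                x[i][d] += v[i][d]                   # position update (equation 4)
            f = fitness(x[i])
            if f > pbest_fit[i]:                     # update personal best
                pbest[i], pbest_fit[i] = x[i][:], f
                if f > gbest_fit:                    # update global best
                    gbest, gbest_fit = x[i][:], f
    return gbest, gbest_fit

# Example: maximize -(x - 0.5)^2, whose optimum is at x = 0.5
best, best_fit = pso(lambda p: -(p[0] - 0.5) ** 2, dim=1)
```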

The velocity and position of all the particles are set randomly. The importance of each particle is evaluated based on the fitness function, which is designed using the support and confidence of the association rule. The objective of the fitness function is maximization. The fitness function is shown in equation 5.

Fitness(k) = Support(k) × Confidence(k)    (5)

Fitness(k) is the fitness value of association rule type k, Confidence(k) is the confidence of association rule type k, and Support(k) is the actual support of association rule type k. The larger the support and confidence values, the larger the fitness value, meaning the rule is a more important association rule.

2.3 Predictive Accuracy

Predictive accuracy measures the effectiveness of the rules mined. The mined rules must have

high predictive accuracy.

Predictive Accuracy = |X & Y| / |X|    (6)

where |X & Y| is the number of records that satisfy both the antecedent X and the consequent Y, and |X| is the number of records satisfying the antecedent X.
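Equation (6) translates directly into code (a small Python sketch; the rule and record sets used here are hypothetical):

```python
def predictive_accuracy(antecedent, consequent, records):
    """Equation (6): |X & Y| / |X|, i.e. the fraction of records matching the
    antecedent X that also match the consequent Y."""
    x, y = set(antecedent), set(consequent)
    n_x = sum(x <= r for r in records)             # records satisfying X
    n_xy = sum((x | y) <= r for r in records)      # records satisfying X and Y
    return n_xy / n_x if n_x else 0.0

# Hypothetical records: the rule {a} => {b} holds in 2 of the 3 records with "a"
records = [{"a", "b"}, {"a"}, {"a", "b", "c"}, {"c"}]
pa = predictive_accuracy({"a"}, {"b"}, records)
```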

3. PSO and its Variants

Particle swarm optimization is based on swarm intelligence. PSO has no crossover or mutation calculations. Over the course of several generations, only the best particle transmits information to the other particles. The search is very fast and has strong optimization ability, so the algorithm converges easily.

The swarm behaviour varies between exploratory behaviour, that is, searching a broader

region of the search-space, and exploitative behaviour, that is, a locally oriented search so as

to get closer to a (possibly local) optimum. The PSO algorithm and its parameters must be chosen properly to balance exploration and exploitation, so as to avoid premature convergence to a local optimum while still ensuring a good rate of convergence to the global optimum. To avoid premature convergence at local optima, PSO variants are proposed and tested for mining association rules. Variations are introduced into the velocity update function to steer convergence towards global optima rather than local optima.

3.1 Particle Swarm Optimization with Inertia Weight

The inertia weight w is added to the velocity update function, and equation 3 is modified as

vi(t+1) = w · vi(t) + c1 · rand() · (pBesti − xi(t)) + c2 · rand() · (gBest − xi(t))    (7)

The inertia weight w is employed to control the impact of the previous history of velocities on the current velocity, and thus to influence the trade-off between the global (wide-ranging) and local (nearby) exploration abilities of the "flying points". A larger inertia weight facilitates global exploration (searching new areas), while a smaller inertia weight tends to facilitate local exploration, fine-tuning the current search area. Suitable selection of the inertia weight can provide a balance between global and local exploration abilities and thus require fewer iterations on average to find the optimum.

3.2 Chaotic Particle Swarm Optimization (CPSO)

The canonical PSO tends to get stuck at local optima, leading to premature convergence when applied to practical problems. To improve the global searching capability and escape from local optima, chaos is introduced into PSO [14]. Chaos is a deterministic dynamic system that is very sensitive to, and dependent on, its initial conditions and parameters. A common method of generating chaotic behaviour is based on the Zaslavskii map [15]. This map involves many variables; setting the right values for all of them increases the complexity of the system, and erroneous values can reduce its accuracy. The logistic map and the tent map are also frequently used sources of chaotic behaviour. Their drawback is that, after some iterations, the range of values they generate becomes fixed to a particular interval. To overcome this defect, the tent map disturbed by the logistic map [16] is introduced as the chaotic behaviour. The new chaotic map model is proposed with the following equation.

u(k+1) = { u(k) / 0.7,                 u(k) < 0.7
         { (10/3) · u(k) · (1 − u(k)), u(k) ≥ 0.7
                                                       (8)
v(k+1) = { v(k) / 0.7,                 v(k) < 0.7
         { (10/3) · v(k) · (1 − v(k)), v(k) ≥ 0.7

The initial values u0 and v0 are set to 0.1. Slight tuning of the initial values u0 and v0 creates a wide range of values with good distribution. The chaotic operator chaotic_operator(k) = vk is therefore designed to generate different chaotic operators by tuning u0 and v0. The value of u0 is set to two different values for generating chaotic operators 1 and 2.
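The chaotic operator can be sketched as follows. Since equation (8) is only partially reproduced in the source, this sketch uses the tent map disturbed by the logistic map as described for [16]; the exact constants (0.7 and 10/3) are an assumption and may differ from the authors' map:

```python
def chaotic_operator_sequence(v0=0.1, n=100):
    """Iterate a tent map disturbed by the logistic map (assumed form of
    equation 8): v/0.7 on [0, 0.7), and (10/3)*v*(1-v) on [0.7, 1]."""
    v, seq = v0, []
    for _ in range(n):
        v = v / 0.7 if v < 0.7 else (10.0 / 3.0) * v * (1.0 - v)
        seq.append(v)
    return seq

# Different initial values give different, well-distributed chaotic operators
op1 = chaotic_operator_sequence(v0=0.1)
op2 = chaotic_operator_sequence(v0=0.2)
```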

The velocity update equation based on chaotic PSO is given in equation 9:

vi(t+1) = w · vi(t) + c1 · chaotic_operator(1) · (pBesti − xi(t)) + c2 · chaotic_operator(2) · (gBest − xi(t))    (9)

3.3 Particle Swarm Optimization with Neighbourhood Selection (NPSO)

In the original PSO, two kinds of neighbourhoods are defined:

- In the gBest swarm, all the particles are neighbours of each other; thus, the position of the best overall particle in the swarm is used in the social term of the velocity update equation. gBest swarms converge fast, as all the particles are attracted simultaneously to the best part of the search space. However, if the global optimum is not close to the best particle, it may be impossible for the swarm to explore other areas; this means the swarm can be trapped in local optima.

- In the lBest swarm, only a specific number of particles (the neighbour count) affect the velocity of a given particle. The swarm converges more slowly but has a greater chance of locating the global optimum.

Since the local best (lBest) value encourages convergence towards the global optimum, the lBest value is selected from neighbourhood values rather than from the particle's best values so far. The neighbourhood best (lBest) selection is done as follows:

1. Calculate the distance of the current particle from the other particles using equation 10:

d(i, j) = sqrt( Σd (xid − xjd)² )    (10)

2. Find the nearest m particles as the neighbours of the current particle based on the calculated distances.

3. Choose the local optimum lBest among the neighbourhood in terms of fitness values.

The velocity and position updates of the particles are based on equations 3 and 4, with lBest taking the place of gBest. The velocity update is restricted to a maximum velocity Vmax set by the user. The termination condition is a fixed number of generations.
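The lBest selection steps can be sketched as follows (an illustrative Python sketch; the particle positions and fitness values below are hypothetical):

```python
import math

def lbest_index(positions, fitnesses, i, m):
    """Return the index of the local best for particle i: the fittest among
    its m nearest neighbours by Euclidean distance (equation 10)."""
    def dist(a, b):
        return math.sqrt(sum((ak - bk) ** 2 for ak, bk in zip(a, b)))
    others = sorted((j for j in range(len(positions)) if j != i),
                    key=lambda j: dist(positions[i], positions[j]))
    neighbours = others[:m]                      # the m nearest particles
    return max(neighbours, key=lambda j: fitnesses[j])

positions = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [0.2, 0.0]]
fitnesses = [1.0, 2.0, 10.0, 3.0]
# For particle 0 with m = 2, the neighbours are particles 1 and 3;
# particle 3 has the higher fitness, so it supplies the lBest.
best_j = lbest_index(positions, fitnesses, 0, 2)
```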

3.4 Self Adaptive Particle Swarm Optimization (SAPSO1 and SAPSO2)

The original PSO has good convergence ability but suffers from premature convergence [11] due to loss of diversity [12]. Improving the exploration ability of PSO has been an active research topic in recent years. The proposed algorithm therefore introduces self-adaptation as the primary means of tuning the two basic rules, velocity and position. Effectively, reinforcing PSO means improving the inertia weight formulae and thereby maintaining population diversity. The basic PSO, presented by Eberhart and Kennedy in 1995 [3], has no inertia weight. In 1998, Shi and Eberhart [13] first presented the concept of inertia weight by introducing a constant inertia weight.

By looking at equation (3) more closely, it can be seen that the maximum velocity allowed

actually serves as a constraint that controls the maximum global exploration ability PSO can

have. By setting a too small maximum velocity allowed, maximum global exploration ability

is limited, and PSO will always favour a local search no matter what the inertia weight is. By

setting a large maximum velocity allowed, the PSO can have a large range of exploration

ability to select by selecting the inertia weight. Since the maximum velocity allowed affects

global exploration ability indirectly and the inertia weight affects it directly, it will generally

be better to control global exploration ability through the inertia weight only. A way to do that is to allow the inertia weight itself to control exploration ability; thus the inertia weight is made self-adaptive. Two self-adaptive inertia weights are introduced for mining association rules in this paper.

To linearly decrease the inertia weight as iterations progress, the inertia weight in SAPSO1 is made adaptive through equation 11:

w(g) = wmax − (wmax − wmin) · g / G    (11)

where wmax and wmin are the maximum and minimum inertia weights, g is the generation index and G is the predefined maximum number of generations.

In SAPSO2 the inertia weight adaptation depends on the value from the previous generation, so that it decreases linearly with increasing iterations, as shown in equation 12:

w(g) = w(g − 1) − (wmax − wmin) / G    (12)

where w(g) is the inertia weight for the current generation, w(g − 1) is the inertia weight for the previous generation, wmax and wmin are the maximum and minimum inertia weights, and G is the predefined maximum number of generations.

The steps in the self-adaptive SAPSO1 and SAPSO2 are as follows.

Step 1: Initialize the position and velocity of the particles.

Step 2: Evaluate the importance of each particle using the fitness function. The objective of the fitness function is maximization. Equation 13 describes the fitness function:

Fitness(x) = Support(x) × Confidence(x) / Length(x)    (13)

where Fitness(x) is the fitness value of association rule type x, Support(x) and Confidence(x) are as described in equations 1 and 2, and Length(x) is the length of association rule type x. The larger the support and confidence factors, the greater the strength and importance of the rule.

Step 3: Get the particle best and global best for the swarm. The particle best is the best fitness attained by the individual particle up to the present iteration, and the overall best fitness attained by all the particles so far is the global best value.

Step 4: Set wmax to 0.9 and wmin to 0.4 and compute the adaptive weight for SAPSO1 (equation 11) or SAPSO2 (equation 12). Update the velocity of the particles using equation 7 with the adaptive weight.

Step 5: Update the position of the particles using equation 4.

Step 6: Terminate if the stopping condition is met.

Step 7: Otherwise, go to Step 2.
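The two adaptive inertia-weight schedules can be written compactly as follows (a minimal sketch with wmax = 0.9 and wmin = 0.4 as in Step 4; the fixed-step recurrence used for SAPSO2 is an assumed reading of equation 12):

```python
def sapso1_weight(g, G, w_max=0.9, w_min=0.4):
    """Equation (11): inertia weight decreases linearly with generation g."""
    return w_max - (w_max - w_min) * g / G

def sapso2_weight(w_prev, G, w_max=0.9, w_min=0.4):
    """Equation (12), assumed fixed-step form: subtract a constant step from
    the previous generation's weight so w falls linearly from w_max to w_min."""
    return w_prev - (w_max - w_min) / G

# Both schedules move the weight from 0.9 down to 0.4 over G generations
w = 0.9
for g in range(100):
    w = sapso2_weight(w, 100)
```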

3.5 Self Adaptive Chaotic Particle Swarm Optimization (SACPSO)

The major drawback of standard PSO lies in its premature convergence, especially when handling problems with many local optima. Based on the standard PSO, a novel chaotic operator is introduced with the expectation of keeping local diversity as well as enhancing the reliability of the algorithm. The velocity of each particle is updated by the following equation:

vi(t+1) = w · vi(t) + c1 · chaotic_operator(1) · (pBesti − xi(t)) + c2 · chaotic_operator(2) · (gBest − xi(t))    (14)

where the chaotic operators are iterative values of the chaotic mapping, generated according to equation 8. A fixed inertia weight cannot adapt the balance between global and local search: when w is too large, good solutions in the search space may be overshot, and convergence slows down. Hence an adaptive scheme, in which w is made dynamic as given in equation 11, is used.

4. Experimental Results

To test the performance of the variants of PSO for mining association rules, computational experiments were carried out on well-known benchmark datasets from the University of California Irvine (UCI) repository. The experiments were implemented in Java on a Windows platform. The datasets considered for the experiments are listed in Table 1.

Table 1. Datasets Description

Dataset                       Attributes  Instances  Attribute characteristics
Lenses                        4           24         Categorical
Car Evaluation                6           1728       Categorical, Integer
Haberman's Survival           3           310        Integer
Post-operative Patient Care   8           87         Categorical, Integer
Zoo                           16          101        Categorical, Binary, Integer

Table 2. Parameter values set for the Experiment

Dataset                       Swarm Size  C1  C2  Inertia Weight  Generations  wmax  wmin
Lenses                        24          2   2   0.2             100          0.9   0.4
Car Evaluation                700         2   2   0.4             100          0.9   0.4
Haberman's Survival           300         2   2   0.4             100          0.9   0.4
Post-operative Patient Care   87          2   2   0.3             100          0.9   0.4
Zoo                           101         2   2   0.3             100          0.9   0.4

Balancing between exploration and exploitation is carried out using the proposed variants of PSO, and the results for the five datasets are plotted in figures 1 to 5.

[Figure 1. Convergence of predictive accuracy for the Lenses dataset: predictive accuracy vs. number of iterations for PSO, WPSO, CPSO, NPSO, SAPSO1, SAPSO2 and SACPSO]

[Figure 2. Convergence of predictive accuracy for the Car Evaluation dataset: predictive accuracy vs. number of iterations for the PSO variants]

[Figure 3. Convergence of predictive accuracy for Haberman's Survival dataset: predictive accuracy vs. number of iterations for the PSO variants]

[Figure 4. Convergence of Predictive Accuracy for Post Operative Patient Care Dataset: predictive accuracy vs. number of iterations for the PSO variants]

[Figure 5. Convergence of predictive accuracy for the Zoo dataset: predictive accuracy vs. number of iterations for the PSO variants]

The self-adaptive variants SAPSO1, SAPSO2 and SACPSO give consistent performance compared to the other variants throughout the generations. The predictive accuracy achieved by applying these self-adaptive methods to association rule mining is better than that of the normal variants. The traditional particle swarm optimization method, when applied to AR mining, converges at a very early stage for all the datasets. The performance of WPSO, CPSO and NPSO varies from dataset to dataset: it is consistent for the Zoo and Post-operative Patient Care datasets but inconsistent for the Lenses, Haberman's Survival and Car Evaluation datasets.

The purpose of introducing the variants in PSO is to avoid premature convergence and in turn increase the predictive accuracy of the mined rules. The predictive accuracy of the PSO variants is plotted for all five datasets in figure 6.

[Figure 6. Predictive accuracy of the PSO variants for the five datasets]

The variants of PSO perform better than traditional PSO for mining association rules. In terms of predictive accuracy, the self-adaptive methods SAPSO1, SAPSO2 and SACPSO perform better than the normal PSO variants CPSO, WPSO and NPSO. Among the normal variants, the weighted PSO outperforms the chaotic PSO and the neighbourhood selection PSO on all the datasets.

The iteration at which maximum predictive accuracy is attained for the five datasets by applying the variants of PSO to association rule mining is shown in figure 7.

[Figure 7. Iteration at which maximum predictive accuracy is attained for each dataset and PSO variant]

The convergence rate varies from dataset to dataset for all the methods. The method in which

the convergence at local optima is avoided generates association rules with maximum

accuracy. This could be noted from figures 6 and 7.

The variants of PSO attempt to avoid convergence at local optima by balancing exploration and exploitation. The predictive accuracy achieved by the variants is also enhanced for all the datasets. The inertia weight, chaotic operators, neighborhood selection and dynamic adaptation of the inertia weight introduced into the velocity update function balance convergence towards local optima against deviation from the global optimum. The self-adaptive methods perform better than the other methods.

5. Conclusion

Association rule mining is one of the most important tasks in the data mining community because the data being generated and stored in databases is already enormous and continues to grow very fast. The Particle Swarm Optimization algorithm mimics social behaviour instead of the survival-of-the-fittest principle used in most evolutionary algorithms. This principle reduces the time complexity of PSO compared to other algorithms, and convergence at local optima also tends to reduce the time complexity, though at the cost of accuracy.

In this paper, inertia weight, chaotic operators, neighbourhood selection and two adaptive methods for the inertia weight are introduced into the velocity update function. When applied to association rule mining, these variants result in increased predictive accuracy for all five datasets used. The shift in convergence rate is achieved by avoiding convergence at local optima through the variants of PSO. This also enhances the efficiency of the rules mined.

Compared to PSO, the PSO variants perform better both in terms of predictive accuracy and in balancing exploration and exploitation. The three self-adaptive methods SAPSO1, SAPSO2 and SACPSO exhibit consistent performance for all the datasets. The inertia weight variant performs best among the other PSO variants. The behaviour of chaotic PSO and neighbourhood selection PSO varies from dataset to dataset depending on the attributes involved and their values.

Avoiding excessive exploitation during global search and testing on more datasets could be taken up for further exploration.

References

1. discovery, Advances in Evolutionary Computation, Springer-Verlag, 2001.
2. A. Torn, A. Zilinskas (Eds.), Global Optimization, Lecture Notes in Computer Science, vol. 350, Springer-Verlag, 1989.
3. J. Kennedy, R. Eberhart, Particle swarm optimization, International Conference on Neural Networks, pp. 1942-1948, 1995.
4. R. Agrawal, T. Imielinski, A. Swami, Mining association rules between sets of items in large databases, ACM SIGMOD 22 (2), pp. 207-216, 1993.
5. Tiago Sousa, Ana Paula Neves F. da Silva, Arlindo Silva, Ernesto Costa, Particle Swarm Based Data Mining Algorithms for Classification Tasks, Parallel Computing, 30, pp. 767-783, Elsevier, 2004.
6. Yang Shi, Hongcheng Liu, Liang Gao, Guohui Zhang, Cellular particle swarm optimization, Information Sciences, 181, pp. 4460-4493, 2011.
7. M. Clerc, J. Kennedy, The particle swarm: explosion, stability, and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation, pp. 58-73, 2002.
8. F. Van den Bergh, A.P. Engelbrecht, A study of particle swarm optimization particle trajectories, Information Sciences, 176, pp. 937-971, 2006.
9. J. Kennedy, Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance, Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1931-1938, 1999.
10. W.-C. Yeh, Novel swarm optimization for mining classification rules on thyroid gland data, Information Sciences, doi:10.1016/j.ins.2012.02.009, 2012.
11. Zhao Xinchao, A perturbed particle swarm algorithm for numerical optimization, Applied Soft Computing, 10 (1), pp. 119-124, 2010.
12. Yuxin Zhao, Wei Zub, Haitao Zeng, A modified particle swarm optimization via particle visual modeling analysis, Computers and Mathematics with Applications, 57, pp. 2022-2029, 2009.
13. Y. Shi, R. Eberhart, A modified particle swarm optimizer, International Conference on Evolutionary Computation Proceedings, IEEE, pp. 69-73, 1998.
14. W.J. Kong, W.J. Cheng, J.L. Ding, T.Y. Chai, A Reliable and Efficient Hybrid PSO Algorithm for Parameter Optimization of LS-SVM for Production Index Prediction Model, Third International Symposium on Computational Intelligence and Design, vol. 2, pp. 140-143, 2010.
15. Bilal Alatas, Erhan Akin, Multi-objective rule mining using a chaotic particle swarm optimization algorithm, Knowledge-Based Systems, 23, pp. 455-460, 2009.
16. Bilal Alatas, Erhan Akin, A. Bedri Ozer, Chaos embedded particle swarm optimization algorithms, Chaos, Solitons & Fractals, vol. 40, no. 4, pp. 1715-1734, 2009.
17. R.C. Eberhart, Y. Shi, Tracking and optimizing dynamic systems with particle swarms, Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, pp. 94-100, IEEE, 2001.
18. M.S. Arumugam, M.V.C. Rao, On the performance of the particle swarm optimization algorithm with various Inertia Weight variants for computing optimal control of a class of hybrid systems, Discrete Dynamics in Nature and Society, 2006.
19. Y. Feng, G.F. Teng, A.X. Wang, Y.M. Yao, Chaotic Inertia Weight in Particle Swarm Optimization, Proceedings of the Congress on Innovative Computing, Information and Control, pp. 475-481, IEEE, 2008.
20. D.B. Chen, C.X. Zhao, Particle swarm optimization with adaptive population size and its application, Applied Soft Computing, pp. 39-48, 2009.
21. Y.S. Jin, K. Joshua, H.M. Lu, Y.Z. Liang, B.K. Douglas, The landscape adaptive particle swarm optimizer, Applied Soft Computing, 8, pp. 295-304, 2008.
22. A. Nickabadi, M.M. Ebadzadeh, R. Safabakhsh, A novel particle swarm optimization algorithm with adaptive inertia weight, Applied Soft Computing, 11, pp. 3658-3670, 2011.

- Ece Master Timetable 2013Încărcat deIndira Sivakumar
- Practical Oct 13 Tea&LucnchÎncărcat deAnonymous TxPyX8c
- Bed Cp TrainingÎncărcat deAnonymous TxPyX8c
- Control Structures PgmsÎncărcat deAnonymous TxPyX8c
- X Standard Public Time Table 2013Încărcat deAnonymous TxPyX8c
- GD 2016 Welcome SpeechÎncărcat deAnonymous TxPyX8c
- Application Form for Submission of PhD ThesisÎncărcat deIndira Sivakumar
- xin-121220153447-phpapp01Încărcat deAnonymous TxPyX8c
- III rd YEARÎncărcat deAnonymous TxPyX8c
- 736E09.docÎncărcat deAnonymous TxPyX8c
- gk-and-current-affairs.pdfÎncărcat deakhilyerawar
- 2014 ANNA UNIV-BOOKS & JR..docxÎncărcat deAnonymous TxPyX8c
- Pd TeacherÎncărcat deAnonymous TxPyX8c

- The Evolution of Decision Making: How Leading Organizations Are Adopting a Data-Driven CultureÎncărcat deAser A. Abdullah
- Amol Ghorpade CV.PDFÎncărcat deMakrand
- How AI Impacts Memory Systems - Semiconductor Magazine - March 2018Încărcat derozo_aster
- ES-150p.pdfÎncărcat decleimarsantos
- UNIT 3 Torsion and SpringÎncărcat del8o8r8d8s8i8v8
- First Ever High Resolution Images Molecule ReformsÎncărcat deCarlos Bellatin
- Size-dependent structural and electronic properties of Ti clustersÎncărcat deumarlucio
- Success_at_DriveThruÎncărcat deMeido_000
- stem career day - announcement letterÎncărcat deapi-293015639
- Design and Analysis of Kinematic Couplings for Modular MachiÎncărcat degheinba
- Reasoning.pptÎncărcat debrahmbhattvyoma
- 65555555555555Încărcat deKaroPerry KillerQueen
- BiostatisticsÎncărcat deSaquib Azeem
- Flat BeltÎncărcat deYashraj
- Control Valves Sizing & SelectionÎncărcat deABVSAI
- 6. Manage-Intelligent Traffic Management System-Shrishti DeepÎncărcat deImpact Journals
- Malafouris How Things Shape the MindÎncărcat depatricio8008
- Uav BarrettÎncărcat deSteve Fortson
- Visibility analysis of Satellite based Automatic Identification System (SB-AIS)Încărcat deInternational Journal for Scientific Research and Development - IJSRD
- I+a Complete Project Charter TemplateÎncărcat deSebastían Alejandro Pérez Duque
- Gna-training-report -Navdeep singh.docxÎncărcat deBabbu Mehra
- Programacion Cientifica Scientific ProgrammingÎncărcat dePedro Elias Romero Nieto
- Generator Test C175-16 SN (WYB01557)Încărcat deDwi Mulyanti Dwimulyantishop
- Source India Event Participant DetailsÎncărcat denischal
- PM500-datasheet.pdfÎncărcat deNASSER
- Case Fact LogÎncărcat dedamarlamca
- 46332 en VEGABAR 82 Slave Sensor for Electronic Differential PressureÎncărcat deinstengg
- Case- Preparation Questions 2013Încărcat deAshima Aggarwal
- 'Holistic Philosophy' by Dr Romesh Arya Chakravarti (MD) $30.00Încărcat deDr Romesh Senewiratne-Alagaratnam Arya Chakravarti
- Monitoring Software Product Process MetricsÎncărcat deijcsis

## Mult mai mult decât documente.

Descoperiți tot ce are Scribd de oferit, inclusiv cărți și cărți audio de la editori majori.

Anulați oricând.