In bi-level thresholding, C_0 contains all the pixels having grey level value below the threshold value, while C_1 contains all the pixels having grey level value above it. Assume that an image I can be represented by L grey levels; bi-level thresholding can be defined as in the given equation:

C_0 = \{ g(x, y) \in I : 0 \le g(x, y) \le t - 1 \}
C_1 = \{ g(x, y) \in I : t \le g(x, y) \le L - 1 \}    (1)
Multilevel thresholding uses more than one threshold value and partitions the image into more classes, as in the given equation:

C_0 = \{ g(x, y) \in I : 0 \le g(x, y) \le t_1 - 1 \}
C_1 = \{ g(x, y) \in I : t_1 \le g(x, y) \le t_2 - 1 \}
\vdots
C_n = \{ g(x, y) \in I : t_n \le g(x, y) \le L - 1 \}    (2)
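The partitions in Eqs. (1)–(2) amount to binning each pixel by the sorted threshold vector. A minimal NumPy sketch (the function name and the toy image are illustrative assumptions, not from the paper):

```python
import numpy as np

def partition(image, thresholds):
    """Assign each pixel a class index: class 0 holds grey levels in
    [0, t1 - 1], class k holds [t_k, t_{k+1} - 1], the last class
    holds [t_n, L - 1]."""
    return np.digitize(image, bins=sorted(thresholds))

img = np.array([[10, 120], [200, 60]])
labels = partition(img, [100])        # bi-level: one threshold t = 100
# pixels with grey level below 100 -> class 0, the rest -> class 1
```

With two thresholds, e.g. `partition(img, [50, 150])`, the same call yields three classes, matching Eq. (2).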
where p_i is the probability of the system being in possible state i [4]. The probability of each gray level is the relative occurrence frequency h(i) of the gray level i, normalized by the sum over all gray levels, as described in the equation:

p_i = \frac{h(i)}{\sum_{i=0}^{L-1} h(i)}, \quad i = 0, 1, \ldots, L - 1    (3)
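Eq. (3) can be computed directly from the image histogram; a short sketch with assumed names:

```python
import numpy as np

def grey_probabilities(image, L=256):
    """Eq. (3): p_i = h(i) normalized by the total histogram count."""
    h = np.bincount(image.ravel(), minlength=L)   # h(i), i = 0..L-1
    return h / h.sum()                            # p_i

img = np.array([0, 0, 1, 255], dtype=np.uint8)
p = grey_probabilities(img)
# the p_i form a probability distribution over the L grey levels
```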
For bi-level thresholding, Kapur's entropy may be described by the equation:

H_0 = -\sum_{i=0}^{t-1} \frac{p_i}{\omega_0} \ln \frac{p_i}{\omega_0}, \quad \omega_0 = \sum_{i=0}^{t-1} p_i
H_1 = -\sum_{i=t}^{L-1} \frac{p_i}{\omega_1} \ln \frac{p_i}{\omega_1}, \quad \omega_1 = \sum_{i=t}^{L-1} p_i    (4)
The threshold is optimum when the sum of the class entropies is maximum, as described in the given equation; this is the objective function:

t^* = \arg\max (H_0 + H_1)    (5)
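Eqs. (4)–(5) can be sketched as an exhaustive search over candidate thresholds; the function name and the tiny 4-level histogram are illustrative assumptions:

```python
import numpy as np

def kapur_bilevel(p):
    """Return the threshold t maximizing H_0 + H_1 of Eq. (4)-(5).
    p is the grey-level probability vector of Eq. (3)."""
    L = len(p)
    best_t, best_H = 1, -np.inf
    for t in range(1, L):
        w0, w1 = p[:t].sum(), p[t:].sum()     # class probabilities
        if w0 <= 0 or w1 <= 0:
            continue                          # skip empty classes
        q0 = p[:t][p[:t] > 0] / w0            # normalized in-class probs
        q1 = p[t:][p[t:] > 0] / w1
        H = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if H > best_H:
            best_t, best_H = t, H
    return best_t

# bimodal toy histogram over L = 4 grey levels
t_star = kapur_bilevel(np.array([0.4, 0.1, 0.1, 0.4]))
```

For this symmetric bimodal histogram the entropy sum peaks at the midpoint threshold.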
For multilevel thresholding, Kapur's entropy can be extended as described in the given equation:

H_0 = -\sum_{i=0}^{t_1-1} \frac{p_i}{\omega_0} \ln \frac{p_i}{\omega_0}, \quad \omega_0 = \sum_{i=0}^{t_1-1} p_i
H_1 = -\sum_{i=t_1}^{t_2-1} \frac{p_i}{\omega_1} \ln \frac{p_i}{\omega_1}, \quad \omega_1 = \sum_{i=t_1}^{t_2-1} p_i
H_2 = -\sum_{i=t_2}^{t_3-1} \frac{p_i}{\omega_2} \ln \frac{p_i}{\omega_2}, \quad \omega_2 = \sum_{i=t_2}^{t_3-1} p_i
\vdots
H_n = -\sum_{i=t_n}^{L-1} \frac{p_i}{\omega_n} \ln \frac{p_i}{\omega_n}, \quad \omega_n = \sum_{i=t_n}^{L-1} p_i    (6)
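Eq. (6) generalizes to an arbitrary threshold vector (t_1, ..., t_n); the sketch below scores one such vector and, for tiny L, finds the optimum by brute force with `itertools.combinations`. Function names and the toy histograms are assumptions:

```python
import itertools
import numpy as np

def kapur_entropy(p, thresholds):
    """Sum of class entropies H_0 + ... + H_n of Eq. (6) for one
    threshold vector."""
    edges = [0, *sorted(thresholds), len(p)]
    H = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            return -np.inf                    # empty class: invalid
        q = p[lo:hi][p[lo:hi] > 0] / w
        H += -(q * np.log(q)).sum()
    return H

def best_thresholds(p, n):
    """Exhaustive argmax over all n-threshold vectors (tiny L only)."""
    return max(itertools.combinations(range(1, len(p)), n),
               key=lambda t: kapur_entropy(p, t))
```

For realistic L (e.g. 256) and several thresholds this exhaustive search is exactly the combinatorial explosion that motivates the PSO, DE and ABC optimizers discussed below.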
The multilevel thresholding problem therefore consists of finding an n-dimensional vector of thresholds. The class probabilities are:

w_1 = \Pr(C_1) = \sum_{i=0}^{t-1} p_i, \quad w_2 = \Pr(C_2) = \sum_{i=t}^{L-1} p_i    (8)
606 Elsevier Publications, 2013
Sushil Kumar, Millie Pant and A. K. Ray

The means of classes C_1 and C_2 are:

u_1 = \sum_{i=0}^{t-1} \frac{i\, p_i}{w_1}, \quad u_2 = \sum_{i=t}^{L-1} \frac{i\, p_i}{w_2}    (9)
The total mean of the gray levels is denoted by u_T:

u_T = w_1 u_1 + w_2 u_2    (10)
The class variances are:

\sigma_1^2 = \sum_{i=0}^{t-1} (i - u_1)^2 \frac{p_i}{w_1}, \quad \sigma_2^2 = \sum_{i=t}^{L-1} (i - u_2)^2 \frac{p_i}{w_2}    (11)
The between-class variance is:

\sigma_B^2 = w_1 (u_1 - u_T)^2 + w_2 (u_2 - u_T)^2    (12)
The Otsu method chooses the optimal threshold t^* by maximizing the between-class variance, which is equivalent to minimizing the within-class variance, since the total variance (the sum of the within-class variance and the between-class variance) is constant for different partitions. The objective function is:

t^* = \arg\max_{1 \le t \le L-1} \sigma_B^2(t)    (13)
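Eqs. (8)–(13) can be sketched as an exhaustive scan over t; the function name and the toy histogram are illustrative assumptions:

```python
import numpy as np

def otsu_threshold(p):
    """Return t maximizing the between-class variance of Eq. (12)-(13).
    p is the grey-level probability vector of Eq. (3)."""
    L = len(p)
    levels = np.arange(L)
    u_T = (levels * p).sum()                      # total mean, Eq. (10)
    best_t, best_var = 1, -1.0
    for t in range(1, L):
        w1, w2 = p[:t].sum(), p[t:].sum()         # Eq. (8)
        if w1 <= 0 or w2 <= 0:
            continue
        u1 = (levels[:t] * p[:t]).sum() / w1      # class means, Eq. (9)
        u2 = (levels[t:] * p[t:]).sum() / w2
        var_B = w1 * (u1 - u_T) ** 2 + w2 * (u2 - u_T) ** 2   # Eq. (12)
        if var_B > best_var:
            best_t, best_var = t, var_B
    return best_t

t_star = otsu_threshold(np.array([0.4, 0.1, 0.1, 0.4]))
```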
For multilevel thresholding the between-class variance can be extended to:

\sigma_B^2(t_1, \ldots, t_m) = \sum_{j=0}^{m} w_j (u_j - u_T)^2 = w_0 (u_0 - u_T)^2 + w_1 (u_1 - u_T)^2 + \cdots + w_m (u_m - u_T)^2    (14)
5. Brief Explanation of the Algorithms
PSO, ABC and DE are all optimization algorithms that have been applied in many real-time applications. PSO, ABC and DE have few control parameters to be set. All these algorithms offer high performance and low complexity. A brief introduction to each of these algorithms follows:
5.1. Particle Swarm Optimization
Particle Swarm Optimization [5] is inspired by the social foraging behaviour of some animals, such as the flocking behaviour of birds and the schooling behaviour of fish. PSO was proposed by Eberhart and Kennedy in 1995. The goal of the algorithm is to have all the particles locate the optima in a multi-dimensional hyper-volume. This is achieved by assigning initially random positions to all particles in the space and small initial random velocities. The algorithm is executed like a simulation, advancing the position of each particle in turn based on its velocity, the best known global position in the problem space and the best position known to the particle. The objective function is sampled after each position update. Over time, through a combination of exploration and exploitation of known good positions in the search space, the particles cluster or converge together around one or several optima [6].
All of the particles are initialised at random positions by (15), and they start to move in the search space by changing their velocities and then positions:

x_{ij} = x_j^{\min} + rand(0, 1)\,(x_j^{\max} - x_j^{\min})    (15)
where x_{ij} is the position of the i-th particle in dimension j, and d is the dimension of the problem. Once a population is generated, the algorithm iterates as in Algorithm 1 in Fig. 1.
Algorithm 1 (Main steps of the PSO algorithm)
1: Initialize the population
2: repeat
3: Calculate the fitness values of the particles
4: Update the best experience of each particle
5: Choose the best particle
6: Calculate the velocities of the particles
7: Update the positions of the particles
8: until requirements are met
Fig. 1. Pseudo-code for Particle Swarm Optimization
The PSO algorithm is comprised of a collection of particles that move around the search space, influenced by their own best past location and the best past location of the whole swarm or a close neighbour. In each iteration a particle's velocity is updated using:

v_i(t+1) = v_i(t) + c_1\, rand()\,(p_i^{best} - p_i(t)) + c_2\, rand()\,(p_g^{best} - p_i(t))    (16)

where p_i(t) is the position of the i-th particle at time t, p_i^{best} is the best position known to that particle, and p_g^{best} is the best position known to the swarm. The function rand() generates a uniformly distributed random variable in [0, 1]. Variants of this update equation consider best positions within a particle's local neighbourhood at time t.
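The steps of Algorithm 1 and the update of Eq. (16) can be sketched as follows. The inertia weight w, the coefficient values, and the sphere test function are illustrative assumptions (the paper's Eq. (16) has no inertia term):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(lo, hi, (n_particles, dim))   # random init, Eq. (15)
    v = rng.uniform(-1.0, 1.0, (n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # velocity update in the spirit of Eq. (16), plus an assumed
        # inertia weight w for stability
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f                     # update best experiences
        pbest[better] = x[better]
        pbest_f[better] = fx[better]
        gbest = pbest[pbest_f.argmin()].copy()    # choose the best particle
    return gbest, pbest_f.min()

best_x, best_f = pso(lambda p: (p ** 2).sum(), dim=2)  # sphere function
```

For thresholding, the fitness f would be the negated Kapur entropy of Eq. (6) or the negated between-class variance of Eq. (14), with positions rounded to integer grey levels.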
5.2. Differential Evolution
DE is a parallel direct search method using a population of N parameter vectors for each generation. At generation G, the population is composed of the vectors x_i^G, i = 1, ..., N. If some a priori knowledge is available about the problem, a preliminary solution can be included in the initial population by adding normally distributed random deviations to the nominal solution.
For each parent parameter vector, DE generates a candidate child vector based on the difference of two other parameter vectors. For each dimension j \in [1, d], this process, referred to as scheme DE/rand/1 by Storn and Price, is given by:
x' = x_{r_3}^{G} + F \cdot (x_{r_1}^{G} - x_{r_2}^{G})    (17)
where the random integers r_1, r_2, r_3 are used as indices into the current parent population. As a result, the population size N must be greater than 3. F is a real, constant, positive scaling factor, normally F \in (0, 1); F controls the scale of the differential variation (x_{r_1}^{G} - x_{r_2}^{G}) [8], [9].
Selection of this newly generated vector is based on comparison with another DE control variable, the crossover constant CR \in [0, 1], to ensure search diversity. Some of the newly generated vectors will be used as child vectors for the next generation; the others will remain unchanged. The process of creating new candidates is described in the pseudo-code shown in Fig. 2 [8], [9], [10-12].
1: Initialization
2: Evaluation
3: repeat
4: Mutation
5: Crossover
6: Selection
7: Memorize the best solution achieved so far
8: until a termination criterion is satisfied
Fig. 2. Pseudo-code for Differential Evolution
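The steps of Fig. 2 with the DE/rand/1 mutation of Eq. (17) and binomial crossover can be sketched as below. The parameter values F = 0.5, CR = 0.9 and the sphere test function are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def de(f, dim, n_pop=20, iters=150, lo=-5.0, hi=5.0, F=0.5, CR=0.9):
    pop = rng.uniform(lo, hi, (n_pop, dim))       # initialization
    fit = np.array([f(x) for x in pop])           # evaluation
    for _ in range(iters):
        for i in range(n_pop):
            # pick three distinct indices different from the parent i
            r1, r2, r3 = rng.choice([k for k in range(n_pop) if k != i],
                                    size=3, replace=False)
            mutant = pop[r3] + F * (pop[r1] - pop[r2])    # Eq. (17)
            cross = rng.random(dim) < CR                  # binomial crossover
            cross[rng.integers(dim)] = True   # at least one gene from mutant
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            f_trial = f(trial)
            if f_trial <= fit[i]:                         # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = fit.argmin()                # memorized best of final population
    return pop[best], fit[best]

best_x, best_f = de(lambda x: (x ** 2).sum(), dim=2)   # sphere function
```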
5.3. Artificial Bee Colony
The Artificial Bee Colony (ABC) [34] algorithm is a swarm-based meta-heuristic algorithm that was introduced by Karaboga in 2005 for optimizing numerical problems. It was inspired by the intelligent foraging behavior of honey bees. The algorithm is specifically based on the model proposed by Tereshko and Loengarov (2005) for the foraging behavior of honey bee colonies. The model consists of three essential components: employed and unemployed foraging bees, and food sources. The first two components, employed and unemployed foraging bees, search for rich food sources, the third component, close to their hive. The model also defines two leading modes of behavior which are necessary for self-organizing and collective intelligence: recruitment of foragers to rich food sources, resulting in positive feedback, and abandonment of poor sources by foragers, causing negative feedback.
The main phases of the algorithm are given step-by-step in Algorithm 3.
Algorithm 3 (Main steps of the ABC algorithm)
1: Initialization
2: Evaluation
3: repeat
4: Employed Bee Phase
5: Onlooker Bee Phase
6: Scout Bee Phase
7: Memorize the best solution achieved so far
8: until a termination criterion is satisfied
Fig. 3. Pseudo-code for Artificial Bee Colony
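The employed, onlooker and scout phases of Fig. 3 can be sketched as below. The neighbour-move formula, the abandonment `limit`, and the sphere test function are illustrative assumptions drawn from the standard ABC description rather than this paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def abc(f, dim, n_sources=10, iters=200, lo=-5.0, hi=5.0, limit=20):
    x = rng.uniform(lo, hi, (n_sources, dim))     # food sources (solutions)
    fx = np.array([f(s) for s in x])
    trials = np.zeros(n_sources, dtype=int)       # stagnation counters
    gx, gf = x[fx.argmin()].copy(), fx.min()      # memorized best (step 7)

    def try_neighbour(i):
        k = rng.choice([j for j in range(n_sources) if j != i])
        d = rng.integers(dim)
        cand = x[i].copy()
        cand[d] += rng.uniform(-1, 1) * (x[i, d] - x[k, d])
        cand[d] = np.clip(cand[d], lo, hi)
        f_cand = f(cand)
        if f_cand < fx[i]:                        # greedy replacement
            x[i], fx[i], trials[i] = cand, f_cand, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):                # employed bee phase
            try_neighbour(i)
        quality = 1.0 / (1.0 + fx)                # assumes f(x) >= 0 here
        for i in rng.choice(n_sources, size=n_sources,
                            p=quality / quality.sum()):   # onlooker phase
            try_neighbour(i)
        worst = trials.argmax()                   # scout bee phase
        if trials[worst] > limit:                 # abandon a stagnant source
            x[worst] = rng.uniform(lo, hi, dim)
            fx[worst] = f(x[worst])
            trials[worst] = 0
        if fx.min() < gf:                         # memorize best solution
            gx, gf = x[fx.argmin()].copy(), fx.min()
    return gx, gf

best_x, best_f = abc(lambda s: (s ** 2).sum(), dim=2)   # sphere function
```

The positive-feedback recruitment described above appears here as the quality-proportional onlooker sampling; the negative-feedback abandonment appears as the scout reset.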
6. Experimental Results
The multilevel thresholding deals with Iinding optimal thresholds within the range that maximise a Iitness
criterion. Search space oI the problem will be