CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM (SPRTOS)” MARCH 26-27 2011

conventional prescription until today. We have described a relatively simple application of hybrid microelectronics in this novel field.

The purpose of this project is to develop a myoelectrically controlled commercial hand with individually controlled fingers, a multi-DOF adaptive grasping technique and a fast response. The amputee should also be able to feel the touch, temperature and force applied by the prosthetic hand, through the joint between the prosthetic and the natural hand.

LITERATURE SURVEY

In the field of the prosthetic arm, a great deal of research is in progress. Most prosthetic hands are simple grippers with one or two degrees of freedom (DOF). They use smart hooks (passive fingers and thumb), as in the Otto Bock hands [1]; with only two or three points of contact, more force is required for grasping. They are capable of gripping, as in the MyoHand VariPlus Speed [2], but not of tasks that involve the fingers, such as opening a door or turning a car key. Kenzo Akazawa [3] developed a hand that uses the dynamic behaviour of antagonist muscles (flexors and extensors), but it is similar to [1]; its response is slow, and a long training session is required for the amputee. For fast processing, Isamu [4] presented an evolvable hardware chip with a genetic algorithm. It takes less time to train the amputee, but it restricts the DOF of the hand, and the dedicated chip required for the prosthetic hand reduces the versatility offered by high-speed microcontrollers. A microprocessor- and high-torque-motor-based hand was built by Ryuhei [5]; however, it uses only two surface EMG signals, so fewer DOF are obtained and more training of the amputee is required. Another main drawback of the above approaches is that they treat the EMG signal as stationary, whereas for multi-DOF applications it must be processed as a non-stationary signal. Real-time EMG pattern recognition [6] improves hand functioning, but the signal is still not sufficient for the fingers to be controlled separately. For this purpose, surface-recorded intramuscular EMG (SRI EMG) [7] signals can be used; they are more accurate than surface EMG signals, and a prosthetic hand with individual finger movement is more similar to the natural hand. A prosthetic hand approximately similar to the natural hand was built by Bryan Christie [8] and Dean Kamen [9] with the help of DARPA, but it is connected to the nervous system, so it is not easy to use and is still at the experimental stage. Chappell [10] presented an artificial hand whose sensors allow the inclusion of automatic control loops, freeing the user from the cognitive burden of holding an object, much like the natural low-level spinal loops that automatically compensate for object movement. Force, object slip and finger positions are the variables that need to be measured in a hand designed for a physically impaired person. This shows that high-specification sensors are required when designing an arm, and that it must be designed separately for each amputee, although the latest technology provides adaptive signal processing that helps adapt to the amputee's requirements. An electrically driven locking mechanism was built by Law and Hewson [11], controlled by the electromyogram (EMG) of the surviving muscles in the upper arm; hybrid technology was used to construct the associated electronic circuitry. Many similar applications are now being considered in attempts to improve the performance of upper-limb prostheses. Development, testing and experimentation of a device for hand rehabilitation was done by Mulas et al
[12]. The system designed is intended for people who have partially lost the ability to control the hand musculature correctly, for example after a stroke or a spinal cord injury. Based on EMG signals, the system can "understand" the subject's volition to move the hand, and actuators can assist the finger movements needed to perform the task. The paper describes the device and discusses the first results obtained with a healthy volunteer. It requires a number of actuators to increase the DOF of the prosthetic arm, and the EMG processing performed is not sufficient to provide significant performance. Massa et al [13] designed a hand to augment the dexterity of traditional prosthetic hands while maintaining approximately the same dimensions and weight. This approach aims at providing enhanced grasping capabilities and natural sensory-motor coordination to the amputee by integrating miniature mechanisms, sensors, actuators and embedded control. A biomechatronic hand prototype with three fingers and a total of six independent DOFs has been designed and fabricated. That work focused on the actuator system, which is based on miniature electromagnetic motors. However, it still does not use the better EMG processing technologies that could dramatically increase the performance of the hand, and the grasping force is low because of the limited torque generated by the miniature actuators used for the hand (which are among the best available on the market in that size range). An embedded control architecture for the action and perception of an anthropomorphic 16-degree-of-freedom, 4-degree-of-actuation prosthetic hand for use by transradial amputees has also been reported. The prosthetic hand is provided with 40 structurally integrated sensors, useful both for automatic grasp control and for biofeedback delivery to the user through an appropriate interface (either neural or non-invasive). Cipriani et al [14] briefly describe the mechatronic design of the prosthesis and the set of sensors embedded in the hand, and then focus on the design of the control architecture that allows action and perception for such a sophisticated device. It is built around 8-bit microcontrollers but does not use the available signal processing techniques.

Herrera et al [15] designed and constructed a prosthesis intended to be strong and reliable while still offering control of the force exerted by the artificial hand. The design had to account for mechanical and electrical reliability and for size. These goals were targeted by using EMG in the electrical control system and a linear-motion approach in the mechanical system. The prosthetic gripper uses EMG to detect the amputee's intended movement; the control system requires an adaptation mechanism for each amputee's characteristics. Gordon et al [16] used proportional myoelectric control of a one-dimensional virtual object to investigate differences in efferent control between the proximal and distal muscles of the upper limbs. Restricted movement was allowed while recording EMG signals from elbow or wrist flexors/extensors during isometric contractions. Subjects used this proportional EMG control to move the virtual object through two tracking tasks, one with a static target and one with a moving target (a sine wave). Eriksson et al [17] studied the feasibility of neural networks for categorising patterns of EMG signals; the signals recorded by surface electrodes are sufficient to control the movements of a virtual prosthesis, and the presented method offers great potential for the development of future hand prostheses. A signal processing system based on a RAM used as a look-up table (LUT) has been presented by Torresen et al [18]. It provides a fast response besides being compact in size, and several algorithms for programming it have been proposed. For
the given data set used in their experiments, the time needed to program the RAM was approximately equal to the time needed to train a feed-forward neural network solving the same problem in the best possible way; the main advantage of the scheme, however, is its fast runtime speed. Ferguson [19] described the development of a system that allows complex grasp shapes to be identified from natural muscle movement. The system can be extended to a general device controller whose input is obtained from the forearm muscles, measured using surface electrodes, and it has the advantage of being less fatiguing than traditional input devices. V. Tawiwat et al. [20] applied a mouse roller to a gripper, so that increasing the efficiency of the gripper can lead to material handling without slipping. The optimisation principle is used to develop material handling with a signal that checks whether the roller rotates: if the roller rotates, the material is slipping, and the gripper slides over the material until the roller no longer rotates. In an attempt to improve the functionality of a prosthetic hand, a new fingertip has been developed that incorporates sensors to measure temperature and grip force and to detect the onset of object slip from the hand. The sensors were implemented using thick-film printing technology and exploit the piezoresistive characteristics of commercially available screen-printing resistor pastes and the piezoelectric properties of proprietary lead-zirconate-titanate (PZT) formulated pastes. The force sensor exhibits a highly linear response to applied force, and its response is extremely stable with temperature. The ability of the piezoelectric PZT vibration sensor to detect small vibrations of the cantilever, indicative of object slip, has also been demonstrated [21]. The externally powered upper-extremity prosthesis has been considered as a system in which the components necessary for designing a better prosthetic arm are divided into four subsystems: input, effector, feedback and support. Current research is reviewed in terms of these subsystems; each performs its own task, but they are related to each other and together make up a prosthetic upper extremity that provides movement to the amputee [22]. Hands such as the All Electric Prosthetic Hand use a series of gears to transmit the motion of motors housed in the forearm to the relevant fingers [23]. Other designs have the actuators transmitting power directly to the joint; an example is the Anthroform Arm, which uses pneumatic 'muscles', mimicking the muscles of the human arm, connected directly to the 'bones' they move [24]. Shape memory alloy (SMA) wires are also used, both to provide the force and to transmit the motion; SMA wires contract when heated and return to their initial shape when cooled [25]. This method of actuation is used in the Shape Memory Alloy Activated Hand constructed by DeLaurentis et al [26].

CONCLUSIONS

A lot of work has been done on developing prosthetic arms, but more precise work is still possible. Grasping techniques can be enhanced so that amputees can work more effectively, and the response time of the available arms is not yet sufficient. The available prosthetic arms are still not comparable with the natural hand in terms of DOF. The main limitations are the small space available for motors and their low size-to-torque ratio; power consumption also rises as the number of motors increases. The lack of enhanced signal processing techniques in artificial hands and their associated controllers has limited their functionality
and technological progress. Still, much can be done to improve the available prosthetic arms. Using hydraulics in place of motors can provide a large amount of controlled force in less space; with this control of the force applied by the hand on the object, amputees can handle soft or brittle items easily. By effective use of hydraulics, the DOF can also be increased without increased power consumption. The use of adaptive signal processing techniques can likewise improve the overall performance of the artificial hand.

REFERENCES

[1] Available at: http://www.ottobock.com.au/cps/rde/xchg/ob_au_en/hs.xsl/384.html

[2] Available at: http://www.ottobock.com.au/cps/rde/xchg/ob_au_en/hs.xsl/19932.html

[3] Kenzo Akazawa, Ryuhei Okuno and Masaki Yoshida, "Biomimetic EMG-prosthetic-hand", 18th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Amsterdam, pp. 535-536.

[4] Isamu Kajitani and Tsukuba Masahiro Murakawa, "An Evolvable Hardware Chip for Prosthetic Hand Controller", Microelectronics for Neural, Fuzzy and Bio-Inspired Systems, 1999 (MicroNeuro '99), Proceedings of the Seventh International Conference, pp. 179-186.

[5] Ryuhei Okuno, Masahiro Fujikawa, Masaki Yoshida and Kenzo Akazawa, "Biomimetic hand prosthesis with easily programmable microprocessor and high torque motor", Engineering in Medicine and Biology Society, 2003: Proceedings of the 25th Annual International Conference of the IEEE, vol. 2, pp. 1674-1677.

[6] Jun-Uk Chu, Inhyuk Moon and Mu-Seong Mun, "A Supervised Feature Projection for Real-Time Multifunction Myoelectric Hand Control", International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 282-290.

[7] Nikolay S. Stoykov, Madeleine M. Lowery, Charles J. Heckman, Allen Taflove and Todd A. Kuiken, "Recording Intramuscular EMG Signals Using Surface Electrodes", Proceedings of the 2005 IEEE 9th International Conference on Rehabilitation Robotics, June 28 - July 1, 2005, Chicago, IL, USA, pp. 291-294.

[8] Available at: http://spectrum.ieee.org/robotics/medical-robots/winner-the-revolution-will-be-prosthetized/2

[9] Available at: http://spectrum.ieee.org/biomedical/bionics/dean-kamens-luke-arm-prosthesis-readies-for-clinical-trials

[10] P. H. Chappell, "A fist full of sensors", Journal of Physics: Conference Series, vol. 15, 2005, pp. 7-12.

[11] H. T. Law and J. J. Hewson, "An Electromyographically Controlled Elbow Locking Mechanism for an Upper Limb Prosthesis", Electrocomponent Science and Technology, vol. 10, 1983, pp. 87-93.

[12] Marcello Mulas, Michele Folgheraiter and Giuseppina Gini, "An EMG-controlled Exoskeleton for Hand Rehabilitation", IEEE 9th International Conference on Rehabilitation Robotics, Chicago, IL, USA, June 28 - July 1, 2005, pp. 371-374.

[13] M. C. Carrozza, S. Micera, R. Lazzarini, M. Zecca and P. Dario,
http://www.cronos.rutgers.edu/~mavro/papers/act2000.pdf
J(i, j, k, l) = J(i, j, k, l) + J_cc(θ^i(j, k, l), P(j, k, l))   (2)

J(θ^i(j+1, k, l)) < J(θ^i(j, k, l))
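The chemotaxis (tumble-and-swim), reproduction and elimination-dispersal steps described below can be sketched in Python. This is a minimal illustration only: the sphere cost function, the population size and all parameter values (S, Nc, Ns, Nre, Ned, C, Ped) are assumptions for the example, not settings from the cited papers.

```python
import numpy as np

# Minimal sketch of bacterial foraging (BFO): chemotaxis with tumble-and-swim,
# reproduction of the healthiest half, and random elimination-dispersal.
rng = np.random.default_rng(0)

def J(theta):
    return float(np.sum(theta ** 2))  # nutrient cost: lower is better

S, D = 10, 2                     # population size, problem dimension
Nc, Ns, Nre, Ned = 20, 4, 3, 2   # chemotaxis, swim, reproduction, dispersal steps
C, Ped = 0.1, 0.25               # step size C(i) and dispersal probability
pop = rng.uniform(-5, 5, (S, D))

for l in range(Ned):                         # elimination-dispersal loop
    for k in range(Nre):                     # reproduction loop
        health = np.zeros(S)
        for j in range(Nc):                  # chemotaxis loop
            for i in range(S):
                Jlast = J(pop[i])
                phi = rng.uniform(-1, 1, D)
                phi /= np.linalg.norm(phi)   # tumble: random unit direction
                pop[i] += C * phi
                m = 0
                while m < Ns and J(pop[i]) < Jlast:  # swim while improving
                    Jlast = J(pop[i])
                    pop[i] += C * phi
                    m += 1
                health[i] += Jlast           # accumulate cost over lifetime
        # the healthiest (lowest accumulated cost) half splits; the rest die
        order = np.argsort(health)
        pop = np.vstack([pop[order[: S // 2]]] * 2).copy()
    # disperse some bacteria to random locations in the search domain
    scatter = rng.random(S) < Ped
    pop[scatter] = rng.uniform(-5, 5, (int(scatter.sum()), D))

best = min(J(p) for p in pop)
print(best)
```

The greedy swim extension is what distinguishes chemotaxis from a plain random walk: a tumble step is always taken, but it is prolonged only while the cost keeps falling.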
This results in a step of size C(i) in the direction of the tumble for bacterium i.
[f] Compute J(i, j+1, k, l), and let J(i, j+1, k, l) = J(i, j+1, k, l) + J_cc(θ^i(j+1, k, l), P(j+1, k, l)).
[g] Swim:
i) Let m = 0 (counter for swim length).
ii) While m < Ns (if have not climbed down too long):
• Let m = m + 1.
• If J(i, j+1, k, l) < Jlast (if doing better), let Jlast = J(i, j+1, k, l), let θ^i(j+1, k, l) = θ^i(j+1, k, l) + C(i)φ(j) (another step of size C(i) in the direction of the same tumble, with φ(j) the tumble direction unit vector), and use this θ^i(j+1, k, l) to compute the new J(i, j+1, k, l) as in [f].
• Else, let m = Ns. This is the end of the while statement.
[h] Go to the next bacterium (i+1) if i ≠ N (i.e., go to [b] to process the next bacterium).
5. If j < Nc, go to Step 3. In this case, continue chemotaxis, since the life of the bacteria is not over.
6. Reproduction:
[a] For the given k and l, and for each i = 1, 2, ..., N, let J_health(i) = Σ_{j=1}^{Nc+1} J(i, j, k, l) be the health of bacterium i (a measure of how many nutrients it got over its lifetime and how successful it was at avoiding noxious substances). Sort the bacteria and the chemotactic parameters C(i) in order of ascending cost J_health (higher cost means lower health).
[b] The Sr bacteria with the highest J_health values die, and the remaining Sr bacteria with the best values split (the copies that are made are placed at the same location as their parent).
7. If k < Nre, go to Step 3. In this case, we have not reached the number of specified reproduction steps, so we start the next generation of the chemotactic loop.
8. Elimination-dispersal: For i = 1, 2, ..., N, with probability Ped, eliminate and disperse each bacterium; this keeps the number of bacteria in the population constant. If a bacterium is eliminated, simply disperse one to a random location in the optimization domain. If l < Ned, go to Step 2; otherwise, end.

2.3. ADVANCEMENTS IN BFO AND ITS APPLICATIONS AND RESEARCH AREAS:

Vast applications have been found in which BFO has shown remarkable results and has been modified for different problems according to the objective function. Initial applications of evolutionary algorithms were meant for static optimization problems, but in recent years another member of the EA family [5], the bacterial foraging algorithm (BFA), with its self-adaptability of individuals in group searching activities, has attracted a great deal of interest, including for dynamic problems. W. J. Tang and Q. H. Wu have contributed by proposing the DBFA, which is especially designed for dynamic optimization problems, combining the advantage of local search in BFA with a new selection scheme for generating diversity. They used the moving peaks benchmark (MPB) [6] as the test bed for their experiments. The performance of the DBFA is evaluated in two ways. The first concerns the convergence of the algorithm under random periodical changes in an environment, divided into three ranges from a low probability of change to a higher one. The second tests a set of combinations of the algorithm parameters that are largely related to the accuracy and stability of the algorithm. All results are compared with the existing BFA [1] and show the effectiveness of DBFA for solving dynamic optimization problems. It is worth mentioning that the diversity of DBFA changes after each chemotactic process, rather than through the dispersion adopted by the BFA after several generations. The DBFA utilizes not only local search but also a flexible selection scheme to maintain a suitable diversity during the whole evolutionary process. It outperforms BFA in almost all dynamic environments; the results are shown in [5]. They have further given a solution for global optimization in [7].

The novel BSA has been proposed for global optimization. In this algorithm, adaptive tumble and run operators have been developed and incorporated, based on an understanding of the details of the bacterial chemotactic process. The operators involve two parts: the first concerns the selection of tumble and run actions, based on probabilities that are updated during the searching process; the second is related to the length of the run steps, which is made adaptive and independent of any knowledge of the optimization problem. These two parts are used to balance the global and local searching capabilities of BSA. Beyond the tumble and run operators, attraction and mutation operations have also been developed. A. Abraham, A. Biswas, S. Dasgupta and S. Das have shown [8] that the major driving force of the Bacterial Foraging Optimization Algorithm (BFOA) is the reproduction phenomenon of virtual bacteria, each of which models one trial solution of the optimization problem.

BFO and PSO have been used in combination, exploiting the merits [9] of the two bio-inspired algorithms to improve convergence for high-dimensional function optimization. It is assumed
that the bacteria have an ability similar to that of birds to follow the best bacterium (the bacterium with the best position in the previous chemotactic process) in the optimization domain. The position of each bacterium after every move (tumble or run) is updated according to (3):

θ^i(j+1, k, l) = θ^i(j+1, k, l) + C_cc(θ^b(j, k, l) − θ^i(j, k, l)),  if J_i(j+1, k, l) > J_min(j, k, l)   (3)

where θ^b(j, k, l) and J_min(j, k, l) are the position and fitness value of the best bacterium in the previous chemotactic process, respectively, and C_cc is a new parameter, called the attraction factor, which adjusts the bacterial trajectory according to the location of the best bacterium.

Particle swarm optimization is a high-performance optimizer that is very easy to understand and implement. It is similar in some ways to genetic algorithms or evolutionary algorithms, but requires less computational bookkeeping and generally only a few lines of code [10]. Particle swarm optimization originated in studies of synchronous bird flocking and fish schooling, when the investigators realized that their simulation algorithms possessed an optimizing characteristic [11]-[13]. As the particles traverse the problem hyperspace, each particle remembers its own personal best position that it has ever found, called its local best. Each particle also knows the best position found by any particle in the swarm, called the global best. Overshoot and undershoot combined with stochastic adjustment explore regions throughout the problem hyperspace, eventually settling down near a good solution. This process can be visualized as a dynamical system, although the behaviour is extraordinarily complex even when only a single particle is considered with extremely simplified update rules. This new optimization technique holds much promise, and electromagnetic researchers are just beginning to explore its capabilities.

3.1. CLASSICAL ALGORITHM:

The particle swarm concept originated as a simulation of a simplified social system. The original intent was to graphically simulate the graceful but unpredictable choreography of a bird flock. Initial simulations were modified to incorporate nearest-neighbour velocity matching, eliminate ancillary variables, and incorporate multidimensional search and acceleration by distance (Kennedy and Eberhart 1995; Eberhart and Kennedy 1995). At some point in the evolution of the algorithm, it was realized that the conceptual model was, in fact, an optimizer. Through a process of trial and error, a number of parameters extraneous to optimization were eliminated from the algorithm, resulting in the very simple original implementation (Eberhart, Simpson and Dobbins 1996).

PSO emulates swarm behaviour, and the individuals represent points in the D-dimensional search space; a particle represents a potential solution. The velocity Vid and position Xid of the dth dimension of the ith particle are updated as in (4) and (5):

Vid ← Vid + C1·rand1id·(pbestid − Xid) + C2·rand2id·(gbestd − Xid)   (4)

Xid ← Xid + Vid   (5)

where Xi = (Xi1, Xi2, ..., XiD) is the position of the ith particle and Vi = (Vi1, Vi2, ..., ViD) represents the velocity of particle i; pbesti = (pbesti1, pbesti2, ..., pbestiD) is the best previous position yielding the best fitness value for the ith particle, and gbest = (gbest1, gbest2, ..., gbestD) is the best position discovered by the whole population [14]. C1 and C2 are the acceleration constants reflecting the weighting of the stochastic acceleration terms that pull each particle toward the pbest and gbest positions, respectively. rand1id and rand2id are two random numbers in the range [0, 1].

3.2. PSEUDOCODE:

1: Generate the initial swarm by randomly generating the position and velocity of each particle;
2: Evaluate the fitness of each particle;
3: repeat
4:   for each particle i do
5:     Update particle i according to (4) and (5);
6:     if f(xi) < f(xpbest_i) then
7:       xpbest_i := xi;
8:       if f(xi) < f(xgbest) then
9:         xgbest := xi
10:      end if
11:    end if
12:  end for
13: until the stop criterion is satisfied

3.3. ADVANCEMENTS IN PSO AND ITS APPLICATIONS AND RESEARCH AREAS:

APPSO (agent-based parallel PSO) is based on two types of agents: one coordination agent and several swarm agents. The swarm is composed of various
sub-swarms, one for each swarm agent. The coordination agent has administrative and managing duties; all the calculations are done by the swarm agents (see Figure 2). In order to benefit from the large body of knowledge and insight achieved in research on sequential PSO, it is important to modify the swarm's behavior as little as possible. The inevitable changes to the algorithm due to the parallelization should also lead to positive effects.

Particle swarm optimization has been used both for approaches that apply across a wide range of applications and for specific applications focused on a specific requirement. In this brief section, we cannot describe all of particle swarm's applications, or describe any single application in detail; rather, we summarize a small sample. Generally speaking, particle swarm optimization, like the other evolutionary computation algorithms, can be applied to most optimization problems and problems that can be converted to optimization problems. Among the application areas with the most potential are system design, multi-objective optimization, classification, pattern recognition, biological system modelling, scheduling (planning), signal processing, games, robotic applications, decision making, and simulation and identification. Examples include fuzzy controller design, job shop scheduling, real-time robot path planning, image segmentation, EEG signal simulation, speaker verification, time-frequency analysis, modelling of the spread of antibiotic resistance, burn diagnosing, gesture recognition and automatic target detection, to name a few.
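The velocity and position updates (4)-(5) and the pseudocode of Section 3.2 can be sketched in Python. The sphere objective and the parameter choices (C1 = C2 = 2.0, no inertia weight or velocity clamping) are assumptions for the example, not values prescribed by the cited works.

```python
import random

# Minimal PSO sketch following updates (4)-(5) and the Section 3.2 pseudocode.
random.seed(1)

def f(x):
    return sum(v * v for v in x)  # sphere objective: minimum at the origin

D, S, ITERS = 2, 15, 100          # dimensions, swarm size, iterations
C1 = C2 = 2.0                     # acceleration constants (illustrative)
X = [[random.uniform(-5, 5) for _ in range(D)] for _ in range(S)]
V = [[0.0] * D for _ in range(S)]
pbest = [x[:] for x in X]         # each particle's personal best
gbest = min(pbest, key=f)[:]      # best position in the whole swarm

for _ in range(ITERS):
    for i in range(S):
        for d in range(D):
            r1, r2 = random.random(), random.random()
            V[i][d] += C1 * r1 * (pbest[i][d] - X[i][d]) \
                     + C2 * r2 * (gbest[d] - X[i][d])      # eq. (4)
            X[i][d] += V[i][d]                              # eq. (5)
        if f(X[i]) < f(pbest[i]):
            pbest[i] = X[i][:]
            if f(X[i]) < f(gbest):
                gbest = X[i][:]

print(f(gbest))
```

Because gbest is only ever replaced by a strictly better position, f(gbest) decreases monotonically even when individual particles overshoot, which is the exploration behaviour the text describes.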
4. FIREFLY OPTIMIZATION
[8] A. Abraham, A. Biswas, S. Dasgupta and Swagatam Das, "Analysis of Reproduction Operator in Bacterial Foraging Optimization Algorithm", Proc. of IEEE Congress on Evolutionary Computation, pp. 1476-1483, 2008.

[9] Ying Chu, Hua Mi, Huilian Liao, Zhen Ji and Q. H. Wu, "A Fast Bacterial Swarming Algorithm For High-dimensional Function Optimization", Proc. of IEEE Congress on Evolutionary Computation, pp. 3135-3140, 2008.

[10] Daniel W. Boeringer and Douglas H. Werner, "Particle Swarm Optimization Versus Genetic Algorithms for Phased Array Synthesis", IEEE Trans. on Antennas and Propagation, vol. 52, no. 3, pp. 771-779, March 2004.

[11] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory", in Proc. 6th Int. Symp. Micro Machine and Human Science (MHS '95), pp. 39-43, 1995.

[12] J. Kennedy and R. Eberhart, "Particle swarm optimization", in Proc. IEEE Int. Conf. Neural Networks, vol. 4, pp. 1942-1948, 1995.

[13] J. J. Liang and A. K. Qin, "Comprehensive Learning Particle Swarm Optimizer for Global Optimization of Multimodal Functions", IEEE Trans. on Evolutionary Computation, vol. 10, no. 3, June 2006.

[14] Li Zhi-jie, Liu Xiang-dong, Duan Xiao-dong and Wang Cun-rui, "An Improved Particle Swarm Algorithm for Search Optimization", Proc. of IEEE Global Congress on Intelligent System, pp. 154-158, 2009.

[15] Michael Breza and Julie McCann, "Can Fireflies Gossip and Flock? The possibility of combining well-known bio-inspired algorithms to manage multiple global parameters in wireless sensor networks without centralised control".

[16] Ming-Huwi Horng and Ting-Wei Jiang, "Multilevel Image Thresholding Selection based on the Firefly Algorithm", Symposia and Workshops on Ubiquitous, Autonomic and Trusted Computing, pp. 58-63, 2010.

[17] Lin Cui and Hongpeng Wang, "Reachback Firefly Synchronicity with Late Sensitivity Window in Wireless Sensor Networks", Proc. of IEEE Ninth International Conference on Hybrid Intelligent Systems, pp. 451-456, 2009.
BIOMETRIC AUTHENTICATION USING IMAGE PROCESSING TOOLS
of the decomposed image. These three detail images are superimposed in [8] to yield a composite image. It may be noted that as the decomposition level increases, the size of the detail images decreases. The results of authentication correspond to the composite image.

From each detail image, an energy feature is calculated as [1]:

E_i^d = Σ_{x=1}^{M} Σ_{y=1}^{N} (S_i(x, y))^2,   i = 1, 2, ..., 5   (1)

where i is the decomposition level, and H_i, V_i and D_i are the detail images in the horizontal, vertical and diagonal directions, respectively. Other features are under investigation. In [8] all the detail images are superimposed and then the energy is calculated.

Fig. 3: Two-dimensional one-level DWT decomposition

4. Results and implementation

In [8] a 100% recognition score is obtained using the fuzzy feature. A simple Euclidean distance measure is used to find the recognition rate. The database used in [8] was created in the biometrics lab of IIT Delhi. The ROI is divided into non-overlapping windows, features are calculated from these windows, and the size of the window is varied to obtain the recognition score. Given two data sets of features corresponding to the training and testing samples, a matching algorithm determines the degree of similarity between them; a Euclidean distance is adopted as the measure of dissimilarity for palmprint matching using both the wavelet and the fuzzy features. The wavelet feature in [8] is applied to the PolyU database [15] with 50 users and 5 images per user (250 images in total); 4 images per user are taken as training data and 1 image as testing data. A recognition score of 82% is obtained, with the Euclidean distance measure as the classifier. The ROC plot is shown in Fig. 5.
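A minimal sketch of the energy feature (1) computed on the detail sub-bands of a one-level 2-D Haar DWT, with a Euclidean-distance match score, is given below. The Haar averaging/differencing filters, the 64x64 image size and the random "ROIs" are illustrative assumptions; the cited system uses real palmprint ROIs and deeper decompositions.

```python
import numpy as np

# One-level 2-D Haar DWT via row/column pairwise averages and differences,
# then the energy E = sum over x,y of S(x, y)^2 of each detail sub-band.
def haar2d(img):
    a = img.astype(float)
    L  = (a[:, 0::2] + a[:, 1::2]) / 2.0   # column low-pass
    Hc = (a[:, 0::2] - a[:, 1::2]) / 2.0   # column high-pass
    LL = (L[0::2]  + L[1::2])  / 2.0       # approximation
    V  = (L[0::2]  - L[1::2])  / 2.0       # vertical detail
    H  = (Hc[0::2] + Hc[1::2]) / 2.0       # horizontal detail
    D  = (Hc[0::2] - Hc[1::2]) / 2.0       # diagonal detail
    return LL, H, V, D

def energy(sub):
    return float(np.sum(sub ** 2))         # eq. (1) for one sub-band

def features(roi):
    # energies of the H, V, D detail images form the feature vector
    return np.array([energy(s) for s in haar2d(roi)[1:]])

rng = np.random.default_rng(0)
roi_a = rng.integers(0, 256, (64, 64))     # stand-ins for palmprint ROIs
roi_b = rng.integers(0, 256, (64, 64))

# Euclidean distance as the dissimilarity measure: smaller = more similar
dist = float(np.linalg.norm(features(roi_a) - features(roi_b)))
print(dist)
```

In a real matcher, the distance of a test feature vector to each enrolled template would be thresholded (or the nearest template taken) to produce the recognition decision.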
ABSTRACT
Image quality and utility become crucial issues for engineers, scientists, doctors, patients, insurance companies and lawyers whenever there are changes in the technology by which medical images are acquired. Examples of such changes include analog-to-digital conversion, lossy compression for transmission and storage, image enhancement, and computer-aided methods for diagnosing disease in medical images. Editing an image so that it is more suitable for a specific application than the original image is termed image enhancement. An image is defined as a two-dimensional function f(x, y), where x and y are spatial coordinates, that bears information and can be generated in any form, such as visible light or X-rays. X-rays are the oldest source of electromagnetic radiation used for medical imaging. Medical image enhancement methods are used, like all other methods and algorithms in image processing, as a chain of subsequent edits aimed at achieving a suitable result. Improving one function in the chain is only useful if the end result is really improved, and that does not depend solely on that particular function; it also depends on the quality of the initial image. In this paper we compare different medical-image enhancement techniques in the spatial domain and also present a statistical analysis.
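As a small illustration of the kind of spatial-domain enhancement compared in this paper, a power-law (gamma) transform s = c * r^gamma and the mean/standard-deviation statistics used to evaluate it can be sketched as follows. The synthetic "radiograph" and the constants c and gamma are assumptions for the example.

```python
import numpy as np

# Power-law (gamma) enhancement in the spatial domain: s = c * r**gamma,
# with gray levels normalised to [0, 1] before the transform.
def gamma_transform(img, c=1.0, gamma=0.5):
    r = img.astype(float) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
# synthetic dark radiograph: gray values confined to the lower half-range
xray = rng.integers(0, 128, (128, 128), dtype=np.uint8)

enhanced = gamma_transform(xray, c=1.0, gamma=0.5)
# gamma < 1 expands the dark gray levels, so the mean (brightness) rises
print(xray.mean(), enhanced.mean())
```

Comparing the mean and standard deviation of the original and enhanced histograms, as done in the paper's tables, quantifies the brightness and contrast changes produced by each choice of c and gamma.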
images are not affected as much as with the global histogram method. If the window size increases, a proportionately larger region is chosen out of the entire image region, causing the mean values to decrease. If the value of the constant k increases, a more uniform distribution is seen in the histogram, indicating that more equalization is achieved.

It is observed that in the log transformation method, for different values of c, if the gray value increases then the statistical values (mean, standard deviation) of the enhanced image also increase, which in turn increases the image brightness. Similarly, in the power law transformation using fixed values of gamma and different values of c, if the gray value is increased, the statistical values also increase proportionately. In the power law transformation with gamma correction approach, if the value of gamma increases, the mean of the enhanced image decreases while its standard deviation increases; as a result the brightness of the enhanced image is decreased.

Similar observations have been made for the piecewise contrast stretching method: the enhanced image is dark for gamma = 0.1, and contrast increases as the value of gamma increases from gamma = 0.2 onwards.

Tables 1, 2 and 3 show the comparison of the different methods of radiographic image enhancement.

Because of the flexibility of changing the window size and the equalization factor k, an arbitrary region can be enhanced to the extent required in the local histogram equalization method. Therefore this method is also sometimes referred to as an adaptive method. Among the present enhancement approaches, power law transformation using gamma correction performs much better than all the other methods addressed. This conclusion is based on the change in standard deviation and mean value of the histogram of the enhanced image.

Figures 4, 5 and 6 show plots of the mean and standard deviation values of the four methods for the Leg1, Hand1 and Chest1 images. In the bar graphs the global histogram method is not shown.

Conclusion

In this paper we have compared different techniques of image enhancement in the spatial domain. They have been evaluated in terms of standard deviation and mean values as statistical measures. As a medical image has only a finite number of gray scales, an ideal equalization is not possible. Image enhancement techniques can be improved if the enhancement criteria are application dependent and can be specified precisely.
[Figures: original image and its log-transformed version.]
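The log and power-law (gamma) transformations evaluated in this comparison can be sketched as follows. This is a minimal NumPy sketch, not the authors' code, assuming a grayscale image normalized to [0, 1] and illustrative constants c and gamma:

```python
import numpy as np

def log_transform(img, c=1.0):
    # s = c * log(1 + r): expands dark intensities, compresses bright ones
    return c * np.log1p(img)

def power_law(img, c=1.0, gamma=0.5):
    # s = c * r^gamma: gamma < 1 brightens, gamma > 1 darkens
    return c * np.power(img, gamma)

def stats(img):
    # mean and standard deviation, the two statistical measures compared
    return float(np.mean(img)), float(np.std(img))
```

For an image in [0, 1], gamma = 0.5 raises every pixel value, so the mean of the enhanced image increases, consistent with the reported link between the statistical values and brightness.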
[Figures: enhanced Chest, Hand and Leg images.]
(BRIA2003, MISR1991, MUHA2002), Load Balancing (JACQ2005, DANI2005, SAEE2005, SHU2009) and Modeling (RENE2002, JAVI2000, KENT2002). Tasks are allocated to the various processors of the distributed network in such a way that the overall processing cost of the network is minimized. It is well known that there are more tasks than processors in the network.

OBJECTIVE

In the Distributed Processing Environment (DPE), it is a common problem to allocate tasks where the number of tasks is greater than the number of processors. The objective of the present research paper is to enhance the performance of distributed networks by proper utilization of their processors as well as proper allocation of tasks. In the present research paper the allocation of tasks to processors is static in nature. As the performance is measured here in terms of processing cost, we have to minimize the processing cost to obtain the best performance of the processors. To overcome the problem we have designed an intelligent algorithm for task allocation.

TECHNIQUE

In order to evaluate the overall optimal processing cost of a distributed network, we have chosen the problem of a set P = {p1, p2, p3, ..., pn} of n processors and a set T = {t1, t2, t3, ..., tm} of m tasks, where m > n. The processing cost of each task on each and every processor is known and is mentioned in the Processing Cost Matrix of order m x n. After making a matrix of the same order arranged in ascending order of its row sums and column sums, we apply the algorithm of the assignment problem on it. For each processor we evaluate the overall allocation of each task, allocating each task to the processor which has the minimum processing cost. Finally we compute the total processing cost by adding the processing costs of the tasks assigned to each processor.

ALGORITHM

Start Algorithm
    Read the number of tasks in m
    Read the number of processors in n
    For i = 1 to m
        For j = 1 to n
            Read the value of the processing cost (c) into the
            Processor Cost Matrix, namely PCM(,)
            j = j + 1
        Endfor
        i = i + 1
    Endfor
    Calculate the sum of each row and column and store the
    results in the Modified Processor Task Matrix MPTM(,)
    Arrange MPTM(,) in ascending order of row_sum and
    column_sum to get the Arranged Processor Task Matrix APTM(,)
    i = 1
    While all tasks != SELECTED
        Select the biggest possible square matrix from the left
        upper corner and store it into SMi(,)
        Apply the algorithm of the Assignment Problem
        [KANT2002] on SMi(,)
        i = i + 1
    Endwhile
    Club the processor-wise overall optimal processing costs
    State the results
End Algorithm

IMPLEMENTATION

In the present research paper, the distributed network consists of a set P of 4 processors {p1, p2, p3, p4} and a set T of 10 tasks {t1, t2, t3, t4, t5,
t6, t7, t8, t9, t10}. It is shown in Figure 1. The processing cost (c) of each task on each and every processor is known and is mentioned in the Processor Cost Matrix PCM(,) of order 10 x 4.

[Figure 1: the distributed network of processors p1–p4 and tasks t1–t10.]

      p1   p2   p3   p4   Row_Sum
t1   600  200  900  300    2000
t2   500  300  200  100    1100
[remaining rows of PCM(,) lost in extraction]
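The steps above can be sketched in Python. This is a hedged sketch, not the authors' code: the brute-force `assign_square` stands in for the assignment-problem algorithm of [KANT2002], and any example cost matrix beyond the quoted t1 and t2 rows of PCM(,) is hypothetical:

```python
from itertools import permutations

def assign_square(block):
    """Brute-force solution of the assignment problem on a k x k
    cost block (stand-in for the algorithm of [KANT2002])."""
    k = len(block)
    best_cost, best_perm = None, None
    for perm in permutations(range(k)):
        cost = sum(block[i][perm[i]] for i in range(k))
        if best_cost is None or cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_perm

def allocate(pcm):
    """pcm: m x n list of task-on-processor costs, with m > n."""
    m, n = len(pcm), len(pcm[0])
    # APTM step: order tasks and processors by ascending row/column sums
    rows = sorted(range(m), key=lambda i: sum(pcm[i]))
    cols = sorted(range(n), key=lambda j: sum(r[j] for r in pcm))
    assignment, offset = {}, 0
    while offset < m:                      # until all tasks are SELECTED
        k = min(n, m - offset)             # biggest square block, top-left
        block = [[pcm[rows[offset + i]][cols[j]] for j in range(k)]
                 for i in range(k)]
        perm = assign_square(block)
        for i in range(k):
            assignment[rows[offset + i]] = cols[perm[i]]
        offset += k
    total = sum(pcm[t][p] for t, p in assignment.items())
    return assignment, total
```

`allocate` returns a task-to-processor map and the clubbed total processing cost; the brute-force inner step is exponential in n, whereas the paper's overall procedure is analyzed as O(mn).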
effectiveness of the pseudo code. It is the common requirement for any allocation algorithm to minimize the time required for the algorithm to run to completion. The time complexity of the above-mentioned algorithm is O(mn). By taking several input examples, the above algorithm returns the following results, as in Table 2.

[Bar chart residue: per-example results; only axis ticks survive extraction.]
[Chart residue: number of tasks (m) vs. time complexity for "No. of Processors = 3" and "No. of Processors = 5", comparing algorithm SAGA1991 with the present algorithm.]

Table 3: Comparison Table

 n   m   Time complexity of     Time complexity of
         algorithm (SAGA1991)   present algorithm
 3   5          75                    15
 3   6         108                    18
 3   7         147                    21
 3   8         192                    24
 4   5         100                    20
 4   6         144                    24
 4   7         196                    28
 4   8         256                    32
 4   9         324                    36
1. Introduction
 0  2  ∞  ∞  ∞  ∞  ∞  3  ∞  ∞
 2  0  4  ∞  ∞  ∞  ∞  3  2  ∞
 ∞  4  0  3  ∞  ∞  ∞  ∞  ∞  5
 ∞  ∞  3  0  5  2  ∞  ∞  ∞  ∞
 ∞  ∞  ∞  5  0  4  ∞  ∞  ∞  ∞
 ∞  ∞  ∞  2  4  0  3  ∞  ∞  ∞
 ∞  ∞  ∞  ∞  ∞  3  0  3  ∞  1
 3  3  ∞  ∞  ∞  ∞  3  0  ∞  ∞
 ∞  2  ∞  3  ∞  ∞  ∞  ∞  0  5
 ∞  ∞  ∞  ∞  ∞  5  ∞  1  5  0

Figure 2: adjacency matrix (10 x 10)

The storage format of the above matrix gets reduced and is given as follows.

3. Complexity Analysis

For the list array, the space complexity is given as O(T), where T is the number of edges of the directed graph. In the worst case T = n², so the space complexity will be O(n²).

4. Searching Area

Since the searching area of the classical Dijkstra algorithm is large, there are processes by which it can be reduced. The shortest path between two points is a straight line, so the direction from the start point to the destination is generally the strike of the shortest path when planning a route on a real road network. The shortest path between two points generally lies on either side of the connecting line from the start point to the destination point, and usually it is near that line. If there is only one edge between the start point and the destination point, the edge itself is the shortest path.
[Equation lost in extraction: the ratio of time complexities for an elliptical versus a rectangular searching area.]
From the above equation, it can be seen that the ratio of time complexities when the searching area is an ellipse is smaller than the ratio when the searching area is rectangular. An elliptical searching area therefore gives the better result.

5. Feature matrix [2]

Another way to improve the classical Dijkstra algorithm is the following. To find the shortest path between two points, a number of operations must be performed; if we can reduce this number of operations, the efficiency of the classical Dijkstra algorithm increases. For this purpose there is the concept of the feature matrix. From the feature matrix we can draw the shortest path tree and obtain the shortest path, and its length, from the source node to every destination node.

To understand the concept of the feature matrix, let us take an example of 6 nodes connected as shown in Figure 6.

[Figure 6: nodes arrangement (6 nodes with weighted edges).]

The abutment (adjacency) matrix for the example is given as A =

  0   2   1   ∞   7  12
  2   0   ∞   2   ∞   ∞
  1   ∞   0   1   2   ∞
  ∞   2   1   0   3   ∞
  7   ∞   2   3   0   4
 12   ∞   ∞   ∞   4   0

The feature matrix can be obtained by the following steps.
1. Source S = {v1}, D = {0, 2, 1, ∞, 7, 12}.
2. First find the shortest distance from the source, i.e. D[3] = 1, so S = {v1, v3}.
3. For the nodes connected to v1 via v3, the distances can be obtained by D[3] + A[3][4] = 2 < D[4] = ∞ and D[3] + A[3][5] = 3 < D[5] = 7, so D[4] = 2, D[5] = 3 and the D matrix becomes D = {0, 2, 1, 2, 3, 12}.
4. Iterate this operation as follows. The second time we get S = {v1, v3, v2, v4} and D = {0, 2, 1, 2, 3, 12}. The third time we get S = {v1, v3, v2, v4, v5} and D = {0, 2, 1, 2, 3, 7}. The fourth time S = {v1, v3, v2, v4, v5, v6} and D = {0, 2, 1, 2, 3, 7}.
5. Comparing the resulting D and A matrices, we get the following matrix, called the feature matrix (F):

 0  0  0  0  0  0
 2  0  0  0  0  0
 1  0  0  0  0  0
 0  0  1  0  0  0
 0  0  2  0  0  0
 0  0  0  0  4  0
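The construction above can be sketched in code. This is a hedged sketch of the idea, not the authors' implementation: a standard heap-based Dijkstra records each node's predecessor, and the feature matrix F marks each shortest-path-tree edge with its weight:

```python
import heapq

INF = float("inf")

def dijkstra_feature(adj, src=0):
    """adj: symmetric cost matrix with INF for no edge. Returns the
    distance list D and a feature matrix F in which F[v][pred(v)]
    holds the weight of the shortest-path-tree edge into v."""
    n = len(adj)
    D = [INF] * n
    D[src] = 0
    pred = [None] * n
    done = [False] * n
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if done[u]:
            continue
        done[u] = True
        for v in range(n):
            w = adj[u][v]
            if w != INF and d + w < D[v]:   # relax edge (u, v)
                D[v] = d + w
                pred[v] = u
                heapq.heappush(pq, (D[v], v))
    F = [[0] * n for _ in range(n)]
    for v in range(n):
        if pred[v] is not None:
            F[v][pred[v]] = adj[v][pred[v]]
    return D, F
```

Run on the 6-node abutment matrix A above, this reproduces D = [0, 2, 1, 2, 3, 7] and the feature matrix F shown in step 5.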
[Figure 7: shortest path tree.]

From the shortest path tree we can find the shortest path from the source to all the other nodes, and also the path length.

7. References
[1] Dong Kai Fan, Ping Shi, "Improvement of Dijkstra's algorithm and its application in route planning," Shandong University of Technology, China, IEEE International Conference on FSKD, 2010, pp. 1901-1904.
[2] Ji-Xian Xiao, Fang-Ling Lu, "An improvement of the shortest path algorithm based on Dijkstra algorithm," IEEE ICCAE, 2010, pp. 383-385.
Times of recording of the different movements were different.

III. METHODOLOGY

B. Data Processing

The EEG data was notch filtered to remove the 50 Hz frequency present in each channel due to the AC power supply of the EEG machine. Harmonics of the 50 Hz frequency were also removed from the data. An IIR second-order notch filter with a quality factor (Q factor) of 3.55 was used to remove the unwanted frequency.
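Such a 50 Hz notch can be sketched as a standard second-order IIR (biquad) notch. This uses the common audio-EQ cookbook coefficient formulas rather than the authors' exact design, and the 256 Hz sampling rate is an assumption consistent with the 0.39π rad/sample figure quoted for the 50 Hz notch later in the paper:

```python
import cmath
import math

def notch_coefficients(f0, q, fs):
    """Second-order IIR notch (cookbook biquad): returns (b, a) with
    a zero pair on the unit circle at f0 and bandwidth set by Q."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    # normalize so that a[0] == 1
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def freq_response(b, a, f, fs):
    """Evaluate |H(e^{j*2*pi*f/fs})| for the biquad (b, a)."""
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

# 50 Hz notch, Q = 3.55, assumed fs = 256 Hz (50 Hz = 0.39*pi rad/sample)
b, a = notch_coefficients(50.0, 3.55, 256.0)
```

The gain is essentially zero at 50 Hz and close to unity away from the notch; a 100 Hz harmonic would be removed by cascading a second section designed the same way.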
Fig1: Block diagram for feature extraction of wrist movement
Fig. 3: Magnitude response of the notch filter designed at 50 Hz (0.39π radian per sample). [Plot residue: magnitude axis in dB, normalized frequency axis 0–0.9 rad/sample.]

D. CLASSIFIER DESIGN

Motor movements are finally differentiated by using different classifiers on the recorded data.

Table 1: Percentage values of classification for motor positions.

From the observations in Table 1 we see that the Quad classifier gives us the best possible results, with high classification accuracy for both extension and pronation movements.

[Plot residue: unlabeled plot over frequency 0–140.]
activity.

[…] can prove to be good features for classification of wrist movement.

Fig. 5: Histogram plot of frontal electrode (F3) for extension. [Histogram residue: frequency of occurrence over the range of values −40 to 40.]

Fig. 6: Histogram plot of frontal electrode (F3) for pronation. [Histogram residue: frequency of occurrence over the range of values −60 to 60.]

Table 2: Statistical feature values for different positions.

From the results tabulated in Table 2, extension has larger variance, mean, kurtosis and skewness values, while pronation has smaller values. Also, for both the motor […]

From Table 1, we justify our findings by designing different classifiers and training them for the two-class movements to get the best possible accuracy from the 16-channel recorded sample values.
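The four histogram-derived features used above can be sketched as follows; a plain-Python sketch, not the authors' code, assuming population moments computed per channel:

```python
def histogram_features(samples):
    """Variance, mean, skewness and kurtosis of a channel's samples:
    the four features used to separate extension from pronation.
    Assumes the samples are not all identical (sd > 0)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    sd = var ** 0.5
    skew = sum(((x - mean) / sd) ** 3 for x in samples) / n
    kurt = sum(((x - mean) / sd) ** 4 for x in samples) / n
    return {"mean": mean, "variance": var,
            "skewness": skew, "kurtosis": kurt}
```

Feeding each channel's samples through this function yields one four-element feature vector per channel for the classifier stage.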
V. CONCLUSION

EEG data class separation was investigated for two wrist movement classes, pronation and extension, using a 16-channel setup. Four features were extracted from the histogram plots of the signal for both the movements and the neutral state. Variance, mean, skewness and kurtosis are the features which easily demarcate the two movements. It was a three-class problem, since three data sets were used. Different classifiers were designed and trained on the recorded data sets, on the basis of which we classified between the motor movements.

VI. ACKNOWLEDGEMENT

The authors are indebted to the UGC. This work is a part of the funded major research project CF.No 32-14/2006(SR).

REFERENCES
Color Images Enhancement Based on Piecewise
Linear Mapping

Anubhav Kumar¹, Awanish Kr Kaushik¹, R. L. Yadava¹, Divya Saxena²
¹ Department of Electronics & Communication Engineering, Galgotia's College of Engineering & Technology, Gr. Noida, India
² Department of Mathematics, Vishveshwarya Institute of Engineering and Technology, G.B. Nagar, India
rajput.anubhav@gmail.com
Keywords – Color image enhancement, RGB color space, YCbCr color space, Piecewise linear mapping, RFSIM.

I. INTRODUCTION

Image enhancement processes consist of a collection of techniques that seek to improve the visual appearance of an image, or to convert the image to a form better suited for analysis by a human or machine. Nowadays there is a rapid increase in the application of color video media. This has resulted in a growing interest in color image enhancement techniques.

Some other common techniques to enhance the contrast of images are histogram equalization and homomorphic methods. The advantage of the histogram equalization technique is that it works very well for grayscale images; however, when histogram equalization is used to enhance color images, it may cause a shift in the color scale, resulting in artifacts and an imbalance in image color. Homomorphic filtering is used to correct non-uniform illumination. Similarly, Fairweather [8] has used techniques such as contrast stretching and Markov Random Fields; they applied a bimodal histogram model to the images in order to enhance the underwater image. Yoav [9] has used a physics-based model; they developed a scene recovery algorithm to clear underwater images/scenes through a polarizing filter. This approach addresses the issue of backscatter rather than blur.

In this paper color image enhancement based on piecewise linear mapping has been proposed. In Section II, the theoretical foundations of the piecewise linear mapping function are presented. The proposed enhancement algorithm is developed in Section III. Experiments conducted using a variety of color images are described in Section IV and results are discussed. A conclusion is drawn in Section V.

II. THEORETICAL FOUNDATIONS

Piecewise Linear Mapping Function –

The enhancement is done by first transforming the intensity values using a piecewise linear mapping function for the intensity component. The piecewise linear mapping function consists of three line segments as indicated in Figure 1, where vmin and vmax denote the minimum and maximum intensity levels in the original image, respectively. This type of mapping function permits proper allocation of the dynamic range to different ranges of intensity levels. The actual values of vlower and vupper determine the dynamic range allocation for the lower, intermediate and higher ranges of intensity levels. The intensities of the original image are transformed to create the optimally enhanced color image in NTSC YCbCr space directly.

[Equation (1), the piecewise linear mapping function, lost in extraction.]
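Since equation (1) itself did not survive extraction, the following is only an illustrative sketch of a three-segment mapping of the kind Figure 1 describes; the breakpoints vlower/vupper and the output levels they map to are assumed parameters, not the paper's values:

```python
def piecewise_linear_map(v, vmin, vlower, vupper, vmax,
                         out_lower=0.25, out_upper=0.75):
    """Three-segment intensity mapping (cf. Figure 1). Maps
    [vmin, vlower] -> [0, out_lower], [vlower, vupper] ->
    [out_lower, out_upper], [vupper, vmax] -> [out_upper, 1]."""
    def seg(x, x0, x1, y0, y1):
        # linear interpolation on one segment
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    if v <= vlower:
        return seg(v, vmin, vlower, 0.0, out_lower)
    if v <= vupper:
        return seg(v, vlower, vupper, out_lower, out_upper)
    return seg(v, vupper, vmax, out_upper, 1.0)
```

Choosing out_lower and out_upper reallocates the dynamic range between the dark, intermediate and bright intensity ranges, which is the role the paper assigns to vlower and vupper.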
Figure 3: Lena image. (a) Original image; (b) enhanced by the Murtaza method [1]; (c) enhanced by the proposed method.

Figure 4: Watch image. (a) Original image; (b) enhanced by the Murtaza method [1]; (c) enhanced by the proposed method.

Figure 5: Play image. (a) Original image; (b) enhanced by the Murtaza method [1]; (c) enhanced by the proposed method.
D_i = (Σ_x Σ_y d_i(x, y)·M(x, y)) / (Σ_x Σ_y M(x, y))    (3)

with M(x, y) the feature mask defined in [3]. The similarity between two feature maps f_i (i = 1~5) and g_i at the corresponding location (x, y) is defined as

d_i(x, y) = (2·f_i(x, y)·g_i(x, y) + c) / (f_i²(x, y) + g_i²(x, y) + c)    (4)

Then, we compute the RFSIM index [3] between f and g as

RFSIM = Π_{i=1}^{5} D_i    (5)

Table-I
RFSIM (Image Quality Assessment)

Images        | Murtaza et al. [1] | Proposed Method
Lena Image    |      0.3445        |     0.7261
Flower Image  |      0.4183        |     0.8321
Watch Image   |      0.4604        |     0.8953
Play image    |      0.4577        |     0.7614

V. CONCLUSION

[Conclusion text lost in extraction.]

REFERENCES

[…] "…enhancement and intensity preservation for gray-level images using multi objective particle swarm optimization," IEEE Trans. on Automation Science and Engineering, vol. 6, no. 1, pp. 145-155, 2009. [entry truncated]
[6] S. K. Naik and C. A. Murthy, "Hue-preserving color image enhancement without gamut problem," IEEE Trans. on Image Processing, vol. 12, no. 12, pp. 1591-1598, 2003.
[7] Q. Chen, X. Xu, Q. Sun, and D. Xia, "A solution to the deficiencies of image enhancement," Signal Processing, vol. 90, pp. 44-56, 2010.
[8] A. J. R. Fairweather, M. A. Hodgetts, A. R. Greig, "Robust scene interpretation of underwater image sequences," in 6th International Conference on Image Processing and its Applications, 1997, pp. 660-664, ISBN: 0 85296 692 X.
[9] Y. Schechner and N. Karpel, "Clear Underwater Vision," Proceedings of the IEEE CVPR, vol. 1, 2004, pp. 536-543.
[10] M. Isa, M. Y. Mashor, N. H. Othman, "Contrast Enhancement Image Processing on Segmented Pap smear Cytology Images," Proc. of Int. Conf. on Robotic, Vision, Information and Signal Processing, pp. 118-125, 2003.
combined to yield a strong learner with enhanced predictive capability. The ensemble learning methods produce a final decision by decision fusion. In principle the performance of any classifier can be enhanced; however, usually the risk of overfitting prompts the use of an ensemble of weak learners [5]-[7]. There is no limit on the number of learners in the ensemble. The performance of an ensemble learning method can be optimized by proper selection of the ensemble size and fusion method to suit any specific application [5], [8].

Most popular among the many ensemble learning algorithms is the AdaBoost (a nickname for adaptive boosting) algorithm described by Freund and Schapire [9]. A summary of the AdaBoost algorithm is given in Section II. The objective of the present study is to show that the predictive ability of the AdaBoost algorithm can be further boosted by varied representation of the same raw data, by employing several preprocessing and feature extraction methods. Different preprocessing and feature extraction procedures reveal hidden data structure from different perspectives, and yield alternate sets of features to represent the same example. Combining these sets in some way can, in principle, provide a more reliable and accurate representation. Motivated by this idea, we report here a study on enhancing the performance of the AdaBoost algorithm using a simple model of feature fusion based on two common preprocessors combined with one feature extractor. The procedure is described in Section III. Using a linear threshold classifier for AdaBoost ensemble generation, Section IV presents validation results based on some benchmark data sets available from open sources. The paper concludes with a discussion in Section V.

II. THE ADABOOST ALGORITHM

AdaBoost [9] is a supervised boosting algorithm that produces a strong classifier by combining several weak classifiers from a family. It needs a set of training examples and a base learning algorithm as input. Let X = {x1, x2, ..., xN} be the set of N training vectors drawn from the target classes, and let Y = {y1, y2, ..., yN} denote their class labels. The class identities are given the numeric representation +1 and −1; that is, yi ∈ {−1, +1} for every xi ∈ X, i = 1, 2, ..., N. The base learner is chosen such that it produces more than 50% correct classification on the training set; let it be denoted by ht(x). The basic algorithm is as follows.

Step 1. Define the input. It consists of N training examples, a base learning algorithm, and the number of training runs T.

Step 2. Initialize the weight distribution over the training examples according to w1(i) = 1/N for i = 1, 2, ..., N, where i stands for the training examples and the subscript 1 denotes the first of the T rounds used to determine the weights wt(i). In the first round all training examples are assigned equal weights. This assignment ensures that the weight distribution is normalized, that is, Σ_{i=1}^{N} w1(i) = 1.

Step 3. Start a loop "for t = 1 to T" that creates sequential base classifiers according to the following substeps in succession:
- Train the base classifier ht based on the example weight distribution wt(i).
- Determine the training error, defined as ε_t = Σ_{i: ht(xi) ≠ yi} wt(i), which is the sum of the weights of all examples misclassified by ht.
- Assign a weight to the t-th classifier ht according to α_t = (1/2) ln((1 − ε_t)/ε_t).
- Update the example weights according to
  w_{t+1}(i) = (wt(i)/zt)·e^{−α_t} for ht(xi) = yi,
  w_{t+1}(i) = (wt(i)/zt)·e^{+α_t} for ht(xi) ≠ yi,
  where zt is the normalization factor making w_{t+1}(i) a probability distribution, that is, Σ_{i=1}^{N} w_{t+1}(i) = 1.
- End loop.

Step 4. Output the boosted classifier as the weighted sum of the T base classifiers: H(x) = sign(Σ_{t=1}^{T} α_t·ht(x)).
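The summary above maps directly onto code. A minimal sketch, not the paper's implementation: the decision stump below is assumed for illustration only, standing in for the paper's linear threshold base learner:

```python
import math

def train_stump(X, y, w):
    """Weighted decision stump on scalar inputs (illustrative base
    learner; any weighted learner beating chance would do)."""
    best = None
    for thr in sorted(set(X)):
        for sign in (1, -1):
            err = sum(w[i] for i in range(len(X))
                      if (sign if X[i] >= thr else -sign) != y[i])
            if best is None or err < best[0]:
                best = (err, thr, sign)
    _, thr, sign = best
    return lambda x: sign if x >= thr else -sign

def adaboost_train(X, y, base_learner, T):
    """AdaBoost as summarized in Section II; labels y are in {-1, +1}."""
    N = len(X)
    w = [1.0 / N] * N                      # Step 2: w_1(i) = 1/N
    ensemble = []                          # pairs (alpha_t, h_t)
    for t in range(T):                     # Step 3
        h = base_learner(X, y, w)
        eps = sum(w[i] for i in range(N) if h(X[i]) != y[i])
        if eps == 0:                       # perfect base classifier
            ensemble.append((1.0, h))
            break
        if eps >= 0.5:                     # base learner must beat chance
            break
        alpha = 0.5 * math.log((1 - eps) / eps)
        z = 0.0
        for i in range(N):                 # weight update
            w[i] *= math.exp(-alpha if h(X[i]) == y[i] else alpha)
            z += w[i]
        w = [wi / z for wi in w]           # normalize: sum w_{t+1}(i) = 1
        ensemble.append((alpha, h))
    return ensemble

def adaboost_predict(ensemble, x):
    # Step 4: H(x) = sign( sum_t alpha_t * h_t(x) )
    s = sum(alpha * h(x) for alpha, h in ensemble)
    return 1 if s >= 0 else -1
```

Misclassified examples gain weight each round, so later stumps concentrate on the hard examples, which is the mechanism the validation section exploits.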
III. MULTIPLE PREPROCESSORS AND INFORMATION FUSION

Vector autoscaling
The matrix elements are mean-centered and variance-normalized for each sample separately (row wise) as x'ij = (xij − x̄i)/σi, where x̄i = (1/N) Σ_{j=1}^{N} xij and σi² = (1/N) Σ_{j=1}^{N} (xij − x̄i)².

Dimensional autoscaling
The matrix elements are mean-centered and variance-normalized for each dimension separately (column wise) as x'ij = (xij − x̄j)/σj, where x̄j = (1/M) Σ_{i=1}^{M} xij and σj² = (1/M) Σ_{i=1}^{M} (xij − x̄j)².

The feature extraction has been done by principal component analysis (PCA).

In data space fusion, the sample vectors transformed by the two methods are fused by simple concatenation of the vector components. That is, if the i-th training vector processed by vector autoscaling is x1_i = {x1_ij} = {x1_i1, x1_i2, ..., x1_iM} and that processed by dimensional autoscaling is x2_i = {x2_ij} = {x2_i1, x2_i2, ..., x2_iM}, then the i-th fused data vector is defined by z_i = {x1_i1, ..., x1_iM, x2_i1, ..., x2_iM}. The feature extraction is done by PCA of the new N × 2M dimensional data matrix.

In feature space fusion, two alternate feature spaces are created first by the PCA of the vector-autoscaled and the dimensionally autoscaled data [continuation lost in extraction].

IV. VALIDATION

Four data sets of two-class problems have been used in the validation analysis. These data were collected from the UCI machine learning repository. When analyzed by a single strong classifier (a backpropagation neural network) in combination with dimensional autoscaling and PCA feature extraction, the classification rates for two of the data sets (sonar and heart) were typically <60%, and those for the other two (Haberman breast cancer and Pima Indian diabetes) were typically >70%. The analysis of the same data sets was done by the proposed method, consisting of the weak-threshold-classifier-based AdaBoost algorithm and the two methods of feature extraction. The division of the available data between the training and test sets was done by random selection in a nearly 50-50 ratio. The description of the data sets is given in Table 1.

Table 2 presents the best classification results obtained by the AdaBoost algorithm with the linear threshold base classifiers described in the preceding section. The data are processed by four combinations of preprocessing, fusion and feature extraction strategies before AdaBoosting. These combinations are: vector-autoscaling + PCA; dimensional-autoscaling + PCA; (vector-autoscaling + dimensional-autoscaling) data space fusion + PCA; and (vector-autoscaling + PCA) + (dimensional-autoscaling + PCA) feature space fusion. It can be seen that in all cases the performance of the AdaBoost algorithm has improved after the multiple
preprocessor based data space or feature space fusions. The amount of improvement however depends on the type of data. For the breast-cancer and diabetes data the improvements are very marginal. However, in the case of the sonar and heart data the improvements are substantial after data space fusion.

TABLE I
DATA SETS USED IN PRESENT ANALYSIS

Data | Classes | Samples | Attributes | Remark
Sonar | 2 | 208 | 60 | Classes: sonar returns from a metal cylinder and from a similarly shaped rock. Attributes: integrated energy within a frequency band.
Heart | 2 | 267 | 22 | Classes: cardiac normal and abnormal condition. Attributes: SPECT images.
Haberman's Breast-Cancer | 2 | 306 | 3 | Classes: patients' survival after 5 years and death within 5 years of breast cancer surgery. Attributes: age, year of operation, positive axillary nodes.
Pima-Indian Diabetes | 2 | 768 | 8 | Classes: signs or no sign of diabetes in Pima-Indian females above 21 years of age. Attributes: patient's history and physiological parameters.

Fig. 1. Variation of error rate for sonar test data with ensemble size in the AdaBoost algorithm for linear threshold classifiers. [Four panels: vector autoscaling, dimensional autoscaling, data space fusion, feature space fusion; X-axis: number of threshold classifiers in the AdaBoost ensemble; Y-axis: error rate.]

Fig. 3. Variation of error rate for Haberman test data with ensemble size in the AdaBoost algorithm for linear threshold classifiers. [Same four-panel layout.]

[…] samples, and the plots show the variation of error rate with the ensemble size. For example, the selected threshold classifier results in initial error rates for the sonar and heart data close to 50%, and for the breast-cancer and diabetes data close to 25%. AdaBoosting reduces the error rate significantly in the case of the former, Fig. 1 and Fig. 2, but not so much in the case of the latter, Fig. 3 and Fig. 4. The similar impact on the classification rates is apparent from the results in Table II. Another notable point is that the data space fusion facilitates better boosting.
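The two autoscaling preprocessors and the data space fusion of Section III can be sketched as follows; a plain-Python sketch under the section's definitions, not the authors' code (the PCA step that would follow is omitted):

```python
def vector_autoscale(X):
    """Row-wise autoscaling: each sample is centered and scaled by
    its own mean and standard deviation."""
    out = []
    for row in X:
        n = len(row)
        m = sum(row) / n
        s = (sum((v - m) ** 2 for v in row) / n) ** 0.5
        s = s if s > 0 else 1.0            # guard constant rows
        out.append([(v - m) / s for v in row])
    return out

def dimensional_autoscale(X):
    """Column-wise autoscaling: each attribute is centered and
    scaled over all samples."""
    num, dim = len(X), len(X[0])
    out = [[0.0] * dim for _ in range(num)]
    for j in range(dim):
        col = [X[i][j] for i in range(num)]
        m = sum(col) / num
        s = (sum((v - m) ** 2 for v in col) / num) ** 0.5
        s = s if s > 0 else 1.0            # guard constant columns
        for i in range(num):
            out[i][j] = (X[i][j] - m) / s
    return out

def data_space_fusion(X):
    # z_i: concatenation of the two autoscaled versions of sample i,
    # giving a 2M-dimensional vector per sample (PCA would follow)
    return [a + b for a, b in zip(vector_autoscale(X),
                                  dimensional_autoscale(X))]
```

Each fused vector carries both the per-sample and per-attribute views of the same example, which is the "varied representation" the paper credits for the improved boosting.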
error rate
0.5 0.5
AdaBoosted linear threshold classifier by 12% to 22% for the
sonar and heart data compared to the AdaBoosting without
fusion. We thus conclude that by bringing in diversity in the
0 0 preprocesssing methods for data representation to the feature
20 40 60 80 20 40 60 80
extractor yields more accurate feature set, which further
data space fusion feature space fusion
1 1
enhances the efficiency of the AdaBoost algorithm.
error rate
error rate
REFERENCES
V. DISCUSSION AND CONCLUSION .
[1] A. K. Jain, R. P. W. Duin, and J. Mao, “Statistical pattern recognition: a
Sonar data is a complex distribution of energy of chirped review,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol.
sonar signals in different frequency bands returned from two 22, no. 1, pp. 4-37, Jan. 2000.
types of targets (metallic cylinder and cylinder shaped rock). [2] R. J. Schalkoff, Pattern Recognition – Statistical, Structural and
All the attributes are therefore of the same kind. Besides, there Neural Approaches, Wiley & Sons, 1992, ch. 1.
[3] T.G. Dietterich, “Ensemble learning,” in The Handbook of Brain
could be appreciable correlation between different attributes in Theory and Neural Networks, 2nd ed. M.A. Arbib, Ed. Cambridge,
raw data. The best result for this data set is obtained by the MA: The MIT Press, 2002, pp. 405-408.
combination of data space fusion with AdaBoosting. The error [4] R. E. Schapire, “The strength of weak learnability,” Machine Learning,
rate on the training data set drops to 0 after 20 rounds of base vol. 5, no. 2, pp. 197-227, 1990.
learner creation. On the test data set, however, the error rate continued to decrease up to 80 rounds (Fig. 1). The use of other preprocessing methods did not produce much boosting across ensemble sizes. The use of dimensional autoscaling and feature space fusion reduced the error rate significantly after only a few rounds of iteration; later, the error rate increased. The vector autoscaling did not produce a boosting effect in any condition (Fig. 1).
Heart data is diagnostic cardiac data based on SPECT (single photon emission computed tomography) images for patients belonging to two categories: normal and abnormal. The classification results in Table II and the error results in Fig. 2 indicate a trend similar to that for the sonar data. The combination of data space fusion with AdaBoosting yields the best result. The attributes of image data are also likely to be correlated.
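The two preprocessing operations compared above can be sketched as follows. These are the standard definitions (per-feature standardization versus per-sample unit-norm scaling); the exact variants used in the experiments are an assumption here:

```python
import numpy as np

def dimensional_autoscale(X):
    # standardize each feature (column) to zero mean and unit variance
    return (X - X.mean(axis=0)) / X.std(axis=0)

def vector_autoscale(X):
    # scale each sample (row) to unit Euclidean norm
    return X / np.linalg.norm(X, axis=1, keepdims=True)
```

Dimensional autoscaling removes per-feature offset and scale differences, while vector autoscaling removes per-sample magnitude differences; which one helps depends on whether the discriminating information lies in feature ratios or in absolute feature values.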
In contrast, the Haberman's survival data (survival after breast cancer surgery) and the Pima Indian Diabetes data consist of patients' histories and physiological parameters, such as the number of positive axillary nodes and the blood glucose level. The variables in these data sets are of different types and do not seem to have a direct correlation. AdaBoosting under any combination of the preprocessing fusion strategies does not yield a significantly enhanced classification rate.
It appears, therefore, that the strategy of multiple-preprocessor-based fusion of information enhances the efficiency of the AdaBoost algorithm in those multivariate situations where the variables are of the same type and are correlated.
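The AdaBoost round structure discussed throughout (weak-learner creation, sample re-weighting, weighted voting) can be sketched with decision stumps. This is a generic illustration, not the paper's exact configuration of preprocessors and weak learners:

```python
import numpy as np

def adaboost_stumps(X, y, rounds=10):
    """Train AdaBoost with decision stumps; labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)            # sample weights, boosted on mistakes
    ensemble = []                       # (alpha, feature, threshold, polarity)
    for _ in range(rounds):
        best = None
        # exhaustive search for the stump minimizing weighted error
        for f in range(d):
            for t in np.unique(X[:, f]):
                for p in (1, -1):
                    pred = p * np.where(X[:, f] <= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, p)
        err, f, t, p = best
        err = min(max(err, 1e-10), 1 - 1e-10)          # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)          # weak-learner vote weight
        pred = p * np.where(X[:, f] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)                 # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, f, t, p))
    return ensemble

def predict(ensemble, X):
    # weighted vote of all stumps
    score = sum(a * p * np.where(X[:, f] <= t, 1, -1) for a, f, t, p in ensemble)
    return np.sign(score)
```

Each round concentrates the weight distribution on the currently misclassified samples, which is why the training error keeps falling over many rounds while the test error may eventually rise, as observed above.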
COPO305-5
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM (SPRTOS)” MARCH 26-27 2011
Abstract- This paper presents some real-time video processing techniques for intelligent and efficient hand gesture recognition, with the aim of establishing a virtual interfacing platform for Human-Computer Interaction (HCI). The first step of the process is colour-segmentation-based skin detection, followed by area-based noise filtering. If a gesture is detected, the next step is to calculate a number of independent parameters from the available visual data and assign a distinct range of values of each parameter to a predefined set of different gestures. The final step is the hierarchical mapping of the obtained parameter values to recognise a particular gesture from the whole set. Deliberately, the mapping of gestures is not exhaustive, so as to prevent incorrect mapping (misinterpretation) of any random gesture not belonging to the predefined set. The applications of the same are inclusive of, but not limited to, Sign Language Recognition, robotics, computer gaming, etc. Also, the concept may be extended, using the same parameters, to facial expression recognition techniques.

1. INTRODUCTION
Gestures and gesture recognition are terms increasingly encountered in discussions of human-computer interaction. The term includes character recognition, the recognition of proofreaders' symbols, shorthand, etc. Every physical action involves a gesture of some sort in order to be articulated. Furthermore, the nature of that gesture is generally an important component in establishing the quality of feel of the action. The general problem is quite challenging due to a number of issues, including the complicated nature of static and dynamic hand gestures, complex backgrounds, and occlusions. Attacking the problem in its generality requires elaborate algorithms demanding intensive computer resources. Due to real-time operational requirements, we are interested in a computationally efficient algorithm.

Previous approaches to hand gesture recognition include the use of markers on various points of the hand, including the fingertips. Calculation and observation of the relative placement and orientation of these markers specifies a particular gesture. The inconvenience of placing markers on the user's hand makes this an infeasible approach in practice. Another approach is to use sensor-fitted gloves to detect the orientation and other geometrical properties of the hand. The demerit of this approach is its cost ineffectiveness.

The approach proposed in this text is quite user-friendly, as it does not require any kind of markers or special gloves for its operation. Also, the memory requirements are low because subsequent video frames are not stored in memory; they are just processed and overwritten. Obviously, this adds the new challenge of making the algorithm very fast and efficient, fulfilment of which is ensured by using low-complexity calculation techniques. For ease of implementation, the proposed algorithm is based on three basic assumptions:

1. The background should be dark.
2. The hand should always be at a constant distance from the camera.
3. There should be a time gap of at least 200 ms between every two gestures.
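The first two stages of the pipeline, colour-segmentation-based skin detection followed by area-based noise filtering, can be sketched as follows. The RGB thresholds and the minimum blob area are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np
from scipy import ndimage

def skin_mask(rgb):
    """Per-pixel skin classification by colour thresholding (illustrative rule)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    # skin tends to be red-dominant with a clear red/green separation
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)

def area_noise_filter(mask, min_area=20):
    """Area-based noise filtering: drop connected blobs smaller than min_area pixels."""
    labels, n = ndimage.label(mask)          # 4-connected component labelling
    clean = np.zeros_like(mask)
    for i in range(1, n + 1):
        blob = labels == i
        if blob.sum() >= min_area:
            clean |= blob                     # keep only sufficiently large blobs
    return clean
```

With a dark background (assumption 1 above), the surviving large blob is taken as the hand region, from which the gesture parameters can then be computed.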
William Sheldon classified personality according to body type [17]. He called this a person's somatotype and identified three main somatotypes, shown in Table 1.

Table 1. Sheldon's somatotypes and character interpretations

  Somatotype                Character characteristics                Shape
  Endomorph [viscerotonic]  Relaxed, sociable, tolerant,             Plump, buxom, developed
                            comfort-loving, peaceful                 visceral structure
  Mesomorph [somatotonic]   Active, assertive, vigorous, combative   Muscular
  Ectomorph [cerebrotonic]  Quiet, fragile, restrained,              Lean, delicate, poor
                            non-assertive, sensitive                 muscles

A person is rated on each of these three dimensions using a scale from 1 (low) to 7 (high), with a mean of 4 (average). Therefore, for example, a person who is a pure mesomorph would have a score of 1-7-1.
In Ayurvedic medicine (used in India since ~3000 BC) there are three main metabolic body types (doshas) – Vata, Pitta and Kapha – which in some way correspond to Sheldon's somatotypes. Body types have been criticized for very weak empirical methodology and are not generally used in Western psychology (they are used more often in alternative therapies and in Eastern psychology and spirituality).

Complex physical appearance evaluation
This approach evaluates the face and body parts in combination, and it is considered a form of physiognomy as well. Physical appearance characteristics, such as the appearance of some facial features, of the skull, shoulders, hands, fingers, legs, and the type of mimics and voice, may define personality traits. For example, this is used in socionics (see Table 2), which is a branch of psychology based on Carl Jung's work on Psychological Types. Moreover, many socionics experts use the visual method of personality characteristics identification as a main method for personality traits and types recognition.

Table 2. Example of some outer appearance characteristics and their interpretation

  No  Physical character    Outer appearance: Sensoring        Outer appearance: Intuitive
  01  The form of bones     Short and thick;                   Lengthy and thin;
      and muscles           muscles are pronounced             muscles aren't pronounced
  02  Form of the nose      Sensoring + Logical /              Intuitive + Ethical: «triangle
                            Sensoring + Ethical:               with peak on the top»;
                            horizontal line in the             Intuitive + Logical: «triangle
                            nose bridge                        with peak in the bottom»

Neuropsychological tests
Around the 1990s, neuroscience entered the domain of personality psychology. It introduced powerful brain analysis tools like electroencephalography (EEG), positron emission tomography (PET), functional magnetic resonance imaging (fMRI) and structural MRI, including diffusion tensor imaging (DTI),
to this study. One of the founders of this area of brain research is Richard Davidson of the University of Wisconsin-Madison [18]. Davidson's research lab has focused on the role of the prefrontal cortex and amygdala in manifesting human personality. In particular, this research has looked at hemispheric asymmetry of activity in these regions. Neuropsychological studies have illustrated how hemispheric asymmetry can affect an individual's personality.
In contemporary psychological research there should be an instrument which would provide a maximum amount and type of objective/unbiased information about personality in as short a time as possible, preferably with no participation of the person whose characteristics are identified. A comparison of the approaches to identification of psychological characteristics described above is represented in Table 3.

Table 3. Some comparison of approaches to identification of psychological characteristics

  Criterion                        Psychological   Interview, direct  Face, body  Neuropsychological
                                   questionnaires  observation        evaluation  tests
  Easy and not time-consuming      –               –                  +           –
  for the person who is tested
  Person may not participate       –               –                  +           –
  in the testing process
  High validity and                +               –                  ?           –
  reliability [19]
  Practically no possibility       –               –                  +           –
  for respondent faking
  No need for expensive            +               +                  +           –
  hi-tech hardware

In psychological testing there is a considerable problem that respondents are often able to distort their responses. This is particularly problematic in employment contexts and other contexts where important decisions are being made and there is an incentive to present oneself in a favorable manner. Social desirability is a tendency to portray oneself in a positive light, and faking bad also happens, that is, purposely saying 'no' or looking bad if there is a 'reward' (e.g. attention, compensation, social welfare, etc.). Work in experimental settings [20,21] has shown that when student samples have been asked to deliberately fake on a personality test, they demonstrated that they are capable of doing this.
Though several strategies have been adopted for reducing respondent faking, this is still a problem for such traditional psychological testing instruments as questionnaires, interviews and direct observations. Surprisingly, neuropsychological tests are prone to respondent faking, too [22,23]. Faking response styles include faking bad (malingering), faking good (defensiveness), attempts at invalidation, mixed responding (faking good and bad), and a fluctuating, changing style that occurs within one evaluation session. These response styles lead to incorrect results.
Concerning the face and facial features, faking becomes much more complicated: it is impossible to change the shape of a nose or cheekbones just when a person wants. Besides, it is often unknown to the holder what exactly his or her face reveals. Theoretically, people can "fake" facial features by intentionally changing their shape, color or texture, for instance using plastic surgery, and identifying personal psychological characteristics becomes much harder in this case, though it may still be accomplished.
The face is the first subject that is unique for people and is used for people recognition. Thus, the face is the most available means of evaluation among instruments otherwise based on questionnaires, interviews and neuropsychological tests. People in general need not participate in the testing process: identification of personality characteristics may be done remotely, even by exterior parties.
Summarizing, the face provides researchers and psychologists with an instrument for obtaining information about personality and psychological traits that would be much more objective than questionnaires and neuropsychological tests (as we can't change facial features just when such a desire appears) and could be obtained remotely using a person's facial portrait, with no need for personal involvement.
If such an instrument works automatically (the system gets a facial portrait, processes it and as a result gives out information about personality characteristics) and has a straightforward layout, then: 1) psychological testing becomes more accurate, fast, objective and available for different kinds of research and applications; 2) deep knowledge in the interpretation of facial features, which is rather rare in modern society, isn't needed to administer and use the instrument. Methods and algorithms originally developed in the face detection, face recognition and facial expression recognition research fields, as well as contemporary trends (applying standard face images, multimodality, three-dimensionality), should be applied and adjusted to so-called Automatic Psychological Characteristics Recognition from Face. From its side, Automatic Recognition of Psychological Characteristics from Face is believed to bring scientific benefits to face recognition, facial expression recognition, face animation, face retrieval, etc., and finally to contribute to the development of human-computer interaction on a higher level. Thus, the relations between such research areas as face recognition, facial expression recognition and psychological characteristics recognition are mutually beneficial.

3. Approaches to psychological characteristics recognition from face
There are three main approaches to psychological characteristics recognition from face: physiognomy, phase facial portrait and ophthalmogeometry, see Fig. 1. The first interprets different facial features, the second works with angles of facial features and facial asymmetry, and the third extracts and interprets eye region parameters. Methods developed for these approaches are described below.

Figure 1. Approaches to psychological characteristics recognition from facial portrait

Physiognomy is a theory based upon the idea that the assessment of a person's outer appearance, primarily the face, facial features, skin texture and quality, may give insights into one's character or personality. Physiognomy has flourished since the time of the Greeks (Empedocles, Socrates, Hippocrates and Aristotle), amongst the Chinese and Indians, with the Romans (Polemon and Adamantius), in the Arab world (including Avicenna), and during the European renaissance (Gerolamo Cardano and Giovanni Battista della Porta). It faded in
popularity during the 18th century, was eclipsed by phrenology in the 19th, and has been refreshed by personologists in the 20th century.
During the 20th century, attempts were made to perform scientific experiments concerning the validity of different facial feature interpretations, and high accuracy results were claimed [24], though these are mostly not accepted by official science [25]. At the same time, science is step by step confirming some physiognomy beliefs. For instance, correlations have been established between IQ and cranial volume [26,27,28,29]. Testosterone levels, which are known to correlate with aggressiveness, are also strongly correlated with features such as finger-length ratios and square jaws [30,31].
Interpretation of facial features based on physiognomy has been implemented in psychological characteristics diagnosis tools such as the "Visage" project [32] developed by Dr. Paul Ekman and the "Digital physiognomy" software [33] developed by Uniphiz Lab.
"Visage" is a project for collecting and organizing information about relatively permanent facial features. It includes methods for storing, retrieving, and inspecting the data. Visage is a unique database schema for representing physiognomy and the interpretation of physiognomic signs. The Visage demonstration application illustrates limited variations of some facial features in the following categories: forehead and eyebrows (see Fig. 2), eyes and eyelids, nose, mouth and jaw, cheeks, chin, ears. The user should select features that are distinctive about the face that is going to be interpreted and then click the "Get..." button. The application retrieves information from the database relevant to the description of physiognomy, including an estimation of the accuracy of the sources of information.

Figure 2. Example of the table and interface of the Visage demonstration application: facial features in the forehead and eyebrow area [34]

"Digital physiognomy" software determines a person's psychological characteristics based on temperament types, intellect, optimism – pessimism, conformism – adventurism, egoism – altruism, philanthropy – hostility, laziness, honesty, etc., and then presents a detailed character analysis of the person in a graphic format. The tool works like a police sketch (photo robot), so the user has to select different parts of the person's face and doesn't need to have a person's photograph, see Fig. 3. It is claimed that only the facial features that can be interpreted with high accuracy were used, and a confidence factor is calculated for each interpretation by the tool. It should be noted that the "Digital physiognomy" tool also uses a visual systematic classification of 16 personality types based upon Myers-Briggs typology, see Fig. 4.
The "Visage" and "Digital Physiognomy" projects are some of the first attempts to develop a physiognomic database and use modern technology for physiognomic interpretations. In spite of having value for psychological diagnosis based on physiognomy, both projects use manual selection of facial features and thus can't be used extensively and applied in scientific research.
Figure 7. Translated picture from Muldashev's book [39]: here two parameters of the facial eye region are used for recognition of some basic psychological traits, e.g. strong will and fearfulness.

Figure 8. Ophthalmogeometrical pattern extraction [40]

of brain asymmetry phenomena and face asymmetry. Although Anuashvili claims that the application developed for the video-computer psychological diagnosis and correction method is entirely automated, in practice it may be considered semi-automated, as manual selection of facial points on the image is required. This limits the usage of such an application for extensive research and other purposes.
Concerning the ophthalmogeometry approach, it is based on the idea that a person's emotional, physical and psychological states can be recognized by 22 parameters of the eye region of the face [39], see Fig. 7. The ophthalmogeometry phenomenon was discovered by Prof. Ernst Muldashev. Apart from other interesting facts, E. Muldashev found that from 4-5 years after birth the only practically constant parameter of the human body is the diameter of the transparent part of the cornea, which equals 10±0.56 mm. He also put forward the idea that the ophthalmogeometrical pattern is unique for each person. The procedure for identifying and calculating this pattern is described by Leonid Kompanets [40], see Fig. 8. Ophthalmogeometry is based on interesting ideas and may be applied to psychological and medical research as well as to biometrics, though it is a not very deeply investigated area of facial analysis, which primarily needs automation of ophthalmogeometric pattern extraction and further investigation.

4. Conclusion
The paper represents the general idea that the face provides researchers and psychologists with an objective instrument for obtaining information about personality and psychological traits. An up-to-date survey of approaches and methods in psychological characteristics recognition from facial images is provided.
In perspective, the new research task of automating procedures in applications of psychological characteristics recognition from face should be explored. Various approaches and methods developed within face recognition, facial expression recognition, face retrieval, face modeling and animation may be applied and adjusted for recognition of psychological characteristics from face. Undeniably, such an automated system of psychological characteristics recognition from face will find countless psychological, educational and business applications. It may also be used as part of medical systems: 1) a patient's psychological state and traits influence the process of medical treatment, and this should be taken into consideration and researched; 2) a patient's psychological characteristics should be taken into account to reflect and construct the psychosomatic model of disease in the environment, which includes biological, psychological, and social factors.

Ekaterina Kamenskaya, Georgy Kukharev. Recognition of Psychological Characteristics from Face

References
[1] Carver C. S., Scheier M. F. Perspectives on personality (4th ed.). Boston: Allyn and Bacon, 2000, page 5.
[2] DSM, Diagnostic and Statistical Manual of Mental Disorders, can be found at http://www.psych.org/research/.
[3] Hampson S. E. Advances in Personality Psychology. Psychology Press, 2000.
[4] Holigrocki R. J., Kaminski P. L., Frieswyk S. H. (2002). PCIA-II: Parent-Child Interaction Assessment Version II. Unpublished manuscript, University of Indianapolis. (Update of PCIA Tech. Rep. No. 99-1046. Topeka, KS: Child and Family Center, The Menninger Clinic.) (Available from Dr. Richard J. Holigrocki or Dr. Patricia L. Kaminski).
[5] Nigel Barber. The evolutionary psychology of physical attractiveness: Sexual selection and human morphology. Ethology and Sociobiology, Volume 16, Issue 5, September 1995, pages 395-424.
[6] John P. Swaddle, Innes C. Cuthill. Asymmetry and Human Facial Attractiveness: Symmetry May not Always be Beautiful. Proceedings: Biological Sciences, Vol. 261, No. 1360 (Jul. 22, 1995), pages 111-116.
[7] Thomas R. Alley, Michael R. Cunningham. Averaged faces are attractive, but very attractive faces are not average. Psychological Science, 2 (2), 1991, pages 123-125.
[8] Leslie A. Zebrowitz, Gillian Rhodes. Sensitivity to "Bad Genes" and the Anomalous Face Overgeneralization Effect: Cue Validity, Cue Utilization, and Accuracy in Judging Intelligence and Health. Journal of Nonverbal Behavior, Volume 28, Number 3, September 2004, pages 167-185.
[9] Caroline F. Keating. Gender and the Physiognomy of Dominance and Attractiveness. Social Psychology Quarterly, Vol. 48, No. 1 (Mar., 1985), pages 61-70.
[10] Ulrich Mueller, Allan Mazur. Facial Dominance of West Point Cadets as a Predictor of Later Military Rank. Social Forces, Vol. 74, No. 3 (Mar., 1996), pages 823-850.
[11] J. Liggett. The human face. New York: Stein and Day, 1974, page 276.
[12] Physiognomics, attributed to Aristotle. Cited in J. Wechsler (1982), A human comedy: Physiognomy and caricature in 19th century Paris (p. 15). Chicago: University of Chicago Press.
[13] A. Brandt. Face reading: The persistence of physiognomy. Psychology Today, December 1980, page 93.
[14] Sibylle Erle. Face to Face with Johann Caspar Lavater. Literature Compass 2 (2005) RO 131, pages 1-4.
[15] Stefan Boehringer, Tobias Vollmar, Christiane Tasse, Rolf P. Wurtz, Gabriele Gillessen-Kaesbach, Bernhard Horsthemke and Dagmar Wieczorek. Syndrome identification based on 2D analysis software. European Journal of Human Genetics (2006), pages 1-8.
[16] Hartmut S. Loos, Dagmar Wieczorek, Rolf P. Würtz, Christoph von der Malsburg and Bernhard Horsthemke. Computer-based recognition of dysmorphic faces. European Journal of Human Genetics (2003) 11, pages 555-560.
[17] Irvin L. Child, William H. Sheldon. The correlation between components of physique and scores on certain psychological tests. Journal of Personality, Vol. 10, Issue 1, September 1941, page 23.
[18] Richard Davidson, Ph.D., Vilas Professor of Psychology and Psychiatry. Can be found at https://psychiatry.wisc.edu/faculty/FacultyPages/Davidson.htm.
[19] The Validity of Graphology in Personnel Assessment. Psychological Testing Centre. Found at www.psychtesting.org.uk, November 1993, reviewed April 2002.
[20] Chockalingam Viswesvaran, Deniz S. Ones. Meta-Analyses of Fakability Estimates: Implications for Personality Measurement. Educational and Psychological Measurement, Vol. 59, No. 2, 1999, pages 197-210.
[21] Deniz S. Ones, Chockalingam Viswesvaran, Angelika D. Reiss. Role of Social Desirability in Personality Testing for Personnel Selection: The Red Herring. Journal of Applied Psychology, 1996, Vol. 81, No. 6, pages 660-679.
[22] Hall, Harold V.; Poirier, Joseph G.; Thompson, Jane S. Detecting deception in neuropsychological cases: toward an applied model. The Forensic Examiner, 9/22/2007.
[23] Allyson G. Harrison, Melanie J. Edwards and Kevin C. H. Parker. Identifying students faking ADHD: Preliminary findings and strategies for detection. Archives of Clinical Neuropsychology, Volume 22, Issue 5, June 2007, pages 577-588.
[24] Naomi Tickle. You Can Read a Face Like a Book: How Reading Faces Helps You Succeed in Business and Relationships. Daniels Publishing, 2003.
[25] Robert Todd Carroll. The Skeptic's Dictionary: A Collection of Strange Beliefs, Amusing Deceptions, and Dangerous Delusions. Wiley; 1st edition (August 15, 2003).
[26] J. Philippe Rushton, C. Davison Ankney. Brain size and cognitive ability: Correlations with age, sex, social class, and race. Psychonomic Bulletin & Review, 1996, 3 (1), pages 21-36.
[27] Michael A. McDaniel. Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, Volume 33, Issue 4, July-August 2005, pages 337-346.
[28] J. Philippe Rushton. Cranial size and IQ in Asian Americans from birth to age seven. Intelligence, Volume 25, Issue 1, 1997, pages 7-20.
[29] John C. Wickett, Philip A. Vernon, Donald H. Lee. Relationships between factors of
Abstract:
Object tracking is an important task in video processing because of its various applications, like visual surveillance, human activity monitoring and recognition, traffic flow management, etc. Multiple object detection and tracking in an outdoor environment is a challenging task because of the problems raised by poor lighting conditions, occlusion and clutter. This paper proposes a novel technique for detecting and tracking multiple humans in a video. A classifier is trained for object detection using Haar-like features from the training image set. The human objects are detected with the help of this trained detector and are tracked with the help of a particle filter. The experimental results show that the proposed technique can detect and track multiple humans in a video adequately fast in the presence of poor lighting conditions, clutter and partial occlusion, and that the technique can handle a varying number of human objects in the video at various points of time.

Keywords: Human detection, Automatic multiple object tracking, Haar-like features, Machine learning, Particle filter.

1. Introduction:
Detecting humans and analyzing their activities by vision is key for a machine to interact intelligently and effortlessly with a human-inhabited environment. The aim of visual surveillance is the real-time observation of targets such as human beings or vehicles in some environment, which leads to a description of the objects' activities within the environment or among themselves. Visual surveillance has been used for security monitoring, anomaly detection, intruder detection, traffic flow measuring, accident detection on highways, and routine maintenance in nuclear facilities [1,2,3,4]. Hu et al. [1] provide a good survey on visual surveillance and its various applications. Detecting and tracking humans in a video is a step to take in the process of analyzing and predicting their behavior and intention. It is good practice to detect objects in a video sequence before tracking them. The problem of multiple object tracking is more complex and challenging than single object tracking because of the issue of managing multiple tracks caused by newly appearing objects and the disappearance of already existing targets. Viola et al. [5] proposed an object detection framework and used it for the first time for detecting human faces in an image. Adaptive boosting can be used to speed up a binary classifier [6], and this can be used in machine learning for creating real-time detectors. After detecting an object or multiple objects in a video sequence captured by the surveillance camera, the very next step is to track these objects (human, vehicle, etc.) in the subsequent frames of the video stream. Particle filter based tracking techniques are gaining popularity because of their ease of implementation and their capability to represent a non-linear object tracking system and the non-Gaussian nature of noise. Various object tracking techniques based on particle filtering are found in the literature [7,8,9,10,11]. These approaches fall under two main categories: single object trackers and multiple object trackers. Lanvin et al. [7] propose an object detection and tracking technique and solve the non-linear state equations using particle filtering. Single object trackers suffer from the problem of false positives when severe
occlusions occur because of hidden first-order Markov hypotheses [8]. The problem of tracking multiple objects using particle filters can be solved in two ways: one is by creating a particle filter for each track, and the other is by having a single particle filter for all tracks. The second approach works fine as long as the objects under tracking are not occluded, but in case of occlusion, when objects come close by, these techniques fail to track the objects. In [8] a multi-object tracking technique using multiple particles has been proposed. Algorithms which attempt to find the target of interest without using segmentation have been proposed for single target tracking based on cues such as color, edges and textures [12].

Chen et al. [9] propose a color based particle filter for object tracking. Their technique is based on a Markov Chain Monte Carlo (MCMC) particle filter and the object color distribution. Sequential Monte Carlo techniques, also known as particle filtering and condensation algorithms, and their applications in the specific context of visual tracking, have been described at length in the literature [13,14,15]. In [10], object tracking and classification are performed simultaneously. Many of the particle filter based multiple object tracking schemes rely on hybrid sequential state estimation.

The particle filter developed in [16] has multiple models for the objects' motion, and comprises an additional discrete state component to denote which of the motion models is active. The Bayesian multiple-blob tracker [17] presents a multiple tracking system based on a statistical appearance model. The multiple blob tracking is managed by incorporating the number of objects present in the state vector, and the state vector is augmented as in [18] when a new object enters the scene. The joint task of object detection and tracking comes with a heavy computational overhead, but for visual surveillance applications speed is one of the most important factors. Many previously proposed techniques are slow, so they cannot be good for visual surveillance, and some fail to cope with dynamic outdoor environment conditions. This paper presents a novel technique based on machine learning and particle filtering to detect and track the humans in a video. The humans are first detected using the object detection framework proposed by Viola et al. [5] with Haar-like features and are then tracked using a simple particle filter in the subsequent video frames. Binary adaptive boosting [6] reduces the training time of the human detector. The early detection of the objects simplifies the process of tracking and sheds the load of detection from the tracker. The exhaustive dataset used for training the detector makes the system detect and track multiple objects in critical lighting conditions, dynamic background, partial occlusion and clutter.

The rest of the paper is organized as follows: section (2) discusses the proposed technique for human detection and tracking. In section (3) various experimental results using the proposed technique are given, which prove the validity and novelty of the method. Section (4) comprises the conclusions, and at last the references are given.

2. Methodology
Here we solve two sub-problems. One is to detect the humans in the video and the other is to track them in the subsequent video frames. The sub-problem of object detection is solved using a machine learning approach, for which we train our human detector using binary adaptive boosting.

Algorithm for multi-human detection and tracking:

Let Z be the input video to the algorithm.
1. In the first frame F_0 of Z, detect humans using the human detector. Let N be the number of detected humans.
2. Initialize trajectories T_j, 1 ≤ j ≤ N, with initial positions x_{j,0}.
3. Initialize the appearance model (color histogram) H_j for each trajectory from the region around x_{j,0}.
4. For each subsequent frame F_i of the input video:
   (a) For each existing trajectory T_j,
COPO4O1-2
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM (SPRTOS)” MARCH 26-27 2011
      i. Use the motion model to predict the distribution p(x(j,i) | x(j,i−1)) over locations for human j in frame i, creating a set of candidate particles x(j,i)(k), 1 ≤ k ≤ K.
      ii. Compute the color histogram q(k) and the likelihood p(z(k) | x(j,i)(k), qj) for each particle k using the appearance model.
      iii. Resample the particles according to their likelihood. Let k* be the index of the most likely particle.
      iv. Perform confirmation by classification: run the human detector on the location x(j,i)(k*). If the location is classified as a human, reset Cj to 0; else increase Cj by 1.
      v. If Cj is greater than a threshold, remove trajectory j.
   (b) Compute the distance d(j,k) between each newly detected human k and each existing trajectory Tj. When d(j,k) > τ for all j, where τ is a threshold (in pixels) less than the width of the tracking window, initialize a new trajectory for detection k.

2.1 Sample Creation and Human Detector Training

... speed up the process. We collected positive and negative image samples for training. The positive images are those comprising human beings, and the negative images do not contain any human being. We cropped the humans from the positive images and resized them to dimensions of 40*40. Our positive dataset consisted of 2,000 images, while the negative dataset consisted of 2,700 images. Fig. 1 shows some cropped human images from the positive samples used for human detector training. This human detector is nothing but a binary classifier having two classes: human and non-human. The next step after the sample collection is Haar-like feature extraction from these samples. We use the rectangular Haar-like features; these features have an intuitive similarity to the Haar wavelets. The Integral Image representation is used for fast feature evaluation (see fig. 3).

Fig. 1: Sample positive human images used for detector training.
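The Integral Image evaluation behind the rectangular Haar-like features can be sketched as follows (a minimal numpy sketch; the function names are illustrative, not from the paper):

```python
import numpy as np

def integral_image(img):
    """ii(x, y): sum of all pixels above and to the left of (x, y),
    computed in a single pass per axis with cumulative sums."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] from four table lookups:
    the constant-time evaluation behind rectangular Haar-like features."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
```

A Haar-like feature is then just a signed combination of a few `rect_sum` calls, so its cost is independent of the window size.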
The Integral Image ii(x, y) at location (x, y) contains the sum of all pixels above and to the left of (x, y) and can be computed in a single pass over the image using the following pair of equations:

s(x, y) = s(x, y − 1) + i(x, y)   (2.1.1)

ii(x, y) = ii(x − 1, y) + s(x, y)   (2.1.2)

where s(x, y) is the cumulative row sum and i(x, y) is the original image. We use the adaptive boosting technique [6] for training the human detector system. The parameters and their values used in the training set-up are given in Table 1.

Table 1: Human detector training parameters and their values.

Parameter       Value   Description
Npos            2,000   Number of positive images, consisting only of human beings
Nneg            2,700   Number of negative images, not containing any human
Nstages         20      Number of training stages
Minhitrate      0.991   Per-stage minimum hit rate (99.10%)
maxfalsealarm   0.5     Maximum false alarm rate per stage (50%)
Mode            All     Use upright and tilted features
Width*height    40*40   Size of training images
Boosttypes      DAB     Discrete Adaptive Boosting

For tracking we use a particle filter. We use an approach in which the uncertainty about a human's state (position) is represented as a set of weighted particles, each particle representing one possible state. The filter propagates particles from frame i−1 to frame i using a motion model, computes a weight for each propagated particle using an appearance model, then re-samples the particles according to their weights. The initial distribution for the filter is centered on the location of the object the first time it is detected. Here are the steps in more detail:

... based appearance model. After computing the likelihood of each particle, we treat the likelihoods as weights, normalizing them to sum to 1.

2.2.3 Resample: We resample the particles to avoid degenerate weights. Without re-sampling, over time the highest-weight particle would tend to a weight of one and the other weights would tend to zero. Re-sampling removes many of the low-weight particles and replicates the higher-weight particles. We thus obtain a new set of equally-weighted particles. We use the re-sampling technique described in [13].

2.3 Motion and Appearance Models

Our motion model is based on a second-order auto-regressive dynamical model. The autoregressive model assumes the next state x(t) of a system is a function of some number of previous states and a noise random variable ε(t):

x(t) = f(x(t−1), x(t−2), ..., x(t−p), ε(t))   (2.3.1)

We assume the simple second-order linear autoregressive model

x(j,i) = 2x(j,i−1) − x(j,i−2) + ε(i)   (2.3.2)

Our appearance model is based on color histograms. We compute a color histogram qj in HSV space for each newly detected human and save it to compute particle likelihoods in future frames. To compute a particle's likelihood we use the Bhattacharyya similarity coefficient between the model histogram qj and the observed histogram q(k) as follows, assuming n bins in each histogram:
p(z(k) | x(j,i), qj) = e^(−d(qj, q(k)))   (2.3.4)

and

d(qj, q(k)) = 1 − Σ(b=1..n) √(q(j,b) · q(k,b))   (2.3.5)

where q(j,b) and q(k,b) denote bin b of qj and q(k), respectively. A more sophisticated appearance model based on local histograms, along with other information such as spatial or structural information, would most likely improve our tracking performance, but we currently use a global histogram computed over the entire detection window because of its simplicity.
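Equations (2.3.4)-(2.3.5) can be sketched as follows (a minimal numpy sketch; the 8-bins-per-channel HSV binning and the OpenCV-style value ranges are illustrative assumptions, not from the paper):

```python
import numpy as np

def color_histogram(patch_hsv, bins=8):
    """Normalized global HSV histogram of a detection window."""
    h, _ = np.histogramdd(patch_hsv.reshape(-1, 3),
                          bins=(bins, bins, bins),
                          range=((0, 180), (0, 256), (0, 256)))
    h = h.ravel()
    return h / h.sum()

def likelihood(q_model, q_obs):
    """p = exp(-d) with d = 1 - sum_b sqrt(q_model[b] * q_obs[b]),
    the Bhattacharyya-based distance of (2.3.5)."""
    d = 1.0 - np.sum(np.sqrt(q_model * q_obs))
    return float(np.exp(-d))
```

Identical histograms give d = 0 and hence a likelihood of 1; the more the histograms differ, the smaller the particle's weight.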
3. Experimental Results

We have experimented with the proposed automatic human detection and tracking technique on a number of videos. The results with some of the representative videos are given here. The human detection and tracking starts automatically, without providing any initialization details, unlike many other tracking techniques in which operator intervention is required. Our proposed detector detects human beings irrespective of their body poses and locations in the video (fig. 4). The results given in fig. 5 show that the proposed technique performs fairly well in an outdoor environment in low lighting conditions and is quite suitable for visual surveillance applications.

(Fig. 5 panels: tracking results at frames 100, 125, 150, 175, 200, 225, 350 and 375.)
[4] D. Koller, J. Weber, T. Huang, J. Malik, B. Rao, G. Ogasawara, and S. Russell, "Toward Robust Automatic Traffic Scene Analysis in Real-time," in Proceedings of the International Conference on Pattern Recognition, vol. 1, pp. 126-131, 1994.
[5] Paul Viola and Michael Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features," in Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, 2001.
[8] Jingling Wang, Yan Ma, Chuanzhen Li, Hui Wang, Jianbo Liu, "An Efficient Multi-Object Tracking Method using Multiple Particle Filters," in Proceedings of the World Congress on Computer Science and Information Engineering, pp. 568-572, 2009.
[10] Francois Bardet, Thierry Chateau, Datta Ramadasan, "Illumination Aware MCMC Particle Filter for Long-term Outdoor Multi-Object Simultaneous Tracking and Classification," in Proceedings of the IEEE 12th International Conference on Computer Vision, pp. 1623-1630, 2009.
[18] J. Czyz, B. Ristic, and B. Macq, "A Color-Based Particle Filter for Joint Detection and Tracking of Multiple Objects," in Proceedings of the ICASSP, 2005.
Abstract

An efficient method for image segmentation is proposed by incorporating the advantages of the normalized cut (Ncut) partitioning method. The proposed method pre-processes an image using the normalized cut algorithm to form segmented regions, which are then used to form the weight matrix W. Since the number of segmented region nodes is much smaller than the number of image pixels, the proposed algorithm handles images of varied dimensions with a significant reduction in computational complexity compared to the conventional Ncut method applied directly to image pixels. The experimental results also verify that the proposed algorithm shows improved performance compared to the Ncut algorithm. This paper presents different ways to approach image segmentation, explains an efficient implementation for each approach, and shows sample segmentation results. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We have applied this approach to segmenting static images and found the results to be very encouraging.
The eigenvector associated with the smallest eigenvalue will minimize

(y^T (D − W) y) / (y^T D y)   (4)

and is equivalent to Ncut(A, B). But if we assume it for now, we can observe a number of properties of λ and the associated eigenvectors.
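The minimization of (4) is the generalized eigenproblem (D − W)y = λDy, which can be solved through the equivalent symmetric form D^(−1/2)(D − W)D^(−1/2). A minimal numpy sketch on a toy 6-node weight matrix (the graph is illustrative, not from the paper):

```python
import numpy as np

# Toy symmetric weight matrix W: two triangles joined by one weak edge.
W = np.array([[0,    1, 1, 0.01, 0, 0],
              [1,    0, 1, 0,    0, 0],
              [1,    1, 0, 0,    0, 0],
              [0.01, 0, 0, 0,    1, 1],
              [0,    0, 0, 1,    0, 1],
              [0,    0, 0, 1,    1, 0]], dtype=float)
d = W.sum(axis=1)                       # degrees
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
# Symmetric form of the generalized system (D - W) y = lambda * D y:
L_sym = D_inv_sqrt @ (np.diag(d) - W) @ D_inv_sqrt
vals, vecs = np.linalg.eigh(L_sym)      # eigenvalues in ascending order
# Second-smallest eigenvector gives the Ncut bipartition indicator y.
y = D_inv_sqrt @ vecs[:, 1]
partition = y > 0
```

Thresholding the second-smallest eigenvector at zero recovers the two weakly connected triangles as the two segments.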
Fig. 1. (a) Original image. (b) The resultant image after applying the Ncut algorithm for edge computation. (c) The segmented results by directly applying the Ncut algorithm to the image pixels. (d) The results of the Ncut algorithm partitioning.

Fig. 2. (a) Original image. (b) The resultant image after applying the proposed algorithm for edge computation. (c) The segmented results by applying the proposed algorithm to the image pixels. (d) The results of the proposed algorithm partitioning.
1 2 3 4 5 6 7 8
1 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0
3 0 63.96937 63.04625 0 0 0 0 0
4 0 0 0 61.45696 0 0 0 0
5 0 0 0 0 56.69138 0 0 0
6 0 0 0 0 0 52.0375 53.44258 54.79843
7 0 0 0 0 0 0 0 0
8 0 0 0 0 0 0 0 0
Figure 3: Ethernet Delay (sec). Figure 4: Ethernet Throughput (bits/sec) for link variation.

As shown in Figure 3, the delay occurs due to the heavy traffic that passes through the network. The cause of the heavy traffic is the large number of users (30), as shown in Figure 1. The amount of traffic sent and received increases or decreases depending on the number of users and the amount of data accessed. In the first scenario, according to Table 4.1, the type of Ethernet link used is 10 Base T. The graphs in Figure 3 show the delay statistics (which are the maximum among the three types of links). The reason is the low data rate of the 10 Base T links as compared to the other two types of links listed in Table 1, i.e. 100 Base T and 1000 Base X. Another parameter used for performance analysis is:

Ethernet – Traffic Received (bits/sec): This statistic defines the throughput (bits/sec) of the data forwarded by the Ethernet layer to the higher layers in this node.

The Ethernet throughput depends on the amount of delay that occurs in the transmission of packets. Figure 3 indicates that the delay in the case of 100 Base T and 1000 Base X is less than in a network with 10 Base T. Hence the throughput achieved in the case of 100 Base T and 1000 Base X is much higher than that achieved in the case of 10 Base T, as shown in Figure 4. Moreover, the 100 Base T and 1000 Base X links provide more bandwidth than 10 Base T. Hence the traffic received through the Ethernet increases with the increase in bandwidth.

3.3 Performance Analysis using Load-Balancer

In another simulation for performance analysis, the number of stations in the network is varied as shown in Figure 5, in which there is only one server, a load-balancer, a firewall and other network objects as per the specifications of the required network.

Figure 5: Wired LAN Network Model using Load Balancer

The scenarios described in Table 2 have been modelled and simulated using the network design shown in Figure 5. The table shows the variations in the number of nodes:
Figure 8: A wired LAN network model with load-balanced multiple servers
Figure 10: Traffic Sent & Received (bytes/sec) using the No. of Connections policy
Figure 13: Traffic Sent & Received (bytes/sec) using different load balancing policies
In this paper, the performance analysis of the wired network configuration through simulation was started with the investigation of the network performance using various types of links. The impact of various network configurations on the network performance was analyzed using the network simulator OPNET. It has been found that the performance of wired networks is good if high-speed Ethernet links are used under heavy network loads. Moreover, the mechanism of load balancing also improves the performance by reducing the load and balancing it equally among multiple servers.
BRIA2003, MISR1991, MUHA2002), Load Balancing (JACQ2005, DANI2005, SAEE2005, SHU2009) and Modeling (RENE2002, JAVI2000, KENT2002). Tasks are allocated to the various processors of the distributed network in such a way that the overall processing cost of the network is minimized. As is well known, the tasks are more numerous than the processors of the network.

OBJECTIVE

In the Distributed Processing Environment (DPE), it is a common problem to allocate tasks where the number of tasks is more than the number of processors. The objective of the present research paper is to enhance the performance of distributed networks by the proper utilization of their processors as well as the proper allocation of tasks. In the present research paper the type of allocation of tasks to processors is static in nature. As the performance in this paper is measured in terms of processing cost, we have to minimize the processing cost to obtain the best performance of the processors. To overcome the problem we have designed an intelligent algorithm for task allocation.

TECHNIQUE

In order to evaluate the overall optimal processing cost of a distributed network, we have chosen the problem with a set P = {p1, p2, p3, ..., pn} of n processors and a set T = {t1, t2, t3, ..., tm} of m tasks, where m > n. The processing cost of each task on each and every processor is known, and it is mentioned in the Processing Cost Matrix of order m x n. After making a matrix of the same order taken in ascending order of its sums of rows and sums of columns, we apply the algorithm of the assignment problem on it. For each processor we evaluate the overall allocation of each task, and the allocation of the task to the processor which has the minimum processing cost. Finally we compute the total processing cost by adding the total processing costs of the tasks which are assigned to the specified processor.

ALGORITHM

Start Algorithm
    Read the number of tasks in m
    Read the number of processors in n
    For i = 1 to m
        For j = 1 to n
            Read the value of processing cost (c) into the Processor Cost Matrix, namely PCM(,)
            j = j + 1
        Endfor
        i = i + 1
    Endfor
    Calculate the sum of each row and column and store the results in the Modified Processor Task Matrix MPTM(,)
    By arranging the MPTM(,) in ascending order of their row_sum and column_sum, we get the Arranged Processor Task Matrix APTM(,)
    i = 1
    While all tasks != SELECTED
        Select the biggest possible square matrix from the left upper corner and store it into SMi(,)
        Apply the algorithm of the Assignment Problem [KANT2002] on SMi(,)
        i = i + 1
    Endwhile
    Club the processor-wise overall optimal processing cost
    State the results
End Algorithm
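The inner step, solving the assignment problem on each square submatrix, can be sketched with a brute-force solver (illustrative only: [KANT2002] gives the actual method, and only the t1 and t2 rows below come from the paper's PCM; the costs for t3 and t4 are made up):

```python
from itertools import permutations

def solve_assignment(cost):
    """Minimum-cost one-to-one assignment of n tasks to n processors
    by exhaustive search over permutations (fine for small n)."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[task][proc] for task, proc in enumerate(perm))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# 4 tasks x 4 processors, in the style of the paper's PCM(,).
pcm = [[600, 200, 900, 300],   # t1 (from the paper)
       [500, 300, 200, 100],   # t2 (from the paper)
       [300, 700, 200, 400],   # t3 (illustrative)
       [400, 100, 600, 800]]   # t4 (illustrative)
perm, cost = solve_assignment(pcm)
```

Each task is mapped to a distinct processor; the returned permutation minimizes the summed processing cost over the square submatrix.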
t6, t7, t8, t9, t10}. It is shown in figure 1. The processing cost (c) of each task on each and every processor is known, and it is mentioned in the Processor Cost Matrix PCM(,) of order 10 x 4.

(Figure 1: the distributed network of processors p1-p4 executing tasks t1-t10.)

        p1    p2    p3    p4    Row_Sum
t1     600   200   900   300    2000
t2     500   300   200   100    1100
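The Row_Sum column above can be reproduced directly (only the t1 and t2 rows are given in the paper, so the sketch uses just those):

```python
# Processor Cost Matrix rows from the paper (tasks t1, t2 on processors p1-p4).
pcm = {"t1": [600, 200, 900, 300],
       "t2": [500, 300, 200, 100]}
row_sums = {t: sum(costs) for t, costs in pcm.items()}
# Ascending arrangement by row sum, as used to build APTM(,).
arranged = sorted(pcm, key=row_sums.get)
```

Sorting the tasks by ascending row sum is the first step toward the Arranged Processor Task Matrix described in the TECHNIQUE section.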
... effectiveness of the pseudo code. It is the common requirement for any allocation ... time required by an algorithm to run to ... mentioned algorithm is O(mn). By taking several input examples, the above algorithm returns the following results, as in table 2.
(Figures: plots of time complexity against example number, comparing algorithm SAGA1991 with the present algorithm for No. of Processors = 3 and No. of Processors = 5.)

Table 3: Comparison Table

n   m   Time complexity of algorithm (SAGA1991)   Time complexity of present algorithm
3   5    75                                        15
3   6   108                                        18
3   7   147                                        21
3   8   192                                        24
4   5   100                                        20
4   6   144                                        24
4   7   196                                        28
4   8   256                                        32
4   9   324                                        36
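The growth pattern in Table 3 can be checked directly (the closed forms n·m² and n·m are read off from the tabulated values; the paper itself only states that the present algorithm is O(mn)):

```python
# (n processors, m tasks, SAGA1991 time complexity, present-algorithm time complexity)
rows = [(3, 5, 75, 15), (3, 6, 108, 18), (3, 7, 147, 21), (3, 8, 192, 24),
        (4, 5, 100, 20), (4, 6, 144, 24), (4, 7, 196, 28), (4, 8, 256, 32),
        (4, 9, 324, 36)]
for n, m, saga, present in rows:
    assert saga == n * m * m      # SAGA1991 values follow n * m^2
    assert present == n * m       # present-algorithm values follow n * m
```

Every row of Table 3 satisfies both closed forms, which is consistent with the O(mn) claim for the present algorithm.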
J(i, j, k, l) = J(i, j, k, l) + Jcc(θi(j, k, l), P(j, k, l))   (2)

J(θi(j+1, k, l)) < J(θi(j, k, l))
This results in a step of size C(i) in the direction of the tumble for bacterium i.

[f] Compute J(i, j+1, k, l), and let J(i, j, k, l) = J(i, j, k, l) + Jcc(θi(j, k, l), P(j, k, l)).
[g] Swim:
   i) Let m = 0 (counter for swim length).
   ii) While m < Ns (if have not climbed down too long):
      • Let m = m + 1.
      • If J(i, j+1, k, l) < Jlast (if doing better), let Jlast = J(i, j+1, k, l), let θi(j+1, k, l) = θi(j+1, k, l) + C(i)·Δ(i)/√(ΔT(i)Δ(i)), and use this θi(j+1, k, l) to compute the new J(i, j+1, k, l) as we did in [f].
      • Else, let m = Ns. This is the end of the while statement.
[h] Go to the next bacterium (i+1) if i ≠ N (i.e., go to [b] to process the next bacterium).
5. If j < Nc, go to Step 3. In this case, continue chemotaxis, since the life of the bacteria is not over.
6. Reproduction:
   [a] For the given k and l, and for each i = 1, 2, ..., N, let Jhealth(i) = Σ(j=1..Nc+1) J(i, j, k, l) be the health of bacterium i (a measure of how many nutrients it got over its lifetime and how successful it was at avoiding noxious substances). Sort the bacteria and the chemotactic parameters C(i) in order of ascending health cost Jhealth (a higher cost means lower health).
   [b] The Sr bacteria with the highest Jhealth values die, and the remaining Sr bacteria with the best values split (this process is performed by placing the copies that are made at the same location as their parent).
7. If k < Nre, go to Step 3. In this case, we have not reached the number of specified reproduction steps, so we start the next generation of the chemotactic loop.
8. Elimination-dispersal: For i = 1, 2, ..., N, with probability Ped, eliminate and disperse each bacterium; this results in keeping the number of bacteria in the population constant. To do this, if a bacterium is eliminated, simply disperse one to a random location on the optimization domain. If l < Ned, then go to Step 2; otherwise end.

2.3. ADVANCEMENTS IN BFO AND ITS APPLICATION AND RESEARCH AREAS:

Vast applications have been found where BFO has shown remarkable results and has been modified for different problems according to the objective function. Initial applications of evolutionary algorithms were meant for static optimization problems, but in recent years, with the emergence of another member of the EA family [5], the bacterial foraging algorithm (BFA), the self-adaptability of individuals in group searching activities has attracted a great deal of interest, including for dynamic problems. W. J. Tang and Q. H. Wu have contributed their work by proposing DBFA, which is especially designed to deal with dynamic optimization problems, combining the advantage of both the local search in BFA and a new selection scheme for generating diversity. They used the moving peaks benchmark (MPB) [6] as the test bed for experiments. The performance of the DBFA is evaluated in two ways. The first is concerned with the convergence of the algorithm under random periodical changes in an environment, which are divided into three ranges from a low probability of changes to a higher one. The second is testing a set of combinations of the algorithm parameters which are largely related to the accuracy and stability of the algorithm. All results are compared with the existing BFA [1], and show the effectiveness of DBFA for solving dynamic optimization problems. It is worth mentioning that the diversity of DBFA changes after each chemotactic process, rather than the dispersion adopted by the BFA after several generations. The DBFA utilizes not only the local search but also applies a flexible selection scheme to maintain a suitable diversity during the whole evolutionary process. It outperforms BFA in almost all dynamic environments. The results are shown in [5]. They have further given a solution for global optimization in [7].

The novel BSA has been proposed for global optimization. In this algorithm, the adaptive tumble and run operators have been developed and incorporated, based on an understanding of the details of the bacterial chemotactic process. The operators involve two parts: the first is concerned with the selection of the tumble and run actions, based on their probabilities, which are updated during the searching process; the second is related to the length of the run steps, which is made adaptive and independent of any knowledge of the optimization problems. These two parts are utilized to balance the global and local searching capabilities of BSA. Beyond the tumble and run operators, attraction and mutation operations have also been developed. A. Abraham, A. Biswas, S. Dasgupta and S. Das have shown [8] that the major driving force of the Bacterial Foraging Optimization Algorithm (BFOA) is the reproduction phenomenon of the virtual bacteria, each of which models one trial solution of the optimization problem.

BFO and PSO have been used in combination, and their combined performance has been utilised to incorporate the merits [9] of the two bio-inspired algorithms to improve the convergence for high-dimensional function optimization. It is assumed
that the bacteria have a similar ability, like birds, to follow the best bacterium (the bacterium with the best position in the previous chemotactic process) in the optimization domain. The position of each bacterium after every move (tumble or run) is updated according to (3):

θi(j+1, k, l) = θi(j+1, k, l) + Ccc·(θb(j, k, l) − θi(j, k, l)),  if Ji(j+1, k, l) > Jmin(j, k, l)   (3)

where θb(j, k, l) and Jmin(j, k, l) are the position and fitness value of the best bacterium in the previous chemotactic process, respectively, and Ccc is a new parameter, called the attraction factor, used to adjust the bacterial trajectory according to the location of the best bacterium.

Particle swarm optimization is a high-performance optimizer that is very easy to understand and implement. It is similar in some ways to genetic algorithms or evolutionary algorithms, but requires less computational bookkeeping and generally only a few lines of code [10]. Particle swarm optimization originated in studies of synchronous bird flocking and fish schooling, when the investigators realized that their simulation algorithms possessed an optimizing characteristic [11]-[13]. As the particles traverse the problem hyperspace, each particle remembers its own personal best position that it has ever found, called its local best. Each particle also knows the best position found by any particle in the swarm, called the global best. Overshoot and undershoot combined with stochastic adjustment explore regions throughout the problem hyperspace, eventually settling down near a good solution. This process can be visualized as a dynamical system, although the behaviour is extraordinarily complex even when only a single particle is considered with extremely simplified update rules. This new optimization technique has much promise, and electromagnetic researchers are just beginning to explore its capabilities.

3.1. CLASSICAL ALGORITHM:

The particle swarm concept originated as a simulation of a simplified social system. The original intent was to graphically simulate the graceful but unpredictable choreography of a bird flock. Initial simulations were modified to incorporate nearest-neighbour velocity matching, eliminate ancillary variables, and incorporate multidimensional search and acceleration by distance (Kennedy and Eberhart 1995, Eberhart and Kennedy 1995). At some point in the evolution of the algorithm, it was realized that the conceptual model was, in fact, an optimizer. Through a process of trial and error, a number of parameters extraneous to optimization were eliminated from the algorithm, resulting in the very simple original implementation (Eberhart, Simpson and Dobbins 1996).

PSO emulates the swarm behaviour, and the individuals represent points in the D-dimensional search space. A particle represents a potential solution. The velocity Vid and position Xid of the dth dimension of the ith particle are updated as follows in (4) & (5):

Vid ← Vid + C1·rand1id·(pbestid − Xid) + C2·rand2id·(gbestd − Xid)   (4)

Xid ← Xid + Vid   (5)

where Xi = (Xi1, Xi2, ..., XiD) is the position of the ith particle, Vi = (Vi1, Vi2, ..., ViD) represents the velocity of particle i, pbesti = (pbesti1, pbesti2, ..., pbestiD) is the best previous position yielding the best fitness value for the ith particle, and gbest = (gbest1, gbest2, ..., gbestD) is the best position discovered by the whole population [14]. C1 and C2 are the acceleration constants reflecting the weighting of the stochastic acceleration terms that pull each particle toward the pbest and gbest positions, respectively. rand1id and rand2id are two random numbers in the range [0, 1].

3.2. PSEUDOCODE:

1: Generate the initial swarm by randomly generating the position and velocity of each particle;
2: Evaluate the fitness of each particle;
3: repeat
4:    for each particle i do
5:       Update particle i according to (4) and (5);
6:       if f(xi) < f(xpbesti) then
7:          xpbesti := xi;
8:          if f(xi) < f(xgbest) then
9:             xgbest := xi
10:         end if
11:      end if
12:   end for
13: until the stop criterion is satisfied

3.3. ADVANCEMENTS IN PSO AND ITS APPLICATION AND RESEARCH AREAS:

APPSO (Agent based parallel PSO) is based on two types of agents: one coordination agent and several swarm agents. The swarm is composed of various
The swarm is composed of various sub-swarms, one for each swarm agent. The coordination agent has administrative and managing duties; all the calculations are done by the swarm agents (see Figure 2). In order to benefit from the large body of knowledge and insight achieved in the research field of sequential PSO, it is important to modify the swarm's behavior as little as possible. The inevitable changes to the algorithm due to the parallelization should also lead to positive effects.

Particle swarm optimization has been used both for approaches applicable across a wide range of applications and for approaches focused on one specific requirement. In this brief section we cannot describe all of particle swarm's applications, or describe any single application in detail; rather, we summarize a small sample. Generally speaking, particle swarm optimization, like the other evolutionary computation algorithms, can be applied to solve most optimization problems and problems that can be converted to optimization problems. Among the application areas with the most potential are system design, multi-objective optimization, classification, pattern recognition, biological system modelling, scheduling (planning), signal processing, games, robotic applications, decision making, simulation and identification. Examples include fuzzy controller design, job shop scheduling, real-time robot path planning, image segmentation, EEG signal simulation, speaker verification, time-frequency analysis, modelling of the spread of antibiotic resistance, burn diagnosis, gesture recognition and automatic target detection, to name a few.
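As a hedged sketch of the classical loop listed above: the update rules (1) and (2) are not reproduced in this excerpt, so the standard inertia-weight velocity and position updates are assumed, and all parameter values below are illustrative, not taken from the paper.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimize f over [lo, hi]^dim with a basic global-best PSO."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]              # best position seen by each particle
    pbest_val = [f(x) for x in X]          # line 2: evaluate fitness
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):                 # lines 3-13: repeat until stop criterion
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # assumed standard inertia-weight updates in place of (1) and (2)
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            val = f(X[i])
            if val < pbest_val[i]:         # lines 6-7 of the listing
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:        # lines 8-9 of the listing
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val
```

On a simple sphere function, `pso(lambda x: sum(xi * xi for xi in x), dim=2)` settles near the origin, mirroring the "settling down near a good solution" behaviour described above.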
4. FIREFLY OPTIMIZATION
BIOMETRIC AUTHENTICATION USING IMAGE PROCESSING TOOLS
of the decomposed image. These three detail images are superimposed in [8] to yield a composite image. It may be noted that as the decomposition level increases, the size of the detail images decreases. The results of authentication correspond to the composite image.

From each detail image, an energy feature is calculated as [1]:

E_i^d = Σ_{x=1}^{M} Σ_{y=1}^{N} (S_i(x, y))^2,   i = 1, 2, ..., 5        (1)

where i is the decomposition level, and Hi, Vi and Di are the detail images in the horizontal, vertical and diagonal directions respectively. Other features are under investigation. In [8] all the detail images are superimposed and then the energy is calculated.

The ROI is divided into non-overlapping windows and features are calculated from these windows; the size of the window is varied and the recognition score obtained. Given two data sets of features corresponding to the training and testing samples, a matching algorithm determines the degree of similarity between them. A Euclidean distance is adopted as a measure of dissimilarity for palmprint matching using both wavelet and fuzzy features.

Fig. 3: Two-dimensional one-level DWT decomposition

4. Results and implementation

In [8] a 100% recognition score is obtained using the fuzzy feature; a simple Euclidean distance measure is used to find the recognition rate. The database used in [8] was created in the biometrics lab of IIT Delhi. The wavelet feature of [8] is applied to the PolyU database [15] with 50 users and 5 images of each user (250 images in total); 4 images are taken for training data and 1 image for testing data. A recognition score of 82% is obtained, with a Euclidean distance measure as the classifier. The ROC plot is shown in Fig. 5.
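As a minimal sketch of Eq. (1) and the window-based matching it feeds: the window size below (32) and the function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def energy(detail):
    # Eq. (1): E_i = sum over x = 1..M, y = 1..N of (S_i(x, y))^2
    d = np.asarray(detail, dtype=float)
    return float(np.sum(d ** 2))

def window_energies(roi, win=32):
    # Energy over non-overlapping win x win windows of the ROI.
    # The paper varies the window size; 32 here is only a placeholder.
    h, w = roi.shape
    return [energy(roi[r:r + win, c:c + win])
            for r in range(0, h - win + 1, win)
            for c in range(0, w - win + 1, win)]

def euclidean_distance(f1, f2):
    # Dissimilarity measure used for the palmprint matching.
    a, b = np.asarray(f1, dtype=float), np.asarray(f2, dtype=float)
    return float(np.sqrt(np.sum((a - b) ** 2)))
```

A test feature vector is then matched to the training vector with the smallest Euclidean distance.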
ABSTRACT

Image quality and utility become crucial issues for engineers, scientists, doctors, patients, insurance companies and lawyers whenever there are changes in the technology by which medical images are acquired. Examples of such changes include analog-to-digital conversion, lossy compression for transmission and storage, image enhancement, and computer-aided methods for the diagnosis of disease in medical images. Editing an image so that it is more suitable for a specific application than the original is termed image enhancement. An image is defined as a two-dimensional function f(x, y), where x and y are spatial coordinates, that bears information and can be generated in any form, such as visual or X-ray. X-rays are the oldest source of electromagnetic radiation used for medical imaging. Medical image enhancement methods are used, like all other methods and algorithms in image processing, as a chain of subsequent edits aimed at achieving a suitable result. Improving one function in the chain is only useful if the end result is really improved, and that does not depend solely on that particular function; it also depends on the quality of the initial image. In this paper we compare different types of image enhancement techniques for medical images in the spatial domain and also present a statistical analysis.
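As a minimal sketch of two of the spatial-domain point operations compared in this paper, assuming gray values normalized to [0, 1]; c and gamma are the free parameters discussed in the results.

```python
import numpy as np

def log_transform(img, c=1.0):
    # s = c * log(1 + r); compresses the dynamic range of bright regions
    return c * np.log1p(img)

def power_law(img, c=1.0, gamma=1.0):
    # s = c * r ** gamma; gamma < 1 brightens, gamma > 1 darkens
    return c * np.power(img, gamma)

def stats_of(img):
    # Mean and standard deviation: the two statistical measures
    # used here to compare the enhancement methods.
    return float(np.mean(img)), float(np.std(img))
```

Both transforms are applied pixel-wise; the resulting mean and standard deviation are the statistical measures used for comparison.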
images are not affected as much as with the global histogram method. If the window size increases, a proportionately larger region is chosen out of the entire image, causing the mean values to decrease. If the value of the constant k increases, a more uniform distribution is seen in the histogram, indicating that more equalization is achieved. It is observed that in the log transformation method, for different values of c, if the gray value increases then the statistical values (mean, standard deviation) of the enhanced image also increase, which in turn increases the image brightness. Similarly, in power law transformation using fixed values of gamma and different values of c, if the gray value is increased, the statistical values also increase proportionately. In the power law transformation with gamma correction approach, if the value of gamma increases, the mean of the enhanced image decreases while the standard deviation increases; as a result the brightness of the enhanced image is decreased.

Similar observations have been made for the piecewise contrast stretching method, i.e. the enhanced image is dark for gamma = 0.1, and contrast increases as gamma increases from 0.2 onwards.

Tables 1, 2 and 3 show the comparison of the different methods of radiographic image enhancement.

Because of the flexibility of changing the window size and the equalization factor k, a chosen region can be enhanced to the extent required in the local histogram equalization method; this method is therefore also sometimes referred to as an adaptive method. Among the present enhancement approaches, power law transformation using gamma correction performs much better than all other methods addressed. This conclusion is based on the change in standard deviation and mean value of the histogram of the enhanced image.

Figures 4, 5 and 6 show plots of the mean and standard deviation values of the four methods for the Leg1, Hand1 and Chest1 images. In the bar graphs the global histogram method is not shown.

Conclusion

In this paper we have compared different techniques of image enhancement in the spatial domain. They have been evaluated in terms of standard deviation and mean values as statistical measures. As a medical image has only a finite number of gray levels, an ideal equalization is not possible. Image enhancement techniques can be improved if the enhancement criteria are application dependent and can be specified precisely.
(Figures: original and log-transformed versions of the Chest, Hand and Leg images.)
conventional prescription until today. We have described a relatively simple application of hybrid microelectronics in this novel field.

The purpose of this project is to develop a myoelectrically controlled hand with individual fingers, a multi-DOF adaptive grasping technique and fast response, based on a commercial hand. The amputee should also be able to feel the touch, temperature and force applied by the prosthetic hand, through the joint between the prosthetic and natural hand.

LITERATURE SURVEY

In the field of prosthetic arms, a great deal of research is in progress. Most prosthetic hands are simple grippers with one or two degrees of freedom (DOF). They use smart hooks (passive fingers and thumb), as in the Otto Bock hands [1]; these have only 2 or 3 points of contact, so more force is required for grasping. They are capable of gripping, as in the MyoHand VariPlus Speed [2], but not of finer finger tasks such as opening a door or turning a car key. Kenzo Akazawa [3] developed a hand using the dynamic behaviour of antagonist muscles (flexor and extensor), but it is similar to [1]; its response is also slow, and a long training session is required for the amputee. For fast processing, Isamu [4] presented evolvable hardware chips with a genetic algorithm. This takes less time to train the amputee, but it restricts the DOF of the hand; also, a dedicated chip is required for the prosthetic hand, which reduces the versatility of using high-speed microcontrollers. A microprocessor- and high-torque-motor-based hand was made by Ryuhei [5]; however, it uses only two surface EMG signals, so fewer DOF are obtained and more training is required of the amputee. Another main drawback of the above approaches is that they treat the EMG signal as stationary, whereas for multiple DOF it must be processed as a non-stationary signal. Real-time EMG pattern recognition [6] is used to improve hand functioning, but the signal information is still not sufficient, so the fingers cannot be controlled separately. For this purpose, surface-recorded intramuscular EMG (SRI EMG) [7] signals can be used; they are more accurate than surface EMG signals. A prosthetic hand with individual finger movement will be more similar to the natural hand. Prosthetic hands approximately similar to the natural hand have been made by Bryan Christie [8] and Dean Kamen [9] with the help of DARPA, but they are connected to the nervous system, so they are not easy to use and are still experimental. Chappell [10] presented an approach in which an artificial hand with sensors allows the inclusion of automatic control loops, freeing the user from the cognitive burden of object holding, similar to the natural low-level spinal loops that automatically compensate for object movement. Force, object slip and finger positions are variables that need to be measured in a hand designed for the physically impaired person. This shows that high-specification sensors are required for designing an arm, and that it must be designed separately for each amputee; however, the latest technology provides adaptive signal processing that helps adapt to the amputee's requirements. An electrically driven locking mechanism has been built by Law and Hewson [11], controlled by the electromyogram (EMG) of the surviving muscles in the upper arm; hybrid technology is used for the construction of the associated electronic circuitry. Many similar applications are now being considered in an attempt to improve the performance of upper-limb prostheses using the latest research. Development, testing and experimentation of a device for hand rehabilitation was done by Mulas et al
[12]. The system is intended for people who have partially lost the ability to control the hand musculature correctly, for example after a stroke or a spinal cord injury. Based on EMG signals the system can "understand" the subject's volition to move the hand, and actuators can assist the finger movements in order to perform the task. The paper describes the device and discusses the first results obtained with a healthy volunteer. It requires a number of actuators to increase the DOF of the prosthetic arm, and the EMG processing performed is not sufficient to provide significant performance. Massa et al [13] designed a hand to augment the dexterity of traditional prosthetic hands while maintaining approximately the same dimensions and weight. This approach aims at providing enhanced grasping capabilities and natural sensory-motor coordination to the amputee by integrating miniature mechanisms, sensors, actuators and embedded control. A biomechatronic hand prototype with three fingers and a total of six independent DOFs has been designed and fabricated. The work focuses on the actuator system, which is based on miniature electromagnetic motors. However, it still does not use better EMG processing technologies, which could dramatically increase the performance of the hand, and the grasping force is low because of the limited torque generated by the miniature actuators (which are among the best available on the market in that size range). An embedded control architecture for the action and perception of an anthropomorphic 16-degree-of-freedom, 4-degree-of-actuation prosthetic hand for use by transradial amputees has also been reported. That prosthetic hand is provided with 40 structurally integrated sensors, useful both for automatic grasp control and for biofeedback delivery to the user through an appropriate interface (either neural or non-invasive). Cipriani et al [14] briefly describe the mechatronic design of the prosthesis and the set of sensors embedded in the hand, and finally focus on the design of the control architecture that allows action and perception for such a sophisticated device. It uses 8-bit microcontrollers, however, without exploiting the available signal processing techniques.

Herrera et al [15] designed and constructed a prosthesis intended to be strong and reliable while still offering control of the force exerted by the artificial hand. The design had to account for mechanical and electrical reliability and for size. These goals were targeted by using EMG in the electrical control system and a linear-motion approach in the mechanical system. The prosthetic gripper uses EMG to detect the amputee's intended movement; the control system requires an adaptation mechanism for each amputee's characteristics. Gordon et al [16] used proportional myoelectric control of a one-dimensional virtual object to investigate differences in efferent control between the proximal and distal muscles of the upper limbs. Restricted movement was allowed while recording EMG signals from elbow or wrist flexors/extensors during isometric contractions. Subjects used this proportional EMG control to move the virtual object through two tracking tasks, one with a static target and one with a moving target (a sine wave). Eriksson et al [17] studied the feasibility of neural networks for categorizing patterns of EMG signals; the signals recorded by surface electrodes are sufficient to control the movements of a virtual prosthesis, and the presented method offers great potential for the development of future hand prostheses. A signal processing system based on RAM as a look-up table (LUT) has been presented by Torresen et al [18]. It provides a fast response besides being compact in size, and several algorithms for programming it have been proposed.
For the given data set used in their experiments, the time needed to program the RAM was approximately equal to the time needed to train a feed-forward neural network solving the same problem in the best possible way; the main advantage of the scheme, however, is its fast runtime speed. Ferguson [19] described the development of a system that allows complex grasp shapes to be identified from natural muscle movement. The application of this system can be extended to a general device controller whose input is obtained from the forearm muscles, measured using surface electrodes; it has the advantage of being less fatiguing than traditional input devices. V. Tawiwat et al. [20] applied a mouse-style roller to a gripper to improve material handling without slipping: an optimization principle is used, with the roller's rotation serving as a slip signal. If the roller rotates, the material is slipping, and the gripper tightens its hold until the roller no longer rotates. In an attempt to improve the functionality of a prosthetic hand, a new fingertip has been developed that incorporates sensors to measure temperature and grip force and to detect the onset of object slip from the hand. The sensors were implemented using thick-film printing technology and exploit the piezoresistive characteristics of commercially available screen-printing resistor pastes and the piezoelectric properties of proprietary lead-zirconate-titanate (PZT) formulated pastes. The force sensor exhibits a highly linear response to forces and is extremely stable with temperature. The ability of the piezoelectric PZT vibration sensor to detect small vibrations of the cantilever, indicative of object slip, has also been demonstrated [21]. An externally powered upper-extremity prosthesis has been considered as a system in which the components needed to design a better prosthetic arm are divided into four subsystems: input, effector, feedback and support. Current research is reviewed in terms of these subsystems; each performs its own task, but they are related to each other and together make up a prosthetic upper extremity that provides movement to the amputee [22]. Hands such as the All Electric Prosthetic Hand use a series of gears to transmit the motion of motors housed in the forearm to the relevant fingers [23]. Other designs have the actuators transmitting power directly to the joint; an example is the Anthroform Arm, which uses pneumatic 'muscles' mimicking the muscles of the human arm, connected directly to the 'bones' they move [24]. Shape memory alloy (SMA) wires are also used, both to provide the force and to transmit the motion; SMA wires contract when heated and return to their initial shape when cooled [25]. This method of actuation is used in the Shape Memory Alloy Activated Hand constructed by DeLaurentis et al [26].

CONCLUSIONS

A lot of work has been done on developing prosthetic arms, but more precise work is still possible. Grasping techniques can be enhanced so that amputees can work more effectively, and the response time of available arms is not yet sufficient. The available prosthetic arms are still not comparable with the natural hand in terms of DOF. The main limitations are the limited space available for motors and their low torque-to-size ratio; power consumption also grows with the number of motors. The lack of advanced signal processing techniques in artificial hands and their controllers has limited their functionality
and technological progress. Much can still be done to improve the available prosthetic arms. The use of hydraulics in place of motors can provide a large amount of controlled force in less space; with such control of the force applied by the hand on the object, amputees could handle soft or brittle items easily. By effective use of hydraulics, the DOF can also be increased without any increase in power consumption. The use of adaptive signal processing techniques can likewise improve the overall performance of the artificial hand.

REFERENCES

[1] Available at: http://www.ottobock.com.au/cps/rde/xchg/ob_au_en/hs.xsl/384.html

[2] Available at: http://www.ottobock.com.au/cps/rde/xchg/ob_au_en/hs.xsl/19932.html

[3] Kenzo Akazawa, Ryuhei Okuno and Masaki Yoshida, "Biomimetic EMG-prosthetic-hand", 18th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Amsterdam, pp. 535-536.

[4] Isamu Kajitani and Masahiro Murakawa, "An Evolvable Hardware Chip for Prosthetic Hand Controller", Microelectronics for Neural, Fuzzy and Bio-Inspired Systems, 1999 (MicroNeuro '99), Proceedings of the Seventh International Conference, pp. 179-186.

[5] Ryuhei Okuno, Masahiro Fujikawa, Masaki Yoshida and Kenzo Akazawa, "Biomimetic hand prosthesis with easily programmable microprocessor and high torque motor", Engineering in Medicine and Biology Society, 2003: Proceedings of the 25th Annual International Conference of the IEEE, vol. 2, pp. 1674-1677.

[6] Jun-Uk Chu, Inhyuk Moon and Mu-Seong Mun, "A Supervised Feature Projection for Real-Time Multifunction Myoelectric Hand Control", International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 282-290.

[7] Nikolay S. Stoykov, Madeleine M. Lowery, Charles J. Heckman, Allen Taflove and Todd A. Kuiken, "Recording Intramuscular EMG Signals Using Surface Electrodes", Proceedings of the 2005 IEEE 9th International Conference on Rehabilitation Robotics, June 28 - July 1, 2005, Chicago, IL, USA, pp. 291-294.

[8] Available at: http://spectrum.ieee.org/robotics/medical-robots/winner-the-revolution-will-be-prosthetized/2

[9] Available at: http://spectrum.ieee.org/biomedical/bionics/dean-kamens-luke-arm-prosthesis-readies-for-clinical-trials

[10] P. H. Chappell, "A fist full of sensors", Journal of Physics: Conference Series, vol. 15, 2005, pp. 7-12.

[11] H. T. Law and J. J. Hewson, "An Electromyographically Controlled Elbow Locking Mechanism for an Upper Limb Prosthesis", Electrocomponent Science and Technology, vol. 10, 1983, pp. 87-93.

[12] Marcello Mulas, Michele Folgheraiter and Giuseppina Gini, "An EMG-controlled Exoskeleton for Hand Rehabilitation", IEEE 9th International Conference on Rehabilitation Robotics, Chicago, IL, USA, June 28 - July 1, 2005, pp. 371-374.

[13] M. C. Carrozza, S. Micera, R. Lazzarini, M. Zecca and P. Dario,

http://www.cronos.rutgers.edu/~mavro/papers/act2000.pdf
1. Introduction
0 2 ∞ ∞ ∞ ∞ ∞ 3 ∞ ∞
2 0 4 ∞ ∞ ∞ ∞ 3 2 ∞
∞ 4 0 3 ∞ ∞ ∞ ∞ ∞ 5
∞ ∞ 3 0 5 2 ∞ ∞ ∞ ∞
∞ ∞ ∞ 5 0 4 ∞ ∞ ∞ ∞
∞ ∞ ∞ 2 4 0 3 ∞ ∞ ∞
∞ ∞ ∞ ∞ ∞ 3 0 3 ∞ 1
3 3 ∞ ∞ ∞ ∞ 3 0 ∞ ∞
∞ 2 ∞ 3 ∞ ∞ ∞ ∞ 0 5
∞ ∞ ∞ ∞ ∞ 5 ∞ 1 5 0

Figure 2: adjacency matrix (10×10)

The storage format of the above matrix can be reduced to a list array.

3. Complexity Analysis

For the list array, the space complexity is O(T), where T is the number of edges of the directed graph. In the worst case T = n², and the space complexity will be O(n²).

4. Searching Area

Since the searching area of the classical Dijkstra algorithm is large, there are several ways in which it can be reduced. The shortest path between two points is a straight line, so when planning a route on a real road network, the direction from the start point to the destination generally gives the strike of the shortest path. The shortest path between two points generally lies on either side of the line connecting the start point to the destination point, and is usually close to that line. If there is only one edge between the start point and the destination point, that edge itself is the shortest path.
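To make the O(T) versus O(n²) storage comparison concrete, here is a hedged sketch that assumes the reduced format is a plain adjacency list (the exact list layout used by the authors is not recoverable from this text); it keeps only the T existing edges of the 10-node graph of Figure 2.

```python
INF = float('inf')

# Adjacency matrix of Figure 2 (10 nodes); INF marks a missing edge.
A10 = [
    [0, 2, INF, INF, INF, INF, INF, 3, INF, INF],
    [2, 0, 4, INF, INF, INF, INF, 3, 2, INF],
    [INF, 4, 0, 3, INF, INF, INF, INF, INF, 5],
    [INF, INF, 3, 0, 5, 2, INF, INF, INF, INF],
    [INF, INF, INF, 5, 0, 4, INF, INF, INF, INF],
    [INF, INF, INF, 2, 4, 0, 3, INF, INF, INF],
    [INF, INF, INF, INF, INF, 3, 0, 3, INF, 1],
    [3, 3, INF, INF, INF, INF, 3, 0, INF, INF],
    [INF, 2, INF, 3, INF, INF, INF, INF, 0, 5],
    [INF, INF, INF, INF, INF, 5, INF, 1, 5, 0],
]

def to_adjacency_list(A):
    # Keep only the T real (node, weight) entries: space O(T), not O(n^2).
    return {u: [(v, w) for v, w in enumerate(row) if v != u and w != INF]
            for u, row in enumerate(A)}
```

For the matrix above, this stores 29 directed edge entries instead of the n² = 100 matrix cells.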
It can be seen that the ratio of time complexities when the searching area is an ellipse is smaller than when the searching area is rectangular; the elliptical searching area therefore gives the better result.

5. Feature Matrix [2]

Another way to improve the classical Dijkstra algorithm is the following. Finding the shortest path between two points requires a number of operations; if this number of operations can be reduced, the efficiency of the classical Dijkstra algorithm increases. For this purpose there is the concept of a feature matrix, from which we can draw the shortest-path tree and obtain the shortest path and its length from the source node to every other destination node. To understand the concept of the feature matrix, let us take the example of 6 nodes connected as shown in Figure 6.

Figure 6: node arrangement

The adjacency matrix for the example is given as A =

0 2 1 ∞ 7 12
2 0 ∞ 2 ∞ ∞
1 ∞ 0 1 2 ∞
∞ 2 1 0 3 ∞
7 ∞ 2 3 0 4
12 ∞ ∞ ∞ 4 0

The feature matrix is obtained by the following steps.

1. Source S = {v1}, D = {0, 2, 1, ∞, 7, 12}.
2. First find the shortest distance from the source, i.e. D[3] = 1, so S = {v1, v3}.
3. For the nodes connected to v1 via v3, the distances can be updated: D[3] + A[3][4] = 2 < D[4] = ∞ and D[3] + A[3][5] = 3 < D[5] = 7, so D[4] = 2, D[5] = 3 and the D matrix becomes D = {0, 2, 1, 2, 3, 12}.
4. Iterate this operation: the second time we get S = {v1, v3, v2, v4} and D = {0, 2, 1, 2, 3, 12}; the third time S = {v1, v3, v2, v4, v5} and D = {0, 2, 1, 2, 3, 7}; the fourth time S = {v1, v3, v2, v4, v5, v6} and D = {0, 2, 1, 2, 3, 7}.
5. Comparing the D and A matrices, we obtain the following matrix, called the feature matrix F:

0 0 0 0 0 0
2 0 0 0 0 0
1 0 0 0 0 0
0 0 1 0 0 0
0 0 2 0 0 0
0 0 0 0 4 0
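The iteration steps above can be cross-checked with a standard priority-queue Dijkstra implementation on the same 6-node matrix; this is a sketch, with the predecessor array playing the role the feature matrix plays in recovering the shortest-path tree.

```python
import heapq

INF = float('inf')

# Adjacency matrix A from the 6-node example (nodes v1..v6, 0-indexed here).
A = [
    [0, 2, 1, INF, 7, 12],
    [2, 0, INF, 2, INF, INF],
    [1, INF, 0, 1, 2, INF],
    [INF, 2, 1, 0, 3, INF],
    [7, INF, 2, 3, 0, 4],
    [12, INF, INF, INF, 4, 0],
]

def dijkstra(A, src):
    n = len(A)
    dist = [INF] * n
    prev = [None] * n          # predecessor links give the shortest-path tree
    dist[src] = 0
    pq = [(0, src)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        for v in range(n):
            if A[u][v] != INF and d + A[u][v] < dist[v]:
                dist[v] = d + A[u][v]
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, prev

dist, prev = dijkstra(A, 0)
# dist reproduces D = {0, 2, 1, 2, 3, 7} after the final iteration above
```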
Figure 7: shortest-path tree

From the shortest-path tree we can find the shortest path from the source to all the other nodes, and also the path lengths.

7. References

[1] Dong Kai Fan, Ping Shi, "Improvement of Dijkstra algorithm and its application in route planning", Shandong University of Technology, China, IEEE International Conference on FSKD, 2010, pp. 1901-1904.

[2] Ji-Xian Xiao, Fang-Ling Lu, "An improvement of the shortest path algorithm based on Dijkstra algorithm", IEEE ICCAE, 2010, pp. 383-385.
Recording times for the different movements differed.
III. METHODOLOGY
B. Data Processing

The EEG data was notch filtered to remove the 50 Hz component present in each channel due to the AC power supply of the EEG machine; harmonics of 50 Hz were also removed from the data. A second-order IIR notch filter with a quality factor (Q factor) of 3.55 was used to remove each unwanted frequency.
Fig1: Block diagram for feature extraction of wrist movement
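A hedged sketch of this filtering step using SciPy's second-order IIR notch design; the sampling rate is not stated in this excerpt, so 500 Hz is assumed here, and the harmonics removed (100 Hz, 150 Hz) follow the description above.

```python
import numpy as np
from scipy import signal

FS = 500.0            # sampling rate in Hz (assumed; not given in the text)
F0, Q = 50.0, 3.55    # notch frequency and quality factor from the paper

def remove_mains(eeg_channel, fs=FS, harmonics=(1, 2, 3)):
    # Apply a second-order IIR notch at 50 Hz and its harmonics
    # (100 Hz, 150 Hz) to one EEG channel, zero-phase via filtfilt.
    out = np.asarray(eeg_channel, dtype=float)
    for k in harmonics:
        b, a = signal.iirnotch(k * F0, Q, fs)
        out = signal.filtfilt(b, a, out)
    return out
```

The notch is deep at 50 Hz while leaving the EEG band below it essentially untouched.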
D. CLASSIFIER DESIGN

Motor movements are finally differentiated by applying different classifiers to the recorded data.

Fig. 3: Magnitude response of the notch filter designed at 50 Hz (0.39π radians per sample)

Table 1: Percentage classification values for the motor positions

From the observations in Table 1 we see that the quadratic classifier gives the best results, with high classification accuracy for both extension and pronation movements.
activity. Such histogram statistics can prove to be good features for classification of wrist movement.

Fig. 5: Histogram plot of frontal electrode (F3) for extension

Fig. 6: Histogram plot of frontal electrode (F3) for pronation

Table 2: Statistical feature values for the different positions

From the results tabulated in Table 2, extension has larger variance, mean, kurtosis and skewness values, while pronation has smaller values. From Table 1, we justify these findings by designing different classifiers and training them on the two movement classes to obtain the best possible accuracy from the 16-channel recorded sample values.
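A hedged sketch of extracting the four statistical features from one channel; the function name is illustrative, and scipy.stats supplies the skewness and (excess) kurtosis estimators.

```python
import numpy as np
from scipy import stats

def histogram_features(channel):
    # Mean, variance, skewness and kurtosis: the four statistics
    # used to demarcate extension from pronation.
    x = np.asarray(channel, dtype=float)
    return {
        "mean": float(np.mean(x)),
        "variance": float(np.var(x)),
        "skewness": float(stats.skew(x)),
        "kurtosis": float(stats.kurtosis(x)),   # excess kurtosis
    }
```

One such feature vector per channel is what the classifiers of Table 1 are trained on.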
V. CONCLUSION

EEG data class separation was investigated for two wrist movement classes, pronation and extension, using a 16-channel setup. Four features were extracted from the histogram plots of the signal for both movements and the neutral state. Variance, mean, skewness and kurtosis are the features which most easily demarcate the two movements. It was a three-class problem, since three data sets were used. Different classifiers were designed and trained on the recorded data sets, on the basis of which we classified between the motor movements.
VI. ACKNOWLEDGEMENT

The authors are indebted to the UGC. This work is part of the funded major research project C.F. No. 32-14/2006(SR).
Color Images Enhancement Based on Piecewise Linear Mapping

Anubhav Kumar 1, Awanish Kr Kaushik 1, R. L. Yadava 1, Divya Saxena 2

1 Department of Electronics & Communication Engineering, Galgotia's College of Engineering & Technology, Gr. Noida, India
2 Department of Mathematics, Vishveshwarya Institute of Engineering and Technology, G.B. Nagar, India

rajput.anubhav@gmail.com
Keywords - Color image enhancement, RGB color space, YCbCr color space, piecewise linear mapping, RFSIM.

I. INTRODUCTION

Image enhancement processes consist of a collection of techniques that seek to improve the visual appearance of an image, or to convert the image to a form better suited for analysis by a human or a machine. Nowadays there is a rapid increase in the application of color video media, which has resulted in a growing interest in color image enhancement techniques.

Other common techniques to enhance the contrast of images are histogram equalization and homomorphic methods. The advantage of the histogram equalization technique is that it works very well for grayscale images; however, when histogram equalization is used to enhance color images, it may cause a shift in the color scale, resulting in artifacts and an imbalance in image color. Homomorphic filtering is used to correct non-uniform illumination.

Similarly, Fairweather [8] has used techniques such as contrast stretching and Markov random fields, applying a bimodal histogram model to the images in order to enhance the underwater image. Yoav [9] has used a physics-based model, developing a scene recovery algorithm to clear underwater images/scenes through a polarizing filter. This approach addresses the issue of backscatter rather than blur.

In this paper, color image enhancement based on piecewise linear mapping is proposed. In Section II, an illustration of the theoretical foundations of the piecewise linear mapping function is presented. The proposed enhancement algorithm is developed in Section III. Experiments conducted using a variety of color images are described in Section IV and the results are discussed. A conclusion is drawn in Section V.

II. THEORETICAL FOUNDATIONS

Piecewise Linear Mapping Function

The enhancement is done by first transforming the intensity values using a piecewise linear mapping function for the intensity component. The piecewise linear mapping function consists of three line segments, as indicated in Figure 1, where vmin and vmax denote the minimum and maximum intensity levels in the original image, respectively. This type of mapping function permits proper allocation of the dynamic range to different ranges of intensity levels. The actual values of vlower and vupper determine the dynamic range allocation for the lower, intermediate and higher ranges of intensity levels. The intensity values of the original image are transformed to create the optimally enhanced color image in NTSC YCbCr space directly.
(1) [The three-segment piecewise linear mapping function; its terms are not legible in the source.]
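Equation (1) is not legible in this copy, but a three-segment mapping consistent with the description above (input breakpoints vlower and vupper, with the output breakpoints left as free parameters that control the dynamic range allocation) can be sketched as follows; the function and parameter names are assumptions for illustration, not the paper's exact formula:

```python
import numpy as np

def piecewise_linear_map(v, v_min, v_lower, v_upper, v_max,
                         out_lower, out_upper, out_min=0.0, out_max=255.0):
    """Map intensities through three line segments:

        [v_min, v_lower]   -> [out_min,   out_lower]
        [v_lower, v_upper] -> [out_lower, out_upper]
        [v_upper, v_max]   -> [out_upper, out_max]

    Moving out_lower/out_upper reallocates dynamic range between the
    low, middle and high intensity bands.
    """
    v = np.asarray(v, dtype=float)
    return np.interp(v, [v_min, v_lower, v_upper, v_max],
                        [out_min, out_lower, out_upper, out_max])

# Stretch the mid-band of an 8-bit intensity channel:
y = piecewise_linear_map(np.array([10.0, 60.0, 130.0, 200.0, 245.0]),
                         v_min=10, v_lower=60, v_upper=200, v_max=245,
                         out_lower=30, out_upper=230)
```

Here the middle segment is steeper than the outer two, so intermediate intensity levels receive the larger share of the output range.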
Figure-3: Lena image. (a) Original image (b) Enhanced by the Murtaza method [1] (c) Enhanced by the proposed method.

Figure-4: Watch image. (a) Original image (b) Enhanced by the Murtaza method [1] (c) Enhanced by the proposed method.

Figure-5: Play image. (a) Original image (b) Enhanced by the Murtaza method [1] (c) Enhanced by the proposed method.
Di is defined as

  Di = [ Σx Σy di(x,y)·M(x,y) ] / [ Σx Σy M(x,y) ]        (3)

The similarity between two feature maps fi (i = 1~5) and gi at the corresponding location (x, y) is defined as

  di(x,y) = [ 2·fi(x,y)·gi(x,y) + c ] / [ fi²(x,y) + gi²(x,y) + c ]        (4)

Then, we compute the RFSIM index between f and g as

  RFSIM = ∏(i=1..5) Di  [3]        (5)

Table-I: RFSIM (image quality assessment)

Images       | Murtaza et al. [1] | Proposed Method
Lena Image   | 0.3445             | 0.7261
Flower Image | 0.4183             | 0.8321
Watch Image  | 0.4604             | 0.8953
Play Image   | 0.4577             | 0.7614

V. CONCLUSION

VI. REFERENCES

[5] …, “enhancement and intensity preservation for gray-level images using multi objective particle swarm optimization,” IEEE Trans. on Automation Science and Engineering, vol. 6, no. 1, pp. 145–155, 2009.
[6] S. K. Naik and C. A. Murthy, “Hue-preserving color image enhancement without gamut problem,” IEEE Trans. on Image Processing, vol. 12, no. 12, pp. 1591–1598, 2003.
[7] Q. Chen, X. Xu, Q. Sun, and D. Xia, “A solution to the deficiencies of image enhancement,” Signal Processing, vol. 90, pp. 44–56, 2010.
[8] A. J. R. Fairweather, M. A. Hodgetts, and A. R. Greig, “Robust scene interpretation of underwater image sequences,” in 6th International Conference on Image Processing and its Applications, 1997, pp. 660–664, ISBN: 0 85296 692 X.
[9] Y. Schechner and N. Karpel, “Clear Underwater Vision,” Proceedings of the IEEE CVPR, vol. 1, 2004, pp. 536–543.
[10] M. Isa, M. Y. Mashor, and N. H. Othman, “Contrast Enhancement Image Processing on Segmented Pap Smear Cytology Images,” Proc. of Int. Conf. on Robotics, Vision, Information and Signal Processing, pp. 118–125, 2003.
combined to yield a strong learner with enhanced predictive capability. The ensemble learning methods produce a fused decision. In principle the performance of any classifier can be enhanced; however, usually the risk of overfitting prompts the use of an ensemble of weak learners [5]-[7]. There is no limit on the number of learners in the ensemble. The performance of an ensemble learning method can be optimized by proper selection of the ensemble size and the fusion method to suit any specific application [5], [8].

Most popular among the many ensemble learning algorithms is the AdaBoost (a nickname for adaptive boosting) algorithm described by Freund and Schapire [9]. A summary of the AdaBoost algorithm is given in Section II. The objective of the present study is to show that the predictive ability of the AdaBoost algorithm can be further boosted by varied representation of the same raw data, by employing several preprocessing and feature extraction methods. Different preprocessing and feature extraction procedures reveal hidden data structure from different perspectives, and yield alternate sets of features to represent the same example. Combining these sets in some way can, in principle, provide a more reliable and accurate representation. Motivated by this idea, we report here a study on enhancing the performance of the AdaBoost algorithm by using a simple model of feature fusion based on two common preprocessors combined with one feature extractor. The procedure is described in Section III. Using a linear threshold classifier for the AdaBoost ensemble generation, Section IV presents validation results based on some benchmark data sets available from open sources. The paper concludes with a discussion in Section V.

II. THE ADABOOST ALGORITHM

AdaBoost [9] is a supervised boosting algorithm that produces a strong classifier by combining several weak classifiers from a family. It needs a set of training examples and a base learning algorithm as input. Let X = {x1, x2, ..., xN} be the set of N training vectors drawn from various target classes, and let Y = {y1, y2, ..., yN} denote their class labels. The class identities are given the numeric representation of +1 and −1; that is, yi ∈ {−1, +1} for all xi ∈ X, for i = 1, 2, ..., N. The base learner is chosen such that it produces more than 50% correct classification on the training set. Let the base learner be denoted by ht(x). The basic steps are as follows.

Step 1. Define the input. It consists of the N training examples, a base learning algorithm, and the number of training runs T.

Step 2. Initialize the weight distribution over the training examples according to w1(i) = 1/N for i = 1, 2, ..., N, where i stands for the training example and the subscript 1 denotes the first of the T rounds used to determine the weights wt(i). In the first round all training examples are assigned equal weights. This assignment ensures that the weight distribution is normalized, that is, Σ(i=1..N) w1(i) = 1.

Step 3. Start a loop “for t = 1 to T” that creates sequential base classifiers according to the following substeps in succession:

- Train the base classifier ht based on the example weight distribution wt(i).
- Determine the training error, defined as εt = Σ(i: ht(xi) ≠ yi) wt(i), which is the sum of the weights of all examples misclassified by ht.
- Assign a weight to the t-th classifier ht according to:

    αt = (1/2) ln[(1 − εt)/εt].

- Update the example weights according to:

    wt+1(i) = [wt(i)/zt] · e^(−αt)  for ht(xi) = yi
    wt+1(i) = [wt(i)/zt] · e^(+αt)  for ht(xi) ≠ yi

  where zt is the normalization factor making wt+1(i) a probability distribution, that is, Σ(i=1..N) wt+1(i) = 1.
- End loop.

Step 4. Output the boosted classifier as the weighted sum of the T base classifiers:

    H(x) = sign[ Σ(t=1..T) αt·ht(x) ].
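The steps above can be sketched with single-feature threshold classifiers (decision stumps) as the base learners. This is an illustrative numpy implementation of the summarized algorithm, not the authors' code:

```python
import numpy as np

def train_stump(X, y, w):
    """Pick the single-feature threshold with minimum weighted error."""
    best = (0, 0.0, 1, np.inf)                 # (feature, threshold, sign, error)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] >= thr, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, sign, err)
    return best

def adaboost(X, y, T=20):
    """AdaBoost with threshold (stump) base classifiers; y in {-1, +1}."""
    N = len(y)
    w = np.full(N, 1.0 / N)                    # Step 2: uniform weights
    ensemble = []
    for _ in range(T):                         # Step 3: sequential rounds
        j, thr, sign, err = train_stump(X, y, w)
        err = max(err, 1e-12)                  # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)  # classifier weight
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)         # e^{-a} if correct, e^{+a} if not
        w /= w.sum()                           # z_t normalization
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    """Step 4: sign of the alpha-weighted vote of the T stumps."""
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1)
                for a, j, t, s in ensemble)
    return np.sign(score)
```

A single stump on a linearly separable two-feature problem is a weak learner; a few boosting rounds drive the training error down sharply, which is the behaviour the validation section examines.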
III. MULTIPLE PREPROCESSORS AND INFORMATION FUSION

Vector autoscaling

The matrix elements are mean-centered and variance-normalized for each sample separately (row-wise), where x̄i = (1/N) Σ(j=1..N) xij and σi² = (1/N) Σ(j=1..N) (xij − x̄i)².

Dimensional autoscaling

The matrix elements are mean-centered and variance-normalized for each dimension separately (column-wise) as

  x̃ij = (xij − x̄j) / σj

where x̄j = (1/M) Σ(i=1..M) xij and σj² = (1/M) Σ(i=1..M) (xij − x̄j)².

The feature extraction has been done by principal component analysis (PCA).

In data space fusion, the sample vectors transformed by the two methods were fused by simple concatenation of the vector components. That is, if the i-th training vector processed by vector autoscaling is x¹i = {x¹ij} = {x¹i1, x¹i2, ..., x¹iM} and that processed by dimensional autoscaling is x²i = {x²ij} = {x²i1, x²i2, ..., x²iM}, then the i-th fused data vector is defined by zi = {x¹i1, ..., x¹iM, x²i1, ..., x²iM}. The feature extraction is done by PCA of the new N × 2M dimensional data matrix.

In feature space fusion, two alternate feature spaces are created first by the PCA of the vector-autoscaled and the dimensional-autoscaled data.

IV. VALIDATION

Four data sets of two-class problems have been used in the validation analysis. These data were collected from the UCI machine learning repository. When analyzed by a single strong classifier (a backpropagation neural network) in combination with dimensional autoscaling and PCA feature extraction, the classification rates for two of the data sets (sonar and heart) were typically <60%, and those for the other two (Haberman breast cancer and Pima Indian diabetes) were typically >70%. The analysis of the same data sets was done by the proposed method, consisting of the weak-threshold-classifier-based AdaBoost algorithm and the two methods of feature extraction. The division of the available data between the training and the test sets was done by random selection in a nearly 50-50 ratio. The description of the data sets is given in Table 1.

Table 2 presents the best classification results obtained by the AdaBoost algorithm with the linear threshold base classifiers as described in the preceding. The data are processed by four combinations of preprocessing, fusion and feature extraction strategies before AdaBoosting. These combinations are: vector-autoscaling + PCA; dimensional-autoscaling + PCA; (vector-autoscaling + dimensional-autoscaling) data space fusion + PCA; and (vector-autoscaling + PCA) + (dimensional-autoscaling + PCA) feature space fusion. It can be seen that in all cases the performance of the AdaBoost algorithm has improved after the multiple
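The two preprocessors and the two fusion schemes of this section can be sketched as follows; PCA is done here via the SVD of the centred data matrix, and all function names are illustrative:

```python
import numpy as np

def vector_autoscale(X):
    """Normalize each sample (row) by its own mean and standard deviation."""
    mu = X.mean(axis=1, keepdims=True)
    sd = X.std(axis=1, keepdims=True)
    return (X - mu) / sd

def dimensional_autoscale(X):
    """Mean-centre and variance-normalize each dimension (column)."""
    mu = X.mean(axis=0, keepdims=True)
    sd = X.std(axis=0, keepdims=True)
    return (X - mu) / sd

def pca(X, k):
    """Project onto the first k principal components (SVD of centred data)."""
    Xc = X - X.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def data_space_fusion(X, k):
    """Concatenate the two representations, then one PCA on the N x 2M matrix."""
    return pca(np.hstack([vector_autoscale(X), dimensional_autoscale(X)]), k)

def feature_space_fusion(X, k):
    """PCA each representation separately, then concatenate the feature sets."""
    return np.hstack([pca(vector_autoscale(X), k),
                      pca(dimensional_autoscale(X), k)])
```

Either fused representation is then handed to the AdaBoost ensemble in place of the single-preprocessor features.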
preprocessor based data space or feature space fusions. The amount of improvement, however, depends on the type of data. For the breast-cancer and diabetes data the improvements are very marginal. However, in case of the sonar data and heart data the improvements are substantial after data space fusion.

TABLE I: DATA SETS USED IN PRESENT ANALYSIS

Data | Classes | Samples | Attributes | Remark
Sonar | 2 | 208 | 60 | Classes: sonar returns from a metal cylinder and from a similarly shaped rock. Attributes: integrated energy within a frequency band.
Heart | 2 | 267 | 22 | Classes: cardiac normal and abnormal condition. Attributes: SPECT images.
Haberman’s Breast-Cancer | 2 | 306 | 3 | Classes: patients’ survival after 5 years and death within 5 years of breast cancer surgery. Attributes: age, year of operation, positive axillary nodes.
Pima-Indian Diabetes | 2 | 768 | 8 | Classes: signs or no-sign of diabetes in Pima-Indian females above 21 years of age. Attributes: patient’s history and physiological parameters.

Fig. 1. Variation of error rate for sonar test data with ensemble size in the AdaBoost algorithm for linear threshold classifiers. Panels: vector autoscaling, dimensional autoscaling, data space fusion, feature space fusion. X-axis: number of threshold classifiers in the AdaBoost ensemble; Y-axis: error rate.

Fig. 3. Variation of error rate for Haberman test data with ensemble size in the AdaBoost algorithm for linear threshold classifiers. Panels and axes as in Fig. 1.

[…] samples, and the plots show the variation of error rate with the ensemble size. For example, the selected threshold classifier results in initial error rates for the sonar and heart data close to 50%, and for the breast-cancer and diabetes data close to 25%. The AdaBoosting reduces the error rate significantly in case of the former, Fig. 1 and Fig. 2, but not so much in case of the latter, Fig. 3 and Fig. 4. A similar impact on the classification rates is apparent from the results in Table II. Another notable point is that the data space fusion facilitates better boosting.
V. DISCUSSION AND CONCLUSION

Sonar data is a complex distribution of the energy of chirped sonar signals in different frequency bands returned from two types of targets (a metallic cylinder and a cylinder-shaped rock). All the attributes are therefore of the same kind. Besides, there could be appreciable correlation between different attributes in the raw data. The best result for this data set is obtained by the combination of data space fusion with AdaBoosting. The error rate on the training data set drops to 0 after 20 rounds of base learner creation. On the test data set, however, the error rate continued to decrease up to 80 rounds, Fig. 1. The use of other preprocessing methods did not produce much boosting at these ensemble sizes. The use of dimensional autoscaling and feature space fusion reduced the error rate significantly after only a few rounds of iteration; later, the error rate increased. Vector autoscaling did not produce a boosting effect in any condition, Fig. 1.

Heart data is diagnostic cardiac data based on SPECT (single photon emission computed tomography) images for patients belonging to two categories: normal and abnormal. The classification results in Table II and the error results in Fig. 2 indicate a trend similar to that for the sonar data. The combination of data space fusion with AdaBoosting yields the best result. The attributes of image data are also likely to be correlated.

In contrast, the Haberman’s survival data after breast cancer surgery and the Pima Indian diabetes data consist of patient history and physiological parameters like the number of positive axillary nodes and body glucose level. The variables in these data sets are of different types and do not seem to have direct correlation. The AdaBoosting in any combination of preprocessing fusion strategy does not yield a significantly enhanced classification rate.

It appears therefore that the strategy of multiple-preprocessor based fusion of information enhances the efficiency of the AdaBoost algorithm in those multivariate situations where the variables are of the same type and are correlated. The fusion improved the classification rate of the AdaBoosted linear threshold classifier by 12% to 22% for the sonar and heart data compared to the AdaBoosting without fusion. We thus conclude that bringing diversity into the preprocessing methods for data representation to the feature extractor yields a more accurate feature set, which further enhances the efficiency of the AdaBoost algorithm.

REFERENCES

[1] A. K. Jain, R. P. W. Duin, and J. Mao, “Statistical pattern recognition: a review,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 4-37, Jan. 2000.
[2] R. J. Schalkoff, Pattern Recognition - Statistical, Structural and Neural Approaches, Wiley & Sons, 1992, ch. 1.
[3] T. G. Dietterich, “Ensemble learning,” in The Handbook of Brain Theory and Neural Networks, 2nd ed., M. A. Arbib, Ed. Cambridge, MA: The MIT Press, 2002, pp. 405-408.
[4] R. E. Schapire, “The strength of weak learnability,” Machine Learning, vol. 5, no. 2, pp. 197-227, 1990.
[5] T. G. Dietterich, “Ensemble methods in machine learning,” in Lecture Notes in Computer Science, J. Kittler and F. Roli, Eds. Berlin Heidelberg: Springer-Verlag, vol. 1857, 2000, pp. 1-15.
[6] Y. Freund, Y. Mansour, and R. Schapire, “Why averaging classifiers can protect against overfitting,” in Artificial Intelligence and Statistics 2001 (Proc. of the Eighth International Workshop: January 4-7, 2001, Key West, Florida), T. Jaakkola and T. Richardson, Eds. San Francisco, CA: Morgan Kaufmann Publishers, 2001.
[7] D. Chen and J. Liu, “Averaging weak classifiers,” in Lecture Notes in Computer Science, J. Kittler and F. Roli, Eds. Berlin Heidelberg: Springer-Verlag, vol. 2096, 2001, pp. 119-125.
[8] G. Levitin, “Threshold optimization for weighted voting classifiers,” Naval Research Logistics, vol. 50, 2003, pp. 322-344.
[9] Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” Journal of Computer and System Sciences, vol. 55, no. 1, 1997, pp. 119-139.
[10] Cuneyt Mertayak (2007, May 25). AdaBoost, version 1.0. Available: http://www.mathworks.com/matlabcentral/fileexchange/21317-adaboost.
[11] R. G. Osuna and H. T. Nagle, “A method for evaluating data preprocessing techniques for odor classification with an array of gas sensors,” IEEE Trans. Syst. Man Cybern. B, vol. 29, May 1999, pp. 626-632.
Abstract- This paper presents some real-time video processing techniques for intelligent and efficient hand gesture recognition, with an aim of establishing a virtual interfacing platform for Human-Computer Interaction (HCI). The first step of the process is colour segmentation based skin detection, followed by area-based noise filtering. If a gesture is detected, the next step is to calculate a number of independent parameters of the available visual data and assign a distinct range of values of each parameter to a predefined set of different gestures. The final step is the hierarchical mapping of the obtained parameter values to recognise a particular gesture from the whole set. Deliberately, the mapping of gestures is not exhaustive, so as to prevent incorrect mapping (misinterpretation) of any random gesture not belonging to the predefined set. The applications of the same are inclusive of, but not limited to, Sign Language Recognition, robotics, computer gaming, etc. Also, the concept may be extended, using the same parameters, to facial expression recognition techniques.

1. INTRODUCTION

Gestures and gesture recognition are terms increasingly encountered in discussions of human-computer interaction. The term includes character recognition, the recognition of proofreaders’ symbols, shorthand, etc. Every physical action involves a gesture of some sort in order to be articulated. Furthermore, the nature of that gesture is generally an important component in establishing the quality of feel of the action. The general problem is quite challenging due to a number of issues, including the complicated nature of static and dynamic hand gestures, complex backgrounds, and occlusions. Attacking the problem in its generality requires elaborate algorithms demanding intensive computer resources. Due to real-time operational requirements, we are interested in a computationally efficient algorithm.

Previous approaches to hand gesture recognition include the use of markers on various points of the hand, including the fingertips. Calculation and observation of the relative placement and orientation of these markers specifies a particular gesture. The inconvenience of placing markers on the user’s hand makes this an infeasible approach in practice. Another approach is to use sensor-fitted gloves to detect the orientation and other geometrical properties of the hand. The demerit of this approach is its cost-ineffectiveness.

The approach proposed in this text is quite user-friendly, as it does not require any kind of markers or special gloves for its operation. Also, the memory requirements are low because subsequent video frames are not stored in memory; they are just processed and overwritten. Obviously, this adds a new challenge to make the algorithm very fast and efficient, the fulfilment of which is ensured by using low-complexity calculation techniques. For ease of implementation, the proposed algorithm is based on three basic assumptions:

1. The background should be dark.
2. The hand should always be at a constant distance from the camera.
3. There should be a time gap of at least 200 ms between every two gestures.
William Sheldon classified personality according to body type [17]. He called this a person’s somatotype and identified three main somatotypes, shown in Table 1.

Table 1. Sheldon’s somatotypes and character interpretations

Sheldon’s somatotype | Character characteristics | Shape
Endomorph [viscerotonic] | Relaxed, sociable, tolerant, comfort-loving, peaceful | Plump, buxom, developed visceral structure
Mesomorph [somatotonic] | Active, assertive, vigorous, combative | Muscular
Ectomorph [cerebrotonic] | Quiet, fragile, restrained, non-assertive, sensitive | Lean, delicate, poor muscles

A person is rated on each of these three dimensions using a scale from 1 (low) to 7 (high) with a mean of 4 (average). Therefore, for example, a person who is a pure mesomorph would have a score of 1-7-1.

In Ayurvedic medicine (used in India since ~3000 BC) there are three main metabolic body types (doshas) - Vata, Pitta and Kapha - which in some way correspond to Sheldon’s somatotypes. Body types have been criticized for very weak empirical methodology and are not generally used in Western psychology (they are used more often in alternative therapies and in Eastern psychology and spirituality).

[…] type of mimics and voice may define personality traits. For example, it is used in socionics (see Table 2), which is a branch of psychology based on Carl Jung’s work on Psychological Types. Moreover, many socionics experts use the visual method of personality characteristics identification as a main method for personality traits and types recognition.

Table 2. Example of some outer appearance characteristics and their interpretation

No | Physical character | Sensoring | Intuitive
01 | Form of the muscles and bones | Short and thick, muscles are pronounced | Lengthy and thin, muscles aren’t pronounced
02 | Form of the nose | Sensoring + Logical: horizontal line in the nose bridge; Sensoring + Ethical: «triangle with peak on the top» | Intuitive + Ethical / Intuitive + Logical: «triangle with peak in the bottom»
Complex physical appearance evaluation

This is an approach of evaluating the face and body parts in complex, and it is considered to be physiognomy too. Physical appearance characteristics such as the appearance of some facial features, of the skull, shoulders, hands, fingers, legs, […]

Neuropsychological tests

Around the 1990s, neuroscience entered the domain of personality psychology. It introduced powerful brain analysis tools like electroencephalography (EEG), positron emission tomography (PET), functional magnetic resonance imaging (fMRI) and structural MRI, including diffusion tensor imaging (DTI),
to this study. One of the founders of this area of brain research is Richard Davidson of the University of Wisconsin-Madison [18]. Davidson’s research lab has focused on the role of the prefrontal cortex and amygdala in manifesting human personality. In particular, this research has looked at hemispheric asymmetry of activity in these regions. Neuropsychological studies have illustrated how hemispheric asymmetry can affect an individual’s personality.

In contemporary psychological research there should be an instrument which would provide a maximum amount and type of objective/unbiased information about personality in as short a time as possible, preferably with no participation of the person whose characteristics are identified. A comparison of the approaches to identification of psychological characteristics described above is represented in Table 3.

Table 3. Some comparison of approaches to identification of psychological characteristics

Criterion | Psychological questionnaires | Interview, direct observation | Face, body evaluation | Neuropsychological tests
Easy and not time-consuming for the person who is tested | - | - | + | -
Person may not participate in the testing process | - | - | + | -
High validity and reliability [19] | + | - | ? | -
Practically no possibility for respondent faking | - | - | + | -
No need in expensive hi-tech hardware | + | + | + | -

In psychological testing there is a considerable problem that respondents are often able to distort their responses. This is particularly problematic in employment contexts and other contexts where important decisions are being made and there is an incentive to present oneself in a favorable manner. Social desirability is a tendency to portray oneself in a positive light, and faking bad also happens, that is, purposely saying ‘no’ or looking bad if there is a ‘reward’ (e.g. attention, compensation, social welfare, etc.). Work in experimental settings [20,21] has shown that when student samples have been asked to deliberately fake on a personality test, they demonstrated that they are capable of doing this.

Though several strategies have been adopted for reducing respondent faking, this is still a problem for such traditional psychological testing instruments as questionnaires, interviews and direct observations. Surprisingly, neuropsychological tests are prone to respondent faking, too [22,23]. Faking response styles include faking bad (malingering), faking good (defensiveness), attempts at invalidation, mixed responding (faking good and bad), and a fluctuating, changing style that occurs within one evaluation session. These response styles lead to incorrect results.

Concerning the face and facial features, faking becomes much more complicated: it is impossible to change the shape of a nose or cheekbones just when a person wants to. Besides, it is often unknown to the holder what his/her face reveals exactly. Theoretically, people can “fake” facial features by intentionally changing their shape, color or texture, for instance using plastic surgery, and identifying personal psychological characteristics becomes much harder in this case, though it may still be accomplished.
The face is the first subject that is unique for people and is used for people recognition. Thus, the face is the most available means of evaluation among other instruments based on questionnaires, interviews, and neuropsychological tests. People in general may not participate in the testing process; identification of personality characteristics may be done remotely, even by exterior parties.

Summarizing, the face provides researchers and psychologists with an instrument for obtaining information about personality and psychological traits that would be much more objective than questionnaires and neuropsychological tests (as we cannot change facial features just when such a desire appears) and could be obtained remotely using a person’s facial portrait, with no need for personal involvement.

If such an instrument works automatically (the system gets a facial portrait, processes it and as a result gives out information about personality characteristics) and has a straightforward layout, then: 1) psychological testing becomes more accurate, fast, objective and available for different kinds of research and applications; 2) deep knowledge in the interpretation of facial features, which is rather rare in modern society, is not needed to administer and use the instrument. Methods and algorithms originally developed for the face detection, face recognition and facial expression recognition research fields, as well as contemporary trends (applying standard face images, multimodality, three-dimensionality), should be applied and adjusted to so-called Automatic Psychological Characteristics Recognition from Face. From its side, Automatic Recognition of Psychological Characteristics from Face is believed to bring scientific benefits to face recognition, facial expression recognition, face animation, face retrieval, etc., and finally to contribute to the development of human-computer interaction on a higher level. Thus, the relations between such research areas as face recognition, facial expression recognition and psychological characteristics recognition are mutually beneficial.

3. Approaches to psychological characteristics recognition from face

There are three main approaches to psychological characteristics recognition from face: physiognomy, phase facial portrait and ophthalmogeometry, see Fig. 1. The first originally interprets different facial features, the second works with angles of facial features and facial asymmetry, and the third extracts and interprets eye region parameters. Methods developed for these approaches are described below.

Figure 1. Approaches to psychological characteristics recognition from facial portrait

Physiognomy is a theory based upon the idea that the assessment of the person’s outer appearance, primarily the face, facial features, skin texture and quality, may give insights into one’s character or personality. Physiognomy has flourished since the time of the Greeks (Empedocles, Socrates, Hippocrates and Aristotle), amongst the Chinese and Indians, with the Romans (Polemon and Adamantius), in the Arab world (including Avicenna), and during the European renaissance (Gerolamo Cardano and Giovanni Battista della Porta). It faded in
popularity during the 18th century, was eclipsed by phrenology in the 19th, and has been revived by personologists in the 20th century.

During the 20th century attempts were made to perform scientific experiments concerning the validity of different facial feature interpretations, and high-accuracy results have been claimed [24], though they are mostly not accepted by official science [25]. At the same time, science is step by step confirming some physiognomy beliefs. For instance, correlations have been established between IQ and cranial volume [26,27,28,29]. Testosterone levels, which are known to correlate with aggressiveness, are also strongly correlated with features such as finger-length ratios and square jaws [30,31].

Interpretation of facial features based on physiognomy has been implemented in psychological characteristics diagnosis tools such as the "Visage" project [32] developed by Dr. Paul Ekman and the "Digital physiognomy" software [33] developed by Uniphiz Lab.

"Visage" is a project for collecting and organizing information about relatively permanent facial features. It includes methods for storing, retrieving, and inspecting the data. Visage is a unique database schema for representing physiognomy and the interpretation of physiognomic signs. The Visage demonstration application illustrates limited variations of some facial features in the following categories: forehead and eyebrows (see Fig. 2), eyes and eyelids, nose, mouth and jaw, cheeks, chin, and ears. The user selects features that are distinctive about the face to be interpreted and then clicks the "Get..." button. The application retrieves information from the database relevant to the description of physiognomy, including an estimation of the accuracy of the sources of information.

Figure 2. Example of the table and interface of the Visage demonstration application: facial features in the forehead and eyebrow area [34]

The "Digital physiognomy" software determines a person's psychological characteristics based on temperament types, intellect, optimism – pessimism, conformism – adventurism, egoism – altruism, philanthropy – hostility, laziness, honesty, etc., and then presents a detailed analysis of the person's character in graphic format. The tool works like a police sketch (photo robot): the user has to select different parts of the person's face and does not need a photograph of the person, see Fig. 3. It is claimed that only the facial features that can be interpreted with high accuracy were used, and a confidence factor is calculated by the tool for each interpretation. It should be noted that the "Digital physiognomy" tool also uses a visual systematic classification of 16 personality types based upon the Myers-Briggs typology, see Fig. 4.

"Visage" and "Digital physiognomy" are among the first attempts to develop a physiognomic database and use modern technology for physiognomic interpretations. In spite of their value for psychological diagnosis based on physiognomy, both projects use manual selection of facial features and thus cannot be used extensively and applied in scientific research.
…of brain asymmetry phenomena and face asymmetry. Although Anuashvili claims that the application developed for the video-computer psychological diagnosis and correction method is entirely automated, in practice it may be considered semi-automated, as manual selection of facial points on the image is required. This limits the usage of such an application for extensive research and other purposes.

Concerning the ophthalmogeometry approach, it is based on the idea that a person's emotional, physical and psychological states can be recognized from 22 parameters of the eye region of the face [39], see Fig. 7. The ophthalmogeometry phenomenon was discovered by Prof. Ernst Muldashev. Among other interesting findings, Muldashev established that from 4-5 years after birth the only practically constant parameter of the human body is the diameter of the transparent part of the cornea, which equals 10±0.56 mm. He also put forward the idea that the ophthalmogeometrical pattern is unique to each person. The procedure for identifying and calculating this pattern is described by Leonid Kompanets [40], see Fig. 8.

Figure 7. Translated picture from Muldashev's book [39]: two parameters of the facial eye region are used for recognition of some basic psychological traits, e.g. strong will, fearfulness, etc.

Figure 8. Ophthalmogeometrical pattern extraction [40]

Ophthalmogeometry is based on interesting ideas and may be applied to psychological and medical research as well as to biometrics, though it is not a very deeply investigated area of facial analysis and primarily needs automation of ophthalmogeometric pattern extraction and further investigation.

4. Conclusion

The paper presents the general idea that the face provides researchers and psychologists with an objective instrument for obtaining information about personality and psychological traits. An up-to-date survey of approaches and methods in psychological characteristics recognition from facial images is provided.

In perspective, the new research task of automating procedures in applications of psychological characteristics recognition from face should be explored. Various approaches and methods developed within face recognition, facial expression recognition, face retrieval, face modeling and animation may be applied and adjusted for recognition of psychological characteristics from face. Undeniably, such an automated system of psychological characteristics recognition from face would find countless psychological, educational and business applications. It may also be used as part of medical systems: 1) a patient's psychological state and traits influence the process of medical treatment, and this should be taken into consideration and researched; 2) a patient's psychological characteristics should be taken into account to reflect and construct the psychosomatic model of disease in an environment that includes biological, psychological, and social factors.

References

[1] Carver C. S., Scheier M. F. Perspectives on personality (4th ed.). Boston: Allyn and Bacon, 2000, page 5.
[2] DSM, Diagnostic and Statistical Manual of Mental Disorders, can be found at http://www.psych.org/research/.
[3] Hampson S. E. Advances in Personality Psychology. Psychology Press, 2000.
[4] Holigrocki R. J., Kaminski P. L., Frieswyk S. H. (2002). PCIA-II: Parent-Child Interaction
Assessment Version II. Unpublished manuscript, University of Indianapolis. (Update of PCIA Tech. Rep. No. 99-1046. Topeka, KS: Child and Family Center, The Menninger Clinic.) (Available from Dr. Richard J. Holigrocki or Dr. Patricia L. Kaminski).
[5] Nigel Barber. The evolutionary psychology of physical attractiveness: Sexual selection and human morphology. Ethology and Sociobiology, Volume 16, Issue 5, September 1995, pages 395-424.
[6] John P. Swaddle, Innes C. Cuthill. Asymmetry and Human Facial Attractiveness: Symmetry May not Always be Beautiful. Proceedings: Biological Sciences, Vol. 261, No. 1360 (Jul. 22, 1995), pages 111-116.
[7] Thomas R. Alley, Michael R. Cunningham. Averaged faces are attractive, but very attractive faces are not average. Psychological Science 2 (2), 1991, pages 123-125.
[8] Leslie A. Zebrowitz, Gillian Rhodes. Sensitivity to "Bad Genes" and the Anomalous Face Overgeneralization Effect: Cue Validity, Cue Utilization, and Accuracy in Judging Intelligence and Health. Journal of Nonverbal Behavior, Volume 28, Number 3, September 2004, pages 167-185.
[9] Caroline F. Keating. Gender and the Physiognomy of Dominance and Attractiveness. Social Psychology Quarterly, Vol. 48, No. 1 (Mar., 1985), pages 61-70.
[10] Ulrich Mueller, Allan Mazur. Facial Dominance of West Point Cadets as a Predictor of Later Military Rank. Social Forces, Vol. 74, No. 3 (Mar., 1996), pages 823-850.
[11] J. Liggett. The human face. New York: Stein and Day, 1974, page 276.
[12] Physiognomics, attributed to Aristotle. Cited in J. Wechsler (1982), A human comedy: Physiognomy and caricature in 19th century Paris (p. 15). Chicago: University of Chicago Press.
[13] A. Brandt. Face reading: The persistence of physiognomy. Psychology Today, December 1980, page 93.
[14] Sibylle Erle. Face to Face with Johann Caspar Lavater. Literature Compass 2 (2005) RO 131, pages 1-4.
[15] Stefan Boehringer, Tobias Vollmar, Christiane Tasse, Rolf P. Wurtz, Gabriele Gillessen-Kaesbach, Bernhard Horsthemke and Dagmar Wieczorek. Syndrome identification based on 2D analysis software. European Journal of Human Genetics (2006), pages 1-8.
[16] Hartmut S. Loos, Dagmar Wieczorek, Rolf P. Würtz, Christoph von der Malsburg and Bernhard Horsthemke. Computer-based recognition of dysmorphic faces. European Journal of Human Genetics (2003) 11, pages 555-560.
[17] Irvin L. Child, William H. Sheldon. The correlation between components of physique and scores on certain psychological tests. Journal of Personality, Vol. 10, Issue 1, September 1941, page 23.
[18] Richard Davidson, Ph.D., Vilas Professor of Psychology and Psychiatry. Can be found at https://psychiatry.wisc.edu/faculty/FacultyPages/Davidson.htm.
[19] The Validity of Graphology in Personnel Assessment. Psychological Testing Centre. Found at www.psychtesting.org.uk, November 1993, reviewed April 2002.
[20] Chockalingam Viswesvaran, Deniz S. Ones. Meta-Analyses of Fakability Estimates: Implications for Personality Measurement. Educational and Psychological Measurement, Vol. 59, No. 2, 1999, pages 197-210.
[21] Deniz S. Ones, Chockalingam Viswesvaran, Angelika D. Reiss. Role of Social Desirability in Personality Testing for Personnel Selection: The Red Herring. Journal of Applied Psychology, 1996, Vol. 81, No. 6, pages 660-679.
[22] Hall, Harold V.; Poirier, Joseph G.; Thompson, Jane S. Detecting deception in neuropsychological cases: toward an applied model. The Forensic Examiner, 9/22/2007.
[23] Allyson G. Harrison, Melanie J. Edwards and Kevin C. H. Parker. Identifying students faking ADHD: Preliminary findings and strategies for detection. Archives of Clinical Neuropsychology, Volume 22, Issue 5, June 2007, pages 577-588.
[24] Naomi Tickle. You Can Read a Face Like a Book: How Reading Faces Helps You Succeed in Business and Relationships. Daniels Publishing, 2003.
[25] Robert Todd Carroll. The Skeptic's Dictionary: A Collection of Strange Beliefs, Amusing Deceptions, and Dangerous Delusions. Wiley; 1st edition (August 15, 2003).
[26] J. Philippe Rushton, C. Davison Ankney. Brain size and cognitive ability: Correlations with age, sex, social class, and race. Psychonomic Bulletin & Review, 1996, 3 (1), pages 21-36.
[27] Michael A. McDaniel. Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, Volume 33, Issue 4, July-August 2005, pages 337-346.
[28] J. Philippe Rushton. Cranial size and IQ in Asian Americans from birth to age seven. Intelligence, Volume 25, Issue 1, 1997, pages 7-20.
[29] John C. Wickett, Philip A. Vernon, Donald H. Lee. Relationships between factors of
Abstract:

Object tracking is an important task in video processing because of its various applications like visual surveillance, human activity monitoring and recognition, traffic flow management, etc. Multiple object detection and tracking in an outdoor environment is a challenging task because of the problems raised by poor lighting conditions, occlusion and clutter. This paper proposes a novel technique for detecting and tracking multiple humans in a video. A classifier is trained for object detection using Haar-like features from the training image set. The human objects are detected with the help of this trained detector and are tracked with the help of a particle filter. The experimental results show that the proposed technique can detect and track multiple humans in a video adequately fast in the presence of poor lighting conditions, clutter and partial occlusion, and that it can handle a varying number of human objects in the video at various points of time.

Keywords: Human detection, Automatic multiple object tracking, Haar-like features, Machine learning, Particle filter.

1. Introduction:

Detecting humans and analyzing their activities by vision is key for a machine to interact intelligently and effortlessly with a human-inhabited environment. The aim of visual surveillance is the real-time observation of targets such as human beings or vehicles in some environment, leading to a description of the objects' activities within it. Visual surveillance has been used for security monitoring, anomaly detection, intruder detection, traffic flow measuring, accident detection on highways, and routine maintenance in nuclear facilities [1,2,3,4]. Hu et al. [1] provide a good survey on visual surveillance and its various applications. Detecting and tracking humans in a video is a first step towards analyzing and predicting their behavior and intention. It is good practice to detect objects in a video sequence before tracking them. The problem of multiple object tracking is more complex and challenging than single object tracking because of the management of multiple tracks caused by newly appearing objects and the disappearance of already existing targets. Viola et al. [5] proposed an object detection framework and used it for the first time for detecting human faces in an image. Adaptive boosting can be used to speed up a binary classifier [6], and this can be used in machine learning for creating real-time detectors. After detecting an object or multiple objects in a video sequence captured by the surveillance camera, the very next step is to track these objects (human, vehicle, etc.) in the subsequent frames of the video stream. Particle filter based tracking techniques are gaining popularity because of their ease of implementation and their capability to represent a non-linear object tracking system and the non-Gaussian nature of noise. Various object tracking techniques based on particle filtering are found in the literature [7,8,9,10,11]. These approaches fall under two main categories: single object trackers and multiple object trackers. Lanvin et al. [7] propose an object detection and tracking technique and solve the non-linear state equations using particle filtering. Single object trackers suffer from the problem of false positives when severe
occlusions occur, because of the hidden first-order Markov hypotheses [8]. The problem of tracking multiple objects using particle filters can be solved in two ways: one is by creating a separate particle filter for each track, and the other is by having a single particle filter for all tracks. The second approach works fine as long as the objects under tracking are not occluded, but in case of occlusion, when objects come close to each other, these techniques fail to track the objects. In [8] a multi-object tracking technique using multiple particle filters has been proposed. Algorithms which attempt to find the target of interest without using segmentation, based on cues such as color, edges and textures, have been proposed for single target tracking [12].

Chen et al. [9] propose a color based particle filter for object tracking. Their technique is based on the Markov Chain Monte Carlo (MCMC) particle filter and the object color distribution. Sequential Monte Carlo techniques, also known as particle filtering and condensation algorithms, and their applications in the specific context of visual tracking, have been described at length in the literature [13,14,15]. In [10], object tracking and classification are performed simultaneously. Many of the particle filter based multiple object tracking schemes rely on hybrid sequential state estimation.

The particle filter developed in [16] has multiple models for the objects' motion and comprises an additional discrete state component to denote which of the motion models is active. The Bayesian multiple-blob tracker [17] presents a multiple tracking system based on a statistical appearance model. The multiple blob tracking is managed by incorporating the number of objects present in the state vector, and the state vector is augmented as in [18] when a new object enters the scene. The joint task of object detection and tracking comes with a heavy computational overhead, but for visual surveillance applications speed is one of the most important factors. Many previously proposed techniques are slow, so they cannot be good for visual surveillance, and some fail to cope with dynamic outdoor environment conditions. This paper presents a novel technique based on machine learning and particle filtering to detect and track humans in a video. The humans are first detected using the object detection framework proposed by Viola et al. [5] with Haar-like features and then tracked using a simple particle filter in the subsequent video frames. Binary adaptive boosting [6] reduces the training time of the human detector. Detecting the objects first simplifies the process of tracking and sheds the load of detection from the tracker. The exhaustive dataset used for training the detector makes the system detect and track multiple objects in critical lighting conditions, dynamic background, partial occlusion and clutter.

The rest of the paper is organized as follows: section 2 discusses the proposed technique for human detection and tracking; section 3 gives various experimental results using the proposed technique, which prove the validity and novelty of the method; section 4 comprises the conclusions, and at last the references are given.

2. Methodology

Here we solve two sub-problems: one is to detect the humans in the video, and the other is to track them in the subsequent video frames. The sub-problem of object detection is solved using a machine learning approach, for which we train our human detector using binary adaptive boosting.

Algorithm for multi-human detection and tracking:

Let Z be the input video to the algorithm.

In the first frame Z_0 of Z, detect humans using the human detector. Let N be the number of detected humans. Initialize trajectories T_j, 1 ≤ j ≤ N, with initial positions x_{j,0}. Initialize the appearance model (color histogram) q_j for each trajectory from the region around x_{j,0}.

For each subsequent frame i of the input video:

(a) For each existing trajectory T_j,
i. Use the motion model to predict the distribution p(x_{j,i} | x_{j,i-1}) over locations for human j in frame i, creating a set of candidate particles x_{j,i}^{(k)}, 1 ≤ k ≤ K.

ii. Compute the color histogram q_{j,i}^{(k)} and the likelihood p(q_{j,i}^{(k)} | x_{j,i}^{(k)}, q_j) for each particle k using the appearance model.

iii. Resample the particles according to their likelihood. Let k* be the index of the most likely particle.

iv. Perform confirmation by classification: run the human detector on the location x_{j,i}^{(k*)}. If the location is classified as a human, reset the miss counter c_j to 0; else increase c_j by 1.

v. If c_j is greater than a threshold, remove trajectory j.

(b) Compute the distance d_{j,k} between each newly detected human k and each existing trajectory T_j. When d_{j,k} > τ for all j, where τ is a threshold (in pixels) less than the width of the tracking window, initialize a new trajectory for detection k.

2.1 Sample Creation and Human Detector Training

Adaptive boosting [6] is used to speed up the process. We collected positive and negative image samples for training. The positive images are those containing human beings, while the negative images do not contain any human being. We cropped the humans from the positive images and resized them to dimensions of 40*40. Our positive dataset consisted of 2,000 images, while the negative dataset consisted of 2,700 images. Fig. 1 shows some cropped human images from the positive samples used for human detector training. This human detector is nothing but a binary classifier with two classes: human and non-human.

Fig. 1: Sample positive human images used for detector training.

The next step after sample collection is Haar-like feature extraction from these samples. We use rectangular Haar-like features, which have an intuitive similarity to the Haar wavelets. The integral image representation is used for fast feature evaluation (see fig. 3).
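The integral-image trick mentioned above can be made concrete with a small sketch (our illustration, not the authors' code); the toy image and feature geometry are invented for the demonstration:

```python
# Integral image and a two-rectangle Haar-like feature, as used for fast
# feature evaluation in Viola-Jones-style detectors. The integral image
# ii(x, y) stores the sum of all pixels above and to the left of (x, y),
# so any box sum costs only four lookups.

def integral_image(img):
    """Build the integral image in one pass using cumulative row sums."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]  # zero border simplifies lookups
    for y in range(h):
        row_sum = 0  # cumulative row sum s(x, y)
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def box_sum(ii, x, y, w, h):
    """Sum of the w*h pixel box whose top-left corner is (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def two_rect_feature(ii, x, y, w, h):
    """Horizontal two-rectangle Haar feature: left half minus right half."""
    half = w // 2
    return box_sum(ii, x, y, half, h) - box_sum(ii, x + half, y, half, h)

# Toy 4x4 image: bright left half, dark right half (a vertical edge)
img = [[9, 9, 1, 1]] * 4
ii = integral_image(img)
print(box_sum(ii, 0, 0, 4, 4))           # 80: sum of the whole image
print(two_rect_feature(ii, 0, 0, 4, 4))  # 64: strong edge response
```

Real detectors evaluate thousands of such features per window; the integral image makes each one constant-time regardless of rectangle size.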
The integral image ii(x, y) at location (x, y) contains the sum of all pixels above and to the left of (x, y), and can be computed in a single pass over the image using the following pair of equations:

s(x, y) = s(x, y-1) + i(x, y)    (2.1.1)

ii(x, y) = ii(x-1, y) + s(x, y)    (2.1.2)

where s(x, y) is the cumulative row sum and i(x, y) is the original image. We use the adaptive boosting technique [6] for training the human detector system. The parameters and their values used in the training setup are given in Table 1.

Table 1: Human detector training parameters and their values.

Parameter | Value | Description
Npos | 2,000 | Number of positive images (containing only human beings)
Nneg | 2,700 | Number of negative images (not containing any human)
Nstages | 20 | Number of training stages
Minhitrate | 0.991 | Per-stage minimum hit rate (99.10%)
Maxfalsealarm | 0.5 | Maximum false alarm rate per stage (50%)
Mode | All | Use upright and tilted features
Width*height | 40*40 | Size of training images
Boosttype | DAB | Discrete Adaptive Boosting

For tracking we use a particle filter. We use an approach in which the uncertainty about a human's state (position) is represented as a set of weighted particles, each particle representing one possible state. The filter propagates particles from frame i-1 to frame i using a motion model, computes a weight for each propagated particle using an appearance model, then re-samples the particles according to their weights. The initial distribution for the filter is centered on the location of the object the first time it is detected. Here are the steps in more detail:

…based appearance model. After computing the likelihood of each particle we treat the likelihoods as weights, normalizing them to sum to 1.

2.2.3 Resample: We resample the particles to avoid degenerate weights. Without re-sampling, over time the highest-weight particle would tend to a weight of one and the other weights would tend to zero. Re-sampling removes many of the low-weight particles and replicates the higher-weight particles. We thus obtain a new set of equally-weighted particles. We use the re-sampling technique described in [13].

2.3 Motion and Appearance Models

Our motion model is based on a second-order autoregressive dynamical model. The autoregressive model assumes that the next state x_t of a system is a function of some number of previous states and a noise random variable ε_t:

x_t = f(x_{t-1}, x_{t-2}, ..., x_{t-p}, ε_t)    (2.3.1)

We assume the simple second-order linear autoregressive model

x_{j,i} = 2 x_{j,i-1} - x_{j,i-2} + ε_i    (2.3.2)

Our appearance model is based on color histograms. We compute a color histogram q_j in HSV space for each newly detected human and save it to compute particle likelihoods in future frames. To compute a particle's likelihood we use the Bhattacharyya similarity coefficient between the model histogram q_j and the observed histogram q_{j,i}^{(k)} as follows, assuming n bins in each histogram:

p(q_{j,i}^{(k)} | x_{j,i}^{(k)}, q_j) = e^{-d(q_j, q_{j,i}^{(k)})}    (2.3.4)

and

d(q_j, q_{j,i}^{(k)}) = sqrt( 1 - Σ_{b=1}^{n} sqrt( q_{j,b} · q_{i,b}^{(k)} ) )    (2.3.5)

where q_{j,b} and q_{i,b}^{(k)} denote bin b of q_j and q_{j,i}^{(k)}, respectively. A more sophisticated appearance model based on local histograms, along with other information such as spatial or structural information, would most likely improve our tracking performance, but we currently use a global histogram computed over the entire detection window because of its simplicity.
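One filtering step under the models above can be sketched as follows (a toy 1-D illustration of ours, not the paper's implementation: the particle count, noise scale, 4-bin histograms and the synthetic "observation" are all invented for the demonstration):

```python
import math
import random

random.seed(0)

def bhattacharyya_distance(p, q):
    """Eq. (2.3.5): d(p, q) = sqrt(1 - sum_b sqrt(p_b * q_b))."""
    bc = sum(math.sqrt(pb * qb) for pb, qb in zip(p, q))
    return math.sqrt(max(0.0, 1.0 - bc))

def likelihood(model_hist, cand_hist):
    """Eq. (2.3.4): particle weight = exp(-d(q_j, q^(k)))."""
    return math.exp(-bhattacharyya_distance(model_hist, cand_hist))

def predict(x_prev, x_prev2, sigma=2.0):
    """Eq. (2.3.2): x_i = 2*x_{i-1} - x_{i-2} + noise (constant velocity)."""
    return 2 * x_prev - x_prev2 + random.gauss(0.0, sigma)

def resample(particles, weights):
    """Replicate high-weight particles; the result is equally weighted."""
    return random.choices(particles, weights=weights, k=len(particles))

# Saved appearance model of the tracked human (normalized 4-bin histogram)
model = [0.7, 0.1, 0.1, 0.1]

def observe(pos, target=12.0):
    """Synthetic observation: the histogram at pos drifts from the model
    toward a flat histogram as pos moves away from the true target."""
    a = math.exp(-abs(pos - target) / 5.0)
    return [a * m + (1.0 - a) * 0.25 for m in model]

# The previous two positions imply velocity 2, so particles spread around 12
K = 100
candidates = [predict(10.0, 8.0) for _ in range(K)]
weights = [likelihood(model, observe(x)) for x in candidates]
particles = resample(candidates, weights)

best = max(range(K), key=lambda k: weights[k])
# The most likely particle lands near the true position 12
print(round(candidates[best], 1))
```

Step iv of the algorithm (confirmation by classification) would then re-run the trained detector at the location of the most likely particle.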
3. Experimental Results

We have tested the proposed automatic human detection and tracking technique on a number of videos; results for some representative videos are given here. The human detection and tracking starts automatically, without any initialization details being provided, unlike many other tracking techniques in which operator intervention is required. Our proposed detector detects human beings irrespective of their body poses and locations in the video (fig. 4). The results given in fig. 5 show that the proposed technique performs fairly well in an outdoor environment under low lighting conditions and is quite suitable for visual surveillance applications.

Fig. 5: tracking results on successive video frames (frames 150, 175, 200, 225 shown).
[4] D. Koller, J. Weber, T. Huang, J. Malik, B. Rao, G. Ogasawara, and S. Russell, "Toward Robust Automatic Traffic Scene Analysis in Real-time," in Proceedings of the Int. Conference on Pattern Recognition, Vol. 1, pp. 126-131, 1994.
[5] Paul Viola and Michael Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features," in Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, 2001.
[8] Jingling Wang, Yan Ma, Chuanzhen Li, Hui Wang, Jianbo Liu, "An Efficient Multi-Object Tracking Method using Multiple Particle Filters," in Proceedings of the World Congress on Computer Science and Information Engineering, pp. 568-572, 2009.
[10] Francois Bardet, Thierry Chateau, Datta Ramadasan, "Illumination Aware MCMC Particle Filter for Long-term Outdoor Multi-Object Simultaneous Tracking and Classification," in Proceedings of the IEEE 12th International Conference on Computer Vision, pp. 1623-1630, 2009.
[18] J. Czyz, B. Ristic, and B. Macq, "A Color-Based Particle Filter for Joint Detection and Tracking of Multiple Objects," in Proc. of the ICASSP, 2005.
Abstract
An efficient method for image segmentation is proposed by incorporating the advantages of the normalized cut (Ncut) partitioning method. The proposed method pre-processes an image using the normalized cut algorithm to form segmented regions, which are then used to form the weight matrix W. Since the number of segmented region nodes is much smaller than the number of image pixels, the proposed algorithm can handle images of varied dimensions with a significant reduction in computational complexity compared to the conventional Ncut method based on direct image pixels. The experimental results also verify that the proposed algorithm shows improved performance compared to the Ncut algorithm. This paper presents different ways to approach image segmentation, explains an efficient implementation for each approach, and shows sample segmentation results. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We have applied this approach to segmenting static images and found the results to be very encouraging.
The eigenvector associated with the smallest eigenvalue will minimize

y^T (D - W) y / (y^T D y)    (4)

and is equivalent to Ncut(A, B). But if we assume it for now, we can observe a number of properties of λ and the associated eigenvectors.

Fig. 1. (a) Original image. (b) The resultant image after applying the Ncut algorithm for edge computation. (c) The segmented results obtained by directly applying the Ncut algorithm to the image pixels. (d) The results of the Ncut algorithm partitioning.

Fig. 2. (a) Original image. (b) The resultant image after applying the proposed algorithm for edge computation. (c) The segmented results obtained by applying the proposed algorithm to the image pixels. (d) The results of the proposed algorithm partitioning.
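The Rayleigh-quotient form in equation (4) can be checked numerically. The sketch below (our illustration, with an invented 4-node graph) verifies that for the partition indicator y with y_i = 1 on A and y_i = -b on B, where b = assoc(A, V)/assoc(B, V), the quotient y^T(D - W)y / (y^T D y) equals Ncut(A, B):

```python
# Tiny weighted graph: nodes {0,1} form cluster A, nodes {2,3} form
# cluster B, with weak links across the clusters.
W = [[0, 4, 1, 0],
     [4, 0, 0, 1],
     [1, 0, 0, 4],
     [0, 1, 4, 0]]
n = len(W)
d = [sum(row) for row in W]          # node degrees; D = diag(d)
A, B = [0, 1], [2, 3]

cut = sum(W[i][j] for i in A for j in B)     # cut(A, B) = 2
assoc_A = sum(d[i] for i in A)               # assoc(A, V) = 10
assoc_B = sum(d[i] for i in B)               # assoc(B, V) = 10
ncut = cut / assoc_A + cut / assoc_B         # Ncut(A, B) = 0.4

b = assoc_A / assoc_B
y = [1.0 if i in A else -b for i in range(n)]

# y^T (D - W) y = (1/2) * sum_ij w_ij * (y_i - y_j)^2 for symmetric W
num = sum(W[i][j] * (y[i] - y[j]) ** 2 for i in range(n) for j in range(n)) / 2
den = sum(d[i] * y[i] ** 2 for i in range(n))
print(ncut, num / den)   # both equal 0.4
```

In practice y is relaxed to real values and taken as the generalized eigenvector of (D - W)y = λDy with the second smallest eigenvalue, which is what makes the criterion efficiently computable.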
1 2 3 4 5 6 7 8
1 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0
3 0 63.96937 63.04625 0 0 0 0 0
4 0 0 0 61.45696 0 0 0 0
5 0 0 0 0 56.69138 0 0 0
6 0 0 0 0 0 52.0375 53.44258 54.79843
7 0 0 0 0 0 0 0 0
8 0 0 0 0 0 0 0 0
Figure 3 Ethernet Delay (sec)
Figure 4 Ethernet Throughput (bits/sec) for link variation
As shown in Figure 3, delay occurs due to the heavy traffic that passes through the network. The cause of the heavy traffic is the large number of users (30) shown in Figure 1. The amount of traffic sent and received increases or decreases depending on the number of users and the amount of data accessed. In the first scenario, according to Table 4.1, the type of Ethernet link used is 10 Base T. The graphs in Figure 3 show the delay statistics, which are higher than for the other two types of links. The reason is the low data rate of the 10 Base T links as compared to the other two link types listed in Table 1, i.e. 100 Base T and 1000 Base X. Another parameter used for performance analysis is:

Ethernet – Traffic Received (bits/sec): this statistic defines the throughput (bits/sec) of the data forwarded by the Ethernet layer to the higher layers in this node.

The Ethernet throughput depends on the amount of delay that occurs in the transmission of packets. Figure 3 indicates that the delay in the case of 100 Base T and 1000 Base X is less than for a network with 10 Base T. Hence the throughput achieved with 100 Base T and 1000 Base X is much higher than the throughput achieved with 10 Base T, as shown in Figure 4. Moreover, the 100 Base T and 1000 Base X links provide more bandwidth than 10 Base T. Hence the traffic received through the Ethernet increases with the increase in bandwidth.

3.3 Performance Analysis using a Load Balancer

In another simulation for performance analysis, the number of stations in the network is varied as shown in Figure 5, in which there is only one server, a load balancer, a firewall and other network objects as per the specifications of the required network.

Figure 5: Wired LAN Network Model using Load Balancer

The scenarios described in Table 2 were then modelled and simulated using the network design shown in Figure 5. The table shows the variations in the number of nodes:
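The link-rate effect described above can be quantified directly: the serialization delay of a frame is its size divided by the link rate, so 10 Base T pays ten times the per-frame delay of 100 Base T and a hundred times that of 1000 Base X. A small illustration (ours, not part of the OPNET model):

```python
# Serialization (transmission) delay of one maximum-size Ethernet frame
# (1518 bytes) on the three link types compared in the simulation.
FRAME_BITS = 1518 * 8  # bits in a maximum-size Ethernet frame

links = {"10 Base T": 10e6, "100 Base T": 100e6, "1000 Base X": 1e9}

for name, rate_bps in links.items():
    delay_us = FRAME_BITS / rate_bps * 1e6  # microseconds per frame
    print(f"{name:>11}: {delay_us:8.2f} us/frame")
# 10 Base T: 1214.40, 100 Base T: 121.44, 1000 Base X: 12.14
```

Queueing under heavy load multiplies this gap, which is why the 10 Base T scenario shows the largest Ethernet delay in Figure 3.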
Figure 8 A wired LAN network model with load-balanced multiple servers
Figure 10 Traffic sent & received (Bps) using the No. of Connections policy
Figure 13 Traffic sent & received (bytes/sec) using different load balancing policies
In this paper work performance analysis of the wired network [11] Dr. Reinhard Kuch, “Studienbrief 2: Simulation of
configuration through simulation was started with the networks using OPNET ITGURU Academic Edition v
investigation of the network performance using various types of 9.1,” version: 1.0, 26-04-2009.
[12] “Opnet_Modeler_Manual,” available at
links. The impact of various network configurations on the
http://www.opnet.com
network performance was analyzed using the network [13] T.Velmurugan, Himanshu Chandra and S. Balaji,
simulator- OPNET. It has been investigated that performance of “Comparison of Queuing disciplines for Differentiated
the wired Networks is good if high speed Ethernet links are used Services using OPNET,” IEEE, ARTComm.2009, pp.
under heavy network loads. Moreover, the mechanism of load 744-746, 2009.
balancing also improves the performance by reducing and [14] Yang Dondkai and Liu Wenli, “The Wireless Channel
balancing the load equally among multiple servers. Modeling for RFID System with OPNET,” in the
Proceedings of the IEEE communications society