Foreword
We are glad and proud to have been able to hold the International Conference on Contemporary Issues in Computer and Information Science (CICIS) for the third successive year, with your delightful presence. Besides paying special attention to scientific progress in this subject, the conference aims at bringing about better interaction between the different areas of computer science and everyday life, and regards this fraternity as a must for the progress of society. With this approach, the CICIS Conference pays direct attention to several applied aspects of computer and information science. The third conference concentrates on graph and geometrical algorithms, intelligent systems, bioinformatics, and IT and society, in addition to all areas of computer science.
What makes us prouder still is the coincidence of this conference with the 20th anniversary of the founding of the Institute for Advanced Studies in Basic Sciences (IASBS), where outstanding scientific achievements are carried out in a friendly environment; this would never have happened without God's assistance and the notable efforts of the directors, teachers, researchers and students.
The 277 received papers indicate kind feedback and make us more determined. Of these, 45 papers (16.24%) were accepted for oral presentation, 77 papers (27.79%) for poster presentation, and 155 papers were rejected.
Besides IASBS, the Computer Society of Iran, the Iranian branch of the IEEE, and the University of Zanjan have collaborated in and supported this conference, and we hope this improves the scientific results.
Last but not least, we would like to thank our sponsors for their help and financial support: the Information Technology and Digital Media Development Center, the Statistics and Informatics Department of the Sanjesh Organization, Arameh Innovative Researchers, and BrownWalker Publisher.
Contents
Reducing Packet Overhead by Improved Tunneling-based Route Optimization
Mechanism
Hooshiar Zolfagharnasab
15
Data mining with learning decision tree and Bayesian network for data
replication in Data Grid
Farzaneh Veghari Baheri, Farnaz Davardoost and Vahid Ahmadzadeh
49
Improvement of the Modeling Airport Assignment Gate System Using Self-Adaptive Methodology
Masoud Arabfard, Mohamad Mehdi Morovati and Masoud Karimian Ravandi
95
A new model for solving capacitated facility location problem with overall cost
of losing any facility and comparison of Particle Swarm Optimization,
Simulated Annealing and Genetic Algorithm
Samirasadat jamali Dinan, Fatemeh Taheri and Farhad Maleki
100
Predicting Crude Oil Price Using Particle Swarm Optimization (PSO) Based
Method
Zahra Salahshoor Mottaghi, Ahmad Bagheri and Mehrgan Mahdavi
131
Evaluate and improve the SPEA using fuzzy c-mean clustering algorithm
Pezhman Gholamnezhad and Mohammad mehdi Ebadzadeh
251
Hypercube Data Grid: a new method for data replication and replica consistency in data grid
Tayebeh Khalvandi, Amir Masoud Rahmani and Seyyed Mohsen Hashemi
255
Bus Arrival Time Prediction Using Bayesian Learning for Neural Networks
Farshad Bakhshandegan Moghaddam, Alireza Khanteimoory and Fatemeh Forutan
Eghlidi
267
Repairing Broken RDF Links in the Web of Data by Superiors and Inferiors
sets
Mohammad Pourzaferani and Mohammad Ali Nematbakhsh
365
A Simple and Efficient Fusion Model based on the Majority Criteria for
Human Skin Segmentation
S. Mostafa Sheikholslam, Asadollah Shahbahrami, Reza PR Hasanzadeh and Nima
Karimpour Darav
374
The study of indices and spheres for implementation and development of trade
single window in Iran
Elham Esmaeilpour and Noor Mohammad Yaghobi
458
Web Anomaly Detection Using Artificial Immune System and Web Usage
Mining Approach
Masoumeh Raji, Vali Derhami and Reza Azmi
462
A Fast and Robust Face Recognition Approach Using Weighted Haar And
Weighted LBP Histogram
Mohsen Biglari, F. Mirzaei and H. Ebrahimpour-Komleh
467
The lattice structure of Signed chip firing games and related models
A. Dolati, S. Taromi and B. Bakhshayesh
525
Hybrid Harmony Search for the Hop Constrained Connected Facility Location
Problem
Bahareh khazaei, Farzane Yahyanejad, Angeh Aslanian and S. Mehdi Hashemi
566
To enrich the life book of IT specialists through shaping living schema Strategy
based on Balance-oriented Model
Mostafa Jafari
595
Abstract: The common Mobile IPv6 mechanisms, bidirectional tunneling and route optimization, impose inefficient per-packet overhead when both nodes are mobile. Researchers have proposed methods to reduce per-packet overhead while remaining compatible with the standard mechanisms. In this paper, three Mobile IPv6 mechanisms are discussed to show their efficiency and performance. Following the discussion, a new mechanism called improved tunneling-based route optimization is proposed, and a performance analysis of packet overhead shows that the proposed mechanism has less overhead than the others. Analytical results indicate that improved tunneling-based route optimization transmits more payload because it sends packets with less overhead.
Introduction

In order to enable mobility over IP protocols, the network layer of mobile devices should send messages to

Related Works

Some attempts have been performed to improve security and performance in Mobile IP. C. Perkins proposed a security mechanism for binding updates between CN and MN in [5]. C. Vogt et al. [6] proposed proactive address testing in route optimization. From another perspective, D. Le and J. Chang suggested reducing bandwidth usage by using a tunnel header instead of the route optimization headers when both MN and CN are mobile [7].

Corresponding Author: IT Manager at Soroush Educational Complex, Tehran, Iran, Tel: (+98) 912 539-4829
3.1 Bidirectional Tunneling

3.3 Tunneling-based Route Optimization
Figure 1: Protocol model for route optimization and packets passing between layers
A tunnel header is used instead of the 48 bytes of extension headers required when standard route optimization is used. The result presented in [7] shows that TRO can increase performance in Mobile IP compared with the standard mechanisms. Figure 1 depicts the protocol model in the sender and receiver.
Improved Tunneling-based Route Optimization (ITRO)
Below, we discuss two scenarios to explain our proposed method. It should be mentioned that a tunnel between MN and CN should be initiated first. Also, BU messages have been sent to construct the binding cache for both CoA and HoA.

When MN wants to send a packet to CN, since mobility is transparent to the upper layers in the nodes, MN's network layer sets the source of the packet to MN's HoA and the destination to CN's HoA. In the next step, when the tunnel manager gets the packet, it updates the packet by changing both the packet's source and destination. Since MN is in a foreign network, it changes the source field from its HoA to its CoA. Then, searching the binding cache (with the help of CN's HoA), it finds CN's corresponding CoA and writes it in the destination address field. The altered packet is sent directly to CN through the tunnel.

Upon reception of the packet at the other side of the tunnel, CN's tunnel manager manipulates the packet to make it ready for the upper layers. The first manipulation is performed by changing the packet's destination from CN's CoA to CN's HoA. The next step is to search the binding cache with MN's CoA to find the corresponding HoA. CN's tunnel manager then changes the packet's source from MN's CoA to what has just been found, MN's HoA. As soon as the changes are finished, the updated packet is surrendered to the upper layers. According to Fig. 1, packets sent from MN to CN are addressed as shown in Fig. 2.

The same action is performed when a packet is sent from CN to MN. Since CN's upper network layers are unaware of mobility, a packet is constructed which is addressed from CN's HoA to MN's HoA. As the packet is passed to CN's tunnel manager, using the binding cache, the destination of the packet is changed from MN's HoA to MN's CoA. Since CN knows its own CoA, the tunnel manager updates the packet's source from its HoA to its CoA. Then the packet is tunneled to MN. Similarly, MN's tunnel manager changes the packet's destination from MN's CoA to MN's HoA, and, searching the binding cache, the packet's source is changed from CN's CoA to CN's HoA.
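The address rewriting performed by the two tunnel managers can be summarized in a short sketch. This is an illustrative Python model, not the paper's implementation; the Packet and BindingCache names are assumptions made for the example.

```python
class BindingCache:
    """Maps home addresses (HoA) to care-of addresses (CoA) and back."""

    def __init__(self):
        self.hoa_to_coa, self.coa_to_hoa = {}, {}

    def register(self, hoa, coa):   # filled by received BU messages
        self.hoa_to_coa[hoa] = coa
        self.coa_to_hoa[coa] = hoa

class Packet:
    def __init__(self, src, dst, payload):
        self.src, self.dst, self.payload = src, dst, payload

def tunnel_send(pkt, own_coa, cache):
    # Upper layers addressed the packet HoA -> HoA; rewrite both fields
    # so the packet can be routed directly through the tunnel.
    pkt.src = own_coa                    # own HoA -> own CoA
    pkt.dst = cache.hoa_to_coa[pkt.dst]  # peer HoA -> peer CoA
    return pkt

def tunnel_receive(pkt, own_hoa, cache):
    # Restore the HoA view before surrendering the packet upward.
    pkt.dst = own_hoa                    # own CoA -> own HoA
    pkt.src = cache.coa_to_hoa[pkt.src]  # peer CoA -> peer HoA
    return pkt
```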
4.3 Changing BU messages

To maintain compatibility with other MIPv6 mechanisms, the binding messages should change. We propose using two flags to distinguish the three different mechanisms. Called ROT0 and ROT1, these flags indicate whether route optimization, tunneling-based route optimization, or improved tunneling-based route optimization is used. The routing mechanisms according to ROT0 and ROT1 are listed in Table 1.

Table 1: Routing mechanism due to ROT flags

Mechanism                                                ROT1   ROT0
Route Optimization                                       0      0
Tunneling-based Route Optimization                       0      1
Improved Tunneling-based Route Optimization (proposed)   1      1 or 0
Evaluation

We have evaluated our proposed mechanism via comparison with three other mechanisms. Since the improved tunneling-based route optimization mechanism intends to reduce header overhead, the main comparison metric is the number of bytes consumed to establish mobile communication. We used relation (1), proposed in [7], to calculate the mobility overhead. It should be noted that the mobility overhead is the number of bytes used to route packets from one mobile node to another, and is different from the overhead used to route packets through the network layer.

$$\text{Mobility Overhead Ratio} = \frac{\text{Mobility Addition Size}}{\text{Original Packet Size}} \qquad (1)$$

In comparison with the bidirectional tunneling mechanism, the communication time is also considered; it is defined as the total time for a packet to be delivered from source to destination. Moreover, packets are assumed to be 1500 bytes, the maximum transmission unit size in Ethernet, containing the IPv6 packet, extension headers if needed, and the tunneling overhead.

Figure 2: Improved tunneling-based route optimization packets due to Fig. 1

5.1 Comparing to Bidirectional Tunneling

As mentioned before, in bidirectional tunneling, packets from CN are tunneled from the HA to MN, and replies take the same tunnel from MN to the HA, which is called reverse tunneling. Each time a packet is tunneled, 40 additional bytes are used to route the packet to the other side of the tunnel. As a packet is tunneled twice to reach its destination, 80 bytes are consumed in the two communications. The total bandwidth used to carry a packet from source to destination is calculated as follows:

$$\frac{40}{1500 - 40} + \frac{40}{1500 - 40} = 5.48\% \qquad (2)$$
The delay of the bidirectional tunneling mechanism consists of three Internet routing times, computed from:

$$\text{Total time} = T_{MN \to HA_{MN}} + T_{HA_{MN} \to HA_{CN}} + T_{HA_{CN} \to CN} = 3\,T_{Internet} \qquad (3)$$

Figure 3: Comparing delay time for the bidirectional tunneling mechanism and route optimization based mechanisms

5.2 Comparing to Route Optimization

Although both route optimization and the proposed mechanism construct a tunnel to reduce the delay time and the overhead needed for two mobile nodes to communicate, different overheads are used to route a packet in the constructed tunnel. In the situation where both nodes are mobile, route optimization uses the Home Address Option and Type 2 routing extension headers, as depicted in Fig. 4. Since each extension header is 24 bytes in size, the total mobility header added to an IPv6 packet is 48 bytes. So the mobility overhead ratio is calculated as follows:

$$\text{Mobility Overhead Ratio} = \frac{24\,B_{Type\,2} + 24\,B_{HoA\,Option}}{1500 - 48} = \frac{48}{1452} = 3.3\% \qquad (6)$$

Because improved tunneling-based route optimization uses the address fields of the packet both for tunneling and for IPv6 routing, as calculated before, it adds 0% of the total packet size. Using the same tunnel for transmitting packets, the total delay time is the same for both route optimization and the proposed method.
5.3 Comparing to Tunneling-based Route Optimization

Figure 5: Tunneling-based route optimization packets due to Fig. 1

Tunneling-based route optimization adds a 40-byte IPv6 tunnel header to each packet, so its mobility overhead ratio is:

$$\frac{40\,B_{IPv6\,tunnel\,header}}{1500 - 40} = \frac{40}{1460} = 2.74\% \qquad (7)$$

Table 2: Comparison between Mobile IPv6 mechanisms

Mechanism                                                Packet Overhead (%)   Delay (Internet Time)
Bidirectional Tunneling                                  6.6                   3
Route Optimization                                       3.3                   1
Tunneling-based Route Optimization                       2.74                  1
Improved Tunneling-based Route Optimization (proposed)   0                     1

Conclusion

To maintain compatibility with the standard mechanisms, not only should the tunnel manager be changed, but the Binding Update messages must also be altered. Comparison with Bidirectional Tunneling, Route Optimization and Tunneling-based Route Optimization shows that the packet overhead of the proposed mechanism is reduced significantly compared with the previous mechanisms. Therefore, owing to the smaller overhead of each packet, more data can be transmitted through the network via a Mobile IP communication.

Acknowledgement

I would like to thank Soroush Educational Complex and especially Mr. Abdullah Shirazi for financial support and assistance. I should also thank Mr. Seyed Morteza Hosseini for preparing the final version of the PDF using LaTeX2e.

References

[11] M. Kalman and B. Girod, Modeling the delays of successively-transmitted Internet packets, In Proceedings of the
Faculty of Engineering
Faculty of Engineering
Payam@eng.ui.ac.ir
M.R.Khayyambashi@eng.ui.ac.ir
Introduction
Artificial neural networks have been developed as generalizations of mathematical models of biological nervous systems. A first wave of interest in neural networks emerged after the introduction of simplified neurons by McCulloch and Pitts (1943). Neural networks have the ability to perform tasks such as pattern recognition, classification, regression, the solution of differential equations, etc., as demonstrated in [1,2]. The basic processing elements of neural networks are called artificial neurons, or simply neurons or nodes. In a simplified mathematical model of the neuron, the effects of the synapses are represented by connection weights that modulate the effect of the associated input signals, and the nonlinear characteristic exhibited by neurons is represented by a transfer function. The neuron impulse is then computed as the weighted sum of the input signals, transformed by the transfer function.
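A minimal sketch of this neuron model, assuming a sigmoid transfer function (the text does not fix one):

```python
import math

def neuron(inputs, weights, bias=0.0):
    # Weighted sum of the input signals ...
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    # ... transformed by the nonlinear transfer function.
    return 1.0 / (1.0 + math.exp(-activation))

print(neuron([0.5, -1.2, 3.0], [0.4, 0.1, -0.7]))
```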
The optimization method used to determine the weight adjustments has a large influence on the performance of neural networks. While gradient descent is a very popular optimization method, it is plagued by slow convergence and susceptibility to local minima, as demonstrated in [3]. Therefore, other approaches to improve neural network training have been introduced, as demonstrated in [4]. These methods include global optimization algorithms such as the Seeker Optimization Algorithm [5], Genetic Algorithms [6-8], Particle Swarm Optimization Algorithms [9-10], the Imperialist Competitive Algorithm [11] and the Harmony Search Algorithm [12].
Football Optimization Algorithm

Figure 1 shows the flowchart of the proposed algorithm. FOA encodes potential solutions to a specific problem on players and applies teamwork operators to these players. The algorithm is viewed as a function optimizer, although the range of problems to which it has been applied is quite broad.
Parameter   Description                                    Value
n           Maximum number of players                      [11, inf)
            Divide coefficient of players                  (0, 1]
            Number of replacements in entire iterations    [0, n]
            Number of replacements per iteration           [0, n]
            Pass coefficient                               [0, 1]
            Velocity coefficient of players                best value [0.5, 2]
            Spectators effect on players                   [0, 1]
            Spectators effect on parameters                [0, 1]
2.2 Creating a team

The first step in the implementation of any optimization algorithm is to generate an initial population [13]. In the FO algorithm, a population of players called a team, which encodes candidate solutions to an optimization problem, evolves towards better solutions. In other words, each player is encoded as an array. The population size (n) depends on the nature of the problem, but typically contains several hundred or thousands of possible solutions. The algorithm usually starts from a population of randomly generated individuals covering the entire range of possible solutions (the search space).
$$Team = \begin{bmatrix} player_1 \\ player_2 \\ \vdots \\ player_n \end{bmatrix} \qquad (1)$$
in which each player is represented by a vector of k parameters:

$$player_i = [Parameter_1, Parameter_2, \ldots, Parameter_k]$$

2.3 Dividing players into main players and substitute players

$$mainPlayers = \begin{bmatrix} player_1 \\ \vdots \\ player_m \end{bmatrix}_{m \times k}, \qquad substitutePlayers = \begin{bmatrix} player_{m+1} \\ \vdots \\ player_n \end{bmatrix}_{s \times k} \qquad (2)$$

in which m = round(n x divide coefficient) and s = n - m.

Players are ranked by their fitness perturbed with uniform noise:

$$Rank_i = Fitness(player_i) + U(-d, +d) \qquad (4)$$
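The team creation, division and ranking steps of equations (1), (2) and (4) can be sketched as follows; the objective function and the parameter values are placeholders, not the paper's settings.

```python
import random

n, k = 100, 10            # players and parameters per player
divide_coefficient = 0.5  # from the parameter table, in (0, 1]
d = 0.05                  # half-width of the rank noise U(-d, +d)

# Equation (1): a team is an n-by-k population of random players.
team = [[random.uniform(-1, 1) for _ in range(k)] for _ in range(n)]

def fitness(player):      # placeholder objective
    return -sum(p * p for p in player)

# Equation (4): rank by fitness perturbed with uniform noise.
team.sort(key=lambda p: fitness(p) + random.uniform(-d, d), reverse=True)

# Equation (2): split into main and substitute players.
m = round(n * divide_coefficient)
main_players, substitute_players = team[:m], team[m:]
```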
2.7 Spectators

The impact of spectators upon sport is substantial and varied, and is one of the reasons for the success of football teams. Spectators at the stadium and at team practices increase the morale and sense of responsibility of the football players. This feeling is transferred among all players and even coaches and managers. This is shown in Figure 4, in which the spectators' effect is modeled by a random change in the players' parameters. In equation (5), m is the number of main players and k is the number of parameters of each player:

$$EffectPlayers = \text{(spectators effect on players)} \cdot k, \qquad EffectParameters = \text{(spectators effect on parameters)} \cdot m \qquad (5)$$
2.8 Substitutes

$$mainPlayers = \begin{bmatrix} player_1 \\ \vdots \\ player_m \end{bmatrix}_{m \times k}, \qquad substitutePlayers = \begin{bmatrix} player_{m+1} \\ \vdots \\ player_n \end{bmatrix}_{s \times k}$$

2.9 Convergence

This process is repeated until a termination condition has been reached. Common terminating conditions are:

- A solution is found that satisfies minimum criteria (the goal).
- A fixed number of iterations is reached.
- The allocated budget (computation time/money) is reached.
- Manual inspection.

$$\sum_{i=1}^{H} w_i^p \, f\!\left[\sum_{j=1}^{n} w_j^p x_j\right] \qquad (6)$$
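Relation (6) is the usual output of a one-hidden-layer network whose weights are supplied by a player's parameters. A small sketch, assuming a tanh transfer function:

```python
import math

def network_output(x, hidden_weights, output_weights):
    # hidden_weights: H rows of n input weights; output_weights: H values.
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)))
              for row in hidden_weights]
    return sum(wo * h for wo, h in zip(output_weights, hidden))

y = network_output([0.2, 0.8], [[0.1, -0.3], [0.5, 0.7]], [1.0, -0.5])
```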
4 Experimental Results

In this paper, the performance of the proposed method is evaluated in comparison to the ICA, PSO and GA algorithms for training a three-layer perceptron neural network. In the FOA algorithm, the parameters are set to 0.5, 1, 2.5, 0.1 and 0.1, respectively, and the number of players is 100. In the ICA algorithm, the parameters, including a and b, are set to 2, 0.5 and 0.2, respectively; the numbers of imperialists and colonies are 10 and 100. In the PSO algorithm, the parameters c1 and c2 are fixed at 1.5 and the number of particles is 100; setting this value for c1 and c2 gives equal chances to the social and cognition components taking part in the search process. In the GA, the population size is 100, and the mutation and crossover rates are set to 0.03 and 0.5, respectively. The number of iterations is 1000 for all methods. The datasets used for evaluating the proposed approach are well-known classification datasets available for download from the UCI repository. Five datasets were selected, among them:

Teaching Assistant Evaluation: includes evaluations of teaching performance; scores are low, medium, or high. It contains 151 samples with 5 attributes.

50% of the instance data was applied for training the neural network and the remaining 50% for testing. The neural network was trained by the FOA, ICA, PSO and GA algorithms and the results were compared with each other. An accurate comparison of the four methods is presented, using 10-fold experiment replication. For each classification problem the same topologies have been selected, and the minimum cost function value and mean cost value versus epochs are presented. The results of these experiments are presented in Tables 2 and 3. Figure 7 shows the mean test error and the mean train error (false classification percent) for each of the four compared optimization methods on the five classification problems. From the experimental results, it can be seen that in all cases the FOA performed better.

Figure 5: Training and classification processes

Table 2: Train result for each of the four methods

            FOA                  ICA                  PSO                  GA
Dataset     MSE     Precision    MSE     Precision    MSE     Precision    MSE     Precision
Wine        0.0509  0.9348       0.0945  0.8587       0.0552  0.8478       0.1783  0.4022
Glass       0.4881  0.742        0.5762  0.5158       0.5505  0.5965       0.6948  0.4649
Heart       0.0887  0.8603       0.1778  0.7647       0.1051  0.6765       0.1999  0.7059
Vertebral   0.2836  0.7134       0.4148  0.6115       0.3154  0.7061       0.4319  0.5796
Teaching    0.2878  0.6538       0.5264  0.4487       0.3103  0.6026       0.5077  0.4872

Table 3: Test result for each of the four methods

            FOA                  ICA                  PSO                  GA
Dataset     MSE     Precision    MSE     Precision    MSE     Precision    MSE     Precision
Wine        0.0224  0.9651       0.0723  0.9186       0.3316  0.7326       0.2350  0.3372
Glass       1.261   0.651        1.5927  0.5300       1.4525  0.4800       2.8823  0.4000
Heart       0.2008  0.7463       0.2099  0.7015       0.1874  0.5970       0.2185  0.6642
Vertebral   0.3649  0.6948       0.4298  0.5294       0.4293  0.6869       0.5301  0.5556
Teaching    1.621   0.3562       2.600   0.2899       2.077   0.3014       3.588   0.2110

(chart: mean test error for FOA, ICA, PSO and GA on the five problems)
Figure 7: Mean square error for FOA per iteration

Figure 7 shows that the proposed algorithm trained very well compared with the other algorithms.

Conclusion

In this paper, an optimization algorithm based on modeling a football match is proposed. Each individual of the population is called a player. The team is divided into two groups: main players and substitute players. A team is composed of good passers and mobile players. Teamwork among the main players forms the core of this algorithm and results in the convergence of the ball to the goal, as expected. In this cooperation, the ball is moved gradually towards the goal and finally the best player takes a shot at the goal. The Football Optimization Algorithm is then used as an evolutionary algorithm to optimize the weights of a neural network. The FOA method is evaluated on five known classification problems and compared against the state-of-the-art methods ICA, PSO and GA. Consideration of the results showed that the training and test errors of the network trained by the FOA algorithm are reduced in comparison to the other three methods. Future work will consist in modifying some parts of the algorithm to improve its execution speed.

References

[1] T. J. Glezakos, T. A. Tsiligiridis, L. S. Iliadis, C. P. Tsiligiridis, F. P. Maris, and P. K. Yialouris, Feature extraction for time-series data: An artificial neural network evolutionary training model for the management of mountainous watersheds, Neurocomputing 73 (2009), 49-59.
[2] T. J. Glezakos, G. Moschopoulou, T. A. Tsiligiridis, S. Kintzios, and C. P. Yialouris, Plant virus identification based on neural networks with evolutionary preprocessing, Computers and Electronics in Agriculture 70 (2010), 263-275.
[3] M. Georgiopoulos, C. Li, and T. Kocak, Learning in the feed-forward random neural network: A critical review, Performance Evaluation 68 (2011), 361-384.
Department of Computer Engineering, Zanjan Branch, Islamic Azad University, Zanjan, Iran
MarziehJavadi@ymail.com
Hassan NADERI
Faculty of Iran University of Science and Technology (IUST)
naderi@iust.ac.ir
Abstract: Today, with the growth of XML documents on the web, attempts to develop XML retrieval systems are also growing. As more XML retrieval systems are offered, their performance evaluation becomes more important. In this context, there are some metrics that are used to rank retrieval systems, and most of them extend the definitions of precision and recall.
In this paper, rankings of XML retrieval systems for the INEX 2010 runs, according to three methods of averaging precision and recall values at specific rank cutoffs, are compared with the results of the MAiP metric, which is used for evaluation by INEX.
Introduction
2.2 Topics

2.3 MAiP

Evaluation Metrics

3.1 F Measure
$$\mathrm{GeometricMean} = \sqrt{P@r \cdot R@r}$$
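The three rank-cutoff averages compared in this paper reduce to simple formulas over precision and recall at cutoff r; the F measure below assumes the balanced (harmonic mean) form.

```python
import math

def a_at(p, r):                  # arithmetic mean of P@r and R@r
    return (p + r) / 2

def g_at(p, r):                  # geometric mean of P@r and R@r
    return math.sqrt(p * r)

def f_at(p, r):                  # balanced F measure of P@r and R@r
    return 2 * p * r / (p + r) if p + r else 0.0

p_at_50, r_at_50 = 0.60, 0.45    # illustrative values
print(a_at(p_at_50, r_at_50), g_at(p_at_50, r_at_50), f_at(p_at_50, r_at_50))
```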
Correlation of each averaging method with MAiP at the rank cutoffs:

             @1      @2      @5      @10     @25     @50
a vs MAiP    0.87    0.85    0.92    0.93    0.92    0.94
G vs MAiP    0.84    0.85    0.93    0.93    0.92    0.93
F vs MAiP    0.80    0.86    0.90    0.90    0.89    0.91
Conclusions and Future Works

The rankings were compared for the runs of INEX IMDB 2010; the Spearman correlation coefficient with the MAiP measure is 0.91. Despite the importance of research on overcoming the weaknesses of existing metrics and of efforts to create new metrics, the results of the simplest definitions are very close to those of the best existing metrics. According to the results shown in the tables in Section 4, the arithmetic mean of precision and recall at rank cutoff 50 produced the best results. Hence it can be an appropriate baseline for comparing the results of metrics created in the future. In the future, we want to expand this research with the Wikipedia collection and more cutoff points.

References

[1] J. Pehcevski and B. Piwowarski, Evaluation Metrics for Semi-Structured Text Retrieval (2009).
[2] M. Lalmas and A. Tombros, INEX 2002-2006: Understanding XML Retrieval Evaluation, DELOS'07: Proceedings of the 1st International Conference on Digital Libraries: Research and Development, Springer, Berlin/Heidelberg (2007), 187-196.
[3] N. Fuhr, N. Govert, G. Kazai, and M. Lalmas, INEX: Initiative for the Evaluation of XML Retrieval, Proceedings of the SIGIR 2002 Workshop on XML and Information Retrieval (2002).
[4] A. Trotman and Q. Wang, Overview of the INEX 2010 Data Centric Track, Lecture Notes in Computer Science 6932, Springer, Berlin/Heidelberg (2011), 171-181.
[5] J. Kamps, J. Pehcevski, G. Kazai, M. Lalmas, and S. Robertson, INEX 2007 evaluation measures, Lecture Notes in Computer Science 4862, Springer, Heidelberg (2008), 24-33.
[6] J. Pehcevski and J.A. Thom, HiXEval: Highlighting XML Retrieval Evaluation, in Advances in XML Information Retrieval and Evaluation: Fourth Workshop of the Initiative for the Evaluation of XML Retrieval (INEX 2005), Lecture Notes in Computer Science 3977, Springer, Berlin/Heidelberg (2006), 43-57.
[7] J. Pehcevski, Evaluation of Effective XML Information Retrieval, PhD thesis, Chapter 5, pages 149-184, 2006.
Mohammad Kalantari
Qazvin, Iran
Qazvin, Iran
Neda.Azadi@qiau.ac.ir
md.kalantari@aut.ac.ir
Abstract: Modeling and evaluating a grid computing environment is very difficult because of its complexity and distributed nature. The present paper studies the evaluation of the performability of grid computing. Here, a tree structure is assumed for the grid, with the RMS at its root. Users give their tasks as well as their requirements to the RMS and finally take back the result from it. The RMS divides the task into smaller parallel subtasks in order to get better performance. Assigning each parallel subtask to several resources also increases its reliability. Analysis of the system by means of reliability and performance measures together is called performability. Performability improvement is directly related to the resource allocation among subtasks. In this paper, we present an algorithm for resource allocation based on the artificial bee colony optimization algorithm. The most important step in optimizing algorithms is to define the objective function to be solved; in this paper, the objective function is the performability improvement. Since a tree structure is used in the resource allocation problem, Bayesian logic and graph theory are also used.
Keywords: RMS; performability; Bayesian model; graph theory; optimization; artificial bee colony; swarm intelligence.
Introduction
If a task is broken into n parallel subtasks, the execution time will decrease. But in a real situation, which is not devoid of failure, any failure in a subtask makes the whole task execution problematic. In order to solve this problem, to increase reliability besides performance, and to bring these two measures into harmony, we assign each subtask to several resources. In this way, if a failure occurs, the subtask can be performed by other resources and the probability of the flawless accomplishment of the main task will increase [5].
In [10] the evaluation of performability is studied in a grid with a star structure. The models used in the evaluation of system performance and reliability are queuing networks [12], stochastic Petri nets [14], Bayesian models [16] and Markov models [13]. Each of the above models can be evaluated by analysis or simulation methods [15]. Like [5], we use the Bayesian method for evaluation.
As mentioned, one way to increase the grid's performability is to optimize the resource allocation among subtasks. In [6] this is done with a genetic algorithm [25]. Nevertheless, in the present paper we make use of the artificial bee colony, since it is simpler and more flexible than the genetic algorithm.

The rest of the paper is organized as follows. Section 2 presents a model for the evaluation of reliability and performance. The artificial bee colony algorithm is explained in the third part. The result of the optimization is presented in part 4, and in the final part a comparison between the genetic optimization algorithm and the artificial bee colony optimization algorithm is presented.
Few studies have been done on grid performability, since grid complexity challenges model making and evaluation [8]. In this part, the grid is evaluated from a performability point of view. In order to utilize this model, the following hypotheses are needed [5, 6, 9]:

- The requirements are taken into account immediately; therefore, no time is wasted.
- The RMS divides each task into several subtasks.
- The resources are automatically registered in the RMS.

According to the assumptions above, when subtask j is assigned to a resource i, the processing time is a random variable that can be calculated from this relation:

$$T_{ij} = \frac{C_j}{x_i}$$

where $x_i$ is the processing speed of resource i and $C_j$ is the computational complexity of subtask j. Suppose data transmission between the RMS and resource i is accomplished through links belonging to a set $\gamma_i$, where $s_i$ is the link with minimum bandwidth in the set $\gamma_i$, and $a_j$ denotes the amount of data that should be transmitted for subtask j. Then the random time of communication between the RMS and the resource i that executes subtask j can be calculated from this relation:

$$\tau_{ij} = \frac{a_j}{s_i}$$

For a constant failure rate, the probability that resource i does not fail until the completion of subtask j can be obtained as $p_{ij} = e^{-\lambda_i T_{ij}}$, where $\lambda_i$ is the failure rate of resource i. Likewise, the probability that the communication channel between the RMS and resource i does not fail until the completion of subtask j is $q_{ij} = e^{-\pi_i \tau_{ij}}$, where $\pi_i$ is the failure rate of the communication channel between the RMS and resource i.

The random total completion time for subtask j assigned to resource i is equal to $T_{ij} + \tau_{ij}$, and the probability of completing within this time is $p_{ij} q_{ij}$.

If the user has requested a time limitation $\theta$ for the execution, the reliability of each task can be calculated through this relation:

$$R(\theta) = \sum_{i=1}^{I} Q_i \cdot 1(\theta_i < \theta) \qquad (1)$$

$$R(\infty) = \sum_{i=1}^{I} Q_i \qquad (2)$$

In these relations, I is the number of realizations of performing a task, $\theta_i$ equals the time of executing the task by realization i, and $Q_i$ is the probability of performing the task by realization i in time $\theta_i$.

A tree is composed of the combination of resources and communication channels taking part in the execution of a task. Each tree contains several minimal spanning trees (MSTs) that guarantee the complete execution of a task by the subtasks. If any composing part of a tree encounters a failure, the whole task will be jeopardized. As any task is divided into parallel subtasks, different realizations (MSTs) are made. The execution time of any MST is determined by the features of the grid, such as the bandwidth of the communication channels, the processing speed of the performing resources, and so on. Here $E_i$ is the event that $MST_i$ is available and $\bar{E}_i$ is the event that $MST_i$ is not available; a binary search tree can be used to calculate relation 4.

The conditional expected service time W is considered a measure of performance; it determines the expected service time, given that the service does not fail:

$$W = \frac{\sum_{i=1}^{I} \theta_i Q_i}{R(\infty)} \qquad (3)$$

$R(\infty)$ is defined as the probability of producing the correct outputs without respect to the service time.
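Relations (1)-(3) are straightforward to compute once the realizations are enumerated. A sketch with illustrative (theta_i, Q_i) pairs:

```python
def reliability(realizations, deadline=None):
    # realizations: list of (time, probability) pairs, one per MST.
    return sum(q for t, q in realizations
               if deadline is None or t < deadline)

def expected_service_time(realizations):
    # W = sum(theta_i * Q_i) / R(inf), relation (3).
    return sum(t * q for t, q in realizations) / reliability(realizations)

mst = [(40.0, 0.30), (70.0, 0.25), (110.0, 0.20)]  # illustrative values
print(reliability(mst, deadline=100))               # relation (1)
print(expected_service_time(mst))                   # relation (3)
```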
Optimizing Technique

Among the available meta-heuristics are the optimization algorithms of [20] and the cuckoo optimization algorithm.

The artificial bee colony (ABC) algorithm is one of the newest and most applied optimization algorithms because of its simplicity and few control variables; researchers have paid special attention to it since 2005. In the ABC algorithm, each cycle of the search consists of three steps: moving the employed and onlooker bees onto the food sources and calculating their nectar amounts, and determining the scout bees and directing them onto possible food sources. A food source position represents a possible solution to the problem to be optimized; in our problem, a solution is the best distribution of resources among the subtasks to gain the highest degree of reliability. The amount of nectar of a food source corresponds to the quality of the solution represented by that food source (the fitness function). Onlookers are placed on the food sources using a probability-based selection process: as the nectar amount of a food source increases, the probability with which that food source is preferred by onlookers increases too [21].
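The onlooker placement step can be sketched as fitness-proportional (roulette wheel) selection; the details are an assumption consistent with [21], not the paper's exact code.

```python
import random

def select_food_source(nectar_amounts):
    # Higher nectar -> proportionally higher chance of being chosen.
    total = sum(nectar_amounts)
    threshold, cumulative = random.uniform(0, total), 0.0
    for i, nectar in enumerate(nectar_amounts):
        cumulative += nectar
        if cumulative >= threshold:
            return i
    return len(nectar_amounts) - 1

chosen = select_food_source([0.9, 0.4, 0.1, 0.6])
```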
To evaluate the proposed method, we need to define a context for performing the algorithm. Since our goal is to compare the result of this optimizing algorithm with the genetic optimizing algorithm, we use the context of paper [6]. As in [6], the task is broken into 3 subtasks by the RMS. The amount of complexity and the amount of data transferred for each subtask are shown in Tables 1 and 2.

Table 1: Amount of Complexity of Each Subtask

SB1   38.94%
SB2   25.44%
SB3   35.62%

Table 2: Amount of Data Transferred for Each Subtask

SB1   250 MB
SB2   350 MB
SB3   400 MB

There are 9 processing resources connected together in a tree structure. Figure 1 shows the grid environment. In this figure, the rate of resource failures, the rate of communication channel failures, the information transfer speed and the resource processing speed are also exhibited.

This result actually shows that the ABC is a more appropriate solution for resource allocation among the subtasks.

Table 3: Distribution of the Optimal Solution

Subtask   Amount of complexity   Distribution
SB1       38.94                  R1, R3, R7
SB2       25.44                  R2, R5
SB3       35.62                  R4, R6, R8, R9

Figure 2: Diagram of comparison of the two optimizing algorithms

4.1 The effect of bandwidth on evaluation measures
We have two scenarios for limiting the communication channel when calculating the quality of the resources in the ABC algorithm:

1: the bandwidth of a communication channel is taken to be the minimum over the existing channels.
2: the bandwidth of a communication channel is taken to be the average over the existing channels.

If we use the latter scenario, the reliability seems to increase and the performance would increase as well. In Figure 3, the effect of using the average communication channel bandwidth is shown.
4.2
As previously mentioned, users' requirements for execution time are different. Some tasks should be performed within a time limit, while others are supposed to be done correctly without any time constraint. Therefore, the user will be more pleased with the result if the requirements are analyzed in addition to selecting and allocating the resources. This way, the resources and their power will be used in an appropriate manner [7]. So, in this part, we consider the proposed algorithm with respect to users' requirements.

Imagine that, in the previous context, a user delivers the task along with a constraint on execution time (a deadline) to the RMS. Such a limitation of time directly affects the task's reliability since, in such a situation, the subtasks are only assigned to those realizations that can perform them in a shorter time than the user's deadline. In the diagram shown in Figure 4, the three following conditions are compared in a particular distribution. If each execution time limit is between 40 and 120 seconds, the different conditions can be defined as follows:
First condition: the user's deadline is 70 seconds.
Second condition: the user's deadline is 100 seconds.
Third condition: no deadline is defined.

Conclusion

The problem of resource allocation is highly complicated because the complexity and distribution of computational grids exceed those of other distributed environments. It is not possible to optimize such difficult problems with common algorithms; rather, meta-heuristic optimization algorithms are more useful. The most important step in optimizing algorithms is to define the objective function that should be solved. In this paper, the objective function is the simultaneous increase of the two measures of performance and reliability or, in other words, performability. The grid user delivers his intended task as well as requirements (optionally) to the RMS. After dividing the task into parallel subtasks, the RMS allocates the best resources to the subtasks using an optimizing algorithm, and meanwhile it considers the processing resources and communication channels and their attributes, such as speed rate, bandwidth, processing speed and so on. Therefore the task can be performed with the highest performability.

As a future procedure for optimizing the problem of resource allocation in grids, we can use other meta-heuristic algorithms that are inspired by nature, like the artificial immune system, ant colony, particle swarm, etc., and then compare the results. The exponential distribution is a general distribution in the reliability analysis of hardware and software components, but it has a constant rate, while in the real environment the failure rate is a time-varying parameter. Therefore, the use of another appropriate distribution for failures can be studied in the future.
References
[1] I. Foster, C. Kesselman, and S. Tuecke, The anatomy of the grid: Enabling scalable virtual organizations, International Journal of High Performance Computing Applications 15 (2001), 200-222.
[2] I. Foster, D. Becker, and C. Kesselman, The grid 2: Blueprint for a new computing infrastructure, San Francisco, CA: Morgan-Kaufmann, 2003.
[5] Y.S. Dai and G. Levitin, Reliability and performance of tree-structured grid services, IEEE Transactions on Reliability 55(2) (2006), 337-349.
ahmadianalir@msc.guilan.ac.ir
sahraei.shahin@gmail.com
Abstract: Language-based security is a mechanism for analyzing and rewriting applications toward guaranteeing security policies. By use of such a mechanism, issues like access control by employing a trusted computing base would run correctly. Most security problems in software applications were previously handled by this component, owing to the small size of the operating system kernel and its complexity. These days, given the increasing size of OS applications and their natural complexity, this task is fulfilled by newly proposed mechanisms, one of which is treated as security establishment, i.e., using programming language techniques to apply security policies to a specific application. Language-based security includes subdivisions such as In-lined Reference Monitors, Certifying Compilers and improvements to Type Systems, which are described individually later.
Introduction
Author, P. O. Box 41635-3756, F: (+98) 131 6690 271, T: (+98) 131 6690 274-8 (Ex 3017)
To understand language-based security more accurately, we need to introduce two principles of computer security systems and describe them in detail [6]:

I. Principle of Least Privilege (PoLP): while enforcing the policies, each principal is supposed to have the least access required;

II. Minimal Trusted Computing Base (MTCB): the components which should operate properly for the execution system's properties to hold, such as the operating system kernel and hardware, should stay small while the mechanism in use fulfills big tasks. Smaller and simpler systems have fewer errors and improper interactions, which is quite appropriate for establishing safety.
II. Cryptography: this method makes it possible to establish safety for sensitive data transmission over an unreliable network and to use a receiver as a verifier. The power of cryptographic methods is bounded by the complexity of their hypotheses; Digital Encryption Standards (DESs) are susceptible to violation given a sufficient amount of damaging code. Cryptography thus cannot guarantee that code downloaded from a network is safe; it is only able to provide a safe transmission channel for such code through the Internet, avoiding intrusions and suspicious interference;
III. Code instrumentation: another approach, practiced by the operating system in some systems, is to inspect the safety of a program from various aspects such as writes, reads and program jumps. Code instrumentation is a process through which the machine code of an executed program is changed so that the main action can be overseen during execution. Such changes to the sequence of the program's machine code are valid for two reasons: first, the behaviors of the changed code and the initial code are equal, provided the initial code does not violate the safety policy; and second, if a violation by the initial code occurs, the changed code is immediately able to handle the situation in one of two ways: either it recognizes the violation, gains control from the system and terminates the destructive process, or it prevents the fatal effects which are likely to affect the system soon. For instance, let us suppose a program needs to be run on a machine with certain hardware specifications. Assume the program is loaded within a continuous space of memory addresses $[c \cdot 2^k, \; c \cdot 2^k + 2^k - 1]$, where c and k are integers. The program is then linked to run; after execution and obtaining the destination code, by altering the values of indirect addresses and jumping to another address space of memory, the code in question is ready to run [4];
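The effect of such instrumentation on indirect addresses can be illustrated with a small sandboxing sketch in the spirit of SFI: every rewritten address is forced into the segment [c*2^k, c*2^k + 2^k - 1] before use.

```python
def sandbox(address, c, k):
    # Keep the low k bits (the offset) and force the upper bits to the
    # segment id c, so the access can never leave the permitted range.
    return (c << k) | (address & ((1 << k) - 1))

c, k = 5, 12                       # illustrative segment id and size
addr = sandbox(0xDEADBEEF, c, k)
assert c * 2**k <= addr <= c * 2**k + 2**k - 1
```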
Language-Based Security
In computer systems, a compiler usually translates a program written in a high-level language, and the assembler of the destination machine then issues the hex code of the program to the hardware to let it start. The compiler obtains information about programs while compiling them. This information includes the variables' values, types or specifications.
Language-Based Security Techniques
A reference monitor observes a program's execution and stops the program if it violates the safety policies. Typical examples of reference monitors are operating systems (hardware monitors), interpreters (software monitors) and firewalls. Most safety mechanisms today employ a reference monitor.
I. In-lined Reference Monitor (IRM): in traditional approaches, the mechanism employed by the operating system to supervise a program's flawless execution and to confirm the objective safety policies places the reference monitor and the objective system in distinct address spaces. An alternative approach is an in-lined reference monitor; a similar task is performed by SFI, a component that enforces the safety policy for the objective system by stopping reads, writes and jumps to memory outside a predefined area [3]. One method, therefore, is to merge the reference monitor with the objective application. An in-lined reference monitor is specified by the definitions below:

A. Security events: actions to be performed by the reference monitor.

B. Security status: information stored when a safety event occurs, according to which permission to progress is issued.
C. Security updating: sections of the program that run in response to safety events and update the safety status.

SASI is the first generation of IRM, shown by research to be an approach that guarantees the policies in question. The first generation is programmed in 80x86 assembly and the second generation in Java [2]. SASI x86, which is compatible with 80x86 assembly, operates on the output of the gcc compiler. The destination code generated meets the two conditions below:
B. Variables and addresses of target branches marked with tags by the gcc compiler are matched during compilation.

So the first version is comprehensively employed in order to safeguard the program's memory data. In the second version of IRM, JVML SASI, the program is preserved in terms of type safety. JVML instructions provide information about the program's classes, instances, methods, threads and types. Such information can be utilized by JVML SASI to supply safety policies in applications [5]. Rewriting components in the IRM mechanism generate a verifying code together with the related destination code from this extra information [10].
II. Type System: the main objective is to prevent errors occurring during execution. Such errors are identified by a type checker. The importance of this is that a high-level program certainly has many variables. If the variables of a programming language are confined to specific domains, we technically say the language is type safe. Let us assume variable x in Java is defined as a Boolean; whenever it is initialized to False, the result of !x (not x) is True. If variables are under a condition such that their values fall within an undefined domain, we say the language is not type safe. In such languages we do not meet types but a global type including all possible types. An action is fulfilled by arguments, and the output may contain an optional constant, an error, an exception or an uncertain effect [8]. A type system is a component of type safe languages holding the types of all variables, and the types of all expressions are computed during execution. Type systems are employed in order to decide whether a program is well-formed. Type safe languages are known as explicitly typed if types are part of the syntax, and implicitly typed otherwise.
III. Certifying Compiler: a compiler that, given data that guarantees a safety policy, generates a certificate as well as destination code which is machine-checkable, i.e., the certificate checks the policies in question [9]. Code providers take advantage of various techniques to produce such a certificate.
Conclusion

Traditional approaches to safety are founded on the two principles of minimal access privilege and a minimal trusted computing base. In such approaches the safety is warranted by operating systems and kernels, where the kernel acts as a proxy for other processes running on the system. Because of technology advances, the complicacy of operating systems in terms of tasks, and the increase in kernel code for supporting features such as graphic cards and distributed file systems, new approaches establish safety in ways that prove to be high-performance, like safety establishment using programming language techniques. Such techniques fall under three main categories: in-lined reference monitors, type systems and certifying compilers, which have been described separately.
References
[1] J.O. Blech and A. Poetzsch-Heffter, A Certifying Code Generation Phase, Proceedings of the Workshop on Compiler Optimization meets Compiler Verification (2007), 65-82.
[2] U. Erlingsson and F.B. Schneider, IRM Enforcement of Java Stack Inspection, IEEE Symposium on Security and Privacy, Oakland, California (2000), 246-255.
[3] R. Wahbe, S. Lucco, T. Anderson, and S. Graham, Efficient Software-Based Fault Isolation, Proc. 14th ACM Symp. on Operating System Principles (SOSP) (1993), 203-216.
[4] K. Crary, D. Walker, and G. Morrisett, Typed Memory Management in a Calculus of Capabilities, Proc. 26th Symp. Principles of Programming Languages (1999), 262-275.
[5] U. Erlingsson and F.B. Schneider, SASI Enforcement of Security Policies: A Retrospective, Proc. 26th Symp. Principles of Programming Languages (1999), 262-275.
[6] F.B. Schneider, G. Morrisett, and R. Harper, A Language-Based Approach to Security, Lecture Notes in Computer Science (2001), 86-101.
[7] D. Kozen, G. Morrisett, and R. Harper, Language-Based Security, Mathematical Foundations of Computer Science (1999), 284-298.
[8] R. Hahnle, J. Pant, P. Rummer, and D. Walter, Integration of a Security Type System into a Program Logic, Theoretical Computer Science (2008), 172-189.
[9] C. Yiyun, L. Ge, H. Baojian, L. Zhaopeng, and C. Liu, Design of a Certifying Compiler Supporting Proof of Program Safety, Theoretical Aspects of Software Engineering, IEEE (2007), 127-138.
[10] M. Jones and K.W. Hamlen, Enforcing IRM Security Policies: Two Case Studies, Intelligence and Security Informatics, IEEE (2009), 214-216.
Alireza Khosravi
Babol, Iran
Babol, Iran
hbabaee@stu.nit.ac.ir
akhosravi@nit.ac.ir
Abstract: The increasing need for more energy-sensitive and adaptive systems for building light control has encouraged the use of more precise and delicate computational models. This paper presents a time series prediction model for daylight interior illuminance obtained using an optimized Adaptive Neuro-Fuzzy Inference System (ANFIS). Here the training data is collected by simulation, using the globally accepted lighting software Desktop Radiance. The model developed is suitable for adaptive predictive control of daylight-artificial light integrated schemes incorporating dimming and window shading control. In the ANFIS training process, if the data is clustered first and then given to ANFIS, the performance of ANFIS will be improved. In the clustering process, the radius of the clusters has a strong effect on the performance of the system. In order to achieve the best performance, we need to determine the optimum value of the cluster radius. In this study, particle swarm optimization has been used to determine the optimum value of the radius. Simulation results show that the proposed system has high performance.
Keywords: Particle swarm optimization, Adaptive Neuro-Fuzzy inference system, Radius, Optimization
Introduction
The adaptive-network-based fuzzy inference system (ANFIS) was proposed by Jang [7]. The fuzzy inference system is implemented in the framework of adaptive networks using a hybrid learning procedure, whose membership function parameters are tuned using a back-propagation algorithm combined with a least squares method. ANFIS is capable of dealing with uncertainty and imprecision in human knowledge. It has a self-organizing ability and an inductive inference function to learn from data. ANFIS is a multilayer feed-forward network [7]. Each node of the network performs a particular function on incoming signals, using a set of parameters pertaining to that node. To present the ANFIS architecture, consider two fuzzy rules based on a first-order Sugeno model [8], shown in Figure 1:
Rule 1: IF $x_1$ is $A_1$ and $x_2$ is $B_1$, then $f_1 = p_1 x_1 + q_1 x_2 + r_1$
Rule 2: IF $x_1$ is $A_2$ and $x_2$ is $B_2$, then $f_2 = p_2 x_1 + q_2 x_2 + r_2$

Layer 1: The nodes in this input layer are adaptive. They define the membership functions of the inputs:

$$O_{1,i} = \mu_{A_i}(x_1), \quad i = 1, 2 \qquad (3)$$

$$O_{1,i} = \mu_{B_{i-2}}(x_2), \quad i = 3, 4 \qquad (4)$$

Layer 2: Each node in this layer computes the firing strength of a rule as the product of the incoming membership degrees:

$$O_{2,i} = W_i = \mu_{A_i}(x_1)\,\mu_{B_i}(x_2), \quad i = 1, 2 \qquad (5)$$

Layer 3: The nodes in this layer normalize the firing strengths:

$$O_{3,i} = \bar{W}_i = \frac{W_i}{W_1 + W_2}, \quad i = 1, 2 \qquad (6)$$
Layer 4: The nodes in this inference layer are adaptive. The outputs of this layer are the outputs from Layer 3 multiplied by a linear formula. Parameters in this layer are referred to as consequent parameters:

$$O_{4,i} = \bar{W}_i F_i = \bar{W}_i (p_i x_1 + q_i x_2 + r_i), \quad i = 1, 2 \qquad (7)$$

where $p_i$, $q_i$ and $r_i$ are design parameters (consequent parameters, since they deal with the then-part of the fuzzy rule).

Layer 5: The node in this output layer is fixed. It computes the overall output as the summation of the weighted outputs from Layer 4:

$$O_{5,i} = F = \sum_i \bar{W}_i F_i = \frac{\sum_i W_i F_i}{\sum_i W_i} \qquad (8)$$

Clustering the data before training avoids an excessive propagation of rules when the input data has a high dimension. The cluster radius indicates the range of influence of a cluster when the data space is considered as a unit hypercube. Specifying a small cluster radius usually yields many small clusters in the data, and results in many rules; specifying a large cluster radius usually yields a few large clusters, and results in fewer rules.

In this study, in order to further increase the accuracy of the proposed system, we intend to find the optimum value of the cluster radius using PSO. In the next section, the PSO algorithm is explained.
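Putting the five layers together for the two rules above gives a short numerical sketch; the Gaussian membership functions are an assumption, since the paper does not fix their form.

```python
import math

def gauss(x, center, spread):
    return math.exp(-((x - center) / spread) ** 2)

def anfis_two_rules(x1, x2, consequents):
    # Layers 1-2: membership degrees and firing strengths W_i.
    w1 = gauss(x1, 0.0, 1.0) * gauss(x2, 0.0, 1.0)
    w2 = gauss(x1, 1.0, 1.0) * gauss(x2, 1.0, 1.0)
    # Layer 3: normalized firing strengths, relation (6).
    wn1, wn2 = w1 / (w1 + w2), w2 / (w1 + w2)
    # Layers 4-5: weighted linear consequents and their sum, (7)-(8).
    (p1, q1, r1), (p2, q2, r2) = consequents
    return wn1 * (p1 * x1 + q1 * x2 + r1) + wn2 * (p2 * x1 + q2 * x2 + r2)

y = anfis_two_rules(0.3, 0.7, [(1.0, 0.5, 0.0), (-0.2, 1.1, 0.3)])
```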
PSO Algorithm
This equation in the global model is used to calculate a particle's new velocity according to its previous velocity and the distances of its current position from its own best experience and the group's best experience. The local model calculation is identical, except that the neighbourhood's best experience is used instead of the group's best experience. Particle swarm optimization has been used both for approaches that apply across a wide range of applications and for specific applications focused on a specific requirement. Its attractiveness over many other optimization algorithms lies in its relative simplicity, because only a few parameters need to be adjusted [12,13].
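The velocity update referred to above is, in its common global-model form, as follows; the constants used here are illustrative, not the paper's settings.

```python
import random

def update_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # v <- w*v + c1*rand*(pbest - x) + c2*rand*(gbest - x)
    return (w * v
            + c1 * random.random() * (pbest - x)
            + c2 * random.random() * (gbest - x))

new_v = update_velocity(v=0.1, x=2.0, pbest=1.5, gbest=1.2)
```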
Simulation Results

First, we evaluated the performance of the recognizer without optimization. Figure 2 shows the prediction error. As Figure 2 suggests, the difference between the original signal and the predicted signal at different times is rather large, approximately 0.05.
Table 2: The Area Enclosed Between the Original Signal and the Signal Prediction

Status                        Value
Optimized ANFIS               8.7649e-004
ANFIS without optimization    0.3214
The rule extraction method first uses the subtractive clustering function to determine the number of rules and the antecedent membership functions, and then uses linear least squares estimation to determine each rule's consequent equations. This function returns a FIS structure that contains a set of fuzzy rules covering the feature space.
ANFIS

ANFIS uses a hybrid learning algorithm to identify the membership function parameters of a single-output, Sugeno-type FIS. A combination of least squares and back-propagation gradient descent methods is used to train the FIS membership function parameters to model a given set of input/output data.
EVALFIS

This performs the fuzzy inference calculations. Y = EVALFIS(U, FIS) simulates the FIS for the input data U and returns the output data Y. For a system with N input variables and L output variables, U is an M-by-N matrix, each row being a particular input vector, and Y is an M-by-L matrix, each row being a particular output vector.

Figure 5: Input-Output SURFVIEW of the ANFIS scheme
Conclusion
References
[1] A. Nabil and J. Mardaljevic, Useful daylight illuminance: a new paradigm for assessing daylight in buildings, Lighting Research & Technology 37 (2005), no. 1, 41-59.
[2] D.H.W. Li, C.C.S. Lau, and J.C. Lam, Predicting daylight illuminance by computer simulation techniques, Lighting Research & Technology 36 (2003), no. 2, 113-119.
[3] P.J. Littlefair, Daylight coefficients for practical computation of internal illuminances, Lighting Research & Technology 24 (1992), no. 3, 127-135.
[4] D.H.W. Li and G.H.W. Cheung, Average daylight factor for the 15 CIE standard skies, Lighting Research & Technology 38 (2006), no. 1, 137-152.
[5] P.R. Tregenza and I.M. Waters, Daylight coefficients, Lighting Research & Technology 15 (1983), 65-71.
[6] R. Kittler, S. Darula, and R. Perez, A set of standard skies characterizing daylight conditions for computer and energy conscious design, Bratislava, Slovakia, 1998.
zh.sadreddini@iaushab.ac.ir
mjamali@itrc.ac.ir
Abstract: In Vehicular Delay Tolerant Networks (VDTNs), the optimal use of buffer management policies can improve the overall network throughput. Because several message criteria can be considered simultaneously for optimal buffer management, conventional policies fail to support different applications. In this research, we present a buffer management strategy called Multi Criteria Buffer Management (MCBM). This technique applies several message criteria according to the requirements of different applications. We examine the performance of the proposed buffer management policy by comparing it with the existing FIFO and Random policies. For the scenario proposed in this paper, simulation results show that the MCBM policy performs as well as the existing ones in terms of overall network performance.
Keywords: Buffer management policies, Epidemic routing, Vehicular Delay Tolerant Networks.
Introduction

Vehicular Delay Tolerant Networks (VDTNs) are an application of Delay-Tolerant Networks (DTNs), where the mobility of vehicles is used for connectivity and data communications [1]. In VDTNs we can point out different scenarios: traffic condition monitoring, collision avoidance, emergency message dissemination, free parking spot information, advertisements, etc. [4],[5],[6]
Because of the mobility and high speed of vehicles, an end-to-end path is not available all the time. Therefore, in such networks connections are intermittent and, as a result, message delivery encounters delays [2],[3]. To overcome the intermittent connectivity, increase the delivery rate of messages, and reduce the average latency, store-carry-and-forward patterns are used: messages are stored and forwarded among network nodes until they reach their final destination. Consequently, because of the limited buffer space of nodes, messages face buffer overflow and are dropped. To overcome this problem, an optimal buffer management policy is required.
According to the requirements of different applications, it is possible that multiple major message criteria
are considered simultaneously for optimal buffer management. In addition, different criteria may have different levels of importance and conflict with each other.
However, the existing policies have considered only one
or two message criteria; as a result they are for a single
purpose and do not support different applications.
The MCBM technique formulates buffer management as a multi-criteria decision problem. Therefore, different criteria can be applied to manage the buffer according to the requirements of different applications.
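This excerpt does not spell out how MCBM aggregates the criteria, so the following Python sketch assumes a simple weighted-sum aggregation over three hypothetical criteria (priority, remaining TTL, size); the weights and normalization constants are illustrative knobs, not the paper's values:

```python
from dataclasses import dataclass

@dataclass
class Message:
    msg_id: str
    size_kb: float   # message size in KB
    ttl_min: float   # remaining time-to-live in minutes
    priority: int    # 0 = bulk, 1 = normal, 2 = emergency

def mcbm_score(msg, weights=(0.5, 0.3, 0.2)):
    """Aggregate several message criteria into one score; a higher score means
    a more valuable message: forwarded first and dropped last."""
    w_prio, w_ttl, w_size = weights
    return (w_prio * msg.priority / 2.0                 # favour emergency traffic
            + w_ttl * min(msg.ttl_min / 120.0, 1.0)     # favour fresh messages
            - w_size * min(msg.size_kb / 1500.0, 1.0))  # penalise large messages

buffer = [Message("m1", 300, 90, 2), Message("m2", 900, 40, 1), Message("m3", 1400, 110, 0)]
# Forwarding order: best score first; the last element is the drop candidate.
for m in sorted(buffer, key=mcbm_score, reverse=True):
    print(m.msg_id, round(mcbm_score(m), 3))
```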
2.1

2.2
Random

In the random policy, messages are scheduled for transmission in a random order. Moreover, the messages to be dropped are also selected in a random order [9],[10].

2.3
Approach
Performance Evaluation

3.1
Simulation Setup
In this study, it is assumed that the delivery of emergency messages is the most important and that these messages generate larger volumes of traffic. Thus, messages are generated with sizes uniformly distributed in the ranges [250 KB, 750 KB] for bulk messages, [500 KB, 1 MB] for normal messages, and [750 KB, 1.5 MB] for emergency messages. In all policies, the creation probability of the three priority classes is set as Emergency = 20%. The performance assessment of the policies is done with the Epidemic routing protocol [12]. Epidemic is a flooding-based routing protocol in which nodes exchange the messages they do not have.

Figure 1: MCBM, FIFO and Random Delivery Probability with 20 vehicles
buffer management.
References
[1] V. N. G. J. Soares, F. Farahmand, and J. J. P. C. Rodrigues, A layered architecture for vehicular delay-tolerant networks, Proc. IEEE Symposium on Computers and Communications (ISCC '09), Sousse, Tunisia (2009).
[2] V. N. G. J. Soares, J. J. P. C. Rodrigues, and P. S. Ferreira, Improvement of messages delivery time on vehicular delay-tolerant networks, Proc. ICPP, Vienna, Austria (2009).
Figure 2: MCBM, FIFO and Random Delivery Probability with 100 vehicles
Shabestar, Iran
Shabestar, Iran
m.marzaei@gmail.com
mjamali@itrc.ac.ir
Abstract: In Vehicular Delay Tolerant Networks (VDTNs), buffer management policies affect the performance of the network. Most conventional buffer management policies make decisions based only on message criteria and do not consider features of the environment where nodes are located. In this paper we propose a knowledge based scheduling (KBS) policy, which makes decisions using two pieces of knowledge: the amount of free space in the receiver node's buffer and the amount of traffic in the segment where the sender node is located. Using simulation, we evaluate the performance of the proposed policy and compare it with the Random and Lifetime desc policies. Simulation results show that our buffer management policy increases the delivery rate and decreases the number of drops significantly.
Introduction
In order to increase the delivery rate and decrease the average latency in VDTNs, message replication is performed by many routing protocols. The combination of storing messages for long periods of time and replicating them imposes a high storage overhead on buffer nodes and reduces the overall performance of the network. Therefore, efficient buffer management policies are required to improve the overall performance of the network. Most conventional buffer management policies make decisions based only on message criteria (such as message size, time-to-live (TTL), and the number of forwardings of a message).
In addition to message criteria, it forwards a message based on knowledge of the amount of free space in the receiver node's buffer and knowledge of the amount of traffic in the segment where the sender node is located. Using simulation, we show that the KBS policy improves the performance of the network.

2 Existing Scheduling Policies

A scheduling policy determines the order in which messages should be forwarded at a contact opportunity.

2.1
FIFO

The FIFO scheduling policy orders messages to be forwarded at a contact opportunity based on their entry time into the node's buffer.

2.2
Random

The Random scheduling policy forwards messages in a random order.

2.3
Lifetime descending order

The Lifetime descending order (Lifetime desc) policy sorts messages based on their TTL in descending order and, at a contact opportunity, forwards the message with the highest TTL.

3 Proposed policy

The Knowledge Based Scheduling (KBS) policy, in addition to message criteria, considers the neighboring environment of a node and makes decisions using two pieces of knowledge: the amount of free space in the receiver node's buffer and the amount of traffic in the segment where the sender node is located. At a contact opportunity, the KBS policy considers the free space of the receiver node's buffer and forwards a message of equal or smaller size; it therefore reduces the number of drops. The knowledge of the free space of the receiver node's buffer is obtained with a HELLO-RESPONSE technique: the sender node sends a HELLO message in order to establish communication, and if the receiver node hears the HELLO message, it sends back a RESPONSE message that also contains information about the free space of its buffer [4].

Assume the free space of the receiver node's buffer is 250K. As can be seen in Table 1, the buffer of the sender node holds multiple messages of equal or smaller size than the free space of the receiver node's buffer. In this case, the KBS policy makes its decision based on the amount of traffic in the segment where the sender node is located: depending on the segment traffic, it selects either the message with the least TTL or the message with the highest TTL among the messages of equal or smaller size than the receiver's free space. The KBS policy therefore gives an opportunity both to messages with low TTL and to messages with high TTL.

Table 1: Messages in the buffer of the sender node

Msgid   Msgsize   MsgTTL
M1      180K      50
M2      450K      120
M3      200K      90
M4      150K      100
M5      300K      70
M6      550K      35

If the segment traffic of the sender node is low or medium (an interval has been defined for low and medium traffic), then among the messages of equal or smaller size than the free space of the receiver node's buffer (M1, M3, M4), the message with the highest TTL (M4) is selected for forwarding. The reason is that when segment traffic is low, contact opportunities in the segment are also few and waiting times in buffers are high, so the possibility that messages with high TTL traverse the current segment is greater than the possibility that messages with low TTL do. But if the segment traffic is high (an interval has been defined for high traffic), the message with the least TTL (M1) is selected for forwarding. In this case, since segment traffic is high, contact opportunities in the segment are plentiful and waiting times in buffers are low, so messages with low TTL may traverse the current segment before expiration; as a result, an opportunity is given to messages with low TTL. Knowledge of segment traffic is obtained using a traffic oracle [3]. Based on the Cartesian coordinates of each node, this oracle determines the node's segment and the traffic amount, which includes the number of nodes present in that segment.

If the sizes of all messages in the buffer of the sender node are larger than the free space of the receiver node's buffer, the KBS policy makes its decision by considering only the segment traffic of the sender node: when the segment traffic of the sender node is low or medium, the message with the highest TTL is selected for forwarding, for the reasons mentioned above; when it is high, the message with the least TTL is selected.
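The selection rule described above is mechanical enough to state as code. The following Python sketch is a direct transcription of that rule; the tuple encoding of messages and the traffic labels are assumptions of this sketch, not the paper's implementation:

```python
def kbs_select(buffer, receiver_free, segment_traffic):
    """Pick the message to forward under the KBS policy.
    buffer          : list of (msg_id, size, ttl) tuples in the sender's buffer
    receiver_free   : free space of the receiver node's buffer
    segment_traffic : 'low', 'medium' or 'high' in the sender's segment"""
    fitting = [m for m in buffer if m[1] <= receiver_free]
    # If nothing fits, decide on segment traffic alone over the whole buffer.
    candidates = fitting if fitting else buffer
    if segment_traffic in ('low', 'medium'):
        # Few contacts and long waits: favour the highest remaining TTL.
        return max(candidates, key=lambda m: m[2])
    # Dense segment: low-TTL messages can still make it, so favour them.
    return min(candidates, key=lambda m: m[2])

# The Table 1 example with 250K free at the receiver:
buffer = [("M1", 180, 50), ("M2", 450, 120), ("M3", 200, 90),
          ("M4", 150, 100), ("M5", 300, 70), ("M6", 550, 35)]
print(kbs_select(buffer, 250, "low"))   # -> ('M4', 150, 100)
print(kbs_select(buffer, 250, "high"))  # -> ('M1', 180, 50)
```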
Simulation results

Simulation setup

In this section, we evaluate our KBS policy and compare it with the Random, FIFO and Lifetime desc scheduling policies. The dropping policy in all of them is Drop head [17]. Evaluation is done by simulation using the Opportunistic Network Environment (ONE) simulator [14]. The performance metrics considered are the message delivery probability (measured as the ratio of delivered messages to sent messages) and the number of drops.

To evaluate, the Epidemic routing protocol is used [1]. Epidemic is a flooding-based protocol: when two nodes connect, they send each other the messages they do not have.

In order to examine the performance of the KBS policy, we use an urban scenario. The simulation area is 6000 m x 6000 m, and we simulate 100 vehicles. The buffer capacity of vehicles is 20 Mbyte. Vehicles move with random speeds between 30 and 50 km/h along the shortest available paths, and random wait times of vehicles are between 5 and 15 minutes. Network nodes communicate with each other over a wireless link with a data transmission rate of 6 Mbps and a transmission range of 30 meters. Messages are generated with an inter-message creation interval uniformly distributed in the range [5, 20] seconds, and message sizes are uniformly distributed in the range [500K, 1M]. The TTL of messages is 120 minutes throughout the simulations, and the simulation time is 12 hours. In all scenarios we define fewer than 6 vehicles in one segment as low traffic, 6 to 8 vehicles as medium traffic, and more than 8 vehicles as high traffic.

Figure 1: KBS, Lifetime desc, Random and FIFO Delivery Probability

Figure 2 compares the buffer management policies with respect to the number of drops. The KBS policy reduces the number of drops at a significant rate, because it forwards only messages that fit into the free space of the receiver node's buffer.

Figure 2: KBS, Lifetime desc, Random and FIFO Number of Drops
Conclusion

In this paper the KBS buffer management policy was presented, which, in addition to message criteria, uses knowledge of the neighboring environment of nodes. Based on the free space of the receiver node's buffer and the traffic of the segment where the sender node is located, this policy selects a message to forward. Using simulation, the performance of the KBS policy was compared with the FIFO, Random and Lifetime desc scheduling policies. The results showed that KBS increases the delivery ratio and decreases the number of drops significantly. In future work, we can design a dropping policy that considers the neighboring environment of nodes. Moreover, we can compare the proposed method with other buffer management policies.
References
[1] A. Vahdat and D. Becker, Epidemic routing for partially connected ad hoc networks, Duke University, Tech. Rep. CS-200006, 2000.
[2] K. Fall, Delay-tolerant network architecture for challenged internets, In Proc. SIGCOMM (2003).
[3] S. Jain, K. Fall, and R. Patra, Routing in a delay tolerant network, In Proc. SIGCOMM (2004).
[4] J. Lebrun, Ch.N. Chuah, D. Ghosal, and M. Zhang, Knowledge-based opportunistic forwarding in vehicular wireless ad hoc networks, IEEE Conference on Vehicular Technology 4 (2005), 2289-2293.
[5] A. Lindgren and K.S. Phanse, Evaluation of queueing policies and forwarding strategies for routing in intermittently connected networks, IEEE International Conference on Communication System Software and Middleware (2006), 1-10.
Ali Katanforoush
z.roozbahani@mail.sbu.ac.ir
a_katanforosh@sbu.ac.ir
Keywords: Feature Selection, Artificial Neural Network, Gene Expression, Cancer Classification.
Introduction
Feature Selection

Evaluators

A brief description of the evaluators used in this paper is as follows:
GainRatioAttributeEval: measures the gain ratio with respect to the class.
InfoGainAttributeEval: measures the information gain with respect to the class.
OneRAttributeEval: evaluates the worth of an attribute using the OneR classifier.
ReliefFAttributeEval: repeatedly samples an instance and considers the value of the given attribute for the nearest instances of the same and of a different class.
SymmetricalUncertAttributeEval: measures the symmetrical uncertainty with respect to the class.
CfsSubsetEval: prefers subsets of features that are highly correlated with the class while having low intercorrelation.

In this paper, each neuron in the input layer is associated with a gene selected by the previous step (feature selection), the number of hidden layers is 1 or 2, and the output layer has just a single neuron. We use four different training algorithms in the framework of the Backpropagation (BP) scheme: Resilient Backpropagation (RP), Levenberg-Marquardt (LM), One-Step Secant Backpropagation (OSS), and Broyden-Fletcher-Goldfarb-Shanno (BFGS). We set the initial weights to random values. The learning procedure iterates until the error (estimated on a validation set) falls under a pre-specified threshold.

In our method, the selection algorithm is implemented in two steps: 1) the relevant candidate genes are selected from the initial set of features by each criterion evaluator, and 2) the genes that commonly pass all evaluators' thresholds are selected.
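The two-step selection lends itself to a compact sketch. The Python code below assumes each evaluator can be reduced to a per-gene relevance score and keeps the intersection of every evaluator's top candidates; the two stand-in scorers are simple illustrations, not the WEKA evaluators listed above:

```python
import numpy as np

def select_genes(X, y, evaluators, top_k=60):
    """Step 1: each evaluator ranks genes and keeps its top_k candidates.
    Step 2: keep only the genes selected by every evaluator (intersection)."""
    selected = None
    for evaluate in evaluators:
        scores = evaluate(X, y)                      # one score per gene
        top = set(np.argsort(scores)[::-1][:top_k])  # indices of the best genes
        selected = top if selected is None else selected & top
    return sorted(selected)

# Two simple stand-in criteria for a binary classification problem:
corr = lambda X, y: np.abs(np.corrcoef(X.T, y)[-1, :-1])
sep = lambda X, y: np.abs(X[y == 1].mean(0) - X[y == 0].mean(0)) / (X.std(0) + 1e-9)

X = np.random.rand(72, 500)      # 72 samples, 500 genes (synthetic data)
y = np.random.randint(0, 2, 72)  # binary class labels
print(select_genes(X, y, [corr, sep], top_k=60)[:10])
```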
Datasets
For exploring the performance of the new gene selection method, three public gene expression datasets are used.
Experimental Results and Discussion
The feature selection evaluators and their rank thresholds for each dataset are shown in Table 1. In the first step of the selection algorithm, the number of selected genes is set to a moderate number, e.g., between 30 and 90. Then, we find the minimum number of genes shared across all evaluators' criteria. The informative genes found in the datasets are listed in Table 2.
ALL-AML Leukemia. The leukemia data consists of 72 samples, among which are 25 samples of AML and 47 samples of ALL. The number of genes in each sample in this dataset is 7129. The training data consists of 38 samples (27 ALL and 11 AML), and the rest is considered as test data [16]. Using the test data without cross validation, a perfectly accurate classification is observed (Table 3). This also achieves the best leave-one-out (LOOCV) result (98.61%) with the RP training algorithm (Table 4). The LM and OSS methods are the next most accurate classifiers, with accuracies of 95.83% and 94.44%, respectively. Our result compares with the result reported in [17], where 1038 genes predict with 91.18% accuracy (10-CV).
The third dataset contains 58 samples from DLBCL patients and 19 samples from follicular lymphoma (FL) over 7029 genes. Here, RP and BFGS obtained the most accurate results: respectively 100% (estimated by the test data, Table 3) and 96.10% (estimated by LOOCV, Table 4). It should be noted that the results reported in [20] are more accurate than ours (97.50% vs. 96.10%), but they have not identified any group of genes responsible for DLBCL. Our results compare with those of the kNN-based method (reported accuracy = 92.71%) [21], where eight genes have been identified to be associated with DLBCL. The hyper-box enclosure method [15] obtains the same accuracy as our multiple-Ranker method with ANN.

We are also interested in the effect of feature reduction on the classification accuracy. We gradually reduce the number of initial genes selected by each evaluator and re-train the ANN classifier. Fig. 2 shows the trend of accuracy with respect to the number of initial genes. The numbers of commonly selected genes are shown by bullets on each curve. As shown in Fig. 2, over 90 percent of the Lymphoma samples can be perfectly identified by using only one gene (GENE3330X). The same accuracy can also be achieved with four genes for Leukemia (M84526_at, X95735_at, U46499_at, L09209_s_at). DLBCL is rather complicated; a reliable classification requires at least seven genes, even more (see Table 2 and Fig. 3). In this step, we consider only the results of the LM algorithm, the most efficient training algorithm in our experiments. We have also studied some classifiers other than ANN, such as SMO, KStar and logistic regression, but no better results were obtained.
Conclusion
Leave-one-out cross validation has shown the highest prediction accuracy for the proposed approach among gene expression classification algorithms. This suggests our method can select informative genes for cancer classification.
References

[1] R. Kohavi and G.H. John, Wrappers for feature subset selection, Artif. Intell. 97, 1/2 (1997), 273-324.
[2] Z. Zainuddin and P. Ong, Reliable multiclass cancer classification of microarray gene expression profiles using an improved wavelet neural network, Expert Systems with Applications 38 (2011), 13711-13722.
[3] L. Nanni and A. Lumini, Wavelet selection for disease classification by DNA microarray data, Expert Systems with Applications 38 (2011), 990-995.
[4] Y. Yan and J.O. Pederson, Comparative study of feature selection in text categorization, Proceedings of the Fourteenth International Conference on Machine Learning (ICML '97) (1997), 412-420.
[5] B. Krishnapuram, A.J. Hartemink, L. Carin, and M.A.T. Figueiredo, A Bayesian approach to joint feature selection and classifier design, IEEE Transactions on Pattern Analysis and Machine Intelligence 26, no. 9 (2004), 1105-1111.
[6] S.B. Dong and Y.M. Yang, Hierarchical web image classification by multi-level features, Proceedings of the First International Conference on Machine Learning and Cybernetics, Beijing (2002), 663-668.
[7] R. Setiono and H. Liu, Feature selection via discretization, IEEE Transactions on Knowledge and Data Engineering 9 (1997), 642-645.
[8] H. Hu, J. Li, H. Wang, and G. Daggard, Combined gene selection methods for microarray data analysis, Proceedings of the 10th International Conference on Knowledge-Based Intelligent Information and Engineering Systems, Bournemouth, UK (2006), 9-11.
[9] S.A. Vinterbo, E.Y. Kim, and L. Ohno-Machado, Small, fuzzy and interpretable gene expression based classifiers, Bioinformatics 21, no. 9 (2005), 1964-1970.
[10] L. Sun, D. Miao, and H. Zhang, Gene selection with rough sets for cancer classification, Fourth International Conference on Fuzzy Systems and Knowledge Discovery, Haikou (2007), 167-172.
Data mining with learning decision tree and Bayesian network for
data replication in Data Grid
Farzaneh Veghari Baheri
Farnaz Davardoost
Farzaneh_Veghari@Yahoo.com
Farnaz_Davardoost@Yahoo.com
Vahid Ahmadzadeh
Department of Computer,
Payame Noor University,
PO BOX 19395-3697 Tehran, Iran.
Ahmadzadeh.Vahid@Gmail.com
Abstract: Data management is a major problem in Grid environments. A data Grid is composed of thousands of geographically distributed storage resources, usually located under different administrative domains. The size of the data managed by a data Grid is continuously growing and has already reached petabytes. Large data files are replicated across the data Grid to improve system performance. In this paper, we improve data access time and reduce access latency. A hybrid model is developed by combining a Bayesian network and a learning decision tree. We assume a hierarchical architecture composed of several clusters. This approach detects which data should be replicated. Initially, the algorithm calculates the entropy of the dataset and then the gain of every attribute. Finally, the probability of the outcome is calculated with the Bayesian expression, and a replication rule is produced. We simulate this approach to evaluate the performance of the proposed hybrid method. The simulation results show that the data access time is reduced.
Keywords: Bayesian Network; Data Replication; Entropy; Gain; Grid; Learning Decision Tree.
Introduction
In recent years, applications such as bioinformatics, climate transition, and high energy physics produce large
datasets from simulations or experiments. Managing
this huge amount of data in a centralized way is ineffective due to extensive access latency and load on
the central server. In order to solve these kinds of
problems, Grid technologies have been proposed. Data
Grids aggregate a collection of distributed resources
placed in different parts of the world to enable users to
share data and resources (Chervenak et al., 2000; Allcock et al., 2001; Foster, 2002; Worldwide Lhc Computing Grid, 2011). Data replication has been used in
database systems and Data Grid systems. Data replication is an important technique to manage large data
in a distributed manner. The general idea of replication
is to place replicas of data at various locations. Learning decision trees and Bayesian networks are widely
used in many areas, such as data mining, classification
systems, decision support systems, and so on.
A decision tree is a model of inductive learning from observation. Decision trees are created from training data in a top-down direction. A learning decision tree is a hierarchical tree structure that is divided based on a single attribute at each internal node. The first stage of a learning decision tree is the root node, which is allocated all the examples from the training set.
Bayesian networks are popular within the artificial intelligence community due to their ability to support probabilistic reasoning from data with uncertainty. A Bayesian Network (BN) is a directed acyclic graph that represents relationships of a probabilistic nature among variables of interest. With a network at hand, probabilistic inference can be conducted to predict the values of some variables based on the observed values of other variables, and to find a pattern in training data [1, 2, 3, 4, 5, 6, 7].

In this paper, we present a hybrid model composed of learning decision trees and Bayesian networks built from the running database. We assume a hierarchical architecture of the data Grid system. The proposed architecture comprises several clusters, and every cluster comprises several sites. First, a decision tree based on the ID3 learning algorithm is created; then, a set of decision rules is generated for data replication in the Grid environment. We simulate our method to evaluate the performance of this training method. Providing the replication rule for data increases the performance of the system and yields a more optimal solution than other methods. In summary, data access time is reduced with the proposed hybrid method. Section 2 introduces some previous work on data replication, Section 3 explains our proposed method in detail, Section 4 evaluates the proposed method, and conclusions are presented in Section 5.

Related work

The performance of replication strategies is highly dependent on the architecture of the data Grid. One of the basic models is the hierarchical data model, also known as multi-tier. In this paper, we assume a hierarchical architecture with 2 tiers; furthermore, our architecture is organized in clusters. This hierarchical architecture is shown in Fig. 1. Among the replication strategies proposed for this model are:

1. No Replication: in this case only the root node includes the replicas.
2. Best Client: a replica is created for the client which accesses the file most frequently.
3. Cascading: a replica is created on the path to the best client.
6. Fast Spread: file copies are stored at each node on the path to the best client.

In [9] the authors discussed a new dynamic replication method for a multi-tier data Grid called predictive hierarchical fast spread (PHFS), which is an extended version of fast spread. Considering spatial locality, PHFS tries to increase locality in accesses by predicting users' subsequent file demands and pre-replicating them beforehand in a hierarchical manner. In PHFS, in order to eliminate the delay of replication on request, data must be replicated in advance using the concept of predicting future requests, while we use a learning decision tree and a Bayesian network to determine which data should be replicated.
Figure 1: Hierarchical architecture for data management

3.1

Table 1: Attributes of the dataset

Description                                      Values
Data Identification                              Number
Number of accessing data                         Low, Mid, High
Importance of data                               Low, High
Length of time for allocating requested data     Low, High
Size of data                                     Low, High

In this paper, we present the basic algorithm for decision tree learning, corresponding approximately to ID3, where the Examples are according to Table 1, the target attribute is data replication, and the attributes are the fields of the table (Access Number, Priority, Service Time, Size of Data). The summary of the ID3 algorithm is as follows:

If all Examples are positive, return the single-node tree Root, with label = +.
If all Examples are negative, return the single-node tree Root, with label = -.
If Attributes is empty, return the single-node tree Root, with label = the most common value of the target attribute in Examples.
Otherwise begin ...
If Examples_{v_i} is empty ...

The entropy of a collection S is defined as Equation (1):

Entropy(S) = -\sum_{i=1}^{c} p_i \log_2 p_i    (1)

The information gain, Gain(S, A), of an attribute A relative to a collection of examples S is defined as Equation (2):

Gain(S, A) = Entropy(S) - \sum_{v \in Values(A)} \frac{|S_v|}{|S|} Entropy(S_v)    (2)
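Equations (1) and (2) translate directly into code. Below is a small self-contained Python sketch of the entropy and gain computations on rows shaped like Table 1; the records and labels are made-up examples:

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy(S) = -sum_i p_i log2 p_i over the class labels of S (Eq. 1)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain(rows, labels, attr):
    """Gain(S, A) = Entropy(S) - sum_v (|S_v|/|S|) Entropy(S_v) (Eq. 2)."""
    by_value = {}
    for row, label in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(label)
    return entropy(labels) - sum(len(s) / len(labels) * entropy(s)
                                 for s in by_value.values())

rows = [{"AccessNumber": "High", "Priority": "High", "ServiceTime": "Low",  "Size": "Low"},
        {"AccessNumber": "Low",  "Priority": "Low",  "ServiceTime": "High", "Size": "High"},
        {"AccessNumber": "Mid",  "Priority": "High", "ServiceTime": "High", "Size": "Low"},
        {"AccessNumber": "High", "Priority": "Low",  "ServiceTime": "Low",  "Size": "Low"}]
replicate = ["Yes", "No", "Yes", "Yes"]  # target attribute: data replication
print(round(entropy(replicate), 3), round(gain(rows, replicate, "AccessNumber"), 3))
```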
3.2
Bayesian Networks
A Bayesian network can represent the probabilistic relationships between data and data replication. Given symptoms, the Bayesian network can be used to compute the probabilities of the use of data in the future. To develop a Bayesian network, we first derive a DAG such as the decision tree; then we verify the conditional probability distributions of each variable. Now we calculate the probability of each attribute: we find the influence factor for all the attribute values. The influence factor gives the dependability of the attribute value on the class label. The formula for the influence factor for a particular class C_i is given in Equation (3):

I(A_j = x_i, C_i) = \frac{N(A_j = x_i \wedge C_i)}{N(C_i)}    (3)

where A_j is the attribute currently considered for calculation, j varies from 1..n, where n is the maximum number of predictive attributes, and k is the maximum number of attribute values for the attribute A_j [12, 13, 14, 15].

3.3

We assume that a learning decision tree for the target value data replication is according to Figure 2; each path from the root to a leaf can be written down as an IF-THEN rule.

Figure 2: Example of Decision Tree

A rule base allows knowledge extraction; the rules reflect the main characteristics of the dataset. The decision tree of Figure 2 can be written down as the following set of rules:

No Replication Rule:
IF {
(Access Number = Low) OR
(Access Number = Mid AND Priority = Low) OR
(Access Number = Mid AND Priority = High AND Service Time = Low) OR
(Access Number = Mid AND Priority = High AND Service Time = High AND Size of Data = High)
}
Then Replicate = No

Replication Rule:
IF {
(Access Number = Mid AND Priority = High AND Service Time = High AND Size of Data = Low) OR
(Access Number = Mid AND Priority = High AND Service Time = High AND Size of Data = Mid)
}
Then Replicate = Yes
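Because the rule base is an explicit disjunction of attribute tests, it can be checked mechanically. The following Python function is a direct transcription of the two rules listed above; paths not covered by the excerpt (e.g. Access Number = High) return None:

```python
def replicate(access_number, priority, service_time, size):
    """Replication decision from the IF-THEN rules above. All arguments take
    the categorical values of Table 1 ('Low', 'Mid', 'High')."""
    if access_number == "Low":
        return False
    if access_number == "Mid" and priority == "Low":
        return False
    if access_number == "Mid" and priority == "High":
        if service_time == "Low":
            return False
        # Service Time = High: the decision hinges on the size of the data.
        return size in ("Low", "Mid")
    return None  # path not covered by the rules shown in the excerpt

print(replicate("Mid", "High", "High", "Low"))   # -> True  (replicate)
print(replicate("Low", "High", "High", "Low"))   # -> False (do not replicate)
```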
Simulations

We evaluate and compare the performance of our approach with the no-replication algorithm while the number of clusters varies. Figure 3 illustrates the comparison of access times for 4, 8 and 12 clusters.

Conclusion

References
[12] S.A. Balamurugan and R. Rajaram, Effective solution for unhandled exception in decision tree induction algorithms, Expert Systems with Applications 36 (2009), 12113-12119.
[13] T. Amjad, M. Sher, and A. Daud, A survey of dynamic replication strategies for improving data availability in data grids, Future Generation Computer Systems 28 (2012), 337-349.
[14] N. Xiong, Learning fuzzy rules for similarity assessment in case-based reasoning, Expert Systems with Applications 38 (2011), 10780-10786.
[15] L. Mohammad Khanli, F. Mahan, and A. Isazadeh, Active rule learning using decision tree for resource management in Grid computing, Future Generation Computer Systems 27 (2011), 703-710.
[16] Tom M. Mitchell, Machine Learning, McGraw-Hill Science/Engineering/Math, March 1, 1997.
Roya Derakhshanfar
Maisam M.Bassiri
Iran University of Science and Technology
Department of Electrical Engineering
Tehran, Iran
basiri@iust.ac.ir
S.Kamaledin Setarehdan
University of Tehran
Control and Intelligent Processing Center of Excellence, School of ECE, College of Engineering
Tehran, Iran
ksetareh@ut.ac.ir
Abstract: The purpose of this paper is to introduce a method for transmitting patients' data over a wireless network. In this network, the patients' data is first gathered at a central station and from there sent to a computer. On the computer, the patients' profiles are created so that their medical information can be monitored at every moment. The protocol between master and slave provides synchronous data transfer without collision. Another protocol is also provided between the computer and the master in order to collect, save and process the data.
Introduction
2
2.1
2.2
Software
The serial interface can be explained in two parts: one between the master and the slave, and the other between the master and the PC.
Results
Conclusion
Elahe Najafi
Ahmad Baraani
Isfahan University, Isfahan, Iran
enajafi@aut.ac.ir
ahmadb@eng.ui.ac.ir
Abstract: Designing an architecture for organizations is a complex and confusing process. It is not obvious from which point you should start or how you should proceed to achieve a holistic architectural model of an organization. Using the CEA framework (CEAF), a semantic enterprise architecture framework (EAF), brings a new opportunity for enterprise experts to obtain their enterprise ontology by focusing on one variable at a time without losing the sense of the enterprise as a whole. A number of semantic frameworks like CEAF have been presented by well-known Enterprise Architecture (EA) researchers and experts to date. A significant goal of all of them is to design a transparent enterprise that is as lean as possible, able to adapt to external demands and environmental changes. To achieve this goal, CEAF is based on a primitive object named Service; this is a substantial characteristic of CEAF that distinguishes it from previously presented frameworks.
Keywords: Enterprise Architecture (EA); Enterprise Architecture Framework (EAF); Service Oriented; Service Oriented Framework; Service Oriented Enterprise Architecture (SOEA).
Introduction

Enterprise architecture (EA) is an approach that organizations should practice to integrate their business with Information and Communication Technology (ICT). It presents a comprehensive and rigorous solution describing the current and future structure and behaviour of an organization by employing a logical structure. This structure, comprising a comprehensive collection of different views and aspects of the enterprise, is called an EAF. An EAF is a total picture of an organization showing how all organization elements work together to achieve defined business objectives.
Several distinctive EAFs have been proposed so far, but many organizations are struggling to use them; the main challenge current EA frameworks face is that using them is a tedious and complex activity.
In this paper we present a new service-oriented semantic framework to reduce this challenge. In the remainder of this paper we discuss related work in Section 2, elaborate the CE framework in detail in Section 3, and finally discuss directions of future research in Section 4 to conclude the paper.
1.1
Related Work

To eliminate these challenges a number of researchers have tried to use the SO paradigm with EAF for generating EA artifacts [1-6]. These researchers believe that this paradigm re-engineers the enterprise into one that senses the environment rapidly and adapts itself to business challenges and opportunities quickly. Although the scope and coverage of these frameworks differ extensively, they do not completely clarify how a combination with EAF can take advantage of services, nor what a well-defined classification schema to support this combination would be. To eliminate the deficiencies of current SOEAF we suggest a new SO semantic framework named CEA in the next section.

5. Heterogeneous models for each cell.

CEA Framework

2.1
CEAF Rows

CEAF rows show the enterprise from various viewpoints. For describing each row we defined a template comprising the items below, and defined each row by this template.
Observers: a list of audiences and viewers.
Description: a brief depiction.
Goals: a list of goals targeted by each row.
Critical Questions: a list of questions to be answered by the end of each row.
Organization: a list of the roles and the responsibility of each role, corresponding to the COBIT RACI chart. In this field A, R and I stand for Accountable, Responsible and Informed.
Candidate Patterns: a list of patterns and suitable references.
Prerequisite: a list of prerequisite inputs.
Deliverables: a list of deliverables of each row.
Based on: a list of theoretical concepts on which each row is based.

2.1.1
Observer: Strategist
Description: Service Strategy provides a foundation for enterprise management. It drives all enterprise activities.
Critical Questions:
What are our business objectives and expectations?
In which domains and to whom do we offer our services (our stakeholders)?
What value do we create for our stakeholders?
What services do we offer to our stakeholders now or plan to offer in the future?
What is the quality and warranty of our services to differentiate our services from rivals?
Who are our service provider partners?

2.1.2
What are the quality requirements and constraints of each business service?
What is the pattern of each business service?
Which of our design services realizing our business processes are meaningful for our external stakeholders?
What are the quality, management and operational requirements of each design service which must be addressed as a fundamental part of design?

2.1.3

2.1.4
Goals:
Design of new or changed IT services aligned with business services.
Provide a holistic view of all aspects of system design.
Provide IT services that realize business needs.
2.2
CEA Columns
2.2.1
First Column: Purpose

2.2.2
Second Column: Policy
Description: This column is about the policies of the organization. A policy is a management expectation, intention and condition used to ensure that consistent and appropriate decisions, designs and developments of goals, responsibilities, resources and processes are ultimately created. Policies concern the constraints and quality of the different types of services exposed in the different rows.
Levels:
Strategy policies: Strategic policies describe governance rules that drive strategic decisions. They should be considered to accomplish the strategic mission through well-understood steps by an agreed date and budget. These policies consider any risks, constraints and limitations affecting business strategy and the quality of delivered services.
Orchestration policies: Orchestration policies address any constraints that exist for the composition and integration of business services.
Business policies: These policies specify constraints, standards and business rules regarding the operation of services.
IT policies: An IT policy is about the quality of IT services. It covers all types of non-functional requirements, such as performance, efficiency, security, availability and reliability, which should be addressed by a service-oriented architecture.
Deliverable: Policy Relationship Map
2.2.3
Third Column: Service
Description: A Service is a loosely coupled, self-contained and stateless component that interacts with other services to accomplish business goals and deliver value to customers. In this column we design services at three levels.
Levels:
Process services: Process services provide the control capabilities required to manage the flow and interactions of multiple services in ways that implement business processes. These services represent long-term workflows or macro-flows of business processes, implemented by an orchestration of basic and complex business services.
Business services: Each business service may participate in different process services. These services contain business micro-logic and are meaningful from the business-internal view of the system.
IT services: IT services handle the technical view of the system. These types of services include the technology solutions and IT constraints needed to design services. An IT service may be composite or basic.
2.2.4

2.2.5
Fifth Column: Stakeholder

2.2.6
Sixth Column: People

Description: In this column we define all of the organization's workers and committees that participate in defining EA. By focusing on people we can clarify: (1) the changes needed in organization structure, chains of responsibility, authority and communication; (2) the training and skills enhancement needed for personnel and communication management; (3) the new roles and responsibilities that should be defined; and (4) the governance structure that must be established.
Deliverables: Organization structure, Chain of Authority and Responsibility.

2.2.7
Seventh Column: Resource

References

[1] D. Harrison and L. Varveris, TOGAF: Establishing itself as the definitive method for building enterprise architectures in the commercial world (2004).
[2] D. Minoli, Enterprise Architecture A to Z: Frameworks, Business Process Modelling, SOA, and Infrastructure Technology, Auerbach Publications, 2008.
[3] J. Schekkerman, How to Survive in the Jungle of Enterprise Architecture Frameworks: Creating or Choosing an Enterprise Architecture Framework, Trafford Publishing, 2006.
[4] A. Ayed, M. Rosemann, E. Fielt, and A. Korthaus, Enterprise architecture and the integration of service-oriented architecture, PACIS 2011 Proceedings, Brisbane, Australia (2011).
[5] A. Nabiollahi, R. A. Alias, and S. Sahibuddin, A service based framework for integration of ITIL V3 and enterprise architecture, Design (2010), 1-5.
Maryam Tahmasbi
Narges Mirehi
m_tahmasi@sbu.ac.ir
n.mirehi@mail.sbu.ac.ir
Abstract:
We consider the problem of finding a large number of disjoint paths for unit disks moving amidst static obstacles. The problem is motivated by the problem of finding shortest non-crossing paths for aircraft in air traffic management, in which one must determine the shortest path for every aircraft so that the aircraft can safely move through a domain while avoiding each other and avoiding no-fly zones and predicted weather hazards. We compute K shortest paths for aircraft in a domain with one hole, where K - 1 pairs of terminals lie on the boundary of the domain, and one pair has a terminal on the boundary of the domain and the other on the boundary of the hole. We present an algorithm for solving the problem in polynomial time.
Keywords: K thick paths; Minkowski sum; non-crossing paths; simple polygon with one hole; minsum
Introduction

The input to the problem is a simple polygonal domain, K pairs of terminals (s_k, t_k) that are the sources and sinks of the paths, and one hole/obstacle. K - 1 pairs of terminals lie on the boundary of the domain, and one of the points of the last pair lies on the boundary of the hole. The goal is to find K thick non-crossing paths in the domain, not intersecting the hole, such that the total length of the paths is minimum.

One of the most studied subjects in computational geometry is the shortest path problem [1],[2]. One extension of the geometric shortest path problem is [3]: given a set of obstacles and a pair of points (s, t), find a shortest s-t path avoiding the obstacles. The non-crossing paths problem is an extension of the shortest path problem: given a set of obstacles and K pairs of points (s_k, t_k), find a collection of K non-crossing s_k-t_k paths such that the paths are optimal according to some criterion. The objective may be either to minimize the sum of the lengths of the paths (minsum version) or to minimize the length of the longest path (minmax version). A thick path is the Minkowski sum of a curve and the unit disk. Two thick paths are called non-crossing when they are non-intersecting; thick paths are allowed to share parts of their boundaries with each other, but the interiors of the paths are disjoint [4]. The problem of finding multiple thick paths (the Thick Non-Crossing Paths Problem), which we consider in this paper, is an extension of both the shortest non-crossing paths [5] and the shortest thick path [6] problems. Thick path planning in geometric domains is an important computational geometry subject with applications in robotics, VLSI routing, air traffic management (ATM), sensor networks, etc. [3].

Motivation

We are motivated by an application in ATM; similar problems may arise in other coordinated motion planning problems in transportation engineering, e.g., shipping vessels, robotic material handling machines, etc. The polygon P models an airspace through which the
aircraft intend to fly. We assume that the aircraft remain at a constant altitude (as is often the case during en route flight), so that we can consider the problem in a two-dimensional domain. There is an obstacle within P that corresponds to a no-fly zone arising from special use of airspace (military airspace, a noise abatement zone, a security zone over a city, etc.). We are interested in determining paths for the aircraft from sources to sinks that can safely be routed through P with optimal total path length, while maintaining safe separation from each other and from the obstacle.
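The "thick path" defined above (the Minkowski sum of a curve and the unit disk) is easy to visualize computationally. Below is a quick Python illustration using shapely, whose buffer() operation realizes exactly this Minkowski sum for a polyline; the coordinates are hypothetical:

```python
from shapely.geometry import LineString

route = LineString([(0, 0), (4, 1), (8, 5)])   # an aircraft's reference curve
thick_path = route.buffer(1.0)                  # sweep a unit disk along it

other = LineString([(0, 3), (8, 8)]).buffer(1.0)
# Non-crossing = disjoint interiors (shared boundary points are allowed).
print(thick_path.area, thick_path.intersection(other).area == 0.0)
```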
Related work

This problem can be viewed as a variation of the Fat-Edge Graph Drawing Problem (FEDP) [7],[8], which, in turn, is an extension of the continuous homotopic routing problem (CHRP), a classical problem in VLSI design [9],[10],[11],[12]. A related problem is that of finding shortest paths homotopic to a given collection of paths [13],[14],[15]. The novelty of our work lies in considering the problem in simple polygons and polygonal domains; previous research concentrated on point obstacles for the paths. Although only point obstacles are considered in CHRP/FEDP, the existing results on FEDP [7],[8] are more general than our result in some other aspects: the general FEDP receives as input an embedding of an arbitrary planar graph and finds a drawing with edges of maximum thickness; we do not answer the question of finding the maximum separation between the paths. Some heuristics for finding thick non-crossing paths in polygonal domains are suggested in the VLSI literature [16], but neither complexity analysis nor performance guarantees are given there. A very restricted version is considered in [17]. In a rectilinear environment, fast algorithms are known for some special cases of the minsum version [18],[19]. A related problem is considered in [6], where all pairs (s_k, t_k) lie on the boundary of the polygon and the sources/sinks are not allowed to lie on the boundary of the holes. We extend the work in [6] to the case where one of the sinks/sources lies on the boundary of the hole, and compute the K shortest paths in linear time.
Preliminaries
We begin with a formal statement of our problem and a review of some relevant notions and results from previous works [20],[6]. Let P^{-1} = P \setminus (\partial P)^{1} be the 1-unit inward offset of P; we assume that P^{-1} is still a simple polygon. Let ST = \{(s_k, t_k), k = 1...K\} be the set of K pairs of points on the boundary of P^{-1} and the boundary of Q^{1} (one point lies on the boundary of Q^{1}), and let (\pi)_k be a
in which the root is the whole circle C, the root's immediate children are sl(s1, t1) and sl(t1, s1), and the parent-child relation is defined by containment of the slices (see Fig. 2, ignoring the shaded disk for now; see also [5] for details).
5.1
Algorithm

5.2
Running time

References

[1] J. Erickson and A. Nayyeri, Shortest non-crossing walks in the plane, Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA) (2011), 125-128.
[2] E. M. Arkin, J. S. B. Mitchell, and V. Polishchuk, Maximum thick paths in static and dynamic environments, Comput. Geom. 43(3) (2010), 279-294.
[3] J. S. B. Mitchell, Geometric shortest paths and network optimization, Handbook of Computational Geometry, J. Sack and G. Urrutia, editors, Elsevier Science B.V. North-Holland, Amsterdam, pages 633-701, 2000.
[9] R. Cole and A. Siegel, River routing every which way, but loose, Proc. 25th Annu. IEEE Sympos. Found. Comput. Sci. (1984), 65-73.
[10] S. Gao, M. Jerrum, M. Kaufmann, K. Mehlhorn, and W. Rülling, On continuous homotopic one layer routing, SCG '88: Proc. of the Fourth Annual Symposium on Computational Geometry, New York, NY, USA, ACM Press (1988), 392-402.
[11] C. E. Leiserson and F. M. Maley, Algorithms for routing and testing routability of planar VLSI layouts, Proc. 17th Annu. ACM Sympos. Theory Comput. (1985), 69-78.
[12] F. M. Maley, Single-Layer Wire Routing and Compaction, MIT Press, Cambridge, MA (1990).
[13] S. Bespamyatnikh, Computing homotopic shortest paths in the plane, J. Algorithms 49(2) (2003), 284-303.
[14] A. Efrat, S.G. Kobourov, and A. Lubiw, Computing homotopic shortest paths efficiently, In Proceedings of the 10th Annual European Symposium on Algorithms, London, UK, Springer-Verlag (2002), 411-423.
[15] T. Dayan, Rubber-Band Based Topological Router, Ph.D. thesis, UC Santa Cruz, 1997.
[16] C. P. Hsu, General river routing algorithm, Proc. of the Twentieth Design Automation Conference (1983), 578-583.
[17] A. Aggarwal, M. M. Klawe, S. Moran, P. W. Shor, and R. Wilber, Geometric applications of a matrix searching algorithm, In Proc. 2nd Annu. ACM Sympos. Comput. Geom. (1986), 285-292.
[18] Y. Kusakari, H. Suzuki, and T. Nishizeki, Finding a shortest pair of paths on the plane with obstacles and crossing
Zahra Jalalian
Faculty of Engineering
Kharazmi University
Kharazmi University
borna@tmu.ac.ir
jalalian@tmu.ac.ir
Abstract: The 3-SUM problem for a given set S of integers asks to find all three-tuples (a, b, c) for which a + b + c = 0. Many other problems in computational geometry, such as motion planning, are related to this problem. The complexity of existing algorithms for solving 3-SUM is O(n^2), or O(n^2) divided by a small factor. The aim of this paper is to provide a linear hash function and present a fast algorithm that finds all suitable three-tuples in one pass over S. We also improve the performance of our algorithm by using index tables and dividing S into a negative and a non-negative part.
Introduction
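The body of the paper is not preserved in this excerpt, so as a point of reference only, here is a minimal hash-based Python sketch along the lines of the abstract: S is split into negative and non-negative parts, and a hash (index) table answers membership probes while candidate pairs are scanned. This is a generic baseline, not the paper's algorithm:

```python
def three_sum(S):
    """Report all triples of distinct elements (a, b, c) with a + b + c = 0.
    A zero-sum triple must mix signs, so pairing one negative element with
    one non-negative element and probing a hash table for the third covers
    every case except triples that reuse an element (e.g. (0, 0, 0))."""
    neg = sorted({x for x in S if x < 0})
    nonneg = sorted({x for x in S if x >= 0})
    index = set(S)                      # O(1) expected-time membership probes
    triples = set()
    for a in neg:
        for b in nonneg:
            c = -(a + b)
            if c in index and c != a and c != b:
                triples.add(tuple(sorted((a, b, c))))
    return sorted(triples)

print(three_sum([-5, -3, -1, 0, 2, 3, 4, 8]))
# -> [(-5, -3, 8), (-5, 2, 3), (-3, -1, 4), (-3, 0, 3)]
```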
Figure 2: Histogram of the average number of operations performed by Algorithms 1, 2 and 3 over 100 tests.
References
[1] I. Baran, E. D. Demaine, and M. Patrascu, Subquadratic algorithms for 3-SUM, Proc. 9th Workshop on Algorithms & Data Structures, Springer, Berlin/Heidelberg, LNCS 3668 (2005), 409-421.
[2] M. Dietzfelbinger, Universal hashing and k-wise independent random variables via integer arithmetic without primes, Proc. 13th Symposium on Theoretical Aspects of Computer Science, Lecture Notes in Computer Science (1996), 569-580.
[3] H. Edelsbrunner, J. O'Rourke, and R. Seidel, Constructing arrangements of lines and hyperplanes with applications, SIAM J. Comput. 15 (1986), 341-363.
[4] J. Erickson, Lower bounds for fundamental geometric problems, PhD thesis, University of California at Berkeley, 1996.
[6] M. N. Wegman and J. L. Carter, New classes and applications of hash functions, Proc. 20th IEEE FOCS (1979), 175-182.
Abolghasem Laleh
shadi.nilforoushan@gmail.com
aglaleh@alzahra.ac.ir
Ali Mohades
Abstract: Voronoi diagrams have proven to be useful structures in various fields and are one of the most fundamental concepts in computational geometry. Although Voronoi diagrams in the plane have been studied extensively, using different notions of sites and metrics, little is known for other geometric spaces. In this paper we are interested in the Voronoi diagram of a set of sites in a given inversion circle. We study various cases that show some differences between Voronoi diagrams in Euclidean and inversion geometry. Finally, a special partition of the inversion circle is given, which is proven to be the Voronoi diagram of the inverted point sites in the inversion circle.
Introduction
Corresponding Author: Algorithm and Computational Geometry Research Group, Amirkabir University of Technology, Tehran, Iran, T: (+98) 26 34550002
Algorithm and Computational Geometry Research Group, Amirkabir University of Technology, Tehran, Iran.
Voronoi diagrams have nice properties, which motivated us to study whether they are preserved in other spaces. In this paper, we study the case of inversion geometry, especially in a given inversion circle, where the inversion map is

t(z) = \frac{1}{z}.
4.1
Stereographic Projection

Let \Sigma be the sphere centered at the origin of C with unit radius; that is, its equator coincides with the unit circle. We now seek to set up a correspondence between points on \Sigma and points in C (see Figure 7).

4.2
Stereographic Formulae

In this subsection we recall explicit formulae connecting the coordinates of a point z in C and its stereographic projection on \Sigma. These formulae are useful in investigating non-Euclidean geometry. The inverse projection \pi^{-1} is given by

\pi^{-1}(x + iy) = \left( \frac{2x}{x^2+y^2+1},\ \frac{2y}{x^2+y^2+1},\ \frac{x^2+y^2-1}{x^2+y^2+1} \right).

Proof: See [14].

Now change the direction in Figure 7, and let S be the north pole of the Riemann sphere; then we deduce the following:
Let u : \Sigma \to C be the stereographic map under the assumption that S is the north pole; then for a given point (X, Y, Z) on \Sigma,

u(X, Y, Z) = \frac{X}{1+Z} + i\,\frac{Y}{1+Z}

and

u^{-1}(x + iy) = \left( \frac{2x}{x^2+y^2+1},\ \frac{2y}{x^2+y^2+1},\ \frac{x^2+y^2-1}{x^2+y^2+1} \right).

Hence in this case we have the following:
(i) The interior of the unit circle is mapped to the southern hemisphere of \Sigma; in particular, 0 is mapped to the south pole.
(ii) Each point on the unit circle is mapped to itself.
(iii) The exterior of the unit circle is mapped to the northern hemisphere of \Sigma, except that S is the stereographic image of \infty.

4.3
Main results

By combining Theorem 3 and Corollary ??, the following interesting theorem is derived:
Theorem 4: Let P be a given point in C and denote P' = u(\pi^{-1}(P)); then P' is the inverse of P with respect to the unit circle.

Proof: Let P = (x, y) be a given point in C. By Theorem 3,

\pi^{-1}(x + iy) = \left( \frac{2x}{x^2+y^2+1},\ \frac{2y}{x^2+y^2+1},\ \frac{x^2+y^2-1}{x^2+y^2+1} \right),

and according to Corollary ??,

u\left( \frac{2x}{x^2+y^2+1},\ \frac{2y}{x^2+y^2+1},\ \frac{x^2+y^2-1}{x^2+y^2+1} \right) = \left( \frac{x}{x^2+y^2},\ \frac{y}{x^2+y^2} \right).

Thus P' = (x/(x^2+y^2), y/(x^2+y^2)). Therefore (OP)(OP') = 1 and the proof is done.

Theorem 5: For a given set of points as sites in C, the image of the Voronoi diagram of the mentioned sites inside the given inversion circle C is a partition of C that preserves symmetry. That is, the images of each pair of sites inside C are symmetric with respect to the image of the corresponding Voronoi edge.

Therefore, according to Theorem 5 and Lemma 4.1 [2], we obtain the main result of this paper as follows.

Theorem 6: For a given set of points as sites in C, the image of the Voronoi diagram of the mentioned sites inside the given inversion circle C is the Voronoi diagram of the inverted point sites in C. That is, the inversion of any Voronoi diagram in C relative to the given circle C gives a Voronoi diagram in C.
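Theorem 4 is easy to verify numerically. The Python sketch below composes the inverse stereographic projection with the map u from above and checks that |OP| * |OP'| = 1 for an arbitrary point (a unit sphere and unit inversion circle are assumed, as in the text):

```python
import numpy as np

def inv_stereo(x, y):
    """Inverse stereographic projection of x + iy onto the unit sphere."""
    d = x*x + y*y + 1.0
    return np.array([2*x/d, 2*y/d, (x*x + y*y - 1.0)/d])

def u(X, Y, Z):
    """Stereographic map back to C with S taken as the north pole."""
    return complex(X / (1 + Z), Y / (1 + Z))

P = complex(0.6, 0.8) * 2.5          # an arbitrary point with |P| = 2.5
Pp = u(*inv_stereo(P.real, P.imag))  # P' = u(pi^{-1}(P)), as in Theorem 4
print(abs(P) * abs(Pp))              # -> 1.0: P' inverts P in the unit circle
```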
References
[2] F. Aurenhammer and R. Klein, Voronoi diagrams, Handbook of Computational Geometry, J. Sack and G. Urrutia, editors, Elsevier Science Publishers, B.V. North-Holland, Chapter 5, pages 201-290, 2000.
[3] L. P. Chew and L. Drysdale, Voronoi diagrams based on convex distance functions, Proc. 1st Ann. Symp. Comp. Geom. (1985), 235-244.
[4] S. Drysdale, Voronoi Diagrams: Applications from Archaeology to Zoology, Regional Geometry Institute, Smith College, July 19 (1993).
[5] A. Francois, Voronoi diagrams of semi-algebraic sets, Ph.D. thesis, Department of Computer Science, The University of British Columbia, January, 2004.
[6] M. J. Greenberg, Euclidean and Non-Euclidean Geometries, 2nd ed., W. H. Freeman & Co., 1988.
[7] M. Karavelas, 2D Segment Voronoi Diagrams, CGAL User and Reference Manual: All parts, Chapter 43, 20 December, 2004.
[8] M. I. Karavelas and M. Yvinec, The Voronoi diagram of planar convex objects, 11th European Symposium on Algorithms (ESA 2003), LNCS 2832 (2003), 337-348.
[9] D.-S. Kim, D. Kim, and K. Sugihara, Voronoi diagram of a circle set from Voronoi diagram of a point set: II. Geometry, Computer Aided Geometric Design 18 (2001), 563-585.
[10] V. Koltun and M. Sharir, Polyhedral Voronoi diagrams of polyhedra in three dimensions, In Proc. 18th Annu. ACM Sympos. Comput. Geom. (2002), 227-236.
[11]
[15] A. Okabe, B. Boots, K. Sugihara, and S. N. Chiu, Spatial Tessellations: Concepts and Applications of Voronoi Diagrams, 2nd edition, John Wiley & Sons Ltd., Chichester, 2000.
PNU University
Department of Computer Engineering and Information Technology
mehdi_seidhamze@yahoo.com
Abstract: This paper presents an application of the analytic hierarchy process to the selection of effective factors in estimating customers' response to mobile advertising, and then investigates the most successful factors for a major form of mobile communication, short message services (SMS).
This method adopts a multi-criteria approach that can be used for the analysis and comparison of mobile advertising. Four criteria were used for evaluating mobile advertising: information services, entertainment, coupons, and location based services. For each, a matrix of pairwise comparisons between influencing factors was evaluated. Finally, the aim of this investigation is to gain a better understanding of how companies use mobile advertising in doing business.
Keywords: Mobile Advertising; E-Advertising; Personalization; Analytic Hierarchy Process; Short Message Services (SMS); Successful Factors
Introduction
With the growth and progress of electronic business, especially on cellular phones, mobile advertisement seems to succeed when the elements that affect customers' attitudes in electronic and wireless settings are well understood and the necessary actions are taken. Several elements affect customers' attitudes: among them we can mention personal values and inner beliefs, customers' characteristics, technological and media elements, and the strategies that companies adopt.
Online advertising (ad) is a form of promotion that uses the Internet and World Wide Web for the express purpose of delivering marketing messages to attract customers [3].
2
Literature review

2.1
Mobile advertising

Short message services (SMS) have become a new technological buzzword for transmitting business-to-customer messages to such wireless devices as cellular telephones, pagers, and personal data assistants. Many brands and media companies include text message numbers in their advertisements to enable interested consumers to obtain more information [4]. Mobile marketing uses interactive wireless media to deliver personalized time- and location-sensitive information promoting goods, services, and ideas, thereby generating value for all stakeholders [2]. Studying interactive mobile services such as SMS and MMS suggests drawing upon theories in marketing, consumer behavior, psychology and adoption to investigate their organizational and personal use [4].
Mobile advertising is predicted to be an important source of revenue for mobile operators in the future [9] and has been identified as one of the most promising potential business areas. For instance, in comparison with much advertising in traditional media, mobile advertisements can be customized to better suit a consumer's needs and improve the client relationship [1]. Examples of mobile advertising methods include mobile banners, alerts, and proximity-triggered advertisements [6].
2.2 Alternatives

3 The AHP

The AHP is one of the most extensively used multi-criteria decision making (MCDM) methods. It has been applied to a wide variety of decisions including car purchasing, IS project selection [8], and IS success [5]. The AHP aims at integrating different measures into a single overall score for ranking decision alternatives. Its main characteristic is that it is based on pairwise comparison judgments.

In this paper, we discuss the relationship between the factors behind the success of marketing companies, which also influence the way consumers react to mobile advertising.

3.1 Personalization

3.2 Credibility

3.3 Consumer permission

3.4 Consumer control

There is a trade-off between personalization and consumer control. Gathering the data required for tailoring messages raises privacy concerns. Corporate policies must consider legalities such as electronic signatures, electronic contracts, and conditions for sending SMS messages [1].
4 Discussion of results
After calculating the weights of the effective elements in mobile ads with respect to the designated criteria, we should determine the weights of the criteria themselves. In other words, the share of each criterion in determining the best effective element must be identified. To do this we need to compare the criteria in pairs. For example, in order to determine the relative importance of the four major criteria, a 4 x 4 matrix was formed. Expert Choice provided ratings to facilitate comparison; these then needed to be incorporated into the decision-making process. After inputting the criteria and their importance into Expert Choice, the priorities from each set of judgments were found, as sketched below.
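As an illustration of this pairwise-comparison step, the following minimal Python sketch derives priority weights from a reciprocal 4 x 4 judgment matrix using the geometric-mean approximation of the principal eigenvector; the judgment values below are hypothetical and are not the ones entered into Expert Choice in the paper.

```python
import numpy as np

# Hypothetical pairwise judgments over the four criteria from the paper.
criteria = ["Information services", "Entertainment", "Coupons",
            "Location-based services"]

# A[i][j] = relative importance of criterion i over criterion j
# (reciprocal matrix: A[j][i] = 1 / A[i][j]).
A = np.array([[1.0, 3.0, 5.0, 2.0],
              [1/3, 1.0, 2.0, 1/2],
              [1/5, 1/2, 1.0, 1/3],
              [1/2, 2.0, 3.0, 1.0]])

# Geometric mean of each row, normalized to sum to 1, approximates
# the principal eigenvector (the AHP priority weights).
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()

# Consistency check: lambda_max from A @ w, CI = (lambda_max - n)/(n - 1),
# divided by Saaty's random index RI = 0.90 for n = 4.
lam = float(np.mean((A @ weights) / weights))
cr = ((lam - 4) / 3) / 0.90

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}")
```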
5 Conclusion
References
[1] Arno Scharl, Astrid Dickinger, and Jamie Murphy, Diffusion and success factors of mobile marketing, Electronic Commerce Research and Applications 4 (2005), 159-217.
[2] A.P. Dickinger, A. Haghirian, A. Scharl, and J. Murphy, A conceptual model and investigation of SMS marketing, Thirty-Seventh Hawaii International Conference on System Sciences (HICSS-37), Hawaii, U.S.A. (2004).
[3] Cookhwan Kim, Kwiseok Kwon, and Woojin Chang, How to measure the effectiveness of online advertising in online marketplaces, Expert Systems with Applications 38 (2011), 4234-4243.
[4] David Jingjun Xu, Stephen Shaoyi Liao, and Qiudan Li, Combining empirical experimentation and modeling techniques: A design research approach for personalized mobile advertising applications, Decision Support Systems 44 (2008), 710-724.
[5] E.W.T. Ngai, Selection of web sites for online advertising using the AHP, Information & Management 40 (2003), 233-242.
[6] G.M. Giaglis, P. Kourouthanassis, and A. Tsamakos, Towards a classification framework for mobile location services, in: B.E. Mennecke, T.J. Strader (Eds.), Mobile Commerce: Technology, Theory, and Applications, Idea Group Publishing (2003).
[7] Gülin Büyüközkan, Determining the mobile commerce user requirements using an analytic approach, Computer Standards & Interfaces 31 (2009), 144-152.
[8] M.J. Schniederjans and R.L. Wilson, Using the analytic hierarchy process and goal programming for information system project selection, Information & Management 20 (1991), 333-342.
[9] DeZoysa and E. Mizutani, Mobile advertising needs to get personal, Telecommunications International 36 (2002), no. 2.
[10] J. Tähtinen and B.V.S. Ram, Mobile advertising or mobile marketing? A need for a new concept?, Conference proceedings of eBRF, 152-164.
Fatemeh Ghadimi
Department of Electrical and Computer Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
F Ghadimi@qiau.ac.ir
Abstract: The Steiner Tree Problem in a graph, which is one of the most well-known optimization problems, asks for a minimum tree connecting a set of terminal nodes. This problem has various applications, one of which is routing in urban transportation networks. In these networks, there are some obstacles that the Steiner tree must avoid. Moreover, as this problem is NP-Complete, the time complexity of solving it is very important for making it usable in large networks. In this article, an obstacle-avoiding approach is proposed that can find a near-optimum solution to the Steiner tree problem in polynomial time. This approach compares well with the others, and it can find a near-optimum feasible tree even when there are obstacles in the network.
Keywords: Steiner Tree on the Graph; Urban Transportation Network; Free-Form Obstacles; Heuristic Algorithms.
Introduction

The Steiner Tree Problem (STP) has several definitions, but in this article it is considered on a graph. The STP on a graph has many practical uses, such as global routing and wire-length estimation in VLSI applications, civil engineering and routing on urban networks, and also multicasting in computer networks. This article focuses on urban transportation network routing, so in computing the Steiner tree, the suggested approach should avoid obstacles that may exist in this network.

The urban transportation network is assumed to be an undirected, weighted graph. The nodes of this graph are intersections, the edges are roads, and the weights are traffic volumes. In this graph there can be some polygons that are the obstacles, like Tehran's restricted traffic area. The Steiner Tree Problem that avoids obstacles is defined as follows: finding the shortest sub ...

In 1972, the STP, even on a graph, was proven to be NP-Complete [2], so there is no polynomial-time solution for it that can find the optimum answer. Thus, there is a need for heuristic and approximate approaches instead of exact algorithms. Some of these approaches are as follows: MST-based algorithms, like the algorithms of Takahashi et al. [3] and Wong et al. [4], which build the Steiner tree by adding one edge at a time until all terminals are connected; node-based local search algorithms, like Dolagh et al. [5], which find the Steiner tree using local search and identifying proper neighbors; and Greedy Randomized Search ...
Input. G = (V, E, w), T
Output. (N, P)
// V' = V \ T
1  for each v in V' do
2      if Deg(v) < 2 then
3          Remove v from V' and its edges from E;
4      end if
5  end for
Initialization: P = {}, N = {}, A = {}, J = 3.
// P is a set of edges; N is a set of nodes; A is an array of size r; J is
// a counter.
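The preprocessing loop above removes low-degree non-terminal nodes, since a degree-1 non-terminal can never lie on a shortest subtree spanning the terminals. The following minimal Python sketch (using networkx; the graph, weights and terminal names are hypothetical) applies the same degree test, iterated to a fixpoint because deleting one leaf can expose another.

```python
import networkx as nx

def prune_non_terminals(G: nx.Graph, terminals: set) -> nx.Graph:
    """Repeatedly delete non-terminal nodes of degree < 2."""
    H = G.copy()
    changed = True
    while changed:  # removing a leaf can create a new leaf
        changed = False
        for v in list(H.nodes):
            if v not in terminals and H.degree(v) < 2:
                H.remove_node(v)  # incident edges are removed as well
                changed = True
    return H

# Hypothetical example graph with terminals {"a", "c"}.
G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 2), ("b", "c", 1),
                           ("c", "d", 4), ("b", "e", 3)])
print(list(prune_non_terminals(G, {"a", "c"}).edges))  # [('a','b'), ('b','c')]
```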
3.3 First Phase
1  for each ti in T do
2      A[i] <- Min{ShrtTree(ti, .)};
3      P <- P + A[i].edges;
4      N <- N + A[i].nodes;
5  end for
6  repeat
7      flag <- true;
8      J <- J - 1;
9      for each ti in T do
10         Temp <- Min{ShrtTree(ti, .)};
11         if Temp < A[i] and Temp.edges != A[i].edges then
12             P <- P \ A[i].edges;
13             N <- N \ A[i].nodes;
14             A[i] <- Temp;
15             P <- P + A[i].edges;
16             N <- N + A[i].nodes;
17             flag <- false;
18         end if
19     end for
20 until flag = true or J = 0.
21 Remove all repeated edges in P;
22 Remove all repeated nodes in N;
Experimental Results
We implemented our algorithm in the C# programming language, and all the experiments were performed on a computer with a 2.50 GHz Intel processor and 3 GB of RAM. The algorithm has been executed on several data sets, such as Beasley's data sets [9] and the SteinLib data sets [10]. Here the results of running the OASTUN algorithm on set B of Beasley's data sets are shown.

The costs of the Steiner trees resulting from executing the OASTUN algorithm on set B, without running the Obstacle Avoiding phase, are given in Table 1. The rate of this algorithm is computed as the ratio of the cost of OASTUN to the optimum cost.
In the second loop (lines 7-15), until all the separated trees are joined together, the path with the lowest cost that connects two trees is selected from C. The edges and nodes of the selected path are added to P and N respectively, and if there is any Steiner node in this path, it is added to the set H. In this situation, the connectivity status of the groups and the number of isolated groups are updated, as sketched below.
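As a hedged Python illustration (not the paper's implementation), this joining loop can be realized with a union-find structure over tree identifiers; the candidate path set C and its tuple layout below are assumptions.

```python
parent = {}

def find(x):
    """Union-find root lookup with path halving."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def join_trees(C, num_trees):
    """Merge separate trees using the cheapest connecting paths.

    C: iterable of (cost, tree_u, tree_v, edges, nodes) candidates.
    Returns the selected edge set P and node set N.
    """
    P, N = set(), set()
    for cost, u, v, edges, nodes in sorted(C):  # cheapest paths first
        ru, rv = find(u), find(v)
        if ru != rv:              # the path connects two separate trees
            parent[ru] = rv       # union the two trees
            P.update(edges)
            N.update(nodes)
            num_trees -= 1
            if num_trees == 1:    # all trees joined into one
                break
    return P, N
```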
Table 1: Results of the OASTUN algorithm on set B of Beasley's data sets

Instance  |V|   |E|   |T|   Optimum  OASTUN  Rate (OASTUN/Opt)  Time (h:m:s:ms)
B1         50    63     9        82      82   1                  0:0:0:10
B2         50    63    13        83      83   1                  0:0:0:16
B3         50    63    25       138     138   1                  0:0:0:33
B4         50   100     9        59      59   1                  0:0:0:15
B5         50   100    13        61      61   1                  0:0:0:20
B6         50   100    25       122     122   1                  0:0:0:50
B7         75    94    13       111     111   1                  0:0:0:28
B8         75    94    19       104     104   1                  0:0:0:37
B9         75    94    38       220     220   1                  0:0:0:101
B10        75   150    13        86      86   1                  0:0:0:54
B11        75   150    19        88      92   1.045              0:0:0:66
B12        75   150    38       174     174   1                  0:0:0:131
B13       100   125    17       165     170   1.03               0:0:0:69
B14       100   125    25       235     235   1                  0:0:0:123
B15       100   125    50       318     321   1.009              0:0:0:220
B16       100   200    17       127     132   1.039              0:0:0:125
B17       100   200    25       131     131   1                  0:0:0:168
B18       100   200    50       218     218   1                  0:0:0:350
Conclusions
References
[1] S. E. Dreyfus and R. A. Wagner, The Steiner Problem in Graphs, Networks 1 (1972), 195-207.
[2] R.M. Karp, Reducibility among Combinatorial Problems, Complexity of Computer Computations, Plenum Press, New York (1972), 85-103.
[4] Y. F. Wu, P. Widmayer, and C. K. Wong, A faster approximation algorithm for the Steiner problem in graphs, Acta Informatica 23 (1986), 223-229.
[5] S. V. Dolagh and D. Moazzami, New Approximation Algorithm for Minimum Steiner Tree Problem, International Mathematical Forum 6/53 (2011), 2625-2636.
[6] S. L. Martins, P. M. Pardalos, M. G. C. Resende, and C. C. Ribeiro, Greedy Randomized Adaptive Search Procedures for the Steiner Problem in Graphs, AT&T Labs Research, Technical Report (1998).
[7] S. Dasgupta, C. H. Papadimitriou, and U. V. Vazirani, Algorithms, Chapter 4, Section 4, 2006.
University of Guilan
Department of IT Engineering, Trends in Computer Networks
Ka.Bazargan@yahoo.com
Abstract: An ad hoc wireless network consists of a set of distributed nodes that are connected to each other wirelessly. Nodes can be host computers or routers. Nodes communicate directly with each other without any access point, have no fixed organization, and therefore form an arbitrary topology. Each node is equipped with a transmitter and a receiver. An important feature of these networks is their dynamic and changing topology, which is the result of node mobility. Nodes in these networks are continually changing their position, which calls for a routing protocol that has the ability to adapt to these changes.
Keywords: Mobile ad hoc networks; network security; black hole attack; routing protocol; Black Hole; AODV
Introduction
Routing nodes that create problems during routing and cause data loss in the network are called malicious nodes or black holes. This paper presents a solution to the black hole attack, in which the behavior of nodes in the network is used to decide whether a target node is malicious or not.

3 AODV Algorithm

3.2

Black holes have two characteristics: first, they introduce their own path as the shortest (reliable) route, although this is a false path, with the intention of stopping the packets; second, black holes waste the packets passing from the origin node by consuming them. In ad hoc network routing, AODV is one of the most popular protocols, and black hole nodes, which do the most damage to this protocol, disturb the routing process.

3.3 Divided Black Hole Nodes

Black hole nodes can be divided into several categories:
4.2 Introducing Resistance Techniques against the Black Hole Attack on Individual Nodes

... is issued, voting takes place around the node. Then, based on the opinions issued by the neighbors of the RREP node, a decision is taken on whether the node is engaged in malicious behavior.
4.3 Data Routing
Table 1:
           Information
From       1   1   0
Through    0   1   0
1. Each node has a table related to its behavior and that of its neighbors. Each entry in this table specifies, for the neighbor node with the given Id, how many data packets were sent with it, how many reply packets this node sent, and how many data packets the desired node delivered to a neighbor node.

2. Each node contains a list of nodes that are in quarantine and will be removed from the routing process.

6 Simulations of Black Hole Attacks

In this simulation, using the NS simulation software and varying the number of healthy nodes and the number of malicious nodes, we show the simulation results.
6.1
Malicious nodes are nodes that respond to RREQ packets by sending RREP packets; a large number of data packets are delivered to them, but they forward a minimum of data to neighboring nodes. When a node receives an RREP packet from a neighbor in response to a RREQ, whether it is an intermediate node or the destination node, it checks whether the responding node is among the nodes in quarantine. If the node is a malicious node, the RREP packet is discarded. Otherwise, a voting process is performed around the responding node so as to obtain the desired node's overall activity, as sketched below.
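A minimal Python sketch of this check-then-vote flow follows; the function names, the opinion encoding and the 0.5 vote threshold are hypothetical, not taken from the paper.

```python
def handle_rrep(responder, quarantine, neighbour_opinions,
                vote_threshold=0.5):
    """Discard RREPs from quarantined nodes; otherwise vote on the responder."""
    if responder in quarantine:
        return "discard"                    # known malicious node
    votes = neighbour_opinions(responder)   # e.g. [1, 0, 1], 1 = "malicious"
    if votes and sum(votes) / len(votes) > vote_threshold:
        quarantine.add(responder)           # exclude from future routing
        return "discard"
    return "accept"

quarantine = set()
print(handle_rrep("n7", quarantine, lambda n: [1, 1, 0]))  # -> discard
print(handle_rrep("n3", quarantine, lambda n: [0, 0, 1]))  # -> accept
```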
Parameter                   Value
Simulator                   OPNET
Simulation time             600 sec
Number of nodes             50
Routing protocol            AODV
Traffic type                CBR
Packet interval             2 sec
Simulation area             600*600 m
Transmission range          250 m
Number of malicious nodes   2
6.2 The Simulation

Simulation   Average end-to-end   Packet delivery   Routing
time (s)     delay                rate              rate
100          0.003323244          2557              2171
130          0.003323344          2551              2133
160          0.003323371          4001              2151
190          0.003323419          4001              2198
210          0.003323444          4001              2165
240          0.003323454          4001              2171
270          0.003323474          4001              2199
300          0.003323348          4001              2111

6.3

Simulation   Average end-to-end   Packet delivery   Routing
time (s)     delay                rate              rate
100          1.104323644          2271              4950
130          1.104323644          2333              4950
160          1.104323644          2451              4950
190          1.104323654          2698              4950
210          1.104323654          2765              4950
240          1.104323664          2871              4950
270          1.104323664          2899              4950
300          1.104323664          2911              4950
In the following, we display the simulation graphs produced by the simulation software with malicious nodes.
Figure 1: OPNET simulation environment

Figure 2: Increase in delivered packets with the removal of malicious nodes

Figure 3: Delay created by attack elimination

Figure 4: Routing overhead due to attack elimination
Conclusion

... method in more ad hoc networks, because of its simplicity and ease of implementation.
arabfard-ma@kaums.ac.ir
Abstract: Nowadays, the influence of software on most fields, such as industry, science, and the economy, is significant. The success of a software system depends on its requirements coverage. Requirements Engineering explains what work the system can do and in what circumstances. Successful Requirements Engineering depends on exact knowledge of the requirements of users, customers and beneficiaries. The Airport Assignment Gate System is system software which performs gate assignment management for aircraft automatically. This system is used in order to reduce delays in the airline system as well as reducing the delay time for planes which are waiting for landing or take-off. In this paper, the Self-Adaptive Methodology has been used for modeling this system, with regard to the issue that this system should show different behavior in different conditions. A Self-Adaptive System is a system which is able to change itself when responding to changing needs, system and environment. Using this methodology, this paper attempts to support uncertainty and accountability to the needs created at runtime more than ever.
Keywords: Self-adaptive Software; Run-time Requirements Engineering; KAoS; Uncertainty Management; Goal Oriented; Airport Assignment Gate System.
Introduction
Gates are the final ports for passengers' entry and exit at the airport. Airport gate assignment is the process of selecting and assigning aircraft to gates; it is used for exact and scheduled assignment and is considered one of the important tasks at an airport. This assignment involves a set of arriving and departing flights, the gates which are ready to be assigned, and a set of constraints imposed by the airlines and the airport. Thus, the assignment process may differ under various circumstances. In order to create an efficient assignment, the assignment process must be able to cope with sudden changes in the operating environment and provide a timely solution for satisfying the resulting needs. Therefore, the gate assignment should be quite clear and explicit and have the ability to cope with changes [7]. As the number of passengers and flights increases, the complexity of this process increases significantly and the optimal use of gates becomes very important. Furthermore, as mentioned, due to the sudden changes which may occur, the system should apply an optimal and efficient assignment according to the new conditions and the requirements they cause.

2 Self-Adaptive System

A Self-Adaptive System is a kind of system which is able to change itself at runtime in response to changing needs, system and environment. These kinds of systems depend on a variety of aspects like user needs, features of the system, features of the environment, etc. The main feature of these systems is that they partly reduce the dependence on human management. In fact, Self-Adaptive Software assesses its own behavior and changes it if the assessments make clear that the system has not completely done the task assigned to it and has not achieved the desired objective, or that the work can be done with greater efficiency and effectiveness [5]. Before the creation of Self-Adaptive Software, the reimplementation and reconfiguration of a system, which was a time-consuming and costly act, was done by humans or under their direct management in order to respond to the occurring changes. Therefore, research on software which can automatically, and without human interference, adapt itself to changes occurring at runtime became important. Self-Adaptive Software was developed as a system with a feedback circle in order to adapt itself to changes occurring at runtime (Figure 1). These changes may arise from the system itself (internal factors) or from the context of the system (external factors). Thus, these kinds of systems are required to scan themselves, detect the changes, decide how to react to the change, and finally implement the decided action [6].

Figure 1: Self-Adaptive System Feedback Loop

4 Goal and Agent in the Assignment Gate System

The System Goal is the final aim which the system should achieve. A goal can be connected with the life of the system or its scenario. A goal can be displayed as several quantities, each of which is connected with different features. In addition, this goal can be divided into several sub-goals, each of which is associated with a feature. A behavioral goal defines a maximum set of permissible system behaviors. This kind of goal is divided into two groups: the Achieve Goal and the Maintain Goal. An Achieve goal is an objective which indicates the ultimate destination of the system and demonstrates the ...
Figure 2 represents the use case chart of the airport system. The KAoS charts are drawn using and based on this chart. The actors of this chart are in fact the agents of the goal and responsibility model. Moreover, the cases of this chart help us in determining the existing methods in the object model and the operators of the operator model. The goal chart of the Airport Assignment Gate System is presented in Figure 3. This chart is designed based on the goal-based methodology. The numbers shown in Figures 3 and 4 are described as follows.
1. Gate Is Requested
2. Achieve[Getting information If Pilot was requested]
3. Achieve[Checking, Assigning emergency flight If
Information was given]
4. Achieve[Checking capacity, airline, area and
making a decision If emergency was not true]
5. Achieve[Update1 database]
6. Achieve[Inform To pilot]
7. Achieve[Inform pilot to allocator for leaving gate]
8. Achieve[Assigning gate if flight was emergency]
9. Achieve[Assigning gate that is appropriate to
other constraint if flight was not emergency]
C. GetInfo
D. CheckEmergent
E. AssignGateToEmergencyFlight
F. CheckOtherConstraint
G. AssignGateToOtherFlight
H. AddQueue
I. Update1Databese
J. InformedPilot
K. LeaveGate
L. FindEmergencyWaitingFlight
M. AssignToEmergencyWaitingFlight
N. FindAppropriateFlight
O. AssignAppropriateFlight
P. Update2Database

Related Works
At runtime, Requirements Engineering is considered a subset of self-adaptive software engineering, which has only been studied seriously in recent years. [10] is one of the works which can solve the problem of the Airport Assignment Gate System. In the method presented in [10], the ability of functions based on past knowledge and experience of manual operation is used for solving this problem, and the algorithms used in this method had more analyzing and computing power than before. The major problem of this method was the manual part of the operation. Furthermore, in order to optimize the gate assignment, [11] focuses on minimizing the distance passed by passengers between the terminal and the gate assigned to the aircraft. Although this subject is considered a second-rate problem among the gate assignment problems, solving it will generally affect the optimum gate assignment. Moreover, in order to solve the assignment problem, [12] focuses on the probable flight delay; in this method of problem solving, a probabilistic gate assignment model and proactive assignment rules have been used. Since this system should change according to the various conditions which may occur in the operational environment and adapt itself to new circumstances, and none of the existing systems has focused much on supporting uncertainty in designing this system, this aim has been achieved in the method presented in this paper using the self-adaptive methodology and the KAoS requirements modeling language.
References
[1] M. Jackson, The meaning of requirements, Annals of Software Engineering 3 (2010), no. 1, 5-21.
[2] A. Van Lamsweerde, Requirements engineering: from system goals to UML models to software specifications, Vol. 3,
Wiley, 2009.
[12] S. Yan and C.H. Tang, A heuristic approach for airport gate
assignments for stochastic flight delays, European journal
of operational research 180 (2007), no. 2, 547-567.
[13] I.J. Jureta, A. Borgida, N.A. Ernst, and J. Mylopoulos,
Techne: Towards a new generation of requirements modeling languages with goals, preferences, and inconsistency
handling, IEEE (2010), 115-124.
Fatemeh Taheri
smrjamali@yahoo.com
Ft.taheri@gmail.com
Farhad Maleki
M. E. Shiri
maleki.farhad@gmail.com
shiri@aut.ac.ir
Abstract: Facility location problems arise in a wide variety of practical settings. In this paper we propose a new formulation for the capacitated facility location problem, which extends the general framework by including the amount of risk for each facility if other resources cannot serve its customers. The new formulation is evaluated with three metaheuristic algorithms: the Genetic Algorithm, Particle Swarm Optimization, and Simulated Annealing. Finally, some numerical examples are provided to show the performance of these algorithms in solving the new problem formulation.
Keywords: Capacitated Facility Location Problem; Genetic Algorithm; Particle Swarm Optimization; Simulated Annealing
Introduction

The facility location problem is a classic combinatorial optimization problem for determining the number and locations of a set of facilities: which of N capacity-constrained facilities should be used to satisfy the demand of M customers at the lowest sum of fixed and variable costs. The problem is formulated as in Khumawala (1974). Structural properties of the location problems treated here have been studied by, e.g., Leung and Magnanti (1989), Cornuéjols, Sridharan and Thizy (1991), Aardal (1992), and Aardal, Pochet and Wolsey (1995), and by (Harkness and ReVelle, 2003; Drezner et al., 2002; Canel and Das, 2002; Nozick, 2001; Canel et al., 1996, 2001; Melkote and Daskin, 2001; Giddings et al., 2001; Canel and Khumawala, 1996; Hinojosa et al.).
2 Problem Statement

2.1 The CFLP Problem

The classical CFLP can be formulated as:

Z = \min \sum_{k \in K} \sum_{j \in J} c_{kj} x_{kj} + \sum_{j \in J} f_j y_j                    (1)

subject to

\sum_{j \in J} x_{kj} = 1,  \forall k \in K                                                   (D)

\sum_{k \in K} d_k x_{kj} \le s_j y_j,  \forall j \in J                                       (C)

\sum_{j \in J} s_j y_j \ge \sum_{k \in K} d_k                                                 (T)

x_{kj} - y_j \le 0,  \forall j \in J, k \in K                                                 (B)

0 \le x_{kj} \le 1,  0 \le y_j \le 1,  y_j \in \{0, 1\}

where K is the set of customers and J the set of potential plant locations; c_{kj} is the cost of supplying customer k's demand d_k from location j; f_j is the fixed cost of operating facility j and s_j its capacity if it is open; the binary variable y_j is equal to 1 if facility j is open and 0 otherwise; finally, x_{kj} denotes the fraction of customer k's demand met from facility j. Constraints (D) are the demand constraints and constraints (C) are the capacity constraints. The aggregate capacity constraint (T) and the implied bounds (B) are superfluous; they are, however, usually added in order to sharpen the bound if Lagrangean relaxation of constraints (C) and/or (D) is applied. Without loss of generality it is assumed that c_{kj} \ge 0 for all k, j; f_j \ge 0 and s_j > 0 for all j; d_k \ge 0 for all k; and \sum_{j \in J} s_j \ge \sum_{k \in K} d_k. Lagrangean relaxation approaches for the CFLP relax one of the constraint sets (D) or (C).

2.2 New formulation of the CFLP with cost of risk

In the new formulation of the CFLP we add the overall cost of risk that is calculated for each facility:

Z = \min \sum_{k \in K} \sum_{j \in J} c_{kj} x_{kj} + \sum_{j \in J} f_j y_j + \sum_{j \in J} R_j        (2)

The parameter settings used for the three metaheuristics are:

Algorithm                      Parameter        Value
Genetic Algorithm              Mutation Rate    0.05
                               Crossover        0.85
                               Iteration        100
                               POP              40
Simulated Annealing            Iteration        1000
                               Accept Rate      0.09
Particle Swarm Optimization    Iteration        100
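To make the role of objective (2) in the metaheuristics concrete, the following hedged Python sketch evaluates a candidate open/closed vector y with a greedy fractional assignment and a penalty for infeasibility; all data values and the penalty constant are hypothetical, and the greedy assignment is an illustrative stand-in, not the paper's method.

```python
import numpy as np

def cflp_cost(y, c, f, R, d, s, penalty=1e6):
    """Objective (2): transport + fixed + risk costs for open facilities y."""
    K, J = c.shape
    open_j = [j for j in range(J) if y[j] == 1]
    if not open_j or sum(s[j] for j in open_j) < d.sum():
        return penalty                        # infeasible: not enough capacity
    cap = {j: s[j] for j in open_j}
    total = sum(f[j] + R[j] for j in open_j)  # fixed + risk costs
    for k in np.argsort(d)[::-1]:             # assign big customers first
        remaining = d[k]
        for j in sorted(open_j, key=lambda j: c[k][j]):  # cheapest first
            served = min(remaining, cap[j])   # x_kj * d_k, fractional split
            total += c[k][j] * served / d[k]  # c_kj is per full demand d_k
            cap[j] -= served
            remaining -= served
            if remaining == 0:
                break
    return total

rng = np.random.default_rng(0)
c = rng.uniform(1, 10, (5, 3)); f = np.array([20.0, 26.0, 10.0])
R = np.array([2.0, 1.0, 3.0]);  d = rng.uniform(1, 5, 5)
s = np.array([15.0, 15.0, 15.0])
print(cflp_cost(np.array([1, 0, 1]), c, f, R, d, s))
```

A GA, PSO or SA implementation would simply call such a function as the fitness of each candidate y vector.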
3.2

The customer-facility matrix is:

            F1    F2    F3    F4    F5
Customer 1  0.5   0.2   0.3   0.5   0.7
Customer 2  0     0     0     0     0
Customer 3  0.1   0.1   0.1   0.1   0.1
Customer 4  0     0     0     0     0
Customer 5  0.3   0.3   0.2   0     0.1
Customer 6  0.1   0.4   0.4   0.4   0.4
Unrestricted results
The capacity of the facilities is s_j = (20, 26, 10, 10, 30).

Figure 1: CFLP with risk (fitness value versus generation)

Empirical comparison
Time-limited results

For the time-limited evaluation, all three heuristics were allowed a maximum time of 200 s and the best solutions from each heuristic were noted. This approach evaluates the efficiency with which the three heuristics reach quality solutions over time. For the CFLP, PSO gives the best results in terms of rapidly reaching low-cost solutions, followed by SA and GA, respectively.
References

[2] Zvi Drezner and Horst W. Hamacher, Facility Location: Applications and Theory, Wiley Publishing, 2005.
[3] K. Aardal, Reformulation of capacitated facility location problems: How redundant information can help, Annals of Operations Research (1998), 289-308.
Hojjat Gohargazi
Saeed Jalili
h.gohargazi@modares.ac.ir
sjalili@modares.ac.ir
Abstract: Due to the lack of infrastructure and routers, Mobile Ad hoc NETworks (MANETs), in addition to external attacks, are vulnerable to internal attacks that can come from authorized nodes. The collusion attack is a prevalent attack against the Optimized Link State Routing (OLSR) protocol. In this attack, two colluding malicious nodes prevent routes to a target node from being established. In this paper we propose a hybrid (One Class Classification (OCC) and Centroid) method for detecting the collusion attack. For this purpose we adapt OCC methods using a simple distance-based method called Centroid. Results show that this model increases the accuracy of detecting this attack.
Keywords: Anomaly detection; Collusion attack; OLSR; One class classification; MOG.
Introduction
3 Background

3.1 Optimized Link State Routing
OLSR is one of the four standard routing protocols provided for MANETs. This protocol is proactive: the routes to all nodes are calculated periodically and maintained in each node's routing table. OLSR is based on two types of messages, HELLO and TC. Every node broadcasts HELLO messages only to its 1-hop neighbourhood at 2-second intervals, including its link, neighbourhood and MPR information. Using the information collected from HELLO messages, each node selects a subset of its 1-hop neighbours called the MPR set. MPRs ensure delivery of packets received from their selectors to all of their 2-hop neighbours.

After selecting MPRs and informing them of their selectors, every MPR generates and broadcasts TC messages every 5 seconds to propagate topology information across the network. Unlike HELLO messages, TC messages are forwarded and spread, but only by MPRs. Using the topology information obtained from these messages, every node calculates its routing table with a shortest-path algorithm.
3.2 Collusion Attack
Proposed method
In this section we describe our method for detecting attacks (especially the collusion attack) against OLSR. First, a set of features is needed for collecting data samples. For this purpose we use 20 different features; 16 of them are taken from the features defined in [6] and the others are new. The features and their descriptions are listed in figure 1.
4.3

Combining is performed in the Testing phase. After learning a model and calculating the required statistics in the Training phase, when a sample xi arrives, to test whether it is normal or an attack, first its distance to the model (learned by the OCC method) is computed (Di). Then, according to equation (1), RDi is calculated to determine the probability of it being a collusion attack. At last, these two values are combined with a voting mechanism. We use two simple voting functions, mean and maximum, defined as:

y = mean(Di, RDi)   and   y = max(Di, RDi)
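The following Python sketch strings together the scaling of Section 4.1, a placeholder OCC distance, a Centroid-style closeness ratio standing in for equation (1), and the two voting functions; every name, the centroid-ratio form and the threshold are assumptions, not the paper's code.

```python
import numpy as np

def classify(x, mu_T, sigma_T, occ_distance, c_normal, c_attack,
             voting="mean", threshold=0.5):
    """Hedged sketch: scale, score with OCC and Centroid, then vote."""
    xs = (x - mu_T) / sigma_T                 # z-score scaling (Sec. 4.1)
    Di = occ_distance(xs)                     # distance to the OCC model
    d_n = np.linalg.norm(xs - c_normal)       # distance to normal centroid
    d_a = np.linalg.norm(xs - c_attack)       # distance to attack centroid
    RDi = d_n / (d_n + d_a + 1e-12)           # closer to attack -> larger
    y = np.mean([Di, RDi]) if voting == "mean" else max(Di, RDi)
    return "attack" if y > threshold else "normal"
```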
4.1 Data Scaling

Many OCC methods are sensitive to data scaling, so how the data are scaled is important. Assuming X = {x1, x2, ..., xn} as the data samples, the scaling method we used is as follows:

xsi = (xi - μT) / σT,  for all i

in which μT and σT are the mean and standard deviation of the training data, respectively.

4.2 Centroid method

RD shows how close the sample xi is to the attack status versus the normal status; RDi is computed according to equation (1). The higher the value of RD, the higher the probability that a collusion attack has occurred. As shown in figure 2, part of this method is performed in Training and part in Testing.

5

To validate our model, we simulated a MANET in Network Simulator 2 (NS2) to collect normal and attack datasets. The simulation parameters are as follows:

Number of nodes    50
Simulation time    3000 s
Area               1000 m x 1000 m
Mobility model     RWP
Traffic type       CBR
Figure: DR (%) versus FAR (%) for MOG, MOG-Centroid (max function) and MOG-Centroid (mean function)
Acknowledgement
References
[1] B. Kannhavong et al., A Collusion Attack Against OLSR-based Mobile Ad Hoc Networks, Global Telecommunications Conference, GLOBECOM 06, IEEE, 2006, pp. 1-5.

Figure: DR (%) versus FAR (%); at threshold 0.60221, DR = 78% and FAR = 10%
[6] J.B.D. Cabrera et al., Ensemble methods for anomaly detection and distributed intrusion detection in Mobile Ad-Hoc Networks, Information Fusion 9 (2008), no. 1, 96-119.
N. Karimpour Darav
Faculty of Engineering
Faculty of Engineering
Guilan University
rebrahimi@guilan.ac.ir
karimpour@liau.ac.ir
S. Arabani Mostaghim
Abstract: Encryption has been considered a precious technique to protect information against unauthorized access, alongside the development of analytical methods to evaluate cryptographic algorithms. Analysis via statistical tests is one of the methods used by the National Institute of Standards and Technology (NIST). This article introduces a software tool, implemented using the C and Java programming languages, for cryptographic purposes.
Introduction

Encryption plays a significantly important role in protecting information against unauthorized access. The use of random numbers in cryptographic applications is increasing notably [1]. For example, needed keys are generated by utilizing random number generators in order to prevent attackers from guessing keys. Hence, generating random numbers is a sobering problem, which is done by applying random number generators. However, evaluating their quality is far from straightforward and needs some analytical manipulation. For this purpose, NIST [2] has provided a set of statistical tests applied to the output of implemented Random Number Generators (RNGs). Consequently, their results are taken into account as a benchmark to select the generator for the desired application [2]. Nonetheless, there are many other applications in which statistical analysis can be used [1],[2].

By exploiting the JNI [7] technique, the C and Java programming languages have been intertwined to take advantage of the features of both languages. Our tool utilizes the C and Java [9] programming languages and can be run under the Windows operating system.
xi+1 = (a xi + b) mod m,  for i ≥ 0        (1)

... mod m        (2)

... mod m,  for i ≥ 0        (3)

... mod m,  for i ≥ 0        (4)
Cubic Congruential Generator: This generator produces random numbers by use of a cubic equation. The cubic equation is [17]:

xi+1 = a xi^3 + b xi^2 + c xi + d  mod m        (5)

Under the primitives a = 1, b = c = d = 0, m = 2^512, the recurrence applied by the CCG to generate random numbers is [2]:

xi+1 = xi^3 mod 2^512,  for i ≥ 0

Exclusive OR Generator: This generator produces random numbers through the recurrence equation [1]:

xi = xi-1 ⊕ xi-127,  for i ≥ 128        (6)

A package of statistical tests is proposed by NIST [2] as criteria to evaluate the quality of an RNG (or PRNG), to decide whether it is appropriate for one or more applications. If a sequence successfully passes all 15 tests, it does not mean that the results of these tests are exactly correct, but with high probability the sequence can be accepted. This package consists of 15 tests, which come as follows:

Frequency test: in this test the number of 0s and 1s in a sequence is computed and then compared with the expected result. Furthermore, in this test χ² is computed from the equation [12],[1]:

χ² = (n0 - n1)² / n        (7)
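The frequency test of equation (7) is short enough to sketch directly in Python; the p-value line uses the usual erfc form for this statistic, and the input bit string is hypothetical.

```python
from math import erfc, sqrt

def frequency_test(bits: str):
    """Monobit test: chi^2 = (n0 - n1)^2 / n, plus the erfc p-value."""
    n = len(bits)
    n1 = bits.count("1")
    n0 = n - n1
    chi2 = (n0 - n1) ** 2 / n
    p_value = erfc(abs(n0 - n1) / sqrt(2 * n))  # small p => non-random
    return chi2, p_value

print(frequency_test("1011010110010111"))
```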
Runs Test: Consider s as an entire sequence containing 0s and 1s. An iteration of 0s (or 1s) is named a run of the sequence. If the number of runs in s is the same as expected in a random sequence, the test result is that the entire sequence is random [14].

Serial Test: the statistics below compare the frequencies n_i of the overlapping m-bit and (m-1)-bit patterns with their expectations [10]:

ψ²_m = (2^m / n) Σ_{i ∈ Z^m} (n_i - n/2^m)²

ψ²_{m-1} = (2^{m-1} / n) Σ_{i ∈ Z^{m-1}} (n_i - n/2^{m-1})²
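A hedged Python sketch of the ψ² statistics above follows, using the algebraic identity ψ²_m = (2^m/n) Σ n_i² - n over overlapping, cyclically extended patterns; the example sequence is hypothetical.

```python
from collections import Counter

def psi_squared(bits: str, m: int) -> float:
    """psi^2_m over the n overlapping m-bit patterns of a cyclic sequence."""
    n = len(bits)
    ext = bits + bits[:m - 1]                  # wrap around for overlap counts
    counts = Counter(ext[i:i + m] for i in range(n))
    # identity: (2^m/n) * sum((n_i - n/2^m)^2) == (2^m/n) * sum(n_i^2) - n
    return (2 ** m / n) * sum(c * c for c in counts.values()) - n

bits = "0011011101"
# the serial test uses the difference psi^2_m - psi^2_{m-1}
print(psi_squared(bits, 2) - psi_squared(bits, 1))
```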
... could be converted to a DLL file by applying some changes and compiling them with the Visual Studio compiler. The tool's user interface has been designed in Java, using the JNI technique to connect the GUIs to the main core code. As shown in Figure 1, in the generator window the kind of generating algorithm must be selected, and then in the next step one or more tests should be selected. When the program ends successfully, a window like the one shown in Figure 3 appears.

Figure 1: Generators view

6 Conclusion
Figure 2: Tests view

References
[1] A. Menezes, P. van Oorschot, and S. Vanstone, Handbook of Applied Cryptography, CRC Press, Inc., Chapter 5, pages 169-190, Chapter 9, pages 321-348, June 1997.
[2] Andrew Rukhin, Juan Soto, James Nechvatal, Miles Smid, Elaine Barker, Stefan Leigh, Mark Levenson, Mark Vangel, David Banks, Alan Heckert, James Dray, and San Vo, A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications: Reports on Computer Systems Technology, NIST, U.S. (April 2010), available from www.csrc.nist.gov.
[3] J. L. Massey and S. Serconek, A Fourier transform approach to the linear complexity of nonlinearly filtered sequences: Advances in Cryptology - CRYPTO, Lecture Notes in Computer Science (1994), 332-340.
[4] J. Stern, Secret linear congruential generators are not cryptographically secure, Proceedings of the IEEE 28th Annual Symposium on Foundations of Computer Science (1987), 421-426.
[5] H. Krawczyk, How to predict congruential generators, Journal of Algorithms (1992), 527-545.
[6] S. M. Hong, S.Y. Oh, and H. Yoon, New modular multiplication algorithms for fast modular exponentiation, Advances in Cryptology - EUROCRYPT (1996), 166-177.
[7] Sheng Liang, The Java Native Interface: Programmer's Guide and Specification, Addison-Wesley, June 1999.
[9] www.java.sun.com.
[10] I.J. Good, The serial test for sampling numbers and other
tests for randomness, Proceedings of the Cambridge Philosophical Society (1953), 276-284.
[11] I. N. Kovalenko, Distribution of the linear rank of a random matrix, Theory of Probability and its Applications 17
(1972), 342-346.
[17] J. Eichenauer-Herrmann and E. Herrmann, Compound cubic congruential pseudorandom numbers, Computing 59
(1997), 85-90.
[18] ANSI X9.30 (PART 2), Public Key Cryptography Using Irreversible Algorithms for the Financial Services Industry:
The Secure Hash Algorithm 1(SHA-1), ASC X9 Secretariat
American Bankers Association (1993).
Zahra Roozbahani
raziehghiasi@gmail.com
roozbahani2@gmail.com
Behrooz Minaei-Bidgoli
University of Science and Technology, Tehran, Iran
Department of Computer Engineering
minaeibi@cse.mcu.ed
Abstract: Customer churn has become a critical issue, especially in the competitive and mature telecommunication industry. From an economic and risk-management perspective, it is important to understand customer characteristics in order to retain customers. However, few studies have used hybrid modeling for churn prediction. The main contribution of this paper is the use of hybrid neural networks for churn prediction. The experimental results show that the hybrid model performs better than a single neural network model.
Introduction

As new markets develop, competition between companies increases sharply. Since the competition gets hard and telecommunication becomes a selling product, companies seek to minimize costs, add value to their services, and guarantee differentiation. Now that customers can choose their service providers, companies pay attention to customer care in order to keep their position in the market. Under hard competitive conditions, companies try to focus on customers' behavior. Based on the needs of customers, telecommunication companies decide their service offers, give shape to their communication network and, in addition, change their organizational structure [1]. If a customer ends doing business with a provider and joins another one, the customer is called a churner. Churn is a major problem for companies with many customers, like credit card providers or insurance companies. In the telecommunication industry, the sharp ...
... predict customer churn. Researchers have shown that hybrid data mining models can improve the performance of single clustering or classification techniques; in particular, they are composed of two learning stages [5]. Nevertheless, few studies examine the performance of hybrid data mining techniques for customer churn prediction. Therefore, this paper uses a hybrid neural network in order to improve the accuracy of prediction models. The rest of the paper is organized as follows. The definition of churn and a summary of related studies are introduced in Section 2. The data used in the research are described in Section 3, and the modeling process based on neural networks is presented in Section 4. The conclusion of this paper is presented in Section 5.
Literature Review
Many highly competitive organizations have understood that retaining existing and valuable customers
is their core managerial strategy to survive in industry. This leads to the importance of churn management. Customer churn means that customers are intending to move their custom to a competing service
provider. Many studies have discussed customer churn
management in various industries, especially in mobile
telecommunications. In order to understand how related work constructs their prediction models, this paper reviews some of the current related studies. ShinYuan Hung et al. (2006) [?6] used decision tree and
neural network techniques for predicting wireless service churn. They understood that both decision tree
and neural network techniques can deliver accurate
churn prediction models. John Hadden et al. (2007)
[7] reviewed some of the most popular technologies that
have been identified for the development of a customer
churn management platform. Kristof Coussement and
Dirk Van den Poel (2008) [8] compared three classification techniques Logistic Regression, Support Vector
Machines and Random Forests to distinguish churners from non-churners. Their reviews show that Random Forests is a viable opportunity to improve prediction performance compared to Support Vector Machines and Logistic Regression which both exhibit an
equal performance. Elen Lima et al. (2009) [9] show
how domain knowledge can be incorporated in the data
mining process for churn prediction, viz. through the
evaluation of coefficient signs in a logistic regression
model, and secondly, by analyzing the decision table
(DT) extracted from a decision tree or rule-based classifier. Dulijana Popovi and Bojana Dalbelo Bai (2009)
[10] presented a model based on fuzzy methods for
churn prediction in retail banking. B.Q. Huang et al.
3 Data

3.1 Reactive Agents
In this paper we used a CRM dataset provided by an American telecom company, which focuses on the task of customer churn prediction. The database contains a churn variable signifying whether the customer had left the company two months after observation or not, and a set of 75 potential predictor variables which have been used in a predictive churn model. For the purpose of this paper, 4,000 records were randomly selected and divided, with a ratio of 9 to 1, into a training data set and a test data set.
3.2 Noise Reduction

3.3 Normalization

4.2
Combined neural network models often result in a prediction accuracy that is higher than that of the individual models. This construction is based on a straightforward approach that has been termed stacked generalization. The stacked generalization concept was formalized by Wolpert [16] and refers to schemes for feeding information from one set of generalizers to another before forming the final predicted value (output). The unique contribution of stacked generalization is that the information fed into the net of generalizers comes from multiple partitionings of the original learning set [17], [18]. A minimal sketch follows.
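The sketch below illustrates stacked generalization in this spirit, using scikit-learn with synthetic data; the paper's actual network architectures and data are not specified in this excerpt, so every size and name here is an assumption.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the churn data (hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # stand-in churn label

# Level-0 generalizers: two small neural networks.
level0 = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=i) for i in range(2)]

# Out-of-fold predictions come from multiple partitionings of the
# learning set, as stacked generalization prescribes.
meta_X = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
    for m in level0])

# Level-1 generalizer combines the level-0 outputs.
meta = LogisticRegression().fit(meta_X, y)
print("stacked accuracy:", meta.score(meta_X, y))
```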
4.3 Evaluation Model
[3] They Love Me, They Love Me Not, 17(21) (2000), 38-42.
[4] Standing By Your Carrier, available from http://currentissue.telophonyonline.com/ (2002).
[5] M. Lenard, G.R. Madey, and P. Alam, The Design and Validation of a Hybrid Information System for the Auditor's Going Concern Decision, Journal of Management Information Systems 14(4) (1998), 219-237.
[6] S.Y. Hung, D.C. Yen, and H.Y. Wang, Applying Data Mining to Telecom Churn Management, Expert Systems with Applications 31(5) (2006), 1552.
[7] J. Hadden, A. Tiwari, R. Roy, and D. Ruta, Assisted Customer Churn Management: State-of-the-Art and Future Trends, Computers & Operations Research 34(10) (2007), 2902-2917.
[8] K. Coussement and D. Van den Poel, Improving Customer Attrition Prediction by Integrating Emotions from Client/Company Interaction Emails and Evaluating Multiple Classifiers, Expert Systems with Applications 36(3) (2009), 6127-6134.
[9] E. Lima, C. Mues, and B. Baesens, Domain Knowledge Integration in Data Mining Using Decision Tables: Case Studies in Churn Prediction, Journal of the Operational Research Society 60(8) (2009), 1096-1106.
[10] D. Popović and B.D. Bašić, Churn Prediction Model in Retail Banking Using Fuzzy C-Means Algorithm, Informatica 33 (2009), 243-247.
Conclusion

In this study, we developed and used hybrid neural networks for predicting potential churn in wireless telecommunication services. We tested our hybrid neural network model and compared it with a single neural network model. The results of our experiments indicate that the hybrid neural networks perform better than the single neural network model, but are computationally expensive. However, successful churn management must also include effective retention actions. Managers need to develop attractive retention programs to satisfy those customers. Furthermore, integrating the churn score with customer segments and applying customer value will also help managers to design the right strategies to retain valuable customers.
References

[1] P. Kisioglu and Y.I. Topcu, Bayesian Belief Network Approach to Customer Churn Analysis: A Case Study on the Telecom Industry of Turkey, Expert Systems with Applications 37 (2011), 7151-7157.
[2] M. Richeldi and A. Perrucci, Churn Analysis Case Study: Telecom Italia Lab Report, Torino, Italy (2002).
Saeed Jalili
H.Masoud@Modares.ac.ir
Sjalili@Modares.ac.ir
S.M.Hossein Hasheminejad
Tarbiat Modares University (TMU)
Electrical and Computer Engineering Faculty
SMH.Hasheminejad@Modares.ac.ir
Abstract: Assigning responsibilities to classes is a vital and critical task in the object-oriented software design process and directly affects the maintainability, reusability and performance of a software system. In this paper we propose a clustering-based model for solving the Class Responsibility Assignment (CRA) problem. The proposed model is independent of any specific clustering method and is highly extensible to cover new features of object-oriented software design. The input of the model is the collaboration diagrams of the analysis phase, and its output is a class diagram with high cohesion and low coupling. To evaluate the proposed model we use four different clustering methods: X-means, Expectation Maximization (EM), K-means and Hierarchical Clustering (HC). Comparing the obtained results of the clustering methods with the expert design reveals that the clustering methods yield promising results.
Keywords: Object-oriented analysis and design; Class responsibility assignment (CRA); Clustering.
Introduction

The object-oriented software design process involves several steps, each of which has its own activities. Class Responsibility Assignment (CRA) is one of the important and complex activities in Object-Oriented Analysis and Design (OOAD). Its main goal is to find the optimal assignment of responsibilities (where responsibilities are shown in terms of methods and attributes) to classes with regard to various aspects of coupling and cohesion, thus leading to a more maintainable and reusable model [1]. CRA is vital not only during the analysis and design phase, but also during maintenance.

There are many methodologies to help recognize the responsibilities of a system [2] as well as assigning them to classes [3], but all of them depend greatly on human ...
Related Works

The CRA problem can simply be mapped to a clustering problem. To show this, we first define the clustering problem. Consider a set of N d-dimensional data objects O = {O1, O2, ..., ON}, where Oi = (oi1, oi2, ..., oid) ∈ R^d. Each oij is called a feature (attribute, variable, or dimension) and represents the value of data object i at dimension j. Given O, the set of data objects, the goal of partitional clustering is to divide the data objects into K clusters {C1, C2, ..., CK} satisfying the following conditions:

a) Ci ≠ ∅,  i = 1, ..., K

b) ∪_{i=1}^{K} Ci = O

c) Ci ∩ Cj = ∅,  i, j = 1, ..., K and i ≠ j
Proposed Model
Figure 1 shows our model for solving the CRA problem. The proposed model has three main steps: (1) extracting features and generating the data set, (2) clustering the data set, and (3) processing the clustering results and generating the class diagram. These steps are described in the following subsections; a minimal sketch of the pipeline follows.
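The hedged Python sketch below runs steps (1)-(3) on a toy example: a hypothetical binary relation matrix over responsibilities is clustered with K-means, and each cluster is read off as a candidate class. The responsibility names, feature columns and cluster count are all assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Step 1 (stand-in): each responsibility (method or attribute) becomes
# a binary feature vector encoding its relations.
responsibilities = ["deposit", "withdraw", "balance",   # methods
                    "owner", "address"]                 # attributes
features = np.array([[1, 1, 0, 0],
                     [1, 1, 0, 0],
                     [1, 0, 0, 0],
                     [0, 0, 1, 1],
                     [0, 0, 1, 1]])

# Step 2: cluster the data set (K-means here; the model is method-agnostic).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)

# Step 3: each cluster becomes a candidate class in the class diagram.
for cls in range(2):
    members = [r for r, l in zip(responsibilities, km.labels_) if l == cls]
    print(f"class {cls}: {members}")
```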
The extracted features include: Method-Attribute Relation, Method-Method Relation, Related Attributes, Related Methods, Attribute Complexity, and Method Complexity.
... are determined according to the dependency of their contents. For example, suppose method Mi from Class1 calls method Mj from Class2; in this case there is a relationship between Class1 and Class2.
Case Study

Experimental Results
The values of these metrics obtained by GA are worse than those of the clustering methods. Also, the computational time of the clustering methods is better than that of GA.
Table 2: The value of coupling and cohesion for clustering methods and expert design

Algorithm   #Classes   Coupling (MAC)   Coupling (MMC)   Cohesion (RCI)   Cohesion (TCC)
X-means     14         22               29               0.137            0.102
EM          17         25               29               0.125            0.102
K-means     18         27               23               0.079            0.272
K-means     15         22               29               0.128            0.201
K-means     14         18               29               0.139            0.35
K-means     13         13               24               0.149            0.35
HC          18         27               27               0.097            0.173
HC          15         25               29               0.125            0.098
HC          14         22               29               0.120            0.35
HC          13         18               29               0.129            0.35
Expert      18         27               29               0.005            0.109
Method                Metric     Avg ± SD       Best
Genetic Algorithm     Coupling   38.8 ± 1.4     37
                      Cohesion   0.420 ± 0.09   0.499
                      Time       41 s
Clustering Methods    Coupling   37.3 ± 0.9     37
                      Cohesion   0.472 ± 0.08   0.499
                      Time       1 ± 0.5 s
Class Responsibility Assignment (CRA) is an important and complex activity in object-oriented analysis and design. In this paper, we addressed CRA as a clustering problem and proposed a clustering-based model (Figure 1) for solving it. The proposed model has three main steps: (1) extracting features and generating the data set, (2) clustering the data set, and (3) processing the clustering results and generating the class diagram. Four different clustering methods (X-means, EM, K-means and HC) were used to evaluate the proposed model. Comparing the obtained results of expert design with the clustering methods reveals that the clustering methods yield promising results. On the other hand, comparing the obtained results of the clustering methods with the single-objective Genetic Algorithm reveals that the clustering methods have low computational time and better average values for the coupling and cohesion metrics. In future work, we intend to use powerful dynamic clustering methods and extend the feature set to support new aspects of software design.
References
[1] L.C. Briand, J. Daly, and J. Wuest, A Unified Framework for Cohesion Measurement in Object-Oriented Systems, Empirical Software Engineering 3 (1998), 65-117.
[2] C. Larman, Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development, Prentice Hall, 2004.
[3] B. Bruegge and A.H. Dutoit, Object-Oriented Software Engineering, Prentice Hall, 2004.
[4] M. Harman, S.A. Mansouri, and Y. Zhang, Search based software engineering: A comprehensive analysis and review of trends, techniques and applications, King's College London, Technical Report TR-09-03 (2009).
[5] O. Räihä, A survey on search-based software design, Computer Science Review 4 (2010), 203-249.
[6] M. O'Keeffe and M. Ó Cinnéide, Towards Automated Design Improvement through Combinatorial Optimization, Proceedings of the Workshop on Directions in Software Engineering Environments (2004).
[7] M. O'Keeffe and M. Ó Cinnéide, Search-Based Refactoring for Software Maintenance, Journal of Systems and Software 81 (2008), 502-516.
[8] M. Bowman, L.C. Briand, and Y. Labiche, Solving the Class Responsibility Assignment Problem in Object-Oriented Analysis with Multi-Objective Genetic Algorithms, IEEE Transactions on Software Engineering 36 (2010), 817-837.
[9] G. Glavas and K. Fertalj, Metaheuristic Approach to Class Responsibility Assignment Problem, Proceedings of the International Conference on Information Technology Interfaces (ITI) (2011), 591-596.
[10] I. Seng, J. Stammel, and D. Burkhard, Search-Based Determination of Refactorings for Improving the Class Structure of Object-Oriented Systems, Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation (2006), 1909-1916.
[11] S. Choi, S. Cha, and C.C. Tappert, A survey of binary similarity and distance measures, Journal of Systemics, Cybernetics and Informatics 8 (2010), 43-48.
[12] M. Docherty, Object-Oriented Analysis and Design, John Wiley & Sons Ltd, 2005.
[13] G. Gui and P.D. Scott, Coupling and Cohesion Measures for Evaluation of Component Reusability, Proceedings of the International Workshop on Mining Software Repositories (2006), 18-21.
Karim Faez
b namazi@aut.ac.ir
kfaez@aut.ac.ir
Abstract: Energy efficiency and quality of service (QoS) assurance are challenging tasks in wireless multimedia sensor networks (WMSNs). In this paper, we propose a new power-aware routing protocol for WMSNs supporting multi-constrained QoS requirements, using localized information. For real-time communication we consider both the delay at sender nodes and the queuing delay at the receiver. In order to achieve the reliability requirements and energy efficiency, each node dynamically adjusts its transmission power and chooses nodes that have fewer remaining hops towards the sink. A load balancing approach is used to increase lifetime and avoid congestion. Simulation results show that our protocol can support QoS with less energy consumption.
Introduction
... packet to a secondary sink. All of these methods decrease the lifetime of the network. REP [9], instead, uses a power allocation protocol to guarantee the needed reliability, since increasing transmission power results in a higher SINR. It divides the area into many concentric coronas, randomly chooses a node from the corona nearer to the sink, and increases the transmission power until the requirement is met. We use a novel HELLO message approach to find the exact number of remaining hops to the sink and to select nodes which have better link quality, needing a smaller increase in transmission power.

The rest of this paper is organized as follows: Section 2 gives the network model and assumptions. The proposed protocol is described in Section 3, and its performance is evaluated in Section 4. Finally, Section 5 concludes the paper.
System Model

Protocol Overview

3.1 Neighbor Management

Each node should be aware of its neighboring nodes' status, including their position, remaining energy, link quality, remaining hops to the sink (level) and queue state. Like other state-of-the-art localized routing protocols, we use HELLO packets to exchange this needed information; but instead of all nodes sending HELLO messages simultaneously, the sink is the node that initializes the HELLO message containing its information, labeled as level zero. Upon receiving the first HELLO packet, each node is labeled as the next level and broadcasts its information. Using this method, all nodes know their level and announce it to their neighbors. The sink node does this at fixed intervals, and after each reception nodes add an entry to their routing table, including: the node's distance to the sink, level, remaining energy, speed and required transmission power. We will discuss the last two in more detail later in this section.
ACK packet and tack is the time consumed for transmitting an ACK packet at the receiver.
Transmission delay may vary because of changes
in network parameters. A reason might be variations
in transmission power level. In order to count for past
delays, we use EWMA method for estimating the transmission delay.
Queuing delay, say dq , is computed at the receiver
and is exchanged between nodes via HELLO packets.
We use the moving average approach for this delay too.
Having the different kinds of delay, each node can estimate the velocity offered by each of its neighbors and compare it to the required velocity of the packet to be transmitted. The velocity offered by a neighboring node j is computed as below:

velocity_j = (dis_{i,d} - dis_{j,d}) / (d_tr + d_q)    (6)

where dis_{i,d} and dis_{j,d} are the distances of nodes i and j to the destination (sink).
Simulation Results

To evaluate the performance of the proposed protocol we used the Castalia-3.2 [11] simulator. Castalia is a discrete-event simulator designed for simulating wireless sensor networks. The simulation configuration consists of 36 nodes randomly deployed in a 100x100 m² terrain. The 802.11 MAC protocol (with RTS/CTS packets) is used, and a node is selected randomly to send its packets to the sink node. The traffic consists of all four packet classes and the simulation time is 600 seconds.

The performance metrics used are average energy consumption, average end-to-end delay and BER. We compare our protocol, hereafter called PMCR (Power-aware Multi-Constrained Routing), with the LOCALMOR protocol on these metrics.
[Figure 1: (a) End-to-end delay (ms) and (b) energy consumption versus required BER, for PMCR and LOCALMOR.]
Fig. 1(a) shows the average end-to-end delay for different BER requirements for high-priority packets and T_dl = 0.3 s. The average energy consumption for this situation is shown in Fig. 1(b). It can be seen that our protocol uses less energy than the LOCALMOR protocol.
[Figure 2: Packet BER and energy consumption with BER_req = 0.1, for PMCR and LOCALMOR.]

Conclusion

References
[2] S. Misra, M. Reisslein, and G. Xue, A Survey of Multimedia Streaming in Wireless Sensor Networks, IEEE Commun. Surveys Tutorials 10 (2008), 18-39.
[3] S. Ehsan and B. Hamdaoui, A Survey on Energy-Efficient Routing Techniques with QoS Assurances for Wireless Multimedia Sensor Networks, IEEE Commun. Surveys Tutorials (early access) (2011).
[4] T. He, J.A. Stankovic, C. Lu, and T.F. Abdelzaher, A Spatio-Temporal Communication Protocol for Wireless Sensor Networks, IEEE Trans. Parallel and Distributed Systems 16, no. 10 (2005), 995-1006.
[5] E. Felemban, C. Lee, and E. Ekici, MMSPEED: Multipath Multi-SPEED Protocol for QoS Guarantee of Reliability and Timeliness in Wireless Sensor Networks, IEEE Trans. Mobile Comput. 5, no. 6 (2006), 738-754.
[6] O. Chipara, Z. He, G. Xing, Q. Chen, X. Wang, C. Lu, J.A. Stankovic, and T.F. Abdelzaher, Real-time Power-aware Routing for Sensor Networks, in Proc. 14th IEEE International Workshop on Quality of Service (IWQoS 2006), New Haven, CT (June 2006).
[8] D. Djenouri and I. Balasingham, Traffic-Differentiation-Based Modular QoS Localized Routing for Wireless Sensor Networks, IEEE Trans. Mobile Computing 10 (2011), 797-809.
[9] K. Lin and M. Chen, Reliable Routing Based on Energy Prediction for Wireless Multimedia Sensor Networks, IEEE GLOBECOM (2010), 1-5.
[10] A.F. Molisch, Wireless Communications, John Wiley and Sons, 2011.
[11] Castalia User Manual, http://castalia.npc.nicta.com.au/, 2011.
Ali Moeini
Faculty of Engineering, Tehran University
moeini@ut.ac.ir
University of Qom
faranak fotouhi@hotmail.com
Abstract: Mobile learning is a new paradigm of learning that takes place in a meaningful context, involves exploration and investigation, and includes opportunities for social dialogue and interaction where learners have access to appropriate resources. The learning process can be supported by the use of mobile phones in a responsive manner, by means of context-aware hardware and technologies that facilitate interaction and conversation. This mode of learning can enhance and improve learning, teaching and assessment. In this article we discuss the distinctive features of mobile learning, different approaches to mobile learning on different continents, the advancement of portable devices and their implications, and mobile learning in Iran.

Keywords: Mobile Learning; Mobile Games; Game-Based Learning; Augmented Reality.
Introduction

During the last decade, mobile learning (m-learning), a new kind of e-learning, has been introduced, in which the power of wireless technologies is used in an educational context. Compared to traditional e-learning it is more personal, always connected to communication tools, portable, cheap and available to the public. M-learning consumers are mobile, and so learning can take place ubiquitously. Educational protocols which employ this method have aided many aspects of learning, such as motivation [1], autonomy [2], interaction and collaboration [3],[4], self-esteem [5], social skills [6], accessibility [7] and language acquisition [7]. It has been especially effective in the teaching of disadvantaged students in developing countries [8],[9].

Distinct Features of M-Learning

M-learning has three main characteristics: (1) mobility, (2) context awareness and (3) the ability to communicate. Sharples [10] defines mobility as (a) mobility in physical space: learning is not bound to the classroom; (b) mobile hardware: Bluetooth, GPS, camera and WiFi are all integrated in a compact portable device; (c) mobility in social space: a learner can form different ad hoc groups during the day for collaborative learning; and (d) mobility in time: learning can be distributed over different times according to the learner's preferences.

Another characteristic of mobile learning is context awareness: the device can collect environmental data automatically or at the learner's command, to help him/her analyze and apprehend educational material that depends on the physical world. The data is usually collected using devices such as GPS, compass, Bluetooth, camera, accelerometer and gyroscope.

The next characteristic of m-learning is that communication tools such as phone calls, SMS, MMS and mobile internet are always available. These features facilitate the learning process between students and teachers when they are located in different physical spaces.

Europe and Japan are far ahead of other countries vis-a-vis taking advantage of mobile phone features: they have used SMS in mobile commerce, forming a rich communication ecosystem with clients. Many m-learning research projects have taken place in Europe [11-15]; these projects have played a major role in shaping and developing mobile learning theories and techniques. The homogeneous mobile communication system in Europe has also provided each project with a big market. In North America, on the other hand, the lack of homogeneity in the implementation of third-generation mobile communication systems caused the late blooming of m-learning; at present, m-learning applications include game simulation environments that incorporate technologies such as GPS, WiFi and Bluetooth [16].

applications for tablets that could be used in schools.

M-learning in the Middle East has had limited accomplishments. However, it is moving towards the use of Java applications and online electronic materials [28].
Implementation of Mobile Learning in Iran

university, under the supervision of the lecturer. The virtual space of the game simulated a computer lab. The game experience was considered highly rich and motivating by the students learning technical English vocabulary. It also assisted the lecturer in teaching by presenting the material, involving the students in high-level cognitive processes, and assessing the students' work using an advanced scoring system [38].
Conclusion
References

[1] J.L. Shih, C.W. Chuang, and G.J. Hwang, An Inquiry-based Mobile Learning Approach to Enhancing Social Science Learning Effectiveness, Educational Technology and Society 13/4 (2010), 50-62.
[2] C. White, Learner Autonomy and New Learning Environments, Language Learning and Technology 15/3 (2011), 1-3.
[3] J. Attewell, From Research and Development to Mobile Learning: Tools for Education and Training Providers and their Learners, Proceedings of mLearn 2005 (2005), Available from: http://www.mlearn.org.za/CD/papers/Attewell.pdf.
[4] D. Corlett and M. Sharples, Tablet technology for informal collaboration in higher education, Proceedings of MLEARN 2004: Mobile Learning anytime everywhere, London, UK: Learning and Skills Development Agency (2004), 59-62.
[5] M. Hansen, G. Oosthuizen, J. Windsor, I. Doherty, S. Greig, K. McHardy, and L. McCann, Enhancement of Medical Interns' Levels of Clinical Skills Competence and Self-Confidence Levels via Video iPods: Pilot Randomized Controlled Trial, Journal of Medical Internet Research 13/1 (2011), e29.
[6] M. Joseph, C. Branch, C. March, and S. Lerman, Key factors mediating the use of a mobile technology tool designed to develop social and life skills in children with Autistic Spectrum Disorders, Computers and Education 58/1 (2011), 53-62.
[7] F. Fotouhi-Ghazvini, R.A. Earnshaw, A. Moeini, D. Robison, and P.S. Excell, From E-Learning to M-Learning: the use of Mixed Reality Games as a New Educational Paradigm, The International Journal of Interactive Mobile Technologies (IJIM) 5/2 (2011), 17-25.
[22] S.S. Adkins, The Worldwide Market for Mobile Learning Products and Services: 2010-2015 Forecast and Analysis (2010), 1-21, Available from: http://www.ambientinsight.com/Resources/Documents/Ambient-Insight-2010-2015-US-Mobile-Learning-Market-Executive-Overview.pdf.
[23] http://www.open.ac.uk/deep.
[24] http://www.bridges.org/ipaq competition.
[25] www.wlv.ac.uk/.
[26] http://delphian.com.au.
[27] http://www.apac.studywiz.com/.
[28] R. Belwal and S. Belwal, Mobile Phone Usage Behavior of University Students in Oman, New Trends in Information and Service Science (2009), 954-962.
[29] P. Christy and H. Stevens, Gartner Says Android to Command Nearly Half of Worldwide Smartphone Operating System Market by Year-End 2012 (2011), http://www.gartner.com/it/page.jsp?id=1622614.
[30] H. Tarumi, Y. Tsujimoto, T. Daikoku, F. Kusunoki, S. Inagaki, M. Takenaka, and T. Hayashi, Balancing virtual and real interactions in mobile learning, International Journal of Mobile Learning and Organisation 5/1 (2011), 28-45.
[31] C.L. Holden and J.M. Sykes, Leveraging Mobile Games for Place-Based Language Learning, International Journal of Game-Based Learning 1/2 (2011), 1-18.
[32] J. Johnson, Tablets To Overtake Desktop Sales By 2015, Laptops Will Still Reign (2010), http://www.inquisitr.com/76157/tablets-to-overtake-desktop-sales-by-2015-laptops-will-still-reign.
[33] S. Papert, The Children's Machine: Rethinking School in the Age of the Computer, Basic Books, New York, 1993.
[34] C.N. Quinn and R. Klein, Engaging Learning: Designing e-Learning Simulation Games, Pfeiffer: John Wiley and Sons, Inc., 2005.
[35] G.A. Gunter, R.F. Kenny, and E.H. Vick, Taking educational games seriously: using the RETAIN model to design endogenous fantasy into standalone educational games, Journal of Educational Technology Research and Development 56/5 (2008), 511-537.
[36] F. Fotouhi-Ghazvini, A. Moeini, D. Robison, R.A. Earnshaw, and P.S. Excell, A Design Methodology for Game-based Second Language Learning Software on Mobile Phones, Proceedings of Internet Technologies and Applications, Wrexham, North Wales (2009), 609-618.
[37] F. Fotouhi-Ghazvini, R.A. Earnshaw, D. Robison, and P.S. Excell, The MOBO City: A Mobile Game Package for Technical Language Learning, International Journal of Interactive Mobile Technologies 3/2 (2009), 19-24.
[38] F. Fotouhi-Ghazvini, R.A. Earnshaw, D. Robison, A. Moeini, and P.S. Excell, Using a Conversational Framework in Mobile Game based Learning: Assessment and Evaluation, Communications in Computer and Information Science, Springer-Verlag Berlin Heidelberg 177 (2011), 200-213.
Ahmad Bagheri
Faculty of Engineering
bagheri@guilan.ac.ir
zsalahshoor@msc.guilan.ac.ir
Mehrgan Mahdavi
Department of Computer Engineering, Faculty of Engineering
mahdavi@guilan.ac.ir
Abstract: Oil is a strategic commodity worldwide. Oil prices change constantly and rapidly, which makes them difficult to predict; how to predict the future price of oil is therefore one of the major issues in this industry. In this paper, a Particle Swarm Optimization (PSO) based method is proposed to predict the price of oil for the upcoming four months. PSO is a population-based optimization method inspired by the flocking behavior of birds and by human social interactions. The proposed equation has 13 dimensions and 4 variables; the variables are the prices of petroleum in the past 4 months. The experimental results indicate that the proposed approach can predict the monthly petroleum price with a difference of 3.5 dollars on average.
Introduction
Prediction is an estimate, or a number of quantitative estimates, about the likelihood of future events, developed by the use of current and past data. Predictions are used as a guide for public and private policies, because decision making is not possible without predictive knowledge. For thousands of years oil has had an important role in people's lives: it is not only the main source of the world's energy, but it is also very hard to find a product that does not need oil in its production or distribution. Hence, predicting the oil price is considered a hot topic in this industry. In this paper, the oil price is predicted by a PSO-based method.

PSO is one of the intelligent algorithms and is well suited to optimization. Kennedy and Eberhart were inspired by the life of birds and fish [1]. The algorithm has good speed and accuracy and can solve engineering problems well. Here, a method based on PSO is used to predict the oil price; the results show that this method has good ability in forecasting medium-term crude oil prices.

Many studies have predicted the oil price, such as integrating text mining and neural networks in forecasting the oil price [2]; Junyou proposed a method for forecasting stock prices using PSO-trained neural networks [3]; and Abolhassani introduced a method for forecasting stock prices using PSO-SVM [4].

This paper is organized as follows: the PSO algorithm is described in Section 2; Section 3 presents the PSO-based method for predicting the oil price; the evaluation results are given in Section 4; and Section 5 concludes.
Particle swarm optimization is a population-based evolutionary algorithm and is similar to other population-based evolutionary algorithms. PSO is motivated by the simulation of social behavior instead of survival of the fittest [1]. In PSO, each candidate solution is associated with a velocity [5]. The candidate solutions are called particles, and the position of each particle is changed according to its own experience and that of its neighbors (velocity). It is expected that the particles will move toward better solution areas. Mathematically, the particles are manipulated according to the following equations:

v_i(t+1) = w v_i(t) + C1 r1 (x_pbest,i - x_i(t)) + C2 r2 (x_gbest - x_i(t))    (1)

x_i(t+1) = x_i(t) + v_i(t+1)    (2)

where x_i(t) and v_i(t) denote the position and velocity of particle i at time step t, and r1, r2 are random values between zero and one. C1 is the cognitive learning factor and represents the attraction of a particle to its own success; C2 is the social learning factor and represents the attraction of a particle to the success of the entire swarm; w is the inertia weight, employed to control the impact of the previous history of velocities on the current velocity of a given particle. The personal best position of particle i is x_pbest,i, and x_gbest is the position of the best particle of the entire swarm. Here, w is 0.4 and C1, C2 are 2.

Identifying and applying the various parameters influencing the oil price, from its past and present status, can be very effective in making accurate predictions. Parameters such as the dollar price and inflation in America can affect the quantity of interest.

In this paper, the monthly oil price from past years is used to predict the next 4 months. The data are divided into two parts, training and testing data, in three 4-month periods. Data normalization is performed with formula (3) so that the data lie between zero and one, and the fitness function of the PSO algorithm is the total squared error (4):

x_n = (x_R - x_min) / (x_max - x_min)    (3)

where x_n is the normalized value, x_max and x_min are the maximum and minimum of the data, and x_R is the datum to be normalized.

F(x) = sum_{i=1}^{n} (E_actual - E_predicted)^2    (4)

where E_actual is the real oil price, E_predicted is the predicted oil price, and n is the number of data. Formula (5) estimates the predicted value for the first future month, and formulas (6), (7) and (8) estimate the second, third and fourth future months, respectively; they are the same except for the last term. In training mode, the past 4 months of oil prices are used to learn the model, but in test mode the fixed prices of the previous 4 months are used to calculate each of the next 4 months. The proposed method is shown in Figure 1.

E_predicted,first month = w1 x_{i+3}^{w2} + w3 x_{i+2}^{w4} + w5 x_{i+1}^{w6} + w7 x_i^{w8} + w9 x_{i+3} x_{i+2} + w10 x_{i+3} x_{i+1} + w11 x_{i+3} x_i + w12 x_{i+2} x_{i+1} + w13 x_{i+3}^4 x_{i+1}^6    (5)

E_predicted,second month = w1 x_{i+3}^{w2} + w3 x_{i+2}^{w4} + w5 x_{i+1}^{w6} + w7 x_i^{w8} + w9 x_{i+3} x_{i+2} + w10 x_{i+3} x_{i+1} + w11 x_{i+3} x_i + w12 x_{i+2} x_{i+1} + w13 x_{i+3}^4    (6)

E_predicted,third month = w1 x_{i+3}^{w2} + w3 x_{i+2}^{w4} + w5 x_{i+1}^{w6} + ...    (7)

E_predicted,fourth month = w1 x_{i+3}^{w2} + w3 x_{i+2}^{w4} + w5 x_{i+1}^{w6} + w7 x_i^{w8} + w9 x_{i+3} x_{i+2} + w10 x_{i+3} x_{i+1} + w11 x_{i+3} x_i + w12 x_{i+2} x_{i+1} + w13 x_{i+3} x_{i+2} x_{i+1}^{1.9}    (8)

The weights w are the 13 dimensions obtained by the PSO algorithm in the training phase, and i is the index of the data. The algorithm is repeated 100 times in each stage, and the number of particles is 36. The oil price is predicted using equations (5), (6), (7) and (8) over the three periods.
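A minimal Python sketch wiring equations (1)-(5) together with the stated settings (36 particles, 100 iterations, w = 0.4, C1 = C2 = 2). The synthetic normalized series and the clamping of weights to [0, 1] (to keep the fractional exponents well defined) are our assumptions, not part of the paper.

import random

def predict(w, x):
    # Equation (5): first-future-month estimate from the past four
    # normalized prices x = [x_i, x_i+1, x_i+2, x_i+3].
    xi, xi1, xi2, xi3 = x
    return (w[0]*xi3**w[1] + w[2]*xi2**w[3] + w[4]*xi1**w[5] + w[6]*xi**w[7]
            + w[8]*xi3*xi2 + w[9]*xi3*xi1 + w[10]*xi3*xi
            + w[11]*xi2*xi1 + w[12]*(xi3**4)*(xi1**6))

def fitness(w, data):
    # Equation (4): total squared prediction error over the training windows.
    return sum((actual - predict(w, window))**2 for window, actual in data)

def pso(data, dim=13, particles=36, iters=100, w_in=0.4, c1=2.0, c2=2.0):
    pos = [[random.random() for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0]*dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p, data) for p in pos]
    g = min(range(particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Equation (1): inertia + cognitive + social terms.
                vel[i][d] = (w_in*vel[i][d] + c1*r1*(pbest[i][d] - pos[i][d])
                             + c2*r2*(gbest[d] - pos[i][d]))
                # Equation (2), clamped to [0, 1] (our assumption).
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), 1.0)
            f = fitness(pos[i], data)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

series = [0.52, 0.55, 0.61, 0.58, 0.63, 0.66, 0.64]   # synthetic, normalized
data = [(series[i:i+4], series[i+4]) for i in range(len(series) - 4)]
best_w, err = pso(data)
print(err)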
Experimental Results
[Figure: Actual vs. predicted monthly oil price over the test months.]

References
[1] J. Kennedy and R.C. Eberhart, Particle swarm optimization, In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia IV (1995), 1942-1948.
[2] Sh. Wang, L. Yu, and K.K. Lai, A novel hybrid AI system framework for crude oil price forecasting, Lecture Notes in Computer Science 3327 (2004), 233-242.
[3] B. Junyou, Stock forecasting using PSO-trained neural networks, In Proceedings of the Congress on Evolutionary Computation (2007), 2879-2885.
[4] A.M. Toliyat Abolhassani and M. Yaghobbi, Stock price forecasting using PSO-SVM, 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE) (2010), 352-356.
[5] R.C. Eberhart, R. Dobbins, and P.K. Simpson, Computational Intelligence PC Tools, Morgan Kaufmann Publishers (1996), 233-242.
Maryam Hasanzadeh
sm.javadi@shahed.ac.ir
hasanzadeh@shahed.ac.ir
Abstract: Steganography is the art of hiding information. Whereas the goal of steganography is the avoidance of suspicion about hidden messages in other data, steganalysis aims to discover and render useless such covert messages. In this article, we propose a new method for steganalysis based on the correlation of color channels in adjacent pixels, while omitting the heterogeneous areas of color images. The method is designed independently of the steganography method. The results show that it has high accuracy in steganalysis; at low embedding rates it also does better than the well-known WS, SP and RS steganalysis methods.

Keywords: steganography, steganalysis, color channel correlation, homogeneous and heterogeneous areas
Introduction
Steganography is the art of hiding information. Unlike cryptography, which protects message content from being wiretapped, steganography techniques make the messages themselves covert. Since the main goal of steganography is to communicate securely in a completely undetectable manner, an adversary should not be able to distinguish in any sense between cover objects (objects not containing any secret message) and stego objects (objects containing a secret message). In this context, steganalysis refers to the body of techniques conceived to distinguish between cover objects and stego objects [1],[2].

Digital images have a high degree of redundancy in representation and pervasive applications in daily life, and are thus appealing for hiding data. As a result, the past decade has seen growing interest in research on image steganography and image steganalysis. Some of the earliest work in this regard was reported by Johnson and Jajodia [3],[4]. They mainly look at palette tables in GIF images and the anomalies caused there by common stego tools. A more principled approach to LSB steganalysis was presented in [5] by Westfeld and Pfitzmann. They identify Pairs of Values (PoVs),
which consist of pixel values that get mapped to one another on LSB flipping. Fridrich, Du and Long [6] define pixels that are close in color intensity as differing by not more than one count in any of the three color planes. They then show that the ratio of close colors to the total number of unique colors increases significantly when a new message of a selected length is embedded in a cover image, as opposed to when the same message is embedded in a stego image. A more sophisticated technique that provides remarkable detection accuracy for LSB embedding, even for short messages, was presented by Fridrich et al. in [7] and called the RS method. Moreover, other steganalysis methods have been presented, such as WS [8] by Fridrich and Goljan and sample pairs (SP) [9] by Dumitrescu, Xiaolin and Wang.

Most recent steganalysis methods for color images are based on processing each color channel independently. In this article, we propose a new steganalysis method for detecting stego images that focuses on the correlation between color channels in the homogeneous areas of color images.
This paper is structured as follows: in Section 2 we introduce the principles of the proposed method; in Section 3 we present our experimental results; finally, Section 4 concludes the paper.

2 Proposed Method

The proposed method is based on color channel correlation and the omission of heterogeneous areas in the color image, and is designed independently of the steganography method. The basic idea of feature extraction in RGB space is based on [10]. Features are extracted as follows. In the first step, for all pixels in the color image, we compute the differences between the pixel intensity and the intensities of its neighbor pixels in four directions (0, 45, 90 and 135 degrees); that is, we compute the differences for the three channels (Red, Green and Blue) and produce the vector V = [dR dG dB]^T (Equation 1). Fig. 1 shows a pixel P and its neighbors in these four directions.

The accuracy will be improved by excluding heterogeneous pixels. To do so, the heterogeneous areas are computed using the following formula (Equation 2) and do not take part in calculating CF; in other words, these pixels have no effect on the features. Since we expect no correlation in heterogeneous areas, the accuracy of the steganalysis method is increased by omitting these pixels from the Sign_v matrix:

Sign_v(P) = (dR > Thr) & (dG > Thr) & (dB > Thr)    (2)

In the above formula, the threshold is selected adaptively such that n% of the image pixels belong to the heterogeneous area. We set n to 5 experimentally, meaning that the 5% of image pixels with the least correlation to their neighboring pixels do not take part in computing Sign_v. In the proposed method, four features based on the mentioned correlation are extracted from the image. First we calculate the correlation feature in the four directions:

Diff = [CF_0, CF_45, CF_90, CF_135]    (3)

Feature2 = Variance(Diff)    (5)

Feature4 = #{p | Sign_v(p) = -3, 0 or +3} / #TotalImagePixels    (6)

In the third step, the attained features of the previous steps are recomputed after re-embedding a message; with Diff' and Diff'' denoting the difference features after one and two additional embeddings:

||Mean(Diff'') - Mean(Diff')|| / ||Mean(Diff') - Mean(Diff) + E||    (9)
If an input image has already been tampered with a message, embedding again will not modify the features much. So we expect Feature3 to be close to zero and Feature4 to be close to 1. After feature extraction, a key factor is choosing a classifier; in this article we used a support vector machine (SVM) with a polynomial kernel.
Experimental Result
At low embedding rates (10%, 20%, 30%), where detection is harder, the proposed steganalysis method does better than the other methods. The proposed method also performs suitably at high embedding rates, and in all cases it does better than the SP steganalysis method. There is little variation in the proposed method as the embedding rate changes, while the other methods vary considerably: in some other steganalysis methods the detection rate fluctuates across embedding rates, whereas in the proposed method the total detection rate improves steadily from low to high embedding rates.
TP Rate = TPs / (TPs + FNs)    (10)

FP Rate = FPs / (TNs + FPs)    (11)

Accuracy Rate = (TPs + TNs) / (TPs + FNs + TNs + FPs)    (12)

Precision Rate = TPs / (TPs + FPs)    (13)

[Figure 3: TP rate.]

[Figure 4: FP rate.]
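Equations (10)-(13) reduce to a few lines; the helper below is our illustration of how the four rates are computed from the confusion counts.

def rates(tp, fp, tn, fn):
    return {
        "TP rate":   tp / (tp + fn),                     # Equation (10)
        "FP rate":   fp / (tn + fp),                     # Equation (11)
        "accuracy":  (tp + tn) / (tp + fn + tn + fp),    # Equation (12)
        "precision": tp / (tp + fp),                     # Equation (13)
    }

print(rates(tp=90, fp=12, tn=88, fn=10))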
In this article, we draw the charts of Equations (10), (11), (12) and (13) for the three well-known WS, SP and RS steganalysis methods, the method suggested in [10], and our proposed method (Figs. 3-6). From these charts we come to the following conclusions:
Conclusion

In this paper, we have proposed a new steganalysis technique based on color channel correlation and the omission of heterogeneous areas in color images. We demonstrated the effectiveness of the proposed approach against LSB replacement: our method detects the hidden message very accurately even at low embedding rates, where it also does better than the well-known WS, SP and RS steganalysis methods and the method suggested in [10].

References

[1] J.D. Boissonnat and C. Delage, Essentials of Image Steganalysis Measures, Journal of Theoretical and Applied Information Technology (2010).
[2] T. Morkel, J.H.P. Eloff, and M.S. Olivier, An Overview of Image Steganography, Proceedings of the Fifth Annual Information Security South Africa Conference (ISSA2005), Sandton, South Africa (June/July 2005).
[3] N.F. Johnson and S. Jajodia, Steganalysis: The Investigation of Hidden Information, IEEE Information Technology Conference, Syracuse, USA (1998).
Elmira Hasanzade
University of Kashan, Kashan, Iran
elm.hasanzade@grad.kashanu.ac.ir
University of Kashan, Kashan, Iran
Babamir@kashanu.ac.ir
Abstract: This study addresses an approach to predicting deadlocks in concurrent processes, where the processes are threads of a multithread program. A deadlock occurs when two processes each need a resource held by the other; accordingly, both will wait for the other forever. Based on the past behavior of the threads of a multithread program, the possibility of deadlock in the future behavior of the threads can be estimated. Predicting future behavior from past behavior leads us to use a mathematical model, because multithread programs have uncertain behavior. To this end, we consider the past behavior of threads in terms of time series indicating a sequence of time points, and we use the past time points in Artificial Neural Networks (ANNs) to predict future time points. The efficiency and elasticity of ANNs in predicting complex behavioral patterns motivated our use of them; in fact, using ANNs to predict and improve the safety of multithread program behavior is the contribution of this study. To show the effectiveness of our model, we applied it to some Java multithread programs that were prone to deadlock. Compared with actual executions of the programs, about 74% of the deadlock predictions were correct.

Keywords: Multithread program, Deadlock detection, Artificial Neural Networks, Time series
Introduction

The prevalence of multi-core processors is widely encouraging programmers to use concurrent programming. However, concurrency introduces many challenges, and among them deadlock is one of the most common problems. Deadlock originates in the sharing of exclusive resources between processes or threads. Locking mechanisms are used to share these resources; since locking is done by the programmer, it is an error-prone technique with the potential to cause deadlocks.

Recovering from deadlock is not a cost-efficient solution. The most common recovery approaches are: (1) restarting the system, (2) killing processes or threads until the deadlock is obviated, and (3) preempting resources from processes. None of these approaches is cost efficient.

Online deadlock detection at runtime has received attention in recent years, because it does not have the limitations of previous approaches. In general, online methods allow the system to proceed normally without any restriction: while the program is running, one or more monitors observe its execution and try to determine the possibility of deadlock in the future. Online techniques are also mostly language independent and do not need programmer effort; they can be applied to legacy code with minimal changes.

Related Works

In [5] the deadlock immunity concept was introduced: the ability of a system to avoid all deadlocks that have happened in the past. When a deadlock happens for the first time, its information is kept in a concept named a context, in order to avoid similar contexts in future runs; in this way immunity against the corresponding deadlocks is achieved. To avoid deadlocks with already-seen contexts, the scheduling of threads is changed. Deadlock contexts accumulate in the system, so it can avoid a wider range of deadlocks over time. However, if a deadlock does not have a pattern similar to an already encountered one, this approach will not avoid it.

Obviously, in all online approaches, they pre-run
2.1

In some applications, it is useful to predict the future behavior of an application. In order to apply such techniques, it is necessary to know the application's past behavior and to predict its future behavior. A process's behavior can be represented by its execution pattern, also known as the process access pattern [8].

exploited for time-series prediction problems. A neural network is an information processing system capable of treating complex problems of pattern recognition, or of dynamic and nonlinear processes; in particular, it can be an efficient tool for prediction applications. The advantage of neural networks compared to statistical approaches is their ability to learn and then generalize from their knowledge [14]. Neural networks are based on training, and in many cases their prediction results are precise even if the training set has considerable noise [10]. These approaches are much more suitable for real-world problems that do not obey specific rules.
Proposed Model

For online prediction of potential deadlocks in multithread programs, we propose a model consisting of four components, each with a dedicated task. The architecture of the proposed model is shown in Figure 1, and each component's task is discussed in the following.

3.1 Problem Definition

This type of information can easily be converted into univariate time series, each representing a dedicated thread's behavior against a dedicated resource over a time interval. Such a time series can be shown as a two-element tuple, for example (thread_i, resource_j) = {nothing, request, nothing, nothing, nothing, release, nothing, request}, which means that in the first period of time thread_i requests resource_j, in the next period it has nothing to do with resource_j (and likewise in the following two periods), in the sixth period thread_i releases resource_j, and in the eighth period it requests resource_j again. Such a series can be written for any thread and any resource, which together form a two-element tuple. Each member of the series takes one of three values: {release, request, nothing}. This univariate time series can be used to predict the thread's behavior in the (t+1)-th period of time. Therefore, we have n x r time series, where n is the number of threads and r is the number of shared resources or locks.
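To make the encoding concrete, a small Python sketch that turns a log of (period, thread, resource, action) events into the per-(thread, resource) series described above; the event-tuple format is our assumption.

def to_time_series(events, n_periods, threads, resources):
    # One series per (thread, resource) pair; every period defaults to
    # "nothing" and is overwritten by the logged "request"/"release" events.
    series = {(t, r): ["nothing"] * n_periods for t in threads for r in resources}
    for period, t, r, action in events:
        series[(t, r)][period] = action
    return series

events = [(0, "T1", "R1", "request"), (5, "T1", "R1", "release"),
          (7, "T1", "R1", "request")]
s = to_time_series(events, n_periods=8, threads=["T1"], resources=["R1"])
print(s[("T1", "R1")])
# ['request', 'nothing', 'nothing', 'nothing', 'nothing', 'release',
#  'nothing', 'request']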
4.1

another algorithm that can find cycles in the resulting composed graph. It receives the results of the predictor component and the online lock-tracker component and reasons about the possibility of deadlock in the future.

The Behavior Extraction & Time Series Generation component was implemented using Java and the AspectJ compiler. This component takes a multithread program written in Java and instruments it using AspectJ; what it weaves into the target code is the logic of extracting deadlock-wise behaviors and converting them to time series. After this, whenever the targeted multithread code executes, the behaviors of interest are extracted at runtime and converted to time series.

The second component, the runtime lock tracker, is implemented in Java. It takes the online-extracted deadlock-wise behaviors from the first component and draws a lock graph.

The third component has been implemented using the default Time Series Tools in the Neural Network Toolbox of MATLAB. We used the Nonlinear AutoRegressive (NAR) predictor network, which predicts each member of a time series using the d past values of that series, that is, y(t) = f(y(t-1), ..., y(t-d)). This is a simple network consisting of three layers: input, hidden and output. In addition to the parameter d, the number of nodes in the hidden layer is another important factor in the network configuration which affects the efficiency of the predictions; the hidden-layer nodes are responsible for the main part of the prediction task, and their proper number depends on the type of time series to be predicted. We used n x r of these networks (n is the number of threads and r is the number of shared resources) to predict all the future members of the time series. This is a simple network and its computational complexity is low.
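A NAR predictor of the form y(t) = f(y(t-1), ..., y(t-d)) can be sketched with any regressor over sliding windows; below a small scikit-learn MLP stands in for the MATLAB network, purely as an illustration, with the behaviors encoded numerically (our encoding).

import numpy as np
from sklearn.neural_network import MLPRegressor

def nar_fit(series, d=3, hidden=10):
    # Sliding windows: predict y(t) from its d previous values.
    X = np.array([series[i:i+d] for i in range(len(series) - d)])
    y = np.array(series[d:])
    net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=2000,
                       random_state=0)
    return net.fit(X, y)

# Encode {request, nothing, release} as +1 / 0 / -1 (our choice).
series = [1, 0, 0, 0, 0, -1, 0, 1, 0, 0, 0, -1, 0, 1, 0, 0]
net = nar_fit(series, d=3, hidden=10)   # the d = 3, 10-hidden-node configuration
print(net.predict([series[-3:]]))       # next-period behavior estimate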
4.2.1

Since there are 20 threads and 10 resources, we deployed 200 NAR networks for prediction. For the first evaluation phase, we ran our program 250 times and used the information from these runs to train and test the networks. In this part of the work, we examined networks with different values of d (the number of past values of the series) and different numbers of nodes in the hidden layer; in this way, we selected the best network configuration to apply in the proposed deadlock prediction model. The result of each configuration is shown in Table 1. As is evident, the overall result is best when d is 3 and the number of hidden-layer nodes is 10, so we selected this configuration for the predictor component.
4.2.2
[Table: deadlock prediction results; each row summarizes a set of 250 program runs.]
parison with other online techniques, it is more cost efficient. In addition, it does not impose the limitations of offline or traditional deadlock detection techniques, like the Banker's algorithm.

The contribution of this work is in using process-behavior prediction techniques to reason about deadlock possibility. We first convert the process execution behavior into multiple time series, and then predict the future members of these series. The predicted members are translated back into behaviors, giving the future behaviors of the threads, and from these predicted behaviors we reason about the possibility of deadlock in the future. The rate of true detection of deadlock occurrences depends on the correctness of the predicted behaviors. In the proposed approach, the prediction is done using neural networks, a powerful technique for predicting complex and nonlinear time series.
References

[1] D. Engler and K. Ashcraft, RacerX: Effective, Static Detection of Race Conditions and Deadlocks, SOSP (2003).
[2] Y. Nir-Buchbinder, R. Tzoref, and S. Ur, Deadlocks: From Exhibiting to Healing, Runtime Verification: 8th International Workshop, RV, Budapest, Hungary (2008).
[3] S. Bensalem, J. Fernandez, K. Havelund, and L. Mounier, Confirmation of deadlock potentials detected by runtime analysis, Workshop on Parallel and Distributed Systems: Testing and Debugging (2006).
[4] P. Joshi, C. Park, K. Sen, and M. Naik, A randomized dynamic program analysis technique for detecting real deadlocks, ACM SIGPLAN Conference on Programming Language Design and Implementation, Dublin, Ireland (2009).
[5] H. Jula and G. Candea, A Scalable, Sound, Eventually-Complete Algorithm for Deadlock Immunity, 8th International Workshop, RV, Budapest, Hungary (2008).
[6] F. Chen and G. Rosu, Predictive Runtime Analysis of Multithread Programs, supported by the joint NSF/NASA.
[7] C. Wang, S. Kundu, M. Ganai, and A. Gupta, Symbolic Predictive Analysis for Concurrent Programs.
[10] R. Zemouri, D. Racoceanu, and N. Zerhouni, Recurrent radial basis function network for time-series prediction, Engineering Applications of Artificial Intelligence (2003), 453-463.
[11] O. Voitcu and Y. Wong, On the construction of a non-linear recursive predictor, Journal of Computational and Applied Mathematics (2004).
[12] Y. Chen and A. Abraham, Time-series forecasting using flexible neural tree model (2004), 219-235.
[13] C.J. Lin and Y.J. Xu, A self-adaptive neural fuzzy network with group-based symbiotic evolution and its prediction applications, Fuzzy Sets and Systems (2005).
[14] R. Zemouri and P. Ciprian Patic, Recurrent Radial Basis Function Network for Failure Time Series Prediction, World Academy of Science, Engineering and Technology 72 (2010).
[15] E. Dodonov and R.F. de Mello, A Novel Approach For Distributed Application Scheduling Based on Prediction of Communication Events, Future Generation Computer Systems 26.
Amin.Allahyar@stu-mail.um.ac.ir
H-Sadoghi@um.ac.ir
Abstract: The Ng-Jordan-Weiss (NJW) approach is one of the most widely used spectral clustering algorithms. It uses the eigenvectors of the normalized affinity matrix derived from the input data. These eigenvectors are treated as new features of the input data: they preserve the structure of the high-dimensional input data while representing it in a lower dimension, so the transformed data can easily be used in regular clustering algorithms. The NJW method uses the eigenvectors with the highest corresponding eigenvalues; however, these eigenvectors are not always the best selection to reveal the structure of the data. In this paper, we use Google's PageRank algorithm to replace the unsupervised problem with an approximated supervised problem, and then utilize the Fisher criterion to select the most representative eigenvectors. The experimental results demonstrate the effectiveness of selecting the relevant eigenvectors using the proposed method.

Keywords: Feature/Eigenvector Selection, Fisher Criterion, Spectral Clustering, Google's PageRank.
Introduction
Preliminaries

2.1 Spectral Clustering

Spectral clustering [1] has a strong connection with spectral graph theory [2]. It usually refers to graph partitioning based on the eigenvalues and eigenvectors of the adjacency (or affinity) matrix of a graph. Given a set of N points in d-dimensional space, X = {x_1, x_2, ..., x_N} in R^d, we can build a complete, weighted, undirected graph G(V, A) whose nodes V = {v_1, v_2, ..., v_N} correspond to the N patterns and whose edges, defined through the adjacency matrix A, encode the similarity between each pair of sample points. The adjacency between two data points can be defined as (1):

A_ij = e^(-d^2(x_i, x_j) / 2 sigma^2)    (1)

where d(x_i, x_j) is the distance between x_i and x_j.

2.2

The degree matrix D is a diagonal matrix whose element D_ii = sum_{j=1}^{N} A_ij is the degree of the point x_i.

Related Work

3.1 Eigenvector Selection
scatter is the variance of each class. We use this measure for evaluating eigenvector relevance as (2):

f_score = sum_{i,j=1}^{K} ||mean(x_i) - mean(x_j)|| / sum_{i=1}^{K} ||var(x_i)||    (2)

where mean(x_i) and var(x_i) are the mean and variance of class i, respectively. The higher the value of this index, the better separated the data points are into classes.

3.3

The adjacency matrix A is basically a graph representing the connections between data points. It can be considered as the connections of web pages and fed to the Google ranker function, so that the data points (web pages) that have many neighbors (links, on the web) can be determined. Typically, data residing in the center of clusters have more connections than data residing on the boundary, so using Google's PageRank one can detect which data have the most connections. Because of the intrinsic disjunction of these data (they reside in the center of each cluster, focused on a spot), as represented in Figure 2, it is very easy to cluster them into separate groups using a popular clustering algorithm such as K-means, which converges in one or two iterations. These data are then labeled according to their clusters, so the problem is converted from unsupervised feature selection to a supervised one. After this step we can use a regular Fisher criterion to score each eigenvector individually, and the K eigenvectors with the highest scores are selected for the last phase of the spectral clustering procedure. A block diagram of the proposed approach is shown in Figure 3.
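A compact sketch of the pipeline under our own naming: power-iteration PageRank on the row-normalized affinity matrix, then a per-eigenvector Fisher-style score once labels exist (cf. Equation (2)). This is an illustration of the described idea, not the authors' code.

import numpy as np

def pagerank(A, damping=0.85, iters=100):
    # Treat the affinity matrix as link weights and run power iteration;
    # highly connected (central) points receive high rank.
    P = A / A.sum(axis=1, keepdims=True)
    r = np.full(len(A), 1.0 / len(A))
    for _ in range(iters):
        r = (1 - damping) / len(A) + damping * (P.T @ r)
    return r

def fisher_score(values, labels):
    # Between-class mean separation over within-class variance for one
    # eigenvector, in the spirit of Equation (2).
    classes = np.unique(labels)
    means = np.array([values[labels == c].mean() for c in classes])
    within = sum(values[labels == c].var() for c in classes)
    between = sum(abs(mi - mj) for mi in means for mj in means)
    return between / (within + 1e-12)

A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]]) + 1e-9
print(pagerank(A).round(3))             # central points rank highest
labels = np.array([0, 0, 1, 1])         # e.g. from K-means on top-ranked points
print(round(fisher_score(np.array([0.1, 0.2, 0.9, 1.0]), labels), 2))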
Experimental Result
To form the affinity matrix, we utilized the method proposed in [10] using the 7th nearest neighbor. Comparing the NMI results (Table 2 and Figure 4), the proposed method achieves higher NMI except on two datasets, Image and Glass. Analyzing these datasets shows that their input data have very mixed clusters; this suggests that the first K eigenvectors, those related to the largest eigenvalues, are more appropriate when the clusters are strongly mixed together.

[Figure 4: Minimum and maximum NMI achieved over 50 runs; the blue column is NJW and the red column is the proposed method.]

Conclusion

We aim to investigate more indexes for pairwise and individual evaluation of eigenvectors.
References

[1] N. Cristianini, J. Shawe-Taylor, and J. Kandola, Spectral kernel methods for clustering, Advances in Neural Information Processing Systems 14 (2002), 649-655.
[2] F.R.K. Chung, Spectral graph theory, American Mathematical Society, 1997.
[3] G.L. Scott and H.C. Longuet-Higgins, Feature grouping by relocalisation of eigenvectors of the proximity matrix, Proc. British Machine Vision Conference, 1990, pp. 103-108.
[4] P. Perona and W. Freeman, A factorization approach to grouping, Computer Vision, ECCV'98 (1998), 655-670.
[5] T. Shi, M. Belkin, and B. Yu, Data spectroscopy: Eigenspaces of convolution operators and clustering.
[6] A.Y. Ng, M.I. Jordan, and Y. Weiss, On spectral clustering: Analysis and an algorithm, Advances in Neural Information Processing Systems 2 (2002), 849-856.
[7] N. Rebagliati and A. Verri, Spectral clustering with more than K eigenvectors, Neurocomputing (2011).
[8] T. Xiang and S. Gong, Spectral clustering with eigenvector selection, Pattern Recognition 41 (2008), no. 3, 1012-1029.
[9] F. Zhao, L. Jiao, and H. Liu, Spectral clustering with eigenvector selection based on entropy ranking, Neurocomputing 73 (2010), no. 10, 1704-1717.
[10] L. Zelnik-Manor and P. Perona, Self-tuning spectral clustering, Advances in Neural Information Processing Systems 17 (2004), 1601-1608.
[11] T. Shi, M. Belkin, and B. Yu, Data spectroscopy: Eigenspaces of convolution operators and clustering, The Annals of Statistics 37 (2009), no. 6B, 3960-3984.
[12] Y. Wang, L. Li, and Ni, Feature selection using tabu search with long-term memories and probabilistic neural networks, Pattern Recognition Letters 30 (2009), no. 7, 661-670.
[13] M.A. Hall, Correlation-based feature selection for machine learning, The University of Waikato, 1999.
[14] S.C. Yusta, Different metaheuristic strategies to solve the feature selection problem, Pattern Recognition Letters 30 (2009), no. 5, 525-534.
[15] X. He, D. Cai, and P. Niyogi, Laplacian score for feature selection, Advances in Neural Information Processing Systems 18 (2006), 507.
[16] R.O. Duda, P.E. Hart, and D.G. Stork, Pattern Classification and Scene Analysis, 2nd ed. (1995).
[18] A.N. Langville and C.D. Meyer, Google's PageRank and Beyond, Princeton Univ. Press, 2006.
Shabnam ebadi@yahoo.com
at haghighat@yahoo.com
Abstract: Peer-to-peer (P2P) topology has a significant influence on an application's performance, search efficiency, functionality and scalability. In this paper, we propose an Imperialist Competitive Algorithm (ICA) approach to the problem of Neighbor Selection (NS) in P2P networks. Each country encodes the upper half of the peer-connection matrix of the undirected graph, which reduces the dimension of the search space. The results indicate that ICA usually requires a shorter time to obtain better results than PSO (Particle Swarm Optimization), especially for large-scale problems.
Introduction
Peer-to-peer computing has attracted great interest and attention of the computing industry and
gained popularity among computer users and their networked virtual communities [1]. All participants in
a peer-to-peer system act as both clients and servers
to one another, thereby surpassing the conventional
client/server model and bringing all participant computers together with the purpose of sharing resources
such as content, bandwidth, CPU cycles It is no longer
just used for sharing music files over the Internet.
Many P2P systems have already been built for some
new purposes and are being used. An increasing number of P2P systems are used in corporate networks or
for public welfare [2].
A recent survey states that computer users are increasingly downloading large-volume contents such as
movie and software, and 24 percent of the Internet
users had downloaded a feature-length film online at
least once, and that there exists a large demand for
this category of P2P applications. A new generation
of P2P applications serves this purpose, where their top
max_B sum_{j=1}^{N} | union_{i=1}^{N} (c_i \ c_j) e_ij |    (2)

subject to:

sum_{j=1}^{N} e_ij <= d_i,  for all i
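The objective and constraint can be evaluated directly from a candidate connection matrix; a sketch under our own notation, where contents[i] is the set of content pieces held by peer i.

def ns_objective(E, contents):
    # E[i][j] == 1 if peers i and j are neighbors. For each peer j, count
    # the distinct pieces its neighbors can supply that j lacks (the union
    # of the set differences in the objective), and sum over all peers.
    N = len(contents)
    total = 0
    for j in range(N):
        gained = set()
        for i in range(N):
            if i != j and E[i][j]:
                gained |= contents[i] - contents[j]
        total += len(gained)
    return total

def degree_ok(E, d):
    # Constraint: peer i keeps at most d[i] steady connections.
    return all(sum(row) <= d_i for row, d_i in zip(E, d))

E = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
contents = [{1, 2}, {2, 3}, {4}]
print(ns_objective(E, contents), degree_ok(E, [2, 1, 1]))   # -> 5 True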
Initialize parameters
Initialize random countries (N)
Calculate fitness of countries
Initialize the empires
for i = 1 to D do
    Assimilate()
    Revolution()
    Competition()
    Calculate fitness of empires
    if the end of decades is met or there is just one empire then
        stop and output the best solution and its fitness
    else
        go to Assimilate()
    end
end

Algorithm 1: Neighbor Selection Algorithm Based on ICA (N, D)
The main steps of the algorithm are summarized in the pseudo code shown in Algorithm 1. In the algorithm, N is the number of peers and D is the total number of iterations to solve the NS problem.
After initializing the parameters and the random countries based on problem (2), the initial countries are evaluated. After initializing the empires, assimilation moves the colonies toward their relevant imperialist: we find the bits in which an imperialist and its colony differ and count them, randomly select some of the bits that must change, and update the colony with the new values.

After revolution and evaluation of the empires using problem (2), after a while all the empires except the most powerful one collapse and all the colonies come under the control of this unique empire. The algorithm stops when the last decade is reached or only one empire remains.
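A sketch of the bit-wise assimilation move in Python; the choice to flip a uniformly random number of the differing bits is our simplification of the assimilation coefficient.

import random

def assimilate(imperialist, colony):
    # Bits where the colony differs from its imperialist.
    diff = [k for k in range(len(colony)) if colony[k] != imperialist[k]]
    # Copy the imperialist's value at a random subset of those bits,
    # moving the colony toward the imperialist.
    for k in random.sample(diff, k=random.randint(0, len(diff))):
        colony[k] = imperialist[k]
    return colony

imperialist = [1, 0, 1, 1, 0, 1]
colony = [0, 0, 1, 0, 1, 1]
print(assimilate(imperialist, colony[:]))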
Experimental Studies

This section analyzes and compares the simulation results of PSO and ICA. Given a P2P state S = (N, C, M), N is the number of peers, C is the entire collection of content pieces, and M is the maximum number of peers to which each peer can connect steadily in the session.

Figure 1 illustrates the performance of ICA, ICAm and PSO during the search process for the NS problem, versus iteration, for the problem (25, 1400, 12). Specific parameter settings of the algorithms are described in Table 1. As is evident, the ICA methods obtained better results much faster than PSO.

Table 1: parameter settings.
ICA, ICAm: NumOfCountries = 80, NumOfInitialImperialists = 8, NumOfDecades = 50, RevolutionRate = 0.3, AssimilationCoefficient = 2
PSO: C1 = 1.5, C2 = 4 - C1, NumOfParticles = 80, MaxIterations = 50

Figure 2 illustrates the performance of ICA, ICAm and PSO during the search process for the NS problem, versus iteration, for the problem (30, 1400, 15). Specific parameter settings are described in Table 2; the simulation results of ICA and ICAm are almost identical. As is evident, the ICA methods obtained better results much faster than PSO.

Figure 3 illustrates the performance of ICA, ICAm and PSO during the search process for the NS problem, versus iteration, for the problem (40, 1400, 20). Specific parameter settings are described in Table 2. As is evident, the ICA methods obtained better results much faster than PSO.
Conclusions

References

[1] S. Kwok, P2P searching trends: 2002-2004, Information Processing and Management 42 (2006), 237-247.
Mahnaz Agha-Mohaqeq
Rahimipour@aut.ac.ir
m.mohaqeq@aut.ac.ir
Abstract: In the last two decades, short-term prediction of traffic parameters has led to a vast number of prediction algorithms. Short-term traffic prediction systems that operate in real time are necessary but not sufficient: a prediction system should be able to generate accurate and reliable multi-step-ahead predictions, besides single-step-ahead ones, since multi-step-ahead predictions can provide information about future traffic states with acceptable accuracy in cases of system failure. This paper presents a comparative study of three different approaches to multi-step-ahead forecasting. After a brief discussion of each approach, we apply them to data gathered from Tehran highways by modifying the structure of the Adaptive Neuro-Fuzzy Inference System (ANFIS). Finally, the results of the comparative study are summarized.
Introduction
2.1
Conventional approaches to multi-step-ahead prediction like iterated and direct methods, belong to this
family since they both model from historical data a
multiple-input single-output mapping. Given a timeseries of a variable - for example volume V(t),V(t1),. . . ; their difference resides in the considered output
variable: V (t + 1) in the iterated case and the variables
V (t + h), h {1, . . . , H} in the direct case [14].
2.1.1 Iterated method

2.1.2 Direct method

The direct method is an alternative method for long-term prediction. It learns H single-output models, each returning a direct forecast of V(t+h):

V(t+h) = f(V(t), V(t-1), ...),  h in {1, ..., H}
ables V(t+h), and consequently bias the prediction accuracy. Direct methods also often require higher functional complexity than iterated ones in order to model the stochastic dependency between two series values at two distant instants [12].

The reliability of direct prediction models is suspect because the model is forced to predict further ahead [15]; this is the main argument for using iterative models in multi-step-ahead prediction. On the other hand, iterative predictions have the disadvantage of using the predicted value as an input that is probably corrupted [16]. A possible way to overcome this shortcoming is to move from single-output to multiple-output modeling.
2.2
Both aforementioned cases use multi-input single-output techniques to implement the predictors. Single-output approaches face limits when the predictor is expected to return a long series of future values. Another possibility for multi-step-ahead prediction is to move from modeling a single-output mapping to modeling multi-output dependencies. This requires the adoption of a multi-output technique, where the predicted value is no longer a scalar quantity but a vector of future values of the time series. This approach replaces the H models of the direct approach by one multiple-output model [14]:

{V(t+h), ..., V(t+1)} = f(V(t), V(t-1), ...)
Adaptive Neuro-Fuzzy Inference System

A neuro-fuzzy system combines the advantages of two intelligent methods, neural networks and fuzzy logic, the neural network contributing the capability of self-learning. The ANFIS network is organized in two parts, like fuzzy systems: the first part is the antecedent part and the second is the conclusion part, connected to each other by rules in network form. The ANFIS structure, laid out in five layers, can be described as a multi-layered neural network: the first layer executes the fuzzification process, the second executes the fuzzy AND of the antecedent part of the fuzzy rules, the third normalizes the membership functions (MFs), the fourth executes the consequent part of the fuzzy rules, and the last layer computes the output of the fuzzy system by summing the outputs of the fourth layer.

We use this network to apply the iterative method to our data. For the direct approach we train H single-output ANFIS models, each returning a direct forecast of V(t+h) with h in {1, ..., H}; when as many ANFIS models are placed side by side as required, the structure is called MANFIS (Multiple ANFIS). Each ANFIS then has an independent set of fuzzy rules, which makes it difficult to capture possible correlations between outputs. MANFIS is used to implement the direct approach. Another structure, used here for the MIMO approach, is CANFIS (Coactive ANFIS), which extends the single-output ANFIS to produce multiple outputs; in short, its fuzzy rules are constructed with shared membership values to express correlations between outputs [17].
Data

Iterative:
V(t+1) = ANFIS(V(t), V(t-1), V(t-2), ...)
V(t+2) = ANFIS(V(t+1), V(t), V(t-1), ...)
V(t+3) = ANFIS(V(t+2), V(t+1), V(t), ...)

Direct:
V(t+1) = ANFIS(V(t), V(t-1), V(t-2), ...)
V(t+3) = ANFIS(V(t), V(t-1), V(t-2), ...)

which is in fact:
{V(t+1), V(t+3)} = MANFIS(V(t), V(t-1), V(t-2), ...)

MIMO:
{V(t+1), V(t+3)} = CANFIS(V(t), V(t-1), V(t-2), ...)

We repeat these predictions for two traffic parameters, speed and density.

[Figure 3: Actual vs. predicted speed, iterated approach.]
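The difference between the schemes is easiest to see in code. Below, small linear regressors stand in for the ANFIS models (our simplification): the iterated model feeds each prediction back as an input, while the direct scheme trains one model per horizon.

import numpy as np
from sklearn.linear_model import LinearRegression

series = np.sin(np.arange(60) / 5.0)              # synthetic traffic series
X = np.array([series[i:i+3] for i in range(len(series) - 6)])
# One model per horizon h: jump straight from (V(t-2), V(t-1), V(t)) to V(t+h+1).
models = [LinearRegression().fit(X, series[3+h : len(series)-3+h])
          for h in range(3)]

def iterated_forecast(model, history, H):
    window = list(history)
    out = []
    for _ in range(H):
        y = model.predict([window[-3:]])[0]       # V(t+1) = f(V(t), V(t-1), V(t-2))
        out.append(y)
        window.append(y)                          # prediction becomes an input
    return out

def direct_forecast(models, history, H):
    x = [list(history)[-3:]]
    return [models[h].predict(x)[0] for h in range(H)]

print(iterated_forecast(models[0], series[-3:], H=3))
print(direct_forecast(models, series[-3:], H=3))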
Results
As shown in the table, all parameters achieve their

Conclusion

In this paper, we implemented three different approaches for multi-step-ahead prediction based on ANFIS. The data used to train and check the models were acquired by Aimsun simulation. The results show that all testing errors are low enough to be accepted, but the MIMO approach implemented by CANFIS clearly stands out for its simplicity, precision and stability. It can be used in practical projects as an applied short-term prediction model for urban roads.
References

[1] B.L. Smith and R.K. Oswald, Meeting Real-Time Requirements with Imprecise Computations: A Case Study in Traffic Flow Forecasting, Computer Aided Civil and Infrastructure Engineering 18/3 (2003), 201-213.
[2] E.I. Vlahogianni, J.C. Golias, and M.G. Karlaftis, Short term traffic forecasting: Overview of objectives and methods, Transport Reviews 24/5 (2004), 533-557.
[3] M.S. Dougherty and M.R. Cobbet, Short-term inter-urban traffic forecasts using neural networks, International Journal of Forecasting 13 (1997), 21-31.
[4] B. Abdulhai, H. Porwal, and W. Recker, Short-term Freeway Traffic Flow Prediction Using Genetically-optimized Time-delay-based Neural Networks, UCB-ITS-PWP-99-1, Berkeley, CA (1999).
[5] S. Innamaa, Short-term prediction of traffic situation using MLP-neural networks, Proceedings of the 7th World Congress on Intelligent Transportation Systems, Turin, Italy (2000).
[6] M. Danech-Pajouh and M. Aron, ATHENA: a method for short-term inter-urban motorway traffic forecasting, Recherche Transport Securite 6 (1991), 11-16.
[7] H. Kirby, M. Dougherty, and S. Watson, Should we use neural networks or statistical models for short term motorway forecasting?, International Journal of Forecasting 13 (1997), 45-50.
Abbass Asosheh
Tehran, Iran
Tehran, Iran
p.hajinazari@gmail.com
Asosheh@modares.ac.ir
Abstract: The service oriented architecture (SOA) and its most common implementation, web services, enable enterprises to increase their agility in the face of change, to improve their operating efficiency, and to greatly reduce the cost of doing business in e-commerce environments. However, for the business to be dependable, the behavior of the services should be guaranteed, and these guarantees can be specified by Service Level Agreements (SLAs). In this regard, we present a model to express SLAs and utilize the business services' performance requirements, specified as Key Indicators (KPIs and KQIs), to define SLA parameters. This model can help automate the process of SLA negotiation and monitoring, and of taking action in case of violations.
Keywords: E-Commerce; Service Oriented Architecture (SOA); Service Level Agreements (SLAs); Key Performance Indicators (KPIs); Key Quality Indicators (KQIs).
Introduction
In our work the SLA concept is oriented to the service relationship between service consumer and service provider, in which a set of metrics is used to describe levels of quality of service, and mechanisms are utilized to guarantee these levels. This is in conformity with the definition contained in the SLA Management Handbook of the TeleManagement Forum.
Background
Proposed Model
Service: Video Conference

Service KQI              Service KPI
Availability             MTBF, MTBR; Loss of Service
Speech/Visual Quality    MOS; Loss, Jitter, Delay
Response Time            Customer Satisfaction; Response Time
Round Trip Delay         OWD, RTT
Delay                    OWD, RTT
Confidentiality          Physical Access Violations
Non-repudiation          Physical Access Violations
Interoperability         Interoperability Complaints
Connect Time             Connect Time
SOA enables the integration of services from various organizations, such that organizations can easily use each other's services based on specified standards and contracts set out under the same standards. However, some external providers may offer services that do not meet the quality attribute requirements of the service consumer organization; therefore, defining a Service Level Agreement and establishing SLA management mechanisms are important factors in capturing the quality requirements needed to achieve the mission goals of business and service-oriented environments [9]. The level of service can be specified as a target and a minimum, which allows customers to be informed of what to expect (the minimum), while providing a measurable (average) target value that shows the level of the organization's performance. SLA management allows enterprises to identify and solve performance-related problems before the business is influenced by them.

The KI is a key instrument for evaluating the performance of business services and detecting the state of current and completed processes. In our methodology, KIs are used for mapping business services' performance indicators to SLA parameters. With this method one can find the suitable services that satisfy business process performance requirements. In general, being able to characterize SLA parameters has some advantages for enterprises. First, it allows for a more efficient translation of the enterprise's vision into its business processes, since those can be designed according to service specifications. Second, it allows the selection and execution of web services based on business process requirements, to better fulfill customer expectations. Third, it makes possible the monitoring of business processes based on SLAs. In order to achieve these purposes, we introduced a model to manage business services with SLAs that guarantee a certain quality of performance. In this regard, we have investigated the business services KPI hierarchy based on [5] and proposed ontology-based SLAs and SWRL rules that are used for inferring hidden relationships among KPIs and SLAs.

Based on our model, we have developed an ontology for representing SLA specifications and presented a Web Ontology Language (OWL) based Knowledge Base that can be used in an autonomous SLA management system. In our study Protege, a free open-source ontology editor [7], is employed together with related plug-ins such as the SWRL tab. In order to implement our ontology, we built the SLA OWL, and then SWRL rules were added for inferring hidden relationships among KIs or between an SLA and a KI, calculating KQIs from KPIs, and detecting SLA violations. For modeling metric dependencies between services and processes, we focus on metrics which can be measured or calculated at runtime. Examples of such metrics are response time and availability [8].
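As an operational illustration of the kind of inference such rules perform, deriving a KQI from measured KPIs and checking it against the SLA's target and minimum levels, consider the following sketch; the availability formula is the textbook MTBF/MTTR derivation, and the threshold values are invented for the example rather than taken from the proposed knowledge base.

    from dataclasses import dataclass

    @dataclass
    class SLAParameter:
        name: str
        target: float    # the advertised (average) level
        minimum: float   # the guaranteed floor

    def availability_kqi(mtbf_hours: float, mttr_hours: float) -> float:
        # Classic derivation of an availability KQI from the MTBF/MTTR KPIs
        return mtbf_hours / (mtbf_hours + mttr_hours)

    def check(param: SLAParameter, value: float) -> str:
        if value < param.minimum:
            return f"VIOLATION of {param.name}: {value:.4f} < minimum {param.minimum}"
        if value < param.target:
            return f"WARNING for {param.name}: {value:.4f} below target {param.target}"
        return f"{param.name} OK ({value:.4f})"

    sla = SLAParameter("Availability", target=0.999, minimum=0.995)
    print(check(sla, availability_kqi(mtbf_hours=720.0, mttr_hours=1.5)))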
[10]. Hence, forecasting SLA violations is more appropriate than just detecting them; this is left for our future work. In spite of the above-mentioned restrictions, our ontology-based SLA has some advantages: it is very easy to extend due to its use of ontologies, and the SWRL rules used for reasoning can be defined and modified dynamically without affecting other aspects of the code. However, one of our major research aims for future work is finding a suitable way to forecast SLA violations.
References
[1] M. P. Papazoglou and W. J. van den Heuvel, Service oriented architectures: approaches, technologies and research issues, The VLDB Journal 16 (2007), 389-415.
[2] G. Frankova, M. Sguranb, F. Gilcher, S. Trabelsi, J. Dorflinger, and M. Aiello, Deriving business processes with service level agreements from early requirements, Journal of Systems and Software 84 (2011), 1351-1363.
[3] E. Toktar, G. Pujolle, E. Jamhour, M. Penna, and M. Fonseca, An XML model for SLA definition with key indicators, IP Operations and Management, Springer LNCS 4786 (2007), 196-199.
[4] A. Arsanjani, S. Ghosh, A. Allam, T. Abdollah, S. Ganapathy, and K. Holley, SOMA: A method for developing service-oriented solutions, IBM Systems Journal 47 (2008), 377-396.
Shiva Rahimipour
m.mohaqeq@aut.ac.ir
rahimipour@aut.ac.ir
Masoud Safilian
m.safilian@aut.ac.ir
hashemi@aut.ac.ir
Abstract: This paper employs the previously developed model predictive control (MPC) approach to optimally coordinate variable speed limits and ramp metering along a 2 km section of the Hemmat highway, to deal with the problem of rush-hour congestion. To predict the evolution of the traffic situation in this zone, an adapted version of the METANET model that takes variable speed limits into account is used. Before using this traffic model for prediction, it must be calibrated so that the state variables of the model are in good agreement with the real values. To do this, we use a genetic algorithm. Simulation results show that the genetic algorithm is able to find optimal values for the model parameter set, so that the MPC approach results in less congestion, a higher outflow and a lower total time spent in the controlled areas.
Keywords: Model predictive control (MPC); METANET model; calibration; ramp metering; variable speed limit control; genetic algorithm.
Introduction
...areas, and environmental considerations render this approach unattractive. The second approach is based on the fact that the capacity provided by the existing infrastructure is practically underutilized, i.e. it is not fully exploited [1]. Thus, before building new infrastructure, the full exploitation of the already existing infrastructure should be ensured by means of dynamic traffic management measures such as ramp metering, reversible lanes, speed limits and route guidance.
metering algorithms is found in [2]. However, the effectiveness of this method is reduced when the demand from the on-ramp is high and traffic in the upstream mainline is getting dense [3]. In such circumstances, a ramp meter cannot relieve or even alleviate the congestion by itself, because even a small flow from the on-ramp can cause a breakdown, and subsequently congestion will form, especially where the capacity of the on-ramp is limited. That is because ramp metering only controls the inflow from the on-ramp into the mainline; the collective behavior of the drivers in the mainline of the highway is not controlled by it. This is why using ramp metering alone cannot appropriately control highway traffic in practice, and employing other control strategies such as variable speed limits is needed.
Variable speed limit control is a particular dynamic traffic management measure that aims to simultaneously improve both the traffic safety and the traffic performance (e.g., minimizing the total time spent) of a highway network, by dynamically computing an optimal set of speeds for the controlled segments and displaying those variable speed limits on variable message signs (VMSs). Variable speed limits attempt to control the collective vehicle speed, i.e. the driver behavior, of the mainline, and in this regard are complementary to ramp metering [4]. On the other hand, as shown in [3], placing speed limiters just before the on-ramp can help reduce the outflow of the controlled segments so that some space is left to accommodate the traffic from the on-ramp; in this way, traffic breakdown can be prevented or delayed. These are the motivations for using different control strategies in a coordinated scheme. References [5-8] are examples of works that considered both variable speed limits and ramp metering, which are believed to be the two key tools influencing conditions on congested highways.
One of the major difficulties in implementing a model-based optimization control strategy is that the model parameters are difficult to calibrate. To address this issue, a genetic algorithm is used to tune the model parameter set.
The arrangement of this article is as follows. In Section 2, the basics of the MPC scheme are introduced. In Section 3, the traffic flow model (prediction model) is introduced. The tuning process of the model parameters based on the genetic algorithm is explained in Section 4. In Section 5, the introduced method is applied to the 2-km section of the eastbound Hemmat highway selected as the study network. Section 6 summarizes the main conclusions.
the system state, the control decisions, and the disturbance at time k. At each control step k_c, a new optimization is performed to compute the optimal control decisions u(k_c), e.g.,

u(k_c) = [ u_1(k_c)   u_1(k_c+1)   . . .   u_1(k_c+C-1)
             .            .                     .
             .            .                     .
           u_N(k_c)   u_N(k_c+1)   . . .   u_N(k_c+C-1) ]
Prediction model
Flow-density equation:
q_{m,i}(k) = ρ_{m,i}(k) · v_{m,i}(k) · λ_m

Conservation of vehicles:
ρ_{m,i}(k+1) = ρ_{m,i}(k) + (T / (λ_m · l_{m,i})) · (q_{m,i-1}(k) - q_{m,i}(k))

Speed dynamics (drivers try to achieve the desired speed V(ρ)):
v_{m,i}(k+1) = v_{m,i}(k)
  + (T/τ_m) · (V[ρ_{m,i}(k)] - v_{m,i}(k))                                      [relaxation term]
  + (T/l_{m,i}) · v_{m,i}(k) · (v_{m,i-1}(k) - v_{m,i}(k))                      [convection term]
  - (η_m · T / (τ_m · l_{m,i})) · (ρ_{m,i+1}(k) - ρ_{m,i}(k)) / (ρ_{m,i}(k) + κ_m)   [anticipation term]

Desired speed:
V[ρ_{m,i}(k)] = v_free,m · exp( -(1/a_m) · (ρ_{m,i}(k)/ρ_crit,m)^{a_m} )        (1)

On-ramp queue dynamics and ramp flow:
w_o(k+1) = w_o(k) + T · [d_o(k) - q_o(k)]
q_o(k) = min[ d_o(k) + w_o(k)/T,  Q_o · r_o(k),  Q_o · (ρ_max,m - ρ_{m,1}(k)) / (ρ_max,m - ρ_crit,m) ]

With variable speed limits, the desired speed is modified to:
V[ρ_{m,i}(k)] = min[ v_free,m · exp( -(1/a_m) · (ρ_{m,i}(k)/ρ_crit,m)^{a_m} ),  (1 + α) · v_control,m,i(k) ]
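The discrete-time update implied by these equations can be sketched as follows; the parameter values are arbitrary placeholders (not the calibrated ones of Table 3) and, for brevity, the boundary conditions are handled by treating the segments as a ring.

    import numpy as np

    # Illustrative METANET-style parameters (placeholders, not calibrated values)
    T = 10 / 3600            # simulation step [h]
    L, lam = 0.5, 3          # segment length [km], number of lanes
    tau, eta, kappa = 18 / 3600, 60.0, 40.0
    v_free, rho_crit, a = 100.0, 33.5, 1.8

    def desired_speed(rho):
        return v_free * np.exp(-(1 / a) * (rho / rho_crit) ** a)

    def metanet_step(rho, v):
        q = rho * v * lam                                        # flow-density relation
        rho_new = rho + (T / (L * lam)) * (np.roll(q, 1) - q)    # conservation of vehicles
        relax  = (T / tau) * (desired_speed(rho) - v)
        convec = (T / L) * v * (np.roll(v, 1) - v)
        antic  = -(eta * T / (tau * L)) * (np.roll(rho, -1) - rho) / (rho + kappa)
        return rho_new, np.clip(v + relax + convec + antic, 1e-3, None)

    rho = np.array([20.0, 30.0, 45.0, 25.0])   # density [veh/km/lane] per segment
    v = np.array([90.0, 70.0, 50.0, 80.0])     # speed [km/h] per segment
    for _ in range(5):
        rho, v = metanet_step(rho, v)
    print(rho.round(2), v.round(2))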
Calibration of the METANET model parameters
The genetic algorithm starts with an initial set of random solutions called the population. Each individual in the population is called a chromosome, representing a solution to the calibration problem. The evolution operation simulates the process of Darwinian evolution, creating populations from generation to generation by selection, crossover and mutation operations. The success of the genetic algorithm is founded on its ability to keep the existing parts of a solution which have a positive effect on the outcome [12].
The seven parameters of the METANET model (v_free, τ, η, κ, δ, a_m, ρ_crit) are changed by the genetic algorithm. To compromise between computation time and precision, a population of 30 individuals is selected. After creating a new population, the fitness value has to be calculated for each member of the population, and the members are then ranked based on their fitness values. The genetic algorithm selects parents from the current population using a selection probability; then the reproduction operators (crossover and mutation) are applied.
4.1 Fitness Function
The calibration is an optimization procedure that minimizes the difference between the real data coming from Aimsun and the data produced by the METANET model. In particular, we try to minimize the following objective function:
min Σ_{h=0}^{N_samp} Σ_{(m,i)∈I_all} [ (q_{m,i}^{model}(h) - q_{m,i}^{sim}(h))^2 + (v_{m,i}^{model}(h) - v_{m,i}^{sim}(h))^2 + (ρ_{m,i}^{model}(h) - ρ_{m,i}^{sim}(h))^2 ]        (1)

Figure 1: Segment 2, measured versus predicted flow, speed and density (qualitative validation).
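A sketch of how objective (1) can drive a simple genetic algorithm follows; the reference data and the simulate() stub are synthetic stand-ins for the Aimsun measurements and a full METANET run, and the GA operators shown are one common choice rather than the exact ones used here.

    import numpy as np

    rng = np.random.default_rng(0)
    ref = {"q": rng.uniform(1000, 2000, 50),   # stand-ins for Aimsun flow,
           "v": rng.uniform(40, 100, 50),      # speed and density series
           "rho": rng.uniform(15, 45, 50)}

    def simulate(params):
        # Placeholder for a METANET run with candidate (v_free, rho_crit, a_m)
        v_free, rho_crit, a_m = params
        return {"q": ref["q"] * (v_free / 100), "v": ref["v"] * (100 / v_free),
                "rho": ref["rho"] * (rho_crit / 33.5)}

    def fitness(params):
        # Sum-of-squared differences between reference and model data, as in (1)
        sim = simulate(params)
        return sum(np.sum((ref[k] - sim[k]) ** 2) for k in ("q", "v", "rho"))

    # Tiny GA: rank selection, arithmetic crossover, Gaussian mutation
    bounds = np.array([[60, 120], [20, 45], [1.2, 3.0]])
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], (30, 3))
    for gen in range(40):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)][:10]            # keep the best third
        kids = []
        for _ in range(len(pop) - len(parents)):
            a, b = parents[rng.integers(10)], parents[rng.integers(10)]
            child = 0.5 * (a + b) + rng.normal(0, 0.02, 3) * (bounds[:, 1] - bounds[:, 0])
            kids.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
        pop = np.vstack([parents, kids])
    print("best parameters:", pop[np.argmin([fitness(p) for p in pop])])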
Where N_samp is the number of simulation time steps in the entire simulation period and I_all is the set of indexes of all pairs of links and segments.

4.2 Results of Model Calibration

For the calibration procedure, one measurement set, corresponding to one weekday from 7 a.m. to 11 a.m., was available from the study site. Our data collection tool was the Aimsun simulator. These data provided flow, speed and density measurements on a ten-second-by-ten-second basis. The genetic algorithm yields a set of optimal parameters; the summarized outcome of this effort is presented in Table 3.

Case Study

A 2-km section of the eastbound Hemmat highway was selected as the study network. The Hemmat highway serves a large volume of commuter traffic in both morning and evening peak periods, leading to heavy recurrent congestion. For these reasons, we consider this 2-km section an ideal study section for applying the control framework presented above in order to alleviate serious congestion problems. The network topology and the locations of the control equipment and sensors can be seen in Fig. 2.

Figure 2: Candidate traffic network.
Table 3: Parameter set for the Hemmat highway

ρ_crit,m (veh/km)   v_free (km/h)   τ (second)   δ         η (km^2/h)   κ (veh/km)   a_m
32.1646             92.1957         13.839       0.08649   31.6307      56.0935      2.425
Based on the set of parameters shown in Table 3, Fig. 1 depicts the speed, density and flow trajectories determined by the calibrated model, compared with the actual measurements. As can be seen in Fig. 1, after calibrating the model parameters the model is properly able to predict the network traffic conditions.

The objective function used in this paper minimizes the total time spent (TTS) by all vehicles, defined as

J(k) = T Σ_{j=k}^{k+P-1} [ Σ_{m,i} ρ_{m,i}(j) · l_{m,i} · λ_m + Σ_{o∈O_ramp} w_o(j) ]
     + α_ramp Σ_{j=k}^{k+P-1} Σ_{o∈O_ramp} (r_o(j) - r_o(j-1))^2
     + α_speed Σ_{j=k}^{k+P-1} Σ_{i∈I_speed} ((v_i(j) - v_i(j-1)) / v_free)^2
     + α_queue Σ_{j=k}^{k+P-1} Σ_{o∈O_ramp} (max(w_o(j) - w_max, 0))^2        (2)

For the MPC system, the optimal prediction and control horizons were found to be approximately 60 and 48 steps, corresponding to 10 and 8 min, respectively.
Figure 4: Simulation results for the controlled case: segment traffic density, segment traffic speed, segment traffic flow, origin queue length, optimal ramp metering rates and optimal speed limit values.

References
Zanjan, Iran
Zanjan, Iran
z.alizadeh@iasbs.ac.ir
a.ghasemazar@iasbs.ac.ir
Abstract: Expert systems are designed for non-expert individuals with the aim of providing the skills of qualified personnel. These programs simulate the pattern of thinking and the manner in which a human operates, which makes the operation of expert systems close to that of a human expert. A variety of expert systems have already been offered in the field of medical science, and in this respect it is one of the leading sciences. Leukemia is a very common and serious cancer that starts in blood tissue such as the bone marrow; it causes large numbers of abnormal blood cells to be produced and enter the blood. Speed is always decisive in the diagnosis and treatment of leukemia and the recovery of patients, but sometimes patients have no access to specialists. For this reason, designing a system with specialist knowledge that offers a diagnosis and appropriate treatment provides for the timely treatment of patients. In this paper an expert system for the diagnosis of leukemia is presented, built using the VP-Expert shell.
Introduction
With the expanding application of information technology, decision-making systems, or generally computer-based decisions, have become very important. In this regard, expert systems, as one of the fields attributed to artificial intelligence, play the main role. In expert systems, all kinds of decisions are taken with the help of computers. Expert systems are knowledge-based systems, and knowledge is their most important part. In these systems, knowledge is transferred from experts in a science to the computer. Expert systems have been used extensively in various sciences: so far, various expert systems have been designed and presented in areas such as industry, space travel, financial decision making, etc. The use of expert systems has also found its way into the medical world [1].
DENDRAL was presented in 1965 to describe and explain molecular structure [2], MYCIN was introduced in 1976 to diagnose bacterial infections of the blood [3], and other expert systems followed to detect acid and electrolyte disorders,
Risk Reductions;
2.2 Survey Method
- Ability to create a knowledge base file with a simple table;
- Chaining capability to link together multiple knowledge bases;
- Automatic generation of the questions whose answers are required to reach a result;
- A relatively diverse set of mathematical functions;
- Instructions that ask the expert system to explain its activities during a consultation.

... any question. For this purpose, three tables are used: a decision table to identify the patient, a table to deduce the type of blood test mode, and a table to deduce the type of symptoms.

2.4 Inference Engine Subsystem

In rule-based systems, the inference engine works by selecting a rule to test and checking whether or not its conditions hold. These conditions may be assessed by questioning the user, or may be derived from the facts obtained during interviews. When the conditions of a rule hold, the result of that rule holds as well: the rule is activated and its result is added to the knowledge base.
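The cycle just described, testing a rule's conditions, firing it, and adding its conclusion to the knowledge base, is the classic forward-chaining loop. A compact sketch follows, using made-up screening facts rather than the actual VP-Expert knowledge base.

    # Each rule: (conditions that must all hold, fact concluded when they do)
    rules = [
        ({"fever", "fatigue"}, "abnormal_blood_suspected"),
        ({"abnormal_blood_suspected", "high_wbc"}, "leukemia_suspected"),
        ({"leukemia_suspected"}, "refer_to_specialist"),
    ]

    def forward_chain(facts):
        facts = set(facts)
        fired = True
        while fired:                       # keep cycling until no rule can fire
            fired = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)  # rule activated: add result to the KB
                    fired = True
        return facts

    print(forward_chain({"fever", "fatigue", "high_wbc"}))
    # -> includes 'leukemia_suspected' and 'refer_to_specialist'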
2.5
References
[1] J. Durkin, Expert Systems: Design and Development, Prentice Hall, New York, 1994.
[2] E. A. Feigenbaum and B. G. Buchanan, DENDRAL and Meta-DENDRAL: Roots of knowledge systems and expert system applications, Artificial Intelligence 59 (1993), no. 1-2, 233-240.
[3] E. H. Shortliffe, Computer-based Medical Consultations: MYCIN, Elsevier Science Publishers, New York, 1976.
Zanjan, Iran
Zanjan, Iran
a.ghasemazar@iasbs.ac.ir
z.alizadeh@iasbs.ac.ir
Abstract: In the present paper, a basic proof method is provided for representing the verification, validation and evaluation of expert systems. The result provides an overview of the basic methods for formal proof: partitioning larger systems into small subsystems, proving correctness of the small subsystems by non-recursive means, and proving that the correctness of all subsystems implies the correctness of the entire system.
Introduction
An expert system is correct when it is complete, consistent, and satisfies the requirements that express expert knowledge about how the system should behave.
For real-world knowledge bases containing hundreds of
rules, however, these aspects of correctness are hard to
establish. There may be millions of distinct computational paths through an expert system, and each must
be dealt with through testing or formal proof to establish correctness.
To reduce the size of the tests and proofs, one useful
approach for some knowledge bases is to partition them
into two or more interrelated knowledge bases. In this
way the VV&E problem can be minimized [1].
2.1 A simple example
Rule 1: If Risk tolerance = high AND Discretionary income exists = yes, then Investment = stocks.
Rule 2: If Risk tolerance = low OR Discretionary income exists = no, then Investment = bank account.
Rule 3: If Do you buy lottery tickets = yes OR Do you currently own stocks = yes, then Risk tolerance = high.
Rule 4: If Do you buy lottery tickets = no AND Do you currently own stocks = no, then Risk tolerance = low.
Rule 5: If Do you own a boat = yes OR Do you own a luxury car = yes, then Discretionary income exists = yes.
Rule 6: If Do you own a boat = no AND Do you own a luxury car = no, then Discretionary income exists = no.
Knowledge Base 1
[Figures (a) and (b): yes/no decision diagrams over the questions "Do you currently own stock?" and "Do you buy lottery tickets?".]
Next, the region defined by the logical expression of the hypotheses is labelled with its rule. For Rule 3, the three Hoffman regions are labelled with a circled 3, as shown in Figure 3.a. The consequence of the rule is linked to the label of the region of its hypotheses: in Figure 3.b, an arrow starts at the circled 3 and ends at the value low of the variable Risk tolerance.

[Figure 3: Hoffman regions for Rule 3: (a) the regions of the hypotheses "Do you buy lottery tickets? = yes OR Do you currently own stocks? = yes" labelled with a circled 3; (b) an arrow linking the labelled region to the concluded value of Risk tolerance.]

2.2

A subsystem to find the type of investment given this information (part of Step 2).

2.3 Step 2 - Find Knowledge Base Partitions

To find each of the three subsystems of KB1, an iterative procedure can be followed:

- Start with the variables that are goals for the subsystem, e.g., risk tolerance for the risk tolerance subsystem;
- Include all variables that appear in rules already in the subsystem and are not goals of another subsystem.

Figure 4 below shows the partitioning of KB1 using this method.

[Figure 4: Partitioning of KB1 into three subsystems - Investment (Rules 1, 2), Risk Tolerance (Rules 3, 4) and Discretionary Income (Rules 5, 6).]

2.4 Completeness

2.4.1 Completeness Step 1 - Completeness of Subsystems

The first step in proving the completeness of the entire expert system is to prove the completeness of each subsystem. To this end it must be shown that for all possible inputs there is an output, i.e., that the goal variables of the subsystem are set. This can be done by showing that the OR of the hypotheses of the rules that assign to a goal variable is true [7].
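For a knowledge base as small as KB1, this condition can simply be checked by enumerating all inputs, as the following sketch does for the risk tolerance subsystem (Rules 3 and 4); the consistency check shown alongside is the analogous exhaustive test.

    from itertools import product

    # Rules 3 and 4 as predicates over the (lottery, stocks) answers
    def rule3(lottery, stocks):   # concludes Risk tolerance = high
        return lottery == "yes" or stocks == "yes"

    def rule4(lottery, stocks):   # concludes Risk tolerance = low
        return lottery == "no" and stocks == "no"

    # Completeness: for every input, at least one rule assigning the goal fires
    complete = all(rule3(l, s) or rule4(l, s)
                   for l, s in product(["yes", "no"], repeat=2))

    # Consistency: the two rules never fire together on the same input
    consistent = not any(rule3(l, s) and rule4(l, s)
                         for l, s in product(["yes", "no"], repeat=2))

    print("complete:", complete, "consistent:", consistent)  # True True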
2.4.2 Completeness Step 2 - Completeness of the Entire System
The results of subsystem completeness are used to establish the completeness of the entire system. The basic argument is to use results on subsystems to prove that successively larger subsystems are complete. At each stage of the proof there are some subsystems known to be complete; initially, this is the subsystem that concludes the overall goals of the expert system. At each stage of the proof, a subsystem that concludes some of the input variables of the currently-proved-complete subsystem is added to the currently complete subsystem. After a number of steps equal to the number of subsystems, the entire system can be shown to be complete.
2.5 Consistency

2.5.3 Consistency of the Entire System
The results of subsystem consistency are used to establish the consistency of the entire system. The basic argument is to use results on subsystems to prove that successively larger subsystems are consistent. At each stage of the proof there are some subsystems known to be consistent; initially, this is the subsystem that concludes the goals of the expert system as a whole. At each stage of the proof, a subsystem that concludes some of the input variables of the currently-proved-consistent subsystem is added to the currently consistent subsystem. After a number of steps equal to the number of subsystems, the entire system can be shown to be consistent [2].
Conclusion
References
[1] M. Ayel and J.-P. Laurent, Two different ways of verifying knowledge-based systems, in: Validation, Verification and Test of Knowledge-Based Systems, Wiley, New York (1991), 63-76.
[2] A. Bendou, A constraint-based test data generator, EUROVAV-95, Saint Badolph, France (1995), 19-29.
[3] A. Ginsberg, Knowledge-base reduction: A new approach to checking knowledge bases for inconsistency & redundancy, AAAI-88 2 (1988), 585-589.
[4] S. Kirani, I. A. Zualkernan, and W. T. Tsai, Comparative Evaluation of Expert System Testing Methods, Technical Report TR 92-30, Computer Science Department, University of Minnesota, Minneapolis (1992).
[5] J.-P. Laurent, Proposals for a valid terminology in KBS validation, ECAI-92, Wiley, New York (1992), 829-834.
[6] R. Lounis and M. Ayel, Completeness of KBS, EUROVAV-95, Saint Badolph, France (1995), 31-46.
[7] D. O'Leary, Design, development and validation of expert systems: A survey of developers, Vol. 2, 1991.
m tahmasi@sbu.ac.ir
z.abdi@mail.sbu.ac.ir
Abstract: In this paper we study the problem of point-set embedding. We assume that G is a planar graph with n vertices and S is a set of n points in general position in the plane. The problem is to find a planar drawing of G such that each vertex is mapped to one of the points of S, each edge is mapped to a polygonal chain, and the drawing has a small number of bends. In this paper we prove that (1) every wheel has a point-set embedding with no bends on a set of points in non-convex position; moreover, if the points are in general position, then a wheel has a point-set embedding with at most one bend; (2) every θ-graph has a point-set embedding with at most six bends on a set of points in general position such that one of its cycles is drawn with straight lines; (3) every k-path graph has a point-set embedding on a set of points in general position with at most 2k - 2 bends.
Keywords: point set embedding; wheel; θ-graph; planar drawing; bend; convex hull; k-path graph.
Introduction
A wheel W_n is a graph consisting of a cycle with n vertices and a vertex, called the center, that is adjacent to all vertices of the cycle. In this section we study the problem of embedding a wheel on a point set S in general position. We study two cases, where the points of S are in convex and in non-convex position, separately.
2.1 θ-graph
k-path graph
Let S_1 be the set of the first n_1 points of S from below in lexicographic order. We map p to the leftmost point of S_1, h_1^l, and q to the rightmost point of S_1, h_1^r. Suppose that S_i is the set of the first n_i points of S \ ∪_{j=1}^{i-1} S_j from below in lexicographic order, for 2 ≤ i ≤ k.

We map all vertices of P_1 from p to q to the points of S_1 from left to right. Now we need to map all vertices of P_i from p to q, except p and q, to S_i from left to right. Let B_i be the bounding box of S_i. It is enough to connect p to the leftmost point of S_i, h_i^l, and q to the rightmost point of S_i, h_i^r, for 2 ≤ i ≤ k.
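The grouping step can be sketched as below, under the assumption that "from below in lexicographic order" means sorting by y-coordinate first; the point set and group sizes are arbitrary examples.

    # Split a point set into groups S_1..S_k "from below in lexicographic order"
    def group_points(points, sizes):
        remaining = sorted(points, key=lambda p: (p[1], p[0]))  # from below
        groups = []
        for n_i in sizes:
            s_i, remaining = remaining[:n_i], remaining[n_i:]
            groups.append({"points": s_i,
                           "h_l": min(s_i),    # leftmost point of S_i
                           "h_r": max(s_i)})   # rightmost point of S_i
        return groups

    pts = [(0, 3), (1, 1), (2, 5), (3, 0), (4, 4), (5, 2), (6, 6)]
    for i, g in enumerate(group_points(pts, [3, 2, 2]), 1):
        print(f"S_{i}: {g['points']}  h_l={g['h_l']}  h_r={g['h_r']}")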
Figure 4: (a) Point set embedding of P_1, P_2 and P_3. (b) Point set embedding of G on S with at most 6 bends.

Conclusions and Future Works

In this paper we studied the problem of point set embedding of wheels, θ-graphs and k-path graphs without mapping. We proved that every wheel has a point set embedding with no bends on a set of points in non-convex position. In case the points are in general position,

References

[4] P. Gritzmann, B. Mohar, J. Pach, and R. Pollack, Embedding a planar triangulation with vertices at specified points, Amer. Math. Monthly 98(2) (1991), 165-166.
[5] M. Kaufmann and R. Wiese, Embedding vertices at points: Few bends suffice for planar graphs, J. Graph Algorithms Appl. 6(1) (2002), 115-129.
Zahra Jalalian
Faculty of Engineering
Kharazmi University
Kharazmi University
borna@tmu.ac.ir
jalalian@tmu.ac.ir
Abstract: The aim of this paper is to study two open problems and provide faster algorithms for them. More precisely, for two sets X and Y of numbers of sizes n and m, we first present an O(nm) algorithm to sort the set X + Y = {x + y | x ∈ X, y ∈ Y} of pairwise sums. Then we offer another O(nm) algorithm for finding all pairs (x, y) and (x', y') from X + Y for which x + y = x' + y'. In particular, if X and Y are both of size n, this latter algorithm enables us to know when the set X + Y has n^2 unique elements.
Introduction
Sorting X + Y
ing problem: for which x, x' ∈ X and y, y' ∈ Y do we have x + y = x' + y'?
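A straightforward baseline for this question hashes every pairwise sum to the pairs producing it, which takes O(nm) expected time and space and immediately shows whether X + Y has nm distinct elements; this is only a baseline illustration, not the paper's algorithm.

    from collections import defaultdict

    def equal_sum_pairs(X, Y):
        # Map each sum to all (x, y) pairs that produce it
        sums = defaultdict(list)
        for x in X:
            for y in Y:
                sums[x + y].append((x, y))
        # Collisions = sums produced by more than one pair
        return {s: ps for s, ps in sums.items() if len(ps) > 1}

    X, Y = [1, 3, 5], [2, 4, 6]
    collisions = equal_sum_pairs(X, Y)
    print(collisions)                       # e.g. 7 = 1+6 = 3+4 = 5+2
    print("all sums unique:", not collisions)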
Ali Mohades
shadi.nilforoushan@gmail.com
mohades@aut.ac.ir
Amin Gheibi
Sina Khakabi
amin-gheibi@carleton.ca
sinakhm.cs84@aut.ac.ir
Abstract: Voronoi diagrams have useful applications in various fields and are one of the most fundamental concepts in computational geometry. Although Voronoi diagrams in the plane have been studied extensively, using different notions of sites and metrics, little is known for other geometric spaces. In this paper, we present a simple method to construct the Voronoi diagram of a set of points in the Poincaré hyperbolic disk, which is a 2-dimensional manifold with negative curvature. Our trick is to define and use some well-formed geometric maps which take care of the connection between the Euclidean plane and the Poincaré hyperbolic disk. Finally, we give a brief report on our implementation.
Introduction

Voronoi diagrams for point-sets in d-dimensional Euclidean space E^d have been studied by a number of people, in their original as well as in generalized settings. For a finite set M ⊂ E^d, the (closest-point) Voronoi diagram of M associates each p ∈ M with the convex region R(p) of all points closer to p than to any other point in M. More formally, R(p) = {x ∈ E^d | d(x, p) < d(x, q), ∀q ∈ M - {p}}, where d denotes the Euclidean distance function. Voronoi diagrams are of importance in a variety of areas other than computer science whose enumeration exceeds the scope of this paper (see for instance Aurenhammer's survey [3] or the book by Okabe, Boots, Sugihara and Chiu [18]).

Shamos and Hoey [21] were the first to introduce the planar diagram to computational geometry and also demonstrated how to construct it efficiently. Using a dual correspondence to convex hulls discovered by Brown [5], its higher-dimensional analogues can be obtained using methods in Seidel [20].

As the variety of applications of the Voronoi diagram was recognized, people soon became aware of the fact that many practical situations are better described by some modification than by the original diagram. For example, diagrams under more general metrics [15, 16], for more general objects than points [9, 13], and of higher order [10, 14, 21] have been investigated.
The interesting properties of Voronoi diagrams led us to ask the natural question of whether they carry over to other spaces, especially hyperbolic surfaces. Hyperbolic surfaces are characterized by negative curvature, and cosmologists have suffered from a persistent misconception that a negatively curved universe must be the finite 3-D hyperbolic space [23]. Although we do not see hyperbolic surfaces around us,
nature nevertheless does possess a few. For example, lettuce leaves and marine flatworms exhibit hyperbolic geometry. There is an interesting idea about the hyperbolic plane by W. P. Thurston: if we move away from a point in the hyperbolic plane, the space around that point expands exponentially [22]. Hyperbolic geometry has found applications in mathematics, physics, and engineering. For example in physics, until we figure out whether or not the expansion of the universe is decelerating, hyperbolic geometry could be the most accurate way to define the geometries of fields. Einstein invented his special theory of relativity based on hyperbolic geometry.
Poincaré hyperbolic disk
The geodesics of D^2 are the diameters of the disk (lines of the form ax = by through the origin) together with the arcs of Euclidean circles that intersect the boundary circle orthogonally. Geodesics are the basic building blocks for computational geometry on the Poincaré disk. The distance between two points is naturally induced from the metric of D^2: for two points z_1 = (x_1, y_1), z_2 = (x_2, y_2) ∈ D^2, the distance between z_1 and z_2, denoted d(z_1, z_2), can be expressed as

d(z_1, z_2) = ∫_γ ds = tanh^{-1}( | (z_2 - z_1) / (1 - z̄_1 z_2) | ),

where γ is the geodesic connecting z_1 and z_2.
Our method
Suppose we are given a set S of n points (representing sites) in D^2. To construct the Voronoi diagram, we use a combination of four maps to transfer these sites into the Euclidean plane. The maps are defined between four hyperbolic models and the Euclidean plane, denoted by D^2, S^2, K^2, H^2 and R^2, respectively. In [7], Cannon et al. have an elegant discussion of these hyperbolic models:

2. S^2 = {(x, y, z) : x^2 + y^2 + z^2 = 1, z > 0}, with ds^2_{S^2} = (dx^2 + dy^2 + dz^2)/z^2;
3. K^2 = {(x, y) : x^2 + y^2 < 1}, with ds^2_{K^2} = 4(dx^2 + dy^2)/(1 - x^2 - y^2)^2;
4. H^2 = {(x, y, z) : z^2 - x^2 - y^2 = 1, z > 0}, with ds^2_{H^2} = dx^2 + dy^2 - dz^2.

The list of maps that we defined and used is given in the following:

(a) a central projection map from the point (0, 0, -1), f_1 : D^2 → S^2,
(x, y) ↦ ( 2x/(1 + x^2 + y^2), 2y/(1 + x^2 + y^2), (1 - x^2 - y^2)/(1 + x^2 + y^2) );

(b) a lifting map onto the hyperboloid, K^2 → H^2,
(x, y, 1) ↦ ( x/√(1 - x^2 - y^2), y/√(1 - x^2 - y^2), 1/√(1 - x^2 - y^2) );

(c) a central projection map from the point (0, 0, 2), H^2 → R^2,
(x, y, z) ↦ ( 2x/(2 - z), 2y/(2 - z) ).

Fig. 2 is an illustration of the above mentioned spaces and the connecting maps.

[Figure 2: The spaces D^2, S^2, K^2, H^2 and the connecting maps, with the projection points (0, 0, -1) and (0, 0, 2) and sample sites p_1, ..., p_4.]

Now, by using any algorithm in [4] for constructing the Voronoi diagram of the transferred sites in R^2, which has worst-case running time O(n log n), the combination of the inverses of the f_i allows us to obtain the Voronoi diagram in D^2. This combination is robust, as the subsequent theorem verifies.

Theorem 1: Let z_1 and z_2 be two points in R^2 and let J be their bisector. Then f(J) is the bisector of f(z_1) and f(z_2) in D^2, where f = f_1^{-1} ∘ f_2^{-1} ∘ f_3^{-1} ∘ f_4^{-1} and the f_i (i = 1, 2, 3, 4) are the above-mentioned maps.

Proof: Since we use the geodesics in each hyperbolic model and in the Euclidean plane R^2, by using the corresponding metrics ds^2 we obtain that the bisector of two given points z_1 and z_2 in R^2 is mapped to the bisector of f(z_1) and f(z_2) in D^2, and vice versa.

As the complexity of each of the mentioned maps is linear, we conclude that the complexity of our method to compute the Voronoi diagram of a set of sites in D^2 is O(n log n), using any algorithm with complexity O(n log n) to compute the Voronoi diagram in R^2 for the sites transferred from D^2. This yields the following consequence.

Implementation

In this section we present our implementation and discuss its performance in a series of experiments designed to test different aspects of our algorithm and implementation. Our code has been written in C++, and for visualization we have used MATLAB. Our implementation in C++ has three main steps: in the first ...
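Under the assumptions above, the pipeline can be sketched in a few lines: push each site through the chain of projections into R^2, run a Euclidean Voronoi construction (here scipy's, as the O(n log n) planar step), and map the resulting vertices back with the inverse chain. The middle map S^2 → K^2 was not recovered above and is assumed here to be the vertical projection; the paper's actual implementation is in C++.

    import numpy as np
    from scipy.spatial import Voronoi

    def f1(p):   # D^2 -> S^2, central projection from (0, 0, -1)
        x, y = p
        d = 1 + x * x + y * y
        return np.array([2 * x / d, 2 * y / d, (1 - x * x - y * y) / d])

    def f2(p):   # S^2 -> K^2: vertical projection (assumed; not recovered above)
        return p[:2]

    def f3(p):   # K^2 -> H^2, lift onto the hyperboloid
        x, y = p
        s = np.sqrt(1 - x * x - y * y)
        return np.array([x / s, y / s, 1 / s])

    def f4(p):   # H^2 -> R^2, central projection from (0, 0, 2)
        x, y, z = p
        return np.array([2 * x / (2 - z), 2 * y / (2 - z)])

    def to_plane(site):
        return f4(f3(f2(f1(site))))

    sites = np.array([[0.1, 0.2], [-0.4, 0.3], [0.5, -0.2], [-0.2, -0.5]])
    vor = Voronoi(np.array([to_plane(s) for s in sites]))   # Euclidean step
    print(vor.vertices)   # map these back with the inverse chain f1^-1 ... f4^-1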
References

[1] S. Anisov, Geometrical spines of lens manifolds, Department of Mathematics, Utrecht University, 2005.
[2] J. W. Anderson, Hyperbolic Geometry, Springer-Verlag, New York, 1999.
[3] F. Aurenhammer, Voronoi Diagrams: a Survey of a Fundamental Geometric Data Structure, ACM Computing Surveys 23(3) (1991), 345-405.
[4] F. Aurenhammer and R. Klein, Voronoi Diagrams, in: Handbook of Computational Geometry (J.-R. Sack and J. Urrutia, eds.), Elsevier Science Publishers B.V. North-Holland, Chapter 5, 201-290, 2000.
[5] K. Q. Brown, Voronoi diagrams from convex hulls, Inform. Process. Lett. 9 (1979), 223-228.
[6] H. Brettel, F. Viénot, and J. D. Mollon, Computerized simulation of color appearance for dichromats, Journal of the Optical Society of America 14(10) (1997), 2647-2655.
[7] J. W. Cannon, W. J. Floyd, R. Kenyon, and W. R. Parry, Flavors of Geometry, MSRI Publications 31 (1997), 59-115.
[8] The CGAL User and Reference Manual: All Parts, Release 3.3, 2007.
[9] R. L. Drysdale and D. T. Lee, Generalization of Voronoi diagrams in the plane, SIAM J. Comput. 10 (1981), 73-87.
[10] H. Edelsbrunner, J. O'Rourke, and R. Seidel, Constructing arrangements of lines and hyperplanes with applications, Proc. 20th Ann. IEEE Symp. FOCS (1983), 83-91.
[11] S. Fortune, http://cm.bell-labs.com/who/sjf/index.html.
[12] C. Goodman-Strauss, Compass and Straightedge in the Poincaré Disk, Amer. Math. Monthly 108 (2001), 33-49.
[13] D. G. Kirkpatrick, Efficient computation of continuous skeletons, Proc. 20th Ann. IEEE Symp. FOCS (1979), 18-27.
[14] D. T. Lee, On k-nearest neighbor Voronoi diagrams in the plane, IEEE Trans. Comput. C-31(6) (1982), 478-487.
[15] D. T. Lee, Two-dimensional Voronoi diagrams in the Lp metric, J. ACM 27(4) (1980), 604-618.
[16] D. T. Lee and C. K. Wong, Voronoi diagrams in L1 (L∞) metrics with two-dimensional storage applications, SIAM J. Comput. 9 (1980), 200-211.
[17] Z. Nilforoushan and A. Mohades, Hyperbolic Voronoi Diagram, ICCSA 2006, LNCS 3984 (2006), 735-742.
[18] A. Okabe, B. Boots, K. Sugihara, and S. N. Chiu, Spatial tessellations: concepts and applications of Voronoi diagrams, Wiley Series in Probability and Statistics, 2000.
Acknowledgments
Shahriar Lotfi
University of Tabriz
m.abdollahi89@ms.tabrizu.ac.ir
shahriar lotfi@tabrizu.ac.ir
Davoud Abdollahi
University of Tabriz
Department of Mathematics
d abdollahi@tabrizu.ac.ir
Abstract: Systems of nonlinear equations arise in a diverse range of sciences such as economics, engineering, chemistry, mechanics, medicine and robotics. For solving systems of nonlinear equations there are several methods, such as Newton-type methods, the Particle Swarm Optimization algorithm (PSO) and the Conjugate Direction method (CD), each of which has its own strengths and weaknesses. The most widely used algorithms are Newton-type methods, though their convergence and effective performance can be highly sensitive to the initial guess of the solution supplied to the method. This paper introduces a novel evolutionary algorithm called the Cuckoo Optimization Algorithm, and some well-known problems are presented to demonstrate the efficiency and better performance of this new robust optimization algorithm. In most instances the solutions have been significantly improved, which proves its capability to deal with difficult optimization problems.
Keywords: Systems of Nonlinear Equations; Optimization; Cuckoo Optimization Algorithm; Evolutionary Algorithm.
Introduction
Like other evolutionary algorithms, the proposed algorithm starts with an initial population of cuckoos. These initial cuckoos have some eggs to lay in the nests of some host birds. Some of these eggs, which are more similar to the host bird's eggs, have the opportunity to grow up and become mature cuckoos; the other eggs are detected by the host birds and are killed. The grown eggs reveal the suitability of the nests in that area. The more eggs that survive in an area, the more profit is gained there; so the position in which more eggs survive is the term that COA is going to optimize [2].

A system of nonlinear equations has the form

f_1(x_1, x_2, ..., x_n) = 0
f_2(x_1, x_2, ..., x_n) = 0
...
f_n(x_1, x_2, ..., x_n) = 0        (1)

In order to transform (1) into an optimization problem, we use the auxiliary function

min f(habitat) = Σ_{i=1}^{n} f_i^2(habitat).        (2)

Cuckoos lay their eggs within a maximum distance from their habitat; from now on, this maximum range will be called the Egg Laying Radius (ELR). In an optimization problem with upper limit var_hi and lower limit var_low for the variables, each cuckoo has an egg laying radius which is proportional to the total number of eggs, the number of the current cuckoo's eggs, and the variable limits var_hi and var_low. So the ELR is defined as

ELR = α × (current cuckoo's eggs / total number of eggs) × (var_hi - var_low),        (3)

where α is an integer, supposed to handle the maximum value of the ELR.

Cuckoos immigrate to new and better habitats, with more similarity of eggs to the host birds' and more food for the new youngsters. To recognize which cuckoo belongs to which group, the cuckoos are clustered (k = 3-5 seems to be sufficient in simulations). When each cuckoo moves toward the goal point, it only flies part of the way and also has a deviation: each cuckoo flies only λ% of the distance toward the goal habitat, with a deviation of φ radians. For each cuckoo, λ and φ are defined as follows:

λ ~ U(0, 1),   φ ~ U(-π/6, π/6).        (4)
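A toy rendering of the pieces just defined, the sum-of-squares objective (2), the egg-laying radius (3) and the biased flight toward the best habitat (4), applied to Problem 2 below; the population size, α and the omission of the clustering step are simplifications for the sketch, not the settings of Table 1.

    import numpy as np

    rng = np.random.default_rng(1)
    var_low, var_hi = -2.0, 2.0

    def f(h):   # auxiliary objective (2) for Problem 2 below
        x1, x2 = h
        return (np.exp(x1) + x1 * x2 - 1) ** 2 + (np.sin(x1 * x2) + x1 + x2 - 1) ** 2

    pop = rng.uniform(var_low, var_hi, (20, 2))          # initial cuckoo habitats
    total_eggs = 20 * 10
    for it in range(200):
        best = pop[np.argmin([f(h) for h in pop])]
        eggs = rng.integers(5, 16, len(pop))             # eggs per cuckoo
        new_pop = []
        for h, k in zip(pop, eggs):
            elr = 1.0 * k / total_eggs * (var_hi - var_low)   # ELR, eq. (3), alpha = 1
            lam = rng.uniform(0, 1)                      # fraction of distance flown
            phi = rng.uniform(-np.pi / 6, np.pi / 6)     # flight deviation, eq. (4)
            d = best - h
            rot = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
            moved = h + lam * rot @ d                    # biased flight toward the best
            egg = moved + rng.uniform(-elr, elr, 2)      # lay an egg inside the ELR
            new_pop.append(np.clip(egg, var_low, var_hi))
        pop = np.array(new_pop)
    best = pop[np.argmin([f(h) for h in pop])]
    print("habitat:", best.round(5), "objective:", f(best))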
Problem 1 [1]:
cos(2x_1) - cos(2x_2) - 0.4 = 0
2(x_2 - x_1) + sin(2x_2) - sin(2x_1) - 1.2 = 0
-2 ≤ x_1 ≤ 2, -2 ≤ x_2 ≤ 2

Problem 2 [1]:
f_1(x_1, x_2) = e^{x_1} + x_1 x_2 - 1 = 0
f_2(x_1, x_2) = sin(x_1 x_2) + x_1 + x_2 - 1 = 0
-2 ≤ x_1 ≤ 2, -2 ≤ x_2 ≤ 2
Problem 3 [1]:
x_1 - 0.25428722 - 0.18324757 x_4 x_3 x_9 = 0
x_4 - 0.19807914 - 0.15585316 x_7 x_1 x_6 = 0
x_5 - 0.44166728 - 0.19950920 x_7 x_6 x_3 = 0
x_6 - 0.14654113 - 0.18922793 x_8 x_5 x_10 = 0
x_7 - 0.42937161 - 0.21180486 x_2 x_5 x_8 = 0
x_8 - 0.07056438 - 0.17081208 x_1 x_7 x_6 = 0
x_9 - 0.34504906 - 0.19612740 x_10 x_6 x_8 = 0
-10 ≤ x_i ≤ 10, i = 1 to 10
Table 1: COA parameter settings for problems P1-P6

         P1      P2      P3      P4      P5      P6
         20      40      5       5       5       5
         [2,4]   [2,4]   [2,4]   [2,4]   [2,4]   [2,4]
         150     200     300     30      300     300
         250     300     500     1000    1000    300
         50      50      50      50      50      50
         0.001   0.001   0.001   0.001   0.001   0.001
Results for Problem 1:

(x_1, x_2)              (f_1, f_2)
(0.15, 0.49)            (-0.00168, 0.01497)
(0.15, 0.49)            (-0.00168, 0.1497)
(0.15, 0.49)            (-0.00168, 0.1497)
(0.1575, 0.4970)        (0.005455, 0.00739)
(0.15772, 0.49458)      (0.001264, 0.000969)
(0.1563, 0.4931)        (-3.2559e-004, 1.2562e-006)
Problem 4 [3]:
Results for Problem 2:

(x_1, x_2)               (f_1, f_2)
(0.0096, 0.9976)         (0.019223, 0.016776)
(-0.00138, 1.0027)       (-0.00276, -0.0000637)
(-0.00003, 1.00009)      (-0.0000745, 0.0000174)
1 ≤ x_1 ≤ 2, 1 ≤ x_2 ≤ 2
Problem 5 (Neurophysiology Application) [1]:
x_1^2 + x_3^2 = 1
x_2^2 + x_4^2 = 1
x_5 x_3^3 + x_6 x_4^3 = 0
x_5 x_1^3 + x_6 x_2^3 = 0
x_5 x_1 x_3^2 + x_6 x_4^2 x_2 = 0
x_5 x_1^2 x_3 + x_6 x_2^2 x_4 = 0
|x_i| ≤ 10
Problem 6 [4]:
0.5 sin(x_1 x_2) - 0.25 x_2/π - 0.5 x_1 = 0
(1 - 0.25/π)(e^{2x_1} - e) + e x_2/π - 2e x_1 = 0
0.25 ≤ x_1 ≤ 1, 1.5 ≤ x_2 ≤ 2π

The parameters used in COA for the problems are listed in Table 1.

Method   (x_1, x_2)                                  (f_1, f_2)
PSO      (1.08421508149135, -0.29051455550725)       (-9.99200722162e-016, 6.77236045021e-015)
COA      (1.08421508149135, -0.29051455550725)       (-9.99200722162e-016, 6.77236045021e-015)
[Figure: convergence of the cost value over the cuckoo iterations.]
References
[1] C. Grosan and A. Abraham, A New Approach for Solving Nonlinear Equations Systems, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 38(3) (May 2008), 698-714.
Results for Problem 6:

Method            (x_1, x_2)                    (f_1, f_2)
Filled Function   (0.50043285, 3.14186317)      (-0.00023852, 0.00014159)
COA               (0.29930000, 2.83660000)      (-0.000071289, 0.000026644)
[5] C. G. Broyden, A Class of Methods for Solving Nonlinear Simultaneous Equations, Mathematics of Computation 19(92) (Oct. 1965), 577-593.
[Figure: fitness function values over 30 runs.]
[8] H. Bahrami, K. Faez, and M. Abdechiri, Imperialistic Competitive Algorithm Using Chaos Theory for Optimization, 12th International Conference on Computer Modelling and Simulation (2010).
University of Kashan
University of Kashan
sheikhi@grad.kashanu.ac.ir
babamir@kashanu.ac.ir
Abstract: Dynamic changes in the operational environments of software and in users' requirements have led software communities to develop adaptive software. The inherent dynamism of adaptive software makes it complex and error prone, so accomplishing tasks such as understanding, testing, and analyzing the cohesion and coupling of an adaptive software system is a difficult and costly labor. We present a novel approach for slicing an adaptive system whose result can be used to fulfill these tasks more easily and at lower cost. The approach uses the Techne model of an adaptive software system. Being model-based gives the approach the chance of not being involved in the software code and of working at an abstract level.
Introduction
Problem Statement
But adaptive software is inherently complex, and using the usual software engineering approaches for purposes like understanding, testing, and cohesion and coupling analysis is very difficult, costly and error prone. So new techniques should be used to optimize adaptive software development in order to take advantage of it. Slicing is a reduction technique that
ADAPTIVE SOFTWARE

Software systems operate in open, changing and unpredictable environments. So, to be robust, they should be able to adapt to environmental changes as well as to their internal changes and stakeholders' various requirements [2]. There are many languages to model these systems. Structural or object-oriented ones specify a system from its developer's point of view and do not pay attention to the stakeholders of the system; goal-based languages, instead, are closer to the stakeholders' view and are easier to understand [3,4]. Techne [5,6] is a goal-based modeling language for adaptive systems. In addition to the general properties of other languages, Techne has unique properties distinguishing it from them. The model has the form of a directed graph whose nodes represent propositions related to the environment or the stakeholders of the software, such as:
To clarify the Techne model, we consider the problem of scheduling a meeting. Scheduling can be done automatically, by the use of email and web forms, or manually. Web forms are designed to acquire participants' calendar constraints and to submit requests to modify the meeting date and location; the web form addresses and the invitations are sent to participants by email. The manual approach organizes the meeting via phone calls. A part of the model of the meeting scheduler is depicted in Figure 1.
SLICING
PROPOSED APPROACH

2. Determine Decomposed(a_i), Conflict(a_i) and Preference(a_i).
Case Study

7. pref_k++
8. ...
9. Insert (B_j, B_k) into the checked-together list
10. If pref_k > pref_j, then Decomposed(S1) = {(G1, Q2)}; go to 11
11. BF = (G1, Q2)
12. slice(S1) = slice(G1) ∪ slice(Q2), with slice(G1) = slice(T1) ∪ slice(T2); hence slice(S1) = {S1, Q2, G1, T1, T2}.
The result is a slice that is much smaller, simpler and more cost-effective to analyze for different applications than the whole Techne model. It has fewer nodes and relations, keeping only those relevant to the satisfaction of the criterion, while containing all the elements that affect the satisfaction of the criterion.
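At its core, the slice of a criterion collects everything reachable through the satisfaction-relevant relations, as in the worked example above (slice(S1) = {S1, Q2, G1, T1, T2}). A reachability sketch with that example encoded as an adjacency list follows; the extra nodes S2 and T3 are hypothetical additions showing what the slice excludes.

    # Techne-style model as: node -> nodes it depends on for satisfaction
    model = {
        "S1": ["G1", "Q2"],    # S1 is decomposed into G1 and Q2
        "G1": ["T1", "T2"],    # G1 is satisfied by tasks T1 and T2
        "Q2": [],
        "T1": [], "T2": [],
        "S2": ["T3"], "T3": [],   # unrelated part, excluded from the slice
    }

    def slice_of(criterion):
        seen, stack = set(), [criterion]
        while stack:                       # depth-first reachability
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(model[node])
        return seen

    print(slice_of("S1"))   # {'S1', 'G1', 'Q2', 'T1', 'T2'}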
Related Work

As far as we have studied, no research has been dedicated to slicing of adaptive system models, but there are some works on slicing of software models, especially UML models. Linzhang [8] considered class diagrams and performs slicing to extract test cases based on a black-box method. Ray [9] used conditioned slicing of class diagrams, but this is not well suited, as class diagrams are static and do not show the system's behavior with regard to data dependencies. To compensate for this handicap, Samuel [1] used the sequence diagram, which is dynamic and shows the system's behavior, for slicing and test case generation. He proposes a formula for slicing criterion adequacy and claims that it covers the slicing criterion with the least number of test cases. Bertolino [10] focused on message passing between sequence diagram components and tries to generate test cases.
References
[1] P. Samuel and R. Mall, A Novel Test Case Design Technique Using Dynamic Slicing of UML Sequence Diagrams, e-Informatica Software Engineering Journal 2/1 (2008), 367-378.
[2] A. G. Ganek and T. A. Corbi, The dawning of the autonomic computing era, IBM Systems Journal 2/1 (2003),
71-92.
[3] E. Nitto, C. Ghezzi, A. Metzger, M. Papazoglou, and K.
Pohl, A journey to highly dynamic, self-adaptive servicebased applications, Automated Software Engineering Journal/USA 15/3 (2008), 313-317.
[4] Q. Zhu, L. Lin, H. M. Kienle, and H. A. Muller: Characterizing maintainability concerns in autonomic element design,
software maintenance ICSM/Beijing (2008), 197-206.
[5] A. Borgida, N. Ernst, I. J. Jureta, A. Lapouchnian, S. Liaskos, and J. Mylopoulos, Techne: (An)other Requirements Modeling Language, University of Toronto (2009).
[6] I. J. Jureta, A. Borgida, N. Ernst, and J. Mylopoulos, Techne: Towards a New Generation of Requirements Modeling Languages with Goals, Preferences, and Inconsistency Handling, Proceedings of the IEEE International Conference on Requirements Engineering, Sydney, NSW (2010), 115-124.
[7] D. Binkley, S. Danicic, T. Gyimothy, M. Harman, A. Kiss, and B. Korel, Theoretical foundations of dynamic program slicing, Theoretical Computer Science 360 (2006), 23-41.
[8] W. Linzhang, Y. Jiesong, Y. Xiaofeng, H. Jun, L. Xuandong,
and Z. Guoliang, Generating test cases from UML activity
diagrams based on gray-box method, Proceedings of the 11th
Asia- Pacific Software Engineering Conference/Washington,
DC, USA (2004).
[9] M. Ray, S. S. Barpanda, and D.P. Mohapatra, Test Case
Design Using Conditioned Slicing of Activity Diagram, International Journal of Recent Trends in Engineering 1/2
(2009), 117-120.
[10] A. Bertolino and F. Basanieri, A practical approach to UML
based derivation of integration tests, Proceedings of 4th International Software Quality Week Europe (2000).
Saeed Kargar
University of Tabriz
Department of Computer
Tabriz, Iran
Tabriz, Iran
l-khanli@tabrizu.ac.ir
saeed.kargar@gmail.com
Hossein Kargar
Islamic Azad University, Science and Research Branch
Department of Computer
Hamedan, Iran
h.kargar.ir@gmail.com
Abstract: In this paper, we propose a method for discovering resources in a grid environment which is able to discover users' required combinational resources as well as single resources. In this method, the idea of combining colors is used for storing and discovering resources: combinations of colors illustrate the characteristics of resources, and users employ combinations of colors, or their equivalent codes, to request the resources they need. The method is able to locate users' required resources with low traffic, discover them by a direct path, detect the changes that occur in the system, and update the environment. It was simulated in environments of different sizes, and the results show that it generates lower traffic than the other methods and is therefore more effective.
Introduction
Related work
The resource discovery problem is one of the most important problems that researchers are trying to solve. Different methods have been proposed for it, and in this section we briefly review some of them.
3.1 Resource discovery
For example, if a node has the combinational resource CPU 3.8 GHz & HDD 2 TB, it will use the color resulting from the combination of (255.102.0) (related to CPU 3.8 GHz) and (34.0.204) (related to HDD 2 TB). To obtain the combination of colors, it is enough to take the integer part of the average of the corresponding numbers:
([(255+34)/2].[(102+0)/2].[(0+204)/2]) = (144.51.102)
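The averaging rule is easy to state in code; the sketch below combines any number of resource colors by componentwise integer averaging and reproduces the worked example above.

    def combine_colors(*colors):
        # Componentwise integer average of RGB triples
        return tuple(sum(c[i] for c in colors) // len(colors) for i in range(3))

    cpu_38ghz = (255, 102, 0)   # color assigned to CPU 3.8 GHz
    hdd_2tb   = (34, 0, 204)    # color assigned to HDD 2 TB

    print(combine_colors(cpu_38ghz, hdd_2tb))   # (144, 51, 102)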
In Figure 3, three samples of combinational resources are represented together with their colors and codes. In Figure 4, the color table of node 1 is shown. As can be seen, the number of rows in each node's color table equals the number of that node's children; each row holds the color of the resources available under the corresponding child.

Figure 5: A sample of resource discovery in our method.

Having received this request, node 7 first compares it with its local color and then with the colors available in its table. Since there is no match, it delivers the request to its parent (node 4). Node 4 delivers the request to node 1 in the same way. As shown in Figure 5, node 1 finds a match in row 1, which is related to node 2, and delivers the request to that node; node 2 acts in the same way and sends the request to nodes 5 and 6. In the end, the desired resources are discovered in two nodes for the user (multi-reservation).
trees with different sizes and desired numbers of children. We compared it with the other methods, supposing 300 users, each requesting a different number of resources; the results are shown in Figures 7 and 8.
Simulation Results
[1] I. Foster and C. Kesselman, Globus: a meta-computing infrastructure tool-kit, Int. J. High Perform. Comput. Appl. 2 (1997), 115-128.
[2] M. Mutka and M. Livny, Scheduling remote processing capacity in a workstation processing bank computing system, Proc. of ICDCS (1987).
[7] R.-S. Chang and M.-S. Hu, A resource discovery tree using bitmap for grids, Future Generation Computer Systems 26 (2010), 29-37.
[15] Ye Zhu, Junzhou Luo, and Teng Ma, Dividing Grid Service Discovery into 2-stage matchmaking, ISPA 2004, LNCS 3358 (2004), 372-381.
[16] Sanya Tangpongprasit, Takahiro Katagiri, Hiroki Honda, and Toshitsugu Yuba, A time-to-live based reservation algorithm on fully decentralized resource discovery in Grid computing, Parallel Computing 31 (2005).
[19] Simone A. Ludwig and S. M. S. Reyhani, Introduction of semantic matchmaking to Grid computing, J. Parallel Distrib. Comput. 65 (2005), 1533-1541.
[20] Juan Li and Son Vuong, Grid resource discovery using semantic communities, Proceedings of the 4th International Conference on Grid and Cooperative Computing, Beijing, China (2005).
[21] Juan Li and Son Vuong, Semantic overlay network for Grid Resource Discovery, Grid Computing Workshop (2005).
HamidReza Barzegar
Hyderabad, India
Hyderabad, India
Shahgholi a@hotmail.com
Hr.barzegar@gmail.com
G.Praveen Babu
Jawaharlal Nehru Technological University
School of Information and Technology
Hyderabad, India
pravbob@jntu.ac.in
Abstract: Offline Web Application [7]: Using the HTML5 Offline Web Application feature, web applications are able to make themselves work offline. A web application can send an instruction which causes the UA to save the relevant information into the Offline Web Application cache; afterwards the application can be used offline without needing access to the Internet. Whether the user is asked if a website is allowed to store data for offline use depends on the UA. For example, Firefox 3.6.12 asks the user for permission, but Chrome 7.0.517.44 does not ask the user for permission to store data in the application cache. In this case the data will be stored in the UA cache without the user realizing it.
Introduction
User Agent (UA): The UA represents a web application consumer which requests a resource from a web server.
Vulnerabilities
Figure 1:
1. The victim accesses any.domain.com through a malicious access point (e.g. public wireless).
2. The HTTP GET request is sent through the malicious access point to any.domain.com.
8. After the user has entered the login credentials into the faked login form (offline application), it posts the credentials to an attacker-controlled server (JavaScript code execution).

Countermeasures

The threats Persistent attack vectors and Cache poisoning cannot be avoided by web application providers, since these threats are rooted in the HTML5 specification itself. The way around this problem is to train users to clear their UA cache whenever they have visited the Internet through an unsecured network, or at least before they access a page to which sensitive data are transmitted. Further, the user needs to learn to understand the meaning of the security warning and to accept Offline Web Applications only from trusted sites.
Conclusion
References
[1] World Wide Web Consortium (W3C), HTML 4.01 Specification, W3C Recommendation, http://www.w3.org/TR/1999/REC-html401-19991224/ (1999).
[2] World Wide Web Consortium (W3C), XHTML 1.0: The Extensible HyperText Markup Language, http://www.w3.org/TR/xhtml1/ (2000).
[3] World Wide Web Consortium (W3C), HTML5 - A vocabulary and associated APIs for HTML and XHTML, http://www.w3.org/TR/html5/.
[4] M. Pilgrim, HTML5: Up and Running, Sebastopol: O'Reilly Media, 2010.
[5] Web Hypertext Application Technology Working Group (WHATWG), What is the WHATWG?, http://wiki.whatwg.org/wiki/FAQ (2011).
[6] Internet Engineering Task Force, The Internet Society: Hypertext Transfer Protocol - HTTP/1.1, http://www.ietf.org/rfc/rfc2616.txt (1999).
[7] World Wide Web Consortium (W3C), Offline Web Applications, http://www.w3.org/TR/offline-webapps/.
[8] Lavakumar Kuppan and Attack and Defense Labs, Chrome and Safari users open to stealth HTML5 AppCache attack, http://blog.andlabs.org/2010/06/chrome-and-safariusers-open-to-Stealth.Html (2010).
Amin Moradi
Department of Physics
amin.moradi@iasbs.ac.ir

Department of Computer
media.aminian@yahoo.com
Abstract: We use a multi-agent system architecture in a wireless sensor network (WSN) to predict the occurrence of earthquakes by studying the vital signs of animals. The system uses several agents with different functionalities. Case-based reasoning (CBR) methods are applied to analyze and compare the similarity between real-time animal vital signs and those recorded just before past earthquakes, in order to reduce false alarms. The presented architecture consists of two layers: an interface layer and a regional layer. At the interface layer the interface agents interact with users, and at the regional layer the cluster agents communicate with each other and aggregate the information.
Introduction
Every year more than 13,000 earthquakes with a magnitude greater than 4.5 occur around the world; hundreds of them are destructive, and many people lose their lives [1]. If we could predict them, we would be able to save many lives. Before an earthquake, the Earth's crust breaks and gases such as argon and radon are released into the air [2]. Animals are sensitive to these gases, and their behavior and vital signs change in response to them [3]. We can therefore detect stress in animals by measuring their vital signs. A WSN comprises numerous sensor devices, commonly known as motes, which can contain several sensors to monitor vital signs such as temperature, heart rate, etc. The sensor motes are spatially scattered over a large area, so data collection is difficult in this network. We therefore present a multi-layer agent system to increase the efficiency of data collection.
2.1 Regional Layer

2.2 Interface Layer

2.4 Similarity Coefficients
Various similarity coefficients have been proposed by researchers in several domains. A similarity coefficient indicates the degree of similarity between pairs of objects. The methods are shown in Figure 1 [5]. The variables are transformed as: a, the number of properties present in both cases (the new case and each recorded case in the case base); b, the number of properties present in the new case; and c, the number of properties present in the recorded case. The steps of a CBR system with an analytical approach can be ordered in five steps:

1. The abnormal vital signs enter the system as a new case.
2. Thanks to interviews with experts, weights have already been attributed to every property.
3. For every recorded case in the case base, the similarity coefficient (Sij) between the old case and the new case is calculated.

Property | Weight | Degree (1-9)
Heart rate | 0.16 | 9
shaking | 0.15 | 8
Temperature | 0.14 | 7
Breath rate | 0.13 | 6
Blood glucose | 0.11 | 5
Urine volume | 0.09 | 4
Calcium | 0.08 | 3
Proteins | 0.07 | 2
Enzymes | 0.06 | 1
(sum of weights) | 1 |

Property | Normal range
Heart rate | 60-120 per minute
shaking | -
Temperature | 37-40 C
Breath rate | 20-23 per minute
Blood glucose | 53-59 mg per cc
Urine volume | 89-109 mg per cc
Calcium | 11-14 mg per cc
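The exact form of the similarity coefficient in Eq. (1) is not recoverable from the source, so the following is only a minimal sketch: it assumes a weighted Jaccard-style matching coefficient over the sets of abnormal properties, reusing the expert weights from the table above; the case data in the example are illustrative.

```python
# Minimal sketch (assumption: a weighted Jaccard-style matching coefficient;
# the paper's exact Eq. (1) for S_ij is not recoverable from the source).
WEIGHTS = {                     # expert-assigned weights from the table above
    "heart rate": 0.16, "shaking": 0.15, "temperature": 0.14,
    "breath rate": 0.13, "blood glucose": 0.11, "urine volume": 0.09,
    "calcium": 0.08, "proteins": 0.07, "enzymes": 0.06,
}

def similarity(new_case: set, recorded_case: set) -> float:
    """S_ij between the new case and a recorded case, both given as the
    sets of properties that are abnormal in that case."""
    common = new_case & recorded_case          # abnormal in both cases ("a")
    union = new_case | recorded_case
    denom = sum(WEIGHTS[p] for p in union)
    return sum(WEIGHTS[p] for p in common) / denom if denom else 0.0

print(similarity({"heart rate", "shaking"}, {"heart rate", "temperature"}))
```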
2.5 Writing Algorithm

3 Conclusion
References

[1] United States Geological Survey (USGS), http://earthquake.usgs.gov.
Department of Computer
m.rah62@gmail.com

Department of Computer
baharakshakeriaski@yahoo.com
Abstract: Today, almost everyone in the world is directly or indirectly affected by computer systems. Therefore, there is a great need for ways to increase and improve the reliability and availability of computer systems. Software fault tolerance techniques improve these capabilities. One such technique is software rejuvenation, which counteracts software aging. In this paper, we model this technique for applications with one, two, and three software versions, then extend the model to n versions and show that using more software versions can greatly improve the availability of an application.
Introduction
Software Rejuvenation

First, we study the software rejuvenation model for an application with one software version; the model is based on a Markov process, as shown in Fig. 1. Let P(t) be the matrix of transition probability functions Pij(t). According to the Kolmogorov forward equation,

$$p'(t) = p(t)\,Q \qquad (2)$$

where Q is the transition rate function matrix of the model:

$$Q = \begin{pmatrix} -(\lambda_1+\rho_1) & \lambda_1 & \rho_1 \\ R_1 & -R_1 & 0 \\ r_1 & 0 & -r_1 \end{pmatrix} \qquad (4)$$
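As a quick illustration of how Eqs. (2) and (4) yield an availability figure, the sketch below solves the steady-state equations πQ = 0, Σπ = 1 for a three-state generator of this shape; the state interpretation and all rate values are assumptions for demonstration, not the paper's parameters.

```python
# Minimal sketch: steady-state availability for a one-version rejuvenation
# model with assumed states 0 = working, 1 = failed, 2 = rejuvenating.
import numpy as np

lam1, rho1 = 0.1, 0.05      # failure and rejuvenation-trigger rates (assumed)
R1, r1 = 1.0, 2.0           # repair and rejuvenation-completion rates (assumed)

Q = np.array([
    [-(lam1 + rho1), lam1, rho1],
    [R1, -R1, 0.0],
    [r1, 0.0, -r1],
])

# Solve pi @ Q = 0 with sum(pi) = 1: replace one balance equation by the
# normalization constraint.
A = np.vstack([Q.T[:2], np.ones(3)])
pi = np.linalg.solve(A, [0.0, 0.0, 1.0])

print(f"pA1 = p0 = {pi[0]:.4f}")   # availability, cf. Eq. (10)
```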
In steady state, the balance equations (Eqs. (5)-(9)) together with the normalization condition $\sum_i p_i = 1$ determine the state probabilities, and the availability of the one-version application is

$$p_{A1} = p_0 \qquad (10)$$

2.1 Software rejuvenation model of two-node application
According to the rejuvenation model in Fig. 3, the application is unavailable in the states (F,F), (R,F) and (F,R). Thereafter, the availability of the two-node application is given by:

$$p_{A2} = p_0 + p_1 + p_2 + p_4 + p_5 = 1 - (p_3 + p_6 + p_7) \qquad (12)$$

2.2 Software rejuvenation model of three-node application

We repeat this study for a three-dimensional state space and obtain a lower unavailability with the software rejuvenation model of the three-node application, as shown in Fig. 3. Q is the matrix of the transition rate function, as in Eq. (16). By solving the resulting equations, we obtain the values of $p_i$, $i = 0, 1, 2, \ldots, 19$. According to the rejuvenation model in Fig. 3, the application is unavailable in the states (F,F,F), (R,F,F), (F,R,F) and (F,F,R). Thereafter, the system availability of the three-node application is given by [20]:

$$p_{A3} = 1 - (p_7 + p_{17} + p_{18} + p_{19}) \qquad (13)$$

2.3 Software rejuvenation model of n-node application

Suppose that n software versions are available; we need the number of existing states and the transition rate function matrix for every number of versions. The number of states m at any time t is computed with the following formula:

$$m = 3^n - \left[ \binom{n}{2} 2^{\,n-2} + \binom{n}{3} 2^{\,n-3} + \binom{n}{4} 2^{\,n-4} + \cdots + \binom{n}{n} 2^{\,n-n} \right] \qquad (15)$$

Here $3^n$ is the number of all states that would exist if every version could be in any of its three states. According to Assumption 2, at any time t only one version can be in the rejuvenation state, so states with repeated versions in rejuvenation must be deducted from $3^n$: $\binom{n}{2} 2^{\,n-2}$ is the number of states in which 2 versions are in the rejuvenation state, $\binom{n}{3} 2^{\,n-3}$ is the number of states in which 3 versions are in the rejuvenation state, and finally $\binom{n}{n} 2^{\,n-n}$ is the single state in which all versions are in the rejuvenation state.

After constructing the transition rate matrix and substituting it into Eq. (5), we can obtain the probability of every state. From these probabilities, the system availability is obtained by the following formula:

$$p_A = 1 - \left( p_{m-1} + p_{m-2} + \cdots + p_{m-n} + p_{2^n - 1} \right) \qquad (14)$$
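Eq. (15) can be checked by brute-force enumeration; the following sketch counts the states of n versions, each in one of three per-version states, under the stated assumption that at most one version is in the rejuvenation state at any time.

```python
# Verify Eq. (15): count reachable states of n versions, each in
# {Working, Failed, Rejuvenating}, with at most one version rejuvenating.
from itertools import product
from math import comb

def states_formula(n: int) -> int:
    # m = 3^n - sum_{i=2}^{n} C(n, i) * 2^(n - i)
    return 3**n - sum(comb(n, i) * 2**(n - i) for i in range(2, n + 1))

def states_enumerated(n: int) -> int:
    # Brute force: keep only tuples with at most one 'R' component.
    return sum(1 for s in product("WFR", repeat=n) if s.count("R") <= 1)

for n in (1, 2, 3, 5):
    assert states_formula(n) == states_enumerated(n)
    print(n, states_formula(n))   # n = 3 gives 20 states, matching p0..p19
```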
[Table: the nonzero entries (R#, C#, Value) of the transition rate function matrix Q for the n-version rejuvenation model.]
[Table 3, excerpt - unavailability: 0.047528, 0.022814, 0.022438, 0.00108, 0.00061.]
To acquire an availability measure of the application, we perform numerical experiments taking system unavailability as the evaluation indicator. The default system parameter values of the software rejuvenation model are given in Table 2; all parameter values are selected from experimental experience for demonstration purposes.

The change in the unavailability of software applications with different numbers of versions and rejuvenation rates is plotted in Table 3 and Fig. 2. The number of versions is varied from simplex to multiplex (n = 5), and at the same time we perform software rejuvenation with rates from 0.5 to infinity (rate = 0: no rejuvenation). From the graph, the decrease in unavailability from simplex to duplex is significant. We can see that the number of versions strongly influences system reliability: as the number of versions increases, the system unavailability reduces rapidly and approaches a steady value.
Conclusion
Tehran University, School of Medicine
Tehran, Iran
shourie.n@srbiau.ac.ir

Tehran, Iran
amir h jafari@aut.ac.ir
Abstract: In this paper, a fuzzy neuro-chaotic network is proposed for pattern retrieval. The activation function of each neuron is a logistic map with a flexible searching area. The bifurcation parameter and searching area of each neuron are determined depending on its desired output; they are obtained using two separate fuzzy systems. At the beginning of the training process, the desired patterns are stored in fixed points by use of the pseudo-inverse matrix learning algorithm. Then the data required for constructing the fuzzy systems are provided. The fuzzy rule bases are designed using a look-up table scheme based on the provided data. In the retrieving process, all neurons are initially set to be chaotic. Each neuron searches its state space completely to find its correct periodic points. When this occurs, the neuron is driven to a periodic state of period 2; in this case, the bifurcation parameter and the searching area of the neuron are determined by the two obtained fuzzy systems. When all neurons are driven to the periodic state, the desired pattern is retrieved. Computer simulations demonstrate the remarkable performance of the proposed model in retrieving noisy patterns.
Introduction
Chaotic behavior exists in many biological systems, especially in the behavior of biological neurons. The observation of chaotic behavior in biological neurons has persuaded many researchers to consider these properties in artificial neural network models, in order to obtain new computational capabilities. Hence, numerous chaotic neural models with the ability to represent chaotic behavior and process data have been offered to date.
For example, G. Lee and N.H. Farhat proposed a chaotic pulse-coupled neural network as an associative memory based on a bifurcation neuron which is mathematically equivalent to the sine circle map [3]. In another work, a bifurcation neuron constructed from the third iterate of the logistic map was suggested by M. Lysetskiy and J.M. Zurada; it uses an external input which shifts its dynamics from chaos to one of the stable fixed points [4]. L. Zhao et al. [5] presented
a chaotic neural model for pattern recognition using periodic and chaotic dynamics: a periodic dynamic represents a retrieved pattern, and a chaotic dynamic corresponds to the searching process. A. Taherkhani et al. [6] designed a chaotic neural network that can be used for storing and retrieving gray-scale and binary patterns. This model contains chaotic neurons with the logistic map as activation function and an NDRAM network which is applied as a supervisor model for evaluating the neurons of the model.

In this paper, we try to show the advantage of chaotic behavior in artificial neural networks. Chaotic neurons are able to generate various solutions for a problem. Therefore, we propose a fuzzy neuro-chaotic network capable of pattern retrieval. In this model, the activation function of each neuron is a logistic map with a flexible searching area, and the parameters of the neurons are obtained using two separate fuzzy systems. In the training process, data are stored in memory using the pseudo-inverse matrix learning algorithm.
Model Description

2.1 Training Stage
In the training stage, the basic patterns are first normalized into [0, 1] and stored in fixed points. Suppose that the matrix X = {x1, x2, ..., xM} contains the M training patterns, each of which includes N elements. All of the M training patterns are stored in fixed points as in [2, 5] (Eq. (3)).

Then the data required for constructing the fuzzy systems are provided using the training patterns. The training patterns are noisy versions of the basic patterns, normalized into [0, 1]. Each training pattern is applied to the model separately as its initial condition. At first, all neurons are set to be chaotic in order to search their state space completely and find the correct periodic points. As the maximum value of each element of a training pattern is 1, the initial searching area of each neuron is taken as [0, 1.1], and therefore $\lambda_i(0)$ is set to 1.1. The dynamic of each neuron is determined by its error, defined as:

$$e_i(t) = x_i(k) - \sum_{j=1}^{N} w_{ij}\, x_j(k), \qquad i = 1, 2, \ldots, N \qquad (4)$$
where $w_{ij}$ is an element of the connection matrix obtained by Eq. (3), and $x_i(k)$ is the output of the i-th neuron. As the outputs of neurons in the periodic state alternate with period two ($t = k/2$), $e_i$ of each neuron is evaluated every two time units. The bifurcation parameter of each neuron is obtained as:

$$b_i(t) = \begin{cases} A_p(0), & e_i(t) \le \theta \\ A_c, & \text{otherwise} \end{cases} \qquad (5)$$

where $\theta$ is the error threshold, $A_c$ is a bifurcation parameter corresponding to the chaotic state, and $A_p(0)$ is an initial bifurcation parameter corresponding to a periodic orbit of period 2. If $e_i(t)$ is greater than the threshold, the i-th neuron remains in the chaotic state; otherwise, the neuron has approximated its corresponding periodic point. When this occurs, the neuron is driven to the periodic state with period 2 and its initial bifurcation parameter is set as $b_i(0) = A_p(0)$. In this case, the output of the neuron and its error are stored for constructing the fuzzy systems, and the initial $\lambda_i(0)$ is calculated using Eq. (1): $b_i(0)$ and the output of the neuron are substituted into Eq. (1), and Eq. (1) is solved for $\lambda$ such that one of the periodic points of the logistic map equals the present output of the neuron.
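To make the switching rule of Eq. (5) concrete, here is a minimal sketch of a single logistic-map neuron; the map form, the parameter values and the error function are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch (illustrative): one logistic-map neuron that stays chaotic
# while its error exceeds theta and is driven to a period-2 orbit once the
# error drops below it, cf. Eq. (5).
A_CHAOTIC = 4.0      # bifurcation parameter A_c for the chaotic regime (assumed)
A_PERIODIC = 3.2     # A_p(0): gives a stable period-2 orbit (assumed)
THETA = 0.05         # error threshold (assumed)

def logistic(x: float, a: float) -> float:
    return a * x * (1.0 - x)          # searching area fixed to [0, 1] here

def retrieve(x0: float, error_fn, n_iters: int = 500) -> float:
    """error_fn stands in for Eq. (4); returns the neuron's final output."""
    x, a = x0, A_CHAOTIC
    for _ in range(n_iters):
        x = logistic(x, a)
        if a == A_CHAOTIC and error_fn(x) <= THETA:
            a = A_PERIODIC            # switch: chaos -> periodic state
    return x

# Toy run: the neuron wanders chaotically until it lands near the (assumed)
# stored value 0.6, then settles onto the period-2 orbit of a = 3.2.
print(retrieve(0.37, lambda x: abs(x - 0.6)))
```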
The bifurcation parameter and the searching area are then adjusted according to:

$$b_i = b_i + 2\varepsilon_1\, \operatorname{sign}(d_i - o_i)(x_2 - x_1) \qquad (6)$$

$$\lambda_i = \lambda_i + 2\varepsilon_2\, \operatorname{sign}(d_i - o_i) \qquad (7)$$

2.2
Table 1: The recognition results from applying the images retrieved by the proposed model to the classifier (model vs. recognition %).

The retrieval process is repeated until all neurons are driven to the periodic state.
Results
Figure 4: Some examples of pattern retrieval using the proposed model: (a) noisy images, (b) retrieved images using the proposed model, (c) noiseless images.

References

[4] M. Lysetskiy and J.M. Zurada, Bifurcating neuron: computation and learning, Neural Networks 17 (2004), 225-232.
Mohsen Khosravi
Jawaharlal Nehru Technological University Hyderabad, INDIA
Department of Computer Science Engineering
mo kho 1388@yahoo.com
Abstract: GSM (Global System for Mobile Communications) is a standard set introduced by the European Telecommunications Standards Institute (ETSI) to specify the technologies for second generation (2G) digital cellular networks. It was designed to be a secure mobile phone system with strong subscriber authentication and over-the-air transmission encryption. Security plays a crucial role in wireless communication: due to the ubiquitous nature of the wireless medium, it is more susceptible to security attacks than wired communication. Given the daily use of GSM equipment by hundreds of millions of users, ever more secure and reliable encryption algorithms must be considered.
Introduction
Figure 3: Authentication
The security mechanisms specified in the GSM standard make it the most secure cellular telecommunications system available. The use of authentication, encryption, and temporary identification numbers ensures the privacy and anonymity of the system's users, as well as safeguarding the system against fraudulent use. Even GSM systems with the A5/2 encryption algorithm, or with no encryption at all, are inherently more secure than analog systems due to their use of speech coding, digital modulation, and TDMA channel access.

2.1 The A5 Algorithm

- The clock control is a threshold function of the middle bits of each of the three shift registers.
- The sum of the degrees of the three shift registers is 64.
- The 64-bit session key is used to initialize the contents of the shift registers.
- The 22-bit TDMA frame number is fed into the shift registers.
- Two 114-bit key streams are produced for each TDMA frame, which are XOR-ed with the uplink and downlink traffic channels.
- It is rumored that the A5 algorithm has an effective key length of 40 bits.

2.2 Key Length

Let us focus on key length as a figure of merit of an encryption algorithm. Assuming that a brute-force search of every possible key is the most efficient method of cracking an encrypted message (a big assumption), Table 1 summarizes how long it would take to decrypt a message with a given key length, assuming a cracking machine capable of one million encryptions per second.

Table 1: Brute-force search times for different key sizes.

Key length (bits) | 32 | 40 | 56 | 64 | 128
Time required to test all possible keys | 1.19 hours | 12.7 days | 2,291 years | 584,542 years | 10.8 x 10^24 years

Table 2: Number of machines required to search a key space in a given time.

Time | 40-bit | 56-bit | 64-bit | 128-bit
1 day | 13 | 836,478 | 2.14 x 10^8 | 3.9 x 10^27
1 week | 2 | 119,132 | 3.04 x 10^7 | 5.6 x 10^26
1 year | - | 2,291 | 584,542 | 10.8 x 10^24
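The figures in Tables 1 and 2 follow from simple arithmetic; the sketch below reproduces them under the stated assumption of one million key tests per second.

```python
# Quick check of Table 1 (and Table 2): time for an exhaustive key search at
# an assumed rate of one million keys per second.
RATE = 1_000_000                      # keys tested per second (assumption)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for bits in (32, 40, 56, 64, 128):
    seconds = 2**bits / RATE
    years = seconds / SECONDS_PER_YEAR
    print(f"{bits:>3}-bit key: {years:12.4g} years to test all keys")

# Machines required to finish a 64-bit search in one week (cf. Table 2):
week = 7 * 24 * 3600
print(round(2**64 / RATE / week))     # about 3.05e7 machines
```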
A further concern is that the widespread use of encryption technology for cellular telephone communications will interfere with the ability of law enforcement agencies to conduct surveillance on terrorist or organized criminal activity.
A disagreement between cellular telephone manufacturers and the British government centering on export permits for the encryption technology in GSM was
settled by a compromise. Western European nations
and a few other specialized markets such as Hong Kong
would be allowed to have the GSM encryption technology, in particular, the A5/1 algorithm. A weaker version of the algorithm (A5/2) was approved for export
to most other countries, including central and eastern European nations. Under the agreement, designated countries such as Russia would not be allowed to
receive any functional encryption technology in their
GSM systems. Future developments will likely lead
to some relaxation of the export restrictions, allowing
countries, which currently have no GSM cryptographic
technology to receive the A5/2 algorithm.
Do not relegate lawful interception to an afterthought, especially as one considers end-to-end security.
Conclusion

In this article, we have described the design and performance issues of GSM cellular networks, the required security, and the development of open international standards. The technical details of the encryption algorithms used in GSM are closely held secrets. GSM provides a basic range of security features to ensure adequate protection for both the operator and the customer.
Saeed Jalili
r.mortazavi@modares.ac.ir
sjalili@modares.ac.ir
Keywords: Microaggregation; Privacy Preserving Data Publishing (PPDP); TSP; Perturbative Masking Methods
Introduction
sponding centroid. This mechanism is called microaggregation and is widely used in practice. Since the original records are changed, some information is lost by this anonymization: the more similar the records within groups, the more utility remains in the perturbed data. For the microaggregation mechanism, this utility is measured by the information loss (IL) metric; lower values of IL mean that less distortion is introduced and that the anonymized dataset is more similar to the original one. The optimal microaggregation problem can be formally defined as follows: given a dataset with n records and d numerical attributes, cluster the records into groups, each containing at least k records, such that the within-group sum of squared error (SSE) is minimized. SSE is defined as

$$SSE = \sum_{j=1}^{c} \sum_{i=1}^{n_j} \lVert X_{ji} - \bar{X}_j \rVert^2,$$

where c is the number of groups, $n_j$ is the number of records in the j-th group, and $\bar{X}_j$ is the mean of the j-th group. IL is defined as $SSE/SST \times 100\%$, where SST is the total sum of squared error of the entire dataset, calculated as

$$SST = \sum_{j=1}^{c} \sum_{i=1}^{n_j} \lVert X_{ji} - \bar{X} \rVert^2,$$

where $\bar{X}$ is the centroid of the entire dataset.
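As an illustration of the IL metric defined above, the following sketch computes IL for an arbitrary clustering of a small numerical dataset; the data values are illustrative.

```python
# Minimal sketch: the IL metric defined above, computed for a given clustering.
import numpy as np

def information_loss(X: np.ndarray, labels: np.ndarray) -> float:
    """IL = SSE / SST * 100%, with SSE and SST as defined in the text."""
    sst = ((X - X.mean(axis=0)) ** 2).sum()          # spread around global centroid
    sse = sum(((X[labels == g] - X[labels == g].mean(axis=0)) ** 2).sum()
              for g in np.unique(labels))            # spread around group centroids
    return 100.0 * sse / sst

X = np.array([[1.0, 2.0], [1.2, 2.1], [5.0, 7.0], [5.1, 7.2]])
print(information_loss(X, np.array([0, 0, 1, 1])))   # small IL for tight groups
```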
2.1 The MHM Algorithm
The MHM algorithm introduced in [4] involves constructing a graph over a list of sorted records and finding the shortest path in the graph. Each arc in the final shortest path represents a group containing the records under the arc (excluding the record at the beginning of the arc). MHM constructs the graph as follows. Let X = X1, ..., Xn be a vector of length n consisting of the records sorted in ascending order. Construct a graph Gk,n: for each record Xi the graph has a node with label i, and the graph also has one additional node with label 0. For each pair of graph nodes (i, j) such that i + k <= j < i + 2k, the graph has a directed arc (i, j) from node i to node j. Each arc (i, j) corresponds to a group C(i,j) consisting of {Xh : i < h <= j}. For each arc (i, j), let the length L(i,j) of the arc be the within-group sum of squared error for the corresponding group C(i,j), i.e.

$$L_{(i,j)} = \sum_{h=i+1}^{j} \left( X_h - M_{(i,j)} \right)^2, \qquad M_{(i,j)} = \frac{1}{j-i} \sum_{h=i+1}^{j} X_h,$$

where $M_{(i,j)}$ is the centroid of the records in group C(i,j). It is proved that every group in each optimal clustering corresponds to an arc of the graph, and each optimal clustering corresponds to a path from node 0 to node n in the graph. The length of the shortest path equals the SSE of the clustering. The time complexity of constructing the directed graph is O(k^2 n), and a shortest-path algorithm for this graph has complexity O((k + 1)n) [4]. Since k is small, the algorithm is efficient in practice.
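A minimal sketch of this shortest-path formulation for univariate records follows: nodes 0..n, arcs (i, j) with k <= j - i < 2k weighted by the within-group SSE of C(i,j), relaxed in increasing node order since the graph is acyclic. All names and data are illustrative, and at least k records are assumed.

```python
# Minimal sketch of the MHM-style shortest path: an optimal univariate
# k-partition of records sorted in ascending order (assumes len(x) >= k).
def optimal_partition(x, k):
    """Returns (total SSE, list of groups)."""
    n = len(x)
    ps, ps2 = [0.0], [0.0]                  # prefix sums of x and x^2
    for v in x:
        ps.append(ps[-1] + v)
        ps2.append(ps2[-1] + v * v)

    def sse(i, j):                          # SSE of group C(i,j) = x[i:j]
        s, m = ps[j] - ps[i], j - i
        return (ps2[j] - ps2[i]) - s * s / m

    INF = float("inf")
    dist = [0.0] + [INF] * n
    pred = [0] * (n + 1)
    for i in range(n):                      # DAG: relax nodes in increasing order
        if dist[i] == INF:
            continue
        for j in range(i + k, min(i + 2 * k - 1, n) + 1):  # k <= j - i < 2k
            if dist[i] + sse(i, j) < dist[j]:
                dist[j] = dist[i] + sse(i, j)
                pred[j] = i
    groups, j = [], n
    while j > 0:                            # walk the shortest path back to node 0
        groups.append(x[pred[j]:j])
        j = pred[j]
    return dist[n], groups[::-1]

print(optimal_partition([1.0, 1.1, 1.2, 5.0, 5.1, 5.2, 9.0], 3))
```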
2.2
effectiveness, the main drawback of the approach is related to converting a given tour into a path, i.e. selecting the first record of the path from which the graph must be constructed. One heuristic would be to delete the longest edge in the cycle to convert it into a path; unfortunately, this method does not always produce optimal clustering in terms of IL. Another approach is to test all possible starting records, but this method is not applicable due to its time complexities of O(n^2 k^2) and O(n^2 k) for constructing the graphs and computing the shortest paths, respectively. Additionally, the heuristic that a shorter tour results in a lower IL may fail. In the next section, we illustrate this problem for a small two-dimensional dataset, and propose an efficient refinement procedure to overcome the weakness.
MicTSP
conflict with previous changes. If an exchange is committed on the dataset, the involved clusters and all their neighbours are added to a list to be considered in the next round. The iteration terminates when no considerable change in IL is achieved or a maximum repeat count is reached.
Experiments

[Figure: a small two-dimensional example dataset with its TSP tour; TSP SSE = 5.1272.]

IL (%) and running times of MDAV, MicTSP and MicTSP2 (with refinement):

Dataset | k | MDAV | MicTSP | MicTSP2* | Imp** | MicTSP time (sec) | Refinement time (sec)
Tarragona | 3 | 16.9326 | 14.8456 | 14.7995 | 12.5976 | 0.05 | 0.79
Tarragona | 4 | 19.5458 | 17.7523 | 17.4193 | 10.8796 | 0.06 | 1.51
Tarragona | 5 | 22.4613 | 21.1884 | 20.5634 | 8.4496 | 0.06 | 2.98
Tarragona | 6 | 26.3252 | 25.1887 | 24.0602 | 8.6039 | 0.07 | 3.06
Tarragona | 10 | 33.1929 | 33.5223 | 30.7549 | 7.3449 | 0.13 | 9.56
Census | 3 | 5.6922 | 5.0710 | 4.9603 | 12.8579 | 0.77 | 0.84
Census | 4 | 7.4947 | 6.8708 | 6.6662 | 11.0545 | 0.09 | 1.04
Census | 5 | 9.0884 | 8.4611 | 8.0179 | 11.7788 | 0.10 | 2.41
Census | 6 | 10.3847 | 9.7662 | 9.0907 | 12.4606 | 0.09 | 2.79
Census | 10 | 14.1559 | 14.6112 | 12.9363 | 8.6155 | 8.03 | 8.19
EIA | 3 | 0.4829 | 0.3843 | 0.3617 | 25.1129 | 0.18 | 3.09
EIA | 4 | 0.6714 | 0.5262 | 0.4948 | 26.2918 | 0.26 | 3.46
EIA | 5 | 1.6667 | 0.8582 | 0.7730 | 53.6239 | 0.30 | 3.54
EIA | 6 | 1.3078 | 1.1205 | 0.9521 | 27.2014 | 0.42 | 5.02
EIA | 10 | 3.8397 | 2.0756 | 2.0189 | 47.4204 | 0.72 | 6.48

References

[1] L. Sweeney, k-anonymity: A model for protecting privacy, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 10 (2002), no. 5, 557-570.
[6] J.L. Lin, T.H. Wen, and J.C. Hsieh, Density-based microaggregation for statistical disclosure control, Expert Systems with Applications 37 (2010), no. 4, 3256-3263.
[7] D. Rebollo-Monedero, J. Forné, and M. Soriano, An algorithm for k-anonymous microaggregation and clustering inspired by the design of distortion-optimized quantizers, Data & Knowledge Engineering (2011).
[8] B. Heaton, New Record Ordering Heuristics for Multivariate Microaggregation, Nova Southeastern University, 2012.
[9] J. Domingo-Ferrer, A. Martínez-Ballesté, and J.M. Mateo-Sanz, Efficient multivariate data-oriented microaggregation, The VLDB Journal 15 (2006), no. 4, 355-369.
[10] M.H. Nasseri and S.H. Khaviari, Solving TSP by considering processing time: meta-heuristics and fuzzy approaches, Fuzzy Information and Engineering 3 (2011), no. 4, 359-378.
Peyman Gholami
Arak, Iran
Peyman711@yahoo.com

Arak, Iran
dnoshirvani@yahoo.com
Abstract: A high unemployment rate is a serious problem in all countries. Governments are trying to increase GDP to solve the unemployment problem and have invested heavily in cooperatives as a quick way to create jobs. In this paper, 200 industrial cooperatives are inspected according to BSC criteria and their job-making capability is evaluated. The possibility of cooperative job making is then predicted using classification algorithms and BSC criteria; the importance of the factors affecting the job making of cooperatives is calculated using the Fisher score algorithm; the important factors are selected using the CFS algorithm; and finally the database rules are extracted using association rules. The results show the high efficiency of data mining methods for the job-making analysis of industrial cooperatives.
Keywords: Data Mining, Classification Algorithms, Association Rules, Fisher score, Balanced Score Card, Cooperative company, Making Job Opportunity
Introduction
Cooperative organizations are among the most important social and economic tools and are aimed at special conditions. Cooperative behavior, which can be called social maturity, shows the determination of a society to solve its economic and social problems. Today cooperative economics is part of the developed economic, social and political knowledge taught in many universities across the world. This branch of economics is successfully used in developing countries, helping them to decrease the unemployment rate and to spread social welfare.
Data mining and knowledge discovery (DMKD) has made predominant progress during the past two decades [1]. It utilizes methods, algorithms, and techniques from many disciplines, including statistics, databases, machine learning, pattern recognition, artificial intelligence, data visualization, and optimization [2].
2 Preliminaries

2.1 Classification
Here $r_{zc}$ is the correlation between the summed components and the outside variable, k is the number of components, $\bar{r}_{zi}$ is the average of the correlations between the components and the outside variable, and $\bar{r}_{ii}$ is the average inter-correlation between the components. Equation 5 is, in fact, Pearson's correlation coefficient, where all variables have been standardized. The following conclusions can be drawn:

- The higher the correlations between the components and the outside variable, the higher the correlation between the composite and the outside variable.
- The lower the inter-correlations among the components, the higher the correlation between the composite and the outside variable.

2.2 Association rule
2.3 Fisher score

$$F_r = \frac{\sum_{i=1}^{c} n_i\,(\mu_i - \mu)^2}{\sum_{i=1}^{c} n_i\,\sigma_i^2} \qquad (1)$$

where c is the number of classes, $n_i$ is the number of samples in class i, $\mu_i$ and $\sigma_i^2$ are the mean and variance of the feature in class i, and $\mu$ is the overall mean of the feature.
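A minimal sketch of Eq. (1) for a single feature follows; the data and names are illustrative.

```python
# Minimal sketch of the Fisher score in Eq. (1) for one feature and c classes.
import numpy as np

def fisher_score(x: np.ndarray, y: np.ndarray) -> float:
    """x: feature values; y: class labels.
    F = sum n_i (mu_i - mu)^2 / sum n_i var_i."""
    mu = x.mean()
    classes = np.unique(y)
    num = sum((y == c).sum() * (x[y == c].mean() - mu) ** 2 for c in classes)
    den = sum((y == c).sum() * x[y == c].var() for c in classes)
    return num / den if den else 0.0

x = np.array([1.0, 1.2, 0.9, 5.0, 5.2, 4.9])
y = np.array([0, 0, 0, 1, 1, 1])
print(round(fisher_score(x, y), 2))   # large score: feature separates classes
```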
2.4
3 Experimental study

3.1 Data source
In this paper, the information of 200 industrial cooperatives in the Central Province, based on a 2010 survey, is used as the database, and the records relate to these cooperatives. The features used are a number of BSC criteria:
Primary investment
Amount of export
Relatively low price for products
Customer satisfaction
Number of customers
Keeping customers
Production to capacity ratio
Applying new production technology
Having ISO license
Having insurance security for personnel
Innovation capability
Increasing control over material suppliers
Increasing control over distributers and retailers
Increasing market share
Increasing sale by improving quality
Three classes of job making are defined for each cooperative, according to the number of job opportunities created in 2011:

Class 1: low job making (fewer than 20 job opportunities created)
Class 2: medium job making (between 20 and 40 job opportunities created)
Class 3: high job making (more than 40 job opportunities created)
Classification algorithm | Accuracy (%)
J48 | 79
SVM | 83
Logistic | 81
Random Forest | 91
Simple Bayes | 69
3.2 Proposed Method

1. Start.
2. Build the database according to the data source described in the previous section.
3. Perform classification using a number of algorithms to assess the power of forecasting the number of job-making opportunities using BSC criteria.
4. Use the CFS algorithm to select the most important features of the database.
5. Use the Fisher score algorithm to rank the features of the industrial cooperatives of the Central Province.
6. Use the Apriori algorithm to extract the rules of the cooperatives database (support and confidence are computed as in the sketch below).
7. End.
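Step 6 relies on the support and confidence of candidate rules; a minimal sketch of those two measures follows, using made-up transactions rather than the paper's data.

```python
# Minimal sketch of support/confidence for association rules of the kind shown
# in the rules table (attribute levels such as "primary investment = high").
transactions = [                                   # illustrative, not the paper's data
    {"primary investment=high", "customers=high", "standard=high"},
    {"primary investment=high", "customers=high"},
    {"primary investment=low", "customers=high", "standard=high"},
]

def support(itemset: set) -> float:
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent: set, consequent: set) -> float:
    return support(antecedent | consequent) / support(antecedent)

lhs = {"primary investment=high", "customers=high"}
rhs = {"standard=high"}
print(support(lhs | rhs), confidence(lhs, rhs))    # 1/3 and 1/2 here
```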
Feature | Fisher score
Primary investment | 0.861
Amount of export | 0.671
Relatively low price products | 0.511
Customer satisfaction | 0.607
Number of customers | 0.418
Keeping customers | 0.319
Production to capacity ratio | 0.089
Applying new production technology | 0.156
Having ISO license | 0.147
Having insurance security for personnel | 0.311
Innovation capability | 0.264
Increasing control over material suppliers | 0.056
Increasing control over distributers and retailers | 0.208
Increasing market share | 0.097
Increasing sale by improving quality | 0.041
[Table: extracted association rules with their confidence and support; the first visible rule is Primary investment(high) + number of customers(high) + Standard(high); the confidence/support values include 1, 0.56, 0.46, 0.41, 0.62, 0.55, 0.39, 0.59, 0.44.]
Conclusions
References

[1] Y. Peng, G. Kou, Y. Shi, and Z. Chen, A descriptive framework for the field of data mining and knowledge discovery, International Journal of Information Technology and Decision Making 7 (2008), no. 4, 639-682.
[2] U.M. Fayyad, G. Piatetsky-Shapiro, and P. Smyth, From data mining to knowledge discovery, Advances in Knowledge Discovery and Data Mining, AAAI Press (1996), 1-34.
[3] K. Chen, L. Xu, and H. Chi, Improved learning algorithms for mixture of experts in multi-class classification, Neural Networks 12 (1999), 1229-1252.
[4] J.C. Platt, N. Cristianini, and J. Shawe-Taylor, Large margin DAGs for multi-class classification, Proceedings of Neural Information Processing Systems, NIPS'99, MIT Press (1999), 547-553.
[5] E.L. Allwein, R.E. Schapire, and Y. Singer, Reducing multi-class to binary: a unifying approach for margin classifiers, Journal of Machine Learning Research 1 (2000), 113-124.
[6] C. Loucopoulos, Three-group classification with unequal misclassification costs: a mathematical programming approach, Omega 29 (2001), no. 3, 291-297.
[7] Rich Caruana and Alexandru Niculescu-Mizil, An empirical comparison of supervised learning algorithms, Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh (2006).
[8] R. Agrawal, T. Imielinski, and A. Swami, Mining association rules between sets of items in large databases, SIGMOD, Washington, DC, USA (1993), 207-216.
Karim Faez
Amirkabir University Of Technology(Tehran polytechnic)
Department of Electrical Engineering
k.faez@aut.ac.ir
Abstract: This paper examines the performance of energy-efficient target tracking with a recovery process in a cluster-based wireless sensor network. Object tracking is vulnerable to loss of the target due to causes such as sensor failure, prediction error, or abrupt changes in the object's trajectory. For a reliable object tracking method that can be used in critical situations, a fool-proof mechanism is needed. This paper explains an object tracking method (HCTT) along with a recovery mechanism to track objects and, if needed, recover lost objects in a clustered network. The simulation has been carried out using the Castalia simulation framework of OMNeT++.
Introduction

A Wireless Sensor Network (WSN) consists of a number of sensor nodes (depending on the usage), where each sensor has the prerequisite ingredients to store and compute data. One of the practical applications of WSNs is object tracking [1] [2]. However, difficulties exist in a target tracking sensor network which we must overcome to reach ideal tracking. The network is always vulnerable to errors such as sensor failures, detection errors, prediction errors, network failures, and localization errors. These errors cause the object's course to be lost: the target still exists within the sensor network, but it is not traceable anymore, so we need a robust recovery method which recovers the lost objects. The recovery process should be quick and effective. As we know, the cluster structure is the only suitable structure which is capable of extension and has benefits for large-scale WSNs, so the cluster structure is adopted to overcome the object tracking problem. We therefore use a clustered structure consisting of dynamic clustering and static clustering, called hybrid cluster-based target tracking (HCTT), together with a recovery process.

The main contributions of the paper are summarized as follows: (1) we address the base target tracking method (HCTT), (2) we describe the recovery process, and finally (3) we examine the simulation results to show the efficiency of the proposed scheme.

2 The Performance of the HCTT

In this section a review of the method is offered; for a full study with details, readers are referred to [3].
2.1 System Model

Our network is formed by n static sensor nodes randomly deployed in an area of interest. The sink node is deployed at a corner of the network. The network is made up of m clusters by using any suitable clustering algorithm; each cluster i has ni nodes, including one cluster head and many members. A general sensing model is adopted for a sensor node vi, defined by R(vi; rs).
2.2 Boundary Problem

2.3 Inter-cluster Handoffs

3.1 Problem Description
Search: The search step has been added to reduce false recovery initiation. In this step, the static cluster head queries the dynamic cluster about the target's existence. Continuing the above example, the static cluster head determines the target's presence within its own cluster; on failing to discover the car, it enters the next step.

Active recovery: The static cluster is the focal point of the recovery process. The static cluster head sends a target-loss message to all one-hop clusters (dynamic and static clusters).

5 Simulation

Simulations were carried out to study the proposed tracking and recovery mechanism. The OMNeT++ package with the Castalia framework was used. The network consists of 300 sensor nodes, deployed in a field of 140 x 140 m. There are 80 GPS nodes in the network, and the other nodes determine their location as explained earlier. The sensor network had 30 static clusters with 12 nodes each. The cluster heads are identified during deployment. A node chooses a cluster head based on the minimum distance among all the cluster heads it can reach in just one hop. It is assumed that nodes are perfectly localized.

5.1 Target Tracking

Figure 2 shows the movement of the target in the sensor network when there are no errors. The simulation experiment is run for 120 s. The target enters at 7 seconds from location (10, 140) and moves at a speed of 10 m/s in a zig-zag fashion; it finally leaves the network at 45 seconds from location (130, 0) and is tracked from the moment it enters the network until it leaves. A total of 164 target locations are recorded along the network; the cluster heads localize the target every 300 ms. Some localized points with timing information are shown in Figure 2.
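A minimal sketch of the cluster-head selection rule just described (nearest cluster head reachable in one hop) follows; the radio range and coordinates are assumptions for illustration.

```python
# Minimal sketch (assumed geometry): each node joins the nearest cluster head
# it can reach in one hop, as in the simulation setup described above.
import math, random

RADIO_RANGE = 30.0                                   # one-hop range in m (assumed)
random.seed(1)
heads = [(random.uniform(0, 140), random.uniform(0, 140)) for _ in range(30)]

def choose_head(node):
    reachable = [h for h in heads
                 if math.dist(node, h) <= RADIO_RANGE]   # one-hop neighbours only
    return min(reachable, key=lambda h: math.dist(node, h), default=None)

print(choose_head((70.0, 70.0)))    # None if no head is within radio range
```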
5.2

Conclusions
References

[1] A. Arora et al., A line in the sand: A wireless sensor network for target detection, classification, and tracking, Computer Networks 46 (2004), 605-634.
[2] C.S. Raghavendra and K.M. Sivalingam, Wireless Sensor Networks (2004), 125-128.
[3] Z. Wang, W. Lou, and J. Ma, A novel mobility management scheme for target tracking in cluster-based sensor networks, IEEE DCOSS (2010), 172-186.
Qazvin, Iran
Qazvin, Iran
h.vojodi@qiau.ac.ir
eftekhari@qiau.ac.ir
Abstract: In this paper we propose an unsupervised evaluation method based on minimal intra-region disparity and maximum inter-regions disparity measured on a pixel neighborhood. The method evaluates color image segmentation algorithms and measures their accuracy. The proposed method can be used for any type of color image with any number of regions, and it also penalizes over-segmentation. Experiments were performed on a database composed of 2400 segmented color images, and we compared the proposed method with another unsupervised evaluation method. The experimental results demonstrate the effectiveness of the proposed method.
Introduction
Segmentation is a fundamental stage in image processing and machine vision applications. Many segmentation methods have been proposed in the literature [1, 2], but it is still a challenging task to evaluate their efficiency. Consequently, methods for evaluating different image segmentation algorithms play a key role in image segmentation research [3].

The evaluation of a segmentation result is made at a given level of precision. Generally, two main evaluation approaches exist: supervised and unsupervised. Supervised evaluation criteria use some prior knowledge such as a ground truth: the results of a segmentation algorithm are compared to a standard image that has been manually segmented. This is the most commonly used method of objective evaluation. However, supervised evaluation is a subjective and time-consuming task, and for most images, especially natural images, we generally cannot guarantee that a single manually-generated segmentation image exists. These methods are widely used in medical applications [4].
Rosenberger presented in [7] a measure that enables estimating the intra-region homogeneity and the inter-regions disparity of gray-level images.

Zhang et al. [3] proposed a novel objective segmentation evaluation method based on information theory. The method uses entropy as the basis for measuring the uniformity of pixel characteristics within a segmentation region, and is used to evaluate color segmented images.

The proposed evaluation method is based on minimal intra-region disparity and maximum inter-regions disparity measured on a pixel's neighborhood. It provides a quality score that can be used to compare different segmentations of the same image. The method can be used to compare various parameterizations of one particular segmentation method (including those which differ in the number of regions used in the segmentation) as well as fundamentally different segmentation techniques. We compare the proposed method with Zhang's method [3]. It enables estimating the intra-region homogeneity and the inter-regions disparity: in a segmented image, the pixels located in a region should have similar properties, while pixels of neighboring regions should have different properties.

In this article we use the RGB color space; a color image is processed based on each of its components using the gray-level method. Each intra-region color error of the segmented image is computed based on its R, G and B components; according to each component, one error value is obtained for each region, and the average of the three color errors of each region represents the total color error of the region.

Intra-region disparity is defined based on the color error. Let I be the original image and Ig be the segmented image, defined as a division of I into N arbitrarily-shaped regions. One defines $C_x(s,t) = |g_I(s) - g_I(t)|/(L-1)$ as the disparity between two pixels s and t, with L being the maximum gray level. The interior disparity $C_{I_x}(R_j)$ of the region $R_j$ is defined as follows:

$$C_{I_x}(R_j) = \frac{1}{S_j} \sum_{s \in R_j} \max\{\,C_x(s,t),\; t \in W(s) \cap R_j\,\} \qquad (1)$$

where $R_j$ is the set of pixels in region j, $S_j$ its area, and W(s) the neighborhood of pixel s. The mean value of component x over region $R_j$ is

$$C_x(R_j) = \Big( \sum_{p \in R_j} C_x(p) \Big) / S_j \qquad (3)$$

where $x \in$ {color components} (RGB in our experiments). The disparity of two uniform regions $R_i$ and $R_j$ is calculated as:

$$DE_x(R_j) = \frac{1}{N_R} \sum_{R_i \in \aleph(R_j)} \frac{|C_x(R_j) - C_x(R_i)|}{N_g(R_j) + N_g(R_i)} \qquad (4)$$
Experimental Results
Image | Segmentation | EEntropy | Proposed method
1 | Normal | 5.0591 | 0.3038
1 | Over | 5.1476 | 0.1840
2 | Normal | 5.3755 | 0.5457
2 | Over | 5.1673 | 0.2900
3 | Normal | 5.1797 | 0.4083
3 | Over | 4.8328 | 0.0874
4 | Normal | 4.6521 | 0.4101
4 | Over | 4.6437 | 0.2811
5 | Normal | 5.0339 | 0.8845
5 | Over | 5.8374 | 0.1184
6 | Normal | 5.8806 | 0.1683
6 | Over | 6.1022 | 0.0157
7 | Normal | 5.6258 | 0.5115
7 | Over | 5.8837 | 0.2065
8 | Normal | 6.9249 | 0.2151
8 | Over | 6.4253 | 0.0174
9 | Normal | 5.2589 | 0.5200
9 | Over | 5.1189 | 0.4097
10 | Normal | 5.6475 | 0.4031
10 | Over | 5.2757 | 0.2653
11 | Normal | 5.2629 | 0.4137
11 | Over | 5.2764 | 0.2688

Table 1: The comparison results for 11 segmented images given by EEntropy and the proposed method.
Spatial | 2 | 5 | 10 | 50 | 100 | 200 | Ground truth
Average of 50 images | 0.0120 | 0.0104 | 0.0144 | 0.0180 | 0.0221 | 0.0230 | 0.1247
Average of 100 images | 0.0176 | 0.0130 | 0.0218 | 0.0254 | 0.0348 | 0.0360 | 0.1238
Average of 150 images | 0.0145 | 0.0106 | 0.0178 | 0.0223 | 0.0294 | 0.0327 | 0.1173
Average of 200 images | 0.0172 | 0.0119 | 0.0179 | 0.0231 | 0.0293 | 0.0328 | 0.1103

Table 2: The accuracy (%) of the proposed method for 50, 100, 150 and 200 segmented images with spatial parameter 2, 5, 10, 50, 100 and 200, and their ground truth.
Conclusion

Evaluation of image segmentation algorithms is necessary to quantify the performance of existing segmentation methods. In this paper, we proposed an unsupervised evaluation method for the evaluation of color image segmentation algorithms. It is based on the minimal intra-region disparity and maximum inter-regions disparity measured on a pixel neighborhood. We used a large database composed of 2400 segmented images of the Berkeley dataset for the experiments, and compared the proposed method with another unsupervised evaluation method. The proposed method is sensitive to over-segmentation and penalizes it. Experimental results demonstrate that the proposed method is appropriate for the evaluation of segmented color images.

Figure 2: Evolution of the E measure over the 11 segmented images.
References

[1] N. Senthilkumaran and R. Rajesh, Image Segmentation - A Survey of Soft Computing Approaches, International Conference on Advances in Recent Technologies in Communication and Computing (2009), 844-846.
[2] W. Tao, H. Jin, and Y. Zhang, Colour Image Segmentation Based on Mean Shift and Normalized Cuts, IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics 37 (2007), no. 5, 1382-1389.
[3] H. Zhang, J. Fritts, and S. Goldman, An entropy-based objective evaluation method for image segmentation, Proceedings of SPIE - Storage and Retrieval Methods and Applications for Multimedia (2004).
[4] N. M. Nasab, M. Analoui, and E. J. Delp, Robust and Efficient Image Segmentation Approaches Using Markov Random Field Models, Journal of Electronic Imaging 12 (2003), no. 1, 50-56.
[5] H. Zhang, J. E. Fritts, and S. A. Goldman, Image Segmentation Evaluation: a Survey of Unsupervised Methods, Computer Vision and Image Understanding 110 (2008), no. 2, 260-280.
[6] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, Unsupervised performance evaluation of segmentation, EURASIP Journal on Applied Signal Processing (2006).
[7] S. Chabrier, C. Rosenberger, H. Laurent, B. Emile, and P. Marche, Evaluating the segmentation result of a gray-level image, Proceedings of the 12th European Signal Processing Conference (EUSIPCO'04), Vienna, Austria (2004), 953-956.
[8] D. Martin, C. Fowlkes, D. Tal, and J. Malik, A Database of Human Segmented Natural Images and its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics, Proceedings of the 8th International Conference on Computer Vision 2 (2001), 416-423.
[9] Edge Detection and Image Segmentation System: http://www.caip.rutgers.edu/riul/research/code/EDISON/.
[10] R. Unnikrishnan, C. Pantofaru, and M. Hebert, Toward Objective Evaluation of Image Segmentation Algorithms, IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (2007), no. 6.
[11] F. Ge, S. Wang, and T. Liu, New Benchmark for Image Segmentation Evaluation, Journal of Electronic Imaging 16 (2007), no. 3, 033011-033026.
Qazvin, Iran
Qazvin, Iran
h.vojodi@qiau.ac.ir
eftekhari@qiau.ac.ir
Abstract: Many segmentation methods have been proposed in the literature, but it is difficult to compare their efficiency. In this paper, we propose an unsupervised evaluation method based on the combined principles of minimal intra-region disparity and maximum inter-regions disparity measured on a pixel neighborhood. The purpose of this paper is to present a framework for the evaluation of image segmentation algorithms. The proposed method measures the accuracy of image segmentation algorithms and can be used for any type of color image with any number of regions. It can also penalize the under-segmentation and over-segmentation problems. We compared the proposed method with another unsupervised evaluation method on a database composed of 2400 segmented color images. Experimental results demonstrate the effectiveness of the proposed method.
Introduction
Segmentation is a fundamental stage in image processing, video and computer vision applications. The target of image segmentation is the domain-independent partitioning of the image into several regions which are visually distinct and uniform with respect to some property, such as grey level, texture or color. Many segmentation methods have been proposed in the literature [1, 2], but it still remains a challenging task to evaluate their efficiency.

Research into better segmentation methods invariably encounters two problems: (1) the inability to effectively compare different segmentation methods, or even different parameterizations of a given segmentation method, and (2) the inability to determine whether one segmentation method or parameterization is suitable for all images or classes of images (e.g. natural images, medical images, etc.). Consequently, methods for evaluating different segmentations play a key role.

Unsupervised methods compute some statistics on the segmentation results according to the original image, without any prior knowledge. Unsupervised evaluation enables the objective comparison of both different segmentation methods and different parameterizations of a single method, without requiring human visual comparisons or comparison with a manually-segmented or pre-processed reference image. Additionally, unsupervised methods generate results for individual images and for images whose characteristics may not be known until the evaluation stage. Unsupervised methods are crucial to real-time segmentation evaluation and can furthermore enable self-tuning of algorithm parameters based on evaluation results [5].
The test images in the benchmark should have a large variety so that the evaluation results can be extended to other images and applications. The experiments are conducted using the images and ground-truth segmentations of the Berkeley segmentation data set [7]; we evaluate the performance of our algorithm on the Berkeley Segmentation Database (BSD). The proposed method limits the under-segmentation and over-segmentation problems, and analysis of the experimental results on the large variety of test images from the Berkeley segmentation dataset demonstrates its efficiency.

Zeboudj proposed a measure based on the combined principles of maximum inter-regions disparity and minimal intra-region disparity measured on a pixel neighborhood. Rosenberger presented a criterion that estimates the intra-region homogeneity and the inter-regions disparity and quantifies the quality of segmentation results [6]. Zhang et al. proposed a novel objective segmentation evaluation method based on information theory, which uses entropy as the basis for measuring the uniformity of pixel characteristics within a segmentation region.

In a segmented image, the pixels located in a region should have similar properties, while pixels of neighboring regions should have different properties; thus a good segmentation criterion should consider two conditions: homogeneity within regions and disparity between neighboring regions. In this paper, we propose a novel objective segmentation evaluation method based on minimal intra-region disparity and maximum inter-regions disparity measured on a pixel's neighborhood. Our evaluation method provides a quality score that can be used to compare different segmentations of the same image, various parameterizations of one particular segmentation method (including those which differ in the number of regions used in the segmentation), as well as fundamentally different segmentation techniques.

2.1 Intra-region Disparity

The intra-region squared color error is computed as the proportion of misclassified pixels in an image; in the uniform case, this parameter equals the normalized standard deviation of the region.
The Third International Conference on Contemporary Issues in Computer and Information Sciences
Intra-region disparity is defined based on the color error. The mean value of component x over region $R_j$ is

$$C_x(R_j) = \Big( \sum_{p \in R_j} C_x(p) \Big) / S_j \qquad (1)$$

where $x \in$ {color components} (RGB in our experiments), $R_j$ is the set of pixels in region j, and $S_j = |R_j|$ denotes the area of region j. The squared color error of region j is defined as

$$e_x^2(R_j) = \sum_{p \in R_j} \left[ \frac{1}{L-1} \big( C_x(p) - C_x(R_j) \big) \right]^2 \qquad (2)$$

where L is the total number of gray levels. The total interior disparity, denoted by $V_{intra}(R_j)$, computes the homogeneity of each region of the segmented image:

$$V_{intra}(R_j) = \frac{1}{3\,S_j} \sum_{x \in \{R,G,B\}} e_x^2(R_j) \qquad (3)$$

2.2 Inter-region Disparity

$$CE_x(R_j) = \frac{1}{B_j} \sum_{p \in R_j} \;\sum_{p_n \in W(p),\; p_n \notin R_j} C_x(p, p_n) \qquad (4)$$

where $B_j$ is the number of $p_n$ pixels, $W(p)$ is the neighborhood of p, and $p_n$ is a pixel neighboring p that belongs to a separate (neighboring) region of $R_j$. $N_R$ is the number of regions that are neighbors of $R_j$, and $CE(R_j)$ is the average inter-region disparity of region $R_j$. The total inter-region disparity $V_{inter}(R_j)$ is defined as follows:

$$V_{inter}(R_j) = \frac{1}{3\,N_R} \sum_{x \in \{R,G,B\}} CE_x(R_j) \qquad (5)$$

If the intra-region disparity of a region is less than its inter-region disparity, that region is more accurate. The disparity of the region $R_j$ is defined by the measurement $C(R_j) \in [0, 1]$, expressed as follows:

$$C(R_j) = \begin{cases} 1 - \dfrac{V_{intra}(R_j)}{V_{inter}(R_j)}, & 0 < V_{intra}(R_j) < V_{inter}(R_j) \\ V_{inter}(R_j), & V_{intra}(R_j) = 0 \\ 0, & \text{otherwise} \end{cases} \qquad (6)$$

The accuracy of the segmented image depends on the accuracy of its regions. Therefore, the evaluation measure for a segmented image is:

$$E_{Intra-Inter} = \frac{1}{S_I} \sum_{j=1}^{N} C(R_j)\, S_j \qquad (7)$$

where $S_I$ is the total number of pixels of the image.
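The following sketch illustrates the combined measure of Eqs. (1)-(7) in simplified form: it works on a single gray-level channel instead of the three RGB terms and uses 4-neighborhoods, so it is an approximation of the described measure, not the authors' exact implementation.

```python
# Minimal sketch (simplified: grayscale channel, 4-neighborhoods) of the
# intra/inter disparity score E defined in Eqs. (1)-(7).
import numpy as np

def evaluate(img, seg, L=256):
    H, W = img.shape
    E = 0.0
    for r in np.unique(seg):
        mask = seg == r
        S = mask.sum()
        c_mean = img[mask].mean()
        v_intra = (((img[mask] - c_mean) / (L - 1)) ** 2).sum() / S
        # inter: mean normalized contrast across the region's boundary pixels
        diffs = []
        ys, xs = np.nonzero(mask)
        for y, x in zip(ys, xs):
            for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W and seg[ny, nx] != r:
                    diffs.append(abs(img[y, x] - img[ny, nx]) / (L - 1))
        v_inter = np.mean(diffs) if diffs else 0.0
        if 0 < v_intra < v_inter:
            c = 1 - v_intra / v_inter      # cf. Eq. (6)
        elif v_intra == 0:
            c = v_inter
        else:
            c = 0.0
        E += c * S                         # area-weighted, cf. Eq. (7)
    return E / img.size

img = np.array([[10, 12, 200, 202], [11, 13, 201, 199]], dtype=float)
seg = np.array([[0, 0, 1, 1], [0, 0, 1, 1]])
print(round(evaluate(img, seg), 4))   # near-homogeneous regions -> high score
```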
four quantities: color error, squared color error, texture, and entropy.

The current public version of the Berkeley Segmentation Database is composed of 300 color images. The image size is 481 x 321 pixels, and the database is divided into two sets: a training set containing 200 images that can be used to tune the parameters of a segmentation algorithm, and a testing set containing the remaining 100 images, on which the final performance evaluations should be carried out. We perform the evaluations on the ground truth images from the Berkeley segmentation dataset and on segmented images produced by the EDISON segmentation system [8].

Under-segmentation and over-segmentation are two major problems for segmentation algorithms; both are shown in Figure 1. If we presume that totally accurate segmentation of natural images is hard to achieve in practice, we need to minimize under- or over-segmentation as much as possible.

Figure 1: (a) Under-segmented image, (b) ground truth image and (c) over-segmented image

In the case of under-segmentation, full segmentation has not been achieved, i.e. there are two or more regions that appear as one. In the case of over-segmentation, a region that would ideally be present as one part is split into two or more parts. These problems are important and are not easy to resolve. Evaluation methods that are sensitive to both over-segmentation and under-segmentation, and can penalize both of them, are the efficient ones.

In this section we analyze our proposed supervised evaluation measure and compare it with the evaluation measure based on entropy (E_Entropy). We discuss the advantages and shortcomings of each type of method. We use two groups of images (ground truth and machine segmentations) to perform the experiments. We use the manually segmented images of the Berkeley dataset as ground truth images. The second group consists of the machine segmentation results that make up our test dataset.

We generate machine segmentation images with varying numbers of regions using the Edge Detection and Image Segmentation System (EDISON) [8]. Our dataset includes 8 x 300 = 2400 images. We produce images with varying numbers of regions in the segmentation (using EDISON to generate the segmentations) to study the sensitivity of these objective evaluation methods to the number of regions in the segmentation. However, producing more regions does not necessarily make a better segmentation, since over-segmentation may occur, and the trade-off between the number of regions and the amount of needed detail can be heavily influenced.

We analyze the experimental results and compare the proposed method (E_IntraInter) with the evaluation measure based on entropy [3]. These measures calculate the amount of segmentation accuracy. To compare the proposed evaluation method with the entropy-based measure, 11 images of the dataset are selected randomly. For each image, three types of segmentation are selected: segmentations with more regions than, approximately as many regions as, and fewer regions than the ground truth image.

Figures 2 and 3 show the evaluation of the two measures for the 11 selected images. E_Entropy cannot penalize under-segmentation and over-segmentation problems, and for some images its precision values are above the normal value.

Figure 3 shows the results of the proposed evaluation method. The diagrams of under- and over-segmentation lie under the normal diagram, so our proposed method is sensitive to under-segmentation and over-segmentation. This method is able to obtain a suitable degree of similarity for different images and penalizes under- and over-segmentation well. The experimental results demonstrate that the proposed method is appropriate for the evaluation of image segmentation.

The EDISON segmentation system has several input parameters (spatial, minimum region, ...) that need to be adjusted. In this paper, we propose a new method to adjust the spatial parameter of the EDISON system. First, we select 60 images of the Berkeley dataset and segment each image with the spatial parameter varying over {2, 5, 10, 50, 100, 200}, together with the ground truth image. Then each of the segmented images is evaluated with the proposed method, and we obtain the average segmentation accuracy of each spatial parameter over the 60 images. The experimental results in Figure 4 show that spatial = 200 is the most suitable input value for the EDISON segmentation system.
Conclusion
References
[1] N. Senthilkumaran and R. Rajesh, Image Segmentation - A Survey of Soft Computing Approaches, International Conference on Advances in Recent Technologies in Communication and Computing (2009), 844-846.
[2] W. Tao, H. Jin, and Y. Zhang, Colour Image Segmentation Based on Mean Shift and Normalized Cuts, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 37 (2007), no. 5, 1382-1389.
[3] H. Zhang, J. Fritts, and S. Goldman, An entropy-based objective evaluation method for image segmentation, Proceedings of SPIE - Storage and Retrieval Methods and Applications for Multimedia (2004).
[4] N. M. Nasab, M. Analoui, and E. J. Delp, Robust and Efficient Image Segmentation Approaches Using Markov Random Field Models, Journal of Electronic Imaging 12 (2003), no. 1, 50-56.
[5] H. Zhang, J. E. Fritts, and S. A. Goldman, Image Segmentation Evaluation: a Survey of Unsupervised Methods, Computer Vision and Image Understanding 110 (2008), no. 2, 260-280.
[6] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, Unsupervised performance evaluation of image segmentation, EURASIP Journal on Applied Signal Processing (2006).
[7] D. Martin, C. Fowlkes, D. Tal, and J. Malik, A Database of Human Segmented Natural Images and its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics, Proceedings of the 8th International Conference on Computer Vision 2 (2001), 416-423.
[8] Edge Detection and Image Segmentation System: http://www.caip.rutgers.edu/riul/research/code/EDISON/.
[9] R. Unnikrishnan, C. Pantofaru, and M. Hebert, Toward Objective Evaluation of Image Segmentation Algorithms, IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (2007), no. 6.
IAU, Tehran, Iran / Tehran, Iran
pezhman.gholamnezhad@gmail.com, ebadzadeh@aut.ac.ir
Abstract: The implementations of most current MOEAs do not use a new method to produce new solutions; new solutions are obtained from traditional genetic recombination operators such as crossover and mutation. The Strength Pareto Evolutionary Algorithm introduces elitism by explicitly maintaining an external population that stores a fixed number of the non-dominated solutions. The balance between the current population and the external population is an important issue: because of their unbalanced nature, the current population quickly converges toward the external population, which decreases the possibility of exploring the Pareto-optimal set. We propose a method based on fuzzy c-means, without specifying the size of the external population, that keeps diversity and sufficiency and overcomes the deficiencies of SPEA. The results of this method have been compared with NSGA-II and SPEA, and systematic experiments have shown that, overall, this method is faster than the previous algorithms, and better results are obtained with fewer iterations and evaluations.
Keywords: Strength Pareto Evolutionary Algorithm; Non-dominated Sorting GA; Fuzzy c-means clustering; Multiobjective optimization.
Introduction
Multiobjective optimization is the process of simultaneously optimizing two or more conflicting objectives subject to certain constraints. Maximizing profit and minimizing the cost of a product, maximizing performance and minimizing fuel consumption of a vehicle, and minimizing weight while maximizing the strength of a particular component are examples of multi-objective optimization problems. For nontrivial multiobjective problems, one cannot identify a single solution that simultaneously optimizes each objective. While searching for solutions, one reaches points such that, when attempting to improve an objective further, other objectives suffer as a result. The Pareto-optimal set is the set of all optimal points in the decision space, and the Pareto-optimal front is the set of the corresponding points in the objective space.
Problem Definition
among a very great range of issues in computer science. It concerns positioning a number of facilities on a problem plane to serve some determined demands, in order to optimize one or several objectives, generally known as demand satisfaction. In this paper, we aim to introduce a brand new class of facility location problems. This hybrid class of facility location problems concerns locating a set of facilities on a two-dimensional (2D) continuous problem plane, in order to provide service to a collection of demands, the agent study. We will give a brief introduction to this issue; for now, it is only needed to know that these two properties belong to a very fundamental class of intelligent agents, i.e. the reactive agents.

Facility location problems are also called the location analysis problem among computer scientists. As stated before, the purpose of facility location problems is to assign a set of facilities, in an assigning process, to a collection of demands, in order to satisfy them completely or, when that is not feasible, optimally.

In order to become more familiar with the issues and tools we use in this article, we give some descriptions here; in this order, we can build our structure more scientifically. So we would like to give introductions to the facility location problem, to Voronoi diagrams, their classes and generalizations, and also to agent study issues, especially the reactive agents.
Zitzler and Thiele (1998) proposed an elitist evolutionary algorithm which they called the Strength Pareto Evolutionary Algorithm (SPEA) [3]. This algorithm introduces elitism by explicitly maintaining an external population. This population stores a fixed number of the non-dominated solutions that have been found since the beginning of the simulation. At every generation, newly found non-dominated solutions are compared with the existing external population, and the resulting non-dominated solutions are preserved. The first step is to assign a fitness to each individual in the population; genetic operators are then used to find a new population. In addition, a fitness, called the strength, is also assigned to the external population members, and it is less than the fitness of the current population. In this method a solution with a smaller fitness is better. EA population members dominated by many external members get large fitness values. With these fitness values, a binary tournament selection procedure is applied to the combined population to choose solutions with smaller fitness values. Thus, it is likely that external elites will be emphasized during this tournament procedure. As usual, crossover and mutation operators are applied to the mating pool and a new population is created. In this method a clustering algorithm is applied to reduce the size of the external population. Clustering ensures that the non-dominated solutions lead to a better spread.
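A simplified sketch of the SPEA elitism step just described: maintain an external archive of non-dominated solutions and give each archive member a strength proportional to how many current-population members it dominates. Minimization of all objectives is assumed; this is illustrative, not the authors' exact implementation:

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, population):
    """Merge newly found non-dominated solutions into the archive."""
    merged = archive + population
    return [p for p in merged
            if not any(dominates(q, p) for q in merged if q is not p)]

def strengths(archive, population):
    """SPEA strength: fraction of the population an archive member dominates."""
    n = len(population)
    return [sum(dominates(a, p) for p in population) / (n + 1) for a in archive]

pop = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5)]
arc = update_archive([], pop)      # (2.5, 2.5) is dominated and dropped
print(arc, strengths(arc, pop))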
3 Algorithm

3.1 Basic Idea

3.4 Stopping condition
Experimental Results

The test problems are FON (variables x1, x2), POL, ZDT1 and ZDT2 (n real-valued variables in [0, 1]^n), plus an unconstrained problem F, with random uniform initialization, the SBX crossover method and polynomial mutation; the fuzzy c-means clustering, the Min-EX problem, the F-value and the number of iterations are the other listed settings. The mean and standard deviation of the results over the test functions F1-F5 are:

Function   Std      NEW II Mean   NEW II Std   SPEA Mean   SPEA Std
F1         0.0257   0.1854        0.21         0.1768      0.1657
F2         0.0196   0.1625        0.158        0.151       0.124
F3         0.0566   0.1154        0.098        0.2356      0.212
F4         0.0874   0.2651        0.217        0.284       0.243
F5         0.0581   0.1156        0.095        0.1961      0.174
Conclusion

References
[1] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, Baffins Lane, Chichester: Wiley, 2001.
[2] K. Deb, R. Zoref, and S. Ur, Multi-objective evolutionary algorithms: Introducing bias among Pareto-optimal solutions, in A. Ghosh and S. Tsutsui (Eds.), Theory and Applications of Evolutionary Computation: Recent Trends, London: Springer-Verlag.
[3] E. Zitzler and L. Thiele, An evolutionary algorithm for multiobjective optimization: The strength Pareto approach, Technical Report 43, Zurich, Switzerland (1998).
[4] R. Yager, D. Filev, K. Sen, and M. Naik, Generation of Fuzzy Rules by Mountain Clustering, Journal of Intelligent & Fuzzy Systems 2 (1994), no. 3, 209-219.
Figure 1: The evaluation of the average IGD of the non-dominated solutions in the current populations over 20 independent runs with 5000 function evaluations, for the three algorithms on F1.
Hypercube Data Grid: a new method for data replication and replica consistency in data grid
Tayebeh Khalvandi
Abstract: Nowadays scientific applications generate huge amounts of data, and the Grid is an efficient solution to manage and store such huge amounts of data. A Data Grid provides sharing and management services for very large data around the world. Data replication is a practical and effective method to improve data access time and fault tolerance by replicating data. Modification of data in the grid can cause the problem of maintaining consistency among the replicas. In this paper we propose a new method for embedding the grid in a hypercube, called Hypercube Data Grid (HDG). Master data is distributed in HDG by dividing it into several parts and placing them on the grid sites. Update propagation is done with a broadcast algorithm in the hypercube. Simulation results with OptorSim show that the proposed approach improves mean job time, effective network usage, total number of replications and percentage of storage filled, compared with other approaches.
Introduction
In [12] the Least Recently Used (LRU) algorithm is introduced, which deletes the files that have been used least recently.

In [13] the Bandwidth Hierarchy Replication (BHR) algorithm is introduced. The BHR strategy extends site-level replica optimization by considering network locality. Network locality means that sites which are

Hypercube

A hypercube of degree k has 2^k nodes and each node has exactly k neighbors. The distance between any two nodes is less than or equal to k. The nodes in a hypercube may be labeled with binary numbers of length k. Two nodes are adjacent if their labels differ in exactly one bit position [18]. Some hypercubes are shown in Figure 1.
Figure 1: Some hypercubes

The diameter of a hypercube with 2^k nodes is k, and the bisection width of a network of that size is 2^(k-1); thus the hypercube has a low diameter and a high bisection width [18]. Embedding networks of processors into the hypercube is attractive because of the hypercube's low degree and low diameter [19]. The ability to transmit a large amount of data quickly makes the hypercube a more useful interconnection network than networks such as trees and rectangular grids [20].

A graph G is cubical if there is an embedding of G into a hypercube of degree k for some k [21]. Havel and Liebl [22, 23] deduced that all trees, rectangular meshes, and hexagonal meshes are cubical. They also proved that a cycle is cubical if and only if it is even. In [21] it is shown that a star graph with m + 1 nodes and a simple path with m nodes are cubical. There are classes of graphs that cannot be embedded into a hypercube while preserving adjacency, such as complete graphs and graphs with odd cycles [19].

A restriction of the hypercube topology is that the number of nodes in the system must be a power of two. This restriction can be overcome by using an incomplete hypercube, a hypercube missing certain of its nodes. An example of an incomplete hypercube is shown in Figure 2. Unlike the hypercube, an incomplete hypercube can be constructed with any number of nodes. The routing and broadcast algorithms for the incomplete hypercube are nearly as simple as those for the hypercube [24].

3.1 Broadcast in hypercube

One of the fundamental hypercube communication patterns is broadcasting, in which one node has to send the same message to all the other nodes in the hypercube. This problem has been examined previously in [25, 26]; in this study, the algorithm proposed in [27] for broadcast in the hypercube is used. There are d stages, numbered 0, 1, ..., d-1. During stage k, nodes 0, 1, ..., 2^k - 1 send the message to nodes σ_k(0), σ_k(1), ..., σ_k(2^k - 1), respectively (and concurrently), where σ_k(n) denotes the node ID formed by taking the bit-wise exclusive-OR of n and 2^k, i.e., n with the kth bit flipped.
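A minimal sketch of the staged broadcast just described, assuming node 0 is the source; broadcast_order is an illustrative name, not from the paper:

def broadcast_order(d):
    """(sender, receiver) pairs of each stage for a degree-d hypercube."""
    stages = []
    for k in range(d):
        # nodes 0 .. 2^k - 1 already hold the message and send concurrently
        stages.append([(n, n ^ (1 << k)) for n in range(1 << k)])
    return stages

for k, stage in enumerate(broadcast_order(3)):
    print(f"stage {k}: {stage}")
# stage 0: [(0, 1)]
# stage 1: [(0, 2), (1, 3)]
# stage 2: [(0, 4), (1, 5), (2, 6), (3, 7)]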
4 Hypercube Data Grid

In this section the proposed approach is presented.

4.1

A method for embedding an arbitrary network structure into the hypercube is presented. For the embedding, knowing the number of nodes in the network and their neighbors is sufficient; no other information about the network infrastructure is required. The steps for mapping the network into the hypercube are as follows.

The first step: for a network with n nodes, a hypercube of degree ceil(log2 n) is made.

The second step: the network nodes are mapped to hypercube nodes. Initially a node of the network is selected randomly and mapped to the first node of the hypercube. After mapping a node of the network to a node of the hypercube, the neighbors of this node in the network are mapped to the neighbors of the hypercube node. Based on the numbers of neighbors of the network and hypercube nodes, one of the following three cases may occur:

If both numbers of neighbors are equal, all neighbor nodes in the network are mapped to neighbor nodes in the hypercube.

If the number of neighbor nodes in the network is greater, a neighbor node of the network is assigned to every neighbor node in the hypercube.
After mapping each node of the network to a hypercube node, the steps are repeated for the neighbors of that
node. An unvisited node of the network is mapped to a hypercube node that has not been initialized in the previous steps.

The algorithm ends when all nodes of the network have been mapped to hypercube nodes. If mapping all nodes of the network to hypercube nodes is not possible, a hypercube of higher order is created, and the network nodes are then mapped to the new hypercube nodes in the same way.

It should be mentioned that the above algorithm can be applied to a connected network. If the network is not connected, the algorithm is run separately for the different parts of the network. This approach creates the hypercube virtually and does not change the main structure of the network.
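An illustrative sketch of the embedding procedure just described, mapping the nodes of a connected network onto a hypercube of degree ceil(log2 n) and spreading each node's neighbors over free hypercube neighbors. This is a best-effort reading of the steps, not the authors' code; the function name embed is an assumption:

import math
from collections import deque

def embed(network):                      # network: {node: [neighbors]}
    d = math.ceil(math.log2(len(network)))
    free = set(range(1 << d))            # unassigned hypercube labels
    start = next(iter(network))          # a randomly chosen start node
    mapping = {start: 0}
    free.discard(0)
    queue = deque([start])
    while queue:
        u = queue.popleft()
        cube_neighbors = [mapping[u] ^ (1 << k) for k in range(d)]
        for v in network[u]:
            if v in mapping:
                continue
            # prefer a free hypercube neighbor; otherwise any free label
            slot = next((c for c in cube_neighbors if c in free), None)
            if slot is None:
                slot = free.pop()
            else:
                free.discard(slot)
            mapping[v] = slot
            queue.append(v)
    return mapping

print(embed({"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}))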
Secondary data is created by replication and can be deleted. Master data stored at a single site causes a single point of failure and a bottleneck. To solve this problem, solutions have been proposed in [15, 16] in which the master data is stored completely at multiple sites. In this study, a method for distributing the master data in the grid is introduced. As mentioned in the previous section, in a hypercube of degree k each node has exactly k neighbors. The master data is partitioned into k + 1 parts, which are then distributed among the hypercube nodes so that the data required by each site resides at that site or at one of its neighbors. Thus data access is faster, and the master data can be stored at sites with less storage space. The distribution of master data in a hypercube of degree three, with the master data partitioned into four parts, is shown in Figure 4.

Figure 4: Distribution of master data in a hypercube of degree three

4.2

Grid projects cover a wide range, from large-scale projects spanning several countries to smaller-scale projects within several levels of an organization. A data grid includes sites, with computing and storage components to run jobs, and routers, without computing and storage components, for routing. A set of sites linked to a router is considered a virtual organization (VO). The routers of different VOs are connected to each other, and the sites communicate via the routers. In the proposed approach the routers are organized in a hypercube. An example of this architecture is shown in Figure 3.

4.3 Embedding grid in hypercube

The grid includes sites and routers, but only the routers are embedded in the hypercube; thus the connections between routers and sites are removed. The routers are classified into groups of at most eight members, and each group is embedded in a hypercube with the algorithm described above; the resulting hypercube may be incomplete. Communication between the routers does not change and the hypercube is virtual, so each router knows its location and its neighbors in the hypercube. After creating the hypercube, each router is re-connected to its neighboring sites.
4.4
There are two types of data in the proposed method: master data and secondary data. Master data is divided into smaller parts and distributed over the grid structure. Secondary data is created by data replication. Master data can be changed by the grid's users, but secondary data is read-only. Changes to the master data must be propagated to the other data.
The process of consistency management starts by selecting the master data manager from among all the master data stored in the sites of the VOs. The master data saved in the VO with the smallest ID is called the manager of the master data.

Propagation of an update starts with broadcasting the update operation towards every VO in the hypercube. Propagation is started periodically, based on application requirements. In a hypercube of degree k the broadcast has time complexity O(k), so this stage is O(k). The sites receive the message, and a site whose master data has been modified broadcasts the changes to the manager of the master data. The time complexity of this stage is O(k) too. The manager of the master data receives the changes.

4.5 Results

There are twenty sites and eight routers. The simulation parameters are shown in Table 1.

Parameter                  Value
Number of sites            20
Number of routers          8
Number of jobs             100
Number of job types        6
Each file size (GByte)     1
Total file size (GByte)    97
Number of experiments      10

Table 1: Simulation parameters

For simulation, the topology is embedded in the hypercube.
Then the master data is distributed in it. The embedded topology in the hypercube and the position of the master data in it are shown in Figure 7.

in comparison with other algorithms. The reason is that the data are distributed, so jobs have parts of the data locally; the increased number of local accesses and the decreased number of replications reduce the ENU.
Conclusions

In this paper, a method for embedding a network structure in a hypercube is introduced. The grid structure is embedded in the hypercube, which is called the Hypercube Data Grid (HDG). The master data is distributed in HDG: it is divided into smaller parts, which are distributed over HDG. The distribution of master data reduces mean job time, total number of replications and percentage of storage filled, and improves effective network usage. Update propagation in HDG is done by the broadcast algorithm in the hypercube, which makes maintaining consistency in HDG faster.
References

[14] A. Horri, R. Sepahvand, and Gh. Dastghaibyfard, A hierarchical scheduling and replication strategy, International Journal of Computer Science and Network Security 8 (2008).
Abstract: There are different workload types, with different characteristics, that should be supported by cloud computing, whereas there is no single solution that can allocate resources to all imaginable demands optimally. It is necessary to design specific solutions to allocate resources for each workload type. Based on that, this paper proposes an idea to facilitate dynamic resource allocation for bag-of-tasks applications. The proposed approach exploits users' service-level-agreement parameters and a classification technique. Specifically, our approach manages resources and increases their utilization in order to respond to users in a reasonable time. We evaluate the proposed approach using Monte Carlo simulation. The simulation results are compared with two reference models, First Fit and Proportional Share. The proposed approach outperforms the reference models in terms of the total cost of resource allocation and the total waiting time of clients.
Introduction
Cloud computing presents services by providing infrastructure via the network to facilitate management of
both hardware and software resources [2, 4]. The services are provided in three models, viz, Software as
a Service (SaaS), Platform as a Service (PaaS), and
Infrastructure as a Service (IaaS). The SaaS provides
most of users applications. The PaaS is concerned in
applications environments, and the IaaS is involved
in hardware level management. A contracted Service
Level Agreement (SLA), including parameters such as
cost of operation and response time, defines characteristics of these models. Service providers provisions
Figure 1: Tasks of a BoT gravitates to be completed
resources for customers in regard to the SLAs [5].
at different times.
However, resource allocation to different workload
types and applications with different characteristics in
cloud computing is a challenging problem [13]. TechniThis paper focuses on allocating resources to Bag
cally, there is no any single hardware or software that of Tasks (BoT) applications. Precisely, a BoT includes
can allocate resources to all imaginable workload types loosely coupled and compute intensive tasks demandefficiently [6]. Besides, sine each type has its specific ing minimal intertask [12]. Final results of all tasks in
properties, a single solution cannot deal with in that a BoT represent the answer of a single problem, and
Corresponding
Related Work
Id    Bags   St. Dev.
WL1   500    29.295
WL2   1000   29.132
WL3   1500   29.087
WL4   2000   29.125
WL5   2500   29.198
WL6   3000   29.174
Metrics

The main metrics are the total response time, the total service time, and the total waiting time for users. Besides, based on these metrics and the total idle time of the servers, it is possible to calculate the total cost of resource allocation for our datacenter. Moreover, to calculate the total response time, the proposed approach sums the service and waiting times of the servers.
4.2 Evaluation
In addition, the idle time of servers under the different workloads (WL1 to WL6) ranges from 0.049 to 0.858 for the FF and from 0.957 to 9.330 for the modified PS. However, the proposed approach could eliminate the idle time of servers, because the RM continuously checks the servers and exploits a deallocation process to switch unnecessary servers off. Hence, the two reference models are not comparable to our approach in terms of the total idle time of servers. Besides, since the total cost of resource allocation must be calculated based on both a solution's service time and its idle time, our approach can provide cheaper resource allocation than the reference models.

Conclusion

References
[1] J. Ekanayake et al., Cloud Technologies for Bioinformatics Applications, IEEE Trans. on Parallel and Distributed
Systems 22 (2011), no. 6, 998-1011.
[2] M. Armbrust et al., A View of Cloud Computing, Communications of the ACM 53 (2010), no. 4, 50-58.
[3] Z. Liu et al., On Maximizing Service-Level-Agreement Profits, Proc. of the ACM Int. Conf. on Electronic Commerce,
2006.
[5] H. Khazaei et al., Performance Analysis of Cloud Computing Centers Using M/G/m/m+r Queuing Systems, IEEE
Trans. on Parallel and Distributed Systems 23 (2012).
[12] F. da-Silva and H. Senger, Scalability Limits of Bag-ofTasks Applications Running on Hierarchical Platforms,
Journal of Parallel and Distributed Computing 71 (2012),
no. 6, 788-801.
Alireza Khanteimoory
fmoghaddam@iasbs.ac.ir
khanteymoori@iasbs.ac.ir
Abstract: Nowadays, the use of Intelligent Transportation Systems (ITS) is very common in many countries. One important component of ITS is the Advanced Traveler Information System (ATIS), and one of the main roles of ATIS is providing travel time information to travelers. Providing accurate transit arrival time information is important because it attracts additional passengers and increases the satisfaction of users. In this paper we use a Bayesian learning approach for neural networks to predict bus arrival time. We compare our proposed model to FeedForward, Backpropagation and Cascade-Forward Backpropagation Neural Networks. The result of Bayesian learning is a posterior distribution over the weights of the network. We use the Markov Chain Monte Carlo (MCMC) method to sample N values from the posterior weight distribution. These N samples help us choose the best prediction by voting for the best solution. Our results show that the Bayesian Neural Network works better than the standard Neural Network and that the accuracy of prediction is increased.
Keywords: Neural Network; Bayesian Learning; Markov Chain and Monte Carlo.
Introduction
Neural Network. In Section 3 we present the experimental results and compare our proposed model with the FeedForward, Backpropagation and Cascade-Forward Backpropagation Neural Networks. At the end we present the conclusion and the future work that can be done for creating more precise and realistic models.

2 Bayesian Learning approach for Neural Network

In Bayesian analysis all unknown and uncertain parameters are modelled as probability distributions, and inferences are performed by constructing posterior conditional probabilities for the unobserved variables, given the observed variables and prior assumptions [4]. The Bayesian approach was used for the first time by Buntine and Weigend in 1991 and reviewed by MacKay and Neal in 1996. The main difficulty of model building with standard Neural Networks is controlling the complexity, because the optimal number of degrees of freedom in the model strictly depends on the number of training samples, the amount of noise in the samples and the complexity of the function being estimated. Another problem of standard Neural Networks is the lack of tools for analysing the results. These issues can be handled in a very natural and consistent way by using the Bayesian approach. The unknown degree of complexity is handled by defining vague (non-informative) priors for the hyperparameters that determine the model complexity, and the resulting model is averaged over all model complexities weighted by their posterior probability given the data sample. Bayesian analysis also yields posterior predictive distributions for any variables of interest, making the computation of confidence intervals possible.

The result of the Bayesian approach is a posterior distribution, and predictions are made by integrating all models over this posterior distribution. Use of the posterior probabilities requires an explicit definition of the prior probabilities for the parameters. The posterior probability for the parameters θ in a model M given data D is, according to Bayes' rule,

p(θ|D, M) = p(D|θ, M) p(θ|M) / p(D|M)    (1)

where p(D|θ, M) is the likelihood of the parameters θ, p(θ|M) is the prior probability of θ, and p(D|M) is a normalizing constant, called the evidence of the model M. The term M denotes all the assumptions that are made in defining the model, like the choice of MLP network, the specific residual model, etc. The normalization term p(D|M) is the marginal probability of the data conditioned on M. Integrating over θ, the chosen assumptions M and the prior p(θ|M) give

p(D|M) = ∫ p(D|θ, M) p(θ|M) dθ    (2)

For an MLP network, we have some training data D = {(x1, y1), ..., (xn, yn)} and want to know y_new given x_new. This is done by integrating the predictions of the model with respect to the posterior distribution of the model,

p(y_new | x_new, D, M) = ∫ p(y_new | x_new, θ) p(θ|D, M) dθ    (3)

where θ denotes all the model parameters of the prior structures.

In the Bayesian approach we have to define a probability distribution for the network parameters. A commonly used prior distribution for the network parameters is the Gaussian

w_k ~ N(0, a_k^2)    (4)

where w_k represents the weights and biases of the network and a_k^2 is the variance hyperparameter for the given weight (or bias). The hyperparameter a_k^2 is given, for example, a conjugate inverse-gamma hyperprior

a_k^2 ~ Inv-gamma(a_ave^2, v_a)    (5)

We also have to define a probability distribution for the residual. A commonly used Gaussian noise model is

e ~ N(0, σ^2)    (6)

The conjugate distribution for the noise model is the inverse gamma, producing the prior

σ^2 ~ Inv-gamma(σ_0^2, v_σ)    (7)

One of the main advantages of the Bayesian approach is that, because we integrate over all possible solutions, it can avoid overfitting. In other words, the Bayesian MLP theoretically returns all possible solutions and integrates them out. In the case of an MLP the posterior distribution is typically very complex, and the integrations required by the Bayesian approach can be approximated using Markov Chain Monte Carlo (MCMC) methods [5]. An integral ∫ g(x) p(x) dx can be approximated using a sample of values x^(t) drawn from the distribution p(x):

(1/n) Σ_{t=1}^{n} g(x^(t))    (8)

MCMC for Bayesian Neural Networks has been proposed by Neal [6]. The posterior distribution is
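A minimal sketch of the Monte Carlo approximation of Eq. (8): an integral ∫ g(x) p(x) dx is estimated by averaging g over samples drawn from p(x). Here p is a stand-in Gaussian, not the actual posterior over MLP weights, which MCMC would sample in the same way:

import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=1.0, scale=0.5, size=5000)   # x^(t) ~ p(x)

g = lambda x: x ** 2                                   # any quantity of interest
estimate = np.mean(g(samples))                         # (1/n) sum_t g(x^(t))

print(estimate)   # close to E[x^2] = mu^2 + sigma^2 = 1.25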
The dots show the data points. The thin grey lines are the N different solutions and the dark solid line is the average solution. This figure shows that the average solution is smoother than the individual solutions.

The MCMC algorithm is exact in the limit as the size of the sample and the length of time for which the Markov chain is run increase, but convergence can sometimes be slow in practice [5]. Note that samples from the posterior distribution are drawn during the learning phase, which may be computationally very expensive, but predictions for new data can be calculated quickly using the same stored samples.

Figure 2: The route of the bus in Zanjan. Blue circles show the stations and the red line shows the route path.
Experimental Results
Because there is no test bed for this route, we collected the data ourselves. This data set consists of the arrival times of the bus at the bus stations, the dwell time of the bus, the schedule adherence of the bus, and the amount of time the bus takes to reach the bus station. We use these parameters to train our Neural Network. In this paper, a fully connected multilayer Neural Network model was chosen. The Neural Network architecture used in this research has three layers: an input layer, a hidden layer and an output layer. Because we have three inputs, the number of neurons in the input layer is 3. It was found that the number of neurons in the hidden layer did not substantially impact the results of the Neural Network models; therefore, in this paper, we use 15 neurons in the hidden layer. The structure of this network is shown in Fig. 3.
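A hedged sketch of a 3-15-1 network like the one described above, using scikit-learn's MLPRegressor as a stand-in trainer (the paper trains its networks with Bayesian/MCMC learning, not this optimizer); the feature values below are invented for illustration:

import numpy as np
from sklearn.neural_network import MLPRegressor

# columns: dwell time, schedule adherence, travel time to the station
X = np.array([[30, 5, 120], [45, -10, 150], [20, 0, 110], [60, 15, 180]])
y = np.array([125, 160, 112, 190])       # observed arrival times (illustrative)

model = MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[35, 2, 130]]))     # predicted arrival time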
Conclusion
[7] Jarno Vanhatalo and Aki Vehtari, MCMC Methods for MLP-network and Gaussian Process and Stuff: documentation for Matlab Toolbox MCMCstuff, Laboratory of Computational Engineering, Helsinki University of Technology.
Mohammadali Nematbakhsh
khosravi@eng.ui.ac.ir
nematbakhsh@eng.ui.ac.ir
George Lausen
Informatik Department
Albert-Ludwigs University, Freiburg, Germany
lausen@informatik.uni-freiburg.de
Abstract: Similarity estimation between interconnected objects appears in many real-world applications, and many domain-related measures have been proposed. This work proposes a new perspective on specifying the similarity between resources in linked data and, in general, between the vertices of a directed graph. More specifically, we compute a measure that says two objects are similar if they are connected by multiple small-length shortest paths. This general similarity measure, called SRank, is based on simple and intuitive shortest paths. For a given domain, SRank can be combined with other domain-specific similarity measures. The suggested model is implemented in order to cluster resources extracted from the DBPedia knowledge base.
Introduction
Extracting the similarity score between items is relevant to many areas of computer science: for instance, social networks, targeted advertisements, clustering, web mining, data mining, ontology mapping and, in general, information networks require a model to specify the notion of similarity between items. Clearly, a similarity metric should be developed based on the definition of similarity and the context in which the items are found.

Various aspects of resources can be used to determine similarity, which usually depend on connectivity (e.g. the number of possible paths between two vertices) and structural similarity (e.g. the number of common neighbors of two vertices). In this paper, we propose SRank (Short-Rank), which exploits the resource-to-resource relationships found in information networks. Our study is motivated by recent research and applications on RDF resource clustering and link discovery.
#    Nodes   Sim    BRank   PRank   SRank3   SRank4
1    K, C    .29    N/A     .19     .5       .5
2    J, K    .58    .11     .36     .25      .25
3    J, C    .4     N/A     .28     .25      .33
4    M, C    .23    N/A     .11     .5       .5
5    C, S    .21    N/A     .06     1.0      1
6    J, S    .18    N/A     .04     N/A      .08
7    S, M    .32    N/A     .11     N/A      N/A
8    M, S    N/A    N/A     N/A     .16      .41
9    N, S    .23    N/A     .07     N/A      .25
10   M, N    .47    .29     .35     .16      .16
11   K, S    .25    N/A     .07     .5       .5
12   N, C    .4     N/A     .26     .5       .5
13   K, M    N/A    .14     .17     N/A      N/A
14   M, K    N/A    .14     N/A     .33      .33
15   J, N    N/A    .11     .14     N/A      .25
16   J, M    N/A    .05     .08     1.0      1
17   M, J    N/A    .05     .08     .33      .33
18   K, N    N/A    N/A     .12     .5       .5
19   N, K    N/A    N/A     .12     1.0      1

Table 1: The corresponding similarity values for SimRank (C = 0.8), BipartiteRank (C = 0.8) and P-Rank (C = 0.8, λ = 0.5)
rithm based on the SRank similarity measure. Highly similar resources fall into one cluster, while less similar resources are distributed to different clusters.

there are multiple shortest paths between a and b, as well as between b and a.
2 SRank

2.1 Preliminaries

Definition 1 [Access Value]: Let P^p be the N x N transition probability matrix of length p of a graph G. The access value from a to b is defined as

H(a, b) = w_1 P^1_{a,b} + ... + w_p P^p_{a,b} + ... + w_{n-2} P^{n-2}_{a,b}    (1)

with 1 <= s <= n - 2,    (3)

and the normalized access value (H_s(a, b) - H_Min) / (H_Max - H_Min).    (5)

Theorem 1: The above equations (1) to (5) have the following properties:
2.2 SRank Formula
2.3

d(C_i, C_j) = ( Σ_{a ∈ C_i, b ∈ C_j} d(a, b) ) / (m_i m_j),   C_i ∈ G_i, C_j ∈ G_j    (7)

where m_i and m_j are the numbers of resources in clusters i and j, respectively.

Clusters are merged together when their distance scores satisfy the user-provided threshold value. Given a set of clusters, the threshold value strongly depends on the context in which the clusters are found: in a highly connected graph a high threshold should be assumed, while in a weakly connected graph the threshold should be chosen lower. In the implementation of all the clustering algorithms, we investigated the clustering qualities and report the best result for each algorithm.

Conclusion

We have proposed a general model for computing similarity scores between the resources of a directed graph. It is based on the number of shortest paths between resources. These similarity measures can be used to compare resources belonging to an RDF graph that are not necessarily connected; they rely on the number of paths between these elements. While the model has been developed for a directed graph extracted from the LOD cloud, the notion of SRank can also be applied to undirected graphs in other domains.

We are now working on other domains, such as social networks and the web graph, to assess the feasibility of SRank. We are also comparing the results in other data-oriented applications that need the similarity between resources. Moreover, different weight adjustment approaches may be deployed to improve the results. Finally, we are working on generalizations to directed weighted graphs, directed labeled edges, and heterogeneous graphs.
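A small sketch of the inter-cluster distance of Eq. (7): the average pairwise distance between the resources of two clusters, given any pairwise distance function d(a, b). The function name cluster_distance is illustrative:

def cluster_distance(ci, cj, d):
    """d(C_i, C_j) = sum over a in C_i, b in C_j of d(a, b), divided by m_i * m_j."""
    total = sum(d(a, b) for a in ci for b in cj)
    return total / (len(ci) * len(cj))

# Example with a toy 1-D distance:
d = lambda a, b: abs(a - b)
print(cluster_distance([1, 2], [10, 12], d))   # average of the 4 pairwise distances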
References
[1] G. Jeh and J. Widom, SimRank: A Measure of Structural-Context Similarity, Technical Report, Stanford InfoLab (2011).
[2] P. Zhao, J. Han, and Y. Sun, P-Rank: a comprehensive structural similarity measure over information networks, International Conference on Information and Knowledge Management (2009), 553-562.
Vali Derhami
Yazd University
Yazd University
raji.n@yazduni.ac.ir
vderhami@yazuni.ac.ir
Reza Azmi
Alzahra University
Computer Engineering Department
azmi@alzahra.ac.ir
Abstract: The analogy between immune systems and intrusion detection systems encourages the use of artificial immune systems for anomaly detection in computer networks, web servers and web-based applications, which are popular attack targets. This paper presents a web anomaly detection approach based on the immune system and on a web usage mining approach for clustering web sessions into normal and abnormal. In this paper the immune learning algorithm and the attack detection mechanism are described. Theoretical analysis and experimental evaluation demonstrate that the proposed approach is more suitable for detecting unknown attacks, and is able to provide a real-time defense mechanism for detecting web anomalies.
Keywords: Intrusion Detection Systems; Artificial Immune Systems; Anomaly; Normal behavior; Session.
Introduction
The remainder of this paper is organized as follows. In Section 2, a review of some available IDSs is presented. Section 3 discusses the goals of this study and introduces the algorithm and the data representation. In Section 4, the experimental evaluation of the proposed system is presented; moreover, the detection ability of the system is tested on a dataset from another area. Finally, Section 5 concludes our study.

2 Related Work

There are two possible approaches to intrusion detection. An intrusion detector can be provided with a set of rules or specifications of what is regarded as normal behavior, based on human expertise. This approach can be considered an extension of misuse detection systems. In the second approach, the anomaly detector automatically learns the behavior of the system under normal operation and then generates an alarm when a deviation from the normal model is detected [1].

Vigna et al. [5] proposed an IDS that operates on multiple event streams and uses features similar to ours. The system analyzes HTTP GET requests that use parameters to pass values to server-side programs. However, these systems are misuse-based and therefore unable to detect attacks that have not been previously modeled. Guangmin [6] presents an immune

3 Proposed Method

The proposed Web Host Immune Based Intrusion Detection System (WHIBIDS) introduces immune principles into IDSs to improve the capability of learning and recognizing web attacks, especially unknown web attacks. In the proposed algorithm, sessions and requests are constructed from web logs in which the clickstream data are stored. Clickstream data are generated as a result of user interaction with a website. Antigens and antibodies are represented in the same form and their lengths are equal.

Antigen Presenting: Define each user's request as a member of the antigen set Ag. Each request is represented by a vector of attributes extracted from the access log file. The form of the vectors of the antigen set Ag is as follows: Ag = { ag | ag = <SessionID, URL length, number of variables, distribution of characters, attribute length, depth of path> }

There are some shortcomings to common access log files generated by web servers such as Apache. One of these problems is defining the web sessions. Since the boundaries of sessions are not clearly defined, extraction of web sessions from these log files is not a
Finally, the vector corresponding to the request is normalized; the range of the output is between 0 and 1. The normalized value of each field in the vector of a request is calculated by dividing the value of that field by the sum of the values over all the fields in that vector.

Affinity function: The similarity measure between two antigens is the Euclidean distance, which determines the distance between two web application requests. Precisely, the similarity between two requests ag_i and ag_j is defined as

dis(ag_i, ag_j) = sqrt( Σ_{n=1}^{k} (ag_{i,n} - ag_{j,n})^2 )    (1)

where k is the number of features extracted for each request. The pseudo code of the proposed algorithm is presented as follows:

initialization;
Fix the maximal population size N_B;
Initialize the B-cell population and sigma_i^2 = sigma_init^2 using a number of random antigens;
while all antigens are presented do
    Present the antigen to each B-cell;
    if the B-cell is activated (w_ij > w_min) then
        Refresh its age (t = 0);
        Add the current B-cell and its KNN to the working sub-network;
    else
        Increment the age of the B-cell by one;
    end
    if w_ij < w_min for all B-cells then
        Create a new B-cell = antigen;
    else
        for each B-cell in the working sub-network do
            Compute the B-cell stimulation;
            Update the B-cell sigma_i^2;
        end
    end
    if all antigens of a session are presented then
        Clone B-cells based on their stimulation level;
        if population size > N_B then
            Remove the extra least stimulated B-cells;
        end
    end
end

Algorithm 1: The modified algorithm of [2]
As shown in the proposed algorithm, when an antigen is unable to activate any B-cell, the antigen may represent noise or a new emerging pattern. In this condition, a new B-cell is created which is a copy of the presented antigen. If this antigen is noisy data and does not represent a new emerging pattern, it does not get enough chance to be stimulated by incoming antigens and is probably eliminated. After all the antigens of a session have been presented to the network, the B-cells undergo a cloning operation based on their stimulation level. When the population of the network exceeds a defined threshold, the least stimulated B-cells are removed from the network. The distance measure presented in this study is used in all the steps for calculating the internal and external (B-cell-to-antigen) interactions of B-cells.
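A minimal sketch of the normalization and affinity function described above: each request vector is divided by the sum of its fields, and the affinity between two requests is their Euclidean distance (Eq. (1)). The feature values are invented for illustration:

import math

def normalize(vec):
    s = sum(vec)
    return [v / s for v in vec]        # each field in [0, 1], fields sum to 1

def affinity(ag_i, ag_j):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ag_i, ag_j)))

req1 = normalize([42, 3, 120, 2])      # e.g. URL length, #variables, ...
req2 = normalize([40, 3, 118, 2])
print(affinity(req1, req2))            # small distance means similar requests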
Experimental Evaluation
(threshold = 0.75)

                Accuracy   False alarm rate   Detection rate
Request based   80%        0.18               76%
We run the proposed algorithm 5 times with 5-fold cross-validation, and the final values of the evaluation measures are the averages of these 5 runs. Table 1 and Table 2 show the proposed system's high capabilities on both criteria and both datasets. The results show that the performance of the session-based approach is better than that of the request-based one, and we can claim that the proposed algorithm can detect malicious activities with high accuracy. Patterns may be repeated in multiple B-cells within the population. This is called a loss of diversity, or overfitting, which essentially leads to redundancy (e.g. multiple requests having the same signature). To show that there has been no overfitting on the training data, 20% noise is added to the test data. Table 3 shows that the noise has about a 15 percent impact on the results; if overfitting had occurred, it would have had a significant impact on the results. Table 4 shows the comparison of WHIBIDS with the IADMW IDS, which comes from [6]. The detection rate of WHIBIDS is 92%, while the detection rate of IADMW is 67%. Simultaneously, WHIBIDS is also capable of classifying web attacks and has a high accuracy rate of 97.3%. These results show that WHIBIDS is a competitive alternative for detecting web attacks.
Acknowledgment
Conclusions

In this paper we proposed an intrusion detection system based on the principles of the immune system (WHIBIDS) that can detect known and unknown attacks. Here an attack is considered as a series of actions. The requests obtained from the preprocessed log files of the web server are presented to the system as antigens. The network of B-cells represents a summarized version of the antigens encountered by the network; the B-cells are also able to adapt to emerging usage patterns presented by new antigens at any time. The results show the ability of the proposed AIS to cluster web sessions into normal and abnormal, and indicate that the designed immune-based IDS has several advantages: (1) self learning and immune learning enable the model to detect both known and unknown web attacks; (2) the ability to detect anomalies in real time; (3) the capability to recognize abnormal behavior with regard to the actual sessions; (4) the immune network algorithm achieves high detection rates; (5) it can be used as a general classifier. One limitation was the determination of the similarity threshold by testing; future work will determine this threshold by reinforcement learning.
H. Sajedi
Tehran University
Qazvin, Iran
Tehran, Iran
bhr.shabani@gmail.com
hhsajedi@aut.ac.ir
Abstract: A large number of the machine learning and data mining algorithms that are used for classification, prediction and uncertain reasoning cannot handle continuous attributes, and some of the other algorithms require a considerably long execution time when the input data contain continuous attributes. Discretization is very important in developing practical methods in data mining. It is the process of converting the continuous attributes of a database into discrete attributes so that they can be used by classification algorithms. The approach in this study is based on successive pseudo deletions. Our empirical experiments show that C4.5 gives improved performance with the discretized output from YABAC4.5 compared to the SPID4.7 and MDLP discretization algorithms.
Introduction
The data in a database or a data warehouse are usually available in either discrete or continuous form. The number of possible values of continuous attributes can be very large, while discrete attributes have a limited number of possible values [1].
a reasonable search space. Furthermore, concise summarization of continuous attributes not only helps experts and users understand the data more easily, but also makes learning more accurate and faster. There are five different axes along which the proposed discretization algorithms can be classified: supervised versus unsupervised, static versus dynamic, global versus local, top-down (splitting) versus bottom-up (merging), and direct versus incremental [2].

ChiMerge [6] is a supervised, local, merging discretization algorithm that works in a bottom-up way, merging the two adjacent intervals with the smallest chi-square value until the minimum chi-square value becomes greater than both the predefined significance-level value and the threshold value determined by the degrees of freedom. Khiops [7] is a supervised, merging, global and statistics-based discretization method that has recently been published. CACC [2] is a static, global, incremental and supervised discretization algorithm, proposed in order to raise the quality of the generated discretization scheme by extending the idea of the contingency coefficient and combining it with the greedy method. CACM and Efficient-CACM [3] are supervised, splitting discretization algorithms; for comparing them with other discretization algorithms, [3] used the C4.5 [8] and RBF-SVM classifiers. YABAC4.5, which is presented in this paper, is a supervised, splitting and global method of discretization.
YABAC4.5

The SPID4.7 algorithm handles missing values of an attribute Ai by putting the most frequent value among the existing values of Ai in place of all the missing values of Ai. We have changed this approach: we handle a missing value by putting in the most frequent attribute value within that class, and we deleted the second part of SPID4.7. Before that, we perform an attribute selection to reduce the instance dimension and obtain better results, and then run the algorithm.
(a) Data with missing values        (b) After replacing missing values

Att1   Att2   class                 Att1   Att2   class
1      f      no                    1      f      no
2      t      yes                   2      t      yes
?      t      yes                   2      t      yes
?      t      yes                   2      t      yes
2      t      yes                   2      t      yes
2      t      yes                   2      t      yes
1      f      no                    1      f      no
?      f      no                    1      f      no
2      t      yes                   2      t      yes

Table 1
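A sketch of the missing-value handling described above, reproducing Table 1 with pandas: each '?' in an attribute is replaced by the most frequent value of that attribute within the same class. The dataframe layout is an assumption for illustration:

import pandas as pd

df = pd.DataFrame({
    "Att1": ["1", "2", "?", "?", "2", "2", "1", "?", "2"],
    "Att2": ["f", "t", "t", "t", "t", "t", "f", "f", "t"],
    "class": ["no", "yes", "yes", "yes", "yes", "yes", "no", "no", "yes"],
})

for col in ["Att1", "Att2"]:
    known = df[df[col] != "?"]
    # most frequent value of this attribute within each class
    mode_per_class = known.groupby("class")[col].agg(lambda s: s.mode()[0])
    missing = df[col] == "?"
    df.loc[missing, col] = df.loc[missing, "class"].map(mode_per_class)

print(df)   # matches Table 1(b): '?' becomes 2 in class 'yes' and 1 in class 'no'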
For each continuous attribute i in S
    Find the cut points of attribute i
    Find the boundary points of attribute i
    Insert a threshold point at the boundary point having maximum information gain
Find the pseudo-deletion count (PDEL-count)
While (PDEL-count != 0) and (all boundary points have not been considered) do
    For each continuous attribute in S
        Find the boundary point with maximum information gain
    Calculate the min-PDEL-req among the above selected points
    If min-PDEL-req < PDEL-count
    Begin
        Accept the selected boundary point as a threshold point
        PDEL-count <- min-PDEL-req
        Remove the boundary point(s) violating m on both sides of the selected threshold point
    End
    Else
        Remove the point(s) with max-PDEL-req among the maximum-gain points of each continuous attribute
End  // end of while
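An illustrative sketch of the core step of the loop above: scoring candidate boundary points of a continuous attribute by the information gain of the induced binary split. This is a generic implementation of the criterion, not the authors' full YABAC4.5 code:

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(values, labels, cut):
    left = [l for v, l in zip(values, labels) if v <= cut]
    right = [l for v, l in zip(values, labels) if v > cut]
    if not left or not right:
        return 0.0
    n = len(labels)
    split = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - split

vals = [1.2, 1.5, 2.8, 3.1, 3.4]
labs = ["a", "a", "b", "b", "b"]
cuts = [(x + y) / 2 for x, y in zip(vals, vals[1:])]      # boundary candidates
print(max(cuts, key=lambda c: info_gain(vals, labs, c)))  # best cut: 2.15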
Data Set          #I    #A        #CL   M.V.
Anneal            798   6C, 14D   6     Yes
Australian        690   6C, 8D    2     No
Credit            690   6C, 9D    2     Yes
Dermatology       330   1C, 33D   6     Yes
Echocardiogram    132   8C, 3D    3     Yes
Ecoli             336   7C, 0D    8     No
Glass             214   9C, 0D    7     No
Heart-hungary     294   5C, 8D    5     Yes
Heart-statlog     270   5C, 8D    2     No
Horse-colic       300   7C, 15D   2     Yes
Iris              150   4C, 0D    3     No
Liver-disorder    345   6C, 0D    2     No
Newthyroid        214   5C, 0D    3     No
Pima              768   8C, 0D    2     No
Vehicle           94    18C, 0D   4     No
Wine              178   13C, 0D   3     No
Wisconsin         699   9C, 0D    2     No
Conclusion

References

[1] U. M. Fayyad and K. B. Irani, On the Handling of Continuous-Valued Attributes in Decision Tree Generation, Machine Learning 8 (1992), 87-106.
Data Set   In-built Discretizer   MDLP          SPID4.7       YABAC4.5
Ann        93.96±2.58             92.76±2.66    92.98±2.52    93.77±3.38
Austr      83.95±4.30             85.42±3.66    85.61±4.05    86.76±3.70
Cred       85.04±4.02             86.29±4.54    85.76±3.79    87.09±3.75
Der        92.11±5.69             91.99±5.68    92.11±5.69    95.35±2.47
Ech        64.28±13.35            65.12±6.11    68.62±11.64   70.17±13.80
Ecol       81.84±4.82             81.84±4.82    82.45±6.06    86.55±7.31
Gla        66.06±8.19             66.06±8.19    72.24±8.20    73.86±8.99
Hun        62.08±8.42             64.98±8.62    61.05±10.85   72.24±14.41
Stat       77.80±8.04             78.91±7.33    80.24±7.42    81.32±6.87
Hc         83.74±6.03             84.19±5.93    82.13±7.13    83.45±5.55
Iris       95.59±5.09             95.48±5.40    96.66±4.06    98.67±2.67
Liv        66.73±6.96             69.22±7.67    67.37±7.57    88.29±4.07
Thy        93.46±5.13             94.68±5.01    94.21±4.79    98.14±3.12
Pim        74.56±5.06             75.37±5.13    76.26±5.44    87.23±5.61
Veh        63.50±14.09            68.06±15.51   68.35±16.07   70.56±3.58
Win        70.60±9.38             96.38±4.47    96.30±4.47    96.43±4.42
Wis        95.41±2.26             96.07±2.29    95.39±2.71    96.20±2.26
mean       80.8                   81.95         82.22         98.55

Table 3: Accuracy comparisons using the C4.5 algorithm. Empirical results: acc. = average accuracy and s.d. = standard deviation
Figure 1: Comparison of the results of the different algorithms on the different data sets
Karim Faez
Qazvin , Iran
Tehran , Iran
Einolah.hatami@gmail.com
kfaez@aut.ac.ir
Abstract: Identification of the script of the text in multi-script documents is one of the important steps in the design of an OCR system for page analysis and recognition. In this paper, a new and effective method is proposed to identify the script type of a trilingual document printed in the Arabic, English and Chinese scripts. To identify these three languages and extract their features, two methods based on horizontal profiles are used. In the first method, we calculate the ratio of the number of black pixels in each text line to the enclosed area of the text line; in the second method, each text line is divided into 3 distinct zones, the upper, middle and lower zones, and we then obtain the absolute maximum and the next largest relative maximum of the profile of the middle zone. Text lines with different fonts and sizes have been used to test the proposed system; the algorithm has been tested on 150 different scanned pages containing 3750 text lines of the three scripts, with an accuracy of 99.84%.
Introduction
segmentation

The output of this stage is the segmented text lines of each input image.

Feature Extracting

3.1

The horizontal projection profile of each text line consists of nearly 100 rows. Each text line is enclosed in a rectangle whose area is calculated by multiplying the length by the width of the rectangle enclosing the text line. We then calculate the sum of the black pixels enclosed in this area and obtain the ratio M of the black pixels to the area:

M = (Σ_{i=0}^{n} x_i) / area    (1)

Figure 2: The ratio of black pixels to the area in 4 different font sizes for Arabic, English and Chinese.

Figure 3: The ratio of black pixels to the area in 4 different fonts for Arabic, English and Chinese.
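A minimal sketch of the first feature (Eq. (1)): the ratio M of black pixels in a text line to the area of its bounding rectangle, for a binary image stored as a 2-D NumPy array (1 = black). The array here is invented for illustration:

import numpy as np

def m_ratio(line_img):
    """M = (number of black pixels) / (height * width of the line)."""
    return line_img.sum() / line_img.size

line = np.zeros((4, 10), dtype=int)
line[1:3, 2:8] = 1                     # a crude "stroke" of black pixels
print(m_ratio(line))                   # 12 black pixels / 40 -> 0.3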
Language   M Range
Chinese    0.59 to 0.69
English    0.40 to 0.44
Arabic     0.23 to 0.40

3.2

The distance between the absolute and relative maxima in the middle zone of the text line:

Language   Distance
English    15 to 40
Arabic     0 to 10

Table 2: compares the proposed algorithm with recent works
Conclusion

The proposed method uses two new methods for analyzing and extracting features based on the horizontal profiles of each text line. Despite its simplicity, the proposed method has an advantage in identifying various fonts and sizes, which similar methods lack, and it is correct with 99.84% confidence on the data collection. The proposed algorithm can also be extended to the word level and to other languages.

Language   Total number of lines   Correct identification   Wrong identification
Chinese    1300                    1300                      0      (100%)
Arabic     1200                    1198                      3      (99.81%)
English    1250                    1247                      3      (99.83%)
References

[1] A. Selamat and Ng Choon Ching, Arabic Script Documents Language Identification Using Fuzzy ART, Second Asia International Conference on Modelling & Simulation (AICMS 08), 528-533, 2008.
[2] P. K. Aithal, G. Rajesh, D. U. Acharya, and N. V. K. M. Subbareddy, Text line script identification for a tri-lingual document, Lecture Notes in Computer Science (2010), 1-3.
The comparison of the proposed algorithm with recent works in this field, which have been tested on texts with a fixed font size, is shown below.

Method                                                           Data (lines)   Confidence
Rule based classifier using top and bottom profile feature [8]   500            96.6%
Rule based classifier using profile feature [2]                  200            99.83%
Proposed Algorithm                                               3750           99.84%
[3] A. Lawrence Spitz, Determination of the Script and Language Content of Document Images, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (1997), 235-245.
[4] A. Zramdini and R. Ingold, Optical font recognition using typographical features, IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (1998), 877-882.
[5] D. Dhanya and A. G. Ramakrishnan, Script Identification in Printed Bilingual Documents, Lecture Notes in Computer Science (2002), 73-82.
[6] E. B. Bilcu and J. Astola, A Hybrid Neural Network for Language Identification from Text, Lecture Notes in Computer Science (2006), 253-258.
[7] H. Rezaee, M. Geravanchizadeh, and F. Razzazi, Automatic language identification of bilingual English and Farsi scripts, Lecture Notes in Computer Science (2009), 1-4.
[8] P. A. Vijaya and M. C. Padma, Text Line Identification from a Multilingual Document, Lecture Notes in Computer Science (2009), 302-305.
Samaneh Ahmadi
Esfahan University
KIIT University
Esfahan, Iran
India
samanehahmadi71@yahoo.com
vissair@gmail.com
Vaibhav Kesri
NIT Kurukshetra
Department of Electrical
India
vaibhavkesri1@gmail.com
Abstract: The N-queens problem is a classic problem in which n queens are to be placed on an n x n board such that no queen attacks any other queen. The branching factor grows in a roughly linear way, which is an important consideration for researchers. Many researchers have addressed the problem with the help of artificial-intelligence search methods such as DFS, BFS and backtracking algorithms. We have conducted a study of this problem and propose a new backtracking algorithm based on clustering the chess board. To perform clustering on the chess board, we need to convert the chess board into a network. This algorithm yields all solutions of the n x n chess board.
Introduction

The N-queen problem is a generalized form of the 8-queen problem, proposed by the chess player Max Bezzel. In the 8-queen problem, 8 queens are required to be placed on an 8x8 chess board in such a way that no queen attacks any other queen [2]. A queen can move in the horizontal (same row), vertical (same column) and diagonal directions. The number of ways of placing 8 queens on the board is 64!/(56! 8!), about 4.4 x 10^9, while the total number of possible solutions is 92 [2]. Two solutions can be considered the same when one can be obtained from the other by rotation or symmetry, so there exist only 12 distinct solutions, and it becomes very hard to obtain a unique solution out of these. An N-queen placement must also follow the following rules [4]:

1.1  Applications
Solution strategy

2.1  Backtracking

2.2  Cluster

2.3  Condition
2.4  Association Rules

There are some association rules which apply between clusters. These rules are the following. After making all clusters, if some single nodes remain, these nodes are made into clusters by using Table 1, Table 2 and Table 3: any two or more nodes whose table values are the same belong to cluster-A. E.g.:
2.5  Analyzer
How it works

Consider a chess board (6 x 6) and convert it into a network; here we follow the backtracking algorithm. The network of the 6 x 6 chess board is shown below. Here we can see that after placing 2 queens, there are 3 clusters, so we cannot place the remaining 4 queens in 3 clusters and the selection of the second queen's position is wrong. We therefore need to backtrack and choose another position for the second queen. These clusters belong to the category of cluster-A.
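For reference, the following is a minimal sketch of the classical row-by-row backtracking search that the clustering scheme above builds on; the network and cluster bookkeeping described in this paper is omitted, so this is a baseline illustration rather than the proposed algorithm. It enumerates all placements and reproduces the 92 solutions cited in the introduction for the 8 x 8 board.

# Classical backtracking for N-queens: place one queen per row and check
# column and diagonal conflicts against the queens already placed.
def solve_n_queens(n):
    solutions = []

    def safe(cols, row, col):
        for r, c in enumerate(cols):
            if c == col or abs(c - col) == row - r:
                return False
        return True

    def place(cols):
        row = len(cols)
        if row == n:
            solutions.append(tuple(cols))  # one complete placement found
            return
        for col in range(n):
            if safe(cols, row, col):
                cols.append(col)
                place(cols)                # try to extend this placement
                cols.pop()                 # backtrack

    place([])
    return solutions

print(len(solve_n_queens(8)))  # prints 92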
Step 2: Find clusters for the next four rows and give preference to those clusters which cover more nodes; that is, first we make 4-node clusters, then 3-node clusters, then 2-node clusters.

References

[1] V. Kesri, Va. Kesri, and P. Ku. Pattnaik, An Unique Solution for N queen Problem, International Journal of Computer Applications 43 (2012), no. 12, 1–6.
[2] C. Letavec and J. Ruggiero, The n-queen problem, INFORMS Transactions on Education 2 (2002), no. 3.
[3] E. Horowitz, S. Sahni, and S. Rajasekaran, Fundamentals of Computer Algorithms.
Shima Tabibian
shimatabibian@iust.ac.ir

m_rajabzadeh@comp.iust.ac.ir

Ahmad Akbari
akbari@iust.ac.ir

Babak Nasersharif
bnasersharif@eetd.kntu.ac.ir
Keywords: Keyword Spotting; Phone Lattice; Lattice Search; Minimum Edit Distance; Scoring.
Introduction
Keyword spotting (KWS) systems are used for detection of selected words in speech utterances. Searching
for various words or terms is needed in spoken document retrieval which is a subset of information retrieval. KWS is used in a wide range of applications
such as searching in audio files.
One of the noticeable challenges in KWS is the choice
of a suitable approach for finding the target keyword
in the input speech utterance. KWS approaches can
be divided into two main categories. The first category is called Large Vocabulary Continuous Speech
Recognition-based (LVCSR-based) and the second category is called phone sequence-based approach.
The performance of LVCSR-based approaches depends on the recognizer vocabulary, so out-of-vocabulary (OOV) keywords are problematic. Phone lattice-based KWS systems have low accuracy due to the low performance of the speech phone
recognizers. The performance of KWS systems are related to the performance of the speech phone recognizers. Thus, high insertion, deletion and substitution error rates affect the performance of KWS systems. The
Minimum Edit Distance (MED) during lattice search
used in some phone lattice-based KWS approaches,
compensates for speech recognizer errors. Given source
and target sequences, the MED calculates the minimum cost of transforming the source sequence to the
target sequence. In this transformation, a combination
of insertion, deletion, substitution and match operations is used. Each of mentioned operations has an
associated cost. Then, the sequences extracted from
lattice, are accepted or rejected by thresholding on this
MED score. The KWS system can be made robust against the errors of the speech recognizer by using MED, but the MED measure raises the false alarm rate, because it considers unrelated words as keyword hits [5-7]. In [5] a method named Dynamic Match Phone Lattice Search (DMPLS) uses the MED measure during the search.
In this paper, we use the DMPLS method, a phone-based approach which applies a lattice structure as its search space. By using the lattice, we can avoid the OOV problem and also increase the search speed. We propose to improve the MED weakness using Viterbi scores; using this technique, the proposed method decreases the false alarm rate while preserving the same detection rate. In addition, we use lattice pruning and indexing to increase the search speed of DMPLS.
The rest of the paper is organized as follows. In
Section 2, we describe MED measure. In Section
3, we propose Viterbi score for improving MED and
techniques for improving the search speed on lattice.
Section 4 contains the experimental results. Finally,
we conclude the paper in Section 5.
As mentioned, the weakness of speech phone recognizers and their errors affect phone-based KWS systems. KWS systems are based on the speech recognizers' results, so they inherently suffer from high insertion, deletion and substitution error rates. Errors caused by the speaker can also increase these error rates. Some KWS approaches use the Minimum Edit Distance (MED) during lattice search to compensate for speech recognizer errors. The MED between two strings is defined as the minimum cost of converting one string into the other:

M(0,0) = 0
M(i,0) = i*I,  i = 1, ..., p
M(0,j) = j*D,  j = 1, ..., q
M(i,j) = min{ M(i-1,j-1) + S(X(i),Y(j)),  M(i-1,j) + D,  M(i,j-1) + I }
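As a concrete reading of this recurrence, the sketch below fills the MED table bottom-up; the unit costs used for I, D and the substitution cost S are placeholder assumptions to be replaced by the actual operation costs of the system.

# Minimum Edit Distance between a source sequence X (length p) and a target
# sequence Y (length q), following the recurrence above.
def med(X, Y, I=1.0, D=1.0, sub=lambda x, y: 0.0 if x == y else 1.0):
    p, q = len(X), len(Y)
    M = [[0.0] * (q + 1) for _ in range(p + 1)]
    for i in range(1, p + 1):
        M[i][0] = i * I
    for j in range(1, q + 1):
        M[0][j] = j * D
    for i in range(1, p + 1):
        for j in range(1, q + 1):
            M[i][j] = min(M[i - 1][j - 1] + sub(X[i - 1], Y[j - 1]),
                          M[i - 1][j] + D,   # deletion step
                          M[i][j - 1] + I)   # insertion step
    return M[p][q]

# Example: med(list("kitten"), list("sitting")) returns 3.0 with unit costs.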
Proposed Approach

One drawback of the MED measure is that it doesn't normalize the scores of candidate keywords and uses the information of only the candidate substring. To compensate for this defect, we use an approach that improves the performance of methods that use only the MED measure; we describe this approach in Subsection 3.1. We also use some techniques for increasing the speed of the search process, described in Subsection 3.2.
3.1  Viterbi Scoring for Improving the Accuracy of Search

The results of applying Viterbi scoring to the search process with the MED measure are presented in Section 4.

CM^phoneme_keyword = L^phoneme(keyword) + L^phoneme_alpha(keyword) + L^phoneme_beta(keyword) - L^phoneme_best
where CM is the confidence measure of the keyword and L^phoneme(keyword) is the likelihood of the keyword, computed as the sum of the acoustic likelihoods of the phonemes recognized correctly. The forward likelihood L^phoneme_alpha(keyword) is the likelihood of the best path through the lattice from the beginning of the lattice to the first phoneme of the keyword. The backward likelihood L^phoneme_beta(keyword) is computed from the end of the lattice to the end of the keyword. Forward and backward likelihoods are recursively evaluated as:

L^phoneme_alpha(N) = L^phoneme_acoustic(N) + min over Np of L^phoneme_alpha(Np)

L^phoneme_beta(N) = L^phoneme_acoustic(N) + min over NF of L^phoneme_beta(NF)
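One way to evaluate the forward recursion is a single pass over the lattice nodes in topological order, as sketched below; the node list, the predecessor map and the per-node acoustic scores are assumed data structures, and the backward scores L_beta follow symmetrically using follower nodes and a reverse pass.

# Forward scores over a phone lattice, mirroring
# L_alpha(N) = L_acoustic(N) + min over predecessors Np of L_alpha(Np).
def forward_scores(nodes, preds, acoustic):
    L_alpha = {}
    for n in nodes:  # nodes assumed topologically ordered
        best_prev = min((L_alpha[p] for p in preds.get(n, [])), default=0.0)
        L_alpha[n] = acoustic[n] + best_prev
    return L_alpha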
We evaluated our keyword spotting system on TFarsdat [10], a database of Persian conversational telephone
speech. This database consists of 320 audio files spoken
by 64 different speakers. Speakers have a wide variety
of genders, ages and educations. They also cover 10
different Persian dialects. The number of different phones in this database is considered to be 30.
Syllable kind   CV   CVC   CVCC
                2    16    23
                3    19    24
files were used for the test phase (2 files for each speaker). The total duration of the training files is about 3 hours and the total duration of the test files is 0.9 hour. Each phone has been modelled using a left-to-right HMM with 3 states and 64 Gaussian mixtures per state. After that, we used tied mixtures for constructing the triphone models; in this way, we obtained 4535 triphone models for TFarsdat and a better phone recognizer for constructing the phone lattice than with monophone-based models. Feature vectors contain energy and 12 MFCCs together with their first, second and third order derivatives, so the feature vector dimension is 52.
Figure 1 presents the effect of triphone models on the performance of the KWS system in comparison to monophone models. In this figure, the ROC (Receiver Operating Characteristic) curve is reported for two values of allowed errors. The allowed error is the maximum allowed number of different phones between a candidate keyword and the desired keyword; therefore, when the allowed errors are set to 1, the MED measure considers a candidate keyword with only one phone different from the original keyword. We denote the allowed errors by Emax. As shown, triphone models can improve the results of the KWS system, so we choose the KWS system based on triphone models.

Figure 1: KWS system based on triphone models compared with KWS system based on monophone models
Figure 2 presents ROC curve for comparing MED
method with a search method without lattice. Of
course these methods are based on triphone models.
As shown in the figure, MED method on lattice can
increase accuracy of search.
Figure 3 presents the ROC curve comparing the method using Viterbi scoring with the method using only the MED measure. The results are summarized in Table 2, which presents the accuracy of the proposed approach measured with FOM (Figure Of Merit). As can be seen from the table, when Emax is set to 1, search using Viterbi scores increases the FOM by 0.02 in comparison to the method using only the MED measure, and when Emax is set to 2, it increases the FOM by 0.08. In addition, when Emax is set to 1, the proposed approach based on triphone models increases the FOM by 0.172 in comparison to monophone models, and when Emax is set to 2, it increases the FOM by 0.15.

Table 2: Accuracy of the proposed approach measured with FOM

Method                     Emax   FOM
Monophone-based            1      0.068
                           2      0.19
MED + triphone             1      0.24
                           2      0.34
MED + Viterbi + triphone   1      0.26
                           2      0.42
As mentioned before, we use some techniques to
make search on lattice faster. Speed of search is measured by Real Time Factor (RTF). Table 3 presents
the results of search speed. In this table the results
of speed are reported for two values of allowed errors.
When Emax is set to 1, the speed of our KWS system
with adding Viterbi scoring is 1.7 times faster than real time. Also, when Emax is set to 2, the speed of our KWS system is 1.1 times faster than real time.
Conclusions

In this paper, we presented a phone-lattice based keyword spotting system for online Persian conversational telephony speech. This system uses the MED measure, which covers some errors of the speech recognizer and so increases the detection rate of the KWS system; we defined this system as our baseline. To improve its performance, we applied Viterbi scoring to the MED measure. This method uses the information of the whole lattice and normalizes the scores of the candidate keywords, so it decreases the false alarm rate. In addition, we showed the effect of triphone models on the performance of the KWS system in comparison to monophone models.

Acknowledgements

References

[1] J. Cernocky, I. Szoke, M. Fapso, M. Karafiat, L. Burget, J. Kopecky, F. Grezl, P. Schwarz, O. Glembek, I. Oparin, P. Smrz, and P. Matejka, Search in Speech for Public Security and Defense, Proceedings of IEEE Workshop on Signal Processing Applications for Public Security and Forensics (SAFE) (2007), 1–7.
[8] I. Szoke, P. Schwarz, P. Matejka, and M. Karafiat, Comparison of keyword spotting approaches for informal continuous speech, Proceedings of 9th European Conference on Speech Communication and Technology (Interspeech) (2005), 633–636.
[9] K. Trinh, H. Nguyen, D. Duong, and Q. Vu, An empirical study of multi-pass decoding for Vietnamese LVCSR, Proceedings of International Workshop on Spoken Languages Technologies for Under-resourced Languages (SLTU) (2008).
[10] M. Bijankhan, J. Sheykhzadegan, M. Roohani, R. Zarrintare, S. Z. Ghasemi, and M. E. Ghasedi, TFarsdat - the telephone Farsi speech database, Proceedings of EUROSPEECH (2003), 1525–1528.
Mohammad-R. Akbarzadeh-T
akbarzadeh@ieee.org

shahram.shahraki@mshdiu.ac.ir
Abstract: Some algorithms, such as Estimation of Distribution Algorithms, use probabilistic modeling to generate candidate solutions in optimization problems. The probabilistic presentation and modeling allow the algorithms to climb the hills in the search space. In this paper, the Adaptive Gaussian Estimation of Distribution Algorithm (AGEDA), a kind of multivariate EDA, is proposed for real coded problems. The proposed AGEDA needs no initialization of parameters; the mean and standard deviation of the solution are extracted from population information adaptively. Gaussian data distribution and dependent individuals are the two assumptions considered in AGEDA. The model fitting task in AGEDA is based on a maximum likelihood procedure to estimate the parameters of the Gaussian distribution assumed for the data. The proposed algorithm is evaluated and compared experimentally with the Univariate Marginal Distribution Algorithm (UMDA), Particle Swarm Optimization (PSO) and the Cellular Probabilistic Optimization Algorithm (CPOA). Experimental results show the superior performance of AGEDA over the other algorithms.
Keywords: Probabilistic Optimization Algorithm, Particle Swarm Optimization, Evolutionary algorithms, Univariate Marginal Distribution Algorithm
Introduction
Evolutionary search algorithms are important population based optimization techniques in the recent years
as a consequence of computation ability increment to
solve optimization problems. These techniques search
through many possible solutions which operate on a set
of potential individuals to get better estimation of solution by using the principle of survival of the fittest, as
in natural evolution. Genetic algorithms (GAs) developed by Fraser [1], Bremermann [2], and Holland [3], evolutionary programming (EP) developed by Fogel [4], and evolution strategies (ES) developed by Rechenberg [5] and Schwefel [6] establish the backbone of evolutionary computation, which has been formed over the past 50 years. Estimation of Distribution Algorithms (EDAs), or Probabilistic Model-Building Genetic Algorithms, or Iterated Density Estimation Algorithms, proposed by Mühlenbein and Paaß [7], are an extension of genetic algorithms which are one of the

Corresponding Author; Center of Excellence on Soft Computing and Intelligent Information Processing; IEEE Senior Member; Professor
In this paper a new kind of multivariate EDA, called the Adaptive Gaussian Estimation of Distribution Algorithm (AGEDA), is introduced. AGEDA has been designed for real coded problems. The proposed algorithm assumes a Gaussian distribution of the data in order to model and estimate the joint distribution of promising solutions based on the maximum likelihood technique. The next generation is then sampled based on this set of solutions, as the parent set, and the estimated joint distribution. This type of probabilistic representation allows AGEDA to escape from local optima and move freely through the fitness landscape.
f(x) = 1 / ((2π)^(k/2) |Σ|^(1/2)) · exp[ −(x − μ)^T Σ^(−1) (x − μ) / 2 ]
One of the advantages of EDAs over other EAs is in the exploration of the search space. This presentation of EDA can improve exploitation without losing the exploration ability of EDAs; to achieve this, AGEDA uses a different estimated Gaussian distribution in every dimension of the individuals. The proposed algorithm has two implicit parameters, the mean and the standard deviation, which are extracted from the promising population adaptively; AGEDA therefore has no free parameters, and there is no need for parameter setting.
The procedure of the proposed algorithm is described below:

Step 1) Initialize the first generation randomly with uniformly distributed random numbers in all dimensions.

Step 2) Evaluate the fitness function of all the real valued individuals.

Step 3) This is the main loop of the algorithm, continued until the termination condition (maximum number of generations) is met.

Step 4) Based on the truncation selection model, the top evaluated individuals are selected to estimate the parameters of the distribution, and weak individuals are eliminated so that they do not participate in the estimation.

Step 5) The distribution parameters are estimated based on the maximum likelihood estimation technique.
μ = E(X)    (3)

Σ = E[(X − μ)(X − μ)^T]
  = [ E[(x1 − μ1)(x1 − μ1)]  ...  E[(x1 − μ1)(xk − μk)]
      ...
      E[(xk − μk)(x1 − μ1)]  ...  E[(xk − μk)(xk − μk)] ]    (4)
Step 6) Based on the estimated mean and standard deviation for every dimension, a new population is sampled as in (5):

x_ij = G(μ_i, σ_i)    (5)
where μ_i and σ_i are the estimated parameters of the population based on the top evaluated individuals, and G(·,·) is a Gaussian random number generator. In addition, i = 1, 2, ..., d (for a d-dimensional problem) is the dimension indicator and j = 1, 2, ..., k (with maximum population size k) is the population indicator.

Step 7) is the consistency check step:

x_ij = { x_ij          if x_ij is consistent
       { G(μ_i, σ_i)   otherwise              (6)

In this paper, the benchmark problems used to evaluate our algorithm are numerical function optimization problems: Schwefel, Ackley, Griewank, Rosenbrock, G1, Kennedy, Rastrigin, and Michalewicz.
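A compact sketch of Steps 1-7 follows, with per-dimension Gaussian sampling as in Eq. (5); the population size, truncation ratio and generation count are illustrative placeholders, and the consistency check of Step 7 is read here as clipping to the search bounds, one possible realization of Eq. (6).

import numpy as np

# Minimal AGEDA-style loop: truncation selection, maximum-likelihood
# mean/std per dimension, Gaussian resampling. 'f' is minimized.
def ageda(f, d, low, high, pop_size=100, top_ratio=0.3, generations=200, rng=None):
    rng = rng or np.random.default_rng()
    pop = rng.uniform(low, high, size=(pop_size, d))              # Step 1
    for _ in range(generations):                                  # Step 3
        fit = np.apply_along_axis(f, 1, pop)                      # Step 2
        top = pop[np.argsort(fit)[: int(top_ratio * pop_size)]]   # Step 4
        mu, sigma = top.mean(axis=0), top.std(axis=0) + 1e-12     # Step 5 (ML estimates)
        pop = rng.normal(mu, sigma, size=(pop_size, d))           # Step 6: x_ij = G(mu_i, sigma_i)
        np.clip(pop, low, high, out=pop)                          # Step 7: consistency check
    fit = np.apply_along_axis(f, 1, pop)
    return pop[np.argmin(fit)]

# Example: minimize the sphere function in 10 dimensions.
best = ageda(lambda x: float(np.sum(x * x)), d=10, low=-5.0, high=5.0)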
Experimental result
CPOA

Function      S      -      -      Mutate   Rmu     Rdel    S
Schwefel      0.3    0.05   0.03   0.005    0.002   0.002   6
Ackley        0.4    0.05   0.29   0.005    0.002   0.002   6
Griewank      0.36   0.5    0.07   0.005    0.002   0.002   6
Rosenbrock    0.3    0.5    0.2    0.005    0.002   0.002   6
G1            0.3    0.05   0.03   0.005    0.002   0.002   6
Kennedy       0.25   0.05   0.08   0.005    0.002   0.002   6
Rastrigin     0.3    0.06   0.03   0.005    0.002   0.002   6
Michalewicz   0.3    0.05   0.09   0.005    0.002   0.002   6
UMDA

Function      Mean         STD        Best        Worst
Schwefel      -4.172e+04   3.1        -4.17e+04   -4.181e+04
Ackley        -2.3         1.1        -1.01       -5.91
Griewank      -1.14        0.05       -1.1        -1.16
Rosenbrock    -2.36e+04    4.36e+4    -1.84e+04   -3.6e+04
G1            18.26        0.09       18.534      18.06
Kennedy       -2.78e+03    1.06e+03   -2.53e+03   -2.96e+03
Rastrigin     -1.53e+3     186        -1156       -2.658e+3
Michalewicz   8.67         1.01       9.56        6.642
PSO

Function      Mean         STD        Best        Worst
Schwefel      -4.173e+04   16.58      -4.17e+04   -4.18e+04
Ackley        -9.12        0.91       -7.84       -10.65
Griewank      -1.12        0.03       -1.08       -1.16
Rosenbrock    -8.168e+05   3.15e+05   -3.66e+05   -1.26e+06
G1            18.53        0.02       18.55       18.5
Kennedy       -3.96e+04    2.68e+04   -7.37e+03   -1.011e+05
Rastrigin     -1.51e+03    100        -1.36e+03   -1.66e+03
Michalewicz   9.095        0.9596     10.558      7.47
CPOA

Function      Mean         STD         Best         Worst
Schwefel      -4.19e+04    1.002       -4.19e+04    -4.29e+04
Ackley        -1.6         0.58        -0.96        -2.69
Griewank      -1.616       0.05        -1.55        -1.69
Rosenbrock    -2.28e+06    9.434e+04   -2.154e+06   -2.474e+06
G1            18.035       0.7724      18.549       16.86
Kennedy       -1.69e+05    3.12e+04    -1.74e+05    -3.41e+05
Rastrigin     -1.69e+03    43.32       -1.60e+03    -1.76e+03
Michalewicz   8.94         1.27        12.32        8.017
Conclusion

This paper proposed a novel EA inspired by Estimation of Distribution Algorithms. The Adaptive Gaussian Estimation of Distribution Algorithm (AGEDA) is designed for real coded problems. AGEDA uses prior information of the data, when available, to find the optimal parameters; the proposed algorithm sets its own required parameters adaptively, based on the information of the data.
References
[1] A.S. Fraser, Simulation of genetic systems by automatic digital computers, Aust. J. Biol. Sci 10 (1957), 484–491.
[2] H. J. Bremermann, Optimization through evolution and recombination, Self-Organizing Systems, M. C. Yovits, G. T. Jacobi, and G. D. Goldstine, Eds., Washington, DC: Spartan (1962), 93–106.
[3] J. H. Holland, Adaptation in Natural and Artificial Systems, Ann Arbor: Univ. Michigan Press, 1975.
[4] J. Fogel, A. J. Owens, and M. J. Walsh, Artificial Intelligence through Simulated Evolution, New York: Wiley (1966).
[5] I. Rechenberg, Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution, Stuttgart, Germany: Frommann-Holzboog (1973).
[6] H.-P. Schwefel, Evolution and Optimum Seeking, New York: Wiley (1995).
[7] H. Mühlenbein and G. Paaß, From Recombination of Genes to the Estimation of Distributions, Springer-Verlag, PPSN IV, LNCS Vol. 1141 (1996), 178–187.
[8] D. H. Wolpert and W. G. Macready, No Free Lunch Theorems for Optimization, IEEE Transactions on Evolutionary Computation 1 (1997), 67–82.
[9] L. Zinchenko, M. Radecker, and F. Bisogno, Multi-Objective Univariate Marginal Distribution Optimisation of Mixed Analogue-Digital Signal Circuits, GECCO'07, ACM, London, United Kingdom (2007).
[10] Q. Zhang and H. Mühlenbein, On the Convergence of a Class of Estimation of Distribution Algorithms, IEEE Transactions on Evolutionary Computation 8 (2004), no. 2.
[11] M. Tayarani-N and M.-R. Akbarzadeh-T, Probabilistic Optimization Algorithms for numerical function optimization problems, IEEE Conference on Cybernetics and Intelligent Systems (2008), 1204–1209.
h_mahdavinataj@yahoo.com

Babak Nasersharif
bnasersharif@eetd.kntu.ac.ir
Introduction

Classification involves three main steps [1]: feature extraction, classifier training using the extracted features, and evaluating and testing the classifiers. For the best classification performance, we should select a classifier that is appropriate for the data and for our pattern recognition problem; obviously, a classifier may not be suitable for all data types and problems. In addition, the performance of classifiers highly depends on the selected feature set. The selected features should represent the main data properly, without redundancy, and also separate the data classes in a suitable way. Sometimes the extracted features don't have these properties completely. Thus, a linear or nonlinear transformation is applied to the features to make them more discriminative and independent and to decrease or increase the dimensionality of the feature space.
Principal component analysis (PCA) is a well-known technique for feature transformation and dimensionality reduction. It represents a linear transformation in which the data are expressed in a new coordinate basis that corresponds to the directions of maximum variance [2]. The idea behind PCA is to find a lower dimensional representation that preserves most of the variance of the data.
In this paper, we propose a linear feature transformation for obtaining more discriminative and orthogonal features simultaneously. As the class discrimination criterion we use the Dunn index. On the other hand, for more orthogonality and statistical independence, we use the covariance matrix and the ratio of the sum of its diagonal elements to the sum of its non-diagonal elements. In this manner, we find a transformation that makes the covariance matrix diagonal. These criteria are used as the fitness function of a genetic algorithm in order to determine the transformation. The rest of the paper is organized as follows. Section 2 introduces the Dunn index. Section 3 includes the proposed genetic algorithm and its fitness function. Section 4 contains our experimental results. Finally, we conclude the paper in Section 5.
2.1  Dunn Indices

D = min over i, j (i ≠ j) of [ d(c_i, c_j) / max over k of diam(c_k) ]    (1)

where nc is the number of classes and d(c_i, c_j) is the dissimilarity function between two classes c_i and c_j, defined by Eq. (2):

d(c_i, c_j) = min over x ∈ c_i, y ∈ c_j of d(x, y)    (2)

and diam(c) is the diameter of class c, a measure of the dispersion of the class. The diameter of a class c can be defined as:

diam(c_i) = max over x, y ∈ c_i of d(x, y)    (3)
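A direct, quadratic-time sketch of Eqs. (1)-(3) for labelled feature vectors is given below; Euclidean distance is assumed for d(x, y).

import numpy as np

# Dunn index: smallest between-class distance over largest class diameter.
# X is an (n, d) array of features; labels is an (n,) array of class ids.
def dunn_index(X, labels):
    classes = [X[labels == c] for c in np.unique(labels)]

    def d_between(a, b):   # Eq. (2): closest pair across two classes
        return min(np.linalg.norm(x - y) for x in a for y in b)

    def diam(c):           # Eq. (3): farthest pair inside one class
        return max(np.linalg.norm(x - y) for x in c for y in c)

    max_diam = max(diam(c) for c in classes)
    return min(d_between(a, b)
               for i, a in enumerate(classes)
               for b in classes[i + 1:]) / max_diam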
Proposed Method

Genetic Algorithm is one of the meta-heuristic optimization techniques, alongside simulated annealing, tabu search and evolutionary strategies. GA has been demonstrated to converge to the optimal solution for many diverse and difficult problems as a powerful, stochastic tool based on the principles of natural evolution [16]. The details of our implementation of GA are described as follows.

Algorithm 1: Genetic Algorithm

3.1

The first step in GAs is to define the encoding that describes any potential solution as a numerical vector. We use a vector of floats to express an individual's code; each element of this vector is in the interval [-1, 1]. The length of the individuals in the hypothesis space is d^2, where d is the number of features, so each transformation matrix is represented by a vector with d^2 elements.

3.2

3.4  Recombination

3.5  Mutation Operator

3.6  Fitness Function
The role of the fitness function is to measure the quality of solutions. In our method each chromosome is a transformation matrix. For its evaluation we use a fitness function which considers both class discrimination, using the Dunn index, and feature orthogonality, using the covariance matrix, as follows:
f1(W) = [ Σ_{i=1}^{d} Σ_{j=1, j≠i}^{d} C_W(i, j) ] / [ Σ_{i=1}^{d} C_W(i, i) ]
where d is the feature vector dimension and C_W is the covariance matrix of the features transformed using the transformation matrix W. Thus, we obtain the ratio of the sum of the non-diagonal elements of the covariance matrix to the sum of its diagonal elements. We want to minimize this function; therefore, the sum of the diagonal elements of the covariance matrix should be greater than the other elements. In the ideal case, the sum of the non-diagonal elements of the covariance matrix should be small and close to zero; in such a case, the covariance matrix is diagonal, the features are completely statistically independent and orthogonal, and f1 tends to zero.
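The fitness term f1 can be computed directly from the data and a candidate chromosome, as sketched below; reshaping the chromosome vector into the d x d matrix W before evaluation is assumed.

import numpy as np

# f1(W): ratio of the sum of off-diagonal elements of the covariance matrix
# of the transformed features to the sum of its diagonal elements.
def f1(chromosome, X):
    d = X.shape[1]
    W = chromosome.reshape(d, d)     # chromosome has d*d float genes
    Z = X @ W.T                      # transformed features
    C = np.cov(Z, rowvar=False)      # covariance matrix C_W
    diag = np.trace(C)
    return (C.sum() - diag) / diag   # off-diagonal sum over diagonal sum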
Dataset      Attributes   Samples   Classes
Iris         4            150       3
Tae          5            151       3
Glass        9            214       6
Ionosphere   34           351       2
Vowel        13           990       11
CMC          9            1473      3
Classifier   Dataset      Normal   PCA     LDA     COV     DI      DI-CV
BayesNet     Iris         90.19    96.07   96.07   96.07   98.03   98.03
             CMC          48.80    46.21   50.99   48.50   44.02   50.99
             Glass        61.64    58.90   53.42   67.12   69.86   65.75
             Ionosphere   95       95      93.33   76.66   94.16   93.33
             Tae          41.17    33.33   50.98   37.25   39.21   43.13
             Vowel        36.58    41.55   28.35   45.45   47.40   45.02
RBFN         Iris         96.07    98.03   98.03   98.03   98.03   98.03
             CMC          47.21    53.18   49.40   52.89   52.52   53.38
             Glass        68.49    78.08   64.38   65.75   67.12   67.12
             Ionosphere   96.66    95.83   92.5    97.5    97.5    98.33
             Tae          37.25    43.13   50.89   43.13   45.02   50.98
             Vowel        49.78    43.07   27.72   52.16   51.51   50.21
IB1          Iris         94.11    94.11   94.11   98.03   98.03   98.03
             CMC          45.21    42.82   45.02   45.91   49.40   45.81
             Glass        65.75    71.23   69.86   72.60   75.34   75.34
             Ionosphere   90       87.5    87.5    91.66   91.66   91.66
             Tae          41.17    37.25   45.09   41.17   43.13   41.17
             Vowel        49.78    45.02   35.06   59.95   52.38   57.35
MLP          Iris         96.07    96.07   96.07   96.07   96.07   96.07
             CMC          50.79    50.59   52.19   55.10   55.17   55.37
             Glass        67.12    61.64   64.38   73.97   69.86   73.97
             Ionosphere   90.83    90      91.66   97.5    94.16   96.66
             Tae          29.41    39.21   47.05   50.98   47.05   45.09
             Vowel        46.10    47.61   43.29   55.62   54.54   54.97

Table 2: Classification accuracy with pre-processing methods (the best results bold-underlined, the second best bold, the third best underlined)
Conclusion
References
[1] R.O. Duda, P.E. Hart, and D. Stork, Pattern Classification, second edition, Wiley, 2001.
Masoud Ghiasbeigi
Mahalat branch, Islamic Azad University, Mahalat, Iran
Department of Computer Engineering
I@Merkousha.net
Abstract: Tag collision is one of the biggest challenges in RFID systems, and some protocols have been proposed in the literature to address this issue. Our objective in this paper is to introduce these energy-aware tag anti-collision protocols and evaluate them. Our comparison is based on the messages transferred between tag and reader.
Introduction

RFID (Radio Frequency Identification) dates back to 1948 [1]; the technology was first used in WWII by the Allied armed forces to distinguish friendly from enemy aircraft and tanks, under the name IFF (Identify Friend or Foe) [2]. Today RFID has a significant role in supply chain management, agriculture, the military, healthcare, pharmaceuticals, retail, and other fields. This technology uses radio-frequency communication to retrieve data. The main RFID components are the reader, including an antenna, which is the device used to read or write data to RFID tags, and the tag, which is a device with an integrated circuit on which the reader acts. A tag can obtain its energy from the signals received from the reader (a passive tag) or from its own battery supply (an active tag). There is another kind of tag, called semi-passive, that uses a battery supply to power itself and the received signal energy from the reader to transmit data. This technology is used to track inventory, for object identification and more; data are written on the tags, which are attached to objects for rapid and automatic identification.

The rest of this paper is organized as follows: Section 2 overviews related research works. In Section 3 we introduce energy-aware tag anti-collision protocols and evaluate them.
3.1

3.2  Improved QT

This approach modifies QT to improve its performance [13, 14]. In the previous approach, all the tags send their complete ID to the reader in the collision case; in this approach, as soon as a collision is detected by the reader, the reader sends a message to the tags to stop sending their IDs. This signal is not a 0 or 1 symbol. This reduces the number of bits that tags send in the collision case. The reader needs one clock to detect the collision and one clock to send the stop signal, so the tags send only 3 bits in a collision. The processing levels of this approach are similar to Figure 1 and Table 1, but the 8 tags respond with fewer bits. Although in this case the reader sends more bits, reader energy consumption does not matter here, because the reader uses AC as its power source.

The next three protocols are combinations of the QT protocol and the frame slotted ALOHA protocol.
3.3

3.5

A = 00000, B = 00101, C = 01001

Figure 4: MAS Process for three tags.

Evaluation

Conclusion
References
[1] H. Stockman, Communication by Means of Reflected
Power, Proc. IRE 35 (1948), 1196-1204.
[2] G. Roussos, Networked RFID, systems, software and services, Springer-Verlag London Limited, Chapter 1, pages:
7, 2008.
S. Sanei
University of Surrey, UK
Department of Computing, Faculty of Engineering and Physical Sciences
s.sanei@surrey.ac.uk
H. Alizadeh
Iran University of Science and Technology
Department of Computer Engineering
h.alizadeh.iust@gmail.com
Abstract: Geometry dilution of precision (GDOP), a geometrically determined factor that describes the effect of geometry on the relationship between measurement error and position determination error, plays a very important role in the total positioning accuracy. The calculation of
the GPS GDOP is a time and power consuming task which can be done by solving measurement
equations with complicated matrix transformation and inversion. In order to reduce the calculation
burden, in this paper satellites geometry classification for good navigation satellites subset selection
based on advanced training algorithms including Levenberg-Marquardt (LM) and modified LM algorithms to train a feed-forward neural network (NN) and principal component analysis (PCA) is
presented. LM and modified LM are fast and powerful algorithms that can train an NN rapidly.
Also, PCA is used as a pre-processing step to create the uncorrelated and informative features of
the GPS GDOP. Simulation results show that these methods converge more efficiently to the optimal value in GPS GDOP classification.
Keywords: GPS GDOP, classification, principal component analysis, Levenberg-Marquardt (LM) and modified LM.
Introduction
The most common approach to obtaining the GPS GDOP is to calculate the inverse matrix for all combinations of satellites and choose the minimum one, which is a very time consuming approach. There are two approaches to overcome the computational burden, namely regression/approximation and classification of GPS GDOP data by computational intelligence methods such as neural networks (NNs), support vector machines (SVMs), and so on. Computer simulation results are discussed in Section 4 and, finally, conclusions are given in Section 5.

2  Background Knowledge for the Proposed Method

2.1  Geometry dilution of precision
In order to select the optimal subset of satellites, the
GPS GDOP approximators are used for computing the
value while the GPS GDOP classifier is used to select
one of the acceptable subsets of satellites for navigation using [5]. In [6] Hsu has proposed a method based
on tetrahedron volume formed by four user-to-satellite
vectors. However, it is not universally accepted because it does not guarantee optimum selection of satellites [5]. In order to estimate and classify the GPS GDOP, Simon and El-Sherief first extracted a set of features including the traces of the measurement matrix and of its second and third powers, and the determinant of the matrix. Then, to take advantage of computational efficiency, they used the basic back propagation (BP) NN (BPNN) to classify and approximate the GPS GDOP [5].
However, in many cases, including GPS GDOP classification, the BPNN has many deficiencies, such as too slow convergence, easily falling into local minima, and being easily affected by sudden peaks in the signal trend during the learning process. To overcome these problems, Jwo and Lai in [5] proposed using the basic BP with momentum, the optimal interpolative (OI) network, the probabilistic NN (PNN) and the general regression NN (GRNN) to classify the GPS GDOP. To improve the accuracy of GPS GDOP classification and to reduce the consumed time, in this paper we propose an approach based on principal component analysis (PCA) and Levenberg-Marquardt (LM) to classify the GPS GDOP.
ρ_i = √((X_i − X_u)² + (Y_i − Y_u)² + (Z_i − Z_u)²) + c·t_u    (2)

where ε_iono,i and ε_trop,i, the errors induced by the ionospheric and the tropospheric propagation, are calculated from a model; (X_u, Y_u, Z_u, t_u) are the four system unknowns, t_u being the correction the receiver has to apply to its own clock, and c is the speed of light. To resolve this system we need four equations, which means four pseudo-ranges from four different satellites. The pseudo-ranges can be approximated by a Taylor expansion. We obtain:

ρ̂_i = √((X_i − X̂_u)² + (Y_i − Ŷ_u)² + (Z_i − Ẑ_u)²) + c·Δt_u    (3)
H = [ a_x1     a_y1     a_z1     1
      a_x2     a_y2     a_z2     1
      ...      ...      ...      .
      a_xNsat  a_yNsat  a_zNsat  1 ]    (6)

G = (H^T H)^(-1)    (7)

GDOP = √(trace[G])    (8)
Figure 2: GPS data collecting embedded system used
in our experiments
2.2  Levenberg-Marquardt Algorithm

In the first step, in order to reduce the training time, the input measurement data are normalized. Since M = H^T H is a symmetric 4×4 matrix, it has four real-valued eigenvalues, denoted λ1, λ2, λ3 and λ4. It follows that the four eigenvalues of M^(−1) are 1/λ_i (i = 1, 2, 3, 4). Based on the fact that the trace of a matrix is the sum of its eigenvalues, equation (8) can be expressed as [9]:

y2(λ) = λ1² + λ2² + λ3² + λ4² = trace[M²]    (11)

y3(λ) = λ1³ + λ2³ + λ3³ + λ4³ = trace[M³]    (12)
In this research, a large set of experiments was carried out using the following set-up: a standard GPS receiver was installed at a fixed point and connected to a PC. For real GPS GDOP data collection, the azimuth (Az) and the elevation (E) of each observed satellite are measured using a developed embedded system. After collecting the GPS information in DRAM, the data were transferred through the serial port of the PC for processing. Figure 2 shows the entire GPS data collecting embedded system.
y4(λ) = λ1·λ2·λ3·λ4 = det(M)    (13)
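Putting Eqs. (6)-(8) and (11)-(13) together, the GDOP and the eigenvalue-based features can be computed from the geometry matrix H as sketched below; this only illustrates the formulas, not the NN classifier itself.

import numpy as np

# GDOP and the trace/determinant features from M = H^T H, where H is the
# (Nsat x 4) geometry matrix of Eq. (6).
def gdop_features(H):
    M = H.T @ H
    lam = np.linalg.eigvalsh(M)         # four real eigenvalues of symmetric M
    gdop = np.sqrt(np.sum(1.0 / lam))   # sqrt(trace[M^-1]), cf. Eq. (8)
    y2 = np.sum(lam ** 2)               # trace[M^2], Eq. (11)
    y3 = np.sum(lam ** 3)               # trace[M^3], Eq. (12)
    y4 = np.prod(lam)                   # det(M), Eq. (13)
    return gdop, (y2, y3, y4)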
Method                 Correct classification rate   CPU time
BPNN [5]               93.16%                         about 1.5 s for 200 iterations
PNN [5]                97.29%                         about 1.5 s
GRNN [5]               97.29%                         about 1.5 s
LM with PCA            99.27%
Modified LM with PCA   99.48%                         about 1.3 s for 200 iterations

Table 2: Comparison of classification rate and training time for the proposed methods and three well-known existing classifiers
In this research, the NN is designed to map the features to the GPS GDOP classes. It must be noted that the number of features (three) was selected by trial and error. The entire classification block diagram of the GPS GDOP using the NN and PCA is shown in Figure 3.

In this paper we use a feed-forward NN with three layers; for these methods we train it with 50% of the GPS GDOP measurement data and then use the rest of the data for testing the algorithms. The momentum and initial learning rate are set to 0.85 and 0.05, respectively. Because of the uncertain behavior of NNs, we run all algorithms 20 times, and the average of the results is presented. Deciding how many neurons to use in the hidden layer is one of the most important characteristics of an NN: when the number of neurons is too low, the NN cannot model complex data and the result may be unacceptable.

Table 2 shows the accuracies obtained by the proposed methods, training a feed-forward NN using LM and modified LM with PCA, together with BPNN, GRNN, and PNN. Classification accuracies of 99.27% and 99.48% are achieved on the GPS GDOP measurement data using LM and modified LM with PCA, respectively.

It should be mentioned that Jwo et al. in [2] used the NN for GPS GDOP classification with the BP technique, PNN and GRNN for NN learning. The advantages of the proposed method based on LM and PCA over the mentioned reference are high accuracy and low CPU time; it also has lower structural complexity for hardware implementation.
Conclusions
GPS errors resulting from the satellite configuration geometry are indicated by the GDOP factor, which is often used for selecting a suitable satellite subset from the at least 24 existing orbiting satellites. In this paper, a fast and precise approach for GPS GDOP classification using the LM and modified LM algorithms to train a feed-forward NN, together with PCA, has been proposed. NNs are a realistic computing approach for classifying the measured GPS GDOP. Also, in order to reduce the computational burden and training time, we use PCA as a pre-processing step; PCA can create independent and informative data.
The performance of the proposed methods has been
studied on the test data of the paper. The simulation results demonstrate the significant advantage of
the proposed methods compared with several existing
methods, namely, the BPNN, GRNN, and PNN.
References
[1] M. R. Mosavi and H. Azami, Applying neural network ensembles for clustering of GPS satellites, Journal of Geoinformatics 7 (2011), no. 3, 7–14.
[10] S. H. Doong, A closed-form formula for GPS GDOP computation, Journal of GPS Solutions 13 (2009), no. 3, 183–190.
Abstract: This research shows the influence of multi-core architecture to reduce the execution time
and thus increase performance of some software fault tolerance techniques. According to superiority
of N-version Programming and Consensus Recovery Block techniques in comparison with other
software fault tolerance techniques, implementations were performed based on these two methods.
Finally, the comparison between the two methods listed above showed that the Consensus Recovery
Block is more reliable. Therefore, in order to improve the performance of this technique, we propose
a technique named Improved Consensus Recovery Block technique. In our proposed technique, not
only performance is higher than the performance of consensus recovery block technique, but also the
reliability of our proposed technique is equal to the reliability of consensus recovery block technique.
The improvement of performance is based on multi-core architecture where each version of software
key units is executed by one core. As a result, by parallel execution of versions, execution time is
reduced and performance is improved.
Keywords: Software Fault Tolerance; Multi-core; Parallel Execution; Consensus Recovery Block; N-version Programming; Acceptance Test.
Introduction

Nowadays the influence of software in different domains such as economics, medicine, aerospace and so on is quite considerable. One of the main requirements of these systems is the use of safe and reliable software. Given the importance of software reliability, the need to use fault tolerance techniques in software development has increased significantly. Design diversity is one of the fault tolerance methods; it requires running multiple versions of the program [1]. While software fault tolerance techniques increase software reliability, increasing the number of versions of the program also increases execution time, which reduces performance. By taking advantage of distributed and parallel processing systems, however, efficiency is increased, and thus the cost of using these systems becomes acceptable.

Software Fault-Tolerance Techniques
2.1

2.3  N-version programming - Acceptance test technique

2.7
Acceptance Test

Acceptance Test (AT) is the most basic approach to self-checking software. It is typically used with the RcB, CRB and DRB techniques. The AT is used to verify that the system's behavior is acceptable, based on an assertion on the anticipated system state. As shown in Fig. 1, it returns the value TRUE or FALSE. An AT needs to be simple, effective, and highly reliable to reduce the chance of introducing additional design faults, to keep run-time overhead reasonable, to ensure that anticipated faults are detected, and to ensure that non-faulty behavior is not incorrectly detected as faulty. ATs can thus be difficult to develop, depending on the specification. The form of the AT depends on the application. The coverage of an AT is an indicator of its complexity, where an increase in coverage generally requires a more complicated implementation of the test. A program's execution time and fault manifestation probabilities also increase as the complexity increases [3].

The equation of motion of a satellite is a second order vector differential equation; therefore it has to be converted to a system of first order differential equations [6]:

r̈ = −(GM / |r|³)·r + k

where r is the position vector, GM is the product of the gravitational constant and the Earth's mass, and k is the effect of all the perturbing forces acting on a satellite. Since the equation of motion of a satellite is a second order three-dimensional differential equation, it can be solved numerically using methods such as Runge-Kutta, Adams-Bashforth and Adams-Moulton. In this paper, various implementations of these methods are used as the different versions for the fault-tolerance techniques.
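As an illustration of one such version, the sketch below performs a single classical Runge-Kutta (RK4) step for the equation of motion written as a first-order system; the perturbation term k is omitted and Earth's GM value is assumed.

import numpy as np

GM = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

# State y = (r, v): dr/dt = v, dv/dt = -GM * r / |r|^3.
def deriv(y):
    r, v = y[:3], y[3:]
    a = -GM * r / np.linalg.norm(r) ** 3
    return np.concatenate([v, a])

def rk4_step(y, h):
    k1 = deriv(y)
    k2 = deriv(y + 0.5 * h * k1)
    k3 = deriv(y + 0.5 * h * k2)
    k4 = deriv(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: one 10 s step of a circular orbit at 7000 km radius.
y0 = np.array([7.0e6, 0.0, 0.0, 0.0, np.sqrt(GM / 7.0e6), 0.0])
y1 = rk4_step(y0, 10.0)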
Critical and important parts are those in which the occurrence of an error causes system failure and the cost cannot be compensated. These critical and important parts are called software key units, and the other sections are software non-key units [2].

One way to increase fault tolerance is to have different versions and to deploy fault tolerance techniques. But since developing different versions of the entire system is very costly, several versions with different implementations are developed only for the software key units. Since the key units have several versions, which increases execution time, we use the features of multi-core architecture to reduce this time and run the versions on different cores in parallel. This approach reduces execution time and thus increases performance, because the cost of synchronization and communication between the cores is negligible compared with the high cost of the sequential program [2].

The effect of multi-core architecture on increasing the performance of the NVP technique has been discussed by Yang and his colleagues [8]. In this paper we discuss the effect of multi-core architecture on the NVP-derived techniques DRB, CRB and the improved consensus recovery block. Fault-tolerance techniques have been used to increase reliability, so different implementations of numerical methods for solving the differential equations of satellite motion, such as Runge-Kutta, Adams-Bashforth and Adams-Moulton, are implemented as the different versions required by the fault tolerance techniques.

In other words, in each technique we execute the different versions on a single-core architecture, then compare the execution time on the single-core architecture with the execution time on a multi-core architecture, and finally we offer a new technique to reach higher performance.

We reduced the execution time of those techniques significantly by using multi-core architecture. As shown in Fig. 3, the speedup of the NVP technique is 1.22 for a dual-core processor and 1.89 for a quad-core processor. Because the reliability of this technique is low, the NVP-TB-AT technique is used instead, whose speedup is 1.20 for a dual-core processor and 1.61 for a quad-core processor. The effect of multi-core architecture on the performance of the RcB technique is shown in Fig. 4.
In NVP-TB-AT, if the results of the two faster versions are equal, one of them is announced as the correct result and no acceptance test is performed on it [5]. So if there is an error in the system that causes the results of the two faster versions to be similar and wrong, the overall system failure probability increases with this technique. Thus this technique is less reliable than the RcB technique, because in RcB the result must pass the acceptance test under all conditions before being returned as the correct result. Also, if the program has several correct answers, the NVP-TB-AT technique may fail: if each of the two faster versions produces a correct but different result, the voter waits for the slowest version and uses the decision mechanism to judge among the results of the two faster versions and the result of the slowest version. If the slowest version has a correct but different result from those of the faster versions, the voter cannot decide and the system fails. But if the RcB technique is used and the program has several correct results, the system does not fail, because the AT is applied to every version and the correct result will be determined.

On the other hand, the performance of the RcB technique depends largely on the performance of the acceptance test, while in many cases creating an acceptance test program is very difficult. The CRB technique reduces the importance of the acceptance test compared with its importance in RcB. Also, the NVP technique cannot produce a final result in cases where the problem has several correct answers. So the RcB and NVP techniques each have drawbacks in some cases, and CRB, by combining the two techniques discussed, resolves both drawbacks.

In Fig. 5 the CRB technique is shown. First, the versions are executed simultaneously through the NVP technique and their results are given to a voter. If the voter can produce a correct result, this result is returned. Otherwise, the different versions are executed through the DRB technique.

Figure 5: Consensus recovery block technique algorithm
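The control flow just described can be outlined as below. This is a schematic reading, not the paper's implementation: the fallback here applies the acceptance test to the already computed results, whereas the CRB fallback described above re-executes the versions through the DRB technique; the version list, inputs and acceptance test are assumed callables.

from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Consensus Recovery Block outline: run versions in parallel (one per core),
# try majority voting first, then fall back to the acceptance test.
def consensus_recovery_block(versions, acceptance_test, inputs):
    with ThreadPoolExecutor(max_workers=len(versions)) as pool:
        results = list(pool.map(lambda v: v(*inputs), versions))

    value, votes = Counter(results).most_common(1)[0]
    if votes >= 2:              # at least two versions agree
        return value

    for result in results:      # RcB-style fallback on the computed results
        if acceptance_test(result):
            return result
    raise RuntimeError("all versions failed the acceptance test")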
Conclusion
References
[1] A. Avizienis and J.P.J. Kelly, Fault tolerance by design
diversity- concepts and experiments, IEEE Computer 17
(1984), 67-80.
[2] L. Yang, L. Yu, J. Tang, L. Wang, J. Zhao, and X. Li,
Mcc++/java: enabling multi-core based monitoring and fault
tolerance in c++/java, 15th IEEE International Conference
on Engineering of Complex Computer Systems (2010), 255256.
[3] L.L. Pullum, Software fault tolerance techniques and implementation, Artech House Publishers, 2001.
[4] A.T. Tai, J.F. Meyer, and A. Avizienis, Performability enhancement of fault-tolerant software, IEEE Transactions on
Reliability 42 (1993), no. 2, 227-237.
[5] A.T. Tai, Software performability: from concepts to applications, Kluwer Academic Publishers, 1996.
[6] M. Eshagh and M. NAJAFI ALAMDARI, Comparison of
numerical Integration methods in orbit determination of low
earth orbiting satellites, JOURNAL OF THE EARTH AND
SPACE PHYSICS 32 (2006), no. 3, 41-57.
[7] S. Akhter and J. Roberts, Multi-core programming, Intel
Press, 2006.
[8] L. Yang, Z. Cui, and X. Li, A case study for fault tolerance oriented programming in multi-core architecture, IEEE
International Conference on High Performance Computing
and Communications (2009), 630-635.
Mina Serajian
Amirkabir University
mina.serajian@gmail.com

Mohsen Vahed
ngo.iran@yahoo.com

r.asadnejad@aut.ac.ir
emadi@roozbeh.ac.ir
Abstract: The museum of the present time cannot be separated from the digital world and the Internet age. Nowadays there are various museums in the world which have been transformed into digital and virtual museums. But there are still people who prefer to be physically present in the museum and view its objects closely rather than surfing a virtual museum. Because of this, the aim of this essay is to combine the joy of visiting old-style museums with the characteristics and advantages of digital museums. For this purpose, in the suggested system the user goes to the museum building and, by using the technologies and tools provided by the museum, can easily use the museum services and fully enjoy visiting the objects and items.
Keywords: Digital museum, Information technology usage in museums, Promotion of services quality in museum,
E-Government
Introduction
Among the advantages of digital museums, the availability of more details, descriptions and related photos of a specific object, and the possibility of saving information, can be cited [3]. But many people still believe that going to museums and viewing the objects closely has its own special pleasure, and they prefer physical visits to virtual museum surfing. The purpose of this article is to propose a framework and a structure for using IT, digital tools and technologies in order to support the physical presence of visitors in the museum building.

In the following, Part 2 deals with the problems of digital and traditional museums that lead to the basic idea of this paper. Part 3 introduces the proposed system architecture, and Part 4 deals with tools and technology. In Part 5, conclusion and suggestions, we present potential suggestions for future work in order to implement and set up such a system.
Problem Statement

The architecture of the system, shown in Fig. 1, describes and identifies the system needs.

The software has a part named place finding, or visual map; by selecting this option the user can see her own place in the museum and all the paths and corridors of the museum, and so can be guided smartly to a selected place. This feature is provided by RFID technology.
Mitra Abbasfard
Faculty of Engineering
Reza Hassanpour
Computer Engineering Department
Cankaya University, Ankara, Turkey
Abstract: In image processing applications, data authentication is implemented using watermarking techniques. Watermarking is the process of inserting predefined patterns into image data in
a way that the degradation of quality is minimized and remains at an imperceptible level. Many digital watermarking algorithms have been proposed in the spatial and transform domains. The tech-
lossy image compression and other image processing operations. For instance, a simple noise in the
image may eliminate the watermark. On the other hand, frequency domain-based techniques can
embed more bits for watermarking and are more robust to attack. Some transforms such as Discrete
Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) are used for watermarking in
the frequency domain. In this paper, the robustness of different transform watermark algorithms is
evaluated by applying different attacks. We evaluate two and six watermark algorithms, which have
been proposed using the DCT and DWT, respectively. Our results show that Cox's algorithm, which is based on the DCT, is more robust compared to the other transform watermark algorithms.
Introduction
Digital multimedia data are rapidly spreading everywhere. On the other hand, this situation has brought
about the possibility of duplicating and/or manipulating the data. To keep on with the transmission of data
over the Internet the reliability and originality of the
transmitted data should be verifiable. It is necessary
that multimedia data should be protected and secured.
One way to address this problem involves embedding
an invisible mark into the original data to establish ownership of the data. This is done using digital watermarking algorithms [6, 16].
There are different algorithms in the spatial and
transform domains for digital watermarking. The techniques in the spatial domain still have relatively low-bit
capacity and are not resistant enough to lossy image
compression and other image processing operations.
For instance, a simple noise in the image may eliminate
the watermark data. On the other hand, frequency
domain-based techniques can embed more bits for watermark and are more robust to attack. Some transforms such as Discrete Cosine Transform (DCT) and
Discrete Wavelet Transform (DWT) are used for watermarking in the frequency domain. Most DCT-based
techniques work with 8×8 blocks. These transforms are used in several multimedia standards such as MPEG-2, MPEG-4, and JPEG2000. In addition, different watermark algorithms have been proposed using
DCT and DWT. In considering the attacks on water-
2.1

Spatial domain watermark algorithms insert the watermark data directly into the pixels of an image [8]. For example, some algorithms insert pseudo-random noise into the image pixels. Other techniques modify the Least Significant Bit (LSB) of the image pixels; the invisibility of the watermark data rests on the assumption that the LSB bits are visually insignificant. There are two ways of doing an LSB modification: the LSB of each pixel can be replaced with the secret message, or the image pixels may be chosen randomly according to a secret key. Here is an example of modifying the LSBs: suppose we have the three components R, G and B in an image, and their values for a chosen pixel are green, (R, G, B) = (0, 255, 0). If a watermark algorithm wants to hide the bit value 1 in the R component, then the new pixel value has components (R, G, B) = (1, 255, 0). As this modification is so small, the new image is indistinguishable from the original to the human eye [12]. Although these spatial domain techniques can easily be used on almost every image, they have the following drawbacks: they are highly sensitive to signal processing operations and can be easily damaged.

2.2

On the other hand, transform-domain watermarking techniques are typically much more robust to image manipulation compared to the spatial domain techniques. This is because the transform domain does not use the original image for embedding the watermark data. In addition, a transform domain algorithm spreads the watermark data over all parts of the image. Additionally, frequency domain-based techniques can embed more bits for the watermark and are more robust to attack. Furthermore, most of the images are available in the transform domain. Some transforms such as DCT and DWT are used for watermarking in the frequency domain. Most DCT-based techniques work with 8×8 blocks [3].

3  Watermark Algorithms Based on DCT and DWT

We discuss two and six watermark algorithms, which have been proposed based on the DCT and DWT, respectively. We focus more on wavelet-domain watermarking.
3.1

Cox et al. [3] proposed the spread spectrum watermarking algorithm. In this technique, the watermark data is spread over many frequency values so that the energy in any one value is very small and undetectable. To implement this algorithm, a sequence of values V = v1, v2, ..., vn is extracted from each image I. Then the watermark data X = x1, x2, ..., xn is inserted into the extracted values V to obtain a sequence V' = v'1, v'2, ..., v'n, using a scaling factor α, with the following equations:

v'_i = v_i + α·x_i    (1)

v'_i = v_i·(1 + α·x_i)    (2)
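A sketch of the embedding of Eq. (2) on a full-frame DCT follows; the value of alpha, the number of embedded coefficients, and the use of SciPy's DCT are illustrative assumptions (Cox et al. embed the watermark in the n largest-magnitude coefficients).

import numpy as np
from scipy.fftpack import dct, idct

# Spread-spectrum embedding, Eq. (2): scale the n largest-magnitude DCT
# coefficients by (1 + alpha * x_i). 'image' is a 2-D float array.
def cox_embed(image, watermark, alpha=0.1):
    C = dct(dct(image, axis=0, norm='ortho'), axis=1, norm='ortho')
    flat = C.ravel()                         # view into C
    order = np.argsort(np.abs(flat))[::-1]
    idx = order[1:len(watermark) + 1]        # skip the largest (typically DC)
    flat[idx] *= 1.0 + alpha * watermark     # v'_i = v_i * (1 + alpha * x_i)
    return idct(idct(C, axis=1, norm='ortho'), axis=0, norm='ortho')

# The watermark X can be drawn as np.random.normal(0.0, 1.0, 1000),
# matching the Gaussian sequences used by Cox et al.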
3.2
Xie et al. [15] developed a blind watermark technique in the DWT domain. Xie's algorithm modifies the wavelet coefficients using a median filter with a 1×3 sliding window: a non-overlapping 3×1 window runs through the entire low frequency band of the wavelet coefficients. For example, the elements in the window are denoted b1, b2, b3, corresponding to the coordinates (i−1, j), (i, j), (i+1, j), respectively, and they are sorted as b(1) ≤ b(2) ≤ b(3). Xia et al. [14] insert pseudo-random codes into the large coefficients at the high and middle frequency bands of the DWT of an image. The
idea is the same as the spread spectrum watermarking
idea proposed by Cox et al. [3]. A pseudo-random sequence, Gaussian noise sequence N [m, n] with mean 0
and variance 1 are inserted to the largest wavelet coefficients. Wavelet coefficients at the lowest resolution
are not changed. In other words, Xia embedded the
watermark data to all sub-bands except LL sub-band.
A watermark technique based on the DWT has been proposed by Wang et al. [13]. They search for significant wavelet coefficients in different sub-bands to embed the watermark data. The found significant coefficients are sorted according to their perceptual importance. The watermark data is adaptively weighted in different sub-bands to achieve robustness. Tsun et al. [7] also proposed a watermarking algorithm using the DWT. In order to embed the watermark equally across the whole image, Kim et al. [4] embedded watermark data into all sub-bands. The watermark data is generated using a Gaussian distributed random vector. Level-adaptive thresholding is used in order to select significant coefficients for each sub-band and the different decomposition levels. They used 3-level decompositions, and the length of the watermark is about 1000.
Dugad et al. [1] insert the watermark data into sub-band coefficients larger than a given threshold T1, except in the low-pass sub-band. Picking all wavelet coefficients above a threshold is a natural way of adapting the amount of watermark added to the image. They used three 2D DWT decomposition levels. Robustness requires the watermark to be added to significant coefficients in the transform domain. However, the order and number of these significant coefficients can change due to various image manipulations. Adding watermark data to significant wavelet coefficients in the high frequency bands is equivalent to adding the watermark to the edge areas of the image, which makes the watermark invisible to the human eye.
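A rough sketch of this thresholded embedding rule is given below, assuming the PyWavelets package. The threshold T1, the strength alpha, and the additive form v' = v + alpha·|v|·x on above-threshold detail coefficients are our reading of the scheme, so treat it as illustrative rather than as Dugad et al.'s exact implementation.

    import numpy as np
    import pywt  # PyWavelets

    def thresholded_dwt_embed(img, watermark, T1=40.0, alpha=0.2, level=3):
        # Three-level 2D DWT; the low-pass (LL) band is left untouched
        # and the watermark is added only to detail coefficients whose
        # magnitude exceeds T1. watermark: 1-D NumPy array long enough
        # to cover all above-threshold coefficients.
        coeffs = pywt.wavedec2(img, 'db1', level=level)
        out, k = [coeffs[0]], 0
        for details in coeffs[1:]:
            marked = []
            for band in details:            # horizontal, vertical, diagonal
                band = band.copy()
                mask = np.abs(band) > T1
                n = int(mask.sum())
                band[mask] += alpha * np.abs(band[mask]) * watermark[k:k + n]
                k += n
                marked.append(band)
            out.append(tuple(marked))
        return pywt.waverec2(out, 'db1')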
Experimental Results

4.1 Quality Measurements

[Figure 2: Block diagram for watermark robustness experiments: the original image and the watermark data are fed to a watermark algorithm; an attack is applied; the watermark data is extracted; and its correlation with the original watermark is measured.]

Two common measurements used to quantify the error between images are the Mean Square Error (MSE) and the PSNR. Their equations are as follows:

MSE = (1/NM) Σ_{i=1}^{N} Σ_{j=1}^{M} (f(i, j) − g(i, j))² .   (3)

PSNR = 10 log₁₀ (255² / MSE) .   (4)

where the sums over i and j run over all image pixels. Increasing PSNR represents increasing fidelity of compression. In general, when the PSNR is 40 dB or larger, the two images are virtually indistinguishable by human observers; in other words, the transformed image is almost identical to the original.
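Equations (3) and (4) translate directly into code; the following minimal NumPy sketch computes both measures for two equally sized grayscale images.

    import numpy as np

    def mse(f, g):
        # Eq. (3): mean of the squared pixel differences.
        f = np.asarray(f, dtype=np.float64)
        g = np.asarray(g, dtype=np.float64)
        return np.mean((f - g) ** 2)

    def psnr(f, g, peak=255.0):
        # Eq. (4): 10 log10(255^2 / MSE) in dB; identical images give infinity.
        m = mse(f, g)
        return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)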
It is important to evaluate an image watermark algorithm on many different images, covering a range of content. Each image is watermarked using the methods described in the previous sections, and then an attack is applied. Next we try to extract the watermark and compute the amount of damage done to it. The similarity between the damaged watermark extracted from the image after the attack and the original watermark is measured using their correlation.
These correlation values have been averaged for
each watermarking algorithm and plotted against the
parameters of each attack separately. Different attacks
such as JPEG compression, EZW compression, median
filter, cropping, and rotation have been applied.
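The correlation between the original and the extracted watermark can be computed in several ways; one common choice, shown in this sketch, is the normalized correlation coefficient (the paper does not spell out its exact formula, so this is an assumption).

    import numpy as np

    def similarity(x, x_extracted):
        # Normalized correlation: 1.0 means the extracted watermark
        # matches the original perfectly; values near 0 mean it was
        # largely destroyed by the attack.
        x = np.asarray(x, dtype=np.float64)
        y = np.asarray(x_extracted, dtype=np.float64)
        return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))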
4.2 Evaluation Results
Median and Gaussian filtering, which are low-pass filters, do not have a major effect on it. The distortion rate of EZW lossy compression is higher than that of JPEG compression. This can be related to the fact that EZW is a wavelet-based method, which can achieve higher compression rates through its hierarchical coding. The median filtering attack uses a square window to find the median value. Normally the median filter can eliminate isolated noise. However, when the window size is even, and hence the middle of the window is not the middle value in the list of pixel values, more distortion is caused.

Geometric attacks have the most drastic effect on the embedded watermark. Also, using the mid-frequency components makes the watermark invisible. This gives it the robustness expected from a good watermarking method. For large filter sizes such as 15, the watermark is not completely destroyed and its correlation value remains at about 0.534. For other methods, even DWT-based methods, achieving a correlation value this high is not easy. Another algorithm which is comparable in performance to Cox's algorithm is Xie's algorithm. However, Xie's algorithm modifies the wavelet coefficients using a median filter with a 1×3 sliding window. Despite this, its performance and robustness are very close to Cox's algorithm. Cox's algorithm has the advantage of preserving the original image at the watermark detector. This makes it possible for the detector to subtract the retrieved image from the original image and use the result as a metric to evaluate how well the watermark has been embedded. This helps the algorithm avoid blind comparison, and in fact because of this feature it falls into the non-blind group of algorithms.
Method    Correlation values under the five attacks
DUGAD     0.79   0.54   0.24   0.08   0.22
KOCH      0.794  0.49   0.23   0.44   0.38
XIE       0.976  0.69   0.72   0.20   0.24
TSUAN     0.82   0.60   0.58   0.15   0.21
COX       0.99   0.94   0.975  0.22   0.44
KIM       0.44   0.28   0.18   0.15   0.06
WANG      0.92   0.61   0.48   0.19   0.11
XIA       0.83   0.55   0.40   0.61   0.30

Table 1: Test results in terms of correlation values for each watermarking method. Each row lists the correlations for one method under the five attacks applied (JPEG compression, EZW compression, median filtering, cropping, and rotation).
Lossy compression methods also try to reduce the image size by removing the small details, which correspond to the high-frequency content of the image. Therefore, watermarking methods such as Cox's algorithm, which embed the watermark in the mid-frequency content of the image, are more robust against such attacks. Secondly, when the watermarking algorithm and the attack are based on the same transform, the destructive effect of the attack is minimized. This is the case for Cox's algorithm when the JPEG lossy compression attack is applied: Cox's algorithm and JPEG are both based on the DCT.

The major consideration in frequency-based methods is the robustness of the method to different attacks. To achieve better robustness, the watermark should be embedded in the lower-frequency content of the image as far as possible, because the main watermark attacks change or eliminate the high-frequency content of the image. From this viewpoint, the main difference between the frequency-based watermarking algorithms is their choice of the frequency coefficients used for embedding the watermark data. In fact, the robustness of the algorithms also depends on the frequency at which the watermark data is added.

References

[1] R. Dugad, K. Ratakonda, and N. Ahuja, A New Wavelet-Based Scheme for Watermarking Images, Proc. IEEE Int. Conf. on Image Processing, 1998.
[2] M. S. Hsieh, D. C. Tseng, and Y. H. Huang, Hiding Digital Watermarks Using Multiresolution Wavelet Transform, IEEE Transactions on Industrial Electronics, 2001.
[6] S. J. Lee and S. H. Jung, A Survey of Watermarking Techniques Applied to Multimedia, Proc. IEEE Int. Symp. on Industrial Electronics, June 2001.
[7] C. T. Li and H. Si, Wavelet-Based Fragile Watermarking Scheme for Image Authentication, Journal of Electronic Imaging, 16(1), March 2007.
[8] B. M. Macq and J. J. Quisquater, Cryptology for Digital TV Broadcasting, Proceedings of the IEEE, 83(1):944–957, 1995.
[9] M. W. Marcellin, M. J. Gormish, A. Bilgin, and M. P. Boliek, An Overview of JPEG 2000, Proc. Data Compression Conf., March 2000.
[10] F. Petitcolas, Photo database, www.petitcolas.net/fabien/watermarking/imagedatabase/.
[11] M. Rabbani and R. Joshi, An Overview of the JPEG2000 Still Image Compression Standard, Signal Processing: Image Communication, 17(1):3–48, January 2002.
Ali Mohades
z.mirzeai@aut.ac.ir
mohades@aut.ac.ir
Abstract: Let S be a set of n points inside a simple polygon. We study the problem of bipartitioning S into two subsets such that the maximum of the geodesic diameters of the subsets is minimized. In ground transportation, where obstacles obstruct the space between the points, the geodesic metric is more convenient and useful than the Euclidean one for representing real-world problems. This paper focuses on an O(n² log n) algorithm for the bipartition problem. The proposed algorithm employs the geodesic metric for the distance between points inside a simple polygon.

Keywords: Computational geometry, Facility location, Polygon partitioning, Geodesic diameter, Geodesic distance.
Introduction
Clustering is a prominent problem of fundamental importance in operations research. This problem seeks to partition a set of points into k disjoint clusters subject to some optimization criterion. More formally, such a problem specifies a set of points S, a parameter k, a set measure μ, and a k-argument function f; the solution to the problem is a partition of S into k disjoint subsets S1, ..., Sk such that f(μ(S1), ..., μ(Sk)) is minimized. Such problems are generally NP-hard for arbitrary k, even for planar point sets and simple instances of μ and f, e.g. μ = diameter and f = maximum [3]. Let S be a set of n points inside a simple polygon P. We seek a bipartition of S that minimizes the maximum of the geodesic diameters of the two subsets.

Previous research in computational geometry considers planar points and uses the Euclidean metric. However, for ground transportation, when obstacles obstruct the space between the points, the geodesic metric is more convenient and useful than the Euclidean one for representing real-world problems. The geodesic distance between two points in a simple polygon is the length of the shortest path connecting the points that remains inside the polygon. Fig. 1 shows an example of geodesic distance.

The rest of this paper is organized as follows: Section 2 introduces some primary definitions, the proposed algorithm is presented in Section 3, and Section 4 focuses on the correctness of the algorithm. Finally, Section 5 ends the paper with a short discussion and future work.
Preliminaries
BIPARTITE algorithm
Algorithm BIPARTITE
Inputs: A set of vertices Q.
Outputs: A partitioning of Q into Q1 and Q2.
begin
  Compute the geodesic distances between all pairs of vertices of Q.
  Sort all these pairs in decreasing order of their geodesic distances.
  Keep the sorted pairs in a list L. It is obvious that L(1) = (p, q) is the pair of vertices that constructs the geodesic diameter of Q.
  Q1 = {p}.
  Q2 = {q}.
  i = 2.
  Repeat the following steps while there is a vertex not yet assigned to Q1 or Q2:
    (a, b) = L(i).

Function CV2

In this section we prove the correctness of the proposed algorithm.

Proof by contradiction: assume, to the contrary, that there exist sets T1 and T2 such that T1 and T2 are a partitioning of S and

max(DiamG(T1), DiamG(T2)) < max(DiamG(S1), DiamG(S2)).   (4.1)

Without loss of generality, suppose that the maximum value on the left side of statement (4.1) is attained by two points t1 and t2 belonging to T1, and the maximum value on the right side of statement (4.1) is attained by two points s1 and s2 belonging to S1; so DiamG(t1, t2) < DiamG(s1, s2). In the BIPARTITE algorithm, (s1, s2) is processed before (t1, t2), and therefore s1 and s2 are classified into two different sets. This contradiction shows that the supposition is false, so the given statement is true; this completes the proof.
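The listing above is truncated in the proceedings, so the following Python sketch is one plausible reading of BIPARTITE consistent with the proof: pairs are processed in decreasing order of distance, and the endpoints of each processed pair are separated whenever one of them is already assigned. Euclidean distance is used as a stand-in for the geodesic distance inside the polygon.

    import itertools, math

    def bipartite(points):
        # dist stands in for the geodesic distance; a full implementation
        # would compute shortest paths inside the simple polygon.
        dist = lambda p, q: math.dist(p, q)
        pairs = sorted(itertools.combinations(points, 2),
                       key=lambda pq: dist(*pq), reverse=True)
        p, q = pairs[0]                       # L(1): the diametral pair
        group = {p: 1, q: 2}
        for a, b in pairs[1:]:                # process L(2), L(3), ... in order
            for u, v in ((a, b), (b, a)):
                if u in group and v not in group:
                    group[v] = 3 - group[u]   # put v opposite its far partner
            if len(group) == len(points):
                break
        return ([x for x in points if group.get(x) == 1],
                [x for x in points if group.get(x) == 2])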
References

[1] J. Hershberger and S. Suri, Matrix searching with the shortest path metric, STOC '93, ACM, 1993.
[2] B. Aronov, S. Fortune, and G. Wilfong, The furthest-site geodesic Voronoi diagram, SCG '88, ACM, 1988.
Hasan Rashidi
Qazvin, Iran
Qazvin, Iran
moadabsh@gmail.com
hrashi@googlemail.com
Eslam Nazemi
Shahid Beheshti University
Department of Electrical and Computer Engineering
Tehran, Iran
nazemi@sbu.ac.ir
Abstract: Nowadays about 50 percent of all software development effort takes place in the testing phase. Lack of precision in this phase may cause irrecoverable damage or end in software failure. Automatic test case generation is one way of verifying software. Automatic path-oriented test case generation is one of the most powerful among similar approaches; it is accomplished in three main phases: control flow graph construction, path selection, and test case generation. The path selection phase is based on McCabe's proposed testing method, called basis path testing, which is accomplished by basis path set generation. The existence of infeasible paths is one of the most important problems of basis path set generation. Whereas calculating the number of infeasible paths is an undecidable task, in this paper we did our best to make such problems decidable. To solve this problem, the potentially most promising areas are predicted at the beginning; then, by finding all these points and labeling them, the potentially most promising areas are narrowed down, and at the end all infeasible paths are extracted. Besides a 45 percent improvement in execution time, this approach has produced a fully automatic tool without any tester interference.
Keywords: Test Case; Basis Path Testing; Infeasible Path; Cyclomatic Complexity; Control Flow Graph.
Introduction
white-box testing, black-box testing, and gray-box testing techniques. In white-box testing, all of the program's data structures, code, and algorithms are available. In black-box testing, or behavioral testing, the most fundamental software requirements are examined. This testing is considered complementary to white-box testing. Gray-box testing is similar to black-box testing; furthermore, to some extent, the software and its interactive components are available.

Test case generation is one of the software testing methods in the white-box testing technique. Manual
Related Works

3.1 Randomized approach

This approach is the easiest kind of test case generation. It often fails due to the low probability of discovering errors in many application programs. The advantages of the randomized approach are that it is easy to implement and fast [12]. Its disadvantages are incomplete and inappropriate coverage of program paths.

3.2 Goal-oriented approach
3.3
3.3.1
Since traversing all execution paths is impossible, selecting an appropriate path set is important. To do this, McCabe's proposed testing method [15], known as basis path testing, is mostly used. In this approach, a basis path set consisting of linearly independent paths is generated.
Definition 3. A linearly independent path traverses at
least one new edge.
3.3.3
Proposed approach

Since variable boundaries are more error-prone (more potentially promising areas) than other areas, assignment instructions and the conditions of branches are labeled with 3 values, and loop structures with 6 values. The 3 label values of assignment and conditional instructions are the boundary value, one step before it, and one step after it; the 6 label values of loop instructions are the top and bottom boundary values of the structure, the two values before them, and the two values after them. One of the disadvantages of the base algorithm of this paper is that it receives the UL variable from the tester in order to terminate possibly infinite loops, which conflicts with automatic test case generation. Our proposed algorithm will not run into an infinite loop because of decidability, so there is no need to get any variables from the tester, and it yields a fully automatic tool. The next improvement is eliminating the condition B == ∅. This condition was introduced to solve the first-found-path problem for the basis path set of the algorithm. The first discovered path is linearly independent relative to the empty set, so this condition can be omitted. By omitting the condition, the speed of the executed algorithm increases. The proposed algorithm is shown in Algorithm 3.
BOOL FeasibleBPGen() {
  Predicate(T, E);
  EPath(T, E, EP);
  Lbl(T);
  for (len = 1; ; len++) do
    while ((P = Select(len)) != NULL) do
      if (!LR(P, B)) then
        if (Feasible(P)) then
          add P to B;
          if (Size(B) == FindRC(T)) then
            return TRUE;
          end
        end
      end
    end
  end
}
Algorithm 3: Proposed FeasibleBPGen Algorithm
Experiments
In this section, to show the efficiency of the proposed approach, it has been examined on the GNU coreutils package. The acquired results are available in Table 1. The first column of this table gives the name of the tested function; the second column, the number of feasible paths; the third column, the number of paths acquired by [9]; and the fourth column, the number of paths acquired by the proposed algorithm. All functions execute by the method of [9] on a Pentium IV PC with a 3.2 GHz CPU and 1 GB of memory in 10 seconds, while the proposed algorithm in this paper executes in 5.5 seconds, which amounts to a 45% time saving. Also, because no tester involvement is needed, it has turned into a fully automatic tool.
Function Name      |B|   |B| of Base   V(G)
getop()            11    11            11
strol()             7     7             7
InsertionSort()     5     5             5
dosify()            8     8             8
bsd_split_3()       6     6             6
attach()            5     5             5
remove_suffix()     3     3             3
quote_for_env()     4     4             4
isint()             9     9             9

Table 1: Test results for the examined coreutils functions.
Conclusion

Automatic test case generation is one of the most practical testing techniques, due to increasing reliability and decreasing cost. In this paper, a fully automatic tool has been presented by limiting the domain of values, predicting and discovering the potentially most promising areas and related paths, labeling the paths, and finally extracting the feasible paths. By calculating the number of infeasible paths, this tool reduces the execution time. The experimental results show a 45 percent time saving. Also, since the proposed approach yields a fully automatic tool, there is no need for a tester's support. In future work, the proposed approach in this paper can be used as a base for producing test cases that support more constraints, such as polynomial constraints and function calls.
References

[1] C. Kai-Yuan, D. Zhao, and L. Ke, Software testing processes as a linear dynamic system, Information Sciences 178 (2008), no. 6, 1558–1597.
[9] Y. Jun and J. Zhang, An Efficient Method to Generate Feasible Paths for Basis Path Testing, Information Processing Letters 107 (2008), no. 3–4, 87–92.
[10] T. Sangeeta and K. Dharmender, Automatic Test Case Generation of C Program Using CFG, IJCSI International Journal of Computer Science Issues 7 (2010), no. 4, 27–31.
[11] P. Lili, W. Tiane, and Q. Jiaohua, Research on Infeasible Branch-Based Infeasible Path in Program, JDCTA: International Journal of Digital Content Technology and its Applications 15 (2011), no. 5, 166–174.
[12] K.H. Chang, J.H. Cross, W.H. Carlisle, and L. Shih-Sung, A performance evaluation of heuristics-based test case generation methods for software branch coverage, International Journal of Software Engineering and Knowledge Engineering 6 (1996), no. 4, 585560.
[13] M. Xiao, M. El-Attar, M. Reformat, and J. Miller, Empirical evaluation of optimization algorithms when used in goal-oriented automated test data generation techniques, Empirical Software Engineering, Springer 12 (2007), 183–239.
[14] Y. Jia and M. Harman, An analysis and survey of the development of mutation testing, IEEE Transactions on Software Engineering 28 (2010), 20–32.
[15] T.J. McCabe, A Software Complexity Measure, IEEE Trans. Software Engineering 2 (1976), 308–320.
[16] S. Anand, P. Godefroid, and N. Tillmann, Demand-driven compositional symbolic execution, Tools and Algorithms for the Construction and Analysis of Systems, Springer 14 (2008), 367–381.
Masuod Sabaei
bgholamiyan@yahoo.com
sabaei@aut.ac.ir
Abstract: In wireless sensor networks, energy consumption is one of the factors that influences the lifetime of the network. Therefore, reducing energy consumption is one of the important criteria in our design. Topology control is a method for assigning appropriate transmission ranges to nodes by regulating their transmission power, in such a way that energy consumption is reduced. During topology control, some properties of the network must be maintained, such as the connectivity of all topology nodes to each other and the prevention of producing unnecessary regions.

In this article, we propose an algorithm for creating a topology with the least interference while covering a delay constraint. In this algorithm, we divide the network environment in such a way that the delay constraint, based on the number of hops from each sensor node to the sink node, can be met. Also, by considering the transmission power and the traffic of the nodes in each cell, this division distributes energy consumption in a balanced way across all cells. Then, by using the rate of energy obtained in each cell, the radius of each cell and the transmission range between the nodes of adjacent cells are computed. The simulation results show that the proposed algorithm reduces energy consumption by at least 10 percent compared to similar algorithms.

Keywords: Topology Control; Energy Consumption; Transmission Power; Delay; Traffic; Asymmetric Division.
Introduction
One of the most important challenges of wireless sensor networks (WSNs) is the energy constraint. In general, wireless communication in these networks consumes more energy than signal processing, computation, sensing, etc. Thus, protocols and algorithms that improve energy efficiency can help in solving this problem [1][2][3].

In some applications of wireless sensor networks, like environment monitoring for sensing the presence of natural gas, packets are required to reach the sink node within a certain time period [7]. On the other hand, due to the energy constraint, we should use an algorithm that is able to cover the delay constraint with the least amount of energy consumption [9][10]. The traffic rate is also an effective factor in energy consumption [4]. Therefore, in order to reduce
Proposed Algorithm
In this article, the Non-Polar Cell Based (NPCB) algorithm is proposed for asymmetric network division, in order to decrease the transmission power and energy consumption given the traffic and the delay constraint to be covered.

2.1 Network Division

In this algorithm, the network environment is considered circular, with the sink node at its center; location-aware sensor nodes are dispersed randomly in the environment and are fixed in location. In the NPCB algorithm, in order to balance the energy consumed across all cells, the network environment is divided asymmetrically so that nodes with lower traffic transmit over a larger range and nodes with higher traffic transmit over a smaller range. This is done considering the traffic and the delay constraint. This type of division is used because the more we move toward the sink node, the more the traffic rises; as a result, interference increases and packets are lost. A lost packet must be retransmitted, and these retransmissions lead to more power and energy consumption. Thus, making the transmission range shorter according to equations (1) and (2) reduces the power and energy consumption. For this purpose, we suppose that the delay constraint is T, that is, the number of permissible hops from each originating sensor node to the sink node must be smaller than or equal to T. In addition, as in Fig. 2, a sensor node in each cell can transmit in a single hop to a sensor node of the next cell.
E_M = P_t r^k n_M   (3)

d_M = (c r^k / 4)(L² − b₁²) if M = 1, and d_M = (c r^k / 4)(L² − (b₁ + … + b_{M−1})²)(b_{M−1} + b_M)² if M > 1.   (7)

2.2 Indexing cells
2.3

ρ = √(x² + y²),  θ = tan⁻¹(x/y)   (8)

2.4

I(j) = I(i) − 1,  I(j) ≥ 1   (9)

The condition I(j) ≥ 1 indicates that when we reach the first cell, we must transmit to the sink node. After finding the nodes located in the next cell, we select the node which is at the shortest distance from the present node.

3 Performance evaluation

The efficiency of the proposed algorithm has been compared with that of the DCEERP and SDEL algorithms using MATLAB, and the efficiency measurement was based on the total energy consumed by the sensor nodes. The energy used in transmitting a unit

3.1 First scenario

3.2 Second scenario

In this scenario, the number of fixed nodes is first set to 1000; then the total energy used by the algorithms for different numbers of hops is compared. As mentioned for the NPCB algorithm, the closest node in the adjacent cell is selected as the next node. Then, nodes are permitted to transmit on the topology links. The total energy consumption of the network is computed for the state in which all the nodes inside the network transmit traffic toward the sink node. Fig. 6 shows the simulation results for a fixed number of nodes and different numbers of hops. As can be observed in the figure, more energy is used for a small number of hops than when the number of hops is larger. This is because as the number of hops increases, the transmission range decreases and, as a result, energy consumption decreases too. As can be seen, the NPCB algorithm improves energy consumption by at least 10 percent compared to the other algorithms mentioned.
Conclusion

In this paper we presented a new algorithm for topology control that covers a delay constraint for the given traffic. This algorithm reduces energy consumption by asymmetric division of the network environment, which decreases the transmission range of high-traffic nodes and increases the transmission range of low-traffic nodes. The simulations showed that the NPCB algorithm improves energy consumption compared to other similar methods. Given the traffic and the permissible delay constraint, the proposed algorithm divides the network environment in such a way that the energy used is balanced among the cells. Then, by considering the energy consumed by each cell, we can compute the radius of each cell and the transmission range between two adjacent cells.

[Figure 4: Topology produced for 2500 nodes in 6 cells.]
References
[1] J. Pan, Y. T. Hou, L. Cai, Y. Shi, S. X. Shen, and V. Tech,
Topology Control for Wireless Sensor Networks, ACM International Conference on Mobile Computing and Networking , MobiCom03 (2003), 286-299.
[2] P. Santi, Topology control in wireless ad hoc and sensor
networks, ACM Comput. Surv. 37/2005 (2005), 164-194.
[3] M. A. Labrador and P. Wightman, Topology Control in
Wireless Sensor Networks, Springer (2009), 1-100.
[4] S. Zarifzadeh, A. Nayyeri, and N. Yazdani, Efficient Construction of Network Topology to Conserve Energy in
Wireless Ad-Hoc Networks, Computer Communications
31/2008 (2008), 160-173.
[5] T. He, J. Stankovic, C. Lu, and T. Abdelzaher, SPEED: A stateless protocol for real-time communication in sensor networks, In ICDCS (2003).
[6] P. K. Pothuri, V. Sarangan, and J. P. Thomas, Delayconstrained energy efficient routing in wireless sensor networks through topology control, Proc. IEEE ,Int. Conf.Netw.
Sensing Control (2006), 35-41.
[7] H. Xu, L. Huang, W. Liu, G. Wang, and Y. Wang, Topology
control for delay-constraint data collection in wireless sensor networks, Computer Communications 32/2009 (2009),
1820-1828.
[8] M. Burkhart, P. Rickenbach, and R. Wattenhofer, Does topology control reduce interference?, Proceedings of the 5th ACM International Symposium on Mobile Ad Hoc Networking and Computing, MobiHoc '04 (2004).
[9] K. Akkaya and M. Younis, Energy-aware delay-constrained
data in wireless sensor networks, Journal of Communication Systems (2004).
Shahriar Lotfi
Tabriz University
Technical Group
Technical Group
chaieasl@gmail.com
shahriar lotfi@tabrizu.ac.ir
askari@pnu.ac.ir
Abstract: In this paper, an evolutionary algorithm is presented for solving the layout of workstations in organizations. Because such layout problems are NP-hard, providing solutions for large problem sizes is not possible through exact mathematical methods; thus, evolutionary approaches are able to offer optimal or near-optimal solutions for these problems. In this paper, the purpose of layout in organizations is maximizing the closeness of workstations to each other. The proposed solution performs the layout by using the amount of working relationships between people and workstations, together with evolutionary algorithms and a clustering process. The layout process is performed in two stages: the first stage arranges the workstations, and the second stage arranges the workstations in existing buildings. The proposed algorithm has been evaluated with illustrative examples and a case study.
Keywords: The facility layout problem; layout of workstations; clustering; evolutionary algorithms.
Introduction
evaluation and practical results will be presented in Section 4.

Problem Statement

The objective is to maximize the dependencies inside each cluster while minimizing the inter-dependencies between clusters. So we have [5]:

E_{i,j} = 0, if i = j;   E_{i,j} = ε_{ij} / (2 N_i N_j), if i ≠ j.   (2)

OF = (1/K) Σ_{i=1}^{K} A_i − (1 / (K(K−1)/2)) Σ_{i,j=1}^{K} E_{i,j},   K > 1,  0 ≤ A_i ≤ 1.   (3)
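A small sketch of Eq. (3), under the assumption that A is the vector of intra-connectivities A_i and E the symmetric matrix of inter-connectivities E_{i,j} from Eq. (2); the inter-term averages E_{i,j} over the K(K−1)/2 distinct cluster pairs.

    import numpy as np

    def objective(A, E):
        # Eq. (3): mean intra-connectivity minus mean inter-connectivity.
        K = len(A)
        assert K > 1
        intra = np.sum(A) / K
        inter = np.sum(np.triu(E, k=1)) / (K * (K - 1) / 2)
        return intra - inter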
3.5

3.6

3.7 Constraint

Evaluation and Practical Results

In this section, the validity of the proposed algorithm is assessed.

4.1
4.2

4.3 Case Study
Conclusion
Facilities layout design is considered a very important and influential strategy in any administrative ac-
References

[1] R.S. Liggett, Automated facilities layout: past, present and future, Automation in Construction 9 (2000), 197–215.
[2] A. Drira, H. Pierreval, and S. Hajri-Gabouj, Facility layout problems: A survey, Annual Reviews in Control 31 (2007), 255–267.
[3] K.Y. Lee, S.N. Han, and M. Roh, An improved genetic algorithm for facility layout problems having inner structure walls and passages, Comput Oper Res 30 (2003), 117–138.
[4] T. Hamann and F. Vernadat, The intra-cell layout problem in automated manufacturing systems, Rapports de Recherche 1603 (1992).
[5] D. Doval, S. Mancoridis, and B. Mitchell, Automatic Clustering of Software Systems Using a Genetic Algorithm, Doctor of Philosophy, Drexel University, Philadelphia, 19104 (1999).
[6] Ch. Hicks, A Genetic Algorithm Tool for Optimising Cellular or Functional Layouts in the Capital Goods Industry, Int. J. Production Economics 104 (2006), 598–614.
[7] F. Azadivar, J.J. Wang, and B. Sadeghi Bigham, Facility layout optimization using simulation and genetic algorithms, Int J Prod Res 38 (2000), 4369–4383.
[8] R. ChaieAsl and Sh. Lotfi, Layout of human resource with evolutionary algorithm approach, First National Conference of Scholars for Computer and IT Science, Tabriz University (2011).
majid.ag1@gmail.com
athaghighat@yahoo.com
Abstract: In this paper we propose a routing metric for Wireless Mesh Networks called Channel Available Bandwidth (CAB) that is aware of each link's load. This metric assigns to each link a weight based on the different causes of channel busyness and on the available bandwidth. The aim is to find high-throughput paths and to avoid routing traffic into congested network areas, thereby balancing the traffic. Using the obtained weights of the links that compose a path, we are able to take the load and interference of each link into account. We combine this new metric with the AODV routing protocol and evaluate its performance using simulation. We show that, by distinguishing between different causes of channel busyness, the proposed metric obtains higher-throughput paths compared with other related metrics.
Keywords: Wireless Mesh Network; Routing; Routing Metric; Channel Busyness; Load Aware; Interference Aware.
Introduction
Wireless Mesh Networks (WMNs) is an emerging network technology that offers wireless broadband connectivity[1]. They can provide a cost-effective and
flexible solution for extending broadband services to
areas where cabling is difficult. In WMNs, most of the
nodes are either static or minimally mobile and do not
rely on batteries. The goal of routing algorithms is
hence to improve network capacity or the performance
of individual communications, instead of dealing with
mobility or minimizing power consumption.
Since most users of WMNs are interested in accessing the internet or using services provided by some
servers, the traffic is mainly directed towards gateways,
or from gateways to clients. Based on the specific requirements of WMNs, we believe that a good routing
protocol should find paths with minimum delay, max1
imum data rate and low levels of interference. In this
(1)
ET X =
d
dr
f
sense, an effective routing metric, which is used by
routing protocols, must be able to capture the quality
The ETT routing metric[6], is an improvement on
of the links effectively.
The simplest and most commonly-used routing metric ETX made by considering the differences in link trans Corresponding
S / B   (2)

CB = (T_busy + T_transmitting) / (T_idle + T_busy + T_transmitting)   (3)

where CB represents the channel busyness ratio, T_busy is the amount of time the node senses the channel as busy, T_transmitting is the amount of time the node transmits frames, and T_idle is the amount of time the node senses the channel as idle. The quantities T_successful and T_collision are computed as follows [9]:

T_successful = data/bitrate + ack/bitrate + SIFS + DIFS ,
T_collision = data/bitrate + ack_timeout + SIFS + DIFS .   (4)
Proposed Metric
2.1
2.2

In the previous stage we computed the channel busyness ratio. Now, we estimate the available bandwidth on each link. The importance of this step becomes clear when we note that, due to changes in the wireless network environment, the data rate of a link varies; in other words, wireless links are multi-rate [6]. When nodes in a wireless network become aware of the unsuitable condition of their environment, due to multiple unsuccessful transmissions, sensing a busy medium, or other reasons, they automatically reduce their rate. The proposed metric, CAB, therefore takes the present data rate into consideration in addition to the channel busyness ratio, and is computed as follows:

CAB = (1 − CB) × S / T_s .   (6)
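Equations (3) and (6) combine as in the sketch below; interpreting S and T_s as the payload delivered in a successful transmission and the time it takes is our assumption.

    def channel_busyness(t_idle, t_busy, t_transmitting):
        # Eq. (3): fraction of time the channel is busy or used for sending.
        return (t_busy + t_transmitting) / (t_idle + t_busy + t_transmitting)

    def cab(t_idle, t_busy, t_transmitting, s, t_s):
        # Eq. (6): the idle fraction of the channel times the link's
        # current delivery rate S / T_s.
        cb = channel_busyness(t_idle, t_busy, t_transmitting)
        return (1.0 - cb) * (s / t_s)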
2.3

Performance evaluation

3.1 First scenario
In the first scenario, it is shown that the CAB metric can recognize the existence of traffic in the network and steer new traffic flows away from the congested area. The simulation was conducted in an area of 270 m × 270 m containing 36 nodes. A CBR flow was used as the interfering traffic flow. The size of the transmitted packets was 1400 bytes. Fig. 2 shows the throughput of the link when there is interfering traffic in the network; as this traffic increases, the throughput decreases. As shown, when the interfering traffic in the network is low, all three metrics, that is, ETX, ETT, and CAB, have high throughput. Since the ETX and ETT metrics are not sensitive to interference and the busyness of other nodes, they suffer a large reduction in throughput as this traffic increases. The results show the superiority of the CAB metric over the other metrics.
3.2 Second scenario
are 1400 bytes. Figs. 3 and 4 illustrate the throughput and the end-to-end delay of the whole network for the compared metrics, respectively. CAB has a higher throughput than the other two metrics; the distribution of traffic into less congested areas of the network by this metric is the reason for its higher throughput. Concerning end-to-end delay, since the CAB metric avoids selecting nodes with a high busyness ratio, it shows higher delay at first; but over time, since it balances the traffic better, its end-to-end delay is lower, and the standard deviation of the delay is smaller as well.
Conclusion

The proposed metric also reflects the real data rate of each link and reduces packet loss. The conducted simulations illustrated that CAB can increase the mean network throughput and reduce the end-to-end delay compared to other metrics. Our metric collects all the required information locally and, unlike the ETX and ETT metrics, does not impose the overhead of broadcast or unicast probe packets on the network. In future work, we are going to evaluate the performance of our metric in multi-channel wireless mesh networks, because the information obtained from CAB at each node supplied with multiple channels for transmitting or receiving is very valuable.
References

[1] I. F. Akyildiz, X. Wang, and W. Wang, Wireless mesh networks: a survey, Computer Networks 47 (2005), 445–487.
[2] David B. Johnson, David A. Maltz, and Yih-Chun Hu, The Dynamic Source Routing Protocol for Mobile Ad Hoc Networks (DSR), Internet-Draft (2004).
[3] C. Perkins, E. Belding-Royer, and S. Das, Ad hoc On-demand Distance Vector (AODV) Routing, RFC 3561 (2003).
[4] C. E. Perkins and P. Bhagwat, Highly dynamic Destination-Sequenced Distance-Vector routing (DSDV) for mobile computers, SIGCOMM Comput. Commun. Rev. 24 (1994), 234–244.
[5] D. S. J. De Couto, D. Aguayo, J. Bicket, and R. Morris, A High-Throughput Path Metric for Multi-Hop Wireless Routing, 9th Annual International Conference on Mobile Computing (MobiCom '03) (2003).
[6] R. Draves, J. Padhye, and B. Zill, Routing in Multi-Radio, Multi-Hop Wireless Mesh Networks, MobiCom '04: Proceedings of the 10th Annual International Conference on Mobile Computing and Networking, Philadelphia, PA, USA: ACM Press (2004).
[7] T. Salonidis, M. Garetto, A. Saha, and E. Knightly, Identifying high throughput paths in 802.11 mesh networks: a model-based approach, IEEE International Conference on Network Protocols (ICNP) (2007), 21–30.
[8] Nemesio A. Macabale Jr., Roel M. Ocampo, and Cedric Angelo M. Festin, Attainable Capacity Aware Routing Metric for Wireless Mesh Networks, The Second International Conference on Adaptive and Self-Adaptive Systems and Applications (2010).
[9] H. Zhai, X. Chen, and Y. Fang, How Well Can the IEEE 802.11 Wireless LAN Support Quality of Service?, IEEE Transactions on Wireless Communications 4 (2005), 3084–3094.
Ali Mohades
farhad.maleki@aut.ac.ir
mohades@aut.ac.ir
F. Zare-Mirakabad
M. E. Shiri
f.zare@aut.ac.ir
shiri@aut.ac.ir
Afsane Bijari
University of Economic Science
Department of Knowledge engineering and decision science
afsaneh bijari@yahoo.com
Abstract: In this paper, PSOHS, a new version of Harmony Search, is presented. While creating a new harmony, the proposed algorithm takes advantage of both the social and cognitive components of the Particle Swarm Optimization algorithm. To examine the proposed algorithm, PSOHS is tested on a battery of standard benchmark functions, and the results certify the superiority of PSOHS over the HS, IHS, GHS, and EGHS algorithms.
Introduction
algorithms are introduced in more detail. Section 3 presents a brief overview of Particle Swarm Optimization. In Section 4, the proposed algorithm is presented. Section 5 focuses on the experimental results. The paper ends with a brief conclusion and future research proposals in Section 6.

2 Harmony Search and Particle Swarm Optimization

The purpose of this section is to provide a detailed description of the necessary information which is essential to pursue the rest of the paper. In the following subsections HS, IHS, GHS, and EGHS are presented. In addition, the Particle Swarm Optimization algorithm is depicted in this section.

2.1 Harmony Search

Harmony Search (HS) was inspired by the musical process of searching for a perfect state of harmony. In HS, each potential solution of the problem is coded as a feature vector named a harmony, and the goal is to find a global optimum as determined by a fitness function. HS takes advantage of a limited subset of successful experiences, i.e. the fittest solutions. These harmonies are gathered in a memory called the Harmony Memory (HM). The algorithm continues for a number of iterations; in each iteration a new harmony is generated. When the new harmony has been generated, it is compared with the worst harmony in the harmony memory using the fitness function; if the new harmony dominates the worst harmony, the new harmony is used as a substitute for the worst one.

Let us use HM(i, j) to denote the j-th component of the i-th harmony in the Harmony Memory. In order to create a new harmony, all components or features of this new harmony should be computed; let us name this new harmony H. HS employs three strategies to compute each component of H, i.e. H(j). In the first strategy, one of the harmonies in HM is selected randomly, e.g. the i-th harmony, and the value of HM(i, j) is assigned to H(j). In the second strategy, one of the harmonies in HM is selected randomly, e.g. the i-th harmony, and a value adjacent to HM(i, j) is assigned to H(j). Finally, in the third strategy, a random value from the possible range is used as H(j). To compute each H(j), HS uses one of these three strategies and therefore needs to decide on one of them. HS uses two parameters named the Harmony Memory Consideration Rate (HMCR) and the Pitch Adjustment Rate (PAR) to decide on each strategy. Fig. 1 depicts the pseudo-code of this algorithm.

Algorithm HS
Data:
  N = size of harmonies in HM
  d = dimension of each harmony in HM
  PAR, HMCR, bw
Result:
  The best harmony in HM
Begin
  while Stopping Criteria not Satisfied do
    j = 1;
    while j ≤ d do
      if rand() ≤ HMCR then
        k = a random number in {1, ..., N};
        H(j) = HM(k, j);
        if rand() ≤ PAR then
          H(j) = HM(k, j) ± bw;
        end
      else
        H(j) = a random value from the possible range;
      end
      j = j + 1;
    end
    if harmony H dominates the worst harmony in HM then
      substitute H for the worst harmony in HM
    end
  end
end
Algorithm 1: Harmony Search Pseudo-code

IHS, GHS, and EGHS are variants of the original Harmony Search; let us take a look at each one.

2.1.1 IHS Algorithm

IHS was proposed to overcome the shortcomings of the original harmony search. It is exactly the same as HS except for PAR and bw; IHS dynamically updates PAR and bw according to equations (1) and (2), respectively:

PAR(t) = PAR_Min + ((PAR_Max − PAR_Min) / NI) × t   (1)

bw(t) = bw_Max × exp( (ln(bw_Min / bw_Max) / NI) × t )   (2)

where PAR(t) is the Pitch Adjustment Rate for generation t; PAR_Min is the minimum Pitch Adjustment Rate; PAR_Max is the maximum Pitch Adjustment Rate.
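The three strategies in Algorithm 1 can be condensed into a few lines of Python; the clamping of adjusted values to the allowed range is our assumption, since the pseudo-code leaves boundary handling implicit.

    import random

    def new_harmony(hm, hmcr, par, bw, low, high):
        # hm: list of harmonies, each a list of d components.
        d = len(hm[0])
        h = []
        for j in range(d):
            if random.random() <= hmcr:
                value = random.choice(hm)[j]           # memory consideration
                if random.random() <= par:
                    value += random.uniform(-bw, bw)   # pitch adjustment
                    value = min(max(value, low), high)
            else:
                value = random.uniform(low, high)      # random selection
            h.append(value)
        return h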
h(j) = HM(best, k)   (3)

where k is a random integer from {1, 2, ..., d}, and best is the index of the best harmony in HM [14].

2.1.3 EGHS Algorithm

2.2 Particle Swarm Optimization

V_{i,j}(t+1) = V_{i,j}(t) + C1 r_{1,j}(t) (X_{p,j}(t) − X_{i,j}(t)) + C2 r_{2,j}(t) (X_{g,j}(t) − X_{i,j}(t))   (4)

where V_{i,j}(t) denotes the j-th component of the i-th particle's velocity vector at time step t; X_{i,j}(t) represents the j-th component of the i-th particle's position vector at time step t; C1 and C2 are positive acceleration constants used to scale the contributions of the cognitive and social components, respectively; and r_{1,j}(t) and r_{2,j}(t) are uniformly distributed random values in [0, 1]. X_{p,j}(t) is the best position visited by the i-th particle since the first time step, and X_{g,j}(t) is the best position found by the swarm, i.e. by all particles. Both X_{p,j}(t) and X_{g,j}(t) are determined by the use of a fitness function which evaluates each particle to find how close the corresponding solution is to the optimum. It is obvious that the velocity vector drives the optimization process and reflects both the social and the cognitive knowledge of the particles. Originally, the gbest and lbest PSO algorithms were developed, which differ in the size of their neighborhoods [5]. In gbest PSO, each particle is supposed to be the neighbor of all other particles. In lbest PSO, the degree of connectivity among the population is less than in gbest PSO. There are many different social network structures, such as wheel, pyramid, four clusters, and Von Neumann, which have been developed and studied for PSO [5].
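One iteration of the gbest PSO update described by Eq. (4) is sketched below; the values C1 = C2 = 2.0 are common illustrative constants, not taken from the paper.

    import numpy as np

    def pso_step(x, v, x_personal, x_global, c1=2.0, c2=2.0):
        # Eq. (4): cognitive pull toward the particle's own best position
        # plus social pull toward the swarm's best position.
        r1 = np.random.rand(*x.shape)
        r2 = np.random.rand(*x.shape)
        v_new = v + c1 * r1 * (x_personal - x) + c2 * r2 * (x_global - x)
        return x + v_new, v_new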
PSOHS Algorithm
((HMCR_Max − HMCR_Min) / NI) × t   (5)
Experimental Results

Conclusion

References
[1] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach (3rd Edition), Prentice Hall, Chapter 4, pages 125–126, 2010.
[2] T. Mitchell, Machine Learning, McGraw Hill, Chapter 9, pages 249–270, 1997.
[3] F. Glover, Tabu search, Part I, ORSA Journal on Computing 1 (1989), 190–206.
[4] F. Glover, Tabu search, Part II, INFORMS Journal on Computing 2 (1990), 4–32.
[5] A. P. Engelbrecht, Computational Intelligence: An Introduction, John Wiley & Sons Ltd, pages 285–357, 2007.
[6] A. P. Engelbrecht, Computational Intelligence: An Introduction, John Wiley & Sons Ltd, pages 359–411, 2007.
[7] S. Bitam, M. Batouche, and E. Talbi, A survey on bee colony algorithms, IEEE International Symposium on Parallel & Distributed Processing Workshops and PhD Forum (IPDPSW) (2010).
[8] Z. W. Geem and J. H. Kim, A new heuristic optimization algorithm: Harmony search, Simulation 76 (2001), 60–68.
[9] Shia F., X. Xia, C. Chang, G. Xu, X. Qina, and Z. Jia, An Application in Frequency Assignment Based on Improved Discrete Harmony Search Algorithm, 2011 International Conference on Advances in Engineering (2011).
[15] D. Zou, L. Gao, S. Li, and J. Wu, An effective global harmony search algorithm for reliability problems, Expert Systems with Applications 38 (2011), 4642–4648.
University of Isfahan
University of Isfahan
Department of Computer
Department of Computer
Pourzaferani@eng.ui.ac.ir
Nematbakhsh@eng.ui.ac.ir
Abstract: Broken links are a well-known obstacle for the Web of Data. A few studies have repaired broken links from the destination point of the link. These approaches have two major problems: (i) a single point of failure, and (ii) inaccurate changes reported from the destination to the source dataset. In this paper, we introduce a new method to repair broken links from the source point of the link. When a broken link is detected, it is repaired instantly. For this purpose, the algorithm finds the new address of the desired target in the destination data source, using a Just-In-Time method. We introduce two sets, which we call Superiors and Inferiors. Through these two sets, we create an exclusive graph structure for every entity that needs to be observed, and construct a fingerprint-like identification for the entity. The results show that almost 90 percent of broken links that refer to a person entity in DBpedia have been repaired.
Keywords: Broken Links; Link Integrity; Superior Entity; Inferior Entity; Just-In-Time.
Introduction
connected by RDF triples that link classes and properties in one vocabulary to those in another, thereby defining mappings between related vocabularies [3]. The ontologies used in this technology give the links connecting entities a meaning that is understandable by machines. Datasets published according to the Linked Data publishing rules, together with semantic links such as RDF links connecting these datasets to each other, make up a global cloud called the Web of Data.

Although there are various types of tools for publishing information according to the Linked Data publishing rules, and many tools for discovering semantic links between entities, there are no suitable tools to preserve these links [4]. The purpose of link preservation is maintaining links for a long period of time; it also refers to activities taken to fix broken links. Broken links appear in two situations: (i) the destination of the link is not dereferenceable: in this case, the destination entity's address has been deleted or changed; (ii) the definition of the entity has changed over time: in this case the entity is dereferenceable, but its semantics have changed in such a way that it does not mean what
Problem Statement

Editing Problem: although the destination entity is accessible, its semantics have changed in a way that no longer means what the author meant at creation.

Broken links are a considerable problem, as they interrupt navigational paths in a network, leading to the practical unavailability of information [3]. Broken links in the Web of Data are even worse than in the document Web: human users are able to find alternative paths to the information they are looking for. Such alternatives include directly manipulating the HTTP URI they have entered into a Web browser or using search engines to re-find the respective information. They may even decide that they do not need this information at this point in time [4]. This is obviously much harder for machine actors. Therefore, redirecting the problem to the application is not suitable, because if

Proposed Algorithm

The proposed algorithm consists of five basic modules, which we describe briefly.
[Figure: Architecture of the proposed algorithm, showing the Superiors and Inferiors repositories, local and outside links, the destination data source, and the Crawler Controller, Analyzer, Ranking, and Query Maker modules.]
3.1 Entity Repository

3.2 Crawler Controller

There is a relationship between this module and the Analyzer module. The Analyzer module gives appropriate queries to the Crawler Controller, and this module starts its task with the initial entities in the repository. Afterward, for each entity from the previous step, it builds the Superiors and Inferiors sets.

Superior Entity: an entity which has a semantic link that references the observed entity. With the observed entity described as (S, P, O) in the RDF data model, each entity that has this entity as its Object is included in the Superiors set.

Inferior Entity: an entity to which the observed entity has a semantic link. Analogously to the previous definition, each entity that has the observed entity as its Subject is included in the Inferiors set.

3.3 Query Maker

When the Analyzer module has analyzed the Superiors and Inferiors sets, it gives the statistics to the Query Maker module; in this module the appropriate query is built and supplied to the Crawler Controller module. SPARQL is used as the RDF query language to make the query.

3.4 Analyzer
3.5 Ranking

e1 = max(sim(I(S_e), I_e))   (1)

e2 = max(sim(S(I_e), S_e)),   Target = max(e1, e2)   (2)

Evaluation

4.1 Evaluation method

To evaluate the system, we extracted 296,595 person entities from DBpedia version 3.6. As shown in Table 1, 4,726 entities have a change in their address. To evaluate our algorithm, we sampled 1,056 entities and evaluated them. Some entities have a change in their address and refer to another entity, which may not be a person entity. Of all these entities, 786 refer to a person entity; therefore our evaluation dataset contains 786 person entities, which are supplied to the proposed algorithm, and the result is their new addresses. The success of the algorithm depends on the intersection between the Superiors and Inferiors sets of the two versions (further information is shown in Table 2).

[Figure 3: Target entities with threshold = 75.]

The main reason why raising the threshold did not affect the precision is that only a few entities have a similar graph structure. Therefore, when we decrease the threshold, we do not see a noticeable change in the results (Fig. 4).
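As a small illustration of the Superiors and Inferiors sets used throughout the evaluation, the following sketch builds both sets from in-memory (subject, predicate, object) triples; a real deployment would issue the corresponding SPARQL queries instead.

    def superiors_inferiors(triples, entity):
        # Superiors: entities with a link INTO the observed entity
        # (entity appears as Object); Inferiors: entities the observed
        # entity links OUT to (entity appears as Subject).
        superiors = {s for (s, p, o) in triples if o == entity}
        inferiors = {o for (s, p, o) in triples if s == entity}
        return superiors, inferiors

    triples = [("A", "knows", "B"), ("B", "worksAt", "C"), ("D", "knows", "B")]
    print(superiors_inferiors(triples, "B"))   # -> ({'A', 'D'}, {'C'})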
References

[1] H. Ashman, Electronic Document Addressing: Dealing with Change, ACM Computing Surveys 32 (2000), 201–212.
[2] S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. Ives, DBpedia: a nucleus for a web of open data (2007), 722–735.
[3] C. Bizer, T. Heath, and T. Berners-Lee, Linked Data: The Story So Far, International Journal on Semantic Web and Information Systems (IJSWIS) 5 (2009), no. 3, 1–22.
[4] N. Popitsch and B. Haslhofer, DSNotify: A solution for event detection and link maintenance in dynamic datasets, Web Semantics: Science, Services and Agents on the World Wide Web 9 (2011), no. 3, 266–283.
[6] J. Volz, C. Bizer, M. Gaedke, and G. Kobilarov, Discovering and Maintaining Links on the Web of Data, ISWC '09, Springer-Verlag, Berlin, Heidelberg, 2009.
F. Moayyedi
mj.yazdani1988@gmail.com
Mi. Yazdani
Jahrom University, Jahrom, Iran
mi.yazdani@hotmail.com
Abstract: The palmprint is a unique and reliable biometric characteristic with high usability. Given the increasing demand for automatic palmprint authentication systems, the development of accurate and robust palmprint verification algorithms has attracted a lot of interest. This paper focuses on a palmprint recognition method using the Histogram of Oriented Gradients (HOG). The proposed method has three main stages. In the first stage, preprocessing is performed to segment the Regions Of Interest (ROI). In the next stage, HOG is applied to the ROIs extracted in the previous step. In the final stage, we employ the Kullback-Leibler divergence to measure the similarity between two palms. The experimental results and the False Acceptance Rate (FAR), False Rejection Rate (FRR), and Total Success Rate (TSR) of the system illustrate that HOG in conjunction with the Kullback-Leibler divergence constitutes a powerful, efficient, and practical approach for automatic palmprint authentication systems.

Keywords: Biometric Authentication; Histogram of Oriented Gradients (HOG); Kullback-Leibler; Palmprint.
Introduction
Biometrics identifies different people by their physiological and behavioral differences, such as face, iris, retina, gait, etc. As an alternative method of personal identity authentication, biometric identification has attracted increasing attention in recent years. In the field of biometrics, the palmprint is a novel but promising member [1]. It has been attracting much attention because of its merits, such as high speed, user friendliness, low cost, and high accuracy. However, there is room for improvement of online palmprint systems in the aspects of accuracy and resistance to spoof attacks [2].

A palmprint has three types of basic features: principal lines, wrinkles, and ridges (see Figure 1), and they have been analysed in various ways. Zhang and Shu [3] applied the datum point invariant property and the line feature matching technique to conduct the verification
[Figure 1: Important lines in a palmprint.]

Methodology

The preprocessing stage proceeds as follows:

Apply a low-pass filter and use a threshold to convert the image to a binary image.
Obtain the boundaries of the gaps.
Compute the tangent of the two gaps.
Rotate the image by the found angle (the way of finding the angle is shown in Figure 2).

Please refer to [9] for the detailed ROI determination process. Figure 3 illustrates a ROI image cropped from the original palmprint image.

2.2

The essential thought behind the Histogram of Oriented Gradients descriptor is that local object appearance and shape within an image can be described by the distribution of intensity gradients or edge directions. The implementation of these descriptors can be achieved by dividing the image into small connected regions, called cells, and for each cell compiling a histogram of gradient directions or edge orientations for the pixels within the cell. The combination of these
histograms then represents the descriptor. For improved accuracy, the local histograms can be contrast-normalized by calculating a measure of the intensity across a larger region of the image, called a block, and then using this value to normalize all cells within the block. This normalization results in better invariance to changes in illumination or shadowing. In this paper we used the code of [10] to extract the HOG features. In Figure 4 you can see the plot of the HOG feature of one palmprint.
F AR =
N umberof acceptedimposterclaim
100 (2)
T otalnumberof imposteraccesses
F RR =
N umberof rejectedgenuineclaim
100 (3)
T otalnumberof genuineaccesses
2.3 Matching with Kullback-Leibler divergence

The performance is reported at the Equal Error Rate (EER) criterion, where FAR = FRR; this is based on the rationale that both rates are equally important. The Kullback-Leibler divergence between two distributions Q and P is

$$D_{KL}(Q \,\|\, P) = \int_{x} \ln\!\left(\frac{dQ}{dP}\right) dQ$$
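A minimal sketch of this matching step, assuming the two HOG descriptors are normalized so they can be treated as discrete distributions; the acceptance threshold is a hypothetical value, not taken from the paper.

# KL-divergence matching between two palmprint HOG descriptors (sketch).
import numpy as np
from scipy.stats import entropy

def kl_match(hog_a, hog_b, eps=1e-12, threshold=0.05):
    p = np.asarray(hog_a, dtype=float) + eps   # eps avoids log(0)
    q = np.asarray(hog_b, dtype=float) + eps
    p /= p.sum()                               # normalize to distributions
    q /= q.sum()
    d = entropy(p, q)                          # D_KL(p || q); small = similar palms
    return d, d < threshold                    # accept the claim when divergence is small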
Table 1: Verification performance of the proposed system.

FAR (%)    FRR (%)    TSR (%)
0.97       0.64       99.2
3 Experimental Results

This section presents and evaluates the results of the experiments carried out according to the three stages: preprocessing, feature extraction and matching. Our approach has been compared with Li's algorithm [7] and Zhang's approach [9]. In Li's algorithm, the R and θ features of the palmprint were extracted from the frequency domain to identify different persons. The R features showed the intensity of the lines of a palmprint and the θ features showed the direction of these lines.
Zhang tried to extract texture features from low-resolution palmprint images and proposed a 2-D Gabor filter for texture analysis. He used the Hamming distance for matching. Table 2 displays the comparison results.
Conclusion

The palmprint is a relatively new biometric for recognizing a person. In this paper, a palmprint verification system is developed using HOG and the Kullback-Leibler divergence. The features were extracted using HOG and the palms were matched with the Kullback-Leibler divergence. The experimental results show that a verification rate of 99.2% can be achieved; this shows that palmprint recognition performs well with the proposed method.
References
[1] T. T. Yufei Han, Palmprint Recognition Based on Directional Features and Graph Matching, Springer-Verlag, Berlin Heidelberg (2007).
[2] D. Zhang, Z. Guo, G. Lu, L. Zhang, and W. Zou, An Online System of Multispectral Palmprint Verification, IEEE Transactions.
[5] C. Han, H.L. Cheng, C.L. Lin, and K.C. Fan, Personal Authentication Using Palmprint Features, Pattern Recognition 36 (2003), 371-381.
[6] N. Duta, A. Jain, and K.V. Mardia, Matching of Palmprints, Pattern Recognition Letters 23 (2001), 477-485.
[7] W. Li, D. Zhang, and Z. Xu, Palmprint Identification by Fourier Transform, International Journal of Pattern Recognition and Artificial Intelligence 16 (2002), 417-432.
[8] PolyU Palmprint Database: http://www.comp.polyu.edu.hk/.
Abstract: A new skin segmentation model based on a combination of some of the most useful existing skin segmentation models is presented. Different skin segmentation models have previously been presented for different color spaces; according to their pros and cons, a new method is presented, based on a combination of three skin segmentation models (the elliptical model, the single Gaussian model and the HSV fixed range) and on a majority criterion. The advantage of this model is its high true detection rate alongside its low false detection rate.
Keywords: Skin Color Segmentation; Skin Cluster; Single Gaussian Model; Elliptical Model; Majority Criteria.
Introduction
segmentation models can be divided into three categories: models with an explicit threshold on the skin cluster, parametric models, and non-parametric models. Models with an explicit threshold on the skin cluster are built based on the determination of the skin cluster boundary; one of the main problems of this method is its low consistency against changes in lighting conditions [10]. The parametric models contain statistical models such as the single Gaussian and the mixture of Gaussians [4, 13]. For example, in the single Gaussian model, the distribution of the skin cluster is estimated by a Gaussian function, and the parameters of this function are determined using a maximum-likelihood training algorithm. The small number of parameters required for building the single Gaussian model leads to a low storage space for this model. In the non-parametric models, such as the histogram-based model, two histograms are computed, one for skin and one for non-skin areas; after dividing every bin by the total count of elements, the probability that a pixel belongs to a skin or non-skin area can be computed [5, 14]. The accuracy of this model is high, but it needs a large amount of storage space.

The rest of this paper is organized as follows: Section 2 introduces the color spaces. Skin color models are presented in Section 3. The proposed method is presented in Section 4. The performance of this method is presented in Section 5, and Section 6 concludes the paper.

2 Color Spaces

2.1 RGB and Normalized RGB

The RGB color space is the best-known color space; capturing and displaying systems work based on it. RGB describes color as a combination of three color rays (red, green and blue). The RGB color space is not a very good choice for skin segmentation because it mixes the chrominance and luminance components [6, 7]. Hence, normalized RGB tries to reduce the dependency of the chrominance components on the luminance of each pixel, by the simple normalization procedure shown in Eq. (1):

r = R/(R + G + B),   g = G/(R + G + B),   b = B/(R + G + B)   (1)

2.2 YCbCr

The YCbCr color space is used by European television studios and in image compression standards [6]. The Y component is luminance, computed as a weighted sum of the RGB values. The Cb and Cr components are two color-difference values, computed by subtracting luminance from the Blue and Red components respectively [6]. The main advantage of this color space is that it separates the luminance component from the chrominance components. Eq. (3) shows the transformation matrix between RGB and YCbCr:

(3)

2.3 HSV

The HSV color space is one of the most common cylindrical-coordinate representations of points in the RGB color model. Hue is the dominant color (such as green or blue), Saturation is the colorfulness of a region in proportion to its brightness, and Value is related to color luminance [6]. Eq. (4) shows the relations used to calculate the HSV components from the RGB components [4, 6, 7]:

(4)
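As a concrete instance of the conversions in Eqs. (1) and (3), the following sketch uses the common BT.601 YCbCr coefficients; the paper's exact transformation matrix is not reproduced above, so these coefficients are an assumption.

# Per-pixel color-space conversions (sketch; BT.601 coefficients assumed).
def normalized_rgb(R, G, B):
    s = float(R + G + B) or 1.0          # avoid division by zero on black pixels
    return R / s, G / s, B / s           # Eq. (1)

def rgb_to_ycbcr(R, G, B):
    y  = 0.299 * R + 0.587 * G + 0.114 * B            # luminance
    cb = 128 - 0.168736 * R - 0.331264 * G + 0.5 * B  # blue-difference chroma
    cr = 128 + 0.5 * R - 0.418688 * G - 0.081312 * B  # red-difference chroma
    return y, cb, cr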
Related Work

3.1 Explicit threshold in normalized RGB

The model presented in [9] defines, as a classifier, three attributes in the normalized RGB color space. Eq. (5) illustrates this classifier:

(5)

Where r, g and b are the normalized coordinates obtained from Eq. (1). This algorithm uses a restricted covering algorithm and is obtained from a skin probability map described in [9]. Each pixel that satisfies Eq. (5) can be classified as a skin pixel.

3.2 Explicit threshold in YCbCr

133 ≤ Cr ≤ 173   (6)

In fact, Eq. (6) presents the range of the skin cluster obtained from a training data set. If an input pixel satisfies Eq. (6), it can be classified as a skin pixel.

3.3 Explicit threshold in HSV

V > 40,   0 < H < 25   (7)

3.4 Single Gaussian Model

Results reported in the literature indicate that a single Gaussian function can be used as a model for the skin color cluster [13]. The Gaussian probability function used as the skin model is defined by Eq. (8):

$$p(x) = \frac{1}{(2\pi)^{d/2}\,|S|^{1/2}} \exp\!\Big(-\frac{1}{2}(x-m)^{T} S^{-1} (x-m)\Big) \qquad (8)$$

Where m is the mean vector, S is the covariance matrix and x is the input sample, which in a color image can be x = [Cb Cr]^T or x = [R G B]^T and so on. From a set containing the samples (pixels) of skin-covered regions in different images, and by applying the Maximum Likelihood estimates defined in Eqs. (9-10), the parameters of the single Gaussian model can be determined:

$$m_{ML} = \frac{1}{n}\sum_{i=1}^{n} x_i \qquad (9) \qquad\qquad S_{ML} = \frac{1}{n}\sum_{i=1}^{n} (x_i - m_{ML})(x_i - m_{ML})^{T} \qquad (10)$$

Where m_ML is the mean of the samples and S_ML is the covariance of the samples, as mentioned above. For each input pixel, p(x) is first calculated according to Eq. (8).
3.5 Elliptical Model

As described in [8], it is possible to determine an elliptical range in a set of skin samples and build a classifier according to it. Eqs. (11-12) show the elliptical model used in [12]:

(11)

(12)

4 Proposed Model

In this model, we combine three skin segmentation models: the single Gaussian, elliptical, and HSV models. The final segmentation result is selected according to a majority criterion: an input pixel is classified as a skin pixel if at least two of the three techniques classify that pixel as a skin pixel.
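A minimal sketch of this majority criterion, assuming the three per-model skin masks have already been computed as boolean arrays of the image shape:

# Majority voting over the three model outputs (sketch).
import numpy as np

def majority_skin_mask(gaussian_mask, elliptical_mask, hsv_mask):
    votes = (gaussian_mask.astype(np.uint8)
             + elliptical_mask.astype(np.uint8)
             + hsv_mask.astype(np.uint8))
    return votes >= 2   # skin if at least two models classify the pixel as skin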
5 Experimental Results

5.1 Database

5.2 Performance Approaches

$$\frac{x}{Y} \times 100 \qquad (13)$$

$$\frac{w}{Z} \times 100 \qquad (14)$$
5.3 Implementation Results

Figure 2a: Detection Rate. 1- NRGB, 2- YCbCr, 3- HSV, 4- SGM, 5- Elliptical, 6- SSCM.
Figure 2b: False Detection Rate. 1- NRGB, 2- YCbCr, 3- HSV, 4- SGM, 5- Elliptical, 6- SSCM.

Conclusions
References
[1] L. Sigal et al., Skin color-based video segmentation under
time-varying illumination, IEEE Trans. on pattern analysis
and machine intelligence 26 (2004), no. 7, 862-877.
[2] S. Phung et al., Skin segmentation using color and edge information, Proc. IEEE Int. Symp. on Signal Processing and
Its Application, 2003, pp. 525-528.
[3] P. Kakumanu et al., A survey of skin-color modeling and detection methods, Pattern Recognition 40 (2007), 1106-1122.
[4] R. Hassanpour, A. Shahbahrami, and S. Wong, Adaptive
gaussian mixture model for skin color segmentation, proceeding of world academy of science, engineering and technology 31 (2008), 102-105.
[5] M. Abdullah-Al-Wadud et al., Skin segmentation using
color distance map and water- flow property, Proc. IEEE
Int. Conf. on Information Assurance and Security, 2008,
pp. 83-88.
Qazvin, Iran
Tehran, Iran
f.golchenari@qiau.ac.ir
saniee@modares.ac.ir
Abstract: Fuzzy clustering is an important problem and the subject of active research in several real-world applications. The fuzzy c-means (FCM) algorithm is one of the most popular fuzzy clustering techniques because it is efficient, straightforward, and easy to implement. However, FCM is sensitive to initialization and is easily trapped in local optima. Genetic algorithms (GAs) are believed to be effective on NP-complete global optimization problems, and they can provide good near-optimal solutions in reasonable time. Memetic algorithms (MAs) were presented as evolutionary algorithms that hybridize the global optimization characteristics of GAs with local search techniques, allowing the GAs to perform a deeper exploitation of the solutions. In this paper, a hybrid fuzzy clustering method based on FCM and a fuzzy memetic algorithm (MFCMA) is proposed which removes some of FCM's shortcomings. To improve the expensive crossover operator in memetic algorithms (MAs), we hybridize the MA with the fuzzy c-means algorithm and define the crossover operator as a one-step fuzzy c-means algorithm. Experimental results show that the proposed clustering algorithm outperforms FCM, FPSO and FPSO-FCM on several UCI datasets.
Keywords: Genetic algorithm; Fuzzy clustering; Fuzzy memetic algorithm; Fuzzy c-means algorithm (FCM)
Introduction
crossover operator. In [8], a hybrid data clustering algorithm was proposed.
Description of MA

Memetic algorithms (MAs) are evolutionary algorithms (EAs) that apply a separate local search process to refine individuals (i.e., improve their fitness by hill climbing, etc.). Additionally, MAs are inspired by Richard Dawkins' concept of a meme, which represents a unit of cultural evolution that can exhibit local refinement. They are like GAs combined with some kind of local search, and are able to balance the exploration and exploitation capabilities of both the genetic algorithm and the local search [11].
5 Memetic algorithm

In general, an MA consists of seven basic elements: coding or string representation, population initialization, selection, crossover, mutation, local search, and a termination criterion. In this section, we introduce these elements of the FMA for fuzzy c-means clustering.
5.1 String representation

$$X = \begin{pmatrix} \mu_{11} & \cdots & \mu_{1c} \\ \vdots & \ddots & \vdots \\ \mu_{n1} & \cdots & \mu_{nc} \end{pmatrix} \qquad (8)$$

In which μ_ij is the membership degree of the i-th object in the j-th cluster, with the constraints stated in (1) and (2). Therefore, we can see that the chromosome is the same as the fuzzy membership matrix in the FCM algorithm.
5.2 Initialization process

In the initialization phase, a population of N legal chromosomes is generated, where N is the size of the population. To generate a chromosome like (8), we employ the method introduced in [12], which is described as follows:

for ii = 1 to N do
    for i = 1 to n do
        for each μ_ij point of the chromosome:
            generate a random number from (0,1);
    end
end

After initializing each chromosome, it may violate the constraints given in (1) and (2), so it is necessary to normalize the chromosome matrix. The matrix undergoes the following transformation (10) without violating the constraints:

$$X_{normal} = \begin{pmatrix} \mu_{11}/\sum_{j=1}^{c}\mu_{1j} & \cdots & \mu_{1c}/\sum_{j=1}^{c}\mu_{1j} \\ \vdots & \ddots & \vdots \\ \mu_{n1}/\sum_{j=1}^{c}\mu_{nj} & \cdots & \mu_{nc}/\sum_{j=1}^{c}\mu_{nj} \end{pmatrix} \qquad (10)$$

5.3 Selection

The fitness of a chromosome X is defined as f(X) = K / J_m, where K is a constant and J_m is the objective function of the FCM algorithm (Eq. (4)). The smaller J_m is, the better the clustering effect and the higher the individual fitness f(X).

5.4 Crossover

After the selection process, the population goes through a crossover operation. We use three runs of the FCM algorithm according to Algorithm 1.

5.5 Mutation

In the mutation process, each gene has a small probability Pm of mutating, decided by generating a random number. After testing several mutation methods, we finally used the method of [7]: the fuzzy memberships of a point in a chromosome are selected to mutate together with probability Pm. The mutation process is described as follows:

for i = 1 to n do
    for j = 1 to c do
        generate a random number r from (0,1);
        if r ≤ Pm then
            generate c random numbers v_i1, v_i2, ..., v_ic from (0,1) for the i-th point of the chromosome;
            replace μ_ij with v_ij / Σ_{j=1}^{c} v_ij for j = 1, 2, ..., c;
        end
    end
end

5.6 Local search

As mentioned before, we use a local search after mutation in the memetic algorithm. First, we sort all chromosomes by their fitness. We then randomly select some chromosomes from the upper half of the sorted population. Each selected chromosome searches for the best chromosome in its vicinity by hill climbing over 5 random neighbours, and the Lamarckian idea is then used (the locally improved chromosome replaces the original in the population).

6 Simulation Results

6.1 Parameter settings

In order to optimize the performance of the MFCMA, fine tuning has been performed and the best parameter values were selected. Based on experimental results, the algorithms perform best under the following settings: m = 2, N = 50, Pm = 0.02 and Pc = 0.9. The MA termination condition is a maximum of 100 iterations. The FCM termination condition in Algorithm 1 is 5 iterations, run after the MA.
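A compact numpy sketch of the chromosome handling described above (random initialization with row normalization, and the point-wise mutation); the population size and data dimensions are illustrative.

# Chromosome initialization and mutation for the fuzzy membership matrix (sketch).
import numpy as np

rng = np.random.default_rng(0)

def init_chromosome(n, c):
    X = rng.random((n, c))                    # random memberships in (0, 1)
    return X / X.sum(axis=1, keepdims=True)   # normalize so each row sums to 1

def mutate(X, Pm=0.02):
    n, c = X.shape
    for i in range(n):                        # each data point may mutate
        if rng.random() <= Pm:
            v = rng.random(c)                 # c fresh random numbers
            X[i] = v / v.sum()                # replace and renormalize the row
    return X

population = [init_chromosome(150, 3) for _ in range(50)]   # N = 50, Iris-sized
population = [mutate(X) for X in population]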
Table 1: Results of the FCM, FPSO, FCM-FPSO and MFCMA methods on four real data sets (best / average / worst objective value).

Method                  Iris(150,3,4)   Glass(214,6,9)   Cancer(683,2,9)   CMC(1473,3,9)
FCM [13]      Best      67.92           72.26            2196.8            3517.1
              Average   70.43           72.87            2213.3            3534.7
              Worst     71.58           73.37            2235.8            3548.3
FPSO [13]     Best      66.26           86.26            2704.6            4025.2
              Average   67.39           86.97            2724.4            4095.6
              Worst     69.72           87.37            2750.1            4190.1
FCM-FPSO [13] Best      62.19           72.23            2181.9            3416.5
              Average   62.55           72.64            2190.5            3485.6
              Worst     62.96           73.11            2218.7            3531.2
MFCMA         Best      60.53           71.69            2132.1            3334.6
              Average   60.63           71.98            2148.7            3358.9
              Worst     61.11           72.25            2183.8            3412.6
6.2 Experimental results

Conclusion

References

[1] J. Bezdek, Fuzzy Mathematics in Pattern Classification, Ithaca, NY: Cornell University 3 (1989), no. 2, 68-73.
[2] U. Fayyad, G. Shapiro, and P. Smyth, From Data Mining to Knowledge Discovery, Advances in Knowledge Discovery and Data Mining, AAAI Press (2008), 1200-1209.
[3] H. Pomares, A. Guillen, J. Gonzalez, I. Rojas, O. Valenzuela, and B. Prieto, Parallel multi-objective memetic RBFNNs design and feature selection for function approximation problems, Neurocomputing 72 (2009), 3541-3555.
[4] T. A. Runkler and C. Katz, Fuzzy clustering by particle swarm optimization, 2006 IEEE International Conference on Fuzzy Systems 2 (2006), 601-608.
[9] H.C. Liu, J.M. Yih, D.B. Wu, and S.W. Liu, Fuzzy c-means clustering algorithms based on Picard iteration and particle swarm optimization, International Workshop on Education Technology and Training (2008), 838-842.
[10] S. Bandyopadhyay and U. Maulik, An evolutionary technique based on K-Means algorithm for optimal clustering in RN, Information Sciences 146 (2002), 221-237.
[11] R. Bansal and K. Srivastava, A memetic algorithm for the cyclic antibandwidth maximization problem, Soft Computing (2011), 397-412.
Mohsen Safaeinezhad
Amirkabir University of Technology
Department of Computer Engineering and Information Technology
safaienezhad@aut.ac.ir
Ebrahim SaeediNia
Lameaie Gorgani Institute
Department of Computer Engineering and Information Technology
saeedi.ebrahim68@gmail.com
Abstract: The advances made in nanomaterial sciences have opened the doors of electromagnetic communication among nanodevices. Nanonetworks can be more robust than conventional wireless networks because of their non-hierarchical, distributed control and management mechanisms. Since quantum mechanics is an accurate representation of matter at the atomic and subatomic scale, it will naturally be a significant part of nanoscale networking. In this paper, a new cross-layer network architecture for the interconnection of quantum nodes with ad-hoc communication nanonetworks is provided. We propose a new network architecture that can transmit signals from the nodes of an ad-hoc structured nanonetwork to another one via quantum communication channels. We have simulated the quantum channel and evaluated its throughput for several different multiplexing schemes.
Introduction
to have a macroscale effect from the millions of nanoeffects [2]. Due to the high number of nodes and the
mentioned energy constraints, we foresee the necessity
of organizing the access to this shared medium. In the
same way as classical networks were designed to transport data between remote nodes, quantum networks
have been proposed for the communication of quantum data. Though classical networks can use repeaters that amplify signals and copy data, quantum networks cannot rely on such operations, as they are forbidden
based on transferring entangled pairs from one location
to another, with the help of swapping, repeating and
purification [3]. Quantum communication is considered
to be the ultimate in privacy because it is impossible to read quantum state data without changing it. Thus, if the line is tapped in any way, the receiver will know about it. This breakthrough demonstrates that it is possible to perform quantum communication in the real world.
These communication protocols between two remote
parties can be unconditionally secure against eavesdropping [4]. Quantum communication holds promise
for the secret transfer of classical messages, as well as forming an essential element of quantum networks, allowing for the teleportation of arbitrary quantum states and violations of Bell's inequalities over long distances [5]. Each
repeater needs to execute many quantum operations,
which are done by nano quantum nodes. Therefore,
a quantum network is considered to be a distributed
quantum computing problem. In order to provide them
for long distances, quantum repeaters were proposed,
which allow extending the distance of entangled pair.
The connection of many quantum repeaters in complex
topologies makes a network of nano-transceivers [6]. In
this paper, we focus on distributed control of the communications in nano-quantum networks. The remainder of this paper is organized as follows. In Section 2,
quantum teleportation will be briefly discussed, and in
section 3, the topology of quantum network based on
teleportation is introduced. In section 4, a new network architecture for the interconnection of nanosensor devices with quantum communication is provided.
In Section 5, we have presented the simulation results
of quantum channel modeling for some different multiplexing schemes. Finally, the paper is concluded in
Section 6.
2 Teleportation Architecture in Quantum Networks

Figure 1: A quantum circuit for teleportation. The Hadamard and CNOT that make the Bell pair can be replaced with any mechanism that creates a Bell pair over distance, such as Qubus [8].

The idea of a quantum network emerged after successful experiments on quantum teleportation; the digital abstraction on which modern computation and communication are built does not directly carry over to quantum data. The initial state is the data qubit together with the shared Bell pair:

$$|\psi_0\rangle = |\psi\rangle \otimes \frac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big) \qquad (1)$$

Next, the controlled-NOT gate implements a flip of the qubit in Alice's shared entangled pair in slice A1,
as shown in Equation 2. If |1⟩ is applied, the controlled-NOT gate implements the qubit flip operation; otherwise the state passes through unchanged.

$$|\psi_1\rangle = \frac{1}{\sqrt{2}}\big[\,|0\rangle(|00\rangle+|11\rangle) + |1\rangle(|10\rangle+|01\rangle)\,\big] \qquad (2)$$

In the next step, slice A2, |ψ2⟩, the Hadamard gate is applied to Alice's qubit, as shown in Equation 3.

$$|\psi_2\rangle = \frac{1}{2}\big[\,(|0\rangle+|1\rangle)(|00\rangle+|11\rangle) + (|0\rangle-|1\rangle)(|10\rangle+|01\rangle)\,\big] \qquad (3)$$

The result can be expanded via simple algebra, as shown in Equation 4.

$$|\psi_2\rangle = \frac{1}{2}\big[\,|00\rangle(|0\rangle+|1\rangle) + |01\rangle(|1\rangle+|0\rangle) + |10\rangle(|0\rangle-|1\rangle) + |11\rangle(|1\rangle-|0\rangle)\,\big] \qquad (4)$$

We see that a quantum network using teleportation will not work without the necessary entangled qubits [11]. Yet, a teleportation network could be used to transmit the entangled qubits. Entanglement swapping appears to be a key to the solution of transmitting quantum information.

4 Protocol Design

In this section, we propose quantum protocols that enable distributed and consistent decision making for the nanonetwork nodes, so that messages are properly delivered between nodes. These protocols must be designed in layers to allow independence of the different layers and easy upgrading of each of them without impacting the rest of the protocols. In order to design robust protocols, we need to create finite state machines and decide on a proper definition of the legal sequence of actions and time-outs that such protocols must follow. Some of the functions these protocols must handle are reporting the results of quantum operations, the results of any measurement, exchanging density matrices, the times when these operations were done, etc. A network architecture is more than simply the contacts, formats and semantics of the messages; it also includes many aspects of behaviour which may be visible only implicitly, rather than in the contents of messages.
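The slice equations above can be checked numerically; the following sketch (not from the paper) applies the CNOT and Hadamard of Figure 1 to an arbitrary data qubit and prints the resulting amplitudes.

# Numerical check of the teleportation slices of Equations (1)-(4): a 3-qubit
# state (data, Alice's entangled qubit, Bob's qubit), a CNOT from the data
# qubit to Alice's qubit, then a Hadamard on the data qubit.
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

alpha, beta = 0.6, 0.8                        # arbitrary data-qubit amplitudes
data = np.array([alpha, beta])
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)

psi0 = np.kron(data, bell)                    # Eq. (1): data qubit plus Bell pair
psi1 = np.kron(CNOT, I) @ psi0                # Eq. (2): CNOT on qubits 1 and 2
psi2 = np.kron(H, np.eye(4)) @ psi1           # Eqs. (3)-(4): Hadamard on qubit 1
print(np.round(psi2, 3))                      # amplitudes grouped by |q1 q2>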
In the nanonetwork case, when the nodes are biological or electromagnetic nanosensors or nanoactuators, the application layer may convert the qbit to an action signal. In this combination, we must use an interface for converting molecular signals and quantum messages to each other. This encoder-decoder is placed in the application layer of the quantum network, after the physical layer of the nanosensor node.
In Fig. 2, we show the relation between the different units integrating a nanosensor device and a sensing-aware cross-layer protocol for the nanosensor network. Fig. 3 shows the proposed protocol stack, whose layer interconnections are detailed in Fig. 4. It can be seen that after the Entanglement Control (ESC) layer is done, the next higher protocol is again Purification Control (PC), but in this case the Bell pairs belong to farther stations [13]. This keeps on repeating until the end-to-end Bell pair is purified. Finally, the application layer is reached and the data qbit can be teleported. The control of a qbit (a single-qbit buffer) is passed from layer to layer until consumed by the Application layer or reinitialized to start over from the lowest layer.
In quantum networks, the huge number of messages and measurements and the probabilistic success of quantum operations make a completely analytic calculation of the protocols' behaviour a very hard task. In addition, the construction of quantum repeaters is not yet possible due to many physical challenges, so building a network and measuring the protocols' performance is not possible either. The only way to study our protocols is to simulate the channel and the protocols in a network simulator. As there is no standard simulator for nanonetworks and quantum communications, we must develop our media over existing computing platforms. In our previous works, we simulated molecular nanonetworks using MATLAB.
In this paper, we use Omnet++ for the quantum network simulation. Omnet++ allows us to define the configuration parameters of a node and the network topology. We simulated a qubus mechanism with 20 km hops, 50 qbits in each transmitter and 16 in each receiver. In all of our simulations, we use a target end-to-end fidelity of 0.98. We have run simulations for two cases: only one flow, and two flows competing for shared resources in the network shown in Fig. 6. Both flows are over three-hop paths (AEFB and CEFD), with the middle hop (EF) being a link shared by the AB and CD flows. Used naively, the first and third hops on each path will remain idle half of the time. We have studied different multiplexing schemes in order to recommend a mechanism for sharing resources in a multi-user network, and ultimately to be able to predict the performance of a given network under certain traffic patterns. The total throughput of all flows is highest for statistical multiplexing, achieving 257 teleported qbits per second, compared to 228 qbits per second for buffer-space multiplexing and 201 for time-division multiplexing.
6 Conclusion

Nanonetworks will have a great impact on almost every field of our society, ranging from healthcare to homeland security and environmental protection, but enabling the communication among nanosensors is still an unsolved challenge [14]. Similarly, quantum communication is fast becoming an important component of many applications in information science and technology. Sharing quantum information over a distance among geographically separated nodes requires a reconfigurable, transparent networking infrastructure that can support quantum information services. In this paper, we proposed a new network architecture that can transmit signals from the nodes of an ad-hoc structured nanonetwork to another one via quantum communication channels. The interface between the nano and quantum networks, which converts molecular or electromagnetic signals and qbits into each other, is placed in the Application layer of the quantum network nodes. We have simulated the quantum channel and evaluated its throughput for several different multiplexing schemes. According to our results, the best multiplexing scheme in both performance and simplicity of implementation was statistical multiplexing. We also found that proper tuning of uncontested links improves the network performance, spending the idle time of unused links doing purification and reducing the number of end-to-end purification steps, which are slower due to the addition of more hops and the longer propagation delay of the classical messages.
References
[1] I. Akyildiz and J. M. Jornet, Electromagnetic wireless nanosensor networks, Elsevier Nano Communication Networks 1 (2010), 3-19.
[2] A. Shojaie, Modeling and analysis of a new nanonetwork architecture for molecular communication in medical scenarios, Master of Science Dissertation, June 2011.
[3] S. F. Bush, Nanoscale Communication Networks, Artech House Series, Nanoscale Science and Engineering, 2010.
[4] S. Gaertner, C. Kurtsiefer, M. Bourennane, and H. Weinfurter, Experimental Demonstration of Four-Party Quantum Secret Sharing, Phys. Rev. Lett. 98 (2007).
[5] H. J. Kimble, The quantum internet, Nature 453 (June 2008), 1023-1030.
[6] A. Shojaie and M. Dehghan Takhtfooladi, Bio-inspired communication using diffusion-based long-range nanonetworks, Proceedings of the 2012 International Conference on Intelligent Information and Networks, ICIIN 2012 (2012).
[7] W. McCarthy, Hacking Matter: Levitating Chairs, Quantum Mirages, and the Infinite Weirdness of Programmable Atoms, Basic Books, free multimedia edition, 2003.
[8] R. Van Meter, T. D. Ladd, W. J. Munro, and K. Nemoto, System design for a long-line quantum repeater, IEEE/ACM Trans. Netw. 17(3) (2009), 1002-1013.
[9] A. Luciano and R. V. Meter, Path selection in heterogeneous quantum networks, 10th Asian Conference on Quantum Information Science (AQIS) (2010).
[10] W. G. Cooper, Evidence for transcriptase quantum processing implies entanglement and decoherence of superposition proton states, Biosystems 97 (August 2009), 73-89.
[11] C. J. Kaufman, M. A. Nielsen, and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, October 2000.
[12] A. Shojaie and M. Safaienezhad, Automata-based nanorobot for molecular communication in medical scenarios, Proceedings of the 7th Vienna International Conference on Mathematical Modelling (Feb 2012).
[13] M. Hayashi, K. Iwama, H. Nishimura, R. H. Putra, and S. Yamashita, Quantum network coding, STACS (2007), 610-621.
Parisa.taherian@gmail.com
mhkarimi@ipm.ir
Abstract: This paper introduces PKI, a security architecture that has been defined to create an increased level of trust for exchanging information. A Digital Certificate binds an identity to a pair of public and private keys; it is issued by a Certification Authority (CA) and can be used to encrypt data. Combining certificates with encryption makes a complete security solution, assuring the identity of all parties involved in a transaction. The OpenCA Project is a collaborative effort to develop a robust, full-featured and Open Source out-of-the-box Certification Authority implementing the most used protocols with full-strength cryptography world-wide. OpenCA is based on many Open-Source projects. Among the supported software are OpenLDAP, OpenSSL, the Apache Project, and Apache mod_ssl.
Introduction

Public Key Infrastructure (PKI) is a security architecture that has been defined to create an increased level of trust for exchanging information. More precisely, PKI introduces the styles and technologies that provide a secure infrastructure. In order to achieve this goal, it uses a mathematical technique called public key cryptography, which uses a key pair for authentication and proof of content. These keys are known as the public key and the private key. The public key can be exposed in public, but the private key is only available to its owner; when information is encrypted with the public key, it can be decrypted only with the private key.

1.1 PKI assures

- Assurance of the source and destination of the information
- Assurance of the time and timing of the information

1.2 A PKI consists of

- A certificate authority (CA) that both issues and verifies the digital certificates. [2]
To do this                                                  Use whose key    Kind of key
Send an encrypted message                                   the receiver's   Public key
Send an encrypted signature                                 the sender's     Private key
Decrypt an encrypted message                                the receiver's   Private key
Decrypt an encrypted signature (and authenticate sender)    the sender's     Public key
1.3 Types of PKI

There are different types of PKI, which refer to the distribution of public keys. A PKI is a common way to exchange public keys; this process is called PKE (Public Key Exchange) and does not need a CA, an RA or a current server. The different types of PKI can be divided into two categories, centralized and decentralized:

- PGP: PGP has a completely decentralized infrastructure: each user generates a pair of keys (public and private), signs its public key and its email address with its private key, and then exchanges the result with its acquaintances through a key server or an offline channel (from hand to hand, through a telephone line, etc.).
- X.509: Unlike PGP, an X.509 PKI is centralized. Each certificate must be issued by a trusted third party, but there can be several trusted third parties. If two persons communicate using certificates issued by two different trusted third parties, each person must trust the two third parties unless the two trusted parties are cross-certified. A cross certification results from an accord between two trusted parties, which agrees on the practice of the other contracting party.

1.4 Usage of PKI [5]

PKIs of one type or another, and from any of several vendors, have many uses, including providing public keys and bindings to user identities, which are used for:

- Encryption and/or sender authentication of e-mail messages (e.g., using OpenPGP or S/MIME).
- Encryption and/or authentication of documents (e.g., the XML Signature or XML Encryption standards, if documents are encoded as XML).
- Authentication of users to applications (e.g., smart card logon, client authentication with SSL). There is experimental usage for digitally signed HTTP authentication in the Enigform and mod_openpgp projects.
- Bootstrapping secure communication protocols, such as Internet Key Exchange (IKE) and SSL. In both of these, the initial set-up of a secure channel (a security association) uses asymmetric-key (a.k.a. public-key) methods, whereas actual communication uses faster symmetric-key (a.k.a. secret-key) methods.
- Mobile signatures, which are electronic signatures created using a mobile device, relying on signature or certification services in a location-independent telecommunication environment.
- The Universal Metering Interface (UMI), an open standard originally created by Cambridge Consultants for use in smart metering devices/systems and home automation, which uses a PKI infrastructure for security.

2 Digital Certificates

As noted, PKI provides a way to raise the level of security in computer networks and in the insecure Internet, but there are many security problems that PKI cannot solve. A Digital Certificate binds an identity to a pair of public and private keys; it is issued by a Certification Authority (CA) and can be used to encrypt data. Combining certificates with encryption makes a complete security solution, assuring the identity of all parties involved in a transaction. Digital Certificates can be used for a variety of electronic transactions including e-mail, electronic commerce, groupware and electronic funds transfers. Netscape's popular Enterprise Server requires a Digital Certificate for each secure server.

2.1 A Digital Certificate typically contains the:

- Owner's name

3.2 IDX-PKI

3.3 OpenCA
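To make the certificate-issuing role described in Section 2 concrete, the following sketch builds a self-signed root CA certificate with the Python cryptography package; this is illustrative only, and is not how OpenCA itself (a Perl front-end driving OpenSSL) is implemented.

# A CA creating its own root certificate (sketch using the "cryptography" package).
from datetime import datetime, timedelta, timezone
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Root CA")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)                    # self-signed: subject == issuer
    .issuer_name(name)
    .public_key(key.public_key())          # binds the identity to the key pair
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.now(timezone.utc))
    .not_valid_after(datetime.now(timezone.utc) + timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())            # the CA signs with its private key
)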
4.1 The CA interface

4.2 The RA interface

Figure 2: The RA interface

4.3 Features [9]
If you have Cisco routers, you may want to use SCEP (the Simple Certificate Enrollment Protocol) to request a certificate directly and automatically from the router.

When a certificate must be revoked, OpenCA can do it in two different manners: you can issue a CRL (Certificate Revocation List) and/or use an OCSP (Online Certificate Status Protocol) responder. The CRLs can be published by a Web server and/or in a directory.

Interoperability features: when configuring OpenCA, you will have to cope with a couple of choices: which database you are using, which LDAP server you have, and how you exchange data with the other servers. OpenCA uses a database to store the issued certificates. The choice of this database is left to the administrator; the only requirement is to have the Perl DBI driver. Currently, OpenCA is known to operate with:

- PostgreSQL, a fully SQL99-compliant RDBMS
- MySQL, a lightweight SQL database
- Oracle, a well known and full-featured RDBMS

Once stored in the database, the certificates can be published through an LDAP server. Once more, the choice of this LDAP server is left to the administrator. Although any RFC-compliant directory might work, OpenCA has only been tested with OpenLDAP.

When two parts of the PKI must exchange data, both must use the same protocol. The choice, the implementation and the configuration of this protocol (in a broad sense) are left to the administrator. This is probably the largest freedom of this software: you could transmit a zip archive through the HTTPS protocol, a tarball through FTP over an IPsec link, or even burn a CD-RW and send it by carrier pigeon. All these solutions work (nevertheless, the last is maybe not reliable).

When the CA needs to sign a certificate, several ways to do it are possible. You can store the private key in a file on the hard disk and use it to sign the certificate; in spite of being simple, this solution is not very secure. For example, an assailant might gain access to the server through a security hole in the web server and steal the private key (this allows him to create false certificates at will). Instead, you can store the private key in an HSM and let the HSM do the signatures; in this manner, an assailant gaining access to the server cannot steal the private key (but if your HSM is not protected by a PIN or a password, the assailant can always ask the HSM to generate a false certificate). If you do not own an HSM (it is not cheap hardware), you can always use a smart card to hold the private key of your CA and do the signatures.
References
[1] http://searchsecurity.techtarget.com/definition/PKI.
[2] John R. Vacca, Public Key Infrastructure: Building Trusted Applications and Web Services, CRC Press, p. 8, ISBN 978-0-8493-0822-2, 2004.
[3] Barton McKinley, The task of setting up a public-key infrastructure, Network World (2001).
[4] Al-Janabi Sufyan T. Faraj et al., Combining Mediated and Identity-Based Cryptography for Securing Email, in Ariwa, Ezendu, Digital Enterprise and Information Systems: International Conference, DEIS Proceedings, Springer (2010), 2-3.
[5] http://en.wikipedia.org/wiki/Public_key_infrastructure.
[6] http://www.newpki.org/.
[7] http://www.idealx.com/index.php?lang=en.
[8] http://www.openca.org.
[9] Nicolas Mass, Open source PKI with OpenCA.
Mohsen Afsharchi
Sanay Systems
University of Zanjan
najafirobab88@gmail.com
afsharchim@znu.ac.ir
Abstract: Computer networks are nowadays subject to an increasing number of attacks. Intrusion Detection Systems (IDS) are designed to protect them by identifying malicious behaviors or improper uses. Since the scope is different in each case (registering already-known menaces to later recognize them, or modeling legitimate uses to trigger when a variation is detected), IDS have so far failed to respond to both kinds of attacks. In this paper, we apply two efficient data mining algorithms, Naive Bayes and Tree Augmented Naive Bayes, to network intrusion detection and compare them with a decision tree and a support vector machine. We present experimental results on the NSL-KDD data set and observe that our intrusion detection system has a higher detection rate and a lower false positive rate.
Keywords: Anomaly And Misuse Detection, Bayesian Network, Intrusion detection, Tree Augmented Naive-Bayes,
Naive-Bayes.
Introduction
Intrusion detection techniques are the last line of defense against computer attacks behind secure network
architecture design, firewalls, and personal screening.
Despite the plethora of intrusion prevention techniques
available, attacks against computer systems are still
successful. Thus, intrusion detection systems (IDSs)
play a vital role in network security. The attacks are
targeted at stealing confidential information such as
credit card numbers, passwords, and other financial information. One solution to this dilemma is the use of an intrusion detection system (IDS). It has been a very popular security tool over the last two decades, and today IDSs based on computational intelligence are attracting much attention from the research community.
We present experimental results on the NSL-KDD data set using the WEKA software.
The paper is structured as follows: Section 2 introduces Bayesian networks and classification. Section 3 introduces intrusion detection systems and different kinds of attacks. Section 4 describes intrusion detection with Bayesian networks. Section 5 presents and analyzes our experimental results. Section 6 summarizes the main conclusions.

2 Primary Description

A Bayesian network B = <N, A, Θ> is a directed acyclic graph (DAG) <N, A> where each node n ∈ N represents a domain variable (e.g., a dataset attribute), and each arc a ∈ A between nodes represents a probabilistic dependency, quantified using a conditional probability distribution (CP table) θ_i for each node n_i. A BN can be used to compute the conditional probability of one node, given values assigned to the other nodes [3].

The Bayesian network structure represents the inter-relationships among the dataset attributes. Human experts can easily understand the network structures and, if necessary, modify them to obtain better predictive models. By adding decision nodes and utility nodes, BN models can also be extended to decision networks for decision analysis. Applying Bayesian network techniques to classification involves two subtasks: BN learning (training) to get a model, and BN inference to classify instances. The two major tasks in learning a BN are learning the graphical structure, and then learning the parameters (CP table entries) for that structure.

The set of parents of a node x_i in B_S is denoted as Π_i. The structure is annotated with a set of conditional probabilities (B_P), containing a term P(x_i = X_i | Π_i = π_i) for each possible value X_i of x_i and each possible instantiation π_i of Π_i [3].
One application of Bayesian networks is classification. A somewhat simplified statement of the problem of supervised learning is as follows: given a training set of labeled instances of the form <a_1, ..., a_n, C>, construct a classifier f capable of predicting the value of C given instances <a_1, ..., a_n> as input. The variables A_1, ..., A_n are called features or attributes, and the variable C is usually referred to as the class variable or label [11]. The two types of Bayesian network classifiers that we use in this paper are Naive-Bayes and Tree Augmented Naive-Bayes.
2.1
Naive-Bayes
Detection: the profile is used to detect any deviance in user behavior [7].

Intrusion detection systems are used to identify, classify and, possibly, respond to malicious activities. An Intrusion Detection System (IDS) is used to monitor all or part of the traffic, detect malicious activities, and respond to them. Network intrusion detection systems were established for the purpose of detecting malicious activities, to strengthen the security, confidentiality, and integrity of critical information systems. These systems can be network-based or host-based: a HIDS is used to analyze internal events such as process identifiers, while a NIDS analyzes external events such as traffic volume, IP addresses, service ports and others. The challenge of the study is: how can we have an IDS with a high detection rate and a low false positive rate? [4]

IDS have three common problems: temporal complexity, correctness and adaptability. The temporal complexity problem results from the extensive quantity of data that the system must supervise in order to perceive the whole situation. False positive and false negative rates are usually used to evaluate the correctness of an IDS. False positives can be defined as alarms which are triggered by legitimate activities; false negatives are attacks which are not detected by the system. An IDS is more precise if it detects more attacks and gives few false alarms.

Intrusion detection is comprised of two main techniques: misuse detection and anomaly detection. In the case of misuse detection systems, security experts must examine new attacks to add their corresponding signatures. In anomaly detection systems,
human experts are necessary to define relevant attributes for characterizing the normal behavior. This leads us to the adaptability problem [10].

3.2

Given an observed instance with attribute values a_1, a_2, ..., a_n for the attributes A_1, A_2, ..., A_n respectively, Bayes' rule gives

$$P(C_i \mid A) = \frac{P(A \mid C_i)\, P(C_i)}{P(A)} \qquad (1)$$

Since naive Bayesian networks work under the assumption that these attributes are independent (given the parent node C), their combined probability is obtained as follows:

$$P(A \mid C_i) = \prod_{k=1}^{n} P(a_k \mid C_i) \qquad (2)$$
4.3 Algorithm
BENEFIT(d, a)
/* Calculate the benefit of attaching detector d to attack vertex a */
Let the end attack vertices in the BN be F = {f_i}, i = 1, 2, ..., M
for each f_i for which the cost-benefit table exists do
    Perform Bayesian inference with d as the only detector in the network, connected to attack vertex a
    Calculate, for each f_i, the precision and recall; call them Precision(f_i, d, a) and Recall(f_i, d, a)
end
System Benefit = Σ_{i=1}^{M} Benefit_{f_i}(True Negative)
5 Experimental Results

Selected attributes (children): num_compromised; count, srv_diff_host_rate; is_guest_login; wrong_fragment, flag; same_srv_rate.
5.1 Kappa statistic

The kappa statistic measures the agreement of the prediction with the true class; 1.0 signifies complete agreement. This rate is 0.759 for Naive-Bayes, 0.989 for DT, 0.961 for SVM, and 0.988 for TAN.
5.2 Confusion Matrix

The per-class rates reported for NB, TAN, DT and SVM, respectively, are:

Normal:  0.037, 0.009, 0.01, 0.031
DOS:     0.02, 0.001, 0.001, 0.002
R2L:     0.06, 0.002, 0.003, 0.003
Probe:   0.069, 0, 0, 0
U2R:     0.021, 0.003, 0.002, 0.005

The confusion matrix (rows: true class and classifier; columns: predicted class):

True class  Classifier   Normal   DOS    R2L   Probe   U2R
Normal      NB           3474     106    162   455     589
            TAN          4759     7      8     12      0
            DT           4760     5      8     13      0
            SVM          4732     17     14    24      0
R2L         NB           0        0      63    9       4
            TAN          5        0      69    1       1
            DT           10       0      63    2       1
            SVM          31       1      41    1       0
DOS         NB           115      3176   1     26      15
            TAN          9        3321   0     3       0
            DT           7        3319   1     6       0
            SVM          65       3264   0     1       0
Probe       NB           40       15     10    718     17
            TAN          22       10     0     768     0
            DT           20       9      1     770     0
            SVM          32       8      0     764     0
U2R         NB           0        0      2     0       0
            TAN          2        0      2     0       2
            DT           4        0      0     0       0
            SVM          3        0      1     0       0
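The precision and recall figures of Section 5.6 follow directly from this matrix; the sketch below recomputes them for the Naive-Bayes block (it reproduces the reported NB values for most classes).

# Per-class precision/recall/F-measure from the NB confusion-matrix block.
import numpy as np

cm = np.array([
    [3474,  106, 162, 455, 589],   # true Normal
    [ 115, 3176,   1,  26,  15],   # true DOS
    [   0,    0,  63,   9,   4],   # true R2L
    [  40,   15,  10, 718,  17],   # true Probe
    [   0,    0,   2,   0,   0],   # true U2R
])

tp = np.diag(cm).astype(float)
precision = tp / np.maximum(cm.sum(axis=0), 1)   # correct / predicted-as-class
recall    = tp / np.maximum(cm.sum(axis=1), 1)   # correct / actually-in-class
f_measure = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
print(np.round(precision, 3), np.round(recall, 3), np.round(f_measure, 3))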
5.3

Naive-Bayes is built in 3.77 seconds, TAN in 20.09 seconds, DT in 36.86 seconds and SVM in 43.63 seconds, so Naive-Bayes is the fastest.
5.4

5.5
The average false positive rate is 0.033 for Naive-Bayes, 0.006 for TAN and DT, and 0.019 for SVM.

5.6 Accuracy Rate
Class    Classifier   Precision   Recall   F-Measure
Normal   NB           0.957       0.726    0.826
         TAN          0.992       0.994    0.993
         DT           0.991       0.995    0.993
         SVM          0.973       0.989    0.981
DOS      NB           0.963       0.953    0.958
         TAN          0.995       0.996    0.996
         DT           0.996       0.996    0.996
         SVM          0.992       0.98     0.986
R2L      NB           0.265       0.829    0.401
         TAN          0.873       0.908    0.89
         DT           0.863       0.829    0.846
         SVM          0.732       0.554    0.631
Probe    NB           0.594       0.898    0.715
         TAN          0.98        0.96     0.97
         DT           0.973       0.963    0.968
         SVM          0.967       0.95     0.959
U2R      NB           0.003       0.5      0.006
         TAN          0           0        0
         DT           0           0        0
         SVM          0           0        0
AVG      NB           0.921       0.826    0.861
         TAN          0.991       0.991    0.991
         DT           0.99        0.99     0.99
         SVM          0.977       0.978    0.977

5.7 ROC Curve

Conclusions

In this paper, we have proposed a framework of intrusion detection systems based on the Naive-Bayes and TAN algorithms and compared them with a decision tree and a support vector machine. According to the results, Naive-Bayes is found to be less time consuming, while TAN has a better accuracy rate and detection rate, and also a lower false positive rate.

References

[1] K. Korb and E. Nicholson, Bayesian Artificial Intelligence (2004).
[2] Ch. Kruegel, D. Mutz, W. Robertson, and F. Valeur, Bayesian Event Classification For Intrusion Detection, 19th Annual Computer Security Applications Conference, IEEE Computer Society, Washington DC 187 (2008), 14-23.
[3] L. Ben-Gal, Bayesian Networks, Encyclopedia Of Statistics In Quality And Reliability, 2007.
[4] Y. Wee, W. Cheah, S.H. Tan, and K. Wee, Causal Discovery And Reasoning For Intrusion Detection Using Bayesian Network 1 (2011), no. 2.
[5] K. Chin Khor, C.H. Ting, and S. Amnuaisuk, From Feature Selection To Building Of Bayesian Classifiers: A Network Intrusion Detection Perspective.
[6] J. Cheng and R. Greiner, From Feature Selection To Building Of Bayesian Classifiers: A Network Intrusion Detection Perspective, Proc. 14th Canadian Conference On AI, 2001.
[7] M. Pater, H. Kim, and A. Pamnam, State Of The Art In Intrusion Detection Systems.
[8] M. Panda and M.R. Patra, Network Intrusion Detection Using Naive Bayes, International Journal Of Computer Science And Network Security 7 (2007), no. 12, 258-263.
[9] N. Amor, S. Benferhat, and Z. Elovedi, Naive Bayesian Networks In Intrusion Detection Systems, 14th European Conference On Machine Learning / 17th European Conference On Principles And Practice Of Knowledge Discovery In Databases.
[10] S. Benferhat, H. Drias, and A. Boudjelida, An Intrusion Detection Approach Based On Tree Augmented Naive-Bayes And Expert Knowledge.
ECE Department,
seyedjavadi@qiau.ac.ir
mahdiani@pwut.ac.ir
Abstract: Although fixed-point arithmetic is widely used due to its simple hardware implementation, it suffers from significant drawbacks such as a limited dynamic range. A fixed-point hardware unit does not provide acceptable accuracy levels when it simultaneously deals with large and small numbers. Although floating-point arithmetic greatly addresses this problem, it is not widely used because it faces important challenges when realized in hardware. A novel computational paradigm named Dynamic Fixed-Point (DFP) is proposed in this paper, which provides improved precision levels while having similar VLSI implementation costs compared to traditional fixed-point. The accuracy simulation results and VLSI synthesis costs of the new method are presented and compared with fixed-point to prove its efficiency.
Introduction

Fixed-point arithmetic is widely used in various real-time and low-power applications due to the simplicity of its hardware units in terms of area, delay, and power consumption. However, the main problem with a fixed-point computational system is to preserve the dynamic range within a finite and fixed Word-Length (WL), which is determined based on the cost-accuracy trade-off [1]. This limitation prevents simultaneously representing large and small values in a finite WL. Increasing the WL at the output of the computational units is a trivial solution to maintain the desired dynamic range and accuracy. According to this approach, the WL at the output of each adder in the system should be increased by one bit and the WL at the output of each multiplier should be doubled, which significantly increases the costs. For a WL-bit adder, the output is

$$AdderOut_{FixedPoint}(WL-1:0) = Carry \,\&\, Sum(WL-1:1) \qquad (1)$$

In a WL-by-WL-bit multiplier with a 2WL-bit result, as another instance, the output is always defined as the WL most significant bits of the result:

$$MultOut_{FixedPoint}(WL-1:0) = result(2WL-1:WL) \qquad (2)$$
2 Dynamic Fixed-Point

In the DFP adder, the WL-bit output and its Scale Factor (SF) are selected based on the carry-out:

$$AdderOut_{DFP} = \begin{cases} Carry \,\&\, Sum(WL-1:1), & SF = 0, \text{ when } Carry = 1 \\ Sum(WL-1:0), & SF = 1, \text{ when } Carry = 0 \end{cases} \qquad (3)$$
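A behavioural sketch of this adder, under one plain reading of Eq. (3): when the carry-out is set, the top WL bits of the raw sum are kept with SF = 0 (the value is implicitly scaled), otherwise the WL-bit sum is kept with SF = 1. The word length and the example values are assumptions for illustration.

# DFP adder behaviour (sketch).
WL = 8

def dfp_add(a, b):
    raw = a + b                        # up to WL+1 bits
    carry = raw >> WL                  # the carry-out bit
    if carry:
        return raw >> 1, 0             # keep Carry & Sum(WL-1:1), SF = 0
    return raw & ((1 << WL) - 1), 1    # keep Sum(WL-1:0), SF = 1

out, sf = dfp_add(200, 180)            # 380 needs 9 bits -> scaled output
print(out, sf)                         # 190, SF = 0 (represented value = out * 2)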
$$MultOut_{DFP} = \begin{cases} result(2WL-1:WL), & SF = 0, \text{ when the result MSB is } 1 \\ result(2WL-2:WL+1), & SF = 1, \text{ when the result MSBs are } 01 \\ result(2WL-3:WL+2), & SF = 2, \text{ when the result MSBs are } 001 \\ result(2WL-4:WL+3), & SF = 3, \text{ when the result MSBs are } 0001 \end{cases} \qquad (4)$$

The SF could theoretically be any positive, non-zero value; however, there are simulation results which justify why the maximum SF in a DFP multiplier is limited to 3, as described before. Fig. 1 simultaneously includes the logarithmic average error of a WL-bit normal adder and multiplier as well as a WL-bit DFP adder and multiplier with different WL and SF values. The figure shows that the average error of a DFP component is always better than that of its traditional rival, regardless of WL. It also demonstrates that the average error of a DFP multiplier improves as SF increases.
2.1 Accuracy analysis

The average error of the DFP blocks follows expressions of the form:

$$2^{-(WL+SF+1)}, \quad SF = 0, 1 \qquad (5)$$

$$\Big(1 + \frac{2^{WL+2SF}}{2\,(2^{2SF}-1)}\Big), \quad SF \neq 0 \qquad (6)$$

3 Synthesis Results
Table 1 includes the synthesis results of 8-bit normal and DFP adders and multipliers. To achieve these results, the VHDL models of all components were developed and synthesized on 0.13 µm CMOS library cells using Mentor Graphics Leonardo Spectrum. The synthesis results of the DFP multiplier are presented for different SFs. The last row also includes the average accuracy simulation results of the DFP and fixed-point blocks to simplify the overall comparison. The table shows that a fixed-point adder is 26% smaller and 9% faster than the DFP adder, while also providing 20% less accuracy. The fixed-point multiplier, as another instance, is 9% smaller and 20% faster than a DFP multiplier with SF=3, while providing 48% worse accuracy.
Conclusions

A new computational paradigm called Dynamic Fixed-Point (DFP) is introduced in this paper. The DFP arithmetic blocks have similar VLSI structures and implementation costs with respect to fixed-point blocks, while providing an improved dynamic range and accuracy. The presented analytic studies as well as the simulation results show the accuracy improvement of DFP with respect to fixed-point. Synthesis results are also provided to compare the VLSI implementation costs of these two rivals.
References

[2] O. Sarbishei and K. Radecka, Analysis of precision for scaling the intermediate variables in fixed-point arithmetic circuits, ICCAD (2010), 739-745.
[4] S. Kobayashi and G.P. Fettweis, A new approach for block-floating-point arithmetic, ICASSP (1999), 2009-2012.
[5] T. Lenart and V. Owall, A 2048 complex point FFT processor using a novel data scaling approach, ISCAS (2003), 45-48.
Abbas Horri
dstghaib@shirazu.ac.ir
horri@shirazu.ac.ir
Abstract: Cloud providers must ensure that their service delivery is flexible in order to meet various consumer requirements. However, in order to support green computing, cloud providers also need to minimize the cloud infrastructure's energy consumption while conducting the service delivery. In this study, an energy consumption model for the time-shared policy in cloud environments is proposed. This model has been implemented and evaluated using the CloudSim simulator. The simulation results validate the model and indicate that the energy consumption may be considerable. They also demonstrate that there is a trade-off between energy consumption and quality of service in the cloud environment.
Introduction

One of the cloud's benefits is the possibility to dynamically adapt (i.e., scale up or scale down) the amount of resources provisioned to applications in order to attend to the variations in demand, which are predictable [1]. Elastic (i.e., automatically scaling) applications, such as web hosting, content delivery, and social networks, can use this cloud ability. Although the cloud has been used as a platform supporting elastic applications, it faces limitations such as ownership, scale, and locality. For instance, the number of hosting capabilities (virtual machines and computing servers) that can be offered to application services at a given instant of time by a cloud is limited; hence, scaling the application capacity in this situation becomes complex. Therefore, the applications hosted in a cloud must compromise on the overall QoS delivered to their users when the number of requests overshoots the cloud capacity [2]. One of the important requirements to be provided by cloud computing environments is a reliable QoS [3]. It is defined in terms of the service level agreements (SLA), which describe characteristics such as the throughput, response time, or latency delivered by the deployed system. Response time is an amount of time, obtained
1.1 The Approach
The term q·IPS is the number of instructions a job can execute between two context switches, in MI (million instructions). Hence, the cost of the context switches is computed as:

$$Icost(J) = \sum_{j \in J} \frac{I(j)}{q \cdot IPS}\; t \qquad (3)$$

Where J is the set of jobs to be executed under the time-shared policy, IPS is the processor's instructions per second, t is the cost of a context switch in MI (million instructions), I(j) is the total instructions of job j in MI, and q is the quantum parameter in seconds. To measure the extra energy consumed by the time-shared policy, the linear model described in (1) and (2) has been used. Based upon this model and the cost model depicted above, the extra energy consumption of the time-shared policy is:

$$E = \int_{t=0}^{\,Icost/(vm_{mips}\cdot u)} P(u)\, dt \qquad (4)$$

Where P(u) is the power consumed by the host, vm_mips indicates the processor speed of the VM in MIPS (million instructions per second), Icost is the cost of the time-shared policy in MI and u is the CPU utilization. Hence, Icost/vm_mips is the extra time of the time-shared policy, and E is the extra energy usage when the time-shared policy is applied. Each job in the cloud must be executed on a VM; hence, for each job executed under the time-shared policy, the cost and energy usage are calculated from the VM parameters based on the above methods.

Experimental Results

Figure 1 demonstrates the cost of the time-shared policy as the size of the jobs increases. In this figure, the X axis represents the job size and the Y axis represents the extra cost caused by the context switch overhead, given in MIPS. As can be seen, if the job size increases, the cost of the time-shared policy also increases. This is due to the fact that larger jobs cause more context switches than smaller jobs; in the cloud environment, job sizes are sufficiently large. In this case, the quantum size is 5 msec.

Figure 1: The indirect cost as the size of the jobs is increased.

The next experiment was planned to evaluate the effect of the quantum parameter on the turnaround time under the time-shared policy. Figure 2 shows that the turnaround time of jobs in the time-shared policy increases as the quantum parameter decreases. In this experiment, the job length is 75,000 MI. An increase in the quantum parameter contributes to a decrease in the time-shared policy cost.

Figure 2: The indirect cost increases by decreasing the quantum parameter.
Experimental Result
410
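As an informal illustration of the cost model in Eq. (3) and the energy model in Eq. (4), the following Python sketch evaluates both under simplifying assumptions (constant host power, made-up job sizes and platform figures; none of these values come from the paper):

    # Sketch of the context-switch cost model (Eq. 3) and the extra-energy
    # model (Eq. 4); all instruction counts are in MI (million instructions).

    def context_switch_cost(jobs_mi, q_sec, ips, t_mi):
        # I(j)/(q*IPS) approximates the number of quanta (hence context
        # switches) job j needs; each switch costs t MI.
        return sum((i_j / (q_sec * ips)) * t_mi for i_j in jobs_mi)

    def extra_energy(icost_mi, vm_mips, power_watts):
        # E = integral of P(u) over the extra time Icost/VM_MIPS; with a
        # constant power draw the integral reduces to a simple product.
        return power_watts * (icost_mi / vm_mips)   # Joules

    # Illustrative numbers: three 75,000 MI jobs, 5 ms quantum, 1000 MIPS VM,
    # 0.1 MI per context switch, 100 W host power.
    icost = context_switch_cost([75_000] * 3, 0.005, 1000.0, 0.1)
    print(icost, extra_energy(icost, 1000.0, 100.0))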
Conclusion
References
[1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz, and D. A. Patterson, Above the clouds: A Berkeley view of cloud computing, EECS Department, U.C. Berkeley, 2009.
[2] L. Wu, S. K. Garg, and R. Buyya, SLA-based admission control for a Software-as-a-Service provider in Cloud computing environments, Journal of Computer and System Sciences (2011), 367–378.
[3] A. Beloglazov, R. Buyya, and Y. Lee, A taxonomy and survey of energy-efficient data centers and cloud computing systems, Advances in Computers 82 (2011).
[4] H. Aydin, R. Melhem, D. Mosse, and P. Mejia-Alvarez,
Power-aware scheduling for periodic real-time tasks, IEEE
Transactions on Computers 53 (2004), 584-600.
Zahedan, Iran
Zanjan, Iran
m_mahmoodi_64@yahoo.com
b_sadeghi_b@yahoo.com
Abstract: Evolutionary algorithms are among the important algorithms that have been used in the data mining field to induce fuzzy if-then rule-based classification systems. The harmony search algorithm (HSA), an algorithm inspired by nature, has been successfully applied to classification. This paper proposes a rule-based system for medical data mining using a combination of HSA and fuzzy set theory, which we call FHDD. In the classification problem the objective is to maximize the number of correctly classified data and to minimize the number of rules. We have evaluated our new classification system on UCI machine learning data sets. The results show that the proposed algorithm can detect diseases with an accuracy that is acceptable or even better than previous works. In addition, the computation time needed to build the classifier is reduced, because FHDD utilizes the HSA to learn a set of fuzzy rules from labeled data in a parallel manner.
Introduction
Medical diagnosis can be viewed as a pattern classification problem: based on a set of input features, the goal is to classify a patient as having cancer or as not having it, i.e. as a malignant or a benign case [1]. Breast cancer is the most common cancer in women, accounting for about 30% of all cases [2]. Most breast cancers are detected as a mass on the breast. Some diseases such as breast cancer show symptoms that also appear in other diseases. Thus physicians must pay attention to previous decisions made for patients in the same conditions, and since early diagnosis is important, as it is directly linked with increased survival chances, the physician needs both knowledge and experience for proper decision making. From a computational point of view, breast cancer diagnosis can be viewed as a pattern classification problem: based on a set of input features, the goal is to classify a patient as having cancer or as not having it, i.e. as a malignant or a benign case.

This task is not easy considering the number of factors that the expert has to evaluate. To reduce the possible errors and help the expert, a classification system can be used. Classification is a supervised learning technique that takes labeled data samples and generates a classifier that classifies new data samples into different predefined groups or classes. This classification problem can be conveniently addressed by fuzzy logic with interpretable if-then rules and membership functions. Classification schemes have been developed successfully for several applications such as medical diagnosis, speech recognition, etc. For classification problems, sets of if-then rules for learned hypotheses are considered to be one of the most expressive and comprehensible representations.

Generally, the rules and the membership functions used by fuzzy logic for solving the classification problem are formed from the experience of human experts. Section 5 concludes the paper.
Proposed Algorithm
S  (Small):         10000
MS (Medium Small):  01000
M  (Medium):        00100
ML (Medium Large):  00010
L  (Large):         00001        (1)

where, for each fuzzy if-then rule R_j, the associated measure is the sum of the differences of the training patterns (binary strings) in class h from the rule R_j.
3.1.1 Initial population

3.1.5 Stopping condition

3.2 Fuzzy Inference
The classification rate is computed as

\text{Classification Rate} = \frac{TP + TN}{TP + TN + FN + FP} \qquad (2)

where
TP: true positives, the number of cases in our training set covered by the rule that have the class predicted by the rule;
FP: false positives, the number of cases covered by the rule that have a class different from the class predicted by the rule;
FN: false negatives, the number of cases that are not covered by the rule but that have the class predicted by the rule;
TN: true negatives, the number of cases that are not covered by the rule and that do not have the class predicted by the rule.
Also, the Precision, Recall and F-Measure are computed by the following equations; the F-Measure is a trade-off between Precision and Recall.

\text{Precision} = \frac{TP}{TP + FP} \qquad (3)

\text{Recall} = \frac{TP}{TP + FN} \qquad (4)

\text{F-Measure} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \qquad (5)
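For concreteness, a small Python helper computing the four measures of Eqs. (2)-(5) from the confusion-matrix counts could look as follows (the example counts are made up):

    # Classification rate, precision, recall and F-measure (Eqs. 2-5)
    # from the confusion-matrix counts defined above.

    def metrics(tp, fp, fn, tn):
        rate = (tp + tn) / (tp + tn + fn + fp)        # Eq. (2)
        precision = tp / (tp + fp)                    # Eq. (3)
        recall = tp / (tp + fn)                       # Eq. (4)
        f_measure = 2 * precision * recall / (precision + recall)  # Eq. (5)
        return rate, precision, recall, f_measure

    print(metrics(tp=430, fp=12, fn=18, tn=239))      # made-up counts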
Experimental Results
Method              Classification Rate   Precision   Recall   F-Measure
C4.5                0.946                 0.946       0.946    0.946
NN                  0.958                 0.959       0.959    0.959
KNN                 0.951                 0.951       0.951    0.951
BayesNet            0.96                  0.962       0.96     0.96
Proposed algorithm  0.9778                0.978       0.978    0.978
Instances   Attributes   Classes
699         10           2
768         8            2
270         13           2
Method              Classification Rate   Precision   Recall   F-Measure
C4.5                0.762                 0.754       0.754    0.751
NN                  0.753                 0.75        0.754    0.751
KNN                 0.702                 0.696       0.702    0.698
BayesNet            0.743                 0.741       0.743    0.742
Proposed algorithm  0.793                 0.798       0.795    0.795
Method              Classification Rate   Precision   Recall   F-Measure
C4.5                0.762                 0.766       0.767    0.767
NN                  0.751                 0.75        0.752    0.751
KNN                 0.735                 0.713       0.732    0.728
BayesNet            0.811                 0.811       0.811    0.811
Proposed algorithm  0.793                 0.798       0.795    0.795
Conclusion

We have introduced a novel approach to fuzzy classification for medical diagnosis. This paper has presented a combination of the Harmony Search Algorithm and fuzzy logic for classification. The proposed algorithm is used in the structure of a Michigan-style evolutionary fuzzy system. The algorithm learns the rules for each class independently. Our experiments have confirmed that the algorithm can classify the data with considerable classification accuracy. The algorithm has several desirable features, such as an increased classification rate, the generation of only one rule for each class, and increased interpretability.

References

[1] T. Nakashima, G. Schaefer, Y. Yokota, S. Ying Zhu, and H. Ishibuchi, Weighted Fuzzy Classification with Integrated Learning Method for Medical Diagnosis, IEEE Engineering in Medicine and Biology 27th Annual Conference (2005), 5623–5626.
[2] American Cancer Society, Cancer facts and figures: http://www.cancer.org/docroot/STT/stt_0.asp.
[3] S. Abe and M.S. Lan, A method for fuzzy rules extraction directly from numerical data and its application to pattern classification, IEEE Trans. on Fuzzy Systems 3 (1995), no. 1, 18–28.
[4] H. Ishibuchi, K. Nozaki, and H. Tanaka, Distributed representation of fuzzy rules and its application to pattern classification, Fuzzy Sets and Systems 52 (1992), 21–32.
[5] S. Mitra, L.I. Kuncheva, Y. Shi, and Z. Chen, Improving classification performance using fuzzy MLP and two-level selective partitioning of the feature space, Fuzzy Sets and Systems 70 (1995), no. 1, 1–13.
[6] V. Uebele, S. Abe, and M. Lan, A neural-network-based fuzzy classifier, IEEE Transactions on Systems, Man and Cybernetics 25 (1995), no. 2, 333–361.
[7] A. Gonzalez and R. Perez, SLAVE: A genetic learning system based on an iterative approach, IEEE Transactions on Fuzzy Systems 7 (1999), no. 2, 176–191.
Sama Technical and Vocational Training College, Islamic Azad University, Astara Branch, Astara, Iran
Department of Computer
ce@samapour.ir
Mohsen Solhnia
Science and Research Branch, Islamic Azad University, Guilan, Iran
Department of Computer Engineering
m.solhnia@gmail.com
Abstract: The fast development of network infrastructure and the increase in IT usage across many human activities have greatly increased the need to provide e-services for clients. Cost is generally considered one of the main obstacles to offering such services in developing countries. In this paper, we present new cloud computing based approaches to overcome the existing barriers against implementing e-government services by using cloud computing capacities. Nowadays, with the increasing extension of e-government services, integrating these services from economic, cultural and other viewpoints has become almost a necessity. This integration, achieved by cloud computing techniques, decreases e-government costs, facilitates the use of the services for clients, helps to spread the use of e-government services and thereby significantly decreases government costs. This paper discusses the concept of cloud computing, proposes cloud based approaches for e-government services, and surveys the related economic opportunities and challenges.
Introduction
In every period, the most common challenges of governments are dealing with resource constraints, reducing costs and using technology within a specific framework, from e-government to c-government via cloud computing [1]. The common goal of the mentioned cases is maximum benefit from minimum opportunity. Thus, the role of research in the field of optimization and lowering costs grows every day. On the other side, the world is turning into a small village with the growth of global communications and communications equipment, and government is not excluded from this development. Therefore, the government developed
In cloud computing there are further divisions. In terms of the ownership of enterprise resources, clouds are divided into three different states: private cloud, public cloud and hybrid cloud [5]. The private cloud is defined as a state in which an enterprise owns all resources in the cloud. In a public cloud, all resources are available for enterprises to rent. A hybrid cloud is a combination of private and public clouds; this means that part of it is rented and the other parts are fully owned.
Cloud Approach for E-Government

5.1 Benefits

5.2 Challenges
a) Security policies. In the e-government world, data security is one of the main challenges. The cloud computing environment gives multiple users and software access to shared hardware and network resources in order to improve resource usage. However, different governmental entities unavoidably face the situation of sharing the same physical infrastructure. Thus the entities are deeply concerned about the security of important and sensitive data being released without security and privacy guarantees [12].

b) Network infrastructure. Cloud computing is entirely network-based and highly dependent on the network condition. Thus, the risks of network transmission delays or other problems increase after the e-government services move to cloud computing environments, and system reliability is therefore reduced. On the other hand, a migration to the cloud that neglects the network condition imperils the success of the system implementation and its security.

c) Security considerations. In cloud computing, e-government service platform applications become more varied. Thus, the difficulty of application service management will also increase correspondingly. The public hopes to use the e-services without violation of personal privacy by the government. In this regard, the government must also consider the actual needs of the citizens and legal standards, to ensure service availability and to prevent serious crimes that may arise. Moreover, in order to strengthen security, the government may at the same time infringe on citizens' rights by closely supervising activities in the cloud.

d) Appropriate laws. In most cases it can be seen that laws have not changed in proportion to the growth rate of technology. For instance, cloud computing involves many legal issues which are not fully addressed at present. In some countries, the gap between cloud computing technology and policy has raised concern, and some governments have begun to formulate and improve relevant laws, but cloud technology develops much faster than governments' legislation does. Therefore, the cloud computing environment is still full of legal uncertainty.
Conclusion
References
[1] M. Armbrust et al., Above the Clouds: A Berkeley View of Cloud Computing (2009).
[2] The e-government handbook for developing countries: unpan1.un.org/intradoc/groups/public/documents/apcity, World Bank, 2009.
[3] W. Zhang and Q. Chen, From E-government to C-government via Cloud Computing, 2010.
[4] A. Tripathi and B. Parihar, E-governance challenges and cloud benefits, 2011.
Babak H. Khalaj
f_rezaei@ee.sharif.edu
khalaj@sharif.edu
Abstract: Human tracking is one of the main problems in the object tracking field. There are many challenges such as human pose variation, illumination changes in the environment, the lack of a specific motion behavior, occlusion and image noise. This paper presents an adaptive particle filter using HOG and color histogram features for human tracking. A motion model is proposed which
estimates the target speed from the history of its last displacements. The experimental results
show improvements in the robustness of tracking. In addition, by using a background subtraction
before extracting the HOG features, the running time of the algorithm improves. The publicly
available data set PETS2009 S2.L1 is used to evaluate the performance of the proposed method. It
is shown that the correct tracking percentage improves and probability of missing targets decreases.
Keywords: human tracking; particle filter; motion model; HOG; color histogram.
Introduction
of it, the power of the tracker improves and the probability of missing targets decreases. In addition, by using background subtraction and segmentation on the image before extracting the HOG features, the running time of the algorithm improves. This helps the algorithm to run in real time.
Particle Filter

Particle filtering is one of the most important algorithms for human tracking. As people have no specific structure or equation for their motion, it is necessary to have a tracking algorithm that does not require the targets' equations of motion. The particle filter is not a tracking algorithm by itself, but a sampling algorithm. By combining it with some observation models, it can be used for tracking applications. So, based on the observation model and the feature extraction methods used in particle filtering, there will be various approaches with different results.

2.1 Tracking-by-Detection

2.2 Human Detection Features

HOG

One of the best human detection algorithms is the Histogram of Oriented Gradients (HOG), introduced in [10]. The HOG detector is a sliding window algorithm. This means that for any given image, a window is moved across all locations and scales and a descriptor is computed, so it is a time consuming algorithm and causes difficulties in real time applications. A pre-trained classifier is used to assign a matching score to the descriptor to decide whether there is a human or not. The classifier is a linear SVM classifier and the descriptor is based on the histograms of gradient orientations.

Color Histogram

The color histogram is computed from the RGB color space as follows [15].

Proposed Method

3.1 Initial Steps

First of all, the first frame of the video sequence is subtracted from its background and then segmented using a proper threshold to extract the binary foreground image. It should be mentioned that if the background images are not available, they can be derived using methods such as reference image extraction. Second, the HOG algorithm is run only on the white pixels of the binary image to extract the initial locations of the targets. The centroids of these detection windows are

\{(x_r, y_r) \mid r = 1 : R\} \qquad (2)

in which R is the number of detected targets. This step is proposed because the white pixels of the binary image, representing the foreground, are the pixels most likely to be on the body of the human targets. As mentioned before, the HOG method is one of the most important and reliable approaches for human detection, but its speed is very low, so this simple background subtraction brings a huge improvement in the speed of the algorithm and helps it to run in real time. At this step, target templates called q_r are derived using the color histograms of the initial target window locations and their HOG features. The template that we propose is the combination of both features mentioned above, including color information of the target and human contour information. It makes the template robust against variations and noise in the video sequence such as illumination changes, human pose variation and occlusion.

3.2 Particle Filter

The next step is running the particle filter for the targets. The particle filter contains samples and their corresponding
Observation Model

The observation model proposed in this paper is a combination of HOG features and a color histogram. It helps the model to be more robust against variations and noise in video sequences such as illumination changes, human pose variation and occlusion, since it includes the color information and the human contour information of the target simultaneously. The proposed observation model is

W_n = \alpha \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{d^2}{2\sigma^2}} + (1 - \alpha) W_{HOG}, \qquad (3)

where

d^2 = 1 - \rho(p(X_n), q), \qquad (4)

\rho denotes the Bhattacharyya coefficient [16] for the template model compared with the observations, W_{HOG} represents the observation weight obtained by the HOG descriptor, and \alpha is a coefficient controlling the effect of each observation on the total weight.

State Vector

The state vector of the filter is considered as

X_t = \{x_t, y_t, u_t, v_t\}. \qquad (7)
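A minimal Python sketch of how the combined weight of Eqs. (3)-(4) could be evaluated for one particle follows; the histograms, the HOG weight, and the values of alpha and sigma are illustrative assumptions, not values from the paper:

    import numpy as np

    # Combined observation weight (Eqs. 3-4): a Gaussian of the Bhattacharyya
    # distance between color histograms, mixed with the HOG weight.

    def bhattacharyya(p, q):
        # Bhattacharyya coefficient of two normalized histograms.
        return float(np.sum(np.sqrt(p * q)))

    def particle_weight(p_hist, q_hist, w_hog, alpha, sigma):
        d2 = 1.0 - bhattacharyya(p_hist, q_hist)                 # Eq. (4)
        gauss = np.exp(-d2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
        return alpha * gauss + (1 - alpha) * w_hog               # Eq. (3)

    # Illustrative 8-bin histograms; alpha and sigma are tuning parameters
    # whose values are not given numerically in the text.
    p = np.full(8, 1 / 8)
    q = np.array([0.2, 0.1, 0.1, 0.1, 0.1, 0.1, 0.2, 0.1])
    print(particle_weight(p, q, w_hog=0.6, alpha=0.5, sigma=0.2))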
Experimental Results
history of its last displacements. In addition, background subtraction is used before extracting the HOG features. It has been shown that the correct tracking percentage, the robustness and the running time of the algorithm are improved by our proposed method.
Acknowledgement

The authors would like to thank the National Elite Foundation for their support.

References
mehrzad.almasi@gmail.com
hamidnaji@ieee.org
Abstract: One of the most important issues in distributed databases is the concurrency control of transactions that can run simultaneously. This is a critical issue because it can endanger the integrity and consistency of data in distributed databases, so concurrency control protocols ensure the integrity and consistency of the data. In this paper, we propose a new concurrency control algorithm based on multi-agent systems which is an extension of the majority protocol.
Introduction
2.1 Multi-agent systems
An agent is a computer system situated in some environment and capable of autonomous action in this
environment in order to meet its design objectives [3].
A Multi agent system consists of a group of agents that
can potentially interact with each other.
2.2 Majority protocol
The Approach
Step 1: The monitoring node M broadcasts a message (Metadata) to the other n − 1 nodes asking them to check the status of data Q_1, ..., Q_n. Nodes should check for acceptance of the requested data Q_i. With this message the algorithm is initiated.

Step 2: Upon receiving the message (Metadata), node i (and also the monitoring node) checks whether transaction j can lock its requested data or not. If node i can lock data copy Q_j through its local lock manager, then it sends a message for transaction j to another agent whose destination agent ID is j. If i = j, this message sending is implicit.

Step 3: In this step, each node does the following: for each i and each j (j ≠ i), let M_j be the message node j has received from node i in step 2. This message is a vote for transaction j. Therefore, agent j counts the received votes for transaction j. All agents do this in parallel. If the number of votes for a transaction is at least n/2 + 1, transaction j can be run.

Step 4: In this step, each agent announces the result of counting votes and sends a commit message to the other nodes for running the corresponding transaction.

Step 1 is used to send the message by the monitoring node for initiating the algorithm. In step 2, depending on the status of the data Q (whether it can be locked or not), each agent sends a message to the other agents, where the destination nodes are selected by the following rule: node i sends a message to j if it can lock the requested data of transaction j. This is done for all transactions on each agent. In step 3, each agent counts the number of messages it received during step 2. These counts can be done in parallel; in fact, there is no strict boundary between steps 2 and 3, i.e. upon receiving a message, an agent starts counting votes, although it is possible for some agents not to have received any messages yet. We present our algorithm in several steps for better understanding. In the final step, agents will send a commit command for executing a transaction if at least n/2 + 1 nodes sent their votes for locking the requested data of the transaction during the previous steps.
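A minimal Python sketch of the vote-counting core of steps 2-4 is given below; the local lock-manager test and the message transport are stubbed out, since the paper does not fix those interfaces:

    # Sketch of steps 2-4: each node votes for transaction j iff it can lock
    # j's requested data locally; a transaction commits with at least
    # n/2 + 1 votes (a majority).

    def run_round(n_nodes, can_lock):
        # can_lock(i, j) stands in for node i's local lock manager (assumed).
        votes = [0] * n_nodes              # votes[j]: votes for transaction j
        for i in range(n_nodes):           # step 2: every node votes
            for j in range(n_nodes):
                if can_lock(i, j):
                    votes[j] += 1          # message from node i to agent j
        quorum = n_nodes // 2 + 1          # majority threshold
        return [j for j in range(n_nodes) if votes[j] >= quorum]

    # Toy example: 5 nodes; transaction j is lockable on node i
    # unless (i + j) is a multiple of 5.
    print(run_round(5, lambda i, j: (i + j) % 5 != 0))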
3.1 An Illustrative Example
2, 3, 4 and 5). Message passing during step 2 can be done as mentioned above (represented by dashed lines). After counting votes, it is clear that nodes 0 to 5 have 4, 4, 2, 4, 3 and 3 votes respectively. As mentioned above, each request gaining at least n/2 + 1 (here 4) votes can be run.
The purpose of clustering is to decrease the number of messages exchanged between nodes, reducing the network traffic and the load on the monitoring node and the other nodes. Here, each cluster is called an external agent, and the concurrency control algorithm mentioned in the previous section can be run in each of the clusters separately. Nodes in clusters are called internal agents, and the node initiating the algorithm is called the monitoring agent. The use of external agents can increase the speed of the algorithm due to parallelism and the concurrency control level. To present the clustering model we make the following assumptions: the number of nodes in the system should be an odd number, and the algorithm is most efficient when there are the fewest failures in the links.

In fig. 3 there are 11 nodes, where node 0 is the monitoring agent; nodes 1 to 5 are in the first cluster (cluster 0) and the others are in cluster 1. First, the monitoring agent broadcasts a message to all other nodes for initiating the algorithm and the parallel execution of the algorithm in both clusters (fig. 3a).
n − 1 nodes, which means that the corresponding transaction can be run. If the monitoring agent does not receive a commit message from some inner agents (here, nodes 4 and 7, shown as colored nodes) before a threshold time, the monitoring node broadcasts a message to cluster 0 for transaction 7 as (7, 2), in which the first argument is the ID of the node and the second is the number of votes previously received for transaction 7 (fig. 3c). The same holds for transaction 4 in cluster 1. In the next stage, nodes 1 to 5 in cluster 0 (k_1) send their votes to the corresponding node (j) of node i in cluster 1 (k_2). The corresponding node (j) of node i is derived as follows: j = i + (|k_1 − k_2| · m), where m is the number of nodes in each cluster. This holds if i < j; otherwise the formula changes to j = i − (|k_1 − k_2| · m). In this example, the corresponding node for transaction 7 is 2, where i > j.
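A small helper makes the corresponding-node rule concrete; the direction test is paraphrased in terms of the cluster indices, which is an assumption consistent with the worked example:

    # Corresponding-node rule between clusters k1 and k2 with m nodes per
    # cluster: j = i + |k1 - k2| * m in one direction, j = i - |k1 - k2| * m
    # in the other.

    def corresponding_node(i, k1, k2, m):
        offset = abs(k1 - k2) * m
        return i + offset if k1 < k2 else i - offset

    # Example from the text: m = 5; node 7 in cluster 1 maps back to node 2.
    print(corresponding_node(7, 1, 0, 5))  # -> 2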
The Implementation
Conclusions
The number of messages required for the concurrency control of n transactions in the majority protocol is 3n(n − 1), while in the proposed algorithm, which is an extension of the majority protocol, this value equals (n − 1)(1 + 2n), and in the clustering model it is 3(n² − 1)/2. These formulas are obtained from analytic computations when all links are intact and all messages are passed. Here, in the proposed algorithm, message passing is shared
References
[1] B. Bhargava, Concurrency Control in Database Systems, IEEE Transactions on Knowledge and Data Engineering 11 (1999).
[2] P.A. Bernstein, V. Hadzilacos, and N. Goodman, Concurrency control and recovery in database systems, Addison
Wesley, Reading, (1987).
[3] M. Wooldridge, An introduction to multiagent systems, John
Wiley and Sons Ltd (2002).
[4] Y. Aoyama and J. Nakano, RS/6000 SP: Practical MPI Programming, International Technical Support Organization, IBM, Chapter 1, pages 11–12, 1999.
K. Faez
Ahar, Iran
Tehran, Iran
AHazrati@iau-ahar.ac.ir
Kfaez@aut.ac.ir
T. Taheri
P. Hazrati Bishak
Qazvin, Iran
Ahar, Iran
taheri_tayebeh2002@yahoo.com
PHazrati@iau-ahar.ac.ir
Abstract: In this paper, we propose a fast and robust scheme, called the Multiscale Local Average Binary Pattern operator based Genetic algorithm (MLABPG), and apply it to face recognition. The proposed scheme consists of two steps: feature selection and classification. In MLABPG, feature selection is based on a modified multiscale LBP operator; we take the size of the window (s) as a parameter, with s × s denoting the scale of the LBP operator. The calculation is performed on the average gray values of the pixels within windows, instead of individual pixels, and the standard deviation values of the pixels are used for comparison. In the classification step, classifiers are weighted according to their importance using a Genetic Algorithm (GA); we can optimize the classification accuracy by combining the classifiers based on the weights obtained with the GA. Our fitness function measures the accuracy rate achieved by classification fusion. The experimental results on the ORL database validate that the offered algorithm has better than or comparable performance with state-of-the-art local feature based methods.

Keywords: Local Binary Pattern, Genetic Algorithm, Average values, Standard Deviation values
Introduction
Recently, Local Binary Patterns (LBP) were introduced as a powerful local descriptor for the microstructures of images [19]. The LBP operator labels the pixels of an image by thresholding the 3 × 3 neighborhood of each pixel with the center value and considering the result as a binary string or a decimal number. Recently, Ahonen et al. offered a novel approach for face recognition which takes advantage of the Local Binary Pattern (LBP) histogram [7]. After it was extended to the uniform LBP, it was used in many places because of its highly efficient coding and excellent local texture
The original LBP operator, introduced by Ojala et al. [19], is a powerful means of texture description.

A genetic algorithm is a population-based search and optimization method that simulates the process of natural evolution. The two main concepts of natural evolution, natural selection and genetic dynamics, inspired the development of this method. The basic principles of this technique were first laid down by Holland [25] and are well described, for example, in [26], [27].
In general, GAs start with an initial set of random solutions called the population [28]. A GA generally has four components: a population of individuals, where each individual in the population represents a possible solution; a fitness function, which is an evaluation function by which we can tell if an individual is a good solution or not; a selection function, which decides how to pick good individuals from the current population for creating the next generation; and genetic operators such as crossover and mutation, which explore new regions of the search space while keeping some of the current information at the same time. Each individual in the population, representing a solution to the problem, is called a chromosome. Chromosomes represent candidate solutions to the optimization problem being solved. In GAs, chromosomes are typically represented by binary bit vectors, and the resulting search space corresponds to a high dimensional Boolean space. It is assumed that the quality of each candidate solution can be evaluated using the fitness function.
LABP(s)_{P,R}(x, y) = \sum_{p=0}^{P-1} s(M_p - M_c)\, 2^p \qquad (2)

s(x) = \begin{cases} 1 & \text{if } x \geq \tau \\ 0 & \text{if } x < \tau \end{cases}, \qquad 0 \leq \tau < 1 \qquad (3)
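The following Python sketch is one possible reading of the operator in Eqs. (2)-(3); the threshold symbol lost in extraction is written here as tau, and its exact definition (the paper relates it to standard deviation values) is an assumption:

    import numpy as np

    # Sketch of Eq. (2): each of the P neighbors contributes an s x s window
    # average M_p, compared against the center window average M_c via the
    # thresholding function s() of Eq. (3) with assumed threshold tau.

    def window_mean(img, cy, cx, s):
        h = s // 2
        return img[cy - h:cy + h + 1, cx - h:cx + h + 1].mean()

    def labp_code(img, cy, cx, s, r, p_count=8, tau=0.0):
        mc = window_mean(img, cy, cx, s)
        code = 0
        for p in range(p_count):
            ang = 2 * np.pi * p / p_count
            ny = int(round(cy + r * np.sin(ang)))
            nx = int(round(cx + r * np.cos(ang)))
            mp = window_mean(img, ny, nx, s)
            if mp - mc >= tau:          # s(x) = 1 if x >= tau, else 0
                code |= 1 << p          # weight 2^p
        return code

    img = np.random.default_rng(0).random((32, 32))
    print(labp_code(img, 16, 16, s=3, r=5))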
Recognition rates (%) for different numbers of training images (#Train):

#Train   MLABG   LBP[6]   LDA[4]   Gabor[5]   LGBPH[12]
2        83.87   79.03    76.33    81.33      85.57
3        91.13   86.80    86.67    88.10      94.72
4        96.35   93.76    92.86    93.43      97.56
5        98.37   96.21    95.47    95.67      98.83
6        98.75   97.12    96.67    97.43      99.56
7        99.43   97.75    97.10    98.64      99.86
8        100     98.67    97.33    99.65      100
9        100     99.60    98.56    100        100
References
[1] W.Y. Zhao, R. Chellappa, P.J. Phillips, and A. Rosenfeld, Face recognition: A literature survey, ACM Computing Surveys 34 (2003), no. 4, 399–485.
[2] A.K. Jain, A. Ross, and S. Prabhakar, An introduction to biometric recognition, IEEE Transactions on Circuits and Systems for Video Technology 14 (2004), no. 1, 84–92.
[3] M. Turk and A. Pentland, Eigenfaces for recognition, Journal of Cognitive Neuroscience 3 (1991), no. 1, 71–86.
[5] C. Liu, H. Wechsler, and Y. Shi, Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition, IEEE TPAMI 11 (2002), no. 4, 467–476.
[13] C.H. Chan, C. Parkan, and A. Swami, Multi-scale local binary pattern histogram for face recognition, Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, Surrey, U.K. (2008).
[14] A.-A. Bhuiyan and C.H. Liu, On face recognition using Gabor filters, World Academy of Science, Engineering and Technology 28 (2007).
[15] R. Mehta, J. Yuan, and K. Egiazarian, Local Polynomial Approximation-Local Binary Pattern (LPA-LBP) based Face Classification, Proc. SPIE 7881 (2011).
[16] J. Shelton, G. Dozier, K. Bryant, K. Popplewell, T. Abegaz, K. Purington, L. Woodard, and K. Ricanek, Genetic Based LBP Feature Extraction and Selection for Facial Recognition, Proceedings of the ACM Southeast Conference, Kennesaw, GA (2011).
[17] The Olivetti Research Laboratory (ORL) database, Cambridge, U.K.: http://www.uk.research.att.com/pub/data/att_faces.zip, 1994.
[18] The Olivetti Database: http://www.cam-orl.co.uk/facedatabase.html.
[20] M.L. Raymer, W.F. Punch, E.D. Goodman, L.A. Kuhn, and A.K. Jain, Dimensionality Reduction Using Genetic Algorithms, IEEE Transactions on Evolutionary Computation 4 (2000), 164–171.
[21] A.K. Jain, D. Zongker, and M. Pietikainen, Feature Selection: Evaluation, Application, and Small Sample Performance, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (1997), no. 2.
[22] K.A. De Jong, W.M. Spears, and D.F. Gordon, Using genetic algorithms for concept learning, Machine Learning 13 (1993), 161–188.
[23] L.I. Kuncheva, L.C. Jain, and S.Z. Li, Designing Classifier Fusion Systems by Genetic Algorithms, IEEE Transactions on Evolutionary Computation 33 (2000), 351–373.
[24] D.B. Skalak, A. Hadid, and M. Pietikainen, Using a Genetic Algorithm to Learn Prototypes for Case Retrieval and Classification, Proceedings of the AAAI-93 Case-Based Reasoning Workshop, Washington, D.C., American Association for Artificial Intelligence, Menlo Park, CA (1994), 64–69.
[25] J. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan, 1975.
[26] K. De Jong, An analysis of the behavior of a class of genetic adaptive systems, The University of Michigan (1975).
[27] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, 1989.
[28] D.E. Goldberg, Adaptation in natural and artificial systems, Ann Arbor, MI, Univ. of Michigan Press, 1975.
haghighat_bahar@ee.sharif.ir
bagheri-s@sharif.ir
Mohsen Firouzi
Sharif University of Technology
Department of Electrical Engineering, 202 ACL
mfirouzi@alum.sharif.edu
Keywords: Reinforcement Learning; Function Approximation; Active Learning Method; Fuzzy Modeling.
Introduction
to estimate the more sparsely sampled and yet relevant regions accurately [1]. These inherent complications in the problem emphasize the significance of the
FA method selection issue to ensure reliable and fast
learning. In this work, we propose the use of the fuzzy
modeling technique of Active Learning Method (ALM)
[4] as an ideal FA method capable of securing a fast
and reliable learning scheme in continuous domain reinforcement learning. We will show that ALM is capable of overcoming the aforementioned challenges due to
its unique modeling approach. Compared to existing
approaches, our proposed strategy requires much less mathematical exactness and computational effort due
to its fuzzy nature, and yet outperforms typical powerful FA approaches such as a MLP trained by back
propagation in terms of convergence behavior. The
powerful modeling and function approximation characteristics of ALM seem a natural answer to the aforementioned difficulties. By utilizing Ink Drop Spread
(IDS) operator and an effective partitioning scheme,
ALM is capable of representing functions of any degree to any arbitrary level of accuracy, allowing for
a perfect tracing of the value function evolution during the learning process and thus avoiding the possible
divergence due to accumulated intermediate approximation errors. This advantage is what Gaussian Processes (GP) for function approximation [5], [6], [7] have
obtained through being non-parametric. GP methods
provide the expected value of the approximated function alongside with its variance as a quantitative indicator to the amount of uncertainty of the approximated value. This indicator can be very useful to
guide the search in RL [7], [8]. Such information is
naturally incorporated into ALM modeling technique
in the form of Narrow Path (NP) and spread values,
extracted from IDS planes, which are the fuzzy equivalents of the expected value and variance. In an attempt to solve the biased sampling and the local nonstationary problems through making the effect of each
update sufficiently local, [9] uses a Gaussian Mixture
Model (GMM). ALM also incorporates the notion of
arbitrarily local updates through the arbitrary choice
of radius of the ink stain utilized in the IDS curve extractor units. In addition to addressing the biased
sampling and the local non-stationary problems, this
also enables the algorithm to avoid the undesired and
unpredictable changes caused by the global updates
which are common in neural networks-based FA methods, and ultimately eliminating the necessity for batch
updates [10]. Variable resolution methods also try to
maintain the locality of update effects by partitioning the domain into independently updating regions
[11]. The locality of updates is managed through partitioning the regions into further subdivisions. However, the partitioning scheme is irreversible and there is no generalization between neighboring partitioned regions.
\varphi(x) = \Big\{\, b \;\Big|\; \sum_{y=1}^{b} d(x, y) \;\geq\; \sum_{y=b}^{y_{max}} d(x, y), \; b \in Y \,\Big\} \qquad (1)

where \varphi(x) denotes the narrow path value corresponding to an input x, d(x, y) indicates the darkness value associated with the point (x, y) in the IDS plane, and \sigma(x) denotes the spread value associated with point x. Suppose that \psi_{ik}(x) denotes the output of the k-th IDS plane with respect to input x_i. Then the final model output y would be

y \;\text{is}\; \alpha_{11}\,\psi_{11} \;\text{or}\; \ldots \;\text{or}\; \alpha_{ik}\,\psi_{ik} \;\text{or}\; \ldots \;\text{or}\; \alpha_{N l_N}\,\psi_{N l_N} \qquad (3)

\beta_{ik} = \log\Big(\frac{1}{\sigma_{ik}}\Big) \qquad (4)

\alpha_{ik} = \frac{\beta_{ik}\, Y_{ik}}{\sum_{p=1}^{N} \sum_{q=1}^{l_p} \beta_{pq}\, Y_{pq}} \qquad (5)
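A small Python sketch may clarify how a narrow path value in the spirit of Eq. (1) is read off one column of an IDS plane; the spread computation shown is only a plausible stand-in, since the paper's Eq. (2) did not survive extraction:

    import numpy as np

    # Eq. (1): for input x, the narrow path is the first output level b at
    # which the accumulated darkness up to b reaches the darkness from b on
    # (a weighted-median-style reading of the IDS column d(x, .)).

    def narrow_path(column):
        # column[y] = darkness d(x, y) for y = 0 .. y_max.
        total = column.sum()
        below = np.cumsum(column)
        for b, d_b in enumerate(column):
            # sum_{y<=b} d >= sum_{y>=b} d, where the right side is
            # total - below[b] + d_b (point b counted on both sides).
            if below[b] >= total - below[b] + d_b:
                return b
        return len(column) - 1

    def spread(column):
        # Darkness-weighted standard deviation of y: an assumed stand-in
        # for the spread value sigma(x).
        y = np.arange(len(column))
        w = column / column.sum()
        mu = (w * y).sum()
        return np.sqrt((w * (y - mu) ** 2).sum())

    col = np.array([0.0, 0.1, 0.5, 1.0, 0.5, 0.1, 0.0])
    print(narrow_path(col), round(spread(col), 3))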
Proposed Algorithm
As stated before, many of the most popular reinforcement learning algorithms are based on the dynamic programming algorithm known as value iteration [14]. This might be called discrete value iteration, since the algorithm takes as input a complete model of the world as a Markov Decision Task (MDT) and computes the optimal value function J* as the minimum possible sum of future costs starting from x. In order for J* to be well-defined, it is assumed that costs are non-negative and that some absorbing goal state is reachable from all states. By extending discrete value iteration to the continuous case, the smooth value iteration algorithm was proposed in [3]. This is done by replacing the lookup table over all states with an FA method trained over a sample of states. The authors report that, as suggested by test results for a variety of function approximators, including polynomial regression, an MLP trained by back propagation, and locally weighted regression, convergence is no longer guaranteed, in contrast to the discrete case. Instead, four possible classes of behavior are recognized. These behaviors and their descriptions according to [3] are summarized in table 1. Here we propose the use of the ALM modeling technique as the function approximator to be utilized in smooth value iteration. We will use a simplified version of ALM where the iterative partitioning scheme is replaced with an intuitive choice of variable partitions by the user. The pseudo code for the proposed
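The pseudo code referenced here did not survive extraction; the following Python sketch of smooth value iteration with a generic function approximator (ALM in the paper, any regressor with fit/predict here) conveys the idea, with the cost and transition functions assumed as inputs:

    # Minimal sketch of smooth value iteration: a lookup table over states
    # is replaced by a function approximator trained on sampled states.
    # cost(), next_state() and the sampled states are assumed interfaces.

    def smooth_value_iteration(states, actions, cost, next_state,
                               approximator, gamma=1.0, iters=100):
        targets = [0.0 for _ in states]
        for _ in range(iters):
            approximator.fit(states, targets)   # train FA on current targets
            targets = [
                min(cost(s, a) + gamma * approximator.predict(next_state(s, a))
                    for a in actions)           # backup computed through the FA
                for s in states
            ]
        approximator.fit(states, targets)
        return approximator                     # approximates J*

    # Tiny demo: a 1-D chain where actions -1/+1 move toward goal state 0.
    class TableFA:
        def fit(self, xs, ys):
            self.t = dict(zip(xs, ys))
        def predict(self, x):
            return self.t.get(x, 0.0)

    states = list(range(6))
    fa = smooth_value_iteration(
        states, actions=[-1, 1],
        cost=lambda s, a: 0.0 if s == 0 else 1.0,
        next_state=lambda s, a: min(5, max(0, s + a)),
        approximator=TableFA(), iters=20)
    print([round(fa.predict(s), 1) for s in states])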
Simulation Results
We have conducted the simulations in a variety of domains, including a simple continuous 2D grid-world, a continuous 2D grid-world containing costly puddles, and the mountain car problem. These are the same examples addressed by [3], and the simulations are run using the same test setup specifications in order for the results to be comparable. The first set of results is from the simple continuous 2D grid world described in figure 6. For a quantized state space, J* can be computed using discrete value iteration, with the optimal value function being exactly linear: J*(x, y) = 20 − 10x − 10y. In order to simulate the proposed algorithm in this domain we use an ALM approximator with an IDS plane resolution of 256 points, an IDS radius of 26 points,
Table 1: The four classes of convergence behavior: Good Convergence, Lucky Convergence, Bad Convergence, and Divergence.
[Surface plots of the approximated value function J(x, y) over the unit square at iterations 25, 48 and 59.]

Figure 12: J(x, y) Function Approximation, Iteration 25

x_{t+1} = \text{bound}(x_t + \dot{x}_t) \qquad (7)
[Results table: convergence behaviors (good / lucky / diverge) for the tested approximators; domains include the Mountain Car.]
[Surface plots of the approximated value function for the Mountain Car domain (axes: Position, Velocity) at iterations 25, 80 and 124.]
Acknowledgment

It is the authors' pleasure to thank Dr. Hamid Beigi for his continuing interest and support.
References
[1] R.S. Sutton and A.G. Barto, Reinforcement learning: An introduction, Cambridge Univ. Press, Chapter 5, pages 201–290, 1998.
[2] T. Mitchell, Machine learning, McGraw Hill, Chapter 5, pages 201–290, 1997.
[3] J. Boyan and A.W. Moore, Generalization in reinforcement learning: Safely approximating the value function, Advances in Neural Information Processing Systems (1995), 369–376.
[4] S.B. Shouraki, A novel fuzzy approach to modeling and control and its hardware implementation based on brain functionality and specifications, PhD Thesis, University of Electro-Communications, Chofu, Japan, 2000.
[5] A. Rottman and W. Burgard, Adaptive autonomous control using online value iteration with Gaussian processes, In Proc. of the Int. Conf. on Robotics and Automation (2009), 3033–3038.
[6] M.P. Deisenroth, C.E. Rasmussen, and J. Peters, Gaussian Process Dynamic Programming, Neurocomputing 72(7-9) (2009), 1508–1524.
Susan Fatemieparsa
se.sojudi@gmail.com
s.fatemiparsa@yahoo.com
Reza Mahini
Parisa YosefZadehfard
r_mahini@pnu.ac.ir
p_yousefzadeh@yahoo.com
Somayeh Ahmadzadeh
Department of Computer Engineering and IT
Payamenoor University of Tabriz
somayeh.ahmadzadeh@gmail.com
Abstract: This paper presents the use of intelligent data-driven methods for developing car parking systems. Finding a suitable parking place with the lowest traffic and cost, considering people's priorities, is presented. The system learns from its previous behavior. To reach these goals, first a preprocessing phase using the association rule mining method is performed, and rules are selected using support and confidence algorithms. Then, by applying these rules in the fuzzy reasoning system, the system proposes optimized parking places. Finally, experimental results show the benefits of using an intelligent model in human systems compared with today's systems.

Keywords: Data-driven modeling; Car parking system; Data Mining; Fuzzy expert systems; Decision support systems.
Introduction
Parking the cars around critical places, such as hospi Problems that this case cause for the other cars.
tals is one of the most important problems in todays
life. In recent years researchers are willing to solve this
problem. According to being sensitive of solving this
case, so it is necessary to design an intelligent system
There are several solutions to solving a bow problems;
to manage parking places.
managing parking places using fuzzy inference systems
[1], fuzzy expert systems in [2], car parking locator sys Parking in sensitive places (entrance of the hospi- tem [3], management system based on wireless sensor
tal, offices, important places, etc.) even for short networks in [4] and [5],[6], [7], [8] , etc. The designed
times, cause to delay in patients transferring;
system by using several criteria and with effective combining of the data mining methods that learns from
Corresponding
443
previous data and the behaviors of the existing system. The obtained rules are used in the fuzzy inference system to make suitable decisions. By managing and organizing the hospital's doctors, employees and customers, we can reduce the problems in hospitals; also, by using some rules concerning, e.g., costs and priority, we can manage them. The structure of the paper is as follows: in section 2 a brief overview of data mining methodology is presented, in section 3 the fuzzy expert system is illustrated, section 4 describes the methods, and section 5 the experimental results. In section 6 the conclusion and future work are presented.
Data Mining
Case study
As mentioned before, the significance of the parking problem in critical places, and the wish to get better results in parking-place assignment and traffic control around hospitals, universities and busy places such as markets in large cities, have motivated several studies. With the development of technology and transportation systems, and with the increasing number of personal vehicles, researchers want to manage the parking lots and heavy traffic in busy places like hospitals and universities. This paper presents a data-driven model as a fuzzy expert system to solve this problem.
The central notion of fuzzy systems is that truth values (in fuzzy logic) or membership values (in fuzzy sets) are indicated by a value in the range [0.0, 1.0], with 0.0 representing absolute falseness and 1.0 representing absolute truth. A fuzzy set is an extension of an ordinary (crisp) set. A fuzzy set A is characterized by its membership function; \mu_A(x) is called the membership function of A. The set is

A = \{(u, \mu_A(u)) \mid u \in U\}, \qquad \mu_A : X \to [0, 1] \qquad (1)

4.1

This system has been designed to achieve better car parking placement and traffic management by considering different criteria. The designed system proposes a suitable parking place, with a special park code and location, from the given conditions. Considering the parameters affecting the problem, the following criteria were selected for the decision mechanism of the system:

If ambulances or other vehicles want to park in the sensitive places, the system lets them park there for a short time, but if they want to stay there for a long time the system does not allow it;
As mentioned before, in this research, to obtain better fuzzy rules we first use rule reduction algorithms, selecting 54 rules from 256 rules. Afterward we use the association rule mining method of data mining; support and confidence parameters are used for selecting 61 rules.

As seen in this step, an algorithm is used to normalize the raw data, and then a preprocessing phase for the initialization of the rules as system behaviors is performed. In this step the support and confidence calculation algorithm [9] is used for obtaining the rule frequency and accuracy measures [12]. The rules obtained from this method are then applied in the fuzzy inference system. Finally, we test our design on new test data to measure parking place and traffic load with the system (a sketch of the support and confidence computation is given at the end of this section).

Figure 2: Architecture of the intelligent management system

4.2 Presented solution for management of parking the cars

This section of the paper illustrates the architecture and the algorithm of the system for solving the mentioned problems. According to the system's architecture, the system gets the behaviors of the data as raw rules; at this step we have many rules which need reduction or mining. To obtain better results, and because of the differences in the parameters, preprocessing was executed for all of the data and then data range normalization was performed.

Experimental results

For performing the experiments, the designed system was applied in Sina Hospital of Tabriz. To obtain appropriate results, the system was used in three different conditions: when the environment is quiet (during mornings or nights), in busy conditions (meeting times), and at normal times, and the results were compared with the human system and the fuzzy expert system. The comparison diagrams are shown in figures 4, 5 and 6. Considering the values in figures 4, 5 and 6, the human system tends to park at sensitive places near the hospital sections, which causes the mentioned problems, while many empty parking places near the hospital remain unused. The illustrated results show that the presented system, by using different criteria such as time, personal priority, cost and distance, prevents crowding at sensitive parts.
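As a concrete illustration of the support and confidence measures used in the rule-selection phase, a minimal Python sketch over an assumed transaction-style data layout could look like this:

    # Support and confidence for a candidate rule A -> B over data rows;
    # the set-of-items layout of the rows is an assumption for illustration.

    def support_confidence(rows, antecedent, consequent):
        # rows: iterable of sets of items; antecedent/consequent: sets.
        n = n_a = n_ab = 0
        for row in rows:
            n += 1
            if antecedent <= row:          # rule antecedent matches
                n_a += 1
                if consequent <= row:      # consequent also holds
                    n_ab += 1
        support = n_ab / n if n else 0.0
        confidence = n_ab / n_a if n_a else 0.0
        return support, confidence

    rows = [{"busy", "near"}, {"busy", "far"}, {"quiet", "near"}, {"busy", "near"}]
    print(support_confidence(rows, {"busy"}, {"near"}))  # (0.5, 2/3)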
[Figures 4-6: comparison of parking-place usage at sensitive, near, far and avenue locations under Simple, Normal and Busy conditions.]
Finally, figure 6 shows that when the system is in a complex condition, the human system may select unsuitable places for parking, but the designed system can manage the classified parking places.
[2] R. Mahini, M.H. Norozi, and M.R. Kangavari, Car-Parking Management System, 2nd Joint Congress on Fuzzy and Intelligent Systems (2008), 28–30.
[3] L. Ganchev, M. O'Droma, and D. Meer, Intelligent Car Parking Locator Service, International Journal Information Technologies and Knowledge 2 (2008).
[4] V.W.S. Tang, Y. Zheng, and J. Cao, An Intelligent Car Park Management System based on Wireless Sensor Networks, 1st International Symposium on Pervasive Computing and Applications (2006), 65–70.
[5] T. Rye and S. Ison, Overcoming barriers to the implementation of car parking charges at UK workplaces, Transport Policy 12 (2005), 57–64.
[6] K. Aldridge, C. Carreno, S. Ison, T. Rye, and I. Straker, Car parking management at airports: A special case?, Transport Policy 13 (2006), 511–521.
[7] H.E. Nosratabadi, S. Pourdarab, and M. Abbasian, Evaluation of Science and Technology Parks by using Fuzzy Expert System, The Journal of Mathematics and Computer Science 2 (2011), no. 4, 594–606.
[8] M. Crowder and M. Walton, Developing an Intelligent Parking System for the University of Texas at Austin, Southwest Region University Transportation Center, Center for Transportation Research (2003).
[9] J.K. Cios, W. Pedrycz, W.R. Swiniarski, and A.L. Kurgan,
Reyhane Azimi
Askari@grad.kashanu.ac.ir
Azimireyhane@gmail.com
Abstract: Parallel algorithms are applied in this paper in order to run an identification system based on iris patterns. For identifying the internal and external boundaries of the iris, this system uses the circular Hough transform, which is the most time consuming part of the feature extraction. Since this transformation is time consuming, the iris recognition system faces problems in real applications. In this paper, we have attempted to design this system with parallel algorithms and implement it on GPUs (Graphics Processing Units). The CUDA (Compute Unified Device Architecture) platform in MATLAB has been used in order to implement the system on GPUs. Finally, it is concluded that the computation time in the parallel mode is significantly reduced compared to the sequential mode, which makes using this system possible in real-time applications.
Keywords: Parallel Algorithms; Iris Recognition System; Hough Transform; CUDA; GPU.
Introduction
used in fields such as file and directory access, access to websites, and key access for file encryption and decryption. Moreover, iris recognition is used in fields which require high throughput and queuing, such as clearance, ticketless air travel, transportation, and airport security [2]. The primary algorithms of iris recognition were proposed by Professor Daugman in 1990 [3]. Then other algorithms such as Wildes' [4], Boles' [5] and Noh's [6] were presented, but Daugman's algorithm has been the most successful one in this field. The data used for iris recognition systems, which consist of eye images, come from databases such as Bath, CASIA, MMU1, MMU2, LEI, etc.; in this paper, iris images from the CASIA and LEI databases are considered. Although Daugman's algorithm has given good results, using it in online applications has faced some problems due to its time consuming nature. Using GPUs, we provide in this paper a method to significantly reduce the response time of this algorithm. In Section 2, Daugman's algorithm and the iris recognition system which uses it are described. Section 3 introduces GPUs and the platform for using them. In Section 4, the implementation of the iris recognition system on GPUs is described, in Section 5 we provide the results obtained from this implementation, and finally in Section 6 the conclusion and future works are presented.
Iris recognition

2.1 Preprocessing

Iris images presented in the database need preprocessing to obtain the useful iris area. Image preprocessing is divided into two stages: iris localization and normalization. Iris localization determines the outer and inner boundaries of the iris; the eyelids and eyelashes, which may cover the iris area, are detected and removed. For the iris localization algorithm we have used the circular Hough transform. The Hough transform is a transform which maps points from the Cartesian coordinate space to a storage space. A sample of this mapping for a circle can be seen in figure 1. In this mapping, there will be a circle in the storage space for each point in the original image. Finally, to detect the circle center it is enough to find the maximum value in the storage space. However, this storage space has to be built for every possible radius, and this is time consuming work.

Normalization remaps the iris region from Cartesian coordinates to polar coordinates. The normalized iris image is a rectangular image with angular and radial resolution. Iris images may be taken with different sizes and various imaging distances, and consequently the size of the radius may change. The resulting deformation of the iris texture will affect the performance of the feature extraction and matching stages. Therefore, the iris area needs to be normalized to regulate these variables. Thus, the Daugman rubber sheet model is used for this purpose. This model maps each point inside the iris to a pair of polar coordinates (r, θ), where r is defined on the interval [0, 1] and θ is an angle in [0, 2π]; they are shown in figure 2 [8].
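A minimal Python sketch of the circular Hough voting just described follows; since each edge pixel votes independently, this is exactly the kind of per-pixel work the paper later assigns to GPU threads (the synthetic edge image is illustrative):

    import numpy as np

    # Circular Hough voting: every edge pixel votes for all candidate
    # centers at distance r, and the accumulator maximum gives the center.

    def hough_circle(edges, r, n_angles=64):
        h, w = edges.shape
        acc = np.zeros((h, w), dtype=np.int32)
        thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
        ys, xs = np.nonzero(edges)
        for y, x in zip(ys, xs):
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(acc, (cy[ok], cx[ok]), 1)      # cast the votes
        return np.unravel_index(acc.argmax(), acc.shape)

    # Synthetic test: a circle of radius 10 centered at (32, 32).
    edges = np.zeros((64, 64), dtype=bool)
    t = np.linspace(0, 2 * np.pi, 80)
    edges[np.round(32 + 10 * np.sin(t)).astype(int),
          np.round(32 + 10 * np.cos(t)).astype(int)] = True
    print(hough_circle(edges, r=10))   # expected near (32, 32)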
2.2 Feature extraction

2.3 Pattern matching
In the above equation, X and Y are the binary patterns and HD is the Hamming distance, which is defined as the number of opposing bits divided by N, the total number of bits in the binary patterns. The fast matching speed of the Hamming distance is an advantage because the patterns are in binary format; the runtime for the XOR comparison of two patterns is almost 10 μs. The Hamming distance is therefore suitable for comparing millions of patterns in large databases.
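For illustration, the XOR-based Hamming distance described above can be sketched in a few lines of Python (the code length of 2048 bits is an assumption):

    import numpy as np

    # Hamming distance between two binary iris codes: the fraction of
    # disagreeing bits, computed with XOR as described above.

    def hamming_distance(code_a, code_b):
        return np.count_nonzero(code_a ^ code_b) / code_a.size

    a = np.random.default_rng(1).integers(0, 2, 2048, dtype=np.uint8)
    b = a.copy()
    b[:100] ^= 1                       # flip 100 of the 2048 bits
    print(hamming_distance(a, b))      # ~0.049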
Figure 3: Performance of Different CPUs and GPU [13]
Parallelism will determine the future of computing science, because on the one hand increasing the number of transistors inside the CPU has made increasing its speed very difficult, and on the other hand the need for real-time abilities and three-dimensional graphics is growing. Using multi-core CPUs is also an attempt in line with parallelism [9], but these CPUs are expensive and the maximum speedup is bounded by the number of cores.

GPUs (graphics processing units), which have recently received much attention, are appropriate tools for implementing parallel algorithms. Each GPU includes a large number of cores whose parallel execution enables the GPU to perform a set of operations at a much higher speed than the CPU. Very high performance and availability are the advantages of GPUs. Figure 3 compares different models of GPUs and CPUs for floating point operations.

The GPU has its own memories and there is no common memory between the GPU and the CPU; thus, at the beginning of a program, data are transferred from the main memory to the GPU memory, and at the end of the program the results are transferred from the GPU memory back to the main memory [10, 11]. In 2006, NVIDIA offered the CUDA platform in order to accomplish massive parallel computing with high performance on the GPUs produced by this company. Along with CUDA, a software environment was provided which allowed developers to write their own programs in the C language and run them on the GPU.
Each CUDA program has two parts: host and device. The host part is a program which runs sequentially on the CPU, and the device part is a program which runs in parallel on the GPU cores.

As can be seen in figure 4, each parallel program includes a number of threads. Threads are lightweight processes, each of which performs an independent operation. A number of related threads form a block, and a number of blocks form a grid. There are different types of memory in GPUs: each thread has its own local memory; each block has a shared memory to which the threads inside it have access; and there is a global memory which all threads can access.

In the host part, the total number of threads, in other words the number of lightweight processes which might run on the GPU cores, should be determined. The code of the device part is run according to the number of threads defined in the host part. Each thread can find its own position through the functions provided by CUDA and does its own work according to that position. Finally, the calculated results are returned to the main memory.

GPUs are a great tool for implementing image processing algorithms, because most of the operations which act on an image are local and should be run on all pixels; thus, by considering one thread for each pixel (if the required number of threads can be defined), the time for doing the calculations can be reduced to O(1). Yang has implemented a number of famous image operators with CUDA [12]. In addition, based on work previously done by us, CUDA has been used for processing spatial images.

Recently, MATLAB has provided facilities by which CUDA programs can be used inside MATLAB. For this purpose, after writing the CUDA program in C, the resulting program should be compiled to the PTX format by the nvcc compiler, so that it can be invoked from MATLAB. In the next section, the most time consuming part of the system introduced in Section 2 is implemented in MATLAB with CUDA.
Results

Stage                   Sequential execution   Parallel execution   Speed up
                        time (ms)              time (ms)
Obtain iris boundary    11.14                  0.95                 11.7
Obtain pupil boundary   9.18                   0.87                 10.55
Total time              21.12                  2.03                 10.4
References
[1] M. Shamsi, P.B. Saad, and A. Rasouli, A New Iris Recognition Technique Using Daugman Method (2007).
[2] L. Ma, Y. Wang, and T. Tan, Iris recognition based on multichannel Gabor filtering, Springer, Berlin/Heidelberg 1 (2002), 279-283.
[3] J. Daugman, How iris recognition works, IEEE Transactions on Circuits and Systems for Video Technology 14 (2004), no. 1, 21-30.
[4] R.P. Wildes, Iris recognition: an emerging biometric technology, Proceedings of the IEEE 85 (1997), no. 9, 1348-1363.
[5] W.W. Boles and B. Boashash, A human identification technique using images of the iris and wavelet transform, IEEE Transactions on Signal Processing 46 (1998), no. 4, 1185-1188.
[6] S. Noh, K. Pae, C. Lee, and J. Kim, Multiresolution independent component analysis for iris identification, 2002, pp. 1674-1678.
[7] M.S. Nixon and A.S. Aguado, Feature Extraction and Image Processing, second edition, Academic Press (an imprint of Elsevier), 2008.
[8] L. Masek, Recognition of human iris patterns for biometric identification, M.Sc. Thesis, The University of Western Australia (2003).
[9] J.D. Owens, M. Houston, D. Luebke, S. Green, J.E. Stone, and J.C. Phillips, GPU Computing, Proceedings of the IEEE, 2008, pp. 879-899.
[10] NVIDIA CUDA C Programming Guide v4.0, NVIDIA Corporation, 2011.
[11] ATI Stream Computing User Guide, rev. 1.4.0a, 2009.
[12] Z. Yang, Parallel Image Processing Based on CUDA, 2008, pp. 198-201.
[13] J. Michalakes and M. Vachharajani, GPU acceleration of numerical weather prediction, 2008, pp. 1-7.
Abstract: Solving complex problems that involve heavy computation and require high processing power is not possible with common methods, and since the time consumed in solving them is critical, high-performance computing methods should be used. The Mandelbrot set is one such problem. Since in the Mandelbrot set each pixel is computed without needing information from neighboring pixels, the advantages of parallel processing can be exploited. This paper shows the influence of a Microsoft Windows HPC Server cluster and parallel programming with MPI on reducing the execution time of computing the Mandelbrot set and thus increasing performance. The effect of the number of nodes and processes on performance is also discussed by varying the number of cluster nodes and the processes assigned to run the job.
Keywords: Mandelbrot set, High performance Computing, Windows HPC Cluster, MPI
Introduction
Z_{n+1} = Z_n^2 + C,   Z_0 = 0 + 0i    (1)
In the Mandelbrot set, each pixel is computed without needing information from neighboring pixels. The parallel algorithm for the Mandelbrot set is SPMD and uses data partitioning: the picture is divided into sections, these sections are distributed among the nodes, and each node computes the color values of the pixels in its section. When a node finishes its job, it returns its results to the head node [9]. Therefore, to increase performance and reduce execution time, a system can be used that has multiple processors with non-shared memory which communicate with each other by message passing. In this paper, a Microsoft Windows HPC Server 2008 R2 cluster and parallel programming with MPI.NET have been used to implement such a system.
In the implementation of the Mandelbrot set pixel computation, since the computation of a group of pixels is assigned to one node and requires many calculations, and since the values in each node are computed independently without needing information from neighboring pixels, the pixel computation within one node can itself be parallelized.

5 Case Study

The Mandelbrot set is a mathematical set of complex numbers. These numbers represent points in the complex plane, and their two-dimensional fractal shape is easily recognizable. More precisely, the Mandelbrot set is the collection of complex numbers C whose boundary is obtained by iterating formula (1). To create a fractal picture, each pixel in a given rectangular area should be colored; this is completed after a specified number of iterations. The larger the number of iterations, the sharper the details in the picture, but the longer the computation and execution time [8]. A sample Mandelbrot set image is shown in Figure 2.

6 Implementation and Results

In this paper, systems with Intel(R) Core(TM) i7 CPU 960 3.2 GHz processors, each with 8 cores, have been used for the implementation. Initially, the implementation was performed sequentially on one system.
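The per-pixel escape-time computation behind formula (1) and the row partitioning described above can be sketched as follows. This is only an illustration: the paper's actual implementation uses C# with MPI.NET on the Windows HPC cluster, whereas this sketch assumes mpi4py, and the image size, iteration limit and coordinate window are illustrative values.

# Minimal SPMD sketch of the Mandelbrot computation: each rank colors the
# rows of the image assigned to it, then the head node gathers the results.
from mpi4py import MPI

WIDTH, HEIGHT, MAX_ITER = 800, 600, 255   # illustrative values

def escape_time(c):
    """Iterates z_{n+1} = z_n^2 + c from z_0 = 0 (formula (1))."""
    z = 0j
    for n in range(MAX_ITER):
        z = z * z + c
        if abs(z) > 2.0:          # the orbit definitely escapes the set
            return n
    return MAX_ITER               # treated as inside the set

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Data partitioning: every process takes an interleaved slice of the rows.
my_rows = []
for y in range(rank, HEIGHT, size):
    row = [escape_time(complex(-2.0 + 3.0 * x / WIDTH,
                               -1.25 + 2.5 * y / HEIGHT))
           for x in range(WIDTH)]
    my_rows.append((y, row))

# Each node returns its section to the head node, as described in the text.
all_rows = comm.gather(my_rows, root=0)
if rank == 0:
    image = [None] * HEIGHT
    for part in all_rows:
        for y, row in part:
            image[y] = row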
Conclusion
References
[1] P. Pacheco, An Introduction to Parallel Programming, Morgan Kaufmann Publishers, 2011.
[2] D. Walker and J. Dongarra, MPI: A Standard Message Passing Interface, Supercomputer 12 (1996), no. 1, 56-68.
[3] F.M. Hoffman and W.W. Hargrove, High Performance Computing: An Introduction to Parallel Programming with Beowulf (2000).
[4] D. Gregor, MPI.NET Tutorial in C#, Open Systems Laboratory, Indiana University, 2008.
[5] Windows HPC Server 2008 R2: System Management Overview, Microsoft Corporation, 2008.
[6] Windows HPC Server 2008 R2: Adding Workstations to HPC Server Clusters, Microsoft Corporation, 2010.
[7] Windows HPC Server 2008 R2: Using Windows HPC Server 2008 Job Scheduler, Microsoft Corporation, 2008.
[8] R.L. Devaney, Chaos and Fractals: The Mathematics behind the Computer Graphics, Vol. 39, Amer. Mathematical Society, 1989.
[9] P. Werstein, M. Pethick, and Z. Huang, A performance comparison of DSM, PVM, and MPI, 4th International Conference on Parallel and Distributed Computing, Applications and Technologies (2003), 476-482.
Abstract: With the advent of the information era, many economic, social and cultural features of life underwent drastic changes. One aspect of this turnaround is the profound alteration in the economic relationships among individuals, firms and governments. Consequently, administrations using the latest technologies have turned to rendering information and services on a wide scale, not only to various strata of society but also to the subdivisions of government itself. The present research has been conducted using the descriptive-survey method with a practical orientation. This study endeavors to explore and explicate the trade single window phenomenon as an effective solution for facilitating cross-border transactions. Regarding its vast dimensions and the undeniable benefits it encompasses, we have attempted to examine the creation and development of this phenomenon in Iran. To accomplish this goal, while considering the experiences of other countries and the recommendations of CEFACT, the essential elements of implementing the trade single window have been identified; through a survey whose respondents were various experts, together with the Friedman test, the significance of these criteria and their current conditions in Iran have been surveyed and evaluated.
Introduction
Implementing a single window requires tremendous effort and huge investment in infrastructure, logistical and organizational support, legislation, and system development and maintenance. The electronic system that provides information exchange and processing is a significant component of the single window.

Many countries have adopted information technology as a key to national development, and a powerful presence in the world arena in the new century depends on being equipped with this kind of information capability. On the other hand, balanced growth of the subsectors related to information technology contributes to sustainable national development. To explain the concept of information development, various models have been put forward, one of the most effective of which is the dynamic model developed by the United Nations Development Programme. This model, as shown in Figure 1, captures the interaction between the main elements affecting development and includes technical infrastructure development, human resources, policies, content, and institutions.
Indicators            Significance   Current conditions
technical             4.44           2.42
policies              4.41           2.25
content               4.29           2.19
institutions          4.26           2.03
capacity of human     4.23           2.20
Conclusion
Utilization of tools based on information technology, as much as the savings made by exploiting momentary and temporary opportunities in world trade, is unavoidable. It is obvious that covering all transactions within the country in the short term is not feasible, but conducting e-commerce from the very beginning in overseas trade is doable. With regard to domestic and foreign needs and the special attention given to upgrading the criteria for cross-border transactions, the development of the trade single window is one of the programs of the Ministry of Industry, Mines and Commerce for increasing the deployment of e-commerce, whose development and sustainable growth require planned investment, adequate law-making, and other essential measures on the part of the government.

In relation to the research done, establishing the trade single window in electronic form in the country first requires the development of technical infrastructure, so that together with harmonized and standardized development of infrastructure, the effective use of ICT leads to an increase in the flow of data and its availability through all agencies and trade entities dealing in import-export services, all while providing the needed security. In Iran, e-services in the areas of cross-border trade, including registration of imports, commercial cards, bonuses and incentives for export, currency allocation and so on, are offered through e-commerce, and this is carried out relatively well. However, the complete setting up of the trade single window requires harmony and uniformity among the networks and outlets that render the existing e-services in cross-border trade.

The indicators related to the scope of the development of content and applications are rated third in the table. To upgrade this area it is necessary to work on a uniform plan for programming its various practical uses and ultimately on the establishment of an information portal for cross-border trade.

To reach agreement and coordination among the authorities involved in trade, it seems necessary to develop various executive, financial and training institutions, though this responsibility has been passed on to the electronic commerce promotion centre. To improve this situation it is essential to set up specialized single windows as well as to change the hierarchical structure and external communications of the different bodies.

Ultimately, to develop and promote all the mentioned components, much attention should be given to the development of human resources in the country, which is in a relatively good condition.

References

[1] United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT), Recommendation and Guidelines on establishing a Single Window, Recommendation No. 33, United Nations Publication, July 2005.
[2] J. McMaster, The Evolution of Electronic Trade Facilitation: Towards a Global Single Window Trade Portal, University of the South Pacific, Fiji Islands, 2007.
[3] G. Linington, International Trade Single Window and Potential Benefits to UK Business, SITPRO Ltd, London, February 2005.
[4] P. Kimberley, Trade Facilitation and Single Windows: Some Emerging Trends, The World Bank Border Management Conference, June 2011.
[5] Electronic Single Window, Coordinated Border Management - Best Practices Studies, Inter-American Development Bank, December 2010.
[6] Accenture; Markle Foundation; UNDP, 2001.
Vali Derhami
Yazd University
Yazd University
raji.n@yazduni.ac.ir
vderhami@yazuni.ac.ir
Reza Azmi
Alzahra University
Computer Engineering Department
azmi@alzahra.ac.ir
Abstract: The analogy between immune systems and intrusion detection systems encourages the use of artificial immune systems for anomaly detection in computer networks, web servers and web-based applications, which are popular attack targets. This paper presents a web anomaly detection approach based on the immune system and on web usage mining for clustering web sessions into normal and abnormal. The immune learning algorithm and the attack detection mechanism are described. Theoretical analysis and experimental evaluation demonstrate that the proposed approach is well suited for detecting unknown attacks and is able to provide a real-time defense mechanism for detecting web anomalies.
Keywords: Intrusion Detection Systems; Artificial Immune Systems; Anomaly; Normal behavior; Session.
Introduction
The remainder of this paper is organized as follows. In Section 2, a review of some available IDSs is presented. Section 3 discusses the goals of this study and introduces the algorithm and the data representation. In Section 4, the experimental evaluation of the proposed system is presented; moreover, the detection ability of the system is tested on a dataset from another domain. Finally, Section 5 concludes our study.

2 Related Work

There are two possible approaches to intrusion detection. An intrusion detector can be provided with a set of rules or specifications of what is regarded as normal behavior, based on human expertise; this approach can be considered an extension of misuse detection systems. In the second approach, the anomaly detector automatically learns the behavior of the system under normal operation and then generates an alarm when a deviation from the normal model is detected [1].

Vigna et al. [5] proposed an IDS that operates on multiple event streams and uses features similar to our work. The system analyzes HTTP GET requests that use parameters to pass values to server-side programs. However, these systems are misuse-based and therefore unable to detect attacks that have not been previously modeled. Guangmin [6] presents an immune-based model for detecting unknown web attacks.

3 Proposed Method

The proposed Web Host Immune Based Intrusion Detection System (WHIBIDS) introduces immune principles into IDSs to improve the capability of learning and recognizing web attacks, especially unknown web attacks. In the proposed algorithm, sessions and requests are constructed from web logs in which the clickstream data are stored; clickstream data are generated as a result of user interaction with a website. Antigens and antibodies are represented in the same form and have equal length.

Antigen Presenting: Define each user request as an element of the antigen set Ag. Each request is represented by a vector of attributes extracted from the access log file. The form of the vectors in the antigen set Ag is:

Ag = { ag | ag = < SessionID, URL length, number of variables, distribution of characters, attribute length, depth of path > }

There are some shortcomings to the common access log files generated by web servers such as Apache. One of these problems is defining the web sessions: since the boundaries of sessions are not clearly defined, extraction of web sessions from these log files is not a trivial task.
Algorithm 1: The modified algorithm of [2].

initialization;
Fix the maximal population size N_B;
Initialize the B-cell population and sigma_i^2 = sigma_init using a number of random antigens;
while all antigens are not yet presented do
    Present the antigen to each B-cell;
    if the B-cell is activated (w_ij > w_min) then
        Refresh its age (t = 0);
        Add the current B-cell and its KNN to the working sub-network;
    else
        Increment the age of the B-cell by one;
    end
    if w_ij < w_min for all B-cells then
        Create a new B-cell = antigen;
    else
        for each B-cell in the working sub-network do
            Compute the B-cell stimulation;
            Update the B-cell sigma_i^2;
        end
    end
    if all antigens of a session have been presented then
        Clone B-cells based on their stimulation level;
        if population size > N_B then
            Remove the extra least-stimulated B-cells;
        end
    end
end
As shown in the proposed algorithm, when an antigen is unable to activate any B-cell, the antigen may represent noise or a new emerging pattern. In this condition, a new B-cell is created as a copy of the presented antigen. If this antigen is noisy data and does not represent a new emerging pattern, it will not get enough chance to be stimulated by incoming antigens and will probably be eliminated. After all antigens of a session have been presented to the network, the B-cells undergo a cloning operation based on their stimulation level. When the population of the network exceeds a defined threshold, the least stimulated B-cells are removed from the network. The distance measure used in all steps for calculating the internal and external (B-cell to antigen) interactions of B-cells is

dis(ag_i, ag_j) = \sqrt{ \sum_{n=1}^{k} (ag_{in} - ag_{jn})^2 }    (1)

where k is the number of features extracted for each request. The pseudo code of the proposed algorithm is presented in Algorithm 1.
Experimental Evaluation
We run the proposed algorithm 5 times with 5-fold cross validation, and the final values of the evaluation measures are the average of these 5 runs. Tables 1 and 2 show the proposed system's high capability on both criteria and both datasets. The results show that the performance of the session-based variant is better than the request-based one, and we can claim that the proposed algorithm detects malicious activities with high accuracy. Patterns may be repeated in multiple B-cells within the population; this is called a loss of diversity, or overfitting, which essentially leads to redundancy (e.g., multiple requests having the same signature). To show that overfitting has not occurred on the training data, 20% noise is added to the test data. Table 3 shows that the noise has about a 15 percent impact on the results; had overfitting occurred, the impact would have been significant. Table 4 shows the comparison of WHIBIDS with the IADMW IDS, which comes from [6]. The detection rate of WHIBIDS is 92%, while the detection rate of IADMW is 67%. At the same time, WHIBIDS is also capable of classifying web attacks and has a high accuracy rate of 97.3%. These results show that WHIBIDS is a competitive alternative for detecting web attacks.
Acknowledgment
Conclusions

In this paper we proposed an intrusion detection system based on the principles of the immune system (WHIBIDS) that can detect known and unknown attacks. Here an attack is considered as a series of actions. The requests obtained from the preprocessed log files of the web server are presented to the system as antigens. The network of B-cells represents a summarized version of the antigens encountered by the network; it is also able to adapt to emerging usage patterns introduced by new antigens at any time. The results show the ability of the proposed AIS to cluster web sessions into normal and abnormal, and indicate that the designed immune-based IDS has several advantages: (1) self learning and immune learning enable the model to detect both known and unknown web attacks; (2) it can detect anomalies in real time; (3) it can recognize abnormal behavior with regard to the actual sessions; (4) the immune network algorithm achieves high detection rates; (5) it can be used as a general classifier. A limitation was the determination of the similarity threshold by testing; future work will determine this threshold by reinforcement learning.

References

[1] I. Khalkhali, R. Azmi, and M. Azimpour-Kivi, Host-based Web Anomaly Intrusion Detection System, an Artificial Immune System Approach, IJCSI International Journal of Computer Science Issues 8 (2011), 14-24.
[2] M. Azimpour-Kivi and R. Azmi, Applying Sequence Alignment in Tracking Evolving Clusters on Web Sessions Data, an Artificial Immune Network Approach, Computational Intelligence, Communication Systems and Networks (CICSyN) (2011).
[3] B. H. Helmi and A. T. Rahmani, An AIS algorithm for Web usage mining with directed mutation, Proc. World Congress on Computational Intelligence (WCCI08) (2008).
[4] N. K. Jerne, Towards a Network Theory of the Immune System, Annals of Immunology (1974), 373-389.
[5] C. Kruegel and G. Vigna, Anomaly detection of web-based attacks, Proceedings of the 10th ACM Conference on Computer and Communications Security (2003), 251-261.
[6] L. Guangmin, Modeling Unknown Web Attacks in Network Anomaly Detection, International Conference on Convergence and Hybrid Information Technology (2008).
[7] M. Danforth, Towards a Classifying Artificial Immune System for Web Server Attacks, International Conference on Machine Learning and Applications (2009).
[8] M. A. Rassam, M. A. Maarof, and A. Zainal, Intrusion Detection System Using Unsupervised Immune Network Clustering with Reduced Features, Int. J. Advance. Soft Comput. Appl. 2 (2010).
[9] Z. Brewer, Web Server Protection with CSA HTTP Explorer Directory Traversal, Cisco Security Agent Protection Series (2006).
F. Mirzaei
University Of Kashan, Kashan, Iran
biglari@grad.kashanu.ac.ir

H. Ebrahimpour-Komleh
University Of Kashan, Kashan, Iran
ebrahimpour@kashanu.ac.ir
Abstract: This paper presents a novel face recognition approach based on the Local Binary Pattern (LBP) and the Haar wavelet transform. We propose a fast and robust three-layer weighted Haar and weighted LBP histogram (WHWLBP) representation for face recognition. In this method, the face image is decomposed using a first-level Haar wavelet decomposition, and then a multi-block LBP operator is applied to each of the four-channel subimages with different block sizes in order to extract the features efficiently. The extracted histograms are concatenated into a single final feature vector. In the recognition stage, the chi-square statistic is used to measure the difference between feature histograms; a weighted comparison is used to emphasize the more important regions of the face. The performance of the proposed method is tested on the Yale and ORL face databases. The results show that our method performs better than traditional methods such as LDA, PCA, KPCA and even the LBP operator, and is more robust to face variations such as illumination, expression and pose.
Keywords: Face Recognition; Local Binary Pattern; Haar Wavelet Transform; Chi Square.
Introduction
Automatic face recognition is one of the most challenging research topics in pattern recognition and has gained significant attention in recent decades. A large number of novel face recognition techniques have been developed in the last few years [1, 2], and many of these methods have been used successfully in real-world applications. However, despite all this progress, there are still many challenging problems, such as different lighting conditions, pose variations and facial expressions, that cause a significant decrease in the performance of face recognition systems. Face recognition based on the Local Binary Pattern has recently been proposed as a fast and robust recognition approach [3-5]. In the LBP representation, instead of using the raw intensity values of pixels, a higher-level pattern that reflects the relationships between pixel intensity values in a region is used. Some approaches have combined LBP with other features, such as Gabor features and skin color, to improve the overall performance.
The Local Binary Pattern (LBP) operator was proposed by Ojala et al. [8] for texture description. It has gained much attention in the face recognition field [3, 9, 10], not only because of its computational efficiency and high discrimination power, but also for its invariance to monotonic gray-scale transformations.

2.1 The Basic LBP Operator

The basic LBP operator is a non-parametric 3x3 kernel. It assigns a label to every pixel of an image by thresholding the eight surrounding pixels with the center pixel value and considering the result as a binary number. See Fig. 1 for more details.

2.2 Haar Wavelet Transform

The Haar wavelet [12] is one of the simplest wavelet algorithms, yet it is very useful in the field of signal processing. It can be used to transform an image from the spatial domain to the frequency domain, where more robust information can be obtained from face images, and therefore more robust classification. Among the wide range of wavelet algorithms, the Haar wavelet has fast calculation and memory-efficiency properties. Using a first-level 2D Haar wavelet transform, any face image can be decomposed into the four-channel subimages LL, LH, HL and HH in the frequency domain; Fig. 3 shows an example. The LL channel provides a smaller approximation of the original face image, because it is the low-frequency part of the image. The HL and LH channels are in the middle frequencies and provide the changes of the original face image in the horizontal and vertical directions, respectively. The HH channel is the high-frequency part of the image and contains less useful information about the face.

Figure 3: An example of first-level wavelet decomposition.

2.3 Multi-scale and Uniform LBP

Ojala et al. [11] expanded the original LBP operator to a multi-scale LBP that uses neighborhoods of different sizes. The extended LBP uses different numbers of sampling points on circles of different radii; bilinear interpolation is used when a sampling point does not fall at the center of a pixel. The notation LBP_{P,R} refers to P sampling points on a circle of radius R. Fig. 2 shows several multi-scale LBP patterns with different P and R. This operator is more effective than the original one in the field of face recognition.

Another extension of the original LBP is the definition of uniform patterns [11]. A local binary pattern is called uniform if the binary pattern contains at most two bitwise transitions from 0 to 1 or from 1 to 0. The experiments in [11] show that uniform patterns account for around 90 percent of all patterns in the (8,1) neighborhood. We use the notation LBP^{u2}_{P,R} for the uniform LBP operator.
Feature Extraction
Different metrics are available for histogram comparison, such as histogram intersection (2), the log-likelihood statistic (3) and the chi-square statistic (4), where H1 is the input histogram and H2 is the registered histogram. Reference [3] has shown that chi-square performs better than the other two, so we choose chi-square in our experiments. For a more efficient histogram comparison, we use the weighted chi-square (5) to place more emphasis on specific channels in layer one of our three-layer approach.
D(H1, H2) = \sum_{i=0}^{n-1} \min(H1_i, H2_i)    (2)

L(H1, H2) = - \sum_{i=0}^{n-1} H1_i \log H2_i    (3)

\chi^2(H1, H2) = \sum_{i=0}^{n-1} \frac{(H1_i - H2_i)^2}{H1_i + H2_i}    (4)

\chi^2_w(H1, H2) = \sum_{i,j} w_j \frac{(H1_{i,j} - H2_{i,j})^2}{H1_{i,j} + H2_{i,j}}    (5)

In (5), the indices i and j refer to the i-th bin of the feature histogram corresponding to the j-th channel, and w_j is the weight for channel j.
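A direct transcription of the weighted comparison (5) might look like this sketch; the per-channel histogram layout and the small epsilon guard are assumptions of the sketch, and the weight vector mentioned in the comment is the one this paper selects later ([3, 1, 2, 0]).

import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square statistic of (4) between two histograms."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))   # eps avoids empty-bin 0/0

def weighted_chi_square(channels1, channels2, weights):
    """Weighted chi-square of (5): one histogram per channel, weight w_j."""
    return sum(w * chi_square(a, b)
               for a, b, w in zip(channels1, channels2, weights))

# Example with the channel weights the paper selects for LL, LH, HL, HH:
# weights = [3, 1, 2, 0]  -> the HH channel is effectively discarded.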
The nearest-neighbor (NN) classifier is used for the recognition stage; the first rank is selected as the output of the classifier.

Figure 4: Our proposed three-layer weighted Haar and weighted LBP histogram.

5 Experiments

To ensure the efficiency of the proposed algorithm and measure its robustness to different face variations, it has been tested on the ORL face database [13] and the Yale face database [14].
The LL channel has more appearance-related information; therefore we divide it into 4x4 regions. The LH and HL channels describe the texture grade of the face image and are more robust to face variations; by reducing the number of blocks, we gain more discriminative power in each region, so we use 2x2 regions for these channels. We ignore the HH channel and do not use it in the second level, because of its poor performance, which is examined in sub-section A of section five. The length L of the final feature vector fv can be calculated by (1). This feature vector is partially robust to illumination changes, pose and expression variations. In order to reduce the effect of nonlinear illumination changes, histogram equalization is applied to the input images in the preprocessing stage.

5.1 Yale face database

The Yale face database contains 11 different gray-scale images of each of 15 distinct subjects, one for each of the following facial expressions or configurations: center-light, with glasses, happy, left-light, without glasses, normal, right-light, sad, sleepy, surprised, and wink (Fig. 5). The original size of each image is 320x243, with background. In order to reduce the effect of the background on recognition results, the faces are extracted from the original images. The extraction is done as follows:
The face bounding box bbx is computed by (6) from the face/head anthropometric measures mw and hf and the pupil distance d.
The LH and HL channels are more robust to face variations, so we can use them together with the LL channel for a more robust face recognizer. The HH channel usually mixes with noise and does not perform well in recognition, as can be seen in Table 1. According to these results, and by trial and error, we select the weight vector [3 1 2 0] for the LL, LH, HL and HH channels, respectively. This weight vector is used as w_j in the weighted chi-square statistic. For comparison purposes, in addition to our proposed method, LDA [16], PCA [17], KPCA [18] and multi-block LBP with different block sizes are also tested on this database; for multi-block LBP, different block counts have been tested. Table 2 shows the results. The proposed WHWLBP method is tested in both weighted and non-weighted modes.
Table 1: Recognition accuracy of each wavelet channel on the Yale database.

Channel   Accuracy
LL        85.9%
LH        68.8%
HL        82.2%
HH        42.2%
Table 2: Recognition results on the Yale database.

Method                       Accuracy
LDA                          57.7%
PCA                          45.9%
KPCA                         65.9%
LBP^{u2}_{8,2}, 1x1 blocks   70.3%
LBP^{u2}_{8,2}, 4x4 blocks   79.2%
LBP^{u2}_{8,2}, 6x6 blocks   89.6%
LBP^{u2}_{8,2}, 8x8 blocks   88.1%
WHWLBP_NW                    87.4%
WHWLBP_W                     91.11%

Figure 6: Determined face bounding box by face/head anthropometric measures defined in [15].
5.2 ORL face database
Table 3: Result of the recognition on ORL.

Method                       Accuracy
LDA                          61%
PCA                          46.5%
KPCA                         69%
LBP^{u2}_{8,2}, 1x1 blocks   87.5%
LBP^{u2}_{8,2}, 4x4 blocks   94%
LBP^{u2}_{8,2}, 6x6 blocks   93%
LBP^{u2}_{8,2}, 8x8 blocks   90.5%
WHWLBP_NW                    96%
WHWLBP_W                     97.5%

5.3 Discussions

As can be observed, our approach has the best accuracy on both databases. With respect to the results, we can draw several conclusions. Our method is partially robust to expression, illumination and pose variations, and clearly more robust than the other tested methods. The images in these databases have more variation in expressions and illuminations, and this is why we use the phrase "partially robust" in this paper.

Conclusion

In this paper, we developed a novel face recognition approach based on the LBP histogram and the Haar wavelet transform. We proposed a three-layer weighted Haar and weighted LBP histogram (WHWLBP) representation for face recognition which blends the power of both the Haar wavelet transform and the LBP operator. In our final feature vector, we effectively have a description of the face at different levels of locality: the Haar wavelet operator gathers information at the frequency level; the LBP operator used in layer two contains information about the texture patterns at the pixel level; the LBP operator outputs are then summed over a small region to gather information at a middle level; and each region's histograms are concatenated together to make a global description of the input face image in the last layer. The results of the experiments on the Yale and ORL databases proved that our proposed feature extraction method is effective for face representation and recognition and is partially robust to expression, pose and illumination variations.

References

[2] X. Zhang and Y. Gao, Face recognition across pose: A review, Pattern Recognition 42 (2009), 2876-2896.
Reza Azmi
Alzahra University
Alzahra University
Marzieh.salehi.sh@gmail.com
azmi@alzahra.ac.ir
Narges Norozi
Alzahra University
Computer Engineering Department
Na.norozi@gmail.com
Abstract: An automatic change analysis method that can efficiently detect changes in MRI sequences is very important for medical diagnosis, follow-up and prognosis. Chemotherapy is a standard therapy for cancerous diseases, but accurately analyzing the reaction of tumors to this therapy is a challenging task, because direct comparison and manual analysis are very time-consuming, difficult and sometimes impossible. In this paper we propose a novel unsupervised method for change detection in breast MRI images. We apply a modified self-organizing feature map (SOFM) neural network, and to obtain an appropriate threshold for the network we use a correlation-based criterion.
Introduction
images by [7], [8].

In this paper we propose a novel method for change detection of breast tumors in MRI images. The proposed framework has three main stages: preprocessing, change detection and optimization. In the preprocessing stage, a linear intensity normalization and a non-rigid registration are applied. In the next stage, we apply a modified SOFM neural network for change detection in breast MRI images. The basic idea exploited in the proposed method is inspired by [1]. Finally, in the last stage, to obtain an appropriate threshold for the network and minimize the error, we use a correlation-based criterion.
2 Preprocessing

2.1 Intensity Normalization

2.2 Registration
Rigid-body transformations, consisting of only rotations and translations, have been used to correct different patient positioning in successive scans. Since more complex deformations and undesired global differences may occur between two exams, using only a rigid registration, or even an affine registration, is not sufficient, especially in breast MRI images. These complex deformations can be the result of acquisition artifacts or natural features of breast tissue, such as variations in breast compression. Generally, the main problem in the registration of mammography and MR images is the large deformation of the breast during acquisitions. For these reasons, using a non-rigid registration is essential. In this paper we use an intensity-based non-rigid registration algorithm that was proposed by Rueckert et al. and extended by Rohlfing et al. In this algorithm, a hierarchical transformation model of the motion of the breast has been developed: the global motion of the breast is modeled by an affine transformation, while the local breast motion is described by a free-form deformation based on B-splines. Normalized mutual information is used as an intensity-based image similarity measure. Registration is achieved by minimizing a cost function consisting of a weighted combination of the similarity measure and a regularization term. The regularization term is a local volume-preservation (incompressibility) constraint, implemented by penalizing deviations of the Jacobian determinant of the deformation from unity [14].
3 SOFM Model
convergence, and h(p) denotes the topological neighborhood at the p-th iteration of learning.

3.2 Weight Normalization

Since the threshold th (introduced below) is in [0, 1], the value of U(m, n) must be less than or equal to 1; here all components of the input and weight vectors are nonnegative, so we normalize the weight vector as follows:

w_{mn,k} = w_{mn,k} / \sum_{k=1}^{d} w_{mn,k}    (4)

3.3 Thresholded Update

For each output neuron (m, n) we have an input vector X and a weight vector W. In the standard SOFM, the neuron with the maximum value of U(m, n) = W . X = \sum_{k=1}^{d} x_{mn,k} w_{mn,k} is updated; here, instead, a threshold th is defined and those neurons that satisfy U(m, n) > th update themselves and their neighbors. Also, in the standard SOFM the same input is given to all output neurons, whereas here different inputs are given to different output neurons.
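The normalization (4) and the thresholded update can be written as a small sketch; the learning rate is an illustrative assumption, and the paper's topological neighborhood function h(p) is omitted here for brevity.

import numpy as np

def normalize_weights(w):
    """Formula (4): scale each neuron's weight vector to unit L1 norm so
    that U(m, n) = W . X stays in [0, 1] for nonnegative inputs."""
    return w / w.sum(axis=-1, keepdims=True)

def thresholded_update(weights, inputs, th=0.5, lr=0.1):
    """Modified SOFM step on an (M, N, d) grid of neurons: every neuron
    whose activation U(m, n) exceeds th updates itself (each neuron gets
    its own input vector, as in the modified model; neighbors omitted)."""
    weights = normalize_weights(weights)
    u = np.einsum("mnk,mnk->mn", weights, inputs)   # U(m, n) per output neuron
    winners = u > th
    weights[winners] += lr * (inputs[winners] - weights[winners])
    return normalize_weights(weights), winners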
The results of change detection are very difficult to evaluate, because preparing reference patterns of the original changes by a human expert is impossible for the following reasons: 1) some of the changes are invisible to human experts; 2) exactly determining and labeling each changed pixel is very time-consuming, expensive and error-prone. Therefore we use simulated images in this paper. The method of generating the simulated images is described in [14].
Table 1: Evaluation of the proposed method and GLRT for 14 test images through 5 criteria.

           GLRT                                Proposed
           Mean    Std-dev   Max     Min       Mean    Std-dev   Max     Min
VOR (%)    33.77   21.28     38.55   7.05      47.59   19.43     69.78   16.49
FPR (%)    3.77    2.73      2.43    1.22      0.97    1.48      4.93    0.00
ACC (%)    96.14   2.60      98.97   89.92     98.58   1.26      99.69   95.08
SPC (%)    96.23   2.73      98.03   89.85     99.03   1.48      99.99   95.07
PPV (%)    37.44   27.04     47.52   7.06      67.65   29.67     99.66   16.60

Table 2: Per-image comparison of GLRT and the proposed approach for test images 1-5.
Conclusion

In this paper, an unsupervised approach is presented for breast change detection in MRIs. The approach is evaluated through 5 criteria: accuracy (ACC), specificity (SPC), false positive rate (FPR), positive predictive value (PPV) and VOR. The results show that the proposed method has a better performance than the GLRT method; Tables 1 and 2 show the results. In the results, ACC and SPC have large values because the number of true negatives is large: the black pixels around the breast region push these two criteria above their realistic values. If the region of interest is extracted, the values of ACC and SPC will be more realistic.
References

[1] S. Ghosh, S. Patra, and A. Ghosh, An unsupervised context-sensitive change detection technique based on modified self-organizing feature map neural network, Journal of Approximate Reasoning (2009), 37-50.
[2] T. Kohonen, Self-organized formation of topologically correct feature maps, Biol. Cybernet (1982).
[3] T. Kohonen, Self-Organizing Maps, second ed., Springer-Verlag, Berlin (1997).
[4] M. Bosc, F. Heitz, et al., Automatic change detection in multimodal serial MRI: application to multiple sclerosis lesion evolution, NeuroImage (2003), 643-656.
[5] L. Lemieux, U.C. Wieshmann, N.F. Moran, et al., The detection and significance of subtle changes in mixed-signal brain lesions by serial MRI scan matching and spatial normalization, Medical Image Analysis (1998), 227-242.
[6] D. Rey, G. Subsol, H. Delingette, and N. Ayache, Automatic detection and segmentation of evolving processes in 3D medical images: application to multiple sclerosis, Medical Image Analysis (2002), 163-179.
[7] H. Boisgontier, V. Noblet, et al., Generalized likelihood ratio tests for change detection in diffusion tensor images: application to multiple sclerosis, Medical Image Analysis (2012), 325-338.
[8] E.D. Angelini, J. Delon, et al., Differential MRI analysis for quantification of low grade glioma growth, Medical Image Analysis (2012), 114-126.
[9] X. Li, B.M. Dawant, and E.B. Welch, A nonrigid registration algorithm for longitudinal breast MR images and the analysis of breast tumor response, Magnetic Resonance Imaging (2009), 1258-1270.
[10] D.B. Kopans, The positive predictive value of mammography, American Journal of Roentgenology (1992).
Maryam Hasanzadeh
a.falahi@shahed.ac.ir
hasanzadeh@shahed.ac.ir
Abstract: In this article a new information hiding method based on LSB replacement in the spatial domain is presented. In this method, the message bits are first shuffled by chaos whose parameters are adjusted by a Genetic Algorithm; then the best order of the shuffled message bits is selected for embedding with respect to the image pixels. This produces minimal changes in the visual perception of the image while the first-order statistics of the image are also preserved. The experimental results indicate the imperceptibility, security and high capacity of this method. Moreover, the recipient does not require the original image to extract the message.
Introduction
Steganography refers to the science of invisible communication. Unlike cryptography, where the goal is to secure communications from an eavesdropper, steganographic techniques strive to hide the very presence of the message from an observer [1]. Steganography embeds the data in the least significant components of a cover medium, such that unauthorized users are not aware of the existence of the hidden data [2]. The cover object can be a still digital image, a video or an audio file; the hidden message can likewise be raw text, an image, an audio file or a video file [3], [4]. A steganography algorithm embeds the hidden message in a cover medium; the combination of the cover and the hidden message is called the stego object.

Steganography techniques can be divided into two main categories: embedding in the frequency domain and embedding in the spatial domain. In the frequency domain, most methods are based on the discrete cosine transform (DCT): after performing the DCT on 8x8 blocks and quantizing the DCT coefficients, the hidden messages are embedded in the quantized DCT coefficients. LSB replacement is the most commonly used method in the spatial domain; it directly replaces the LSBs of the cover image with the hidden message bits [5].

Due to the increasing knowledge of hackers, the need
for inventing approaches with high security and acceptable capacity has increased sharply. In recent years, many approaches for embedding data in images based on evolutionary algorithms, genetic algorithms and chaos theory have been presented. The use of chaos for shuffling the message bits together with improved adaptive LSB was suggested in [6]. In [7], evolutionary optimization is used to increase resistance against statistical attacks. In [8], a genetic-algorithm technique for watermarking data inside images was proposed. In 2007, an innovative watermarking scheme based on progressive transmission with genetic algorithms was proposed in [9]. In 2003, using chaos theory, another approach for data hiding in the frequency domain was invented [10]. In 2010, a steganography approach based on LSB replacement and a hybrid edge detector was proposed [11]. In [12], a watermarking algorithm based on chaos for images in the wavelet domain was proposed, and in [13] a watermarking algorithm based on SVD and a genetic algorithm was presented.

In this paper, chaos is used for shuffling the message bits. The required parameters are adjusted by a Genetic Algorithm in which the operators are selected intelligently. In the following, the proposed method
will be described in Section 2. In Section 3, experimental results will be illustrated, and finally Section 4 concludes the paper.

2 Proposed Method

In this section we describe the Logistic map (one of the simplest chaotic maps) and the Genetic Algorithm utilized in the proposed method.

2.1 Chaos

The chaos phenomenon is a deterministic and seemingly stochastic process appearing in a nonlinear dynamical system. Because of its extreme sensitivity to initial conditions and the outspreading of orbits over the entire space, it has been used in information hiding to increase security [6]. The logistic map is described by

x_{n+1} = \mu x_n (1 - x_n),  where 0 \le \mu \le 4 and x_n \in (0, 1).    (1)

Research on chaotic dynamical systems shows that the logistic map stands in a chaotic state when 3.5699456 < \mu \le 4. That is, the sequence {x_n, n = 0, 1, 2, ...} generated by the logistic map is non-periodic and non-convergent. All sequences generated by the logistic map are very sensitive to initial conditions, in the sense that two logistic sequences generated from different initial conditions are statistically uncorrelated.

2.2 Genetic Algorithm

Since the logistic map is in a chaotic state when 3.5699456 < \mu \le 4, the \mu value of each chromosome is taken from the range [3, 4] and the x_0 value from the interval [0, 1].

Fitness Function: since the target is to minimize the changes between the cover image and the stego image, the Peak Signal-to-Noise Ratio (PSNR), which measures the image changes before and after message embedding, is used as the fitness function. Greater values of PSNR correspond to less change in the output image. For an M x N image, the PSNR is defined as

PSNR = 10 \log_{10} \left( 255^2 \Big/ \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} (x(i,j) - X'(i,j))^2 \right)    (2)

2.3 Embedding and Extraction

Embedding algorithm: as shown in Figure 2, after obtaining the best (x_0, \mu) by the GA, the embedding is performed using well-known LSB replacement.

Extracting algorithm: in extraction we use (x_0, \mu) as the key; using (x_0, \mu) and the logistic map, the hidden message is extracted and retrieved (Figure 3).
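The pieces of the method fit together as in the following sketch: a logistic-map sequence (1) determines the shuffled order of the message bits, embedding is plain LSB replacement, and the PSNR (2) serves as the GA fitness. The specific way the chaotic sequence is turned into a permutation is an assumption of this sketch, as the paper does not spell it out.

import numpy as np

def logistic_permutation(x0, mu, n):
    """Order the first n logistic-map values (1) to get a bit permutation.
    Sorting the chaotic sequence is one common choice, assumed here."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        xs.append(x)
    return np.argsort(xs)

def embed_lsb(cover, bits, perm):
    """LSB replacement of the chaos-shuffled message bits."""
    stego = cover.flatten().copy()
    shuffled = np.asarray(bits, dtype=np.uint8)[perm]
    stego[:len(shuffled)] = (stego[:len(shuffled)] & 0xFE) | shuffled
    return stego.reshape(cover.shape)

def psnr(cover, stego):
    """Fitness function (2): PSNR between the cover and stego images."""
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

# GA outline: each chromosome encodes (x0, mu); its fitness is
# psnr(cover, embed_lsb(cover, bits, logistic_permutation(x0, mu, len(bits)))).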
Experimental Results

Figures 5, 6 and 7 illustrate the comparison of the classic LSB method and the method used in [11] with the proposed method. The results of Figure 7 were obtained with a population of 100 individuals and 10 generations.
Conclusion
In this paper we presented a new image steganography method for gray-level images based on LSB replacement. In this method, chaos is used for shuffling the message bits, with its parameters set by a genetic algorithm. This results in minimal changes in the image, which also increases the security level of the method. The experimental results indicate that our method, in addition to having high embedding capacity, maintains the first-order statistics of the image in a satisfactory way, and its changes are visually imperceptible to the human visual system.
References
[4] K. Rabah, Steganography - The Art of Hiding Data, Information Technology Journal, Asian Network for Scientific Information (2004), 245-269.
[10] Z. Liu and L. Xi, Image Information Hiding Encryption Using Chaotic Sequence, Springer-Verlag Berlin Heidelberg (2007), 202-207.
[11] W.J. Chen, C.C. Chang, and T.H. Ngan Le, High payload steganography mechanism using hybrid edge detector, Expert Systems with Applications, Elsevier (2010), 3292-3299.
[13] D. Zhao, G. Chen, and W. Liu, A chaos-based robust wavelet-domain watermarking algorithm, Chaos, Solitons and Fractals (2004), 202-207.
Parinaz Mobedi
Faculty of Engineering
Faculty of Engineering
University of Guilan
University of Guilan
p.mobedi@gmail.com
mahdavi@guilan.ac.ir
Introduction
Proposed Framework
This architecture is composed of four entities including Client, System Analysis, composition Broker and
Service provider. These entities of proposed framework work with UDDI to provide QoS user requirement. The structure of proposed framework and relation between various components is depicted in Figure
1. The name of this framework is set CACP which
is abbreviated of four main entity in the framework
(C=Client, A=system Analysis, C=Composition broker, p=service Provider). In the following we describe
each of the components.
4. Match Function: the main task of this component is service discovery. To do this, the Match Function takes a class diagram as input and then sends a request containing the class name and the methods with their input and output parameters to UDDI.
5. Classification: the input of this step is the table of appropriate services obtained in step 4. This component puts related services into one category, and then one agent is created for each category.
6. Select Candidate: the best candidate in each category should be identified. At this step, choosing the best candidate is done according to non-functional requirements such as QoS.
7. Composition: the best service candidates in each category need to collaborate. How to establish this collaboration lies at the heart of service composition. In this step, the coordination between web services is represented by a sequence diagram.
8. Execute engine: the task of this component is the execution of the composite service created in step 7; the result is returned to the user after execution.
9. Generate WSDL: when a service composition is completed, the WSDL of the new web service should be created and then published in UDDI, so that if the request is repeated, the new service can be found in the discovery phase.
10. Web Service: web services can be atomic or composite.
11. Agent: an agent is created for web services that do the same work.
3.1
3.2 Relationship Between Web Services and Agents

After that, the sequence of execution of the web services is defined. If all three web services are in a row, it means that

Case Study
Class name   Method           Output                    Candidate services
flight       Booking()                                  MahanAirBooking, AsemanAirBooking
car          CarRenting()     String carBooking-id      bestRentcar, LuxeRentcar
hotel        HotelBooking()   String hotelBooking-id    Mashhad-3Star-hotelBooking, Mashhad-5Star-hotelBooking
In the class diagram, functional requirements are put in the methods section of the class, and non-functional requirements are put in the QoS Metric class. The sequence diagram for the whole trip-planning task is shown in Figure 4. This diagram represents how one object (web service) interacts with another until the task is complete; the communication between web services is managed by a controller object.
Conclusion

In this paper a new framework for web service composition is proposed. The framework is made up of four main parts: in the first part the user interacts with the GUI; the second part is the analysis phase; the third and fourth parts contain service discovery and service classification, after which service composition is done via the sequence diagram. Finally, the approach is illustrated with trip planning as a case study.

References

[1] D. Scogan, R. Gronmu, and I. Solheim, Web Service Composition in UML, 8th IEEE Intl Enterprise Distributed Object Computing (2004), 47-57.
[2] I. Rauf, M. Zohabib, and Z. Malik, UML based Modeling of Web Service Composition - A Survey, Sixth International Conference on Software Engineering Research, Management and Application (2008).
[3] E. Badidi, A Publish/Subscribe Model for QoS-aware Service Provisioning and Selection, International Journal of Computer Applications 26 (2011).
[4] B. Bauer and J. Muller, MDA applied: From Sequence Diagrams to Web Service Choreography, Lecture Notes in Computer Science - Web Engineering (2004), 136-148.
[5] R. Gronmo, D. Skogan, I. Solheim, and J. Oldevik, Model-driven Web Services Development, IEEE International Conference on e-Technology, e-Commerce and e-Service (2004), 42-45.
[6] R. Gronmo and M. Jaeger, Model-driven semantic web service composition, Proc. 12th Asia-Pacific Software Engineering Conference (APSEC) (2005).
[7] T.E.M. John and C.G. Gerald, Specifying Semantic Web Service Compositions using UML and OCL, IEEE International Conference on Web Services (ICWS) (2007).
Amin Nikanjam
s.ghiasifard@gmail.com
nikanjam@iust.ac.ir
Abstract: Recommendation systems are designed to allow users to locate preferable items quickly and to avoid possible information overload. Recommendation systems apply data mining techniques to determine the similarity among thousands or even millions of data items. Collaborative filtering (CF) is one of the most successful recommendation techniques; the basic idea of CF-based algorithms is to provide item recommendations or predictions based on the opinions of other like-minded users. In this paper, we propose an approach to predict users' ratings on new items for the Yahoo Music Dataset. The proposed approach is based on collaborative filtering and consists of seven different methods. We combine the results of these methods using a linear blending model. To evaluate the accuracy of the predictions, the Mean Absolute Error (MAE) is reported.
Keywords: recommendation systems; collaborative filtering; prediction; rate; Yahoo Music Dataset.
Introduction
2.2 Evaluating Accuracy

2.3 Notation

We consider u for a user and i for an item. The symbol k is the set of items, and k|u| is the set of items that user u has rated. Predictions for user/item pairs are denoted \hat{r}_{u,i}. In the methods we also consider the parameter p_u as a user-dependent feature and q_i as an item-dependent feature.

3 Dataset

We used the Yahoo Music Dataset [1] for training the methods and evaluating the prediction errors. The Yahoo Music dataset comprises 262,810,175 ratings of 624,961 music items by 1,000,990 users, collected during 1999-2010. Each item and each user has at least 20 ratings in the whole dataset. All ratings were split into train, validation and test sets, such that the last

4 Algorithms

We have implemented different methods, some innovative and some already known, to predict the ratings of users on given items. In the following, some previously used collaborative filtering methods are described. All of these methods are trained using the train set of the Yahoo Music Dataset. To provide a comprehensive comparison, the measured MAE value on the validation set of each model is reported to evaluate the accuracy of our predictions.

4.1 AC Method
4.2 AC+Avg

This method averages the AC prediction of (2) with the item mean \bar{r}_i:

\hat{r}^{(4)}_{u,i} = ( \hat{r}^{(2)}_{u,i} + \bar{r}_i ) / 2    (4)

4.3 CB Method

The correlation between two items i and j is computed over the users U who rated both:

corr_{i,j} = \frac{\sum_{u \in U} (r_{u,i} - \bar{r}_i)(r_{u,j} - \bar{r}_j)}{\sqrt{\sum_{u \in U} (r_{u,i} - \bar{r}_i)^2 \sum_{u \in U} (r_{u,j} - \bar{r}_j)^2}}    (5)

4.4 UserTaste

4.5 TypeTaste

Like the AC model, we used a weighted-sum formula to predict the ratings of users on items, and to overcome the computational challenges we split the users into N parts; in our experiment we set N = 1000. With this model we get MAE = 24.0286 on the validation set. The problem of this model is items that have not been rated by users, or whose number of ratings is too small.

\hat{r}^{(6)}_{u,i} = \frac{\sum_{N \in k|u|} corr_{i,N} \cdot r_{u,N}}{\sum_{N \in k|u|} |corr_{i,N}|}    (6)

4.6 Round

The prediction of a base method is rounded through (10) and (11): writing \hat{r}_{u,i} / 10 = a + b with integer part a and fractional part b,

\hat{r}^{(11)}_{u,i} / 10 = a + 1 for b > 0.5,  a for b = 0.5,  a - 1 for b < 0.5.    (10), (11)

4.7 KNN+SVD
4.7.1 SVD

The singular value decomposition (SVD) is a factorization of the rating matrix; this approach became popular during the Netflix Prize [2] in 2006. Predictions for a user/item pair are given by formula (12), where q_i is an F x 1 vector of item features, p_u is an F x 1 vector of user features, and F is the number of features:

\hat{r}_{u,i} = p_u^T q_i    (12)

In this method, prediction for one user/item pair is done in constant time O(1). Using gradient descent as the learning algorithm, training time grows linearly with the number of ratings |R|. For completeness we sketch the stochastic gradient descent training in Algorithm 1.

Algorithm 1: Pseudo code for training an SVD on rating data [6].
Data: sparse rating matrix R in R^{|U| x |I|} = [r_{u,i}]
Result: values of \hat{r}_{u,i}
Tunable: learning rate \eta, regularization \lambda, feature size F;
Initialize user weights p and item weights q with small random values;
while e < 0.1 do
    \hat{r}_{u,i} <- p_u^T q_i ;
    e <- r_{u,i} - \hat{r}_{u,i} ;
    for k = 1..F do
        c <- p_{u,k} ;
        p_{u,k} <- p_{u,k} + \eta (e q_{i,k} - \lambda p_{u,k}) ;
        q_{i,k} <- q_{i,k} + \eta (e c - \lambda q_{i,k}) ;
    end
end

4.7.2 K-Nearest Neighbor

Item-item correlations are computed from the learned SVD item features:

c_{i,j} = \frac{\sum_{k=1}^{F} q_{i,k} q_{j,k}}{\sqrt{\sum_{k=1}^{F} q_{i,k}^2} \sqrt{\sum_{k=1}^{F} q_{j,k}^2}}    (13)

In this method we also use a sigmoid function to map the correlations c_{i,j} to c'_{i,j} by introducing two new parameters \gamma and \delta:

c'_{i,j} = 0.5 \tanh(\gamma c_{i,j} + \delta) + 0.5    (14)

In order to predict the rating \hat{r}_{u,i}, we use the k-nearest-neighbor algorithm to select the K ratings with the highest correlations to item i. Hence we introduce the set of items k|u'|, where u' is the set of items rated by user u with the K highest correlations to item i (k|u'| is a subset of k|u|). Furthermore, we use the weighted sum of the ratings from user u multiplied by the c'_{i,j} and normalized by the absolute sum of the c'_{i,j}. This yields the final prediction formula (15) for the item-item KNN with SVD features (KNN+SVD):

\hat{r}_{u,i} = \frac{\sum_{j \in k|u'|} c'_{i,j} r_{u,j}}{\sum_{j \in k|u'|} |c'_{i,j}|}    (15)
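Algorithm 1 and formulas (13)-(15) translate almost line for line into the following sketch. The learning rate, regularization, feature size and neighborhood size are illustrative values, and a fixed number of passes over the ratings replaces the stopping condition, which was garbled in the extracted pseudo code.

import numpy as np

def train_svd(ratings, n_users, n_items, F=50, lr=0.001, reg=0.02, epochs=10):
    """Stochastic gradient descent for r_hat[u, i] = p[u] . q[i] (12)."""
    rng = np.random.default_rng(0)
    p = rng.normal(0, 0.01, (n_users, F))   # user features, small random init
    q = rng.normal(0, 0.01, (n_items, F))   # item features, small random init
    for _ in range(epochs):
        for u, i, r in ratings:             # one SGD step per observed rating
            e = r - p[u] @ q[i]             # prediction error
            pu = p[u].copy()                # keep old p[u] for the q update
            p[u] += lr * (e * q[i] - reg * p[u])
            q[i] += lr * (e * pu - reg * q[i])
    return p, q

def knn_svd_predict(i, rated, q, K=20, gamma=3.0, delta=0.0):
    """Item-item KNN on SVD features, following (13)-(15); `rated` is the
    list of (item, rating) pairs of the target user, gamma/delta play the
    role of the two sigmoid parameters introduced in (14)."""
    qn = q / np.linalg.norm(q, axis=1, keepdims=True)
    items = [j for j, _ in rated]
    corr = qn[items] @ qn[i]                           # cosine correlations (13)
    cp = 0.5 * np.tanh(gamma * corr + delta) + 0.5     # mapped correlations (14)
    top = np.argsort(corr)[-K:]                        # K most correlated items
    r = np.array([rv for _, rv in rated])
    return float((cp[top] * r[top]).sum() / np.abs(cp[top]).sum())   # (15)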
Blending

Conclusions

References
[6] M. Jahrer and A. Toscher, Collaborative Filtering Ensemble, 17th ACM Int. Conference on Knowledge Discovery and Data Mining (2011).
[7] G. Linden, B. Smith, and J. York, Amazon.com recommendations: Item-to-item collaborative filtering, IEEE Internet Computing 7 (January 2003), no. 1, 76-80.
Qazvin, Iran
Tehran, Iran
Maryamgholami83@yahoo.com
Mmeybodi@aut.ac.ir
Ali Nourollah
Department Of Computer Engineering
Islamic Azad University
Qazvin, Iran
Abstract: In this paper, we propose an intelligent backbone formation algorithm based on Cellular
Learning Automata (CLA) in which a near optimal solution to the minimum CDS problem in Unit
Disk Graphs (UDG) is found. UDGs are used for modelling Ad-Hoc networks, and finding MCDS
in such graphs is a promising approach to construct an efficient virtual backbone in wireless Ad-Hoc networks. The simulation results show that the proposed algorithm outperforms the existing
CDS-based backbone formation algorithms in terms of the backbone size.
Keywords: Wireless Ad-Hoc networks; Backbone formation; Cellular learning automata; Connected dominating
set.
Introduction
to form a virtual backbone for wireless Ad-Hoc networks by finding a near-optimal solution to the minimum CDS problem in the graph of the network. In energy-constrained Ad-Hoc networks, the proposed method helps to extend the network lifetime due to its smaller CDS size compared to other CDS schemes, in terms of:
Preliminary Concepts

2.2 Learning Automata
denotes the set of values that can be taken by the reinforcement signal, and c = {c1, c2, ..., cr} denotes the set of penalty probabilities, where the element c_i is associated with the action α_i. The recurrence equations (1) and (2) form a linear learning algorithm by which the action probability vector p is updated. Let α_i(k) be the action chosen by the automaton at instant k. When the taken action is rewarded by the environment (i.e. β(n) = 0),

p_j(n + 1) = p_j(n) + a[1 − p_j(n)]    if j = i
p_j(n + 1) = (1 − a) p_j(n)            for all j ≠ i        (1)

and when the taken action is penalized by the environment (i.e. β(n) = 1),

p_j(n + 1) = (1 − b) p_j(n)                      if j = i
p_j(n + 1) = b/(r − 1) + (1 − b) p_j(n)          for all j ≠ i        (2)

Here r is the number of actions that can be chosen by the automaton, and a(k) and b(k) denote the reward and penalty parameters that determine the amount of increase and decrease of the action probabilities, respectively.
2.4
2.5
2.6
Open Cellular Learning Automata (OCLA)

Related Works

- The domatic number of a connected graph is at least two;
- An optimal substructure is defined as a subset of independent dominators, preferably with a common connector.

Many algorithms have also been proposed for solving the CDS problem. A good survey of these algorithms can be found in [18].

4 The Proposed Algorithm (OICLA-CDS)

The model we introduced and used in our proposed algorithm is a combination of open cellular learning automata and irregular cellular learning automata, which we call open irregular CLA (OICLA). The proposed model has all the features of ICLA; in addition, the reinforcement signal used for updating the action probability vector in each learning automaton is a combination of the global and local environment responses. These responses are combined via local rules. Finally, all the learning automata update their action probability vectors based on the received signal. To solve the CDS problem, we apply the open irregular cellular learning automata as a set of selector learning automata.

An OICLA is mapped onto the network such that each cell maps onto a node of the network. Recall that the network graph is a unit disk graph, so the neighbourhood relation used in the OICLA is defined by the graph edges. In this model, each node i has a learning automaton named LA_i through which it can select itself as a dominating or non-dominating node. The action set of LA_i is therefore the two-element set {a1, a2}: if LA_i selects action a1, node i is selected as a dominator node, and if LA_i selects action a2, node i is selected as a non-dominating node.

The action selection process continues until the number of stages exceeds a pre-specified threshold max-step.

In this algorithm, all nodes keep the neighbourhood information in adjacency matrix format. The learning automaton in each cell has an action probability vector that holds the probability of selecting each of the legal actions. At the start of the process, the probability of selecting each action is 0.5. These probabilities are updated over the iterations of the algorithm, until the ending condition, based on the local rules defined in the cellular learning automata structure. By selecting the appropriate actions through the learning automata in the cells, the subset of CDS nodes in the graph is formed. The procedure of the proposed algorithm is shown in Algorithm 1.

Data: Max step, ICLA mapped onto the network graph
Result: The minimum size CDS
while step < max step do
    foreach LA in parallel do
        Select an action according to the action probability vector;
        Calculate beta;
        Update the action probability vector according to beta;
    end
end
Algorithm 1: Procedure of the OICLA-CDS algorithm

Three local rules are used in the proposed algorithm, described as follows:

- If a node has only one neighbour and its learning automaton does not select it as a dominating node, the automaton receives a reward for its action;
- If a node has at least one leaf neighbour and its learning automaton selects it as a dominator node, the automaton receives a reward.

And the global rules are:

- All nodes are dominated, meaning that each node has at least one dominator node as a neighbour;
- The resulting subgraph is a connected subgraph;
- The number of dominator nodes in this set is equal to or less than the number of dominator nodes in previous iterations.

If these conditions hold, one further local rule is applied to decide on rewarding or penalizing the actions selected by the LAs: if a node has the maximum degree among its neighbours and is selected as a dominator node, its selected action receives a reward.
Computational Results
5.1 Experiment 1

Figure 2: The size of the CDS constructed by the algorithms when the transmission range is 15.

5.2 Experiment 2

Figure 3: The size of the CDS constructed by the algorithms when the transmission range is 30.
Conclusion
References
[1] V. Bharghavan and B. Das, Routing in Ad-Hoc Networks Using Minimum Connected Dominating Sets, International Conference on Communications '97, Montreal, Canada (1997), 376–380.
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11] S. Guha and S. Khuller, Approximation algorithms for connected dominating sets, Journal of Computer Communications 20 (1998), no. 4, 374-387.
[12] S. Butenko, X. Cheng, C. Oliveira, and P.M. Pardalos, A
new heuristic for the minimum connected dominating set
problem on Ad-Hoc wireless networks, Recent Developments
in Cooperative Control and Optimization, Kluwer Academic
Publishers (2004), 61-73.
[13] J. Wu and H. Li, On calculating connected dominating set
for efficient routing in Ad-Hoc wireless networks, ACM
DIALM1999 (1999), 7-14.
[14] K.M. Alzoubi, X.Y. Li, Y. Wang, P.J. Wan, and O. Frieder, Geometric spanners for wireless Ad-Hoc networks, IEEE Transactions on Parallel and Distributed Systems 14 (2003), no. 4, 408–421.
[15] Y. Li, M.T. Thai, F. Wang, C.W. Yi, P.J. Wang, and D.Z.
Du, On greedy construction of connected dominating sets,
WCMC (2005).
[16] R. Xie, D. Qi, Y. Li, and J.Z. Wang, A novel distributed
MCDS approximation algorithm for wireless sensor networks, Journal of Wireless Communications and Mobile
Computing (2007).
[17] R. Misra and Ch. Mandal, Minimum Connected Dominating Set using a Collaborative Cover Heuristic for AdHoc Sensor Networks, Wireless communications & mobile
computing Distributed systems of sensors and actuators
archive 9 (2009), no. 3.
[18] Z. Liu, B. Wang, and L. Guo, A survey on connected Dominating Set Construction Algorithm for Wireless Sensor Networks, Information Technology Journal 9 (2010), no. 6, 1081–1092.
Mahmoud Shirazi
Zanjan, Iran
Zanjan, Iran
azadehgholami@iasbs.ac.ir
m.shirazi@iasbs.ac.ir
Abstract: In this paper, we use Genetic Algorithms to find the Minimum Dominating Set (MDS)
of Unit Disk Graphs (UDG). UDGs are used for modelling Ad-Hoc networks and finding MDS in
such graphs is a promising approach to clustering the wireless Ad-Hoc networks. The MDS problem
is proved to be NP-complete. The simulation results show that the proposed algorithm outperforms
the existing algorithms for finding MDS in terms of the DS size.
Keywords: Wireless Ad-Hoc networks; Backbone formation; Genetic algorithm; Dominating set.
Introduction
In this paper, a genetic algorithm based method is proposed for clustering wireless Ad-Hoc networks by finding a near-optimal solution to the minimum DS problem in the graph of the network. In energy-constrained Ad-Hoc and sensor networks, the proposed method helps to extend the network lifetime due to its smaller DS size compared to other DS schemes, in terms of:

Related Works

3.1 Representation

3.2 Fitness

Equation (1) defines the fitness piecewise: one expression applies if S is a feasible solution, and another otherwise.

3.3
3.4 Mutation Operators

3.4.1 HM1 Algorithm
We illustrate the use of these operators in our proposed algorithm by an example. Consider a graph with vertex set {1, 2, 3, 4, 5, 6, 7}; a typical chromosome 1011011 means that nodes 1, 3, 4, 6, 7 are in the dominator set. Since the number of 1s is more than half the length of this chromosome, HM1 is applied. This algorithm inverts the 1s one by one, so the chromosomes to be evaluated are 0011011, 1001011, 1010011, 1011001 and 1011010. Finally, the algorithm selects the change that results in the maximum improvement in the fitness of the chromosome.
Data: The chromosome
Result: The mutated chromosome
Best = X;
foreach node j with value 1 in chromosome X do
    Let Y be a new chromosome with value 0 at the jth gene;
    Calculate the fitness of Y;
    if fitness(Y) < fitness(Best) then
        Best = Y;
    end
end
if fitness(Best) < fitness(X) then
    X = Best;
end
Insert the new X into the population, replacing the old X;
Figure 3: The HM1 algorithm
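A compact Python rendering of the HM1 procedure of Figure 3 might look as follows; `fitness` stands for any cost function over 0/1 chromosomes where lower is better, which is an assumption of this sketch.

def hm1(chromosome, fitness):
    """HM1 mutation: try turning each 1-gene off and keep the single
    change that most improves (lowers) the fitness."""
    best = chromosome[:]
    for j, gene in enumerate(chromosome):
        if gene == 1:
            y = chromosome[:]        # candidate with the j-th gene inverted
            y[j] = 0
            if fitness(y) < fitness(best):
                best = y
    return best if fitness(best) < fitness(chromosome) else chromosome

With the example chromosome 1011011 above, the candidates this sketch evaluates are exactly 0011011, 1001011, 1010011, 1011001 and 1011010.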
3.4.2 HM2 Algorithm

This algorithm first considers the first position containing a 1, changes its value to 0, and then searches for a non-dominator node which, when made a dominator, causes the maximum improvement in the fitness of the chromosome. The chromosomes that should be examined are 0110010, 0011010, 0010110, and 0010011. The drawback of this algorithm is that it takes the entire chromosome and mutates it node by node against all non-dominator nodes, so it takes a lot of time and is computationally expensive.

Data: The chromosome
Result: The mutated chromosome
Let H be the set of nodes in the graph that are currently 1 in the chromosome X;
foreach i in set H do
    Best = X;
    foreach node j that has value 0 in X do
        Let Y be a new chromosome with value 1 at the jth gene and value 0 at the ith gene;
        Calculate the fitness of Y;
        if fitness(Y) < fitness(Best) then
            Best = Y;
        end
    end
    if fitness(Best) < fitness(X) then
        X = Best;
    end
end
Insert the new X into the population, replacing the old X;
Figure 4: The HM2 algorithm

3.4.3 KN Algorithm

We developed another heuristic which is an enhancement of the algorithms described above. This algorithm uses a local rather than a global optimization technique. The idea behind it is to mutate every gene with only its neighbours. The number of neighbours that are investigated (K) depends on the average degree of the network nodes, which we define as (2):

a = ( Σ_{i=1}^{n} d_i ) / n    (2)

in which a denotes the average degree of the network, d_i is the degree of node i and n is the number of nodes in the network.

In fact, k neighbours of a given node are investigated, where k is at most a, so we call this algorithm k neighbours (KN). The implementation of KN is shown in Figure 5. Note that the set H is calculated for each node in the chromosome.

Data: The population
Result: The population with mutated chromosomes
Step 1: Randomly select a subset of 10% of the chromosomes from the entire population;
Step 2:
foreach chromosome X selected in step 1 do
    foreach node i in X that has value 1 do
        Best = X;
        Let H be the set of neighbours of node i that are currently 0 in the chromosome X (the maximum cardinality of this set is the average degree of the graph);
        foreach node j in set H do
            Let Y be a new chromosome with value 1 at the jth gene and value 0 at the ith gene;
            Calculate the fitness of Y;
            if fitness(Y) < fitness(Best) then
                Best = Y;
            end
        end
        if fitness(Best) < fitness(X) then
            X = Best;
        end
    end
    Insert the new X into the population, replacing the old X;
end
Figure 5: The KN algorithm

4 Computational Results

To study the performance of our GA algorithm for solving the DS problem, we have conducted simulation experiments in two groups. The first group compares the impact of applying the three different mutation operators in the implementation of the proposed genetic algorithm for solving the DS problem. The second group compares the results of the proposed GA with those of the best-known DS formation algorithms. The performance measures of experiment 1 are DS size and run time; in experiment 2, the performance measure is
only DS size.

In our experiments we generate random connected graphs repeatedly and run the algorithms, measuring the size of the DS. The size of the graph ranges from 60 to 200 nodes. To simulate the structure of Ad-Hoc networks, we place the nodes (hosts) randomly in a square simulation area of size 100 × 100 units. The coordinates of the nodes are chosen uniformly in each dimension. It is assumed that the transmission range of each host is 20. The parameters of our genetic algorithm are set as follows: the population size is 100, the crossover rate is 0.8 and the number of iterations is 100.

4.1 Experiment 1

4.2 Experiment 2

Conclusion
References
[1] V. Bharghavan and B. Das, Routing in Ad-Hoc Networks Using Minimum Connected Dominating Sets, International Conference on Communications '97, Montreal, Canada (1997), 376–380.
[2] Y. Z. Chen and A. L. Liestman, Approximating minimum
size weakly connected dominating sets for clustering mobile
Ad-Hoc networks, MobiHoc2002 (2002), 157-164.
[3] K. M. Alzoubi, P. J. Wan, and O. Frieder, Maximal independent set, weakly connected dominating set, and induced
spanners for mobile Ad-Hoc networks, International Journal of Foundations of Computer Science 14 (2003), no. 2,
287-303.
[4] P. J. Wan, K. Alzoubi, and O. Frieder, Distributed construction of connected dominating set in wireless Ad-Hoc
networks, INFOCOM 2002 3 (2002), 1597-1604.
[5] J. Wu, B. Wu, and I. Stojmenovic, Power-aware broadcasting and activity scheduling in Ad-Hoc wireless networks using connected dominating sets, Journal of Wireless Communications and Mobile Computing (2002), 425-438.
[6] H. Lim, C. Kim, and I. Stojmenovic, Flooding in wireless
Ad-Hoc networks, Journal of Computer Communications
(2001), 353-363.
[7] M. R. Garey and D. S. Johnson, Computers and Intractability: A guide to the theory of NP-completeness, Freeman, San Francisco, 1978.
[8] J. H. Holland, Adaptation in Natural and Artificial Systems,
University of Michigan Press, Ann Arbor, MI, 1975.
[9] D. E. Goldberg, Genetic Algorithm in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA,
1989.
[10] L. Jia, R. Rajaraman, and T. Suel, An Efficient Distributed Algorithm for Constructing Small Dominating Sets, Distributed Computing 15 (2002), no. 4, 193–205.
[11] J. Li, J. Jannotti, D. S. J. D. Couto, D. R. Karger, and R.
Morris, A scalable location service for geographic Ad-Hoc
routing, Proc. of the sixth annual international conference
on Mobile computing and networking, Boston, MA (2000).
[12] E. S. Correa, M. T. A. Steiner, A. A. Freitas, and C. Carnieri, A Genetic Algorithm for solving a capacitated P-median problem, Numerical Algorithms 35 (2004), no. 2–4, 373–388.
[13] Z. Liu, B. Wang, and L. Guo, A survey on connected Dominating Set Construction Algorithm for Wireless Sensor Networks, Information Technology Journal 9 (2010), no. 6, 1081–1092.
Farnaz Derakhshan
University of Tabriz
University of Tabriz
Tabriz, Iran
Tabriz, Iran
hojjatemami@yahoo.com
derakhshan@tabrizu.ac.ir
Abstract: Current air traffic management systems are not able to manage the enormous volume of air traffic perfectly and do not have sufficient capability to service different types of flights. Free flight is a new concept with the potential to solve the problems of the current air traffic management system. Despite the many advantages of free flight (such as lower fuel consumption, minimal delays and a reduced workload for air traffic control centers), it causes problems such as collisions between aircraft. Conflict detection and resolution (CDR) is one of the fundamental challenges in air traffic management systems. In this paper, we present a model for CDR between aircraft in air traffic management using the graph coloring problem. We map the congestion area to a corresponding graph and then find a reliable and optimal coloring for this graph using a prioritization method, in which we assign a priority to each aircraft based on its score.
Keywords: Air Traffic Control, Free Flight, Conflict Detection and Resolution, Graph Coloring Problem, Prioritization Method.
Introduction

Having a reliable, safe and efficient air traffic management system is a fundamental and critical need in the aviation industry. In this paper, we define air traffic as: "aircraft operating in the air or on an airport surface, exclusive of loading ramps and parking areas" [1], and air traffic control as: "a service operated by appropriate authority to promote the safe, orderly, and expeditious flow of air traffic" [1]. Air traffic management is a very complex, dynamic and demanding problem which involves multiple controls and various degrees of granularity [2]. Generally, the main goals of air traffic management systems are as follows: providing safety (separating aircraft to prevent collisions), performance and high efficiency for the flights, detecting and resolving conflicts, and reducing travel time (min-
fic management systems have more freedom to select and modify their flight paths in airspace during flight time. The free flight concept changes the current centralized, command-and-control airspace system (between air traffic controllers and pilots) to a distributed system that allows pilots to choose their own flight paths more efficiently and optimally, and to plan their flights themselves with high performance. Free flight, also called user-preferred traffic trajectories, is an innovative concept designed to enhance the safety and efficiency of the National Airspace System (NAS) [9, 10]. Despite its many advantages, free flight imposes some problems on the air traffic management system, one of the most notable of which is the occurrence of conflicts between aircraft. CDR is one of the major and fundamental challenges in safe, efficient and optimal air traffic management.
In this paper, a conflict is defined as "the event in which two or more aircraft experience a loss of minimum separation from each other" [12]. In other words, the distance between aircraft violates a criterion defining what is considered unwanted; such conflicts should be avoided through a fast and accurate process, otherwise air traffic management becomes difficult and the risk of aircraft collisions increases. In addition, we use the definition of the conflict detection process as "the process of deciding when a conflict between aircraft will occur" [12], and the conflict resolution process as "specifying what action should be taken, and how, to resolve conflicts" [12].
So far, various models have been proposed for conflict detection and resolution in air traffic. We present an organized and systematic model for conflict detection and resolution between aircraft in air traffic management with high efficiency, flexibility and reliability. Our proposed model is based on the prevention of conflicts. In this paper, by mapping the congestion area to a corresponding graph, we convert the problem of conflicts between aircraft to a Graph Coloring Problem (GCP) [11]. In fact, we build a state space graph from the congestion area. Each node of this graph indicates one aircraft in the congestion area, each edge between two nodes represents a conflict that may occur between two aircraft in the future, and the colors used for coloring this graph indicate flight paths. Then we use the prioritization method as an optimal method with least cost to solve the GCP (i.e., to resolve conflicts between aircraft in the airspace). In this model, a global approach is used to resolve multiple conflicts between aircraft in the congestion area. In fact, for each aircraft we allocate a flight path on which the aircraft keeps a reliable distance (vertical or horizontal) from every other aircraft, so there is no risk of conflict. We believe that if we use this model alongside new technologies such as multi-agent systems [2], we can obtain promising efficiency in air traffic management systems. A multi-agent system is a natural tool for air traffic management, and if autonomous agents use an appropriate strategy (such as the prioritization method) they can manage the air traffic properly. Following this short description, we describe the Graph Coloring Problem in Section 2, followed by a description of our proposed model in Section 3; in Section 4 we describe the prioritization method and finally in Section 5 we draw some conclusions.
The Graph Coloring Problem (GCP) [11] is an optimization problem that consists of finding an optimal coloring for a given graph G. GCP is one of the most studied NP-hard problems. Coloring a graph involves assigning labels to each graph node so that adjacent nodes have different labels. A minimum coloring of a graph is a coloring that uses the minimum possible number of different labels (colors) [13]. GCP is a practical way of representing many real-world problems, including time scheduling, frequency assignment, register allocation, and circuit board testing. In GCP the fundamental challenge for any given graph is to find the minimum number of colors for which a valid coloring exists. This is most often implemented by using a conflict minimization algorithm [14].

The GCP can be stated as follows: given an undirected graph G with a set of vertices V and a set of edges E (G = (V, E)), a k-coloring of G consists of assigning a color to each vertex of V such that neighboring vertices have different colors (labels). Formally, a k-coloring of G = (V, E) can be stated as a function F from V to a set of colors K such that |K| = k and F(u) ≠ F(v) whenever E contains an edge (u, v), for any two vertices u and v of V. The minimum number of colors needed for a graph is called the chromatic number of G. An optimal coloring is one that uses exactly the chromatic number of colors for the given graph. Since the GCP is NP-complete [11, 13], we need to use heuristic methods to solve it. Many methods have been proposed for the GCP, such as evolutionary methods (e.g. GA [15]), local search algorithms (e.g. Tabu search [16] or simulated annealing [17]) and other mathematical and optimization methods. In this paper, we use the prioritization method for solving the GCP.
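As a rough illustration of how a priority ordering can drive a coloring, the greedy sketch below assigns each aircraft (vertex) the smallest flight path (color) not used by its already-colored conflict neighbours; this is a generic greedy scheme under assumed data structures, not the paper's exact prioritization method.

def priority_coloring(adj, priority):
    """Greedy priority-driven coloring: process vertices in decreasing
    priority; each vertex takes the smallest color unused by its
    already-colored neighbours. `adj` maps node -> set of neighbours,
    `priority` maps node -> score. Both names are illustrative."""
    color = {}
    for v in sorted(adj, key=priority.get, reverse=True):
        used = {color[w] for w in adj[v] if w in color}
        c = 0
        while c in used:          # smallest color free of conflicts
            c += 1
        color[v] = c
    return color

By construction no two adjacent vertices share a color, so every aircraft in the congestion area receives a flight path distinct from all of its potential conflict partners.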
formance for CDR. In our proposed model, the criterion for conflict detection is the reduction of the distance between aircraft below a certain limit in future time steps. Pseudocode of the proposed model (solving the conflict problem using the GCP concept) is shown in Figure 1.
As shown in Figure 1, the traffic environment must first be monitored, and the appropriate current state information must be collected (using proper equipment [12]). These states provide an estimate of the current traffic situation (such as the position, direction, destination and speed of each aircraft). Then, the congestion area is detected based on this status information from the current air traffic. Also at this stage, the minimum reliable distance threshold for detecting conflicts can be determined. Then the congestion area is mapped to a corresponding graph based on the minimum reliable distance threshold; in other words, a state space graph is created from the congestion area. Next, the distance matrix between all aircraft in the congestion area is computed, and the adjacency matrix is created based on the distances between aircraft and the determined minimum reliable distance threshold.
In the second stage, the scores of the aircraft in the congestion area are computed, and then, based on these scores, the priority of each aircraft is computed. The computation of these scores and priorities is described in the next sections. In the third stage, the corresponding graph is colored using the prioritization method; in other words, we use a prioritization method for solving the GCP. The output of the algorithm is an optimal and reliable coloring (an efficient solution for resolving conflicts between aircraft in the congestion area). If there is no collision, the algorithm ends. Then the new conflict-free flight plan is sent to the aircraft. We emphasize that our proposed model can interact with innovative technologies (such as multi-agent systems) for conflict detection and resolution in air traffic management, and also in ground traffic and related applications.
Prioritization Method
Conclusion
[14] M.R. Garey and D.S. Johnson, Computers and intractability: a guide to the theory of NP-completeness, W.H. Freeman and Company, New York, 1979.

[15] C. Fleurent and J.A. Ferland, Genetic and hybrid algorithms for graph coloring, in G. Laporte and I.H. Osman (Eds.), Metaheuristics in Combinatorial Optimization, Annals of Operations Research 63 (1996), 437–441.
marva.mirabolghasemi@yahoo.com
maziar.mirabolghasemi@yahoo.com
minshah@utm.my
v.zakerifardi@yahoo.com
Abstract: The growing number of patients suffering from chronic diseases causes a growing focus
on the use of information and communication technology to reduce the time consuming and costly
nature of treating chronic diseases. More than any other technology, mobile phones can provide
solutions for chronic diseases at various levels of organizations. The need for expansion of Chronic
Disease Management (CDM) is well recorded. Mobile technology is ubiquitous and can play an
essential role in healthcare, particularly in disease management. The objective of this study is to
review various studies in the M-Health area to show the indispensable role of mobile communication
technologies in CDM.
Introduction
M-Health involves new devices, systems, technologies, policies, and standards for communication between healthcare providers and patients, integration of applications and disease management, and collaboration and care coordination systems, among others. It can be said that mobile devices will bring significant cost savings for the health sector by decreasing the frequency of patient visits to health facilities and enhancing the detection of causes for action. An M-Health strategy should be divided into two initiative parts: citizen-centric and health-worker-centric [8]. Mobile technologies have the potential to reduce isolation and to provide ongoing support to health care workers as well as citizens [9]. However, patients become the focus of care, not the doctor or the hospital [10].
Mobile technology has the merit of being location-independent, offering mobility and flexibility to the range of healthcare stakeholders [7, 11]. It also enhances the ability of both clinicians and patients to obtain information, with consequent advantages for constant monitoring of patients' conditions, fast emergency responses, remote/rural care, and interactive consultancy [6, 7, 11].
A core value of mobile technologies is their low cost. In the context of chronic diseases, where patients should attend healthcare centres frequently for several years, there may be a trade-off between the reduction in travel costs and mobile costs. However, as with telemedicine [12], cost benefits may accrue more readily to patients than to providers, and real savings need to be demonstrated if providers are to adopt m-health more widely.
Patient monitoring is a rapidly accepted element in CDM strategies [20]. These technologies bring potential benefits to both doctor and patient; doctors can

De Toledo et al. [20] described a standards-based Connectivity Interface designed to interconnect a mobile telehealth solution with electronic healthcare record systems from external providers, enhancing the appropriateness of this technical solution to different business models for mobile telehealth. Figure 3 includes a cell phone that acts as a gateway for a set of devices and telemonitoring information that may be used or not depending on the patient's condition, such as a glucometer, spirometer, sphygmomanometer, pulse oximeter and scale. This collection of supported sensors enables the identification of solutions suitable for the management of a wide range of chronic diseases.

More specialized functionality, such as permitting the healthcare professional to modify the periodicity and type of tests directly and send this configuration to the cell phone, is not requested at this point, but it may be of interest in the future, when patients and professionals become more accustomed to the capabilities of Mobile Health Care Management [20].
Conclusion
References
[1] Kahn, C. James, J. Yang, and S. J. Kahn, Mobile Health
Needs And Opportunities In Developing Countries, Health
Affairs 29 (2010), 252-258.
[21] M. Schwaibold, M. Gmelin, and G. V. Wagner, Key factors for personal health monitoring and diagnosis devices,
Heidelberg, 2002.
[2] A. Opie, Nobodys asked me for my view: users empowerment by multidisciplinary health teams, Qualitative Health
Research 18 (1998), 188-206.
[3] P. Brennan and C. Safran, Patient empowerment, International Journal of Medical Informatics 69 (2003), 301-304.
[4] B. Paterson, Myth of empowerment in chronic illness,
Journal of Advanced Nursing 34 (2003), 574-581.
[5] R. Stockdale, Peer-to-peer online communities for people
with chronic diseases: a conceptual framework, Journal of
Systems and Information Technology 10 (2008), 39-55.
[6] R. Istepanian, Introduction to the Special Section on MHealth: Beyond Seamless Mobility and Global Wireless
Health-care Connectivity, IEEE Transactions on Information Technology in Biomedicine (2004), 405-413.
[7] R. Istepanian and J. Lacal, Emerging Mobile Communication Technologies for Health: Some Imperative notes on
m-Health, The 25th Silver 59 Anniversary International
Conference of the IEEE Engineering in Medicine and Biology Society (2003).
[8] A. Iluyemi, Feedback on Draft WHO mHealth Review, London, 2007.
[9] P. Mechael,
Creating an Enabling Environment for
mHealth, Information and Communications Technology,
ITI 5th International Conference (2007).
[10] D. Fuscaldo and Soon, Cellphones Will Monitor the Vital Signs of the Chronically Ill, The Wall Street Journal
On-line (2004).
[11] A. Prentza, S. Maglavera, and L. Leondaridis, Delivery
of healthcare services over mobile phones: e-Vital and CJS
paradigms, Proceedings of the 28th Annual International
Conference of the IEEE EMBS (2008).
[12] A. C. Norris, Essentials of telemedicine and telecare,
Chichester: Wiley, 2002.
[13] M. Pearson, S. Wu, J. Schaefer, A. Bonomi, and S. Mendel, Assessing the implementation of the chronic care model in quality improvement collaboratives, RAND 40 (2005), 978–996.
[14] K. Coleman, B. T. Austin, C. Brach, and E. H T. Wagner,
Evidence on the chronic care model in the new millennium,
Health Affairs 28 (2009), 75-85.
[15] J. Olmen, G. M. Ku, R. Bermejo, G. Kegel, K. Hermann, and W. V. Damme, The growing caseload of
chronic life-long conditions calls for a move towards full
self-management in low income countries, GLOBALIZATION AND HEALTH 38 (2011), 1-10.
[16] G. Chen, B. Yan, M. Shin, D. Kotz, G. M. Ku, and E. Berke,
MPCS: Mobilephone based patient compliance system for
chronic illness care, 6th Annual International Mobile and
Ubiquitous Systems: Networking & Services (2009), 1-7.
[17] SatelLife, Handhelds for Health: SatelLifes Experiences in
Africa and Asia, 2005.
Javad Poshtan
mostajabi@elec.iust.ac.ir
jposhtan@iust.ac.ir
Introduction
Problem Statement

The recursive expression with input u(n) and output y(n), and the equivalent transfer function of the IIR filter, can be described by (1) and (2):

y(n) = Σ_{k=0}^{N} b_k x(n − k) − Σ_{j=1}^{M} a_j y(n − j)    (1)

G(z) = (b_0 + b_1 z^{-1} + ... + b_M z^{-M}) / (1 + a_1 z^{-1} + ... + a_N z^{-N})    (2)

where the a_k and b_k are the filter coefficients that define its poles and zeros, respectively. These parameters are estimated by the genetic algorithm so that the error, given by a fitness function comparing the frequency response of the designed IIR filter with the real frequency response (of the plant), is minimal. The fitness function (4) combines the magnitude error with the variance of the phase error over the sampled frequencies:

(1/L) [ Σ_{w} (|H(w)| − |H_i(w)|)^2 ]^{0.5} + variance[ ∠H(w) − ∠H_i(w) ],    0 ≤ w ≤ π, ∀w ∈ w_L    (4)

where L is the number of sampling points in the domain w_L.

Case Study
The genetic algorithm begins with a random set of possible solutions, each embedded in one chromosome. Each chromosome has M + N + 1 genes. At every generation, the cost of each individual (chromosome) is evaluated by a predetermined cost function; individuals with lower cost are preferred. The population is then evolved through the cyclic process of natural selection, survival of the fittest, and mutation. This cycle is continued in order to find the optimal solution.
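The cycle just described can be sketched as follows in Python; tournament selection, one-point crossover and Gaussian mutation are assumed operator choices, and all parameter values are illustrative rather than the paper's settings.

import numpy as np

def ga_iir(fitness, n_coeffs, pop_size=100, gens=100, pc=0.8, pm=0.05):
    """Minimal GA sketch for estimating the M+N+1 IIR coefficients.
    `fitness` maps a coefficient vector to the cost of equation (4)."""
    rng = np.random.default_rng(0)
    pop = rng.uniform(-1, 1, (pop_size, n_coeffs))   # each row = chromosome
    for _ in range(gens):
        cost = np.array([fitness(c) for c in pop])
        # tournament selection: the lower-cost contender wins
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(cost[a] < cost[b], a, b)]
        # one-point crossover on consecutive parent pairs
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < pc:
                cut = rng.integers(1, n_coeffs)
                children[i, cut:], children[i + 1, cut:] = \
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy()
        # Gaussian mutation on a small random subset of genes
        mask = rng.random(children.shape) < pm
        children[mask] += 0.1 * rng.standard_normal(mask.sum())
        pop = children
    return pop[np.argmin([fitness(c) for c in pop])]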
4.2

H_RM(z) = (1 + z^{-1} + z^{-2}) / (1 + z^{-1} + z^{-2})    (7)
The proposed cost function is examined in two situations: a parametric model with matched order, and one with reduced order. In each situation, the quality of the estimated models is compared in both the time and frequency responses. The numerical results indicate that the proposed fitness function is effective in building an acceptable model for linear identification.

Figure 5: Comparative frequency responses for estimated models based on the reduced-order IIR filter

[8] M. Haseyama and D. Matsuura, A filter coefficient quantization method with genetic algorithm, including simulated annealing, IEEE Signal Processing Letters 13/4 (2006).
Parham Moradi
University of Kurdistan
University of Kurdistan
f.kiasat@uok.ac.ir
p.moradi@uok.ac.ir
Abstract: Recommender systems are widely applied in e-commerce websites to help customers find the items they want. A recommender system should be able to provide users with useful information about items that might interest them. The similarity measure used to compute user similarity is the most important factor in a recommender system, and a more precise similarity measure improves the recommender system's results. The purpose of this paper is to introduce a new similarity measure based on the combination of both the user's profile and the user's rating records. The major advantage of the proposed measure compared with previous ones is that it uses two different information sources, which yields more precise results, whereas previous measures compute the similarity according to the user profile or the ratings alone. Designing a new similarity measure based on the combination of different user information sources, e.g. user profile and ratings, can overcome sparsity and cold start, which are the major problems in recommender systems. The experimental results show that the proposed measure can give satisfactory and high-quality recommendations.
Introduction
A key factor in the quality of the recommendations obtained in a CF-based RS lies in its capacity to determine which users are the most similar to a given user. A series of algorithms and metrics [18] for similarity between users are currently available which enable this important function to be performed in the CF core of this type of RS.

2.2 Cosine similarity

To calculate a similarity measure we need something that indicates the similarity correctly. Up to now, many measures such as the cosine similarity measure [8] and

3 Proposed user similarity measure
u_{ik} = 1 / Σ_{j=1}^{c} (d_{ik} / d_{jk})^{2/(m−1)}

3.1

J_m = Σ_{i=1}^{c} Σ_{k=1}^{n} (u_{ik})^m D_{ik}^2 (x_k, v_i)

MF_x = <MF_{1x}, MF_{2x}, ..., MF_{kx}>

For example, if we want to divide the users into five clusters, the degree-of-membership vectors of users 1 and 2 are as follows:

MF_1 = <0.1, 0.6, 0.1, 0.18, 0.02>
MF_2 = <0.49, 0.23, 0.18, 0.01, 0.09>

The similarity of users x and y with these degrees of membership is computed as follows:
PB(x, y) = 1 − (1/c) Σ_{j=1}^{c} |MF_{jx} − MF_{jy}|

sim(x, y) = (1 / (M − m + 1)) Σ_{i=0}^{M−m} W(i) · V^{(i)}_{x,y}
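A small Python sketch of the membership-based similarity PB as reconstructed above; the function name is illustrative.

def pb_similarity(mf_x, mf_y):
    """PB(x, y) = 1 - (1/c) * sum_j |MF_jx - MF_jy|, where mf_x and mf_y
    are degree-of-membership vectors over the c fuzzy clusters."""
    c = len(mf_x)
    return 1 - sum(abs(a - b) for a, b in zip(mf_x, mf_y)) / c

# With the example vectors of users 1 and 2 above:
mf1 = [0.1, 0.6, 0.1, 0.18, 0.02]
mf2 = [0.49, 0.23, 0.18, 0.01, 0.09]
print(pb_similarity(mf1, mf2))   # the absolute differences sum to 1.08,
                                 # so the similarity is 1 - 1.08/5 = 0.784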
3.2

3.3
Experiments
Figure 1: Comparative Mean Absolute Error results for the proposed metrics and the genetic similarity method

Figure 4: Comparative Precision results for the proposed metrics and the genetic similarity method
Conclusion

Choosing a suitable similarity measure in an RS is an important factor in obtaining better results; the similarity measure plays a great role in producing better recommendations for users and increasing their satisfaction. This paper proposed a new similarity measure based on user profiles and ratings. Using both user profile and rating information makes the computation of user similarity independent of the user-item matrix, so the effect of an incomplete matrix on the similarity computation is decreased; on the other hand, the new similarity measure overcomes sparsity and cold start.
References

[1] G. Adomavicius and A. Tuzhilin, Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions, Knowledge and Data Engineering, IEEE Transactions on 17 (2005), no. 6, 734–749.

[2] S.K.L. Al Mamunur Rashid, G. Karypis, and J. Riedl, ClustKNN: a highly scalable hybrid model- & memory-based CF algorithm, Proc. of WebKDD 2006: KDD Workshop on Web Mining and Web Usage Analysis, in conjunction with the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2006), August 20–23, 2006, Philadelphia, PA, 2006.

[3] M.Y.H. Al-Shamri and K.K. Bharadwaj, Fuzzy-genetic approach to recommender systems based on a novel hybrid user model, Expert Systems with Applications 35 (2008), no. 3, 1386–1399.

[4] J.C. Bezdek, Pattern recognition with fuzzy objective function algorithms, Kluwer Academic Publishers, 1981.

[5] J. Bobadilla, F. Serradilla, and J. Bernal, A new collaborative filtering metric that improves the behavior of recommender systems, Knowledge-Based Systems 23 (2010), no. 6, 520–528.

[6] J. Bobadilla, F. Serradilla, A. Hernando, et al., Collaborative filtering adapted to recommender systems of e-learning, Knowledge-Based Systems 22 (2009), no. 4, 261–265.

[7] L. Candillier, K. Jack, F. Fessant, and F. Meyer, State-of-the-art recommender systems, Collaborative and Social Information Retrieval and Access: Techniques for Improved User Modeling (2009), 1–22.

[8] J.C. Dunn, A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters (1973).

[9] L.Q. Gao and C. Li, Hybrid personalized recommended model based on genetic algorithm, Int. Conf. on Wireless Communications, Networking and Mobile Computing, 2008, pp. 9215–9218.

[10] R.J. Hathaway and J.C. Bezdek, Recent convergence results for the fuzzy c-means clustering algorithms, Journal of Classification 5 (1988), no. 2, 237–247.

[11] J.L. Herlocker, J.A. Konstan, L.G. Terveen, and J.T. Riedl, Evaluating collaborative filtering recommender systems, ACM Transactions on Information Systems (TOIS) 22 (2004), no. 1, 5–53.

[12] Z. Huang, H. Chen, and D. Zeng, Applying associative retrieval techniques to alleviate the sparsity problem in collaborative filtering, ACM Transactions on Information Systems (TOIS) 22 (2004), no. 1, 116–142.

[13] M. Kalz, H. Drachsler, J. Van Bruggen, H. Hummel, and R. Koper, Wayfinding services for open educational practices (2008).

[14] B.M. Kim, Q. Li, C.S. Park, S.G. Kim, and J.Y. Kim, A new approach for combining content-based and collaborative filters, Journal of Intelligent Information Systems 27 (2006), no. 1, 79–91.

[15] B. Krulwich, Lifestyle finder: Intelligent user profiling using large-scale demographic data, AI Magazine 18 (1997), no. 2, 37.

[16] R.D. Lawrence, G.S. Almasi, V. Kotlyar, M.S. Viveros, and S.S. Duri, Personalization of supermarket product recommendations, Data Mining and Knowledge Discovery 5 (2001), no. 1, 11–32.

[17] D. Li, Q. Lv, X. Xie, L. Shang, H. Xia, T. Lu, and N. Gu, Interest-based real-time content recommendation in online social communities, Knowledge-Based Systems (2011).
module for recommender systems in e-commerce, Computers & Operations Research (2010).
[27] K. Wei, J. Huang, and S. Fu, A survey of e-commerce recommender systems, Service systems and service management,
2007 international conference on, 2007, pp. 15.
The lattice structure of Signed chip firing games and related models
A. Dolati
S. Taromi
B. Bakhshayesh
Shahed University
Shahed University
Shahed University
Department of mathematics
Department of Mathematics
Department of Mathematics
dolati@shahed.ac.ir
taroomi@shahed.ac.ir
Bakhshayesh@shahed.ac.ir
Abstract: In this paper the lattice structure of Signed Chip Firing Games is studied, and the class of lattices induced by Signed Chip Firing Games is compared with the class of ULD lattices and with the classes of lattices induced by Mutating Chip Firing Games and the Abelian sandpile model.

Keywords: lattice; Signed Chip Firing Game (SCFG); Abelian sandpile model (ASM); Mutating Chip Firing Game (MCFG); ULD lattices.
Introduction
2.1
Main results
References
[1] P. Bak, C. Tang, and K. Wiesenfeld, Self-organized criticality: An explanation of the 1/f noise, Phys. Rev. Lett.
(1987).
[2] N. Biggs, Chip firing and the critical group of a graph, Journal of Algebraic Combinatorics 9 (1999), 25–45.
[3] A. Bjorner and L. Lovasz, Chip-firing games on directed graphs, Journal of Algebraic Combinatorics 1 (1992), 304–328.
[4] A. Bjorner, L. Lovasz, and W. Shor, On computing fixed
points for generalized sandpiles, European Journal of Combinatorics 12 (1991), 283-291.
[5] R. Cori and D. Rossin, On the sandpile group of a graph,
European Journal of Combinatorics 21 (2000), 447459.
[6] R. Cori and T.T.T.Huong, Signed chip firing games on some
particular casesand its applications, LIX, Lecole Polytechnique, France Institute of Mathematics, Hanoi, Vietnam,
October 26 (2009).
[7] B. Davey and H. Priestley, Introduction to Lattices and Orders, 1990.
[8] D. Dhar, P. Ruelle, S. Sen, and D. Verma, Algebraic aspects
of sandpile models, Vol. 28, 1995.
[9] K. Eriksson, Chip firing games on mutating graphs, SIAM Journal of Discrete Mathematics 9 (1996), 118–128.
[10] E. Goles, M. Latapy, C. Magnien, and M. Morvan, Sandpile models and lattices: a comprehensive survey, Theoretical Computer Science 322 (2004), 383–407.
[11] E. Goles, M. Morvan, and H. Phan, Lattice structure and
convergence of a game of cards, Annals of Combinatorics
(1998).
[12] M. Latapy and H. D. Phan, The lattice structure of chip
firing games and related models, Physica D 155 ( 2001),
69-82.
[13] C. Magnien, Classes of lattices induced by chip firing (and sandpile) dynamics, European Journal of Combinatorics (2003), 665–683.
[14] H.D. Phan, L. Vuillon, and C. Magnien, Characterization of lattices induced by (extended) chip firing games, Discrete Mathematics and Theoretical Computer Science, Proc. 1st Internat. Conf. Discrete Models: Combinatorics, Computation, and Geometry (DM-CCG'01), MIMD (July 2001), 229–244.
[15] B. Monjardet, K.P. Bogart, R. Freese, and J. Kung, The consequences of Dilworth's work on lattices with unique irreducible decompositions, The Dilworth Theorems: Selected Papers of Robert P. Dilworth, Birkhauser, Boston (1990), 192–201.
Zanjan, Iran
Zanjan, Iran
j.khair@iasbs.ac.ir
b sadeghi b@iasbs.ac.ir
Rebvar Hosseini
Zanjan, Iran
Zanjan, Iran
r.hosseini@iasbs.ac.ir
z.alizadeh@iasbs.ac.ir
Abstract: A set of natural numbers tiles the plane if a square-tiling of the plane exists using exactly one square of side length n for every n in the set. From [2] we know that N itself tiles the plane. From that and [3] we know that the set of even numbers tiles the plane, while the set of odd numbers does not. According to [1], it is possible to tile the plane using only one odd square; it is also possible to tile the plane using exactly three odd squares, but it is impossible to tile the plane using exactly two odd squares. In this paper we show that there exists a finite set containing n ≠ 3 odd numbers, together with a set of even numbers, that can tile a finite plane.
Introduction
2.1

2.1.1 Proposition 1

2.1.2 Proposition 2
Proof 2: What if both sides of the finite plane are odd? The area will be odd, but if we calculate the area for an even set with two odd numbers, it will be even. So, an even set with two odd numbers cannot tile the plane.

2.2 Set With Four or Five Odd Numbers

Until now, tiling finite planes with 1, 2 and 3 odd numbers and a set of evens has been covered. The next propositions cover tiling for n ≥ 4 odds and a set of evens.

2.2.1 Proposition

Definition: It is possible to tile a finite plane by 4 odd

2.3.1 Proposition

Definition: It is possible to tile a finite plane by 6 odd numbers and a set of evens.

Proof: Start with a 98 × 65 squared rectangle composed of eleven squares of sides 1, 4, 7, 8, 9, 10, 14, 15, 18, 33, 65. This sequence of numbers can tile this finite plane, so the proposition is true.

2.4 Set With n ≥ 7 Odd Numbers

Theorem 1: For n ≥ 7, the sequence 1, 4, 7, 8, 9, 10, 14, 15, 18, 33, 65, followed by a Fibonacci-rule sequence starting from 88, can tile a finite plane with n ≥ 7 odd numbers and a set of evens.

Proof: Till now, tiling for different values of
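As a quick sanity check of the 98 × 65 construction above, the following lines verify that the listed squares exactly fill the rectangle's area and that six of them are odd; this is a hypothetical helper, not part of the paper.

# the eleven square sides used in the 98 x 65 rectangle
sides = [1, 4, 7, 8, 9, 10, 14, 15, 18, 33, 65]
assert sum(s * s for s in sides) == 98 * 65            # 6370 == 6370
print(sum(1 for s in sides if s % 2 == 1), "odd squares")   # prints: 6 odd squares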
References

[1] A.M. Berkoff, J.M. Henle, A.E. McDonough, and A.P. Wesolowski, Possibilities and impossibilities in square-tiling, IJCGA, World Scientific Publishing 21 (2011), no. 5, 545–558.
[2] F.V. Henle and J.M. Henle, Squaring the plane, The American Mathematical Monthly 115 (2008), no. 1, 3–12.
[12] M. Gardner, A New Selection, The Second Scientific American Book of Mathematical Puzzles & Diversions (1961).
[13] K. Scherer, New Mosaics, privately printed (197).
Lida Ahmadi
Department of Mathematics
University Of Kurdistan
Zanjan, Iran
Kurdistan, Iran
r.hosseini@iasbs.ac.ir
lida.ehmedi@gmail.com
Jalal Khairabadi
Zanjan, Iran
Zanjan, Iran
b sadeghi b@iasbs.ac.ir
j.khair@iasbs.ac.ir
Abstract: J2ME is a development platform for mobile devices, introduced by Sun Microsystems Inc. in 1999 for programming limited devices such as phones, PDAs and other small devices. Because of its limitations, this architecture does not provide APIs for data persistence management and relational databases. This paper presents a basis for a local relational database and data persistence management for J2ME-based applications, which can be used in any database-aware application on the J2ME platform. In the design and implementation of this database system, mobile device and J2ME limitations have been considered. Also, the B+tree indexing structure has been implemented in this project, which allows fast insertion, deletion, range queries and search, as well as a backup and restore mechanism for RMS-based databases.
Keywords: J2ME; B+tree; mobile devices; data management; storage; relational; database.
Introduction
Mobile devices are becoming more and more popular today, and the need for data-oriented software for them is growing very fast. So the need for a fast and acceptable manner of storing and retrieving data has become more obvious in the past few years.

Mobile applications should be interactive, in the sense that they can respond to user actions. For an application that stores and retrieves data in a database, this is an important issue: if storing and retrieving data is not implemented well, it can be very lengthy and time-consuming, and this will definitely not let the mobile platform succeed. Also, a mobile application should be available in off-line mode as well as on-line. This is important if the cost of data transmission over the network is high or the network is not always available; the network speed is also an issue. Hence a data persistence man-
We have chosen J2ME as our development platform because the number of mobile devices that support this platform has been increasing in the past few years. The Java virtual machine that is implemented in mobile devices is called the KVM (K for Kilobyte). The J2ME virtual machine is installed on nearly every mobile device manufactured nowadays; for devices that do not come with a KVM by default, custom releases by third-party manufacturers are available (like IBM J9 for Windows Mobile and Palm-OS, and JBED and Net-Mite for Android).

System Internals

Till now, some information about the platform and its limitations has been discussed. In this section we study the internal structure of this system.

2.1

2.2 Add Mechanism
In this situation, several items share a common key between them, so instead of accessing the disk multiple times, the addition can be done via 2, 3 or more disk accesses. This reduces the disk accesses from, say, 20 down to 2 or even fewer, depending on the database properties, which saves the program time; on the mobile platform this time saving is considerable. Another type of addition is also supported, which is discussed in the next sections. This mechanism is called Bulk Loading, and it is used mainly for backup and restore of the saved data [2].

2.3

2.4 Deletion Mechanism

Two types of deletion are common in the database programming world: physical deletion and marking [4]. In this system the latter is used; that is, when a record is deleted, it is not deleted physically. This method is used by many DBMS systems nowadays. Of course it has some space overhead, but considering the size of memory and storage these days, this is not a big deal for database management systems. On the other hand, the database is compacted during backup and restore operations; that is, the marked records are deleted physically when performing these operations. Unfortunately, J2ME does not support services for running tasks in the background, hence we do not have any real-time mechanism for compacting data while the program is not active.

2.5 Updating Records

Updating records is a bit tricky. An update can be performed on a record and change different parts of it. If the changes affect the key part of a record, the location of the record in the B+tree index should be updated; otherwise, updating the non-key part of the record is sufficient.

2.6 Backup and Restore

If we have a large collection of records and we want to create a B+tree on some field, doing so by repeatedly inserting records is very slow. Bulk loading is a technique to overcome this shortcoming, and it can be done much more efficiently. One of the applications of this method is importing data from other database systems and restoring data from system backups. Furthermore, compacting the database, i.e. the physical deletion of records that are marked for deletion, is possible via this method. This mechanism is also the best way to back up and restore data, because accessing an RMS file directly and manipulating it in the J2ME world is impossible. Using this mechanism will minimize the number of disk accesses and hence the elapsed time for these operations [2].

3 Case Study: J2ME High Speed Accounting Software

For testing the system, a complete accounting software package has been developed. The complete software has been ported from the Windows platform to J2ME and their operations are identical. All operations are done in real time. The first step is creating the database schema. For defining a schema and opening a table, the field sizes for the records of the table are needed. So in the constructor
of the table, the field sizes are passed to the table creator. After this step, the database is ready for all kinds of actions. After defining all the tables and creating the schemas, the tables remain open until the programmer closes them. This is done to speed up the program by reducing the number of open and close operations on tables. In this software a simple relational database model has been used; of course, the relational model still needs more work. Reports are produced by carrying out joins on the tables manually.
work and communication system is also limited compared to computers. Indexing on multiple fields separately is also an idea for future work, and data exchange via Bluetooth is another idea that is important for mobile devices.
Notes
[1] IBM toolbox for J2ME. Available at http://www.ibm.com/.
[3] PointBase Micro Available at:http://www.pointbase.com/.
[4] Oracle Lite. Available at http://www.oracle.com/.
References
[1] J. Keogh, J2ME: The Complete Reference, McGrawHill/Osborne, 2003.
[2] R. Ramakrishnan and J. Gehrke, Database Management Systems, McGraw-Hill.
[7] D. Comer, The ubiquitous B-tree, ACM Computing Surveys 11 (1979), no. 2, 121–137.
[8] E. Horowitz, S. Sahni, and S. Anderson-Freed, Fundamentals of Data Structures in C, Computer Science Press,
Rockville, MD, 1993.
[9] J. Jannink, Implementing deletion in B+-trees, ACM SIGMOD Record 24 (1995), no. 1, 33–38.
[10] D. Knuth, The Art of Computer Programming: Sorting and
Searching, Vol. 3, Addison-Wesley, Reading, MA, 1973.
[11] J.D. Ullman, Principles of Database and Knowledge-Base
Systems, Computer Science Press, Rockville, MD, 1988.
[12] G. Wiederhold, Database Design, McGraw-Hill, New York,
1983.
Javad Poshtan
mostajabi@elec.iust.ac.ir
jposhtan@iust.ac.ir
Abstract: Because of the wide application of signal processing in areas such as echo cancellation, noise reduction, bio-systems, speech recognition, communications and control applications, the topic of IIR modeling attracts considerable interest from researchers. IIR structures are very useful for modeling recursive systems; however, they produce multimodal error surfaces and need a powerful optimization technique, such as a genetic algorithm, to minimize the resulting error function. On the other hand, in order to find an acceptable model we need a complete and informative data set, which is rarely at hand in many practical applications. In this paper we employ a genetic algorithm for estimating the parameters of IIR structures in which two kinds of skimpy data are used simultaneously. The numerical results presented here indicate that the proposed method is effective and practical in building an acceptable model based on IIR (infinite impulse response) filters, especially when there is a skimpy data set in the time domain.
Introduction
Problem Statement
The recursive expression with input u(n) and output y(n), and the equivalent transfer function of the IIR filter, can be described by (1) and (2):

y(n) = Σ_{k=0}^{N} b_k x(n − k) − Σ_{j=1}^{M} a_j y(n − j)    (1)

G(z) = (b_0 + b_1 z^{-1} + ... + b_M z^{-M}) / (1 + a_1 z^{-1} + ... + a_N z^{-N})    (2)

The time-domain cost is the mean squared output error (3), and the combined cost (4) averages the squared magnitude error of the frequency response with the squared output error:

(1/N_t) Σ_{n=1}^{N_t} [y(n) − \hat{y}_i(n)]^2    (3)

(1/(2N_t)) Σ_{w} [|H(w)| − |H_i(w)|]^2 + (1/(2N_t)) Σ_{n=1}^{N_t} [y(n) − \hat{y}_i(n)]^2    (4)

Case Study
B(z^{-1}) / A(z^{-1})    (5)
(b_0 + b_1 z^{-1} + b_2 z^{-2} + b_3 z^{-3} + b_4 z^{-4}) / (1 + a_1 z^{-1} + a_2 z^{-2} + a_3 z^{-3} + a_4 z^{-4})    (6)
At first, the genetic algorithm is applied with 500 samples produced by a white noise input; then GA estimates the model with only 10 of them. After that, GA is employed in the frequency domain with only 10 data points. Finally, 10 data points in the time domain are combined with 10 frequency-domain data points in order to examine the proposed method. In order to compare the four estimated models with each other, and also with the real plant (equation (5)), their step responses, impulse responses and Bode diagrams are depicted comparatively in Figures 1 to 6. These diagrams illustrate valuable information about the system structure that is necessary for other applications such as controller design; it is therefore important that the estimated model behaves as similarly as possible to the real plant in such responses. Figure 1 shows that the model estimated from 10 time-domain data points has no acceptable performance with respect to the real plant. In addition, this figure shows the poor behavior of the step response when only 10 frequency-domain data points are employed for estimation. In contrast, when the two skimpy data sets are combined using our proposed combination method, the estimated model has an acceptable step response quality (transient and steady state) with respect to the real plant. Similar results can be concluded from the impulse responses of the estimated models in Figure 2 and from their Bode diagrams in Figure 3. In Figures 4 to 6, the model estimated from the skimpy data collection comprising 10 time-domain plus 10 frequency-domain data points is also compared with the one estimated from 500 data points produced by a white noise input. These figures emphasize that the model estimated by the combination method has better responses with respect to the real plant. These results illustrate that combining skimpy time data with skimpy frequency data can be useful for system identification.
References
[1] T. Mostajabi and J. Poshtan, Control and System Identification via Swarm and Evolutionary Algorithms, International Journal of Scientific and Engineering Research 2
(2011).
[2] V. Hegde, S. Pai, and W. K. Jenkins, Genetic Algorithms
for Adaptive Phase Equalization of Minimum Phase SAW
Filters, 34th Asilomar Conf. on Signals, Systems, and Computers November (2000).
[3] R. Nambiar, C. K. Tang, and P. Mars, Genetic and Learning Automata Algorithms for Adaptive Digital Filters, Proc. IEEE Int. Conf. on ASSP 4 (1992), 41-44.
[4] S. C. Ng, S. H. Leung, C. Y. Chung, A. Luk, and W. H.
Lau, The genetic search approach: A new learning algorithm
for adaptive IIR filtering, IEEE Signal Processing Magazine
Nov (1996), 38-46.
[5] O. Montiel, O. Castillo, R. Sepulveda, and P. Melin, Application of a breeder genetic algorithm for finite impulse filter
optimization, Information Sciences 161 (2004), 139-158.
[6] Y. Yang and X. Yu, Cooperative Coevolutionary Genetic
Algorithm for Digital IIR Filter Design, IEEE Trans. Industrial Electronics 54/3 (2007).
[7] D. J. Krusienski and W. K. Jenkins, Design and performance of adaptive systems based on structured stochastic
optimization strategies, IEEE Circuits And Systems Magazine First Quarter (2005).
[8] S. T. Pan, Design of robust D-stable IIR filters using genetic
algorithms with embedded stability criterion, IEEE Trans.
Signal Processing 57/8 (2009).
[9] J. T. Tsai, J. H. Chou, and T. K. Liu, Optimal design of
digital IIR filters by using hybrid taguchi genetic algorithm,
IEEE Trans. Industrial Electronics 53/3 (2006).
[10] M. Haseyama and D. Matsuura, A filter coefficient quantization method with genetic algorithm, including simulated
annealing, IEEE Signal Processing Letters 13/4 (2006).
[11] T. Mostajabi and J. Poshtan, IIR Filter Design Using Time
and Frequency Responses by Genetic Algorithm for System
Identification, International eConference on Computer and
Knowledge Engineering (2011).
[12] S. L. Netto, P. R. Diniz, and P. Agathoklis, Adaptive IIR filtering algorithms for system identification, a general framework, IEEE Transactions on Education 38/1 (1995).
Alireza Meshkin
Damavand, Iran
Damavand, Iran
Mohsenta2003@gmail.com
Meshkin@ibb.ut.ac.ir
Mehdi Sadeghi
National Institute of Genetic Engineering and Biotechnology
Tehran, Iran
M Sadeghi@ibb.ut.ac.ir
Introduction
…sites in which more than one nucleotide or gap is observed across the population. Such variations are called single nucleotide polymorphisms (SNPs).

A haplotype is the SNP information on each of the two unlike copies of a chromosome in a diploid organism, and it is estimated from aligned SNP fragments. Because of its importance for the analysis of fine-grain genetic data, for the mapping of disease genes to specific haplotype patterns, and for drug design, haplotype inference has attracted more and more attention in recent years.

Haplotypes encode the genetic data of an individual at a single chromosome. The human chromosome set is diploid: each chromosome has a maternal and a paternal copy, but it is technologically infeasible to separate the information of homologous chromosomes by experimental methods. For this reason, the inclination toward computational methods has increased in recent years. A relevant approach to haplotype inference is pure parsimony. Haplotype inference by pure parsimony (HIPP) aims at finding the minimum number of distinct haplotypes that explains a given set of genotypes. Parsimony haplotype inference belongs to the NP-hard or APX-hard class [14], [16], [20].

…results [35]. A recent extension of the likelihood criterion, Bayesian inference [36-38], also uses biologically-based prior probabilities to obtain more accurate estimates of haplotype frequencies [37], [39], [3]. Just as with the likelihood criterion, however, finding the optimal phylogeny using Bayesian inference is NP-hard [33], [32].

The parsimony criterion states that, among many plausible explanations of an observed phenomenon, the one requiring the fewest assumptions should be preferred [32]. Hence, based on the parsimony criterion, a set H of haplotypes is defined to be optimal, or the most parsimonious, for the genotypes analyzed if H has the smallest cardinality [3], [25], [37]. The parsimony criterion is well suited when the genotypes are characterized by low-to-medium variability [6], [35], and it is at the core of several versions of the haplotype problem, namely: Clark's problem [2], the pure parsimony haplotyping problem [25], the minimum perfect phylogeny problem [35], the minimum recombination haplotype configuration problem [40], the zero recombination haplotype configuration problem [40], and the k-minimum recombination configuration problem [40]. As a drawback, apart from some polynomial cases ([35], [40], [28]), each version of these optimization problems has been proved to be NP-hard [33], [34].

The first ideas proving the parsimony criterion were started by Gusfield on the results of Clark's algorithm [3]. He observed that, among all runs of Clark's algorithm, the run with the minimum number of distinct haplotypes, or with the haplotypes used the maximum number of times in phasing the genotype sequences, gives the accurate set for the haplotype inference problem [2]. In recent years, heuristic algorithms [15], [16], [17], greedy algorithms [3], branch-and-bound algorithms [4], linear programming methods [3], [5], [7], semidefinite programming [8], [9], SAT-based algorithms [21], [22], [23], [24], and pseudo-Boolean optimization algorithms [12] have been proposed by bioinformatics researchers for HIPP. The first approximation algorithm, with a guaranteed performance of O(2^{k-1}) where k is the number of heterozygous sites in a genotype, was introduced by Lancia [25]. Hange gave another approximation algorithm in the O(log n) complexity class, where n is the number of genotypes [19]. A new heuristic algorithm based on the parsimonious tree-grow (PTG) method was announced by Li in [18], with O(n^2 m) time complexity, where n is the number of genotypes with m SNP sites. Recently, [9] published a polynomial algorithm for HIPP by considering the Clark compatibility graph. Markov chain based algorithms are also used for the haplotype inference problem in the PHASE [18], [19] and PLEM [20] software. In recent years, SAT-based algorithms have attracted much interest; the SHIPs software [12] is one of them. Also, PBO as
a special form of SAT solver is used in the RPOLY software [5], [6].

In this article a new tree-structured algorithm based on the divide-and-conquer method is introduced. Using an overlap window size as the partitioning factor, and also using an enhanced version of PTG, named greedy PTG, for solving each sub-partition, the haplotype inference algorithm is dramatically improved both in time and in accuracy. A powerful merge algorithm is also used to mix the sub-partition results and shape the final haplotypes. Hence the method is named the overlap window partitioning and merge result method, or OWPMR for short. The partition solver, greedy PTG, is …

PROBLEM FORMULATION

SNP sites are positions in a DNA sequence where the nucleotides of different individuals have different alleles. All distinct nucleotides that occur at a SNP site are named the alleles of that SNP site. Usually only two of the four possible alleles occur at a SNP site, so we can restrict our computation to bi-allelic SNPs, which form the vast majority of known SNPs. In this case a haplotype can be represented by a 0/1 vector, typically by representing the most frequent allele as 0 and the alternate allele as 1. A genotype is represented as a 0/1/2/? vector, where 0 (respectively 1) means that both the maternal and the paternal chromosome contain the 0 (respectively 1) allele, 2 means that the two chromosomes contain different alleles, and ? means that the allele identities are unknown. In this article the unknown sites are discarded, so in the following the genotype matrix is represented over 0/1/2.

The allele at locus i of haplotype h is denoted by h(i). Similarly, for a given genotype vector g, the genotype at locus i is denoted by g(i).

We say that an unordered haplotype pair h, k resolves the genotype g if the following conditions hold for each j = 1, ..., m:

g[j] = 0 if and only if (h[j] = 0 and k[j] = 0)
g[j] = 1 if and only if (h[j] = 1 and k[j] = 1)
g[j] = 2 if and only if (h[j] and k[j] differ)
In other words, between two nodes of T_ij, the node that has the larger set of indices at level (j+1) is nearer to the parsimony criterion, because it is pointed to by most indices in the previous levels; so selecting this node for resolving an undivided index i yields fewer nodes at level j+1 and, finally, the minimum number of distinct haplotypes.

…result with comparably good accuracy. Obviously, greedy PTG is of the same complexity order as PTG, and both of them are members of the O(n^2 m) class.

In order to find a parallel algorithm for HIPP, the divide-and-conquer method, as a top-down architecture, seems suitable. Since haplotype inference under the pure parsimony condition is an optimization problem, independently solving the sub-matrices and merging their results cannot attain the optimum; the solving and the merging of the sub-matrices are related to each other. In this regard, there are two basic issues that must be considered.
5.1 Partitioning step

5.2 Merging step
The main step of OWPMR is merging the partition results and shaping the final haplotypes. The merge method takes the results concluded by inference on the sub-matrices and produces the final haplotypes by probing the HAPSET data structure row by row, forming the parsimonious haplotype for each row. Since consecutive sub-matrices have common columns, their haplotypes have common postfixes and prefixes. If G_{i,k} is the genotype sub-matrix from column i to column k, and H_{i,k}^j is the set of parsimony haplotypes for G_{i,k} inferred by greedy PTG, then some postfixes of H_{i,k}^j are prefixes of H_{i,k+1}^j, and this rule holds for each of the w partitions. The existence of common postfixes and prefixes between each pair of the w subsequences makes it possible to check the matching of the inferred haplotypes and merge them efficiently. The merge algorithm needs to find the largest common substring in each w column of HAPSET. In this regard, OWPMR uses the MATCH algorithm to find the largest common substrings and produce the final haplotypes. In each step, the MATCH routine compares the last k bits of one substring with the first k bits of the other, with k running from w-1 down to 1, trying to find the largest match between the two strings.
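A minimal sketch of the overlap test inside such a MATCH routine might look as follows, assuming 0/1 haplotype fragments represented as strings; the bookkeeping of HAPSET itself is omitted.

def match(left, right, w):
    """Largest overlap between a suffix of `left` and a prefix of `right`.

    Tries overlap lengths k = w-1 down to 1, as in the MATCH routine
    described above, and returns the best k (0 if nothing matches).
    """
    for k in range(w - 1, 0, -1):
        if left[-k:] == right[:k]:
            return k
    return 0

# Example: fragments "01101" and "10110" overlap in "101".
assert match("01101", "10110", 5) == 3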
EVALUATION OF RESULTS

Four widely used measures for assessing accuracy are adopted in this study; the metrics include:

- The haplotype error rate (HE): the average proportion of haplotypes incorrectly inferred (the percentage of reconstructed haplotypes with at least one site erroneously assigned) [26].

- The single-site error rate (SSE): the average proportion of ambiguous single sites (that is, heterozygous sites in the individual) whose phase is incorrectly inferred [26].

- The global single-site error rate (GSSE): the average proportion of all single sites whose phase is incorrectly inferred. Note that the denominator here is the total number of sites, regardless of whether they are ambiguous or not [26].

- The switch error (SWE), as defined by [19], corresponds to one minus the switch accuracy of [24]: the average proportion of heterozygous positions mis-assigned relative to the previous heterozygous position. It shows whether errors in haplotype reconstruction are mainly due to the mis-assignment of isolated single sites (high error) or of blocks of neighboring sites (low error) [26].
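For concreteness, the switch error under the stated definition can be computed as in the following sketch; the helper and its signature are illustrative, not code from [26].

def switch_error(true_h, inferred_h, genotype):
    """Switch error (SWE): fraction of heterozygous positions whose phase
    is mis-assigned relative to the previous heterozygous position.

    true_h, inferred_h -- one haplotype of the true / inferred pair (0/1 lists)
    genotype           -- 0/1/2 genotype; 2 marks the heterozygous sites
    """
    het = [i for i, g in enumerate(genotype) if g == 2]
    switches = 0
    for prev, cur in zip(het, het[1:]):
        same_true = true_h[prev] == true_h[cur]
        same_inf = inferred_h[prev] == inferred_h[cur]
        if same_true != same_inf:   # phase relation flipped between the two sites
            switches += 1
    return switches / max(1, len(het) - 1)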
References
[1] V. Bafna, S. Istrail, and G. Lancia, Polynomial and APX-hard cases of the individual haplotyping problem, Theoretical Computer Science 335 (2005), no. 1, 109-125.
[2] A. G. Clark, Inference of haplotypes from PCR-amplified samples of diploid populations, Molecular Biology and Evolution 7 (1990), no. 2, 111-122.
[3] D. Gusfield, Haplotyping by pure parsimony, 14th Symposium on Combinatorial Pattern Matching (CPM) (2003), 144-155.
[4] Zh. Yuzhong, Z. Xu-Yun, Zh. Qiangfeng, and Ch. Guoliang, An overview of the haplotype problems and algorithms, Higher Education Press, co-published with Springer-Verlag GmbH 1 (2007), no. 3, 272-282.
[7] I. Lynce, A. Graca, J. Marques-Silva, and A. L. Oliveira, Haplotype inference with Boolean constraint solving: An overview, Oxford Journal Bioinformatics 14 (2008), 3545-3549.
[8] Z. P. Li, W. F. Zhou, X. S. Zhang, and L. Chen, A parsimonious tree-grow method for haplotype inference, Oxford Journal Bioinformatics 21 (2005), 3475-3481.
[9] S. Benedettini, A. Roli, and L. Di Gaspero, Two-level ACO for haplotype inference under pure parsimony, IEEE/ACM Transactions on Computational Biology and Bioinformatics 8 (2008), no. 12, 149-158.
[10] P. Hung and H. Chen, Parallel algorithm for inferring haplotypes, 2007.
[11] I. Lynce, J. Marques-Silva, and L. Di Gaspero, Haplotype inference with Boolean satisfiability, International Journal on Artificial Intelligence Tools 17 (2008), no. 2, 355-387.
[12] A. Graca, J. Marques-Silva, I. Lynce, and A. Oliveira, Efficient haplotype inference with pseudo-Boolean optimization (2007), 125-139.
[13] A. Graca, J. Marques-Silva, I. Lynce, and A. Oliveira, Efficient haplotype inference with combined CP and OR techniques, CPAIOR'08 (2008), 308-312.
CONCLUSIONS
[21] I. Lynce, J. Marques-Silva, X. Xu, and J. Liu, Efficient haplotype inference with Boolean satisfiability, AAAI Conference on Artificial Intelligence (2006), 104-109.
[22] I. Lynce, J. Marques-Silva, X. Xu, and J. Liu, Haplotype inference with Boolean satisfiability, International Journal on Artificial Intelligence Tools 17 (2008), no. 2, 104-109.
[23] A. Graca, J. Marques-Silva, I. Lynce, and A. Oliveira, Efficient haplotype inference with pseudo-Boolean optimization, Algebraic Biology (2007), 125-139.
[24] A. Graca, J. Marques-Silva, I. Lynce, and A. Oliveira, Efficient haplotype inference with combined CP and OR techniques, CPAIOR'08 (2008), 308-312.
[25] G. Lancia, Haplotyping populations by pure parsimony: complexity, exact and approximation algorithms, Bioinformatics (2004), 54.
[26] M. Stephens and P. Scheet, Accounting for decay of linkage disequilibrium in haplotype inference and missing-data imputation, Am. J. Hum. Genet. 76 (2005), 449-462.
[27] C. F. Xu, T. Niu, and J. Liu, Effectiveness of computational methods in haplotype prediction, Human Genetics 110 (2003), 148-156.
[28] Y. Zhang, T. Niu, and J. Liu, A coalescence-guided hierarchical Bayesian method of haplotype inference, Am. J. Hum. Genet. 79 (2003), 313-322.
[29] L. Excoffier and M. Slatkin, Maximum likelihood estimation of molecular haplotype frequencies in a diploid population, Molecular Biology and Evolution 12 (1995), no. 5, 921-927.
[30] D. Fallin and N. J. Schork, Accuracy of haplotype frequency estimation for biallelic loci via the expectation-maximization algorithm for unphased diploid genotype data, American Journal of Human Genetics 67 (2000), 947-959.
[31] T. Niu, Z. Qin, and S. Liu, Partition-ligation-expectation-maximization algorithm for haplotype inference with single-nucleotide polymorphisms, American Journal of Human Genetics 71 (2002), 1242-1247.
[32] D. Catanzaro and M. Labbe, The pure parsimony haplotyping problem: overview and computational advances, International Transactions in Operational Research 16 (2009), no. 5, 561-584.
[33] D. Catanzaro, The minimum evolution problem: overview and classification, Networks 53 (2008), no. 2, 89-90.
[34] B. V. Halldorsson, V. Bafna, N. Edwards, and R. Lippert, Combinatorial problems arising in SNP and haplotype analysis, Discrete Mathematics and Theoretical Computer Science, Springer-Verlag, Berlin 2731 (2003), no. 2, 26-47.
[35] D. Gusfield and S. H. Orzack, Haplotype inference, Handbook on Bioinformatics, CRC Press, Boca Raton, FL (2005), 1-28.
[36] P. Erixon, B. Svennblad, T. Britton, and B. Oxelman, Reliability of Bayesian posterior probabilities and bootstrap frequencies in phylogenetics, Systematic Biology 52 (2003), 665-673.
[37] J. P. Huelsenbeck, F. Ronquist, and R. Nielsen, Bayesian inference of phylogeny and its impact on evolutionary biology, Science 294 (2001), 2310-2314.
[38] B. Larget and D. L. Simon, Markov chain Monte Carlo algorithms for the Bayesian analysis of phylogenetic trees, Molecular Biology and Evolution 16 (1999), 750-759.
[39] B. V. Halldorsson, V. Bafna, N. Edwards, and R. Lippert, Combinatorial problems arising in SNP and haplotype analysis, Discrete Mathematics and Theoretical Computer Science, Springer-Verlag, Berlin 2731 (2003), 26-47.
[40] J. Li and T. Jiang, Efficient inference of haplotypes from genotypes on a pedigree, Journal of Bioinformatics and Computational Biology 10 (2003), no. 1, 41-69.
Farzaneh Yahyanejad
s-amini@iasbs.ac.ir
f.yahyanejad@iasbs.ac.ir
Alireza Khanteymoori
Abstract: Price prediction in a stock market is a challenging task due to the complexity of the behaviors of both customers and owners, and of course many other factors that are effective in this area. In this paper a Bayesian neural network is proposed to predict the final price of a company (IranTransfo) on the Tehran Stock Exchange. We use Markov chain Monte Carlo (MCMC) sampling for implementing our Bayesian neural network. In addition, some multilayer perceptron networks are discussed and their performances are compared with the proposed Bayesian neural network. The results show that MCMC is more effective in stock market price prediction.
Keywords: Bayesian Neural Network, MCMC method, MLP Neural Networks, stock market.
Introduction

…the number of hidden layers is optional; if they are used with nonlinear activation functions, the computational power of the network increases dramatically.
3.1 Gradient Descent

In gradient descent, the weights are updated on the basis of the fastest decrease in the error function. Therefore we have

w_{ij} = w_{ij} - \eta \frac{\partial e}{\partial w_{ij}},   (9)

wherein e is the error function and w_{ij} is one of the synaptic weights of the network.

3.2 Conjugate Gradient

In gradient descent, the search direction is the one along which the gradient of the error decreases faster than along any other direction. However, this is not the direction in which the fastest convergence takes place. There are methods in which a conjugate direction is found that produces faster convergence compared to the steepest-descent direction used in gradient descent. In the following we review some of them [10]. The search direction is

p_k = -g_k + \beta_k p_{k-1}.   (11)

Various options are available for \beta_k in (11).

3.2.1 Fletcher-Reeves Update

In Fletcher-Reeves [10] we have

\beta_k = \frac{g_k^T g_k}{g_{k-1}^T g_{k-1}}.   (12)

3.2.2 Polak-Ribiere

In Polak-Ribiere,

\beta_k = \frac{\Delta g_{k-1}^T g_k}{g_{k-1}^T g_{k-1}}, \qquad \Delta g_{k-1} = g_k - g_{k-1}.   (13)

3.2.3 Powell-Beale Restarts

In this method the search direction [10] resets to the negative of the gradient whenever

|g_{k-1}^T g_k| \ge 0.2\, \lVert g_k \rVert^2.   (14)

3.2.4 Scaled Conjugate Gradient
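A compact sketch of the direction update (11) with the beta choices (12) and (13), assuming NumPy arrays for the gradients; this is for illustration only, not the toolbox code of [10].

import numpy as np

def cg_direction(g, g_prev, p_prev, variant="fletcher-reeves"):
    """Search-direction update p_k = -g_k + beta_k * p_{k-1} of Eq. (11)."""
    if variant == "fletcher-reeves":            # Eq. (12)
        beta = (g @ g) / (g_prev @ g_prev)
    elif variant == "polak-ribiere":            # Eq. (13)
        beta = ((g - g_prev) @ g) / (g_prev @ g_prev)
    else:
        raise ValueError(variant)
    return -g + beta * p_prev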
3.3 BFGS

Newton's method is an optimization algorithm and is applied on the basis of Equation (15).
Methods  2HLs       3HLs       4HLs       5HLs       6HLs       7HLs       8HLs       9HLs       10HLs
cgb      98.6767    98.7817    97.3968*   183.2696   110.2846   120.4961   129.8047   107.0397   155.5716
cgf      99.0119    155.3317   96.8207*   133.7310   174.3894   163.4421   208.7062   249.7942   308.8018
gd       144.4841*  227.4080   243.5424   414.7131   390.8027   331.1582   459.6300   524.0216   603.9015
scg      96.4165*   141.1372   103.1662   121.0033   99.9745    110.4170   125.3221   149.3719   107.3567
lm       113.7195   192.8755   102.6875   104.9285   97.7067*   112.2179   170.8513   190.8371   117.7208

Table 1: MLP results with different numbers of hidden layers (columns) and different training methods (rows); the best value for each method is marked with an asterisk.
In the quasi-Newton BFGS method, the approximate inverse Hessian H_k is updated as

H_{k+1} = \left(I - \frac{x_k y_k^T}{y_k^T x_k}\right) H_k \left(I - \frac{y_k x_k^T}{y_k^T x_k}\right) + \frac{x_k x_k^T}{y_k^T x_k}.   (17)

3.4 Levenberg-Marquardt

H = J^T J,   (18)

g = J^T e,   (19)

where J is the Jacobian matrix of the network errors with respect to the weights and e is the vector of network errors.
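Putting (18) and (19) together, one Levenberg-Marquardt weight update can be sketched as follows; the damping factor mu and the helper name are assumptions for illustration, not the toolbox code of [10].

import numpy as np

def lm_step(J, e, w, mu=1e-3):
    """One Levenberg-Marquardt update: w_new = w - (H + mu*I)^(-1) g."""
    H = J.T @ J                     # Eq. (18): Gauss-Newton Hessian approximation
    g = J.T @ e                     # Eq. (19): gradient of the squared error
    return w - np.linalg.solve(H + mu * np.eye(len(w)), g)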
4 Experimental Results

In order to obtain the best performance, we fix the number of hidden units and run each of the MLPs with a different number of hidden layers, choosing the best option for each. The results are shown in Table 1, where the best number of hidden layers for each training method is marked with an asterisk. At the end, we compare the MLP with the best parameters (i.e., number of hidden layers) with the BNN results in Figure 2.

5 Conclusion
References
[1] S. Haykin, Neural Networks: A Comprehensive Foundation, ch. 1, 1999.
[2] S. Sehad and C. Touzet, Reinforcement Learning and Neural Reinforcement Learning (1994).
[3] G. L. Rogova and J. Kasturi, Reinforcement Learning Neural Networks for distributed decision making, Proc. of FUSION (2001).
[4] Md. Rafiul Hassan and Baikunth Nath, Stock Market Forecasting Using Hidden Markov Model: A New Approach, Proceedings of the 2005 5th International Conference on Intelligent Systems Design and Applications (2005).
[5] Md. Rafiul Hassan, A combination hidden Markov model for stock market forecasting, Neurocomputing (2009).
[6] Y. Kovalchuk and Maria Fasli, Deploying Neural-Network-Based Models for Dynamic Pricing for Supply Chain Management, Computational Intelligence (2008).
[7] Jarno Vahatalo and A. Vehtari, MCMC Methods for MLP Network and Gaussian Process and Stuff: A Documentation in MATLAB Toolbox MCMCstuff (2006).
[8] Jouko Lampinen and A. Vehtari, Bayesian Approach for Neural Networks: Review and Case Studies, Neural Networks (2001).
[9] B. Walsh, Markov Chain Monte Carlo and Gibbs Sampling: Lecture Notes for EEB 581, version 26 April (2004).
[10] www.mathworks.com/products/matlab/demos.html.
Marjan Abedin
Alzahra University
eskandari@alzahra.ac.ir
m.abedin@aut.ac.ir
Abstract: We present a simple algorithm for computing the dual of the envelope polygon of an arrangement of n lines in dual space, and then we present an algorithm for finding the sets of lines that can be added to the arrangement so that the envelope polygon of the primal arrangement remains fixed.
Introduction

Definitions

2.1

2.2 Duality

…and therefore the line l in dual space would change to the point l* = (a cot(θ), b / sin(θ)). Now let us find the dual of the biggest segment on each line of an arrangement:

2.3 Constructing the envelope polygon

The strategy which we are going to follow for constructing the envelope polygon is quite simple. It is based on the fact that all edges of the envelope polygon are bounded segments of EU and EL while rotating the whole arrangement from 0 to 2π, and also on the fact that rotating the arrangement does not change the envelope polygon.

2.4 Critical vertices of the arrangement

…contribute in the envelope polygon.
Concentrating on the note that, during the rotation of the whole arrangement from 0 to 2π, each line becomes the line with the smallest/largest slope in the rotated arrangement twice, we can discretize the computation and stop whenever the line with the largest slope in the arrangement becomes the line with the smallest slope during the rotation. Let us describe the algorithm formally:

2.
3.
4.
5.
}

It is clear that the running time is still O(n log n) for an arrangement of n lines to construct the envelope polygon, as it only needs to compute the convex hull of n points 2n times: whenever we rotate all the lines of the arrangement until the steepest line becomes the line with the smallest slope.

4 Maintaining the Envelope Polygon

Each line that we add to an arrangement either changes the envelope polygon or not, and this means that… At first, find the dual of all the biggest segments on the lines, as explained in section 2.3, and after that find the intersections of the D.W.s; the resulting region in dual space contains the points whose duals are the lines that satisfy Lemma 2. Let us call the resulting region P. We can use the divide-and-conquer algorithm in [3] to find the intersection of n D.W.s in O(n log n).

To satisfy Lemma 2, first compute the envelope polygon with the simple algorithm of section 3; then, to find all the reflex chains in the envelope polygon, start traversing the envelope polygon from an arbitrary critical vertex, found as in section 2.4, up to the other critical vertex in the critical-vertices list. We need to save all the edges of the envelope polygon that lie on a reflex chain during the traversal. We save the segments of every reflex chain in C_i if there is more than one edge of the envelope in the chain. A line ℓ should intersect a reflex chain of the envelope polygon at most once; note that if ℓ does not intersect a reflex chain of the envelope, then ℓ satisfies Lemma 2. Therefore, we need to compute the union of the intersections of the D.W.s of each pair of edges that exist in the chain. The resulting region in dual space contains the points whose dual lines would intersect each reflex chain more than once. We should compute these regions for all the chains; the union of all of them is a region (call it Q) such that the dual of each point in Q is a line that intersects each reflex chain of the envelope polygon more than once, and therefore, if we add these lines to the arrangement, the envelope…

4.1 Complexity analysis

1. The first step can be done in O(n^2), because we just need to find the two smallest angles for each dual point in dual space.
2. We can compute the intersection of the D.W.s related to the duals of the n segments, which we found in step 1, in O(n log n) by the divide-and-conquer algorithm in [3].
3. This step of the algorithm can also be done in O(n log n).
4. In [1] it is proved that the envelope polygon has at most O(n) edges, so traversing the envelope polygon can be done in O(n).
5. Finding the intersection of the D.W.s belonging to each pair of segments can be done in O(1), and we have at most O(n) edges in each reflex chain, and…
References
[1] D. Eu, E. Guevremont, and G. T. Toussaint, On envelopes of arrangements of lines, Journal of Algorithms (1996).
[2] D. Keil, A simple algorithm for determining the envelope of a set of lines, Elsevier Science Publishers B.V., Information Processing (1991).
[3] M. de Berg and D. T. Lee, Computational Geometry: Algorithms and Applications, Third Edition, Springer-Verlag Berlin Heidelberg, 2008.
Hamed Hagtalab
Torbat-E-Jam, Iran
Torbat-E-Jam, Iran
p.jafari551@gmail.com
Morteza Shokrzadeh
Hasan Danaie
Jolfa, Iran
Torbat-E-Jam, Iran
Abstract: The goal of this study is investigating and recognizing the barriers to exerting e-insurance in Iran Insurance Company according to the 3-branched model of Mirzai Ahar Najai. In this study, different environmental barriers (including legal, cultural, and technological barriers), organizational barriers (such as policies, insurance rules, internal structure, and technology), and behavioral barriers (such as expert staff shortage, lack of top-management support, and staff resistance against changes) were evaluated. This study is a descriptive survey with applied goals. The statistical population included the managers, assistants, and organizational experts of different branches of Iran Insurance in Orumie city. The sampling method was simple random sampling. The research hypotheses were examined using a one-sample t-test to investigate the effect of each variable on exerting e-insurance. The Friedman test was also used to rank the variables. The research results showed that the means of the barriers to exerting e-insurance were higher than average.
Introduction
Methodology

Research hypotheses

3. Investigating behavioral factors as barriers to e-insurance exertion.
4. Investigating organizational factors as barriers to e-insurance exertion.

Discussion

In Hypothesis 1, the environmental factor variable was examined using 8 questions and 3 factors. To test its significance, a one-sample t-test was used, whose results showed that the legal factor with a mean of 11.98, the cultural factor with a mean of 8.32, the technological factor with a mean of 12.89, and, in general, the environmental factor with a total mean of 33.2 act as barriers to using e-insurance, since all of their significance values were smaller than 0.05.

In Hypothesis 2, the organizational factor variable was examined using 9 questions and 4 factors: internal policies, interorganizational technology, insurance rules, and the structural factor. To test their significance a one-sample t-test was used. The results showed that internal policies with a mean of 8.55, insurance rules with a mean of 8, the interorganizational technology factor with a mean of 8.84, the structural factor with a mean of 10.92, and, generally, the organizational factor with a total mean of 33.2 act as barriers to using e-insurance in Iran Insurance Company (p < 0.05).
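For illustration, a one-sample t-test of this kind can be run as in the following sketch; the scores and the test value are hypothetical, since the survey data are not reproduced here.

import numpy as np
from scipy import stats

# Hypothetical questionnaire sums for one factor (one value per respondent);
# the real data from the Iran Insurance branches are not available here.
scores = np.array([12, 11, 13, 12, 14, 11, 12, 13, 10, 12])

# Test against an assumed scale midpoint of 9 (e.g. 3 questions scored 1-5).
t_stat, p_value = stats.ttest_1samp(scores, popmean=9)
if p_value < 0.05 and scores.mean() > 9:
    print(f"mean={scores.mean():.2f}, p={p_value:.4f}: factor acts as a barrier")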
Conclusion
Variable                                      Mean    Std. dev.  K-S z  Significance level
Environmental factor                          33.2    2.8        1.11   0.165
Technological factor                          2.1     7.5        9.4    0.165
Legal factor                                  11.98   2.1        2.64   0.08
Cultural factor                               8.32    1.19       1.88   0.059
Organizational factor                         36.34   4.05       1.28   0.11
Internal policy factor                        8.5     1.22       1.77   0.059
Interorganizational factor                    8.8     0.88       2.74   0.066
Insurance rule factor                         8       1.35       1.9    0.063
Structural factor                             10.9    2.08       1.49   0.088
Behavioral factor                             24.64   2.7        1.65   0.085
Personnel resistance against changes factor   7.84    1.57       1.41   0.072
Lack of manager support factor                8.5     0.99       1.96   0.06
Expert human resource shortage factor         8.29    1.28       1.47   0.067

Table 1: The results of the Kolmogorov-Smirnov test for identifying data normality
Rank   Variable                      Mean rank
1st    Environmental Factor          9.43
2nd    Technological Factor          8.65
3rd    Legal Factor                  7.68
4th    Cultural Factor               5.09
5th    Organizational Factor         4.46
6th    Internal Policy Factor        4.45
7th    Interorganizational Factor    4.15
8th    Insurance Rule Factor         4.12
9th    Structural Factor             3.61
10th   Behavioral Factor             3.35

Table 2: Friedman test mean ranks of the variables
Since all the research hypotheses were confirmed, reflecting the above-average obstructiveness of the environmental, organizational, and behavioral factors, it is suggested that Iran Insurance managers should try to remove them. Due to the highest obstruction value being in the technological field, the insurance managers should improve their technological capabilities and remove its obstacles. From the environmental aspect, specific regulations should be provided for the insurance companies in the field of electronic signatures, contracts, and transactions. Trade rules should be amended, supervised, and followed by the officials and all other stakeholders. People should be informed about the advantages of e-trade, and its culture should be extended in the organizations. The culture of using computers and the Internet among different classes and insurers should be extended. Definitions of Internet crimes and their penalties should be clarified for Internet users. Enough…
References
[1] M Azad, Identifying and Investigating Effective Factors in
Purchase Purpose of E-Insurance in Tehran, 2010.
[2] A Ebrahimi, E-Trade and E-Insurance It.Technical Quarterly of Asia, 2005.
[3] F Deghpasand, E-Trade and E-Insurance. Planning Assistance of Trading Ministry, Sizan Publication, 2006.
[4] J Sahamian, The Challenges and Strategies of IT Development in Insurance Industry of Iran, Conference of Managing Insurance Challenges (2008).
[5] A Sarafizadeh, IT in the Organizations, Mir Publication,
2008.
[6] Sh Azizi, Identifying the Barriers of E-Trade Usages in Iran
Khodro Factory and Solutions for Them., 2005.
[7] F Ghasemzadeh, Legal Challenges of Using E-Trade in Iran,
Article Collections of E-Trade Conference. Bazargani publication., 2005.
[8] B Ghezelbash, The Principals of Supervising E- Insurance:
A Phenomenon in Insurance World, Insurance Research
House, 2005.
[9] M. Castells, Information Arena: Society, Economy, and Culture, translated by A. Aligolian and A. Khakbaz, Tarhno, Tehran, 2002.
[10] A Kameli, Marketing And Selling E- Insurance, Technical
Quarterly of Asia, 2005.
[12] M. Nahavandian and A. Haghighatkhah, E-Trade Development in Iran, Trade Researches, 2005.
Naser Norouzi
Jolfa, Iran
Jolfa, Iran
Morteza.Shokrzadeh@yahoo.com
Alireza Rasouli
Jolfa, Iran
Iran
Abstract: Nowadays, moving toward globalization, removing physical borders, and living in a global village have made societies accept information technology as an inseparable part of their lives. Teleworking is an important innovation embedded in the context of information technology and the Internet. But before any widespread use of a new technology, the necessary basis should be provided for it to be welcomed by the users; otherwise, obligation in its exertion will lead society to the blind usage of it. This paper first investigated the factors effective in the electronic readiness of governmental and semi-governmental organizations of Tabriz city; then, the factors effective in accepting information technology and teleworking were recognized using the research theories, exploratory factor analysis, and the KMO test. To identify the different aspects of the electronic readiness of the organizations, considering their types and dimensions, 34 factors were regarded, from which 7 factors were extracted, explaining 66.74% of the total variance. To identify the different aspects of information technology and teleworking, 19 variables were used, from which 7 variables were extracted, eliminating 2 questions (11 and 19) from the questionnaire and explaining 75.27% of the total variance. Using a one-sample t-test, the effect of each variable on the electronic readiness of the organizations was tested through the research hypotheses. Exerting fuzzy AHP (Chang's method), the factors were ranked. The results showed that the electronic readiness variables have higher priority than the technology acceptance variables.
Introduction

Investigating the readiness level of different organizations is the first step. Then, providing the essential contexts for it leads organizations to using teleworking (Abtahi 2010, 16). Since accepting teleworking processes needs organizational changes and behavioral changes of the staff, managers evaluate the organizational readiness for accepting teleworking processes or changes, to identify a proper starting point for it; otherwise they will have to bear excessive costs rather than benefits. Readiness is a prerequisite for the successful confrontation of a person or an organization with organizational changes. Hence, a true readiness estimation seems necessary for the true direction of the attempts and strategies. Other prerequisites for the successful implementation of teleworking should also be carefully considered. The time and place in which people accept a new technology and adapt to it are important. Finding the variables effective in accepting and using IT has been of great interest for researchers, without which no efficiency can be achieved.
Teleworking
Data analysis
Row  IT accessibility  Human resource indices  Managerial indices  IT and informatics bases  Mental norms and picture  Job relation and conformity  IT use purpose  Ease of use  Perceived benefit
1    0.131             0.141                   0.158               0.065                     0.171                     0.072                        0.100           0.078        0.083
2    0.127             0.134                   0.173               0.058                     0.169                     0.075                        0.097           0.074        0.093
3    0.125             0.132                   0.166               0.061                     0.166                     0.078                        0.105           0.076        0.091
4    0.127             0.136                   0.162               0.069                     0.173                     0.070                        0.103           0.075        0.085
5    0.132             0.148                   0.149               0.063                     0.173                     0.069                        0.107           0.081        0.078
6    0.135             0.146                   0.154               0.072                     0.170                     0.066                        0.095           0.082        0.080
7    0.139             0.150                   0.143               0.067                     0.175                     0.074                        0.093           0.080        0.079
Conclusion

7.2

1. Managers should evaluate organizational capabilities and prioritize organizations according to electronic readiness and IT acceptance, using fuzzy AHP or the model of this research.
2. All 14 criteria identified by the factor analysis should be weighted by fuzzy AHP.
3. The relation between the factors effective in electronic readiness and IT acceptance for teleworking should be determined.
References
[1] S Abtahi and B Jokar, Evaluating E-Trade Performance
in Manufacturing Units of Shiraz According to Electronic
Readiness, Business, and Their Effects: A Report of Study
Scheme of Business Organization of Fars Province (2010).
[2] American Management Association: AMA/ITAC Survey
on Telework, available at:www.amanet.org (2010).
[3] S Al-gahtani, Computes-Technology Adoption, in Saudi
Arabia: Correlates of Perceived Innovation Attributes, Information Technology for Development 10 (2006), 5769.
[4] Cyber Security Industry Alliance: Making Telework a Federal Priority: Security Is Not the Issue (2005).
[5] F. D Davis, R.P Bagozzi, and P.R. Warshaw, User Acceptance of Computer Technology: A Comparison of Two Theoretical Models, Management Science 35 (1989), no. 8, 982
1003.
[6] J. Edwards, Assessing Your Organization's Readiness for Teleworking, The Public Manager 30 (2001), no. 1. Available at: http://www.thepublicmanager.org/docsarticles/archive/Vol30,2001/1./ol30,Issue03W30N3AssessingYour0g-Edwards.pdf.
[7] Y. C Erensal, T Oncan, and M. L Demircan, Determining
Key Capabilities in Technology Management Using Fuzzy
Analytic Hierarchy Process, A Case Study of Turkey, Information Sciences, Industrial Engineering Department, Dogus
University 176 (2006), 27552770.
[8] M Fathian and M Khanjari, Teleworking and Provision of
Proper Entrepreneurship with Modern Technologies, 1st National Conference of Entrepreneurship, Creativity, and Future Organizations (2008).
[9] M. Castells, Information Arena: Society, Economy, and Culture, translated by A. Aligolian and A. Khakbaz, Tarhno, Tehran, 2002.
[10] V Illegems, A Verbeke, and R SJegers, The Organizational Context of teleworking, Implementation, Technological Forecasting and Social Change 68 (2001), no. 2, 275291.
[11] K.B Kowalski and Jennifer A.S, Critical Success Factors in
Developing Teleworking Programs, Benchmarking: An International Journal 12 (2005), no. 3, 236249.
[12] Y Lee, K. A Kozar, and K. R. T Larsen, The Technology
Acceptance Model: Past, Present, and Future, Communication of the Association for Information Systems 12 (2003),
no. 50, 752780.
[13] Mark. M.H and F Clark, Using the AHP to Determine the
Correlation of Productive Issues to Profit, European Journal of Marketing 35 (2001), no. 7.
564
The Third International Conference on Contemporary Issues in Computer and Information Sciences
[14] Robert E Morgan and W. B Sanders, Teleworking: an Assessment of the Benefits and Challenges, For European
Business Review 16 (2004), no. 4.
[15] Nag T Nguyen and J Marks, The Consequence of Spatial Distance and Electronic Communication Teleworks: A
Mull-level Investigation, A dissertation submitted to Temple University Graduate Broad. (2004).
[16] Obra Ana Rosa del Aguila, Sebastian Bruque Camara, and
Antonio Padilla Melendez, An Analysis of Teleworking Centres in Spain. Facilities 20 (2002), no. 11/12, 394-399.
[17] A. Oddershede, A. Arias, and H. Cancino, Rural Development Decision Support Using the Analytic Hierarchy Process…
Farzane Yahyanejad
b khazaei@iasbs.ac.ir
f.yahyanejad@iasbs.ac.ir
Angeh Aslanian
S. Mehdi Hashemi
Department of Mathematics
Department of Mathematics
Angeh.a2@gmail.com
hashemi@aut.ac.ir
Abstract: The Hop Constrained Connected Facility Location (HC-ConFL) problem is a combination of connected facility location and Steiner trees with hop constraints. HC-ConFL is an NP-complete problem, and until now no heuristic algorithm has been customized to solve it. This paper customizes the Harmony Search algorithm in order to solve this problem. For comparison, we also solve the problem's model with CPLEX. Experimental results demonstrate that the proposed algorithm is an effective procedure that finds high-quality solutions very fast.

Keywords: Hop Constrained Steiner Trees, Connected Facility Location, Harmony Search Heuristic, Linear Programming Models.
Introduction
The rest of the paper is organized as follows. In Section 2, the harmony search algorithm is introduced. The details of the customized algorithm are given in Section 3. Section 4 is devoted to the implementation and results, and Section 5 concludes.
Harmony Search Algorithm

The harmony search algorithm is a metaheuristic algorithm for optimizing mathematical functions and engineering problems, which is inspired by the art of music [4]. Similar to the way a musician improves his skill based on an aesthetic standard, design variables in a computer memory can be improved based on an objective function. The steps of the harmony search algorithm are summarized as follows:

STEP 1. Define the objective function of the problem and initialize the parameters (HMCR, PAR, bw);
STEP 2. Construct the harmony memory (HM);
STEP 3. Improvise a new harmony;
STEP 4. Update the harmony memory;
STEP 5. Check the termination criterion.

In the step of defining the objective function, the optimization problem is specified as

Minimize f(x)   (1)

subject to x_i \in X_i, i = 1, 2, \dots, N,   (2)

where f(x) is the objective function, the solution vector x is the set of decision variables, and N is the number of decision variables. In this step, the parameters of the algorithm are also defined. The basic parameters are: the harmony memory size (HMS), the harmony memory considering rate (HMCR), the pitch adjusting rate (PAR), the bandwidth (bw), and the number of iterations. The harmony memory stores the HMS best solution vectors together with their objective values:

\mathrm{HM} = \begin{bmatrix} x_1^1 & x_2^1 & \cdots & x_N^1 & f(x^1) \\ x_1^2 & x_2^2 & \cdots & x_N^2 & f(x^2) \\ \vdots & & & & \vdots \\ x_1^{HMS-1} & x_2^{HMS-1} & \cdots & x_N^{HMS-1} & f(x^{HMS-1}) \\ x_1^{HMS} & x_2^{HMS} & \cdots & x_N^{HMS} & f(x^{HMS}) \end{bmatrix}
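The steps above can be condensed into a short sketch like the following: a generic harmony search for a discrete problem, with illustrative parameter defaults; it is not the authors' customized implementation.

import random

def harmony_search(f, domains, hms=10, hmcr=0.9, par=0.3, iterations=2000):
    """Minimize f over discrete decision variables.

    f       -- objective function taking a list of values
    domains -- list of candidate value lists, one per decision variable
    """
    hm = [[random.choice(d) for d in domains] for _ in range(hms)]   # Step 2
    for _ in range(iterations):                                      # Steps 3-5
        new = []
        for i, d in enumerate(domains):
            if random.random() < hmcr:            # memory consideration
                value = random.choice(hm)[i]
                if random.random() < par:         # pitch adjustment to a close value
                    j = d.index(value)
                    value = d[max(0, min(len(d) - 1, j + random.choice((-1, 1))))]
            else:                                 # random selection
                value = random.choice(d)
            new.append(value)
        worst = max(hm, key=f)
        if f(new) < f(worst):                     # harmony memory update
            hm[hm.index(worst)] = new
    return min(hm, key=f)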
3.3

As mentioned before, the purpose is to open facilities on some nodes of the graph in such a way that the distance between the customers and their nearest facilities, plus the distance from the facilities to the root node under the hop constraint, becomes minimum. Before evaluating each harmony we need to ensure that it exactly meets the HC-ConFL problem constraints, and we can also simply make some improvements in the generated harmony by some greedy decisions. So before evaluating the harmonies, we refine and validate them. The generated harmony must be a connected subgraph, and also a tree with maximum depth H (the hop constraint), that has the minimum cost. First, we calculate the shortest path to each node using a modified Bellman-Ford algorithm. The modified Bellman-Ford algorithm is a variant of the classic Bellman-Ford algorithm in which we also count the number of steps (intermediate edges), using a 2D array [number of steps][vertices], and ensure that no path uses more than H edges; a sketch is given after the pseudocode below. In this algorithm the result is multiple shortest paths to each node, one for each number of steps. Next, we exclude from our solution vector all the nodes that are not reachable within H hops. The next step is to connect each customer node to the nearest and cheapest facility node which is open (x_i == 1 in our generated harmony); we add all these…
…fewer hops], add best harmonies to HM}
While (it <= NUMBER_OF_ITERATIONS)
    While (var <= NUMBER_OF_VARIABLES)
        If (HMCR <= rand(0,1))
            use a random value for X_var^it
        Else If (PAR <= rand(0,1) <= HMCR)
            choose a value from all X_var^it in HM;
        Else
            choose a value from all X_var^it in HM and adjust it to a close value.
    End While
    %{Verify harmony (in our case: if not a tree, change it to be a tree with best evaluation)}
    %{Evaluate the new harmony and accept it if it is better than the worst harmony in HM}
End While
%{Choose the best harmony in the HM}
End
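The modified Bellman-Ford routine referred to above can be sketched as follows; the function name and data layout are illustrative assumptions, not the authors' code.

import math

def hop_bounded_shortest_paths(n, edges, root, H):
    """Hop-bounded shortest paths: dist[p][v] is the cheapest cost of a path
    from `root` to `v` using at most p edges, for p = 0..H.

    n     -- number of vertices (0..n-1)
    edges -- list of (u, v, w) undirected weighted edges
    """
    dist = [[math.inf] * n for _ in range(H + 1)]   # the 2D array [steps][vertices]
    dist[0][root] = 0.0
    for p in range(1, H + 1):
        dist[p] = dist[p - 1][:]        # a path of <= p-1 edges also uses <= p edges
        for u, v, w in edges:
            if dist[p - 1][u] + w < dist[p][v]:
                dist[p][v] = dist[p - 1][u] + w
            if dist[p - 1][v] + w < dist[p][u]:
                dist[p][u] = dist[p - 1][v] + w
    return dist   # nodes with dist[H][v] == inf are unreachable within H hops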
Figure 2: Example
Computational Results
Instances    Number of Hops  HS Result  HS Time   AIMMS Result  AIMMS Time  gap
{c5,d5}      H=3             3942.854   0.115     3942.854      3.37        0.0
{c5,d5}      H=5             3755.054   1.574     3755.054      9.1         0.0
{c5,d5}      H=7             3615.769   12.173    3614.792      167.1       0.02
{c10,d10}    H=3             3802.791   0.661     3802.79       8.7         0.0
{c10,d10}    H=5             3555.85    12.66     3552.57       59.87       0.09
{c10,d10}    H=7             3525.83    29.265    3520.07       110.172     0.16
{c15,d15}    H=3             3563.588   85.76     3561.83       8.75        0.04
{c15,d15}    H=5             3489.542   76.52     3489.02       107.698     0.0
{c15,d15}    H=7             3488.81    221.35    3487.792      186.245     0.02
{c20,d20}    H=3             3474.557   153.224   3473.263      137.853     0.03
{c20,d20}    H=5             3473.10    810.77    ...           ...         ...
{c20,d20}    H=7             3471.801   1349.58   ...           ...         ...
\sum_{(i,j)\in A_S}\ \sum_{p=1}^{H} X_{ij}^{p} \ge Y_j, \quad j \in F \setminus \{r\},   (5)

X_{ij}^{p} = 0, \quad (i,j) \in A_S, \ \begin{cases} i = r, & p = 2, \dots, H \\ i \ne r, & p = 1 \end{cases}   (6)

\sum_{(j,k)\in A_D} X_{jk} = 1, \quad k \in D,   (7)

X_{jk} \le Y_j, \quad (j,k) \in A_D,   (8)

Y_r = 1,   (9)

X_{ij}^{p},\ X_{jk},\ Y_i \in \{0,1\}.   (10)

Conclusion

Hop Constrained Connected Facility Location was proposed by Ljubić in 2009, and there is no heuristic for it yet. In this paper we proposed a heuristic algorithm that combines harmony search and a modified Bellman-Ford algorithm, and we considered a family of benchmark data. Our extensive computational experiments show that our heuristic obtains high-quality solutions rapidly. The results are quite consistent, in the sense that the variance of the performance gap is quite low.

References
[6] A. Kaveh and H. Nasr, Solving the conditional and unconditional p-center problem with modified harmony search: A real case study, Scientia Iranica (2011).
[7] http://www.mpi-inf.mpg.de/departments/d1/projects/
benchmarks/UFLP.
[8] http://people.brunel.ac.uk/mastjjb/jeb/orlib/steininfo.html.
Mehdi Vasighi
f.yahyanejad@iasbs.ac.ir
Vasighi@iasbs.ac.ir
Angeh Aslanian
Bahareh khazaei
Angeh.a2@gmail.com
b khazaei@iasbs.ac.ir
Abstract: Prostate tumors are the second leading cause of cancer deaths and the most common cancer in males around the world. This paper introduces a method which uses tabu search to identify the most differentially expressed genes between normal and prostate cancer gene expression profiles. Tabu search is an optimization method that provides near-optimal solutions in a large search space. We want to find an optimal subset of genes from the original large data set to reduce the dimensionality of the data and improve the classification accuracy between normal and cancer samples. We defined a class separability index, a criterion that is employed as the objective function in tabu search to maximize the class separability. For comparison, the genetic algorithm, as a common optimization method, was also examined, and the experimental results showed that the suggested method is a powerful tool for gene selection in microarray data.

Keywords: Tabu Search; Linear Discriminant Analysis; Gene Selection; Microarray; Prostate Cancer Diagnosis.
Introduction

Recently, prostate cancer has become the most common cancer in the world. Early diagnosis and detection of this disease lead to earlier treatment and can save lives. Multiple genes are involved in cancer formation. Genomic methodologies have been used to explore gene expression correlates of prostate cancer [1-3]. The benefit gained from gene selection in microarray data is the improvement of the predictive performance of analytical models to identify correlated gene expression profiles [4]. Functional genomics involves the analysis of large datasets of information derived from various biological experiments. One such type of large-scale experiment involves monitoring the expression levels of thousands of genes simultaneously under a particular condition, called gene expression analysis. Microarray…
Computational Results and Discussion

3.1 Generation of neighbors

Tabu search (TS) is a meta-heuristic method that was introduced by Glover in 1986 for combinatorial problems. The basic ideas of TS had also been sketched by Hansen [9]. TS is an extension of the local search method which includes a short-term memory, called the tabu list, to guide the search process and prevent the reversal of recent moves, besides not getting trapped in local optima. This method is elegant in that it can be viewed as an iterative technique and a local neighborhood search procedure…
3.3 Parameter setting

3.3.1 Initial solution

3.3.4 Data set

High-quality expression profiles were derived from 55 prostate tumor and non-tumor prostate samples; some of them are shavings of prostate tissue with cancer, and the others are shavings of prostate tissue without cancer. The matrix contains measurements on 12626 genes. The data matrix was downloaded from [11].

3.3.5

The program was run several times with an entirely random starting point. The program also has another parameter, which represents the number of iterations. With 2000 iterations we optimized the other parameters of the algorithm, and then different initial solutions…
References
[1] Veer L., Dai H., Vijvr M.v.D., He Y., Hart A., Moa M., Peterse H., Kooy K.v.D, Marton M., Witteven A., Schreiber
G., Kerkhoven R., Roberts C., Linsley P., Bernards R., and
Friend S., Gene expression profiling predicts clinical outcome of breast cancer, Nature (2002), 530-536.
[2] Perou C.M., Sorlie T., Eisen M.B., van de Rijn M., Jeffrey S.S., Rees C.A., Pollack J.R., Ross D.T., Johnsen H., Akslen L.A., and et al., Molecular portraits of human breast tumors, Nature (2000), 747-752.
[3] Golub T.R., Slonim D.K., Tamayo P., Huard C., Gaasenbeek M., Mesirov J.P., Coller H., Loh M.L., Downing J.R., Caligiuri M.A., and et al., Molecular classification of cancer: class discovery and class prediction by gene expression monitoring, Science (1999), 531-537.
[4] D Singh, PG Febbo, K. Ross, DG Jackson, J Manola, C
Ladd, P Tamayo, V. DAmico, P. Richie, S Lander, M Loda,
W Kantoff, R. Golub, and R Sellers, Gene expression correlates of clinical prostate cancer behavior, Cancer Cell
(2002).
[5] Madan Babu M., An Introduction to Microarray data Analysis, Chapter 11, pages: 225-249.
[6] Zhang H. and Sun G, Feature selection using tabu search
method, Pattern recognition (2002).
[7] Hageman JA., Streppel M., Wehrens R., and Buydens L. M.
C., Wavelength selection with tabu search, Pattern recognition (2003).
[8] Glover F., An introduction to Tabu search, ORSA Journal
on computing (1989).
[9] Hansen P., The steepest ascent mildest descent heuristic
for combinatorial programming: Lecture Notes in Computer
Science, Computing (1990).
[10] Balakrishnama S. and Ganapathiraju A., Linear Discriminant Analysis- A brief Tutorial (1998).
[11] http://www.broad.mit.edu/cgi-bin/cancer/dataset.cgi.
Zahra Jafari
Islamic Azad University, Borujerd Branch
Faculty of Management
Borujerd, Iran
Tehran, Iran
ZZ.Jafari@gmail.com
M-Shirazi@Sbu.ac.ir
Abstract: This study aims to find out the roles of BI (Business Intelligence) capabilities and the decision environment in BI success. The main objective of this research is to realize how parameters such as technological and organizational capabilities can affect BI success, considering the decision environment, in Iran. Based on our findings, the decision environment can affect some of the items of technological BI capabilities and organizational BI capabilities as they contribute to BI success.
Introduction

Today, progress in different fields of science has led to the expansion of technologies, the transformation of local business into global business, customer awareness, and high expectations of goods, quality of services, etc. Consequently, there is tense competition in the business sector to survive. In business, industries need access to information regarding their customers' preferences for goods in order to excel in the market. Paying attention to such issues has helped the business world to overcome some obstacles and reach new, promising horizons. Business Intelligence, as a new concept, takes advantage of all these new trends to show itself as a successful model in business administration [1].

Business Intelligence was introduced by Howard Dresner, a Gartner Research Group analyst, as a collection of concepts and methods to improve business decision making via fact-based support systems [2]. Business intelligence is not only seen as an instrument, product, or system; it is considered a new approach in organizational architecture. Such a model helps managers to make accurate and right decisions in the shortest time [3].

There are some steps to follow business intelligence in any organization:

1. Planning.
2. Collecting data.
3. Processing data.
4. Analyzing and producing data.
5. Distributing data [4].

One of the main reasons why an organization employs business intelligence is the help it can provide in decision making. BI can also help to develop service quality. The related software is able to extract analyses and provide reports [5].
BI Success

Technological BI provides the necessary data, and organizational BI is used to assess the efficiency of the data. All of these can lead to the profit of the organization by making decision making mature [8].

In 2010, James Meernik conducted a study in which he came to the conclusion that technological BI can greatly affect BI success. This means that technology can stimulate BI. Additionally, he realized that organizational capabilities can be effective in IT. In data analysis, flexibility is important in organizational BI.
3.1 Technological BI Capabilities

3.1.1 Data Sources

3.1.2 Data Types

3.1.4 User Access

3.1.5 Data Reliability

3.2 Organizational BI Capabilities

3.2.1 Flexibility

A BI needs to be flexible in order to be effective. Flexibility can be defined as the capability of a BI to accommodate a certain amount of variation regarding the requirements of the supported business process [14]. The sixth question is: Is there any relationship between flexibility and BI success?

3.2.2
The decision environment can be defined as the totality of physical and social factors that are taken directly
into consideration in the decision-making behavior of
individuals in the organization [14]. This definition
considers both internal and extern factors. Internal
factors include people, functional units and organization factors. External factors include customers, suppliers, competitors, sociopolitical issues and technological issues.
The information processing needs of the decision
maker are also a part of the decision environment, provided that decision making involves processing and applying information gathered. Because appropriate information depends on the characteristics of the decision
making context, it is hard to separate the information
processing needs from decision making. This indicates
that information processing needs are also a part of
the decision environment. They are topics of interest
Correlations with BI success:

Correlation coefficient  Sig    Deg. of Freedom
0.235                    0.023  91
0.038                    0.715  91
-0.147                   0.159  91
0.190                    0.068  91
0.129                    0.035  91
0.209                    0.045  91
0.082                    0.435  91
0.293                    0.004  91
0.058                    0.290  95
-0.017                   0.435  95
Discussion
[12] C. White, The next generation of business intelligence: Operational BI, Information Management Magazine 1 (2005).
[13] W. Eckerson, Smart companies in the 21st century: The secrets of creating successful business intelligence solutions, TDWI (The Data Warehousing Institute) Report Series (2003), 1-35.
[14] S. Damianakis, The ins and outs of imperfect data, DM Direct 2 (2008).
[15] J. Gebauer and F Schober, Information system flexibility
and the cost efficiency of business processes, Journal of the
Association for Information Systems 7 (2006), no. 3, 122
145.
[16] M. L. Gonzales and L. E. Sucar, What's your BI environment IQ?, DM Review Magazine (2005).
[17] L Fink and S Neumann, Gaining agility through IT personnel capabilities: The mediating role of IT infrastructure
capabilities, Vol. 8, Journal of the Association for Information Systems, 2007.
root@saeedsalehi.ir
salehipour@tabrizu.ac.ir
Abstract: The theory of addition in the domains of natural (N), integer (Z), rational (Q), real (R) and complex (C) numbers is decidable; so is the theory of multiplication in all those domains. By Gödel's Incompleteness Theorem, the theory of addition and multiplication is undecidable in the domains of N, Z and Q; though Tarski proved that this theory is decidable in the domains of R and C. The theory of multiplication and order ⟨·, ≤⟩ behaves differently in the above-mentioned domains of numbers. By a theorem of Robinson, addition is definable by multiplication and order in the domain of natural numbers; thus the theory of ⟨N, ·, ≤⟩ is undecidable. By a classical theorem in mathematical logic, addition is not definable in terms of multiplication and order in R. In this paper, we extend Robinson's theorem to the domain of integers (Z) by showing the definability of addition in ⟨Z, ·, ≤⟩; this implies that ⟨Z, ·, ≤⟩ is undecidable. We also show the decidability of ⟨Q, ·, ≤⟩ by the method of quantifier elimination. Whence, addition is not definable in ⟨Q, ·, ≤⟩.

Keywords: Decidability; First-Order Logic; Gödel's Incompleteness Theorems; Church's Theorem; Presburger Arithmetic; Skolem Arithmetic; Quantifier Elimination.
Introduction
This paper is dedicated to Alan Turing, to commemorate the Turing Centenary Year 2012, his 100th birth year.
The questions of the decidability or undecidability of the structures ⟨Z, ·, ≤⟩ and ⟨Q, ·, ≤⟩ are missing in the literature. In this paper, by modifying Tarski's identity, we show that addition is definable in the structure ⟨Z, ·, ≤⟩; this implies the undecidability of ⟨Z, ·, ≤⟩. On the contrary, addition is not definable in ⟨Q, ·, ≤⟩; here we show a stronger result by the method of quantifier elimination: the theory of ⟨Q, ·, ≤⟩ is decidable. Whence, by Robinson's above-mentioned result [2], addition cannot be defined in this structure. An interesting outlook of our results is that though ⟨+, ·⟩ puts the domains N, Z and Q on the undecidable side, and the domains R and C on the decidable side, the language ⟨·, ≤⟩ puts the domains N and Z on the undecidable side, but Q and R on the decidable side.
Here come the next steps of quantifier elimination. The powers of x can be unified: let p be the least common multiplier of the exponents of x indexed by the h's, i's, j's, k's and l's. From the ⟨Q⁺, L⟩-equivalences a = b ⟺ aᵠ = bᵠ, a < b ⟺ aᵠ < bᵠ and Rₙ(a) ⟺ R_{nq}(aᵠ), we infer that the above formula can be re-written equivalently as

  ∃x [ ⋀_h (xᵖ = v_h) ∧ ⋀_i (r_i < xᵖ) ∧ ⋀_j (xᵖ < s_j) ∧ ⋀_k R_{n_k}(t_k · xᵖ) ∧ ⋀_l ¬R_{m_l}(u_l · xᵖ) ]

for possibly new v_h's, r_i's, s_j's, n_k's, t_k's, m_l's and u_l's. This formula is in turn equivalent to

  ∃y [ ⋀_h (y = v_h) ∧ ⋀_i (r_i < y) ∧ ⋀_j (y < s_j) ∧ ⋀_k R_{n_k}(t_k · y) ∧ ⋀_l ¬R_{m_l}(u_l · y) ∧ R_p(y) ]

(with the substitution y = xᵖ). Thus it suffices to show that the following formula

  ∃x [ ⋀_h (x = v_h) ∧ ⋀_i (r_i < x) ∧ ⋀_j (x < s_j) ∧ ⋀_k R_{n_k}(t_k · x) ∧ ⋀_l ¬R_{m_l}(u_l · x) ]

is equivalent to a quantifier-free formula. If the conjunction ⋀_h (x = v_h) is not empty, then the above formula is equivalent to the quantifier-free formula

  ⋀_h (v₀ = v_h) ∧ ⋀_i (r_i < v₀) ∧ ⋀_j (v₀ < s_j) ∧ ⋀_k R_{n_k}(t_k · v₀) ∧ ⋀_l ¬R_{m_l}(u_l · v₀)

for some term v₀. So, let us assume that the conjunction ⋀_h (x = v_h) is empty, and thus we are to eliminate the quantifier of the formula

  ∃x [ ⋀_i (r_i < x) ∧ ⋀_j (x < s_j) ∧ ⋀_k R_{n_k}(t_k · x) ∧ ⋀_l ¬R_{m_l}(u_l · x) ].
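To make the predicates concrete: the equivalences above treat Rₙ(x) as "x is an n-th power in Q⁺", i.e. every prime exponent in the reduced factorization of x is divisible by n. A minimal computational sketch of this reading (the helper name R is ours, not the paper's; sympy is used only for factorization):

    from fractions import Fraction
    from sympy import factorint

    def R(n, x):
        # R_n(x): the positive rational x is an n-th power in Q+,
        # i.e. all prime exponents of its reduced fraction are divisible by n.
        x = Fraction(x)
        exponents = dict(factorint(x.numerator))
        for prime, e in factorint(x.denominator).items():
            exponents[prime] = exponents.get(prime, 0) - e
        return all(e % n == 0 for e in exponents.values())

    print(R(2, Fraction(9, 4)), R(3, Fraction(9, 4)))   # True False: 9/4 = (3/2)^2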
The formula ∃x [ ⋀_i (r_i < x) ∧ ⋀_j (x < s_j) ] is ⟨Q⁺, L⟩-equivalent to the quantifier-free formula ⋀_{i,j} (r_i < s_j) (that is, max_i{r_i} < min_j{s_j}), since ⟨Q⁺, <⟩ is dense.

For the formula ∃x ⋀_k R_{n_k}(t_k · x), let p be a prime number, and let t′_k be the greatest number such that p^{t′_k} divides t_k; similarly, let x′ be the greatest number such that p^{x′} divides x. Then ⋀_k R_{n_k}(t_k · x) is equivalent to ⋀_p ⋀_k [ t′_k + x′ ≡ 0 (mod n_k) ], where the outer conjunction ranges over all primes p. By a generalized form of the Chinese Remainder Theorem ([4]), the existence of such an x′ is equivalent to ⋀_{k≠k′} t′_k ≡ t′_{k′} (mod (n_k, n_{k′})); here (a, b) is the greatest common divisor of a and b. That is equivalent to ⋀_{k≠k′} R_{(n_k, n_{k′})}(t_k · t_{k′}⁻¹). We further note that in case ⋀_{k≠k′} t′_k ≡ t′_{k′} (mod (n_k, n_{k′})) holds, there are infinitely many solutions of ⋀_k [ t′_k + x′ ≡ 0 (mod n_k) ], which are of the form x′ = N·y′ − Σ_k γ_k·t′_k for some fixed integers N and γ_k's, with y′ arbitrary. In fact, N is the least common multiplier of the n_k's, and the γ_k's satisfy Σ_k γ_k·N/n_k = 1; the existence of such γ_k's follows from the fact that the greatest common divisor of the (N/n_k)'s is 1. Moreover, the solution x′ is unique up to the modulus N. So, if some x ∈ Q⁺ satisfies ⋀_k R_{n_k}(t_k · x) for some t_k ∈ Q⁺, then it must be of the form x = λᴺ · ∏_k (t_k)^{−γ_k} for some (arbitrary) λ ∈ Q⁺.

Thus, the formula ∃x [ ⋀_i (r_i < x) ∧ ⋀_j (x < s_j) ∧ ⋀_k R_{n_k}(t_k · x) ] is equivalent to the quantifier-free formula ⋀_{i,j} (r_i < s_j) ∧ ⋀_{k≠k′} R_{(n_k, n_{k′})}(t_k · t_{k′}⁻¹), since the solution x = λᴺ · ∏_k (t_k)^{−γ_k} of ⋀_k R_{n_k}(t_k · x) can be chosen to satisfy max_i{r_i} < x < min_j{s_j}: choose a rational number λ ∈ Q⁺ between the positive real numbers α = (max_i{r_i} · ∏_k (t_k)^{γ_k})^{1/N} and β = (min_j{s_j} · ∏_k (t_k)^{γ_k})^{1/N}. Since the set Q is dense in R, such a rational number λ exists, and then x = λᴺ · ∏_k (t_k)^{−γ_k} is the desired solution.

Finally, we show that the formula

  ∃x [ ⋀_i (r_i < x) ∧ ⋀_j (x < s_j) ∧ ⋀_k R_{n_k}(t_k · x) ∧ ⋀_l ¬R_{m_l}(u_l · x) ]

is equivalent to the following quantifier-free formula:

  ⋀_{i,j} (r_i < s_j) ∧ ⋀_{k≠k′} R_{(n_k, n_{k′})}(t_k · t_{k′}⁻¹) ∧ ⋀_{l: m_l | N} ¬R_{m_l}(u_l · t),

where N is the least common multiplier of the n_k's and t = ∏_k (t_k)^{−γ_k}, in which the γ_k's satisfy Σ_k γ_k·N/n_k = 1.

If for some x ∈ Q⁺ the conjunction ⋀_i (r_i < x) ∧ ⋀_j (x < s_j) ∧ ⋀_k R_{n_k}(t_k · x) ∧ ⋀_l ¬R_{m_l}(u_l · x) holds, then clearly ⋀_{i,j} (r_i < s_j) is true, and it can easily be seen that we also have ⋀_{k≠k′} R_{(n_k, n_{k′})}(t_k · t_{k′}⁻¹). Assume m_l | N; we show ¬R_{m_l}(u_l · t). Note that there exists some λ such that x = λᴺ · t. Now, if R_{m_l}(u_l · t) held, then from u_l · x = λᴺ · u_l · t and m_l | N we would have R_{m_l}(u_l · x), which contradicts the assumption ⋀_l ¬R_{m_l}(u_l · x). Whence ⋀_{l: m_l | N} ¬R_{m_l}(u_l · t) holds.

Conversely, if ⋀_{i,j} (r_i < s_j) ∧ ⋀_{k≠k′} R_{(n_k, n_{k′})}(t_k · t_{k′}⁻¹) ∧ ⋀_{l: m_l | N} ¬R_{m_l}(u_l · t) holds, then by the above arguments there exist some positive real numbers α < β such that for any rational λ with α < λ < β, the number z = λᴺ · t satisfies the formula ⋀_i (r_i < z) ∧ ⋀_j (z < s_j) ∧ ⋀_k R_{n_k}(t_k · z), where N and t are as above. Let P be a sufficiently large prime number which does not divide any of the numerators or denominators of (the reduced fractions of) the t_k's or u_l's. Let M = ∏_l m_l and let η be a positive rational number such that (α/P)^{1/M} < η < (β/P)^{1/M}. We show that x = Pᴺ · η^{N·M} · t satisfies ⋀_l ¬R_{m_l}(u_l · x). Note that since α < P·η^M < β, we already have ⋀_i (r_i < x) ∧ ⋀_j (x < s_j) ∧ ⋀_k R_{n_k}(t_k · x). For showing ¬R_{m_l}(u_l · x) we distinguish two cases. (1) If m_l | N, then R_{m_l}(u_l · x), or equivalently R_{m_l}(u_l · Pᴺ · η^{N·M} · t), would imply R_{m_l}(u_l · t), contradicting the assumption ⋀_{l: m_l | N} ¬R_{m_l}(u_l · t); thus ¬R_{m_l}(u_l · x). (2) If m_l ∤ N, then R_{m_l}(u_l · x), or equivalently R_{m_l}(u_l · Pᴺ · η^{N·M} · t), would imply R_{m_l}(u_l · t · Pᴺ), since m_l | M. Since P does not divide any of the numerators or denominators of (the reduced fractions of) the u_l's or of t (the t_k's), we would then have R_{m_l}(Pᴺ), which holds if and only if m_l | N; this contradicts our assumption m_l ∤ N. Thus ¬R_{m_l}(u_l · x). Whence, all in all, we have shown that ⋀_l ¬R_{m_l}(u_l · x) holds. Q.E.D.
References

[1] E. Börger, E. Grädel, and Y. Gurevich, The Classical Decision Problem, Springer-Verlag, Berlin, 2001.
[2] J. Robinson, Definability and Decision Problems in Arithmetic, The Journal of Symbolic Logic 14 (1949), 98-114.
[3] D. Marker, Model Theory: An Introduction, Springer-Verlag, Berlin, 2002.
[4] C. Smoryński, Logical Number Theory I: An Introduction, Springer-Verlag, Berlin, 1991.
Hossein Afsari
Information Technology and Digital Media Developments Centre
Ministry of Culture, Iran
Hosein.afsari@yahoo.com
Keywords: Software Quality Evaluation, Software Rating, Content Based Software Packages, Software Evaluation
Standard
Introduction
general model from the above standards, a quality measurement model has been offered to evaluate content-based software products in ten steps.

2 Requirements

In the first stage, the system requirements have been determined. This stage is performed in five steps. Based on this method, firstly the evaluation purposes have been determined; then, all stakeholders of these software packages have been identified, and the users (the basic stakeholders) have been assessed and their needs determined. Fig. 1 shows the relation between the stakeholders' requirements in the system.

In the first step, the evaluation purpose is defined as follows: software quality evaluation with qualitative requirements that represent user needs. In the second step, the type of the evaluated product is determined; it is related to the evaluation purpose. The basic step of the evaluation process is to determine the products, and in this model, media software is considered as the evaluated product. Media software aims to increase the users' scientific and cultural awareness by offering scientific, cultural and art contents, or to entertain them; it influences users culturally, mentally and psychologically, both directly and indirectly. In the third step, the system stakeholders are determined. A stakeholder is a person who has a right, claim or share in the system and its characteristics to meet his needs and expectations. The stakeholders have different needs and expectations, which can be classified into three general branches:

Software producers: This group either distributes certain content in order to influence its users culturally, mentally and psychologically, or entertains its audiences by a set of contents and functions which the users have a liking for. Some of them try to make a tool for a certain function or to offer certain services for users' needs.

End users: This group contains the software audiences; they use the software to see certain content or to meet their functional needs.

2.1 Explicit needs
2.2 Implicit needs
These are non-expressed but real needs: needs that are not stated explicitly but are hidden because they are supposed to be obvious. With regard to the field research on media software producers and the judgement of experts, the following implicit needs were determined for every software:

1- Software packaging: The user receives the media software product as a saleable package, so the software packaging is considered as one of the user needs. In fact, every media software is considered a commercial off-the-shelf software product.

2- Internal consistency and installation: Every media software is in its nature a software, so two basic factors must be considered in order to use it. It must enjoy internal efficiency and consistency; in other words, it must have the characteristics of reliability, efficiency, maintainability and security, without any failure or fault. It must work without fault when installing, running, activating and deleting the program, in addition to having an appropriate software type and agreement with its addressees.

3- User interface: This is the observable and touchable part of the software that the user deals with directly. It includes the information channels that provide communication between the user and the computer. The user interface in media software is generally one of the two following types: choice interface and graphical user interface. The user interface is another implicit user need.

4- Support: Since the majority of media software users are ordinary people, support is one of the users' requirements.

After determining the stakeholders' needs, in the fifth step the system requirements have been determined. A system often includes different elements, each of which has certain specifications and responds to different purposes in the system. For the system to function, the system requirements must be transformed into requirements of the different elements of the system. The result of the requirements-defining process is called the stakeholders' requirements. In this step, for each of the elements defined in the previous step and extracted from user needs, the quality requirements have been extracted: criteria (attributes) for each of the six previous characteristics (in three layers).

In the sixth step, the criteria (attributes) are determined. An attribute is an inherent characteristic of an entity that can be determined quantitatively or qualitatively by human or automatic tools. Attributes are divided into two groups: the permanent attribute, which exists in the nature of things, and the acquired attribute of a system, process or product (such as product price or product owner). The acquired attribute is not an inherent qualitative attribute of a system, process or product.

Quantity determination and quality evaluation of a software product is done by criteria and is related to sufficient quality attributes. In the seventh step, for each of the attributes, the quality characteristics and criteria are determined in three layers. By using the criteria determined in the previous step, in the second stage the quality model has been designed. The quality of a system is the result of its constituents' quality and their interaction; software quality includes the software product's potential to meet implicit and explicit needs in certain conditions.

A quality model is a determined set of attributes and the relationships between them that provides a framework to determine quality needs and evaluation. The quality model is used as a framework to ensure all quality aspects are considered with regard to both the internal view and the user view. With regard to the requirements extracted in the past step, the following quality model has been extracted, and for every basic quality attribute the secondary attributes have been defined. In this model, two aspects of quality are defined:

Internal software quality: it contains software packaging, internal consistency, user interface, content, function and support.

Quality in use: the users' ideas are obtained about the software components.
Figure 2: The weighting factors for every characteristic in different softwares by using the AHP method

This level includes some layers. The AHP process requires pairwise comparisons based on a scale.

In this paper, the quality reference evaluation is offered from the two following views, based on the method given in the standard 25030: internal quality of the software and quality in use. A standard method is derived from the standard 25030 to design the model. Also, this model is derived from the constituents of media software with regard to the cultural needs of media software users. Moreover, a scientific method is offered to measure content in the measurement reference model, and the defined quality characteristics cover all quality aspects for most media software, so it can be used as an inventory to assure the complete coverage of quality.
In future research, the weighting of the third and fourth functions can be carried out in order to reduce reliance on the judges' opinions and to consider more quantitative indexes.
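Where the weights of Figure 2 come from: AHP derives a weight vector from a matrix of pairwise comparisons, typically as its principal eigenvector. A minimal sketch (the 3x3 judgements below are invented for illustration, on the standard 1-9 scale; numpy only):

    import numpy as np

    # Hypothetical pairwise comparison matrix for three characteristics.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    # AHP priority weights: principal eigenvector of A, normalised to sum to 1.
    eigvals, eigvecs = np.linalg.eig(A)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    weights = principal / principal.sum()

    # Consistency ratio guards against contradictory judgements
    # (RI = 0.58 is the standard random index for a 3x3 matrix).
    lam_max = eigvals.real.max()
    cr = ((lam_max - 3) / (3 - 1)) / 0.58
    print(weights, cr)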
Acknowledgement: The authors wish to acknowledge Mr. Meisam Abdoli, Meisam ZargarVafa,
Ali Javedani and Madjid Paksima, whose help aided in
the completion of this study.
References
Mahdi Vasighi
fmoghaddam@iasbs.ac.ir
vasighi@iasbs.ac.ir
Abstract: In this paper we have developed a collection of MATLAB routines for Multiple Sequence Alignment using a genetic algorithm, called TOMSAGA (TOolbox for Multiple SEquence Alignment using Genetic Algorithm). TOMSAGA uses a genetic algorithm to solve the multiple sequence alignment problem. The toolbox routines are programmed in MATLAB 7.0 and are freely available through the WWW at http://www.iasbs.ac.ir/vasighi/TOMSAGA. The toolbox functions allow a user to have proper control over the genetic algorithm's parameters in an easy way, and they give a straightforward possibility to visualize the obtained results.
Introduction

The exponential growth in the size of biological databases goes in parallel with the increasing necessity for tools to analyse and extract the valuable information. One of the first steps to extract and make this information usable is sequence alignment: phylogenetic analyses, PCR (polymerase chain reaction) primer construction and secondary or tertiary structure prediction can all be carried out by aligning sequences [6]. Being such a central subject, algorithms to deal with sequence alignment have already been developed.

Multiple sequence alignment (MSA) is an extension of pairwise sequence alignment [5]. Nowadays, multiple sequence alignment is an important tool in molecular biology and it provides key information for sequence analysis. As the name suggests, in multiple sequence alignment we would like to find an optimal alignment for a collection of sequences.

MSA is characterized by high computational complexity. Needleman and Wunsch [7] first used dynamic programming in the comparison of two sequences. This method has also been extended directly to the comparison of three sequences using a three-dimensional matrix [8], reduced by Murata et al. [9] to O(n³) computations.
2 GA FOR MSA

As an example of genetic algorithms, we used the algorithm introduced by Jorng-Tzong et al. [3], which solves the multiple sequence alignment problem in biology using genetic algorithms. For simplicity and without loss of generality, we avoid some mathematical representations in this paper and try to describe them verbally or by showing examples. Figure 1 shows the general structure of a genetic algorithm. More detail about the GA can be found in the quoted papers.

2.1 Chromosome Encoding

2.2
The number of generations exceeds the maximum value specified by the user (g_max).

2.3 Fitness Value

The sum-of-pairs function and the entropy function are used to evaluate the fitness of the generated chromosomes [19]. The SP-score is a very popular scoring scheme: it defines the quality of a multiple alignment as the sum of the scores of all distinct unordered pairs of letters in the columns. Given a set of N aligned sequences, each of length L, in the form of an L×N alignment matrix A, and a substitution matrix (PAM or BLOSUM [?22]) that gives the score s(x, y) for aligning two characters x and y, the SP-score for the i-th column of M (denoted m_i), SP(m_i), is calculated using the formula below:

  SP(m_i) = Σ_{k<l} s(m_{ki}, m_{li})    (2)

The entropy of the i-th column, and of the whole alignment, are given by:

  entropy_i = − Σ_{x∈{A,T,C,G}} p_x log p_x    (3)

  Entropy = Σ_i entropy_i    (4)

2.4 Cross Over

In the crossover process, two parent chromosomes, denoted X and Y, are selected by the Roulette Wheel Method in order to produce two offspring chromosomes. Two kinds of crossover are used in this toolbox: Horizontal Crossover and Vertical Crossover. In Horizontal Crossover, a sequence that includes gaps is randomly selected from parent X and exchanged with the corresponding row in parent Y (Figure 3). In Vertical Crossover, the sequences in each parent are randomly split in two parts and new offspring are generated by combining the different slices (Figure 4).
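As a concrete illustration of equations (2)-(4), the following minimal sketch (a toy match/mismatch score stands in for a PAM/BLOSUM matrix, and the function names are ours, not the toolbox's) evaluates both fitness measures on a tiny alignment:

    from math import log
    from itertools import combinations

    def s(x, y):
        return 1 if x == y else -1   # hypothetical substitution score

    alignment = ["ACG-T",
                 "A-GAT",
                 "ACGAT"]

    def sp_score_column(column):
        # Equation (2): sum of s(x, y) over all unordered pairs in the column.
        return sum(s(x, y) for x, y in combinations(column, 2))

    def entropy_column(column):
        # Equation (3): -sum of p_x log p_x over x in {A, T, C, G}.
        total = 0.0
        for base in "ATCG":
            p = column.count(base) / len(column)
            if p > 0:
                total -= p * log(p)
        return total

    columns = ["".join(seq[i] for seq in alignment) for i in range(len(alignment[0]))]
    sp_total = sum(sp_score_column(c) for c in columns)
    entropy_total = sum(entropy_column(c) for c in columns)   # equation (4)
    print(sp_total, entropy_total)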
2.5 Mutation

The Mutation operator merges some gap spaces of a sequence together and then shifts them to other columns. The details of the mutation operator are given in Algorithm 2.

Algorithm 2: Mutation (chromosome X)
1. Select a number-string x_i = (x_{i,1}, x_{i,2}, ..., x_{i,m}) in X at random.
2. Select two numbers x_{i,g} and x_{i,g+1}.
3. Select two numbers h and h+1 that are not members of x_i.
4. Replace the numbers x_{i,g} and x_{i,g+1} with h and h+1, respectively.
5. Sort the numbers in x_i in increasing order.
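A runnable sketch of Algorithm 2 (assuming, as the encoding section suggests, that a chromosome stores one sorted list of gap positions per sequence; this helper is ours, not a toolbox routine):

    import random

    def mutate(chromosome):
        # chromosome: list of sorted gap-position lists, one per sequence.
        xi = random.choice(chromosome)                 # a number-string x_i
        g = random.randrange(len(xi) - 1)              # adjacent pair x_{i,g}, x_{i,g+1}
        used = set(xi)
        candidates = [h for h in range(1, max(xi) + 2)
                      if h not in used and h + 1 not in used]
        h = random.choice(candidates)                  # h, h+1 not members of x_i
        xi[g], xi[g + 1] = h, h + 1                    # replace the selected pair
        xi.sort()                                      # keep the string increasing
        return chromosome

    print(mutate([[2, 5, 6, 11], [1, 7, 8, 12]]))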
3 SOFTWARE

3.1 Software Requirements

3.2 Modules

3.2.1 Input Data

3.2.2 Setting Parameters

3.2.3 Results

3.3 Example of Analysis
CONCLUSION

In this paper, we introduced a GA toolbox for multiple sequence alignment. This toolbox is a collection of modules for calculating MSA. The algorithm settings (GA and MSA settings), such as the number of generations, mutation rate, scoring scheme, etc., can be defined by the user and are automatically stored in a MATLAB data structure by means of a proper function. Then, the user can calculate the MSA via the MATLAB command window. It is our hope that TOMSAGA promotes the utilization of this toolbox in research by making its best features more readily accessible. This work suggests several interesting directions for future studies: designing a graphical user interface (GUI), the capability to handle protein sequences, implementing different kinds of mutations and adding different types of scoring schemes are among our future works in this project.
References

[1] D. Greenbaum, N. M. Luscombe and M. Gerstein, What is bioinformatics? A proposed definition and overview of the field, Department of Molecular Biophysics and Biochemistry, Yale University, USA (2001).
Mostafa Jafari
Zanjan University
Abstract: This research paper answers the question of how we can improve the social-life analysing abilities of computer and information technology (IT) specialists in order to enrich their social life. The paper is based on an applied research, and the target community is 90 persons: IT and computer specialists (professors, instructors, scholars, engineers) and social science experts. The three main hypotheses were as follows: there is a meaningful difference between the life strategies of IT specialists and social specialists; the life schema of IT specialists is not balanced; and the average literacy (knowledge) of IT specialists is low. The analysis of the data confirms only the third hypothesis. Based on the results of the research, we have proposed a multi-dimensional model to measure and balance the life-schema-shaping strategy of IT specialists.
Introduction
2 Scientific framework

2.1
shaping strategy. According to the cognition school of strategy, a schema is the mental structure of a person. Everyone is bombarded with data; the problem is how to store it and make it available at a moment's notice. Schemas do this by representing knowledge at different levels. This enables people to create full pictures from rudimentary data, to fill in the blanks. When we think about a matter, for example about the ways of the life-enrichment strategy, the mind likely triggers a schema with knowledge at the political, financial, and technological levels. Certain implicit assumptions go with this schema [10].

The combination of these schemas finally, and dynamically, reshapes the identity of any person and any nation, while in a world of global flows of wealth, power, and images, the search for identity, collective or individual, ascribed or constructed, becomes the fundamental source of social meaning [4]. Thus all people strongly need a suitable model in order to be able to realize their own identity and continuously enrich their life-book content. In a network society, to capture this valuable vision, the spider model is a simple, efficient and effective model.
2.2

A spider model diagram is a graphical method of displaying multivariate data in the form of a two-dimensional chart of three or more quantitative variables represented on axes starting from the same point. The chart consists of equal-angular spokes, each spoke representing one of the variables. The data length of a spoke is proportional to the magnitude of the variable for the data point, relative to the maximum magnitude of the variable across all data points. A line is drawn connecting the data values for each spoke. This gives the plot a star-like appearance, which is the origin of one of the popular names for this plot. One application of the spider model is in the control of quality improvement, to display the performance metrics of any ongoing program [8]. The spider model is primarily suited to strikingly showing outliers and commonality, or to showing when one chart is greater in every variable than another; it is primarily used for ordinal measurements, where each variable corresponds to "better" in some respect and all variables are on the same scale [6]. The following model is an example of life and work balance.
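Such a chart can be produced in a few lines. A sketch using matplotlib, with axis labels taken from the first six literacy types listed below and values mirroring the first six IT-group scores from the table further below (the pairing of labels to scores is assumed here purely for illustration):

    import numpy as np
    import matplotlib.pyplot as plt

    labels = ["Technological", "Scientific", "Economical", "Political", "Social", "Health"]
    values = [65, 30, 50, 15, 20, 45]

    # Equal-angular spokes; repeat the first point to close the polygon.
    angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
    angles += angles[:1]
    values = values + values[:1]

    ax = plt.subplot(polar=True)
    ax.plot(angles, values)                 # line connecting the data values
    ax.fill(angles, values, alpha=0.25)     # optional shading
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(labels)
    plt.show()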
We selected the spider model as a geometric model for the reason that it is suitable (efficient and simple) for analyzing and comparing any multidimensional phenomenon, particularly in a network context.

In this research, the basic or target model was the model of Iran's Education Superior Consultant. Based on this descriptive model, all students should learn ten types of literacy, as follows:
1 Technological literacy
2 Scientific literacy
3 Economical professional literacy
4 Political literacy
5 Social literacy
6 Health literacy
9 Ecological literacy
10 Spiritual literacy

Figure 1: A spider model

Methodology
      IT group   Other group
 1       65          40
 2       30          30
 3       50          60
 4       15          30
 5       20          30
 6       45          45
 7       20          20
 8       20          20
 9       30          30
10       45          45
Results
Discussion
Author Index
Abbasfard, Mitra
Abdi reyhan, Zahra
Abdollahi, Mahdi
Abdollahi, Davood
Abedin, Marjan
Afsari, Hossein
Afsharchi, Mohsen
Agha-Mohaqeq, Mahnaz
Ahmadi, Lida
Ahmadian Ramaki, Ali
Ahmadzadeh, Vahid
Ahmadzadeh, Somayeh
Akbari, Ahmad
Akbari, Majid
Akbarzadeh, M
Alizadeh, H
Alizadeh, Hassan
Allahyar, Amin
AlmasiMousavi, SeyedMehrzad
Amini, Sara
Aminian, Media
Arabani Mostaghim, Saideh
Arabfard, Masoud
Asad Nejhad, Reza
Ashkezari Toussi, Soheila
Askari, Meisam
Askari Moghadam, Reza
Aslanian, Angeh
Asosheh, Abbass
Azadi, Neda
Azami, H
Azimi, Reyhane
Azmi, Reza
Babaee, Hossein
Babamir, Morteza
Babu, Praveen
Bagheri, Ahmad
Bagheri Shouraki, Saeed
Bakhshandegan Moghaddam, Farshad
Bakhshayesh, B
Banki, Hoda
Baraani, Ahmad
Barzegar, HamidReza
Bazargan, Kamal
Biglari, Mohsen
Bijari, Afsane
Borna, Keivan
ChaieAsl, Rana
Danaie, Hasan
Dastghibyfard, Gh
Davardoost, Farnaz
Dehghan Takhtfooladi, Mehdi
Derakhshan, Farnaz
Derakhshanfar, Roya
Derhami, Vali
Dolati, A
Ebadi, Shabnam
Ebadzadeh, Mohammad Mehdi
Ebrahimi Atani, R
Ebrahimpour-Komleh, Hossein
Eftekhary Moghadam, Amir Masoud
Emadi, Seyyed Peyman
Emami, Hojjat
Eskandari, Marzieh
Faez, Karim
Falahi, Amirreza
Farokh, Azam
Fatemie parsa, Susan
Firouzi, Mohsen
Forutan Eghlidi, Fatemeh
Fotouhi-Ghazvini, Faranak
Ghadimi, Fatemeh
Ghasem Azar, Armin
Ghasemzadeh, Mohammad
Gheibi, Amin
Ghiasbeigi, Masoud
Ghiasifard, Sonia
Gholami, Peyman
Gholami, Maryam
Gholami, Azadeh
Gholamiyan Yousef Abad, Bahareh
Gholamnezhad, Pezhman
Gohargazi, Hojjat
Golichenari, Fatemeh
H.Khalaj, Babak
Haghighat, Bahar
Hagtalab, Hamed
Haj Mirzaei, Milad
Haji Seyed Javadi, Mohammad
Hajinazari, Parvaneh
Hasanzadeh, Maryam
Hashemi, Seyyed Mohsen
Hasheminejad, S.M.Hossein
Hassanpour, Reza
Hassanzade, Elmira
Hatami, Einolah
Hatamzadeh, Payam
Hayati, Mohammad Hosseion
Hazrati Bishak, Akhtar
Hazrati Bishak, Morteza
Horri, Abbas
Hosseini, Seyed Rebvar
Iahad, N.A
Jabraeil Jamali, Mohammad Ali
Jafari, Amir Homayoun
Jafari, Parisa
Jafari, Zahra
Jafari, Mostafa
Jalalian, Zahra
Jalili, Saeed
Jamali Dinan, Samirasadat
Javadi, Marzieh
Javadi, SeyyedMohammadAli
Kalantari, Mohammad
Kargar, Saeed
Kargar, Hossein
Karimi, Mohammad Hossein
Karimian Ravandi, Masoud
Karimpour Darav, Nima
Katanforoush, Ali
Kesri, Vishal
Khairabadi, Jalal
Khakabi, Sina
Khalvandi, Tayebeh
Khanteimoory, Alireza
Khayyambashi, Mohammad Reza
Khazaei, Bahareh
Khodadadian, Elahe
Khosravi, Alireza
Khosravi, Mohsen
Khosravi-Farsani, Hadi
Kiasat, Fereshteh
Laleh, Abolghasem
Lausen, George
Lotfi, Shahriar
M.Bassiri, Maisam
Mahdavi, Mehrgan
Mahdavinataj, Hannane
Mahdiani, Hamid Reza
Mahini, Reza
Mahmoodi, Seyed Abbas
Mahmoodi, Maryam Sadat
Mahmoudzadeh, Behrouz
Maleki, Farhad
Marzaei Afshord, Masumeh
Marzi Alamdari, Jabrael
Masoud, Hamid
Meshkin, Alireza
Meybodi, Mohammad Reza
Minaei-Bigdeli, Behrooz
Mirabolghasemi, Marva
Mirabolghasemi, Maziar
Mirehi, Narges
Mirzaei, F
Mirzare Rad, Zahra
Moadab, Shahram
Moayyedi, Fatemeh
Mobedi, Parinaz
Moeinii, Ali
Mohades, Ali
Mohammad Alizadeh, Zohreh
Mohammad khanli, Leyli
Moradi, Amin
Moradi, Parham
Morovati, Mohamad Mehdi
Mortazavi, Reza
Mostajabi, Tayebeh
Naderi, Hassan
Najafi, Elahe
Najafi, Robab
Najafi, Adel
Naji, Hamid Reza
Namazi, Babak
Nasersharif, Babak
Nazemi, Eslam
Nematbakhsh, Mohammadali
Nikanjam, Amin
Nilforoushan, Zahra
Norouzi, Naser
Norozi, Narges
Noshirvani Baboli, Davood
Nourollah, Ali
Poshtan, Javad
Pourhaji Kazem, Ali Asghar
Pourzaferani, Mohammad
PR Hasanzadeh, Reza
Qiasi, Razieh
Rahimipour, Shiva
Rahmani, Amir Masoud
Rahmani Ghobadi, Zahra
Rajabzadeh, Maria
Raji, Masoumeh
Rashidi, Hasan
Rasouli, Alireza
Rezaei, Fateme
Roozbahani, Zahra
Sabaei, Masuod
Sadeghi, Mehdi
Sadeghi Bigham, Bahram
Sadoghi Yazdi, Hadi
Sadreddini, Zhaleh
SaeediNia, Ebrahim
Safaeinezhad, Mohsen
Safilian, Masoud
Sajedi, H
Salahshoor Mottaghi, Zahra
Salehi, Marzieh
Salehi, Saeed
Salehpour, Masoud
Samapour, Toofan
Sanei, S
Saniee Abadeh, Mohammad
Serajian, Mina
Setarehdan, S.Kamaledin
Seyyed Hamzeh, Mehdi
Shabani, B
Shahbahrami, Asadollah
Shahgholi, Abdolmajid
Shahraki, Shahram
Sharifi, Ahmad
Sheikhi, Sanaz
Sheikholslam, S. Mostafa
Shirazi, Mahmoud
Shiri, Mohammad Ebrahim
Shirmohammadzadeh, Shahin
Shojaie, Aso
Shokrzadeh, Morteza
Shourie, Nasrin
Sojudi, Sevila
Solhnia, Mohsen
Tabibian, Shima
Taheri, Fatemeh
Taheri, Mohsen
Taheri, T
Taherian, Parisa
Tahmasbi, Maryam
Taromi, S
Tashakkori Hashemi, Seyyed Mehdi