
International Journal of Computational Intelligence and Information Security

ISSN: 1837-7823







July 2011

Vol. 2 No. 7

IJCIIS Publication

IJCIIS Editor and Publisher
P Kulkarni

Publisher's Address:
5 Belmar Crescent, Canadian
Victoria, Australia
Phone: +61 3 5330 3647
E-mail Address: ijciiseditor@gmail.com
Publishing Date: July 31, 2011
Members of IJCIIS Editorial Board
Prof. A Govardhan, Jawaharlal Nehru Technological University, India
Dr. A V Senthil Kumar, Hindusthan College of Arts and Science, India
Dr. Awadhesh Kumar Sharma, Madan Mohan Malviya Engineering College, India
Prof. Ayyaswamy Kathirvel, BS Abdur Rehman University, India
Dr. Binod Kumar, Lakshmi Narayan College of Technology, India
Prof. Deepankar Sharma, D. J. College of Engineering and Technology, India
Dr. D. R. Prince Williams, Sohar College of Applied Sciences, Oman
Prof. Durgesh Kumar Mishra, Acropolis Institute of Technology and Research, India
Dr. Imen Grida Ben Yahia, Telecom SudParis, France
Dr. Himanshu Aggarwal, Punjabi University, India
Dr. Jagdish Lal Raheja, Central Electronics Engineering Research Institute, India
Prof. Natarajan Meghanathan, Jackson State University, USA
Dr. Oluwaseyitanfunmi Osunade, University of Ibadan, Nigeria
Dr. Ousmane Thiare, Gaston Berger University, Senegal
Dr. K. D. Verma, S. V. College of Postgraduate Studies and Research, India
Prof. M. Thiyagarajan, Sastra University, India
Dr. Manjaiah D. H., Mangalore University, India
Dr. N. Ch. Sriman Narayana Iyengar, VIT University, India
Prof. Nirmalendu Bikas Sinha, College of Engineering and Management, Kolaghat, India
Dr. Rajesh Kumar, National University of Singapore, Singapore
Dr. Raman Maini, University College of Engineering, Punjabi University, India
Dr. Seema Verma, Banasthali University, India
Dr. Shahram Jamali, University of Mohaghegh Ardabili, Iran
Dr. Shishir Kumar, Jaypee University of Engineering and Technology, India
Dr. Sujisunadaram Sundaram, Anna University, India
Dr. Sukumar Senthilkumar, National Institute of Technology, India
Prof. V. Umakanta Sastry, Sreenidhi Institute of Science and Technology, India
Dr. Venkatesh Prasad, Lingaya's University, India

Journal Website: https://sites.google.com/site/ijciisresearch/




Contents
1. Development of neuron based artificial intelligent scientific computer engineering models for
estimating shelf life of instant coffee sterilized drink (pages 4-12)

2. An approach to reduce cost of using storage resources during scientific workflow execution on
cloud computing environment (pages 13-20)

3. Digital Advertising over Traditional Advertising (pages 21-29)

4. Secure DWT Based Biometrics Inspired Steganography (pages 30-41)

5. Simulation Based Performance Analysis of Wired Computer Networks (pages 42-47)

6. Effective Car Monitoring and Tracking Model (pages 48-54)

7. Research Methodology on Agile Modeled Layered Security Architectures for Web Services
(pages 55-65)

8. Application Of Residue Number System To Advanced Encryption Standard Algorithm
(pages 66-72)

9. Secured Data Communication Using Chaotic Neural Network Based Cryptography
(pages 73-84)

10. JMF Enabled Video Conference System Based on a Service Oriented Infrastructure for
Network Centric Warfare Collaboration (Pages 85-93)

11. The Study On Capital Market And Its Behaviour (pages 94-102)

12. Digital watermarking using DCT and reversible method (pages 103-109)

13. Speed Control Of Electric Drives Using Soft Computing Techniques (pages 110-117)


Development of neuron based artificial intelligent scientific computer
engineering models for estimating shelf life of instant coffee sterilized drink

Sumit Goyal¹ and Gyanendra Kumar Goyal²
Dairy Technology Division,
National Dairy Research Institute, Karnal-132001 (Haryana), India
thesumitgoyal@gmail.com
gkg5878@yahoo.com

Abstract
The global spread of coffee growing and drinking began in the Horn of Africa, where, according to legend, coffee
trees originated in the Ethiopian province of Kaffa. Coffee was certainly being cultivated in Yemen by the 15th
century and probably much earlier. Artificial Neural Networks (ANNs) are a wide class of flexible nonlinear
regression and discriminant models, data reduction models, and nonlinear dynamical systems. Elman and
generalized regression artificial intelligence models were developed for instant coffee flavoured sterilized drink.
Colour and appearance, flavour, viscosity and sediment were the input parameters, and overall acceptability was
the output parameter for developing the artificial intelligence models. The dataset consisted of 50 experimentally
developed observations. The observations were randomly divided into two sets, namely, a training set consisting
of 40 observations (80% of total observations) and a validation set containing 10 observations (20% of total
observations). Mean Square Error (MSE) and Root Mean Square Error (RMSE) were used as prediction performance
measures. The best result for the Elman models was obtained with 4:4 neurons (MSE: 0.001101304; RMSE:
0.033185894), and for the generalized regression models with a spread constant of 2 (MSE: 0.164212158; RMSE:
0.405230993). From the investigation, Elman models were found to be the better modelling approach for
estimating the shelf life of instant coffee sterilized drink.
Keywords: ANN, Artificial Intelligence, Elman, Generalized Regression, Neuron, Coffee Drink

1. Introduction
The global spread of coffee growing and drinking began in the Horn of Africa, where, according to legend, coffee
trees originated in the Ethiopian province of Kaffa. It is recorded that the fruit of the plant, known as coffee
cherries, was eaten by slaves taken from present day Sudan into Yemen and Arabia through the great port of its day,
Mocha. Coffee was certainly being cultivated in Yemen by the 15th century and probably much earlier. In an
attempt to prevent its cultivation elsewhere, the Arabs imposed a ban on the export of fertile coffee beans, a
restriction that was eventually circumvented in 1616 by the Dutch, who brought live coffee plants back to the
Netherlands to be grown in greenhouses. Initially, the authorities in Yemen actively encouraged coffee drinking.
The first coffeehouses or kaveh kanes opened in Mecca and quickly spread throughout the Arab world, thriving as
places where chess was played, gossip was exchanged and singing, dancing and music were enjoyed. Nothing quite
like this had existed before: a place where social and business life could be conducted in comfortable surroundings
and where - for the price of a cup of coffee - anyone could venture. Perhaps predictably, the Arabian coffeehouse
soon became a centre of political activity and was suppressed. Over the next few decades coffee and coffeehouses
were banned numerous times but kept reappearing until eventually an acceptable way out was found when a tax was
introduced on both. By the late 1600s the Dutch were growing coffee at Malabar in India and in 1699 took some
plants to Batavia in Java, in what is now Indonesia. Within a few years the Dutch colonies had become the main
suppliers of coffee to Europe, where coffee had first been brought by Venetian traders in 1615. This was a period
when the two other globally significant hot beverages also appeared in Europe. Hot chocolate was the first, brought
by the Spanish from the Americas to Spain in 1528; and tea, which was first sold in Europe in 1610. At first coffee
was mainly sold by lemonade vendors and was believed to have medicinal qualities. The first European coffeehouse
opened in Venice in 1683, with the most famous, Caffe Florian in Piazza San Marco, opening in 1720. It is still open
for business today [1]. Artificial Neural Networks (ANNs) are a wide class of flexible nonlinear regression and
discriminant models, data reduction models, and nonlinear dynamical systems. They consist of an often large
number of neurons, i.e., simple linear or nonlinear computing elements, interconnected in often complex ways and
often organized into layers. ANNs are used in three main ways:
(i) as models of biological nervous systems and intelligence;
(ii) as real-time adaptive signal processors or controllers implemented in hardware for applications such as robots;
(iii) as data analytic methods.
The development of ANNs arose from the attempt to simulate biological nervous systems by combining many
simple computing elements (neurons) into a highly interconnected system and hoping that complex phenomena
such as intelligence would emerge as the result of self-organization or learning. The alleged potential intelligence
of neural networks led to much research in implementing artificial neural networks in hardware such as VLSI
chips [2]. Neural networks take a different
approach to problem solving than that of conventional computers. Conventional computers use an algorithmic
approach, i.e., the computer follows a set of instructions in order to solve a problem. Unless the specific steps the
computer needs to follow are known, the computer cannot solve the problem. That restricts the problem-solving
capability of conventional computers to problems that we already understand and know how to solve. But computers
would be so much more useful if they could do things that we don't exactly know how to do. ANNs process
information in a way similar to the human brain. The network is composed of a large number of highly
interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks
learn by example. They cannot be programmed to perform a specific task. The examples must be selected carefully,
otherwise useful time is wasted or, even worse, the network might function incorrectly. The disadvantage is
that because the network finds out how to solve the problem by itself, its operation can be unpredictable. On the
other hand, conventional computers use a cognitive approach to problem solving; the way the problem is to be solved
must be known and stated in small, unambiguous instructions. These instructions are then converted to a high-level
language program and then into machine code that the computer can understand. These machines are totally
predictable; if anything goes wrong, it is due to a software or hardware fault [3].

1.1 Feedforward networks
Feedforward ANNs allow signals to travel one way only; from input to output. There is no feedback (loops), i.e.,
the output of any layer does not affect that same layer. Feedforward ANNs tend to be straightforward networks that
associate inputs with outputs. They are extensively used in pattern recognition. This type of organization is also
referred to as bottom-up or top-down [3].

1.2 Feedback networks
Feedback networks can have signals travelling in both directions by introducing loops in the network. Feedback
networks are very powerful and can get extremely complicated. Feedback networks are dynamic; their 'state' is
changing continuously until they reach an equilibrium point. They remain at the equilibrium point until the input
changes and a new equilibrium needs to be found. Feedback architectures are also referred to as interactive or
recurrent, although the latter term is often used to denote feedback connections in single-layer organizations [3].
1.3 Network layers
The commonest type of artificial neural network consists of three groups, or layers, of units: a layer of "input" units
is connected to a layer of "hidden" units, which is connected to a layer of "output" units. The activity of the input
units represents the raw information that is fed into the network. The activity of each hidden unit is determined by
the activities of the input units and the weights on the connections between the input and the hidden units. The
behaviour of the output units depends on the activity of the hidden units and the weights between the hidden and
output units. This simple type of network is interesting because the hidden units are free to construct their own
representations of the input. The weights between the input and hidden units determine when each hidden unit is
active, and so by modifying these weights, a hidden unit can choose what it represents [3].
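To make the layered computation concrete, here is a minimal sketch of the input → hidden → output pass just described. It is not from the paper: the layer sizes, tanh hidden units, linear output, and random weights are illustrative assumptions (the four inputs mirror the sensory scores used later in the study).

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """One forward pass: input -> hidden (tanh) -> output (linear).

    The hidden activities depend on the input activities and the
    input-to-hidden weights W1; the output depends on the hidden
    activities and the hidden-to-output weights W2, as the text describes.
    """
    hidden = np.tanh(W1 @ x + b1)   # activity of the hidden units
    return W2 @ hidden + b2         # activity of the output unit(s)

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 5, 1     # e.g. 4 sensory scores -> 1 acceptability score
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)

x = np.array([7.5, 8.0, 7.0, 6.5])  # colour & appearance, flavour, viscosity, sediment
print(forward(x, W1, b1, W2, b2))
```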



1.3.1 Single-layer and multi-layer architectures
The single-layer organization, in which all units are connected to one another, constitutes the most general case and
is of more potential computational power than hierarchically structured multi-layer organizations as represented in
Fig.1.







Fig.1 Single hidden layer architecture








Fig.2 Two hidden layer architecture

In multi-layer networks, units are often numbered by layer, instead of following a global numbering. Supervised
learning incorporates an external teacher, so that each output unit is told what its desired response to input
signals ought to be. Architecture of multiple hidden layers is displayed in Fig.2. During the learning process global
information may be required. Paradigms of supervised learning include error-correction learning, reinforcement
learning and stochastic learning. An important issue in supervised learning is the problem of error convergence, i.e.,
the minimization of error between the desired and computed unit values. The aim is to determine a set of weights
which minimizes the error. One well-known method, which is common to many learning paradigms, is the least
mean square convergence. Unsupervised learning uses no external teacher and is based upon only local information.
It is also referred to as self-organization, in the sense that it self-organizes data presented to the network and detects
their emergent collective properties. Paradigms of unsupervised learning are Hebbian learning and competitive
learning. A neural network learns on-line if it learns and operates at the same time. Usually, supervised learning is
performed off-line, whereas unsupervised learning is performed on-line [3]. ANN has predicted beef sensory quality [4]
and shelf life of soya milk [5]. ANN provides a simple and accurate prediction method [6]. ANN has also been
successfully applied for predicting rice snacks [7], soya bean equilibrium moisture content [8], and soft mouth
melting milk cakes [9].

2. Method and Material









Fig.3. Input and output parameters

Colour and appearance, flavour, viscosity and sediment were used as input parameters. The overall acceptability
was used as the output parameter for developing the artificial intelligence models (Fig.3).













Fig. 4. Training Pattern of the ANNs

Experimentally developed 50 observations were used for developing the ANN models. The observations were divided
into two disjoint subsets, namely, a training set consisting of 40 observations (80% of total observations) and a
testing set containing 10 observations (20% of total observations). The training pattern is represented in Fig.4, and
Eq. (1) and Eq. (2) show the performance measures used in evaluating the performance of the ANNs.



(Figure content: Fig. 3 maps the inputs colour & appearance, flavour, viscosity and sediment to the output overall
acceptability; Fig. 4 shows training data fed to the ANN models, adjustment and evaluation of weights, and selection
of the minimum-error network.)



Performance measures for prediction:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(Q_{exp} - Q_{cal}\right)^{2} \qquad (1)$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{Q_{exp} - Q_{cal}}{Q_{exp}}\right)^{2}} \qquad (2)$$

where $Q_{exp}$ = observed value; $Q_{cal}$ = predicted value; $n$ = number of observations in the dataset.
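As a concrete illustration of Eq. (1) and Eq. (2), the sketch below computes both measures, assuming the reconstructed formulas above (note Eq. (2) uses residuals scaled by the observed value); the sample scores are hypothetical.

```python
import numpy as np

def mse(q_exp, q_cal):
    """Mean square error, Eq. (1): mean of squared residuals."""
    q_exp, q_cal = np.asarray(q_exp, float), np.asarray(q_cal, float)
    return np.mean((q_exp - q_cal) ** 2)

def rmse(q_exp, q_cal):
    """Root mean square error, Eq. (2): root of the mean of squared
    relative residuals (residuals divided by the observed value)."""
    q_exp, q_cal = np.asarray(q_exp, float), np.asarray(q_cal, float)
    return np.sqrt(np.mean(((q_exp - q_cal) / q_exp) ** 2))

# hypothetical observed vs. predicted overall-acceptability scores
observed  = [8.0, 7.5, 7.1, 6.8, 6.2]
predicted = [7.9, 7.6, 7.0, 6.9, 6.3]
print(mse(observed, predicted), rmse(observed, predicted))
```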
3. Results and Discussion
Table 1: Performance of Elman Model with single hidden layer
Neurons MSE RMSE
2 0.043242468 0.207948233
4 0.132856092 0.364494296
5 0.063288611 0.251572278
7 0.057526684 0.239847209
10 0.069404164 0.263446701
13 0.018347492 0.135452915
15 0.134364326 0.366557398
17 0.026171808 0.161777033
18 0.012070421 0.109865365
20 0.015771956 0.125586448

Table 2: Performance of Elman Model with two hidden layers
Neurons MSE RMSE
2:2 0.076851838 0.277221640
3:3 0.018084910 0.134480147
4:4 0.001101304 0.033185894
5:5 0.010389898 0.101930849
8:8 0.019793804 0.140690455
12:12 0.014074553 0.118636220
14:14 0.015195112 0.123268455
16:16 0.020495486 0.143162445
18:18 0.027013994 0.164359346
20:20 0.022287159 0.149288844






Table 3: Performance of Generalized Regression Model

Spread Constant MSE RMSE
2 0.164212158 0.405230993
5 0.181964204 0.426572625
7 0.183666472 0.428563265
8 0.184079188 0.429044506
10 0.166105405 0.407560308
25 0.185281031 0.430442831
80 0.185406690 0.430588771
100 0.184541367 0.430596877
150 0.185531367 0.430596877

3.1 Elman Experiments Modelling Approach
The Elman network has tansig neurons in its hidden (recurrent) layer, and purelin neurons in its output layer.
This combination is special in that two-layer networks with these transfer functions can approximate any
function (with a finite number of discontinuities) with arbitrary accuracy. The only requirement is that the
hidden layer must have enough neurons. More hidden neurons are needed as the function being fitted increases
in complexity. The Elman network differs from conventional two-layer networks in that the first layer has a
recurrent connection. The delay in this connection stores values from the previous time step, which can be used
in the current time step. Thus, even if two Elman networks with the same weights and biases are given
identical inputs at a given time step, their outputs can be different because of different feedback states [10].
Elman networks were developed for modelling of instant coffee sterilized drink, and single as well as double
hidden layer Elman networks were explored. The number of neurons was varied from 1 to 20 in each hidden layer.
The research revealed that networks with two hidden layers outperformed single hidden layer models. The results
for single and double hidden layers are represented in Tables 1 and 2, respectively. The best result for the Elman
model was achieved with two hidden layers having 4 neurons each, with an MSE of 0.001101304 and an RMSE of
0.033185894. The performance of the Elman models was compared with Generalized Regression (GR) network
models, and it was observed that the Elman models gave better results in comparison with the GR models.
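A minimal sketch of the Elman recurrence described above (illustrative only, not the authors' MATLAB model; the weights are random, and MATLAB's tansig and purelin are written here as tanh and a plain linear output layer):

```python
import numpy as np

class ElmanLayer:
    """Recurrent (Elman) layer: the context vector stores the hidden
    output of the previous time step and feeds it back into the current one."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in  = rng.normal(scale=0.5, size=(n_hidden, n_in))
        self.W_ctx = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
        self.b     = np.zeros(n_hidden)
        self.context = np.zeros(n_hidden)   # the delayed feedback state

    def step(self, x):
        # tansig hidden layer; the context term is why two networks with
        # identical weights can give different outputs for the same input
        # at a given time step, if their past inputs differed
        self.context = np.tanh(self.W_in @ x + self.W_ctx @ self.context + self.b)
        return self.context

layer = ElmanLayer(n_in=4, n_hidden=4)
W_out = np.ones((1, 4)) * 0.25              # purelin (linear) output layer
for x in np.random.default_rng(1).uniform(6, 9, size=(3, 4)):
    print(W_out @ layer.step(x))            # output depends on the feedback state
```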
3.2 Generalized Regression Experiments Modelling Approach
In GR models, the first layer operates just like the newrbe radial basis layer. Each neuron's weighted input is the
distance between the input vector and its weight vector, calculated with dist. Each neuron's net input is the
product of its weighted input with its bias, calculated with netprod. Each neuron's output is its net input passed
through radbas. If a neuron's weight vector is equal to the input vector (transposed), its weighted input will be
0, its net input will be 0, and its output will be 1. If a neuron's weight vector is a distance of spread from the
input vector, its weighted input will be spread, and its net input will be sqrt(-log(.5)) (or 0.8326). Therefore, its
output will be 0.5. The second layer also has as many neurons as input/target vectors, but here LW{2,1} is set to
T. For example, consider an input vector p close to pi, one of the input vectors among the input vector/target pairs
used in designing the layer 1 weights. This input p produces a layer 1 output ai close to 1. This leads to a layer 2
output close to ti, one of the targets used to form the layer 2 weights. A larger spread leads to a large area around
the input vector where layer 1 neurons will respond with significant outputs. Therefore, if spread is small the radial
basis function is very steep, so that the neuron with the weight vector closest to the input will have a much larger
output than other neurons. The network tends to respond with the target vector associated with the nearest
design input vector. As spread becomes larger the radial basis function's slope becomes smoother and several
neurons can respond to an input vector. The network then acts as if it is taking a weighted average between
target vectors whose design input vectors are closest to the new input vector. As spread becomes larger more
and more neurons contribute to the average, with the result that the network function becomes smoother [10].
GR models were developed for modelling of instant coffee sterilized drink, and spread constants were explored
from 1 to 150; the results are represented in Table 3. The GR models were compared with the Elman models, and it
was observed that the Elman models perform better than the GR models. The best result for the GR model was
obtained with a spread constant of 2, an MSE of 0.164212158 and an RMSE of 0.405230993.
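The following sketch is an illustrative reimplementation of the generalized-regression prediction just described, not the authors' MATLAB code; the training pairs are hypothetical. Layer 1 computes radial-basis activations around each design input (the 0.8326/spread factor makes a neuron output 0.5 when the input lies exactly spread away from its weight vector), and layer 2 takes the activation-weighted average of the stored targets.

```python
import numpy as np

def grnn_predict(x, design_inputs, targets, spread=2.0):
    """Generalized regression prediction for a single input vector x."""
    dist = np.linalg.norm(design_inputs - x, axis=1)   # ||x - x_i||, like dist
    b = 0.8326 / spread                                # so radbas(spread * b) = 0.5
    a = np.exp(-(dist * b) ** 2)                       # layer 1 radbas activations
    return (a @ targets) / a.sum()                     # weighted average of targets

# hypothetical training pairs: 4 sensory scores -> overall acceptability
X = np.array([[8.0, 8.2, 7.9, 7.5],
              [7.1, 7.0, 6.8, 6.9],
              [6.0, 6.2, 6.1, 5.9]])
t = np.array([8.1, 7.0, 6.0])
print(grnn_predict(np.array([7.0, 7.1, 6.9, 6.8]), X, t, spread=2.0))
```

With a small spread the prediction snaps to the target of the nearest design input; with a large spread it tends toward the average of all targets, matching the smoothing behaviour described above.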

Fig. 5. Regression equations for estimating shelf life of instant coffee sterilized drink
Further, regression equations based on the overall acceptability score were developed for predicting shelf life; the
constant came out as 7.87, the regression coefficient as -0.057, and R² was found to be 93 percent, as represented in
Fig.5. Solving them gave 3.55 as the output, which was subtracted from the actual experimental shelf life of the
product, i.e., 45 days, resulting in 41.44 days. Since the predicted shelf life is within the experimentally obtained
shelf life of 45 days, the product should be acceptable.





4. Conclusion
In this study, Elman and Generalized Regression artificial intelligence models were developed for modelling of
instant coffee sterilized drink and compared with each other. The investigation revealed that the performance of the
generalized regression models (spread constant: 2; MSE: 0.164212158; RMSE: 0.405230993) was less effective
compared to the performance of the Elman network models. Further, by comparing Elman network models with
single and double hidden layers, it was found that the double hidden layer models (neurons 4:4; MSE: 0.001101304;
RMSE: 0.033185894) gave better results than single hidden layer Elman networks. The neuron based intelligent
computing models estimated a shelf life of 41.44 days, which is close to the actual experimental shelf life of 45 days.
Therefore, from the study, it can be concluded that neuron based computing models are efficient in predicting the
shelf life of instant coffee sterilized drink.

References
[1] http://www.ico.org/coffee_story.asp (accessed on 16.5.2011)
[2] http://www.sasenterpriseminer.com/documents/Neural%20Networks%20and%20Statistical%20Models.pdf
(accessed on 28.5.2011)
[3] http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html (accessed on 22.5.2011)
[4] Park, B., Chen, Y. R., Whittaker, A. D., Miller, R. K. and Hale, D. S. (1994). ANN modelling for beef sensory
evaluation. Transactions of the American Society of Agricultural Engineers, 37, pp. 1547-1553.
[5] Ko, S. H., Park, E. Y., Han, K. Y., Noh, B. S. and Kim, S. S. (2000). Development of neurocomputing models
analysis program to predict shelf-life of soya milk by using electronic nose. Food Engineering Progress, 4,
pp. 193-198.
[6] Goñi, S.M., Oddone, S., Segura, J.A., Mascheroni, R.H. and Salvadori, V.O. (2008). Prediction of foods
freezing and thawing times: Artificial neural networks and genetic algorithm approach. Journal of Food Process
Engineering, 84(1), pp. 164-178.
[7] Chayjan, R.A. (2010). Modeling of sesame seed dehydration energy requirements by a soft-computing approach.
Australian Journal of Crop Science, 4(3), pp. 180-184.
[8] Siripatrawan, U. and Jantawat, P. (2009). Artificial neural network approach to simultaneously predict shelf life
of two varieties of packaged rice snacks. International Journal of Food Science & Technology, 44(1), pp. 42-49.
[9] Goyal, Sumit and Goyal, G.K. (2011). Development of Intelligent Computing Expert System Models for
Shelf Life Prediction of Soft Mouth Melting Milk Cakes. International Journal of Computer Applications
(accepted for publication in 2011).
[10] Demuth, H., Beale, M. and Hagan, M. (2009). Neural Network Toolbox User's Guide. The MathWorks,
Inc., Natick, USA.

An approach to reduce cost of using storage resources during scientific
workflow execution on cloud computing environment

A. Zareie¹, M.M. Pedram², M. Kelarestaghi², F.G. Alizamini³
¹ Computer Engineering Department, Islamic Azad University-Arak Branch, Arak, Iran.
² Computer Engineering Department, Tarbiat Moallem University, Karaj/Tehran, Iran.
³ Computer Engineering Department, Islamic Azad University-Science and Research Branch, Tehran, Iran.
ahmadzr8@gmail.com, pedram@tmu.ac.ir, kelarestaghi@tmu.ac.ir, fghorbanpour@srbiau.ac.ir

Abstract
Accomplishment of scientific workflow applications not only needs powerful computing resources, but also requires
high-capacity (massive) storage resources, because in this kind of application, in addition to the input data, much
intermediate data is produced during process execution and should be saved temporarily. Therefore, while scheduling
this kind of application, we can arrange the production and usage of intermediate data so that it is held in the
system for a shorter duration. In most scheduling algorithms, as soon as suitable resources and data are provided for
a task, the task will begin to be performed. In this work, we try to determine conditions for starting each task's
execution so that the application can be scheduled in a way that temporary intermediate data is held in the system
for a shorter duration. The simulations demonstrate that, through this approach, without introducing delay into the
whole workflow application, we can use less storage during the execution period.

Keywords: cloud computing, scientific workflow, workflow scheduling, storage cost

1. Introduction

A scientific workflow application is considered as a set of tasks which should be performed in a certain order in
order to reach a given goal. Such an application is shown as a Directed Acyclic Graph (DAG) where nodes represent
tasks and the weights of nodes denote the computational cost of tasks. Also, the direction of edges gives the priority
of task execution, and the weight of edges shows the data volume transferred among tasks. The high volume of tasks
and data in this kind of application, and also the possibility of parallel execution, has caused them to be performed
on distributed systems [1]. One of the recent distributed systems which prepares a suitable potential to perform such
applications is the cloud computing system. The cloud computing system has been presented since 2007 [2] and it
has been utilised in many areas with some success [3, 4, 5, 6]. This system can provide the necessary resources as
utility services (like water, electricity, gas and telephone) and charge based on the rate of using the services [7].
Using the cloud computing system has many advantages for executing workflow applications, such as cooperation of
researchers from plenty of institutions and entities to execute workflows [8], reduced cost of building infrastructure,
access to data anytime and anywhere, public execution of applications, etc.
But, as mentioned, applications of scientific workflow need high volume of computing and storage resources and
also in other hand, in cloud computing system the charges are paid based on usage rate. Therefore, to execute
workflow applications in cloud computing system, scheduling of the tasks should be done on a way so that the
usage rate of computing and storage resources would be as less as possible. In other words, the application
execution can be finished in a short period and also, during this period, less storage resources should be used. Less
usage of storage resources allows providers of cloud computing systems to offer better service and simultaneous
execution of other applications and also provides users with less charges payment.

One of the important discussions in the execution of workflow applications, which is focused on in depth by
researchers, is the scheduling of task execution in distributed systems. In this area, plenty of algorithms have been
presented with different goals. Such algorithms try to reduce the scheduling length (makespan) as much as possible
by considering some restrictions and different execution environments. In some offered algorithms, such as the
scheduling algorithms [10, 11, 13] and the data placement strategy [9], a task will start to be performed as soon as
resources are provided (e.g. a suitable processor and the required data), while paying no attention to lower usage of
storage resources; whilst in all of these algorithms, other conditions could be used to begin each task so that no
change occurs in the scheduling length but the usage of storage resources is lower.
Accomplishment of scientific workflow applications involves two types of data: input data, which mainly includes
the resource data from existing file systems or databases and the application data from users as input for processing
or analysis, and which should be held in the system from the beginning to the end of application execution; and
intermediate data, which is produced by application tasks in order to be used by subsequent tasks of the application
and is deleted after use, and so only needs temporary storage in the interval between its production and its use. Such
intermediate data has a huge volume in scientific workflow applications [12]. In this study, we try to delay the
starting time of tasks, according to the data volumes produced and used by each task, so that the intermediate data
stored during the whole process of application execution is minimized as much as possible and no change occurs in
the scheduling length.

2. Problem Analysis

A scientific workflow is shown as a Directed Acyclic Graph G = {T, D, R}, where T = {t1, t2, ..., tn} is the set of
nodes representing tasks, and the weight of node ti, shown by w(ti), denotes the execution time of the task.

D = {d1, d2, ..., dm} is a set of nodes which denotes the input data, and the weight of each node shows the data
volume. R = {(ti, tj)} is a set of edges, where an edge (ti, tj) shows the fact that task ti produces the data which is
used by task tj, and whose volume is shown by w(ti, tj).
Each task may have one or more precedent tasks which should be finished before the task is allowed to start
execution. A task will be called "ready" if processing resources are available and all of its precedent tasks have
finished (i.e. all required datasets for the task are available).
Each task ti starts to execute at a time denoted by ST(ti), and the finishing time of that task is shown by FT(ti).

As mentioned before, the workflow graph has a set called "R" which determines the priority of task execution, and
intermediate data is produced and used during workflow execution. Therefore, each edge (ti, tj) of set R shows an
intermediate datum having a volume; the period in which the datum is stored in the system, denoted Time(ti, tj),
is obtained by the following equation:

$$Time(t_i, t_j) = FT(t_j) - FT(t_i) \qquad (1)$$

We define a Data_Time parameter to calculate the storage cost of each datum in the system, which is calculated by
the following equation for each datum:

$$Data\_Time(t_i, t_j) = w(t_i, t_j) \times Time(t_i, t_j) \qquad (2)$$

For instance, for a datum having a weight of 3 which is stored in the system for 5 time units, Data_Time is equal to 15.

As the input data "D" is stored in the system permanently, its Data_Time is equal in all approaches; but the storage
cost of executing the workflow application can be reduced by minimising the value of Data_Time for the
intermediate data.

Therefore, in every approach, the storage cost of intermediate data during workflow execution can be calculated as
follows:

$$Total\_Data\_Time = \sum_{(t_i, t_j) \in R} Data\_Time(t_i, t_j) \qquad (3)$$

And we can reduce the storage cost by minimising the value of the above equation, i.e.:

$$\min \sum_{(t_i, t_j) \in R} Data\_Time(t_i, t_j) \qquad (4)$$

To clarify the problem, consider the following example:


Figure 1 - an example of a workflow graph


To execute the graph of figure 1, the value of Data_Time has been calculated for two approaches: task execution as
soon as being ready, and task execution according to data volume (communication delay is considered to be 0).
For the approach of task execution as soon as being ready:









Table 1 - Data_Time for the graph with the approach of task execution as soon as being ready




















For the approach of task execution according to data volume:


Table 2 - Data_Time for the graph with the approach of task execution according to data volume



















In both approaches, the schedule length is equal. But, as can be seen, with the approach of task execution according
to data volume, the storage cost of temporary data is lower.

3. Declaration of the presented approach

A pseudo-code of the presented approach is given in figure 2:











An Approach for Starting to Run a Ready Task

For each ReadyTask do
    If ReadyTask.inputsData >= ReadyTask.outputsData Then
        ReadyTask.Start()
    Else
        For each Task of succ(ReadyTask) do
            If Task is waiting only for ReadyTask Then
                ReadyTask.Start()
            Else If ReadyTask has the most cost in pred(Task) Then
                ReadyTask.Start()

Figure 2 - pseudo-code of the approach to begin task execution

In this approach, each ready task which has its required data and resources must satisfy further conditions to start
execution. In lines 2-3, it is checked whether the amount of data used and deleted by the ready task is less than the
amount of data which is produced by the task and must be held in the system. If the amount of data used by the task
is equal to or more than the amount of produced data, then the storage load will be lower the quicker the task is
executed, so the task is performed.
In lines 6-7, it is checked whether, among the tasks which need the data produced by the ready task, there is a task
which waits only for the ready task's execution. These lines reduce the delay of task execution, so that no change
occurs in the total finishing time of the workflow in comparison with other approaches.
Also, lines 8-9 restrict the introduced delay in comparison with other approaches. In these lines, a ready task which
has the most cost among the precedents of a task is started. This cost can be the task execution time (i.e. execution
time + communication time in systems which have communication delay).

4. Experiments and surveying of the presented approach

To evaluate the performance of the approach of task execution according to data volume, in comparison with the
approach of task execution as soon as being ready, both approaches were implemented in C# and executed on
different graphs having 50 tasks. To compare the storage volume of the approaches, we have defined the DT Ratio
parameter as follows:

$$DT\ Ratio = \frac{\text{storage cost for temporary (intermediate) data with the proposed approach}}{\text{storage cost for temporary (intermediate) data with the approach of task execution as soon as being ready}} \qquad (5)$$

According to the defined parameter, a lower value of DT Ratio shows a higher performance of the presented approach.

We have plotted the results of the approaches as diagrams using MATLAB. All the numbers in the diagrams are
averages of the outcomes of running the approaches on 100 different graphs with the same parameters.

Figure 3 shows the impact of variance in intermediate data volume on the outcomes of executing both approaches.
In this experiment, we have set the volume of intermediate data to Data weight ± n% to produce random graphs. The
presented approach tries to keep data having a smaller volume in the system, and to use data having a larger volume
more quickly and delete it from the system. By increasing "n", DT Ratio is reduced and the presented approach
shows higher performance.


Figure 3 - impact of intermediate data variance on performance

In the next experiment, the impact of variance in task execution time on the performance of the approaches has been
surveyed. In this experiment, we have set task times equal to Task Length ± m% on graphs having the same
parameters and, by changing m, obtained the DT Ratio values shown in figure 4.

Figure 4 - impact of variance of task execution time on performance

As can be seen from figure 4, the presented approach tries to hold smaller data volumes in the system during long
task executions and to store large data in the system during short task executions, which reduces the usage of
storage capacity during the whole execution of the workflow. Therefore, DT Ratio is improved by increasing the
variance of task execution times, and the performance of the presented approach increases.
In the next experiment, the impact of the communication coefficient and branching among tasks is presented. In this
experiment, we have executed the approaches on graphs having the same parameters and different branch factors,
and have shown the results in figure 5.


Figure 5 - impact of task branch coefficient on performance

According to figure 5, by increasing branch factors, the performance of the presented approach gets close to the other
approach. The possibility of parallel execution of tasks may be reduced by increasing communications; therefore the
delay of task execution is decreased, which effectively converts the presented approach into the approach of task
execution as soon as being ready, and equalizes the storage cost of both approaches.

5. Conclusion

In this study, we have presented an approach for starting the execution of scientific workflow tasks so that we can
reduce the approximate storage cost during execution by delaying the start of some tasks without extending the
schedule length of the whole workflow. Particularly in environments such as cloud computing systems, because of
payment based on usage rate, reducing the usage of computing and storage resources is very important. In some
approaches to scheduling and placement, techniques such as task and data duplication are used. But, in the cloud
computing system, using duplication approaches causes more resources to be used and so extends the costs.
Therefore, cost reduction has a considerable effect on the performance and quality of the service which is provided
by cloud service providers, and also on reducing the charges which are paid by cloud customers.


References

[1] E. Deelman, A. Chervenak, Data management challenges of data-intensive scientific workflows, in: IEEE
International Symposium on Cluster Computing and the Grid, 2008, pp. 687-692.
[2] A. Weiss, Computing in the Cloud, vol. 11, ACM Networker (2007) 18-25.
[3] M. Brantner, D. Florescu, D. Graf, D. Kossmann, T. Kraska, Building a Database on S3, in: SIGMOD,
Vancouver, BC, Canada, 2008, pp. 251-263.
[4] R. Grossman, Y. Gu, Data Mining Using High Performance Data Clouds: Experimental Studies Using Sector
and Sphere, in: SIGKDD, 2008, pp. 920-927.
[5] R. Buyya, C.S. Yeo, S. Venugopal, Market-oriented cloud computing: Vision, hype, and reality for delivering IT
services as computing utilities, in: 10th IEEE International Conference on High Performance Computing and
Communications, HPCC-08, Los Alamitos, CA, USA, 2008.
[6] C. Moretti, J. Bulosan, D. Thain, P.J. Flynn, All-Pairs: An abstraction for data-intensive cloud computing, in:
IEEE International Parallel & Distributed Processing Symposium, IPDPS'08, 2008, pp. 1-11.
[7] R. Buyya, C. Yeo, S. Venugopal, J. Broberg, I. Brandic, Cloud computing and emerging IT platforms: Vision,
hype, and reality for delivering computing as the 5th utility, Future Generation Computer Systems 25(6) (2009),
pp. 599-616.
[8] R. Barga and D. Gannon, Scientific versus business workflows. In: I.J. Taylor, E. Deelman, D.B. Gannon and M.
Shields, Editors, Workflows for e-Science, Springer, London, UK (2007), pp. 9-16.
[9] D. Yuan, Y. Yang, X. Liu, J. Chen, A data placement strategy in scientific cloud workflows, Future Generation
Computer Systems, Article in Press, Corrected Proof.
[10] Fatma A. Omara, Mona M. Arafa, Genetic algorithms for task scheduling problem, Journal of Parallel and
Distributed Computing 70(1) (2010), pp. 13-22.
[11] Y. Yang, K. Liu, J. Chen, X. Liu, D. Yuan, H. Jin, An algorithm in SwinDeW-C for scheduling transaction-
intensive cost-constrained cloud workflows, in: 4th IEEE International Conference on e-Science, 2008,
pp. 374-375.
[12] E. Deelman, D. Gannon, M. Shields and I. Taylor, Workflows and e-Science: An overview of workflow system
features and capabilities, Future Generation Computer Systems 25 (2009), pp. 528-540.
[13] J. Yu, R. Buyya, and C. K. Tham, A Cost-based Scheduling of Scientific Workflow Applications on Utility
Grids, Proc. of the 1st IEEE International Conference on e-Science and Grid Computing, Melbourne, Australia,
December 2005, pp. 140-147.


Digital Advertising over Traditional Advertising

*Ms. Deepa Chaudhary,** Mr. Ajay Chaudhary, ***Dr. Manorma Saini, ****Ms. Soniya Rajpoot,
*****Ms. Anshu Sirohi, ******Mr. Chetan Vashistth

*Sr. Lecturer, Department of Management Studies, Krishna Institute of Engineering & Technology, Ghaziabad (UP), India
**Lecturer, Department of Computer Applications, Dr. K.N.M.I.E.T., Modinagar, Ghaziabad (UP), India
***Associate Professor, Department of Humanities, Samrat Ashok Technological Institute, Vidisha (MP), India
****Lecturer, Samrat Ashok Technological Institute, Vidisha (MP), India
*****PhD. Student, Mewar University, Ghaziabad (UP), India
******Assistant Manager, Food Corporation of India, Hapur, Ghaziabad (UP), India

deepa.life@rediffmail.com, 21nov1983@gmail.com, Manorama_saini@yahoo.co.in,
soniyasinghpawar@gmail.com, sirohianshu@gmail.com, Chetan.vashistth@gmail.com


Abstract
Information technology has become a commanding factor in managing the retail market. Advertising is a main
factor in the retail industry, and in any other type of market. After the evolution of the Internet, the advertising
industry has changed completely. The Internet is overpowering the business of traditional advertising. Online
advertising is affecting almost every traditional advertising medium, and it also has an upper edge over traditional
methods: online advertising is far cheaper and more effective than all other means of advertising.

Keywords: Online Advertising, Social Networking, Pop up, Adware, Traditional Advertising, Internet


Introduction:
Advertising has been fundamental to the growth of any business from the very beginning of markets. There are a
number of means of advertising. The main advertising means, which are the main concern of this paper, are
listed below.
Firstly, there are two main types of advertising:

Promotional advertising:
a. Promotional advertising introduces new products and new businesses to the market.
b. Promotional advertising creates customer interest in a product.
c. It gives a brief description of the services and features of any new product or business.

Institutional advertising:
The main aim of this type of advertising is to make a favorable impression of a product or business on
customers and in the market.

Media are those agencies, means or instruments which are used to convey advertisements to the public. The basic
types of media are:
1. Print Media
2. Broadcast Media
3. Online Advertising
4. Specialty Media

Print Media:
Written advertising may include everything from newspapers and magazines to direct mail letters to homes, and
also signboards, etc. These are among the oldest means of advertising.

Broadcast Media:
Broadcast media contains television and radio in its category. According to a general study, an average person with
a 70-year lifespan spends 10 years of his life watching television and 6 years listening to the radio. Hence
broadcasting is also one of the important means of advertising.

Online Advertising:
Placing a message or promotion on the Internet, and making banners and wallpapers using rich media, falls under
the category of online advertising.

Specialty Media:
Printing the advertisement on a relatively inexpensive item like a pen, pencil, key chain, notebook, calendar, etc.
falls under the category of specialty media.

Other Media:
Nowadays businesses are constantly creating new and innovative means of advertising their products. Sports arena
billboards, ads in movie theaters, hot air balloons and blimps, skywriting, etc. are some of them.

A comparative study of all advertising means in the market:
Our main focus is on the edge online media has over all other media. We now present a comparison between
online advertising media and the other advertising media.

Print Media:
Print media advertising consists of the following means of advertisement.

International Journal of Computational Intelligence and Information Security, July 2011 Vol. 2, No. 7

23

(a). Newspaper advertising - Newspaper advertising is a very old and effective means of advertisement. Newspapers
are further categorized as local and national. Local newspapers are area specific and cover a small span of the
market; they are used for advertisement of local products only. National newspapers cover a wider and broader
area or span, and are used for advertising a product at a larger or national level.
Newspapers are published on a daily or weekly basis.

Newspaper advertising has a number of benefits of its own:
(I) Newspapers have a large readership and a high level of reader involvement.
(II) You can direct your advertisement to a targeted crowd.
(III) The cost of advertisement is relatively low.
(IV) You can easily time your advertisement according to the seasons of your product.

There are certain disadvantages too which are attached to newspaper advertising:
(I) There is sometimes wasted circulation of an advertisement; hence the throughput of any advertisement
is just 10 percent or less in newspaper advertising.
(II) The lifespan of newspaper advertising is quite short.
(III) Newspaper advertisements are not very attractive in looks.

(b). Magazine advertising - Magazine advertisement is also an effective medium. There are basically a number of
types of magazines. Local magazines are published in a specific area, regional magazines cover a wider area, and
national magazines are published nationwide at a large scale.
Secondly, magazines come out as weeklies, monthlies and quarterlies; hence we can manage our advertisement
according to the time span needed.
Magazines also have a different classification as consumer magazines, business (trade) magazines and
educational magazines; hence we can categorize our product accordingly.

Magazine advertising has a number of benefits:
(I) Magazines can target a selected audience.
(II) The life span of magazines is longer, as people generally collect magazines.
(III) The print quality of advertisements is good, hence ads are more attractive.
(IV) The variety of publications is larger in magazines.
There are a number of disadvantages too:
(I) The mass appeal in any specific geographical area is relatively lower for magazines.
(II) Compared to other print media, magazines are more costly.
(III) If a promotion has some deadline, then this deadline makes magazines less timely.

(c). Direct mailing - The other interesting print medium is direct mailing to customers. These mails may be sent to
people directly at their home or business address, or they may be electronic mails. Direct mailing includes
newsletters, catalogs, coupons, samplers, price lists, circulars, invitations, postage-paid reply cards, and letters.
Mailing lists of targeted customers may be assembled from current customer records or they may be purchased.
Direct mail advertising has the following advantages over other media:
(I) Direct mailing is highly selective, as there is direct human intervention in selecting targeted customers.
(II) Timing can be controlled in this means of advertising.
(III) The main benefit of direct mailing is that it is used for actual sales.

Direct mailing also has some disadvantages, for which it is often refused:
(I) The customer response level is very low for this means, as it is forced on people and not chosen by them.
(II) Most customers treat these mails as junk.
(III) The cost is comparatively higher.

(d). Outdoor advertising - Outdoor advertising is a means of advertising which can never be replaced; it is the power
of big tycoons. You have surely seen the big cinema-screen-like billboards and other advertisements by the roadside. These are
placed on highly travelled roads and also in places where visibility is relatively high. In this means of advertising,
pre-printed sheets are placed on hoardings like wallpaper.

Outdoor advertising can prove its edge over others in the following ways:
(I) The main benefit of outdoor advertising is the visibility of the advertisements. Outdoor advertisements are
highly visible and attractive; in comparison to their visibility, their cost is quite reasonable.
(II) The message is not time bound; it is there 24 hours a day, so there is no problem for the targeted audience.

Similarly, outdoor advertising also has some disadvantages:
(I) It is more restricted than other advertising means.
(II) The viewing time is very short in comparison to the cost.
(e). Transit advertising - There is one more type of advertising, which is called transit advertising. Transit advertising
is advertisement on public transport, auto rickshaws and also in trains. Posters on taxis and near bus stands also lie
in the same category.

Transit advertising may be beneficial in the following ways:
(I) Transit advertising reaches a wide captive audience and is very economical.
(II) The market for transit advertising is well defined.

Transit advertising has a number of pitfalls too:
(I) Transit advertising is not available in small towns and cities.

Broadcast Media:
Broadcast media includes television and radio in its category.

(a). Television advertising - Television advertising is the most popular means of advertising of all. According to a
survey, an average person with a 70-year lifespan spends almost 10 years of his life in front of the television.
Television media communicates with sound, action and color. It is highly interactive and you can convey your
message in a perfect way. Prime time for television advertising is 8 PM to 11 PM, as this is the time when the
complete family is in front of the television. Television media mainly appeals to large companies which have
worldwide distribution.

Television advertising has an edge over other media in the following ways:
(I) Television advertisements are well directed to audiences with a specific interest, and you can easily
highlight the point you want to highlight.
(II) By means of television one can easily take advantage of festival and holiday seasons.

Although television media is the most popular, it still has a number of disadvantages:
(I) It has the highest production cost of any media available in the market.
(II) It has the highest cost for the airtime consumed in advertising.
(III) The actual audience size is not certain; people leave the room when advertisements are displayed.

(b). Radio advertising - Radio advertising has been a booming means of advertising in recent years, as it reaches
about 96 percent of people. The best time for advertising over radio is driving time (morning and evening).
According to a survey, an average person with a 70-year lifespan spends almost 6 years of his life listening to the radio.

Radio advertising has a number of advantages:
(I) In radio advertising the audience is highly selective, as you know which bandwidth is for whom. The
advertisement for teens is broadcast on the bandwidth for teens.
(II) It is highly flexible, as the message can be changed easily; compared with print advertising, this is an
edge over it.
(III) It is a mobile medium, as it can be taken anywhere, anytime.


In spite of its many advantages, radio advertising still has disadvantages:
(I) The life span of a radio advertisement is quite short; once the advertisement is broadcast, it is over.
(II) There is always a lack of visual involvement.

Online advertising over traditional advertising:
Traditional media marketing requires huge monetary investment, planning and also a lot of time to reach half the
market that social media covers in just a few hours, so it is quite easy to see why social media sites are the obvious
choice among advertisers. We now move to a few other factors why more and more e-entrepreneurs are leaving
traditional media marketing in favour of social media advertising:
Advertising by social media gives an assurance of a huge market and assured results that can be easily measured
and scaled. These results can be used to change, modify and also improve marketing strategies. Sites like
Facebook, Orkut, Twitter, YouTube, FriendFeed etc. cover a large customer base with a speed and precision
that is quite expensive to achieve with traditional media. Besides the other reasons mentioned above, online media
is the fastest of all.
In social media advertising, there are a huge number of choices, and frequent updates. That's why awareness and
knowledge enjoy a wider scope for expansion, meaning customers become smarter by the minute and more
discriminating in their choices. A social media page added to any website undoubtedly leads the business to benefit.
Brief history of online advertising:
The very first advertising banner was created in 1994 by AT&T; after this, the first commercial spam, named
Green Card Lottery, also appeared in 1994. The very first ad server was created by Focalink Media Services and
was hosted in 1995. And, in the series of successes, in 1998 Google made $3.1 billion in cash.
Key players in online advertising:
(i) Advertiser: the one who pays money for advertisements and needs publicity. Examples: Reebok,
Coca-Cola, Airways, KingFisher etc.
(ii) Publisher: the one who has content for advertising and needs money. For example, CNN.com.
(iii) Ad network: the one who has web space for hosting advertisements and needs money. For example, Google
AdSense, Yahoo etc.
(iv) Consumer: the most important one, for whom all the above are working. This is the one who needs free content.


Growth of online advertising in recent years:

This chart shows the growth of online advertising from 2000 to 2009 on a quarterly basis. Figures are given in
million dollars.

Some deeper aspects of digital advertising:
There are two basic business models for online advertising:
CPM (cost per thousand impressions) model: the advertiser is charged per impression, i.e., the customer just
sees the advertisement. Rates vary from $0.25 to $200 per thousand impressions. These rates depend on the
publisher and the ad network.
CPC (cost per click) model: the advertiser is charged per click on the advertisement. Whenever a consumer
clicks on an advertisement, the publisher charges some money to the advertiser. The rates are around $0.3 per click.
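A quick illustration of the arithmetic behind the two models, with hypothetical campaign numbers and rates taken from the ranges quoted above:

```python
def cpm_cost(impressions, rate_per_thousand):
    """CPM: advertiser pays per thousand impressions served."""
    return impressions / 1000 * rate_per_thousand

def cpc_cost(clicks, rate_per_click):
    """CPC: advertiser pays only when a consumer clicks the ad."""
    return clicks * rate_per_click

impressions = 500_000                    # hypothetical campaign volume
clicks = int(impressions * 0.002)        # assuming a 0.2% click-through rate
print(cpm_cost(impressions, 2.0))        # $2 CPM    -> 1000.0
print(cpc_cost(clicks, 0.3))             # $0.3/click -> 300.0
```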


Reliability of consumers on different types of advertising means:



The above chart shows the level of consumer reliance on different types of advertising means. This is course
work done by students of a business school in Italy. According to the survey, online advertising is performing well
even on the reliability factor. Consumer posts and brand websites are working remarkably well.
Other means of online advertising are also playing a great role, as their volume is quite large. Hence, even though
they have lower reliability marks, they can gather a huge number of consumers.

Some Facts about online advertising:

1. According to the latest study, the social networking site Facebook has reached a maximum of 100 billion hits
per day. [1]

2. The research firm The Kelsey Group has projected that worldwide online advertising will hit $147 billion by
2012, as part of its report The Kelsey Group's Annual Forecast (2007-2012): Outlook for Directional and
Interactive Advertising. As it stands, the group measures the market at $45 billion for last year (2007), a
compound annual growth rate (CAGR) of 23.4 percent. Interactive advertising, which comprises search
(including local search), display advertising, classifieds and other interactive ad products, grew its share of global
advertising revenues from 6.1 percent in 2006 to 7.4 percent in 2007. By 2012, Kelsey Group analysts expect the
interactive share of global ad spending to reach 21 percent. Focusing on the US alone, the group predicts that
interactive advertising revenues will grow from $22.5 billion in 2007 to $62.4 billion in 2012 (22.6 percent
CAGR). [2]
3. Finally, we can see the highest growth report of the online advertising giant Facebook. It is the world's
fastest growing social networking website and has changed the face of social networks. [3]

4. Expert reports on the growth of online advertising in the coming years show the shining future of online
advertising.

Meanwhile, total media spending is forecast to increase by only 3.0 percent, 1.2 percent, 4.5 percent, 2.0
percent and 3.7 percent over the same five years. [4]

Conclusion:
On the basis of various research studies and the current market scenario, online advertising is the fastest growing
advertising medium today. Online advertising has changed the meaning of advertising; the future of advertising is
spread before us. The reasonable rates and huge consumer base attract advertisers strongly. It is a tough task to
displace traditional media, but the next age will surely be the age of digital advertising.

References:
[1]. http://www.webpronews.com/topnews/2010/07/21/facebook-gets-100-billion-hits-per-day#close=1

[2]. http://www.marketingpilgrim.com/2008/02/online-advertising-growth-rate-at-least-20-a-year.html

[3]. http://techcrunch.com/2007/07/06/facebook-users-up-89-over-last-year-demographic-shift/

[4]. http://www.webpronews.com/emarketer-forecasts-big-increases-in-online-ad-spending-2010-12

SECURE DWT BASED BIOMETRICS INSPIRED STEGANOGRAPHY
NIDHI JASWAL, PREETI JAYBHAR, ROHIT GULABANI, ISHA DHAR, SHRUTI SHINDE
nj0989@gmail.com, preeti.jay16@gmail.com, gulabani.rohit@gmail.com, ishadhr@gmail.com,
shruti.shinde17@gmail.com
Computer Department, Pune University,
Maharashtra Academy of Engineering
Pune, Maharashtra 411015, India

Abstract
Steganography is the science of concealing the existence of data in another transmission medium. It does not replace
cryptography but rather boosts security through obscurity. The proposed method is biometric steganography, where
the biometric feature used to implement steganography is the skin tone region of images. The proposed method
introduces a new way of embedding secret data within the skin portion of the image of a person, as this region is not
very sensitive to the HVS (Human Visual System). Instead of embedding secret data anywhere in the image, it is
embedded only in the skin tone region, which provides an excellent secure location for data hiding. So, first, skin
detection is performed on the cover image, and then secret data embedding is performed in the DWT domain, as
DWT gives better performance than DCT under compression. This biometric method of steganography is more
robust than existing methods.
Keywords: Steganography; Biometrics; DWT.


1. Introduction
Steganography is an alluring art, and increasing research and experimentation have given it the status of a vast
domain in itself. Basically, steganography is defined as the science of hiding (embedding) data in a transmission
medium. The modern framework of steganography is based on the prisoners' problem: Alice and Bob, two inmates,
wish to communicate to hatch an escape plan, so covert communication is used, since all their communication is
examined by the warden, Wendy. This is illustrated in Fig 1.

Fig 1: General Model of Steganography
The most important research which laid a strong foundation for this paper is the 2008 IEEE paper entitled
"Biometrics Inspired Digital Image Steganography" by Abbas Cheddad, Joan Condell, Kevin Curran and Paul Mc
Kevitt. They proposed the use of human skin tone detection in colour images to form an adaptive context for an edge
operator, which provided an excellent secure location for data hiding.
Also in 2009, Ali Al-Ataby and Fawzi Al-Naima published a paper entitled "A Modified High Capacity Image
Steganography Technique Based on Wavelet Transform", in which they propose a modified high-capacity image
steganography technique based on the wavelet transform, with acceptable levels of imperceptibility and distortion
in the cover image and a high level of overall security. It was found that the proposed method allows a high payload
(capacity) in the cover image with very little effect on its statistical nature.
The paper "High Capacity And Security Steganography Using Discrete Wavelet Transform" by K B Raja and H. S.
Manjunath Reddy treads further on the aspect of secure transmission of images over the internet using Discrete
Wavelet Transform based image steganography.
In the near future, the most important use of steganographic techniques will probably lie in the field of digital
watermarking. Content providers are eager to protect their copyrighted works against illegal distribution, and digital
watermarks provide a way of tracking the owners of these materials. Although a watermark will not prevent the
distribution itself, it will enable the content provider to take legal action against violators of the copyright, as they
can now be tracked down. Steganography might also become limited under law, since governments have already
claimed that criminals use these techniques to communicate. More restrictions on the use of privacy-protecting
technologies are not unlikely, especially in this period of great anxiety about terrorist and other attacks.
2. Data Hiding Issues
All Steganographic algorithms have to comply with a few basic requirements. They are as follows:-
Invisibility - The invisibility of a steganographic algorithm is the first and foremost requirement, since the strength
of steganography lies in its ability to go unnoticed by the human eye. The moment one can see that an image has
been tampered with, the algorithm is compromised.
Robustness against statistical attacks - Statistical steganalysis is the practice of detecting hidden information by
applying statistical tests to image data. Many steganographic algorithms leave a signature when embedding
information that can easily be detected through statistical analysis. To pass by a warden without being detected, a
steganographic algorithm must not leave a statistically significant mark in the image.
Robustness against image manipulation - During communication of a stego image by trusted systems, the image
may undergo changes by an active warden in an attempt to remove hidden information. Image manipulation, such as
cropping or rotating, can be performed on the image before it reaches its destination. Depending on the manner in
which the message is embedded, these manipulations may destroy the hidden message. It is preferable for
steganographic algorithms to be robust against both malicious and unintentional changes to the image.
Independence of file format - With many different image file formats used on the Internet, it might seem suspicious
if only one type of file format is continuously communicated between two parties. The most powerful
steganographic algorithms thus possess the ability to embed information in any type of file. This also solves the
problem of not always being able to find a suitable image at the right moment, in the right format, to use as a cover
image.
Unsuspicious files - This requirement covers all characteristics of a steganographic algorithm that may result in
images that appear abnormal and arouse suspicion. An abnormal file size, for example, is one property of an
image that can prompt further investigation of the image.
3. Image Steganography
Given the proliferation of digital images, especially on the Internet, and given the large amount of redundant bits
present in the digital representation of an image, images are the most popular cover objects for steganography. In the
domain of digital images many different image file formats exist, most of them for specific applications. For these
different image file formats, different steganographic algorithms exist.

Image steganography techniques can be divided into two groups: those in the image domain and those in the
transform domain. Image domain techniques, also known as spatial domain techniques, embed messages directly
in the intensity of the pixels, while in the transform domain, also known as the frequency domain, images are first
transformed and then the message is embedded in the transformed image. Image domain techniques encompass bit-wise methods that apply bit insertion
and noise manipulation and are sometimes characterized as simple systems. The image formats that are most
suitable for image domain steganography are lossless and the techniques are typically dependent on the image
format. Steganography in the transform domain involves the manipulation of algorithms and image transforms.
These methods hide messages in more significant areas of the cover image, making it more robust. Many transform
domain methods are independent of the image format and the embedded message may survive conversion between
lossy and lossless compression. In the next sections steganographic algorithms will be explained in categories
according to image file formats and the domain in which they are performed.
4. Skin Detection
Skin color has proven to be a useful and robust cue for face detection, localization and tracking. Image content
filtering, content-aware video compression and image color balancing applications can also benefit from automatic
detection of skin in images. Face detection and tracking have been topics of extensive research for the past several
decades. Many heuristic and pattern-recognition-based strategies have been proposed for achieving a robust and
accurate solution. Among feature-based face detection methods, those using skin color as a detection cue have
gained strong popularity. Color allows fast processing and is highly robust to geometric variations of the face
pattern. Experience also suggests that human skin has a characteristic color which is easily recognized by
humans. When building a system that uses skin color as a feature for face detection, the researcher usually faces
three main problems: first, what color space to choose; second, how exactly the skin color distribution should be
modeled; and finally, how the color segmentation results will be processed for face detection. In this
section we discuss pixel-based skin detection methods, which classify each pixel as skin or non-skin individually,
independently of its neighbors. In contrast, region-based methods try to take the spatial arrangement of skin
pixels into account during the detection stage to enhance performance.
5. Color Spaces
Colorimetry, computer graphics and video signal transmission standards have given birth to many color spaces with
different properties. A wide variety of them have been applied to the problem of skin color modeling.
RGB - RGB is a color space that originated in CRT (or similar) display applications, where it was convenient to
describe color as a combination of three colored rays (red, green and blue). It is one of the most widely used color
spaces for processing and storing digital image data. However, the high correlation between channels, significant
perceptual non-uniformity, and the mixing of chrominance and luminance data make RGB not a very favorable
choice for color analysis and color-based recognition algorithms.

YCbCr - YCbCr is an encoded nonlinear RGB signal, commonly used by European television studios and for image
compression work. Color is represented by luma (luminance computed from nonlinear RGB), constructed as a
weighted sum of the RGB values, and two color difference values, Cr (chrominance red) and Cb (chrominance
blue), formed by subtracting luma from the RGB red and blue components. The simplicity of the transformation and
the explicit separation of luminance and chrominance components make this color space attractive for skin color
modeling.
a) Y = 0.299R + 0.587G + 0.114B;   b) Cr = R - Y;   c) Cb = B - Y
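As a minimal sketch, the per-pixel conversion defined by equations (a)-(c) can be written directly in code. The function name is ours; it assumes R, G and B are given as numeric intensity values.

def rgb_to_ycbcr(r, g, b):
    # Luma as a weighted sum of the nonlinear RGB components (equation a)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = r - y   # chrominance red: red component minus luma (equation b)
    cb = b - y   # chrominance blue: blue component minus luma (equation c)
    return y, cb, cr

# Example: convert one typical skin-toned pixel
print(rgb_to_ycbcr(200, 140, 120))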
HSV (Hue, Saturation, Value) - Hue-saturation based colorspaces were introduced when there was a need for the
user to specify color properties numerically. They describe color with intuitive values, based on the artist's idea of
tint, saturation and tone. Hue defines the dominant color (such as red, green, purple or yellow) of an area;
saturation measures the colorfulness of an area in proportion to its brightness; and the intensity, lightness or value is
related to the color luminance. The intuitiveness of the colorspace components and the explicit discrimination
between luminance and chrominance properties have made these colorspaces popular in work on skin color
segmentation. Several interesting properties of hue have been noted: it is invariant to highlights at white light
sources and, for matte surfaces, to ambient light and surface orientation relative to the light source. However,
several undesirable features of these colorspaces have also been pointed out, including hue discontinuities and the
computation of brightness (lightness, value), which conflicts badly with the properties of color vision.
6. Skin modelling
The final goal of skin color detection is to build a decision rule that will discriminate between skin and non-skin
pixels. This is usually accomplished by introducing a metric that measures the distance (in a general sense) of the
pixel color from skin tone. The type of this metric is defined by the skin color modeling method.
7. Explicitly Defined Skin Region
One method to build a skin classifier is to define explicitly (through a number of rules) the boundaries of the skin
cluster in some color space. For example, (R, G, B) is classified as skin if:
R > 95 and G > 40 and B > 20 and
max{R, G, B} - min{R, G, B} > 15 and |R - G| > 15 and R > G and R > B

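A direct implementation of this explicit rule might look as follows (a sketch in plain Python; the function name is ours, while the thresholds are exactly those listed above):

def is_skin(r, g, b):
    # Explicit RGB skin-cluster rule from the text above
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and
            r > g and r > b)

# Example: classify two pixels
print(is_skin(200, 140, 120))  # True: falls inside the skin cluster
print(is_skin(60, 90, 200))    # False: blue-dominant pixel

Applying such a function to every pixel of the cover image yields the binary skin mask used as the region of interest for embedding.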
The simplicity of this method has attracted (and still attracts) many researchers. Its obvious advantage is the
simplicity of the skin detection rules, which leads to the construction of a very rapid classifier. The main difficulty
in achieving high recognition rates with this method is the need to find both a good color space and adequate
decision rules empirically. Recently, a method has been proposed that uses machine learning algorithms to find both
a suitable color space and a simple decision rule that achieve high recognition rates. The authors start with a
normalized RGB space and then apply a constructive induction algorithm to create a number of new sets of three
attributes, each a superposition of r, g, b and the constant 1/3, constructed by basic arithmetic operations. A decision
rule achieving the best possible recognition is estimated for each set of attributes. The authors prohibit the
construction of overly complex rules, which helps avoid the data over-fitting that is possible when the training set is
not representative. They achieved results that outperform the Bayes skin probability map classifier in RGB space on
their dataset.
8. 2D Haar DWT
The frequency domain transform applied in this research is the Haar-DWT, the simplest DWT. A 2-dimensional
Haar-DWT consists of two operations: one is the horizontal operation and the other is the vertical one. The detailed
procedure of a 2-D Haar-DWT is described as follows:
Step 1: First, scan the pixels from left to right in the horizontal direction. Then perform addition and subtraction
operations on neighboring pixels. Store the sum on the left and the difference on the right, as illustrated in Fig 2.
Repeat this operation until all the rows are processed. The pixel sums represent the low frequency part (denoted by
the symbol L), while the pixel differences represent the high frequency part of the original image (denoted by the
symbol H).


Fig 2: Horizontal Operation on the first row
Step 2: Next, scan the pixels from top to bottom in the vertical direction. Perform addition and subtraction
operations on neighboring pixels, and then store the sum on the top and the difference on the bottom, as illustrated in
Fig 3. Repeat this operation until all the columns are processed. Finally we obtain 4 sub-bands, denoted
LL, HL, LH and HH respectively. The LL sub-band is the low frequency portion and hence looks very similar to
the original image.


Fig 3: Vertical Operation
The whole procedure which has been described above is called the first-order 2-D Haar-DWT.
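The two steps above can be sketched in a few lines of Python with NumPy. This is our own minimal rendering of the first-order 2-D Haar-DWT as described here: it assumes an image with even width and height and uses plain neighboring sums and differences, without normalization.

import numpy as np

def haar_dwt_2d(img):
    img = img.astype(int)
    # Step 1 (horizontal): sums of neighboring column pairs on the left (L),
    # differences on the right (H)
    L = img[:, 0::2] + img[:, 1::2]
    H = img[:, 0::2] - img[:, 1::2]
    temp = np.hstack([L, H])
    # Step 2 (vertical): sums of neighboring row pairs on top, differences below,
    # giving the four sub-bands LL, HL (top) and LH, HH (bottom)
    top = temp[0::2, :] + temp[1::2, :]
    bottom = temp[0::2, :] - temp[1::2, :]
    return np.vstack([top, bottom])

# Example: transform a 4x4 test block
block = np.arange(16).reshape(4, 4)
print(haar_dwt_2d(block))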

9. Proposed Secure Steganography Method
After completing the skin segmentation, the chosen message image has to be embedded in the region of interest.
This is achieved with the encoder. The output of the encoder is the stego image, which in appearance is like the
cover image but contains the message image. This aim is achieved using the DWT (discrete wavelet transform);
the frequency domain transform applied in this research is the 2D Haar-DWT, the simplest DWT. Because of its
inherent multi-resolution nature, the discrete wavelet transform is suitable for applications where scalability and
tolerable degradation are important. One of the most important security features, apart from the two keys, is the
IDWT. The IDWT is also taken at the encoder end. This is done to increase security, so that if a steganalyser
(attacker) tries to take the IDWT of the stego image to get the secret message, the secret message becomes
completely distorted.
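The exact embedding algorithm of this paper is the encoder shown in Fig 4. Purely as an illustration of the general idea of hiding bits in DWT coefficients of the skin region, the simplified sketch below replaces the least significant bit of high-frequency (HH) coefficients at skin-detected positions. This is our own simplified stand-in, not the authors' scheme, and all names in it are hypothetical.

import numpy as np

def embed_bits(hh_band, skin_mask, bits):
    # Hide one message bit in the LSB of each HH coefficient lying in the skin region.
    hh = hh_band.astype(int).copy()
    positions = np.argwhere(skin_mask)
    for (r, c), bit in zip(positions, bits):
        hh[r, c] = (hh[r, c] & ~1) | bit   # clear the LSB, then set it to the message bit
    return hh

# Example: embed the bits 1, 0, 1 into a toy 2x2 HH band where all pixels are "skin"
hh = np.array([[10, 7], [4, 9]])
mask = np.ones((2, 2), dtype=bool)
print(embed_bits(hh, mask, [1, 0, 1]))   # [[11 6] [5 9]]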


Fig 4: Proposed Encoder
The decoder end has purposely not been made the exact inverse of the encoder. Keeping in mind the general
tendency to assume that the decoder is the inverse of the encoder, we have implemented the IDWT at the encoder
end and no inverse transform at the decoder end.


Fig 5: Proposed Decoder
10. Performance Analysis
Fig 6: Obtaining the Stego Image


Fig 7: Encoder

Fig 8: Decoder


Fig 9: Histogram analysis of Original and Stego Image


Fig 10: PSNR Value
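The PSNR value reported in Fig 10 is the standard peak signal-to-noise ratio between the cover and stego images. As a reference, it can be computed as follows; this is a generic sketch for 8-bit images, not code from the paper.

import numpy as np

def psnr(cover, stego):
    # Mean squared error between cover and stego images
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    if mse == 0:
        return float('inf')      # identical images
    max_pixel = 255.0            # peak value for 8-bit images
    return 10 * np.log10(max_pixel ** 2 / mse)

Higher PSNR values indicate that the stego image is closer to the cover image, i.e. the embedding is less perceptible.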

11. Conclusion
Digital steganography is a fascinating scientific area which falls under the umbrella of security systems. The
proposed framework is based on steganography that uses a biometric feature, i.e. the skin tone region. Skin tone
detection plays a very important role in biometrics, and the skin region can be considered a secure location for data
hiding. Secret data embedding is performed in the DWT domain rather than the DCT domain, as DWT outperforms
DCT. Using biometrics, the resulting stego image is more tolerant to attacks and more robust than existing methods.
REFERENCES
Text References:
[1] "Biometrics Inspired Digital Image Steganography" by Abbas Cheddad, Joan Condell, Kevin Curran and Paul Mc
Kevitt, School of Computing and Intelligent Systems, Faculty of Computing and Engineering (15th Annual IEEE
International Conference and Workshop on the Engineering of Computer Based Systems, 978-0-7695-3141-0/08
$25.00 2008 IEEE, DOI 10.1109/ECBS.2008.11.159).
[2] "A Skin Tone Detection Algorithm for an Adaptive Approach to Steganography" by Abbas Cheddad, Joan
Condell, Kevin Curran and Paul Mc Kevitt, School of Computing and Intelligent Systems, Faculty of Computing
and Engineering, University of Ulster, BT48 7JL, Londonderry, Northern Ireland, United Kingdom.

[3] "A DWT Based Approach for Image Steganography" by Po-Yueh Chen and Hung-Ju Lin, Department of
Computer Science and Information Engineering, National Changhua University of Education, No. 2 Shi-Da
Road, Changhua City 500, Taiwan, R.O.C.
[4] "A Survey on Pixel-Based Skin Color Detection Techniques" by Vladimir Vezhnevets, Vassili Sazonov and
Alla Andreeva, Graphics and Media Laboratory, Faculty of Computational Mathematics and Cybernetics,
Moscow State University, Moscow, Russia.
Online Link References:
1. http://en.wikipedia.org/wiki/stegnalysis
2. http://www.jjtc.com/Steganography/
3. http://www.petitcolas.net/fabien/steganography/
4. www.google.co.in

SIMULATION BASED PERFORMANCE ANALYSIS OF WIRED
COMPUTER NETWORKS

Rahul Malhotra¹, Vaibhav Nijhawan²
¹,²Department of Electronics & Communication Engineering
¹Bhai Maha Singh College of Engineering and Technology, Muktsar
²Allenhouse Institute of Technology, Kanpur
¹blessurahul@gmail.com, ²vaibhav.nijhawan@gmail.com

Abstract
Simulation is the process of assessing a real-world scenario in a virtual way. The purpose of simulation modeling
is to ease the understanding of real working situations and to study system behavior and reactions during particular
events. It is the application of computational models to the study and prediction of physical events or the behavior of
engineered systems. The selection of a particular network simulator involves factors such as availability, relative
ease of use, network modeling flexibility, code reusability, hardware and software requirements, and output reports
and graphical plots. In this paper, the performance of a wired network is analyzed under variable data packet sizes
of 500, 1000 and 1500 bits.

Keywords: OPNET, Packet Size, Wired Networks.


1. Introduction
A wired network is an interconnection of remote nodes through a physical infrastructure of wires and cables. It
provides the advantages of fast, reliable, secure and long-haul communication. Wired communications are most
suitable for environments with fixed entities, like home and office networks, and for high data rate environments
where dedicated links are necessary, such as the link between a keyboard and the central processing unit (CPU).
The primary parts of a wired network are network cables, network adapters, hubs, switches, routers and
process-governing software such as the transmission control protocol/internet protocol (TCP/IP), token ring and the
fiber distributed data interface (FDDI). The processing capabilities and methodologies of the network may be varied
slightly by using different network processing software. The Institute of Electrical and Electronics Engineers (IEEE)
802.3 Ethernet model, working on the transmission control protocol/internet protocol (TCP/IP) model, is the most
famous of all wired local area network models. The most common Ethernet links used are 10BaseT, 100BaseT and
1000BaseX, providing data transfer speeds of 10 Mbps, 100 Mbps and 1 Gbps respectively. Fiber optic variants of
Ethernet offer high performance, electrical isolation and a network span of up to tens of kilometers. The backbone
of the internetwork of interconnected networks (the INTERNET) relies on the huge worldwide established wired
network known as the public switched telephone network (PSTN). Some limitations of wired networks include
immobility, difficult upgrading, difficult fault diagnosis, elaborate infrastructure, intricate installation and
non-availability at isolated places.

2. Computer Networks
A computer network is an interconnection of two or more individual computer systems that can transfer
data, share resources or work together to perform some specific task. Computer networks are classified based upon
their topology, geographical span, structure and media. The choice of computer network depends upon the
application to be performed, the cost factor, the performance level and the speed of data transfer. However, the basic
criteria that each computer network must fulfill include performance, reliability and security. The performance of a
computer network is a measure of its throughput and delay. Throughput is the average rate of successful delivery of
data over the network, whereas delay is the average time difference between the transmission and reception of data
packets. Data in the form of text, numbers, images, audio and video can be transferred over the computer network
electronically. This electronic data is converted to either analog or digital form before being transmitted over the
network. Data transfer in digitized format is immune to noise but requires more bandwidth for transmission than
data transfer in analog form. Presently, digital data communication has superseded the traditional analog way of
data transmission because it meets the requirements of future data transfer scenarios, which include higher data
rates, improved quality of service (QoS), impregnable security and complete reliability of computer networks.
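Since throughput and delay are the two performance measures used throughout this paper, a minimal sketch of how they are computed from packet records may be helpful. This is our own illustrative code; the send/receive timestamps and sizes are assumed inputs, not output of any particular simulator.

# Each record: (send_time_s, receive_time_s, size_bits) for one delivered packet.
packets = [(0.0, 0.0012, 1000), (0.5, 0.5013, 1000), (1.0, 1.0011, 1000)]

total_bits = sum(size for _, _, size in packets)
duration = max(rx for _, rx, _ in packets) - min(tx for tx, _, _ in packets)

throughput_bps = total_bits / duration                            # successful delivery rate
avg_delay_s = sum(rx - tx for tx, rx, _ in packets) / len(packets)

print(f"throughput = {throughput_bps:.1f} bps, average delay = {avg_delay_s * 1000:.2f} ms")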

3. Architecture of Computer Network
The structure of the computer network is defined by the relationship between the network nodes. A
computer network may have a client server model or a peer to peer model.

3.1 The Client Server Model
The client server network model is based on a consumer-producer relationship, where the server acts as the
producer and provides services and resources to client nodes. The server always listens to the client nodes, whereas
a client node initiates the communication. Electronic mail and web browsing are commonly used client server
models. It is easy to upgrade and repair the server without disturbing any other node or the functionality of the
network. Data storage is centralized, which keeps the data more secure and safe from theft. However, the server
gets overloaded upon receiving excessive requests and queries from client computers, and the network is not robust,
as it depends solely on the server.

3.2 The Peer to Peer Model
In the peer-to-peer model there is no central controller or server; all the nodes in the network manage the
communication and resource sharing themselves and enjoy equal privileges. All the network nodes share some
portion of their resources, such as disk storage and processing power, with other nodes on the network. Napster is an
example of an online peer-to-peer file-sharing service. The network model is robust due to distributed control, and it
is also economical and easy to install. However, system administration is difficult due to the distributed control,
which makes it less reliable in terms of data security.


4. Introduction to Simulation
Simulation is the process of assessing a real-world scenario in a virtual way. The purpose of simulation
modeling is to ease the understanding of real working situations and to study system behavior and reactions during
particular events. It is the application of computational models to the study and prediction of physical events or the
behavior of engineered systems. Computer simulation is an indispensable tool for resolving a multitude of scientific
and technological problems. In the context of wired networks, simulators are used for the development and
validation of new algorithms and for testing network capacity and efficiency under specific scenarios. Network
simulators facilitate this task by providing a framework in which the desired network configurations can be
assembled virtually and virtual traffic loads can be introduced over the network. Traffic flows across the network
and measurements can be taken without perturbing the system. Network simulators are classified into two
categories, viz. protocol simulators and technology or processing simulators. Both categories house two methods of
simulation, namely discrete event and analytical simulation. Discrete event simulation produces predictions at a low
level, packet by packet, making them accurate but slow to generate results. The analytic method, in contrast, uses
mathematical models to produce results at much higher speed but may sacrifice accuracy. Usually hybrid simulators
are used, which combine both simulation technologies and provide reasonable performance in terms of speed and
accuracy. Some common network simulation tools are the optimized network engineering tool (OPNET), the
network simulator (NS2) and the global mobile information system simulator (GLOMOSIM). Network simulators
can be differentiated based on features such as: whether the simulator has a character user interface (CUI) or a
graphical user interface (GUI); the richness of its model repositories and animation capabilities; its level of
extensibility, i.e. the ability to create new models or extend existing models and analysis tools; its customization
mechanisms, including the availability of scripting features or graphical configuration methods; whether it is
realized and used as a network simulation language or as simulator object class libraries; whether it supports
sequential execution (using one processor) or parallel distributed execution, a factor that determines the overall
speed of execution; whether it can simulate wired networks, wireless networks or both; the availability of predefined
models for commercial networking products, such as specific Cisco router products; and the fidelity range, which
determines the level of detail supported by the models, such as detailed packet models or aggregate fluid models.
The selection of a particular network simulator involves factors such as availability, relative ease of use, network
modeling flexibility, code reusability, hardware and software requirements, and output reports and graphical plots.
However, the most important factor in choosing a particular simulator depends solely on the problem domain.

5. Introduction to Network Model
5.1 Network Model: The Ethernet LAN, comprising 8 stations in a star topology, is connected to a central
Ethernet 16 switch, and a server is installed in a 100 by 100 meter office area. The LAN uses 10BaseT cable and
variable data packet sizes of 500, 1000 and 1500 bits.


Figure 1: OPNET Project Editor Window for Network Model.

5.2 Methodology: The LAN is tested under varying data packet sizes of 500, 1000 and 1500 bits to compare the
network parameters delay-global (sec), delay-node (sec), load (bits/sec) for the node (nodevary) and traffic
received (bits/sec) for the node (nodevary). The simulation lasted 30 minutes.


5.3 Network Result Analysis: A local area network based on the Ethernet model with 8 fixed nodes has been
tested for the parameters delay-global (sec), delay-node (sec), load (bits/sec) and traffic received (bits/sec) for the
node (nodevary) under varying data packet size. The simulation time was set to 30 minutes.

5.3.1 Variation in Delay-Global (Sec): On varying the data packet size from 500 bits to 1000 bits to 1500 bits, the
delay-global (sec) increases slightly from 1.15 milliseconds to over 1.30 milliseconds over the 30-minute period,
whereas the delay-node (sec) for the node (nodevary) remains independent of the packet size, as computed by the
simulation results.


Figure 2: Variation in delay (global) and delay (node) for node (nodevary)


5.3.2 Variation in Load: The load (bits/sec) of the node (nodevary) increases with an increase in data packet size
from 500 bits to 1000 bits to 1500 bits. A sharp increase in load (bits/sec) for the node (nodevary) is observed after
15 simulation minutes, whereas an exponential decrease in load (bits/sec) can be observed at 30 minutes of
simulation time. However, at any point in time, the load (bits/sec) is greater for a packet size of 1500 bits than for
packet sizes of 1000 bits and 500 bits respectively.



Figure 3: Variation in load (bits/sec) for node (nodevary)

5.3.3 Variation in Traffic Received: The traffic received (bps) for the node (nodevary) remains constant as the
data packet size varies from 500 bits to 1000 bits to 1500 bits; the simulation results verify this.

Figure 4: Variation in traffic received (bits/sec)

6. Conclusion
The results obtained by simulating the network verify that an increase in data packet size slightly increases the
delay-global (sec) in the network; an increase of 500 bits in data packet size increases the delay by 20 milliseconds.
The increase in the number of bits per data packet places excessive load on the links, even though the traffic remains
the same, and the excessive load on the links increases the delay parameter. No effect is observed in the delay-node
(sec) of the node (nodevary), because the delay contributed by a single node is negligible compared to the delay
caused by all the nodes on the network. The load (bits/sec) of the node (nodevary) increases as the data packet size
increases from 500 bits to 1000 bits to 1500 bits: an increase in the number of bits per packet of the node (nodevary)
contributes additional bits on the network links, and consequently, by the definition of load, the load (bits/sec) of the
node (nodevary) increases. The traffic received (bps) pattern for the node (nodevary) remains independent of the
variation in data packet size from 500 bits to 1000 bits to 1500 bits; an increase in the number of bits per packet has
no effect on traffic received (bps), because traffic received is the number of data packets received by a node in one
second, which remains constant during the process.

TABLE 1: RESULT COMPARISON

NETWORK ATTRIBUTES                                          | NETWORK PARAMETERS
Type of Cable | Number of Nodes | Packet Size (bits)        | Delay (ms)   | Load (bps)   | Traffic (bps)
10BaseT       | 8               | 500                       | 1.25 approx. | 250 approx.  | 580 approx.
10BaseT       | 8               | 1000                      | 1.25 approx. | 500 approx.  | 580 approx.
10BaseT       | 8               | 1500                      | 1.30 approx. | 750 approx.  | 580 approx.



Effective Car Monitoring and Tracking Model
¹Shaimaa M. Abdalla, MESC
Department of Electrical and Computer Engineering, Kulliyyah of Engineering, IIUM, Malaysia
Shaimaa_mother@yahoo.com

²Shihab A. Hameed, Assoc. Prof. Dr
Department of Electrical and Computer Engineering, Kulliyyah of Engineering, IIUM, Malaysia
Sh_ahmed01@yahoo.com

³Aisha Hassan Abdalla, Assoc. Prof. Dr
Department of Electrical and Computer Engineering, Kulliyyah of Engineering, IIUM, Malaysia
aisha@iiu.edu.my


Abstract
In our everyday life, car theft is a major issue that has been widely discussed all over the world. Many types of car
security systems are in use; however, they are still inadequate. This project, entitled Automobile Monitoring and
Tracking System, is proposed to solve this issue by integrating monitoring and tracking, both of which are crucial
for a powerful security system. Using a camera and MMS technology, a picture of the intruder is sent via a local
GSM/GPRS service provider to the user and the police. Furthermore, the project is directly incorporated with the
Malaysian community police (Rakan Cop) server. In the event of theft, the local police and the user can easily track
the car using the GPS system, which can be linked to Google Maps and other mapping software.


1. Introduction
As a matter of fact, the number of cars is growing rapidly, which is reflected in the total number of car theft
attempts. A variety of car security systems have been introduced recently on the automobile market. However, these
systems are not sufficiently secure and, as a result, the number of stolen cars has been increasing. Even though these
systems use different technologies, thieves are still discovering new methods and stealing techniques.
The main idea of this research is to enhance the existing alert system for cars. Based on the research study, existing
systems fail to use the most developed sensors effectively and are not capable of alerting users within a minimal
time. In most cases, the best and most complete details can be collected if the users are informed directly by an alert
message.


2.0 Components and specifications

2.1 Sensors
The sensors used for this project are a tilt sensor and an infrared sensor; the latter is suitable for the car. The
magnetic sensor consists of two magnetic bars coupled together between the doors. A wire is connected to the
microcontroller to detect changes in voltage. When the door is closed, there is a constant flow of current in the
wire; when the door is opened, the voltage changes, and this subsequently triggers the alarm [1].

2.2 Microcontroller PIC-18 F4520
PIC 18 microcontrollers are low-power, high-performance devices with programmable memory [2].

2.3 GSM/GPRS Modems
GPRS is used for Wireless Application Protocol (WAP) access, the Short Message Service (SMS), the Multimedia
Messaging Service (MMS) and Internet communication services. GPRS data transfer is charged per megabyte of
traffic transferred, while data communication via traditional circuit switching is billed per minute of connection
time. With GPRS, a certain Quality of Service (QoS) is guaranteed during the connection for non-mobile users [3].

2.4 GPS Receiver
A GPS receiver receives radio signals from the 24 satellites orbiting the Earth and can calculate its own location
accurately to within a few feet. This location information must be transmitted to a base station to be displayed on a
computerized map [4]. The GPS has three start-up modes: Hot Start, Warm Start and Cold Start. The
Time-To-First-Fix (TTFF) depends on the start-up mode, with cold starts giving the longest TTFF. The almanac
contains satellite orbit information and allows the GPS receiver to predict which satellites are overhead, shortening
acquisition time. The GPS receiver must have a valid almanac to be capable of booting up in warm or hot start
modes [7]. The receiver must have a continuous fix for approximately 15 minutes to receive a complete almanac
from the satellites; once downloaded, it is stored in non-volatile memory. Execution of a cold start automatically
results in a new almanac download. Ephemeris data contains precision corrections to the almanac data and is
required for accurate positioning.


2.5 Nokia 3200
The NOKIA 3200 is the cheapest color mobile phone with an integrated VGA camera module available in the
Malaysian market. The phone's integrated camera is capable of supporting the MMS function [7].

2.6 Database
A database is a structured collection of data organized to meet the needs of a community of users. The software
that organizes and stores the data on a computer is a database management system (DBMS) [5].




2.7 Rakan Cop
Rakan Cop allows the public to have two-way communication with the police via SMS, hotline, multimedia
messaging (MMS) and email when criminal activities are reported. Central coordination is provided by the 24-hour
Police Control Centre.


3.0 Proposed design
Inside the car, the alarm system circuit powers up once the user locks the car. Magnetic and infrared sensors are
placed at specific places and act as detectors that output a high voltage when triggered, i.e. when a door is opened
illegally. The PIC microcontroller then sends a signal to operate the Nokia 3200: the camera application opens and
captures a picture, which is sent via MMS to the user through a GSM provider (MAXIS, CELCOM, DIGI, etc.).
Outside the car, the user needs to confirm the event by calling. After receiving confirmation of the theft event, the
hand phone sends the same MMS picture to an MMS modem located at the control centre. Using an RS232
cable, the MMS modem is connected to a PC that contains a database of information regarding the car (e.g. color,
plate number, name of owner, and information from GPS). For tracking purposes, a GPS receiver is installed in the
car and integrated with the PIC microcontroller, which sends text messages (SMS) to the database. The control
centre can pinpoint the car's location using mapping software. Finally, the control centre retrieves all the
information needed and sends it to the police Rakan Cop server through email.


Figure 1: Whole system design


4.0 Implementation
This project emphasizes monitoring and tracking. The PIC has been programmed to press the phone's keypad to
execute picture capture, send the MMS and text the position coordinates of the unit. Adding the GPS on board
improves the tracking capability. The prototype can perform monitoring and tracking efficiently.



Figure 2: Warning SMS received by user
Figure 3: MMS received by user


The outside unit consists of a GSM/GPRS modem and a database. The database is programmed using Microsoft
Visual Basic and contains all information about users and their cars. The PC containing the database is connected to
the MMS modem, which receives the MMS from the car and saves its information in a small MS-Access data file.
A small graphical interface was built to receive messages with the MMS modem and to store the received MMS on
the PC. The SMS message from the car containing the GPS coordinates can be linked with Google Earth to pinpoint
the location.
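As a small illustration of this last step, the latitude/longitude pair extracted from the car's SMS can be turned into a map link for the control centre. This is a sketch with an assumed SMS format; the real message format depends on the PIC firmware, and the function name is ours.

def maps_link_from_sms(sms_text):
    # Assumed SMS format: "LAT=3.1412,LON=101.6865"
    fields = dict(part.split('=') for part in sms_text.split(','))
    lat, lon = fields['LAT'], fields['LON']
    return f"https://maps.google.com/?q={lat},{lon}"

print(maps_link_from_sms("LAT=3.1412,LON=101.6865"))
# -> https://maps.google.com/?q=3.1412,101.6865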


Figure 4: Database graphical interface




Figure 5: Linking GPS coordinate with Google Map for tracking



From the data we can see that the total system time is between 3 and 5 minutes. In the data tables above we can
also observe that there are some fixed time delays for certain events, so the fixed delays in the system are
summarized in the next table.

Table 1: Fixed Time Delays in the System

Event                              | Fixed delay (s)
Delay to type SMS alert            | 32
Delay to build MMS message         | 25
Wait of 1 minute to initialize GPS | 60
Delay to type GPS coordinates      | 55
TOTAL                              | 172


Hence, 172 seconds of the system delay are due to the programming and hardware implementation and had to be
compensated for in the overall system.

Table 2: Delay time to receive one MMS (minutes)

Time (2400H)        | CELCOM  | MAXIS   | DIGI
0200                | 2.4557  | 2.7537  | 3.0443
0600                | 3.9389  | 6.8723  | 3.8556
1000                | 3.7668  | 2.1889  | 5.4389
1400                | 10.8056 | 13.8111 | 13.7
1800                | 8.0613  | 4.7612  | 7.5223
2200                | 4.55    | 4.9     | 5.5
Busy Hour (1pm-2pm) | 13.2943 | 13.389  | 13.6





Graph 1: Average MMS Transmission Time in minutes



Graph 2: Average MMS Transmission Time during Busy Hour in minutes

The table and graphs above show the experimentally measured delay times to receive one MMS with different
service providers at different times in the test area, using a Nokia phone. The analysis shows that, on average, the
CELCOM service provider can deliver the same MMS message in less time than the other service providers. This
analysis might give different results on other days; for instance, sometimes MAXIS or DiGi might give a lower
MMS transmission delay, and sometimes vice versa.



Graph 3: Average TTFF vs. Speed




Graph 4: Accuracy (m) vs. Time (minutes)


Since the GPS receiver has a power-up delay while mobile, it is essential to study the relationship between speed
and TTFF. The test was done in the test area, with measurements taken with the GPS receiver connected to a laptop
inside the car. From all the analysis we can safely assume that a delay of 5 to 6 minutes is needed to power up the
GPS receiver in almost all situations. As time increases, the accuracy of the GPS reading improves, because with
more time the GPS receiver can communicate with more satellites and hence improve its tracking capability. The
study therefore shows that a stable and more accurate GPS reading needs a 15-minute delay.

5.0 Conclusion and Future Research

Statistics show that 96% of the public pay no attention when they hear an alarm [6], which shows that the alarm
itself does not contribute much to preventing the car from being stolen. A normal car alarm system does not cover
large areas; the range is less than 100 m. Thus, the car alert system has been upgraded to be technologically
advanced, portable and less expensive. The proposed system has proven to be more efficient in the sense that it uses
multiple sensors as well as MMS to alert the owner and the police about the intruded car. Moreover, it is capable of
real-time tracking during a theft event. With its many advantages, the system has the potential to be widely adopted.
Future research should consider a camera with high resolution, night-mode capability and a size suitable for hiding
it from the thief's view; a SIM card specifically designed for addressing alerts to the service provider; and an
immobilizer integrated into the system for more effective safety.

6.0 References
1. http://www.ecplaza.net/tradeleads/seller/4765700/magnetic_contacts_magnetic.html, p. 34
2. http://www.cytron.com.my/index.asp
3. Noldus, Rogier (2006), CAMEL: Intelligent Networks for the GSM, GPRS and UMTS Network, 1st edition,
   Wiley, pp. 50-55
4. www.pegtech.com/rfgps.htm, p. 33
5. Ben Forta (2005), MySQL Crash Course (Sams Teach Yourself in 10 Minutes), Sams, New York, pp. 70-76
6. Brown, A. L. (1996), Vehicle Security Systems, 2nd edition, Newnes, pp. 8-20


Research Methodology on Agile Modeled Layered Security Architectures for
Web Services
*D.Shravani, **Dr.P.Suresh Varma, ***Dr.B.Padmaja Rani, ****K.Venkateswar Rao and
*****M.Upendra Kumar
*Research Scholar Computer Science Rayalaseema University Kurnool A.P. India
**Principal and Professor CS Adikavi Nannaya University Rajahmundry A.P. India
***Professor CSE JNTU CEH Hyderabad A.P. India
**** Associate Professor CSE JNTU CEH Hyderabad A.P. India
*****Research Scholar CSE JNTU Hyderabad A.P. India
sravani.mummadi@yahoo.co.in vermaps@yahoo.com padmaja_jntuh@yahoo.co.in
kvenkateswarrao_jntuh@yahoo.co.in uppi_shravani@rediffmail.com


Abstract
Software engineering covers the definition of processes, techniques and models suitable for its environment to
guarantee the quality of results. An important design artifact in any software development project is the software
architecture, and an important part of any software architecture is its set of architectural design rules. A primary goal
of the architecture is to capture the architectural design decisions, an important part of which consists of
architectural design rules. In an MDA (Model-Driven Architecture) context, the design of the system architecture is
captured in the models of the system. MDA is known as a layered approach to modeling architectural design rules
and uses design patterns to improve the quality of the software system. To add security to the software system,
security patterns are introduced that offer security at the architectural level. Moreover, agile software development
methods are used to build secure systems. Several methods are defined in agile development, such as extreme
programming (XP), scrum, feature driven development (FDD) and test driven development (TDD). Agile processing
includes the phases of agile analysis, agile design and agile testing. These phases are defined in the layers of MDA
to provide security at the modeling level, which ensures that addressing security at the system architecture stage will
improve the requirements for that system. Agile modeled layered security architectures increase the dependability of
the architecture in terms of privacy requirements. We validate this with a case study on the dependability of privacy
in Web Services security architectures, which supports a secure service-oriented security architecture. This paper
discusses the latest advances in securing the software development process at an early stage. The primary focus is on
security considerations early in the life cycle, i.e. at the system architecture stage, which has the potential to improve
requirements engineering for the software system; the ultimate goal is a better quality product. We first discuss the
research methodology for designing dependable agile layered Web Services security architecture solutions, with an
extension to web-engineered business intelligence applications. Finally, we discuss a case study of a secure spatial
mobile Web Services application.
Keywords: Security Architectures, Agile Modeling, Web Services, Dependability, Spatial Mobile application


1. Introduction to Agile Modeled Layered Security Architectures
Computer security is rapidly becoming a critical business concern for companies, and security architects and
engineers must rethink the way their software applications are built [9]. Mission-critical applications should be
protected, and the privacy and integrity of their data should be maintained. From a software engineering viewpoint,
system security architecture imposes that strong security must be a guiding principle of the entire development
process. It describes a way to weave security into the systems architecture, and it identifies common patterns of
implementation found in most security products. Integrating security products into applications to satisfy corporate
security requirements is a major task. Security principles, software process, and the use of security technologies for
cryptography, applications, databases and operating systems security should be the major research agenda. Security
issues related to other architectural goals include high availability, robustness, reconstruction of events, ease of use,
maintainability, adaptability and evolution, scalability, interoperability, performance, portability, etc.
Agile implies that the security requirements have changed because the threats have changed. Defensive security
uses the OODA cycle: Observe (sense change in the environment), Orient (analyze the meaning and importance of
this change), Decide (determine an ideal strategy taking advantage of the change) and Act (implement this strategy).
Layered implies both multilevel security and multilateral security; the various layers include application domain,
application, temporal, distribution, data and resource. Secure programming states that software developers are the
first and best line of defense for the security of their code, and good coding practices lead to secure code. The
security architecture must define reusable security services that allow developers who are not security experts to
still build a secure software system. Security functions as a collaborative design partner in the software development
lifecycle (SDL) (requirements, architecture, coding, deployment and withdrawal from service) by aligning the threat
management, vulnerability management and identity management processes [1]. Security architecture is a unifying
framework and a set of reusable services that implement policy, standards and risk management decisions. It is a
strategic framework that allows the development and operations staff to align their efforts; in addition, it drives
platform improvements that are not possible to make at the project level.
Security Architecture Process - The security architecture process is an iterative process that unifies the evolving
business, technical and security domains. The four main phases in the process are architecture risk assessment,
security architecture and design, implementation, and operation and monitoring. Security architecture and design
implies the architecture and design of security services that enable business risk exposure targets to be met. The
policies and standards and the risk management decisions drive the security architecture, the design of the security
processes, and the defense-in-depth (data, applications, host, networks) stack.
Application Security
Application security deals with protecting the code and services running on the system, who is connecting to them,
and what the output from the programs is, through a combination of secure coding practices, static analysis, threat
modeling, participation in the SDL, application scanning, and fuzzing. Application security also deals with delivering
reusable application security services, such as reusable authentication, authorization, and auditing services, enabling
developers to build security into their systems.
Data Security
Data security deals with securing access to data and its use; this is a primary concern for security architecture and
works in concert with the other domains. Vulnerability management tools conduct specialized scans against database
hosts. The SDL defines secure patterns for database integration based on the data classification defined in the policy.
Database intrusion detection and monitoring provides ongoing intelligence as to the threats against the database. The
value in performing detection and monitoring at this layer is that attackers may not traverse the expected path to reach
the asset the security system is trying to protect: the data. Databases, XML documents, transient messages, and other
resources (configurations and management) are protected by data security mechanisms.
Criticality of Security at Various Levels of a System
Security is critical at various levels of a system [2]. However, security solutions typically address a very specific
vulnerability with little relation to the larger picture of secure information systems. Organizations have successfully
implemented these solutions without knowing whether all security requirements have been met or what impact these
solutions have on other parts of the information systems. The focus of the research agenda will be to identify the
various layers that exist in large distributed systems, and to lay the groundwork for defining security requirements for
each layer, allowing for a mapping of the security implications that each layer has on other layers. This will eventually
result in the design of a layered architecture which could assist organizations in mapping out all required or
successfully implemented security requirements at various levels of information systems.
Research question: how does a failure to address a specific security service at a specific layer impact other
(interdependent) layers? And how does the successful implementation of a security service affect the rest of the
system? Answering these questions gives a clearer understanding of where security services are required and how a
failure to address such a requirement will impact the system as a whole.
Agile Security for Information Warfare: A Call for Research [3]
Research Question 1: How can agile methods be used to generate effective security requirements? This
research question is a theoretical question leading to a theory formulation. Although this work is partly completed
above, additional theoretical development is needed to draw specific methods and techniques from the conflation of
information warfare and agile systems development theories. This question will address issues such as: can the two
theories be directly ported into commercial security development? Does the conflation of the two theories lead to
practical methods and techniques? Exactly how must information warfare and short-cycle development theories be
extended or modified to shape an emergent security theoretical framework? Are new security organizational forms
or particular kinds of specialists required to fit these theories?
Research Question 2: In what ways do these agile methods change the development of security requirements?
This research question follows research question 1, and is an empirical question leading to descriptive results. It
addresses the security development experience in the light of the new theoretical framework. This question will
address issues such as: is security requirements analysis easier, quicker, or less thorough than under more traditional
approaches? Are multiple approaches to security development needed? Are the requirements definitions more
flexible, more attuned to changes in security vulnerabilities and threats, or dependent on a standard information
security architecture?
Research Question 3: How is the outcome of emergent security development different from more traditional
forms? This research question follows research question 2, and it is an applied question leading to descriptive
results. It addresses the ultimate success arising from the application of theory and practice in developing security
safeguards for information systems. This question will address issues such as: do emergent security organizations
detect sudden changes in vulnerabilities or threats better than more traditional security organizations? Do the short-
cycle security safeguards deploy faster than more traditional methods? That is, is the result a form of security that is
indeed more agile? Is the security better because it is more responsive? Does emergent information security lead to
fewer security incidents? Are emergent safeguards maintainable or are they throwaway artifacts? Is emergent
security cheaper or more expensive than more traditional forms?
2. Research Methodology for Designing Dependable Agile Layered Web Services Security
Architecture Solutions: A Spatial Mobile Application Case Study
A Web Service is a software component or system designed to support interoperable machine- or application-
oriented interaction over a network [15-26]. A Web Service has an interface described in a machine-processable
format (specifically WSDL). Other systems interact with the Web Service in a manner prescribed by its description,
using SOAP (Simple Object Access Protocol) messages, typically conveyed using HTTP with an XML serialization in
conjunction with other Web-related standards. Web Services Security (WS-Security) is a mechanism for incorporating
security information into SOAP messages; it uses binary tokens for authentication, digital signatures for integrity, and
content-level encryption for confidentiality.
A Web Service is a software entity deployed on the Web whose public interface is described in XML (eXtensible
Markup Language). It can interact with other systems by exchanging XML-based messages over standard Internet
protocols. A Web Service's definition and location (given by a Uniform Resource Identifier, URI) can be discovered by
querying common Web Service registries. Web Services can be implemented using any programming language and
executed on heterogeneous platforms, as long as they provide the above features. This allows Web Services owned by
distinct entities to interoperate through message exchange.
As Web services become more widely adopted, developers must cope with the complexity of evolving trust
negotiation policies spanning numerous autonomous services. The Trust-Serv framework uses a state-machine-based
modeling approach that supports life-cycle policy management and automated enforcement.
The Web Services architecture is expected to play a prominent role in developing next generation distributed
systems. It targets the development of applications based on XML-related standards, hence easing the development
of distributed systems through the dynamic integration of applications distributed over the Internet,
independently of their underlying platforms. Web Services Security Architectures have three layers viz. Web Service
Layer, Web Services Framework Layer (.NET or J2EE), and Web Server Layer. Web 2.0 increases web-based access
to data processing, particularly on the client side, enabling web applications with enriched functionality. Web 2.0
encompasses a wide range of technologies and protocols which give Web architectures greater access to data and
functions. The technologies include AJAX (Asynchronous JavaScript and XML), XML, JSON (JavaScript Object
Notation), SOAP (Simple Object Access Protocol), WSDL (Web Services Description Language), REST Web APIs,
Microsoft Silverlight, RSS, RDF, and Atom. Web 2.0 vulnerabilities include XML, JavaScript, RSS, AJAX, SOAP,
JSON, and WSDL, in decreasing order of their attack statistics. In this research, we want to apply security tools to the
Web Services Architecture in terms of the above layers and attacks. Initially, building web services by combining
protocols like REST and WS-* will be studied. Later, these Web Services will be secured by adding policy, custom
authentication, creating client security tools, .NET cryptography, securing data access, protecting code, etc.
Services must be designed and composed in a secure manner. In particular, we are concerned with safety
properties of service behavior. Services can enforce security policies locally and can invoke other services that
respect given security contracts. This call-by-contract mechanism offers a significant set of opportunities, each
driving secure ways to compose services. We can correctly plan service compositions in several relevant classes of
services and security properties. We can propose a graphical modeling framework based on a foundational calculus.
This formalism features dynamic and static semantics, thus allowing for formal reasoning about systems. Static
analysis and model checking techniques provide the designer with useful information to assess and fix possible
vulnerabilities.
Securing Web Services Architecture
The elements of security for Web Services consist of authentication, authorization, integrity, non-repudiation,
confidentiality, and privacy. Properties of secure software for Web Services are predictability of operation, simplicity
of software design and code, correctness, and safety. The challenge for secure web services has these dimensions:
secure messaging, protection of resources, negotiation of contracts, and trust management. Common attacks against
Web Services include: reconnaissance attacks, dictionary attacks, forceful browsing attacks, directory traversal
attacks, WSDL scanning, sniffing, privilege escalation attempts, format string attacks, exploiting unprotected
administrator interfaces, attacks on confidentiality, registry disclosure attacks, attacks on integrity (parameter
tampering, coercive parsing, schema poisoning, spoofing of UDDI/ebXML messages, principal spoofing, routing
detours, external entity attacks, canonicalization, intelligent tampering and impersonation), denial of service attacks,
flooding attacks, recursive payloads sent to XML parsers, oversized payloads sent to XML parsers, buffer overflow
exploits, race conditions, symlink attacks, memory leak exploitation, command injection, Structured Query Language
(SQL) injection, XML injection, malicious code attacks, URL string attacks, cross-site scripting, session hijacking,
malformed content, logic bombs, trapdoors, and backdoors. Several standards are establishing a framework for
integrating security into domain-specific XML-based applications. WS-Federation standardizes the way companies
share user and machine identities. Risk management is not well understood within the Information Security
community. The Security and Software Engineering communities must find ways to develop software correctly in a
timely and cost-effective fashion. Overly broad and vague laws have created a cloud of legal uncertainty over an
important area of security research and engineering. Security requires an end-to-end perspective, not just a
point-to-point one: it is not simply the exchange of data between the client and the server that is important, but the
entire path that the data takes. This includes not only technologies, but also operational processes. Do not encrypt
the entire message: due to the overhead of encryption and decryption, encrypt only what needs to be encrypted.
Encrypt data meant for different people using different keys; the advantage of XML Encryption is that it supports
both of these requirements. Inline signatures with the information that they sign. Signed documents are important
not only during transmission between parties, but also as a means to prove and enforce accountability and liability.
To do so, signed documents must be easily archived, so that both the contents of a document and its signatures can be
easily retrieved at a later time. XML Digital Signatures supports inlined signatures and also allows different
signatures for different parts of a document. WS-Security is emerging as the de facto standard for a comprehensive
framework for Web Services security.
1. Through an extensive literature survey, related work pertaining to Web Services Security Architectures and
architecture patterns in security will be studied, with a motivation for good architectural design metrics. A roadmap
for adding security to the software development life cycle (SDL) is studied.
2. The mapping of the various layers to the security service requirements of system entities is studied.
3. Drawbacks in the existing system pertaining to low-level architecture (code review, trusted code, and secure
communications) and mid-level architecture (application and operating system security) are listed out. Other models
of software development, like exploratory programming, prototyping, formal transformations, system assembly
from reusable components, and extreme programming, are explored for secure programming practices.
4. Security issues pertaining to other security goals are identified (high availability, robustness, reconstruction
of events, ease of use, maintainability, adaptability and evolution, scalability, interoperability, performance,
portability, etc.).
5. The present and future industry needs related to Security Architectures are studied and documented. A
universal design approach has been followed, meeting most of the industry standards.
6. Tools are implemented for the obtained research findings, e.g. for secure programming, and their performance is
analysed for assurance purposes. Prototype tradeoffs are verified using formal methods.
7. A case study on Spatial Mobile Web Services Security Architectures is carried out to justify security costs.
Extension of this Research to Web Engineered Business Intelligence Applications Security
Web Engineering is the evolution of software engineering which focuses on the methodologies, techniques and
tools that are the foundation of Web application development and which support their design, development,
evolution, and evaluation [27 - 55]. Modern Web applications are full-fledged, complex software systems, and in
order to be successful their development must be thorough and systematic. Web Engineering is the application of
quantifiable approaches to the cost-effective requirements analysis, design, implementation, testing, operation and
maintenance of high quality web applications. Web engineers face the same traditional concerns as software
engineers: the risks of failure to meet business needs, project schedule delays, budget overruns and poor quality of
deliverables. But in the Web environment, new and complicated issues demand attention too. Web Engineering
addresses the problems associated with shorter lead times, which require rapid prototyping and agile methods; the
interactivity and visual nature of the medium, which makes Human-Computer Interaction aspects highly significant;
and the multimedia features of Web applications. Web Engineering has an interdisciplinary approach covering Web
development concepts, methods, tools and techniques, useful for Web software developers, Web designers and
project managers. The mining techniques discussed below eventually enhance insights into web engineering
applications pertaining to the areas of web site design, consumer behaviour, and security architectures. Companies
negotiate the Web through
a wide range of business activities. The effective use of business intelligence using web engineering applications
requires an in-depth understanding of the interaction of, and interface between, systems design and consumer
behaviour in the online environment. Web Mining is a technique that applies data mining techniques to analyse
different sources of data in the web (such as web usage data, web content data, and web structural data). With the
rapid growth of the World Wide Web, Web Mining therefore becomes a very active and important topic of research
area related to Web Evolution. Web Mining now plays an important role for E-Commerce websites and E-services to
understand how their websites and services are used and to provide better services for their customers and users.
Business Intelligence generally includes reporting, visualization, and data mining. Data analysis, reporting, and
query tools can help business users wade through a sea of data to synthesize valuable information from it; today
these tools collectively fall into a category called business intelligence. Traditionally, data mining techniques in
Business Intelligence require knowledge of the business processes and interaction with the end users. Although most
traditional evaluation has held time constant, the time variable cannot be forgotten when data mining software is put
into the business process. Traditional Business Intelligence strategies were expensive to implement; used a
decentralized store of data feeds and insight (e.g. on a client workstation); did not always provide facilities for
sharing data; usually used proprietary architectures; did not always provide an API; and sometimes used closed
systems because of proprietary formats. Over the last decade, we have been witnessing an
increasing trend of Business Intelligence (BI) solutions leaving their traditional boundaries and moving toward the
Web, either to source data from it or to expose a web-accessible interface on it (for both human and programmatic
consumption). The goal of Business Intelligence applications is to allow business people to query, understand and
analyse their business data in order to make better decisions. Traditionally BI applications allowed business people
to acquire insights into the data of their organization by means of a variety of technologies such as data
warehousing, data mining, business performance management, OLAP, periodical business reports and the like.
However, the recent evolutions are reshaping the technological landscape and the features provided by modern BI
applications are challenging.
Next Generation Business Intelligence Application Development Technologies (NGBIADT) refers to business
intelligence applications, practices, and specific implementations that leverage newer software design practices. The
three main technology trends that have helped shape Business Intelligence software are Web 2.0, agile development,
and service orientation. This research focuses on developing web mining methods for Business Intelligence, with
validations in real-world environments. These methods can be used to produce business insights for business
analysts to improve decision making. Web Intelligence, from the NGBIADT perspective, is a form of knowledge that
allows applications to be dynamically shaped by their respective users. The various advantages of this approach are
that it empowers the user, keeps users coming back, generates better data, keeps the system current, achieves a better
understanding of the system, enables collaboration with humans, and personalizes user experiences.
3. Implementations and Validations
In Software Engineering terminology, a qualitative or descriptive model will be built, along with appropriate
notation or tools, for providing a specific solution with the validation of a case study. This research is a combination
of fundamental theoretical contributions with new insights from experimental analysis. The proposed work integrates
security and software engineering with the design of Security Architectures. Software Engineering problems
must be treated by both theoretical and empirical methodologies: the former is characterized by abstract, inductive,
mathematics-based, and formal-inference-centered studies, while the latter is characterized by concrete, deductive,
data-based, and experimental-validation-centered studies. This research involves the theoretical design of secure
UML diagrams using Agile Modeling. The results will be validated with experimental work on a Web Services
Security Architecture case study.

3.1 Design and Development of Spatial Mobile Privacy Web Services Application Case
Study
This case study discusses privacy issues and implementations of Spatial Web Services Security Architectures.
The Role-Based Access Control (RBAC) model is widely deployed in commercial systems and is one for which a
standard has been developed [56-62]. The widespread deployment of location-based services and mobile
applications, as well as the increased concern for the management and sharing of geographical information in
strategic applications like environmental protection and homeland security, has resulted in a strong demand for
spatially aware access control systems. These application domains impose interesting requirements on access control
systems. In particular, the permissions assigned to users depend on their position in a reference space; users often
belong to well-defined categories; objects to which permissions must be granted are located in that space; and access
control policies must grant permissions based on object locations and user positions. In this implementation, we want
to review various strategies for GEO-RBAC and its future research directions for grid computing, virtualized
environments, and cloud spatial computing.
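
To make the spatially aware access control idea concrete, here is a minimal Python sketch of our own (the class
names and the rectangular-extent simplification are illustrative assumptions, not the GEO-RBAC standard model): a
permission is granted only through a role whose spatial extent contains the user's current position.

```python
from dataclasses import dataclass

@dataclass
class SpatialRole:
    name: str
    permissions: set
    extent: tuple  # (min_x, min_y, max_x, max_y): bounding box of the role's region

    def enabled_at(self, pos):
        """A spatial role is enabled only while the user is inside its extent."""
        x, y = pos
        x0, y0, x1, y1 = self.extent
        return x0 <= x <= x1 and y0 <= y <= y1

def check_access(user_roles, pos, permission):
    """Grant a permission only through a role enabled at the user's position."""
    return any(r.enabled_at(pos) and permission in r.permissions for r in user_roles)

# A hypothetical 'field_agent' role usable only inside the zone (0,0)-(10,10).
agent = SpatialRole("field_agent", {"read_map"}, (0, 0, 10, 10))
print(check_access([agent], (5, 5), "read_map"))   # True: inside the extent
print(check_access([agent], (50, 5), "read_map"))  # False: outside the extent
```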
Introduction to Spatial Web Services
A lot of research has been devoted to integrating the analysis functionality that is available in both analytic and
geographic processing systems. The main goal is to provide users with a system capable of processing both
geographic and multidimensional data by abstracting away the complexity of separately querying and analyzing these
data in a decision-making process. However, this integration may not be fully achieved yet, or may be built using
proprietary technologies. A service integration model has already been built for supporting multidimensional and/or
geographic requests over the web. This model has been implemented by a Web Service, named GMLA WS, which is
strongly based on standardized technologies such as Web Services, Java, and XML. The GMLA WS query results are
displayed in a Web browser as maps and/or tables to help users in their decision making.
Overview of Web Services Security Architectures
According to the WWW Consortium, a Web Service is defined as a software application identified by a URI
(Uniform Resource Identifier), whose interface and bindings are capable of being identified, described, and
discovered by XML artifacts, and which supports direct interactions with other software applications using
XML-based messages via Internet-based protocols. Web Services Security Architectures have three layers, viz. the
Web Service Layer, the Web Services Framework Layer (.NET or J2EE), and the Web Server Layer. Refer to the
NIST draft [15] for a depiction of Web Services Security Architectures.
In location-based services, users with location-aware mobile devices are able to make queries about their
surroundings anywhere and at any time. While this ubiquitous computing paradigm brings great convenience for
information access, it also raises concerns over potential intrusion into user location privacy. To protect location
privacy, one typical approach is to cloak user locations into spatial regions based on user-specified privacy
requirements, and to transform location-based queries into region-based queries. We study the representation of
cloaking regions and show that a circular region generally leads to a small result size for region-based queries.
Moreover, the progressive query processing mode achieves a shorter response time than the bulk mode by
parallelizing the query evaluation and result transmission.
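
A toy Python sketch of the cloaking step follows (our illustration; the crude metre-to-degree conversion and all
parameter names are assumptions, not the scheme evaluated in the cited work). The true position is hidden inside a
circular region whose centre is randomly displaced, and the server then answers a region-based rather than a
point-based query.

```python
import math, random

METRES_PER_DEG_LAT = 111_320.0  # rough conversion, adequate for a sketch

def cloak(lat, lon, radius_m):
    """Return a circular cloaking region (centre + radius) containing (lat, lon)."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    d = random.uniform(0.0, radius_m)  # displacement of the true point from the centre
    clat = lat + (d * math.cos(theta)) / METRES_PER_DEG_LAT
    clon = lon + (d * math.sin(theta)) / (METRES_PER_DEG_LAT * math.cos(math.radians(lat)))
    return clat, clon, radius_m

def in_region(point, region):
    """Region-based query filter: is a candidate point inside the cloaking circle?"""
    lat, lon = point
    clat, clon, r = region
    dy = (lat - clat) * METRES_PER_DEG_LAT
    dx = (lon - clon) * METRES_PER_DEG_LAT * math.cos(math.radians(clat))
    return math.hypot(dx, dy) <= r

region = cloak(40.0, -74.0, 500.0)           # the user reveals only this region
print(in_region((40.001, -74.001), region))  # the server filters candidates per region
```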
The Disruptive Cloud
Cloud computing is a service consumption and delivery model that can help improve business performance, control
costs, and ultimately transform business models. Cloud computing can bring opportunities to many, ranging from
businesses that consume IT infrastructure to providers of such infrastructure, general users, and government as
well [63].
Refer to Figures 1, 2 and 3 below, which provide the class diagram, sequence diagram, and execution screenshot
respectively of the case study application.

Figure 1: Class diagram of the Case Study application
Figure 2: Sequence diagram of the application. Participants: Mobile Host, Server, Process Query, Search Nearest
Neighbour, and View/Store Query in DB. Messages: (1) enter port no., location & criteria; (2) send query to server;
(3) process query; (4) respond to mobile host; (5) search nearest neighbour; (6) send nearest neighbour's port number;
(7) send the query to the nearest neighbour; (8) send query; (9) give response from neighbour; (10) view/store queries
from/in database.

Figure 3: Execution screenshot of the case study application

For details of implementations, detailed documentation, UML diagrams etc., please refer to the web site
http://sites.google.com/site/upendramgitcse
4. Conclusion and Future Research
In this paper, we discussed a research methodology for Agile Modeled layered security architectures for
Web Services, with a case study of a Spatial Mobile privacy application. Future work includes the extension of web
services security architectures to cloud computing architectures, with spatial clouds as a case study. The case
study presented a complete study of processing privacy-conscious location-based queries in mobile
environments.

References
[1] Gunnar Peterson, Security Architecture Blueprint, Arctec Group, LLC, 2007
[2] Heiko Tillwick, Martin S Olivier, A Layered Security Architecture: Design Issues, in Proceedings of the
Fourth Annual Information Security South Africa Conference (ISSA 2004), July 2004.
[3] Baskerville, Richard, Agile Security for Information Warfare: A call for research, Georgia State University,
USA
[4] Ross Anderson, Security Engineering: A guide to building Dependable Distributed Systems, Wiley
publishers, 2003.
[5] Matt Bishop, Computer Security: Art and Science, Pearson Education, 2003
[6] Wembo Mao, Modern Cryptography: Theory and Practice, Pearson education, 2004
[7] Vipul Gupta, et. al., Sizzle: A standards-based end-to-end security architecture for the embedded Internet,
Elsevier, Pervasive and Mobile Computing, 2005
[8] Durai Pandian M et.al., Information Security Architecture Context aware Access control model for
Educational applications , International Journal of Computer Science and Network Security, December 2006
[9] J. J. Whitmore, A method for designing secure solutions, IBM Systems Journal, Vol 40, No 3, 2001, pp. 747-768
[10] D. K. Smetters, R. E. Grinter, Moving from the design of usable security technologies to the design of useful
secure applications, ACM New Security Paradigms Workshop, September 2002, pp. 82-89
[11] Betty H C Cheng, Sascha Konrad, Laura A Campbell, Ronald Wassermann,Using Security Patterns to Model
and Analyze Security requirements.
[12] John Hunt,Agile Software Construction, Springer Verlag publishers 2006
[13] Lenny Zeltser,Security Architecture cheat sheet for Internet applications
[14] Mark Harman, Afshin Mansouri, Search based Software Engineering: Introduction to the special issue of the
IEEE Transactions on Software Engineering, November/December 2010, pp. 737-741
[15] NIST Draft, Guide to Secure Web Services, September 2006.
[16] Massimo Barloletti, et. al. Semantics-Based Design for Secure Web Services , IEEE Transactions on
Software Engineering, Vol 34, No.1, January 2008
[17] Sasikanth Avancha, A Framework for Trustworthy Service Oriented Computing, ICISS 2008, pp. 124-132.
[18] Cenzic Inc., Web Application Security Trend Reports, 2009.
[19] Halvard Skogsrud, Modeling Trust Negotiation for Web Services, IEEE February 2009
[20] David Geer, Taking Steps to Secure Web Services, IEEE, October 2003.
[21] Martin Naedele, Standards for XML and Web Services Security, IEEE April 2003
[22] Ferda Tartanoglu et al., Dependability in the Web Services Architecture, Architecting Dependable Systems,
LNCS 2677, pp. 90-109, 2003
[23] Spyros T Halkidis et al., Architecture Risk Analysis of Software Systems based on Security Patterns, IEEE
Transactions on Dependable and Secure Computing, Vol 5, No. 3, July-September 2008, pp. 129-142
[24] Sandeep Chatterjee, Developing Enterprises Web Services An Architects Guide, Pearson, 2004
[25] Constance L Heitmeyer, Applying Formal Methods to a Certifiably Secure Software System, IEEE
Transactions on Software Engineering, Vol 34, No. 1, January 2008
[26] Sarah Spiekermann, Lorrie Cranor, Engineering Privacy, IEEE Transactions on Software Engineering, Vol
35, No 1, January/February 2009, pp. 67-82
[27] Athula Ginge and San Murugesan, Web Engineering: A Methodology for Developing Scalable, Maintainable
Web Applications, Cutter IT Journal Vol.14, No.7 pp. 24-35, July 2001
[28] Jim Highsmith, Alistair Cockburn, Agile Software Development: The Business of Innovation, IEEE
Computer, September 2001, pp. 120-122
[29] Jiawei Han, Kevin Chen-Chuan Chang, Data Mining for Web Intelligence, IEEE Computer November, 2002
pp. 64-70.
[30] Beibei Li and Jiajin Le Application of Web Service in Web Mining, CIS 2004, LNCS 3314, PP 989-994.
[31] Stephen J Miller Agile MDA A White Paper 2004.
[32] Pranam Kolari and Anupam Joshi, Web Engineering Column: Web Mining: Research and Practice, IEEE
Computing and Science and Engineering, July/August 2004 pp. 49-53.
[33] Schahram Dustdar, Robert Gombotz, Karim Baina Web Services Interaction Mining2004.
[34] Prof.Ladislav Burita, Vojtech Ondryhal, Extending UML for Modeling of Data Mining Cases, 2006
[35] Rosa Meo, Maristella Matera Designing and Mining Web Applications: A Conceptual Modeling approach
Idea Group Publishing 2006.
[36] Srinivasa Narayana, Subbu N Subramanian, Manish Arya, and the Tavent team, On engineering Web-based
Enterprise applications, International conference on Management of Data, COMAD 2006 CSI 2006.
[37] Walid Galoul, Sami Bhiri and Claude Godart, Research Challenges and Opportunities in Web Services
Mining, Atelier Systèmes d'Information et Services Web, INFORSID 2006, pp. 1-11.
[38] Wingyan Chung, Designing Web-based Business Intelligence Systems: A Framework and Case Studies, in
DESRIST, pp. 147-171, February 24-25, California, USA (2006).
[39] Xiaocheng Ge, Richard F Paige, Fiona A.C.Polack, Howard Chivers, Phillip J Brooke, Agile development of
Secure Web Applications, ACM ICWE 06 pp. 305-312.
[40] I. Lazar, B. Parv, S. Motogna, I.-G. Czibula, C.-L. Lazar, An Agile MDA approach for Executable UML
Structured Activities, Studia Univ. Babes-bolyai, Informatica, vol. LII, No. 2, 2007, pp.111-114
[41] Mohammad A. A. Alhawamdeh, Web Mining: Strategic Web Site Design for small business Proceedings of
the world congress on Engineering 2007 Vol I.
[42] Tao Xie, Jian Pei, Ahmed E. Hassan Mining Software Engineering Data, 29th International Conference on
Software Engineering, 2007, IEEE.
[43] Anupam Joshi, Tim Finin, akshay Java and Pranam Kolari, Web 2.0 Mining: Analyzing Social Media 2008
Taylor & Francis Group.
[44] Hossein Keramati, Seyed-Hassan Mirian-Hosseinabadi, "Integrating software development security activities
with agile methodologies," aiccsa, pp.749-754, 2008 IEEE/ACS International Conference on Computer
Systems and Applications.
[45] Jesus Pardillo, Jose Norberto Mazon Towards a Model-Driven Engineering Approach of Data Mining IADIA
2008 pp: 144-147.
[46] Paola Britos, Oscar Dieste, and Ramon Garcia Requirements Elicitation in Data Mining for Business
Intelligence Projects, IFIP 2008 pp 139-150.
[47] Yann-Gael Gueheneuc, Giuliano Antoniol, DeMIMA: A Multilayered Approach for Design Pattern
Identification, IEEE Transactions on Software Engineering, vol. 34, no. 5. pp. 667-684, September/October
2008.
[48] Anders Mattsson, Bjorm Lundell, Brian Lings, and Brian Fitzgerald, Linking Model-Driven Development and
Software Architecture: A Case Study, IEEE Transactions on Software Engineering, vol. 35, no. 1. pp. 83-93
January/February 2009.
[49] Sarah Spiekermann and Lorrie Faith Cranor, Engineering Privacy, IEEE Transactions on Software
Engineering, Vol 35, No. 1, Jan/Feb 2009, pp. 67-82.
[50] Tao Xie, Suresh Thummalapenta, David Lo, Chao Liu, Data Mining for Software Engineering, 2009, IEEE.
Pp: 55 - 62.
[51] Joao Antunes, Nuno Neves, Miguel Correla, Paulo Verissimo, Rui Neves, Vulnerability Discovery with
Attack Injection, IEEE Transactions on Software Engineering, Vol. 36, No. 3, pp. 357-369, May/June 2010.
[52] Joel da Silva, Valeria C.Times, Robson Fidalgo, Roberto Barros, Towards a Web Service for Geographical
and Multidimensional processing, pp. 1-17.
[53] Jim Gray, Microsoft Research, Real Web Services Talk at Charles Schwab Technology Summit, Friday,
September 20, 2002
[54] Elisa Bertino, Lorenzo D.Martino, Federica Paci, Anna Squicciarini, Security for Web Services and Service-
Oriented Architectures, Springer Book 2010, Appendix A Access Control pp. 202-204, ISBN 978-3-540-
87741-7
[55] Bernard Menezes, Network Security and Cryptography, Cengage Learning India Pvt. Ltd., 2010, ISBN 978-
81-315-1349-1
[56] Karsten Sohr, Michael Drouineaud, Gail-Joon Ahn, Martin Gogolla, Analyzing and Managing Role-Based
Access Control Policies, IEEE Transactions on Knowledge and Data Engineering, Vol. 20, No. 7, pp. 924-939,
July 2008.
[57] Michael S Kirkpatrick, Elisa Bertino, Enforcing Spatial Constraints for Mobile RBAC Systems, ACM
SACMAT '10, June 9-11, 2010, Pittsburgh, USA.
[58] Reza B'Far, Mobile Computing Principles: Designing and Developing Mobile Applications with UML and
XML, Cambridge University Press, 2005, ISBN 0-521-69623-2.
[59] Alastair Aitchison, Beginning Spatial with SQL Server 2008, Apress, ISBN 978-1-4302-1829-6, 2009
[60] Ravi Kothuri, Albert Godfrind, Euro Beinat, Pro Oracle Spatial for Oracle Database 11g, Apress,
ISBN 9788181288882
[61] Michael Juntao Yuan, Enterprise J2ME: Developing Mobile Java Applications, Pearson Education Inc., 2004,
ISBN 81-297-0694-6
[62] Patrick Stuedi, Iqbal Mohomed, Doug Terry (Microsoft Research), WhereStore: Location-based Data
Storage for Mobile Devices Interacting with the Cloud, MCS '10, June 15, 2010, San Francisco, USA, ACM
2010
[63] Hiren Bhatt, Arup Dasgupta, The Disruptive Cloud, Geospatial World, May 2011, pp. 20-28


























APPLICATION OF RESIDUE NUMBER SYSTEM TO ADVANCED
ENCRYPTION STANDARD ALGORITHM
H. Siewobr and K.A.Gbolagade

Department of Computer Science University for Development Studies, Navrongo, Ghana.
siewobrministry@gmail.com/gkazy1@yahoo.com

Abstract
In this paper, we present a brief survey of Residue Number System (RNS) applications to cryptography. We first
present some fundamental concepts in cryptography, with emphasis on the Advanced Encryption Standard (AES)
algorithm, and we suggest RNS as a measure to help offset the complexity, high power demand, and time-consuming
nature of the AES algorithm. We also suggest RNS as a means of enhancing security in AES cryptosystems.

Keywords: Residue Number System, Advanced Encryption Standard, Cryptosystems.




























1. Introduction
Residue Number System (RNS) is a non-weighted number system which represents integers by their
remainders with respect to a given moduli set $\{m_1, m_2, \ldots, m_n\}$. Thus, RNS represents larger integers
using a set of smaller integers, thereby enabling faster and more efficient computations. RNS has the following
interesting features: parallelism, modularity, fault tolerance, and carry-free operations [1]. Due to these features,
RNS has received considerable attention in Digital Signal Processing (DSP) applications such as the Fast
Fourier Transform, digital filtering, and the Discrete Cosine Transform, and also in cryptography, image processing, etc.
Despite all these advantages, RNS has not found widespread usage in general purpose processors due to the
following difficult operations: overflow detection, sign detection, magnitude comparison, scaling, and division [2].
As stated earlier, RNS has been applied in addition- and multiplication-dominated DSP applications. The Advanced
Encryption Standard (AES) algorithm is another important area where RNS can be applied. In this paper, we present
a survey on how RNS can be applied in order to enhance the performance of the AES algorithm.
1.1 Number Representation
In RNS, a number $X$ is uniquely represented by an $n$-tuple of integers $(x_1, x_2, \ldots, x_n)$. The integers
$x_i$ are called the residues of the integer $X$ with respect to the moduli set $\{m_1, m_2, \ldots, m_n\}$ and are
given by:

$x_i = |X|_{m_i} = X \bmod m_i, \quad i = 1, 2, \ldots, n$    (1)

It follows from the Chinese Remainder Theorem (CRT) that, for any given $n$-tuple satisfying Equation (1),
there exists one and only one integer $X$ with $0 \le X < M$, where $M = \prod_{i=1}^{n} m_i$ is the dynamic
range. Consider, for example, the moduli set $\{3, 4, 5\}$: the decimal numbers 5 and 50 are represented as
$(2, 1, 0)$ and $(2, 2, 0)$ respectively.
The process of converting a number from another representation, say binary or decimal, into residue
representation is known as forward conversion. The opposite of forward conversion is the time-consuming reverse
conversion.
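
The following minimal Python sketch (ours, purely for illustration) performs forward conversion and a textbook
CRT-based reverse conversion for the example moduli set above; production reverse converters such as those in
[1, 2] use far more efficient structures.

```python
from math import prod

MODULI = (3, 4, 5)  # pairwise co-prime moduli; dynamic range M = 60

def to_rns(x, moduli=MODULI):
    """Forward conversion: the residues of x with respect to each modulus."""
    return tuple(x % m for m in moduli)

def from_rns(residues, moduli=MODULI):
    """Reverse conversion via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m): modular inverse (Python 3.8+)
    return x % M

assert to_rns(5) == (2, 1, 0) and to_rns(50) == (2, 2, 0)
assert from_rns((2, 2, 0)) == 50
```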
Many RNS researchers are working on building faster converters in order for RNS based processors to
become a reality.
In the next section, we present a brief description of Symmetric Cryptosystem and the AES algorithm.
1.2 Symmetric and Asymmetric Cryptosystems
Cryptography is the science of writing in secret codes. Some of the security requirements in cryptosystems
include authentication, confidentiality, integrity, and non-repudiation.
There are two forms of cryptosystems, Symmetric and Asymmetric cryptosystems.
In a symmetric cryptosystem, both parties must use the same key for encryption and decryption. This
means that the encryption key must be shared between the two parties before any messages can be decrypted.
Symmetric cryptosystems are significantly faster than asymmetric cryptosystems, but the requirements for key
exchange make them difficult to use [13].
In an asymmetric cryptosystem, the encryption key and the decryption keys are separate. In an asymmetric
system, each person has two keys. One key, the public key, is shared publicly. The second key, the private key,
should never be shared with anyone [13].
When you send a message using an asymmetric cryptosystem, you encrypt the message using the recipient's
public key. The recipient then decrypts the message using his private key. That is why the system is called
asymmetric [13].
Because asymmetric ciphers tend to be significantly more computationally intensive, they are usually used
in combination with symmetric ciphers to implement effective public key cryptosystems. The asymmetric cipher is
used to encrypt a session key, and the session key is then used with a symmetric cipher to encrypt the actual message.
This gives the key-exchange benefits of asymmetric ciphers with the speed of symmetric ciphers [13].
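
As a concrete illustration of this hybrid pattern, the sketch below uses the third-party Python cryptography
package (an assumption of ours; any comparable library would serve): an RSA key pair wraps a symmetric session
key, and the session key encrypts the actual message.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Recipient's asymmetric key pair: the public key may be shared openly.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt a fresh symmetric session key with the recipient's public key,
# then encrypt the (possibly long) message with the fast symmetric cipher.
session_key = Fernet.generate_key()
wrapped_key = public_key.encrypt(session_key, oaep)
ciphertext = Fernet(session_key).encrypt(b"the actual message")

# Recipient: unwrap the session key with the private key, then decrypt the message.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"the actual message"
```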
1.3 Advanced Encryption Standard (AES)
The AES is a symmetric block cipher algorithm in which the key length can be independently specified to
be 128, 192, or 256 bits; the AES specification uses these three key size alternatives but limits the block length
to 128 bits.
AES encryption is an efficient scheme for both hardware and software implementation. As compared to software
implementation, hardware implementation provides greater physical security and higher speed. Hardware
implementation is useful in wireless security like military communication and mobile telephony where there is a
greater emphasis on the speed of communication [4].
The AES has been used in many applications such as internet routers, Virtual Private Networks (VPNs), mobile
phone applications and electronic financial transactions [3].
1.4 AES Encryption
The encryption process is iterative in nature. The iterations are known as rounds, and the number of AES
encryption rounds $N_r$ depends on the key length: $N_r = 10$, $12$, or $14$ for 128-, 192- and 256-bit AES
respectively. Each round is composed of a sequence of four transformations: SubBytes, ShiftRows, MixColumns,
and AddRoundKey. These transformations are described as follows:
1. SubBytes Transformation - a nonlinear transformation applied to the elements of the State matrix. This first
step in each round is a simple substitution when implemented as a Look-Up Table (LUT): each byte $a_{i,j}$
becomes $b_{i,j}$ through a defined substitution table (S-box) [6].








Figure 1: SubByte Transformation
2. ShiftRows Transformation - this second step in each round is a permutation of rows by left circular shift;
the first (leftmost, high-order) $i$ elements of row $i$ are shifted around to the end (rightmost, low-order) [5].






Figure 2: ShiftRows Transformation
3. MixColumns Transformation - the third step is a resource-intensive transformation in which the columns
of the State are considered as polynomials over GF($2^8$) and are multiplied with a fixed polynomial (a sketch of
this arithmetic is given after this list). The MixColumns component does not operate in the last round of the
algorithm [3].





























Figure 3: MixColumns Transformation

4. AddRoundKey Transformation - performs a bitwise XOR (modulo-2 addition) of the State with the round key,
which is obtained from the initial key by a key expansion procedure. The encryption flow starts with the addition of
the initial key to the plaintext; the iteration then continues for $(N_r - 1)$ rounds [5].

For each round of the main loop, a round key is derived from the original key through a process called Key
Scheduling. Finally, a last round consisting of three transformations, SubBytes, ShiftRows and AddRoundKey, is
executed.
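
To make the finite-field arithmetic behind MixColumns concrete, here is a minimal Python sketch of ours (not an
implementation from the cited works) of GF($2^8$) multiplication with reduction modulo the AES polynomial
$x^8 + x^4 + x^3 + x + 1$, applied to one column of the State. Eliminating or parallelizing precisely this reduction
is what the RNS discussion in Section 2 targets.

```python
def xtime(b):
    """Multiply a GF(2^8) element by x (0x02), reducing by the AES polynomial 0x11B."""
    b <<= 1
    return b ^ 0x11B if b & 0x100 else b

def gmul(a, b):
    """GF(2^8) multiplication by shift-and-add (repeated xtime)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a = xtime(a)
        b >>= 1
    return r

def mix_column(col):
    """MixColumns on one 4-byte column: multiply by the {02},{03},{01},{01} circulant."""
    a0, a1, a2, a3 = col
    return [gmul(a0, 2) ^ gmul(a1, 3) ^ a2 ^ a3,
            a0 ^ gmul(a1, 2) ^ gmul(a2, 3) ^ a3,
            a0 ^ a1 ^ gmul(a2, 2) ^ gmul(a3, 3),
            gmul(a0, 3) ^ a1 ^ a2 ^ gmul(a3, 2)]

# FIPS-197 test column: [db, 13, 53, 45] must map to [8e, 4d, a1, bc].
assert mix_column([0xDB, 0x13, 0x53, 0x45]) == [0x8E, 0x4D, 0xA1, 0xBC]
```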

1.5 AES Decryption
The AES decryption algorithm operates in a similar manner by applying the inverse of all the transformations
described above in reverse order:
The inverse SubBytes transformation makes use of the inverse S-box, which is constructed by applying the
inverse of the affine transformation followed by taking the multiplicative inverse in GF($2^8$) [5].
The inverse ShiftRows transformation performs the circular shifts in the opposite direction for each of the
last three rows, with a one-byte circular right shift for the second row, and so on [5].
For the inverse MixColumns transformation, the inverse matrix times the forward transformation matrix
equals the identity matrix [5].
The inverse AddRoundKey transformation is identical to the forward AddRoundKey transformation,
because the XOR operation is its own inverse [5].
AES cryptosystems are built on binary and hexadecimal number systems and are complex, time-consuming,
and very expensive to realize due to factors like carry propagation and modular reduction (which is
required in finite-field polynomial operations over GF($2^8$)). Therefore, emerging methods and technologies that
would eventually lead to reduced cost, complexity, and time are required, provided security is not compromised.
In the next section we present RNS-based AES cryptosystems, which would be less complex, perform faster,
use less time, and cost less whilst increasing security.

2. Application of RNS to AES Algorithm

1. Parallelism and High Speed: with the use of RNS, operations within the SubBytes, ShiftRows, and
MixColumns steps of each round can be computed in parallel, thereby increasing speed drastically. Also, carry-free
propagation between the arithmetic blocks in RNS results in high-speed processing. In conventional digital
processors, the critical path is associated with the propagation of the carry signal to the Most Significant Bit (MSB) of
the arithmetic unit. Using RNS representation, large words are encoded into small words, which results in critical
path minimization in the AES algorithm computation [8]. A sketch of this channel-wise parallelism follows after
this list.

2. Reduced Power: The RNS processor reduces the switching activities in each channel. Consequently, the
dynamic power of the AES algorithm is reduced, since the dynamic power is directly proportional to switching
activities [8].
3. Reduced Complexity: Because RNS representation encodes large numbers into small residues [8], the
complexity of the arithmetic units in the AES will be reduced. This simplifies the entire AES design.
4. Error Detection and Correction: RNS is a non-positional system with no dependence between its channels.
Thus, an error in one channel does not propagate to other channels. Therefore, isolation of the faulty residues allows
fault tolerance and facilitates error detection and correction. RNS has some embedded error detection and correction
features [8, 11], which facilitates efficient implementation and operation of the AES algorithm.
5. Improved Security: the use of residues with respect to a moduli set, say $\{m_1, m_2, \ldots, m_n\}$, will
enhance security in the AES algorithm, since this moduli set may not be known to an observer.
6. Modular Reduction Eliminated: the costly modular reduction computed in the AES algorithm as a result of
finite-field polynomial arithmetic over GF($2^8$) is eliminated by representing the finite-field equations and
operations in RNS [7]. This modular reduction is computed four times in the MixColumns transformation, which
occurs $(N_r - 1)$ times in a single encryption and decryption process.
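
A toy Python sketch of our own illustrates the channel-wise parallelism promised in point 1: each residue channel is
updated independently, with no carry crossing channel boundaries, so the per-modulus operations below could run in
parallel hardware lanes.

```python
MODULI = (3, 4, 5)

def rns_add(a, b):
    """Carry-free RNS addition: each channel is computed independently."""
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def rns_mul(a, b):
    """Carry-free RNS multiplication, likewise channel-wise."""
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

# 5 -> (2, 1, 0) and 50 -> (2, 2, 0); their sum 55 -> (1, 3, 0).
assert rns_add((2, 1, 0), (2, 2, 0)) == (1, 3, 0)
```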
Since it has been shown that RNS could enhance the AES algorithm, our future research is centered on the
realization of RNS based AES cryptosystems.






























Figure 4: Block diagram of the RNS-based AES algorithm: (a) encryption, (b) decryption. A forward converter
translates the plaintext into residue representation before the rounds, and a reverse converter restores the final
result; ExpandKey produces the round keys W[0,3] through W[40,43]. Rounds 1-9 apply SubBytes, ShiftRows,
MixColumns, and AddRoundKey (their inverses for decryption), while round 10 omits MixColumns.

3. Conclusion
In this paper, we have briefly looked at RNS and its application to cryptography, specifically to the
Advanced Encryption Standard (AES) algorithm. We proposed that RNS be used in order to obtain a faster, less
expensive, error-correcting, low-power AES algorithm. Based on this explanation, we suggest that RNS-based AES
cryptosystems should be built.

References
[1] K.A. Gbolagade, S. D. Cotofana, (2008), Residue Number System Operands to Decimal Conversion for 3-
Moduli Sets, Proceedings of 51st IEEE Midwest Symposium on Circuits and Systems (MWSCAS 08),
Knoxville, USA, pp. 791-794.
[2] K.A. Gbolagade, S. D. Cotofana, (2008), MRC Technique for RNS to Decimal Conversion Using the
Moduli Set {2n + 2, 2n + 1, 2n}, Proceedings of the 16th Annual Workshop on Circuits, Systems and
Signal Processing, Veldhoven, The Netherlands, pp. 318-321.
[3] A. E. Rohiem, F. M. Ahmed and A. M. Mustafa, (2009), FPGA Implementation of Reconfigurable
Parameters AES Algorithm, 13th International Conference on Aerospace Sciences & Aviation
Technology, Cairo, Egypt.
[4] P. Karthigaikumar, S. Rasheed, (2011), Simulation of Image Encryption using AES Algorithm, IJCA
Special Issue on Computational Science - New Dimensions & Perspectives, pp: 166-172.
[5] C. Navya Latha, Garima Agarwal, Anila Kumar GVN: Secret File Sharing Techniques using AES
algorithm, web.iiit.ac.in/~navya/projects/AES_documentation.pdf
[6] AES CCM Encryption and Decryption, http://www.inno-logic.com/resourcesEncryption.html
[7] J.C. Bajard, (2007), A Residue Approach of the Finite Fields Arithmetics, Asilomar Conference on
Signals, Systems, and Computers (ISBN: 978-1-4244-2110-7 ISSN: 1058-6393), Asilomar CA, USA.
[8] Omar Abdelfattah, (2011), Data Conversion in Residue Number System, A thesis submitted to McGill
University in partial fulfillment of the requirements for the degree of Master of Engineering.
[9] K. A. Gbolagade, S. D. Cotofana, (2009), Residue-to-Decimal Converters for Moduli Sets with Common
Factors, Proceedings of 52nd IEEE International Midwest Symposium on Circuits and Systems,
(MWSCAS 2009), Cancun, Mexico, pp. 624-627.
[10] Theodore L. Houk, Seattle Wash, (1989), Residue Addition Overflow Detection Processor Boeing
Company, Appl. No.: 414276.
[11] F. Barsi and P. Maestrini, Error correcting properties of redundant residue number systems, IEEE
Transactions on Computers, vol. 23, no. 9, pp. 915-923.
[12] N. Szabo and R. Tanaka, (1967), Residue Arithmetic Technology, New York: McGraw Hill.
[13] Symmetric and Asymmetric Ciphers, http://www.tech-faq.com/symmetric-and-asymmetric-ciphers.html














Secured Data Communication Using Chaotic Neural Network Based
Cryptography
Rahul Malhotra

and Aashu Gupta
Dept. of Electronics & Comm. Engg., Bhai Maha Singh College of Engg., Muktsar (Pb.)
Dept. of Electronics & Comm. Engg., Adesh Institute of Engg. & Tech., Faridkot (Pb.)
blessurahul@gmail.com

Abstract
Cryptography is the science of using mathematics to transform the contents of information into a secure form that is
immune to attack. The original message is called the plaintext. The disguised message is called the ciphertext. The
method of producing ciphertext from plaintext using a key is called encryption. The reverse procedure of producing
the plaintext from ciphertext using the key is called decryption. The science of breaking cryptosystems is called
cryptanalysis. Cryptanalysis plays an important role in cryptography because it attacks the encoded message to
produce the relevant plaintext. Although in the past cryptography referred only to the encryption and decryption of
messages using secret keys, in this age of universal electronic connectivity, of viruses and hackers, of electronic
eavesdropping and electronic fraud, there is indeed a need to store information securely. This, in turn, has led to a
heightened awareness of the need to protect data and resources from disclosure, to guarantee the authenticity of data
and messages, and to protect systems from network-based attacks. This paper proposes and implements data
encryption in communication using a chaotic neural network, and studies the advantages and disadvantages of the
algorithm.
Keywords: Cryptography, chaos, neural networks, artificial intelligence, data security





















1. Introduction
Cryptography, a word with Greek origins, means secret writing: crypto means secret, and graphy means writing.
Cryptography is the science of using mathematics to transform the contents of information into a secure form that is
immune to attack. Some of the common terms that are used in cryptosystems are explained here. The original
message is called the plaintext. The disguised message is called the ciphertext. The method of producing
ciphertext from plaintext using a key is called encryption. The reverse procedure of producing the plaintext
from ciphertext using the key is called decryption. The science of breaking cryptosystems is called
cryptanalysis. Cryptanalysis plays an important role in cryptography because it attacks the encoded message to
produce the relevant plaintext.
In this age of universal electronic connectivity, of viruses and hackers, of electronic eavesdropping and
electronic fraud, there is indeed a need to store information securely. This, in turn, has led to a heightened
awareness of the need to protect data and resources from disclosure, to guarantee the authenticity of data and
messages, and to protect systems from network-based attacks. Cryptography, the science of encryption, plays a
central role in mobile phone communications, pay-TV, e-commerce, sending private emails, transmitting financial
information, the security of ATM cards, computer passwords, electronic commerce, digital signatures, and touches
on many aspects of our daily lives. Cryptography is the art or science encompassing the principles and methods of
transforming an intelligible message (plaintext) into one that is unintelligible (ciphertext) and then retransforming
that message back to its original form. In modern times, cryptography is considered to be a branch of both
mathematics and computer science, and is affiliated closely with information theory, computer security, and
engineering.
Although in the past cryptography referred only to the encryption and decryption of messages using secret keys,
nowadays cryptography is generally classified into two categories, symmetric and asymmetric. Conventional
encryption is referred to as symmetric encryption or single-key encryption. The Hill cipher algorithm over a Galois
field, using polynomial arithmetic, is one of the symmetric key algorithms that have several advantages for data
encryption; Galois fields are also used in error-detecting codes. However, the inverse of the key matrix used for
decrypting the ciphertext does not always exist: if the key matrix is not invertible, the encrypted text cannot be
decrypted. In the self-invertible matrix generation method, the key matrix used for encryption is self-invertible, so
at decryption time we need not find the inverse of the key matrix. A sketch of this idea is given below.
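
The following minimal Python sketch is our illustration of a Hill cipher with a self-invertible key over the integers
mod 26 (the classical alphabetic variant, not the Galois-field version mentioned above); because K·K ≡ I (mod 26),
the same matrix both encrypts and decrypts, so no matrix inverse need be computed at decryption time.

```python
import numpy as np

# Involutory (self-invertible) key: K @ K % 26 equals the identity matrix.
# The entries are illustrative values chosen to satisfy that property.
K = np.array([[3, 2],
              [9, 23]])

def hill(text, key):
    """Hill cipher over the 26-letter alphabet; with an involutory key,
    this single routine performs both encryption and decryption."""
    nums = [ord(c) - ord('A') for c in text.upper()]
    if len(nums) % 2:
        nums.append(ord('X') - ord('A'))  # pad odd-length input
    out = []
    for i in range(0, len(nums), 2):
        out.extend((key @ np.array(nums[i:i + 2])) % 26)
    return ''.join(chr(int(n) + ord('A')) for n in out)

assert (K @ K % 26 == np.eye(2, dtype=int)).all()
cipher = hill("HELLO", K)
assert hill(cipher, K).startswith("HELLO")  # the same key recovers the plaintext
```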

2. Artificial Neural Network Based Cryptography
Artificial neural networks (ANNs) take their name from the network of nerve cells in the human brain. McCulloch
and Pitts developed early neural network models for different computing machines. There are extensive applications of
all kinds of ANNs in the fields of communication, control, instrumentation, and forecasting. The ANN is capable of
handling nonlinear input-output systems in its workspace due to its large parallel interconnections between
different layers and its nonlinear processing characteristics. An artificial neuron basically consists of a computing
element that performs the weighted sum of the input signals and the connecting weights. The sum is added to the
bias or threshold, and the resultant signal is then passed through a nonlinear function of sigmoid or hyperbolic tangent
type. Every neuron is associated with three parameters whose learning can be adjusted: 1) the connecting
weights, 2) the bias, and 3) the slope of the nonlinear function. The structure of a neural network (NN) may be single-
layer or multilayer. In a multilayer structure, there can be one or many artificial neurons in each layer, and in
practical cases there may be a number of layers. Each neuron of one layer is connected to each neuron of the next
layer.

Figure 1: Structure of a single neuron
The functional-link ANN is another type of single-layer NN. In these networks the input data is allowed to pass through a functional expansion block, where the input data are nonlinearly mapped to a larger number of points. This is achieved by using trigonometric functions, products, or power terms of the input. The output of the functional expansion is then passed through a single neuron.
The basic structure of an artificial neuron is presented in Figure 1. The neuron computes the weighted sum of its inputs plus a threshold, and the resultant signal is then passed through a nonlinear activation function. This structure is also known as a perceptron, which is built around a nonlinear neuron. The output of the neuron may be represented as
$$y(n) = \varphi\left(\sum_{j=1}^{N} w_j(n)\, x_j(n) + \theta(n)\right)$$
where $\theta(n)$ is the threshold to the neurons in the first layer, $w_j(n)$ is the weight associated with the $j$th input, $N$ is the number of inputs to the neuron, and $\varphi(\cdot)$ is the nonlinear activation function. Different types of nonlinear functions are shown in Figure 2.

Signum Function: For this type of activation function we have
$$\varphi(v) = \begin{cases} +1, & v > 0 \\ 0, & v = 0 \\ -1, & v < 0 \end{cases}$$
Threshold Function: This function is represented as
$$\varphi(v) = \begin{cases} 1, & v \ge 0 \\ 0, & v < 0 \end{cases}$$
Sigmoid Function: This function is S-shaped and is the most common form of activation function used in artificial neural networks. It exhibits a graceful balance between linear and nonlinear behaviour:
$$\varphi(v) = \frac{1}{1 + e^{-av}}$$
where $v$ is the input to the sigmoid function and $a$ is the slope of the sigmoid function. For steady convergence a proper choice of $a$ is required.
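As an illustration, the following minimal Java sketch implements the three activation functions exactly as defined above; the class name and the test inputs are our own assumptions, not part of the paper.

// Minimal sketch of the activation functions described above (Java).
// The test values in main() are illustrative assumptions.
public final class Activations {
    // Signum (hard limiter): +1 for positive input, -1 for negative input.
    static double signum(double v) {
        if (v > 0) return 1.0;
        if (v < 0) return -1.0;
        return 0.0;
    }
    // Threshold function: 1 once the input reaches the threshold, else 0.
    static double threshold(double v) {
        return v >= 0 ? 1.0 : 0.0;
    }
    // Sigmoid with slope parameter a: phi(v) = 1 / (1 + exp(-a*v)).
    static double sigmoid(double v, double a) {
        return 1.0 / (1.0 + Math.exp(-a * v));
    }
    public static void main(String[] args) {
        for (double v : new double[]{-2.0, -0.5, 0.0, 0.5, 2.0}) {
            System.out.printf("v=%5.2f  sgn=%4.1f  thr=%3.1f  sig=%5.3f%n",
                    v, signum(v), threshold(v), sigmoid(v, 1.0));
        }
    }
}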

Figure 2: Different types of nonlinear activation functions: (a) signum function or hard limiter, (b) threshold function, (c) sigmoid function, and (d) piecewise linear
Multilayer Perceptron: In the multilayer neural network or multilayer perceptron (MLP), the input signal propagates through the network in a forward direction, on a layer-by-layer basis. This network has been applied successfully to solve difficult and diverse problems by training it in a supervised manner with the highly popular algorithm known as the error back-propagation algorithm.

The scheme of an MLP using four layers is shown in the figures below. $x_i(n)$ represents the input to the network, $f_j$ and $f_k$ represent the outputs of the two hidden layers, and $y_l(n)$ represents the output of the final layer of the neural network. The connecting weights between the input and the first hidden layer, between the first and second hidden layers, and between the second hidden layer and the output layer are represented by $w_{ij}$, $w_{jk}$, and $w_{kl}$ respectively.

Figure 3: MLP block diagram

Figure 4: MLP structure

If $T_1$ is the number of neurons in the first hidden layer, each element of the output vector of the first hidden layer can be found using the following equation:
$$f_j = \varphi_j\left(\sum_{i=1}^{N} w_{ij}\, x_i(n) + \theta_j\right), \qquad i = 1, 2, \ldots, N, \quad j = 1, 2, \ldots, T_1$$
where $\theta_j$ is the threshold to the neurons of the first hidden layer, $N$ is the number of inputs, and $\varphi_j(\cdot)$ is the nonlinear activation function; the time index $n$ has been dropped to simplify the equations.
Now let $T_2$ be the number of neurons in the second hidden layer. The output of the second hidden layer can be given by
$$f_k = \varphi_k\left(\sum_{j=1}^{T_1} w_{jk}\, f_j + \theta_k\right), \qquad k = 1, 2, \ldots, T_2$$
where $\theta_k$ is the threshold to the neurons of the second hidden layer. The final output of the network can be found from
$$y_l(n) = \varphi_l\left(\sum_{k=1}^{T_2} w_{kl}\, f_k + \theta_l\right), \qquad l = 1, 2, \ldots, T_3$$
where $\theta_l$ is the threshold to the neurons of the final layer and $T_3$ is the number of neurons in the output layer. The output of the MLP may thus be expressed as
$$y_l(n) = \varphi_l\left(\sum_{k=1}^{T_2} w_{kl}\; \varphi_k\left(\sum_{j=1}^{T_1} w_{jk}\; \varphi_j\left(\sum_{i=1}^{N} w_{ij}\, x_i(n) + \theta_j\right) + \theta_k\right) + \theta_l\right)$$
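The nested expression above is simply three applications of the same per-layer computation. The following minimal Java sketch evaluates this four-layer MLP forward pass; the layer sizes, random weights, zero thresholds, and tanh activation are illustrative assumptions, not values from the paper.

import java.util.Random;

// Forward pass of the four-layer MLP described above:
// y = phi( W_kl * phi( W_jk * phi( W_ij * x + th_j ) + th_k ) + th_l ).
public class MlpForward {
    static double phi(double v) { return Math.tanh(v); } // assumed activation

    // One layer: out[k] = phi( sum_i w[i][k]*in[i] + theta[k] )
    static double[] layer(double[] in, double[][] w, double[] theta) {
        double[] out = new double[theta.length];
        for (int k = 0; k < out.length; k++) {
            double s = theta[k];
            for (int i = 0; i < in.length; i++) s += w[i][k] * in[i];
            out[k] = phi(s);
        }
        return out;
    }

    static double[][] rand(int rows, int cols, Random rnd) {
        double[][] m = new double[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++) m[i][j] = rnd.nextGaussian() * 0.5;
        return m;
    }

    public static void main(String[] args) {
        int N = 4, T1 = 5, T2 = 3, T3 = 1;        // illustrative layer sizes
        Random rnd = new Random(42);
        double[][] wij = rand(N, T1, rnd), wjk = rand(T1, T2, rnd), wkl = rand(T2, T3, rnd);
        double[] thJ = new double[T1], thK = new double[T2], thL = new double[T3]; // zero thresholds
        double[] x  = {0.1, -0.4, 0.7, 0.2};      // input x_i(n)
        double[] fj = layer(x, wij, thJ);          // first hidden layer
        double[] fk = layer(fj, wjk, thK);         // second hidden layer
        double[] y  = layer(fk, wkl, thL);         // output layer y_l(n)
        System.out.println("y(n) = " + y[0]);
    }
}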
2.1 Applications of artificial neural networks
There are large classes of problems that appear to be more amenable to solution by neural networks than by
other available techniques. These tasks often involve ambiguity, such as that inherent in handwritten character
recognition. Problems of this sort are difficult to tackle with conventional methods such as matched filtering or
nearest neighbor classification, in part because the metrics used by the brain to compare patterns may not be very
closely related to those chosen by an engineer designing a recognition system. Likewise, because reliable rules for
recognizing a pattern are usually not at hand, fuzzy logic and expert system designers also face the difficult and
sometimes impossible task of finding acceptable descriptions of the complex relations governing class inclusion. In
trainable neural network systems, these relations are abstracted directly from training data. Moreover, because
neural networks can be constructed with numbers of inputs and outputs ranging into thousands, they can be used to
attack problems that require consideration of more input variables than could be feasibly utilized by most other
approaches. It should be noted, however, that neural networks will not work well at solving problems for which
sufficiently large and general sets of training data are not obtainable.
Neural network techniques find applications in the telecommunications industry for solving problems ranging from control of a nationwide switching network to management of an entire telephone company; in air-conditioning and automotive systems; and in industrial applications such as active control of vibration and noise (using an adaptive actuator to generate equal and opposite vibration and noise), hand-printed character recognition to support automated data entry systems that recognize handwritten forms, quality control in manufacturing, event detection in particle accelerators, petroleum exploration, medical applications, financial forecasting and portfolio management, real estate analysis, marketing analysis, electric arc furnace electrode position control, semiconductor process control, chemical process control, petroleum refinery process control, continuous-casting control during steel production, food and chemical formulation optimization, speech recognition, biomedical applications, drug development, and control of copiers. Adaptivity allows the neural network to perform well even when the environment or the system being controlled varies over time. There are many control problems that can benefit from continual nonlinear modeling and adaptation. Neural networks, such as those used by Pavilion in chemical process control and by Neural Application Corp. in arc furnace control, are ideally suited to track problem solutions in changing environments. Additionally, with some programmability, such as the choices regarding the number of neurons per layer and the number of layers, a practitioner can use the same neural network in a wide variety of applications, so engineering time is saved. Another example of the advantages of self-optimization is in the field of expert systems: in some cases, instead of obtaining a set of rules through interaction between an experienced expert and a knowledge engineer, a neural system can be trained with examples of expert behavior.

3. Chaos
There is no generally accepted definition of chaos. From a practical point of view, chaos can be defined by exclusion: as bounded steady-state behavior that is not an equilibrium point, not periodic, and not quasi-periodic. The key question is: if it is not any of these, then what is it? To start the discussion, several examples of chaotic trajectories are shown here. It is evident from these pictures that the trajectories are indeed bounded, that they are not periodic, and that they don't have the uniform distribution characteristic of quasi-periodic solutions. Though this last observation does not rule out quasi-periodic behavior, the spectra of the chaotic trajectories do. A chaotic spectrum is not composed solely of discrete frequencies, but has a continuous, broad-band nature. This noise-like spectrum is characteristic of chaotic systems. The limit set for chaotic behavior is not a simple geometrical object like a circle or a torus, but is related to fractals and Cantor sets. Another property of chaotic systems is sensitive dependence on initial conditions: given two different initial conditions arbitrarily close to one another, the trajectories emanating from these points diverge at a rate characteristic of the system until, for all practical purposes, they are uncorrelated.
Chaotic behavior is also observed in natural systems such as the weather. This may be explained by a chaos-theoretical analysis of a mathematical model of such a system, embodying the laws of physics that are relevant to the natural system. Chaotic behavior occurs in many areas of practical engineering, e.g., in communications, where information transmission plays a crucial role and an ever-growing capacity for communication services is required. Two of the major requirements in communication systems are privacy and security, and the study of chaotic systems has been greatly motivated by the possibility of encoding information using a chaotic carrier.
Figure 5: Chaotic trajectories: (a) 2nd-order non-autonomous system, (b) time waveform of the 1st component of the 2nd-order non-autonomous system, (c) spectrum of the 1st component of (a), (d) 3rd-order autonomous system, (e) time waveform of the 1st component of the 3rd-order autonomous system, and (f) spectrum of the 1st component of (d)

Different Models of Chaos
There are different models of chaos: continuous models include the Lorenz, Chua, and Rössler systems, and discrete models include the Hénon map and the logistic map.
Lorenz Model: In a remarkable 1963 article, Lorenz described a three-parameter family of nonlinear first-order ordinary differential equations that, when integrated numerically on a computer, appeared to have extremely complicated solutions. This set of ordinary differential equations models some of the unpredictable behavior that we normally associate with the weather:
$$\dot{x}_1(t) = \sigma\,(x_2(t) - x_1(t))$$
$$\dot{x}_2(t) = r\,x_1(t) - x_2(t) - x_1(t)\,x_3(t)$$
$$\dot{x}_3(t) = x_1(t)\,x_2(t) - b\,x_3(t)$$
with $b = 8/3$, $r = 28$ and $\sigma = 10$. The behavior as $r$ varies is as follows:
1. $0 < r < 1$: there is only a stable equilibrium point at the origin.
2. $1 < r < 1.346$: two new stable nodes are born, and the origin becomes a saddle with a one-dimensional unstable manifold.
3. $1.346 < r < 13.926$: at the lower value the stable nodes become stable spirals.
4. $13.926 < r < 24.74$: unstable limit cycles are born near each of the spiral nodes, and the basins of attraction of the two fixed points become intertwined. The steady-state motion is sensitive to initial conditions.
5. $24.74 < r$: all three fixed points become unstable, and chaotic motions result.
Lorenz Oscillator: The equations that govern the Lorenz oscillator are
$$\frac{dx}{dt} = \sigma\,(y - x), \qquad \frac{dy}{dt} = x\,(\rho - z) - y, \qquad \frac{dz}{dt} = x\,y - \beta\,z$$
where $\sigma$ is called the Prandtl number and $\rho$ is called the Rayleigh number. All $\sigma, \rho, \beta > 0$, but usually $\sigma = 10$, $\beta = 8/3$ and $\rho$ is varied. The system exhibits chaotic behavior for $\rho = 28$ but displays knotted periodic orbits for other values of $\rho$.

Figure 6: Lorenz oscillator
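For illustration, the following small Java sketch numerically integrates the Lorenz equations above with the usual parameters sigma = 10, rho = 28, beta = 8/3; the Euler scheme, step size, and initial point are our own choices, not taken from the paper.

// Minimal Euler-integration sketch of the Lorenz system.
public class Lorenz {
    public static void main(String[] args) {
        double sigma = 10.0, rho = 28.0, beta = 8.0 / 3.0;
        double x = 1.0, y = 1.0, z = 1.0, dt = 0.01; // illustrative initial point and step
        for (int n = 0; n < 5000; n++) {
            double dx = sigma * (y - x);
            double dy = x * (rho - z) - y;
            double dz = x * y - beta * z;
            x += dt * dx; y += dt * dy; z += dt * dz;
            if (n % 1000 == 0)
                System.out.printf("n=%4d  x=%8.4f  y=%8.4f  z=%8.4f%n", n, x, y, z);
        }
    }
}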
Chaotic Neural Network: A network is called a chaotic neural network if its weights and biases are determined by a chaotic sequence.
In this section we consider the following Hopfield neural networks, which exhibit chaotic phenomena:
$$\dot{x}(t) = -C\,x(t) + A\,f(x(t)) + B\,f(x(t - \tau(t))) + I \tag{1}$$
or, componentwise,
$$\dot{x}_i(t) = -c_i\,x_i(t) + \sum_{j=1}^{n} a_{ij}\,f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}\,f_j(x_j(t - \tau_{ij}(t))) + I_i, \qquad i = 1, 2, \ldots, n$$
where $n$ denotes the number of units in the neural network, $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T \in R^n$ is the state vector associated with the neurons, $I = (I_1, I_2, \ldots, I_n)^T \in R^n$ is the external input vector, $f(x(t)) = (f_1(x_1(t)), f_2(x_2(t)), \ldots, f_n(x_n(t)))^T \in R^n$ corresponds to the activation functions of the neurons, and $\tau(t) = \tau_{ij}(t)$ $(i, j = 1, 2, \ldots, n)$ are the time delays. The initial conditions of (1) are given by $x_i(t) = \varphi_i(t) \in C([-r, 0], R)$ with $r = \max_{1 \le i,j \le n,\, t \in R} \tau_{ij}(t)$, where $C([-r, 0], R)$ denotes the set of all continuous functions from $[-r, 0]$ to $R$. $C = \mathrm{diag}(c_1, c_2, \ldots, c_n)$ is a diagonal matrix, and $A = (a_{ij})_{n \times n}$ and $B = (b_{ij})_{n \times n}$ are the connection weight matrix and the delayed connection weight matrix, respectively. It is well known that (1) can exhibit chaotic phenomena.
As an example, consider a two-neuron network of form (1) with $\tanh$ activation functions:
$$\begin{pmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{pmatrix} = -C \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} + A \begin{pmatrix} \tanh(x_1(t)) \\ \tanh(x_2(t)) \end{pmatrix} + B \begin{pmatrix} \tanh(x_1(t-\tau(t))) \\ \tanh(x_2(t-\tau(t))) \end{pmatrix}$$
Figure 7: Trajectories of state variables x1(t) and x2(t)
Let g denote a digital signal of length M, and let g(n), 0 ≤ n ≤ M−1, be the one-byte value of the signal g at position n.
Algorithm for cryptography using a chaotic neural network [11]:
1. Set the value of M.
2. Determine the parameter μ and the initial point x(0).
3. Generate the chaotic sequence x(1), x(2), x(3), ..., x(M) by the formula x(n+1) = μ x(n)(1 − x(n)), and create b(0), b(1), ..., b(8M−1) from x(1), x(2), ..., x(M) by the generating scheme that 0.b(8m−8)b(8m−7)...b(8m−2)b(8m−1) is the binary representation of x(m), for m = 1, 2, ..., M.
4. For n = 0 to M−1:
Decompose the signal byte into bits $d_i$ according to
$$g(n) = \sum_{i=0}^{7} d_i \cdot 2^i$$
For i = 0 to 7, set the weights and biases from the chaotic bits:
$$w_{ji} = \begin{cases} 1, & j = i \text{ and } b(8n+i) = 0 \\ -1, & j = i \text{ and } b(8n+i) = 1 \\ 0, & j \ne i \end{cases} \qquad j \in \{0, 1, 2, 3, 4, 5, 6, 7\}$$
$$\theta_i = \begin{cases} -\tfrac{1}{2}, & b(8n+i) = 0 \\ \tfrac{1}{2}, & b(8n+i) = 1 \end{cases}$$
For i = 0 to 7, compute the encrypted bits
$$d'_i = f\left(\sum_{j=0}^{7} w_{ji}\, d_j + \theta_i\right)$$
where f(x) is 1 if x ≥ 0 and 0 otherwise, and form the encrypted byte
$$g'(n) = \sum_{i=0}^{7} d'_i \cdot 2^i$$
End for.

For chaotic systems, it is well known that:
1) they exhibit sensitive dependence on initial conditions;
2) there exist trajectories that are dense and bounded, but neither periodic nor quasi-periodic, in the state space.
Hence the chaotic binary sequence is unpredictable, and it is very difficult to decrypt an encrypted image correctly by making an exhaustive search without knowing x(0) and μ. The CNN therefore offers a guaranteed high level of security.
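The following hedged Java sketch implements the byte-level algorithm above; the class and method names, the test bytes, and the key values are our own assumptions. Since each key bit either keeps or flips the corresponding plaintext bit, the transform is an involution: applying it twice with the same key (μ, x(0)) recovers the signal.

// Sketch of the chaotic-neural-network cipher described above: the logistic
// map supplies one key bit per plaintext bit; each bit b selects weight +/-1
// and bias -/+0.5 so the hard limiter either keeps or flips the bit.
public class CnnCipher {
    static int f(double v) { return v >= 0 ? 1 : 0; } // hard limiter f(x)

    // Encrypts (and, being an involution, also decrypts) signal g in place.
    static void transform(int[] g, double mu, double x0) {
        double x = x0;
        for (int n = 0; n < g.length; n++) {
            x = mu * x * (1.0 - x);                // logistic map x(n+1)
            double frac = x;
            int out = 0;
            for (int i = 0; i < 8; i++) {
                frac *= 2;                          // next bit of binary expansion
                int b = (int) frac;                 // b(8n+i)
                frac -= b;
                int d = (g[n] >> i) & 1;            // plaintext bit d_i
                double w  = (b == 0) ? 1.0 : -1.0;  // diagonal weight w_ii
                double th = (b == 0) ? -0.5 : 0.5;  // bias theta_i
                out |= f(w * d + th) << i;          // encrypted bit d'_i
            }
            g[n] = out;
        }
    }

    public static void main(String[] args) {
        int[] g = {72, 101, 108, 108, 111};         // illustrative signal bytes
        transform(g, 3.9, 0.75);                    // encrypt with key (mu, x0)
        System.out.println(java.util.Arrays.toString(g));
        transform(g, 3.9, 0.75);                    // the same key decrypts
        System.out.println(java.util.Arrays.toString(g));
    }
}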

Figure 8: Block diagram of conventional cryptography
Figure 9: Block diagram of ANN based chaotic cryptography

Criteria for Designing Chaotic Cryptosystems
When designing chaotic equations for data encryption, it is important to consider the time for data encryption (and decryption) and the level of security. The following are several important criteria for the design of a good chaotic cipher. The computation time for encryption and decryption depends on the complexity of the equations and the value of the state variable.
The complexity of the equations: The lower the complexity of the equations, the shorter the computation time will be. If the complexity of the equations is low, it obviously reduces the computation time during data encryption and decryption; if it is high, a longer time is needed. In order to choose equations of lower complexity, a discrete chaotic map is suggested: a discrete map involves only basic arithmetic operations such as summation, subtraction, multiplication, and division, whereas a continuous flow involves differential or integration type operations when calculating the value of the next state variable.
The value(s) of the state variable(s): From the data complexity point of view, an integer value of the state variable is preferable. If the value of the state variable is an integer, computing the value of the next state variable takes a shorter time; if it is a floating point number, a longer time is needed.
The level of security: Most chaotic encryption methods are basically symmetric key encryption, in which both encryption and decryption use the same set of chaotic equations. In most cases, the parameters of these chaotic equations and the initial values of the state variables are used as the encryption keys (the symmetric keys). Hence the level of security depends on two primitive factors: the key length and the output of the encrypted cipher.
Key length and number of keys: If the key length or the number of keys is small, the time needed for cryptanalysis of the keys is shortened. However, setting the key length poses an intrinsic problem, because most chaotic equations allow only a relatively narrow range of parameters with chaotic behavior. The traditional key value of a chaotic equation is a floating point number, which means the key length can be increased based on the precision of the floating point representation. However, as mentioned before, floating point numbers substantially increase the computation time. This leads to a contradiction in designing a good chaotic encryption method, since computational complexity and system efficiency are major factors in the design of a cryptosystem, especially a real-time cryptosystem. Therefore an integer-valued key is proposed here for the design of a real-time cryptosystem.
Number of sets of chaotic equations: A large number of sets of chaotic equations induces difficulties in cryptanalysis (and hence a better security level); a small number makes cryptanalysis easier.
Chaotic real-time encryption based on the synchronization technique uses two identical sets of chaotic map equations, such as the logistic equation. At the transmitter, the chaotic equation generates a chaotic signal, and an add-up function mixes (masks) the original signal with the chaotic signal. At the receiver, the chaotic equation generates a chaotic signal identical to that on the transmitter side, and a reverse mix-up (unmasking) function retrieves the original signal from the received signal and the generated chaotic signal, as sketched below.
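A minimal Java sketch of this additive masking scheme follows; the use of the logistic map, the key values, and the sample data are illustrative assumptions.

// Additive chaotic masking: transmitter and receiver run identical logistic
// maps from the same key (mu, x0); the transmitter adds the chaotic signal
// to each sample and the receiver subtracts the identically generated signal.
public class ChaoticMasking {
    public static void main(String[] args) {
        double mu = 3.99, x0 = 0.31;               // shared key
        double[] signal = {0.2, 0.5, -0.1, 0.8};   // original samples

        // Transmitter: masked sample = signal + chaos
        double xt = x0;
        double[] tx = new double[signal.length];
        for (int n = 0; n < signal.length; n++) {
            xt = mu * xt * (1.0 - xt);
            tx[n] = signal[n] + xt;                // add-up (masking) function
        }

        // Receiver: recovered sample = received - identically generated chaos
        double xr = x0;
        for (int n = 0; n < tx.length; n++) {
            xr = mu * xr * (1.0 - xr);
            System.out.printf("sent %.4f  recovered %.4f%n", tx[n], tx[n] - xr);
        }
    }
}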
Chaotic real-time encryption can also use two identical sets of chaotic flow equations, such as the Lorenz and Rössler equations, in which case the receiver receives a driven message from the sender. At the transmitter, the original signal is imposed on the chaotic equation, which outputs a chaotic signal that is transmitted to the receiver. At the receiver, the received chaotic signal is injected into the chaotic equation, which outputs the original signal. Under this chaotic real-time encryption scheme, a certain time is needed to synchronize the state variables with the transmitter.

Figure 10: Architecture of the self-synchronization technique in a typical real-time chaotic cryptosystem
Chaotic cryptography system in a secured image application:
Figure 11(a, b, c): Known/chosen plaintext attack experiments
4. Conclusion
Artificial neural networks are a simple yet powerful technique with the ability to emulate highly complex computational machines. In this work we have used this technique to build simple combinational logic and sequential machines using the back-propagation algorithm. A comparative study has been made between two different neural network architectures, and their merits and demerits are mentioned. ANNs can be used to implement much more complex combinational as well as sequential circuits.
Data security is a prime concern in data communication systems. The use of ANNs in the field of cryptography is investigated, and a chaotic neural network for digital signal cryptography is analyzed. Better results can be achieved by the use of these algorithms. Thus, artificial neural networks can be used as a new method of encryption and decryption of data.

References
[1] T. Godhavari, N. R. Alainelu and R. Soundararajan, Cryptography using Neural Network, IEEE Indicon
2005 Conference, 2005, pp. 258-261.
[2] Shiguo Lian, Zhongxuan Liu, Zhen Ren, Haila Wang, Hash function based on chaotic neural networks,
ISCAS 2006, pp. 237-240.
[3] Andreas Ruttor, Wolfgang Kinzel and Ido Kanter, Neural cryptography with queries, Journal of statistical
mechanics: Theory and Experiment, 2005.
[4] Tai-Wen Yue, Suchen Chiang, "A Neural Network Approach for Visual Cryptography," IEEE-INNS-ENNS
International Joint Conference on Neural Networks (IJCNN'00)-vol 5, 2000, pp. 494-499
[5] Ahmed M. Allam and Hazem M. Abbas, Improved security of neural cryptography using don't-trust-my-partner and error prediction, in Proceedings of the 2009 International Joint Conference on Neural Networks, 2009, pp. 1900-1906.
[6] Jason L. Wright and Milos Manic, Neural Network Architecture Selection Analysis With Application to
Cryptography Location, WCCI 2010 IEEE World Congress on Computational Intelligence, 2010, pp. 2941-
2946.
[7] Ilker Dalkiran, Kenan Danisman, ANN Based Chaotic Generator for Cryptology, Turk Journal of Electrical
Engineering and Computer Science, vol. 18, no. 2, 2010, pp. 225-240.
[8] Khaled M. Alallayah, Waiel F. Abd El-Wahed, Mohamed Amin and Alaa H. Alhamami, Attack of Against
Simplified Data Encryption Standard Cipher System Using Neural Networks, Journal of Computer Science 6
(1), 2010, pp. 29-35
[9] Raymond S. T Tee and Henery W.S Lam, A Chaotic Real-time Cryptosystem using a Switching Algorithmic-
based Linear Congruential Generator (SLCG), IJCSNS International Journal of Computer Science and
Network Security, vol. 6 no.8B, Aug 2006, pp. 116-124
[10] C Li et.al, Cryptanalysis of a Chaotic Neural Network Based Multimedia Encryption Scheme, Advances in
Multimedia Information Processing - PCM 2004 Proceedings, Part III, volume 3333 of Lecture Notes in
Computer Science, 2004, pp. 418-425
[11] Scott Su, Alvin Lin and Jui Cheng yen, Design and Realization of A New Chaotic Neural Encryption
Decryption Network, 2000 IEEE Asia-Pacific Conference on Circuits and Systems, 2000, pp. 335-338
[12] Thomas S Parker, Leon O Chua, Chaos: A tutorial for engineers, Proceedings of IEEE, vol. 75, no. 8, Aug
1987, pp. 982-1008
[13] Miles E Smid, Dennis K Branstad, The data encryption standard: Past and future, Proceedings of IEEE, vol.
76, no. 5, May 1988
[14] Lian, S., Chen, G., Cheung, A., Wang, Z.: A chaotic-neural-network-based encryption algorithm for JPEG2000 encoded images. In: Proc. ISNN 2004-II. LNCS 3174, 2004, pp. 627-632
[15] Lian, S., Sun J., Li Z., Wang, Z.: A Fast MPEG4 Video Encryption Scheme Based on Chaotic Neural
Network. In Proc. ICONIP 2004. LNCS 3316, 2004, pp. 720-725
[16] Yen, J.C., Guo, J.I.: The design and realization of a chaotic neural signal security system. Pattern Recognition and Image Analysis 12, 2002, pp. 70-79
[17] Yen, J.C., Guo, J.I.: A chaotic neural network for signal encryption/decryption and its VLSI architecture. In: Proc. 10th VLSI Design/CAD Symposium, 1999, pp. 319-322
[18] Cardoza-Avendaño L., López-Gutiérrez R.M., Inzunza-González E., Cruz-Hernández C., García-Guerrero E., Spirin V., and Serrano H., Encryptor Information Software Using Chaotic Generators, World Academy of Science, Engineering and Technology, 54, 2009, pp. 391-395
[19] Khalil Shihab, A back propagation neural network for computer network security, Journal of Computer
Science, vol. 2, issue 9, 2006, pp. 710-715
[20] Monisha Sharma and Manoj Kumar Kowar, Image encryption techniques using chaotic systems: A review,
International Journal of Engineering Science and Technology, vol. 2(6), 2010, pp. 2359-2363
[21] S Zhou, Q Zhang, X Wei and C Zhou, A Summarization on Image Encryption, IETE Technical Review, vol.
27, issue 6, 2010, pp. 503-510
[22] L P Yee, L C De Silva, Application of Multi Layer Perceptron Networks in Public Key Cryptography,
IJCNN, 2002
[23] Odlyzko A.M., Discrete Logarithms in Finite Fields and Their Cryptographic Significance, EUROCRYPT
84, 1984.
[24] Rivest R., Shamir A. and Adlemann L., A Method for Obtaining Digital Signatures and Public-Key Cryptosystems, Communications of the ACM, 21, 1978, pp. 120-126.
[25] Vrahatis M.N., Androulakis G.S., Lambrinos J.N. and Magoulas G.D., A Class of Gradient Unconstrained Minimization Algorithms with Adaptive Step size, Journal of Computer Application and Mathematics, 114, no. 2, 2000, pp. 367-386
[26] Menezes A.J., Van Oorschot C.P. and Vanstone S.A. Handbook of Applied Cryptography, CRC Press, 1996.
[27] Diffie W. and Hellman M., New Directions In Cryptography, IEEE Transactions on Information Theory, 22, 1976, pp. 644-654
[28] Nigel Crook and T O Scheper, A Novel Chaotic Neural Network Architecture, European Symposium on
Artificial Neural Networks, 2001, pp. 295-300
[29] Specht D.F, Probabilistic Neural Networks, Neural Networks, 3, no. 1, 1990, pp. 109-118
[30] Meletiou G., Tasoulis D.K. and Vrahatis M.N., A First Study of the Neural Network Approach to the RSA Cryptosystem, IASTED 2002 Conference on Artificial Intelligence, 2002, pp. 483-488.
[31] Vrahatis M.N., Androulakis G.S., Lambrinos J.N. and Magoulas G.D., A Class of Gradient Unconstrained Minimization Algorithms with Adaptive Step size, Journal of Computer Application and Mathematics, 114, no. 2, 2000, pp. 367-386.



JMF Enabled Video Conference System Based on a Service Oriented
Infrastructure for Network Centric Warfare Collaboration
¹Nasrin Sohrabi, ²Pooia Lalbakhsh, and ³Mehdi N. Fesharaki
¹,²Islamic Azad University Borujerd Branch, Borujerd, Lorestan, Iran
³Department of Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
¹Sohrabi_na@yahoo.com, ²Lalbakhsh@ieee.org, ³Mehfesharaki@yahoo.com

Abstract
The paper introduces a service oriented video conference system implemented with the Java Media Framework (JMF). The proposed four-layer architecture presents an open, distributed video conference system which prepares a flexible and scalable collaboration infrastructure for critical environments such as network centric warfare. The proposed system not only creates service oriented audio/video connections in the three models of unicast, multicast, and broadcast, it also takes advantage of supervising and monitoring levels responsible for controlling the system entities and their interconnections. All multimedia transmissions are done according to the Real-time Transport Protocol (RTP). Multimedia streams can be delivered in two formats: the H.263 compressed format and JPEG. The proposed system not only introduces a highly flexible distributed video conference architecture, it also eliminates the tightly coupled connections of traditional multimedia systems. Such a granular service oriented system can form a highly maintainable, adaptable, and robust package which can be integrated into existing systems with the least overhead and complexity.
Keywords: H.263, network centric warfare, Real-time Transport Protocol, service oriented architecture, video conference

1. Introduction
The introduction of new concepts in the information age changes the way many kinds of systems and their related infrastructures are designed, implemented, and deployed [1]. The emergence and growth of the multi-agent philosophy and swarm-based systems, and finally social and cultural strategies, opened up new areas of cognitive and collaborative doctrines such as network centric warfare, or NCW [2]. The high-level dynamics of such novel approaches are totally different from industrial-age systems, which are mostly based on technological or solid environmental characteristics. Such high-level cognitive dynamics need to be defined in the conceptual levels of the system architecture and cannot be satisfied by traditional roadmaps [3].
Unlike industrial-age system architectures, which consider human and system in two independent layers, most new-age systems not only consider human entities as the consumers of processed and refined information, they also treat humans as intermediate manipulating entities which cannot be separated from the other parts of the system; such novel doctrines usually lead to interwoven human-computer mixed architectures. NCW is such a doctrine, in which the human plays the most important role in the different operational levels of the system, and the other aspects of the system are architected around the human and his cognitive dynamics. In this sense, all of the psychological and cognitive aspects of the human should be well analyzed and involved in the ultimate architecture.
Collaboration is one of the most important services of cooperative environments such as NCW, as the enabler of member communications for the required synchronization in each operational group. It is interesting to note that about 55% of human communication is accomplished by gestures and body language emerging through visual contact, which is totally eliminated in traditional vocal communication channels. On the other hand, since the human dimension of NCW is based on human swarms, this potential part of communication cannot be ignored when the system is dealing with cognitive and social characteristics such as trust or commitment.
Video conference is a common comprehensive collaboration service, not only for human face-to-face communication but also for other operational entities of the battlefield, such as UAVs or even radar stations. In addition to its potential to manipulate the cognitive dynamics of social systems, it can also be used as a powerful sensing medium to provide a real-time common operational picture, which is the visual aspect of NCW [4; 5]. Although it may be used for both military and urban applications, the system adopted for critical environments would be totally different in the sense of capability and technological design. In other words, according to the fragile and vital characteristics of critical environments such as the battlefield, it is not possible to make use of traditional video conference systems for battlefield collaboration, particularly when we are talking about NCW.
In network centric warfare, video conference is considered a coarse-grained service, and each service shares special characteristics with the other services. Agility, scalability, robustness, and maintainability are important characteristics defined for NCW services, while other popular factors such as security, performance, etc. are still under consideration. Integrating these desirable attributes into one system as a whole has been an illusion that never came to reality in traditional industrial-age systems with centralized structures, while novel methods such as service orientation and multi-agent strategies are showing potential that can be used to approach the ideal structures.
In this paper we propose a multi-layer architecture for a video conference system based on the NCW requirements, considering service oriented architecture. This video conference system is formed as a coarse-grained orchestrated service satisfying all the communication requirements in battlefield collaborative environments. The architecture focuses on the core services, so each service can be implemented free from the other layers, such as the user interface, which improves the system's flexibility and customization. To support agility, all the code belonging to the web services and the user interface is written in Java. The Java Media Framework (JMF) [6] is used for multimedia streams, and RTP is used as the transmission protocol [7].
In addition to agility, flexibility, and scalability, our proposed architecture prepares an appropriate infrastructure on which an extensive range of heterogeneous systems can be used, because of the transparency provided by the loosely coupled connections of the underlying SOA [8]. Therefore, existing technologies can be easily attached to the system without time-consuming modifications to the current system. Such loosely coupled connections allow various services to operate even in multinational environments such as NATO, directed under different organizational policies and outsourcing strategies.
In the following, we first review the vital characteristics of battlefield and critical environments; then the proposed service oriented video conference system is studied and the implementation assumptions are explained. Finally, a conclusion section ends the paper.

2. Characteristics of the Critical Environments
Although video conference plays a vital role in exchanging information in a face-to-face manner, the limited number of supportable users and the required bandwidth are two main technological challenges that must be faced. In addition to these two challenges, critical environments require some other attributes that worsen the complexity of the problem. For critical real-time environments such as NCW, the following characteristics have to be considered for each sub-system:
Agility
Scalability
Robustness
Maintainability
Agility means the capability of rapidly and cost-efficiently adapting to changes [9]. It is evident that traditional stove-piped systems are not able to satisfy such an important characteristic because of their complex, interwoven, and mostly centralized structure. The loosely coupled connections of service oriented architecture can be useful for improving system agility, because they allow runtime strategic changes. Scalability of a system indicates its ability either to handle growing amounts of work in a graceful manner or to be readily enlarged [10]. It should be noted that making use of tightly coupled connections and a centralized philosophy (even with distributed technology) is the bottleneck of scalability. Robustness refers to the stability of a system in accomplishing its desired functionality in the presence of failure within its internal structure or the environment. Interwoven systems cannot provide a permissible level of robustness, since failure cannot be isolated and is distributed towards the other parts of the system. Maintainability is the ability of the system to be modified in order to correct defects, meet new requirements, make future maintenance easier, and cope with a changed environment [11]. In critical environments the whole system and its sub-systems should be runtime-maintainable: such systems can be repaired or upgraded during system operation without any destructive interruptions. Centralized interwoven systems are not able to satisfy this characteristic because of their complex structure and tight inter-dependencies.
The proposed architecture tries to overcome the challenges involved in traditional video conference systems and satisfy the four characteristics mentioned above. The next section outlines this architecture and reviews the emergent advantages.

3. Service Oriented Video Conference
Although some video conference systems with various connection models have previously been proposed [12], we set up a novel video conference system on a service oriented platform to take advantage of SOA outcomes.
Figure 1 shows the four-layer architecture of the proposed video conference system. The first layer is the application layer. The system user deals with this layer to use the services of the service layer. This upper layer is not tightly attached to the underlying layer; thus each user is able to create its own interface to take advantage of system services according to its organizational standards and policies. The service layer contains two categories of services, namely video conference core services and complementary services. Video conference services are the essential services for the process of video and sound stream manipulation. Complementary services are not dedicated to multimedia applications and might be provided for other processes. For example, an authentication service may be thought of as a complementary service which may be used as a preliminary service in order to benefit from the video conference services. It should be noted that in this architecture the user interface layer is able to connect to each desired but authorized service. These services float on an IP network freely, from various vendors, in different containers, and with different accessibility policies, and users can make selections among them to complete their SOA puzzle.
Figure 2 illustrates the mentioned flexibility considering three different users and a cloud of services. In this example, the application layer of User 1 tries to bind to some services to create a service oriented multimedia system equipped with a kind of authentication process. According to its organizational policies, User 1 selects the authentication method served by Authentication Service 1, or Aut.S1 as given in the figure. This user also prefers the strategies used by Audio Service 1 and Video Service 1, shown by Aud.S1 and V.S1 respectively. So the resulting application on the platform of User 1 would be an orchestration among the three mentioned services. Similar to User 1, User 2 is also trying to make a multimedia application with the same capabilities, while selecting different services. User 2 finds Authentication Service 2, Audio Service 2, and Video Service 2 more convenient
according to its organizational purposes and policies (suppose that Authentication Service 1 uses a traditional username and password in text format, while Authentication Service 2 uses fingerprints to complete the authentication process; User 1 prefers usernames and passwords, while User 2 prefers fingerprints). On the other hand, User 3 uses the same service layer to provide a different application with different capabilities: this user uses Audio Service 2, Text Service 1, and Cryptography Service 2 to create a secure audio/text application.
The above example shows the relation between the two top layers of the architecture and how they can form different applications for different goals. It should also be noted that all the service bindings are loosely created based on SOA standards. On the other hand, since the system is not created as a whole but as a granular virtual system, it can be reconfigured by swapping service connections, adding new connections, or removing them. This feature results in a highly maintainable system, which is a must in critical environments such as NCW.

Figure 1: The 4-layer video conference architecture

Figure 2: An example of the service layer with different services; some services are multimedia services, and the others can be considered complementary services. Aut.S stands for Authentication Service, Aud.S for Audio Service, V.S for Video Service, T.S for Text Service, and C.S for Cryptography Service.

After services have been selected through the two top layers of the architecture, the transport layer will be
responsible for delivering the request/response messages. As shown in Figure 1, this layer involves different kinds of transport protocols for different kinds of applications. For multimedia communications we use the Real-time Transport Protocol, or RTP, because of its real-time nature. This protocol uses UDP to provide the fastest possible stream transmission over the network. Web service request/response communications are accomplished based on the SOAP protocol, which is explained in the following.
In our proposed architecture, the concept of service oriented architecture emerges through web service technology; therefore, the service layer of the architecture is implemented as web services considering the three standards of WSDL (Web Service Description Language), SOAP (Simple Object Access Protocol), and UDDI (Universal Description Discovery and Integration). The WSDL is prepared as an XML file containing the information required to connect to the corresponding web service [13]. It contains the prototypes of the internal classes and methods of the service. Having such information, the user is able to select the preferred method and call it by passing the required arguments as specified in the WSDL file. All the request/response communications are passed in the form of SOAP messages. UDDI can be considered a directory of the existing services in the service layer. The entries of the UDDI repository contain service names together with the addresses of the corresponding WSDL files. A web service is considered an existing web service if it is advertised by the system UDDI (UDDI repositories may themselves be implemented as web services over the network). Each web service is deployed on a service container, and the IP address of this container is considered the web service's IP address.
Figure 3 shows how a sender and a receiver communicate with each other through a web service. As shown in the figure, each web service is deployed on a web service container on the underlying network; we use Apache Axis as our web service container. Network entities acting as sender or receiver nodes (or both) are connected to the web service according to the corresponding WSDL in order to use the collaboration service. All the request/response communications are accomplished according to SOAP, and stream transmissions are done by RTP. On the sender/receiver side, both applications and web browsers can be used, thanks to the loosely coupled connections presented by web services. The authentication process can be completed using the same web service's functions or through another independent web service as in [5]. The system may contain duplicated web services on the network, even with different authentication or communication policies. After passing the authentication process, on the sender side an RTP session is created through JMF by creating and initializing a session manager. The captured stream is transferred through the created session towards the web server. The receiving web server then sends the stream to the receiver; if there is more than one receiver, the web server clones the stream and sends the clones to the receivers.
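A hedged Java sketch of this sender-side session setup follows; it is not the authors' actual code but is based on the standard JMF RTP API, and the method name, ports, and host parameter are illustrative placeholders. The DataSource is assumed to come from a suitably configured JMF Processor.

import java.net.InetAddress;
import javax.media.protocol.DataSource;
import javax.media.rtp.RTPManager;
import javax.media.rtp.SendStream;
import javax.media.rtp.SessionAddress;

// Creates and initializes an RTP session manager, targets the collaboration
// web server, and starts transmitting the captured stream.
public class RtpSender {
    public static SendStream openSession(DataSource processedSource,
                                         String serverHost, int serverPort,
                                         int localPort) throws Exception {
        RTPManager mgr = RTPManager.newInstance();           // session manager
        mgr.initialize(new SessionAddress(InetAddress.getLocalHost(), localPort));
        mgr.addTarget(new SessionAddress(InetAddress.getByName(serverHost),
                                         serverPort));        // the web server
        SendStream stream = mgr.createSendStream(processedSource, 0);
        stream.start();                                       // begin RTP transmission
        return stream;
    }
}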
On the other hand, each network node is able to connect to the system by means of the WSDL information only, so the internal structure of the system remains secure, which is important for critical environments. It should be noted that, since all the connecting information for sender/receiver entities is gathered through WSDL files, replication of services is easily done and the system's scalability is guaranteed. Moreover, since all the connections are accomplished through web servers, the load is totally transferred from the connecting entities to the collaboration servers.
Finally, the network layer of the architecture may be any kind of IP-based network. It should be noted that web services are independent from the underlying network, as the containers make the network transparent to the web services. This feature improves web service security, since only the container is aware of the internal code of the web service; all requesters must deliver their requests to the container and receive the corresponding responses back from the container.
Figure 3: Relations between the web service and the sender/receiver entities in the proposed architecture

4. Implementation of the System
We implemented the system according to the following assumptions and technologies:
The Java programming language is used to code the user interface and all web services.
The Java Media Framework (JMF) is used to support multimedia processing and transmission.
RTP is used as the transport protocol for video and audio streams.
Apache Axis is used as an open source Java-based web service container.
Java Database Connectivity (JDBC) is used to access the database that stores user information.
Using the Java programming language improves the system's agility, since Java code depends only on the Java Virtual Machine (JVM) and is independent of the platform. As mentioned above, all the building blocks of our system are based on Java.
According to the collaboration functionality, we implemented our service layer considering the following sub-layers:
Security layer: this layer of the proposed architecture deals with authentication, authorization, and some other security-based processes such as encryption or steganography [14]. These processes can be implemented as functions of a web service or as independent web services; this is determined according to the whole view of the system architecture and the related policies.
Monitoring layer: this layer is responsible for monitoring processes such as event logging and sender/receiver collaboration information. In our system, logs are recorded on both the server and the clients. The server holds the operational logs of the clients' activities, while clients keep multimedia logs of their video conferences if desired.
Collaboration layer: in this layer the audio/video connections are created. It should be noted that the web service is always the intermediate node, and no direct peer-to-peer connections are allowed. Before stream transmission, the multimedia information is compressed with the H.263 algorithm to save the existing bandwidth [15]; a sketch of this step is given below.
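A hedged Java sketch of this compression step follows; the class and method names are our own, and the polling loop is a simplification of the usual listener-based JMF state handling. It configures a JMF Processor so that each video track is encoded as H.263 over RTP before being handed to the RTP session.

import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.Processor;
import javax.media.control.TrackControl;
import javax.media.format.VideoFormat;
import javax.media.protocol.ContentDescriptor;

// Requests RTP-ready output from a Processor and sets H.263/RTP on video tracks.
public class H263Encoder {
    public static Processor createH263Processor(MediaLocator capture) throws Exception {
        Processor p = Manager.createProcessor(capture);
        p.configure();
        while (p.getState() < Processor.Configured) Thread.sleep(20); // wait for Configured
        // Deliver RTP-ready data and encode each video track as H.263/RTP.
        p.setContentDescriptor(new ContentDescriptor(ContentDescriptor.RAW_RTP));
        for (TrackControl t : p.getTrackControls()) {
            if (t.getFormat() instanceof VideoFormat)
                t.setFormat(new VideoFormat(VideoFormat.H263_RTP));
        }
        p.realize();
        while (p.getState() < Processor.Realized) Thread.sleep(20);   // wait for Realized
        return p;
    }
}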
Finally, the human interactions are accomplished through the application layer. Since the other end is a web service, this layer can contain both applications and web browsers. It should be noted that the client side can be developed independently; only the corresponding WSDL needs to be considered.
Figure 4 shows the service oriented layer of the implemented video conference application regarding service interconnections. When the number of replicated web services grows, a service bus can be added to the system, responsible for dispatching and optimizing service interconnections. In addition to the service bus, other services can be added to the system for encryption, steganography, and/or other high-level manipulations such as speech recognition or face detection. Adding such services is independent from the others and will not interrupt the functionality of the system. For ultra-large systems with vast numbers of services, service brokers may also be
required, which is not in the scope of this paper.

Figure 4: Video conference system entities and their relations

Figure 5 shows the service oriented video conference system at runtime. In this version of the application, unicast, multicast, and broadcast audio/video communications are all supported. Video streams can be communicated in two formats: JPEG and the H.263 compressed format. Each receiver is able to record the incoming streams. Text messaging is also available to users through a text service. The messenger web service is able to supervise the communications and even save each stream transparently to the users. Figure 6 shows the network behavior when a user sends audio and video streams simultaneously to two receivers. In this transmission the H.263 compression algorithm is adopted.

5. Conclusions
When information is the value, real-time exchange of information plays a vital role in the life cycle of any information-based system. Although video conference is an efficient way of exchanging information, the existing challenges of such systems still make them unpopular. Scalability and the limited number of supportable users are two major challenges that this paper faces by introducing a novel service oriented video conference system, in which each business function can be implemented as an independent standard web service, working with the others to accomplish a controlled and supervised collaboration process. In addition to the granularity, flexibility, maintainability, and robustness which are the consequences of SOA, the system can be combined with other existing and/or future technologies without imposing complexity on the system structure. We implemented the proposed architecture using the Java programming language, the JMF library, the RTP protocol, and the H.263 compression algorithm; the system works in three modes: unicast, multicast, and broadcast.
Figure 5: Video conference system in run time
Figure 6: Network behavior while transmitting simultaneous audio/video streams from one sender to two receivers


References
[1] D. Alberts, R. Hayes, Power to the Edge: Command and Control in Information Age, CCRP Publications, 2003.
[2] P. Lalbakhsh, N. Sohrabi, M. N. Fesharaki, Swarm Formation in a Multi-swarm Network Centric Warfare,
Proc. Int. Conf. Computer and Network Technology, India, 2009, pp. 138-141.
[3] P. Lalbakhsh, Highly Reliable Interconnection Network for C4ISR Framework, M.S.c Thesis, School of
Computing, Islamic Azad University-Science & Research Branch, Tehran, Iran, 2006.
[4] P. Lalbakhsh, M. S. K. Fasaei, N. Sohrabi, M. N. Fesharaki, An Ontology-based SOA Enabled Brokering Model
for Common Operational Picture, Regarding Network Centric Warfare Services, Proc. 4th National Conf. of Iran's Scientific Society on Command, Control, Communications, Computers & Intelligence, Tehran, Iran, 2010.
[5] P. Lalbakhsh, N. Sohrabi, M. N. Fesharaki, The Role of Service Oriented Architecture in Battlefield Situational Awareness, Proc. 2nd IEEE Int. Conf. Computer Science and Information Technology, China, 2009, pp. 476-479.
[6] JavaTM Media Framework API Guides, Sun Microsystems Inc., USA, 1999.
[7] A. Durresi, R. Jain, RTP, RTCP, and RTSP-Internet Protocols for Real-Time Multimedia Communication, in the
Industrial Information Technology Handbook, edited by R. Zurawski, Published by CRC Press, 2005.
[8] P. Lalbakhsh, A. Goodarzi, M. N. Fesharaki, Towards Virtual Audio/Video Environments using Semantic
Service Composition on a Service Oriented Infrastructure, Int. Conf. Advanced Computer Control, Singapore,
2008, pp. 485-492.
[9] www.wikipedia.org See definition for Agility.
[10] A. B. Bondi, Characteristics of Scalability and their Impact on Performance, Proc. 2nd Int. Workshop on
Software and Performance, Canada, 2000, pp. 195-203.
[11] www.wikipedia.org See definition for Maintainability.
[12] P. Zeng, Y. Hao, Y. Song, Y. Liu, A Grouped Network Video Conference System Based on JMF in
Collaborative Design Environment, 3rd IEEE Int. Conf. Signal-Image Technologies and Internet-based
Systems, China, 2007, pp. 129-136.
[13] Developing Web Services - Jbuilder 2005, Borland Software Corporation, USA, 2005.
[14] P. Lalbakhsh, S. Ravanbakhsh, M. N. Fesharaki, N. Sohrabi, Service Oriented Steganography - A Novel Approach towards Autonomic Secured Distributed Heterogeneous Environments, Proc. Int. Conf. Signal Processing Systems, Singapore, 2009, pp. 418-422.
[15] K. Rijkse, H.263: Video Coding for Low-Bit-Rate Communication, IEEE Communication Magazine, 34(12),
1996, pp. 42-45.

THE STUDY ON CAPITAL MARKET AND ITS BEHAVIOUR
M. Thiyagarajan*, T. Chitrakalarani** and S. Indrakala***
*Professor, School of Computing, SASTRA University, Thanjavur.
**Associate Professor, Kundavai Nachiaar Govt. Arts College(W) Autonomous, Thanjavur
***Asst. Professor, Kundavai Nachiaar Govt. Arts College(W) Autonomous, Thanjavur
Email: s.indirakala@yahoo.com


Abstract

In this study an attempt is made to model the capital market as a complete market. The variation among the averages of major industries competing for development is drawn from the respective capital analysis. A moving average technique is adopted to illustrate the behaviour of these major industries with an appropriate probability distribution. By the central limit theorem, the distribution of the sample average is normal, the sample mean converges to the population mean, and the sample variance converges to the population variance as the number of observations grows. Every time series is affected by cyclic variation; we remove the cyclic variation by identifying the period of the cycle. This is illustrated through the analysis of specific data collected from standard sources.

Keywords: Capital Market, Central Limit Theorem, Complete Market, Portfolio Analysis.


1. Introduction
In stock market terminology, the term capital markets refers to the markets where all financial instruments, such as shares and bonds, as well as commodities and currencies, are traded.

The capital market is the part of the financial market in which long-term debt and securities are traded by brokers; it includes the share/stock market and the bond market. The buyers are the general public, middle investors, companies, and brokers who are interested in investing their money to earn a profit in the form of interest or dividends, or a profit from speculating. The capital market is a source of long-term funds for companies, because any company which has started its business can sell its shares in the primary capital market and is subsequently allowed to sell them in the secondary market. Buyers are also allowed to sell their purchased shares at any time. Each country controls its capital market through regulation. In the USA, the regulatory authority is the Securities and Exchange Commission, established in 1934, whereas in India the capital market regulator is the Securities and Exchange Board of India, which came into existence in 1992 after the Harshad Mehta scam. India's capital market is very wide, and more than 30 million investors trade their money in it. BSE and NSE are the famous stock exchanges in India, like the New York Stock Exchange in the USA.

Price discovery efficiency has been considered the predominant feature of an efficient futures market (Telser (1981), Garbade and Silber (1983)). By applying the Vector Autoregression (VAR) methodology, it has been observed that the futures market is relatively more efficient than the cash market. In addition, the above papers report efficient price discovery through the futures market during highly volatile periods, viz., the year around 11 September 2001 (the terrorist attack on America) and 17 May 2004 (the biggest ever stock market crash in India, due to unexpected parliamentary election results). Efficient price discovery in the futures market implies that traders can take significant hedging positions to minimize their risk exposure in the cash market.

The basic objective of financial reporting is to provide investors and creditors with useful information that helps them assess the amount, timing, and uncertainty of cash flows, so as to help them make rational investment and credit decisions. Over the past three decades, a significant amount of accounting research has emerged to evaluate the usefulness of accounting data to investors and others by explaining the association between the release of accounting numbers and security returns (prices). The underlying assumption of these studies is that capital markets are efficient. In an efficient capital market, security prices react instantaneously and in an unbiased manner to impound new information, in such a way that no opportunity is left for market participants to consistently earn abnormal returns. Previous empirical research in the accounting and financial literature provides evidence supporting the efficient market hypothesis.

The objective of this paper is to review certain aspects of the capital market that make it a complete market, and its implications for accounting numbers, with some applications to the S&P CNX Nifty Fifty in the Bombay Stock Exchange. The rest of this paper is arranged as follows: the definitions of complete market, central limit theorem, first and second order differences, and arithmetic progression are given in Section 2. Section 3 discusses models. Section 4 describes the sample selection and data collection. Section 5 presents the research methods. Section 6 presents the empirical results, and the conclusion of this paper is presented in Section 7.

2. Basic Definitions
2.1 Central limit theorems for independent sequences
The central limit theorem states that, given a sufficiently large sample size drawn from a population with a finite
level of variance, the mean of all samples from the same population will be approximately equal to the mean of the
population. Furthermore, the sample means will follow an approximately normal distribution, with variance
approximately equal to the variance of the population divided by the sample size.




The CLT [6] can be framed in the following form. Let {X1, X2, ..., Xn} be a random sample of size n, that is, a
sequence of independent and identically distributed random variables with expected value μ and variance σ².
Suppose we are interested in the behaviour of the sample average of these random variables:

Sn = (1/n)(X1 + ... + Xn).

Then the central limit theorem asserts that for large n, the distribution of Sn is approximately normal with mean μ
and variance σ²/n. The true strength of the theorem is that Sn approaches normality regardless of the shapes of the
distributions of the individual Xi. Formally, the theorem can be stated as follows:
Lindeberg-Lévy CLT. Suppose {Xi} is a sequence of i.i.d. random variables with E[Xi] = μ and Var[Xi] = σ².
Then, as n approaches infinity, the random variable √n (Sn - μ) converges in distribution to a normal N(0, σ²).

Convergence in distribution means that the cumulative distribution function of √n (Sn - μ) converges
pointwise to the cdf of the N(0, σ²) distribution: for any real number z,

lim (n → ∞) Pr[√n (Sn - μ) ≤ z] = Φ(z/σ),

where Φ(x) is the standard normal cdf.
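As an informal illustration of the statement above, the following minimal Python sketch (added for illustration, not part of the original paper) draws repeated samples from a heavily skewed population and checks that the standardized sample means behave like N(0, σ²):

import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 10000            # sample size and number of repeated samples
mu = 1.0                         # the exponential(1) population has mean 1, variance 1

# Each row is one sample of size n from a skewed (exponential) population.
samples = rng.exponential(scale=1.0, size=(trials, n))
s_n = samples.mean(axis=1)       # the sample averages S_n

# sqrt(n) * (S_n - mu) should be approximately N(0, sigma^2).
z = np.sqrt(n) * (s_n - mu)
print("mean of z:", z.mean())    # close to 0
print("variance of z:", z.var()) # close to sigma^2 = 1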
2.2 Complete Markets

The theory can be traced to the work of Kenneth Arrow (1964), Gérard Debreu (1959), Arrow & Debreu
(1954) and Lionel McKenzie (1954) [23].

A complete market is one in which the complete set of possible gambles on future states-of-the-world can be
constructed with existing assets.

This is a theoretical ideal against which reality can be found more or less wanting. It is a common
assumption in finance or macro models, where the set of states-of-the-world is formally defined.

2.3 First Order Differences [2]

A member of the sequence formed from a given sequence by subtracting each term of the original
sequence from the next succeeding term.

2.4 Second Order Differences [3]

One of the first-order differences of the sequence formed by taking the first-order differences of a given
sequence.

2.5 Arithmetic Progression [1]

An arithmetic progression is a sequence of numbers such that the difference of any two successive members
of the sequence is a constant.
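To make these three definitions concrete, here is a short illustrative Python sketch (the numbers are arbitrary examples, not the paper's data):

import numpy as np

x = np.array([2.0, 5.0, 9.0, 14.0, 20.0])
first = np.diff(x)        # first-order differences: x[k+1] - x[k]
second = np.diff(x, n=2)  # second-order differences
print(first)              # [3. 4. 5. 6.]
print(second)             # [1. 1. 1.]

# An arithmetic progression is exactly a sequence whose first-order
# differences are constant (so its second-order differences are zero).
ap = np.array([7.0, 10.0, 13.0, 16.0])
print(np.allclose(np.diff(ap), np.diff(ap)[0]))  # True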

2.6 Portfolio [26]
In finance, a portfolio is a collection of investments held by an institution or an individual.
Holding a portfolio is a part of an investment and risk-limiting strategy called diversification. By having several
assets, certain types of risk (in particular specific risk) can be reduced. The assets in the portfolio could include bank
accounts, stocks, bonds, options, warrants, gold certificates, real estate, futures contracts, production facilities, or
any other item that is expected to retain its value.
In building up an investment portfolio a financial institution will typically conduct its own investment analysis,
while a private individual may make use of the services of a financial advisor or a financial institution which offers
portfolio management services.
2.7 Portfolio Analysis
Portfolio analysis involves quantifying the operational and financial impact of the portfolio. It is vital for
evaluating the performance of investments and timing the returns effectively.

The analysis of a portfolio extends to all classes of investments such as bonds, equities, indexes, commodities,
funds, options and securities. Portfolio analysis gains importance because each asset class has peculiar risk factors
and returns associated with it. Hence, the composition of a portfolio affects the rate of return of the overall
investment.
3. Description of the Models
A moving average technique is adopted to illustrate the behaviour of these major industries with an
appropriate probability distribution. The Central Limit Theorem [10] is used to show that the average of the sample
means coincides with the population mean and that the sample variance corresponds to the population variance.

4. Sample and Data Collection

The sample for this study consists of 50 industrial firms that were in existence from 2006 to 2011 and have a
complete data set for the required variables; all firms were listed in the CNX Nifty 50. The data source is the
BSE CD-ROM, and the firm list is presented in Appendix (A).

5. Research Methods
CNX Nifty 50 index values are released daily. The daily difference between the high and the low is
calculated for each industry. From these differences, the mean and standard deviation are computed for each
industry, and these mean and standard deviation values are used in the analysis.
The Central Limit Theorem (CLT) is used to verify that the sample mean coincides with the population mean.
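A minimal sketch of this computation in Python follows (illustrative only; the column names "Company", "High" and "Low" are assumptions, since the paper's data come from a BSE CD-ROM):

import pandas as pd

prices = pd.read_csv("nifty50_daily.csv")         # hypothetical input file
prices["Range"] = prices["High"] - prices["Low"]  # daily high-low difference

# One Mean/S.D. row per firm, as tabulated in Appendix (A).
stats = prices.groupby("Company")["Range"].agg(Mean="mean", SD="std")
print(stats)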

6. Results
CLT is used to arrange the fifty industries in the following form:

(x1 + x2 + x3)/3 = μ = 17.9397

(x1 + x2 + x3 + x4 + x5)/5 = μ = 17.9397

(x1 + x2 + ... + x7)/7 = μ = 17.9397

and so on.
Any five successive members of the sequence have a constant difference, forming an arithmetic progression
(refer to Appendix (B)). From the moving averages, the entries in arithmetic progression form a trend that admits
mathematical analysis; a computational sketch follows. The differences are used to estimate the period of the
cyclic variation.
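The moving averages of Appendix (B) can be reproduced with a short sketch (illustrative Python; the input file name is hypothetical):

import numpy as np

def moving_average(x, k):
    # Centered k-term moving average (k odd), as in Appendix (B).
    return np.convolve(x, np.ones(k) / k, mode="valid")

means = np.loadtxt("industry_means.txt")  # the 50 mean values of Appendix (A)
ma3 = moving_average(means, 3)            # the "AP of Three Data" column
ma5 = moving_average(means, 5)            # the "AP of Five Data" column

# If the moving averages form an arithmetic progression, their
# first-order differences are (approximately) constant:
print(np.diff(ma3))
print(np.diff(ma5))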
7. Conclusion
The convergence of the averages of different observations is settled positively by our data analysis. The
analysis was extended to a large number of capital market data. The capital market can be regarded as a complete
market, wherein losses and gains are shared among the participants in the trend. From the CLT, the mean of all
samples follows a distribution with the same mean as the population and with variance derived from the population
variance. We estimate the cyclic variation for period three.
8. Future Analysis
The time series exhibits nonlinear dynamics with both deterministic and probabilistic chaos. We use a
moving average process to obtain the error term, which is called noise.















APPENDIX (A)
S.No Company Name Industry Mean S.D
1 ACC Ltd. CEMENT AND CEMENT PRODUCTS 32.5472 22.2924
2 Ambuja Cements Ltd. CEMENT AND CEMENT PRODUCTS 4.52284 2.61299
3 Axis Bank Ltd. BANKS 32.9655 21.4455
4 Bajaj Auto Ltd. AUTOMOBILES - 2 AND 3 WHEELERS 47.2408 28.5328
5 Bajaj Auto Ltd. ELECTRICAL EQUIPMENT 72.9273 44.3953
6 Bharat Petroleum Corporation Ltd. REFINERIES 18.0138 11.6265
7 Bharti Airtel Ltd. TELECOMMUNICATION - SERVICES 24.5703 22.3240
8 Cairn India Ltd. OIL EXPLORATION/PRODUCTION 9.52526 5.88096
9 Cipla Ltd. PHARMACEUTICALS 8.83241 4.36029
10 DLF Ltd. CONSTRUCTION 23.0626 20.2424
11 Dr. Reddy's Laboratories Ltd. PHARMACEUTICALS 0.50398 0.3075
12 GAIL (India) Ltd. GAS 13.5155 9.61113
13 Grasim Industries Ltd. TEXTILES - SYNTHETIC 80.0031 54.0026
14 HCL Technologies Ltd. COMPUTERS - SOFTWARE 16.2071 12.0773
15 HDFC Bank Ltd. BANKS 47.9952 33.1084
16 Hero Honda Motors Ltd. AUTOMOBILES - 2 AND 3 WHEELERS 39.0834 27.0640
17 Hindalco Industries Ltd. ALUMINIUM 19.95 3.04337
18 Hindustan Unilever Ltd. DIVERSIFIED 6.68560 4.11485
19 Housing Development Finance Corporation Ltd. FINANCE - HOUSING 80.3997 58.7157
20 I T C Ltd. CIGARETTES 6.65581 3.13554
21 ICICI Bank Ltd. BANKS 32.7481 23.2110
22 Infosys Technologies Ltd. COMPUTERS - SOFTWARE 63.6833 36.1730
23 Infrastructure Development Finance Co. Ltd. FINANCIAL INSTITUTION 6.06748 4.35184
24 Jaiprakash Associates Ltd. DIVERSIFIED 14.5467 14.0842
25 Jindal Steel & Power Ltd. STEEL AND STEEL PRODUCTS 68.8947 75.4223
26 Kotak Mahindra Bank Ltd. BANKS 57.3600 51.6986
27 Larsen & Toubro Ltd. ENGINEERING 76.5972 67.1112
28 Mahindra & Mahindra Ltd. AUTOMOBILES - 4 WHEELERS 40.7630 32.8159
29 Maruti Suzuki India Ltd. AUTOMOBILES - 4 WHEELERS 37.6262 22.4008
30 NTPC Ltd. POWER 6.07 4.54527
31 Oil & Natural Gas Corporation Ltd. OIL EXPLORATION/PRODUCTION 37.2953 26.4447
32 Power Grid Corporation of India Ltd. POWER 4.21355 3.47534
33 Punjab National Bank. BANKS 26.0186 15.5092
34 Ranbaxy Laboratories Ltd. PHARMACEUTICALS 15.8792 10.3131
35 Reliance Capital Ltd. FINANCE 45.5939 52.0849
36 Reliance Communications Ltd. TELECOMMUNICATION - SERVICES 16.3940 13.7385
37 Reliance Industries Ltd. REFINERIES 53.8364 50.8435
38 Reliance Infrastructure Ltd. POWER 48.0519 50.0076
39 Reliance Power Ltd. POWER 7.58598 8.84441
40 Sesa Goa Ltd. MINING 58.8583 91.4801
41 Siemens Ltd. ELECTRICAL EQUIPMENT 53.9357 97.0537
42 State Bank of India. BANKS 59.1797 39.1266
43 Steel Authority of India Ltd. STEEL AND STEEL PRODUCTS 7.10131 5.23662
44 Sterlite Industries (India) Ltd. METALS 10.9172 30.0958
45 Sun Pharmaceutical Industries Ltd. PHARMACEUTICALS 41.648 30.0796
46 Tata Consultancy Services Ltd. COMPUTERS - SOFTWARE 33.7399 22.5126
47 Tata Motors Ltd. AUTOMOBILES - 4 WHEELERS 0.48649 0.33458
48 Tata Power Co. Ltd. POWER 41.1671 32.2520
49 Tata Steel Ltd. STEEL AND STEEL PRODUCTS 23.9908 19.2535
50 Tata Power Co. Ltd. COMPUTERS - SOFTWARE 18.7492 12.4198



APPENDIX (B)
Arithmetic Progression of fifty Industries
S.No Mean value AP of Three Data AP of Five Data
1 24.57031
2 32.96552
3 37.6262 31.72068
4 23.99081 31.52751
5 37.2953 32.97077 32.07299
6 39.08344 33.45652 32.41887
7 23.0626 33.14711 32.56452
8 33.73995 31.962 32.61278
9 41.16711 32.65655 32.83859
10 48.05193 32.68382 32.7812
11 8.832416 32.68382 32.62666
12 41.648 32.84412 32.56606
13 47.99525 32.82522 32.73871
14 0.48649 30.04325 32.21604
15 47.24081 31.90752 32.06078
16 45.5936 31.10697 31.74541
17 4.522879 32.45243 31.66708
18 40.76305 30.29318 31.16067
19 53.83674 33.04089 31.7602
20 4.213556 32.93778 31.96625
21 32.76305 30.27112 31.79908
22 57.36004 31.44555 31.5977
23 6.067481 32.06352 31.95177
24 32.5472 31.99157 31.74191
25 58.85835 32.49101 31.65255
26 6.685609 32.69705 32.13774
27 26.01868 30.52088 31.95281
28 59.17973 30.62801 31.6657
29 14.54675 33.24839 31.91707
30 18.74922 30.82523 31.58391
31 63.68332 32.32643 31.50979
32 14.54675 32.32643 31.8709
33 19.95 32.72669 32.29063
34 72.92739 35.80805 32.80257
35 7.101319 33.32624 33.30277
36 18.01388 32.68086 33.37365
37 68.89479 31.33666 33.1757
38 10.91725 32.60864 33.15209
39 13.51552 31.10919 32.21232
40 53.9357 26.12282 30.77164
41 15.87922 27.77681 29.79083
42 16.39403 28.73632 29.27076
43 80.0031 37.42545 30.23412
44 6.07 34.15571 30.84342
45 7.585989 31.2197 31.8628
46 76.5973 30.08443 32.32432
47 9.52529 31.23619 32.8243
48 16.20717 34.10992 32.16119
49 80.39972 35.37739 32.40553
50 6.655814 34.4209 33.04577

APPENDIX (C)
TABLE OF DIFFERENCES
Mean  1st differences  2nd differences  3rd differences  4th differences  5th differences  6th differences
0.48649
0.503982 0.017493
4.213556 3.709574 3.692082
4.522847 0.309291 -3.40028 -7.09236
6.067481 1.544634 1.235343 4.635626 11.72799
6.07 0.002519 -1.54212 -2.77746 -7.41308 -19.1411
6.655814 0.585814 0.583295 2.12541 4.902868 12.31595 31.45703
6.685608 0.029794 -0.55602 -1.13932 -3.26473 -8.16759 -20.4835
7.101319 0.415711 0.385917 0.941937 2.081252 5.345977 13.51357
7.585989 0.48467 0.068959 -0.31696 -1.2589 -3.34015 -8.68612
8.832416 1.246427 0.761757 0.692798 1.009756 2.268651 5.608798
9.52526 0.692844 -0.55358 -1.31534 -2.00814 -3.01789 -5.28655
10.91725 1.39199 0.699146 1.252729 2.568069 4.576207 7.594101
13.51552 2.59827 1.20628 0.507134 -0.7456 -3.31366 -7.88987
14.54675 1.03123 -1.56704 -2.77332 -3.28045 -2.53486 0.778805
15.87922 1.33247 0.30124 1.86828 4.6416 7.922054 10.45691
16.20717 0.32795 -1.00452 -1.30576 -3.17404 -7.81564 -15.7377
16.39403 0.18686 -0.14109 0.86343 2.16919 5.34323 13.15887
18.01388 1.61985 1.43299 1.57408 0.71065 -1.45854 -6.80177
18.74922 0.735342 -0.88451 -2.3175 -3.89158 -4.60223 -3.14369
19.95 1.200778 0.465436 1.349944 3.667442 7.55902 12.16125
23.06262 3.11262 1.911842 1.446406 0.096462 -3.57098 -11.13
23.99081 0.92819 -2.18443 -4.09627 -5.54268 -5.63914 -2.06816
24.57031 0.5795 -0.34869 1.83574 5.932012 11.47469 17.11383
26.01868 1.44837 0.86887 1.21756 -0.61818 -6.55019 -18.0249
32.5472 6.52852 5.08015 4.21128 2.99372 3.6119 10.16209
32.74818 0.20098 -6.32754 -11.4077 -15.619 -18.6127 -22.2246
32.96552 0.21734 0.01636 6.3439 17.75159 33.37056 51.98325
33.73995 0.77443 0.55709 0.54073 -5.80317 -23.5548 -56.9253
37.2953 3.55535 2.78092 2.22383 1.6831 7.48627 31.04103
37.6262 0.3309 -3.22445 -6.00537 -8.2292 -9.9123 -17.3986
39.08344 1.45724 1.12634 4.35079 10.35616 18.58536 28.49766
40.76304 1.6796 0.22236 -0.90398 -5.25477 -15.6109 -34.1963
41.16711 0.40407 1.6796 1.45724 2.36122 7.61599 23.22692
41.648 0.48089 0.07682 -1.60278 -3.06002 -5.42124 -13.0372
45.5939 3.9459 3.46501 3.38819 4.99097 8.05099 13.47223
47.24081 1.64691 -2.29899 -5.764 -9.15219 -14.1432 -22.1942
47.99525 0.75444 -0.89247 1.40652 7.17052 16.32271 30.46587
48.05193 0.05668 -0.69776 0.19471 -1.21181 -8.38233 -24.705
53.83647 5.78454 5.72786 6.42562 6.23091 7.44272 15.82505
53.9357 0.09923 -5.68531 -11.4132 -17.8388 -24.0697 -31.5124
57.36004 3.42434 3.32511 9.01042 20.42359 38.26238 62.33208
58.85835 1.49831 -1.92603 -5.25114 -14.2616 -34.6852 -72.9475
59.17973 0.32138 -1.17693 0.7491 6.00024 20.2618 54.94695
63.68336 4.50363 4.18225 5.35918 4.61008 -1.39016 -21.652
68.89479 5.21143 0.7078 -3.47445 -8.83363 -13.4437 -12.0536
72.92739 4.0326 -1.17883 -1.88663 1.58782 10.42145 23.86516
76.59729 3.6699 -0.3627 0.81613 2.70276 1.11494 -9.30651
80.0031 3.40581 -0.26409 0.09861 -0.71752 -3.42028 -4.53522
80.39972 0.39662 -3.00919 -2.7451 -2.84371 -2.12619 1.29409

References:
[1] William F. Ames, Numerical Methods for Partial Differential Equations, Section 1.6. Academic Press,
New York, 1977. ISBN 0-12-056760-1.
[2] Francis B. Hildebrand, Finite-Difference Equations and Simulations, Section 2.2, Prentice-Hall, Englewood
Cliffs, New Jersey, 1968.
[3] Boole, George, A Treatise on the Calculus of Finite Differences, 2nd ed., Macmillan and Company, 1872.
[See also: Dover edition 1960].
[4] Levy, H.; Lessman, F. (1992). Finite Difference Equations. Dover. ISBN 0-486-67260-3.
[5] Robert D. Richtmyer and K. W. Morton, Difference Methods for Initial-Value Problems, 2nd ed., Wiley, New
York, 1967.
[6] Barany, Imre; Vu, Van (2007), "Central limit theorems for Gaussian polytopes", The Annals of Probability
(Institute of Mathematical Statistics) 35 (4): 1593-1621, doi:10.1214/009117906000000791.
[7] S. N. Bernstein, On the work of P. L. Chebyshev in Probability Theory, Nauchnoe Nasledie P. L. Chebysheva.
Vypusk Pervyi: Matematika. (Russian) [The Scientific Legacy of P. L. Chebyshev. First Part:
Mathematics.] Edited by S. N. Bernstein. Academiya Nauk SSSR, Moscow-Leningrad, 1945. 174 pp.
[8] Billingsley, Patrick (1995), Probability and Measure (Third ed.), John Wiley & Sons, ISBN 0-471-00710-2.
[9] Bradley, Richard (2007), Introduction to Strong Mixing Conditions (First ed.), Heber City, UT: Kendrick
Press, ISBN 097404279X.
[10] Dinov, Ivo; Christou, Nicolas; Sanchez, Juana (2008), "Central Limit Theorem: New SOCR Applet and
Demonstration Activity", Journal of Statistics Education (ASA) 16 (2). Also at ASA/JSE.
[11] Durrett, Richard (1996), Probability: Theory and Examples (Second ed.).
[12] Fischer, H. (2010), A History of the Central Limit Theorem: From Classical to Modern Probability Theory,
Springer. ISBN 0387878564.
[13] Gaposhkin, V. F. (1966), "Lacunary series and independent functions", Russian Math. Surveys 21 (6): 1-82,
doi:10.1070/RM1966v021n06ABEH001196.
[14] Klartag, Bo'az (2007), "A central limit theorem for convex sets", Inventiones Mathematicae 168: 91-131,
doi:10.1007/s00222-006-0028-8. Also arXiv.
[15] Klartag, Bo'az (2008), "A Berry-Esseen type inequality for convex bodies with an unconditional basis",
Probability Theory and Related Fields, doi:10.1007/s00440-008-0158-6. Also arXiv.
[16] Le Cam, Lucien (1986), "The central limit theorem around 1935", Statistical Science 1 (1): 78-91.
[17] Meckes, Elizabeth (2008), "Linear functions on the classical matrix groups", Transactions of the American
Mathematical Society 360: 5355-5366, doi:10.1090/S0002-9947-08-04444-9. Also arXiv.
[18] Rempala, G. and J. Wesolowski (2002), "Asymptotics of products of sums and U-statistics", Electronic
Communications in Probability, 7, 47-54.
[19] Rice, John (1995), Mathematical Statistics and Data Analysis (Second ed.), Duxbury Press, ISBN 0-534-
20934-3.
[20] Tijms, Henk (2004), Understanding Probability: Chance Rules in Everyday Life, Cambridge: Cambridge
University Press. ISBN 0521540364.
[21] Zygmund, Antoni (1959), Trigonometric Series, Volume II, Cambridge. (2003 combined volumes I, II: ISBN
0521890535.)
[22] Endo, Tadashi (1998), The Indian Securities Market: A Guide for Foreign and Domestic Investors. Vision
Books, India.








Digital watermarking using DCT and reversible method
Mithlesh Sigaroli (1), Prof. Anurag Jain (2)

(1) Computer Science Department, Radharaman Institute of Science & Technology, Bhopal
(2) Computer Science Department, Radharaman Institute of Science & Technology, Bhopal
Patelmithlesh219@gmail.com
Anurag.akjain@gmail.com

Abstract
A robust, computationally efficient and blind digital image watermarking in spatial domain has been discussed in
this paper. Embedded watermark is meaningful and recognizable and recovery process needs only one secret image.
Watermark insertion process exploits average brightness of the homogeneity regions of the cover image. Spatial
mask of suitable size is used to hide data with less visual impairment. Experimental results show the resiliency of the
proposed scheme against large blurring attacks such as mean and Gaussian filtering, nonlinear filtering such as
median filtering, image rescaling, symmetric image cropping, lower-order bit manipulation of gray values, and lossy
data compression such as JPEG with high compression ratios and low PSNR values. Almost as discreetly as the
technology itself, digital
watermarking has recently made its debut on the geo-imaging stage. This innovative technology is proving to be a
cost-effective means of deterring copyright theft of mapping data and of ensuring the authenticity and integrity of
rasterized image data. First developed around six years ago, digital watermarking is a sophisticated modern
incarnation of steganography, the science of concealing information within other information. In the field of e-
commerce, digital watermarking has already established itself as an effective deterrent against copyright theft of
photographs and illustrations. Now digital watermarking software is finding uses within national mapping agencies
and others working with rasterized images or map data. Current applications range from protecting valuable map
data against copyright theft to securing photographic survey or reconnaissance images against tampering.
















1. Introduction
In recent times, the rapid and extensive growth of Internet technology has created a pressing need to develop
several newer techniques to protect the copyright, ownership and content integrity of digital media. This necessity
arises because the digital representation of media possesses the inherent advantages of portability, efficiency and
accuracy of information content on the one hand, but on the other hand also poses a serious threat of easy, accurate
and illegal perfect copies in unlimited number. Unfortunately, the currently available formats for image, audio and
video in digital form do not allow any type of copyright protection. A potential solution to this kind of problem is an
electronic stamp or digital watermark, which is intended to complement cryptographic processes [1].

The technology
Digital watermarking, an extension of steganography, is a promising solution for content copyright
protection in the global network. It imposes extra robustness on embedded information. To put into words, digital
watermarking is the art and science of embedding copyright information in the original files. The information
embedded is called watermarks. Digital watermarks dont leave a noticeable mark on the content and dont affect
its appearance. These are imperceptible and can be detected only by proper authorities. Digital watermarks are
difficult to remove without noticeably degrading the content and are a covert means in situations where
cryptography fails to provide robustness. The content is watermarked by converting copyright information into
random digital noise using a special algorithm that is perceptible only to the content creator. Digital watermarks can
be read only by using the appropriate reading software. These are resistant to filtering and stay with the content as
long as originally purposely degraded. The content is watermarked by converting copyright information into random
digital noise using a special algorithm that is perceptible only to the content creator. Digital watermarks can be read
only by using the appropriate reading software. These are resistant to filtering and stay with the content as long as
originally purposely degraded. While the later technique facilitates access of the encrypted data only for valid key
holders but fails to track any reproduction or retransmission of data after decryption. On the other hand, in digital
watermarking, an identification code (symbol) is embedded permanently inside a cover image which remains within
that cover invisibly even after decryption process. This requirement of watermarking technique, in general, needs to
possess the following characteristics:
(a) imperceptibility of the hidden information;
(b) redundancy in the distribution of the hidden information inside the cover image, so that watermark extraction
remains robust even from a truncated (cropped) image; and
(c) one or more keys to achieve cryptographic security of the hidden content [2].
Besides these general properties, an ideal watermarking system should also be resilient to
insertion of additional watermarks to retain the rightful ownership. The perceptually invisible data hiding needs
insertion of watermark in higher spatial frequency of the cover image since human eye is less sensitive to this
frequency component. But in most of the natural images majority of visual information are concentrated on the
lower end of the frequency band. So the information hidden in the higher frequency components might be lost after
quantization operation of lossy compression [3]. This motivates researchers in recent times to realize the importance
of perceptual modeling of human visual system and the need to embed a signal in perceptually significant regions of

an image, especially if the watermark is to survive lossy compression [4]. In a spatial domain block-based approach,
this perceptually significant region is synonymous with the low-variance blocks of the cover image. The watermark
recovery process requires neither the cover/watermarked image nor the watermark symbol; only the secret image is
needed. The paper is organized as follows: Section 2 describes the watermarking principles. Section 3 describes
insertion and extraction of the watermark. Results are presented in Section 4, with the conclusion in Section 5.

2 Watermarking principles
All watermarking methods share the same building blocks [3]: an embedding system and a watermark extraction
or recovery system. Any generic embedding system should have as inputs: the cover (data/image)/hiding medium (I),
the watermark symbol (w) (image/text/number) and a key (k) to enforce security. The output of the embedding
process is always the watermarked data/image. The generic watermark recovery process needs the watermarked
data, the secret or public key and, depending on the method, the original data and/or the original watermark as
inputs, while the output is the recovered watermark W with some kind of confidence measure for the given
watermark symbol, or an indication of the presence of the watermark in the cover document under inspection.
Depending on the combination of inputs and outputs, three types of watermarking system, namely private,
semi-private and public, can be defined [2].

3 Insertion and Extraction of watermark
The cover image I is a gray-level image of size N×N, and the digital watermark (logo) W is a two-level
image of size M×M. Regarding the values of p and n, (p/n) should be of the order of 4. In the proposed work, a
binary image of size (16×16) is considered as the watermark and an 8-bit gray image as the cover image.
Insertion of Watermark
In the present work, a block-based spatial domain algorithm is used to hide the copyright mark (invisible logo) in
the homogeneous regions of the cover image, exploiting average brightness.
Step 1
The cover image is partitioned into non-overlapping square blocks of size (8×8) pixels. A block is denoted by the
location of its starting pixel (x, y). If the cover image is of size (N×N), a total of (N/8 × N/8) such blocks are
obtained for watermark insertion. Next, all such blocks are arranged in ascending order of their variance values.
The variance σ² of a block of size (m × n) is given by

σ² = (1/(m·n)) Σ (x = 0 to m-1) Σ (y = 0 to n-1) [I(x, y) - u]²   (1)

where

u = (1/(m·n)) Σ (x = 0 to m-1) Σ (y = 0 to n-1) I(x, y)   (2)

is the statistical average value of the block.
A two-level map of size (N/8 × N/8) is constructed based on the locations of the homogeneous blocks in the cover
image, assigning each homogeneous block the value 1 and all other blocks the value 0. This two-level map, later
modified into a multi-level image called the secret image (s), is used for the extraction of the watermark pixels.
The formation of the multi-level image from the two-level map is described in Step 3.
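A minimal sketch of this block-selection step (illustrative Python, not the authors' implementation; the homogeneity threshold is an assumption):

import numpy as np

def block_variances(cover, b=8):
    # Partition an N x N gray image into b x b blocks and return each
    # block's variance, as in Eq. (1).
    n = cover.shape[0] // b
    blocks = cover[:n*b, :n*b].reshape(n, b, n, b).swapaxes(1, 2)
    return blocks.var(axis=(2, 3))

cover = np.random.default_rng(1).integers(0, 256, (256, 256)).astype(float)
var_map = block_variances(cover)

# Two-level map: 1 for the most homogeneous (lowest-variance) blocks, 0 otherwise.
threshold = np.quantile(var_map, 0.25)    # assumed homogeneity criterion
two_level_map = (var_map <= threshold).astype(np.uint8)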
Step 2
In the proposed scheme, one watermark pixel is inserted in each homogeneous block. Before insertion, the binary
watermark is spatially dispersed using a chaotic system called the torus automorphism. Basically, the torus
automorphism is a kind of image-independent permutation performed using a pseudo-random number of suitable
length. This pseudo-random number is generated using a Linear Feedback Shift Register. The pseudo-random
number in the present case is of length 256, and the spatially dispersed watermark data thus obtained is denoted by L1.
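The following sketch illustrates such key-driven spatial dispersion (a simplified Python stand-in: a seeded pseudo-random permutation replaces the paper's LFSR-driven torus automorphism):

import numpy as np

def disperse(watermark, key):
    # Key-driven permutation of the 16 x 16 binary watermark (256 bits).
    rng = np.random.default_rng(key)          # stand-in for the LFSR sequence
    perm = rng.permutation(watermark.size)
    return watermark.ravel()[perm], perm

def recover(dispersed, perm, shape=(16, 16)):
    out = np.empty_like(dispersed)
    out[perm] = dispersed                     # invert the permutation
    return out.reshape(shape)

w = np.random.default_rng(2).integers(0, 2, (16, 16))
l1, perm = disperse(w, key=12345)
assert (recover(l1, perm) == w).all()         # dispersion is exactly reversible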
Step 3
From the two-level map formed in Step 1, the desired blocks of the cover image are selected, and the statistical
average values of these blocks are used for watermark insertion. For one such block, let this average value and its
integer part be denoted by A' and A = int(A'), respectively. Now one pixel from L1 replaces a particular bit
(preferably in the Least Significant Bit planes) of the bit-plane representation of A for each homogeneous block.
The selection of the particular bit in the bit-plane representation may be determined from the characteristics
(busyness/smoothness of regions) of the block. The bit-plane selection is also governed by global characteristics of
the cover image, such as the mean gray value, besides the local properties of the candidate block.
Step 4
The choice of a lower-order MSB plane (say the 3rd or higher from the bottom plane) may result in more robust
watermarking at the cost of greater visual distortion of the cover image. Further bit manipulation is done to
minimize this aberration and to counter the effect of smoothing, which may cause loss of the embedded
information. The process effectively changes the mean gray values of the blocks that have been used in watermark
insertion. Implementation is done by estimating the tendency of the mean gray value to change after an attack such
as mean filtering. A larger spatial mask, such as 7 x 7, is used to suitably adjust the gray values of all pixels of
the block. The use of the spatial mask reduces visual distortion by about fifty percent on average.
Watermark Extraction
The extraction of the watermark requires the secret image (s) and the key (k) used for the spatial dispersion of the
watermark image. The watermarked image under inspection, with or without external attacks, is partitioned into
non-overlapping blocks of size 8×8 pixels. The spatially dispersed watermark image thus obtained is once again
permuted using the
same key (k) (pseudo-random number), and the watermark in its original form is thus obtained. This completes the
watermark extraction process. A quantitative estimate of the quality of the extracted watermark image W'(x, y)
with reference to the original watermark W(x, y) may be expressed as the normalized cross-correlation (NCC):

NCC = Σx Σy W(x, y) · W'(x, y) / Σx Σy [W(x, y)]²

which attains its maximum value of unity for a perfect extraction.
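In code, the NCC measure reads as follows (a direct illustrative transcription of the formula above):

import numpy as np

def ncc(original, extracted):
    # Normalized cross-correlation between original and extracted watermarks.
    original = original.astype(float)
    extracted = extracted.astype(float)
    return (original * extracted).sum() / (original ** 2).sum()

w = np.random.default_rng(3).integers(0, 2, (16, 16))
print(ncc(w, w))  # 1.0 for a perfect extraction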


Results
Figure 3 shows the Fishing Boat image used as the cover image, and Figure 4 is the watermarked image using the
logo/hidden symbol M shown in Figure 11. The Peak Signal-to-Noise Ratio (PSNR) of the watermarked image with
respect to the original image is about 42.40 dB, so the quality degradation can hardly be perceived by the human
eye. Robustness against different attacks is shown in Tables 1 and 2 for five other test images, namely the Bear,
New York, Lena, Opera and Pills images shown in Figures 18, 19, 20, 21 and 22 respectively [6, 7].
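PSNR, quoted throughout these results, can be computed with the standard definition (illustrative Python, not tied to the authors' code):

import numpy as np

def psnr(original, distorted, peak=255.0):
    # Peak Signal-to-Noise Ratio in dB between two gray images.
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)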
Mean Filtering
Figure 12 shows the watermark extracted (NCC = 0.80) from the blurred version of the watermarked image,
obtained after mean filtering with a 5x5 mask. The PSNR value of this watermarked image is 23.80 dB, and it is
shown in Figure 5.
Gaussian filtering
Watermarked image (PSNR=24.15dB) after two times Gaussian filtering with variance 1 (window size 9x9 ) is
shown in Figure 6. Figure 13 shows the extracted watermark with NCC=0.88.
Median Filtering
Watermarked image (PSNR=25.22 dB) obtained after five times median filtering using a mask of size 3x3 is
shown in Figure 7. Figure 14 shows extracted watermark image (NCC=0.94).
Image Rescaling
The watermarked image was scaled to one half of its original size and up sampled to its original dimensions. Figure
8 shows the modified image (PSNR=24.85 dB) with many details lost. Extracted watermark (with NCC=0.87) is
shown in Figure 15.
JPEG Compression
Figure 16 shows the extracted watermark with NCC=0.958 from the watermarked image (PSNR=18.73 dB) as
shown in Figure 9 obtained after JPEG compression with compression ratio 45.0. As compression ratio increases
NCC value of the extracted watermark decreases and the quality of the watermark will also decrease accordingly.
Least Significant Bits manipulation
Two Least Significant bit(s) for all pixels (or randomly selected pixels) of the watermarked image are
complemented, and the modified image with PSNR = 40.94 dB is shown in Figure 10. The extracted watermark
with NCC = 0.88 is shown in Figure 17.
Image Cropping Operation
Robustness of the proposed method against different types of image cropping operations that may be performed (as
deliberate external attacks) on the watermarked image has been tested. In all cases the extracted watermark,
although interfered with by noise to different degrees, is still recognizable. Experimental results show that the
extracted watermark will not be as good in visual quality if the watermark pixels are inserted, even in the desired
portion of the cover image, in a sequential manner rather than in the pseudo-random fashion obtained by chaotic
mixing.



Figures 3-12: fishing boat cover image; watermarked image; watermarked image after mean filtering; after two
Gaussian filterings; after five median filterings; after rescaling; after JPEG compression; after LSB manipulation;
the watermark logo; and the watermark extracted from Figure 5.


Conclusion

The proposed technique provides robust and blind digital image watermarking in the spatial domain,
which is computationally efficient. The embedded watermark is meaningful and recognizable rather than a sequence
of normally distributed real numbers or a pseudo-noise sequence. The proposed technique has been tested over a
large number of benchmark images, as suggested by the watermarking community, and the results of robustness to
different signal processing operations are found to be satisfactory. Currently, investigation is being carried out to
insert the same watermark symbol in other regions of the cover image as well, to make the present scheme more
resilient to other types of external attacks. Further research should be carried out in spatial domain watermarking to
exploit other higher-order factors such as size, shape, color, location and foreground/background [5] of the cover
image, to generate watermarked images with less visible impairment along with robustness against other types of
external attacks such as image flips and image rotations.

BIBLIOGRAPHY:
[1] R. Anderson. Information Hiding. Proceedings of the First Workshop on Information Hiding, LNCS-1174,
Springer Verlag, New York, 1996.
[2] S. Katzenbeisser and F. A. P. Petitcolas. Information Hiding Techniques for Steganography and Digital
Watermarking. Artech House, Boston, MA, 2000.
[3] Chiou-Ting Hsu and Ja-Ling Wu. Hidden Digital Watermarks in Images. IEEE Transactions on Image
Processing, 8, pp. 58-68, 1999.
[4] I. J. Cox, J. Kilian, T. Leighton and T. Shamoon. Secure Spread Spectrum Watermarking for Multimedia.
IEEE Transactions on Image Processing, 6, pp. 1673-1687, 1997.
[5] S. Pereira, S. Voloshynovskiy and T. Pun. Optimal Transform Domain Watermark Embedding via Linear
Programming. Signal Processing, 81, pp. 1251-1260, 2001.
[6] http://www.cl.cam.ac.uk/~fapp2/watermarking
[7] http://sipi.usc.edu/services/database/


















SPEED CONTROL OF ELECTRIC DRIVES USING SOFT COMPUTING
TECHNIQUES
Rahul Malhotra and Saloni Gupta
Deptt. of Electronics & Comm. Engg., Bhai Maha Singh College of Engg, Muktsar (Pb.)
Deptt. of Electronics & Comm. Engg., Adesh Institute of Engg. & Techn., Faridkot (Pb.)
blessurahul@gmail.com, er.salonigupta@gmail.com

Abstract
The speed control of electric drives is a challenging engineering problem. The Direct Torque Control (DTC)
method is characterized by its simple implementation and a fast dynamic response. Soft computing techniques, viz.
fuzzy control, provide a way of controlling a system without the need for a mathematical model, using the
experience of people's knowledge to form the control rule base. There are two approaches to controlling the speed
of electrical drives. The traditional approach uses hardware, relying on knowledge of the process and tuning of the
controller, whereas the dynamic-model approach is a soft computing technique, usually carried out by computer
simulation before moving to the control hardware. This paper presents the implementation of intelligent ways to
control the speed of the synchronous motor and the induction motor. Direct torque control of an induction motor
and of a synchronous motor was implemented in MATLAB/SIMULINK. SIMULINK is software for modeling,
simulating and analyzing dynamical systems. It supports linear and non-linear systems, modeled in continuous
time, sampled time or a hybrid of the two.
Keywords: DTC, Induction motor, speed, torque, flux























1. Introduction
Advanced control of electrical machines requires an independent control of magnetic flux and torque. For that
reason it was not surprising, that the DC-machine played an important role in the early days of high performance
electrical drive systems, since the magnetic flux and torque are easily controlled by the stator and rotor current,
respectively. The introduction of Field Oriented Control meant a huge turn in the field of electrical drives, since with
this type of control the robust induction machine can be controlled with a high performance.
Synchronous motors are now used in a wide variety of industrial applications. A majority of these motors are
constructed with the permanent magnets mounted on the periphery of the rotor core. When the permanent magnets
are instead buried inside the rotor core rather than bonded on the rotor surface, the motor provides mechanical
ruggedness, and the possibility of increasing its torque capability also arises. By designing a rotor magnetic circuit
such that the
reaction torque of synchronous motors. The type of Interior Permanent Magnet (IPM) synchronous motors can be
considered as the reluctance synchronous motor and the Permanent Magnet synchronous motor combined in one
unit. It is now widely used in industrial as well as military applications because it provides high power density and
high efficiency compared to other types of motors.
When the voltage applied to most DC motors is varied, their running speed increases or decreases. The
major drawback of DC motors is that they require a switching process. AC motors, however, do not require a
switching process, but their main drawback is that their speed cannot be adjusted easily, because of the close
relation between the frequency of the line voltage and the speed of the AC motor. The other advantage of AC
motors is that they do not require a rectified DC supply.

2. Induction Motor Direct Torque Control
As noted in the introduction, high-performance control of electrical machines requires independent control of
magnetic flux and torque. Later, in the eighties, a new control method for induction machines was introduced: the
Direct Torque Control (DTC) method, characterized by its simple implementation and a fast dynamic response.
Furthermore, the inverter is directly controlled by the algorithm, i.e. a modulation technique for the inverter is not
needed. However, if the control is implemented on a digital system (which can be considered standard nowadays),
the actual values of flux and torque could cross their boundaries too far, since the method is based on independent
hysteresis control of flux and torque. The main advantages of DTC are the absence of coordinate transformations
and current regulators, and the absence of a separate voltage modulation block. Common disadvantages of
conventional DTC are high torque ripple and slow transient response to step changes in torque during start-up.
These are the disadvantages we want to remove by using and implementing modern resources of artificial
intelligence such as neural networks, fuzzy logic and genetic algorithms. In the following, we describe the
application of fuzzy logic in DTFC control. Fuzzy control is a way of controlling a system without needing to know
the plant's mathematical model. It uses the experience of people's knowledge to form its control rule base. Many
applications of fuzzy control to power electronics and motion control have appeared in the past few years.

Figure 1: Induction motor cross section
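To make the hysteresis mechanism described above concrete, here is a much-simplified sketch (illustrative Python, not the paper's controller): two hysteresis comparators for flux and torque index into a six-sector switching table to select the inverter voltage vector. The table follows the classical Takahashi-style layout, and the hysteresis bands are assumed values.

FLUX_UP, FLUX_DOWN = 1, 0
TORQUE_UP, TORQUE_ZERO, TORQUE_DOWN = 2, 1, 0

# SWITCH_TABLE[flux][torque] -> voltage-vector number for sectors 1..6
SWITCH_TABLE = {
    FLUX_UP:   {TORQUE_UP:   [2, 3, 4, 5, 6, 1],
                TORQUE_ZERO: [7, 0, 7, 0, 7, 0],
                TORQUE_DOWN: [6, 1, 2, 3, 4, 5]},
    FLUX_DOWN: {TORQUE_UP:   [3, 4, 5, 6, 1, 2],
                TORQUE_ZERO: [0, 7, 0, 7, 0, 7],
                TORQUE_DOWN: [5, 6, 1, 2, 3, 4]},
}

def flux_hysteresis(error, band, previous):
    # Two-level comparator with memory of its previous output.
    if error > band:
        return FLUX_UP
    if error < -band:
        return FLUX_DOWN
    return previous

def select_vector(flux_err, torque_err, sector, prev_flux_state):
    f = flux_hysteresis(flux_err, 0.01, prev_flux_state)   # flux band (assumed)
    if torque_err > 0.05:                                  # torque band (assumed)
        t = TORQUE_UP
    elif torque_err < -0.05:
        t = TORQUE_DOWN
    else:
        t = TORQUE_ZERO
    return SWITCH_TABLE[f][t][sector - 1], f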


A fuzzy logic controller has been reported in use with DTC. However, the problem arises that it uses too many
rules, which affects the speed of the fuzzy reasoning. In this work an approach to improve the direct torque control
(DTC) of an induction motor (IM) is proposed. The proposed DTC is based on a fuzzy logic switching table, and
the simulations are performed using SIMULINK. Direct Torque Control (DTC) is a method that has emerged as
one possible alternative to the well-known Vector Control of induction motors. This method provides good
performance with a simpler structure and control diagram. In DTC it is possible to control the stator flux and the
torque directly by selecting the appropriate VSI state. The main advantages offered by DTC are:
- Decoupled control of torque and stator flux.
- Excellent torque dynamics with minimal response time.
- Inherently motion-sensorless control, since the motor speed is not required to achieve torque control.
- Absence of coordinate transformations (required in Field Oriented Control (FOC)).
- Absence of a voltage modulator, as well as of other controllers such as PID and current controllers (used in FOC).
- Robustness to rotor parameter variation; only the stator resistance is needed for the torque and stator flux
estimator.
These merits are counterbalanced by some drawbacks:
- Possible problems during starting, during low-speed operation and during changes in torque command.
- Requirement of torque and flux estimators, implying the consequent parameter identification (the same as for
other vector controls).
- Variable switching frequency caused by the hysteresis controllers employed.
- Inherent torque and stator flux ripples.
- Flux and current distortion caused by sector changes of the flux position.
- Higher harmonic distortion of the stator voltage and current waveforms compared to other methods such as FOC.
- Acoustical noise produced by the variable switching frequency; this noise can be particularly high at low-speed
operation.
A variety of techniques have been proposed to overcome some of the drawbacks present in DTC. Some
solutions proposed are: DTC with Space Vector Modulation (SVM); the use of a duty-ratio controller to introduce a
modulation between active vectors chosen from the look-up table and the zero vectors; use of artificial intelligence
techniques, such as Neuro-Fuzzy controllers with SVM. These methods achieve some improvements such as torque
ripple reduction and fixed switching frequency operation. However, the complexity of the control is considerably
increased.
A different approach to improve DTC features is to employ different converter topologies from the standard
two-level VSI. Some authors have presented different implementations of DTC for the three-level Neutral Point
Clamped (NPC) VSI. This work will present a new control scheme based on DTC designed to be applied to an
Induction Motor fed with a three-level VSI. The major advantage of the three-level VSI topology when applied to
DTC is the increase in the number of voltage vectors available. This means the number of possibilities in the vector
selection process is greatly increased and may lead to a more accurate control system, which may result in a
reduction in the torque and flux ripples. This is, of course, achieved at the expense of an increase in the complexity
of the vector selection process.
In a DTC induction motor drive there are torque and flux ripples because none of the VSI states is able to
generate the exact voltage value required to make both the electromagnetic torque error and the stator flux error
zero. The suggested technique is based on applying the selected active states to the inverter just long enough to
achieve the torque and flux reference values. A null state is selected for the remaining switching period, which
hardly changes either the torque or the flux.
Therefore, a duty ratio (δ) has to be determined for each switching period. By varying the duty ratio
between its extreme values (0 up to 1), it is possible to apply any voltage to the motor. This technique is therefore
based on a two-state modulation, the two states being an active one and a null one. The optimum duty ratio per
sampling period is a non-linear function of the electromagnetic torque error, the stator flux position and the working
point, which is determined by the motor speed and the electromagnetic torque. It is obviously extremely difficult to
model such an expression, since it is a different non-linear function at each working point. Thus, it is believed that
with a Fuzzy Logic based DTC system it is possible to build a Fuzzy Logic based duty-ratio controller, where the
optimum duty ratio is determined every switching period.
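The two-state modulation itself is easy to state in code. In the sketch below (illustrative Python), the saturating proportional law is an assumed stand-in for the fuzzy duty-ratio controller described in the text:

def duty_ratio(torque_err, k=2.0):
    # Assumed stand-in for the fuzzy controller: a saturating proportional
    # law mapping the torque error to delta in [0, 1].
    return min(1.0, max(0.0, k * torque_err))

def average_voltage(v_active, delta):
    # Two-state modulation: the active vector is applied for a fraction
    # delta of the switching period, the null vector for the rest.
    return delta * v_active

print(average_voltage(400.0, duty_ratio(0.3)))  # 240.0 V average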

The suggested Fuzzy Logic system is divided into two different Fuzzy Logic controllers. The first one acts
each time the selected active VSI state changes, being different from the previous one. The second controller acts
in the opposite situation, i.e. when the selected active VSI state is the same as the previous one. These Fuzzy Logic
controllers and their functionality are explained in the following section.
Both fuzzy logic controllers use the centroid defuzzification method. The relation between different conditions
in the same rule is formed by means of the "and" operator, while the relationship between different rules is formed
by means of the "or" operator.

3. Improvements In Direct Torque Control
In classical DTC there are several drawbacks. They can be summarized as: sluggish (slow) response both at
start-up and on changes in either flux or torque; and the fact that large and small errors in flux and torque are not
distinguished, in other words, the same vectors are used during start-up, during step changes and during steady
state. In order to overcome these drawbacks, there are different solutions, which can be classified as:
non-artificial-intelligence methods, mainly "sophisticated tables"; predictive algorithms, used to determine the
switching voltage vectors, for which a mathematical model of the induction motor is needed (the electromagnetic
torque and stator flux are estimated over the sampling period for all possible inverter states, and the predictive
algorithm then selects the inverter switching states giving the minimum deviation between the predicted
electromagnetic torque and the reference torque); and fuzzy logic based systems.

4. Problem Formulation
In this paper the performance of two drives, namely a Permanent Magnet Synchronous Motor Drive (3 HP) and
a DTC Induction Motor Drive (200 HP), is analyzed for different operating parameters as follows.
4.1 Permanent Magnet Synchronous Motor Drive (3HP)
In this circuit a permanent magnet (PM) synchronous motor drive with a braking chopper for a 3HP motor is
used. The PM synchronous motor is fed by a PWM voltage source inverter, which is built using a Universal Bridge
Block. The speed control loop uses a PI regulator to produce the flux and torque references for the vector control
block. The vector control block computes the three reference motor line currents corresponding to the flux and
torque references and then feeds the motor with these currents using a three-phase current regulator. Motor current,
speed, and torque signals are available at the output of the block.
4.2 Simulation description
Start the simulation. Observe the motor stator current, the rotor speed, the electromagnetic torque and the DC
bus voltage on the scope. The speed set point and the torque set point are also shown. At time t = 0 s, the speed set
point is 300 rpm; observe that the speed follows the acceleration ramp precisely. At t = 0.5 s, the full load torque is
applied to the motor. A small disturbance can be observed in the motor speed, which stabilizes very quickly. At
t = 1 s, the speed set point is changed to 0 rpm, and the speed decreases to 0 rpm following the deceleration ramp
precisely. At t = 1.5 s, the mechanical load passes from 11 Nm to -11 Nm. The motor speed stabilizes very quickly
after a small overshoot. Finally, it can be observed that the DC bus voltage is regulated during the whole simulation
period.



Fig. 2: Permanent magnet synchronous motor drive of 3HP

Fig.3: Permanent magnet synchronous motor internal architecture


Fig. 4: Stator current, rotor speed, electromagnetic torque and DC bus voltage
simulation results

A typical permanent magnet synchronous motor of 3 HP and 512 segments, having a remanent flux of 0.85 and a
saturation flux of 1.2, is controlled by artificial intelligence (AI) techniques. Further, an induction motor of 200 HP,
500 rpm is taken up and controlled by fuzzy logic.



4.3 DTC Induction Motor Drive (200 HP)
In this circuit a direct torque control (DTC) induction motor drive with a braking chopper for a 200HP AC
motor is used. The induction motor is fed by a PWM voltage source inverter which is built using a Universal Bridge
Block. The speed control loop uses a proportional-integral controller to produce the flux and torque references for
the DTC block. The DTC block computes the motor torque and flux estimates and compares them to their
respective references. The comparators' outputs are then used by an optimal switching table which generates the
inverter switching pulses. Motor current, speed, and torque signals are available at the output of the block.

Fig. 5: DTC Induction 200 HP Motor Drive.
4.4. Simulation details
Start the simulation. Observe the motor stator current, the rotor speed, the electromagnetic torque and the DC
bus voltage on the scope. The speed set point and the torque set point are also shown. At time t = 0 s, the speed set
point is 500 rpm. Observe that the speed follows precisely the acceleration ramp. At t = 0.5 s, the full load torque is
applied to the motor shaft while the motor speed is still ramping to its final value. This forces the electromagnetic
torque to increase to the user-defined maximum value (1200 Nm) and then to stabilize at 820 Nm once the speed
ramping is completed and the motor has reached 500 rpm. At t = 1 s, the speed set point is changed to 0 rpm. The
speed decreases down to 0 rpm by following precisely the deceleration ramp even though the mechanical load is
inverted abruptly, passing from 792 Nm to - 792 Nm, at t = 1.5 s. Shortly after, the motor speed stabilizes at 0 rpm.
Finally, the regulation of DC bus voltage during the whole simulation period is noted.


Fig.6: Stator current, rotor speed, electromagnetic torque and DC bus voltage simulation results.

5. Conclusion
Permanent Magnet Synchronous Motor and Direct Torque Control Induction Motor drives have been simulated
using MATLAB. It is found that, for a typical permanent magnet synchronous motor of 3 HP and 512 segments
with a remanent flux of 0.85 and a saturation flux of 1.2, better control can be achieved by artificial intelligence
(AI) techniques. Further, a Direct Torque Control strategy has been implemented on an induction motor of 200 HP,
500 rpm. It is evident from the simulation results that evolutionary computing techniques can reduce the errors,
namely the electromagnetic torque and flux errors.

References
[1] T. Sebastian, G. Slemon, and M. Rahman, "Modelling of permanent magnet synchronous motors," IEEE
Transactions on Magnetics, vol. 22, pp. 1069-1071, 1986.
[2] T. M. Jahns, G. B. Kliman, and T. W. Neumann, "Interior permanent-magnet synchronous motors for
adjustable-speed drives," IEEE Transactions on Industry Applications, vol. IA-22, pp. 738-746, 1986.
[3] P. Pillay and R. Krishnan, "Modeling of permanent magnet motor drives," IEEE Transactions on Industrial
Electronics, vol. 35, pp. 537-541, 1988.
[4] P. Pillay and R. Krishnan, "Modeling, simulation, and analysis of permanent-magnet motor drives. I. The
permanent-magnet synchronous motor drive," IEEE Transactions on Industry Applications, vol. 25, pp. 265-
273, 1989.
[5] S. Morimoto, Y. Tong, Y. Takeda, and T. Hirasa, "Loss minimization control of permanent magnet
synchronous motor drives," IEEE Transactions on Industrial Electronics, vol. 41, pp. 511-517, 1994.
[6] A. H. Wijenayake and P. B. Schmidt, "Modeling and analysis of permanent magnet synchronous motor by
taking saturation and core loss into account," 1997.
[7] K. Jang-Mok and S. Seung-Ki, "Speed control of interior permanent magnet synchronous motor drive for the
flux weakening operation," IEEE Transactions on Industry Applications, vol. 33, pp. 43-48, 1997.
[8] B. K. Bose, Modern Power Electronics and AC Drives: Prentice Hall, 2002.
[9] B. Cui, J. Zhou, and Z. Ren, "Modeling and simulation of permanent magnet synchronous motor drives,"
2001.
[10] C. Mademlis and N. Margaris, "Loss minimization in vector-controlled interior permanent-magnet
synchronous motor drives," IEEE Transactions on Industrial Electronics, vol. 49, pp. 1344-1347, 2002.
[11] X. Jian-Xin, S. K. Panda, P. Ya-Jun, L. Tong Heng, and B. H. Lam, "A modular control scheme for PMSM
speed control with pulsating torque minimization," IEEE Transactions on Industrial Electronics, vol. 51, pp.
526-536, 2004.
[12] R. E. Araujo, A. V. Leite, and D. S. Freitas, "The Vector Control Signal Processing blockset for use with
Matlab and Simulink," 1997.
[13] C.-M. Ong, Dynamic Simulation of Electric Machinery using Matlab/Simulink: Prentice Hall, 1998.
[14] H. Macbahi, A. Ba-razzouk, J. Xu, A. Cheriti, and V. Rajagopalan, "A unified method for modeling and
simulation of three phase induction motor drives," 2000.
[15] J. H. Reece, C. W. Bray, J. J. Van Tol, and P. K. Lim, "Simulation of power systems containing adjustable
speed drives," 1997.
[16] C. D. French, J. W. Finch, and P. P. Acarnley, "Rapid prototyping of a real time DSP based motor drive
controller using Simulink," 1998.
[17] S. Onoda and A. Emadi, "PSIM-based modeling of automotive power systems: conventional, electric, and
hybrid electric vehicles," IEEE Transactions on Vehicular Technology, vol. 53, pp. 390-400, 2004.
[18] G. Venkaterama, "Simulink Permanent Magnet Simulation," University of Wisconsin.
[19] N. Gautam, S. N. Singh, A. Binder, A. Rentschler, T. Schneider, "Modeling and Analysis of Parallel
Connected Permanent Magnet Synchronous Generator in a Small Hydro Power Plant," IE(I) Journal, Vol. 88,
June 2007.
[20] Li Liu et al., "Permanent Magnet Synchronous Motor Parameter Identification using PSO," International
Journal of Computational Intelligence Research, Vol. 4, No. 2 (2008), pp. 211-218.
[21] I. Takahashi and T. Noguchi, "A new quick-response and high-efficiency control strategy of an induction
motor," IEEE Trans. Ind. Applicat., vol. 22, no. 5, pp. 820-827, Sept./Oct. 1986.
[22] C. French and P. Acarnley, "Direct torque control of permanent magnet drives," IEEE Trans. Ind. Applicat.,
vol. 32, no. 5, pp. 1080-1088, Sept./Oct. 1996.
[23] L. Zhong, M. F. Rahman, W. Y. Hu, and K. W. Lim, "Analysis of direct torque control in permanent magnet
synchronous motor drives," IEEE Trans. Power Electron., vol. 12, no. 3, pp. 528-536, May 1997.
[24] P. Vas, Sensorless Vector and Direct Torque Control. New York: Oxford University Press, 1998, pp. 223-237.
[25] G. S. Buja and M. P. Kazmierkowski, "Direct torque control of PWM inverter-fed AC motors - a survey,"
IEEE Trans. Ind. Electron., vol. 51, no. 4, pp. 744-757, Aug. 2004.
[26] M. R. Zolghadri, J. Guiraud, J. Davoine, and D. Roye, "A DSP based direct torque controller for permanent
magnet synchronous motor drives," in Conf. Rec. IEEE 29th Annual Power Electronics Specialists Conference
(PESC'98), vol. 2, May 17-22, 1998, pp. 2055-2061.
[27] M. F. Rahman, L. Zhong, and K. W. Lim, "A direct torque-controlled interior permanent magnet synchronous
motor drive incorporating field weakening," IEEE Trans. Ind. Applicat., vol. 34, no. 6, pp. 1246-1253,
Nov./Dec. 1998.
IJCIIS Reviewers
A. Govardhan, Jawaharlal Nehru Technological University, India
Ajay Goel, Haryana Institute of Engineering and Technology, India
Ajay Sharma, Raj Kumar Goel Institute of Technology, India
Akshi Kumar, Delhi Technological University, India
Alok Singh Chauhan, Ewing Christian Institute of Management and Technology, India
Amandeep Dhir, Helsinki University of Technology Finland, Denmark Technical University, Denmark
Amit Kumar Rathi, Jaypee University of Engineering and Technology, India
Amol Potgantwar, Sandip Institute of Technology and Research Centre, India
Anand Sharma, MITS, India
Aos Alaa Zaidan Ansaef, Multimedia University, Malaysia
Arash Habibi Lashkari, University of Technology, Malaysia
Arpita Mehta, Christ University, India
Arul Lawrence Selvakumar, Kuppam Engineering College, India
Ayyappan Kalyanasundaram, Rajiv Gandhi College of Engineering and Technology, India
Azadeh Zamanifar, Iran University of Science and Technology and Niroo Research Institute, Iran
Bilal Bahaa Zaidan, University of Malaya, Malaysia
Binod Kumar, Lakshmi Narayan College of Technology, India
B. L. Malleswari, GNITS, India
B. Nagraj, Tamilnadu News Prints and Papers, India
Chakresh Kumar, Galgotias College of Engineering and Technology, India
C. Suresh Gnana Dhas, Vel Tech Multitech Dr.Rengarajan Dr.Sagunthla Engg. College, India
C. Sureshkumar, J. K. K. M. College of Technology, India
Deepankar Sharma, D. J. College of Engineering and Technology, India
Dhirendra Pandey, Babasaheb Bhimrao Ambedkar University, India
Durgesh Kumar Mishra, Acropolis Institute of Technology and Research, India
D. S. R. Murthy, SreeNidhi Institute of Science and Technology, India
G. N. K. Suresh Babu, SAMS College of Engineering and Technology, India
Hafeez Ullah Amin, KUST Kohat, NWFP, Pakistan
Hanumanthappa Jayappa, University of Mysore, India
Himanshu Aggarwal, Punjabi University, India
Jagdish Lal Raheja, Central Electronics Engineering Research Institute, India
Jatinder Singh, UIET Lalru, India
J. Samuel Manoharan, Karunya University, India
Imen Grida Ben Yahia, Telecom SudParis, France
Kanwalvir Singh Dhindsa, B. B. S. B. Engineering College, India
K Padmasree, Yogi Vemana University, India
K. V. N. Sunitha, G. Narayanamma Institute of Technology and Science, India
Leszek Sliwko, CITCO Fund Services, Ireland
M. Azath, Anna University, India
Md. Mobarak Hossain, Asian University of Bangladesh, Bangladesh
Mohd Nazri Ismail, University of Kuala Lumpur, Malaysia
Mohammed Salem Binwahlan, Hadhramout University of Science and Technology, Yemen
Mohamed Elshaikh, Universiti Malaysia Perlis, Malaysia
M. Surendra Prasad Babu, Andhra University, India
M. Thiyagarajan, Sastra University, India
Manjaiah D. H., Mangalore University, India
Nabih Zaki Rashed, Menoufia University, Egypt
Nagaraju Aitha, Vaagdevi College of Engineering, India
Natarajan Meghanathan, Jackson State University, USA
Navneet Sikarwar, B. S. A. College of Engineering and Technology, India
N. Jaisankar, VIT University, India
Ojesanmi Olusegun Ayodeji, Ajayi Crowther University, Nigeria
Oluwaseyitanfunmi Osunade, University of Ibadan, Nigeria
Perumal Dananjayan, Pondicherry Engineering College, India
Piyush Kumar Shukla, University Institute of Technology, Bhopal, India
Poonam Garg, Institute of Management Technology, India
P. Ramesh Babu, Rajamahendri Institute of Engineering and Technology, India
Praveen Ranjan Srivastava, BITS, India
P. V. Sarathchand, Indur Institute of Engineering and Technology, India
Rajesh Kumar, National University of Singapore, Singapore
Rajeshwari Hegde, BMS College of Engineering, India
Rakesh Chandra Gangwar, Beant College of Engineering and Technology, India
Raman Kumar, D A V Institute of Engineering and Technology, India
Raman Maini, University College of Engineering, Punjabi University, India
Ramveer Singh, Raj Kumar Goel Institute of Technology, India
Sateesh Kumar Peddoju, Vaagdevi College of Engineering, India
Shahram Jamali, University of Mohaghegh Ardabili, Iran
Sriman Narayana Iyengar, India
Suhas Manangi, Microsoft, India
Sujisunadaram Sundaram, Anna University, India
Sukumar Senthilkumar, National Institute of Technology, India
S. Murugan, Alagappa University and Centre for Development for Advanced Computing, India
S. S. Mehta, J. N. V. University, India
S. Smys, Karunya University, India
S. V. Rajashekararadhya, Adichunchanagiri Institute of Technology, India
Thipendra P Singh, Sharda University, India
T. Ramanujam, Krishna Engineering College, Ghaziabad, India
T. Venkat Narayana Rao, Hyderabad Institute of Technology and Management, India
Vasavi Bande, Hyderabad Institute of Technology and Management, India
Vishal Bharti, Dronacharya College of Engineering, India
Vuda Sreenivasarao, St. Mary's College of Engineering and Technology, India
V. Umakanta Sastry, Sreenidhi Institute of Science and Technology, India
Yee Ming Chen, Yuan Ze University, Taiwan