
International Journal of Engineering Research and General Science Volume 2, Issue 3, April-May 2014

ISSN 2091-2730


Table of Contents
Chief Editor Board
Message from Associate Editor
Research Papers Collection


CHIEF EDITOR BOARD
1. Dr Gokarna Shrestha, Professor, Tribhuwan University, Nepal
2. Dr Chandrasekhar Putcha, Outstanding Professor, University of California, USA
3. Dr Shashi Kumar Gupta, Professor, IIT Roorkee, India
4. Dr K R K Prasad, Professor and Dean, K.L. University, India
5. Dr Kenneth Derucher, Professor and Former Dean, California State University, Chico, USA
6. Dr Azim Houshyar, Professor, Western Michigan University, Kalamazoo, Michigan, USA
7. Dr Sunil Saigal, Distinguished Professor, New Jersey Institute of Technology, Newark, USA
8. Dr Hota GangaRao, Distinguished Professor and Director, Center for Integration of Composites into Infrastructure, West Virginia University, Morgantown, WV, USA
9. Dr Bilal M. Ayyub, Professor and Director, Center for Technology and Systems Management, University of Maryland, College Park, Maryland, USA
10. Dr Sarh BENZIANE, Associate Professor, University of Oran, Algeria
11. Dr Mohamed Syed Fofanah, Head, Department of Industrial Technology & Director of Studies, Njala University, Sierra Leone
12. Dr Radhakrishna Gopala Pillai, Honorary Professor, Institute of Medical Sciences, Kyrgyzstan
13. Dr P.V. Chalapati, Professor, K.L. University, India
14. Dr Ajaya Bhattarai, Professor, Tribhuwan University, Nepal
ASSOCIATE EDITOR-IN-CHIEF
1. Er. Pragyan Bhattarai, Research Engineer and Program Co-ordinator, Nepal
ADVISORY EDITORS
1. Mr Leela Mani Poudyal, Chief Secretary, Nepal Government, Nepal
2. Mr Sukdev Bhattarai Khatry, Secretary, Central Government, Nepal
3. Mr Janak Shah, Secretary, Central Government, Nepal
International Journal of Engineering Research and General Science Volume 2, Issue 3, April-May 2014
ISSN 2091-2730

4 www.ijergs.org

4. Mr Mohodatta Timilsina, Executive Secretary, Central Government, Nepal
5. Dr Manjusha Kulkarni, Associate Professor, Pune University, India
6. Er. Ranipet Hafeez Basha (PhD Scholar), Vice President, Basha Research Corporation, Kumamoto, Japan
TECHNICAL MEMBERS
1. Miss Rekha Ghimire, Research Microbiologist, Nepal Section Representative, Nepal
2. Er. A.V.A Bharat Kumar, Research Engineer, India Section Representative and Program Co-ordinator, India
3. Er. Amir Juma, Research Engineer, Uganda Section Representative and Program Co-ordinator, Uganda
4. Er. Maharshi Bhaswant, Research Scholar (University of Southern Queensland), Research Biologist, Australia

Message from Associate Editor-in-Chief
Let me first of all take this opportunity to wish all our readers a very happy, peaceful and prosperous year ahead.
This is the Fourth Issue of the Second Volume of the International Journal of Engineering Research and General Science. A total of 58 research articles are published, and I sincerely hope that each one of these provides some significant stimulation to a reasonable segment of our community of readers.
In this issue, we have focused mainly on upcoming technology and research. We also welcome more research-oriented ideas in our upcoming issues.
The authors' response to this issue was really inspiring for us. We received more papers, from more countries, for this issue than for the previous one, but our technical team and editorial members accepted only a small number of research papers for publication. We have provided editorial feedback for every rejected as well as accepted paper so that authors can work on the weaknesses, and we may accept those papers in the near future. We apologize for the inconvenience caused to rejected authors, but I hope our editorial feedback helps you discover more horizons for your research work.
I would like to take this opportunity to thank each and every writer for their contribution, and to thank the entire International Journal of Engineering Research and General Science (IJERGS) technical team and editorial members for their hard work towards the development of research in the world through IJERGS.
Last, but not the least, my special thanks and gratitude go to all our fellow friends and supporters. Your help is greatly appreciated. I hope our readers will find our papers educational and entertaining. Our team has done a good job; however, this issue may still have some drawbacks, and therefore constructive suggestions for further improvement are warmly welcomed.



Er. Pragyan Bhattarai,
Assistant Editor-in-Chief, P&R,
International Journal of Engineering Research and General Science
E-mail: Pragyan@ijergs.org
Contact no.: +977 9841549341

Design of Magnetic Levitation Assisted Landing and Take-off Mechanism of Aircraft using Hammock Concept

Kumar Poudel¹

¹Hindustan Institute of Technology, Department of Aeronautical Engineering, Coimbatore
Email: kumarpoudelkx27@gmail.com

ABSTRACT: For safe, efficient landing and take-off (TOL) of aircraft in the future, magnetic levitation assisted TOL could turn out to be the best alternative to the conventional landing gear system. Thus, in this paper, the design and working principle of magnetic levitation assisted TOL is proposed using the hammock concept. The hammocks used in this concept are slings made up of high-strength fibre and steel cables of the kind often used in the construction of bridges. The hammock is attached to a sledge on which the aircraft is placed during the TOL operation. The sledge is also provided with wheels and can be detached from the hammock for ground operations such as taxiing and hangar operations. There is a provision for joining two sledges together in order to increase the length of the sledge for larger aircraft. Tracks based on the principle of electrodynamic suspension are used to drive the hammock and sledge unit during the TOL operation, and the source of power is electricity.
Keywords: magnetic levitation, take-off and landing, Halbach arrays, hammocks, sledge, steel cables, barricade.
INTRODUCTION
The magnetic levitation system uses magnetic force to levitate the aircraft on a rail and to accelerate it during take-off. When landing, this system can be utilized to decelerate the aircraft. During take-off and landing, an excessive amount of impact force, vibration and shock is produced. In the conventional system, hydraulic shock absorbers are used for this purpose; they consume nearly 7% of the total aircraft weight and require a complex hydraulic mechanism. Thus, in a magnetic levitation system, hammocks can be utilized for this purpose, reducing weight while acting as a good absorber of shock, vibration and impact force.
A generally used hammock is a sling made of fabric, rope or netting, suspended between two points, used for swinging, sleeping or resting. It normally consists of one or more cloth panels, or a woven network of twine or thin rope, stretched with ropes between two firm anchor points such as trees or posts.
On aircraft carriers, an emergency recovery system called a barricade is widely used. It consists of upper and lower loading straps joined together to arrest the motion of the aircraft, and it looks like a hammock. Similarly, bridges are constructed with high-strength suspended cables which hold the entire weight of the bridge and the payload. Thus, designing a magnetic levitation assisted sledge mechanism with a hammock for the TOL operation could be a reliable and cost-effective mechanism.
The magnetic levitation system consists of a special arrangement of permanent magnets which augments the magnetic field on one side of the array while cancelling it to nearly zero on the other side. This special arrangement is known as a Halbach array, a concept developed by Klaus Halbach of the Lawrence Berkeley National Laboratory in the 1980s for use in particle accelerators.

Figure 1 shows a linear Halbach array.


Fig 1. Linear Halbach Arrays

Methodology

A. Aircraft

For this project, the conventional landing gear system has to be removed and the belly of the aircraft has to be redesigned, since the aircraft will be carried on the sledge.

B. Basic design concept of the magnetic levitation assisted sledge with hammock

The main components here are the sledge, the hammock and the electromagnetic rail. The length and other specifications of the sledge, hammock and rail can be varied according to various factors such as the length and weight of the aircraft. Thus, for this project, only the basic conditions of the TOL mechanism are considered; the length and various other specifications used here are all assumptions. The basic design of this mechanism is represented schematically in figure 2.
Fig. 2: Basic design concept of the magnetic levitation sledge with hammock. (Schematic labels: high-strength suspension cables; sledge frame; hammock; separation blocks of the sledge from the hammock; sledge, placed at the aircraft centre of gravity, usually the belly of the aircraft; electromagnetic levitation tracks.)

C. Designing the sledge

The sledge is the main portion on which the entire aircraft is supported. Hence the sledge will be provided with the following elements:

On the starboard and port sides of the sledge, a latching mechanism will be provided to attach and detach the sledge to and from the hammock.
Similarly, at the front and rear of the sledge, a similar latching mechanism will be provided to attach and detach one sledge to another, so that the length of the sledge can be increased or decreased depending upon the size of the aircraft.


The midsection of the sledge is provided with hydraulic actuators so that this section can be moved horizontally and vertically to increase precision, while the frame of the sledge remains stationary.
Various electrical sensors are implemented to ensure the proper functioning and safe condition of the sledge and the other mechanisms.
Electric-motor-driven wheels will be provided on the sledge system so that the aircraft can be moved from the track to the hangar or for performing ground operations.
The wheels will be of the retractable type, because fixed wheels increase drag.

The 3D view of the sledge with its components is shown in figure 3.


Fig 3. 3D view of sledge



D. Aerodynamics of the sledge

In order to reduce drag and excess noise, an aerodynamic cowling should be designed. It could be made retractable during landing, because for landing the excess drag is useful for its braking effect.

E. Designing the sledge slot

The sledge slot is essentially another sledge, added to the main sledge in order to support the TOL operation of larger aircraft. Each sledge is complete and contains all components; they are similar but slotted according to length and breadth.

F. Designing the hammock

Barricades are a good example for the structural concept of the hammock. The hammock will be constructed with fibre and cables having high tensile strength and good stiffness, such as those used in the construction of bridges. A bunch of high-strength steel cables will be combined together to form a suspension cable so that a fail-safe design is achieved, i.e. the whole system is not affected by the failure of a single cable in the bunch. Figure 4 illustrates the fail-safe design of the cable system, and figure 5 is a picture of a commercially available steel cable.








Fig 4. Suspension cable design for the hammock (label: high-strength steel cables)




Fig 5. Commercially used steel cable
Working Principle

The working principle can be described according to the various operational states of the components used in this mechanism.
Magnetic levitation track
The magnetic levitation track provides levitation and traction to the entire setup. The magnetic levitation Inductrack is constructed from a series of Halbach arrays, which can produce a flux density of more than 1 T. At the operating speed of the sledge, the levitation force of the Inductrack acts like a stiff spring, and a clearance of more than 2 cm between the sledge and the track can be produced. Since no friction force acts on the system, the sledge can be accelerated to its maximum speed, which is the speed required for the aircraft to produce lift. The lift produced by the wing then takes the aircraft off, and the sledge is finally detached from the aircraft.
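To put rough numbers on the statements above: the field below a linear Halbach array falls off roughly exponentially with distance from the array, so a simple upper bound on the levitation pressure at a given track clearance is B(z)²/2μ₀. The sketch below (Python) illustrates this estimate; the sinusoidal-field model, the 0.1 m array wavelength and the reuse of the quoted 1 T surface field are illustrative assumptions, not design data from this paper.

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space (T*m/A)

def levitation_pressure(b0_tesla, wavelength_m, gap_m):
    """Rough upper bound on Halbach-array levitation pressure (N/m^2).

    The field on the strong side decays as B0 * exp(-k*z), with
    k = 2*pi/wavelength; the magnetic pressure available at height z
    is then B(z)^2 / (2*mu0).
    """
    k = 2.0 * math.pi / wavelength_m
    b_at_gap = b0_tesla * math.exp(-k * gap_m)
    return b_at_gap ** 2 / (2.0 * MU0)

# Illustrative numbers: 1 T surface field (quoted above), an assumed
# 0.1 m array wavelength, and the 2 cm sledge-track clearance.
print(f"{levitation_pressure(1.0, 0.10, 0.02):.0f} N/m^2")  # ~3.2e4 N/m^2
```

With these assumed values the bound is roughly 3×10⁴ N/m², i.e. on the order of three tonnes of lift per square metre of array, which indicates why a modest array area could carry a sledge and aircraft.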
Hammock
Here the hammocks act as the shock-absorbing agents; they can be regarded as the replacement for the hydraulic system, though some parts used in this project are still provided with a hydraulic system for safety and to increase efficiency. The hammocks are the slings connecting the track to the sledge. The sledge can be detached from the hammocks with the help of the attach/detach hinges. Hammocks play a vital role during the landing operation.
Sledge
The sledge is the main component; its primary function is to hold the aircraft. The provision for moving the sledge horizontally and vertically with respect to the sledge frame enables precision landing and take-off of the aircraft and also plays a vital role in landing in gusty wind. According to the total length of the aircraft, additional sledge slots can be attached or detached; this works similarly to attaching and detaching train compartments.
Attach/detach hinges
They are provided on the starboard and port sides for the hammock. The hinges at the front and rear provide for the addition of sledge slots.
Figures 6 and 7 show the flowcharts for the take-off and landing operations.


Fig 6. Flow chart of take-off operation: the sledge carries the aircraft from the hangar; the sledge is attached to the hammock; passengers on board, ready for take-off; the electromagnetic levitation mechanism accelerates the aircraft to take-off speed; the sledge detaches from the aircraft, and the aircraft takes off.

Fig 7. Flow chart of landing operation: landing approach; the aircraft lands on the sledge, and the Inductrack decelerates; the sledge detaches from the hammock; passenger arrival station; the aircraft goes to the hangar.

CONCLUSIONS
This paper gives an idea of the design of the hammock concept with magnetic levitation assistance for the TOL operation of aircraft. This method could increase the fuel efficiency of aircraft: since take-off and landing are done through a ground-assisted power source, smaller engines can be used; removal of the conventional landing gear could reduce the aircraft weight by 7%; and the lower noise production means airports could be built nearer to cities.
Finally, I conclude that this method is a highly cost-effective one, because it uses less hydraulic machinery and can reduce the runway length. It is a strong alternative to the conventional TOL mechanism and could help the aviation industry go green.
ACKNOWLEDGMENT
I would like to convey thanks to my parents, to those who encouraged me, to the Hindustan institutions, the faculty and staff of Hindustan Institute of Technology, and to all my friends.

REFERENCES:
Richard F. Post, "Magnetic Levitation for Moving Objects," U.S. Patent No. 5,722,326.
GABRIEL out-of-the-box project, "Possible Solutions to Take-off and Land an Aircraft," version 1.3, GA No. FP7-284884.
Barricade, http://en.wikipedia.org/wiki/Arresting_gear; hammock, http://en.wikipedia.org/wiki/Hammock; Halbach arrays, http://en.wikipedia.org/wiki/Halbach_array.
Klaus Halbach, "Application of Permanent Magnets in Accelerators and Electron Storage Rings," Journal of Applied Physics, vol. 57, p. 3605, 1985.
David Pope, "Halbach Arrays Enter the Maglev Race," The Industrial Physicist, pp. 12-13.

Generation of Alternative Process Plans using TLBO Algorithm

Sreenivasulu Reddy. A¹, Sreenath K¹, Abdul Shafi M¹

¹Department of Mechanical Engineering, S V University College of Engineering, Tirupati-517502, India
Email: seetharamadasubcm@gmail.com

ABSTRACT: A Computer Aided Process Planning (CAPP) system is an important production activity in the manufacturing industry; it generates process plans that contain the required information on machining operations, machining parameters (speeds, feeds and depths of cut), machine tools, setups, cutting tools and accessories for producing a part as per a given part drawing. In this context, to generate optimum process plans, an AI-based metaheuristic algorithm, Teaching-Learning-Based Optimization (TLBO), is used to solve the process planning problem, minimizing operation sequence cost and machining time. TLBO is based on the natural phenomenon of the teaching-learning process in a classroom.
Keywords: CAPP, TLBO, optimized solution, alternative process plans, teacher phase, learner phase.
INTRODUCTION
Computer Aided Process Planning (CAPP) deals with the selection of the machining operation sequence as per a given drawing and the determination of the conditions to produce the part [9]. It includes the design data, selection of machining processes, selection of machine tools, sequence of operations, setups, processing times and related costs. It explores operational details such as the sequence of operations, speeds, feeds, depths of cut, material removal rates, and job routes [10]. Required inputs to the planning scheme include geometric features, dimensional sizes, tolerances and work materials. These inputs are analyzed and evaluated in order to select an appropriate operation sequence based upon the available machinery and workstations. Therefore, the generation of consistent and accurate process plans requires the establishment and maintenance of standard databases and the implementation of effective and efficient Artificial Intelligence (AI) heuristic algorithms; algorithms such as the Genetic Algorithm (GA), Simulated Annealing (SA), Ant Colony Optimization (ACO) and the TLBO algorithm are used to solve these problems.
LITERATURE REVIEW
Over the last three decades, many evolutionary and heuristic algorithms have been applied to process planning problems. Usher and Sharma (1994) identified several feasibility constraints which affect the sequencing of machining operations; these constraints are processed sequentially based on the precedence relations of the design features. Usher and Bowden (1996) proposed an application of a genetic algorithm (GA) for finding near-optimal solutions. In 2002, Li et al. developed a hybrid GA and SA approach to solve these problems for prismatic parts. Gopal Krishna and Mallikarjuna Rao (2006) and Sreeramulu et al. (2012) presented the metaheuristic Ant Colony Optimization (ACO) algorithm as a global search technique for quick identification of the operation sequence. TLBO is a recently developed algorithm introduced by Rao et al. (2011), based on the natural phenomenon of the teaching and learning process in a classroom; it therefore does not require any algorithm-specific control parameters. They (2013) also applied it to job shop scheduling problems to minimize the makespan. All evolutionary algorithms require common controlling parameters such as population size and number of generations; in addition to these common parameters, many require their own algorithm-specific parameters. For example, GA uses mutation and crossover rates, and PSO uses an inertia weight.
TEACHING-LEARNING-BASED OPTIMIZATION ALGORITHM
In the TLBO algorithm, the teacher and the learners are the two vital components. The algorithm describes two basic modes of learning: through the teacher (known as the teacher phase) and through interaction with the other learners (known as the learner phase). The teacher is usually considered a highly learned person who trains learners so that they can obtain better results in terms of their marks or grades. Moreover, learners also learn from interaction among themselves, which further helps improve their results. TLBO is a population-based method. In this optimization algorithm, a group of learners is considered the population, the different design variables are considered the different subjects offered to the learners, and a learner's result is analogous to the fitness value of the optimization problem. In the entire population, the best solution is considered the teacher. The TLBO algorithm mainly works in two phases, namely the teacher phase and the learner phase.




Teacher Phase
The teacher phase is the first phase of the TLBO algorithm. In this phase, the teacher tries to improve the mean of the class. A good teacher is one who brings his or her learners up to his or her level in terms of knowledge, but in practice this is not possible: a teacher can only move the mean of a class up to some extent, depending on the capability of the class. This follows a random process depending on many factors. Generate the random population according to the population size and number of generations [6].

Calculate the mean of the population, which gives the mean for each particular subject (design variable), M_D = [m_1, m_2, ..., m_D]. The best solution acts as the teacher for that iteration: X_teacher is the X for which f(X) is minimum. The teacher tries to shift the mean from M_D towards X_teacher, which acts as the new mean for the iteration, so M_new,D = X_teacher,D.

The difference between the two means is expressed as

Difference_D = r_i (M_new,D - T_F M_D)    (1)

where r_i is a random number in the range [0, 1] and the value of the teaching factor T_F is taken as 1 or 2. The obtained difference is added to the current solution to update its values using

X_new,D = X_old,D + Difference_D    (2)

Accept X_new if it gives a better function value.
Learner Phase
A learner interacts randomly with other learners to enhance his or her knowledge [4]. Randomly select two learners X_i and X_j:

X'_new,D = X_old,D + r_i (X_i - X_j)  if f(X_i) < f(X_j)
X'_new,D = X_old,D + r_i (X_j - X_i)  if f(X_i) > f(X_j)

Accept X'_new if it gives a better function value.
Termination criterion: stop if the maximum generation number is reached; otherwise repeat from the teacher phase.
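To make the two phases concrete, the following is a minimal sketch of continuous TLBO in Python, written directly from equations (1) and (2) and the learner-phase update above. The population size, bounds and sphere test function are illustrative assumptions; the paper itself applies a discrete variant of these updates to operation sequences.

```python
import random

def tlbo(f, dim, pop_size=20, generations=100, lo=-5.0, hi=5.0):
    """Minimize f over dim variables with basic TLBO."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Teacher phase: the best learner pulls the class mean towards itself.
        teacher = min(pop, key=f)
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i, x in enumerate(pop):
            tf = random.choice((1, 2))        # teaching factor T_F
            r = random.random()               # r_i in [0, 1]
            cand = [x[d] + r * (teacher[d] - tf * mean[d]) for d in range(dim)]
            if f(cand) < f(x):                # greedy acceptance
                pop[i] = cand
        # Learner phase: each learner interacts with a random peer.
        for i, x in enumerate(pop):
            j = random.randrange(pop_size)
            if i == j:
                continue
            r = random.random()
            sign = 1 if f(x) < f(pop[j]) else -1   # move away from the worse peer
            cand = [x[d] + r * sign * (x[d] - pop[j][d]) for d in range(dim)]
            if f(cand) < f(x):
                pop[i] = cand
    return min(pop, key=f)

best = tlbo(lambda v: sum(t * t for t in v), dim=5)   # sphere test function
print(best)
```

Note that, as the literature review points out, the only tunables here are the common population size and generation count; T_F and r_i are drawn randomly rather than hand-set.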
PROCESS PLANNING METHODOLOGY
In this algorithm, the operation sequences are considered the learners and the operations act as the subjects. The operation sequences are generated randomly according to the procedure of the algorithm. The time and cost for the generated sequences are calculated, and the best sequence is identified as the teacher. In the teacher phase the solutions are updated (from equation 2) and the time and cost are calculated again. The flow chart of the TLBO algorithm is shown in figure 3.

The operation sequences are generated to develop a feasible and optimal sequence of operations for a part based on the technical requirements, including the part specifications in the design, the given manufacturing resources, and certain objectives related to cost or time. The following formulas are used to calculate the total time and manufacturing costs [8].

1. Machine cost (MC): MC is the total cost of the machines used in a process plan and is computed as

MC = Σ_{i=1}^{n} MCI[Oper(i).Mac_id] × (machining time of Oper(i))

where Oper(i) is operation i, MCI is the machine cost index and Mac_id is the machine used for the operation.

2. Tool cost (TC): TC is the total cost of the cutting tools used in a process plan and is computed as

TC = Σ_{i=1}^{n} TCI[Oper(i).Tool_id] × (machining time of Oper(i))

where TCI is the tool cost index and Tool_id is the tool used for the operation.

3. Number of set-up changes (NSC), number of set-ups (NS) and set-up cost (SC):

NSC = Σ_{i=1}^{n-1} Ω2( Ω1(Oper(i).Mac_id, Oper(i+1).Mac_id), Ω1(Oper(i).TAD_id, Oper(i+1).TAD_id) )

The corresponding NS and SC are computed as

NS = 1 + NSC
SC = Σ_{i=1}^{NS} SCI

where

Ω1(X, Y) = 0 if X = Y, and 1 otherwise
Ω2(X, Y) = 0 if X = Y = 0, and 1 otherwise

and SCI is the set-up cost index.

4. Number of machine changes (NMC) and machine change cost (MCC):

NMC = Σ_{i=1}^{n-1} Ω1(Oper(i).Mac_id, Oper(i+1).Mac_id)
MCC = Σ_{i=1}^{NMC} MCCI

where MCCI is the machine change cost index.

5. Number of tool changes (NTC) and tool change cost (TCC):

NTC = Σ_{i=1}^{n-1} Ω2( Ω1(Oper(i).Mac_id, Oper(i+1).Mac_id), Ω1(Oper(i).Tool_id, Oper(i+1).Tool_id) )
TCC = Σ_{i=1}^{NTC} TCCI

where TCCI is the tool change cost index.

6. Total weighted cost (TWC):

TWC = MC + TC + SC + MCC + TCC


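As a cross-check of the cost model, a candidate plan can be evaluated in a few lines. The sketch below (Python) follows definitions 1-6 above; the dictionary-based plan representation and the toy index values in the example are illustrative assumptions, not data from the case study.

```python
def total_weighted_cost(plan, mci, tci, sci, mcci, tcci):
    """Evaluate TWC for a plan: a list of operations, each a dict with
    machine id 'mac', tool id 'tool', approach direction 'tad' and
    machining 'time'. mci/tci map ids to cost indices; sci, mcci and
    tcci are the set-up, machine-change and tool-change cost indices."""
    def omega1(x, y):            # 0 if equal, 1 otherwise
        return 0 if x == y else 1
    def omega2(x, y):            # 0 only if both arguments are 0
        return 0 if x == 0 and y == 0 else 1

    pairs = list(zip(plan, plan[1:]))
    mc = sum(mci[op['mac']] * op['time'] for op in plan)     # machine cost
    tc = sum(tci[op['tool']] * op['time'] for op in plan)    # tool cost
    nsc = sum(omega2(omega1(a['mac'], b['mac']),
                     omega1(a['tad'], b['tad'])) for a, b in pairs)
    nmc = sum(omega1(a['mac'], b['mac']) for a, b in pairs)
    ntc = sum(omega2(omega1(a['mac'], b['mac']),
                     omega1(a['tool'], b['tool'])) for a, b in pairs)
    sc = (1 + nsc) * sci          # NS = 1 + NSC
    mcc = nmc * mcci
    tcc = ntc * tcci
    return mc + tc + sc + mcc + tcc

# Toy two-operation plan: same machine and set-up, one tool change.
plan = [{'mac': 4, 'tool': 9,  'tad': 2, 'time': 1111.33},
        {'mac': 4, 'tool': 15, 'tad': 2, 'time': 493.92}]
print(total_weighted_cost(plan, mci={4: 0.1}, tci={9: 0.02, 15: 0.02},
                          sci=120.0, mcci=160.0, tcci=20.0))
```

In the TLBO loop, this function plays the role of f(X): each learner (operation sequence) is scored by its TWC, and the cheapest sequence acts as the teacher.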

Case study

In this paper, the process plans are generated for a prismatic part drawing based on manufacturing time and related cost. The part details, costs, precedence relations and number of generations are given as input to the algorithm. The output contains the process plans and their costs, machining times and setups. The part drawing details are shown in Fig. 1 and Table 1, respectively.


Fig.1. Part Drawing Fig.2. Precedence relation of the part drawing
Operations Information
Table.1 Operations information for part drawing
F ID Feature Operations Dimensions
1. Surface Milling L=150,H=90,W=150
2. Pocket Shaping L=150,H=40,W=35
3. Pocket Shaping L=80,H=40,W=35
4. Pocket Shaping L=150,H=40,W=35
5. Pocket Shaping L=80,H=40,W=35
6. Hole Drilling D=16,H=30
7. Hole Drilling D=16,H=30
8. Hole Drilling D=16,H=30
9. Hole Drilling D=16,H=30
10. Hole Drilling D=16,H=30
11. Hole Drilling D=16,H=30
12. Hole Drilling D=16,H=30
13. Hole Drilling D=16,H=30
14. Hole Drilling D=60,H=11
15. Hole Drilling D=26,H=90

The precedence relations for the part drawing are shown in Fig. 2. These precedence relations are generated according to some standard rules; however, the user is allowed to choose the precedence relations according to the requirements and available resources.


Fig.3. Flow chart of the TLBO algorithm. (Steps: start; initialize the population, design variables and number of generations; generate the plans randomly and find the objective function; calculate the mean of each design variable; identify the best solution; calculate the difference mean and modify the solutions based on the best solution; find the objective function for the modified solutions; if a new solution is better than the existing one, accept it, otherwise keep the previous solution; select any two solutions X_i and X_j randomly and again accept the modification only if it is better; repeat until the termination criterion is fulfilled; report the final solution.)

Table 2: Best two process plans for part drawing

CRITERION 1: MINIMUM COST
Operation ID:        1  2  3  4  5  14 13 6  12 7  8  9  15 11 10
Operation type:      7  10 10 10 10 3  3  3  3  3  3  3  3  3  3
Operation name:      Milling, Shaping, Shaping, Shaping, Shaping, then Drilling (10 operations)
Machine allocated:   4  13 13 13 13 4  8  8  3  8  3  4  9  7  8
Tool allocated:      9  15 15 15 15 4  4  4  4  4  6  5  4  4  5
Set-up allocated:    2  6  6  6  6  6  1  1  1  1  6  6  6  6  6
Cost: 558.17; total time: 389.08; raw material cost: 2.97675; total cost: 561.1465; no. of set-up changes: 4; no. of tool changes: 7; no. of machine changes: 11

CRITERION 2: MINIMUM TIME
Operation ID:        1  2  3  4  5  14 15 6  12 7  8  9  10 11 13
Operation type:      7  10 10 10 10 3  3  3  3  3  3  3  3  3  3
Operation name:      Milling, Shaping, Shaping, Shaping, Shaping, then Drilling (10 operations)
Machine allocated:   4  13 13 13 13 10 7  3  3  3  3  10 8  8  10
Tool allocated:      10 16 16 15 15 6  4  7  5  4  4  4  6  5  7
Set-up allocated:    6  6  6  6  6  6  6  1  1  1  6  6  6  6  1
Cost: 587.57; total time: 384.08; raw material cost: 2.97675; total cost: 590.54675; no. of set-up changes: 4; no. of tool changes: 11; no. of machine changes: 8

Table 3: Alternative five process plans for part drawing (Part No 2)
PLAN1
OPERATION ID 1 2 3 4 5 14 13 6 12 7 8 9 15 11 10
OPERATION NAME Milling Shaping Shaping Shaping Shaping Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling
MACHINE ALLOCATED 4 13 13 13 13 4 8 8 3 8 3 4 9 7 8
TOOL ALLOCATED 9 15 15 15 15 4 4 4 4 4 6 5 4 4 5
SET UP ALLOCATED 2 6 6 6 6 6 1 1 1 1 6 6 6 6 6
OPERATION TIME 1111.33 493.92 263.42 493.92 263.42 9.96 74.08 74.08 74.08 74.08 74.08 74.08 666.8 74.08 74.08
PLAN2
OPERATION ID 1 2 3 4 5 14 13 6 12 15 8 9 10 11 7
OPERATION NAME Milling Shaping Shaping Shaping Shaping Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling
MACHINE ALLOCATED 4 13 13 13 13 3 10 10 8 8 9 4 4 3 8
TOOL ALLOCATED 10 16 16 15 15 7 4 7 6 6 7 4 7 7 5
SET UP ALLOCATED 6 6 6 6 6 6 6 6 6 1 1 6 6 6 1
OPERATION TIME 1111.33 493.92 263.42 493.92 263.42 9.96 74.08 74.08 74.08 666.8 74.08 74.08 74.08 74.08 74.08
PLAN3
OPERATION ID 1 2 3 4 5 14 15 6 12 7 8 9 10 11 13
OPERATION NAME Milling Shaping Shaping Shaping Shaping Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling
MACHINE ALLOCATED 4 13 13 13 13 10 7 3 3 3 3 10 8 8 10
TOOL ALLOCATED 10 16 16 15 15 6 4 7 5 4 4 4 6 5 7
SET UP ALLOCATED 6 6 6 6 6 6 6 1 1 1 6 6 6 6 1
OPERATION TIME 1111.33 493.92 263.42 493.92 263.42 9.96 666.8 74.08 74.08 74.08 74.08 74.08 74.08 74.08 74.08
PLAN4
OPERATION ID 1 2 3 4 5 14 13 6 12 7 8 9 10 15 11
OPERATION NAME Milling Shaping Shaping Shaping Shaping Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling
MACHINE ALLOCATED 10 13 13 13 13 7 7 9 10 3 3 8 3 8 8
TOOL ALLOCATED 12 16 15 16 16 4 6 6 6 4 7 6 4 5 5
SET UP ALLOCATED 6 6 5 5 5 6 1 1 1 1 1 1 1 6 6
OPERATION TIME 1111.33 493.92 263.42 493.92 263.42 9.96 74.08 74.08 74.08 74.08 74.08 74.08 74.08 666.8 74.08
PLAN5
OPERATION ID 1 2 3 4 5 14 13 6 12 7 8 9 10 11 15
OPERATION NAME Milling Shaping Shaping Shaping Shaping Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling Drilling
MACHINE ALLOCATED 4 13 13 13 13 9 9 7 9 3 3 7 8 3 10
TOOL ALLOCATED 11 16 16 16 16 7 6 4 6 6 4 5 5 5 4
SET UP ALLOCATED 6 6 6 5 5 6 6 6 1 1 1 1 1 1 1
OPERATION TIME 1111.33 493.92 263.42 493.92 263.42 9.96 74.08 74.08 74.08 74.08 74.08 74.08 74.08 666.8 74.08

CONCLUSION
In this paper, the TLBO algorithm is used to solve the process planning problem based on the sequencing of machining operations. The problem is modeled with manufacturing time and associated cost as the objectives. Good results are obtained with the TLBO algorithm.
REFERENCES:

[1] Bhaskara Reddy, S.V., Shunmugam, M.S., and Narendran, T.T., "Operation sequencing in CAPP using genetic algorithms," International Journal of Production Research, vol. 37, no. 5, pp. 1063-1074, 1999.
[2] Gopal Krishna, A., and Mallikarjuna Rao, K., "Optimization of operations sequence in CAPP using an ant colony algorithm," Advanced Manufacturing Technology, vol. 29, no. 1-2, pp. 159-164, 2006.
[3] Li, W.D., Ong, S.K., and Nee, A.Y.C., "Hybrid genetic algorithm and simulated annealing approach for the optimization of process plans for prismatic parts," International Journal of Production Research, vol. 40, no. 8, pp. 1899-1922, 2002.
[4] Keesari, H.V., and Rao, R.V., "Optimization of job shop scheduling problems using teaching-learning-based optimization algorithm," Operational Research Society of India, 2013.
[5] Nallakumarasamy, G., Srinivasan, P.S.S., Venkatesh Raja, K., and Malayalamurthi, R., "Optimization of operation sequencing in CAPP using simulated annealing technique (SAT)," International Journal of Advanced Manufacturing Technology, vol. 54, no. 5-8, pp. 721-728, 2011.
[6] Rao, R.V., Savsani, V.J., and Vakharia, D.P., "Teaching-learning-based optimization: an optimization method for continuous non-linear large scale problems," Information Sciences, vol. 183, pp. 1-15, 2012.
[7] Sreenivasulu Reddy, A., "Generation of optimal process plan using Depth First Search (DFS) algorithm," Proceedings of the IV National Conference on Trends in Mechanical Engineering, TIME'10, 30th December 2010, Kakatiya Institute of Technology & Science, Warangal.
[8] Sreenivasulu Reddy, A., and Ravindranath, K., "Integration of process planning and scheduling activities using Petri nets," International Journal of Multidisciplinary Research and Advances in Engineering (IJMRAE), ISSN 0975-7074, vol. 4, no. III, pp. 387-402, July 2012.
[9] Sreeramulu, D., and Sudeep Kumar Singh, "Generation of optimum sequence of operations using ant colony algorithm," International Journal of Advanced Operations Management, vol. 4, no. 4, 2012.
[10] Srinivas, P.S., Ramachandra Raju, V., and Rao, C.S.P., "Optimization of process planning and scheduling using ACO and PSO algorithms," International Journal of Emerging Technology and Advanced Engineering, ISSN 2250-2459, vol. 2, issue 10, October 2012.
[11] Usher, J.M., and Bowden, R.O., "The application of genetic algorithms to operation sequencing for use in computer-aided process planning," Computers & Industrial Engineering, no. 4, pp. 999-1013, 1996.
[12] Usher, J.M., and Sharma, G., "Process planning in the face of constraints," Proc. Industrial Engineering and Management Systems Conference, pp. 278-283, 1994.

A Strategical Description of Ripple Borrow Subtractor in Different Logic Styles

T. Dineshkumar¹, M. Arunlakshman¹

¹Research Scholar (M.Tech), VLSI, Sathyabama University, Chennai, India
Email: arunlakshman@live.com

ABSTRACT: The demand for and popularity of portable electronics is driving designers to strive for small silicon area, higher speed, low power dissipation and reliability. This work covers the design of the 2-input AND, 2-input OR, 2-input XOR and INVERTER, which are the basic building blocks of the 4-bit ripple borrow subtractor. The paper involves designing the ripple borrow subtractor in cMOS logic, transmission gate logic and pass transistor logic styles. The schematic design is further transferred to a prefabrication layout. Simulation of the Microwind layout realizations of the subtractor is performed and the results are discussed. From the results obtained, cMOS logic, transmission gate logic and pass transistor logic are compared, and the most efficient logic for the ripple borrow subtractor is discussed.
Keywords: cMOS logic, transmission gate logic, pass transistor logic, full subtractor, ripple borrow subtractor.
INTRODUCTION
In this paper, we present a brief review of the ripple borrow subtractor using cMOS, transmission gate and pass transistor logic styles. The basic circuit diagram of the 1-bit full subtractor is described below, along with its block diagram and truth table. A full subtractor is a combinational circuit used to perform subtraction of three bits; it has three inputs, a (minuend), b (subtrahend) and borrow_in (borrow from the previous stage), and two outputs, d (difference) and borrow_out (borrow).


Fig. 1. Gate level representation of full subtractor




Fig. 2. Truth table and block diagram of full subtractor


CIRCUIT TECHNIQUES

FULL SUBTRACTOR

The full subtractor circuit subtracts three one-bit binary numbers (A, B, borrow_in) and outputs two one-bit binary numbers: a difference (D) and a borrow (borrow_out).

RIPPLE BORROW SUBTRACTOR
It is possible to create a logical circuit using multiple full subtractors to subtract N (in the present case 4) bit numbers. Each full subtractor inputs a borrow_in, which is the borrow_out of the previous subtractor. This kind of subtractor is a ripple borrow subtractor, since each borrow bit ripples to the next full subtractor, as sketched below.
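The borrow-ripple behaviour is easy to state at the behavioural level before going down to transistors. The following is a small Python model, an assumption for illustration only (the paper's actual designs are transistor-level Microwind layouts): each stage computes d = a XOR b XOR borrow_in and raises borrow_out when the minuend bit is insufficient.

```python
def full_subtractor(a, b, bin_):
    """One-bit full subtractor: returns (difference, borrow_out)."""
    d = a ^ b ^ bin_
    bout = ((1 - a) & b) | ((1 - (a ^ b)) & bin_)
    return d, bout

def ripple_borrow_subtract(a_bits, b_bits):
    """N-bit subtraction, LSB first: each stage's borrow_out feeds the
    next stage's borrow_in, i.e. the borrow 'ripples' along the chain."""
    borrow, diff = 0, []
    for a, b in zip(a_bits, b_bits):
        d, borrow = full_subtractor(a, b, borrow)
        diff.append(d)
    return diff, borrow

# 4-bit example, LSB first: 0101 (5) - 0011 (3) = 0010 (2), no final borrow.
print(ripple_borrow_subtract([1, 0, 1, 0], [1, 1, 0, 0]))  # ([0, 1, 0, 0], 0)
```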



Fig. 3. Ripple borrow subtractor

4-BIT RIPPLE BORROW SUBTRACTOR USING cMOS CIRCUITS
cMOS is referred to as complementary-symmetry metal-oxide semiconductor (COS-MOS). The words "complementary-symmetry" refer to the fact that the typical digital design style with CMOS uses complementary and symmetrical pairs of p-type and n-type metal oxide semiconductor field effect transistors (MOSFETs) for logic functions. The circuit-level description of the ripple borrow subtractor in cMOS logic is given below.

Fig. 4. Ripple borrow subtractor in cMOS logic.

4-BIT RIPPLE BORROW SUBTRACTOR USING TRANSMISSION GATES

The CMOS transmission gate consists of two MOSFETs: one n-channel, responsible for correct transmission of logic low, and one p-channel, responsible for correct transmission of logic high. The circuit-level description of the ripple borrow subtractor in transmission gate logic is given below.

Fig. 3. Ripple borrow subtractor in transmission gate logic.

4-BIT RIPPLE BORROW SUBTRACTOR USING PASS TRANSISTORS

We can view the complementary CMOS gate as switching the output pin to one of power or ground. A slightly more general gate is obtained if we switch the output to one of power, ground, or any of the input signals. In such designs the MOSFET is considered to be a pass transistor. When used as a pass transistor, the device may conduct current in either direction. The circuit-level description of the ripple borrow subtractor in pass transistor logic is given below.

Fig. 4. Ripple borrow subtractor in pass transistor logic.


DESIGN AND LAYOUT ASPECTS

LAYOUT OF RIPPLE BORROW SUBTRACTOR USING cMOS LOGIC



Fig. 5. Layout of ripple borrow subtractor using cMOS logic

LAYOUT OF RIPPLE BORROW SUBTRACTOR USING TRANSMISSION GATE LOGIC



Fig. 6. Layout of ripple borrow subtractor using transmission gate logic

LAYOUT OF RIPPLE BORROW SUBTRACTOR USING PASS TRANSISTOR LOGIC

Fig. 7. Layout of ripple borrow subtractor using pass transistor logic
SIMULATION AND RESULTS


SIMULATION OF RIPPLE BORROW SUBTRACTOR USING cMOS LOGIC


Fig. 8. Simulation of ripple borrow subtractor using cMOS logic

SIMULATION OF RIPPLE BORROW SUBTRACTOR USING TRANSMISSION GATE LOGIC


Fig. 9. Simulation of ripple borrow subtractor using transmission gate logic

SIMULATION OF RIPPLE BORROW SUBTRACTOR USING PASS TRANSISTOR LOGIC



Fig. 10. Simulation of ripple borrow subtractor using pass transistor logic



POWER ANALYSIS

The table below shows the results for the 4-bit ripple borrow subtractor using cMOS circuits, transmission gates and pass transistors, comparing the circuits with regard to power consumption. Fig. 11 represents these results graphically.

Circuit: power consumption
cMOS circuits: 68.356 µW
Transmission gates: 9.225 µW
Pass transistors: 37.515 µW

Fig. 11. Power consumption.

CONCLUSION

In this paper, an attempt has been made to design the 2-input AND, 2-input OR and 2-input XOR gates, which are the basic building blocks for the benchmark circuit, the 4-bit ripple borrow subtractor. The proposed circuits offer improved performance in power dissipation. It can be concluded that, since the power dissipation of the transmission gate circuits is much less than that of the cMOS and pass transistor circuits, the transmission gate implementation proves to be much more efficient. The circuit and its VLSI technology are very useful in applications related to rural development, as the design consumes less power and thus can be used efficiently in various technologies.

REFERENCES:

[1] Nilesh P. Bobade, "Design and Performance of CMOS Circuits in Microwind," IJCA, Jan 2012, Wardha, M.S., India.
[2] S. Govindarajulu, T. Jayachandra Prasad, "Low-Power, High Performance Dual Threshold Voltage CMOS Domino Logic Circuits," published in ICRAES, 8th & 9th Jan 2010, pp. 109-117, KSR College of Engg., Tiruchengode, India.
[3] S. Govindarajulu, T. Jayachandra Prasad, "Considerations of Performance Factors in CMOS Designs," ICED 2008, Dec. 1-3, Penang, Malaysia, IEEE Xplore.
[4] Gary K. Yeap, Practical Low Power Digital VLSI Design.
[5] John P. Uyemura, CMOS Logic Circuit Design.
[6] A. Anand Kumar, Fundamentals of Digital Circuits.
[7] Sung-Mo Kang, Yusuf Leblebici, CMOS Digital Integrated Circuits.
[8] Microwind and Dsch User's Manual, Toulouse, France.
[9] http://www.allaboutcircuits.com
[10] http://www.ptm.asu.edu
[11] http://vides.nanotcad.com/vides/
[12] http://en.wikipedia.org/wiki/Field-effect_transistor
[13] http://en.wikipedia.org/wiki/design-logics(electronics)
[14] http://en.wikipedia.org/wiki/MOSFET
[15] http://en.wikipedia.org/wiki/Transistor



Comparison of Forced Convective Heat Transfer Coefficient between Solid Pin Fin and Perforated Pin Fin

Anusaya Salwe¹, Ashwin U. Bhagat¹, Mohitkumar G. Gabhane¹

¹Department of Mechanical Engineering, Manoharbhai Patel Institute of Engineering and Technology, Shahapur, Rashtrasant Tukdoji Maharaj Nagpur University, Nagpur, Maharashtra, India
Email: mohitgabhane79@gmail.com

ABSTRACT: The rapid growth of high-speed, multi-functional, miniaturized electronics demands increasingly stringent thermal management. The present work investigates the use of perforated pin fins to enhance the rate of heat transfer. In particular, the number of horizontal perforations and the horizontal perforation diameter on each pin fin are studied. Results show that heat transfer with a perforated pin fin is greater than with a solid pin fin. The pressure drop with perforated pins is reduced compared with that of solid fins, and more surface area becomes available, which enhances convective heat transfer.

Keywords: heat transfer, extended surface, forced convection, perforated fin.
1. Introduction
An extended surface (fin) is used in a large number of applications to increase the heat transfer from surfaces. Typically, the fin material has a high thermal conductivity. The fin is exposed to a flowing fluid, which cools or heats it, the high thermal conductivity allowing increased heat to be conducted from the wall through the fin. Fins are used to enhance convective heat transfer in a wide range of engineering applications, and offer a practical means of achieving a large total heat transfer surface area without the use of an excessive amount of primary surface area. Fins are commonly applied for heat management in electrical appliances such as computer power supplies or substation transformers. Other applications include IC engine cooling, such as the fins in a car radiator.
Heat sinks are employed to dissipate the thermal energy generated by electronic components to maintain a stable operating temperature. A compact, efficient and easily fabricated heat sink is required. However, the design of a heat sink is strongly dependent upon the need to balance thermal dissipation and pressure drop across the system such that the overall cost and efficiency may be optimized. A familiar solution is to apply pin fins in a heat sink design.
This work considers the thermal dissipation performance of solid and perforated pin fin heat sinks subject to a horizontal impinging flow. It is found that the heat transfer and pressure coefficients for cylindrical perforated pin fins are higher than those of solid pin fins. Fins are widely used in the trailing edges of gas-turbine blades, in electronic cooling and in the aerospace industry. The relative fin height (H/d) affects the heat transfer of pin fins; other affecting factors include the velocity of the fluid flow, the thermal properties of the fluid and the cross-sectional area of the fluid flow.
2. Experimental set-up
The experimental set-up consists of the following parts:
A. Main duct (cylindrical)
B. Heater unit
C. Base plate
D. Data unit

A. Main duct (cylindrical): A cylindrical channel constructed from galvanized steel of 1 mm thickness, with a diameter of 150 mm and a length of 1200 mm. The perforated pin fin is attached at the middle. The duct is operated in forced-draught mode by a blower of 0.5 HP at 13000 rpm.


B. Heater unit: The heater unit (test section) has a diameter of 160 mm and a width of 20 mm and is wound on the cylindrical fin portion. The heating unit mainly consists of an electrical heater with an output power of 200 W at 220 V and a current of 10 A.

C. Central portion: On the central portion of the cylindrical duct the pin fin is attached, and a band heater is wound around the central portion to heat the pin fin.

D. Data unit: It consists of various indicating devices which display the readings taken by components such as the sensors, voltmeter and manometer. A temperature indicator shows the readings taken by the seven sensors in the range 0 °C to 450 °C; among these, two give the inlet and outlet temperatures of the air, and three give the temperatures at the base, middle and tip of the fin.

One further sensor shows the temperature above the fin, and one gives the reading at the outlet.
The inlet flow rate of air is indicated by a velocity indicator using the manometer.




3. Experimental procedure

1) The blower and heater are started simultaneously.
2) After starting the blower, the pressure difference across the fins is noted using the manometer.
3) A reading of the atmospheric temperature is also taken.
4) The voltage is set to different values such as 90 V, 100 V, 120 V, 130 V and 140 V, and readings are taken for the solid pin fin and the single-hole, double-hole and three-hole pin fins.
5) The voltage, current and temperatures at the different points where thermocouples are attached are noted down.
Readings at the same voltage are similarly observed and noted for the different pin fin sets (i.e. solid, single hole, double hole and three holes).

Fig 1: Pictorial view of the experiment


4. Nomenclature

Q      heat transfer
Qconv  heat transfer due to convection
Qrad   heat transfer due to radiation
h      heat transfer coefficient
As     surface area of fin
Tm     mean temperature
I      current (A)
D      diameter of duct
R      resistance

5. Governing Equations

The convective heat transfer rate from the electrically heated test surface is calculated using

Q_conv = Q_e - Q_cond - Q_rad    (1)

where Q_conv is the heat transfer rate by convection, Q_e is the electrical heat input rate, Q_cond is the heat transfer rate by conduction and Q_rad is the heat transfer rate by radiation. Q_e is calculated using

Q_e = I² × R    (2)

where I is the current flowing through the heater and R is its resistance.

In similar studies, investigators reported that the total heat loss through radiation from a similar test surface would be about 0.5% of the total electrical heat input. The conductive heat losses through the sidewalls can be neglected in comparison to those through the bottom surface of the test section. Using these findings, together with the fact that the walls of the test section are well insulated and the reading of the thermocouple placed at the inlet of the tunnel should be nearly equal to the ambient temperature, one can assume with some confidence that the last two terms of Eq. (1) may be ignored.

The heat transfer from the test section by convection can be expressed as

Q_conv = h_avg × A_s × (T_m1 - T_m2)    (3)

Hence, the average convective heat transfer coefficient h_avg can be deduced using

h_avg = Q_conv / (A_s (T_m1 - T_m2))    (4)

where A_s is the surface area of the fin, T_m1 is the mean temperature over the surface and T_m2 is the temperature outside the fins.


The friction factor, which measures the amount of friction using the pressure drop, is calculated from

f = ΔP / [ (L / D_h) (ρ V² / 2) ]    (5)

where ΔP is the pressure drop, L is the duct length, D_h is the hydraulic diameter, ρ is the air density and V is the flow velocity.




Fig 2: Solid fins; Fig 3: 1-hole fins; Fig 4: 2-hole fins; Fig 5: 3-hole fins

6. Observations

The various observations, such as the heat input Q in W, the mean temperature over the fins T_m1 in °C, the mean outside temperature T_m2 in °C, the temperature difference ΔT in °C, the heat transfer coefficient h in W/mm²·°C and the pressure drop ΔP in mm of water, were made and calculated for the solid pin fin and the 1-hole, 2-hole and 3-hole pin fins.


7. Results
The results show that heat transfer increases with an increasing number of perforations on the fins.

7.1. Pressure drop effect

Fig. 2, Fig. 3, Fig. 4 and Fig. 5 show how the fins are arranged in the circular duct. The friction factor f decreases with an increasing number of perforations, as the perforations decrease the blockage effect. Since the number of perforations on a given pin is restricted, f may be further reduced by increasing the perforation diameter. It is important to note that vertically perforated pins are critical for heat sinks subject to impinging flow. As shown in the figure, pins with horizontal and vertical perforations have a lower f than pins without, and pins with vertical perforations have the lowest f.

7.2. Heat transfer performance

More importantly, thermal dissipation is higher with perforated pin fins than with solid pins, and it is found to improve with a larger number of perforations on each pin fin. However, further increasing the perforation diameter reduces heat transfer from the base to the tip of the fin. This is due to the decrease in the cross-sectional area of the pin available for heat conduction along the pin.




7.3. Heat transfer efficiency

It is found that the perforated pin fins have a higher efficiency than the solid pin fins. The results show that heat transfer increases with the number of perforations: when solid fins are compared with the 3-hole pin fin, it is found that h increases as the number of holes increases from zero to three. Also, the temperature difference decreases as the number of perforations increases, showing that a low temperature difference corresponds to high heat transfer. The efficiency of the perforated pin fins is 15 to 17% higher than that of the solid pin fins.


7.4. Conclusions
In this study, the overall heat transfer and friction factor for a heat exchanger equipped with cylindrical perforated pin fins were investigated experimentally. The effects of the flow and geometrical parameters on the heat transfer and friction characteristics were determined:

a) ΔP across the pin fins is smaller with an increasing number of perforations and larger perforation diameter. In all cases, the perforated pin fin array performs better than the solid pins. Hence, perforated pin fins require less pumping power than solid pins for the same thermal performance.

b) The maximum h is obtained from the pin fin with 3 perforations of 3 mm horizontal perforation diameter; it is approximately 10% higher than that for the solid pins at Re_p = 11×10³. More importantly, the thermal energy is dissipated at a smaller pressure drop.

c) Further increasing the perforation diameter leads to a reduction in thermal dissipation. This is due to the decrease in vertical heat conduction along the perforated pin fins, as well as the reshaping of the wakes behind the pins induced by the perforations.
Graph 1: temperature difference ΔT (°C) versus power input for the solid and 1-hole fins.
Graph 2: heat transfer coefficient h (W/mm²·°C) versus power input (66.43, 85.49, 96.8, 114 and 133.9 W) for the different fins.


REFERENCES:

[1] -Jinn Foo, Shung-Yuh Pui, Yin-Ling Lai, Swee-Boon Chin, SEGi Review, ISSN 1985-5672, Vol. 5, No. 1, July 2012.

[2] Abdullah H. AlEssa, Ayman M. Maqableh and Shatha Ammourah, "Enhancement of natural convection heat transfer from a fin by rectangular perforations with aspect ratio of two," International Journal of Physical Sciences, Vol. 4 (10), pp. 540-547, October 2009.

[3] Raaid R. Jassem, "Effect the form of perforation on the heat transfer in the perforated fins," SAVAP International, ISSN 1985-5672, Vol. 5, No. 1, July 2012, pp. 29-40.

Developments in Wall Climbing Robots: A Review

Raju D. Dethe¹, Dr. S.B. Jaju²

¹Research Scholar (M.Tech), CAD/CAM, G.H. Raisoni College of Engineering, Nagpur
²Professor, G.H. Raisoni College of Engineering, Nagpur

ABSTRACT: The purpose of wall climbing robots is to climb mainly on vertical surfaces such as walls. The robots are required to have high maneuverability and robust, efficient attachment and detachment. Such a robot can automate tasks which are currently done manually, with an extra degree of human safety, in a cost-effective manner. The robot can move in all four directions: forward, backward, left and right. Other locomotion capabilities include linear movement, turning, lateral movement, rotating and rolling. Apart from a reliable attachment principle, the robot should have a low self-weight and a high payload capacity. The design and control of the robot should be such that it can be operated from any place; a wireless communication link is used for a high-performance robotic system. Regarding adhesion to the surface, the robots should be able to produce a secure gripping force, and they should adapt to different surface environments, from steel, glass and ceramic to wood, concrete, etc., with low energy consumption and cost. This paper presents a survey of different climbing robots, proposed and adopted, built on recent technologies to fulfil these objectives.
Keywords: robot, climbing, adhesion, suction, magnetic, electrostatic.
1 INTRODUCTION
Wall climbing robots (WCR) are special mobile robots that can be used in a variety of applications, such as the inspection and maintenance of the surfaces of sea vessels, oil tanks, glass slabs of high-rise buildings, etc. The need to increase operational efficiency and to protect human health and safety in hazardous tasks makes the wall climbing robot a useful device. These systems are mainly adopted in conditions where direct access by a human operator is very expensive, due to a hazardous environment or the need for scaffolding. During navigation, wall climbing robots carry instruments; hence they should have the capability to bear a high payload with a low self-weight. Researchers have developed various types of wall climbing robot models since the very first wall climbing robot, dating back to the 1960s, developed by Nishi and based on a single vacuum suction cup. Adhesion and locomotion are the basic design factors for developing these mobile robots. Based on locomotion, the robots can be differentiated into three types, viz. crawler, wheeled and legged. Although the crawler type is able to move relatively fast, it cannot be applied in rough environments. The legged type, on the other hand, easily copes with obstacles found in the environment, but its speed is generally lower and it requires a complex control system. Wheeled robots can reach relatively high speeds, but they cannot be used on surfaces with larger obstructions. Based on the adhesion method, the robots can be classified into magnetic, vacuum or suction, grasping gripper, electrostatic and biologically inspired robots. Magnetic robots are heavy due to the weight of the magnets and can only be used on ferromagnetic surfaces. Vacuum-based robots are lightweight and easy to control, but they cannot be used on cracked surfaces due to leakage of the compressed air. Biologically inspired robots are still in the development stage, as newer materials are being tested and improved. Electrostatic adhesion technology, which is lightweight and flexible enough to be used on different types of walls, is also still in the development stage.
2 CLIMBING ROBOTS DESIGN CONCEPT AND APPLICATIONS
The paper by Shigeo Hirose and Keisuke Arikawa describes two seemingly opposite design and control concepts based on coupled and decoupled actuation of robotic mechanisms. From the viewpoint of controllability, decoupled actuation is better than coupled actuation [5].

Manuel F. Silva's paper presents a survey of different technologies proposed and adopted for climbing robot adhesion to surfaces, focusing on the new technologies that have been developed recently to fulfill these objectives [15].

The paper by H. X. Zhang presents a novel modular caterpillar named ZC-I, featuring a fast-building mechanical structure and a low-frequency vibrating passive attachment principle [20].


Shanqiang Wu proposes a wireless distributed wall climbing robotic system for reconnaissance purposes [2].

A solution for the inspection of marine vessels is proposed in "Design and Control of a Lightweight Magnetic Climbing Robot for Vessel Inspection" by Markus Eich and Thomas Vogele [4].

A paper by Hao Yang and Rong Liu proposes the vibration suction method (VSM), a new kind of suction strategy for wall climbing robots [6].

Stephen Paul Linder designed a handhold-based low cost robot to climb a near-vertical indoor climbing wall using computer vision [10].

The paper by Jason Gu presents proposed research on a wall climbing robot with permanent magnetic tracks; the mechanical system architecture is described in the paper [11].

The inspection of large concrete walls with an autonomous system able to overcome small obstacles and cracks is described by K. Berns [16].

Gecko, a climbing robot for vertical walls and ceilings, is presented by F. Cepolina [24].

"Climbing Service Robots for Improving Safety" by Bing L. Luk describes how to overcome the traditional manual inspection and maintenance of tall buildings, which normally require scaffolding and gondolas in which human operators need to work in mid-air, life-threatening environments [25].

Houxiang Zhang's paper describes three different kinds of robots for cleaning the curtain walls of high rise buildings [26].

Climbing robots are useful devices that can be adopted in a variety of applications such as Non-Destructive Evaluation (NDE), diagnosis in hazardous environments, welding, construction, cleaning and maintenance of high rise buildings, reconnaissance, and visual inspection of man-made structures. They are also used for inspection and maintenance of ground storage tanks and can be used in any type of surveying process, including inspection of marine vessels to detect damaged areas, cracks and corrosion on large cargo hold tanks and other parts of ships. Small sized wall climbing robots can be used for anti-terror and rescue scout tasks, firefighting, and inspection and maintenance of storage tanks in nuclear power plants, airplanes and petrochemical enterprises.
Applications of some of the wall climbing robots are given in the table below.
SR. NO | AUTHOR | YEAR | APPLICATION
1 | Young Kouk Song, Chang Min Lee | 2008 | Inspection purpose
2 | Love P. Kalra, Weimin Shen, Jason Gu | 2006 | Non-destructive inspection
3 | Shanqiang Wu, Mantian Li, Shu | 2006 | Reconnaissance purpose
4 | Markus Eich and Thomas Vogele | 2011 | Vessel inspection
5 | Shuyan Liu, Xueshan Gao, Kejie Li, Jun Li | 2007 | Anti-terror scout
6 | Juan Carlos Grieco, Manuel Prieto | 1998 | Industrial application
7 | K. Berns, C. Hillenbrand | - | Inspection of concrete walls
8 | F. Cepolina, R. C. Michelini | 2003 | Wall cleaning
9 | Bing L. Luk, Louis K. P. Liu | 2007 | Improve safety in building maintenance
10 | Houxiang Zhang, Daniel Westhoff | - | Glass curtain walls cleaning
Table No. 1
3 PRINCIPLE OF LOCOMOTION
The wall climbing robots are based on the following three types of locomotion:
a) Wheeled,
b) Legged, and
c) Suction cups.


The following robots come under these categories.
Wheeled wall climbing robots
This section describes the hardware platform of a wall climbing robot called LARVA, shown in Fig. 1, and its control method. LARVA contains all the components on board except the power, which is supplied via a tether cable. The total weight of the system is 3.3 kg. Its dimensions are 40.0 cm width, 34.5 cm length and 11.0 cm height. The impellent force generator can evacuate the chamber to 5 kPa. It is same 300M approximately. Finally, it can move on the wall at a maximum speed of 10 cm/s [1].

The mechanical design of the proposed WCR is shown in Fig. 2. The robot consists of an aluminum frame, motors and drive train, and tracked wheels with permanent magnet plates in evenly spaced steel channels [3].

A differential drive mechanism has been selected for this robot, in which the wheels or tracks on each side of the robot are driven by two independent motors, allowing great maneuverability and the ability to rotate the robot on its own axis. The tracks provide a greater surface area for permanent magnets near the contact surface than normal wheels, creating enough attraction force to keep the robot on the wall and enough flexibility to cross over small obstacles like welding seams, resulting in more stable locomotion.
The mechanical design of the City-Climber is divided into three main areas: the adhesion mechanism, the drive system and the transition system. The adhesion mechanism is the most critical of these, as it allows the robot to adhere to the surface on which it climbs. The drive system is designed to transmit power to the four wheels of the robot and to provide maximum traction as it climbs and moves from a vertical wall to the ceiling [22].

Fig. No. 1 WCR LARVA    Fig. No. 2 Magnetic Wheel WCR    Fig. No. 3 The City-Climber WCR

The legged wall climbing robots are described below.

Distributed Inward Gripping (DIG) advances the concept of directional attachment by directing legs on opposite sides of the body to pull tangentially inward toward the body. The shear forces oppose each other rather than the pull of gravity, allowing the robot to climb on a surface of any orientation with respect to gravity, including ceilings [12].

The REST design was focused on the following main specification features [13]:
- Capacity to carry high payloads (up to 100 kg) on vertical walls and ceilings.
- Some degree of adaptation to traverse obstacles and irregularities.
- High safety for industrial environment operation.
- Semiautonomous behavior.



Fig. No. 4 DIGbot WCR Fig. No. 5 Legged WCR
The basic functions of the inspired climbing caterpillar include the following aspects. The climbing caterpillar has to be safely attached to slopes of different materials and has to overcome gravity. A mechanical structure for safe and reliable attachment to the vertical surface is needed. The research is now focusing on the realization of new passive suckers which will save considerable power. Because of the unique vibrating adsorbing principle, the passive suckers can attach not only to glass, but also to a wall with maximum tiles [20].


The following table shows robots categorized on the basis of the method of climbing.
SR. NO | AUTHOR | YEAR | METHOD OF CLIMBING
1 | Young Kouk Song, Chang Min Lee | 2008 | Impeller with suction seal
2 | Love P. Kalra, Weimin Shen, Jason Gu | 2006 | Magnets
3 | Shanqiang Wu, Mantian Li, Shu | 2006 | Distributed wall climbing
4 | Hao Yang and Rong Liu | 2008 | New vibration suction robotic foot
5 | Akio Yamamoto, Takumi Nakashima | 2007 | Electrostatic attraction
6 | Yu Yoshida and Shugen Ma | 2010 | Passive suction cups
7 | L. R. Palmer III, E. D. Lkiller | 2009 | Distributed inward gripping
8 | XiaoQi Chen | 2007 | Bernoulli effect
9 | Sangbae Kim | 2008 | Gecko-inspired adhesion
10 | Philip von Guggenberg | 2012 | Electro adhesion
Table No. 2

4 TECHNOLOGIES FOR ADHERING TO SURFACES
Holding a robot on the wall is the basic concept behind the development of an adhesion principle. There are many factors which affect holding, especially on vertical walls and ceilings: forces, robot movement and mechanical design are such factors.
The suction force based wall climbing robots are described below.




Fig. No. 6 Vacuum Cup WCR    Fig. No. 7 Scout Task WCR

This paper proposes the critical suction method (CSM). In this method the suction force includes two types of forces: the negative suction force generated inside the suction disc, and the thrust force generated by the propeller. While the robot is adsorbing on the wall surface, the two forces push it onto the wall safely and improve its obstacle-overleaping abilities. The robot suction system is mainly composed of a suction cup, a flexible sealed ring and a propeller. Once the propeller spins at full speed, the air vents and a thrust force is produced that pushes the suction cup to the wall. What is more, air enters the suction cup through the flexible sealed ring and makes the cup achieve the negative pressure state, so there is a pressure force for the robot to stick to the wall. By adjusting the gap between the sealed ring and the wall surface, the critical suction is obtained in the robot suction system. It also meets the demand that the robot can stay on the wall and move smoothly [9].

Fig. 6 shows a materials handling application where a vacuum cup, called a suction cup, is used to establish the force capability to lift a flat sheet. The cup is typically made of a flexible material such as rubber so that a seal can be made where its lip contacts the surface of the flat sheet. A vacuum pump is turned on to remove air from the cavity between the inside of the cup and the top surface of the flat sheet. As the pressure in the cavity falls below atmospheric pressure, the atmospheric pressure acting on the bottom of the flat sheet pushes the flat sheet up against the lip of the cup. This action results in vacuum pressure in the cavity between the cup and the flat sheet that causes an upward force to be exerted on the flat sheet [23].
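The relation above reduces to a simple product: the net holding force is the pressure difference across the cup times the effective lip area. Below is a minimal Python sketch; the cup diameter is an assumed illustrative value, while the 5 kPa cavity pressure echoes the LARVA chamber figure quoted earlier.

```python
import math

def suction_cup_force(diameter_m: float, cavity_pressure_pa: float,
                      atmospheric_pa: float = 101_325.0) -> float:
    """Net holding force = (P_atm - P_cavity) * effective cup area."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return (atmospheric_pa - cavity_pressure_pa) * area

# Assumed 10 cm cup evacuated to 5 kPa (illustrative values only):
print(f"{suction_cup_force(0.10, 5_000.0):.0f} N")  # ~757 N
```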

The requirement of the robot is to be self-contained, i.e. it should be able to operate throughout its task depending entirely upon the on-board batteries. This demands an adhesion mechanism that does not require any external power. A permanent magnet makes a great candidate for such a requirement. By carefully selecting the size of the magnets and by introducing an appropriate air gap between the magnet and the wall surface, we can obtain a very efficient adhesion mechanism, unlike alternatives such as vacuum suction cups which need a continuous supply of negative pressure to stick [3].
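For a rough feel of the forces involved, the Maxwell pull formula F = B²A/(2μ₀) is a standard first-order estimate of the pull of a magnet face against thick steel; it is not taken from the cited paper, and the flux density and pole area below are assumed illustrative values.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def magnetic_pull(b_tesla: float, area_m2: float) -> float:
    """First-order Maxwell pull of a magnet face on a thick steel wall."""
    return b_tesla ** 2 * area_m2 / (2 * MU0)

# Assumed values: 0.5 T at the pole face over a 4 cm^2 area:
print(f"{magnetic_pull(0.5, 4e-4):.0f} N")  # ~40 N
```

The strong dependence on B explains why even a small air gap, which sharply reduces the flux density at the wall, dominates the design trade-off mentioned above.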
The previous adhesion techniques make the robot suitable for moving on flat walls and ceilings. However, it is difficult for them to move on irregular surfaces and surfaces like wire meshes. In order to overcome this difficulty, some robots climb through man-made structures or through natural environments by gripping themselves to the surface over which they are moving. These robots typically exhibit grippers [17].




Fig. No. 8 Grasping WCR
A prototype wall climbing robot was designed and fabricated using flexible electrode panels. The robot was designed to utilize the inchworm walking mechanism. Two square frames made of aluminum were connected by a linear guide, and their relative position was controlled by two RC servo motors. Electrode panels were redesigned to fit the frame design. On each square frame, the two electrode panels measure 130 mm in width and 75 mm in height. Each panel weighs 12 g and the total weight of the robot is 327 g [7].
Geckos are renowned for their exceptional ability to stick to and run on any vertical and inverted surface. However, gecko toes are not sticky in the usual way like duct tape or post-it notes. Instead, they can detach from the surface quickly and remain quite clean around everyday contaminants even without grooming. The two front feet of a tokay gecko can withstand 20.1 N of force parallel to the surface with 227 mm² of pad area, a force as much as 40 times the gecko's weight. Scientists have been investigating the secret of this extraordinary adhesion ever since the 19th century, and at least seven possible mechanisms for gecko adhesion have been discussed over the past 175 years. There have been hypotheses of glue, friction, suction, electrostatics, micro-interlocking and intermolecular forces. Sticky secretions were ruled out first, early in the study of gecko adhesion, since geckos lack glandular tissue on their toes [19].


Fig. No. 9 Stickybot WCR
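As a quick back-of-the-envelope check of the figures quoted above (20.1 N over 227 mm², roughly 40 times body weight), the implied adhesion stress and body mass follow directly; a minimal Python sketch:

```python
# Figures quoted above for the tokay gecko's two front feet.
force_n = 20.1          # shear force sustained
pad_area_mm2 = 227.0    # total pad area

print(f"adhesion stress ~ {force_n / pad_area_mm2:.3f} N/mm^2")  # ~0.089
# If 20.1 N is ~40x body weight, the implied body mass is roughly:
print(f"implied mass ~ {force_n / (40 * 9.81) * 1000:.0f} g")    # ~51 g
```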


5 NEW ADHESION PRINCIPLES
Climbing robots based on new principles of adhesion: an overview
Existing wall climbing robots are often limited to selected surfaces. Magnetic adhesion only works on ferromagnetic metals, and suction pads may encounter problems on surfaces with high permeability; a crack in a wall would cause unreliable functioning of the attachment mechanism and cause the robot to fall off the wall, so adaptability to a variety of wall materials and surface conditions is desirable. To this end, the University of Canterbury has embarked on a research program to develop a novel wall climbing robot which offers reliable adhesion, maneuverability, a high payload-to-weight ratio, and adaptability on a variety of wall materials and surface conditions. The research has led to the development of a novel wall climbing robot based on the Bernoulli effect [14].



Fig. No. 10 Bernoulli Pad based WCR


The proposed robot moves by a crawler-driven mechanism and attaches by suction cups. The robot has one motor, which drives the rear pulleys. Several suction cups are installed on the outside surface of the belt at equal intervals, as shown in Fig. 1, and the cups rotate together with the belt.
The moving process of the robot can be described as follows. Firstly, the robot is attached to a wall: the pushing of the crawler belts makes the suction cups contact and attach to the wall at the front pulleys. Then the guide shafts slide into a guide rail, as shown in Fig. 2; when a suction cup reaches the rear pulley, it is detached from the wall by the rotation of the belts. A sequence of this process makes the robot move on the wall while keeping adhesion [8].



Fig. No. 11 Passive Suction Cup WCR

To develop a robot capable of climbing a wide variety of materials, design principles adapted from geckos have been taken up. The result is Stickybot (Fig. 9), a robot that climbs glass and other smooth surfaces using directional adhesive pads on its toes.
Geckos are arguably nature's most agile smooth-surface climbers. They can run at over 1 m/s, in any direction, over wet and dry surfaces of varying roughness and of almost any material, with only a few exceptions like graphite and Teflon. The gecko's prowess is due to a combination of design features that work together to permit rapid, smooth locomotion. Foremost among these features is hierarchical compliance, which helps the gecko conform to rough and undulating surfaces over multiple length scales. The result of this conformability is that the gecko achieves intimate contact with surfaces, so that van der Waals forces produce sufficient adhesion for climbing. The gecko adhesion is also directional. This characteristic allows the gecko to adhere with negligible preload in the normal direction and to detach with very little pull-off force, an effect that is enhanced by peeling the toes in digital hyperextension [18].
Electro adhesion exploits the electrostatic force between the material that serves as a substrate and the electro adhesive pad. The pad is generally made up of polymer-coated electrodes or simply of conductive materials. When charges are induced on the electrodes, the field between the electrodes polarizes the dielectric substrate, causing electrostatic adhesion. It is essential to maintain the electro adhesive pad and the surface in close contact. Since the electrostatic forces decrease dramatically with the square of the distance, the basic idea is to create a structure with two electrodes whose shape, size and separation ensure a high electrostatic field and generate high adhesion forces on different types of materials such as wood, glass, paper, ceramics and concrete [7].


Fig. No. 12 WCR for conductive walls
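As a rough idealization, the clamping pressure of such a pad can be estimated with the parallel-plate relation P = ½ ε₀ εᵣ (V/d)². This is a textbook approximation rather than the cited authors' model, and the voltage, gap and relative permittivity in the sketch below are assumed values.

```python
# Idealized parallel-plate estimate of electrostatic clamping pressure.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electroadhesion_pressure(voltage_v: float, gap_m: float,
                             eps_r: float = 3.0) -> float:
    """P = 0.5 * eps0 * eps_r * (V/d)^2, in Pa."""
    e_field = voltage_v / gap_m
    return 0.5 * EPS0 * eps_r * e_field ** 2

# Assumed: 2 kV across a 50 um dielectric gap:
print(f"{electroadhesion_pressure(2000.0, 50e-6):.0f} Pa")  # ~21 kPa
```

The inverse-square dependence on the gap d is exactly why close contact between pad and surface, stressed in the paragraph above, is essential.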


SRI International is introducing wall climbing robot prototypes for surveillance, inspection and sensor placement applications. Ideal for remote surveillance or inspection of concrete pillars or other structures, this robot uses SRI's patented electro adhesion technology to enable wall climbing. It can also be used to carry payloads such as cameras, wireless network nodes, and other sensors [27].

Fig. No. 13 SRI's WCR















6 LIMITATIONS OF WCR
Some of the limitations of different wall climbing robots are given in the following tabular form.
SR. NO | AUTHOR | YEAR | OBJECTIVE OF STUDY | OUTCOME | LIMITATION
1 | Love P. Kalra, Weimin Shen, Jason Gu | 2006 | A wireless wall climber | Used magnets for adhesion | Limited to ferrous walls and short battery life
2 | Shanqiang Wu, Mantian Li, Shu | 2006 | Wireless operation | Distributed wall climbing | Mother and child two-robot system
3 | Markus Eich and Thomas Vogele | 2006 | Light weight robot | Used LED based sensor | Crawler failed if another bright light spot was found nearby
4 | Akio Yamamoto, Takumi Nakashima | 2007 | To realize electrostatic adhesion | Improvement of speed | Very low speed
5 | Yu Yoshida and Shugen Ma | 2010 | Passive suction cup based | Prototype fails due to larger power requirements | Mechanism was to be improved
6 | Stephen Paul Linder, Edward Wei | 2005 | Balancing of hands and legs | Computer vision reliably locates itself | Less flexibility
7 | L. R. Palmer III, E. D. Lkiller | 2009 | Design hexapod for advanced maneuvering | Leg motion and body balancing | Gripping limited to tangential force
8 | Juan Carlos Grieco | 1998 | High payload carrying | Complexity of design | High self weight
9 | Wikipedia | - | Study of bio-inspired robots | Climbs smooth walls | Cost / less research on material
10 | Jizhong Xiao and Ali Sadegh | - | Modular climbing caterpillar | A highly integrated robotic system | Manufacturing complexity
Table No. 3
7 CONCLUSIONS
During the last two decades, the interest in climbing robotic systems has grown steadily. Their main intended applications range from cleaning to inspection of difficult-to-reach constructions. This paper presented a survey of different technologies proposed and adopted for climbing robot adhesion to surfaces, focusing on the new technologies that are presently being developed to fulfill these objectives. A lot of improvement is expected in the future design of wall climbing robots depending upon their utility. This paper gives a short review of the existing wall climbing robots.

REFERENCES:
[1] Young Kouk Song, Chang Min Lee, Ig Mo Koo, Duc Trong Tran, Hyungpil Moon and Hyouk Ryeol Choi, "Development of Wall Climbing Robotic System for Inspection Purpose", IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008, pp. 1990-1995.
[2] Shanqiang Wu, Mantian Li, Shu Xiao and Yang Li, "A Wireless Distributed Wall Climbing Robotic System for Reconnaissance Purpose", IEEE International Conference on Mechatronics and Automation, 2006, pp. 1308-1312.
[3] Love P. Kalra, Weimin Shen and Jason Gu, "A Wall Climbing Robotic System for Non Destructive Inspection of Above Ground Tanks", IEEE CCECE/CCGEI, Ottawa, May 2006, pp. 402-405.
[4] Markus Eich and Thomas Vogele, "Design and Control of a Lightweight Magnetic Climbing Robot for Vessel Inspection", IEEE 19th Mediterranean Conference on Control and Automation, Aquis Corfu Holiday Palace, Corfu, Greece, June 20-23, 2011, pp. 1200-1205.
[5] Shigeo Hirose and Keisuke Arikawa, "Coupled and Decoupled Actuation of Robotic Mechanisms", IEEE International Conference on Robotics & Automation, San Francisco, CA, April 2000, pp. 33-39.
[6] Hao Yang, Rong Liu, Qingfeng Hong and Na Shun Bu He, "A Miniature Multi-Joint Wall-Climbing Robot Based on New Vibration Suction Robotic Foot", IEEE International Conference on Automation and Logistics, Qingdao, China, September 2008, pp. 1160-1165.
[7] Akio Yamamoto, Takumi Nakashima and Toshiro Higuchi, "Wall Climbing Mechanisms Using Electrostatic Attraction Generated by Flexible Electrodes", IEEE, 2007, pp. 389-394.
[8] Yu Yoshida and Shugen Ma, "Design of a Wall Climbing Robot with Passive Suction Cups", IEEE International Conference on Robotics and Biomimetics, December 14-18, 2010, Tianjin, China, pp. 1513-1870.
[9] Shuyan Liu, Xueshan Gao, Kejie Li, Jun Li and Xingguang Duan, "A Small-Sized Wall-Climbing Robot for Anti-Terror Scout", IEEE International Conference on Robotics and Biomimetics, December 15-18, 2007, Sanya, China, pp. 1866-1870.
[10] Stephen Paul Linder, Edward Wei and Alexander Clay, "Robotic Rock Climbing Using Computer Vision and Force Feedback", IEEE International Conference on Robotics and Automation, Barcelona, Spain, April 2005, pp. 4685-4690.
[11] Weimin Shen, Jason Gu and Yanjun Shen, "Proposed Wall Climbing Robot with Permanent Magnetic Tracks for Inspecting Oil Tanks", IEEE International Conference on Mechatronics & Automation, Niagara Falls, Canada, July 2005, pp. 2072-2077.
[12] L. R. Palmer III, E. D. Lkiller and R. D. Quinn, "Design of a Wall-Climbing Hexapod for Advanced Maneuvers", IEEE/RSJ International Conference on Intelligent Robots and Systems, October 11-15, 2009, St. Louis, USA, pp. 625-630.
[13] Juan Carlos Grieco, Manuel Prieto, Manuel Armada and Pablo Gonzalez de Santos, "A Six Legged Climbing Robot for High Payloads", IEEE International Conference on Control Applications, Trieste, Italy, 1-4 September 1998, pp. 446-450.
[14] XiaoQi Chen, Matthias Wager, Mostafa Nayyerloo, Wenhui Wang and J. Geoffrey Chase, "A Novel Wall Climbing Robot Based on Bernoulli Effect".
[15] Manuel F. Silva and J. A. Tenreiro Machado, "New Technologies for Climbing Robots Adhesion to Surfaces".
[16] K. Berns and C. Hillenbrand, Robotics Research Lab, Department of Computer Science, Technical University of Kaiserslautern, "A Climbing Robot Based on Under Pressure Adhesion for the Inspection of Concrete Walls".
[17] K. Berns, C. Hillenbrand and T. Luksch, University of Kaiserslautern, 67653 Kaiserslautern, Germany, "Climbing Robots for Commercial Applications - A Survey".
[18] Sangbae Kim, Matthew Spenko, Salomon Trujillo, Barrett Heyneman, Daniel Santos and Mark R. Cutkosky, "Smooth Vertical Surface Climbing with Directional Adhesion", IEEE Transactions, Vol. 24, No. 1, February 2008, pp. 1-10.
[19] "Synthetic setae", from Wikipedia, the free encyclopedia.
[20] H. X. Zhang, J. González-Gómez, S. Y. Chen, W. Wang, R. Liu, D. Li and J. W. Zhang, "A Novel Modular Climbing Caterpillar Using Low-frequency Vibrating Passive Suckers".
[21] Jizhong Xiao and Ali Sadegh, The City College, City University of New York, USA, "City-Climber: A New Generation of Wall-Climbing Robots", in Climbing & Walking Robots, Towards New Applications, book edited by Houxiang Zhang.
[22] William Morris (mentor: Jizhong Xiao, Department of Electrical Engineering), "City-Climber: Development of a Novel Wall-Climbing Robot".
[23] Surachai Panich, "Development of a Wall Climbing Robot", Srinakharinwirot University, 114 Sukhumvit 23, Bangkok 10110, Thailand.
[24] F. Cepolina, R. C. Michelini, R. P. Razzoli and M. Zoppi, PMAR Lab, Dept. of Mechanics and Machine Design, University of Genova, Via All'Opera Pia 15/A, 16145 Genova, "Gecko, a Climbing Robot for Walls Cleaning", 1st Int. Workshop on Advances in Service Robotics ASER03, March 13-15, Bardolino, Italy, 2003.
[25] Bing L. Luk, Louis K. P. Liu and Arthur A. Collie, "Climbing Service Robots for Improving Safety in Building Maintenance Industry", Bioinspiration and Robotics: Walking and Climbing Robots, 2007, pp. 127-146.
[26] Houxiang Zhang, Daniel Westhoff, Jianwei Zhang and Guanghua Zong, "Service Robotic Systems for Glass Curtain Walls Cleaning on the High-Rise Buildings", Seminar on Robotics in New Markets and Applications.
[27] Philip von Guggenberg, Director Business Development, SRI International, Silicon Valley.





Comparative Analysis of Improved Domino Logic Based Techniques for VLSI Circuits
Shilpa Kamde1, Dr. Saanjay Badjate2, Pratik Hajare1

1Research Scholar
Email: sshilpa_11@ymail.com
ABSTRACT - In modern VLSI design, the domino logic based design technique is widely used, and power consumption increasingly constrains the speed of the circuit. Dynamic (domino) logic circuits are often favored in high performance designs because of their high speed and low area advantages. But in integrated circuits the power consumed by clocking gradually takes a dominant part, and therefore the research work in this paper is mainly focused on studying the comparative performance of various domino logic based techniques proposed in the last decade, viz. the basic domino logic technique, domino with keeper, high speed leakage tolerant domino, low swing domino logic, domino logic with variable threshold voltage keeper, and sleep switch dual threshold voltage domino.
This work evaluates the performance of the different domino techniques in terms of delay, power and their product on the BSIM4 model using the Agilent Advanced Design System tool. The domino techniques compared in this work were found to have optimized area, power and delay, and hence a better power delay product (PDP), as compared with standard domino.
The main focus of this research work is to find the best possible trade-off that optimizes multiple goals, viz. area, power, speed and noise immunity, at the same time, to meet the multi-objective goal of our future research work.

Keywords - Domino logic circuit, Domino logic with keeper, High speed and leakage tolerant domino, Low swing domino, Domino logic with variable voltage threshold keeper, Sleep switch dual threshold voltage domino.
INTRODUCTION
Domino logic circuit techniques are extensively applied in high-performance microprocessors due to the superior speed and area
characteristics of dynamic CMOS circuits as compared to static CMOS circuits. High-speed operation of domino logic circuits is
primarily due to the lower noise margins of domino circuits as compared to static gates [1,2]. Domino logic offers speed and area
advantages over conventional static CMOS and is especially useful for implementing complex logic gates with large fan-outs. A
limitation of the domino technique is that only non-inverting gates are possible. This limits the logic flexibility and implies that logic
inversion has to be performed at the inputs or outputs of blocks of domino logic [2]. In this paper, we explore various domino logic based techniques for combinational circuit design for high fan-in and high speed applications in deep submicron VLSI technology.
DOMINO LOGIC TECHNIQUES
A. Basic Domino Logic

Domino CMOS was proposed in 1982 by Krambeck. It has the same structure as dynamic logic gates, but adds a static buffering CMOS inverter to the output. The introduction of the static inverter has the additional advantage of a low-impedance output, which increases noise immunity and drives the fan-out of the gate. The buffer furthermore reduces the capacitance of the dynamic output node by separating internal and load capacitance, and the buffer itself can be optimized to drive the fan-out in an optimal way for high speed. This logic is the most common form of dynamic gate, achieving a 20% to 50% performance increase over static logic [3].
The basic domino logic family evolved from PMOS and NMOS transistors and therefore retains two phases of operation. A single clock is used for both the precharge and evaluation phases. This circuitry incorporates a static CMOS buffer into each logic gate as shown in Figure 1. During the precharge phase the clock input is low (CLK=0), the PMOS transistor is ON and the NMOS transistor is OFF; node Vo is charged up to Vdd and the output from the inverter is close to the 0 voltage level. In this phase there is no path from the pull-down network to Vo [9].

Next, during the evaluation phase, the NMOS transistor is ON, creating a path from node Vo through the pull-down network to ground. Node Vo is discharged and the inverter makes the output one. It should be noted that in domino logic the transition of node Y is always from low to high, and it ripples through the logic from the primary inputs to the primary outputs.


Fig. 1 Basic Domino logic circuit
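The two-phase operation just described can be illustrated with a minimal behavioral sketch in Python (not a circuit-level model): precharge when CLK=0, conditional discharge when CLK=1, and a static output inverter, showing the monotonic low-to-high output transition.

```python
def domino_or(clk: int, inputs: list[int], dyn_node: int) -> tuple[int, int]:
    """One step of a domino OR gate: returns (dynamic node Vo, output Z)."""
    if clk == 0:               # precharge: pull-up PMOS on, Vo charged high
        dyn_node = 1
    elif any(inputs):          # evaluate: PDN conducts, Vo discharges
        dyn_node = 0
    # otherwise Vo keeps its value: once discharged it cannot recharge
    # until the next precharge phase (hence the monotonic output)
    return dyn_node, 1 - dyn_node  # static output inverter

vo = 1
for clk, ins in [(0, [0, 0]), (1, [0, 1]), (0, [0, 0]), (1, [0, 0])]:
    vo, z = domino_or(clk, ins, vo)
    print(f"CLK={clk} inputs={ins} Vo={vo} Z={z}")
```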
B. Domino Logic Circuit with keeper
The keeper technique improves the noise immunity and avoids the charge sharing problem of the domino logic circuit. The keeper is a weak pMOS transistor that holds the output at the correct level when it would otherwise float. When the dynamic node is high, the output is low and the keeper is ON to prevent the dynamic node from floating (Figure 2). When the dynamic node (Y) falls, the keeper initially opposes the transition, so it must be much weaker than the pull-down network. Eventually Z rises, turning the keeper OFF and avoiding static power dissipation.
The keeper must be strong enough to compensate for any leakage current drawn when the output is floating and the pull-down stack is OFF. Increasing the width of the keeper transistor increases delay, so keeper transistors are on the order of 1/10 the strength of the pull-down stack [5].

Fig. 2 Domino logic circuit with keeper
C. High Speed Leakage Tolerant Domino
The HSLTD circuit scheme is shown in Figure 3. Transistor M3 is used as a stacking transistor. Due to the voltage drop across M3, the gate-to-source voltage of the NMOS transistors in the PDN (pull-down network) decreases. M7 causes the stacking effect and makes the gate-to-source voltage of M6 smaller (M6 less conducting). Hence the circuit becomes more noise robust and consumes less leakage power, but performance degrades because of the stacking effect in the mirror current path. This can be compensated by widening M2 (high W/L) to make it more conducting [6]. If there is noise at the inputs at the onset of evaluation, the dynamic node can be discharged, resulting in wrong evaluation.


Fig. 3 High Speed Leakage Tolerant Domino Circuit
D. Low Swing Domino Logic
The low swing domino technique is applied to reduce dynamic switching power. Two techniques fall under the low swing domino circuit. The first technique is low swing domino with a fully driven keeper (LSDFDK), in which the output voltage swings between ground and VDD-Vtn. The second is the low swing domino circuit with a weakly driven keeper (LSDWDK).

Fig. 4.a: LSDFDK Fig. 4.b: LSDWDK
Fig. 4 Low Swing Domino Logic
These techniques reduce the voltage swing at the output node by using an NMOS transistor as the pull-up transistor. The first technique improves the delay and power while maintaining robustness against noise. The second technique reduces the contention current by reducing the gate voltage swing of the keeper transistor. LSDWDK generates two different voltage swings: the output voltage swings between ground and VDD-Vtn, and the gate voltage swings between |Vtp| and VDD [2].


E. Domino Logic with Variable Voltage Threshold Keeper (DVTVK)
The DVTVK circuit operates in the following manner. When the clock is low, the pullup transistor is on and the
dynamic node is charged to VDD1. The substrate of the keeper is charged to VDD2 (VDD2 > VDD1) by the body bias generator,
increasing the keeper threshold voltage. The value of the high threshold voltage (high-Vt) of the keeper is determined by the
reverse body bias voltage (VDD2 - VDD1) applied to the source-to-substrate p-n junction of the keeper. The current sourced by
the high-Vt keeper is reduced, lowering the contention current when the evaluation phase begins. A reduction in the current drive
of the keeper does not degrade the noise immunity during precharge as the dynamic node voltage is maintained during this phase
by the pullup transistor rather than by the keeper.

When the clock goes high (the evaluation phase), the pullup transistor is cut-off and only the high-Vt keeper current contends
with the current from the evaluation path transistor(s). Provided that the appropriate input combination that discharges the
dynamic node is applied in the evaluation phase, the contention current due to the high-Vt keeper is significantly reduced as
compared to standard domino logic. After a delay determined by the worst case evaluation delay of the domino gate, the body
bias voltage of the keeper is reduced to VDD1, zero biasing the source-to-substrate p-n junction of the keeper. The threshold
voltage of the keeper is lowered to the zero body bias level, thereby increasing the keeper current. The DVTVK keeper has the
same threshold voltage of a standard domino (SD) keeper, offering the same noise immunity during the remaining portion of the
evaluation phase.

Fig. 5 Domino Logic with variable Voltage Threshold Keeper
The threshold voltage of the keeper transistor is dynamically modified during circuit operation to reduce contention current without
sacrificing noise immunity.
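The threshold shift exploited here follows the standard body-effect relation Vt = Vt0 + γ(√(2φF + V_SB) − √(2φF)). The sketch below is a minimal Python illustration with assumed process parameters (Vt0, γ, φF) and an assumed 0.4 V reverse body bias; none of these values are taken from the paper.

```python
import math

def vt_with_body_bias(vt0: float, gamma: float, phi_f: float,
                      v_sb: float) -> float:
    """|Vt| under a source-to-body reverse bias v_sb (magnitudes used)."""
    return vt0 + gamma * (math.sqrt(2 * phi_f + v_sb) - math.sqrt(2 * phi_f))

# Assumed parameters: |Vt0| = 0.35 V, gamma = 0.4 V^0.5, phi_F = 0.3 V.
print(f"zero body bias   : {vt_with_body_bias(0.35, 0.4, 0.3, 0.0):.3f} V")
print(f"0.4 V reverse RBB: {vt_with_body_bias(0.35, 0.4, 0.3, 0.4):.3f} V")
```

The reverse bias (VDD2 - VDD1) thus raises the keeper's threshold magnitude during early evaluation, cutting contention current, exactly as the paragraph above describes.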
F. Sleep Switch Dual Threshold Voltage Domino Logic
The operation of this transistor is controlled by a separate sleep signal. During the active mode of operation, the sleep signal is set low,
the sleep switch is cut-off, and the proposed dual-Vt circuit operates as a standard dual-Vt domino circuit. During the standby mode of
operation, the clock signal is maintained high, turning off the high-Vt pull-up transistor of each domino gate. The sleep signal
transitions high, turning on the sleep switch. The dynamic node of the domino gate is discharged through the sleep switch, thereby
turning off the high-Vt NMOS transistor within the output inverter. The output transitions high, cutting off the high-Vt keeper.

Fig. 6 Sleep Switch Dual Threshold Voltage Domino Logic
After a sleep switch dual-Vt domino gate is forced to evaluate, the following gates (fed by the non-inverting signals) also evaluate in
a domino fashion. After the node voltages settle to a steady state, all of the high-Vt transistors in the circuit are strongly cut-off,
significantly reducing the subthreshold leakage current.
The sleep switch circuit technique exploits the scalability of dual-Vt transistors to reduce the subthreshold leakage current by strongly cutting off all of the high-Vt transistors.
POWER DISSIPATION
The power consumed by a CMOS circuit is classified into two types:
- Static power dissipation
- Dynamic power dissipation

i. Static Power Dissipation: This is the power dissipation due to leakage currents which flow through a transistor when no transitions occur and the transistor is in a steady state. Static power dissipation in a CMOS inverter is negligible [6].
ii. Dynamic Power Dissipation: The PMOS and NMOS transistors are momentarily ON simultaneously while the inputs switch; during the low-to-high charging and high-to-low discharging transitions the pMOS and nMOS turn on respectively. During this time a current flows from Vdd to GND (a short path) and dynamic power is produced. The dynamic power dissipation is proportional to the square of the supply voltage [7-8].
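The standard switching-power relation behind this statement is P_dyn = α·C_L·V_DD²·f. Below is a minimal Python sketch; the activity factor, load capacitance, supply voltage and clock frequency are assumed illustrative values, not taken from the paper's simulations.

```python
def dynamic_power(alpha: float, c_load_f: float,
                  vdd_v: float, f_hz: float) -> float:
    """Switching power P_dyn = alpha * C_L * Vdd^2 * f, in watts."""
    return alpha * c_load_f * vdd_v ** 2 * f_hz

# Assumed: 10 fF node, 1.2 V supply, 1 GHz clock, 10% activity:
print(f"{dynamic_power(0.1, 10e-15, 1.2, 1e9) * 1e6:.2f} uW")  # 1.44 uW
```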

SIMULATION AND RESULT
In this work, OR and AND logic gates were used for implementation of the six techniques. The power consumption (Pavg), propagation delay (Tpd) and power delay product (PDP) are used to compare these techniques. The circuits implemented are 4-input and 6-input OR gates and 4-input and 6-input AND gates. These design styles are compared by performing detailed transistor-level simulations using the Advanced Design System (ADS). The results for all techniques are given below. Table 1 shows the comparison of all the techniques for the four input OR gate. Table 2 shows the comparison of all six techniques with the standard domino circuit for the six input OR gate. Table 3 shows the comparison of all six techniques for the four input AND gate. Table 4 shows the comparison of all six techniques with the standard domino circuit for the six input AND gate.




From the results, it can be observed that the domino logic techniques, viz. domino logic circuit with keeper, high speed leakage tolerant domino, low swing domino, domino logic with variable voltage threshold keeper and sleep switch dual threshold voltage domino, provide lower values of power dissipation, propagation delay and PDP when compared to the standard domino logic structure. The propagation delay (Tpd, Sec), power consumption (Pavg, Watt) and power delay product (PDP, Watt-Sec) were calculated and plotted in the form of graphs.
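The PDP column in each table is simply the product of the measured delay and average power; a minimal Python check using the standard-domino row of Table 1:

```python
# Standard-domino row of Table 1.
tpd_s = 3.77e-08    # propagation delay, Sec
pavg_w = 3.77e-06   # average power, Watt

pdp = tpd_s * pavg_w  # power delay product, Watt-Sec (i.e. joules)
print(f"PDP = {pdp:.2e} Watt-Sec")  # 1.42e-13, matching Table 1
```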
Table.1: Comparison for four input OR gate
Technique | Tpd (Sec) | Pavg (Watt) | PDP (Watt-Sec)
Domino | 3.77E-08 | 3.77E-06 | 1.42E-13
Keeper | 3.76E-08 | 4.22E-06 | 1.59E-13
HSLDT | 3.78E-08 | 2.19E-06 | 8.21E-14
LSDFDK | 3.77E-08 | 5.85E-06 | 2.20E-13
DVTVK | 3.77E-08 | 2.72E-05 | 1.02E-12
SLS | 3.77E-08 | 4.69E-05 | 1.77E-12

Table.2: Comparison for six input OR gate
Technique | Tpd (Sec) | Pavg (Watt) | PDP (Watt-Sec)
Domino | 1.05E-07 | 4.45E-06 | 4.67E-13
Keeper | 1.03E-07 | 6.46E-05 | 6.68E-12
HSLDT | 1.06E-07 | 6.67E-06 | 7.07E-13
LSDFDK | 1.05E-07 | 7.55E-06 | 7.91E-13
DVTVK | 3.61E-08 | 2.31E-04 | 8.37E-12
SLS | 1.06E-07 | 6.70E-05 | 7.07E-12

Table.3: Comparison for four input AND gate
Technique | Tpd (Sec) | Pavg (Watt) | PDP (Watt-Sec)
Domino | 5.09E-09 | 1.057E-06 | 5.38E-15
Keeper | 1.01E-08 | 8.11E-07 | 8.19E-15
HSLDT | 5.00E-09 | 5.73E-07 | 2.87E-15
LSDFDK | 1.01E-08 | 6.10E-07 | 6.16E-15
DVTVK | 5.00E-09 | 4.68E-05 | 2.34E-13
SLS | 1.02E-08 | 3.95E-05 | 4.01E-13

Table.4: Comparison for six input AND gate
Technique | Tpd (Sec) | Pavg (Watt) | PDP (Watt-Sec)
Domino | 5.09E-09 | 1.48E-06 | 7.52E-15
Keeper | 1.01E-08 | 8.09E-07 | 8.17E-15
HSLDT | 1.34E-09 | 2.99E-06 | 4.00E-13
LSDFDK | 5.12E-09 | 3.01E-07 | 1.54E-15
DVTVK | 1.00E-08 | 5.19E-05 | 5.19E-13
SLS | 1.02E-08 | 3.64E-05 | 3.71E-13


Chart.1: Comparison for four input OR gate
Chart.2: Comparison for six input OR gate
Chart.3: Comparison for four input AND gate
Chart.4: Comparison for six input AND gate

CONCLUSION
In this work, an attempt has been made to simulate OR and AND gates with four and six inputs by using six domino based techniques, including basic domino (standard domino).

The comparative analysis from Table 1 for the 4-input OR gate showed that HSLDT had less power, less Tpd and a lower PDP compared to the other domino techniques.
The comparative analysis of Table 2 showed that, for the maximum number (six) of inputs for the OR gate, the basic domino logic technique is better because it had lower power consumption and PDP, but DVTVK had less Tpd.
Similarly, the comparison for the four input AND gate in Table 3 showed that HSLDT had less power, less Tpd and a lower PDP compared to the other techniques. The table also showed that the propagation delay of DVTVK was equal to that of HSLDT.
The comparative analysis of Table 4 showed that, for the maximum number (six) of inputs for the AND gate, the LSDFDK technique is better because it had lower power consumption and a lower PDP, but HSLDT had less Tpd.

REFERENCES:

[1] V. Kursun and E. G. Friedman, "Variable Threshold Voltage Keeper for Contention Reduction in Dynamic Circuits", Proceedings of the IEEE International
[2] Volkan Kursun and Eby G. Friedman, "Speed and Noise Immunity Enhanced Low Power Dynamic Circuits", Department of Electrical and Computer Engineering, University of Rochester, Rochester, New York, 2005.
[3] Jaume Segura and Charles F. Hawkins, "CMOS Electronics: How It Works, How It Fails", IEEE Press, John Wiley & Sons, Inc. Publications.
[4] Farshad Moradi, Dag T. Wisland, Hamid Mahmoodi and Tuan Cao, "High Speed and Leakage Tolerant Domino Circuits for High Fan-in Applications in 70 nm CMOS Technology", IEEE Proceedings of the 7th International Caribbean Conference on Devices, Circuits and Systems, Mexico, Apr. 28-30, 2008.
[5] Neil H. E. Weste, David Harris and Ayan Banerjee, "CMOS VLSI Design", Third edition, Pearson Education, 2006.
[6] H. Mahmoodi-Meimand and Kaushik Roy, "A Leakage-Tolerant High Fan-in Dynamic Circuit Design Style", IEEE Trans., 2004.
[7] Salendra Govindarajulu, Dr. T. Jayachandra Prasad and P. Rangappa, "Low Power, Reduced Dynamic Voltage Swing Domino Logic Circuits", Indian Journal of Computer Science and Engineering, Vol. 1, No. 2, pp. 74-81, 2011.
[8] Sung-Mo Kang and Yusuf Leblebici, "CMOS Digital Integrated Circuits", Tata McGraw-Hill Publishing Company Limited, 2004.
[9] Vojin G. Oklobdzija and Robert K. Montoye, "Design Performance Trade-Offs in CMOS Domino Logic", IEEE Journal of Solid-State Circuits, Vol. SC-12, No. 2, 1987.
[10] Preetisudha Meher and K. K. Mahapatra, "A New Ultra Low-Power and Noise Tolerant Circuit Technique for CMOS Domino Logic", ACEEE Int. J. on Information Technology, Vol. 01, No. 03, Dec 2011.
[11] Volkan Kursun and Eby G. Friedman, "Low Swing Dual Threshold Voltage Domino Logic", Dept. of Electrical and Computer Engineering, University of Rochester, New York, 14627-0231.
[12] Srinivasa V. S. Sarma D. and Kamala Kanta Mahapatra, "Improved Technique for High Performance Noise Tolerant Domino CMOS Logic Circuit".
[13] Salendra Govindarajulu, Dr. T. Jayachandra Prasad and P. Rangappa, "Energy Efficient, Noise Tolerant CMOS Domino VLSI Circuits in VDSM Technology", Indian Journal of Advanced Computer Science and Application, Vol. 2, No. 4, 2011.
[14] Volkan Kursun and Eby G. Friedman, "Sleep Switch Dual Threshold Voltage Domino Logic", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 12, No. 5, May 2004.





Review Paper on Leak Detection
S. B. Kakuste1, U. B. Bhujbal1, S. V. Devkar1

1Department of Mechanical Engineering, Sinhgad Institute of Technology, Lonavala, Maharashtra, India
Email: sandy.kakuste@gmail.com

ABSTRACT - The words leak and leakage appear in the field of hermetically closed vessels and concern not only vacuum technologies but also engineering with high pressure. Practically, it is impossible to build a completely leak-proof vacuum system. There are many applications in industry where it is necessary to test a hollow fabricated body for fluid leakage. A number of leak testing methods have been proposed for testing hollow components. This paper gives a review of various methods of leak detection for vacuum systems.
Keywords: pressure decay, water bubble test, vacuum, helium leak detectors, Helium mass spectrometer, Radioisotope method, Dye
penetrate method, fluid transient model.
INTRODUCTION
All sealed systems leak. Every pressure system has leaks because imperfections exist at every joint, fitting, seam or weld. These imperfections may be too small to detect even with the best leak detection instruments but, given time, vibration, temperature and environmental stress, they become larger, detectable leaks.
A LEAK IS NOT... some arbitrary reading on a meter. Gas escapes at different times and at different rates. In fact, some leaks cannot be detected at the time of the test. Leaks may plug, and then re-open under uncommon conditions.
A LEAK IS... a physical path or hole, usually of irregular dimensions. The leak may be the tail end of a weld fracture, a speck of dirt on a gasket or a microgroove between fittings.
Production leak testing is implemented to verify the integrity of a manufactured part. It can involve 100% testing or sample inspection. The goal of production leak testing is to prevent leaky parts from getting to the customer. Because manufacturing processes and materials are not perfect, leak testing is often implemented as a final inspection step. In some cases, leak testing is mandated by a regulation or industry specification. For example, in order to reduce hydrocarbon emissions from automobiles, auto makers are now designing and leak testing fuel components to tighter specifications required by the EPA. Also, the nuclear industry enforces regulations and leak test specifications on components such as valves used in nuclear facilities. Whether mandated by regulation or implemented to insure product function and customer satisfaction, leak testing is commonly performed on manufactured parts in many industries including automotive, medical, packaging, appliance, electrical, aerospace, and other general industries.
One of the greatest challenges in production leak testing is correlating an unacceptably leaking part in use by the customer (in the field) with a leak test on a production line. For example, the design specification of a water pump may require that no water leaks externally from the pump under specified pressure conditions. However, in production it may be desirable to leak test the part with air. It is intuitive to assume that air will leak more readily through a defect than water. One cannot simply specify "no leakage", or even "no leakage using an air pressure decay test"; this would result in an unreasonably tight test specification, an expensive test process and potential scrap of parts that may perform acceptably in the field. Therefore, one must set a limit using an air leak test method that correlates to a water leak. Establishing the proper leak rate reject limit is critical to insure part performance and to minimize unnecessary scrap in the manufacturing process. Determining the leak rate specification for a test part can be a significant challenge. Having a clear and detailed understanding of the part and its application is necessary in order to establish the leak rate specification. Even then, many specifications are estimates and often require the use of safety factors. The automotive industry has

implemented a leak rate specification for fuel handling components that species a maximum allowable theoretical leak diameter. The
advantage of this way of expressing the leak rate limit is that it gives the part manufacturer significant leeway in designing the
appropriate leak test. The challenge, however, is correlating the theoretical leak diameter to an actual leak rate. Users of these
specifications must understand the theoretical relationships between leak hole geometry and gas flow and all users must implement
these relationships consistently. A second option is to set the leak rate limit of the specific test using a leak orifice or channel that has
been manufactured and dimensionally calibrated to the geometry (diameter and path length) required by the specification..
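For a rigid test volume, a pressure-decay measurement converts to a leak rate as Q = V·ΔP/Δt (in Pa·m³/s). The minimal Python sketch below uses assumed illustrative values for the test volume, pressure drop and test time; none of them come from any of the reviewed papers.

```python
def leak_rate(volume_m3: float, dp_pa: float, dt_s: float) -> float:
    """Leak rate Q = V * dP / dt for a rigid test volume, in Pa*m^3/s."""
    return volume_m3 * dp_pa / dt_s

# Assumed: 0.5 L part, 20 Pa pressure drop over a 30 s test:
print(f"Q = {leak_rate(0.5e-3, 20.0, 30.0):.2e} Pa*m^3/s")  # ~3.3e-04
```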

[1] N. Hilleret
In this paper, various methods of leak detection are explained, and information is also given about the instruments used for leak detection purposes. In the case of vacuum vessels, it is necessary to check the tightness of the vessel to guarantee leak-proofness before installation. Depending upon the size of the leak, the method of leak detection is selected from various methods. All methods are based on the variation of a physical property measured along the vacuum vessel. For large leaks the gas flow can generate mechanical effects, but small leaks require finer methods.
The various methods of leak detection such as tracer gas, helium leak detectors, the direct flow method, the counter flow method and the detector probe method (sniffer), as well as the characteristics and the use of the detector, are described in this paper.

[2] K. Zapfe
This paper gives an introduction to leak detection of vacuum systems. The helium leak detector and its different applications, along with various leak detection methods, are described. The helium leak detector is the most widely used method in industry. It is important to specify an acceptable leak rate for each vacuum system.
Leak detection plays an important role in manufacturing. After manufacturing of the vacuum vessel, it must be proven that the tightness specifications are fulfilled. Further checks are necessary during as well as after assembly and installation to locate possible leaks. For that, various methods like mechanical effects, pressure increase, tracer gas, helium leak detector, direct flow method and counter flow method are introduced in this paper. The leakage rate, types of leaks, practical experience and examples of leak detection, and different applications of the helium leak detector are explained in this paper.

[3] Andrej Pregelj et al
In industry there is a need to manufacture defect-free hermetically closed elements.
This paper discusses leak detection methods and defining the sizes of leaks, and describes the maximum acceptable leak rate, according to which the product should be accepted or rejected. Various methods of leak detection, i.e. the pressure change method, overpressure method, halogen leak detector, dye penetrant method, acoustical leak detection, radioisotope method, mass spectrometer as leak detector, and helium mass spectrometer, are described in this paper.

[4] Donald T. Soncrant
This paper describes a method to improve the speed of testing hollow parts for fluid leakage, consisting of a closed charge valve, an open charge valve, a compressor and a hollow workpiece. A time delay valve is used to regulate the pressurized air supply. When the time delay valve cuts off, the test valve is actuated and measures the flow rate through the hollow component; if the workpiece is acceptable it turns ON the accept light. If the flow rate exceeds a predetermined value, it turns ON the reject light.

This leakage testing method is used in industry for testing hollow bodies for fluid leakage. Electronically actuated valves and relays are used to conduct the test in a sequence. No special voltage reduction, filtering or voltage regulating devices are required, and operation is independent of voltage variation. This method is more reliable and less complex, and hence is used in industry for testing of hollow components.

[5] Joachim W. Pauly
In this paper, a vessel such as a submarine is selected for testing of air leakage by establishing a pressure level and a test flow to the vessel. For determining the leakage of air in the vessel, the difference in pressure in the vessel is monitored, and whether the leakage rate from the vessel exceeds a predetermined rate is determined by relating the test flow rate to its effect on the pressure level in the vessel.
In the 1st operation, a variable test flow is delivered to the vessel and adjusted as needed to maintain the pressure in the vessel at the test level; the rate of this flow is measured when stabilized and the measured values are converted into standard units. In the 2nd operation, a constant flow rate equivalent to the leakage is delivered to the vessel, and its effect on the pressure difference in the vessel indicates the relation between the leakage rate and the test flow rate.
[6] Sami Elaoud et al
This paper presents a technique for detection and location of leaks in a single pipe by means of transient analysis of hydrogen-natural gas mixture flows. In this technique, transient pressure waves are used, initiated by the sudden closure of a downstream shut-off valve. The purpose of the paper is to present a numerical procedure utilizing transient-state pressure and discharge analysis to detect leakage in a piping system carrying a hydrogen and natural gas mixture. The presence of a leak in the pipe partially reflects the transient pressure waves and thus allows for location of the leak. To determine the leak location, the mathematical formulation has been solved by the method of characteristics with specified time intervals.
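The location step reduces to timing the partially reflected wave: if the wave speed a is known, a reflection arriving Δt after valve closure places the leak at x = a·Δt/2 from the valve. Below is a minimal Python sketch; the wave speed and timing are assumed illustrative values, not from the paper's test case.

```python
def leak_location(wave_speed_m_s: float, reflect_time_s: float) -> float:
    """Distance to the leak from the reflected-wave arrival time."""
    return wave_speed_m_s * reflect_time_s / 2.0

# Assumed: ~400 m/s wave speed, reflection observed 1.5 s after closure:
print(f"leak at ~{leak_location(400.0, 1.5):.0f} m from the valve")  # 300 m
```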
[7] S. Hiroki et al
In this paper, krypton (Kr) is used as a water-soluble tracer for detecting water leaks in a fusion reactor. This method was targeted for application to the International Thermonuclear Experimental Reactor, and water leak valves of the order of 10^-3 Pa·m^3/s were fabricated and connected to the water loop circuit. Kr dissolved in the water is detected by a quadrupole mass spectrometer (QMS). A leak detection method for the water channels is proposed in which the leak detection can be done with fully circulating cooling water; the water-soluble tracer gas effuses into the vacuum vessel through a water leak passage.

[8] T. Kiuchi
This paper describes a method for detection and location of leaks by applying a fluid transient model. Testing on a real pipeline was performed and the resulting conclusions were obtained using the fluid transient model. This method considers both flow rate measurement and pressure measurement; because of this, it gives more accurate detection of the leak and its position than conventional methods, but it assumes that the flow inside the pipeline is quasi-steady. The influence on the method's accuracy is examined, and the results show the advantage of the method compared to conventional methods.

[9] John Mashford
This paper presents a method of investigating data obtained by monitoring pressure sensors in the pipe network, which gives not only the location but also the size of the leak. It uses a support vector machine which acts as a pattern recognizer, giving the location and size of the leak with a high degree of accuracy; the support vector machine is trained and tested on data obtained from an EPANET hydraulic simulation.


[10] Guizeng Wang et al
In this paper, a new leak detection method based on autoregressive modelling is proposed. A pipeline model is tested and the resulting conclusions are obtained using the Kullback information measure, which is very useful in time-sequence analysis. A leak above 0.5% can be easily detected by the Kullback information. The process does not require flow rate measurement; four pressure measurements, two at each end of the pipe, are required.
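To make the idea in [10] concrete, here is a minimal numpy sketch (not the authors' implementation) of an autoregressive-residual leak indicator: fit AR coefficients to leak-free pressure data, then score a new pressure window with a Kullback-Leibler divergence between Gaussian fits of the residuals. The AR order, the stand-in data and all names are illustrative assumptions.

```python
import numpy as np

def fit_ar(x, order=4):
    # Least-squares fit of AR(order) coefficients to a pressure time series
    X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    coeffs, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return coeffs

def residuals(x, coeffs):
    # One-step-ahead prediction residuals of the fitted AR model
    order = len(coeffs)
    X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    return x[order:] - X @ coeffs

def kl_gaussian(mu0, var0, mu1, var1):
    # KL divergence between 1-D Gaussians, KL(N(mu0,var0) || N(mu1,var1))
    return 0.5 * np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / (2 * var1) - 0.5

rng = np.random.default_rng(0)
baseline = 5.0 + 0.01 * rng.standard_normal(2000)  # stand-in leak-free pressure trace
coeffs = fit_ar(baseline)
r0 = residuals(baseline, coeffs)
window = 5.0 + 0.03 * rng.standard_normal(2000)    # stand-in window with leak-induced noise
r1 = residuals(window, coeffs)
print(kl_gaussian(r1.mean(), r1.var(), r0.mean(), r0.var()))  # rises when behaviour changes
```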

CONCLUSION
Proper selection and implementation of a production leak test method starts with an understanding of WHY the test is being
performed, followed by establishing what the leak rate limit is, and finally a determination of how the leak test will be performed. A
careful and thoughtful evaluation at each of these steps, combined with the selection of high quality leak test hardware, will result in a
cost effective, high performance, and reliable production leak test.
This project has described methods for finding leaks and their location in hollow castings and other components. The pressure difference obtained in the pressure decay test confirms the presence of leaks, and the water immersion test gives the location of the leaks. These two methods are less time consuming and give quick results with high accuracy. The end result is stricter quality control for leak testing.
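As a concrete companion to the pressure decay test described above, the sketch below estimates the leak rate from the measured pressure drop over a dwell time using the standard relation Q = V·ΔP/Δt; the cavity volume and readings are illustrative, not taken from any cited paper.

```python
def pressure_decay_leak_rate(volume_m3, delta_p_pa, dwell_s):
    # Standard pressure decay relation: Q = V * dP / dt, in Pa*m^3/s
    return volume_m3 * delta_p_pa / dwell_s

# Illustrative example: a 2-litre hollow casting losing 150 Pa over 30 s
q_leak = pressure_decay_leak_rate(2e-3, 150.0, 30.0)
print(f"leak rate = {q_leak:.1e} Pa*m^3/s")  # 1.0e-02; compare against the leak rate limit
```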
REFERENCES:
[1] N. Hilleret, Leak detection, CERN, Geneva, Switzerland
[2] K. Zapfe, Leak detection, Hamburg, Germany
[3] Andrej Pregelj et al, Leak detection methods and defining sizes of leaks, April 1997
[4] Donald T. Soncrant, Fluidic type leak testing machine
[5] Joachim W. Pauly, Method and apparatus for testing leakage rate, May 7, 1974
[6] Sami Elaoud et al, Leak detection of hydrogen-natural gas mixture in pipe using the characteristics methods of specified time interval, 21 June 2010
[7] S. Hiroki, Development of water leak detection method in fusion reactor using water-soluble gas, 18 June 2007
[8] T. Kiuchi, A leak localization method of pipeline by means of fluid transient model
[9] John Mashford et al, An approach to leak detection in pipe network using analysis of monitored pressure values by support vector machine
[10] Guizeng Wang et al, Leak detection for transport pipelines based on autoregressive modelling
[11] William A. McAdams et al, Leakage testing method, Aug 12, 1958
[12] Percy Gray, Jefferson, Lined tank and method of construction and leakage testing the same



Design and Verification of Nine port Network Router
G. Sri Lakshmi¹, A. Ganga Mani²
¹Assistant Professor, Department of Electronics and Communication Engineering, Pragathi Engineering College, Andhra Pradesh, India
²Research Scholar (M.Tech), Embedded Systems, Department of Electronics and Communication Engineering, Pragathi Engineering College, Andhra Pradesh, India
Email: srilakshmi1853@gmail.com

ABSTRACT - The focus of this paper is the design of a network router and verification of the functionality of the nine-port router (one input port and eight output ports) for network on chip using Verilog, qualifying the design for synthesis and implementation. The design consists of registers, an FSM and FIFOs, and uses a packet-based protocol. The router drives an incoming packet from the input port to the output ports based on the address contained in the packet. The router has an active-low synchronous input resetn which resets the router. The idea is borrowed from large-scale multiprocessors and the wide-area network domain and envisions a network based on on-chip routers. This helps in understanding how the router controls the signals from source to destination based on the header address.

KEYWORDS: FIFO, FSM, Network-on-Chip, register blocks, router simulation, verification plan


INTRODUCTION
System on chip (SoC) is a complex interconnection of various functional elements. It creates a communication bottleneck in gigabit communication due to its bus-based architecture. Thus there was a need for a system with explicit modularity and parallelism; network on chip (NoC) possesses many such attractive properties and solves the communication bottleneck. It works on the idea of interconnecting cores using an on-chip network. Communication on a network on chip is carried out by means of routers, so for implementing a better NoC the router should be efficiently designed. This router supports four parallel connections at the same time. It uses store-and-forward flow control and FSM-controlled deterministic routing, which improves the performance of the router. The switching mechanism used here is packet switching, which is generally used in network on chip. In packet switching the data transfers in the form of packets between cooperating routers, and independent routing decisions are taken. The store-and-forward flow mechanism is preferred because it does not reserve channels and thus does not leave physical channels idle. The arbiter uses a rotating priority scheme so that every channel in turn gets a chance to transfer its data. In this router both input and output buffering are used so that congestion can be avoided on both sides. A router is a device that forwards data packets across computer networks. Routers perform the data "traffic direction" function on the Internet. A router is a microprocessor-controlled device that is connected to two or more data lines from different networks. When a data packet comes in on one of the lines, the router reads the address information in the packet to determine its ultimate destination. Then, using information in its routing table, it directs the packet to the next network on its journey.

WHY WOULD I NEED A ROUTER?
Most home users may want to set up a LAN (Local Area Network) or WLAN (wireless LAN) and connect all computers to the Internet without having to pay a full broadband subscription to their ISP for each computer on the network. In many instances, an ISP will allow you to use a router and connect multiple computers to a single Internet connection, paying a nominal fee for each additional computer sharing the connection. This is when home users will want to look at smaller routers, often called broadband routers, that enable two or more computers to share an Internet connection. Within a business or organization, you may need to connect multiple computers to the Internet, but also want to connect multiple private networks. Not all routers are created equal, since their job will differ slightly from network to network. Additionally, you may look at a piece of hardware and not even realize it is a router.
What defines a router is not its shape, color, size or manufacturer, but its job function of routing data packets between computers. A cable modem, which routes data between your PC and your ISP, can be considered a router. In its most basic form, a router could simply be one of two computers running the Windows 98 (or higher) operating system connected together using ICS (Internet Connection Sharing). In this scenario, the computer that is connected to the Internet is acting as the router for the second computer to obtain its Internet connection. Going a step up from ICS, we have a category of hardware routers that perform the same basic task as ICS, albeit with more features and functions. Often called broadband or Internet connection sharing routers, these routers allow you to share one Internet connection with multiple computers. Broadband or ICS routers will look a bit different depending on the manufacturer or brand, but wired routers are generally a small box-shaped hardware device with ports on the front or back into which you plug each computer, along with a port to plug in your broadband modem. These connection ports allow the router to do its job of routing the data packets between each of the computers and the data going to and from the Internet. These routers also support NAT (network address translation), which allows all of your computers to share a single IP address on the Internet.

ROUTER DESIGN PRINCIPLES

Given the strict contest deadline and the short implementation window, we adopted a set of design principles to spend the available time as efficiently as possible. This document provides specifications for the router, which uses a packet-based protocol. The router drives the incoming packet from the input port to the output ports based on the address contained in the packet. The network router has one input port from which the packet enters and eight output ports where the packet is driven out. A packet contains three parts: header, data and frame check sequence. The packet width is 16 bits and the length of the packet can be between 1 byte and 8192 bytes. The packet header contains two fields, destination address (DA) and length. The destination address of the packet is 16 bits; the switch drives the packet to the respective port based on this destination address. Each output port has a 16-bit unique port address, and if the destination address of the packet matches the port address, the switch drives the packet to that output port. The length field is 16 bits and ranges from 0 to 8191, measured in bytes; the data is in bytes and can take any value. The frame check sequence contains the security check of the packet and is calculated over the header and data. A behavioural sketch of this routing decision is given below. Communication on a network on chip is carried out by means of routers, so for implementing a better NoC the router should be efficiently designed. This router supports four parallel connections at the same time. It uses store-and-forward flow control and FSM-controlled deterministic routing, which improves the performance of the router. The switching mechanism used here is packet switching, which is generally used in network on chip. In packet switching the data transfers in the form of packets between cooperating routers, and independent routing decisions are taken. The store-and-forward flow mechanism does not reserve channels and thus does not leave physical channels idle. The arbiter uses a rotating priority scheme so that every channel in turn gets a chance to transfer its data. In this router both input and output buffering are used so that congestion can be avoided on both sides.
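As a behavioural illustration of the header-address routing just described (the actual design in this paper is written in Verilog), the Python sketch below parses a 16-bit destination address from the header and buffers the packet in the matching output FIFO. The port address values are illustrative assumptions, not taken from the paper.

```python
# Illustrative 16-bit port addresses for the eight output ports
PORT_ADDRESSES = {0x0001: 0, 0x0002: 1, 0x0003: 2, 0x0004: 3,
                  0x0005: 4, 0x0006: 5, 0x0007: 6, 0x0008: 7}

def route_packet(packet, output_fifos):
    # First two header bytes form the 16-bit destination address (DA)
    da = (packet[0] << 8) | packet[1]
    port = PORT_ADDRESSES.get(da)
    if port is None:
        raise ValueError(f"no output port matches DA 0x{da:04x}")
    # Store-and-forward: the whole packet is buffered before being sent on
    output_fifos[port].append(packet)

fifos = [[] for _ in range(8)]
route_packet(bytes([0x00, 0x03, 0x00, 0x02, 0xAB, 0xCD]), fifos)  # DA 0x0003 -> FIFO 2
```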
Features
- Full duplex synchronous serial data transfer
- Variable length of transfer word up to 8192 bytes
- HEADER is the first data transfer
- Rx and Tx on both rising and falling clock edges
- Fully static synchronous design with one clock domain
- Technology-independent Verilog
- Fully synthesizable

The ROUTER uses a synchronous protocol. The clock signal is provided by the master to provide synchronization, and it controls when data can change and when it is valid for reading. Since the ROUTER is synchronous, it has a clock pulse along with the data. RS-232 and other asynchronous protocols do not use a clock pulse, so their data must be timed very accurately.


OPERATION

The nine-port router design is composed of three blocks: a 16-bit register, the router controller and the output block. The router controller is designed as an FSM, and the output block consists of four FIFOs combined together. The FIFOs store data packets, and when data is to be sent it is read from the FIFOs (a behavioural sketch is given below). The router design has eight 16-bit output ports and one 16-bit data input port, which is used to drive data into the router. The design uses a global clock and reset; the error signal and the suspended-data signal are outputs of the router, generated by the FSM controller, and their functions are discussed in the FSM description below. The ROUTER can operate with a single master device and with one or more slave devices. If a single slave device is used, the RE (read enable) pin may be fixed to logic low if the slave permits it. Some slaves require the falling edge (HIGH to LOW transition) of the slave select to initiate an action, such as the mobile operators, which start conversion on said transition. With multiple slave devices, an independent RE signal is required from the master for each slave device.









FIGURES


Figure 1: Block Diagram of Nine Port Router



Figure 2: Internal Structure of Nine Port Router






Figure 3: Simulation of FSM Controller



Figure 4: Simulation of Router

APPLICATIONS

When multiple routers are used in interconnected networks, the routers exchange information about destination addresses using a dynamic routing protocol. Each router builds up a table listing the preferred routes between any two systems on the interconnected networks. A router has interfaces for different physical types of network connections (such as copper cables, fiber optic, or wireless transmission). It also contains firmware for different networking protocol standards. Each network interface uses this specialized software to enable data packets to be forwarded from one protocol transmission system to another. Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different sub-network address. The subnet addresses recorded in the router do not necessarily map directly to the physical interface connections.

EDA Tools and Methodologies
HVL: SystemVerilog
HDL: Verilog
Device: Spartan-3E
EDA tools: ModelSim, Xilinx ISE

CONCLUSION

I have designed the network ROUTER and verified its functionality in Verilog; the router has one input port and eight 16-bit output ports, and its operation was observed. The functionality of the router was verified by giving different test cases to the different FIFOs based on the header address of the packet.


REFERENCES:

[1] D. Chiou, MEMOCODE 2011 Hardware/Software CoDesign Contest, https://ramp.ece.utexas.edu/redmine/ Attachments/DesignContest.pdf
[2] Bluespec Inc, http://www.bluespec.com
[3] Xilinx, ML605 Hardware User Guide, http://www.xilinx.com/support/documentation/boardsand its/ug534.pdf
[4] Xilinx, LogiCORE IP Processor Local Bus (PLB) v4.6, http://www.xilinx.com/support/documentation/ip documentation/plb v46.pdf
[5] Application Note: Using the Router Interface to Communicate, Motorola, ANN91/D Rev. 1, 01/2001
[6] Cisco Router OSPF: Design & Implementation Guide, McGraw-Hill
[7] Nortel Secure Router 4134, Nortel Networks Pvt. Ltd.
[8] LRM, IEEE Standard Hardware Description Language Based on the Verilog Hardware Description Language, IEEE STD 1364-1995
Books:
[9] Chris Spear, SystemVerilog for Verification, Springer
[10] Bhasker, J., A Verilog HDL Primer








Performance Evaluation of Guarded Static CMOS Logic based Arithmetic and
Logic Unit Design
Felcy Jeba Malar. M¹, Ravi T²
¹Research Scholar (M.Tech), VLSI Design, Sathyabama University, Chennai, Tamilnadu
²Assistant Professor, Sathyabama University, Chennai, Tamilnadu
Email: ¹felcyjebamalar@gmail.com

ABSTRACT - Real-world applications tend to utilize improved low-power processes to reduce power dissipation and to improve device efficiency. In this regard, optimization techniques help in reducing parameters such as power and area, which are of major concern. The arithmetic and logic unit found in every processor is likely to consume considerable power for its internal operations; this power can be reduced using low-power optimization techniques. With reference to the above issue, in this paper an efficient arithmetic and logic unit is designed with a modified static CMOS logic design. This modified logic is found to be more efficient than the existing logic in terms of parameters such as average power and power-delay product. With the modified architecture, the arithmetic and logic unit performs processing at high speed in different CMOS technologies.
Keywords - Low power, modified static CMOS logic, power delay product, arithmetic and logic unit
I. INTRODUCTION
Very Large Scale Integrated (VLSI) circuit technology is a rapidly growing technology for a wide range of innovative devices and systems that have changed the world today. The tremendous growth in laptop and portable systems and in cellular networks has intensified research efforts in low-power electronics [1]. High-power systems may often lead to circuit damage, while low power leads to smaller power supplies and less expensive batteries. Low-power design is needed not only for portable applications but also to reduce the power of high-performance systems. With large integration density and improved speed of operation, systems with high frequencies are emerging.

The arithmetic logic unit is one of the main components inside a microprocessor. It is responsible for performing arithmetic
and logic operations such as addition, subtraction, increment, and decrement, logical AND, logical OR, logical XOR and logical
XNOR [2]. ALUs use fast dynamic logic circuits and have carefully optimized structures [3]. Their power consumption accounts for a significant portion of the total power consumption of the datapath. Arithmetic and logic units also form one of the highest power-density locations on the processor, as the ALU is clocked at the highest speed and is kept busy most of the time, resulting in thermal hotspots and sharp temperature gradients within the execution core. This strongly motivates energy-efficient ALU designs that satisfy the high-performance requirements while reducing peak and average power dissipation. The ALU is a combinational circuit that performs arithmetic and logical micro-operations on a pair of n-bit operands [4]. The power consumption in digital circuits, which mostly use complementary metal-oxide semiconductor (CMOS) devices, is proportional to the square of the power supply voltage; therefore, voltage scaling is one of the important methods used to reduce power consumption. To achieve a high transistor drive current and thereby improve circuit performance, the transistor threshold voltage must be scaled down in proportion to the supply voltage [5].
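The quadratic dependence on supply voltage can be illustrated with the standard dynamic (switching) power relation P = α·C·V²·f; the activity factor, capacitance and frequency values below are illustrative, not taken from this paper.

```python
def dynamic_power_w(alpha, c_farads, vdd_volts, f_hz):
    # Standard CMOS dynamic (switching) power: P = alpha * C * Vdd^2 * f
    return alpha * c_farads * vdd_volts ** 2 * f_hz

# Illustrative: halving Vdd cuts switching power by 4x at the same frequency
print(dynamic_power_w(0.1, 1e-12, 3.3, 1e9))   # ~1.09e-3 W
print(dynamic_power_w(0.1, 1e-12, 1.65, 1e9))  # ~2.72e-4 W
```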

II. EXISTING ALU DESIGN
The existing method is a simple arithmetic and logic unit design implementing the various arithmetic and logic operations [10]. The basic design consists of conventional arithmetic and logic circuits that perform the required operations, as shown in Fig 2.1. These conventional circuits are designed in CMOS logic. When the architecture is simulated, it is found to consume more power, which is the main disadvantage of the existing system.


International Journal of Engineering Research and General Science Volume 2, Issue 3, April-May 2014
ISSN 2091-2730

61 www.ijergs.org


Fig 2.1 Basic Concept of ALU design
The existing feedthrough logic, given in Fig 2.2, works in two phases, a reset phase and an evaluation phase [11], [12]. When the clock is HIGH, the output node is pulled to zero because transistor Tr is ON and transistor Tp is OFF. When the clock goes LOW, the reset transistor Tr is turned OFF and Tp becomes ON, resulting in charging or discharging of the output node with respect to the input. The reset transistor Tr always provides a 0->1 transition at the start of the evaluation phase, and therefore outperforms dynamic CMOS in cascaded structures; when dynamic CMOS is cascaded, the produced result may be false due to 1->0 transitions in the evaluation phase.

Fig 2.2 Existing Feed through Logic
III. PROPOSED ALU DESIGN
The proposed system uses a guarded static logic principle, explained below. Fig 3.1 shows the simple low-power technique, designed with two control inputs. It works similarly to the existing static CMOS logic: during the high phase of the clock, the output node does not give the exact output; when the clock becomes low, the output node conditionally evaluates to either logic high or low, depending on the inputs to the pull-up and pull-down networks present in the circuit.


Fig 3.1 Proposed Technique (Guarded static CMOS logic)

The proposed system consists of a modified arithmetic and logic unit built around the proposed Guarded Static CMOS Logic (GSCL). Hence a further reduction in the power consumption of the circuit is observed. The modified arithmetic and logic unit block with the control unit is shown in Fig 3.2. According to the block diagram, each block is fed with two control signals. One control signal chooses whether the operation to be executed lies in the arithmetic block or the logic block; the second chooses which particular block is to be executed. Hence the choice is made by the user by providing the arithmetic and logic unit with the necessary control input.
Fig 3.2 depicts the main architecture of the paper. Different blocks are interlinked to form the complete architecture. For the control signal to activate only a particular block, each block is modified with low-power techniques so as to avoid the power wasted in executing all the other processing blocks. Using these techniques, only a single arithmetic or logic block is activated and its output is produced in accordance with the control signal provided. The low-power technique used for this requirement is discussed below. In this way the design is kept simple, the processing is done in a continuous manner, and the design can be used in bigger circuits to compensate for high power dissipation.


Fig 3.2 Modified Architecture of Arithmetic and Logic unit
3.1 DESCRIPTION OF ARCHITECTURE
3.1.1 Input Block:
The input block consists of two general-purpose registers. These registers provide the necessary inputs to the arithmetic and logic unit blocks, which perform the various arithmetic and logic operations.
3.1.2 ALU Block:
The arithmetic block consists of a four-bit adder, subtractor, right and left shifter, comparator, encryption and decryption circuit, and multiplier. The logic block consists of four-bit AND, OR, NOT, NAND, NOR, XOR and XNOR gates. The architecture is modified in such a way that instructions are fed from the control unit above to choose a particular operation, and only the result of that operation reaches the output port of the processor. This saves time, and in turn the power consumption is reduced to a greater extent than in the existing system.
3.1.3 Output Block:
The output block consists of a simple OR gate whose inputs are the outputs from the 8 units of the ALU block. Since only one block outputs a true value at a time, an OR gate suffices here. This is how a simple output block is designed in this paper.
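A minimal behavioural model of this block-selection scheme (not the transistor-level GSCL circuit) is sketched below: one control bit selects arithmetic vs. logic, a second selector activates exactly one block, every inactive block holds 0, and the output block is a plain OR. The operand width and the operation set shown are illustrative.

```python
# Illustrative 4-bit operation set; the paper's blocks include more operations
ARITH_OPS = [lambda a, b: (a + b) & 0xF,   # adder
             lambda a, b: (a - b) & 0xF]   # subtractor
LOGIC_OPS = [lambda a, b: a & b,           # AND
             lambda a, b: a | b]           # OR

def guarded_alu(a, b, sel_arith, block_sel):
    # Only the selected block evaluates; all other blocks hold 0,
    # so the output block can merge results with a simple OR gate.
    ops = ARITH_OPS if sel_arith else LOGIC_OPS
    block_outputs = [op(a, b) if i == block_sel else 0 for i, op in enumerate(ops)]
    result = 0
    for out in block_outputs:
        result |= out                      # output block: OR of all block outputs
    return result

assert guarded_alu(0b0101, 0b0011, sel_arith=True, block_sel=0) == 0b1000   # 5 + 3
assert guarded_alu(0b0101, 0b0011, sel_arith=False, block_sel=0) == 0b0001  # 5 AND 3
```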
IV. TRANSIENT ANALYSIS


Fig 4.1: Transient analysis of existing arithmetic and logic unit (voltage (V) vs time (s))



Fig 4.1 shows the output waveform of the existing 4-bit arithmetic and logic unit, in which v(19)-v(26) represent the ALU inputs, v(104)-v(107) the arithmetic unit outputs, v(108)-v(111) the logic unit outputs, and v(112)-v(115) the ALU outputs.



Fig 4.2: Transient analysis of proposed arithmetic and logic unit (voltage (V) vs time (s))

Fig 4.2 shows the output waveform of the proposed arithmetic and logic unit, in which v(15), v(16), v(5), v(6) represent the control inputs, v(19)-v(26) the ALU inputs, v(104)-v(107) the arithmetic unit outputs, v(108)-v(111) the logic unit outputs, and v(112)-v(115) the ALU outputs.
V. POWER ANALYSIS

Table 5.1 Power Analysis of Existing System and Proposed System
Device: MOSFET; Technology: 130 nm; Operating frequency: 1 GHz

                Existing                            Proposed
Design          Avg pwr (W)  Delay (s)  PDP (pJ)    Avg pwr (W)  Delay (s)  PDP (pJ)
ALU design      126.6        2.002      253.45      101.19       0.009      0.910


Table 5.2 Power Analysis of Existing System and Proposed System
Device: MOSFET; Technology: 32 nm; Operating frequency: 1 GHz

                Existing                            Proposed
Design          Avg pwr (W)  Delay (s)  PDP (pJ)    Avg pwr (W)  Delay (s)  PDP (pJ)
ALU design      3967         2.026      8037.1      11.94        0.009      0.107

Table 5.3 Power Analysis of Existing System and Proposed System
Device: MOSFET; Technology: 16 nm; Operating frequency: 1 GHz

                Existing                            Proposed
Design          Avg pwr (W)  Delay (s)  PDP (pJ)    Avg pwr (W)  Delay (s)  PDP (pJ)
ALU design      513.8        2.026      1040.9      20.86        0.009      0.18

Tables 5.1, 5.2 and 5.3 above show the performance analysis for the existing and the modified arithmetic and logic circuits with the guarded static CMOS logic technique in three different CMOS nanometer technologies.


Fig 5.1: Power consumption comparison of existing and proposed system
The chart in Fig 5.1 shows the comparative performance of the ALU with an operating voltage of 3.3 V, simulated using HSPICE in 130 nm CMOS technology. The analysis clearly shows that the arithmetic and logic unit in the existing system consumes 126.6 W of power, while the proposed system consumes a considerably lower 101.19 W. When the proposed guarded static CMOS logic is used in the ALU circuit, the power consumption experiences a drastic reduction, increasing the speed of the device.
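The power-delay product in the tables is simply average power multiplied by delay. The check below reproduces the 130 nm row under the assumption that the tabulated power is in mW and the delay in ns, which are the units that make the tabulated PDP come out in pJ:

```python
def pdp_pj(avg_power_mw, delay_ns):
    # P (mW) * t (ns) = 1e-3 W * 1e-9 s = 1e-12 J, i.e. the result is in pJ
    return avg_power_mw * delay_ns

print(pdp_pj(126.6, 2.002))    # existing 130 nm ALU -> ~253.45 pJ
print(pdp_pj(101.19, 0.009))   # proposed 130 nm ALU -> ~0.91 pJ
```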
CONCLUSION
Thus the power consumption is greatly reduced in the modified design using the guarded static CMOS logic, and the design is found to be more efficient. With the conventional type of arithmetic and logic unit, which executes all the operations at the same time, the power dissipation is uncontrolled. Hence, to find an alternative, static CMOS logic was taken as a base and the proposed Guarded Static CMOS logic was introduced. The performance analysis clearly shows that the arithmetic and logic unit designed using Guarded Static CMOS logic achieves suitable values of the various parameters, helping to obtain a near-optimum arithmetic and logic circuit. Hence the power consumption of the modified ALU design is further reduced. The proposed arithmetic and logic unit design can be used in high-end real-time applications such as ARM processors, and also in various other low-power applications.

REFERENCES:

[1] K. Nehru, A. Shanmugam, G. Darmila Thenmozhi, "Design of low power ALU using 8T FA and PTL based MUX circuits", IEEE International Conference on Advances in Engineering, Science and Management, pp. 724-730, 2012.
[2] B. Lokesh, K. Dushyanth, M. Malathi, "4 Bit Reconfigurable ALU with Minimum Power and Delay", International Journal of Computer Applications, pp. 10-13, 2011.
[3] Mazen Al Haddad, Zaghloul ElSayed, Magdy Bayoumi, "Green Arithmetic Logic Unit", IEEE Transaction, 2012.
[4] Meetu Mehrishi, S. K. Lenka, "VLSI Design of Low Power ALU Using Optimized Barrel Shifter", International Journal of VLSI and Embedded Systems, Vol. 04, Issue 03, pp. 318-323, 2013.
[5] Nazrul Anuar, Yasuhiro Takahashi, Toshikazu Sekine, "Two Phase Clocked Adiabatic Static CMOS Logic and its Logic Family", Journal of Semiconductor Technology and Science, Vol. 10, No. 1, March 2010, pp. 1-10.
[6] R. K. Krishnamurthy, S. Hsu, M. Anders, B. Bloechel, B. Chatterjee, M. Sachdev, S. Borkar, "Dual supply voltage clocking for 5GHz 130nm integer execution core", Proceedings of IEEE VLSI Circuits Symposium, Honolulu, Jun. 2002, pp. 128-129.
[7] S. Vangal, Y. Hoskote, D. Somasekhar, V. Erraguntla, J. Howard, G. Ruhl, V. Veeramachaneni, D. Finan, S. Mathew, N. Borkar, "A 5-GHz floating point multiply accumulator in 90-nm dual VT CMOS", Proc. IEEE Int. Solid-State Circuits Conf., San Francisco, CA, Feb. 2003, pp. 334-335.
[8] V. Navarro Botello, J. A. Montiel Nelson, S. Nooshabadi, "Analysis of high performance fast feedthrough logic families in CMOS", IEEE Trans. Circuits Syst. II, Vol. 54, No. 6, Jun. 2007, pp. 489-493.
[9] Rabaey, J. M., Chandrakasan, A., Nikolic, B., 2002. Digital Integrated Circuits: A Design Perspective, 2nd ed., Upper Saddle River, NJ: Prentice-Hall.
[10] Bishwajeet Pandey, Manisha Pattanaik, "Clock Gating Aware Low Power ALU Design and Implementation on FPGA", International Journal of Future Computer and Communication, Vol. 2, No. 5, October 2013, pp. 461-465.
[11] Nooshabadi, S., Montiel-Nelson, J. A., 2004. "Fast feedthrough logic: A high-performance logic family for GaAs", IEEE Trans. Circuits Syst. I, Reg. Papers, Vol. 51, No. 11, pp. 2189-2203.
[12] Navarro-Botello, V., Montiel-Nelson, J. A., Nooshabadi, S., 2007. "Analysis of high performance fast feedthrough logic families in CMOS", IEEE Trans. Circuits Syst. II, Vol. 54, No. 6, pp. 489-493.














Fabrication and Analysis of Tube-In-Tube Helical Coil Heat Exchanger
Mrunal P. Kshirsagar¹, Trupti J. Kansara¹, Swapnil M. Aher¹
¹Research Scholar, Sinhgad Institute of Technology
sswapnilaher@gmail.com, 9881925601

ABSTRACT - Conventional heat exchangers are large in size and have a lower heat transfer rate; in a conventional heat exchanger, dead zones are produced which reduce the heat transfer rate, some external means is required to create turbulence, and the fluids are not in continuous motion with respect to each other. A tube-in-tube helical coil heat exchanger provides a compact shape, with its geometry offering more fluid contact, eliminating the dead zone and increasing the turbulence and hence the heat transfer rate. An experimental setup was fabricated for the estimation of the heat transfer characteristics. A wire is wound in the core to increase the turbulence, which in turn increases the heat transfer rate. The paper deals with the pitch variation of the internally wound wire and its effect on the heat transfer rate. The Reynolds number and Dean number in the annulus were compared to the numerical data. The experimental result was compared with the analytical result, which confirmed the validation. This heat exchanger finds its application mostly in food industries and in waste heat recovery.
Keywords - Tube-in-tube helical coil, Nusselt number, wire wound, Reynolds number, Dean number, dead zone, efficiency.
1. INTRODUCTION
Several studies have indicated that helically coiled tubes are superior to straight tubes when employed in heat transfer
applications. The centrifugal force due to the curvature of the tube results in the development of secondary flows (flows perpendicular
to the axial direction) which assist in mixing the fluid and enhance the heat transfer. In straight tube heat exchangers there is little
mixing in the laminar flow regime, thus the application of curved tubes in laminar flow heat exchange processes can be highly
beneficial. These situations can arise in the food processing industry for the heating and cooling of either highly viscous liquid food,
such as pastes or purees, or for products that are sensitive to high shear stresses. Another advantage to using helical coils over straight
tubes is that the residence time spread is reduced, allowing helical coils to be used to reduce axial dispersion in tubular reactors.
The first attempt to describe mathematically the flow in a coiled tube was made by Dean. His analysis considered a first approximation of the steady motion of an incompressible fluid flowing through a coiled pipe of circular cross-section. It was observed that the reduction in the rate of flow due to curvature depends on a single variable, K = 2·Re²·(r/R), for low velocities and small r/R ratio. Dean's study was then continued for the laminar flow of fluids of different viscosities through curved pipes of different curvature ratios (r/R). The results showed that the onset of turbulence did not depend on the value of Re or De, and it was concluded that flow in curved pipes is more stable than flow in straight pipes. The resistance to flow was also studied as a function of De and Re; there was no difference in flow resistance compared to a straight pipe for values of De less than 14.6.

Figure 1.1: Diagram of helical coil


International Journal of Engineering Research and General Science Volume 2, Issue 3, April-May 2014
ISSN 2091-2730

67 www.ijergs.org


Rough estimates can be made using either constant heat flux or constant wall temperature correlations from the literature; the study of fluid-to-fluid heat transfer for this arrangement needs further investigation. A second difficulty is in estimating the area of the coil surface available for heat transfer. As can be seen in the figure, a solid baffle is placed at the core of the heat exchanger. In this configuration the baffle is needed so that the fluid will not flow straight through the shell with minimal interaction with the coil. The baffle changes the flow velocity around the coil, and it is expected that there would be possible dead zones in the area between the coils where the fluid would not be flowing. The heat would then have to conduct through the fluid in these zones, reducing the heat transfer effectiveness on the outside of the coil.

Figure 1.2 close-up of double pipe heat exchanger
Additionally, the recommended calculation of the outside heat transfer coefficient is based on flow over a bank of non-staggered circular tubes, which is another approximation used to account for the complex geometry. Thus, the major drawbacks of this type of heat exchanger are the difficulty in predicting the heat transfer coefficients and the surface area available for heat transfer. These problems arise because of the lack of information on fluid-to-fluid helical heat exchangers and the poor predictability of the flow around the outside of the coil.

Nomenclature:
A     surface area of tube (m²)
C     constant in Eq. (4)
d     diameter of inner tube (m)
D     diameter of annulus (m)
De*   modified Dean number (dimensionless)
h     heat transfer coefficient (W/m²K)
k     thermal conductivity (W/m K)
L     length of heat exchanger (m)
LMTD  log-mean temperature difference (K or °C)
q     heat transfer rate (J/s)
U     overall heat transfer coefficient (W/m²K)
v     velocity (m/s)
ρ     density (kg/m³)
µ     dynamic viscosity (kg/m s)
ΔT1   temperature difference at inlet (K)
ΔT2   temperature difference at outlet (K)

Subscripts
i       inside/inner
o       outside/outer
c       cold
h       hot
hotin   hot fluid in
coldin  cold fluid in
max     maximum
min     minimum
cur     curved tube

2. DIMENSIONAL AND OPERATING PARAMETERS:
Table 1: Characteristic dimensions of heat exchanger
di (mm)                10
do (mm)                12
Di (mm)                23
Do (mm)                25
Curvature radius (mm)  135
Stretch length (mm)    3992
Wire diameter (mm)     1.5

Table 2: Range of parameters
Inner tube flow rate           200-500 LPH
Outer tube flow rate           50-200 LPH
Inner tube inlet temperature   28-30 °C
Outer tube inlet temperature   58-62 °C
Inner tube outlet temperature  30-40 °C
Outer tube outlet temperature  35-46 °C

2.1 METHODOLOGY:
The heat exchangers were constructed from mild steel and stainless steel. The inner tube, of outer diameter 12 mm and inner diameter 10 mm, was constructed from mild steel, and the outer tube, of outer diameter 25 mm and inner diameter 23 mm, was constructed from stainless steel. Mild steel wire is wound on the inner tube with a pitch of 6 mm and 10 mm on the respective heat exchangers. The curvature radius of the coil is 135 mm and the stretched length of the coil is 3992 mm. During the bending of the tubes, very fine sand was filled in the tube to maintain smoothness of the inner surface, and the tube was then cleaned with compressed air. Care was taken to preserve the circular cross-section of the coil during the bending process. End connections were soldered at the tube ends, and the two ends were drawn from the coiled tube at one position.


3. EXPERIMENTAL SETUP AND WORKING:

Figure 3.1: Experimental setup
Cold tap water was used as the fluid flowing in the annulus, and the water in the annulus was circulated. The flow was controlled by a valve, allowing flows to be controlled and measured between 200 and 500 LPH. Hot water for the inner tube was heated in a tank with a thermostatic heater set at 60 °C. This water was circulated via a pump. The flow rate for the inner tube was controlled by a flow metering valve as described for the annulus flow. Flexible PVC tubing was used for all the connections. J-type thermocouples were inserted into the flexible PVC tubing to measure the inlet and outlet temperatures of both fluids. Temperature data was recorded using a Creative temperature indicator.

Figure 3.2: Actual setup



3.1 Experimental Study
A test run was completed on the apparatus. Once all of the components were in place, the system was checked thoroughly for leaks. After fixing the leaks, the apparatus was prepared for testing. The test run commenced with the apparatus being tested under laboratory conditions. Data was recorded every five minutes until the apparatus reached steady state. The hot temperatures fell as expected; the cold temperatures seemed more unpredictable, in one instance rising six degrees in five minutes and on the next reading falling three degrees. The apparatus took 120 minutes to reach steady state, which can vary based on operating conditions. Readings were taken until the three-hour mark; however, the data became inconsistent, so a steady-state set was determined based on the proximity of the readings.
Flow rates in the annulus and in the inner tube were varied over five levels: 100, 200, 300, 400 and 500 LPH. All possible combinations of these flow rates in the annulus and the inner tube were tested, for all the coils in the counter-flow configuration. Furthermore, three replicates were carried out for every combination of flow rate, coil size and configuration, resulting in a total of 50 trials. Temperature data was recorded every ten seconds; the data used in the calculations was taken only after the system had stabilized. Temperature measurements from 120 s of the stable system were used, with temperature reading fluctuations within 1.1 °C. All the thermocouples were constructed from the same roll of thermocouple wire, so the repeatability of the temperature readings was high.

4. DATA COLLECTION AND ANALYSIS:
In the present investigation the heat transfer coefficients and heat transfer rates were determined from the measured temperature data. Heat flows from the hot water on the inner tube side to the cold water on the outer tube side. The operating parameter ranges are given in Table 2.

Mass flow rate of hot water (kg/s):
    m_H = Q_HOT (LPH) × ρ (kg/m³) / (1000 × 3600)
Mass flow rate of cold water (kg/s):
    m_C = Q_COLD (LPH) × ρ (kg/m³) / (1000 × 3600)
Velocity of hot fluid (m/s):
    V_H = m_H / (ρ × A_flow)
where A_flow is the cross-sectional flow area of the inner tube.
Heat transfer rate of hot water (J/s):
    q_H = m_H × C_P × Δt_hot × 1000
Heat transfer rate of cold water (J/s):
    q_C = m_C × C_P × Δt_cold × 1000
Average heat transfer rate:
    Q_avg = (q_H + q_C) / 2
The overall heat transfer coefficient was calculated with
    U_o = Q_avg / (A × LMTD)

The overall heat transfer surface area was determined from the tube diameter and the developed heat transfer area, A = 0.22272 m². The total convective area of the tube is kept constant for the two geometries of coiled heat exchanger.
LMTD is the log-mean temperature difference, based on the inlet temperature difference ΔT1 and the outlet temperature difference ΔT2:
    LMTD = (ΔT1 - ΔT2) / ln(ΔT1 / ΔT2)
The overall heat transfer coefficient can be related to the inner and outer heat transfer coefficients by
    1/U_o = d_o / (d_i × h_i) + d_o × ln(d_o / d_i) / (2k) + 1/h_o
where d_i and d_o are the inner and outer diameters of the tube respectively, k is the thermal conductivity of the wall material and L is the length of tube (stretch length) of the heat exchanger. After calculating the overall heat transfer coefficient, the only unknown variables are the inner and outer convective heat transfer coefficients h_i and h_o. Keeping the annulus-side mass flow rate constant and varying the tube-side mass flow rate,
    h_i = C × V_i^n
where V_i is the tube-side fluid velocity (m/s); the values of the constant C and the exponent n were determined through curve fitting. The inner heat transfer coefficient could be calculated for both circular and square coils using the Wilson plot method. This procedure is repeated for the tube side and the annulus side for each mass flow rate on both helical coils.
The efficiency of the heat exchanger was calculated; the experimentally obtained value is η = 93.33%.
The Reynolds number:
    Re = ρVD / µ
The Dean number:
    De = Re × (d / (2R_c))^(1/2)
where R_c is the curvature radius of the coil.
The friction factor (in the standard Darcy form):
    f = Δp × (d / L) / (ρV² / 2)
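The calculation chain above can be condensed into a short script. All numerical inputs below are illustrative stand-ins (only the 10 mm tube diameter, 135 mm curvature radius and 0.22272 m² area come from the paper), so the printed values are examples, not experimental results.

```python
import math

def lmtd(dt1, dt2):
    # Log-mean temperature difference from inlet/outlet temperature differences
    return (dt1 - dt2) / math.log(dt1 / dt2)

def overall_u(q_avg, area, dt1, dt2):
    # Overall heat transfer coefficient: U = Q / (A * LMTD)
    return q_avg / (area * lmtd(dt1, dt2))

def reynolds(rho, v, d, mu):
    return rho * v * d / mu

def dean(re, d, r_c):
    # Dean number for a coiled tube: De = Re * sqrt(d / (2 * Rc))
    return re * math.sqrt(d / (2.0 * r_c))

re = reynolds(rho=995.0, v=1.2, d=0.010, mu=0.798e-3)      # illustrative hot-side flow
print(f"Re = {re:.0f}, De = {dean(re, 0.010, 0.135):.0f}")
print(f"U  = {overall_u(q_avg=2500.0, area=0.22272, dt1=30.0, dt2=12.0):.1f} W/m^2K")
```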





5. RESULT AND DISCUSSION:
The experiment was conducted for a single-phase water-to-water heat transfer application. The tube-in-tube helical coil heat exchanger was analyzed in terms of temperature variation and friction factor for changing pitch distance of the wire wound on the outer side of the inner tube. The results obtained from the experimental investigation of the heat exchanger operated at various operating conditions are studied in detail and presented below.

Figure 5.1: Inner Reynolds number vs inner Nusselt number

Nusselt number vs Reynolds number (annulus area): As the Reynolds number increases, the Nusselt number increases. A larger Nusselt number corresponds to more active convection; the 10 mm pitch wire-wound tube placed in the tube-in-tube helical coil shows a rapid increase beyond Re = 5000 because of the decreasing friction factor.


Figure 5.2: Variation of inner tube flow rate with inner Nusselt number at constant annulus flow rate for the plain tube-in-tube helical coil heat exchanger

Figure 5.3: Variation of inner tube flow rate with inner Nusselt number at constant annulus flow rate for 10 mm pitch of wire wound on the tube-in-tube helical coil heat exchanger

Figure 5.4: Variation of inner tube flow rate with inner Nusselt number at constant annulus flow rate for 6 mm pitch of wire wound on the tube-in-tube helical coil heat exchanger

The inner-tube Nusselt number at constant annulus-side flow rate increases linearly with increasing flow rate of water through the inner tube. Similarly, the inner Nusselt number changes proportionally with variation of the annulus-side flow rate at the same inner-side flow rate.



Figure 5.5: Annulus Reynolds number vs annulus friction factor

5.5 Friction factor vs annulus Reynolds number
It is observed from Figure 5.5 that the pressure drop in the annulus section is higher. This may be due to the friction generated by the outer wall of the inner coiled tube as well as the inner wall of the outer coiled tube. As expected, the friction factor obtained from the tube with the wire coil wound on it is significantly higher than that without the wire-coil insert.


ACKNOWLEDGMENT
This work was supported by Prof. Milind S. Rohokale, H.O.D., Sinhgad Institute of Technology, and Prof. Annasaheb Narode; we thank them for their valuable input and encouragement.
CONCLUSION
An experimental study of a wire-wound tube-in-tube helically coiled heat exchanger was performed with hot water in the inner tube at various flow rate conditions and cooling water in the outer tube. The mass flow rates in the inner tube and in the annulus were both varied, and counter-current flow configurations were tested.
The experimentally obtained overall heat transfer coefficients (Uo) for different values of flow rate in the inner coiled tube and in the annulus region were reported. It was observed that the overall heat transfer coefficient increases with increase in the inner coiled tube flow rate for a constant flow rate in the annulus region. Similar trends in the variation of the overall heat transfer coefficient were observed for different flow rates in the annulus region at a constant flow rate in the inner coiled tube. It was also observed that, compared with a smooth tube, wire coils increase the overall heat transfer coefficient, and the coefficient increases as the pitch distance of the wire coils decreases.
The efficiency of the tube-in-tube helical coil heat exchanger is 15-20% higher than that of the conventional heat exchanger, and the experimentally calculated efficiency is 93.33%.



REFERENCES:
1. Robert Bakker, Edwin Keijsers, Hans van der Beak, "Alternative Concepts and Technologies for Beneficial Utilization of Rice Straw", Wageningen UR Food & Biobased Research, number 1176, ISBN 978-90-8585-755-6, December 31, 2009.
2. T. J. Rennie, V. G. S. Raghavan, "Experimental studies of a double-pipe helical heat exchanger", Experimental Thermal and Fluid Science 29 (2005) 919-924.
3. Dependra Uprety, Jagrit Bhusal, "Development of Heat Exchangers for Water Pasteurization in Improved Cooking".
4. V. Kumar, "Numerical studies of a tube-in-tube helically coiled heat exchanger", Chemical Engineering and Processing 47 (2008) 2287-2295.
5. P. Naphon, "Effect of coil-wire insert on heat transfer enhancement and pressure drop of the horizontal concentric tubes", International Communications in Heat and Mass Transfer 33 (2006) 753-763.
6. A. Garcia, "Experimental study of heat transfer enhancement with wire coil inserts in laminar-transition-turbulent regimes at different Prandtl numbers", International Journal of Heat and Mass Transfer 48 (2005) 4640-4651.
7. Jung-Yang San, Chih-Hsiang Hsu, Shih-Hao Chen, "Heat transfer characteristics of a helical heat exchanger", Applied Thermal Engineering 39 (2012) 114-120, Jan 2012.
8. W. Witchayanuwat, S. Kheawhom, "Heat transfer coefficients for particulate air flow in shell and coiled tube heat exchangers", International Journal of Chemical and Biological Engineering 3:1, 2010.
9. Mohamed A. Abd Raboh, Hesham M. Mostafa, "Experimental study of condensation heat transfer inside helical coil", www.intechopen.com.
10. John H. Lienhard IV, A Heat Transfer Textbook, 3rd edition.
11. Paisarn Naphon, "Effect of coil-wire insert on heat transfer enhancement and pressure drop of the horizontal concentric tubes".
12. Handbook of Heat Transfer, McGraw-Hill, Third Edition.
13. F. M. White, Heat and Mass Transfer, McGraw-Hill, Second Edition.
14. Ahmad Fakheri, "Heat Exchanger Efficiency", ASME, 1268 / Vol. 129, September 2007.














Assessment of Labour Risk in High-Rise Building
R. Kathiravan¹, G. Ravichandran¹, Dr. S. Senthamil Kumar²
¹Research Scholar (M.Tech), Construction Engineering and Management, Periyar Maniammai University, Thanjavur
kathiravancivilengg@gmail.com, 09585544664

ABSTRACT - In the recent past, infrastructural development in India has been growing at a rapid rate, and it plays a major role in the economic development of the country. There are several risks allied with the construction industry. Managing risks in construction projects has been recognized as a very important management process for achieving the project objectives in terms of time, cost, quality, safety and environmental sustainability, and project risk management has been intensively discussed in recent years. This paper aims to identify and analyze the risks associated with the development of construction projects from project stakeholder and life cycle perspectives, in terms of human safety and its effect on time and cost. This can be done by calculating the productivity rate of the labourers and analyzing the organization's needs from the workforce. This research found that these risks are mainly related to contractors and labourers who directly take part in the construction process; among them, a tight project schedule is recognized to have the highest influence on all project objectives. In this study a survey is conducted within various construction industries in Tamil Nadu, opinions at various levels of management are collected through standard questionnaires, the results are analyzed, and recommendations are provided to overcome those risks.

Keywords - risk, risk management, construction projects, labour risk, human safety, productivity, life cycle perspectives.
1. INTRODUCTION

1.1 An Overview of the Construction Industry
The construction industry is the second largest industry of the country after agriculture. It makes a significant contribution to the national economy and provides employment to a large number of people. The use of various new technologies and the deployment of project management strategies have made it possible to undertake projects of mega scale. In its path of advancement, the industry has had to overcome a number of challenges; it is still faced with some major ones, including housing, disaster-resistant construction, water management and mass transportation. Recent experiences with several new mega-projects are clear indicators that the industry is poised for a bright future. It is the second homecoming of the civil engineering profession to the forefront amongst all professions in the country.
The construction industry, with its backward and forward linkages with various other industries such as cement, steel and bricks, catalyses employment generation in the country. According to the Planning Commission, government infrastructure spending is around USD 1500 million, or Rs. 67,50,000/-, for the 11th and 12th year plans. Statistics over the period have shown that, compared to other sectors, this sector of economic activity generally creates a 4.7-times increase in incomes and a 7.76-times increase in employment generation potential. Sustained efforts by the Indian construction industry and the Planning Commission have led to construction being assigned industry status today. This means that formal planning and above-board financial planning will be the obvious destination of the construction sector in the country, with over 3.1 crore persons employed in it. The key drivers of this growth are government investment in infrastructure creation and real estate demand in the residential and industrial sectors.
There are mainly three segments in the construction industry: real estate construction, which includes residential and commercial construction; infrastructure building, which includes roads, railways, power etc.; and industrial construction, which consists of oil and gas refineries, pipelines, textiles etc. The construction activity differs from segment to segment. Construction of houses and roads involves about 75% and 60% of civil construction respectively. Building of airports and ports has construction activity in the range of 40-50%. For industrial projects, the construction component ranges between 15-20%. Within a particular sector also, the construction component varies from project to project.


2. CONCEPT OF RISK MANAGEMENT
2.1 Risk
Risk: an uncertain event or condition that results from the work, having an impact that contradicts expectations. An event is at least partially related to other parties in a business.
Risk management is recognized as an integral part of good management practice. To be most effective, risk management should become part of an organization's culture. It should be integrated into the organization's philosophy, practices and business plans rather than be viewed or practiced as a separate program. When this is achieved, risk management becomes the business of everyone in the organization. Risk management enables continual improvement in decision-making; it is as much about identifying opportunities as about avoiding or mitigating losses.
2.2 Major Human Risks in Construction Projects
- Inability to work
- Unwillingness to work
- Inadequate supervision while executing work activities
- Insufficient labourers
- Effect of severe weather conditions
- Labour and contractor issues
- Overtime work
These are some of the major factors that cause damage and risk situations on the construction site; several other factors are also involved. These factors are to be identified from the technicians' point of view and also from the labourers' point of view, so that the actual situations or factors causing the risk are identified.
2.3 Lean Approach
In the recent past, 'Lean Construction' - a philosophy based on the 'Lean Manufacturing' approaches undertaken in the automotive industry - has been applied to reduce waste and increase efficiency in construction practices. The objective of Lean Construction is to design a production system that will deliver a custom product instantly on order while maintaining no intermediate inventories. Applied to construction, 'Lean' changes the way work is done throughout the delivery process. Current construction techniques attempt to optimize the project activity by activity and pay little attention to how value is created and flows to the customer.

2.4 Work Sampling
Labour productivity has a major impact on whether a construction project is completed on time and within budget. Therefore, it is important for construction managers to improve the conditions that affect labour productivity on their jobsites. Work sampling is a method that evaluates the amount of productive, supportive, and non-productive time spent by the trade workers engaged in performing their assigned work activities. It also helps identify any trends affecting labour productivity.
Construction companies are constantly searching for ways to improve labour productivity. Since labour is one of the greatest risks in a construction contract, it must be controlled and continuously improved. The construction company with the most efficient operations has a greater chance to make more money and deliver a faster construction project to the project owner. Several factors affect labour productivity on a jobsite, such as weather conditions, workers' skill level, overcrowding of work crews, construction methods used, and material delivery/storage/handling procedures.



Table 2.1 Work sampling model (Date: 12-Sep-13)

Time   Labour   No. of  Time consumed  Total time  Man-  Team         Reason for NVA    Impact
                labour  (hrs)          consumed    days  observation
08:00  -        -       -              -           -     -            -                 -
09:00  Skilled  15      0.5            7.5         1     NVAN         Minor resources   4.1%
10:00  Skilled  190     0.5            95          12    NVA          Water & restroom  4.1%
11:00  -        -       -              -           -     -            -                 -
13:00  Skilled  190     1              190         24    NVA          Lunch             8.2%
14:00  -        -       -              -           -     -            -                 -
15:00  Skilled  190     0.5            95          12    NVA          Water & restroom  4.1%
16:00  -        -       -              -           -     -            -                 -
17:00  -        -       -              -           -     -            -                 -
18:00  Skilled  190     0.5            95          12    NVA          Snacks & tea      4.1%
19:00  Skilled  55      2              110         14    NVA          Lighting          4.8%
20:00  Skilled  190     0.5            95          12    NVA          Snacks            4.1%
21:00  Skilled  20      0.5            10          1     NVAN         Minor resources   4.1%
22:00  Skilled  190     0.5            95          12    NVA          Water & restroom  4.1%
23:00  -        -       -              -           -     -            -                 -
00:00  -        -       -              -           -     -            -                 -
Total man-days lost: 100; Total man-days per day: 285; Waste % on crew: 36%

*VA - value activities; NVA - non-value activities; NVAN - non-value activities by labour.

Table 2.2 Percentage of Activities
S.no  Activities                       Percentage
1     Value activities                 65%
2     Non-value activities             34%
3     Non-value activities by labours  1%

2.5 Last Planner System


Better planning improves productivity by reducing delays, getting the work done in the best constructability sequence,
matching manpower to available work, and coordinating multiple interdependent activities. The relationship is obvious and very
powerful: one of the most effective things you can do to improve productivity is to improve planning.


Table 2.3 Last planner log sheet



Fig 2.1 Floor level Vs Percentage Work Completed
From the graph it is inferred that the completion of the work within the cycle time is in increasing order, but not within the
specified time of completion. On average, 70% of the planned work is completed in every cycle time. This
shows that the remaining 30% of the work is completed in extra or extended time. It is inferred that there should
be a constraint for the labourers to finish the work completely within the stipulated time.
In the recent trend of the economy, real estate companies and builders want to finish the project as fast as possible so
that the consumer will be satisfied and the margin of profit will rise. The companies look to finish the project ahead of schedule
through continuous and fast working. This greatly affects the labourers in several aspects such as mental stress, health problems etc. If the
workers must act as per the company's needs, a situation of risk arises. This can create damage or even
cause loss of life.
2.6 Productivity
Productivity here means the amount of work done in a man-day or in an hour; different companies have
different productivity rates. As noted in section 2.4, labor is one of the greatest risks in a construction contract, so it must be
controlled and continuously improved, and several jobsite factors affect it. Several methods exist for measuring and
analyzing worker productivity. In this study the progress of work is monitored by video using a CCTV camera recording system.
This helps greatly in watching the progress of work without any obstruction, since the camera is located at the highest elevation
points, such as the tower crane.

[Fig 2.1 data: percentage of planned work completed per floor (floors 2-16) for Pour 1 and Pour 2; values range from 0% to 100%.]

Productivity is simply the ratio of the overall quantity of work done to the number of labourers who take part in the
completion of the work in one day or one cycle time.
PRODUCTIVITY = (total work done / number of labourers involved)
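As a worked illustration of this formula (not code from the study), the sketch below plugs in the block 9 quantities from Table 2.4 and one crew strength from Table 2.5. The man-day interpretation of the crew numbers is an assumption, and the resulting rates are illustrative rather than a reproduction of Table 2.7.

```python
# A small sketch applying the paper's productivity formula,
# PRODUCTIVITY = total work done / number of labourers involved,
# to the block 9 quantities (Table 2.4) and one crew strength (Table 2.5).
FORMWORK_QTY = 2950.0        # sq.m per pour (Table 2.4)
REBAR_QTY = 20.59 * 1000.0   # kg per pour (Table 2.4, 20.59 tons)

def productivity(total_work, man_days):
    """Work completed per man-day (sq.m/man-day or kg/man-day)."""
    return total_work / man_days

# Floor 12, pour 1: 304 carpenter man-days and 502 bar-bender man-days assumed.
print(round(productivity(FORMWORK_QTY, 304), 1), "sq.m/man-day formwork")
print(round(productivity(REBAR_QTY, 502), 1), "kg/man-day rebar")
```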
In this study the productivity rates of both the formwork and the steel work are calculated, as the structure was a typical shear
wall structure. The labourers involved in this category are carpenters and bar benders. They are accompanied by helpers so as to
help the work force complete the work within the stipulated time. The shear wall structure of the typical block 9 is built
with 2950 sq.m of formwork and 20.59 tons of rebar work. Each block of the typical floor 9 is to be filled with the same amount of
resource materials as mentioned. The concrete is prepared on the site itself, where an RMC plant was located. A necessary pep talk and
safety precautions are provided for the labourers every day before going to work by technicians and the concerned officers. These are
necessary to increase the productivity rate.
A meeting was held with the general manager of the contractor in question to describe the procedures of the work sampling
study. The data collection method was described, as well as the type of information that could be extrapolated during the analysis
phase. After the general manager was familiar with the process and the information that could be obtained from a work sampling
study, an objective was determined. The contractor wanted to have a baseline of the labor productivity for the company's profit
centers.
Table 2.4 Quantities of Work to be Done
S.no Resource Amount
1 Form work 2950 sq.m
2 Steel 20.59 tons

Table 2.5 Labour strength of block 9, concrete pour 1
Sl.no Floor Carpenters Bar benders
1 12 304 502
2 13 620 835
3 14 408 464
4 15 328 400
5 16 256 325
6 17 320 361

Table 2.6 Labour strength of block 9, concrete pour 2
Sl.no Floor Carpenters Bar benders
1 11 365 682
2 12 411 607
3 13 404 426
4 14 408 460
5 15 269 390
6 16 243 325
7 17 272 358





The above tables show the total number of workers who worked on each floor to accomplish the cycle-time project. The
work force shown above was only just sufficient to complete the work on each floor; their productivity was fully consumed in
completing block 9 of the project. There is a great fluctuation in the number of workers employed in the construction process on
every floor. Thus the required productivity rate of the workers increases due to the shortage of labourers, and this increase falls
on the heads of the labourers employed to complete the project within the stipulated time.

Table 2.7 Formwork and reinforcement productivity
Sl.no | Formwork (sq.m/man-day): Pour 1, Pour 2 | Reinforcement work (kg/man-day): Pour 1, Pour 2
1 1.7 2.0 26 27
2 1.9 2.4 30 27
3 1.6 3.3 27 35
4 3.0 3.4 46 35
5 2.4 4.7 34 44
6 2.7 4.0 34 24
7 4.9 6.2 31 35
8 3.8 5.3 36 35
9 4.3 5.2 43 46
10 5.3 6.3 49 28
11 4.9 5.4 45 27
12 6.4 3.3 27 32
13 1.1 3.7 35 32
14 4.2 3.8 30 30
15 4.6 6.4 34 35
16 7.6 8.1 42 42
17 5.3 7.8 38 38

The table shows the productivity rate of the carpenters and the bar benders on each floor, from which the average rate of productivity for
each labourer concerned with the work is calculated. It is noted that the company's target productivity rate was not achieved, as the
average productivity rate of each labourer in the work concerned is low. There may be several problems that slow or stop the
worker. A worker available for the work on one day may not be available on the next day or for the forthcoming cycle of work, as
there is an increase in the required productivity rate of the work concerned.



Fig 2.2 Floor level Vs Formwork productivity

Fig 2.3 Floor level Vs Rebar productivity
These figures show the average productivity rates of formwork fixing and reinforcement work. The average productivity rates
show that the company's target productivity was not achieved, so the project could not be completed within the stipulated time nor
the calculated margin of profit achieved. From the economic point of view, it is recognised that people want a facility as fast as possible
so that they can be satisfied; this is why the Honda Amaze car sells at a more rapid rate than other cars, because delivery is
made soon after the order and the car is also economical. In the same way, the building and real
estate industries need to satisfy market demand so that they can earn their marginal profit. It is not possible to achieve the
target with the available resources, and the availability of resources is also low. So the only way of achieving the target is by
increasing the productivity of the available labour.
It is a matter of concern that this increased labour productivity will create several risk factors that affect the labourers
concerned directly or indirectly. It may also cause accidents due to the increased productivity rate. This may affect the
entire course of the project and may even cause loss of life and injuries to the workforce concerned.
[Fig 2.2 data: formwork productivity in sq.m/man-day per floor (floors 1-17), Pour 1 and Pour 2; values as in Table 2.7.]
[Fig 2.3 data: rebar productivity in kg/man-day per floor (floors 1-17), Pour 1 and Pour 2; values as in Table 2.7.]

CONCLUSION
The increase in the economic development of the country greatly influences demand and requirements. Construction
industries, with a view to satisfying the needs of customers and achieving large margins of profit, readily agree to short
completion times for the work. This can greatly affect the labour force by increasing their required productivity rate. As there is a shortage in the
availability of construction labour, companies assign the work to the available labour and compel them to work faster to complete the
project in the stipulated time. On the other hand, this may create mental disturbance for the labourers working on the site due to the increased
productivity and increased hours of work. Working long hours, the labourers may consume drugs like pan masala and cigarettes, and
sometimes even consume liquor during the course of work. This leads to poor quality of work and makes the labourers sluggish by diverting their
concentration from the construction process. The study shows a great decrease in the availability of labour for work as the floor level goes
on increasing. This may create situations of risk and cause severe consequences in the form of collapse of structure, damage, wasted time,
injuries, loss of life and wasted money.













Optimization of Transmission Power in Ad Hoc Network
M.D. Boomija¹
¹Assistant Professor, Department of IT, Prathyusha Institute of Technology and Management, Chennai, Tamil Nadu
Email: boomija.md@gmail.com

ABSTRACT - A mobile ad-hoc network is an infrastructure-less, self-configuring network of mobile devices. Infrastructure-less
networks have no fixed router; all nodes are capable of movement and can be connected dynamically in an arbitrary manner. Nodes of
these networks function as routers which discover and maintain routes to other nodes in the network. Each device in a mobile ad hoc
network is free to move independently in any direction, and will therefore change its links to other devices frequently. The primary
challenge in building an ad hoc network is equipping each device to continuously maintain the information required to properly route traffic.
The Optimization of Mobile Ad Hoc Network System Design engine works by taking a specification of network requirements and
objectives and allocating resources which satisfy the input constraints and maximize the communication performance objective. The
tool is used to explore networking design options and challenges, including power control, flow control, mobility, uncertainty in
channel models and cross-layer design. The project covers a case study of power control analysis.
Keywords: Ad hoc network, optimization, power control, time slot, MIMO, AMPL, multi-objective optimization
I INTRODUCTION
A mobile ad-hoc network (MANET) is a self-configuring network of mobile routers. The routers are free to move randomly
and organize themselves at random, so the network's wireless topology may change rapidly and unpredictably. Such a network may
operate in a standalone fashion, or may be connected to the larger Internet. Minimal configuration and quick deployment make ad hoc
networks suitable for emergency situations like natural or human-induced disasters, military conflicts, emergency medical situations
etc.
In the Optimization of Mobile Ad hoc Network system shown in Fig 1, network design is approached as a process of optimizing variables.
The optimization of network parameters is a feedback process of optimization and performance estimation through simulation. There are two
approaches: (i) a generic solver and (ii) a specialized method. The set of control variables and objective parameters are the input to the
project. If no specialized method is available for the given problem, the solution is formulated using the AMPL modeling language,
a comprehensive and powerful algebraic modeling language for linear and nonlinear optimization problems.

[Fig 1: design problem and specification/model parameters → optimization process (generic solver or specialized method) with simulation and performance analysis feedback → resource allocations, optimized parameters and performance estimate.]

Fig 1 Mobile ad hoc network framework

II OPTIMIZATION PROBLEM

Optimization refers to the process of making a device or a collection of devices run more efficiently in terms of time and resources
(e.g., energy, memory). Optimization is a necessity for MANET management decisions due to the inherent individual and collective

resource limitations within the network. Mathematically, optimization entails minimizing or maximizing an objective function by
choosing values for the input variables from within an allowed set. An objective function is a mathematical expression made up of one
or more variables that are useful in evaluating solutions to a problem. [4]







[Fig 2: optimization problem (input scenario) → ad hoc framework → AMPL → problem solvers → resource allocation output.]
Fig 2 Optimization problem in AMPL

Mathematical explanation of optimization
Set: A = the set of feasible solutions to the objective function f
Variable: x = an element (a vector of input variables) in the set of feasible solutions A
Objective function: f = a given function
If the optimization problem calls for minimizing the results of the function, then we find an element x0 of the set A such that
f(x0) ≤ f(x) for all x ∈ A.
If the problem calls for maximizing the results, then we find an element x0 of the set A such that
f(x0) ≥ f(x) for all x ∈ A.
The elements of the allowed set, x, are combinations of variable assignments that result in a feasible solution (a solution that satisfies
all of the constraints in the optimization problem). A feasible solution that minimizes or maximizes the value of the objective function
is called an optimal solution. [6]
The first step is to find an optimization method most appropriate to the set of control variables and objectives provided as input. If no
specialized algorithm is available in the framework for the specified problem, then the problem is formulated as a mathematical
program in the AMPL modeling language as shown in Fig. 2.
An appropriate generic solver is then used to solve the program, depending on whether the objectives and constraints are linear or
nonlinear, and whether the variables are discrete or continuous. [12] The power control problem of minimizing power under a
signal-to-interference-plus-noise ratio (SINR) constraint is an example of a linear program which is optimized using this generic solver approach. If a
specialized method is available for the problem, the framework automatically uses it to find a solution. An example of a specialized
method is a heuristic packing procedure: it schedules a set of concurrent transmissions and ensures a chance for every node to transmit
at least once. [3]
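As an illustration of this linear-program formulation, the sketch below minimizes total transmit power subject to per-link SINR constraints using a generic LP solver. This is a hedged example, not the framework's code: the channel gain matrix G, the noise power and the SINR target gamma are made-up values, and scipy stands in for the AMPL/solver pipeline described above.

```python
# A hedged sketch of the power control linear program: minimize total transmit
# power subject to per-link SINR constraints. All numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

G = np.array([[1.0, 0.1, 0.2],   # G[i][j]: gain from transmitter j to receiver i
              [0.2, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
noise = 0.1
gamma = 2.0                       # required SINR per link

n = G.shape[0]
# SINR_i >= gamma  <=>  G[i,i]*p_i - gamma * sum_{j != i} G[i,j]*p_j >= gamma*noise.
# linprog expects A_ub @ p <= b_ub, so negate the constraint.
A_ub = -np.where(np.eye(n, dtype=bool), G, -gamma * G)
b_ub = -gamma * noise * np.ones(n)
res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
print("optimal transmit powers:", res.x)
```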
III RELATED WORK
The generally accepted network design cycle consists of three steps: 1) developing a network model, 2) estimating the performance of
the proposed network through simulation, and 3) manually adjusting the model parameters until an acceptable performance is achieved for
the target set of scenarios. Owing to the complexity of networks and the large number of design parameters, changes in the design of a model
may have unintended effects. This project allows the designer to control high-level objectives instead of controlling low-level decision
variables. It applies optimization theory to generalized networking models and combines existing optimization techniques with the
simulation capabilities of existing tools for the task of performance estimation.
IV SOFTWARE DESIGN
The ad hoc framework has two distinct forms: 1) an application with a graphical user interface and 2) a library with an application
programming interface. The former is an interface for human users while the latter is an interface for other programs that link against
it. One of the goals of the proposed framework as a network design tool is to provide a mechanism for comparing network
technologies. Each such model or algorithm is implemented in this framework in a modular way such that it can be swapped out for
any number of alternatives. The GUI provides a streamlined way of configuring multiple alternatives, and of comparing and testing them

through concurrent simulation and optimization. The API supports such an extension without the need for any modification; only a set of
control parameters is added with the new extension. These parameters are then automatically added to the GUI through active
generative programming.
V NETWORK DESIGN
The resources which are to be efficiently allocated on an ad hoc wireless network are naturally distributed, residing either on the nodes
or the edges of the graphs that represent the network state. The algorithms in this framework are separated into two categories: 1)
centralized and 2) distributed. The former operates on single snapshots or on a time-averaged model of the global network state. The
latter operates as a control mechanism on the node.
VI POWER CONTROL ANALYSIS
A. Introduction
Allocation of physical resources (e.g., transmission power) based on knowledge of the network state is often complicated by the
presence of uncertainty in the available information. Therefore, when the characteristics of the wireless propagation channel are highly
dynamic or only noisy measurements are available, the framework represents the uncertainty as a collection of S samples of each
channel state Hij, which represent the range of values that each channel between a transmitter and a receiver can take on. The problem
of optimally allocating resources under such a statistical representation of the channels can be solved in the proposed model by
assuming the distribution mean for each channel state or by using an optimization method which seeks to quantify the dependability of
the resource allocation solution. [1]
A fundamental problem in this optimization method is the tradeoff between feasibility and optimality. It may be interpreted to be a
multi objective optimization problem with two objectives: maintain feasibility and seek optimality. With this view in mind, a Pareto
front can be constructed to demonstrate the tradeoff between the two objectives. A network designer then only provides this
framework with 1) the requirement of sufficiently high feasibility or 2) a ceiling for the transmission power on the network.
B. Multi Objective Optimal
Multi-objective optimization (also known as multi-objective programming, vector optimization, multi-criteria optimization, multi-attribute
optimization or Pareto optimization) is an area of multiple-criteria decision making that is concerned with mathematical optimization
problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization has been applied in
many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of
trade-offs between two or more conflicting objectives. [7]
In practical problems, there can be more than three objectives. For a nontrivial multi-objective optimization problem, there is no
single solution that simultaneously optimizes each objective. In that case, the objective functions are said to be conflicting, and there
exists a (possibly infinite) number of Pareto optimal solutions. A solution is called nondominated, Pareto optimal, Pareto efficient or
noninferior if none of the objective functions can be improved in value without impairment in some of the other objective values.
Without additional preference information, all Pareto optimal solutions can be considered mathematically equally good (as vectors
cannot be ordered completely). Researchers study multi-objective optimization problems from different viewpoints and, thus, there
exist different solution philosophies and goals when setting and solving them. The goal may be finding a representative set of Pareto
optimal solutions, and/or quantifying the trade-offs in satisfying the different objectives, and/or finding a single solution that satisfies
the preferences of a human decision maker. [2]
A multi-objective optimization problem is an optimization problem that involves multiple objective functions. In mathematical terms,
a multi-objective optimization problem can be formulated as [5]
min ( f1(x), f2(x), ..., fk(x) ) subject to x ∈ X,
where the integer k ≥ 2 is the number of objectives and the set X is the feasible set of decision vectors defined by constraint
functions. In addition, the vector-valued objective function is often defined as [8]
f : X → R^k, f(x) = ( f1(x), ..., fk(x) )^T.
If some objective function is to be maximized, it is equivalent to minimize its negative. The image of X under f is denoted by Y = f(X).
An element x* ∈ X is called a feasible solution or a feasible decision. A vector z* = f(x*) ∈ R^k for a feasible
solution x* is called an objective vector or an outcome. In multi-objective optimization, there does not typically exist a feasible
solution that minimizes all objective functions simultaneously. [9] Therefore, attention is paid to Pareto optimal solutions, i.e.,
solutions that cannot be improved in any of the objectives without impairment in at least one of the other objectives. In mathematical
terms, a feasible solution x1 ∈ X is said to (Pareto) dominate another solution x2 ∈ X if
1. fi(x1) ≤ fi(x2) for all indices i ∈ {1, ..., k}, and
2. fj(x1) < fj(x2) for at least one index j ∈ {1, ..., k}. [7]
A solution x* (and the corresponding outcome f(x*)) is called Pareto optimal if there does not exist another solution that
dominates it. The set of Pareto optimal outcomes is often called the Pareto front. [11]
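The dominance definition above translates directly into code. The following minimal Python sketch (illustrative, not from the paper) filters a set of outcome vectors down to its Pareto front under minimization of all objectives; the sample outcomes are invented.

```python
# A minimal sketch of the Pareto-dominance definition above
# (all objectives to be minimized). Sample outcomes are illustrative.
def dominates(z1, z2):
    """True if objective vector z1 Pareto-dominates z2."""
    return all(a <= b for a, b in zip(z1, z2)) and any(a < b for a, b in zip(z1, z2))

def pareto_front(outcomes):
    """Outcomes not dominated by any other outcome."""
    return [z for z in outcomes if not any(dominates(o, z) for o in outcomes if o != z)]

# Example with two objectives, e.g. (total transmit power, infeasibility):
outcomes = [(1.0, 0.30), (1.5, 0.10), (2.0, 0.10), (0.8, 0.50)]
print(pareto_front(outcomes))   # -> [(1.0, 0.3), (1.5, 0.1), (0.8, 0.5)]
```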
VII IMPLEMENTATION

[Fig 3: Input (design problem, scenario specification, model parameters, optimized parameters, controllable resources) → ad hoc network → neighbor node detection → resource allocation (time slots via a centralized time-slot algorithm; packet dynamics via rate control; power control via a robust power control algorithm) → Output (resource allocation, visualization of sampling results).]

Fig 3 Framework Architecture
The framework in Fig. 3 takes model parameters, optimized parameters, and controllable resources such as power and time as input. The
nodes are created with initialized parameters such as distance and bandwidth. Each node detects its neighboring nodes automatically within
its range. Multiple nodes are created, and any two nodes are selected as source and destination. N nodes are deployed randomly on a surface

uniformly. In a conventional multi-hop transmission, each source communicates with its intended destination through multiple
intermediate nodes (hops). The scheme guarantees each node at least one chance to transmit and guarantees that concurrent transmissions succeed.
Unlike the majority of works that involve transmitting data through different clusters, we propose a cluster size that adapts dynamically
throughout the network. In the first stage, a source node of a MIMO link performs a local transmission at a given rate to a subset of its
neighbors. This is followed by the simultaneous transmission of encoded versions of the same message by the cooperating neighbors,
including the source, to the destination of the MIMO link. The system presents a joint power and rate control adaptive
algorithm to optimize the trade-off between power consumption and throughput in ad hoc networks. Each node chooses its own
transmission power and rate based on limited environment information in order to achieve optimal transmission efficiency. Figs 4-7
show the node creation and optimal path finding process used to send packets from the source node to the destination node.



Fig 4 Node 1 Creation Fig 5 Multiple Node Creation

Fig 6 Path Creation Fig 7 Send Content

A Simulation and Results

A trade-off (or tradeoff) is a situation that involves losing one quality or aspect of something in return for gaining another quality or
aspect. Pareto efficiency, or Pareto optimality, is a state of allocation of resources in which it is impossible to make any one individual
better off without making at least one individual worse off. The Pareto front formed from solving the power control problem allows the
network designer to choose an operating point based on the prioritization of the two objectives, transmit power and channel feasibility.
We first look at a single mobile network where the uncertainty in the channel state (represented by the set of samples of each channel)
comes from the changing topology due to the movement of the nodes. We then look at the effect of considering only a fraction of the
nearest interferers at each active receiver.



Fig 8 Optimization of Mobile Ad Hoc Network simulation for a 3-node topology
Fig 8 shows the throughput in packets per time slot under slotted Aloha for three transmitters as a function of the system contention
rate (defined as Np for N = 3 users, with p the per-user contention probability). The figure shows the theoretical throughput under the
channel collision model (error-free reception iff a transmitter is the sole transmitter in the slot), the measured throughput using the
WARP testbed when the nodes employ 64-QAM, and the simulated throughput under the framework when the nodes operate in the
collision channel model.
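The theoretical collision-model curve in Fig 8 follows from a standard result: a slot succeeds only when exactly one of the N users transmits, giving throughput S = Np(1 - p)^(N-1). The snippet below (an illustration, not the testbed code) evaluates this for N = 3 at a few contention rates.

```python
# Theoretical slotted-Aloha throughput under the collision model:
# a slot succeeds iff exactly one of the N users transmits.
def aloha_throughput(N, p):
    return N * p * (1 - p) ** (N - 1)

N = 3
for Np in (0.5, 1.0, 1.5, 2.0):      # system contention rate Np, as in Fig 8
    p = Np / N
    print(f"Np = {Np:.1f}: throughput = {aloha_throughput(N, p):.3f} packets/slot")
```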

B Pareto Optimal Tradeoff

The simulation setup is a mobile network of 100 nodes in a 1 km² square arena. The nodes are placed uniformly at random. The
nodes then move under a random waypoint model [10] at a speed of 2 m/s for a duration of 1 second, during which the channel
sampling process is performed. The transmission packing procedure is performed once and results in 10 unicast transmitter-receiver
pairs. The interference set I1 in (1) is used for computing the SINR; in other words, in this case, every active transmitter is defined in
the optimization problem as a potential source of interference. A single simulation run for 100 nodes with a full interference set takes
approximately 1 second to execute on a 2.4 GHz processor.
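For readers unfamiliar with the mobility model, the sketch below is a compact, assumption-laden implementation of the random waypoint model [10] used in this setup (zero pause time assumed): each node repeatedly picks a uniform random waypoint in the arena and moves toward it at constant speed.

```python
# A compact sketch of the random waypoint model (zero pause time assumed).
# Arena size (1 km) and speed (2 m/s) follow the simulation setup above.
import math, random

def random_waypoint(steps, arena=1000.0, speed=2.0, dt=1.0):
    """Yield (x, y) positions of one node, one step per `dt` seconds."""
    x, y = random.uniform(0, arena), random.uniform(0, arena)
    wx, wy = random.uniform(0, arena), random.uniform(0, arena)
    for _ in range(steps):
        dist = math.hypot(wx - x, wy - y)
        if dist < speed * dt:                       # waypoint reached: pick a new one
            x, y = wx, wy
            wx, wy = random.uniform(0, arena), random.uniform(0, arena)
        else:                                       # move toward the waypoint
            x += speed * dt * (wx - x) / dist
            y += speed * dt * (wy - y) / dist
        yield (x, y)

positions = list(random_waypoint(steps=10))
```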

The bottom curve in Fig. 8 shows a Pareto front of solutions produced by the framework. Given the network topology, this solution set
provides the network designer a range of optimal transmission power allocations. The designer can then choose one of these solutions
based on the relative value of the power objective versus the feasibility objective.

C Effect of Shrinking the Interference Set

The set of channel state samples is collected and optimized over for that single network, producing a Pareto front of solutions. In this
section, we consider the effect of k in the interference set Ikj . This set is the set of k closest interferers to active receiver j.





Fig 9 Tradeoff between the feasibility objective and the optimality objective (minimizing the total transmit power)


Fig. 9 shows the effect of decreasing k from the maximum of 9 to 1 for the single network described in the previous section. This plot
provides the intuition that increasing k has diminishing returns; this pattern is examined more closely in this section. For the
objective function in the optimization problem, a more limited interference set is used. The importance of keeping k small and
independent of network size is twofold. First, a small constant k significantly reduces the complexity of the optimization problem, as
the size of the SINR computation in (1) no longer depends on the number of active transmitters and is thus independent of network
size. Second, a constant k removes the need for every transmitter-receiver pair to have channel state information from every other
interfering transmitter on the network to this pair's receiver. Therefore, the significant overhead of sharing this information between
nodes is removed, allowing distributed power control approaches to make resource allocation decisions without first gathering
channel state information from the whole network.
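The sketch below illustrates the idea of the truncated interference set Ikj: only the k nearest interferers to an active receiver enter the SINR denominator. It is an assumed toy model, not the framework's API; positions, transmit powers and the distance-based path-loss gain are invented.

```python
# An illustrative toy model of restricting the SINR computation to the k
# closest interferers of an active receiver. All values are assumptions.
import math

def sinr_with_k_interferers(rx, tx, interferers, k, power=1.0, noise=1e-9, alpha=3.0):
    """SINR at receiver rx from transmitter tx, counting only the k nearest interferers."""
    gain = lambda a, b: math.dist(a, b) ** (-alpha)        # simple path-loss gain
    nearest = sorted(interferers, key=lambda n: math.dist(n, rx))[:k]
    interference = sum(power * gain(n, rx) for n in nearest)
    return power * gain(tx, rx) / (noise + interference)

rx, tx = (0.0, 0.0), (10.0, 0.0)
others = [(50.0, 20.0), (80.0, -30.0), (200.0, 150.0)]
for k in (1, 2, 3):                 # diminishing effect of adding far interferers
    print(k, sinr_with_k_interferers(rx, tx, others, k))
```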
VIII CONCLUSION
Power control in ad-hoc networks is a more difficult problem due to the non-availability of an access point in the network. The power
control problem is harder for two reasons. First, in ad-hoc networks, a node can be both a data source and a router that forwards data for other
nodes, involving it in high-level routing and control protocols; additionally, the roles of a particular node may change over
time. Second, there is no centralized entity such as an access point to control and maintain the power control mode of each node in the
network. The power control analysis of the mobile ad hoc network system shows the tradeoffs and optimization approaches implemented
in the framework. The method for finding an optimized power allocation solves the power control problem. The empirical result
indicates that only a small number of nearest interfering transmitters have a significant effect on the feasibility of a channel.
REFERENCES:
[1] A. Fridman, S. Weber, C. Graff, D. E. Breen, K. R. Dandekar, and M. Kam, "OMAN: A Mobile Ad Hoc Network Design System," IEEE Transactions on Mobile Computing, vol. 11, no. 7, pp. 1179-1191, July 2012.
[2] R. Amin, S. Ashrafch, M. B. Akhtar, and A. A. Khan, "Analyzing performance of ad hoc network mobility models in a peer-to-peer network application over mobile ad hoc network," Proc. 2010 Int'l Conf. Electronics and Information Engineering (ICEIE), vol. 2, 2010.
[3] G. Varaprasad, "Power Aware and Signal Strength Based Routing Algorithm for Mobile Ad Hoc Networks," Proc. 2011 Int'l Conf. Communication Systems and Network Technologies (CSNT), pp. 131-134, 2011.
[4] M. Das, B. K. Panda, and B. Sahu, "Performance analysis of effect of transmission power in mobile ad hoc network," Proc. Ninth Int'l Conf. Wireless and Optical Communications Networks (WOCN), pp. 1-5, 2012.
[5] Lin, "Multiple-Objective Problems: Pareto-Optimal Solutions by Method of Proper Equality Constraints," IEEE Trans. Automatic Control, vol. AC-21, no. 5, pp. 641-650, Oct. 1976; S. Agarwal, R. Katz, S. Krishnamurthy, and S. Dao, "Distributed Power Control in Ad-Hoc Wireless Networks," Proc. 12th IEEE Int'l Symp. Personal, Indoor and Mobile Radio Communications, vol. 2, 2001.
[6] Y. Chen, G. Yu, P. Qiu, and Z. Zhang, "Power aware cooperative relay selection strategies in wireless ad-hoc networks," Proc. IEEE Int'l Symp. Personal, Indoor and Mobile Radio Communications, pp. 1-5, 2006.
[7] Y. K. Hassan, M. H. Abd El-Aziz, and A. S. Abd El-Radi, "Performance Evaluation of Mobility Speed over MANET Routing Protocols," Int'l Journal of Network Security, vol. 11, no. 3, pp. 128-138, Nov. 2010.
[8] P. Gupta and P. Kumar, "Critical Power for Asymptotic Connectivity in Wireless Networks," Stochastic Analysis, Control, Optimization and Applications: A Volume in Honor of W. H. Fleming, pp. 547-566, Springer, 1998.
[9] T. Camp, J. Boleng, and V. Davies, "A Survey of Mobility Models for Ad Hoc Network Research," Wireless Communications and Mobile Computing, vol. 2, no. 5, pp. 483-502, 2002.
[10] X. Jia, D. Kim, S. Makki, P. Wan, and C. Yi, "Power Assignment for k-Connectivity in Wireless Ad Hoc Networks," J. Combinatorial Optimization, vol. 9, no. 2, pp. 213-222, 2005.
[11] F. Dai and J. Wu, "On Constructing k-Connected k-Dominating Set in Wireless Networks," Proc. 19th IEEE Int'l Parallel and Distributed Processing Symp., 2005.
[12] N. Pradhan and T. Saadawi, "Adaptive distributed power management algorithm for interference-aware topology control in mobile ad hoc networks," Proc. IEEE Global Telecommunications Conference (GLOBECOM 2010), 2010.

Analysis of Design of Cotton Picking Machine in view of Cotton Fibre Strength
Nikhil Gedam¹
¹Research Scholar (M.Tech), Raisoni College of Engineering, affiliated to RTM Nagpur University
Email: nikhilgedam8388@gmail.com

ABSTRACT - The mechanical cotton picker is a machine that automates cotton harvesting in a way that reduces harvest time and
maximizes efficiency. The mechanical cotton picker was developed with the intent of replacing manual labor. The first pickers were only
capable of harvesting one row of cotton at a time, but were still able to replace up to forty hand laborers.
The current cotton picker is a self-propelled machine that removes cotton lint and seed (seed-cotton) using rows of barbed
spindles that rotate at high speed and remove the seed-cotton from the plant. The seed-cotton is then removed from the spindles by a
counter-rotating doffer and is blown up into the basket; the machine picks the plant at up to six rows at a time. The picker or spindle-type machine
was designed to pick the open cotton from the bolls using spindles, fingers, or prongs, without injuring the plant's foliage and
unopened bolls.
Cotton picking by a spindle-type machine increases short fibre content and degrades micronaire and fibre length, which indirectly lowers the
fibre strength quality compared with hand picking. To overcome this problem, a cotton picking machine based on suction can be made
to apply a pressure equal to that of hand picking (about 100 gm).
Keywords: cotton fibre, cotton harvesting, cotton fibre properties, cotton fibre testing, pneumatic cotton picking machine
Introduction
Cotton is primarily grown for its fiber, and its reputation and attraction are the natural feel and light weight of cotton fabrics. Heavy
competition from synthetic fibers dictates that continued improvement is needed in cotton fiber properties. There is then an
opportunity to exploit cotton fiber's advantages and enhance its reputation by improving and preserving its fiber qualities through the
growing and processing value chain to match the properties sought by cotton spinners, who require improved fiber properties:
longer, stronger, finer, more uniform and cleaner, to reduce waste, allow more rapid spinning to reduce production costs, and allow
better fabric and garment manufacture. Cotton fibers are naturally variable and it is a challenge to manage this variability. Our
experience with variability in fiber quality shows a substantial range across seasons and irrigated sites for micronaire (35%), with
lesser ranges for length and strength (<7%); note that lint yield had a 58% range across the same data set. If rain-grown systems are
included in such an analysis, yield and fiber length have a larger range due to moisture stress. Fiber strength is mostly affected by
cultivar unless the fiber is very immature.
To ensure the best realization of fiber quality from a cotton crop, the best combination of cultivar, management, climate and
processing is required. For example, if you start with a cultivar with poor fiber quality, there is nothing that can be done with
management and processing to make the quality better. However, if you start with a cultivar with good fiber quality traits, there is
some insurance against unfavorable conditions, but careful management and processing are still required to preserve quality.
Historically there have generally been greater production problems for low-micronaire (assumed immature) cotton, especially when
grown in relatively short-season production areas having a cooler and wetter finish to the season. The response by cotton breeders can
be to select for a higher micronaire in parallel with high yield during cultivar development. Given a negative association between yield
and fiber fineness (Price 1990), such a breeding strategy could produce cultivars with coarse and immature fibers, exactly the
opposite of the combination required by spinners. Thus, although more difficult, it is clear the breeding strategy should be to ensure selection
for intermediate micronaire with fine and mature fibers. Therefore separate measurement of fineness and maturity is important.
These require specialized instruments.
There are many measurements of cotton fiber quality and a corresponding range of measuring instruments. The more common
instruments in commercial use and in marketing are of the high volume type and this paper will concentrate on values measured on

Uster High Volume Instrumentation (HVI) or equivalent. The range of measurements includes fiber length (and its components
uniformity and short fiber content); fiber strength (and elongation or extension); fiber micronaire (and fineness and maturity); and grade
(including color, trash, neps, and seed coat fragments). This paper will concentrate on fiber length, fiber strength and micronaire. All other
measurements are acknowledged as being important in many circumstances, but we will use length, strength and micronaire to
represent the effects that various factors such as cultivar, management, climate or processing may have on fiber quality. We aim to
review opportunities for breeding, management and processing to optimize fiber quality under commercial practice.
Material and methods

Cotton harvesting by machine
Spindle-type cotton picking machines remove the cotton from open bolls.
The spindles, which rotate on their axes at high speed, are attached to a drum that also turns, causing the spindles to enter the plant.
The cotton fibre is wrapped around the moistened spindles and then taken off by a special device called the doffer, from which the
cotton is delivered to a large basket carried above the machine. During wrapping of the cotton fibre around the spindle bars, the fibre
is stretched, which results in a loss of fibre quality in terms of short fibre content: the short fibre content and trash are likely to
increase, degrading the cotton fibre characteristics.

BASIC FIBRE CHARACTERISTICS:
A textile fibre is a peculiar object. It has no truly fixed length, width, thickness, shape or cross-section. The growth of natural fibres and the
production factors of man-made fibres are responsible for this situation. An individual fibre, if examined carefully, will be seen to vary
in cross-sectional area along its length. This may be the result of variations in growth rate, caused by dietary, metabolic, nutrient-supply,
seasonal, weather, or other factors influencing the rate of cell development in natural fibres. Surface characteristics also play
some part in increasing the variability of fibre shape: the scales of wool, the twisted arrangement of cotton, the nodes appearing at
intervals along the cellulosic natural fibres, etc.
Following are the basic characteristics of cotton fibre:
- fibre length
- fineness
- strength
- maturity
- rigidity
- fibre friction
- structural features
STANDARD ATMOSPHERE FOR TESTING:
The atmosphere in which physical tests on textile materials are performed. It has a relative humidity of 65 ± 2 per cent and a
temperature of 20 ± 2 °C. In tropical and sub-tropical countries, an alternative standard atmosphere for testing, with a relative humidity
of 65 ± 2 per cent and a temperature of 27 ± 2 °C, may be used.
FIBRE LENGTH:
The "length" of cotton fibres is a property of commercial value as the price is generally based on this character. To some extent it is

true, since, other factors being equal, longer cottons give better spinning performance than shorter ones. But the length of a cotton is an
indefinite quantity, as the fibres, even in a small random bunch of cotton, vary enormously in length. Following are the various
measures of length in use in different countries:
- mean length
- upper quartile
- effective length
- Modal length
- 2.5% span length
- 50% span length
Mean length:
It is the estimated quantity which theoretically signifies the arithmetic mean of the length of all the fibres present in a small but
representative sample of the cotton. This quantity can be an average according to either number or weight.
Upper quartile length:
It is that value of length for which 75% of all the observed values are lower, and 25% higher.
Effective length:
It is difficult to give a clear scientific definition. It may be defined as the upper quartile of a numerical length distribution from which the
shortest fibres are eliminated by an arbitrary construction; the fibres eliminated are shorter than half the effective length.
Modal length:
It is the most frequently occurring length of the fibres in the sample, and it is related to the mean and median of skew distributions, such as
that exhibited by fibre length, in the following way:

(Mode-Mean) = 3(Median-Mean)
where,
Median is the particular value of length above and below which exactly 50% of the fibres lie.
2.5% Span length:
It is defined as the distance spanned by 2.5% of fibres in the specimen being tested when the fibres are parallelized and randomly
distributed and where the initial starting point of the scanning in the test is considered 100%. This length is measured using
"DIGITAL FIBROGRAPH".
50% Span length:
It is defined as the distance spanned by 50% of fibres in the specimen being tested when the fibres are parallelized and randomly
distributed and where the initial starting point of the scanning in the test is considered 100%. This length is measured using
"DIGITAL FIBROGRAPH".
The South India Textile Research Association (SITRA) gives the following empirical relationships to estimate the Effective Length
and Mean Length from the Span Lengths.
Effective length = 1.013 x 2.5% Span length + 4.39
Mean length = 1.242 x 50% Span length + 9.78
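These two empirical relationships are straightforward to apply; the snippet below transcribes them directly (inputs in mm, as in the formulas above; the example span lengths are invented for illustration).

```python
# Direct transcription of the SITRA empirical relationships quoted above,
# converting Fibrograph span lengths (in mm) to effective and mean length.
def effective_length(span_2_5):
    return 1.013 * span_2_5 + 4.39

def mean_length(span_50):
    return 1.242 * span_50 + 9.78

# Example: a cotton with 2.5% span length 28 mm and 50% span length 13 mm.
print(effective_length(28.0))   # ~32.8 mm
print(mean_length(13.0))        # ~25.9 mm
```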
FIBRE LENGTH VARIATION:
Even though the long and short fibres both contribute towards the length irregularity of cotton, the short fibres are particularly
responsible for increasing the waste losses, and cause unevenness and reduction in strength in the yarn spun. The relative proportions
of short fibres are usually different in cottons having different mean lengths; they may even differ in two cottons having nearly the

same mean fibre length, rendering one cotton more irregular than the other. It is therefore important that, in addition to the fibre length
of a cotton, the degree of irregularity of its length should also be known. Variability is denoted by any one of the following attributes:
1. Coefficient of variation of length (by weight or number)
2. Irregularity percentage
3. Dispersion percentage and percentage of short fibres
4. Uniformity ratio
Uniformity ratio is defined as the ratio of 50% span length to 2.5% span length expressed as a percentage. Several instruments and
methods are available for determination of length. Following are some
- shirley comb sorter
- Baer sorter
- A.N. Stapling apparatus
- Fibrograph
Uniformity ratio = (50% span length / 2.5% span length) x 100
Uniformity index = (mean length / upper half mean length) x 100
SHORT FIBRES:
The negative effects of the presence of a high proportion of short fibres are well known. A high percentage of short fibres is usually
associated with:
- Increased yarn irregularity and ends down which reduce quality and increase processing costs
- Increased number of neps and slubs which is detrimental to the yarn appearance
- Higher fly liberation and machine contamination in spinning, weaving and knitting operations.
- Higher wastage in combing and other operations.
While the detrimental effects of short fibres have been well established, there is still considerable debate on what constitutes a 'short
fibre'. In the simplest way, short fibres are defined as those fibres which are less than 12 mm long. Initially, an estimate of the short
fibres was made from the staple diagram obtained in the Baer Sorter method


Short fibre content = (UB/OB) x 100
While such a simple definition of short fibres is perhaps adequate for characterising raw cotton samples, it is too simple a definition to
use with regard to the spinning process. The settings of all spinning machines are based on either the staple length of fibres or its
equivalent, which does not take into account the effect of short fibres. In this regard, the concept of the 'Floating Fibre Index' defined by
Hertel (1962) can be considered a better parameter for considering the effect of short fibres on spinning performance. Floating fibres
are defined as those fibres which are not clamped by either pair of rollers in a drafting zone.
The Floating Fibre Index (FFI) is defined as
FFI = ((2.5% span length / mean length) - 1) x 100
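The uniformity ratio and FFI formulas above can be combined into a small helper, sketched below; the example span lengths and mean length are invented values for illustration.

```python
# A small sketch combining the uniformity ratio and Floating Fibre Index
# formulas given above; inputs are span lengths and mean length in mm.
def uniformity_ratio(span_50, span_2_5):
    return 100.0 * span_50 / span_2_5

def floating_fibre_index(span_2_5, mean_len):
    return 100.0 * (span_2_5 / mean_len - 1.0)

print(uniformity_ratio(13.0, 28.0))          # ~46.4 %
print(floating_fibre_index(28.0, 22.0))      # ~27.3
```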
The proportion of short fibres has an extremely great impact on yarn quality and production, and it has
increased substantially in recent years due to mechanical picking and hard ginning. In most cases the absolute short fibre
proportion is specified today as the percentage of fibres shorter than 12 mm. The Fibrograph is the most widely used instrument in the
textile industry; some information regarding the Fibrograph is given below.
FIBROGRAPH:
Fibrograph measurements provide a relatively fast method for determining the length uniformity of the fibres in a sample of cotton in
a reproducible manner.
Results of Fibrograph length tests do not necessarily agree with those obtained by other methods for measuring lengths of cotton fibres
because of the effect of fibre crimp and other factors.
Fibrograph tests are more objective than commercial staple length classifications and also provide additional information on the fibre
length uniformity of cotton fibres. The cotton quality information provided by these results is used in research studies and quality
surveys, in checking commercial staple length classifications, in assembling bales of cotton into uniform lots, and for other purposes.
Fibrograph measurements are based on the assumptions that a fibre is caught on the comb in proportion to its length as compared to
the total length of all fibres in the sample, and that the point of catch for a fibre is at random along its length.


FIBRE FINENESS:
Fibre fineness is another important quality characteristic which plays a prominent part in determining the spinning value of cotton. If
the same count of yarn is spun from two varieties of cotton, the yarn spun from the variety having finer fibres will have a
larger number of fibres in its cross-section, and hence it will be more even and strong than that
spun from the sample with coarser fibres.
Fineness denotes the size of the cross-sectional dimensions of the fibre. As the cross-sectional features of cotton fibres are irregular,
direct determination of the area of cross-section is difficult and laborious. The index of fineness which is more commonly used is the
linear density, or weight per unit length, of the fibre. The unit in which this quantity is expressed varies in different parts of the world.
The common unit used by many countries for cotton is micrograms per inch, and the various air-flow instruments developed for
measuring fibre fineness are calibrated in this unit.
Following are some methods of determining fibre fineness.
- gravimetric or dimensional measurements
- air-flow method
- vibrating string method
Some of the above methods are applicable to single fibres while the majority of them deal with a mass of fibres. As there is
considerable variation in the linear density from fibre to fibre, even amongst fibres of the same seed, single-fibre methods are
time-consuming and laborious, as a large number of fibres have to be tested to get a fairly reliable average value.
It should be pointed out here that most fineness determinations are likely to be affected by fibre maturity, which is another
important characteristic of cotton fibres.
AIR-FLOW METHOD (MICRONAIRE INSTRUMENT):

The resistance offered to the flow of air through a plug of fibres is dependent upon the specific surface area of the fibres. Fineness
testers have evolved on this principle for determining the fineness of cotton. The specific surface area which determines the flow of
air through a cotton plug is dependent not only upon the linear density of the fibres in the sample but also upon their maturity. Hence
the micronaire readings have to be treated with caution, particularly when testing samples varying widely in maturity.
In the micronaire instrument, a weighed quantity of 3.24 g of well-opened cotton sample is compressed into a cylindrical container
of fixed dimensions. Compressed air is forced through the sample at a definite pressure, and the volume rate of flow of air is measured
by a rotameter-type flowmeter. The sample for the micronaire test should be well opened, cleaned and thoroughly mixed (by the hand fluffing
and opening method). Of the various air-flow instruments, the micronaire is robust in construction, easy to operate and presents
little difficulty as regards its maintenance.
FIBRE MATURITY:
Fibre maturity is another important characteristic of cotton and is an index of the extent of
development of the fibres. As is the case with other fibre properties, the maturity of cotton fibres varies not only between fibres of
different samples but also between fibres of the same seed. The differences observed in maturity are due to variations in
the degree of secondary thickening, or deposition of cellulose, in a fibre.
A cotton fibre consists of a cuticle, a primary layer and secondary layers of cellulose surrounding the lumen or central canal. In the
case of mature fibres, the secondary thickening is very high, and in some cases the lumen is not visible. In the case of immature
fibres, due to some physiological causes, the secondary deposition of cellulose has not taken place sufficiently, and in extreme cases the
secondary thickening is practically absent, leaving a wide lumen throughout the fibre. Hence, to a cotton breeder, the presence of
excessive immature fibres in a sample would indicate some defect in plant growth. To a technologist, the presence of an excessive percentage of
immature fibres in a sample is undesirable, as this causes excessive waste losses in processing, lowering of the yarn appearance grade due to
formation of neps, uneven dyeing, etc.
An immature fibre will show a lower weight per unit length than a mature fibre of the same cotton, as the former will have less
deposition of cellulose inside the fibre. This analogy can be extended in some cases to fibres belonging to different samples of cotton
also. Hence it is essential to measure the maturity of a cotton sample in addition to determining its fineness, to check whether the
observed fineness is an inherent characteristic or a result of immaturity.

DIFFERENT METHODS OF TESTING MATURITY:
MATURITY RATIO:
The fibres, after being swollen with 18% caustic soda, are examined under the microscope at suitable magnification. The fibres are
classified into different maturity groups depending upon the relative dimensions of wall thickness and lumen. However, the procedures
followed in different countries for sampling and classification differ in certain respects. The swollen fibres are classed into three
groups as follows:
1. Normal: rod-like fibres with no convolutions and no continuous lumen are classed as "normal".
2. Dead: convoluted fibres with wall thickness one-fifth or less of the maximum ribbon width are classed as "dead".
3. Thin-walled: the intermediate ones are classed as "thin-walled".
A combined index known as the maturity ratio is used to express the results:
Maturity ratio = ((N - D)/200) + 0.70
where N is the percentage of normal fibres and D is the percentage of dead fibres.
MATURITY CO-EFFICIENT:
Around 100 fibres from the Baer sorter combs are spread across the glass slide (maturity slide) and the overlapping fibres are
separated with the help of a teasing needle. The free ends of the fibres are then held in the clamp on the second strip of the maturity
slide, which is adjustable to keep the fibres stretched to the desired extent. The fibres are then irrigated with 18% caustic soda solution
and covered with a suitable slip. The slide is then placed on the microscope and examined. Fibres are classed into the following three
categories:
1. Mature: (lumen width L)/(wall thickness W) is less than 1
2. Half mature: (lumen width L)/(wall thickness W) is less than 2 and more than 1
3. Immature: (lumen width L)/(wall thickness W) is more than 2
About four to eight slides are prepared from each sample and examined. The results are presented as percentages of mature,
half-mature and immature fibres in a sample. The results are also expressed in terms of a "maturity coefficient":
Maturity coefficient = (M + 0.6H + 0.4I)/100
where M is the percentage of mature fibres, H is the percentage of half-mature fibres and I is the percentage of immature fibres.
If the maturity coefficient is
- less than 0.7, the cotton is called immature
- between 0.7 and 0.9, it is called medium mature
- above 0.9, it is called mature
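The two maturity measures and the classification rule above can be transcribed directly into code, as in the sketch below; the example percentages are invented.

```python
# A sketch implementing the two maturity measures defined above, plus the
# stated classification of the maturity coefficient. Inputs are percentages.
def maturity_ratio(normal_pct, dead_pct):
    return (normal_pct - dead_pct) / 200.0 + 0.70

def maturity_coefficient(mature_pct, half_mature_pct, immature_pct):
    return (mature_pct + 0.6 * half_mature_pct + 0.4 * immature_pct) / 100.0

def classify(coefficient):
    if coefficient < 0.7:
        return "immature cotton"
    return "medium mature cotton" if coefficient <= 0.9 else "mature cotton"

mc = maturity_coefficient(68.0, 22.0, 10.0)   # illustrative percentages
print(mc, classify(mc))                        # 0.852 medium mature cotton
```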
AIR FLOW METHOD FOR MEASURING MATURITY:
There are other techniques for measuring maturity using the micronaire instrument. As the fineness value determined by the micronaire
depends both on the intrinsic fineness (perimeter of the fibre) and on the maturity, it may be assumed that if the intrinsic fineness is
constant then the micronaire value is a measure of the maturity.
DYEING METHODS:
Mature and immature fibres differ in their behaviour towards various dyes. Certain dyes are preferentially taken up by the mature fibres, while some dyes are preferentially absorbed by the immature fibres. Based on this observation, a differential dyeing technique was developed in the United States of America for estimating the maturity of cotton. In this technique, the sample is dyed in a bath
containing a mixture of two dyes, namely Diphenyl Fast Red 5 BL and Chlorantine Fast Green BLL. The mature fibres take up the red dye preferentially, while the thin-walled immature fibres take up the green dye. An estimate of the average maturity of the sample can be assessed visually from the proportion of red and green fibres.
FIBRE STRENGTH:
The different measures available for reporting fibre strength are
1. breaking strength
2. tensile strength and
3. tenacity or intrinsic strength
Coarse cottons generally give higher values for fibre strength than finer ones. In order to compare the strength of two cottons differing in fineness, it is necessary to eliminate the effect of the difference in cross-sectional area by dividing the observed fibre strength by the fibre weight per unit length. The value so obtained is known as "intrinsic strength" or "tenacity". Tenacity is found to be better related to spinning than the breaking strength.
The strength characteristics can be determined either on individual fibres or on bundles of fibres.
SINGLE FIBRE STRENGTH:
The tenacity of a fibre depends upon the following factors: chain length of the molecules in the fibre, orientation of the molecules, size of the crystallites, distribution of the crystallites, gauge length used, rate of loading, type of instrument used, and atmospheric conditions.
The mean single fibre strength is expressed in units of grams/tex. As the unit for tenacity has the dimension of length only, this property is also expressed as the "breaking length", which can be considered as the length of the specimen equivalent in weight to the breaking load. Since tex is the mass in grams of 1000 metres of the material, the breaking length in kilometres is numerically equal to the tenacity in grams/tex.
Uniformity
Length uniformity is the ratio between the mean length and the upper half mean length of the cotton fibres within a sample. It is measured on the same beards of cotton that are used for measuring fibre length and is reported as a percentage. The higher the percentage, the greater the uniformity. If all the fibres in the sample were of the same length, the mean length and the upper half mean length would be the same, and the uniformity index would be 100. The following tabulation can be used as a guide in interpreting length uniformity results. Measurements are performed by HVI. Cotton with a low uniformity index is likely to have a high percentage of short fibres and may be difficult to process.
Length uniformity index
Descriptive Designation Length Uniformity
Very Low Below 77
Low 77 - 79
Average 80 - 82
High 83 - 85
Very High Above 85
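
As an illustration of how the tabulation is applied, a minimal Python sketch follows (the lengths are hypothetical; in practice the HVI instrument reports the index directly):

    def uniformity_index(mean_length, upper_half_mean_length):
        # Uniformity index = 100 * (mean length / upper half mean length)
        return 100.0 * mean_length / upper_half_mean_length

    def designation(ui):
        # Bands from the tabulation above (index rounded to whole numbers)
        if ui < 77: return "Very Low"
        if ui < 80: return "Low"
        if ui < 83: return "Average"
        if ui <= 85: return "High"
        return "Very High"

    ui = uniformity_index(24.0, 29.5)        # hypothetical lengths, mm
    print(round(ui, 1), designation(ui))     # 81.4 Average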
Result
Length, uniformity ratio, elongation, strength and Micronaire can be determined for a cotton fibre by analysing its basic characteristics with the HVI-900 testing machine.

Cotton harvested by hand was tested on the HVI-900 with Gossypium hirsutum samples, as follows:


Property         S1     S2     S3     S4     S5
Length, mm       30.1   29.9   29.87  30.5   30.2
Uniformity, %    54.20  53.21  53.9   50.9   52.9
Strength, g/tex  29.20  28.9   29.1   29.5   27.65
Elongation, %    5.6    5.8    5.0    5.4    5.3
S.F.I., %        9.2    9.7    9.2    9.4    9.0
Micronaire       4.5    4.5    4.4    4.5    4.4

Cotton harvested by machine was tested on the HVI-900 with Gossypium hirsutum samples, as follows:

Property         S1     S2     S3     S4     S5
Length, mm       26.9   26.4   27.2   27     26.93
Uniformity, %    51     50.3   51.2   51.30  50.15
Strength, g/tex  26.2   26.8   26.9   27.10  26.44
Elongation, %    5.4    5.6    4.7    5.1    5.0
S.F.I., %        13.0   12.9   12.60  12.90  12.55
Micronaire       4.1    4.1    4.0    4.2    4.0

CONCLUSION

The purpose of this study was to evaluate the impact of the harvesting method on cotton fibre quality. Although it is practical to use a spindle harvester designed to pick cotton from wider row spacings, machine harvesting damages fibre quality compared to hand picking. To overcome this problem, cotton should be harvested with nearly the same picking pressure as is imparted in hand picking; for that purpose, a pneumatic cotton picking machine with a suction mechanism was developed to pick cotton of good quality.

REFERENCES:

[1] M. Bülent Coşkun, "Determination of variety effect of a simple cotton picking machine on design parameters," Geliş Tarihi: 12.07.2002.
[2] Judith M. Bradow and Gayle H. Davidonis, "Quantitation of fiber quality and the cotton production-processing interface: A physiologist's perspective."
[3] David D. McAlister III and Clarence D. Rogers, "The effect of harvesting procedures on fiber and yarn quality of ultra-narrow-row cotton."
[4] T. A. Gemtos and Th. Tsiricoglou, "Harvesting of cotton residue for energy production," Laboratory of Farm Mechanisation, University of Thessaly, Pedio Areos, 38334 Volos, Greece; TEI of Larissa, 41110 Larissa, Greece.
[5] M. Muthamilselvan, K. Rangasamy, R. Manian and K. Kathirvel, "Physiological cost analysis for the operation of knapsack cotton picker in India," Karnataka J. Agric. Sci., 19(3): 622-627, 2006.
[6] M. K. Sharma and Lav Bajaj, "Recent advances in ginning for lowering cost and improving efficiency," Bajaj Steel Industries Limited, Nagpur, India.
[7] Warren R. Bailey, "Mechanical cotton harvesting: harvesting costs, value of field waste and grade-loss contribute to economics of machine-picking of cotton," Agricultural Economist, Bureau of Agricultural Economics, United States Department of Agriculture.
[8] Ghith Adel, Fayala Faten, Abdeljelil Radhia, "Assessing cotton fiber maturity and fineness by image analysis," National Engineering School of Monastir, Monastir, Tunisia; Laboratoire des Etudes des Systèmes Thermiques et Energétiques, LESTE, ENIM, Tunisia.
[9] Dev R. Paudel, Eric F. Hequet, Noureddine Abidi, "Evaluation of cotton fiber maturity measurements," Fiber and Biopolymer Research Institute, Texas Tech University, Lubbock, TX 79409, USA.
[10] R. S. Krowicki, J. M. Hemstreet, and K. E. Duckett, "A different approach to generating the fibrogram from fiber-length-array data, Part I: Theory," Southern Regional Research Center, ARS, USDA, New Orleans, LA, USA; The University of Tennessee, Knoxville, TN, USA; received 19.11.1992, accepted for publication 29.2.1996.
[11] X. Wang, X. Liu and C. Hurren, "Physical and mechanical testing of textiles," Deakin University, Australia.
[12] Joseph G. Montalvo, Jr., "Relationships between Micronaire, fineness, and maturity, Part I: Fundamentals," The Journal of Cotton Science 9:81-88 (2005).

A Review on: Comparison and Analysis of Edge Detection Techniques
Parminder Kaur¹, Ravi Kant²
¹Department of ECE, PTU, RBIEBT, Kharar
²Assistant Professor and Head, ECE Department, RBIEBT, Kharar
E-mail: pinki_sidhu81@yahoo.com

ABSTRACT - The author has compared the different edge detection techniques on real images in the presence of noise, calculating the signal-to-noise ratio. Edge detection is a tool used in shape, colour and contrast detection, image segmentation, scene analysis, etc. Edge detection provides information related to the intensity changes at a point of an image. In this paper, various edge detection techniques are compared and their visual performance in noisy conditions is analysed, using the methods Canny, LoG (Laplacian of Gaussian), Robert, Prewitt, Sobel, Laplacian and wavelet. These methods exhibit different performance under such conditions.

Keywords - Edge detection, image processing.

INTRODUCTION
Edges can be defined as significant local changes of intensity at a point in an image; they are formed by connecting groups of pixels that lie on the boundary between two different regions of the image. The first derivative is used to find a local maximum at the point of an edge. The gradient is used to measure the intensity changes at a particular point of an edge; it can be expressed in two terms, the gradient magnitude and the gradient orientation.
The objective here is to compare various edge detection techniques and to analyse their performance under different conditions. There are different methods used to perform edge detection, and the majority of them may be grouped into two categories [1]. In this paper only 1-D and 2-D edge detection techniques are used.

EDGE DETECTION TECHNIQUES
Sobel Operator
It was introduced by Irwin Edward Sobel in 1970. The operator consists of a pair of 3×3 convolution kernels, as shown in Figure 1; one kernel is simply the other rotated by 90°. A convolution kernel provides a way to multiply two arrays of numbers of different sizes but the same dimensionality. This can be used to implement operators in digital image processing whose output pixel values are simple linear combinations of certain input pixel values. The kernel is a small matrix of numbers used in image convolutions; different-sized kernels containing different patterns of numbers give rise to different results under convolution. Convolution is done by moving the kernel across the frame one pixel at a time: each pixel and its neighbours are weighted by the corresponding values in the kernel and summed to produce a new value. The Gx and Gy components of the gradient are calculated by differencing the rows and the columns, respectively. The gradient magnitude is given by:

G = √(Gx² + Gy²)

Typically, an approximate magnitude is computed using:

|G| = |Gx| + |Gy|

Fig. 1: Sobel convolution kernels: Gx = [-1 0 +1; -2 0 +2; -1 0 +1], Gy = [+1 +2 +1; 0 0 0; -1 -2 -1]

This is much faster to compute. The operator detects thicker edges, responding to horizontal and vertical gradients; it is not designed to detect the diagonal edges of an image. The kernels are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, with orientation 0° corresponding to maximum contrast from black to white. The Sobel operator performs a 2-D spatial gradient measurement on an image and so emphasizes regions of high spatial frequency that correspond to edges. Typically it is used to find the approximate absolute gradient magnitude at each point in an input grayscale image [1]. The angle of orientation of the edge (relative to the pixel grid) giving rise to the spatial gradient is given by:
θ = arctan(Gy/Gx)
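
A minimal NumPy/SciPy sketch of the Sobel convolution and gradient computation described above (the tiny test image is synthetic, and scipy.signal.convolve2d stands in for a hand-written convolution loop):

    import numpy as np
    from scipy.signal import convolve2d

    # Sobel kernels from Fig. 1
    Gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    Gy = np.array([[ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1]])

    img = np.zeros((8, 8))
    img[:, 4:] = 255.0                      # synthetic vertical step edge

    gx = convolve2d(img, Gx, mode='same')   # response to horizontal change
    gy = convolve2d(img, Gy, mode='same')   # response to vertical change

    mag = np.abs(gx) + np.abs(gy)           # fast approximation |G| = |Gx| + |Gy|
    theta = np.arctan2(gy, gx)              # edge orientation

    print(mag[4, 3], mag[4, 4])             # strong responses straddle the edge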
Roberts cross operator: It was introduced by Lawrence G. Roberts in 1965. This type of detector is very sensitive to noise and is pixel-based. The Roberts cross operator performs a simple, quick-to-compute, 2-D spatial gradient measurement on an image. It highlights regions of high spatial frequency, which often correspond to edges.




Fig. 2: Roberts cross convolution kernels: Gx = [+1 0; 0 -1], Gy = [0 +1; -1 0]
In its most common usage, the input to the operator is a grayscale image. Pixel values at each point in the output represent the estimated absolute magnitude of the spatial gradient of the input image at that point. The operator consists of a pair of 2×2 convolution kernels, in which only addition and subtraction take place; one kernel is simply the other rotated by 90°. This is
very similar to the Sobel operator. In this detector the parameters are fixed and cannot be changed. Convolution is done by moving the kernel across the frame one pixel at a time; at each position the pixel and its neighbours are weighted by the corresponding values in the kernel and summed to produce a new value. The Gx and Gy components of the gradient are calculated by differencing the rows and the columns. The gradient magnitude is given by:
|G| = √(Gx² + Gy²)

The angle of orientation of the edge giving rise to the spatial gradient (relative to the pixel grid orientation) is θ = arctan(Gy/Gx) - 3π/4 [2].

PREWITT'S OPERATOR:
It was introduced in 1970 by Judith M. S. Prewitt. It is similar to the Sobel edge detector but uses different masks. Prewitt is less sensitive to noise than the Roberts edge detector. It is used in image processing for edge detection [3].





Fig. 3: Masks for the Prewitt edge detector: Gx = [-1 0 +1; -1 0 +1; -1 0 +1], Gy = [+1 +1 +1; 0 0 0; -1 -1 -1]

The Prewitt operator is based on convolving the image with a small, separable, integer-valued filter in the horizontal and vertical directions, and it is inexpensive in terms of computation. Convolution is done by moving the kernel across the frame one pixel at a time; each pixel and its neighbours are weighted by the corresponding values in the kernel and summed to produce a new value. The operator calculates the gradient of the image intensity at each point, giving the direction of the largest possible increase from light to dark and the rate of change in that direction. The result shows how abruptly or smoothly the image changes at that point.

LAPLACIAN OF GAUSSIAN:
It was introduced by David Marr and Ellen C. Hildreth. It is the combination of a Laplacian and a Gaussian, so the image is smoothed to a greater extent. The Laplacian of an image highlights regions of rapid intensity change and is therefore often used for edge detection (zero-crossing edge detectors) [4]. This second-order-derivative detector is also known as the Marr-Hildreth edge detector. The Laplacian is often applied to an image that has first been smoothed with something approximating a Gaussian smoothing filter in order to reduce its sensitivity to noise, and the two variants are described together here. The operator normally takes a single grey-level image as input and produces another grey-level image as output. This pre-processing step reduces the high-frequency noise components prior to the differentiation step. Here x is the distance from the origin along the horizontal axis, y is the distance from the origin along the vertical axis, and σ is the spread of
the Gaussian and controls the degree of smoothing: the greater the value of σ, the broader the Gaussian filter and the more the smoothing. The commonly used small kernels are shown in Figure 4.
LoG(x, y) = -(1/(πσ⁴)) [1 - (x² + y²)/(2σ²)] exp(-(x² + y²)/(2σ²))

Fig. 4: Three commonly used discrete approximations to the Laplacian filter [5-9], e.g., [1 1 1; 1 -8 1; 1 1 1] and [-1 2 -1; 2 -4 2; -1 2 -1]
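
A small sketch that builds a discrete LoG kernel by sampling the formula above (the σ and kernel size are illustrative choices):

    import numpy as np

    def log_kernel(size=9, sigma=1.4):
        # Sample LoG(x, y) on a size x size grid centred on the origin
        r = np.arange(size) - size // 2
        x, y = np.meshgrid(r, r)
        q = (x**2 + y**2) / (2.0 * sigma**2)
        k = -(1.0 / (np.pi * sigma**4)) * (1.0 - q) * np.exp(-q)
        return k - k.mean()    # force zero sum so flat regions give no response

    print(log_kernel(size=5, sigma=1.0).round(4))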

CANNY EDGE DETECTION
The Canny operator was designed to be an optimal edge detector. It was introduced by John F. Canny in 1986. It takes a grey-scale image as input and produces as output an image showing the positions of the tracked intensity discontinuities.

WORKING:
It uses a maximum and a minimum threshold: if the gradient magnitude of a pixel lies between the two thresholds, it is set to zero unless there is a path from that pixel to a pixel with a gradient above T2. The edge strength is found by taking the gradient of the image; the mask used for the Canny edge detector can be a Sobel or Roberts mask [10-12]. The magnitude or edge strength of the gradient is approximated using the formula:
|G| = |Gx| + |Gy|
The edge direction is found using:
θ = arctan(Gy/Gx)
Convolution is done by moving the kernel across the frame one pixel at a time; at each position the pixel and its neighbours are weighted by the corresponding values in the kernel and summed to produce a new value.
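
A compact NumPy sketch of the double-threshold (hysteresis) step just described, assuming the gradient magnitude has already been computed; t_low and t_high correspond to the minimum and maximum thresholds, and the implementation is illustrative rather than optimized:

    import numpy as np

    def hysteresis(mag, t_low, t_high):
        # Pixels above t_high are strong edges; pixels between the thresholds
        # survive only if 8-connected to a strong pixel, otherwise set to zero.
        strong = mag >= t_high
        weak = (mag >= t_low) & ~strong
        keep = strong.copy()
        while True:
            grown = np.zeros_like(keep)
            grown[1:, :] |= keep[:-1, :];  grown[:-1, :] |= keep[1:, :]
            grown[:, 1:] |= keep[:, :-1];  grown[:, :-1] |= keep[:, 1:]
            grown[1:, 1:] |= keep[:-1, :-1];  grown[:-1, :-1] |= keep[1:, 1:]
            grown[1:, :-1] |= keep[:-1, 1:];  grown[:-1, 1:] |= keep[1:, :-1]
            new = keep | (weak & grown)
            if new.sum() == keep.sum():
                return new
            keep = new

    mag = np.array([[0, 5, 0], [0, 9, 0], [0, 5, 0]], dtype=float)
    print(hysteresis(mag, t_low=4, t_high=8).astype(int))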

LAPLACIAN EDGE DETECTION METHOD:

The Laplacian method searches for zero crossings, i.e., points whose second-derivative value changes sign relative to the surrounding values. The Laplace operator is named after the French mathematician Pierre-Simon de Laplace (1749-1827). The zero crossings in the second derivative of the image are used to find edges.

The Laplacian L(x, y) of an image with pixel intensity values I(x, y) is given by:

L(x, y) = ∂²I/∂x² + ∂²I/∂y²

The image is often smoothed with a Gaussian filter beforehand to remove the noise [13].

Fig. 5: Mask of Laplacian: [0 1 0; 1 -4 1; 0 1 0]


HAAR WAVELET:

It was introduced by Alfréd Haar in 1910. A wavelet is a combination of small waves. When it is applied to an image, it provides the approximation and detail coefficient information of the image. The detail coefficients, which contain the high-frequency information, are used to detect the edges, while the approximation coefficients contain the low-frequency information. The transform decomposes the discrete signal into sub-signals of half its length: one sub-signal is a running average and the other is a running difference. There are other types of wavelets, but the Haar wavelet is the origin of the others, and it can also be used as an edge detection method. The Haar wavelet consists of rapid fluctuations between just two non-zero values with an average value of 0; the 1-level Haar wavelet represents the 1-level fluctuations [12].
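
A one-level Haar decomposition of a discrete signal into its running average and running difference, as described above (a minimal sketch; the 1/√2 scaling follows the orthonormal Haar convention, and the signal length is assumed even):

    import numpy as np

    def haar_1level(signal):
        s = np.asarray(signal, dtype=float)
        a = (s[0::2] + s[1::2]) / np.sqrt(2)   # running average: low-frequency trend
        d = (s[0::2] - s[1::2]) / np.sqrt(2)   # running difference: high-frequency detail
        return a, d

    a, d = haar_1level([4, 6, 10, 12, 8, 6, 5, 5])
    print(a)   # approximation coefficients
    print(d)   # detail coefficients: large magnitudes flag abrupt changes (edges)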

ADVANTAGES AND DISADVANTAGES OF EDGE DETECTORS:
Edge detection is an important tool that provides information related to shape, colour, size, etc. The aim is to find the true edges to get better results from the matching process; that is why it is necessary to choose the edge detector that best fits the application [14-21].






TABLE 1: ADVANTAGES AND DISADVANTAGES OF EDGE DETECTORS

Operator: Classical (Sobel, Prewitt, Robert)
Advantages: Simplicity; easy to implement; detects edges and their directions.
Disadvantages: Sensitive to noise; inaccurate.

Operator: Zero crossing (Laplacian, second directional derivative)
Advantages: Detects edges and their directionality; fixed characteristics in all directions.
Disadvantages: Responds to only some of the existing edges.

Operator: Laplacian of Gaussian (LoG) (Marr-Hildreth)
Advantages: Finds the correct places of edges; tests a broad area around the pixel; emphasizes pixels where intensity changes take place.
Disadvantages: Malfunctions at corners, curves and where the grey-level intensity function varies; does not find the direction of the edge because of the Laplacian filter.

Operator: Gaussian (Canny)
Advantages: Uses probability for finding the error rate; good localization and response; improves the signal-to-noise ratio; provides better detection, especially in noisy conditions.
Disadvantages: Complex to compute; false zero crossings; time consuming.

Comparison of various edge detection techniques
Edge detection of all seven types was performed as in Fig. 6. Prewitt provided better results compared to the other methods, but on the noisy images it cannot provide good results.

Fig. 6: Comparison of edge detection techniques on the college image

Fig. 8: Comparison of edge detection techniques on the noisy clock image
CONCLUSION
The 1-D edge detection methods include the Sobel, Prewitt and Roberts edge detectors, and the 2-D methods include the Laplacian, Laplacian of Gaussian, optimal edge detector and wavelets; these are used to find the optimum edge detection technique. In the results on the college image, the horizontal, vertical and diagonal edges are properly detected by the Prewitt edge detector. The LoG and Canny detectors also provide better results than the other methods, even on low-quality images, while the results on the noisy clock images are better with the Canny edge detector than with the other methods. Different detectors are useful for different qualities of image. In the future, hybrid techniques can be used for better results.
REFERENCES:

[1] J. Matthews, "An introduction to edge detection: The Sobel edge detector," available at http://www.generation5.org/content/2002.im01.im01.asp, 2002.
[2] L. G. Roberts, "Machine perception of 3-D solids," ser. Optical and Electro-Optical Information Processing, MIT Press, 1965.
[3] E. R. Davies, "Constraints on the design of template masks for edge detection," Pattern Recognition Lett., vol. 4, pp. 111-120, Apr. 1986.
[4] Mamta Juneja, Parvinder Singh Sandhu, "Performance evaluation of edge detection techniques for images in spatial domain," International Journal of Computer Theory and Engineering, vol. 1, no. 5, December 2009, pp. 614-621.
[5] V. Torre and T. A. Poggio, "On edge detection," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-8, no. 2, pp. 147-163, Mar. 1986.
[6] W. Frei and C.-C. Chen, "Fast boundary detection: A generalization and a new algorithm," IEEE Trans. Comput., vol. C-26, no. 10, pp. 988-998, 1977.
[7] W. E. Grimson and E. C. Hildreth, "Comments on 'Digital step edges from zero crossings of second directional derivatives'," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-7, no. 1, pp. 121-129, 1985.
[8] R. M. Haralick, "Digital step edges from zero crossing of the second directional derivatives," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6, no. 1, pp. 58-68, Jan. 1984.
[9] E. Argyle, "Techniques for edge detection," Proc. IEEE, vol. 59, pp. 285-286, 1971.
[10] J. F. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-8, no. 6, pp. 679-697, 1986.
[11] J. Canny, "Finding edges and lines in images," Master's thesis, MIT, 1983.
[12] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Prentice Hall, 2002.
[13] Kumar Parasuraman and Subin P.S., "SVM based license plate recognition system," 2010 IEEE International Conference on Computational Intelligence and Computing Research.
[14] Olivier Laligant and Frederic Truchetet, "A nonlinear derivative scheme applied to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, February 2010, pp. 242-257.
[15] Mohammadreza Heydarian, Michael D. Noseworthy, Markad V. Kamath, Colm Boylan, and W. F. S. Poehlman, "Detecting object edges in MR and CT images," IEEE Transactions on Nuclear Science, vol. 56, no. 1, February 2009, pp. 156-166.
[16] Olga Barinova, Victor Lempitsky, and Pushmeet Kohli, "On detection of multiple object instances using Hough transforms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 9, September 2012, pp. 1773-1784.
[17] G. Robert Redinbo, "Wavelet codes for algorithm-based fault tolerance applications," IEEE Transactions on Dependable and Secure Computing, vol. 7, no. 3, July-September 2010, pp. 315-328.
[18] Sang-Jun Park, Gwanggil Jeon, and Jechang Jeong, "Deinterlacing algorithm using edge direction from analysis of the DCT coefficient distribution," IEEE Transactions on Consumer Electronics, vol. 55, no. 3, August 2009, pp. 1674-1681.
[19] Sonya A. Coleman, Bryan W. Scotney, and Shanmugalingam Suganthan, "Edge detecting for range data using Laplacian operators," IEEE Transactions on Image Processing, vol. 19, no. 11, November 2010, pp. 2814-2824.
[20] Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik, "Contour detection and hierarchical image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 5, May 2011, pp. 898-916.
[21] Abbas M. Al-Ghaili, Syamsiah Mashohor, Rahman Ramli, and Alyani Ismail, "Vertical-edge-based car-license-plate detection method," IEEE Transactions on Vehicular Technology, vol. 62, no. 1, January 2013, pp. 26-38.
[22] Vinh Dinh Nguyen, Thuy Tuong Nguyen, Dung Duc Nguyen, Sang Jun Lee, and Jae Wook Jeon, "A fast evolutionary algorithm for real-time vehicle detection," IEEE Transactions on Vehicular Technology, vol. 62, no. 6, July 2013, pp. 2453-2468.

Steganography & Biometric Security Based Online Voting System
Shweta A. Tambe¹, Nikita P. Joshi¹, P. S. Topannavar¹
¹Scholar, ICOER, Pune, India
E-mail: suchetakedari@gmail.com

ABSTRACT - An online voting system helps to manage elections easily and securely. With the help of steganography, one can provide biometric as well as password security for the voter's account. The system decides whether the voter is the correct person or not. The system uses the voter's fingerprint image as the cover image and embeds the voter's secret data into the image using steganography. This method produces a stego image that looks identical to the original fingerprint image: on the whole there are changes between the original fingerprint image and the stego image, but they are not visible to the human eye.
Keywords - Biometric, Cover, Fingerprint, Online, Password, Steganography, Security
INTRODUCTION
An election is an official process by which people choose an individual to hold public office. The elected person should satisfy all the necessary needs of the common people so that the system of the whole country works properly. The main requirements of an election system are authentication, speed, accuracy and safety. The voting system should be speedy so that the valuable time of voters as well as of the people conducting the vote is saved. Accuracy means the whole system should be accurate with respect to the result. Safety involves a secure environment around the election area so that voters are not under any force. In an online voting system, the main aim is to focus on the security of the voter's account. For any type of voting system, threats such as the following must be taken into consideration: confusing or misleading voters about how to vote, violation of the secret ballot, ballot stuffing, tampering with voting machines, voter registration fraud, failure to validate voter residency, fraudulent tabulation of results, and use of physical force or verbal intimidation at polling places. If an online voting system works well, it will be a good improvement over the current system. In the next sections, the proposed methodology, database creation & embedding of the secret data, the online voting system, recognition of the embedded message, and analysis of the system are explained.
PROPOSED METHODOLOGY

The methodology combines steganography with biometric security. Fundamentally there are various types of steganography: text, audio, image and video. Images are the most popular cover media used for steganography. In many applications, the most important requirement for steganography is security, which means that the stego image should be visually and statistically very similar to its corresponding cover image. Nowadays steganographic systems use images as cover objects because people often send digital images by email, so using an image for steganography is a good choice. After digitization, images contain quantization noise, which provides space to hide data.
When images are used as the cover, they are generally manipulated by changing one or more bits of the image. With the help of least significant bit (LSB) insertion, the system hides the message: as the LSBs of an image carry little information, one can easily hide personal data by replacing those bits with message bits. To work with the system, each person is provided with a PAN (Personal Authentication Number); this is like a serial number allocated to every person. The system also needs the thumb impression of all voters as a cover image. Finally, at the time of account creation, a secret key is given to each voter, which the voter should hide from every other person.
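
A minimal sketch of LSB substitution on 8-bit pixel values, assuming the secret data (e.g., a PAN number) has already been serialized into a bit list; all names here are illustrative:

    import numpy as np

    def embed_lsb(pixels, bits):
        # Replace the least significant bit of the first len(bits) pixels
        flat = pixels.flatten().astype(np.uint8)   # flatten() returns a copy
        for i, b in enumerate(bits):
            flat[i] = (flat[i] & 0xFE) | b         # clear the LSB, then set the message bit
        return flat.reshape(pixels.shape)

    def extract_lsb(pixels, n_bits):
        return [int(p) & 1 for p in pixels.flatten()[:n_bits]]

    cover = np.random.randint(0, 256, (4, 4), dtype=np.uint8)  # stand-in cover image
    message_bits = [1, 0, 1, 1, 0, 0, 1, 0]
    stego = embed_lsb(cover, message_bits)
    assert extract_lsb(stego, len(message_bits)) == message_bits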
Assuming that all the above data is collected from every voter, the system works as follows. First the voter signs in to the account with the help of the voter's account identification number. The voter is then asked to give the thumb impression, and then to enter the secret key for PAN number decryption from the embedded fingerprint image in the database. Finally the voter has to enter the PAN number. If a PAN number match is found, the voter is an authenticated person & can cast a vote. The account is then closed for that person; once closed, it cannot be opened a second time, so fraudulent cases such as duplicate voting are avoided in the online voting system. After the vote is cast, the count is incremented for that political candidate.

A. DATABASE CREATION & EMBEDDING PROCESS

For database creation, a voter committee should be appointed. The committee members' job is to collect the data from each person. Every voter should have an account identification number to maintain the account, a PAN number for voter authentication, and a secret key as a password for cross-verification of the database. As shown in Fig. 1, the fingerprint image block takes the fingerprint image of the voter as input, and the PAN number block accepts the personal authentication number as input. The steganography block performs steganography on the personal authentication number, and the resulting stego image is saved as the database image. Different aspects of data-hiding systems are of great concern, such as capacity and security. Capacity means the amount of data that can be hidden in the cover object; security means an eavesdropper's failure to detect hidden information. We have concentrated our focus on security.








Figure 1: Block Diagram for Database Creation (fingerprint image and PAN number feed the steganography block, whose stego-image output is stored in the database)
The fingerprint image should be plain; it acts as the cover image after data hiding, so the cover image for each voter is the voter's own fingerprint image. Prior to the least significant bit insertion, the system uses the discrete wavelet transform: with the help of the Haar transform, the fingerprint image is transformed from the spatial domain to the frequency domain. For 2-D images, the Haar transform processes the image with 2-D filters in each dimension; the filters divide the input image into four non-overlapping sub-bands. The discrete wavelet transform is built from realizations of low-pass and high-pass filters.
It is one of the simplest and most basic transformations from the time (spatial) domain to a frequency domain. First the Haar transform converts the fingerprint input image into four non-overlapping sub-bands LL, LH, HL, HH, as shown in Fig. 2(a), where L stands for a low-frequency band (LL is at the upper left corner) and H stands for a high-frequency band (HH is at the lower right corner). With the help of the LSB insertion technique, the PAN number is embedded into the LL sub-band. The fingerprint image after PAN number embedding is shown in Fig. 2(b) as the embedded image. Compared with the Fourier basis, which differs only in frequency, the Haar function varies in both scale and position.
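
A sketch of this transform-domain embedding with the PyWavelets library, assuming the fingerprint image is a 2-D array: the Haar DWT yields the four sub-bands, the message bits modify the parity of rounded LL coefficients (an assumed, simple stand-in for the paper's LSB rule), and the inverse transform produces the stego image:

    import numpy as np
    import pywt

    img = np.random.randint(0, 256, (8, 8)).astype(float)   # stand-in fingerprint
    LL, (LH, HL, HH) = pywt.dwt2(img, 'haar')               # four sub-bands

    bits = [1, 0, 1, 1]
    flat = LL.flatten()
    for i, b in enumerate(bits):
        c = int(round(flat[i]))
        flat[i] = (c & ~1) | b        # coefficient parity carries one message bit
    stego = pywt.idwt2((flat.reshape(LL.shape), (LH, HL, HH)), 'haar')

    # Recovery: repeat the forward transform and read the parity back
    LLr, _ = pywt.dwt2(stego, 'haar')
    recovered = [int(round(v)) & 1 for v in LLr.flatten()[:len(bits)]]
    print(recovered)   # [1, 0, 1, 1]

The parity quantization perturbs the pixel values only slightly, which is the imperceptibility property discussed in the analysis section below.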


Figure 2: (a) Four sub-bands of DWT; (b) Embedded image
Applying a discrete wavelet transform to images, much of the signal energy lies at low frequencies, appearing in the upper left corner of the transform; this property of energy compaction is exploited in the embedding procedure. Embedding is achieved by inserting the secret data into a set of discrete wavelet transform coefficients, thus ensuring the invisibility of the personal authentication number (PAN). The combination of the fingerprint image & PAN number, namely the stego image, is produced with the
help of the LSB insertion technique. It is assumed that embedding the message in this way does not destroy the information of the original image to a great extent. A secret key is separately provided to each voter along with the PAN number; the voter should remember it in order to use it at the time of online voting. After completion of all these steps, the database creation for the voter is complete. This task is performed for each person.

B. ONLINE VOTING SYSTEM

At the time of online voting, as shown in Fig. 3, the voter is first asked for the voter's account identification number so that the voter's election account is opened. The voter is then asked to give the fingerprint image followed by the secret key. If the secret key is correct, the PAN number decryption & recognition is carried out with the help of the discrete wavelet transform: the transform is applied to the embedded fingerprint image in order to get the embedded PAN number. Then the voter is asked to enter the PAN number; after comparing both PAN numbers, if a match is found, the voter is an authenticated person & can cast a vote.

Figure 3: Online Voting System (flow: voter account ID → fingerprint image & secret key → PAN decryption & recognition → enter PAN number → voting panel)

C. RECOGNITION OF EMBEDDED MESSAGE

The result of the embedding process is a stego image; recognition includes extraction of the PAN number from the stego image. For recognition, the discrete wavelet transform is applied to extract the hidden message from the database image, as shown in Fig. 4. Principal component analysis (PCA) is used for fingerprint recognition: it is a way of identifying the patterns of a fingerprint image in order to highlight their similarities & differences. PCA is a useful method with applications in face recognition, image compression and pattern finding, so the system uses PCA for finding fingerprint patterns. The PCA representation is expressed through eigenvalues & eigenvectors; the system finds the variance, the covariance matrix & the eigenvalues. To find these parameters, one should know about the standard deviation, covariance, eigenvectors and eigenvalues. Variance measures how much the data spreads in a data set; it is the square of the standard deviation. Covariance is always measured between two dimensions. Given a set of data points, we decompose its covariance structure into eigenvectors and eigenvalues; every eigenvector has a corresponding eigenvalue.











Figure 4: Block Diagram for Extraction Process (stego image → read stego image → apply DWT to divide it into 4 sub-bands → extract secret message → secret message)
The eigenvector with the highest eigenvalue is therefore the principal component. Finally, comparison is done to find a match using the Euclidean distance; if a match is found between the database image & the test image, then the voter is an authorized person.
RESULT & ANALYSIS

This system uses an account identification number to maintain the voter's account, a fingerprint image as biometric security, a PAN number for authentication & a secret key for cross-verification of the database. Thus the system provides multilevel security, which is the advantage over the earlier election system; hence fraudulent cases such as duplicate voting are prevented.
Steganographic Performance
Basically, the least significant bit insertion technique is a method of data hiding by direct replacement, i.e., a spatial-domain technique, but it has disadvantages such as low robustness to modifications made to the stego image & low imperceptibility. Hiding data in the transform domain is a great benefit that overcomes the robustness and imperceptibility problems found in the LSB substitution techniques. The proposed system was applied to fingerprint images each time & achieved satisfactory results. The performance of the proposed technique can be evaluated in terms of a comparison between the quality of the stego image & the original image; the comparison was done on the basis of imperceptibility.
Imperceptibility measures how much distortion was caused by data hiding in the cover image, that is, the quality of the image; a higher-quality stego image means a more invisible hidden message. We can evaluate the stego-image quality by using the Peak Signal-to-Noise Ratio (PSNR). The PSNR is used as a quality measure between the cover image & the stego image: the higher the PSNR, the better the quality of the stego image. Typical values for the PSNR are between 30 and 50 dB for a bit depth of 8 bits. The PSNR for an M × N image I and its noisy approximation K is calculated by
PSNR = 10 log10 (255² / MSE)

and

MSE = (1/MN) Σ_{i=0}^{M-1} Σ_{j=0}^{N-1} [I(i, j) - K(i, j)]²

where MSE is the mean square error.
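
The two formulas above translate directly into a short function (a minimal Python sketch for 8-bit grayscale arrays):

    import numpy as np

    def psnr(cover, stego):
        # MSE = (1/MN) * sum((I - K)^2);  PSNR = 10*log10(255^2 / MSE)
        diff = cover.astype(float) - stego.astype(float)
        mse = np.mean(diff ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

Higher values indicate a stego image closer to the cover; the 40.92 dB and 41.45 dB figures quoted below fall inside the typical 30-50 dB band.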

The PSNR was calculated for each stego image & ranged from 30 to 50 dB, which gives reasonable visual quality of the stego image. After applying the above formula, one can compare the PSNR of the fingerprint images: the actual PSNR observed for fingerprint image 1 is 40.92 dB and for image 2 it is 41.45 dB; likewise, the PSNR of all fingerprint images can be calculated. In general, any steganography technique works either in the spatial or in the frequency domain. Spatial-domain techniques are easy to create and design and give an ideal reconstruction in the absence of noise; several techniques have been put forward in the spatial domain, such as embedding utilizing the luminance components, manipulating the least significant bits, and image differencing. But using the spatial domain is not that safe, as it hides the secret data directly. In the frequency domain, on the other hand, the cover image is subjected to a transformation into the frequency domain, where detailed manipulation of the coefficients without perceptible degradation of the cover image is possible. Thus the system uses two stages to hide data: first the transformation of the fingerprint image from the spatial domain to the frequency domain & then manipulation of the least significant bits with the LSB insertion technique. Thus the frequency-domain technique is the better approach for hiding data.
CONCLUSION
Considering the difficulty of elections, the system provides adequate proof of authenticity in terms of biometric protection as well as multilevel security. The security level of the system is greatly enhanced by the idea of using each individual's fingerprint image as the cover image. The fingerprint image and PAN number have been used to obtain a high degree of authenticity. This methodology does not give any clue for searching for predictable modifications in the cover image. Countries with large populations have to invest large amounts of money, time and manpower in the voting set-up, but with an online voting system all the mentioned problems are reduced to a great extent.
REFERENCES:
[1] Shivendra Katiyar, Kullai Reddy Meka, Ferdous A. Barbhuiya, Sukumar Nandi, "Online Voting System Powered by Biometric Security Using Steganography," Second International Conference on Emerging Applications of Information Technology, 2011.
[2] William Stallings, Cryptography and Network Security: Principles and Practices, Third Edition, pp. 67-68 and 317-375, Prentice Hall, 2003.
[3] Sutaone, M.S. and Khandare, M.V., "Image based steganography using LSB insertion technique," IEEE WMMN, pp. 146-151, January 2008.
[4] J. Samuel Manoharan, Dr. Kezi C. Vijila, A. Sathesh, "Performance Analysis of Spatial & Frequency Domain," (4); Issue (3).
[5] Lindsay I. Smith, "A Tutorial on Principal Component Analysis," February 26, 2002.
[6] R. El Safy, H. H. Zayed, and A. El Dessouki, "An Adaptive Steganographic Technique Based on Integer Wavelet Transform."
[7] Mohit Kr. Srivastava, Sharad Kr. Gupta, Sushil Kushwaha, Brishket S. Tripathi, "Steganalysis of LSB Insertion Method in Uncompressed Images Using Matlab."
[8] T. Morkel, J.H.P. Eloff, M.S. Olivier, "An Overview of Image Steganography."
[9] Yeuan-Kuen Lee and Ling-Hwei Chen, "An Adaptive Image Steganographic Model Based on Minimum-Error LSB Replacement."
[10] Mehdi Kharrazi, Husrev T. Sencar, and Nasir Memon, "Image Steganography: Concepts and Practice."
[11] Linu Paul, Anilkumar M.N., "Authentication for Online Voting Using Steganography and Biometrics," International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), Volume 1, Issue 10, December 2012.
[12] M. Sifuzzaman, M.R. Islam and M.Z. Ali, "Application of Wavelet Transform and its Advantages Compared to Fourier Transform."


Morphological & Dynamic Feature Based Heartbeat Classification
N. P. Joshi¹, Shweta A. Tambe¹, P. S. Topannavar¹
¹Scholar, ICOER, Pune, India

ABSTRACT - In this paper, a new approach to heartbeat classification is proposed. The system uses a combination of morphological and dynamic features of the ECG signal. Morphological features are extracted using the wavelet transform and independent component analysis (ICA); each heartbeat undergoes both techniques separately. The dynamic features extracted are RR-interval features. A support vector machine is used as the classifier, after concatenating the results of both feature extraction techniques, to classify the heartbeat signals into 16 classes.
The whole process is applied to both lead signals, and the classifier results are then fused to make the final decision about the classification. The overall accuracy in classifying the signals from the MIT-BIH arrhythmia database should be 99% in the class-oriented evaluation and more than 86% in the subject-oriented evaluation.
Keywords - heartbeat classification, support vector machine, independent component analysis, wavelet transform, RR features, ECG signal, evaluation schemes.
I. INTRODUCTION
Electrocardiogram (ECG) analysis is basically used to monitor cardiac disorders, i.e., conditions involving abnormal behaviour of the heart; if medical attention is not provided promptly, such conditions can cause sudden death.
There is another class of arrhythmias which is not life-critical but should still be given attention and treated. Classification of heartbeats into classes is an important step towards treatment; the classes are based on consecutive heartbeat signals [1]. To satisfy the requirements of real-time diagnosis, online monitoring of cardiac activity is preferred over human monitoring & interpretation, and automatic ECG analysis is preferred for online monitoring & detection of abnormal heart activity. Hence, automatic heartbeat classification using parameters or characteristic features of ECG signals is discussed in this paper.

II. DATASET

A. Classes of ECG signal

The MIT-BIH arrhythmia database [2] is the standard material used for training & testing of algorithms developed for the detection & classification of arrhythmia ECG signals. By using this database, we can compare the proposed method with the approaches in published results; the MIT-BIH arrhythmia database is exploited for testing the system.
There are 48 records in total. All signals are two-lead signals, denoted the lead A & lead B signals. These signals are band-pass filtered at 0.1-100 Hz and sampled at 360 Hz.

TABLE I
CLASSES OF ECG SIGNAL

Heartbeat type                               Annotation
Normal Beat                                  N
Left Bundle Branch Block                     L
Right Bundle Branch Block                    R
Atrial Premature Contraction                 A
Premature Ventricular Contraction            V
Paced Beat                                   /
Aberrated Atrial Premature Beat              a
Ventricular Flutter Wave                     !
Fusion of Ventricular and Normal Beat        F
Blocked Atrial Premature Beat                x
Nodal (Junctional) Escape Beat               j
Fusion of Paced and Normal Beat              f
Ventricular Escape Beat                      E
Nodal (Junctional) Premature Beat            J
Atrial Escape Beat                           e
Unclassifiable Beat                          Q
TOTAL: 16
All 48 records belong to one of the classes of ECG signal shown in TABLE I. In clinical terms, leads V1 to V6 represent areas of the heart. In 45 records, the lead A signal is a modified limb lead II while lead B is a modified lead V1; in the remaining 3 records, lead A is from position V5 and the lead B signal is V2.

B. Evaluation schemes

Previous literature [3]-[9] is divided into two categories according to the evaluation scheme followed:

1) Class-oriented evaluation.
2) Subject-oriented evaluation.

Not all 48 records contained in the MIT-BIH arrhythmia database are used: 4 ECG signals are excluded as they are paced beats. Each ECG signal has its own annotation file in the database; the annotations of the QRS complex are used for segmentation of the ECG signals, from which the heartbeat segments are obtained. The 44 ECG signals are divided into 2 datasets, one used as the training dataset and the other as the testing dataset; this division is done for experimental purposes. The datasets are prepared by selecting a random fraction of beats from each of the 16 classes. The training dataset constitutes the following fractions of beats: the normal class contributes 13% of its beats, 40% comes from each of the five bigger classes, i.e., L, A, R, V & P, while 50% comes from each of the ten small classes. These 16 classes are mapped onto 5 classes as shown in TABLE II.

TABLE II
MAPPING OF MIT-BIH CLASSES TO AAMI CLASSES

AAMI Classes MIT-BIH Classes
N NOR, LBBB, RBBB, AE, NE
S APC, AP, BAP, NP
V PVC, VE, VF
F VFN
Q FPN,UN

III. PROPOSED METHODOLOGY

Section I contains a brief introduction to the proposed automatic heartbeat classification system; in this section, we discuss the theoretical details and the techniques used in the process. Fig. 1 shows the flow of the proposed system. The process has the following blocks: pre-processing, heartbeat segmentation, feature extraction, classification, two-lead fusion and decision. The lead A & lead B signals are raw ECG signals; artefacts contained in these raw ECG signals are removed by the first block of the process, i.e., pre-processing. After pre-processing, the ECG signals are divided to obtain heartbeat segments, for which the provided R-peak locations are used.

We apply the wavelet transform (WT) and independent component analysis (ICA) separately to each heartbeat and concatenate the corresponding coefficients. We then use principal component analysis (PCA) to represent these coefficients in a lower-dimensional space; the principal components that represent most of the variance are selected, and a morphological descriptor of the heartbeat is obtained from these components. RR-interval features are also derived, which give descriptive information about the dynamic behaviour of the heartbeat.

After feature extraction, the main classification algorithm is applied: heartbeats are classified into the 16 classes above using a classifier based on the support vector machine (SVM). According to the data given in [2], all the ECG signals are two-lead signals, so the whole process is applied separately to the signals from leads A & B. Two independent decisions are obtained for each heartbeat, which are then fused to build the final composite decision of heartbeat classification; by integrating both lead signals, the confidence in the final classification decision can be improved.

A. Pre-processing

It is necessary to pre-process the raw ECG signals, as they can contain various types of noise; these must be reduced so that the signal-to-noise ratio (SNR) is improved. An improved SNR helps in the detection of the subsequent fiducial points. Noise such as power-line interference, baseline wander, artifacts due to muscle contraction, and electrode movement affects the quality of ECG signals. In this study, the pre-processing of the ECG signals consists of baseline wander correction: the baseline wander is removed by subtracting the mean of the signal from the signal itself. The pre-processed signals are used in the subsequent processing.
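
The baseline correction described here is a one-line operation (a minimal sketch; real recordings may additionally need high-pass filtering against slow drift):

    import numpy as np

    def remove_baseline(ecg):
        # Subtract the signal mean to correct the DC baseline offset
        ecg = np.asarray(ecg, dtype=float)
        return ecg - ecg.mean()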

International Journal of Engineering Research and General Science Volume 2, Issue 3, April-May 2014
ISSN 2091-2730

117 www.ijergs.org




Fig. 1 Flow of proposed system

B. Heartbeat segmentation

One heartbeat of an ECG signal has three waveform components, known as the P wave, the QRS complex and the T wave. For full segmentation of the ECG signal, the boundaries and peak locations, i.e., the fiducial points, should be properly detected. To obtain the heartbeat segments, the annotations provided for the R-peak locations are utilized. In real applications, an automatic R-peak detector may be used so that the classification method is fully automatic; a number of heartbeat detection schemes exist [7], [12], [13] which can detect the heartbeats in the MIT-BIH arrhythmia database with an error rate of less than 0.5%. There are, however, two disadvantages of such an automatic R-peak detector: (1) if some heartbeats are missed, errors are introduced and those heartbeats cannot be classified correctly; (2) the quality of the RR-interval features is degraded to some extent by the error added by the automatic R-peak detector.
The sampling rate is 360 Hz. Each heartbeat segment consists of 100 samples before the R-peak location (the pre-R segment) & 200 samples after the R peak (the pro-R segment), i.e., a total of 300 samples. The segment size is selected so that it includes most of the information of one heart cycle, and it is kept fixed. The ratio of the lengths of the pre-R and pro-R segments is chosen to match the lengths of the PR interval & the QT interval. An advantage of keeping a fixed segment size is that it avoids having to detect the P wave and the T wave.
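
A sketch of the fixed-size segmentation step, assuming the annotated R-peak sample indices are available; beats too close to the record boundaries are simply skipped:

    import numpy as np

    PRE, POST = 100, 200   # samples before / after the R peak (300 total at 360 Hz)

    def segment_beats(ecg, r_peaks):
        beats = []
        for r in r_peaks:
            if r - PRE >= 0 and r + POST <= len(ecg):
                beats.append(ecg[r - PRE : r + POST])
        return np.array(beats)   # shape: (number of beats, 300)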

C. Wavelet transform - morphological feature extraction

Biomedical signals such as the ECG exhibit a non-stationary nature, meaning that their statistical characteristics change over position or time. Due to this nature, they cannot be analysed adequately using the classical Fourier transform (FT); it therefore becomes necessary to use the wavelet transform (WT), which is capable of performing analysis in both the time & frequency domains, making it possible to analyse the ECG signal.
The WT serves various purposes in ECG signal processing, including de-noising, heartbeat detection and feature extraction; we use the WT as a feature extraction method in this study. Daubechies wavelets of order 8 have characteristics most similar to those of the QRS complex and hence are selected. Since the sampling frequency is 360 Hz, the maximum frequency is 180 Hz.
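
A sketch of the wavelet feature extraction with the PyWavelets library; the db8 choice follows the text, while the decomposition level is an assumption for illustration:

    import numpy as np
    import pywt

    def wavelet_features(beat, level=4):
        # Multilevel Daubechies-8 decomposition of one 300-sample heartbeat
        coeffs = pywt.wavedec(beat, 'db8', level=level)
        return np.concatenate(coeffs)          # concatenated coefficients as features

    beat = np.random.randn(300)                # stand-in heartbeat segment
    print(wavelet_features(beat).shape)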

D. Independent component analysis - morphological feature extraction

In this study, ICA is used for feature extraction [15]. Five sample beats are randomly selected from every class of each recording for the preparation of the training set used to compute the independent components (ICs); if the total number of beats in any recording is less than five, all beats are taken. This gives a training set of 626 beats taken from all 16 classes, which are used for calculating the ICs. The ICs obtained are used as source signals for ICA and applied to both the training and testing datasets. To determine the actual number of ICs, tenfold cross-validation is evaluated: the number of independent components is varied between 10 & 30, the resulting ICA coefficients are taken as features and given as input to the SVM classifier, the process is performed in 5 iterations, and the average is taken. The average performance shows that the accuracy increases for 10 to 14 ICs and decreases afterwards, so the number of ICs is selected to be 14.
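
A sketch of the ICA step with scikit-learn's FastICA (the training matrix here is a synthetic stand-in; the 14 components follow the cross-validation result above):

    import numpy as np
    from sklearn.decomposition import FastICA

    train_beats = np.random.randn(626, 300)      # stand-in: 626 beats x 300 samples
    ica = FastICA(n_components=14, random_state=0)
    ica.fit(train_beats)                         # learn 14 independent components

    beat = np.random.randn(1, 300)
    print(ica.transform(beat).shape)             # (1, 14) ICA coefficients per beat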

E. Principal component analysis - morphological feature extraction

The ICA features and wavelet features are combined, and PCA is applied to reduce the feature dimension; 10-fold cross-validation is then performed and the final morphological features are obtained.

F. RR interval features

RR-interval features are extracted to capture the dynamic information of the input heartbeat signal; these are known as dynamic features. There are four RR-interval features, namely the previous RR, post RR, local RR and average RR intervals.
The previous RR feature is the interval between the present R peak and the previous R peak, and the post RR feature is the interval between the current R peak and the next R peak. The local RR interval is the average of all RR intervals within the past 10-s period of the given heartbeat; likewise, the average RR interval is the average of the RR intervals within the past 5-min period of the heartbeat.
In previous literature, the local RR & average RR features show poor performance when applied in real time: there, the local RR feature is calculated as the average of 10 consecutive heartbeats centred at the given beat, whereas the average RR feature is calculated as the average over all beats from the same recording. In the proposed method, these features are calculated so as to ensure that they work in real time (see the sketch below).
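
A sketch of the four features for the beat at index i, given R-peak times in seconds; the 10-s and 5-min windows look backwards only, matching the real-time constraint just described:

    import numpy as np

    def rr_features(r_times, i):
        rr = np.diff(r_times)                          # RR intervals, seconds
        prev_rr = rr[i - 1] if i >= 1 else rr[0]       # previous RR
        post_rr = rr[i] if i < len(rr) else rr[-1]     # post RR
        t = r_times[i]
        ends = r_times[1:]                             # interval end times align with rr
        local = rr[(ends > t - 10.0) & (ends <= t)]    # intervals ending in past 10 s
        recent = rr[(ends > t - 300.0) & (ends <= t)]  # intervals ending in past 5 min
        local_rr = local.mean() if local.size else prev_rr
        avg_rr = recent.mean() if recent.size else prev_rr
        return prev_rr, post_rr, local_rr, avg_rr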

F. Support vector machine

Support vector machines are nothing but binary classifiers. This classifier is given by Vapnik. It builds a prime hyperplane which
separates two classes from each other due to increase in margin between them. As this approach has an excellent ability to build the
classification model on general basis it is enough powerful to be used in many applications.A number of multiclass classification
strategies have been developed to extend SVM to address multiclass classification problem [14], such as heartbeat classification
problem.In this paper, the technique used for classification of the heartbeats is an SVM classifier which classifies the heartbeat under
consideration into one of the 16 classes.

The training set contains N examples. It is used in two-class classification problem. N examples are given as {(xi, yi), i = 1, . . . ,N},
where xi is nothing but d-dimensional feature vector of the ith example and xi d whereas yi is the class label of ith example and yi
{1}. Now a decision function is to be constructed on the basis of the training set. This function is used to predict output class labels of
test examples. These are based on input feature vector. The resultant decisionfunction is given as

f (x) = sign (
i
y
i
K (xi, x) + b)
i SVs

where K(·, ·) is the kernel function and α_i is the Lagrange multiplier of each training sample. Only a few Lagrange multipliers are nonzero; the training examples with nonzero multipliers are known as support vectors, and it is these support vectors that determine f(x). Two separate classifiers are applied to the signals from lead A and lead B.
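To make the decision function concrete, the sketch below evaluates f(x) term by term from a fitted SVM and checks it against the library's own prediction; scikit-learn, the RBF kernel choice, and the toy data are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(1)
X = rng.randn(100, 5)
y = np.where(X[:, 0] + 0.3 * rng.randn(100) > 0, 1, -1)

gamma = 0.5
clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)

def f(x):
    # f(x) = sign( sum over SVs of alpha_i * y_i * K(x_i, x) + b )
    sv = clf.support_vectors_                            # the support vectors x_i
    K = np.exp(-gamma * np.sum((sv - x) ** 2, axis=1))   # RBF kernel values
    return np.sign(clf.dual_coef_[0] @ K + clf.intercept_[0])  # dual_coef_ = alpha_i*y_i

x = rng.randn(5)
assert f(x) == clf.predict(x[None])[0]                   # matches the classifier
```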

G. Two-lead fusion

As two different classifiers are applied, each classifier gives its own answer. The two answers are then fused to obtain a final answer, which gives the class the heartbeat belongs to. The two separate answers can be fused using a rejection approach.

IV. RESULTS & ANALYSIS

As seen in Fig. 2 and Fig. 5, the original lead 1 signal is shifted from its axis and an offset is added to it. This happens because of patient movement or in-line interference; pre-processing of the signal reduces these noises. The shift of the axis is called baseline wander, and it is removed after pre-processing. Pre-processing also helps in R-peak detection: the amplitude of the signal is compared with a threshold, and the R peaks are thus found in the pre-processed signal. These R peaks give

the post-R and pre-R features, while the average over these R peaks gives the average-R feature. Segmentation separates the QRS complex from the whole recording. The sampling frequency is 180 Hz; we take 100 samples before and 200 samples after each R peak to obtain a proper segment that contains the complete QRS complex, P wave and T wave. This helps in finding the exact class of the ECG signal after applying the feature extraction techniques and the classifier. Hence the segmentation size is kept fixed.
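A minimal sketch of this threshold-based peak picking and fixed-window segmentation follows; the spiky synthetic waveform and the 0.6 threshold factor are placeholders for a pre-processed ECG recording.

```python
import numpy as np

fs = 180                                    # sampling frequency (Hz)
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 21     # crude spiky stand-in for an ECG

thr = 0.6 * ecg.max()                       # amplitude threshold
peaks = [i for i in range(1, len(ecg) - 1)  # local maxima above the threshold
         if ecg[i] > thr and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]]

segments = [ecg[p - 100:p + 200] for p in peaks          # 100 before, 200 after
            if p >= 100 and p + 200 <= len(ecg)]
print(len(peaks), "R peaks,", len(segments), "segments of 300 samples")
```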



Fig. 2 Results showing pre-processing for lead 1 signal of 109 ECG recording.




Fig. 3 Results showing segmentation of lead 1 signal of 109 ECG recording










Fig.4 Results showing R-peak detection of lead 1 signal of 109 ECG recording

Fig.5 Results showing pre-processing of lead 2 signal of 109 ECG recording


Fig. 6 Results showing segmentation of lead 2 signal of 109 ECG recording


Fig.7 Results showing R-peak detection of lead 2 signal of 109 ECG recording


Fig.8 Results showing 14 Independent components of 109 ECG recording
REFERENCES:
[1] Can Ye, B. V. K. Vijaya Kumar, and Miguel Tavares Coimbra, "Heartbeat classification using morphological and dynamic features of ECG signals," IEEE Trans. Biomed. Eng., vol. 59, no. 10, pp. 2930-2941, Oct. 2012.
[2] MIT-BIH Arrhythmia Database. Available online at: http://www.physionet.org/physiobank/database/mitdb/.
[3] M. Lagerholm, C. Peterson, G. Braccini, L. Edenbrandt, and L. Sornmo, "Clustering ECG complexes using Hermite functions and self-organizing maps," IEEE Trans. Biomed. Eng., vol. 47, no. 7, pp. 838-848, Jul. 2000.
[4] P. de Chazal, M. O'Dwyer, and R. B. Reilly, "Automatic classification of heartbeats using ECG morphology and heartbeat interval features," IEEE Trans. Biomed. Eng., vol. 51, no. 7, pp. 1196-1206, Jul. 2004.
[5] S. Osowski, L. T. Hoa, and T. Markiewic, "Support vector machine-based expert system for reliable heartbeat recognition," IEEE Trans. Biomed. Eng., vol. 51, no. 4, pp. 582-589, Apr. 2004.
[6] J. Rodriguez, A. Goni, and A. Illarramendi, "Real-time classification of ECGs on a PDA," IEEE Trans. Inf. Technol. Biomed., vol. 9, no. 1, pp. 23-34, Mar. 2005.
[7] P. Laguna, R. Jane, and P. Caminal, "Automatic detection of wave boundaries in multilead ECG signals: Validation with the CSE database," Comput. Biomed. Res., vol. 27, no. 1, 1994.
[8] W. Jiang and G. S. Kong, "Block-based neural networks for personalized ECG signal classification," IEEE Trans. Neural Networks, vol. 18, no. 6, pp. 1750-1761, Nov. 2007.
[9] T. Ince, S. Kiranyaz, and M. Gabbouj, "A generic and robust system for automated patient-specific classification of ECG signals," IEEE Trans. Biomed. Eng., vol. 56, no. 5, pp. 1415-1426, May 2009.
[10] M. Llamedo and J. P. Martinez, "Heartbeat classification using feature selection driven by database generalization criteria," IEEE Trans. Biomed. Eng., vol. 58, no. 3, pp. 616-625, Mar. 2011.
[11] G. de Lannoy, D. Francois, J. Delbeke, and M. Verleysen, "Weighted conditional random fields for supervised interpatient heartbeat classification," IEEE Trans. Biomed. Eng., vol. 59, no. 1, pp. 241-247, Jan. 2012.
[12] V. X. Afonso, W. J. Tompkins, T. Q. Nguyen, and L. Shen, "ECG beat detection using filter banks," IEEE Trans. Biomed. Eng., vol. 46, no. 2, pp. 192-202, Feb. 1999.
[13] S. Kadambe, R. Murray, and G. F. Boudreaux, "Wavelet transform-based QRS complex detector," IEEE Trans. Biomed. Eng., vol. 46, no. 7, pp. 838-848, Jul. 1999.
[14] C. Cortes and V. N. Vapnik, "Support vector networks," Mach. Learn., vol. 20, 1995.
[15] Jarno M. A. Tanskanen and Jari J. Viik, "Independent component analysis in ECG signal processing," Tampere University of Technology and Institute of Biosciences and Medical Technology, Finland.


Fuzzy Logic Power Control for Zigbee Cognitive Radio
P. Vijayakumar¹, Sai Keerthi Varikuti¹
¹Department of Electronics and Communication, SRM University
E-mail: saikeerthi.v@gmail.com

ABSTRACT - Spectrum sharing without interfering with the primary users is one of the challenging issues in cognitive networks; at the same time, power control is a feasible solution for spectrum sharing that does not disturb the primary user while achieving the required performance at the cognitive radio.
In this paper one Zigbee is configured as the primary user and another Zigbee as the secondary user, which together form a cognitive network. The designed setup is implemented by analyzing the signal strength, transmit power level assignment and routing algorithm on a specific IEEE 802.15.4/Zigbee transceiver model on an Arduino board, which leads to better spectrum access performance for the cognitive radio.
Keywords - CR (Cognitive Radio), PU (Primary User), SU (Secondary User), TPC (Transmit Power Control), RSSI (Received Signal Strength Indicator).
INTRODUCTION
Today wireless systems take up more and more of the available frequencies. Most of them are licensed to high speed wireless internet, telephone and mobile operators [1]. Because of the high demand for frequencies, there seems to be a lack of free spectrum as new wireless systems are developed, so new systems are needed. One such technology, called cognitive radio, allows the re-use of spectrum [2].
The idea of cognitive radio has come out of the need to utilize the radio spectrum more efficiently. It is possible to develop a radio that is able to look at the spectrum, detect which frequencies are clear, and then implement the best form of communication for the required conditions. Thus one can say Cognitive Radio (CR) is a form of wireless communication in which a transceiver can intelligently detect which communication channels are in use and which are not, and instantly move into vacant channels while avoiding occupied ones. This optimizes the use of the available radio-frequency (RF) spectrum while minimizing interference to other users [2].
In an underlay CR system the secondary users (SUs) protect the primary user (PU) by regulating their transmit power to keep the interference at the PU receiver below a well-defined threshold level. The limit on this received interference level at the PU receiver can be imposed as an average or peak constraint [3].
Power control in CR systems presents its own unique challenges. In spectrum sharing applications, SU power must be allocated in a manner that achieves the goals of the CR system while not adversely affecting the operation of the PU. In [4], a distributed approach was used for power allocation to maximise the SU sum capacity under a peak interference constraint. Fuzzy logic decision making has been used to choose the most suitable access opportunity among various transmit power levels, using Zigbee to dynamically adjust the power level and analyze the interference scenario effectively [5].
In this paper a hardware system is proposed, using Zigbee modules in the ISM band and an ATMEL ATMEGA328P based Arduino board, to control the transmit power levels and perform spectrum sensing. The goal is to provide reliable communication by limiting the interference to the primary user unit of the cognitive radio as a function of the transmit power levels.
The outline of the paper is as follows. Section II describes the system model of the primary user system and the cognitive secondary user system, with respect to receiver and transmitter and their working model. Section III shows the experimental setup. Experimental measurements and results are described in Section IV. Section V discusses future work.

SYSTEM MODEL
In this paper we consider a scenario in which a primary system holds a licensed service and a cognitive secondary system is present in the same area, using opportunistic radio spectrum access that must not increase the level of interference observed by the primary system. The cognitive system consists of two units, namely the Surveilling Unit and the Regulating/Supervising Unit.







Fig 1 : Surveilling Unit
The Surveilling Unit consists of two Zigbee modules, one for the primary user and one for the secondary user, which are connected to PCs and monitored through the X-CTU software.


















Fig 2 : Regulating/Supervising Unit

[Fig. 1 blocks: PU Zigbee module and SU Zigbee module, each connected to a PC.]
[Fig. 2 blocks: SU Zigbee module as transmitter, PU Zigbee module as receiver; the measured RSSI feeds a fuzzy control system that drives transmit power level assignment, transmit power control and routing algorithm, and data transmission.]

The Regulating/Supervising Unit consists of a microcontroller to which two XBee modems are connected, one as the primary user unit (receiver/end device) and the other as the secondary user unit (transmitter/coordinator). The microcontroller is an ATMEL ATMEGA328P based Arduino board called Duemilanove, programmed in the Wiring programming language and operating at 16 MHz.
The RSSI of the last received packet, i.e. the detected signal, is evaluated. The received signal strength indicator (RSSI) is the signal strength level of the last received packet at a wireless device, measured in dBm [6]. Next, the transmit power levels are assigned by analyzing the RSSI values of the received signal packets, with the help of the Friis transmission equation [7]. Using the Friis transmission equation, the ratio of the received power Pr to the transmitted power Pt can be expressed as

Pr / Pt = Gt Gr ( λ / (4πd) )²



where Gt and Gr are the gains of the transmitter and the receiver respectively, λ is the wavelength, and d is the distance between sender and receiver; in free space the signal strength degrades with the square of the distance. In the Regulating/Supervising Unit, each router then sends its link quality data along with its battery charge to the coordinator, which performs the transmit power level assignment and the routing algorithm. Finally, fuzzy logic is applied to dynamically adjust the transmit power control ratio of the specific secondary user in the cognitive network according to the changes in transmit power level assignment, transmit power control and routing algorithm.
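The sketch below illustrates how the Friis relation can drive the power level assignment: it computes the expected received power at a given distance and picks the lowest candidate level that meets an RSSI target. The candidate levels match the ATPL values used later in the paper; the 0 dBi antenna gains and the -80 dBm target are illustrative assumptions.

```python
import math

def rx_power_dbm(pt_dbm, d_m, f_hz=2.405e9, gt_db=0.0, gr_db=0.0):
    lam = 3e8 / f_hz                                    # wavelength
    fspl_db = 20 * math.log10(4 * math.pi * d_m / lam)  # free-space path loss
    return pt_dbm + gt_db + gr_db - fspl_db             # Friis equation in dB

def assign_power_level(d_m, target_rssi_dbm=-80.0):
    for pt in (-8, -4, 2):            # candidate XBee levels in dBm
        if rx_power_dbm(pt, d_m) >= target_rssi_dbm:
            return pt                 # lowest level meeting the RSSI target
    return 2                          # otherwise fall back to the highest level

for d in (10, 100, 300):
    print(f"{d} m -> {assign_power_level(d)} dBm")
```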

EXPERIMENTAL SETUP
Fig. 1 and Fig. 2 show the experimental setup established in this paper. In the Surveilling Unit two XBee Series 2, 2 mW modules from Digi International, model XB24-ZB, are used, one as the primary user module and one as the secondary user module. Each module is equipped with a wire antenna. The XBee offers a transmission range of 40 m in indoor scenarios and 140 m outdoors.
X-CTU, free software provided by Digi International, is used for programming each unit, i.e. the primary user unit and the secondary user unit. With this software a user can readily update parameters, upgrade the firmware and perform communication tests. Communication with the XBee modules, i.e. the primary user unit and the secondary user unit, is done via an XBee interface board connected to a personal computer (PC) with a USB cable.
In the Regulating/Supervising Unit one XBee modem is connected to the microcontroller as an end device and another as a coordinator; the programming is done in the Arduino IDE software, version 0022.

EXPERIMENTAL MEASUREMENTS AND RESULTS
First, the transmission and reception of signals are analyzed using the XBee Series 2 transceiver modules and the X-CTU software, as shown in Fig. 3 and Fig. 4. Next, the same transmission and reception of signals is established between an XBee Series 2 transceiver and the ATMEL ATMEGA328P based Arduino board, with the programming done in the Arduino IDE software version 0022.



Fig 3 : Transmission of signals using X-CTU

Fig 4 : Reception of signals using X-CTU

Next, the RSSI values of different signals are obtained and analyzed with the help of X-CTU and the AT command ATDB, with respect to the distance between the two XBees and without interference between them, as shown in Fig. 5. From the graph one can see that the RSSI decreases as the distance increases.


Fig 5: Signal strength of various signals without interference
with respect to distance.

Further, RSSI values of different signals are obtained using the XBee power levels for transmission, set with the AT command ATPL, which puts the XBee at one of its transmit power levels. The RSSI values are obtained as a function of distance at three different values of transmit power Pt: (i) -8 dBm, (ii) -4 dBm, and (iii) 2 dBm, as shown in Fig. 6.


Fig 6. Measured RSSI values versus distance at different values of transmit power.

For each transmit power level, the RSSI degrades with the square of the distance from the sender. The fluctuations in the graph at distances between 200-300 m with a Pt of -8 dBm can be attributed to reflection and multipath phenomena caused by obstacles such as walls and by interference from Wi-Fi routers located between the Zigbees. Thus a reasonable increase in transmit power leads to better performance.

FUTURE WORK
Future implementation will aim to reduce harmful interference to the primary user unit through transmit power control and a suitable routing algorithm on the specific IEEE 802.15.4/Zigbee transceiver model on the Arduino board, which could be a feasible solution for spectrum sharing with interference minimization. On top of that, fuzzy logic will be implemented to dynamically adjust the transmit power control ratio of the specific secondary user in the cognitive network, in both homogeneous and heterogeneous environments, according to changes in the desired Zigbee parameters. This could achieve the required performance at the cognitive radio secondary users and minimize the battery consumption of mobile terminals for next generation wireless networks and services.

REFERENCES:

[1] FCC, "ET Docket No. 08-260, second report and order and memorandum opinion and order," Tech. Rep., 2008.
[2] S. Haykin, "Cognitive radio: brain-empowered wireless communications," IEEE J. Sel. Areas Commun., vol. 23, no. 2, pp. 201-220, Feb. 2005.
[3] A. Ghasemi and E. S. Sousa, "Fundamental limits of spectrum-sharing in fading environments," IEEE Trans. Wireless Commun., vol. 6, pp. 649-658, February 2007.
[4] Q. Jin, D. Yuan, and Z. Guan, "Distributed geometric-programming based power control in cellular cognitive radio networks," in Proc. VTC 2009, April 2009, pp. 1-5.
[5] N. Baldo and M. Zorzi, "Cognitive network access using fuzzy decision making," IEEE ICC 2007 Proceedings, pp. 6504-6510.
[6] Digi International, "XBee User Manual," Digi International, 2012, pp. 1-155.
[7] W. Dargie and C. Poellabauer, Fundamentals of Wireless Sensor Networks: Theory and Practice, July 2010.


















Energy Efficient Spectrum Sensing and Accessing Scheme for Zigbee Cognitive Networks
P. Vijayakumar¹, Slitta Maria Joseph¹
¹Department of Electronics and Communication, SRM University
E-mail: vijayakumar.p@ktr.srmuniv.ac.in

ABSTRACT - We consider a cognitive radio network that accesses spectrum licensed to a primary user. In this network, the secondary user is allowed to use an idle frequency channel of the primary user, which depends primarily on proper spectrum sensing. If the channel appears idle the secondary user may occupy it, but whenever the primary user returns to its frequency channel the secondary user must either switch to another idle channel or wait on the same channel until it becomes free. In this paper we consider a cognitive network with one primary user and one secondary user, where the secondary user (SU) accesses multiple channels via periodic sensing and spectrum handoff. In our design, the implementation uses an energy detection algorithm on a specific IEEE 802.15.4/Zigbee transceiver model based on an Arduino board, analyzing the RSSI values of the Zigbee devices as a function of distance. We also analyze the sensing duration and find an appropriate threshold value for sensing on the Zigbee modems. An energy efficient design is implemented by utilizing the sleep mode of the Zigbee devices.

Keywords - RSSI, Energy Efficiency, Cognitive Radio
INTRODUCTION
The electromagnetic radio spectrum, a natural resource, is licensed by regulatory bodies for various applications. Presently there is a severe shortage of spectrum for new applications and systems. Recent studies by the Federal Communications Commission show that 70% of the channels in the US are occupied, yet the licensed frequency bands remain unused 90 percent of the time [1]. To address this spectrum shortage, the concept of cognitive radio has been introduced. Cognitive radio enables the temporary use of unused spectrum, known as a spectrum hole [2]. A secondary user, who does not hold a license, can use the spectrum while it is idle, but must return the frequency spectrum to the licensed primary user the moment it returns; the secondary user can then either wait until the primary user becomes idle again or go in search of other idle channels. If there is a delay in vacating the channel, a collision will occur [3].
The most important and critical task in this scheme is channel sensing. In some cognitive systems, channel sharing is facilitated through periodic sensing [4]. In systems where energy is critical, it is not suitable to hand off frequently, and sometimes the secondary user chooses to wait on the same channel and stop transmission at the cost of increased delay and reduced average throughput [5]. In this paper we propose a hardware system with an Arduino-based microcontroller to control spectrum sensing and subsequent channel switching using Zigbee modules in the ISM band, and thus to design a system with very low energy consumption.
The rest of the paper is organized as follows: Section II describes the system model, covering the transmitter and receiver sections and their working mechanism. Section III describes the implementation of the hardware and software parts. Simulation results and discussion are given in Section IV.






System Model
A. Channel model

In this section we describe the channel model. The primary users are the licensed users entitled to access the channel, while the secondary users, who do not hold a licensed spectrum, seek opportunities to access channels not used by the primary. We assume that there is only one pair of secondary user transmitter and receiver. The secondary user can sense only one channel at a time and access one channel for a single transmission [8]. The entire design consists of two parts:
1) Monitoring Section
2) Controlling Section
The monitoring section consists of two transceivers connected to a PC, through which they can be monitored. The controlling section is fully controlled by the microcontroller.







Figure.1 Monitoring part

Two transceivers are connected to the microcontroller, one set as the primary user receiver and the other as the secondary user transmitter. The RSSI value obtained from the receiver is used for detecting an idle channel, and data is then transmitted on the sensed idle channel.
B. Sensing model
We consider the secondary user as a single channel spectrum sensor. At each interval the secondary user checks for the presence of the PU. We employ the spectrum sensing hypothesis test using an energy detection algorithm, in which the microcontroller collects all required data from the PU and makes its own decision. The microcontroller makes the final decision according to a certain rule by solving a hypothesis testing problem, i.e. it determines whether a primary user system is transmitting, given by hypothesis H1, or not, given by hypothesis H0 [10]:

x[n] = w[n],          under H0
x[n] = s[n] + w[n],   under H1        (1)

Here n = 0, 1, 2, ..., N-1 is the sample index, w[n] denotes the noise, and s[n] is the primary signal to be detected. H0 is the hypothesis that the received signal consists of noise only.

If H0 is true, the decision statistic falls below the threshold set by the microcontroller, so the controller concludes that vacant spectrum is available. On the other hand, if H1 is true, the received signal contains both signal and noise, the decision statistic exceeds the threshold, and the microcontroller concludes that no vacant spectrum is available [6].
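A minimal simulation of this decision rule follows, with the test statistic taken as the average signal energy; the noise level, threshold margin and sample count are illustrative rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
noise_var = 1.0
threshold = 1.2 * noise_var            # decision threshold above the noise floor

def decide(x):
    energy = np.mean(np.abs(x) ** 2)   # energy-detection test statistic
    return "H1: channel busy" if energy > threshold else "H0: channel idle"

w = rng.normal(0.0, np.sqrt(noise_var), N)     # noise only (H0)
s = np.sin(2 * np.pi * 0.1 * np.arange(N))     # primary signal
print(decide(w))       # expected: H0, channel idle
print(decide(w + s))   # expected: H1, channel busy
```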
EXPERIMENTAL SETUP AND IMPLEMENTATION
The experimental setup used in this paper is illustrated in Fig. 1 and Fig. 2. We make use of XBee transceivers, which are based on the Zigbee protocol. This low power radio is designed for wireless personal area networks and provides a data rate of up to 250 kbps indoor/urban at a range of up to 100 m [7]. The XBee is programmed for 802.15.4 transmission in the 2.4 GHz ISM band. For the monitoring part we use two XBees, as shown in Fig. 1: one is configured as the primary user coordinator and the other as the secondary user router/end device. These radios are monitored using the X-CTU software provided by Digi International Inc.; the software window is shown in Fig. 3.
The controlling part mainly consists of two XBee modems and a microcontroller. The microcontroller is an ATMEL ATMEGA328P based Arduino board called Duemilanove, programmed in the Wiring programming language and operating at 16 MHz.
The controller has been coded for 1) sensing, 2) decision making, and 3) data transmission. In the controlling part we also have two XBee modems, one configured as the primary user router/end device and the other as the secondary user coordinator.















Figure.2 Controlling part



[Fig. 2 blocks: PU and SU Zigbee modems, each attached to a controller; energy detection against a threshold, periodic detection, optimal channel switching, and data transmission.]

A. Monitoring part

This part mainly consists of two XBee modules connected to a personal computer and monitored through the X-CTU software. Here we can communicate with the XBees using the transparent/command mode. We use AT commands to check the current channel used by the XBee modules during transmission; Table I lists all the channels an XBee can use while communicating. A total of 16 channels can be occupied by the XBee in the ISM band, covering the frequency range 2.405 GHz to 2.480 GHz.

B. Spectrum sensing part

A great deal of research on spectrum sensing is ongoing. As our design is meant for low power, we consider a simple sensing technique based on energy detection in this paper.
The spectrum sensing part in the microcontroller solves a binary testing problem, choosing a threshold value from observations in a controlled environment [9].
The threshold is set from the received signal strength indication (RSSI), which can be obtained either from the RSSI pin of the XBee module or with the help of an AT command. The value received or sensed from the XBee is compared with the previously set threshold. The design senses the RSSI periodically, in an interrupt routine with an interval of 90 seconds. This is the most critical part of a cognitive radio network.

C. Detection and decision making part.
The RSSI values obtained are evaluated and a decision is made as to whether the primary user is present. If the sensed value is less than the threshold value, the primary user is absent; otherwise the channel is not available. The design is also coded to detect the current channel of the primary user if the channel is available.
D. Switching and data transmission
In the data transmission section, after detection of an available channel the secondary user is allowed to access it. The secondary user takes over the channel left by the primary user, changing channels with the help of AT commands.
After switching to the idle channel, the secondary user keeps sensing whether the primary user has returned; if so, the secondary user must switch to another available channel in the 2.4 GHz ISM band. The overall process flow is shown in Fig. 4.
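The sense-decide-switch cycle of sections B-D can be sketched as the loop below, simulated in Python; on the hardware the RSSI read would come from the ATDB command and the channel change from the corresponding AT command, so the random RSSI source and the threshold here are stand-ins.

```python
import random

CHANNELS = list(range(0x0B, 0x1B))      # the 16 ISM channels of Table I
THRESHOLD = -75                         # dBm, illustrative decision threshold
random.seed(3)

def sense_rssi(channel):
    # stand-in for reading the last-packet RSSI on this channel (ATDB)
    return random.choice([-90, -80, -60, -50])

current = 0x0D                          # SU starts on channel 0x0D
for period in range(5):                 # one iteration per 90-s sensing period
    if sense_rssi(current) >= THRESHOLD:               # PU present: hand off
        idle = [c for c in CHANNELS
                if c != current and sense_rssi(c) < THRESHOLD]
        current = idle[0] if idle else current         # wait if none are idle
        print(f"period {period}: PU present, now on channel {current:#04x}")
    else:
        print(f"period {period}: channel {current:#04x} idle, transmitting")
```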








TABLE I. CHANNEL DETAILS

Channel (hex)   Frequency (GHz)   SC mask
0x0B            2.405             0x0001
0x0C            2.410             0x0002
0x0D            2.415             0x0004
0x0E            2.420             0x0008
0x0F            2.425             0x0010
0x10            2.430             0x0020
0x11            2.435             0x0040
0x12            2.440             0x0080
0x13            2.445             0x0100
0x14            2.450             0x0200
0x15            2.455             0x0400
0x16            2.460             0x0800
0x17            2.465             0x1000
0x18            2.470             0x2000
0x19            2.475             0x4000
0x1A            2.480             0x8000
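The table follows a regular pattern: channel 0x0B sits at 2.405 GHz, each successive channel adds 5 MHz, and the SC mask shifts one bit per channel. A small sketch reproducing the rows:

```python
def channel_info(ch):
    """Frequency (GHz) and SC mask for an ISM channel 0x0B..0x1A."""
    assert 0x0B <= ch <= 0x1A
    freq_ghz = 2.405 + 0.005 * (ch - 0x0B)   # 5 MHz spacing from 2.405 GHz
    sc_mask = 1 << (ch - 0x0B)               # one mask bit per channel
    return freq_ghz, sc_mask

for ch in range(0x0B, 0x1B):
    f, m = channel_info(ch)
    print(f"0x{ch:02X}  {f:.3f} GHz  0x{m:04X}")
```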


Programming is done in the Arduino IDE software version 0022, an open project written, debugged and supported by Massimo Banzi, David Cuartielles, Tom Igoe, Gianluca Martino and David Mellis, based on Processing by Casey Reas and Ben Fry.
The microcontroller board has a hardware serial port, which is connected to the secondary user coordinator. The primary user is connected to a software serial port assigned to the 2nd and 3rd pins of the controller board.




Figure.3 X-CTU software window


IV. PERFORMANCE ANALYSIS AND RESULTS
In this section we evaluate the values obtained from the RSSI pin of the XBee module and the RSSI values obtained with the AT command ATDB. It has been observed that the value obtained from the RSSI pin stays above 600 even as the distance varies (Fig. 5). The value from ATDB varies with the distance (Fig. 6), and a relation between the two can be established.



































Figure.4 Flow chart of the total system


[Fig. 4 flow: start; initial setup of the XBee modules and microcontroller and establishment of communication between them; when the timer expires, the received signal strength is collected and the average power is compared with the threshold power; if less, the channel is idle and the SU switches to the free channel; if greater than or equal, the channel is busy, and the SU switches to a free channel if idle channels are available, otherwise it obtains the channel from the PU and switches the SU to that channel.]





Figure.5 Bar graph for RSSI pin value

The observations made using the AT command show that the RSSI value decreases as the distance increases.



Figure.6 Simulation result of the relation between distance and RSSI


The XBee responds with a hexadecimal value in dBm [6]. Indoors we obtain a range of 40 m for the XBee without interference; given a clear path between the two XBee modules, the communication can extend a few more metres. In channel detection the XBee almost always prefers channel 0x0D at the start, and the modules need some delay to switch to another channel, as well as a network reset to follow the channel selected by the coordinator. The RSSI of the last received packet is checked within a specified time gap and the value is always kept up to date; the updated value is compared with the predefined threshold value. It has been observed that, when switching, the XBee sometimes does not switch to the channel we specify but rather to a channel it finds more suitable among the sixteen channels of Table I.
FUTURE WORK
Future work will aim to reduce the delay in detection and to extend the scheme to more channels and applications. Further energy can be saved by enabling the sleep and wake-up system in the end devices: when the primary user's presence is sensed for a long time, the XBee modems can be put to sleep for a certain amount of time.

REFERENCES:

[1] FCC Spectrum Policy Task Force, "FCC report of the spectrum efficiency working group," Nov. 2002.
[2] D. Cabric, S. M. Mishra, and R. W. Brodersen, "Implementation issues in spectrum sensing for cognitive radios," in Proc. 38th Asilomar Conference on Signals, Systems and Computers, pp. 772-776, Nov. 2004.
[3] C.-W. Wang, L.-C. Wang, and F. Adachi, "Modeling and analysis for reactive-decision spectrum handoff in cognitive radio networks," in Proc. IEEE Globecom, Dec. 2010, pp. 1-6.
[4] Y.-C. Liang, Y. Zeng, E. C. Peh, and A. T. Hoang, "Sensing-throughput tradeoff for cognitive radio networks," IEEE Trans. Wireless Commun., vol. 7, no. 4, pp. 1326-1337, Apr. 2008.
[5] S. Maleki, A. Pandharipande, and G. Leus, "Energy-efficient distributed spectrum sensing for cognitive sensor networks," IEEE Sens. J., vol. 11, no. 3, pp. 565-573, Mar. 2011.
[6] Hossain, Niyato, and Han, Dynamic Spectrum Access and Management in Cognitive Radio Networks, Cambridge University Press.
[7] XBee product manual, "XBee ZNet 2.5/XBee-PRO ZNet 2.5 OEM RF Modules."
[8] He Li, Xinxin Feng, Xiaoying Gan, and Zhongren Cao, "Joint spectrum sensing and transmission strategy for energy-efficient cognitive radio networks," in Proc. 8th International Conference on Cognitive Radio Oriented Wireless Networks, 2013.
[9] Sina Maleki, Ashish Pandharipande, and Geert Leus, "Energy-efficient distributed spectrum sensing for cognitive sensor networks," IEEE Sensors Journal, vol. 11, no. 3, March 2011.
[10] Sundeep Prabhakar Chepuri, Ruben de Francisco, and Geert Leus, "Performance evaluation of an IEEE 802.15.4 cognitive radio link in the 2360-2400 MHz band," IEEE WCNC 2011.




Survey on Analysis of Various Techniques for Multimedia Data Mining
Priyanka A. Wankhade¹, Prof. Avinash P. Wadhe²
¹Research Scholar (M.E), CSE, G. H. Raisoni College of Engineering and Management, Amravati
²Faculty, CSE, G. H. Raisoni College of Engineering and Management, Amravati, E-mail: Wankhade_priyanka.ghrcemamecse@raisoni.net

ABSTRACT - Data mining is an applied discipline that has grown out of statistical pattern recognition, machine learning, and artificial intelligence, combined with business decision making in order to optimize and enhance it. Initially, data mining techniques were applied to already structured data from databases. The widespread use of computers has made data mining affordable for small companies, but on the other hand the invention of cheap mass memory and digital recording devices has allowed the misuse of private-sector material such as corporate, governmental and private documents, for example e-mail messages from customers or recordings of telephone conversations between customers and operators. Multimedia data mining is available to handle such conditions. The aim of multimedia data mining is to process media data, alone or in combination with other data, to find patterns useful for business.
Keywords - data mining, multimedia, text mining, image mining, audio mining, video mining.

INTRODUCTION - Multimedia data mining is the exploration of audio, video, image and text data together, by automatic or semi-automatic means, in order to discover meaningful patterns and rules. Once all the needed data are collected, computer programs analyse the data and look for meaningful connections. This information is used by the government sector, the marketing sector, and others. There are many uses of multimedia data mining in today's society, e.g. using traffic camera footage to show the traffic flow: whenever a new street is being planned, this information about the location can be used. There are basically four types of multimedia data mining, namely text, image, audio, and video, and each uses its own techniques for further processing. The following sections describe the process of and techniques for multimedia data mining. Multimedia data mining is the way of extracting useful data from huge data; unstructured or semi-structured data is sorted by multimedia data mining. Pravin M. Kamde and Dr. Siddu P. Algur [5] describe the World Wide Web as an important and popular medium for finding all types of information related to sports, news, education, booking, business, science, engineering, etc. In today's competitive world the ability to extract hidden knowledge from such information is very important; the entire process of applying computer methodology to such large volumes of information and extracting useful knowledge from it is successfully carried out by multimedia data mining. Xingquan Zhu, Xindong Wu, Ahmed K. Elmagarmid, Zhe Feng, and Lide Wu [12] explain that organizations dealing with large digital assets need tools for retrieving and extracting information from such collections; this is where multimedia data mining is applied. Fig. 1 shows the basic process of multimedia data mining.


Fig.1 Multimedia Data mining Process



LITERATURE SURVEY - Bhavani Thuraisingham [10] explains that the multimedia data mining process is carried out using several important techniques. Figure 1 shows the basic process of multimedia data mining with its techniques, covering text, audio, video, and image together, since the general mining process is common to all multimedia types. The starting point is the selection of the multimedia type, i.e. audio, video, image, or text, which can also be called the raw data. The goal of the text, audio, video, and image feature stages is to discover important features in the raw data. At this stage data pre-processing is done; pre-processing includes feature extraction and transformation, in which the informative features are identified at the feature extraction stage. The detailed procedure depends highly on the raw data. Finally, the results of all these stages feed the final stage: knowledge interpretation, reporting, and use of the knowledge, which is the post-processing and result-evaluation stage.
S. Kotsiantis, D. Kanellopoulos, and P. Pintelas [11] describe that, compared to ordinary data mining, multimedia data mining involves higher complexity resulting from: i) the huge volume of data, ii) the variability and heterogeneity of the multimedia data, and iii) the multimedia content. A. Hema and E. Annasaro [1] survey the views and ideas of authors in the field of multimedia data mining, focusing mainly on the need for image mining. Image mining is of great importance in the geological, biological, and pharmaceutical fields. The pattern matching technique plays a vital role in image mining: the useful information hidden inside an image can also be retrieved with it. Xin Chen, Mihaela Vorvoreanu, and Krishna Madhavan [2] provide insightful information about students and people who spend much of their time on social media sites such as Twitter, Facebook, and YouTube, whose elders worry about them; by mining such media, students' learning experiences can also be studied. Their paper focuses mainly on engineering students.
Ning Zhong, Yuefeng Li, and Sheng-Tang Wu [3] describe the effective discovery of patterns used in text mining. Digital data on the internet grows day by day, and turning such data into a useful form creates the need for text mining; patterns can be discovered with the Pattern Taxonomy model, the pattern deploying method, and inner pattern evolution. K. A. Senthildevi and Dr. E. Chandra [4] deal with the techniques used in audio mining. Data mining is needed in the areas of speech, audio processing, and dialog: speech processing operates on speech data mining, voice data mining, audio data mining, video data mining, and conversation data mining. Speech data mining is useful for improving system operation and extracting business intelligence; voice data mining (VDM) finds and retrieves groups of spoken documents such as TV or FM material and recorded audio of birds and pet animals; video data mining is used for surveillance video; and conversation data mining is used in call centres, where the callers' problems and issues are understood. Pravin M. Kamde and Dr. Siddu P. Algur [5] give a diagrammatic representation of the web mining taxonomy, mining of multimedia databases, text mining, image mining, video mining, and the multimedia mining process; classification models, clustering models, and association rules are some of the techniques used for multimedia mining. Cory McKay and David Bainbridge [6] describe a musical web mining and audio feature extraction extension to the Greenstone digital library software. jMIR is a software toolkit used with other resources; it includes the components jAudio, jSymbolic, jWebMiner 2.0, jLyrics, ACE 2.0, jMusicMetaManager, lyric features, jMIR utilities, and ACE XML.

A. Text mining with information extraction
Ning Zhong, Yuefeng Li, and Sheng-Tang Wu [3] note that a great deal of information exists in textual form: library data, electronic books, or web data. One problem with text data is that it is not as well structured as relational data; in many cases it is unstructured or semi-structured. Text mining therefore describes the application of data mining techniques to the automated discovery of useful and interesting knowledge from unstructured or semi-structured text. Raymond J. Mooney and Un Yong Nahm [9] describe several techniques proposed for text mining: conceptual structures, association rule mining, episode rule mining, decision trees, and rule induction methods. In addition, information retrieval techniques are widely used for tasks such as document matching, ranking, and clustering. Text mining extracts patterns and associations from large text databases. For a text document, the keywords that summarize its content must be identified. Words that occur frequently, such as "the", "is", "in", "of", are of no help at all, since they appear in every document; during the pre-processing stage these common English words can be removed using a stop list. Bhavani Thuraisingham [10] describes how associations can be formed from keywords: the keywords in one article can be "Belgium, nuclear weapons" and the keywords in another "Spain, nuclear weapons", and data mining could then form the association that authors from Belgium and Spain write articles on nuclear weapons. Fig. 2 shows the process of text mining.
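A toy sketch of the two steps just described, stop-list removal followed by keyword association across documents; the documents, the stop list, and the co-occurrence criterion are illustrative.

```python
from collections import Counter
from itertools import combinations

STOP = {"the", "is", "in", "of", "on", "and", "a"}   # toy stop list

docs = [
    "the author in Belgium writes on nuclear weapons",
    "the author in Spain writes on nuclear weapons",
]

cooccur = Counter()
for d in docs:
    keywords = {w for w in d.lower().split() if w not in STOP}
    cooccur.update(combinations(sorted(keywords), 2))  # keyword pairs per doc

# pairs present in every document suggest an association, e.g.
# ('nuclear', 'weapons') across the Belgium and Spain articles
print([pair for pair, n in cooccur.items() if n == len(docs)])
```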


Fig. 2 Converting unstructured data to structured data for mining. Bhavani Thuraisingham [10]

B. Image mining with information extraction
The techniques used for the different types of multimedia data mining are nearly identical, but because the structures of the various multimedia types differ, the mining process differs accordingly. The question sometimes arises: if image processing is available, what exactly is the use of image mining? Image processing applications exist in various domains, such as medical imaging for cancer detection and satellite image processing for space and intelligence applications; images include geographical areas and biological structures. Tao Jiang and Ah-Hwee Tan [7] explain that an important property of image mining is that it not only detects unusual patterns in images but also identifies recurring themes, both at the level of raw images and at higher conceptual levels. To find the existence of a pattern within a given description, a matching technique is used. A. Hema and E. Annasaro [1] say that image matching is the vital application in the field of image mining. Many techniques have been developed to date, and research into optimized matching techniques is still ongoing: the nearest neighbourhood technique, the least squares method, the coefficient of correlation technique, and the relational graph isomorphism technique are all matching techniques. The nearest neighbourhood technique is important in applications where the objects to be matched are represented as n-dimensional vectors. Fig. 3 shows the process of image mining.

Fig. 3 Image Mining Process. Pravin M. Kamde, Dr. Siddu P. Algur [5]
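A minimal sketch of nearest-neighbour matching over such vectors, assuming a Euclidean distance; the stored feature vectors here are random placeholders rather than real image features.

```python
import numpy as np

rng = np.random.default_rng(7)
database = rng.random((50, 16))                   # 50 stored image feature vectors
query = database[17] + 0.01 * rng.random(16)      # slightly perturbed copy of #17

dists = np.linalg.norm(database - query, axis=1)  # Euclidean distance to each
best = int(np.argmin(dists))
print("best match:", best)                        # expected: 17
```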

C. Video mining with feature extraction
Video mining is the third type of multimedia data mining. Video is a combination of images, so the first step towards successful video mining is a good handle on image mining. Ajay Divakaran, Kadir A. Peker, Shih-Fu Chang, Regunathan Radhakrishnan, and Lexing Xie [11] state that, in terms of feature extraction, video features are extracted for each shot

based on detected shot boundries. There are totally five video feature extracted for each shot,
namedas,pixel_change,histo_change,background_mean,background_var,anddominant_color_ratio.when the raw data is taken for the
information extraction in video mining these five features are help for mining the video.) Mei-Ling Shyu, Zongxing Xie, Min Chenand
Shu-Ching Chen[8] describes The basic techniques for video data mining, that are pre-processing of raw data, classification and
association. In pre-processing of raw data technique, the important terms are get considered, that are, video shot detection and
classification, video text detection and recognition, camera motion characterization, and salient audio event detection. Now in
association Mining technique there are three terms are get considered that are video data transformation, definition and terminology,
and video association mining. Video mining is day after day improving their techniques in various ways. Fig 4 shows the direct video
mining process.

Fig. 4 Direct video mining. Bhavani Thuraisingham [10]

D. Audio mining with feature extraction
Audio data plays a vital role in multimedia applications. Cory McKay and David Bainbridge [6] describe music information as having two categories: a) symbolic and b) audio information. Audio has now become a continuous media type like video, and the techniques used in audio mining are similar to those used in video mining. Audio data can be available in any form, such as speech, music, radio, or spoken language. The primary requirement for mining audio data is the conversion of audio into text, which can be done with speech transcription techniques; other techniques, such as keyword extraction followed by text mining, are also available. Audio mining is the type of technique used to search audio files. K. A. Senthildevi and Dr. E. Chandra [4] explain that there are two main approaches to audio mining: 1) text-based indexing and 2) phoneme-based indexing. Text-based indexing converts speech to text, whereas phoneme-based indexing does not convert speech to text but instead works only with the sounds. Fig. 5 shows the process of audio mining.


Fig. 5 Mining text extracted from audio [10]

APPLICATIONS OF MULTIMEDIA DATA MINING
Multimedia data mining is a major application area for all types of sectors and fields; in today's society multimedia is an essential part of all kinds of work. Some applications of multimedia data mining are as follows.
A. Satellite data is used to assess geographical conditions, agriculture, forestry, and crop measurement, to monitor urban growth, to map pollution and ice for shipping, and to identify sky objects. [5] Pravin M. Kamde, Dr. Siddu P. Algur
B. Audio and video mining are used in movie mining systems.
C. Mining of traffic video sequences is used for vehicle identification, traffic flow, and the spatio-temporal relations of vehicles at intersections. [8] Mei-Ling Shyu, Zongxing Xie, Min Chen and Shu-Ching Chen
D. Video mining is used for detecting events in sports video or in big shops.

CONCLUSION AND FUTURE SCOPE
This paper has described the techniques needed for multimedia data mining. In text mining, two approaches are used for information extraction: in the first, general knowledge is extracted directly from the text, and in the second, structured data is extracted from text documents. In image mining, matching techniques are used to find existing patterns in an image. To handle video mining, one should first know all about image mining.

REFERENCES:
[1] A. Hema and E. Annasaro, "A survey in need of image mining techniques," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), ISSN (Print): 2319-5940, ISSN (Online): 2278-1021, vol. 2, issue 2, February 2013.
[2] Xin Chen, Mihaela Vorvoreanu, and Krishna Madhavan, "Mining social media data for understanding students' learning experiences," IEEE Computer Society, 1939-1382 (c) 2013 IEEE.
[3] Ning Zhong, Yuefeng Li, and Sheng-Tang Wu, "Effective pattern discovery for text mining," IEEE Transactions on Knowledge and Data Engineering, vol. 24, no. 1, January 2012.
[4] K. A. Senthildevi and Dr. E. Chandra, "Data mining techniques and applications in speech processing - a review," International Journal of Applied Research & Studies (IJARS), ISSN 2278-9480, vol. I, issue II, Sept-Nov 2012.
[5] Pravin M. Kamde and Dr. Siddu P. Algur, "A survey on web multimedia mining," The International Journal of Multimedia & Its Applications (IJMA), vol. 3, no. 3, August 2011.
[6] Cory McKay and David Bainbridge, "A musical web mining and audio feature extraction extension to the Greenstone digital library software," 12th International Society for Music Information Retrieval Conference (ISMIR 2011).
[7] Tao Jiang and Ah-Hwee Tan, "Learning image-text associations," IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 2, February 2009.
[8] Mei-Ling Shyu, Zongxing Xie, Min Chen, and Shu-Ching Chen, "Video semantic event/concept detection using a subspace-based multimedia data mining framework," IEEE Transactions on Multimedia, vol. 10, no. 2, February 2008.
[9] Raymond J. Mooney and Un Yong Nahm, "Text mining with information extraction," in Multilingualism and Electronic Language Management: Proceedings of the 4th International MIDP Colloquium, September 2003, Bloemfontein, South Africa, Daelemans, W., du Plessis, T., Snyman, C. and Teck, L. (Eds.), pp. 141-160, Van Schaik Pub., South Africa, 2005.
[10] Bhavani Thuraisingham, "Managing and mining multimedia databases," International Journal on Artificial Intelligence Tools, vol. 13, no. 3, pp. 739-759, 20 March 2004.
[11] Raymond J. Mooney and Razvan Bunescu, "Mining knowledge from text using information extraction," SIGKDD Explorations, vol. 7, issue 1.
[12] Ajay Divakaran, Kadir A. Peker, Shih-Fu Chang, Regunathan Radhakrishnan, and Lexing Xie, "Video mining: pattern discovery versus pattern recognition," IEEE International Conference on Image Processing (ICIP), TR2004-127, December 2004.

















Design of Universal Shift Register Using Pulse Triggered Flip Flop
Indhumathi R.¹, Arunya R.²
¹Research Scholar (M.Tech), VLSI Design, Department of ECE, Sathyabama University, Chennai
²Assistant Professor, VLSI Design, Department of ECE, Sathyabama University, Chennai
E-mail: r.indhumathi12@gmail.com
ABSTRACT - Universal shift registers, like all other types of registers, are used in computers as memory elements. Flip-flops are an inherent building block in universal shift register design. To achieve universal shift registers that deliver high performance while also being power efficient, careful attention must be paid to the design of the flip-flops. Several fast low power flip-flops, called pulse triggered flip-flops (PTFF), are analyzed and used to design the universal shift registers. The paper presents a modified design for an explicit pulse triggered flip-flop with reduced transistor count for low power and high performance applications. HSPICE simulation results of the shift register at a frequency of 1 GHz indicate an improvement in power-delay product with respect to the existing pulse triggered flip-flop configurations in CMOS technology.

Keywords: MOSFET, Pulse triggered flip flop, universal shift registers, low power, delay, power delay product
INTRODUCTION
Flip-flops are the basic storage elements used in all types of digital circuit design. Conventional master-slave flip-flops are made up of two stages and are characterized by a hard-edge property, whereas pulse triggered flip-flops reduce the two stages to one and are characterized by a soft-edge property [10]. Nowadays pulse triggered flip-flops are considered an alternative to the conventional master-slave design [7]. A pulse triggered flip-flop consists of a pulse generator for the strobe signal and a latch for data storage. Since the pulses are generated on the transition edges of the clock signal and have a very narrow pulse width, the latch acts like an edge triggered flip-flop [3]. A PTFF uses a conventional latch design clocked by a short pulse train, and thus also acts as a flip-flop. The advantages of the pulse triggered flip-flop are its simpler circuit complexity, the higher toggle rate it allows for high speed operation, and the time borrowing it permits across cycle boundaries. To achieve low power in high speed regions, the available low power techniques are conditional capture, conditional precharge, conditional discharge, conditional data mapping, and clock gating [3].
EXISTING PULSE TRIGGERED FLIP FLOP
An explicit type pulse triggered structure and a modified true single phase clock latch based on a signal feed-through scheme are shown in Fig. 1.

Fig 1 Existing pulse triggered flip flop
The key idea is to provide a signal feed-through from the input source to the internal node of the latch, which facilitates extra driving to shorten the transition time and enhances both power and speed performance. The design is achieved by employing a simple pass transistor: with the signal feed-through scheme, a boost is obtained from the input source via the pass transistor and the delay can be greatly shortened [3].




PROPOSED PULSE TRIGGERED FLIP FLOP
The proposed system is designed with a signal feed-through scheme without feedback circuits, so it can only implement sequential circuits that do not require a feedback operation, as shown in Fig. 2. In addition to the pass transistor of the existing system, a pMOS transistor controlled by the clock signal is used to reduce power.

Fig 2 Proposed Pulse Triggered Flip Flop
UNIVERSAL SHIFT REGISTER
A universal shift register is an integrated logic circuit that can transfer data in three different modes; here it is designed using the pulse triggered flip-flop, as shown in Fig. 3. Like a parallel register it can load and transmit data in parallel; like shift registers it can load and transmit data serially, through left or right shifts. In addition, the universal shift register can combine the capabilities of both parallel and shift registers to accomplish tasks that neither basic type of register can perform on its own.

Fig 3: Universal Shift Register

For instance, on a particular job a universal register can load data serially and then output it in parallel. Universal shift registers, like all other types of registers, are used in computers as memory elements [11]. Although other types of memory devices are used for the efficient storage of very large volumes of data, from a digital system perspective computer memory means registers; in fact, all operations in a digital system, such as multiplication, division, and data transfer, are performed on registers. Due to the increasing demand for battery operated portable handheld electronic devices such as laptops, palmtops, and wireless communication systems (personal digital assistants and personal communicators), the focus of the VLSI industry has shifted towards low power and high performance circuits. Flip-flops and latches are the basic sequential elements used for realizing digital systems like the universal shift register.
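As a behavioural illustration of the three transfer modes plus hold, the sketch below models a 4-bit universal shift register; the conventional s1 s0 mode encoding is an assumption for illustration, not taken from the paper's schematic.

```python
def usr_step(q, s, serial_in=0, parallel_in=None):
    """One clock step of a 4-bit universal shift register."""
    if s == (0, 0):                     # hold
        return list(q)
    if s == (0, 1):                     # shift right: serial_in enters the MSB end
        return [serial_in] + q[:-1]
    if s == (1, 0):                     # shift left: serial_in enters the LSB end
        return q[1:] + [serial_in]
    return list(parallel_in)            # (1, 1): parallel load

q = [0, 0, 0, 0]
q = usr_step(q, (1, 1), parallel_in=[1, 0, 1, 1])   # load 1011
q = usr_step(q, (0, 1), serial_in=0)                # shift right -> 0101
q = usr_step(q, (1, 0), serial_in=1)                # shift left  -> 1011
print(q)
```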


PERFORMANCE ANALYSIS
In CMOS design, the average power, delay and power-delay product of the universal shift register based on the existing pulse triggered flip-flop are analyzed in 130 nm technology, as shown in Table 1.
Table 1 Universal Shift Register Using Existing Pulse Triggered Flip Flop in 130 nm Technology

Design                      Power (µW)   Delay (ps)   Power Delay Product (fJ)
Universal Shift Register    684.4        113.70       77.816
                                         119.38       81.703
                                         119.09       81.505
                                         111.75       76.481

In CMOS design, the average power, delay and power delay product of the existing Pulse Triggered Flip Flop based universal shift register in 22nm technology are analyzed as shown in Table 2.
Table 2 Universal Shift Register Using Existing Pulse Triggered Flip Flop In 22nm Technology

DESIGN: UNIVERSAL SHIFT REGISTER (PULSE TRIGGERED FLIP FLOP)

POWER (µW)   DELAY (ps)   POWER DELAY PRODUCT (fJ)
13.46        14.399       0.1938
             14.825       0.1995
             15.089       0.2030
             13.839       0.1862

In CMOS design, the average power, delay and power delay product of the existing Pulse Triggered Flip Flop based universal shift register in 16nm technology are analyzed as shown in Table 3.
Table 3 Universal Shift Register Using Existing Pulse Triggered Flip Flop In 16nm Technology

DESIGN: UNIVERSAL SHIFT REGISTER (PULSE TRIGGERED FLIP FLOP)

POWER (µW)   DELAY (ps)   POWER DELAY PRODUCT (fJ)
6.473        10.699       0.0069
             12.012       0.0077
             13.416       0.0086
             12.239       0.0079
CONCLUSION
The pulse triggered flip flop based on the signal feed-through scheme is used to design universal shift registers. The universal shift registers are designed using the existing and proposed pulse triggered flip flops in CMOS nanometer technologies to achieve low power, low delay and low power delay product.


REFERENCES:

[1] Guang-Ping Xiang, Ji-Zhang Shen, Xue-Xiang Wu and Liang Geng (2013), "Design of a low power Pulse Triggered Flip-Flop with conditional clock techniques," IEEE, pp. 122-123.

[2] Jin-Fa Lin (2012), "Low power pulse-triggered Flip-Flop design based on a signal feed-through scheme," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., pp. 1-3, 2012.

[3] James Tschanz, Siva Narendra, Zhan Chen, Shekhar Borkar, Manoj Sachdev, Vivek De, "Comparative delay and energy of single edge triggered & dual edge triggered pulsed flip flops for high performance microprocessors," 2001.

[4] Jinn-Shyan Wang, Po-Hui Yang (1998), "A pulse triggered TSPC flip flop for high speed low power VLSI design applications," IEEE, pp. II93-II95.

[5] Jin-Fa Lin, Ming-Hwa Sheu and Peng-Siang Lang (2010), "A low power dual-mode pulse triggered flip-flop using pass transistor logic," IEEE, pp. 203-204.

[6] Kalarikal Absel, Lijo Manuel, and R.K. Kavitha (2001), "Low power dual dynamic mode pulsed hybrid flip-flop featuring efficient embedded logic."

[7] Logapriya S.P., Hemalatha P. (2013), "Design and analysis of low power pulse triggered flip flop," International Journal of Scientific and Research Publications, Volume 3, Issue 4, April 2013, pp. 1-3.

[8] Mathan N., T. Ravi, E. Logashanmugam (2013), "Design And Analysis Of Low Power Single Edge Triggered D Flip Flop Based Shift Registers," Volume 3, Issue 2.

[9] Susrutha Babu Sukhavasi, Suparshya Babu Sukhavasi, K. Sindhur, Dr. Habibulla Khan (2013), "Design of low power & energy proficient pulse triggered flip flops," International Journal of Engineering Research and Applications (IJERA), Vol. 3, Issue 4, Jul-Aug 2013, pp. 2085-2088.

[10] Saranya M., V. Vijayakumar, T. Ravi, V. Kannan, "Design of Low Power Universal Shift Register," International Journal of Engineering Research & Technology (IJERT), Vol. 2, Issue 2, February 2013.

[11] T. Ravi, Mathan N., V. Kannan, "Design and Analysis of Low Power Single Edge Triggered D Flip Flop," International Journal of Advanced Research in Computer Science and Electronics Engineering, Volume 2, Issue 2, February 2013, ISSN: 2277-9043, pp. 172-175.

[12] Venkateswarlu Adidapu, Paritala Aditya Ratna Chowdary, Kalli Siva Nagi Reddy (2013), "Pulse Triggered flip-flops power optimization techniques for future deep sub-micron applications," International Journal of Engineering Trends and Technology (IJETT), Volume 4, Issue 9, September 2013, pp. 4261-4264.






First Record on Serological Study of Anaplasma marginale Infection in Ovis
aries by ELISA, in District Peshawar, Khyber Pakhtunkhwa, Pakistan
Muhammad Kashif¹, Munawar Saleem Ahmad¹, Iftikhar Fareed²
¹Department of Zoology, Hazara University, Mansehra-23100, Pakistan
²Department of Natural Resource Engineering and Management, University of Kurdistan, Kurdistan, Iraq
E-mail- saleemsbs@gmail.com, Contact- +92-3224000024

ABSTRACT Geographical sero-prevalence of Anaplasma marginale (T) in sheep, Ovis aries (L), was studied from January to May 2012 in district Peshawar, a densely populated area of Pakistan where sheep infection with A. marginale had not been reported before. For this purpose, 376 serum samples were obtained conveniently from 4 different breeds of sheep from different geographical areas of Peshawar. An indirect ELISA using recombinant MSP-5 of A. marginale as antigen was performed. In total, 92/376 (24.47%) of the sheep sera were positive. Among the 6 areas of Peshawar, Peshtakhara and Mashokhel were found highly infected (32.00% each), while Ghazi Baba was comparatively less infected. Age-wise, adults were more frequently infected, especially Turkai ones. This is the first record of A. marginale showing a high rate of infection in sheep in Peshawar, Pakistan; this research should be useful in epidemiological applications.

Keywords: Sheep; epidemiology; A. marginale; MSP-5; indirect ELISA; Peshawar.

Introduction:
Peshawar, the capital city of Khyber Pakhtunkhwa, is the administrative center and central economic hub for the Federally Administered Tribal Areas (FATA) of Pakistan. It is situated in a large valley near the eastern end of the Khyber Pass, between the eastern edge of the Iranian Plateau and the Indus valley; strategically, it has an important location on the crossroads of Central Asia and South Asia. Under the Köppen climate classification, Peshawar features a semi-arid climate with very hot summers and mild winters. It is located at 34°01′N and 71°35′E, with an area of 1,257 km² and a population of 3,625,000 [9] (Figure 1). Sheep, Ovis aries (L), is one of the earliest animals domesticated for agricultural purposes; it is raised for meat (hogget or mutton, lamb), milk and fleece production. These quadrupedal ruminant mammals are members of the order Artiodactyla, the even-toed ungulates typically kept as livestock. Sheep have great economic potential because of their early maturity and high fertility as well as their adaptability to moist environments [7]. However, the benefits derived are far lower than expected, chiefly due to low productivity. Numerous factors are involved in this low productivity, of which the major one is disease [2].
Diseases caused by haemoparasites are most apparent. These haemoparasites are parasites found in the blood of mammals, among which A. marginale is also included. Ticks are biological vectors of Anaplasma sp.; tick, mammalian or bird hosts with persistent Anaplasma sp. infection can naturally serve as reservoirs of infection. Anaplasma sp. are intracellular, gram-negative bacteria and representatives of the
order Rickettsiales, classified into the Rickettsiaceae and Anaplasmataceae families [5]. The tick vector distribution is the factor influencing the transmission of tick-borne diseases [3]. However, for A. marginale, mechanical transmission through contaminated hypodermic needles and biting flies plays an important role [9].
Erythrocytes are phagocytosed by reticulo-endothelial cells during infection. Animals older than 2 years may die due to the infection [7]. Nevertheless, little information is available concerning ovine anaplasmosis, despite the expressive numbers of sheep and goats and the expansion of small ruminant herds in this country. Diagnosis of anaplasmosis in small ruminants is mainly based on the identification of the rickettsia in stained blood smears. However, rickettsemias below 0.1% in chronic carriers are not detected by this method [9]. Serological assays based on Major Surface Protein 5 (MSP-5) of A. marginale have been successfully used for the detection of antibodies against Anaplasma sp. [11]. In this study, we observed for the first time the sero-prevalence of Anaplasma sp. in different breeds of sheep using an indirect ELISA based on recombinant MSP-5 of A. marginale, in Peshawar, Pakistan. This research should be particularly useful for epidemiological applications such as prevalence studies, awareness, education, research and control programs in this region.
Materials and Methods:
Samples collection: A total of 376 blood samples were collected conveniently from the sheep population of different areas of Peshawar from January to May 2012. About 5 ml of blood was collected from the jugular vein of each sheep with a sterile hypodermic syringe into an evacuated tube containing gel and clot activator. Information such as breed, age, and sex was noted. The blood sample was then centrifuged for 5 minutes at 12000 rpm to separate the serum, which was stored at -35 °C until further use [6]. The SVANOVIR A. marginale-Ab ELISA kit (Svanova Biotech AB, Uppsala, Sweden) was used for the detection of specific antibodies against A. marginale in the serum samples. The kit procedure was based on the Indirect Enzyme Linked Immunosorbent Assay (Indirect ELISA). The whole procedure was done according to the protocol given with the kit.
Protocol for Indirect Enzyme Linked Immunosorbent Assay (iELISA):
All reagents were equilibrated to room temperature (18 to 25 °C) before use. Controls and samples were pre-diluted 1/40 in PBS-Tween buffer (e.g., 10 µl sample in 390 µl of PBS-Tween buffer). One hundred microliters of pre-diluted serum sample was added to the selected wells. The plate was then sealed, incubated at 37 °C for 30 minutes, and rinsed 4 times with PBS-Tween buffer. One hundred microliters of conjugate dilution was added to each well; the plate was sealed again, incubated at 37 °C for 30 minutes, and once more rinsed 4 times with PBS-Tween buffer. One hundred microliters of substrate solution was then added to each well and incubated for 30 minutes at room temperature (18 to 25 °C). Finally, one hundred microliters of stop solution was added to each well and mixed thoroughly. The optical density (OD) of the controls and samples was measured at 405 nm in a micro-plate photometer (BIOTEK Instruments Inc., Winooski, Vermont, U.S.A.). Mean OD values were calculated for each of the controls and samples.

Data analysis:
The following formula was used for the percent positivity (PP):
PP = (Sample OD x 100) / Mean positive control OD
Interpretation of the results:
If the calculated percent positivity (PP) was less than 25%, the sample was considered negative; if PP was equal to or greater than 25%, the sample was considered positive.
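
A minimal Python sketch of this calculation and cutoff (the function names and OD values are illustrative, not from the kit protocol):

def percent_positivity(sample_od, mean_positive_control_od):
    # PP = (sample OD x 100) / mean positive control OD
    return (sample_od * 100.0) / mean_positive_control_od

def is_positive(sample_od, mean_positive_control_od, cutoff=25.0):
    # Samples with PP >= 25% are considered positive, otherwise negative.
    return percent_positivity(sample_od, mean_positive_control_od) >= cutoff

print(is_positive(0.30, 1.00))   # PP = 30% -> True (positive)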

Results:
Overall, 92 (24.47%) samples were positive for A. marginale in O. aries. In Ghazi Baba, 19 (19.00%) positive cases were detected, of which 6 (13.33%) were Balkhai, 4 (16.00%) Watanai, 1 (16.67%) Punjabai and 8 (30.77%) Turkai. In Warsak Road, 17 (22.66%) positive cases were detected, of which 3 (12.00%) were Balkhai, 7 (31.82%) Watanai, 3 (18.75%) Punjabai and 4 (33.33%) Turkai. In Badabher, 19 (25.33%) positive cases were detected, of which 5 (20.83%) were Balkhai, 6 (50.00%) Watanai, 4 (16.67%) Punjabai and 4 (26.67%) Turkai. In Peshtakhara, 16 (32.00%) positive cases were detected, of which 4 (40.00%) were Balkhai, 1 (8.33%) Watanai, 3 (23.10%) Punjabai and 8 (53.33%) Turkai. In Mashokhel, 16 (32.00%) positive cases were detected, of which 4 (33.33%) were Balkhai, 4 (25.00%) Watanai, 3 (33.33%) Punjabai and 5 (38.46%) Turkai. In Barha, 8 (32.00%) positive cases were detected, of which 2 (28.57%) were Balkhai, 3 (27.27%) Watanai, 3 (37.50%) Punjabai and 0 (0.00%) Turkai (Table 1).
The infection was higher in Peshtakhara, Mashokhel and Barha, and lower in Ghazi Baba as compared to the other areas. Of the 17 (18.28%) positive Balkhai males, 9 (16.07%) were adult and 8 (21.62%) were young; of the 21 (29.57%) positive Watanai males, 16 (40.00%) were adult and 5 (16.13%) were young; of the 12 (20.68%) positive Punjabai males, 7 (22.58%) were adult and 5 (18.52%) were young; and of the 22 (33.85%) positive Turkai males, 6 (14.28%) were adult and 14 (60.87%) were young (Table 2).
Of the 6 (19.35%) positive Balkhai females, 5 (41.67%) were adults and 1 (5.26%) was young; of the 5 (19.23%) positive Watanai females, 2 (14.28%) were adults and 3 (25.00%) were young; of the 4 (22.22%) positive Punjabai females, 1 (12.50%) was adult and 3 (30.00%) were young; and of the 7 (50.00%) positive Turkai females, 4 (50.00%) were adults and 3 (50.00%) were young (Table 3).
Discussion:
Research on sheep anaplasmosis (A. marginale) is rare and little literature is available. The frequency of sero-positivity of sheep anaplasmosis in this research was 24.47%, which is very low as compared to the prevalence of sero-positive sheep found by Hornok et al. [6] (99.4%) in Hungary, and high as compared to the prevalence found by Cabral et al. [4] (8.92%). A sero-prevalence of 16.17% was found by Ramos et al. [10] in Ibimirim county, a semi-arid region of Pernambuco State, Brazil, using monoclonal antibody ANAF16C1, and of 75.0% by De La Fuente et al. [5] in Sicily, Italy, using a competitive ELISA based on recombinant
MSP-5 of A. marginale. The low sero-prevalence rate in this research work may be due to the low tick vector population in the Peshawar area. However, some ticks were also observed on sheep during blood sample collection. This result represents the first description of antibodies to Anaplasma sp. in sheep from Peshawar, Pakistan. Further studies are required to understand the epidemiology of Anaplasma sp. infection in sheep in Pakistan, particularly to define which species is involved and its possible impacts and vectors in animal production and in public health.

Figure 1. Map of District Peshawar, Pakistan (Google, 2012)
Table 1. Area wise collected and positive blood samples for A. marginale by indirect Enzyme Linked Immunosorbent Assay (iELISA) in sheep during January-May, 2012 in Peshawar, Pakistan.

n1, n2, n3 and n4: total number of collected samples of the Balkhai, Watanai, Punjabai and Turkai breeds, respectively.
P: positive samples for A. marginale.

S No.  Area                   Total   Positive (%)   Balkhai n1 / P (%)   Watanai n2 / P (%)   Punjabai n3 / P (%)   Turkai n4 / P (%)
1      Ghazi Baba, Ring road   100    19 (19.00)     45 / 6 (13.33)       25 / 4 (16.00)        6 / 1 (16.67)        26 / 8 (30.77)
2      Warsak Road              75    17 (22.66)     25 / 3 (12.00)       22 / 7 (31.82)       16 / 3 (18.75)        12 / 4 (33.33)
3      Badabher                 75    19 (25.33)     24 / 5 (20.83)       12 / 6 (50.00)       24 / 4 (16.67)        15 / 4 (26.67)
4      Peshtakhara              50    16 (32.00)     10 / 4 (40.00)       12 / 1 (8.33)        13 / 3 (23.10)        15 / 8 (53.33)
5      Mashokhel                50    16 (32.00)     12 / 4 (33.33)       16 / 4 (25.00)        9 / 3 (33.33)        13 / 5 (38.46)
6      Barha                    26     8 (32.00)      7 / 2 (28.57)       11 / 3 (27.27)        8 / 3 (37.50)         0 / 0 (0.00)
Table 2. Male age wise collected and positive blood samples for A. marginale by indirect Enzyme Linked Immunosorbent Assay (iELISA) in sheep during January-May, 2012 in Peshawar, Pakistan.
*More than one year; **Less than one year

S No.  Breeds    Total samples  Male samples  Male +v (%)  Total *adult  Adult +v (%)  Total **young  Young +v (%)
1      Balkhai   124            93            17 (18.28)   56             9 (16.07)    37              8 (21.62)
2      Watanai    97            71            21 (29.57)   40            16 (40.00)    31              5 (16.13)
3      Punjabai   76            58            12 (20.68)   31             7 (22.58)    27              5 (18.52)
4      Turkai     81            65            22 (33.85)   42             6 (14.28)    23             14 (60.87)

Table 3. Female age wise collected and positive blood samples for A. marginale by indirect Enzyme Linked Immunosorbent Assay (iELISA) in sheep during January-May, 2012 in Peshawar, Pakistan.
*More than one year; **Less than one year

S No.  Breeds    Total samples  Female samples  Female +v (%)  Total *adult  Adult +v (%)  Total **young  Young +v (%)
1      Balkhai   124            31               6 (19.35)     12             5 (41.67)    19              1 (5.26)
2      Watanai    97            26               5 (19.23)     14             2 (14.28)    12              3 (25.00)
3      Punjabai   76            18               4 (22.22)      8             1 (12.50)    10              3 (30.00)
4      Turkai     81            14               7 (50.00)      8             4 (50.00)     6              3 (50.00)

Acknowledgments:
We are grateful to Dr. Ghufran Ullah, Dr. Ikhwan Khan and Dr. Ijaz Khan, Senior Researchers, Veterinary Research Institute (VRI), Peshawar, for their full support and cooperation at every step of the current research work. The experiments comply with the current laws of the country in which they were performed.
REFERENCES:
1. Akerejola, O.O., Schillhorn van Veen, T.W., Njoku, C.O. 1979. Ovine and caprine diseases in Nigeria: a review of economic losses. Bulletin of Animal Health and Production in Africa, 27, 65-70.
2. Bazarusanga, T., Geysen, D., Vercruysse, J., Madder, M. 2007. An update on the ecological distribution of Ixodid ticks infesting cattle in Rwanda: country-wide cross-sectional survey in the wet and the dry season. Experimental and Applied Acarology, 43, 279-291.
3. Cabral, D.A., Araújo, Flábio Ribeiro de, Ramos, Carlos Alberto do Nascimento, Alves, L.C., Porto, W.J.N., Faustino, M.A. da Gloria. 2009. Serological survey of Anaplasma sp. in sheep from the State of Alagoas, Brazil. Revista Brasileira de Saúde e Produção Animal, 10(3), 708-713.
4. Dumler, J.S., Barbet, A.F., Bekker, C.P.J., Dasch, G.A., Palmer, G.H., Ray, S.C., Rikihisa, Y. and Rurangirwa, F.R. 2001. Reorganization of genera in the families Rickettsiaceae and Anaplasmataceae in the order Rickettsiales: unification of some species of Ehrlichia with Anaplasma, Cowdria with Ehrlichia and Ehrlichia with Neorickettsia, descriptions of six new species combinations and designation of Ehrlichia equi and HGE agent as synonyms of Ehrlichia phagocytophila. International Journal of Systematic and Evolutionary Microbiology, 51, 2145-2165.
5. Hornok, S., Elek, V., De La Fuente, J., Naranjo, V., Farkas, R., Majoros, G., Földvári, G. 2007. First serological and molecular evidence on the endemicity of A. ovis and A. marginale in Hungary. Veterinary Microbiology, 122(4), 316-322.
6. Kashif, M. and Ahmad, M.S. 2014. Geographical seroprevalence of A. marginale infection by ELISA in O. aries, in district Peshawar, Pakistan. Journal of Zoology Studies, 1(2), 15-18.
7. Kocan, K.M., de la Fuente, J., Guglielmone, A.A., Melendez, R.D. 2003. Antigens and alternatives for control of A. marginale infection in cattle. Clinical Microbiology Reviews, 16, 698-712.
8. Palmer, G.H. 1992. Development of diagnostic reagents for anaplasmosis and babesiosis. In: Dolan, T.T. (Ed.), Recent developments in the control of anaplasmosis, babesiosis and cowdriosis. English Press, International Laboratory for Animal Diseases, Nairobi, pp. 56-66.
9. Perveen, F. and Kashif, M. 2012. Comparison of infestation of gastrointestinal helminth parasites in locally available equines in Peshawar, Pakistan. Res. Opin. Anim. Vet. Sci., 2(6), 412-417.
10. Potgieter, F.T., Stoltsz, W.H. 2004. Bovine anaplasmosis. In: Coetzer, J.A.W., Tustin, R.C. (Eds.), Infectious Diseases of Livestock, vol. I. Oxford University Press Southern Africa, Cape Town, pp. 594-616.
11. Ramos, R.A.N., Ramos, C.A.N., Araújo, F.R., Melo, E.S.P., Tembue, A.A.S., Faustino, M.A.G., Alves, L.C., Rosinha, G.M.S., Elisei, C. and Soares, C.O. 2008. Detecção de anticorpos para Anaplasma sp. em pequenos ruminantes no semi-árido do Estado de Pernambuco, Brasil. Revista Brasileira de Parasitologia Veterinária, 17(2), 115-117.
12. Strik, N.I., Alleman, A.R., Barbet, A.F., Sorenson, H.L., Wamsley, H.L., Gaschen, F.P., Luckschander, N., Wong, S., Foley, J.E., Bjoersdorff, A. and Stuen, S. 2007. Characterization of A. phagocytophilum major surface protein 5 and the extent of its cross-reactivity with A. marginale. Clinical and Vaccine Immunology, 14(3), 262-268.




Mixing Wind Power Generation System with Energy Storage Equipments
Mohammad Ali Adelian¹
¹Research Scholar, Email- Ma_adelian@yahoo.com

ABSTRACT With the advance in wind turbine technologies, the cost of wind energy has become competitive with other fuel-based generation resources. Due to the price hike of fossil fuels and the concern over global warming, the development of wind power has progressed rapidly over the last decade. The annual growth rate of wind generation installations has exceeded 26% since the 1990s. Many countries have set goals for high penetration levels of wind generation, and several large-scale wind generation projects have recently been implemented all over the world. It is economically beneficial to integrate very large amounts of wind capacity in power systems. Unlike traditional generation facilities, however, wind turbines present technical challenges to the electric power system. The distinct feature of wind energy is its intermittent nature. Since it is difficult to predict and control the output of wind generation, its potential impacts on the electric grid are different from those of traditional energy sources. At high penetration levels, extra fast-response reserve capacity is needed to cover the shortfall of generation when a sudden deficit of wind takes place. However, this requires capital investment and infrastructure improvement. To enable proper management of the uncertainty, this paper presents an approach to make wind power a more reliable source of both energy and capacity by using energy storage devices. Mixing the wind power generation system with energy storage will reduce the fluctuation of wind power. Since the storage system requires capital investment, it is important to estimate reasonable storage capacities for desired applications. In addition, energy storage applications for reducing the output variation and improving dynamic stability during wind gusts and severe faults are also studied.

Keywords Wind Power Generation, Conversion System, Energy Storage, Batteries, Pumped Water, Compressed Air, Steady State Power Flow, Model of the Wind Turbine and Energy Storage.
INTRODUCTION
The development of wind power has grown rapidly over the last decade, largely due to improvements in the technology, the provision of government energy policy, public concern about global warming, and concern over the limited resources of conventional fuel-based generation [1]. As fossil fuels cause serious environmental pollution, wind energy is one of the most attractive clean alternative energy sources, and it is one of the most mature and cost-effective resources among the different renewable energy technologies. Wind energy has gained extensive interest and become one of the most promising renewable alternatives to conventional fuel-based power resources. Despite the various benefits of wind energy, the integration of wind power into the grid system is difficult to manage. The distinct feature of wind energy compared with other energy resources is that its produced energy is intermittent. Because wind power is an unstable source, its impacts on the electric grid are different from those of traditional energy sources.

Challenge
Due to its intermittent nature and partial unpredictability, wind power production introduces more uncertainty into operating a power grid. The major challenge in using wind as a source of power is that it may not be available when electricity is needed. Excess wind power has driven the wholesale electricity price into negative territory in the morning, while reduction of wind generation has caused price spikes in the afternoon. Thus, the uncertainty of wind power may create other issues for power system operation. For that reason, this paper studies the use of energy storage equipment to reduce the uncertainty and negative impact of wind generation. The integration of energy storage systems and wind generation will enhance grid reliability and security. An energy storage system can shift the generation pattern and smooth the variation of wind power over a desired time horizon. It can also be used to mitigate possible price hikes or sags. However, this requires significant capital investment and possible infrastructure improvement. It is important to perform cost-benefit analysis to determine the proper size of energy storage facilities for the desired operations.

Wind Power Generation
The mechanical power extracted by a wind turbine is commonly formulated as

P = (1/2) ρ π R² v³ CP

where ρ is the air density, R is the turbine radius, v is the wind speed and CP is the turbine power coefficient, which represents the power conversion efficiency of the wind turbine. Therefore, if the air density, swept area, and wind speed are constant, the power of the wind turbine will be a function of the power coefficient of the turbine.
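
As a small numerical illustration of this relation (the parameter values are illustrative, not taken from the paper):

import math

def wind_turbine_power(rho, radius, v, cp):
    """Mechanical power P = 0.5 * rho * pi * R^2 * v^3 * Cp, in watts."""
    return 0.5 * rho * math.pi * radius**2 * v**3 * cp

# Illustrative values: sea-level air density, 40 m rotor radius, 12 m/s wind.
p = wind_turbine_power(rho=1.225, radius=40.0, v=12.0, cp=0.4)
print(round(p / 1e6, 2), "MW")   # about 2.13 MW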
Wind Generator Modeling
There are many different generator technologies for wind-power applications in use today. The main distinction can be made between
fixed-speed and variable-speed wind-generator concepts.
Fixed Speed Wind Generator:
A fixed-speed wind generator is usually equipped with a squirrel-cage induction generator whose speed variations are only very limited (see Figure 2.3). Power can only be controlled through pitch-angle variations. Because the efficiency of wind turbines (expressed by the power coefficient CP) depends on the tip-speed ratio λ, the power of a fixed-speed wind generator varies directly with the wind speed. Since induction machines have no reactive power control capabilities, fixed or variable power factor correction systems are usually required for compensating the reactive power demand of the generator.

Figure 2.3 Fixed speed induction generator

Variable Speed Wind Generator: Doubly-Fed Induction and Converter-Driven Generator (DFIG)

In contrast to fixed-speed, variable speed concepts allow operating the wind turbine at the optimum tip-speed ratio and hence at the
optimum power coefficient CP for a wide wind-speed range. Varying the generator speed requires frequency converters that increase
investment costs. The two most-widely used variable-speed wind-generator concepts are the doubly-fed induction generator (figure
2.4) and the converter driven synchronous generator (figure 2.5 and figure 2.6). Active power of a variable-speed generator is
controlled electronically by fast power electronics converters, which reduces the impact of wind-fluctuations to the grid. Additionally,
frequency converters (self-commutated PWM-converters) allow for reactive power control and no additional reactive power
compensation device is required.

Figure 2.4 Doubly-fed induction generator
Figure 2.5 Converter-driven synchronous generator


Figure 2.6 Converter-driven synchronous generator (Direct drive)

Figure 2.5 and Figure 2.6 show two typical concepts using a frequency converter in series with the generator. Generally, the generator can be an induction or a synchronous generator. In most modern designs, a synchronous generator or a permanent magnet (PM) generator is used. In contrast to the DFIG, the total power flows through the converter, so its capacity must be larger and it costs more compared to a DFIG of the same rating. Figure 2.6 shows a direct-drive wind turbine that works without any gearbox. This concept requires a slowly rotating synchronous generator with a large number of pole-pairs [9].
Energy Storage
Energy storage is the storing of some form of energy that can be drawn upon at a later time to perform some useful operations.
Energy storages are defined in this study as the devices that store energy, deliver energy outside (discharge), and accept
energy from outside (charge). Energy storage lets energy producers send excess electricity over the electricity transmission grid to
temporary electricity storage sites that become energy producers when electricity demand is greater. Grid energy storage is
particularly important in matching supply and demand over a 24 hour period of time. Energy storage system can shift the generation
pattern and smooth the variation of wind power over a desired time horizon. These energy storages, so far, mainly include chemical
batteries, pumped water, compressed air, flywheel, thermal, superconducting magnetic energy, and hydrogen.
Batteries:
Battery storage has been used in the very early days of direct-current electric power networks. With the advance in power electronic
technologies, battery systems connected to large solid-state converters have been used to stabilize power distribution networks for
modern power systems. For example, a system with a capacity of 20 megawatts for 15 minutes is used to stabilize the frequency of
electric power produced on the island of Puerto Rico. Batteries are generally expensive, have maintenance problems, and have limited
life spans. One possible technology for large-scale storage is large-scale flow batteries. For example, sodium-sulfur batteries could be
implemented affordably on a large scale and have been used for grid storage in Japan and in the United States. Battery storage has
relatively high efficiency, as high as 90% or better.
Pumped Water:
In many places, pumped storage hydroelectricity is used to even out the daily demand curve, by pumping water to a high storage
reservoir during off-peak hours and weekends, using the excess base-load capacity from coal or nuclear sources. During peak hours,
this water can be used for hydroelectric generation, often as a high value rapid-response reserve to cover transient peaks in demand.
Pumped storage recovers about 75% of the energy consumed, and is currently the most cost effective form of mass power storage. The
main constraint of pumped storage is that it usually requires two nearby reservoirs at considerably different heights, and often requires
considerable capital expenditure. Recently, a new concept has been proposed to use wind energy to pump water in pumped-storage.
Wind turbines that direct drive water pumps for an 'energy storing wind dam' can make this a more efficient process, but are again
limited in total capacity and available location.
Compressed Air:
Another grid energy storage method is to use off-peak or renewably generated electricity to compress the air, which is usually stored
in an old mine or some other kind of geological feature. When electricity demand is high, the compressed air is heated with a small
amount of natural gas and then goes through expanders to generate electricity.
Model of the Wind Turbine and Energy Storage:
A study system consisting of wind turbine and energy storage connected to a power system is modeled using the Power System
Simulation for Engineering (PSS/E) software by Power Technologies Incorporation. In the PSS/E, the wind turbine model is equipped
with an IPLAN program that guides the user in preparing the dynamic modules related to this model. The collection of wind turbines,
wind speed information, wind turbine parameters, generator parameters, and the characteristics of the control systems are included
[16]. This study uses the wind package of PSS/E to simulate the wind power generation system combined with energy storage equipment integrated into a power grid. The dynamic model is shown in Figure 3.3. A user-written model can be used to simulate a wind gust by varying the input wind speed to the turbine model. The GE 3.6 machine has a rated power output of 3.6 MW. The reactive power capability of each individual machine is 0.9 pf, which corresponds to Qmax = 1.74 MVAR and Qmin = -1.74 MVAR, with an MVA rating of 4.0 MVA. The minimum steady-state power output for the WTG model is 0.5 MW. In this study, the GE wind turbine models are used for simulation following the manufacturer's recommendations [17].
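
As a quick arithmetic check (simple trigonometry, not part of the PSS/E model), the stated reactive limits follow from the 4.0 MVA rating and the 0.9 power factor:

import math

s_mva, pf = 4.0, 0.9
q_max = s_mva * math.sin(math.acos(pf))   # 4.0 * 0.4359 = 1.74
print(round(q_max, 2))                    # Qmax = +1.74 MVAR, Qmin = -1.74 MVAR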

Figure 3.3 Dynamic model of GE 3.6 MW wind turbine
For energy storage model, EPRI battery model CBEST of PSS/E is used for simulation in this study. It simulates the dynamic
characteristics of a battery. This model simulates power limitations into and out of the battery as well as AC current limitations at the
converter. The model assumes that the battery rating is large enough to cover the entire energy demand that occurs during the simulation
[18].
Typical Variation of Wind Power:
Figures 4.1 to 4.4 show the storage capacity required to maintain the output of the wind farm constant, from one hour up to one day, under a typical variation of wind power. The storage capacities are 2.036 MWh, 5.508 MWh, 16.233 MWh and 103.451 MWh, respectively. The maximum charging or discharging power ratings are 7.39 MW, 10.66 MW, 13.53 MW and 17.58 MW, respectively, for the different desired operation scenarios. A summary of these estimated values for energy storage in the typical variation of wind power scenario is shown in Table 4.1.





Smaller Variation of Wind Power:
As shown in Figures 4.5 to 4.8, simulation results are presented for the combined system with storage capacity from one hour to one day when the wind speed is relatively stable. As one can see, the required storage capacities and charging/discharging power ratings are smaller than in the previous case. The storage capacities are 0.870 MWh, 1.690 MWh, 3.160 MWh and 10.435 MWh, and the charging/discharging power ratings are 4.63 MW, 4.69 MW, 5.74 MW and 6.26 MW, respectively. A summary of these estimated values for energy storage in the smaller variation of wind power scenario is shown in Table 4.2.





Larger Variation of Wind Power:
Figures 4.9 to 4.12 show the behavior of the system for one hour to one day of storage capacity when there is a large variation of the wind speed. The required storage capacities are 5.164 MWh, 10.524 MWh, 22.819 MWh and 137.863 MWh, respectively. The maximum charging/discharging power rating requirements are 16.20 MW, 23.31 MW, 27.94 MW and 26.69 MW, respectively. A summary of these estimated values for energy storage in the larger variation of wind power scenario is shown in Table 4.3.




Steady State Power Flow Result:
The purpose of this power flow study is to observe the potential system impact during normal and contingency conditions after the proposed 39.6 MW wind farm is interconnected with the grid system. The contingency analysis considers the impact of the new wind power on transmission line loading, transformer facility loading, and transmission bus voltage during outages of transmission lines and/or transformers. This study assumes that the energy storage system keeps the 39.6 MW power output flowing from wind collector bus 350 to the grid; therefore, the power flow result with energy storage equipment is the same as without it. To keep the power system operating safely and reliably, the power flow result needs to comply with the Taipower Grid Planning Standards [20]. The single line
diagram of the system near the wind farm is shown in Figure 4.14. Table 4.4 compares the steady state and single contingency (N-1) power flow results before and after the installation of the wind farm. All power flows in the list are expressed in MVA. For the N-1 analysis, the obtained results showed no negative impact of the wind farm on the power system. The analysis indicated that the installation of the 39.6 MW wind power plant has very little effect on the grid system.





Acknowledgment
I thank my family, especially my mother, for supporting me during my M.Tech studies, and all my friends who helped me during this work. I also thank my college, Bharati Vidyapeeth Deemed University College of Engineering, for supporting me during my M.Tech in Electrical Engineering.


CONCLUSION
Wind generation is the fastest growing renewable energy source all over the world, with an average annual growth rate of more than 26% since 1990 [22]. Annual wind generation markets have been increasing by an average of 24.7% over the last 5 years. The Global Wind Energy Council (GWEC) predicts that the global wind market will reach 240 GW of total installed capacity by the year 2012 [23]. Based on information from studies and operational experience, the report of the European Wind Energy Association (EWEA) concludes that it is perfectly feasible to integrate the targeted wind power capacity of 300 GW in 2030, corresponding to an average penetration level of up to 20% [24, 25]. For high penetration levels of wind power, optimization of the integrated system should be explored, and strategies must be established to modify system configuration and operation practices to accommodate high levels of wind penetration. Regarding the storage capacity option, our study reveals that more energy storage capacity and power rating are required if a longer period of stable wind power output is desired. The simulation results during wind gusts show that combining the wind power generation system with proper energy storage equipment can remove most of the power fluctuation.
REFERENCES
[1] Ming-Shun Lu, Chung-Liang Chang, and Wei-Jen Lee, "Impact of Wind Generation on a Transmission System," 39th North American Power Symposium (NAPS), 2007.
[2] Chai Chompoo-inwai, W.J. Lee, P. Fuangfoo, M. Williams, and J. Liao, "System Impact Study for the Interconnection of Wind Generation and Utility System," IEEE I&CPS Conference, Clearwater Beach, Florida, May 1-6, 2004; IEEE Transactions on Industry Applications, Jan.-Feb. 2005.
[3] Energy Efficiency and Renewable Energy web site, wind energy topic: http://www1.eere.energy.gov/windandhydro/index.html
[4] ERCOT Report, "Analysis of Transmission Alternatives for Competitive Renewable Energy Zones in Texas," December 2006.
[5] M. Hashem Nehrir, "A Course on Alternative Energy Wind/PV/Fuel Cell Power Generation," IEEE PES General Meeting, June 2006.
[6] Debosmita Das, Reza Esmaili, Longya Xu, and Dave Nichols, "An Optimal Design of a Grid Connected Hybrid Photovoltaic/Fuel Cell System for Distributed Energy Production," IEEE IES, IECON 2005, 31st Annual Conference, Nov. 2005.
[7] Z. Chen, Y. Hu, "A Hybrid Generation System Using Variable Speed Wind Turbines and Diesel Units," IEEE IES, IECON 2003, 29th Annual Conference, Nov. 2003.
[8] W.J. Lee, "Wind Generation and its Impact on the System Operation," Renewable Energy Course Presentation at UTA, August 2007.
[9] Markus Pöller and Sebastian Achilles, "Aggregated Wind Park Models for Analyzing Power System Dynamics."
[10] Wikipedia, the free encyclopedia, "Energy storage": http://en.wikipedia.org/wiki/Energy_storage
[11] Wind resource of Taiwan, map from Renewable Energy in Taiwan: http://re.org.tw/com/f1/f1w1.aspx
[12] Taiwan Power Company, Tatan wind speed data between Feb 1, 2006 and May 31, 2007.
[13] K. Methaprayoon, C. Yingvivatanapong, W.J. Lee, and J. Liao, "An Integration of ANN Wind Power Estimation into Unit Commitment Considering the Forecasting Uncertainty," IEEE Industrial and Commercial Power System Technical Conference, May 2005.
[14] Michael R. Behnke and William L. Erdman, "Impact of Past, Present and Future Wind Turbine Technologies on Transmission System Operation and Performance," PIER Project Report, March 9, 2006.
[15] Taipower Company 2006-2007 study planning No. 006-2821-02, "The system study of the Taipower system with the rapid increased wind power generation capacity," interim report, March 2007.
[16] "GE Wind 1.5 MW and 3.6 MW Wind Turbine Generators," PSS/E Dynamic Models Documentation, PTI, Issue 3.0, June 10, 2004.
[17] Nicholas W. Miller, William W. Price, Juan J. Sanchez-Gasca, "Dynamic Modeling of GE 1.5 MW and 3.6 MW Wind Turbine-Generators," Version 3.0, October 27, 2003.
[18] PSS/E Power Operation Manual and Program Application Guide, PTI, August 2004.
[19] Ming-Shun Lu, Chung-Liang Chang, Wei-Jen Lee, and Li Wang, "Combining the Wind Power Generation System with Energy Storage Equipments," IEEE IAS 43rd Annual Meeting, October 2008.
[20] Taiwan Power Company, "TPC's Grid Planning Standards," October 2005.
[21] Kyung Soo Kook, K.J. McKenzie, Yilu Liu and S. Atcitty, "A study on applications of energy storage for the wind power operation in power systems," IEEE PES General Meeting, June 2006.
[22] Global Wind Energy Council (GWEC), "Global Wind 2005 Report," August 2005.
[23] Global Wind Energy Council (GWEC), "Global Wind 2007 Report," Second Edition, May 2008.
[24] EWEA, "Large scale integration of wind energy in the European power supply: analysis," December 2005. http://www.ewea.org/
[25] "First results of IEA collaboration: Design and Operation of Power Systems with Large Amounts of Wind Power," Global Wind Power Conference, September 18-21, 2006, Adelaide, Australia.







Low Power Test Pattern Generation in BIST Schemes
Yasodharan S¹, Swamynathan S M²
¹Research Scholar (PG), Department of ECE, Kathir College of Engineering, Coimbatore, India
²Assistant Professor, Department of ECE, Kathir College of Engineering, Coimbatore, India
Email- yasodharanece@rediffmail.com

ABSTRACT BIST is a viable approach to testing today's digital systems. During self-test, the switching activity of the Circuit Under Test (CUT) is significantly increased compared to normal operation and leads to an increased power consumption which often exceeds specified limits. The proposed method generates Multiple Single Input Change (MSIC) vectors in a pattern, i.e., each of the generated vectors applied to a scan chain is a Single Input Change (SIC) vector. A class of minimum transition sequences is generated by the use of a reconfigurable Johnson counter and a scalable SIC counter. The proposed TPG method is flexible to both test-per-scan and test-per-clock schemes. A theory is also developed to represent and analyze the sequences and to extract a class of MSIC sequences. The proposed BIST TPG decreases the transitions that occur at scan inputs during scan shift operations and hence reduces switching activity in the CUT. As the switching activity is reduced, the power consumption of the circuit is also reduced.
Keywords Built-in self-test (BIST), Circuit Under Test (CUT), Low Power, Single-Input Change (SIC), Test Pattern Generator (TPG), Linear Feedback Shift Register (LFSR).
INTRODUCTION
A digital system is tested and diagnosed several times during its lifetime. Test and diagnosis techniques applied to the system must be fast and have very high fault coverage. One method to ensure this is to specify test as a system function, so that it becomes Built-In Self-Test (BIST). It reduces the complexity and difficulty of VLSI testing, and thereby decreases the cost and reduces reliance upon external (pattern-programmed) test equipment. Test pattern generators (TPGs) comprising a linear feedback shift register (LFSR) are used in conventional BIST architectures. The major drawback of these architectures is that the pseudorandom patterns generated by the LFSR result in high switching activity in the CUT. This can damage the circuit, and its lifetime and product yield will be reduced. In addition, the target fault coverage is achieved by generating very long pseudorandom sequences with the LFSR.
A. Prior Work
Several advanced BIST techniques have been studied and applied. The first class is LFSR tuning. Girard et al. analyzed the impact of an LFSR's polynomial and seed selection on the CUT's switching activity, and proposed a method to select the LFSR seed for energy reduction.
The second class is low-power TPGs. One approach is to design low-transition TPGs. Wang and Gupta used two LFSRs of
different speeds to control those inputs that have elevated transition densities [5]. Corno et al. provided a low power TPG based on the
cellular automata to reduce the test power in combinational circuits [6]. Another approach focuses on modifying LFSRs. The scheme
in [7] reduces the power in the CUT in general and clock tree in particular. In [8], a low-power BIST for data path architecture is
proposed, which is circuit dependent. So the nondetecting subsequences must be determined for each circuit test sequence. Bonhomme
et al. [9] used a clock gating technique where two nonoverlapping clocks control the odd and even scan cells of the scan chain so that
the shift power dissipation is reduced by a factor of two. The ring generator [10] can generate a single-input change (SIC) sequence
which can effectively reduce test power. The third approach focuses on reducing the dynamic power dissipation during scan shift
through gating of the outputs of a portion of the scan cells. Bhunia et al. [11] inserted blocking logic into the stimulus path of the scan
flip-flops to prevent the propagation of the scan ripple effect to logic gates. The need for transistors insertion, however, makes it
difficult to use with standard cell libraries that do not have power-gated cells. In [12], the efficient selection of the most suitable subset
of scan cells for gating along with their gating values is studied.
The third class makes use of the prevention of pseudorandom patterns that do not have new fault detecting abilities [13]-[15]. These architectures apply the minimum number of test vectors required to attain the target fault coverage and therefore reduce the power. However, these methods have high area overhead, need to be customized for the CUT, and start with a specific seed. Gerstendorfer et al. also proposed to filter out nondetecting patterns using gate-based blocking logic [16], which, however, adds significant delay in the signal propagation path from the scan flip-flop to the logic.
Several low-power approaches have also been proposed for scan-based BIST. The architecture in [17] modifies scan-path
structures, and lets the CUT inputs remain unchanged during a shift operation. Using multiple scan chains with many scan-enable (SE)
inputs to activate one scan chain at a time, the TPG proposed in [18] can reduce average power consumption during scan-based tests
and the peak power in the CUT. In [19], a pseudorandom BIST scheme was proposed to reduce switching activities in scan chains.
Other approaches include LT-LFSR [20], a low-transition random TPG [21], and the weighted LFSR [22]. The TPG in [20] can
reduce the transitions in the scan inputs by assigning the same value to most neighboring bits in the scan chain. In [21], power
reduction is achieved by increasing the correlation between consecutive test patterns. The weighted LFSR in [22] decreases energy
consumption and increases fault coverage by adding weights to tune the pseudorandom vectors for various probabilities.
B. Contribution and Paper Organization
This paper presents the theory and application of a class of minimum transition sequences. The proposed method generates SIC
sequences, and converts them to low transition sequences for each scan chain. This can decrease the switching activity in scan cells
during scan-in shifting. The advantages of the proposed sequence can be summarized as follows.
1) Minimum transitions
2) Uniqueness of patterns
3) Uniform distribution of patterns
4) Low hardware overhead consumed by extra TPGs
The rest of this paper is organized as follows. In Section II, the proposed MSIC-TPG scheme is presented. The principle of the new MSIC sequences is described in Section III. In Section IV, the properties of the MSIC sequences are analyzed. In Section V, experimental methods and results on test power, fault coverage, and area overhead are provided to demonstrate the performance of the proposed MSIC-TPGs. Conclusions are given in Section VI.

PROPOSED MSIC-TPG SCHEME
This section develops a TPG scheme that can convert an SIC vector to unique low transition vectors for multiple scan chains. First,
the SIC vector is decompressed to its multiple codewords. Meanwhile, the generated codewords will bit-XOR with a same seed vector
in turn. Hence, a test pattern with similar test vectors will be applied to all scan chains. The proposed MSIC-TPG consists of an SIC
generator, a seed generator, an XOR gate network, and a clock and control block.
















Fig. 1.Symbolic simulation of an MSIC pattern for scan chains

A. Test Pattern Generation Method
Assume there are m primary inputs (PIs) and M scan chains in a full scan design, and each scan chain has l scan cells. Fig. 1 shows the symbolic simulation for one generated pattern. The vector generated by an m-bit LFSR with a primitive polynomial can be expressed as S(t) = S0(t)S1(t)S2(t), ..., Sm-1(t) (hereinafter referred to as the seed), and the vector generated by an l-bit Johnson counter can be expressed as J(t) = J0(t)J1(t)J2(t), ..., Jl-1(t).
In the first clock cycle, J = J0J1J2, ..., Jl-1 is bit-XORed with S = S0S1S2, ..., SM-1, and the results X1, Xl+1, X2l+1, ..., X(M-1)l+1 are shifted into the M scan chains, respectively. In the second clock cycle, J = J0J1J2, ..., Jl-1 is circularly shifted to J = Jl-1J0J1, ..., Jl-2, which is again bit-XORed with the seed S = S0S1S2, ..., SM-1. The resulting X2, Xl+2, X2l+2, ..., X(M-1)l+2 are shifted into the M scan chains, respectively. After l clocks, each scan chain is fully loaded with a unique Johnson codeword, and the seed S0S1S2, ..., Sm-1 is applied to the m PIs.
Since the circular Johnson counter can generate l unique Johnson codewords through circular shifting of a Johnson vector, the circular Johnson counter and the XOR gates in Fig. 1 actually constitute a linear sequential decompressor.
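
To make the decompression concrete, here is a minimal behavioral sketch in Python (not the authors' implementation); the assignment of scan chain i to Johnson stage i mod l is an assumption made for illustration.

def msic_pattern(seed_bits, johnson, l):
    """Load M scan chains: over l clocks, chain i receives the circularly
    shifting Johnson vector tapped at stage (i mod l), XORed with seed bit i."""
    M = len(seed_bits)
    chains = [[] for _ in range(M)]
    j = johnson[:]
    for _ in range(l):                     # l scan-shift clocks per pattern
        for i in range(M):
            chains[i].append(j[i % l] ^ seed_bits[i])
        j = [j[-1]] + j[:-1]               # circular shift of the Johnson vector
    return chains

# Example: l = 4, M = 3. Each chain ends up holding a (possibly complemented)
# circular shift of the Johnson codeword 1100, i.e. a unique SIC vector.
print(msic_pattern(seed_bits=[0, 1, 0], johnson=[1, 1, 0, 0], l=4))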
B. Reconfigurable Johnson Counter
According to the different scenarios of scan length, this paper develops two kinds of SIC generators to generate Johnson vectors
and Johnson codewords, i.e., the reconfigurable Johnson counter and the scalable SIC counter.




Fig. 2. SIC generators. (a) Reconfigurable Johnson counter. (b) Scalable SIC counter.

For a short scan length, we develop a reconfigurable Johnson counter to generate an SIC sequence in time domain. As shown in
Fig. 2(a), it can operate in three modes.
1) Initialization: When RJ_Mode is set to one and Init is set to logic zero, the reconfigurable Johnson counter will be initialized to
all zero states by clocking CLK2 more than l times.
2) Circular shift register mode: When RJ_Mode and Init are set to logic one, each stage of the Johnson counter will output a
Johnson codeword by clocking CLK2 l times.
3) Normal mode: When RJ_Mode is set to logic zero, the reconfigurable Johnson counter will generate 2l unique SIC vectors by
clocking CLK2 2l times.
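
A compact behavioral model of these three modes (a Python sketch mirroring the control signals above, not an RTL description):

class ReconfigurableJohnsonCounter:
    """Behavioral sketch of the counter in Fig. 2(a)."""
    def __init__(self, l):
        self.q = [0] * l

    def clock(self, rj_mode, init):
        if rj_mode == 1 and init == 0:      # initialization: flush in zeros
            self.q = [0] + self.q[:-1]
        elif rj_mode == 1 and init == 1:    # circular shift register mode
            self.q = [self.q[-1]] + self.q[:-1]
        else:                               # normal Johnson counter mode
            self.q = [1 - self.q[-1]] + self.q[:-1]
        return self.q[:]

jc = ReconfigurableJohnsonCounter(l=4)
for _ in range(8):                          # 2l clocks -> 2l unique SIC vectors
    print(jc.clock(rj_mode=0, init=1))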

C. Scalable SIC Counter
When the maximal scan chain length l is much larger than the scan chain number M, we develop an SIC counter named the
scalable SIC counter. As shown in Fig. 2(b), it contains a k-bit adder clocked by the rising SE signal, a k-bit subtractor clocked by
test clock CLK2, an M-bit shift register clocked by test clock CLK2, and k multiplexers. The value of k is the integer of log2 (lM). The
waveforms of the scalable SIC counter are shown in Fig. 2(c). The k-bit adder is clocked by the falling SE signal, and generates a new
count that is the number of 1s (0s) to fill into the shift register. As shown in Fig. 2(b), it can operate in three modes.
1) If SE = 0, the count from the adder is stored to the k-bit subtractor. During SE = 1, the contents of the k-bit subtractor will be
decreased from the stored count to all zeros gradually.
2) If SE = 1 and the contents of the k-bit subtractor are not all zeros, M-Johnson will be kept at logic 1 (0).
3) Otherwise, it will be kept at logic 0 (1). Thus, the needed 1s (0s) will be shifted into the M-bit shift register by clocking CLK2 l
times, and unique Johnson codewords will be applied into different scan chains.
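
As a rough behavioral sketch of this counter's intent (the adder/subtractor bookkeeping is abstracted into a simple count of 1s shifted into the M-bit register; an illustrative simplification, not the gate-level design):

def scalable_sic_load(count, m):
    """Shift 'count' ones followed by zeros into an m-bit register,
    yielding a Johnson-style codeword such as 000011."""
    reg = [0] * m
    for step in range(m):
        bit = 1 if step < count else 0     # M-Johnson held at 1 while the
        reg = [bit] + reg[:-1]             # subtractor's count is nonzero
    return reg

# Successive scan loads with counts 1, 2, 3 give distinct SIC codewords.
for c in (1, 2, 3):
    print(scalable_sic_load(c, m=6))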
Fig. 3. MSIC-TPGs for (a) test-per-clock and (b) test- per-scan schemes.

D. MSIC-TPGs for Test-per-Clock Schemes
The MSIC-TPG for test-per-clock schemes is illustrated in Fig. 3(a). The CUT's PIs X1-Xmn are arranged as an n x m SRAM-like grid structure. Each grid cell has a two-input XOR gate whose inputs are tapped from a seed output and an output of the Johnson counter. The outputs of the XOR gates are applied to the CUT's PIs. The seed generator is an m-stage conventional LFSR operating at the low frequency CLK1. The test procedure is as follows.
1) The seed generator produces a new seed by clocking CLK1 one time.
2) The Johnson counter generates a new vector by clocking CLK2 one time.
3) Repeat step 2 until 2l Johnson vectors are generated.
4) Repeat steps 1-3 until the expected fault coverage or test length is achieved.
E. MSIC-TPGs for Test-per-Scan Schemes
The MSIC-TPG for test-per-scan schemes is illustrated in Fig. 3(b). The stage count of the SIC generator is the same as the maximum scan length, and the width of the seed generator is not smaller than the number of scan chains. The vectors produced by the seed generator and the SIC counter drive the inputs of the XOR gates, whose outputs are applied to the M scan chains, respectively; the outputs of the seed generator and XOR gates are applied to the CUT's PIs. The test procedure is as follows.
1) The seed circuit generates a new vector by clocking CLK1 one time.
2) RJ_Mode is set to 0. The reconfigurable Johnson counter operates in the Johnson counter mode and generates a Johnson vector by clocking CLK2 one time.
3) After a new Johnson vector is generated, RJ_Mode and Init are set to one. The reconfigurable Johnson counter operates as a circular shift register, and generates l codewords by clocking CLK2 l times.
4) Repeat steps 2-3 until 2l Johnson vectors are generated.
5) Repeat steps 1-4 until the expected fault coverage or test length is achieved.

PRINCIPLE OF MSIC SEQUENCES
The main objective of the proposed algorithm is to reduce the switching activity. In order to reduce the hardware overhead, linear relations are selected between consecutive vectors or within a pattern, which allows a sequence to be generated with a sequential decompressor, facilitating hardware implementation.
Finally, uniformly distributed patterns are desired to reduce the test length (number of patterns required to achieve a target fault
coverage) [21]. This section aims to extract a class of test sequences that meets these requirements.

PROPERTIES OF MSIC SEQUENCES
A. Switching Activity Reduction
For test-per-clock schemes, M segments of the CUT's primary inputs are applied with M unique SIC vectors. The mean input transition density of the CUT is close to 1/l. For test-per-scan schemes, the CUT's PIs are kept unchanged during the 2l² shifting-in clock cycles, and the transitions of a Johnson codeword are not greater than 2. Therefore, the mean input transition density of the CUT during scan-in operations is less than 2/l.
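
These bounds are easy to check numerically; the sketch below counts bit flips across the 2l vectors of an l-bit Johnson sequence and reproduces the 1/l density (illustrative code, not from the paper):

def johnson_sequence(l):
    q, seq = [0] * l, []
    for _ in range(2 * l):
        q = [1 - q[-1]] + q[:-1]           # one Johnson counter step
        seq.append(q[:])
    return seq

def mean_transition_density(seq):
    # Fraction of bits that change between consecutive vectors.
    flips = sum(a != b for u, v in zip(seq, seq[1:]) for a, b in zip(u, v))
    return flips / (len(seq[0]) * (len(seq) - 1))

print(mean_transition_density(johnson_sequence(8)))   # exactly 1/l = 0.125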
B. Uniform Distribution of MSIC Patterns
If test patterns are not uniformly distributed, there might be some inputs that are assigned the same values in most test patterns.
Hence, faults that can only be detected by patterns that are not generated may escape, leading to low fault coverage. Therefore,
uniformly distributed test patterns are desired for most circuits in order to achieve higher fault coverage [5].
C. Relationship Between Test Length and Fault Coverage
The test length of conventional LFSR methods is related to the initial test vector. In other words, the number of patterns needed to hit the target fault coverage depends on the initial vector in conventional LFSR TPGs [21].
PERFORMANCE ANALYSIS
To analyze the performance of the proposed MSIC-TPG, experiments are conducted on ISCAS85 benchmarks and on standard full-scan designs of the ISCAS89 benchmarks. The performance simulations are carried out with Xilinx ISE 12.3 and the ISim simulator, and fault simulations are carried out with the ISim simulator. Synthesis is carried out with Xilinx ISE 12.3 based on a 45-nm typical technology. The test frequency is 100 MHz, and the power supply voltage is 1.1 V. The test application method is test-per-clock for the ISCAS85 benchmarks.

Fig. 4. Waveforms of (a) LFSR, (b) reconfigurable Johnson counter, and (c) multiple single input change.

Table 1: Total and Peak Power Reduction of CUTs

CUT      Total Power (µW)       Peak Power (µW)
         MSIC      LFSR         MSIC      LFSR
C2670    19.9      38.55        312.4     433.1
C3540    46.6      81.44        755.5     918.3
C5315    55.1      110          821.8     1157
C6288    274.8     366.2        1994      2363
C7552    69.6      137          1012      1502

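From the total-power columns of Table 1, the relative savings of MSIC over the LFSR can be computed directly; the short Python snippet below reproduces the percentages (the values are copied from the table above):

table1_total = {  # (MSIC, LFSR) total power, copied from Table 1
    "C2670": (19.9, 38.55), "C3540": (46.6, 81.44), "C5315": (55.1, 110.0),
    "C6288": (274.8, 366.2), "C7552": (69.6, 137.0),
}
for cut, (msic, lfsr) in table1_total.items():
    print(f"{cut}: {100 * (1 - msic / lfsr):.1f}% total-power reduction")
# e.g. C2670: 48.4%, C7552: 49.2%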
CONCLUSION
This paper has proposed a low-power test pattern generation method that can be easily implemented in hardware. It also developed a theory to express a sequence generated by linear sequential architectures, and extracted a class of SIC sequences named MSIC. Analysis results showed that an MSIC sequence has the favorable features of uniform distribution, low input transition density, and low dependency between the test length and the TPG's initial states. Combined with the proposed reconfigurable Johnson counter or scalable SIC counter, the MSIC-TPG can be easily implemented and is flexible for both test-per-clock and test-per-scan schemes. For a test-per-clock scheme, the MSIC-TPG applies SIC sequences to the CUT through the SRAM-like grid. For a test-per-scan scheme, the MSIC-TPG converts an SIC vector into low-transition vectors for all scan chains. Experimental and analysis results demonstrate that the MSIC-TPG is scalable with scan length and has negligible impact on the test overhead.

REFERENCES

[1] Y. Zorian, "A distributed BIST control scheme for complex VLSI devices," in 11th Annu. IEEE VLSI Test Symp. Dig. Papers, Apr. 1993, pp. 4-9.
[2] P. Girard, "Survey of low-power testing of VLSI circuits," IEEE Design Test Comput., vol. 19, no. 3, pp. 80-90, May-Jun. 2002.
[3] A. Abu-Issa and S. Quigley, "Bit-swapping LFSR and scan-chain ordering: A novel technique for peak- and average-power reduction in scan-based BIST," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 28, no. 5, pp. 755-759, May 2009.
[4] P. Girard, L. Guiller, C. Landrault, S. Pravossoudovitch, J. Figueras, S. Manich, P. Teixeira, and M. Santos, "Low-energy BIST design: Impact of the LFSR TPG parameters on the weighted switching activity," in Proc. IEEE Int. Symp. Circuits Syst., vol. 1, Jul. 1999, pp. 110-113.
[5] S. Wang and S. Gupta, "DS-LFSR: A BIST TPG for low switching activity," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 21, no. 7, pp. 842-851, Jul. 2002.
[6] F. Corno, M. Rebaudengo, M. Reorda, G. Squillero, and M. Violante, "Low power BIST via non-linear hybrid cellular automata," in Proc. 18th IEEE VLSI Test Symp., Apr.-May 2000, pp. 29-34.
[7] P. Girard, L. Guiller, C. Landrault, S. Pravossoudovitch, and H. Wunderlich, "A modified clock scheme for a low power BIST test pattern generator," in Proc. 19th IEEE VTS VLSI Test Symp., Mar.-Apr. 2001, pp. 306-311.
[8] D. Gizopoulos, N. Krantitis, A. Paschalis, M. Psarakis, and Y. Zorian, "Low power/energy BIST scheme for datapaths," in Proc. 18th IEEE VLSI Test Symp., Apr.-May 2000, pp. 23-28.
[9] Y. Bonhomme, P. Girard, L. Guiller, C. Landrault, and S. Pravossoudovitch, "A gated clock scheme for low power scan testing of logic ICs or embedded cores," in Proc. 10th Asian Test Symp., Nov. 2001, pp. 253-258.
[10] C. Laoudias and D. Nikolos, "A new test pattern generator for high defect coverage in a BIST environment," in Proc. 14th ACM Great Lakes Symp. VLSI, Apr. 2004, pp. 417-420.
[11] S. Bhunia, H. Mahmoodi, D. Ghosh, S. Mukhopadhyay, and K. Roy, "Low-power scan design using first-level supply gating," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 13, no. 3, pp. 384-395, Mar. 2005.
[12] X. Kavousianos, D. Bakalis, and D. Nikolos, "Efficient partial scan cell gating for low-power scan-based testing," ACM Trans. Design Autom. Electron. Syst., vol. 14, no. 2, pp. 28:1-28:15, Mar. 2009.
[13] P. Girard, L. Guiller, C. Landrault, and S. Pravossoudovitch, "A test vector inhibiting technique for low energy BIST design," in Proc. 17th IEEE VLSI Test Symp., Apr. 1999, pp. 407-412.
[14] S. Manich, A. Gabarro, M. Lopez, J. Figueras, P. Girard, L. Guiller, C. Landrault, S. Pravossoudovitch, P. Teixeira, and M. Santos, "Low power BIST by filtering non-detecting vectors," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 28, no. 5, pp. 755-759, May 2009.

Review of a Digital Circuit Using Power Gating Techniques to Reduce Leakage Power

Priyanka Singhal(1), Nidhi Raghav(2), Pallavi Bahl(3)
(1) Research Scholar (M.Tech), Department of ECE, BSAITM, Faridabad, Haryana, India
(2) Guide, Lecturer, Department of ECE, BSAITM, Faridabad, Haryana, India
(3) Co-guide, Lecturer, Department of ECE, BSAITM, Faridabad, Haryana, India

ABSTRACT - Power dissipation must be kept in consideration while implementing a digital circuit, and scaling is applied to improve the circuit's characteristics. Scaling has its own limitations, however, since leakage current flows through the scaled circuit, and this leakage current increases the power dissipated by the circuit. Power gating techniques are used to suppress the leakage current flowing through a digital circuit. This paper considers the nanometer technologies used to obtain the different results. The process discussed above can be implemented and simulated with the Tanner EDA suite, using S-Edit and T-SPICE at 130 nm.
Key Words: Power gating circuits, ground bounce noise, sleep methods, T-SPICE, H-SPICE.

1. Introduction:
VLSI design has increased the efficiency of our technological equipment by amending various parameters, such as reducing the power supply voltage through scaling in the CMOS fabrication process. These measures have reduced the power dissipation but could not overcome the problems of leakage current and circuit delay. To reduce the circuit delay, a lower threshold voltage can be applied, while the leakage current is reduced by the CMOS logic. The multi-threshold CMOS circuit, called a power gating structure, is widely used in portable devices. Power gating makes use of high-threshold, low-leakage devices such as sleep transistors, which isolate the idle blocks from the power supply, from ground, or from both. The technique uses higher-Vt sleep transistors to disconnect VDD from a circuit block when the block is not switching. Power gating is more beneficial than clock gating, but it increases the delay, since circuits modified with power gating must safely enter and exit the power-gated modes. The architecture experiences a trade-off between the leakage power saved and the power dissipated in entering and exiting the low-power modes. Blocks can be shut down by hardware or software: power-reduction operations can be optimized by driver software or, alternatively, by a power management controller. An externally switched power supply is a very basic form of power gating used to achieve long-term leakage power reduction, while power gating proper is better suited to shutting down blocks for short spans of time. CMOS switches provide power to the circuitry, and these switches are controlled by power gating controllers.

Fig 1. Power gated circuit [1]


1.2 Ground Bounce
As devices shrink to 130 nm and below, signal integrity becomes a severe problem in VLSI circuits, and it worsens as circuit dimensions are reduced. The circuit noise is mainly caused by inductive noise. Moore's law leads to faster clock speeds and larger numbers of I/O devices, but it also results in higher noise in the power and ground planes. This inductive noise is sometimes referred to as simultaneous switching noise because it is most pronounced when a large number of I/O drivers switch simultaneously.
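A first-order feel for this simultaneous switching noise can be had from the familiar V = L dI/dt relation, scaled by the number of drivers switching together. The Python sketch below uses purely illustrative component values, not measurements from any particular board:

L_GND = 2e-9            # ground-path inductance per pad, henries (assumed)
N_DRIVERS = 16          # drivers switching simultaneously (assumed)
DI_DT = 50e-3 / 1e-9    # each driver ramping 50 mA in 1 ns (assumed)

v_bounce = L_GND * N_DRIVERS * DI_DT
print(f"estimated ground bounce: {v_bounce:.2f} V")   # 1.60 V for these numbers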


2. Ground bounce reduction
In circuits using sleep transistors, some logic blocks have separate power and ground pads, while other logic may share the power and ground pads. A certain length of PCB (printed circuit board) transmission line connects each pad to the real power or ground. If the PCB layout is poor, the transmission lines contribute large parasitic capacitances and inductances, which can worsen the ground bounce effect when the sleep transistors are switched on. The parasitic capacitances and inductances depend largely on the pad types and the PCB layout; however, empirical data show that these parasitic parameters can be quite considerable. The equivalent circuit of the logic using sleep transistors is shown in Figure 2. There are four parts to the equivalent circuit. Part I is the intrinsic capacitance, inductance, and resistance of the power pad and the corresponding on-board transmission lines; Part II is the equivalent circuit of the functional logic; the sleep transistor is modeled as two resistors in Part III, where R_ST,ON << R_ST,OFF. When the sleep transistor is turned on, it acts as a small resistor with negligible effect on the normal function of the circuit. When the sleep transistor is turned off, its resistance becomes huge and cuts off the leakage path of the logic. Part IV is the intrinsic capacitance, inductance, and resistance of the ground pad and the corresponding on-board transmission lines.
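The two-resistor model of the sleep transistor makes the leakage saving easy to estimate. The toy Python calculation below assumes arbitrary resistance values chosen only to satisfy R_ST,ON << R_ST,OFF:

VDD = 1.2              # supply voltage, volts (assumed)
R_LOGIC = 2e3          # equivalent leakage resistance of the logic block, ohms (assumed)
R_ST_ON, R_ST_OFF = 10.0, 1e9   # sleep transistor on/off resistances (assumed)

for mode, r_st in (("active", R_ST_ON), ("sleep", R_ST_OFF)):
    i_leak = VDD / (R_LOGIC + r_st)
    print(f"{mode}: leakage of about {i_leak:.3e} A")
# Turning the sleep transistor off cuts the leakage by roughly
# R_ST_OFF / R_LOGIC, i.e. several orders of magnitude.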

Fig 2. Ground bounce reduction logic [4]

Fig 3. Equivalent circuit [4]



4. Conclusion

We have reviewed the scaling of power dissipation using power gating techniques. These power gating techniques are used to reduce the leakage current, circuit delay, ground bounce, and related effects.

REFERENCES:
[1] Suhwan Kim, Stephen V. Kosonocky, Daniel R. Knebel, Kevin Stawiasz, and Marios C. Papaefthymiou, "A multi-mode power gating structure for low-voltage deep-submicron CMOS ICs," IEEE Transactions on Circuits and Systems, 2007.
[2] Velicheti Swetha and S. Rajeswari, "Design and power optimization of MTCMOS circuits using power gating," International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, 2013.
[3] Payam Heydari and Massoud Pedram, "Ground bounce in digital VLSI circuits," IEEE J. of Solid-State Circuits.
[4] Ku He, Rong Luo, and Yu Wang, "A power gating scheme for ground bounce reduction during mode transition," IEEE Trans. on VLSI Systems, 2007.
[5] R. Divya and J. Muralidharan, "Leakage power reduction through hybrid multi-threshold CMOS stack technique in power gating switch," International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), 2013.

Socio-technical Interactions in OSS Development

Jasveen Kaur(1), Amandeep Kaur(1), Prabhjot Kaur(1)
(1) Scholar, Guru Nanak Dev University, Amritsar
E-mail: jasveenkaur.1990@gmail.com

ABSTRACT - This study provides directions for open source practitioners to better organize their projects to achieve greater performance. In this research, we try to understand socio-technical interactions in a system development context by examining the joint effect of developer team structure and open source software architecture on OSS development performance. We hypothesize that developer team structure and software architecture significantly moderate each other's effect on OSS development performance. Empirical evidence supports our hypotheses and suggests that larger teams tend to produce more favorable project performance when the project being developed has a high level of structural interdependency, while projects with a low level of structural interdependency require smaller teams in order to achieve better project performance. Moreover, centralized teams tend to have a positive impact on project performance when the OSS project has a high level of structural interdependency. However, when a project has a low level of structural interdependency, centralized teams can impair project performance.

Keywords: Open source software, collaboration network, social network analysis, software architecture, software project performance, network centralization, software structural interdependency

I) INTRODUCTION
In recent years, Open Source Software (OSS) development has caused great changes in the software world. Software developers collaborate voluntarily to develop software that they or their organizations need [1]. Compared with traditional software development, OSS development is unique in that it is self-organized by voluntary developers. Moreover, OSS projects automatically generate detailed and public logs of developer activities and project outputs in the form of repositories, allowing a clear view of their inner workings [1]. These unique aspects of OSS have inspired studies regarding the motivations of individual participants, governance of OSS projects [3], organizational learning in OSS projects [2], and the architecture of OSS code [4]. These OSS studies have increasingly pointed toward the inseparable roles of the social and the technical aspects in shaping OSS development processes and outcomes. Previous OSS studies suggest that OSS development is particularly suited for an examination of the combined effects of the social and the technical in a system development context, since it promotes interactions between software developers and software artifacts. This study focuses on OSS developer team structure as the social aspect and software architecture as the technical aspect of OSS projects. Our general research question is: what is the joint effect of developer team structure and OSS project architecture on OSS development performance? The answer to our research question can serve as a step towards integrating the separate lines of work on OSS development's social and technical dimensions into a coherent research literature, and it can also help OSS practitioners to understand the strengths and weaknesses of the OSS development process.
II) INSEPARABLE ROLE OF THE SOCIAL AND THE TECHNICAL ASPECTS
Researchers have long recognized the relationship between social processes and technical properties in an organizational work context. The organizational information processing theory (OIPT) provides a widely cited perspective on this composite relationship: to achieve optimal performance, there should be a match between the information processing capabilities of the organizational structure (social processes) and the information processing needs of a given task (technical properties) [6]. In the organizational literature, information processing capabilities are typically assessed by looking at the collaboration structure of the workforce, while information processing needs are evaluated by examining the level of interdependency among task units. Cataldo et al. [9] developed a social-technical congruence measure that captures the proportion of collaboration activities that actually occurred in development teams relative to the total number of collaboration activities required by the interdependency among software development task assignments. A significant impact of this social-technical congruence on project productivity manifests the equally important roles of organizational
structure and task characteristics in determining software project performance [9]. In addition, Kim and Umanath [7] employed a
multiplicative interaction model in examining the relationship between development team structure, software task characteristics, and
project performance. This modeling approach revealed that team structure and task characteristics served mutually moderating roles in
affecting project performance outcomes.
In summary, both the organizational and software engineering literatures emphasize the inseparable role of organizational structure
and task characteristics in organizational work performance. Taken together, prior organizational and software engineering research
suggests that social processes and technical properties can play equally important and mutually moderating roles in software
development performance. An understanding of the mutually moderating roles of team structures and project architecture can help
OSS practitioners to realize the social and the technical aspects of OSS development altogether and harvest project performance gains
from their joint effect.
III) SOCIO-TECHNICAL INTERACTIONS IN OSS
1) Social - Development Team Structure:
Open source software development is a form of distributed software development with a large number of contributors; because it uses the Internet and makes sharing free, developers can communicate over any distance, which has made it successful and useful. Owing to the variety of contributors in OSS projects, knowledge sharing within a project can be powerful, and it can even improve the position of individual contributors: users can move into the developer group, and developers can move into the core developer group. Core developers are a small group of expert developers, integrated to control and manage the system. Co-developers are the people who have a direct impact on software development in the project; they also affect the code base and can identify licensing issues.
Following prior research, we view OSS development team structure as an important social aspect of OSS development, since it manifests the information processing capabilities of the OSS workforce [5]. We conceptualize the development team structure according to social network theory [11]. Social network theory models individual actors as nodes of a graph joined by their relationships, depicted as links between the nodes [8]. When relationships are defined as collaborations on a task, the social network is specified as a collaboration network. We choose to generate collaboration networks on an intraproject level; that is, each collaboration network includes the developers of a single OSS project as nodes and collaboration incidences on tasks (i.e., source code files) of the same project as links. An intraproject collaboration network is a close-up view of relationship structures within a particular project. Compared with an interproject (i.e., community-level) network, an intraproject (i.e., project-level) network is more relevant to our research question since it allows us to evaluate how the organizational structure of a particular OSS project team affects the performance of the corresponding project.
In OSS projects, the basic unit of work is a file in the OSS distributed version control system (DVCS), such as git. Hence, we view collaboration tasks as the files in the DVCS system. A collaboration incidence occurs when two developers make code commits to the same source code file. A collaboration network refers to the graph made of open source developers as nodes and the collaboration incidences on the same file as links.
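This construction is straightforward to compute from a commit log. The Python sketch below assumes the log has already been reduced to (developer, file) pairs, which is not a format prescribed by the paper, only a convenient one for illustration:

from collections import defaultdict
from itertools import combinations

def collaboration_edges(commits):
    """commits: iterable of (developer, file_path) pairs from the DVCS log.
    Two developers are linked when they committed to the same file."""
    devs_by_file = defaultdict(set)
    for dev, path in commits:
        devs_by_file[path].add(dev)
    edges = set()
    for devs in devs_by_file.values():
        edges.update(combinations(sorted(devs), 2))
    return edges

log = [("ann", "src/a.c"), ("bob", "src/a.c"), ("cat", "doc/readme")]
print(collaboration_edges(log))   # {('ann', 'bob')}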
We characterize OSS collaboration network structure by two commonly used measures: network size and network centralization. Network size is the number of nodes in a graph; it indicates the overall scope of the collaboration network.
Network size captures the open nature of OSS development, where a large number of developers collaborate to develop software, in contrast to traditional software development, where the number of developers is comparatively lower. Network centralization indicates the extent to which a network is centralized around one or a few nodes. Centralization of a network is measured in comparison to the most central network, the star network. In the star network, one central node connects to all of the other nodes, while all other nodes are connected only to the central node. Any deviation from this structure indicates a reduction in network centralization. Although there are other metrics of collaboration network structure, we chose to focus on these two measures because network size and centralization reflect the major difference between the two alternative philosophies for organizing programming teams: the chief programmer team and the egoless team. The chief programmer team is intentionally small and centralized around a few programming experts. In contrast, the egoless team reflects decentralized communication and collaboration among programmers and is less concerned about
team size. An examination of network size and centralization of OSS teams helps us to understand the link between the unique
characteristics of OSS collaboration network structure and OSS development performance.
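For degree centrality, Freeman's centralization measure can be written in a few lines; the sketch below assumes an undirected network given as a 0/1 adjacency matrix and normalizes against the perfect star, for which it returns 1.0:

def degree_centralization(adj):
    """Freeman degree centralization of an undirected network given as an
    adjacency matrix: 1.0 for a perfect star, lower as ties spread out."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    dmax = max(deg)
    return sum(dmax - d for d in deg) / ((n - 1) * (n - 2))

star = [[0, 1, 1, 1],
        [1, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 0, 0, 0]]
print(degree_centralization(star))   # 1.0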
2) Technical - Software Architecture:
In recent years, the study of software architecture (SA) has emerged as an autonomous discipline requiring its own concepts, formalisms, methods, and tools. SA represents a very promising approach since it handles the design and analysis of complex distributed systems and tackles the problem of scaling up in software engineering. Through suitable abstractions, it provides the means to make large applications manageable.
An important technical aspect of software development projects is the structural interdependency among the processing elements of the software being developed. Software structural interdependency is the strength of association established by a connection from one module to another [4]. In other words, software architecture is a formal way to describe the structural interdependency of a software system in terms of components and their interconnections. By decomposing the overall task into parts and then designing, implementing, or maintaining each individual part, software architecture (or software structural interdependency) provides a feasible way to develop and manage large systems. It reduces the complexity of software development projects to manageable associations among components.
Prior research shows that software architecture is an important indicator of the information processing need of a software project. In general, the load and diversity of information that must be processed by a software project increase with the level of structural interdependency among its processing elements.
3) Interaction Between Team Structure and Software Architecture:
In this study, interaction between the social and the technical aspects in an organizational work context refers to the mutually moderating, or mutually contingent, relationship between team structure (network size and centralization) and software architecture (structural interdependency) in OSS development. In other words, we conceptualize socio-technical interactions in OSS projects as the multiplicative interaction of team structure and software architecture.
IV) HYPOTHESES
We develop our hypotheses following the central argument of OIPT that organizational work performance is jointly determined by the information processing capabilities of the workforce and the information processing needs of the task. As discussed before, the structure of the team collaboration network reflects the information processing capabilities of a development team. The information processing need of the software development process is represented by the structural interdependency of the software being developed. The mutually dependent effects of development team structure and software structural interdependency on project performance are specified below:
A. Network Size and Structural Interdependency
Network size has mixed implications for the information processing capabilities of a team. On one hand, larger networks incur higher coordination and communication costs. On the other hand, larger networks carry more diverse expertise and are better at specialization and division of labour among team members. The overall effect of network size on task performance depends on the structural interdependency of OSS projects. Projects with a lower level of structural interdependency do not take full advantage of the diverse expertise and perspectives in a large team, while they still have to bear the increased communication cost of such a team. Network size can therefore have a negative impact on the performance of these projects. In a project with a high level of structural interdependency, the capability of a large team to process a heavy load of diverse information can produce salient project performance gains, compensating for the communication cost associated with a large team. The negative impact of network size on project performance can therefore be reduced in this scenario.
Reciprocating the effect of network size on project performance, the impact of software structural interdependency on project performance can vary across development teams with different network sizes. In traditional software development, where team size tends to be small, software structural interdependency is often found to increase software development effort, which in turn can impair project performance. However, recent OSS research suggests that OSS development may resist the negative effect of software structural interdependency on development effort due to its self-organized nature. With the motive to adjust development team structure
according to project characteristics, an OSS team can recruit new members when a high level of structural interdependency is perceived. This allows the project to take advantage of the information processing capabilities afforded by a large collaboration network. On the other hand, when a team is unwilling or unable to recruit additional members for a project with a high level of structural interdependency, project performance may be impaired as a result of insufficient information processing capabilities in the team. Therefore,
Hypothesis 1: Collaboration network size and software structural interdependency mutually and positively moderate each other's impact on OSS project performance; that is, the impact of network size on project performance is more likely to be positive when software structural interdependency is higher, and the impact of software structural interdependency on project performance is more likely to be favorable when network size increases.
B. Network Centralization and Structural Interdependency
Similar to network size, network centralization has mixed effects on the information processing capabilities of a team. A centralized team is better at identifying and consolidating expertise within the team, and it incurs lower coordination cost than a chain-like network (low network centralization). However, a centralized team structure imposes a significant information processing load on the central nodes, which can hamper the effectiveness of the whole team. Projects with a low level of structural interdependency do not require much consolidation among knowledge domains; centralization of the project team then offers little benefit, leading to suboptimal project performance. However, as the structural interdependency of a project increases, the advantage of a centralized team structure in identifying and consolidating expertise from a wider range of knowledge domains becomes important. This advantage enables a more centralized team to achieve better performance.
On the other hand, the tendency for software structural interdependency to negatively affect project performance can be particularly strong in a team with a chain-like structure (low centralization), since such a team is relatively ineffective for knowledge consolidation. This tendency can be reduced by a centralized team, since such a team can identify diverse information and coordinate information processing activities. Therefore,
Hypothesis 2: Collaboration network centralization and software structural interdependency mutually and positively moderate each other's impact on OSS project performance; that is, the impact of network centralization on project performance is more likely to be positive in projects with a higher level of structural interdependency, and the impact of software structural interdependency on project performance is more likely to be favorable when network centralization increases.
V) IMPLEMENTATION
The data for the study were collected from Github.com. Git is a distributed version control and source code management (SCM) system with an emphasis on speed. Every Git working directory is a full-fledged repository with complete history and full version tracking capabilities, not dependent on network access or a central server. When you obtain a copy of a repository, you do not just get a snapshot, but the whole repository itself.
A. Data Collection
From the hundreds of OSS projects hosted at Github, we selected a sample of 15 projects for analysis. We selected sample projects that were registered between Jan 2005 and Nov 2005 and that have at least ten developers in the collaboration network. This ensures that the sampled projects have had sufficient elapsed time since starting, so that a significant amount of development activity has already taken place. Collaboration network measures are sensitive to the size of the network; in particular, when the network size is small, some network measures, such as centralization, become meaningless. We therefore restricted our sample to projects with at least ten developers in the collaboration network.
1) Collaboration Network Structure: As discussed earlier, we use network size and network centralization to measure collaboration network structure. Network size is measured by the total count of nodes in the network. The network centralization measure follows the approach proposed by Freeman [8]; it expresses the degree of inequality in a network as a percentage of that of a perfect star network of the same size. The higher the value, the more centralized the network is. We employed the widely used social network analysis software UCINET 6 [12] to compute the structure metrics for the collaboration networks in our sample.

2) Software Structural Interdependency: Although automatic tools (e.g., Lattix) are available for evaluating software architecture, these tools are usually limited to a few programming languages such as C/C++ and Java. The overwhelming amount of source code and the wide range of programming languages in our sample prevent us from measuring software structural interdependency either manually or with such automatic tools. In a study of OSS code, MacCormack et al. pointed out that, in software designs, programmers tend to group source files of a related nature into directories that are organized in a nested fashion. This suggests that the Git file tree structure should be considered in gauging the level of software structural interdependency. We therefore measure software structural interdependency by taking the average number of source code files per folder in the Git tree. A large number of task files per folder indicates a high level of structural interdependency, since files grouped into the same folder are typically related.
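As a sketch, the files-per-folder measure can be computed by walking a local checkout; the path handling below (skipping the .git metadata directory) is an assumption about how one might apply the measure, not a procedure taken from the paper:

import os

def structural_interdependency(repo_root):
    """Average number of files per folder in a working tree, the SSI proxy
    described above; repo_root is an illustrative local checkout path."""
    counts = []
    for dirpath, dirnames, filenames in os.walk(repo_root):
        dirnames[:] = [d for d in dirnames if d != ".git"]  # skip metadata
        if filenames:
            counts.append(len(filenames))
    return sum(counts) / len(counts) if counts else 0.0

# print(structural_interdependency("/path/to/checkout"))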
3) OSS Development Performance: In OSS projects, performance cannot be measured by parameters such as cost of development or development within schedule, because OSS projects generally do not involve a budget or a deadline. Prior research on OSS development has employed various OSS project performance measures, such as OSS developers' perceived project success, the percentage of resolved bugs, the increase in lines of code (LoC), promotion to higher ranks in an OSS project, the number of subscribers associated with a project, and the number of code commits. Among these measures, we choose the number of code commits per developer per day as our measure of OSS development performance. A code commit refers to a change in the working directory through adding, deleting, or changing software code. The number of code commits is an objective measure of project performance that has been used repeatedly by prior studies on the effects of social processes on the technical success of OSS projects. These studies indicate that the number of code commits is particularly suited for an examination of both social and technical factors in OSS development. Our measure, the number of code commits per developer per day, allows us to compare performance across projects with different team sizes and durations.
4) Control Variables: We controlled for the following variables based on the previous literature.
Product Size: In the software engineering literature, product size has been identified as an important factor in the manpower, time, and productivity of a project. Therefore, product size is used as a control variable. Following the literature, we measure product size as the total LoC of an OSS project.
Programming Language: The programming language is another well-recognized factor that may affect software performance. Many projects employ more than one language; Java, C++, and C are the most frequently used. Due to the limited sample size, we created four dummy (binary) variables: "Java", "C++", and "C" to account for the top three most frequently used languages, and "other" to represent all the other languages. A project receives a value of 1 for a language dummy variable if it uses the language in question, and a value of 0 otherwise. In other words, a project has four language values, one for each of the four dummy variables. The "other" language variable was left out of the regression model in order to prevent the dummy variable trap.
License Type: OSS projects use different licensing schemes, and the specific license type used may affect developer motivation and project success owing to the commercial or noncommercial nature of a license. License type is measured as a binary variable. All projects with the most popular OSS license, the GPL (general public license, usually indicating a noncommercial OSS product), are given a value of 1; all other projects have a value of 0.
B. Data Analysis
We apply a linear regression model to verify our hypotheses. Linear regression is a statistical technique for relating a dependent variable to one or more independent variables (predictors). This model employs the ordinary least squares (OLS) technique in hypothesis testing. It captures a linear relationship between the expected value of the dependent variable and each independent variable (when the other independent variables are held fixed). If linearity fails to hold, even approximately, it is sometimes possible to transform either the independent or the dependent variables in the regression model to improve the linearity. Here, the number of code commits per developer per day, network size, and product size are log-transformed to account for the nonlinear relationship between project performance and network size and product size. The other variables remain in their original form, because it is possible for them to take 0 as a value, and a log transformation of these variables would arbitrarily truncate out meaningful data points. We used the IBM SPSS statistical tool for the analysis and results.
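The preprocessing described here (log transforms, mean-centering the interacting variables, forming the product terms) can be sketched in a few lines of Python. The snippet below uses randomly generated placeholder arrays in place of the study's data, and plain numpy least squares rather than SPSS:

import numpy as np

def ols(y, columns):
    """Ordinary least squares via numpy's least-squares solver."""
    X = np.column_stack([np.ones(len(y))] + columns)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Placeholder arrays standing in for the study's 15 observations.
rng = np.random.default_rng(1)
commits, size, ssi, nc, loc = rng.uniform(0.1, 2.0, (5, 15))

y = np.log(commits)                      # log-transformed, as in the text
ns = np.log(size)
ns_c, ssi_c, nc_c = ns - ns.mean(), ssi - ssi.mean(), nc - nc.mean()  # centering
cols = [ssi_c * ns_c, ssi_c * nc_c, ssi_c, ns_c, nc_c, np.log(loc)]
print(ols(y, cols))                      # intercept followed by the six slopes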
C. Results
The model incorporates several interaction effects. Interaction effects represent the combined effects of variables on the dependent measure; when an interaction effect is present, the impact of one variable depends on the level of the other variable. We take code commits per developer per day as the dependent variable, the product of software structural interdependency and network size as the first interaction variable, and the product of software structural interdependency and network centralization as the second interaction variable. In such models, multicollinearity is a common problem. We attempted to correct for multicollinearity by centering the interacting variables on their means; centering simply means subtracting a single value (here, the mean) from all of the data points. The descriptive statistics for the resulting dataset are shown in Table I. The results of the regression analysis are reported in Table II and Table III. Note that the mean values in Table I are uncentered values.




TABLE I
Descriptive Statistics

Variable             N Valid   N Missing   Mean      Std. Deviation   Minimum   Maximum
codecommits          15        0           .2989     .4789            .0098     1.9000
Network size         15        0           2.6557    .8992            -1.7807   1.8893
N/w centralization   15        0           .1694     .1441            -.1495    .4305
SSI                  15        0           3.9189    3.9189           -6.3811   6.0189
Product size         15        0           10.4221   1.8959           8.0380    13.2020
Licence              15        0           .4667     .5164            .0000     1.0000
C                    15        0           .2000     .4140            .0000     1.0000
Cplus                15        0           .2667     .4577            .0000     1.0000
Java                 15        0           .2000     .4140            .0000     1.0000




TABLE II
Model Summary

Model   R      R Square   Adjusted R Square   Std. Error of the Estimate
1       .890   .792       .272                .408635






TABLE III
RESULTS OF LINEAR REGRESSION MODEL (a)

Variable                   B        Std. Error   Beta     Significance   Hypothesis
(Constant)                 -.9857   .1309                 .0433
SSI * Network size         .0465    .0495        .4021    .0401          Supported
SSI * N/w centralization   .1140    .5570        -.1419   .0848          Supported
SSI                        -.0062   .0597        -.5050   .0300
Network size               -.6325   .2870        1.1874   .0920
Network centralization     -.2036   .9232        -.6126   .0350
Product size               -.0579   .1115        -.2294   .0631
Licence                    -.3221   .3766        -.3473   .0441
C                          -.0311   .4419        -.0268   .0947
Cplus                      .1046    .5625        .1000    .0861
Java                       -.2838   .3973        -.2453   .0515

a. Dependent Variable: codecommits (B and Std. Error are the unstandardized coefficients; Beta is the standardized coefficient)
H1 states that collaboration network size and software structural interdependency mutually and positively moderate each other's impact on OSS project performance. This concerns the coefficient of the interaction term of network size and software structural interdependency. As shown in Table III, this coefficient is positive and significant (B = 0.0465) in our model testing results. Hence H1 is supported.
Following Aiken and West [10], we calculated simple slopes of the effect of network size on the number of code commits per developer per day at three values of software structural interdependency (SSI): SSI-high = 3.918, SSI-mean = 0, and SSI-low = -3.918. These three values are one standard deviation above the mean, the mean, and one standard deviation below the mean of the centered SSI values, respectively.
To gain a complete view of the mutually moderating effect of network size and SSI, we also computed the simple slopes of the effect of SSI on the number of code commits per developer per day at one standard deviation above the mean (NS-high = 0.8992), the mean (NS-mean = 0), and one standard deviation below the mean (NS-low = -0.8992) values of network size.

Fig. 1. Simple slopes of network size. Fig. 2. Simple slopes of SSI.
Fig. 1 shows that network size tends to have a negative effect on project performance in terms of the number of code commits per developer per day. However, this negative effect can be mitigated by the interaction between network size and software structural interdependency, since the negative simple slopes of network size become less steep as the structural interdependency values increase.

In order to find the SSI value at which the effect of network size on project performance turns from negative to positive, we set the derivative of the number of code commits with respect to network size to zero (-0.6325 + 0.0465 × SSI = 0). Since the result of this equation is a centered SSI value, we add the mean of SSI to the result to obtain a precise view of the inflection point. The result indicates that when there are more than 18 files per folder in a project, the effect of network size on project performance turns positive.
With respect to the effect of software structural interdependency on project performance, Fig. 2 reveals that this effect is negative in small development teams but positive in large teams. Therefore, the interaction of network size and structural interdependency plays a key role in the relationship between project characteristics and project performance. By setting the derivative of the number of code commits with respect to SSI to zero (-0.0062 + 0.0465 × Network-Size = 0) and adding the mean value of network size to the result, we find that when there are more than 16 members in a project team, the effect of SSI on project performance becomes positive.
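Both inflection points can be reproduced with the coefficients from Table III and the uncentered means from Table I, as a quick arithmetic check:

import math

b_ns, b_ssi, b_int = -0.6325, -0.0062, 0.0465   # coefficients from Table III
ssi_mean, ns_mean = 3.9189, 2.6557              # uncentered means from Table I

ssi_star = -b_ns / b_int + ssi_mean     # where the network-size slope crosses zero
ns_star = -b_ssi / b_int + ns_mean      # where the SSI slope crosses zero (log scale)
print(round(ssi_star, 1))               # 17.5 -> about 18 files per folder
print(round(math.exp(ns_star)))         # 16 members, after undoing the log transform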
H2 proposes that collaboration network centralization and software structural interdependency mutually and positively moderate each other's impact on OSS project performance. As shown in Table III, the coefficient of the interaction term of network centralization and software structural interdependency is positive and significant (B = 0.1140). Thus, H2 is supported. Following the same simple-slope approach as for H1, we calculated the simple slopes of the effect of network centralization (NC) on the number of code commits per developer per day at the SSI-high, SSI-mean, and SSI-low values.

Fig. 3. Simple slopes of centralization. Fig. 4. Simple slopes of SSI.
Meanwhile, the simple slopes of the effect of SSI on the number of code commits per developer per day were calculated at one standard deviation above the mean (NC-high = 0.1441), the mean (NC-mean = 0), and one standard deviation below the mean (NC-low = -0.1441) values of network centralization.
Figs. 3 and 4 suggest that network centralization and software structural interdependency reciprocally remedy each other's negative effect on project performance. Setting the derivative of the number of code commits with respect to network centralization to zero (-0.2036 + 0.1140 × SSI = 0), we found that at a moderate level of software structural interdependency (more than 6 files per folder) the effect of network centralization turns from negative to positive. The derivative analysis of the number of code commits with respect to SSI (-0.0062 + 0.1140 × Network-Centralization = 0) reveals that a network centralization degree of 0.22 is the inflection point at which the effect of structural interdependency turns from negative to positive.
VI) CONCLUSION
A key motivation leading us to undertake this study is the lack of research directed at socio-technical interactions in a system development context. To OSS practitioners, the main implication of our findings is that they can gain the best of both worlds by adopting a hybrid software development process that incorporates the strengths of both the traditional software development model and the recent OSS model. Our empirical analysis demonstrates a feasible way for OSS practitioners to quantify their team structure and software architecture in order to achieve better development performance. For example, Mozilla was redesigned toward a lower level of structural interdependency so that a less focused team distributed across geographic and organizational boundaries could contribute to it. Moreover, the inflection points found in our study can be used as quantitative benchmarks for OSS practitioners to evaluate the socio-technical interactions in their projects.

REFERENCES
[1] Sajad Shirali-Shahreza and Mohammad Shirali-Shahreza, "Various aspects of open source software development," IEEE, 2008.
[2] C. L. Huntley, "Organizational learning in open-source software projects: An analysis of debugging data," IEEE Trans. Eng. Manage., vol. 50, no. 4, pp. 485-493, Nov. 2003.
[3] E. Capra, C. Francalanci, and F. Merlo, "An empirical study on the relationship among software design quality, development effort, and governance in open source projects," IEEE Trans. Softw. Eng., vol. 34, no. 6, pp. 765-782, Nov./Dec. 2008.
[4] Caryn A. Conley and Lee Sproull, "Easier said than done: An empirical investigation of software design and quality in open source software development," in Proceedings of the 42nd Hawaii International Conference on System Sciences, 2009.
[5] Kevin Crowston, "An exploratory study of open source software development team structure," ECIS 2003, Naples, Italy.
[6] J. R. Galbraith, "Organization design: An information processing view," Interfaces, vol. 4, no. 3, pp. 28-36, 1974.
[7] K. K. Kim and N. S. Umanath, "Structure and perceived effectiveness of software development subunits: A task contingency analysis," J. Manage. Inform. Syst., vol. 9, pp. 157-181, 1992.
[8] L. C. Freeman, "Centrality in social networks: Conceptual clarification," Social Netw., 1979.
[9] M. Cataldo, J. D. Herbsleb, and K. M. Carley, "Socio-technical congruence: A framework for assessing the impact of technical and work dependencies on software development productivity," in Proc. Second ACM-IEEE Int. Symp. Empirical Softw. Eng. Meas., 2008, pp. 2-11.
[10] S. G. West and L. S. Aiken, Multiple Regression: Testing and Interpreting Interactions. Newbury Park, CA: Sage, 1991.
[11] G. Madey, V. Freeh, and R. Tynan, "The open source software development phenomenon: An analysis based on social network theory," in Proc. Amer. Conf. Inform. Syst., 2002, pp. 1806-1813.
[12] S. P. Borgatti, M. G. Everett, and L. C. Freeman, UCINET IV Version 1.00. Columbia, SC: Analytic Technologies, 1992.











Environmental Monitoring and Controlling Various Parameters in a Closed Loop

R. Vijayarani(1), S. Praveen Kumar(1)
(1) Scholars, SRM University, Kattankulathur, Chennai
E-mail: vijiece29@gmail.com

ABSTRACT - A smart temperature monitoring and controlling system has been implemented with the use of standard technology, which actively monitors the environmental conditions. The system allows a user to input the desired conditions regarding the surrounding atmosphere's temperature requirements. This paper covers the design and development of a system for monitoring and controlling temperature. The objective of the project is to develop a system that demonstrates intelligent monitoring and control; the system uses ZigBee technology for communication. The effect of temperature on devices and heavy machines is a major concern in many industrial and domestic applications. In such applications the temperature is monitored and controlled through external means such as coolants and heaters, and many industries and domestic users have implemented a variety of such solutions. The project consists of two modules: parameter monitoring and parameter controlling. Monitoring and controlling physical parameters like temperature is of utmost importance. An LM35 temperature sensor is used for the purpose of measuring temperature. With this project we demonstrate a cost-effective, user-friendly system. ZigBee offers many advantages, such as low cost, avoidance of range and obstruction issues, multi-source products, low power consumption, and networks of more than 64,000 connected devices; it also offers a secure environment for communication. A main target for this system is to design and implement it as cost-efficiently as possible.

Keywords: Microcontroller, sensor, LM35, ZigBee, control test, Peltier, PWM
1. INTRODUCTION
In recent years, rapid advancements in embedded system technologies have had a great impact on industry, and a more sophisticated society is evolving. We apply embedded-systems knowledge in warehousing and industry in order to measure and control temperature. Temperature control is the process of maintaining the temperature at a certain level, and it is in common use all over the world. In the era of globalization this process has become an important element, because many applications in daily life, such as warehousing and industrial processes, depend on temperature measurements.
During such processes, conditions need to be monitored frequently in order to ensure functionality and efficiency, especially with respect to temperature, and it is important to study the temperature level recommended for a particular area. Good temperature control is important during the research, reaction, separation, processing, and storage of products and feeds, and is thus a key to product quality; it is also of importance for environmental control and energy conservation. Temperature is an important quantity in daily life, science, and industry: nearly all processes depend on temperature, because heat makes molecules move or vibrate faster, resulting in faster chemical reactions. Accurate measurement of the temperature of products in retail frozen food cabinets requires particular care. Small items warm up quickly when removed from the cabinet or handled; drilling a hole, even with a precooled drill, will cause errors unless this can be done without removing the package from its position in the cabinet. If the product is loosely packed, it is easier and quicker to insert the sensor into the centre of the package, with minimum handling and without moving the package from its original position.

The temperature of stacked packets may be measured by inserting a thin probe between packets, without disturbance, and allowing sufficient time for a constant temperature to be reached, provided a rapid-response sensor is used. Temperature measurement plays a major role in industries, warehousing, and hospitals.

2. BACKGROUND:
A microcontroller is a small and low-cost computer built for the purpose of dealing with specific tasks, such as displaying information on a microwave's LED display or receiving information from a television's remote control. Microcontrollers are mainly used in products that require a degree of control to be exerted by the user. A microcontroller can be regarded as a single-chip special-purpose computer dedicated to executing a specific application. As in a general-purpose computer, a microcontroller consists of memory (RAM, ROM, flash), I/O peripherals, and a processor core. However, in a microcontroller the processor core is not as fast as in a general-purpose computer, and the memory is smaller. Microcontrollers are widely used in embedded systems such as home appliances, vehicles, and toys. There are several microcontroller products available in the market, for example Intel's MCS-51 (8051 family), the Microchip PIC, and Atmel's Advanced RISC Architecture (AVR).

2.1 ATMEGA 8:
The ATmega8 is an 8-bit AVR microcontroller. It consumes little power, delivers high performance, and follows the advanced RISC architecture. The ATmega8 has 28 pins, 23 programmable I/O lines, 512 bytes of EEPROM, 1 Kbyte of internal SRAM, two 8-bit timer/counters, one 16-bit timer/counter, an 8-channel ADC in the TQFP package and a 6-channel ADC in the PDIP package, and an operating voltage of 5 V. It contains three I/O ports. By executing powerful instructions in a single clock cycle, the ATmega8 achieves throughputs approaching 1 MIPS per MHz, allowing the system designer to optimize power consumption versus processing speed. Details of the microcontroller are described in [1].

2.2 LM35
The LM35 series are precision integrated-circuit temperature sensors with an output voltage linearly proportional to the Centigrade temperature. The LM35 thus has an advantage over linear temperature sensors calibrated in kelvin, as the user is not required to subtract a large constant voltage from the output to obtain convenient Centigrade scaling. Low cost is assured by trimming and calibration. The low output impedance, linear output, and precise inherent calibration of the LM35 make interfacing to readout or control circuitry especially easy [2]. The device can be used with a single power supply. As the LM35 draws only 60 µA from the supply, it has very low self-heating of less than 0.1 °C in still air. The LM35 is rated to operate over a -55 °C to +150 °C temperature range.
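Since the LM35 produces 10 mV per degree Celsius, converting a raw ADC count to a temperature is a one-line calculation. The Python sketch below assumes a 5 V reference and the ATmega8's 10-bit ADC:

def lm35_celsius(adc_count, vref=5.0, bits=10):
    """Convert a raw ADC reading of the LM35 output to degrees Celsius.
    The LM35 scale factor is 10 mV per degree C; the 5 V reference and
    10-bit resolution are assumptions matching the AVR's ADC."""
    volts = adc_count * vref / (2 ** bits - 1)
    return volts / 0.010

print(lm35_celsius(61))   # ~29.8: a count of 61 is about 0.298 V, i.e. ~30 C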


2.3 ZIGBEE
ZigBee is a specification for a suite of high-level communication protocols based on the IEEE 802.15 standard, and it uses low power. Though low-powered, ZigBee devices can transmit data over long distances by passing data through intermediate devices to reach more distant ones, creating a mesh network, i.e., a network with no centralized control or high-power transmitter/receiver able to reach all of the networked devices. The decentralized nature of such wireless ad hoc networks makes them suitable for applications where a central node cannot be relied upon. ZigBee is used in applications that require only a low data rate, long battery life, and secure networking. It has a defined rate of 250 kbit/s, best suited for periodic or intermittent data or a single signal transmission from a sensor or input device. Applications include wireless light switches, electrical meters with in-home displays, traffic management systems, and other consumer and industrial equipment that requires short-range wireless transfer of data at relatively low rates. The ZigBee specification is intended to be simpler and less expensive than other WPANs such as Bluetooth or Wi-Fi.
2.4 PELTIER DEVICE
A Peltier device creates a temperature differential between its two sides [9]: one side gets hot and the other side gets cool. It can therefore be used either to warm something up or to cool something down, depending on which side is used. The temperature differential can also be exploited to generate electricity. A Peltier device works very well as long as the heat is removed from the hot side; after the device is turned on, the hot side becomes hot quickly and the cold side cools quickly.
3. RELATED WORK
Zhu and Bai [3] proposed a system for monitoring the temperature of electric cable interface in power transmission, based on Atmel
AT89C51 microcontroller. The system consists of a central PC machine, host control machines, and temperature collectors. Several
temperature collectors are connected to a host control machine through RS-485 communication network, and the host control machine
communicates and exchanges data with the central PC machine using General Packet Radio Service (GPRS) connection. The
temperature collector itself consists of sensor temperatures (Maxim's DS18B20, 1-wire digital thermometer), decoders, and other
circuits for interfacing purpose. Each temperature collector saves the temperature in SRAM and sent the temperature information
back to the host control machine when requested. Each host control machine also stores this temperature data in its memory (SRAM),
and send it back to the central PC machine when requested. In this system, the communication using RS-485 network is limited by
cable length (1200 meters). In [4], Loup et al. developed a Bluetooth embedded system for monitoring server room temperature. When
the room temperature is above threshold, the system sends a message to each server via Bluetooth to shut down the server.
There are also some works on wireless temperature monitoring system based on Zigbee technology [5, 6, 7]. Bing and Wenyao [5]
designed a wireless temperature monitoring and control system for communication room. They used Jennic's JN5121 Zigbee wireless
microcontroller and Sensirion's SHT11 temperature sensor. The system proposed in [6] uses Chipcon's CC2430 Zigbee System-on-
Chip (SoC) and Maxim's DS18B20 temperature sensor. In [7], Li et al. developed a ZigBee-based wireless monitoring system not only
for temperature but also for humidity.

Different from these systems, ours uses a personal computer: the values transmitted and received through ZigBee are passed to the
personal computer, so the temperature can be changed from a distance, and this can be done accurately by extending the ZigBee
range. Our system controls both a heater and a Peltier cooler [10].



4. DESIGN AND IMPLEMENTATION:
(Block diagram: in Room 1, the PC connects through a CP2102 USB-to-UART bridge to a ZigBee transceiver; in Room 2,
temperature sensors 1 and 2 feed an ATmega8, which communicates through a second ZigBee transceiver and drives the cooling
and heating devices.)
Figure 1: Block diagram

4.1 SPECIFICATION:
We define our system to have the following specification:
1. Display the room temperature.
2. Set the required temperature.
This project focuses on monitoring and controlling the temperature. The required temperature is set through the PC application that
has been developed, and an alarm is raised when the measured temperature rises above or falls below the set value. The system
consists of two parts:
1. Hardware
2. Software

4.2 HARDWARE:
The hardware used here comprises a temperature sensor, ZigBee modules, a Peltier cooler, an ATmega8, and a CP2102 [8]. The
specification of each part and the connections are given below. The LM35 is the temperature sensor used in this system [12]; the
ZigBee module is an XBee Series 2; the Peltier module is a TEC1-12706 [9]. The user sets the temperature actually required in the
room, for example to preserve particular items or to prepare chemicals in industry. This required temperature reading is passed from
the PC to the controller, and the set temperature is then maintained and watched continuously. The current room temperature is in
turn transmitted from the room back to the PC via ZigBee; both the transmission and the reception of temperatures are done by
ZigBee. The transmitter circuit, prepared first, is as follows.


Figure 2: Transmitter circuit
In the monitoring part, the current temperature is measured by the sensor and transmitted via the ATmega8 and ZigBee.
In the controlling part, the required set temperature can be controlled: the temperature we require is transmitted from the laptop or
computer to the transmitter circuit board, and the controller sends the required temperature through the transmitter ZigBee to the
receiver ZigBee. Meanwhile, the controller on the board starts to generate PWM. The transmitter circuit also contains an L293D IC,
which is used to drive the Peltier cooler and the heater. The ZigBee module and the microcontroller work at voltages below 5 V; to
drive the 12 V load from the 5 V logic, this IC is used. It switches the voltage from low to high: it takes a low-current control signal
and provides a higher-current signal, and this high-current signal drives the cooling device, helping the Peltier module cool or heat
to the required level. A photograph of the system is shown in Fig. 3. The system can be extended by introducing authentication: user
name and password authentication is done using .NET, so that only an authenticated user can access the personal computer to set the
particular temperature required to preserve the products or chemicals. Chemicals in industry can leak gas; such gas would be
detected by a sensor and reported via ZigBee to the personal computer, with the message passing handled by .NET at the backend, so
the system can be authenticated and used effectively in case of danger. Similarly, with an upgraded version of the ATmega8, a GSM
message can be sent in case of any emergency. This system ensures the safety of food products, chemicals, and medicines. The PWM
generation by the microcontroller increases the efficiency compared with other temperature-control projects; due to the use of PWM,
accuracy is maintained precisely.
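The PWM initialization itself is not listed in the paper; as a hedged illustration of the Timer1 setup implied by the use of OCR1A and
OCR1B in the firmware of Section 4.3, a minimal sketch for the ATmega8 follows (the prescaler and pin assignments are
assumptions, not values from the paper):

// Hedged sketch: Timer1 in 8-bit fast PWM on OC1A/OC1B (ATmega8).
// Register values here are illustrative, not taken from the paper.
#include <avr/io.h>
void pwm_init(void)
{
    DDRB |= (1 << PB1) | (1 << PB2);          // OC1A/OC1B pins as outputs
    TCCR1A = (1 << COM1A1) | (1 << COM1B1)    // non-inverting PWM on both channels
           | (1 << WGM10);                    // 8-bit fast PWM (with WGM12 below)
    TCCR1B = (1 << WGM12) | (1 << CS11);      // start timer, prescaler clk/8
    OCR1A = 0;                                // heater duty cycle (0-255)
    OCR1B = 0;                                // Peltier duty cycle (0-255)
}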

Figure 3: Hardware photograph
4.3 SOFTWARE:
The software used to program the ATmega8 is CodeVisionAVR; the software used for display is XCTU, which continuously
displays the temperature values [11].
CODE VISION AVR:
The firmware has two main parts: 1) read the temperature from the ADC, and 2) control the temperature in various situations.
1. READ TEMPERATURE FROM ADC
// Read the AD conversion result
unsigned int read_adc(unsigned char adc_input)
{
ADMUX=adc_input | (ADC_VREF_TYPE & 0xff);
// Delay needed for the stabilization of the ADC input voltage
delay_us(10);


// Start the AD conversion
ADCSRA|=0x40;
// Wait for the AD conversion to complete
while ((ADCSRA & 0x10)==0);
ADCSRA|=0x10;
return ADCW;
}
.
.
// Main loop: sample ADC channel 3 every 500 ms and send the raw value over UART
while (1)
{
raw_temp=read_adc(3);
putchar(raw_temp);
delay_ms(500);
}
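For reference, the raw reading can be converted to degrees Celsius; a minimal sketch, assuming a 5 V ADC reference (the paper does
not state the reference voltage used) and the LM35's 10 mV/°C output:

/* Hedged sketch: convert the 10-bit ADC reading to degrees Celsius, assuming
   Vref = 5 V, so T = raw * 5000 mV / 1024 / (10 mV per degree). */
unsigned int adc_to_celsius(unsigned int raw)
{
    return (unsigned int)((unsigned long)raw * 500UL / 1024UL);
}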
2. CONTROL THE TEMPERATURE IN VARIOUS SITUATIONS:
When the set temperature is less than the current temperature:

if(set_temp<current_temp)
{
OCR1A=0;
OCR1B=255; // Peltier fully on
}

When the set temperature is greater than the current temperature:

else if(set_temp>current_temp)
{
OCR1A=0;
OCR1B=0; // Peltier off
}
When the set temperature is equal to the current temperature:

else if(set_temp==current_temp)
{
OCR1A=0;
OCR1B=127; // Peltier mildly on (half duty)
}

The flow of the code is as follows:
(Flowchart: START → initialise ADC, ports, timer/counter, and SPI → configure the UART (UCSRA=0x00; UCSRB=0x18;
UCSRC=0x86; UBRRH=0x00; UBRRL=0x47) → read the ADC value from ADCH, temp = read_adc(3) → calculate the average
value of the current temperature → if set temp < current temp, Peltier ON; else if set temp == current temp, Peltier MILD ON;
else Peltier OFF → STOP.)
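The averaging step in the flowchart does not appear in the listing above; a minimal sketch of one way to implement it, reusing the
paper's read_adc() and delay routines (the sample count and settling delay are assumptions):

/* Hedged sketch of the flowchart's averaging step (not in the paper's code):
   average N consecutive ADC samples to smooth out sensor noise. */
#define N_SAMPLES 8
unsigned int read_temp_avg(void)
{
    unsigned long sum = 0;
    unsigned char i;
    for (i = 0; i < N_SAMPLES; i++)
    {
        sum += read_adc(3);   // LM35 on ADC channel 3
        delay_us(100);        // brief settling delay between samples
    }
    return (unsigned int)(sum / N_SAMPLES);
}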

5. RESULT AND DISCUSSION:

The result of the system is as follows. The first step is to install the XCTU software and the CP2102 USB-to-UART converter
driver. As soon as the DB9 connector from the ZigBee receiver board is connected to one of the PC's ports with a cable, that serial
port is selected, and the start-monitor button is pressed; this makes it possible to read the current temperature. Each time the refresh
button on the transmitter circuit is pressed, the value is updated. The set-temperature button is then used to set the particular
temperature required in the industry; as soon as the required temperature is set, the measured temperature changes toward the set
value (19 in this test). Figure 4 shows the output, which was checked with Peltier cooling.


Figure 4: Output
6. CONCLUSION

In this paper, we have designed and implemented a microcontroller-based system for monitoring and controlling temperature in
industry. We utilized an Atmel AVR ATmega8 microcontroller and an LM35 temperature sensor. Based on the testing results, the
system works according to our predefined specification. The system can help an administrator monitor and control the temperature
of industrial premises. It can also raise an alarm and send a text message to warn the administrator if fire breaks out or gas leaks,
especially in chemical industries and warehousing where all kinds of materials may be stored. The project thus helps prevent
materials from being damaged.

REFERENCES:
[1] Atmel Corp. 2006 ATmega8 Datasheet. http://www.atmel.com/images/atmel-2486-8-bit-avr-microcontroller-
atmega8_l_datasheet.pdf
[2] National Semiconductor Corporation, LM35 Precision Centigrade Temperature Sensors, datasheet, November 2000 update.

[3] HongLi Zhu and LiYuan Bai. 2009. Temperature monitoring system based on AT89C51 microcontroller. In IEEE International
Symposium on IT in Medicine Education. ITIME (August 2009), volume 1, 316-320.
[4] T.O. Loup, M. Torres, F.M. Milian, and P.E. Ambrosio. 2011. Bluetooth embedded system for room-safe temperature monitoring.
Latin America Transactions, IEEE (Revista IEEE America Latina) (October 2011), 9(6):911-915.
[5] Hu Bing and Fan Wenyao. 2010. Design of wireless temperature monitoring and control system based on ZigBee technology in
communication room. In 2010 International Conference on Internet Technology and Applications (August 2010), 1-3.
[6] Lin Ke, Huang Ting-lei, and Li Iifang. 2009. Design of temperature and humidity monitoring system based on zigbee technology.
In Control and Decision Conference. CCDC (June 2009).Chinese , 3628-3631.
[7] Li Pengfei, Li Jiakun, and Jing Junfeng. Wireless temperature monitoring system based on the ZigBee technology. 2010. In 2010
2nd International Conference on Computer Engineering and Technology (ICCET), volume 1 (April 2010), V1-160-V1-163.
[8] Silicon Labs, CP2102/9 datasheet. http://www.silabs.com/Support%20Documents/TechnicalDocs/CP2102-9.pdf
[9] Peltier device, SparkFun. https://www.sparkfun.com/products/10080
[10] Kooltronics, Basic cooling methods.
[11] CodeVisionAVR C compiler user manual. https://instruct1.cit.cornell.edu/courses/ee476/codevisionC/cvavrman.pdf
[12] Basic workings of temperature sensors. http://electronicsforu.com/electronicsforu/circuitarchives/view_article.asp?sno=1476&title%20=%20Working+With+Temperature+Sensors%3A+A+Guide&id=12364&article_type=8&b_type=new











Design and Testing of Solar Powered Stirling Engine
Alok Kumar¹, Dinesh Kumar¹, Ritesh Kumar¹
¹Scholar, Mechanical Engineering Department, N.I.T. Patna
Email: kumargaurav4321@gmail.com

Abstract - This report presents the different components of a Stirling engine and their various configurations, along with the
feasibility of using solar energy as a potential heat source for driving a Stirling engine. In addition, it contains the design details of
the various parts of the Stirling engine and of the materials used. The engine parts are of mild steel, aluminium, and cast iron, so
turning, facing, grinding, cutting, threading, and tapping operations were used in the fabrication. Design calculations are performed
for the different components of the Stirling engine and the parabolic dish: hot cylinder, hot (displacer) piston, cold cylinder, cold
piston, connecting rod, flywheel, and parabolic dish.
Keywords - Joint board, hot cylinder, displacer piston, cold cylinder, cold piston, connecting rod, flywheel, slider, crank, rotating
disc, connecting pins, shaft, frame, dish, piston holder, sealing nipple.
1. Introduction
Energy crisis is a harsh reality in the present scenario. Conventional fossil fuels like coal, natural gas, and petroleum products will
be exhausted in the near future, and the prices of these fuels are increasing day by day. Pollution and global warming are further
drawbacks of conventional fossil fuels, so the use of alternative sources that provide clean and green energy is important. This report
demonstrates that the Stirling engine, an external heat engine, can be used as an efficient and clean way of producing energy with
the help of a concentrating parabolic reflector. It is used in some very specialized applications, such as submarines or auxiliary
power generators. The Stirling engine was first invented by Robert Stirling, a Scot, in 1816.
A Stirling engine is a heat engine operating by cyclic compression and expansion of the working fluid (air or another gas) at
different temperature levels, such that there is a net conversion of heat energy to mechanical work. When the gas is heated, because
it is in a sealed chamber, the pressure rises and acts on the power piston to produce a power stroke. When the confined gas is cooled,
the pressure drops, and the piston then recompresses the gas on the return stroke, giving a net gain in power available on the shaft.
The working gas flows cyclically between the hot and cold heat exchangers. The Stirling engine contains a fixed amount of gas that
is transferred back and forth between a cold end and a hot end: the displacer piston moves the gas between the two ends, and the
power piston is driven by the change in internal volume as the gas expands and contracts. This report presents an external
combustion engine designed so that the working gas (air) is compressed in the colder portion of the engine and expanded in the
hotter portion, resulting in a net conversion of heat into work. A Stirling engine system therefore has at least one heat source, one
heat sink, and heat exchangers; heat is transmitted from the heat source to the working fluid by the heat exchangers and finally
rejected to the heat sink.
There are three types of Stirling engines, distinguished by the way they move the air between the hot and cold sides of the cylinder:
alpha, beta, and gamma. In the beta configuration, similar to the engine used in this study, a single power piston is arranged within
the same cylinder, on the same shaft, as the displacer piston. The displacer piston shuttles the working gas from the hot heat
exchanger to the cold heat exchanger. The displacer is a special-purpose piston, used in beta- and gamma-type Stirling engines, to
move the working gas back and forth between the hot and cold heat exchangers. When the working gas is pushed to the hot end of
the cylinder, it expands and pushes the power piston. The displacer is large enough to insulate the hot and cold sides of the cylinder
thermally and to displace a large quantity of gas.

2. Calculations
2.1 Hot cylinder calculations:
Assuming a pressure of 2 bar = 0.2 MN/m²:
External diameter of hot cylinder, D_o = 50 mm
Thickness of cylinder, T_hc = P·D_o/(2·σ_t) = 0.2·50/(2·48) = 0.104 mm ≈ 1.5 mm (due to the standard tube size)
Internal diameter of hot cylinder, D_i = 50 − 2·1.5 = 47 mm
Length of hot cylinder, L_h = 3·D_i = 141 mm ≈ 140 mm
2.2 Hot (displacer) piston calculations:
Diameter of hot piston, D_p = 47 − 2 = 45 mm (1 mm clearance on each side)
Thickness of hot piston, T_hp = 0.03·D_p = 1.35 mm ≈ 0.25 mm (due to the standard aerosol-bottle size)
Length of hot piston, L_p = 80 mm
2.3 Cold cylinder calculations:
Assuming a pressure of 2 bar = 0.2 MN/m²:
External diameter of cold cylinder, d_o = 32 mm
Thickness of cold cylinder, t_cc = P·d_o/(2·σ_t) = 0.2·32/(2·68) = 0.047 mm ≈ 1.5 mm (due to the standard tube size)
Internal diameter of cold cylinder, d_i = 32 − 3 = 29 mm
Length of cold cylinder, l_c = 67 mm
2.4 Cold piston calculations:
Diameter of cold piston, d_p = 29 mm
Thickness of cold piston, t_cp = 0.03·d_p = 0.87 mm ≈ 1 mm (due to the standard tube thickness)
Length of cold piston, l_p = 35 mm
2.5 Connecting rod calculations:
Diameter of connecting rod, d_1 = 6 mm
Length of connecting rod part 1, l_1 = 11.5 mm
Radius of gyration of the rod, k = d/4 = 6/4 = 1.5 mm
We also have the constant K = 4/25000.
Crippling stress on the rod, f_cr1 = f_c/[1 + K·(l/k)] = 213/[1 + (4/25000)·(11.5/1.5)] = 212.7 MN/m² < 268 MN/m², the yield
strength of mild steel. Hence the design is safe.
Similarly, for connecting rod parts 2, 3, and 4 the lengths are as follows:
Length of connecting rod part 2, l_2 = 8.5 mm
Length of connecting rod part 3, l_3 = 5.5 mm
Length of connecting rod part 4, l_4 = 4.8 mm
The crippling stress values for parts 2, 3, and 4 are:
f_cr2 = 213/[1 + (4/25000)·(8.5/1.5)]
= 212.8 MN/m²
f_cr3 = 213/[1 + (4/25000)·(5.5/1.5)] = 212.8 MN/m²
f_cr4 = 213/[1 + (4/25000)·(4.8/1.5)] = 212.8 MN/m²
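As a quick numeric cross-check of these values, the following is a minimal C sketch that reproduces the crippling-stress formula
exactly as written above (with the linear l/k term); all constants are the ones used in this section:

/* Hedged sketch: recompute the crippling stresses of Section 2.5 and compare
   each with the yield strength of mild steel. */
#include <stdio.h>
int main(void)
{
    const double f_c = 213.0;                         /* MN/m^2 */
    const double K = 4.0 / 25000.0;                   /* constant used above */
    const double k = 1.5;                             /* mm, radius of gyration */
    const double yield = 268.0;                       /* MN/m^2, mild steel */
    const double lengths[] = { 11.5, 8.5, 5.5, 4.8 }; /* mm, rod parts 1-4 */
    int i;
    for (i = 0; i < 4; i++) {
        double f_cr = f_c / (1.0 + K * (lengths[i] / k));
        printf("part %d: f_cr = %.1f MN/m^2 (%s)\n", i + 1, f_cr,
               f_cr < yield ? "safe" : "check");
    }
    return 0;
}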
2.6 Calculations of flywheel:
Shaft diameter, D_s = 15 mm
Diameter of the flywheel, D_f = 118 mm
Width of the rim, B = 25 mm
Thickness of the rim, t_f = 5 mm
Hub diameter, d_h = 2·D_s = 30 mm
Length of the hub, l_h = 2·D_s = 30 mm
Taking a speed of 600 RPM, we have the speed n = 600/60 = 10 rev/s.
Change in energy, ΔE = C_E·P/n = 0.29·5/10 = 0.145 J
Weight of the flywheel = 0.75 kg
Velocity of the wheel, v = π·D_f·n = π·118·10 = 3707.1 mm/s = 3.71 m/s
Mass density of cast iron, ρ = 7200 kg/m³
Centrifugal force on one half of the rim = 2·B·t_f·ρ·v²/10⁶ = 2·25·5·7200·3.71²/10⁶ = 24.78 N (B and t_f in mm)
Tensile stress at the rim section due to the centrifugal force = ρ·v² = 7200·3.71² ≈ 99.1 kN/m²
2.7 Parabolic dish calculations:

The focal length follows from the parabola y = x²/(4f): the depth c is reached at x = D/2, so c = D²/(16f), i.e.
f = D²/(16·c)
where f = focal length, c = depth of dish, and D = diameter.
For D = 420 mm and c = 37 mm:
f = (420 × 420)/(16 × 37) = 297.97 mm ≈ 298 mm
Length of minor axis = 420 mm
Length of major axis = 525 mm
Area of disc = π·a·b = π·525·420 = 692721.2 mm² = 6927.2 cm²
2.8 Calculation for direct radiation:
Latitude, l = 30°
Hour angle = 0°
Reflectivity of the material = 0.96
Tilt angle, Σ = 90°
Declination, d = 23.5°
Altitude angle at solar noon, β_max = 90 − (l − d) = 90 − (30 − 23.5) = 83.5°
At solar noon, the solar azimuth angle φ = 180°
Wall azimuth angle, γ = 180 − φ = 0°
Overall incident angle, θ = cos⁻¹(cos β · cos γ) = cos⁻¹(cos 89.53° · cos 180°) = 90.47°
Direct radiation, I_DN = A·exp(−B/sin β) = 1080·exp(−0.21/sin 83.5°) = 874 W/m²
I_DN·cos θ = 874·cos 90.47° = −7.16 W/m²
Diffuse radiation, I_d:
View factor, F_ws = (1 + cos Σ)/2
I_d = C·I_DN·F_ws = 0.135·874·0.5 = 58.99 W/m²
Reflected radiation, for ρ_g = 0.96:
I_r = (I_DN + I_d)·ρ_g·F_wg = (874 + 58.99)·0.96·0.5 ≈ 447.8 W/m²
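A minimal C sketch that reproduces the radiation arithmetic of Section 2.8; the constants A = 1080, B = 0.21, and C = 0.135 are the
clear-sky values used above, and the view factor for the reflected term is taken as 0.5, as in the calculation above:

/* Hedged sketch: direct, diffuse, and reflected radiation of Section 2.8. */
#include <stdio.h>
#include <math.h>
int main(void)
{
    const double PI = 3.14159265358979;
    const double A = 1080.0, B = 0.21, C = 0.135; /* clear-sky constants used above */
    double beta  = 83.5 * PI / 180.0;             /* solar altitude at noon */
    double sigma = 90.0 * PI / 180.0;             /* tilt angle of the surface */
    double rho_g = 0.96;                          /* reflectivity used above */

    double I_DN = A * exp(-B / sin(beta));        /* direct normal radiation */
    double F_ws = (1.0 + cos(sigma)) / 2.0;       /* view factor (0.5 here) */
    double I_d  = C * I_DN * F_ws;                /* diffuse radiation */
    double I_r  = (I_DN + I_d) * rho_g * F_ws;    /* reflected radiation */

    printf("I_DN = %.1f, I_d = %.2f, I_r = %.1f (W/m^2)\n", I_DN, I_d, I_r);
    return 0;
}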

3. Fabrication Details
The fabrication details of the different parts of the engine are given below, with details of the operations performed.

Fig. 1: The Stirling Engine
3.1 Joint Board - The joint board is a cast-iron rectangular slab in which two overlapping holes were made, one on each side. The
holes were overlapped in order to provide a passage for the working fluid from the hot cylinder to the cold cylinder. The hole for the
hot cylinder was 50 mm in diameter and that for the cold cylinder 32 mm. Tapping of M16 was performed on both holes to provide
internal threads so that the cylinders could be fitted into them; M16 tapping gives a 2 mm thread pitch.
3.2 Hot Cylinder - The hot cylinder is a 140 mm long cylinder with a 50 mm external diameter and a 2 mm wall thickness. External
M16 threads were provided up to 50 mm from the front side so that it could be fitted into its hole. On the posterior side, a circular
aluminium plate was welded to the cylinder, where heat absorption takes place.
3.3 Cold Cylinder - The cold cylinder is a 67 mm long mild-steel cylinder with a 32 mm external diameter. Threading was provided
on one side, which fits into the joint-board hole; the other side is for the piston, which is connected to the crank with a connecting pin.
3.4 Hot Piston - An aluminium pesticide bottle was used as the piston for the hot cylinder. The connecting rod was fitted to it with
the help of a Teflon block placed inside it with internal threading. The piston was ground to a smooth surface finish to allow easy
movement inside the hot cylinder.
3.5 Connecting Rod - Mild-steel rod was used as the connecting rod for both pistons. For the hot cylinder, the connecting rod was
fitted with the help of threads: internal threads in the Teflon block and external threads on the rod. For the cold piston, the
connecting rod was fitted to the piston with a movable pin of 1 mm.
3.6 Final Assembly - All the components were then assembled on a board in proper alignment with the help of welding. The final
assembly was placed onto a frame so that the parabolic dish could be focused on it properly.
3.7 Parabolic Dish - A parabolic dish of 420 mm minor axis, 525 mm major axis, and 37 mm depth was used; its focal point lies at
298 mm. The dish was first mended for minor flaws by hammering, then cleaned with emery paper, and a layer of reflective paper
was placed on it for reflectivity. A convex lens of 6 inch focal length was further procured to focus the incident light better on the
hot cylinder.
4. Conclusion
A simple design analysis of a Stirling engine operating between two heat reservoirs with the help of solar energy has been presented.
The shaft rotates when solar energy is imparted to the hot zone of the Stirling engine. This design achieves lower hot-side
temperatures than a traditionally operated Stirling engine, so the overall efficiency is low. Friction between the mating parts and
proper lubrication are also important considerations for increasing the overall efficiency.


REFERENCES:
[1] Snyman H., Harms T.M. and Strauss J.M., (2008) examination of Design analysis methods for Stirling engines. Journal of energy
in South Africa, Vol.-19 No.-3, page 4-19.South Africa
[2] Khan K.Y., Ivan N.A.S., Ahmed A.S., Siddique A.H. and Debnath D. (2011) examination of solar dish stirling system and its
economic prospect in Bangladesh.International journal of electrical & computer sciences IJECS-IJENS Vol: 11 No: 04, page 7-
13.Bangladesh

[3] Mancini T.R. examination of solar-electric dish stirling system development. USA
[4] Kyei-Manu F. andObodoako A. (2005) examination of solarstirling-engine water pump proposal draft. Page 1-15.

[5] He M. and Sanders S. examination of design of a 2.5 kW low temperature Stirling engine for distributed solar thermal
generation. American Institute of Aeronautics and Astronautics, page 1-8, USA
[6] Sukhatme S.P. (2007) Principles of thermal collection and storage. McGraw Hill, New Delhi
[7] Duffie, J.A. and Beckman (2006) Solar engineering of thermal processes. John Wiley & Sons, Inc., London
[8] Rai, G.D (2011) solar energy utilisation. Khanna publishers, India
[9] Renewable Energy focus handbook (2009), ELSEVIER page 335
[10] Valentina A. S., Carmelo E. M., Giuseppe M. G., MiliozziAdio and Nicolini Daniele (2010) New Trends in Designing Parabolic
trough Solar Concentrators and Heat Storage Concrete Systems in Solar Power Plants. Croatia, Italy

[11] FOLARANMI, Joshua (2009) Design, Construction and Testing of a Parabolic Solar Steam Generator.Journal of Practices and
Technologies ISSN 1583-1078. Vol-14, page 115-133, Leonardo

[12] Xiao G. (2007) A closed parabolic trough solar collector. Version 2 Page 1-28

[13] Brooks, M.J., Mills, I and Harms, T.M. (2006) Performance of a parabolic trough solar collector. Journal of Energy in Southern
Africa, Vol-17, page 71-80 Southern Africa










Wireless Sensor Network Protocol Implementation by Using Hybrid
Technology
Rupesh Raut¹, Prof. Nilesh Bodne²
¹Scholar, Rastrasant Tukodoji Maharaj Nagpur University
²Faculty, Rastrasant Tukodoji Maharaj Nagpur University
Email: me.rupesh_raut@rediffmail.com

ABSTRACT - A multi-hop wireless sensor network is composed of a large number of nodes and the consecutive links between
them. Wireless sensor networks normally consist of a large number of distributed nodes. One of the main problems in a WSN is
power, because every node is operated by an external battery. To achieve a long network lifetime, all nodes need to minimize their
power consumption. A node carries only a small battery, so the energy available to it is very limited, and replacing or recharging
batteries is impractical and very costly. Hence, techniques must be applied through which the power associated with each node can
be conserved. In this paper we propose a design for implementing a wireless sensor network protocol for low power consumption by
using a power-gating signal.

Keywords - Wireless sensor network, power consumption, node, battery, lifetime of network, protocol, inactive state.

INTRODUCTION
The term "wireless" has become a generic and all-encompassing word used to describe communications in which
electromagnetic waves carry a signal over part or all of the communication path. Wireless technology can reach virtually every
location on the surface of the earth. Given the tremendous success of wireless voice and messaging services, it is hardly surprising
that wireless communication is beginning to be applied to the domain of personal and business computing [2]. Ad hoc and sensor
networks are one part of wireless communication. In an ad hoc network, all nodes are allowed to communicate with each other
without any fixed infrastructure. This is one of the features that differentiates ad hoc networks from other wireless technologies,
such as cellular networks and wireless LANs, which require infrastructure-based communication through some base station [3].
Wireless sensor networks are one category of ad hoc networks. Sensor networks are also composed of nodes; here the node has a
specific name, "sensor", because these nodes are equipped with smart sensors [3]. A sensor node is a device that converts a sensed
characteristic, like temperature, vibration, or pressure, into a form recognizable by the users. Wireless sensor network nodes are less
mobile than ad hoc network nodes, so mobility in the ad hoc case is higher. In a wireless sensor network, data are requested
depending on certain physical quantities, so a wireless sensor network is data-centric. A sensor node consists of a transducer, an
embedded processor, a small memory unit, and a wireless transceiver, and all these devices run on the power supplied by an attached
battery [2].
Traditional development of wireless sensor network motes is generally based on an SoC platform, but here we implement the
protocol on an FPGA platform, so we use the term hybrid technology.

Fig. 1: Wireless Sensor Network

Battery Issues

The battery supplies power to the complete sensor node and hence plays a vital role in determining sensor node lifetime.
Batteries are complex devices whose operation depends on many factors, including battery dimensions, the type of electrode
material used, and the diffusion rate of the active materials in the electrolyte. In addition, several non-idealities can creep in during
battery operation, which adversely affect system lifetime. We describe the various battery non-idealities and discuss system-level
design approaches that can be used to prolong battery lifetime [1].

Rated Capacity Effect

The most important factor that affects battery lifetime is the discharge rate or the amount of current drawn from the battery.
Every battery has a rated current capacity, specified by the manufacturer. Drawing higher current than the rated value leads to a
significant reduction in battery life. This is because, if a high current is drawn from the battery, the rate at which active ingredients
diffuse through the electrolyte falls behind the rate at which they are consumed at the electrodes. If the high discharge rate is
maintained for a long time, the electrodes run out of active materials, resulting in battery death even though active ingredients are still
present in the electrolyte. Hence, to avoid battery life degradation, the amount of current drawn from the battery should be kept under
tight check. Unfortunately, depending on the battery type (lithium ion, NiMH, NiCd, alkaline, etc.), the minimum required current
consumption of sensor nodes often exceeds the rated current capacity, leading to suboptimal battery lifetime. [4]

Relaxation Effect

The effect of high discharge rates can be mitigated to a certain extent through battery relaxation. If the discharge current from
the battery is cut off or reduced, the diffusion and transport rate of active materials catches up with the depletion caused by the
discharge. This phenomenon is called the relaxation effect and enables the battery to recover a portion of its lost capacity. Battery
lifetime can be significantly increased if the system is operated such that the current drawn from the battery is frequently reduced to
very low values or is completely shut off [5].

Proposed Method for Implementation of Wireless Sensor Protocol Implementation

The sensor nodes radio enables wireless communication with neighboring nodes and the outside world. In general, radios can
operate in four distinct modes of operation: Transmit, Receive, Idle, and Sleep. An important observation in the case of most radios is
that operating in Idle mode results in significantly high power consumption, almost equal to the power consumed in the Receive mode
[6]. Thus, it is important to completely shut down the radio rather than transitioning to Idle mode when it is not transmitting or
receiving data. Another influencing factor is that as the radios operating mode changes, the transient activity in the radio electronics
causes a significant amount of power dissipation. For example, when the radio switches from sleep mode to transmit mode to send a
packet, a significant amount of power is consumed for starting up the transmitter itself [7].
Therefore our idea to keep wireless sensor network node in inactive (shut down) mode until it get power gating signal. For
implementing this idea we first consider the transmitter and receiver section design consideration and then develop node (i.e.
transmitter plus receiver) by using power gating signal.
In the transmitter design, the transmitter is normally in the idle state, waiting for the RX_BEACON signal from the receiver. After
receiving RX_BEACON, it transmits data together with a cyclic redundancy check (CRC) to the receiver until it gets an
acknowledgement; after getting the ACK signal, the transmitter re-enters idle mode. The flow diagram for the transmitter is shown
in Fig. 2.
Fig. 2. Transmitter flow diagram (Idle → RX_BEACON = 1? → send DATA + CRC → RX_ACK = 1? → back to Idle)
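The paper implements the protocol in VHDL on an FPGA; purely as an illustration of the state machine in Fig. 2, the following is a
behavioural C sketch. The helper functions rx_beacon(), rx_ack(), and send_data_crc() are hypothetical stand-ins for the hardware
signals, not functions from the paper:

/* Behavioural C sketch of the Fig. 2 transmitter state machine (illustrative;
   the actual design is VHDL). Helper functions are hypothetical. */
int  rx_beacon(void);        /* 1 when RX_BEACON is asserted (hypothetical) */
int  rx_ack(void);           /* 1 when RX_ACK is asserted (hypothetical) */
void send_data_crc(void);    /* transmit one data frame plus CRC (hypothetical) */

enum tx_state { TX_IDLE, TX_SEND };

void transmitter_step(enum tx_state *s)
{
    switch (*s) {
    case TX_IDLE:
        if (rx_beacon())      /* receiver has requested data */
            *s = TX_SEND;
        break;
    case TX_SEND:
        send_data_crc();      /* keep sending data + CRC ... */
        if (rx_ack())         /* ... until the ACK arrives */
            *s = TX_IDLE;     /* then return to idle */
        break;
    }
}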
In the receiver design, when the receiver wants to receive data it sends the RX_BEACON signal to the transmitter. After receiving
the data, it decides the status of the attached devices (e.g., ON/OFF) depending on the data content, and it sends the ACK signal
once the desired data have been received. The flow diagram for the receiver is shown in Fig. 3.
Fig. 3. Receiver flow diagram (Start → send RX_BEACON → data received → if data > 128 bytes, device ON, else device OFF → send ACK)

After considering the transmitter and receiver designs, we develop the node with the power-gating signal. In this design, the node
is kept in Inactive Mode (shut-down mode); after getting the active-low POWER GATING signal, it enters Control Mode, where it
waits for the RX_BEACON signal from the receiver. If it gets an active-high signal, it transmits data with CRC until it gets an
acknowledgement from the receiver; depending on the nature of the data it commands the device, and after getting the
acknowledgement it re-enters Inactive Mode. The flow diagram for the node is shown in Fig. 4.
Fig. 4. Flow diagram of the node with power-gating signal (INACTIVE MODE → POWER GATING SIGNAL = 0? → CONTROL MODE → RX_BEACON = 1? → SEND DATA + CRC → RX_ACK = 1? → device status ON/OFF → back to INACTIVE MODE)
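As with Fig. 2, a behavioural C sketch of the node state machine in Fig. 4 follows, purely for illustration (the actual design is
VHDL). power_gating(), rx_beacon(), rx_ack(), send_data_crc(), and set_device() are hypothetical stand-ins for hardware signals:

/* Behavioural C sketch of the Fig. 4 node state machine (illustrative). */
int  power_gating(void);     /* level of the active-low POWER GATING signal */
int  rx_beacon(void);        /* 1 when RX_BEACON is asserted (hypothetical) */
int  rx_ack(void);           /* 1 when RX_ACK is asserted (hypothetical) */
void send_data_crc(void);    /* transmit one data frame plus CRC (hypothetical) */
void set_device(int on);     /* drive the controlled device on or off */

enum node_state { NODE_INACTIVE, NODE_CONTROL, NODE_SEND };

void node_step(enum node_state *s)
{
    switch (*s) {
    case NODE_INACTIVE:              /* radio shut down: minimal power draw */
        if (power_gating() == 0)     /* active-low wake-up signal */
            *s = NODE_CONTROL;
        break;
    case NODE_CONTROL:
        if (rx_beacon())             /* receiver ready to accept data */
            *s = NODE_SEND;
        break;
    case NODE_SEND:
        send_data_crc();             /* transmit data + CRC */
        if (rx_ack()) {              /* delivery confirmed */
            set_device(1);           /* command device status per the data */
            *s = NODE_INACTIVE;      /* shut down again to save power */
        }
        break;
    }
}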



CONCLUSION

This paper describes the challenges faced by wireless sensor networks and presents a design for a low-power transmitter. Present
techniques are complicated and economically costly to implement; the design technique used in this paper is robust, low cost, and
easy to implement. The use of the power-gating signal enables our system to meet the low-power requirements of a wireless sensor
node. Writing VHDL code for the protocol implementation and measuring its power after simulation yields a figure as low as
20 µW, and such a power saving can lead to a significant enhancement in sensor network lifetime. Our approach to implementing
the wireless sensor network protocol is therefore simple and cost effective.

REFERENCES

[1] Vijay Raghunathan, Curt Schurgers, Sung Park, and Mani B. Srivastava, "Energy-aware wireless sensor networks", IEEE Signal
Processing Magazine, © 2002 IEEE, March 2002.
[2] Carlos de Morais Cordeiro and Dharma Prakash Agrawal, Ad Hoc and Sensor Networks: Theory and Applications, World
Scientific, 2006.
[3] Paolo Santi, Topology Control in Wireless Ad Hoc and Sensor Networks, John Wiley and Sons, 2005.
[4] C.F. Chiasserini and R.R. Rao, "Pulsed battery discharge in communication devices", in Proc. Mobicom, 1999, pp. 88-95.
[5] S. Park, A. Savvides, and M. Srivastava, "Battery capacity measurement and analysis using lithium coin cell battery", in Proc.
ISLPED, 2001, pp. 382-387.
[6] Y. Xu, J. Heidemann, and D. Estrin, "Geography-informed energy conservation for ad hoc routing", in Proc. Mobicom, 2001,
pp. 70-84.
[7] A. Wang, S-H. Cho, C.G. Sodini, and A.P. Chandrakasan, "Energy-efficient modulation and MAC for asymmetric microsensor
systems", in Proc. ISLPED, 2001, pp. 106-111.
















Detection and Recognition of Mixed Traffic for Driver Assistance System
Pradnya Meshram¹, Prof. S.S. Wankhede²
¹Scholar, Department of Electronics Engineering, G.H.Raisoni College of Engineering, Digdoh hill, Nagpur, India
²Faculty, Department of Electronics Engineering, G.H.Raisoni College of Engineering, Digdoh hill, Nagpur, India
E-mail: pmeshram111@gmail.com


ABSTRACT - Driver-assistance systems that monitor driver intent, warn drivers, or assist in vehicle guidance are all being actively
considered. This paper presents a computer vision system designed for recognizing road boundaries and a number of objects of
interest, including vehicles, pedestrians, motorcycles, and bicycles. The system is designed using the Hough transform and Kalman
filters to improve the accuracy as well as the robustness of road environment recognition. A Kalman filter object can be configured
for each physical object for multiple-object tracking; to use the Kalman filter, the moving object must be tracked. The results are
then used as road contextual information for the following procedure, in which particular objects of interest, including vehicles,
pedestrians, motorcycles, and bicycles, are recognized by using a multi-class object detector. Results in various typical but
challenging scenarios show the effectiveness of the system.
Keywords - Computer vision toolbox, video processing, Hough transform, Kalman filters, region of interest, object tracking, driver
assistance system, intelligent vehicles.
INTRODUCTION

Within the last few years, research into intelligent vehicles has expanded into applications that work with or for the human user.
A computer vision system should be able to detect the drivable road boundary and obstacles. For some higher-level functions, it is
also necessary to identify particular objects of interest, such as vehicles, pedestrians, motorcycles, and bicycles. The detection and
recognition of such information is crucial for the successful deployment of future intelligent vehicular technologies in practical
mixed traffic, in which intelligent vehicles have to share the road environment with all road users, such as pedestrians, motorbikes,
bicycles, and vehicles driven by human beings. Computer vision can deliver a great amount of information, making it a powerful
means for sensing the structure of the road environment and recognizing on-road objects and traffic information. Therefore,
computer vision is necessary and promising for road detection and other applications related to intelligent vehicular technologies.
The novelty of this paper lies in the following two aspects. First, we formulate the drivable road boundary detection using the
Hough transform, which not only improves the accuracy but also enhances the robustness of the estimation of the drivable road
boundary; the detected road boundaries are used to verify which ones need to be tracked and which do not. Second, we recognize
particular objects of interest by using the Kalman filter, which is used to predict a physical object's future location and to reduce
noise in the detected location. The system is developed to improve traffic safety with respect to road users. Such a framework can
improve not only the accuracy but also the efficiency of road environment recognition.
REVIEW OF LITERATURE

Chunzhao Guo and Seiichi Mita, in their study, recognize a number of objects of interest in mixed traffic, in which the host
vehicle has to drive inside the road boundary and interact with other road users. First, they formulate the drivable road boundary
detection as a global optimization problem in a Hidden Markov Model (HMM) associated with a semantic graph of the traffic scene.
Second, they recognize particular objects of interest by using the road contextual correlation based on the semantic graph with the
detected road boundary. Such a framework can improve not only the accuracy but also the efficiency of road environment recognition.

Joel C. McCall and Mohan M. Trivedi, in their study, motivate the development of the novel video-based lane estimation and
tracking (VioLET) system. The system is designed using steerable filters for robust and accurate lane-marking detection. Steerable

filters provide an efficient method for detecting circular-reflector markings, solid-line markings, and segmented-line markings under
varying lighting and road conditions. They help in providing robustness to complex shadowing, lighting changes from overpasses
and tunnels, and road-surface variations. They are efficient for lane-marking extraction because, by computing only three separable
convolutions, a wide variety of lane markings can be extracted. There are three major objectives of this paper: the first is to present
a framework for comparative discussion and development of lane-detection and position-estimation algorithms; the second is to
present the novel video-based lane estimation and tracking (VioLET) system designed for driver assistance; and the third is to
present a detailed evaluation of the VioLET system.

Michael Darms, Matthias Komar and Stefan Lueke , The paper presents an approach to estimate road boundaries based on static
objects bounding the road. A map based environment description and an interpretation algorithm identifying the road boundaries in
the map are used. Two approaches are presented for estimating the map, one based on a radar sensor, one on a mono video camera.
Besides that two fusion approaches are described. The estimated boundaries are independent of road markings and as such can be used
as orthogonal information with respect to detected markings. Results of practical test using the estimated road boundaries for a lane
keeping system are presented.

Akihito Seki and Masatoshi Okutomi, in their study, note that understanding the general road environment is a vital task for
obstacle detection in complicated situations. That task is easier to perform for highway environments than for general roads because road environments
are well-established in highways and obstacle classes are limited. On the other hand, general roads are not always well-established and
various small obstacles, as well as larger ones, must be detected. For the purpose of discerning obstacles and road patterns, it is
important to determine the relative positions of the camera and the road surface. This paper presents an efficient solution using a
stereo-vision-based obstacle detection method for general roads. The relative position is estimated dynamically even without any clear
lane markings. Additionally, obstacles are detected without applying explicit models. We present experimental results to demonstrate
the effectiveness of our proposed method under various conditions.


Zehang Sun, George Bebis, and Ronald Miller, in their study, present a review of recent vision-based on-road vehicle detection
systems. Their focus is on systems where the camera is mounted on the vehicle rather than being fixed, such as in traffic/driveway
monitoring systems. First, we discuss the problem of on-road vehicle detection using optical sensors followed by a brief review of
intelligent vehicle research worldwide. Then, we discuss active and passive sensors to set the stage for vision-based vehicle detection.
Methods aiming to quickly hypothesize the location of vehicles in an image as well as to verify the hypothesized locations are
reviewed next. Integrating detection with tracking is also reviewed to illustrate the benefits of exploiting temporal continuity for
vehicle detection

Akihiro Takeuchi, Seiichi Mita, and David McAllester, in their study, propose a novel method for vehicle detection and tracking
using a vehicle-mounted monocular camera. In this method, features of vehicles are learned as a deformable object model through the
combination of a latent support vector machine (LSVM) and histograms of oriented gradients (HOG). The vehicle detector uses both
global and local features as the deformable object model. Detected vehicles are tracked by using a particle filter with integrated
likelihoods, such as the probability of vehicles estimated from the deformable object model and the intensity correlation between
different picture frames.






SYSTEM DESIGN ARCHITECTURE:

The figure shows the diagram of the proposed approach. The system takes two inputs, the right image and the left image,
and is designed using the Hough transform and the Kalman filter. With the Hough transform, we find the drivable road boundary.
The resulting road boundary is then used as road contextual information to enhance the performance of each processing step of the
multi-object recognition, which detects the objects of interest by using the Kalman filter.



Fig.: Flow diagram of the proposed work

A) ROAD BOUNDARY DETECTION AND TRACKING:
Computer vision-based methods are widely used for road detection, as they are more robust than the alternatives. For the
detection of the road boundary, a Hough transform is used: it detects the boundary in the current video frame and finally localizes
the road boundary, drawn in red and green. The goal is to find the edges of the road within which the human driver has to drive.
Using the Hough transform, the proposed approach finds the road edges; the detected road boundaries are used to verify which ones
need to be tracked and which do not. As the figure shows, the approach can still find the accurate drivable road boundary robustly.
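The paper's implementation is in MATLAB; purely as an illustration of the Hough transform's voting step for straight road
boundaries, a minimal C sketch follows (the image size and accumulator quantisation are assumptions, not values from the paper):

/* Hedged sketch of Hough-transform voting: each edge pixel votes for every
   (theta, rho) line through it; peaks in the accumulator are line candidates. */
#include <math.h>

#define W 320               /* assumed image width */
#define H 240               /* assumed image height */
#define N_THETA 180         /* 1-degree angle bins */
#define N_RHO   800         /* rho bins, offset so negative rho fits */

void hough_vote(const unsigned char edge[H][W],
                unsigned int acc[N_THETA][N_RHO])
{
    const double PI = 3.14159265358979;
    int x, y, t;
    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++) {
            if (!edge[y][x])
                continue;                          /* only edge pixels vote */
            for (t = 0; t < N_THETA; t++) {
                double th = t * PI / N_THETA;
                int rho = (int)(x * cos(th) + y * sin(th)) + N_RHO / 2;
                if (rho >= 0 && rho < N_RHO)
                    acc[t][rho]++;                 /* one vote per (theta, rho) */
            }
        }
}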



Fig. 1. Road boundary detection.








B) MULTI-OBJECT RECOGNITION:

As mentioned previously, intelligent vehicles have to share the road environment with all road users, such as pedestrians,
motorbikes, bicycles, and other vehicles, and different types of detection systems are being developed to improve traffic safety with
respect to these road users. In the proposed system, particular objects of interest, including vehicles, pedestrians, motorcycles, and
bicycles, are recognized with the context information. Object identification is challenging in that objects present dramatic
appearance changes according to camera viewpoint and environmental conditions. For object detection, a Kalman filter is used. The
Kalman filter object is designed for tracking: it is used to predict a physical object's future location, to reduce noise in the detected
location, and to help associate multiple physical objects with their corresponding tracks. A Kalman filter object can be configured
for each physical object for multiple-object tracking.
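To make the predict/update cycle concrete, a generic one-dimensional Kalman filter sketch in C follows; this is textbook form
under a random-walk motion model, not the paper's MATLAB Computer Vision Toolbox configuration:

/* Hedged sketch: generic 1-D Kalman filter (random-walk model). */
typedef struct {
    double x;   /* state estimate (e.g. object position) */
    double p;   /* estimate variance */
    double q;   /* process noise variance */
    double r;   /* measurement noise variance */
} kf1d;

double kf1d_step(kf1d *f, double z)
{
    double k;
    f->p += f->q;               /* predict: uncertainty grows */
    k = f->p / (f->p + f->r);   /* Kalman gain */
    f->x += k * (z - f->x);     /* update with measurement z */
    f->p *= (1.0 - k);          /* uncertainty shrinks after update */
    return f->x;                /* filtered position estimate */
}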
The flowchart of this object detection system is shown in Fig. 2, and its main steps are discussed in the following section. The
first step is to collect a database of video files. All images are masked and foreground detection is applied. Then block analysis is
performed; after analysing all the blocks, all the frames of the image are read. Previous frames are deleted and a new frame is
created. The system detects the object track and predicts frames.


Fig. 2: Flowchart of the object detector

Object tracking is often performed to avoid false detections over time and to predict future target positions. However, it is
unnecessary to keep tracking targets that are out of the collision range. Particular objects of interest, including vehicles, pedestrians,
motorcycles, and bicycles, are recognized and will be provided to the behavioural and motion-planning systems of the intelligent
vehicle for high-level functions. Example results in various scenarios for different on-road objects are shown in the figures, which
substantiate that the proposed system can successfully detect objects of interest of various sizes, types, and colours.
1) Vehicle detection:-

Original video Segmented video



2) Pedestrians detection :-


Original video Segmented video

3) Vehicle and pedestrian detection:-


Original video Segmented video


CONCLUSION:
We present a vision-based approach for estimating the road boundary and recognizing a number of road users. Our first
contribution is road detection using the Hough transform, which allows us to verify which boundaries need to be tracked and which
do not; this helps the human driver stay within a particular road boundary. Our second contribution is the use of road contextual
correlation for enhancing object recognition performance. The Kalman filter object is designed for tracking and is used to predict an
object's future location. The system is developed to improve traffic safety with respect to road users. All of these contributions
improve the accuracy as well as the robustness of road environment recognition.

REFERENCES:
[1] Chunzhao Guo, Member, IEEE, and Seiichi Mita, Member, IEEE Semantic-based Road Environment Recognition in
Mixed Traffic for Intelligent Vehicles and Advanced Driver Assistance Systems
[2] M.Nieto and L.Salgado , Real time vanishing point estimation in sequences using adaptive steerable filter bank ,in
proc. Advanced concepts for intelligent vision system ,LNCS 2007,pp.
[3] J.C.McCall and M.M.Trivedi , Video-Based Lane Estimation and Tracking for Driver Assistance: Survey, System,
and Evaluation, IEEE Trans .Intell .Transp .Syst .,vol 7,no.1 ,pp. 20-37,Mar 2006
[4] J. McCall, D. Wipf, M. M. Trivedi, and B. Rao, Lane change intent analysis using robust operators and sparse
Bayesian learning, in Proc. IEEE Int. Workshop Machine Vision Intelligent Vehicles/IEEE Int. Conf. Computer Vision
and Pattern Recognition, San Diego, CA, Jun. 2005, pp. 59-66.
[5] M. Darms, M. Komar, and S. Lueke, Map based road boundary estimation, in Proc. IEEE Intelligent Vehicles Symposium,
2010, pp. 609-614.
[6] A. Seki and M. Okutomi, Robust obstacle detection in general road environment based on road extraction and pose estimation,
in Proc. IEEE Intelligent Vehicles Symposium, 2006, pp. 437-444.
[7] S. Kubota , T. Nakano and Y. Okamoto , A Global Optimization Algorithm for Real-Time On-Board Stereo
Obstacle Detection Systems , in proc. IEEE Intelligent Vehicles Symposium, 2007, pp 7-12
[8] F. Han, Y. Shan, R. Cekander, H. S. Sawhney, and R. Kumar, A two-stage approach to people and vehicle detection with HOG-
based SVM, in Performance Metrics for Intelligent Systems 2006 Workshop, pp. 133-140, Aug. 2006
[9] K. Kluge, Performance evaluation of vision-based lane sensing: some preliminary tools, metrics, and results, in Proc. IEEE
Intelligent Transportation Systems Conf., Boston, MA, 1997, pp. 723-728.
[10] M.Bertozzi, A. Broggi, and A. Fascioli, Obstacle and Lane Detection on Argo Autonomous Vehicle, IEEE Intelligent
Transportation Systems, 1997.
[11] Z. Sun, G. Bebis, and R. Miller, On-road vehicle detection: a review, IEEE Trans. Pattern Analysis and Machine Intelligence,
vol. 28, no. 5, pp. 694-711, 2006.
[12] D. Geronimo, A.M. Lopez, A.D. Sappa, and T. Graf, Survey of pedestrian detection for advanced driver assistance systems,
IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 7, pp. 1239-1258, 2010.
[13] A. Takeuchi, S. Mita, and D. McAllester, On-road vehicle tracking using deformable object model and particle filter with
integrated likelihoods, in Proc. IEEE Intelligent Vehicles Symposium, 2010, pp. 1014-1021.
[14] K. Fürstenberg, D. Linzmeier, and K. Dietmayer, Pedestrian recognition and tracking of vehicles using a vehicle-based
multilayer laser scanner, in Proc. 10th World Congress on ITS, Madrid, Spain, November 2003.














Design & Implementation of ANFIS System for Hand Gesture to Devanagari
Conversion
Pranali K. Misal¹, Prof. M.M. Pathan²
¹Scholar, Department of Electronics Engineering, G.H.Raisoni College of Engineering, Digdoh hill, Nagpur, India
²Faculty, Department of Electronics Engineering, G.H.Raisoni College of Engineering, Digdoh hill, Nagpur, India
E-mail: missal.pranali20@gmail.com

ABSTRACT - Sign language mainly uses hand gestures for communication between vocally and hearing-impaired people and
normal people; it is also used to share messages. This research work presents a simple sign language recognition system developed
using ANFIS and a neural network. Devanagari is one of the most widely used scripts for writing. The system uses functions such
as skin-colour detection, convex hull, contour detection, and identification of extrema points on the hand. This technology converts
images from video acquisition into the Devanagari spoken language. A database of all alphabets is created first, followed by feature
extraction for all images. The features are calculated first, and then the neural network is trained on the centroid values; the
algorithm for training the neural network is linear vector quantization. After the feature extraction points are trained in the neural
network, a new hand gesture is recognized and the sign language is translated into Devanagari alphabets. The system architecture
contains video acquisition, image processing, feature extraction, and a neural network classifier. The system can recognize
alphabets that are signed with one- and two-hand movements; it identifies the gesture and then translates the sign into the
Devanagari language. This project aims to develop and test a new method for recognition of Devanagari sign language. To do so,
preprocessing and contour- and convex-hull-based feature extraction are done. The method is evaluated on a database and proves
superior to rule-based methods. To identify Devanagari alphabets of sign language in an image, morphological operations and
skin-colour detection are performed. A MATLAB implementation of the complete algorithm is developed, and the conversion of
sign language into the Devanagari spoken language is achieved with better accuracy.

Keywords - Hand gesture, sign language recognition, image processing, ANFIS, feature extraction, contour points, convex hull,
Devanagari alphabets and numerals.
INTRODUCTION
Hand gesturing is a way of communication between vocally and hearing-impaired people and others. A person who knows sign
language can converse with an impaired person properly, but untrained people cannot communicate with a mute person without
learning sign language. A hands-gesture-to-Devanagari-voice system will therefore be very useful for the vocally and hearing-
impaired to communicate with normal people more fluently. The proposed system converts sign language into spoken language;
the aim of this research work is the conversion of hand gestures to Devanagari speech. The vocally and hearing-impaired
community has developed its own culture and communicates with ordinary people by using sign language.
Hand gestures are basically physical actions of the hands and eyes by which we can communicate with deaf and dumb people.
Gestures represent the ideas and actions of deaf and dumb people, who can express their feelings with different hand shapes, finger
patterns, and hand movements. Gestures vary greatly between cultures. Hand gestures are basically used for communication by
people who are unable to speak with one another; hearing-impaired people cannot use a telephone conversation, where the parties
cannot see each other, in the way face-to-face communication works. These problems are overcome by a hand gesture recognition
system that recognizes Devanagari alphabets and numerals. Available advanced technology makes it possible to recognize and
classify various hand gestures and use them in a wide range of applications. The linear vector quantization algorithm is used for
training the neural network. Devanagari alphabets involve movements of both palms or hands, while numerals involve only one
hand. A white background is used for the images, because colour gloves are very expensive; in this proposed work, only a white
background is used, for better feature extraction.




II. RELATED WORK ON HAND GESTURE RECOGNITION

Gesture recognition has become an important factor for sign language, and gesture recognition techniques have been developed
for voice generation from hand gestures. Ullah [7] designed a system with 26 images, one representing each alphabet, used for
training. The recognition of American Sign Language from static images using CGP has an accuracy of 90%. Miller and Thomson
first gave the idea of CGP; CGP genes are represented by nodes, each with certain characteristics, which together represent the
actual system of CGP chromosomes or genotypes. The accuracy of this system is reported on images of 47×27 pixel resolution,
with too small a data set for testing and with manual preprocessing of the training images. CGP-based systems are faster than
conventional GP algorithms, but compared with recently developed neuro-evolutionary approaches, CGP is slow; hence the
interest in fast-learning approaches like Cartesian genetic programming evolved artificial neural networks (CGPANN). Dr. Raed
Abu Zaiter and Maraqa [3] developed a system for recognition of Arabic sign language using a recurrent neural network. This
work used colour-coded gloves to extract accurate features, and a recognition accuracy of 95% is reported. The images were
captured by a colour camera and digitized into 256×256 pixel images, then converted into the HSI system, after which colour
segmentation was done with MATLAB 6. The results show an improvement in the generalizability of the system when using a
fully recurrent neural network rather than an Elman network.

Paulraj et al. [5] developed a system which converts sign language into voice signals in the Malaysian language, with feature extraction done by the Discrete Cosine Transform (DCT). The system uses a camera and, to cope with lighting sensitivity and background conditions, applies skin colour segmentation to each gesture frame; moments are then calculated from the segmented blobs in each set of image frames. For this phoneme-based sign language recognition system using skin colour segmentation, a recognition rate of 92.85% was reported. Akmeliawati [6] developed an automatic sign language translator that provides real-time English translation of Malaysian Sign Language (BIM). The translator can recognize both finger spelling and sign gestures involving static and motion signs, and a neural network is used to translate the sign language into English; this matters because many deaf signers learn English and Malay only as second languages. Data gloves are less comfortable for the signer and are very costly. The automatic translator recognizes all 49 signs in the BIM vocabulary and achieved a recognition rate of over 90%.

Fang et al. [1] make use of three additional trackers in their hybrid system with self-organizing feature maps (SOFM) and hidden Markov models (HMM), with an accuracy between 90% and 96%; using the SOFM/HMM system increases recognition accuracy by 5% over HMM alone. The recognition rate of this system is 91% when recognizing sentences of 40 signs, and, with a strict grammar imposed, the real-time performance accuracy was 97%. A self-adjusting recognition algorithm is proposed for improving SOFM/HMM discrimination. The aim of sign language recognition is to provide an efficient and accurate mechanism to transcribe sign language into text or speech so that communication between deaf and hearing people becomes easy; their system correctly recognizes 27 out of 31 ASL symbols. Memona Tariq, Ayesha Iqbal, Aysha Zahid, and Zainab Iqbal [2] presented a machine translation of sign language into text. Such approaches rely on a webcam and on intrusive hardware in the form of wired or coloured gloves, and accurate gestures are required for specific language dialects to be understood well. For feature extraction, functions such as skin-colour-based thresholding, contour detection and convexity defect detection are used to detect the hands and identify important points on them. In testing, the system can be used for translation of trained sign language symbols; it recognized one-handed signs comprising 9 numerals and 24 alphabets of the English language, with a maximum average recognition accuracy of 77% on numerals and alphabets.



III. SYSTEM DESIGN ARCHITECTURE

The flowchart of the system is shown in Fig 2 and its main steps are discussed in the following sections. The first step is to collect a database of all Devanagari alphabets; the database contains a large number of Devanagari alphabet images. Features are calculated by the convex hull, extrema point and contour point methods. Once the centroid-based feature values are calculated, they are stored and used to train the neural network. A fuzzy system is used for the rule-based part of the system, and the neural network is trained with the Linear Vector Quantization algorithm, which then performs the classification. Then, from video acquisition, a new

image is captured and compared with the database. The image is processed, and the sign is then correctly converted into Devanagari voice for the alphabets.






Fig 1 - System Architecture

Features are extracted from the image based on the distances between the centroid, fingers and palm. These feature vectors are used by the neural network system.



Fig 2 - Flowchart of the system


A. SKIN COLOR DETECTION

An image is captured from the video to identify the hand and determine the gesture. For hand gesture identification, the RGB and grayscale colour models are required. Skin colour detection, i.e. detecting only the skin-coloured pixels, is performed using morphological operations. Once the skin colour is detected, the boundary of the hand is located by points; the boundary around the hand is obtained using the skin colour detection technique, and the convex hull is then useful for collecting features from the image.
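
As a rough illustration of this step, the following Python/OpenCV sketch builds such a skin mask by colour thresholding followed by morphological clean-up. It is a minimal sketch under assumed parameters: the paper does not state its colour space bounds or kernel size, so the YCrCb limits below are common illustrative values, not the authors' settings.

import cv2
import numpy as np

def skin_mask(frame_bgr):
    # Convert to YCrCb, where skin tones cluster in the Cr/Cb planes
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # assumed lower bound
    upper = np.array([255, 173, 127], dtype=np.uint8)  # assumed upper bound
    mask = cv2.inRange(ycrcb, lower, upper)
    # Morphological opening/closing removes speckle and fills small holes,
    # matching the morphological operation mentioned above
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

A plain white background, as used in this work, makes such thresholding considerably more reliable.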


B. FEATURE EXTRACTION

The image is captured against a white background for better results. It is processed by the skin colour detection function, and the contour and convex hull of the hand shape are then determined. The spatial moments, giving the position of the hand, are required for contouring; the contour is the boundary or outline of the curved shape, drawn around the hand. The hand can appear in different orientations, which the convex hull and contour point features must handle. The key information is contained in the fingers and palm. For hand gesture identification, the convex hull is built over the contour points, and the convexity defects of the contour are then found between joining points. The start points of the defects are marked and used for computing the feature vector. Defect points are unevenly distributed and vary in number from one frame to another, so they are filtered by examining all contour points. The distances of the contour points from the centroid are then determined; these distances are the features extracted from every hand gesture.
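
The following Python/OpenCV sketch illustrates the feature computation described above: the hand contour, its convex hull, the convexity defects, and the distances of the defect points from the centroid. It is an illustrative sketch only; the paper does not give its exact filtering rules, so the choice of the largest contour and of defect start points is an assumption.

import cv2
import numpy as np

def gesture_features(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.array([])
    hand = max(contours, key=cv2.contourArea)          # assume largest blob is the hand
    m = cv2.moments(hand)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid of the contour
    hull_idx = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull_idx)     # gaps between hull and contour
    feats = []
    if defects is not None:
        for start, end, far, depth in defects[:, 0]:
            x, y = hand[start][0]                      # defect start point
            feats.append(np.hypot(x - cx, y - cy))     # distance to centroid
    return np.array(feats)

The resulting distance vector is what would be fed to the neural network for training and classification.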


Fig 3- Original Image Fig 4- Convex hull


Fig 5- Contour points



C. ANFIS-

The adaptive neuro-fuzzy inference system (ANFIS), proposed by Jang in 1993, implements a Sugeno fuzzy inference method. The ANFIS architecture contains a six-layer feed-forward neural network, as shown in Fig. 6. Layer 1 is the input layer, which passes external crisp signals to Layer 2, known as the fuzzification layer, where the membership grade of each input is determined by the given fuzzy membership functions. Layer 3 of ANFIS is the rule layer, which calculates the firing strength of each rule as the product of the membership grades.

Layer 4 computes the normalized firing strengths: each neuron in this layer receives inputs from all neurons in Layer 3 and calculates the ratio of the firing strength of a given rule to the sum of the firing strengths of all rules. Layer 5 is the defuzzification layer

that yields the parameters of the consequent part of each rule. A single node in Layer 6 calculates the overall output as the summation of all incoming signals. ANFIS training can use alternative algorithms to reduce the training error. The LVQ network is used as the classifier for sign recognition, where each neuron corresponds to a different category.
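
Since the paper names LVQ as the classifier, a minimal LVQ1 training loop is sketched below to make the idea concrete. This is a generic textbook sketch, not the authors' code: the network size (one prototype per class), learning rate and epoch count are assumptions.

import numpy as np

def train_lvq(X, y, n_classes, lr=0.05, epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize one prototype per class from a random sample of that class
    protos = np.stack([X[rng.choice(np.flatnonzero(y == c))]
                       for c in range(n_classes)]).astype(float)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w = np.argmin(np.linalg.norm(protos - xi, axis=1))  # winning neuron
            if w == yi:
                protos[w] += lr * (xi - protos[w])  # pull prototype towards sample
            else:
                protos[w] -= lr * (xi - protos[w])  # push prototype away
    return protos

def classify(protos, x):
    return int(np.argmin(np.linalg.norm(protos - x, axis=1)))

Each prototype plays the role of one neuron corresponding to one gesture category, as described above.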


Fig. 6 Adaptive Neuro-Fuzzy Inference System (ANFIS)[9]


IV. CONCLUSION

In this paper we proposed, designed and tested a method for Devanagari sign language recognition using a neural network, an ANFIS classifier, and features extracted from contour points and the convex hull. From the experiments we conclude that the approach obtains good results, with a recognition accuracy greater than 90%. The sign language recognition system is built using skin colour detection and a neural network trained with the LVQ algorithm, and the resulting sign-language-to-voice system helps vocal and hearing impaired people communicate with hearing people more fluently. Sign language is the most important language for the vocal and hearing impaired, and the aim of this research work is to convert sign language into spoken Devanagari. Although a lot of work has been done in this area previously, our direction is to extend the system to recognize Devanagari alphabets that can be signed with one-hand movements.

REFERENCES

[1] G. Fang, W. Gao, J. Ma, "Signer-independent sign language recognition based on SOFM/HMM," Proceedings of the IEEE ICCV Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, pp. 90-95, 2001.

[2] Memona Tariq, Ayesha Iqbal, Aysha Zahid, and Zainab Iqbal, "Sign Language Localization: Learning to Eliminate Language Dialects," International Journal of Human Computer Interaction, 2012.

[3] Meenakshi Panwar, "Hand Gesture based Interface for Aiding Visually Impaired," International Conference on Recent Advances in Computing and Software Systems, 2012.

[4] M. Maraqa, R. Abu-Zaiter, "Recognition of Arabic Sign Language (ArSL) using recurrent neural networks," First International Conference on Applications of Digital Information and Web Technologies (ICADIWT), pp. 478-481, 2008.

[5] M. P. Paulraj, S. Yaacob, M. S. bin Zanar Azalan, R. Palaniappan, "A phoneme based sign language recognition system using skin color segmentation," 6th International Colloquium on Signal Processing and its Applications (CSPA), pp. 1-5, 2010.

[6] R. Akmeliawati, M. P-L. Ooi, Y. C. Kuang, "Real-Time Malaysian Sign Language Translation using Colour Segmentation and Neural Network," IEEE Instrumentation and Measurement Technology Conference Proceedings (IMTC), pp. 1-6, 2007.

[7] F. Ullah, "American Sign Language recognition system for hearing impaired people using Cartesian Genetic Programming," 5th International Conference on Automation, Robotics and Applications (ICARA), pp. 96-99, 2011.

[8] Chunli Wang, Wen Gao, Shiguang Shan, "An Approach Based on Phonemes to Large Vocabulary Chinese Sign Language Recognition," Fifth IEEE International Conference on Automatic Face and Gesture Recognition (FGR'02), 2002.

[9] Jyh-Shing Roger Jang, "ANFIS: Adaptive-Network-Based Fuzzy Inference System," IEEE Transactions on Systems, Man and Cybernetics, vol. 23, no. 3, May/June 1993.

[10] H. Birk, T. B. Moeslund, and C. B. Madsen, "Real-time recognition of hand alphabet gestures using principal component analysis," Scandinavian Conference on Image Analysis (SCIA), pp. 261-268, 1997.

[11] T. Starner and A. Pentland, "Visual recognition of American sign language using hidden Markov models," International Conference on Automatic Face and Gesture Recognition, pp. 189-194, 1995.

[12] C. Vogler and D. Metaxas, "ASL recognition based on a coupling between HMMs and 3D motion analysis," Proceedings of the International Conference on Computer Vision, pp. 363-369, 1998.

[13] Y. Nam and K. Wohn, "Recognition of Space-Time Hand-Gestures Using Hidden Markov Model," Proceedings of the ACM Symposium on Virtual Reality Software and Technology, pp. 51-58, Hong Kong, July 1996.

[14] B. Bauer, H. Hienz, and K. F. Kraiss, "Video-Based Continuous Sign Language Recognition Using Statistical Methods," Proceedings of the International Conference on Pattern Recognition, pp. 463-466, Barcelona, Spain, September 2000.





Effects of Compression Ratio on Performance of a Single Cylinder 4-Stroke
Compression Ignition Engine using Blends of Neat Karanja Oil with Diesel in
Dual Fuel Mode
N.H.S. Ray 1, S.N. Behera 1, M.K. Mohanty 2

1 Faculty, Department of Mechanical Engineering, CEB, Bhubaneswar, BPUT, Odisha, India
2 Faculty, Department of FMP, CAEB, OUAT, Bhubaneswar, Odisha, India
E-mail: mohanty65m@gmail.com

ABSTRACT- The diesel engine is a major tool in the day-to-day life of modern society. Fossil fuel scarcity and pollutant emissions from diesel engines have become two important problems of the world today. One method to overcome the crisis is to find suitable substitutes for petroleum-based fuels. Biofuels have been gaining popularity recently as alternative fuels for diesel engines, and they can be used in any diesel engine, usually without modification. In India, millions of tonnes of non-edible seeds, such as Karanja seeds, go to waste; oil produced from these seeds can be used as an alternative fuel in diesel engines. The overall objective is to prevent waste, increase the value recovered from the resource as biofuel, and help meet fossil fuel scarcity. Compared with diesel fuel, biodiesel from Karanja oil has the advantages of being renewable and non-toxic, and of reducing CO, HC and smoke emissions from the engine. It can be used in a CI engine by blending with diesel, or directly in the engine without any engine modification. In the present study, the effects of compression ratio on the performance of a four stroke single cylinder diesel engine using Karanja oil in dual fuel mode are investigated.

Key Words- Biodiesel, VCR engine, Karanja oil, Compression ratio, Alternate fuel, BSFC, BTE.


I. INTRODUCTION
Energy is one of the major sources for the economic development of any country. India, being a developing country, requires a much higher level of energy to sustain its rate of progress. According to the International Energy Agency (IEA), hydrocarbons account for the majority of India's energy use; together, coal and oil represent about two-thirds of total energy use. Natural gas now accounts for a seven percent share, which is expected to grow with the discovery of new gas deposits. India had approximately 5.7 billion barrels of proven oil reserves as of January 2011, the second-largest amount in the Asia-Pacific region after China. The combination of rising oil consumption and relatively flat production has left India increasingly dependent on imports to meet its petroleum demand. To combat the present energy crisis, one of the important strategies that need to be adopted is to develop and promote appropriate technology for utilizing non-traditional energy resources to satisfy energy requirements. Hence, to overcome these problems, most combustion devices are modified to adapt gaseous fuels in dual fuel mode.
For substituting the petroleum fuels used in internal combustion engines, fuels of bio-origin provide a feasible solution to the twin crises of fossil fuel depletion and environmental degradation. For diesel engines, a significant research effort has been directed towards using vegetable oils and their derivatives as fuels. Several research institutions are actively pursuing the utilization of non-edible oils for the production of biodiesel, additives for lubricating oils, saturated and unsaturated alcohols and fatty acids, and many other value-added products. Biodiesel has received a good response worldwide as an alternative fuel to diesel; it is a cleaner burning fuel because of its own molecular oxygen content. In place of diesel, biodiesel can also be substituted as the pilot fuel in dual fuel operation, given the diminishing reserves of petroleum fuels and the rising awareness of protecting our environment. Biodiesel is produced from Karanja seeds by the transesterification process, which involves a chemical reaction between an alcohol and the triglycerides of fatty acids in the presence of a suitable catalyst, leading to the formation of fatty acid alkyl esters (biodiesel) and glycerol. Biodiesel's viscosity is much closer to that of diesel fuel than that of vegetable oil. Although biodiesel has many advantages over diesel fuel, several problems need to be addressed, such as its lower calorific value, higher flash point, higher viscosity and poor cold flow properties. These can lead to poor atomization and mixture formation with air, which result in slower combustion, lower thermal efficiency and higher emissions. To overcome such limitations of biodiesel, some researchers have studied the performance and emissions of diesel engines with increased injection pressure. Fuel injection pressure and fuel injection timing play a vital role in the ignition delay and combustion characteristics of the engine, as the temperature and pressure change significantly close to TDC. The fuel's properties also play a significant role in increasing or decreasing exhaust pollutants. Various investigations clearly report that the cetane number (CN) affects exhaust emissions. The CN also affects the combustion efficiency and ensures easy starting of the engine. However, if the CN is excessively higher than the normal value, the ignition delay will be too short for the fuel to spread into the combustion chamber; as a result, the engine performance will be reduced and the smoke value will increase.




II. METHODOLOGY
2.1 Experimental set up




Fig.1 Actual Engine setup
The experimental investigations were carried out on a VCR diesel engine test rig consisting of a single cylinder, 4-stroke, 3.5 kW at 1500 rpm diesel engine connected to an eddy current dynamometer in computerized mode, to study the performance, emission and combustion of the engine by varying its compression ratio at different load conditions from 0 kg to 12 kg, using various blends of Karanja oil and diesel. The detailed specification of the engine is shown in Table 1. The engine performance analysis software package EnginesoftLV has been employed for online performance analysis. A piezo sensor and a crank angle sensor, which measure the combustion pressure and the corresponding crank angle respectively, are mounted on the engine head. The output shaft of the eddy current dynamometer is fixed to a strain gauge type load cell for measuring the load applied to the engine. Type K Chromel (nickel-chromium alloy) / Alumel (nickel-aluminium alloy) thermocouples are used to measure gas temperatures at the engine exhaust, the calorimeter exhaust, the calorimeter water inlet and outlet, the engine cooling water outlet, and ambient. Fuel flow is measured using a 50 ml burette with level sensors and a stopwatch.

2.1.1 Engine specifications and attachments.

Table:-1 Engine specifications
Make Kirloskar
General details VCR Engine test setup 1- cylinder, 4- stroke, Water
cooled, compression ignition
Rated power 3.5 kW at 1500 rpm
Speed 1500 rpm(constant)
Number of cylinder Single cylinder
Compression ratio 16:1 to 18:1(variable)
Bore 87.5 mm
Stroke 110 mm
Ignition Compression ignition
Loading Eddy current dynamometer
Load sensor Load cell, type strain gauge 0-50 Kg
Temperature sensor Type RTD, PT100 and Thermocouple, Type K
Cooling Water
Air flow transmitter Pressure transmitter, Range (-) 250 mm WC
Rotameter Engine cooling 40-400 LPH; Calorimeter 25-250 LPH
Software EnginesoftLV Engine performance analysis software
Propeller shaft With universal joints
Air box M S fabricated with orifice meter and manometer
Fuel tank Capacity 15 lit with glass fuel metering column
Calorimeter Type Pipe in pipe
Piezo sensor Range 5000 PSI, with low noise cable
Crank angle sensor Resolution 1 Deg, Speed 5500 RPM with TDC pulse
Data acquisition device NI USB-6210, 16-bit, 250kS/s.
Piezo powering unit Make-Cuadra, Model AX-409.
Digital milivoltmeter Range 0-200mV, panel mounted
Temperature transmitter Type two wire, Input RTD PT100, Range 0-100 °C, Output 4-20 mA; and Type two wire, Input Thermocouple, Range 0-1200 °C, Output 4-20 mA

Load indicator Digital, Range 0-50 Kg, Supply 230VAC
Pump Type Monoblock
Overall dimensions W 2000 x D 2500 x H 1500 mm


2.1.2 Compression ratio adjustment



Fig.2 Compression ratio adjustment

The desired compression ratio can be set by slackening the 6 Allen bolts provided for clamping the tilting block. The lock nut on the adjuster is then set and the adjuster rotated as per the marking on the CR indicator; the lock nut and the 6 Allen bolts are then tightened gently. The centre distance between the two pivot pins of the CR indicator is noted down, and after changing the compression ratio the difference in this distance can be used to determine the new CR.
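
For reference, the relation between compression ratio and clearance volume for this engine follows directly from the bore and stroke in Table 1, CR = (Vs + Vc) / Vc. The short sketch below is an illustrative calculation, not part of the test rig software.

import math

BORE, STROKE = 0.0875, 0.110              # m, from Table 1
VS = math.pi / 4 * BORE ** 2 * STROKE     # swept volume, about 661 cc

def clearance_volume(cr):
    # CR = (Vs + Vc) / Vc  =>  Vc = Vs / (CR - 1)
    return VS / (cr - 1.0)

for cr in (16, 17, 18):
    print(f"CR {cr}: Vc = {clearance_volume(cr) * 1e6:.1f} cc")
# prints roughly 44.1, 41.3 and 38.9 cc for CR 16, 17 and 18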

2.1.3 Dynamometer



Fig.3 Dynamometer

It is an absorption type, eddy current, water cooled dynamometer used as the loading unit. The load is measured by a strain gauge type load cell.

2.1.4 Multi gas analyser

The multi-gas analyzer is capable of measuring the CO, HC, CO2, O2 and NOx (optional) contents in the exhaust. The AVL-444 analyzer provides optimized analysis methods for different applications; it can easily check the pollution level of various IC engines and is elegant and smart in appearance. The analyzers are easy to install and are known for efficient functioning; further, the range is tested on various parameters in order to meet the set industrial standards. The specifications of accuracy for the measurement of various parameters are given in Table 2.

Table-2 Measurement range and accuracy of AVL 444 gas analyzer

Measured Quantity   Measuring Range     Resolution                            Accuracy
CO                  0-10% vol.          0.01% vol.                            <0.6% vol.: 0.03% vol.; >=0.6% vol.: 5% of ind. value
CO2                 0-20% vol.          0.1% vol.                             <10% vol.: 0.5% vol.; >=10% vol.: 5% vol.
HC                  0-20000 ppm vol.    <=2000: 1 ppm vol.; >2000: 10 ppm     <200 ppm vol.: 10 ppm vol.; >=200 ppm vol.: 5% of ind. val.
O2                  0-22% vol.          0.01% vol.                            <2% vol.: 0.1% vol.; >=2% vol.: 5% vol.
NO                  0-5000 ppm vol.     1 ppm vol.                            <500 ppm vol.: 50 ppm vol.; >=500 ppm vol.: 10% of ind. val.
Engine speed        400-6000 min-1      1 min-1                               1% of ind. val.
Oil temperature     -30 to 125 °C       1 °C                                  4 °C
Lambda              0-9.999             0.001                                 calculated from CO, CO2, HC, O2



Fig.4 Multi gas analyser

2.1.5 Smoke meter




Fig.5 Smoke meter

Table-3 Measurement range and accuracy of Smoke meter

Measured Quantity   Measuring Range     Resolution      Accuracy
Opacity             0-100%              0.1%            % of full scale
Absorption          0-99.99 m-1         0.01 m-1        better than 0.1 m-1
RPM                 400-6000 1/min      1 1/min         10 1/min
Oil temperature     0-150 °C            1 °C            3 °C

2.2 Experimental layout


Fig.6 Layout of VCR engine

2.3 Experimental procedure

The variable compression ratio engine is started using standard diesel and run for 30 minutes; once the engine is warmed up, readings are taken. The tests are conducted at the rated speed of 1500 rpm. Fuel consumption is measured with the measuring burette attached to the data acquisition system. In every test, the brake thermal efficiency, brake specific fuel consumption, exhaust gas temperature, mechanical efficiency, torque, combustion parameters such as combustion pressure, combustion temperature, ignition delay, net heat release rate and combustion duration, and exhaust gas emissions such as carbon monoxide (CO), carbon dioxide (CO2), hydrocarbons (HC), nitrogen oxides (NOx) and smoke opacity are measured. From the initial measurements, the performance, combustion and emission parameters at compression ratios 16:1, 17:1 and 18:1 at 100% load are calculated and recorded for the different blends. The engine operating parameters (performance, combustion and emission) are also measured and recorded at different loads for the different blends at compression ratio 18. At each operating condition, the performance and combustion characteristics are processed and stored on a personal computer (PC) for further processing of results. The same procedure is repeated for the different blends of Karanja oil.

The specific gravity of biodiesel fuels is lower than that of straight vegetable oil; therefore, the specific gravity of a blend increases with the biodiesel concentration. The specific gravity also shows an inverse relationship with temperature: as the temperature increases, the specific gravity decreases. The viscosity of biodiesel is found to be lower than that of straight vegetable oil, and the viscosity of a blend increases with increasing biodiesel fraction. Similar to the effect of temperature on specific gravity, viscosity also shows a linearly inverse trend, i.e. increasing temperature reduces the viscosity. This property helps in better atomisation and hence better fuel burning in biodiesel applications. It has been noticed that the specific gravity and viscosity of the biodiesel blends increase with the biodiesel fraction, and that the specific gravity and viscosity of each blend decrease with increasing temperature.
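
As a rough numerical check of these blending trends, a blend property can be approximated by a volume-weighted linear mixing rule. This rule is an illustrative assumption, not the paper's correlation, and the diesel specific gravity used below is back-estimated from the K-10 measurement reported in section 2.4.1 rather than taken from the paper.

def blend_specific_gravity(x_bio, sg_bio, sg_diesel):
    # Volume-weighted linear blending rule (assumed approximation)
    return x_bio * sg_bio + (1.0 - x_bio) * sg_diesel

# With the measured SG of neat Karanja oil, 0.925 at 30 C, and an assumed
# diesel SG of about 0.815, the rule predicts ~0.837 for K-20,
# close to the measured 0.8375
print(blend_specific_gravity(0.20, 0.925, 0.815))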


2.4 Fuel property testing at different blends and temperatures

2.4.1 Specific gravity of blends of Karanja oil at different temperatures
The specific gravity of all fuel blends (neat Karanja oil, neat oil blended with diesel, 100% biodiesel and biodiesel blended with diesel) is measured as per standard ASTM D4052 at varying temperatures using a hydrometer.

Fig.7 Hydrometer

Referring to Table 4, it can be seen that the specific gravity of every blend decreases with increase in temperature. The specific gravity of neat Karanja oil (K100) is found to vary from 0.925 to 0.878 over the temperature range 30-100 °C.

Table-4 Variation of specific gravity with temperature and blend (neat Karanja)

Blend     30 °C    40 °C    50 °C    60 °C    70 °C    80 °C    90 °C    100 °C
K-10 0.827 0.8215 0.814 0.807 0.800 0.793 0.785 0.780
k-20 0.8375 0.832 0.825 0.820 0.812 0.8055 0.797 0.791
k-30 0.850 0.843 0.836 0.830 0.823 0.816 0.810 0.803
k-40 0.860 0.854 0.847 0.840 0.834 0.827 0.821 0.816
k-50 0.871 0.866 0.860 0.853 0.845 0.838 0.833 0.8265
k-60 0.883 0.877 0.870 0.866 0.859 0.852 0.846 0.840
k-70 0.892 0.886 0.880 0.873 0.866 0.860 0.853 0.846
k-80 0.905 0.898 0.891 0.884 0.877 0.871 0.866 0.858
k-90 0.916 0.913 0.904 0.895 0.887 0.881 0.874 0.870
k-100 0.925 0.919 0.914 0.907 0.901 0.891 0.885 0.878

2.4.2 Viscosity of blends of Karanja oil at different temperatures

When a fluid is subjected to external forces, it resists flow due to internal friction; viscosity is a measure of this internal friction. The viscosity of the fuel affects atomization and fuel delivery rates. It is an important property because if it is too low or too high, the atomization and the mixing of air and fuel in the combustion chamber are affected. Viscosity studies were conducted for the different fuel blends (neat Karanja oil, neat oil blended with diesel, 100% biodiesel and biodiesel blended with diesel). The kinematic viscosity of the liquid fuel samples was measured at different temperatures and blends as per the specification given in ASTM D445, using Cannon-Fenske viscometer tubes in a viscometer oil bath.

Fig.8 Viscometer
Referring to Table 5, it is observed that the viscosity of all blends decreases with increase in temperature. The viscosity of the K10 blend of neat Karanja oil is found to vary from 4.116 cSt to 2.2912 cSt over the temperature range 30 to 100 °C.



Table-5 Variation of viscosity with temperature and blend (neat Karanja)

Blend     30 °C    40 °C    50 °C    60 °C    70 °C    80 °C    90 °C    100 °C
K-10 4.116 3.238 2.351 2.043 1.763 1.2572 2.0572 2.2912
K-20 4.976 4.354 3.421 2.724 2.332 2.052 1.842 1.7
K-30 8.061 6.563 4.5 3.695 2.852 2.612 2.316 2.005
K-40 10.344 8.382 6.420 5.25 4.189 3.586 2.789 2.556
K-50 14.821 11.948 8.917 6.920 4.683 3.933 3.494 3.147
K-60 18.678 14.789 10.898 9.024 7.384 5.305 4.372 4.189
K-70 27.167 20.686 16.562 11.185 9.667 7.633 6.563 4.226
K-80 34.513 24.602 17.645 12.475 9.956 8.989 8.097 7.491
K-90 36.478 30.665 23.436 16.753 12.785 10.636 9.631 8.561
K-100 58.1324 42.785 32.173 22.228 13.256 11.649 7.589 9.346

III. RESULT AND DISCUSSION

3.1 Performance Analysis of Neat Karanja Oil

3.1.1 Brake specific fuel consumption

The brake specific fuel consumption (BSFC) decreases with increase in load, and K10 gives a lower BSFC than K20 and diesel. It can be seen from Fig 3.1.1.1 that BSFC increases with the blend percentage of Karanja oil; this is due to the lower calorific value and higher density of Karanja oil in the higher blends. The BSFC at full load for diesel, K10 and K20 is found to be 0.34 kg/kWh, 0.33 kg/kWh and 0.35 kg/kWh respectively. From Fig 3.1.1.2 it can be observed that the brake specific fuel consumption decreases with increase in compression ratio. The BSFC for blend K20 is found to be higher than that of diesel, while K10 shows a lower BSFC than both K20 and diesel at all compression ratios.
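
As a numerical illustration of how these quantities relate (a sketch, not the rig's EnginesoftLV code), BSFC is the fuel mass flow divided by the brake power, and the brake thermal efficiency follows from BSFC and the fuel's calorific value; the calorific value below is an assumed typical figure for diesel.

def bsfc_kg_per_kwh(fuel_kg_per_h, brake_power_kw):
    return fuel_kg_per_h / brake_power_kw

def bte_percent(bsfc, cv_kj_per_kg):
    # BTE = energy out / energy in = 3600 / (BSFC * CV), expressed in %
    return 3600.0 / (bsfc * cv_kj_per_kg) * 100.0

# Diesel at full load: reported BSFC 0.34 kg/kWh; with CV ~ 42,500 kJ/kg
# this gives ~24.9%, consistent with the BTE reported in section 3.1.2
print(bte_percent(0.34, 42500.0))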



Fig 3.1.1.1





Fig 3.1.1.2


3.1.2 Brake thermal efficiency

The variation of brake thermal efficiency (BTE) for different loads and different fuels is given in Fig 3.1.2.1. There is a steady increase in efficiency with increase in load for all the fuel operations, owing to the reduction in heat loss and the increase in power developed with load. The engine BTE at full load for diesel, K10 and K20 is 24.9%, 26.63% and 24.1% respectively. It is also observed that the BTE of the blend K20 is slightly lower than that of diesel, while that of K10 is higher than diesel. This may be due to the higher viscosity of the blend K20, resulting in a poorly formed fuel spray and air entrainment that affect combustion in the engine, and further to the lower volatility of vegetable oil. The variation of BTE with compression ratio for the different blends is given in Fig 3.1.2.2. The BTE of the blend K10 is higher than that of diesel at all compression ratios, and the BTE increases with compression ratio for all the fuel types tested.



Fig 3.1.2.1



Fig 3.1.2.2


3.1.3 Mechanical efficiency

Fig 3.1.3.1 shows the variation of mechanical efficiency with load for the various blends. There is a steady increase in mechanical efficiency for diesel and the blends as the load increases; the maximum mechanical efficiency for blend K10, 35.41%, was obtained at 50% load. The efficiency of the fuel blends is in general very close to that of diesel. The increase in efficiency for all the blends may be due to the improved quality of the spray, the high reaction activity in the fuel-rich zone, and the decrease in heat loss due to the lower flame temperature of the blends compared with diesel. At full load, diesel gives the maximum mechanical efficiency compared with K10 and K20; the mechanical efficiency at full load for diesel, K10 and K20 is 52.32%, 35.41% and 32.56% respectively.
The variation of mechanical efficiency with compression ratio for the various blends is shown in Fig 3.1.3.2. The mechanical efficiency increases with compression ratio for all the blends, being highest at the highest compression ratio, and the mechanical efficiency of diesel is higher than that of K10 and K20 at all compression ratios.



Fig 3.1.3.1



Fig 3.1.3.2

3.1.4 Exhaust gas temperature

The variation of exhaust gas temperature with applied load for the different blends is shown in Fig 3.1.4.1. The exhaust gas temperature increases with increase in load, and the exhaust gas temperatures of the blends are lower than that of diesel. The highest temperature obtained is 324.53 °C for diesel at full load, whereas it is only 317.93 °C and 310.39 °C for the blends K10 and K20; this may be because the energy content of diesel is higher than that of K10 and K20.
The variation of exhaust gas temperature with compression ratio for the different blends is shown in Fig 3.1.4.2. The exhaust gas temperature decreases with increase in compression ratio, and the result indicates that the exhaust gas temperature of the blends is lower than that of diesel; as the compression ratio increases, the exhaust gas temperature of the various blends remains below that of diesel. The reduction in exhaust gas temperature at increased compression ratio is due to the lower temperature at the end of compression.




Fig 3.1.4.1



Fig 3.1.4.2

3.2 Combustion analysis of neat Karanja
3.2.1 Combustion pressure
The variation of combustion pressure with load for different blends is shown in Fig 3.2.1.1. It shows that increasing load combustion pressure
increases. It shows that diesel gives maximum pressure as compared toK10 and K20. It is seen that the maximum pressure for diesel as well as
Karanja oil blends is almost the same at full load, the maximum pressure value for diesel and blends K10 and K20 being 61.2 bar, 58.93 bar, and
59.19 bar respectively. The peak pressure depends on the amount of fuel taking part in the uncontrolled phase of combustion, which is governed by
the delay period and spray envelop of the injected fuel.
The variation of combustion pressure for different compression ratio and for different blends is shown in Fig 3.2.1.2.It shows that increasing
compression ratio, combustion pressure increases. . It shows that diesel gives maximum pressure as compared toK10 and K20.


Fig 3.2.1.1




Fig 3.2.1.2

3.2.2. Combustion duration

It is difficult to define exactly the combustion duration of a diesel engine, as the total combustion process consists of the rapid premixed combustion, the mixing-controlled combustion, and the late combustion of fuel present in the fuel-rich combustion products. The combustion duration in general increases with load. The variation of the total combustion duration with load for the different blends is shown in Fig 3.2.2.1. At full load, the combustion duration for the fuel blends K10, K20 and diesel is 47, 77 and 19 degrees CA respectively. As the calorific value of a Karanja oil blend is lower than that of diesel, a larger quantity of fuel is consumed to keep the engine speed stable at different loads; a decrease in combustion duration reflects efficient combustion of the injected fuel. K20 gives a higher combustion duration than the other fuels.
Fig 3.2.2.2 shows the variation of combustion duration with compression ratio for the different blends: the combustion duration increases with compression ratio, and the oil blends cause a longer combustion duration at lower compression ratio and a shorter one at higher compression ratio. K20 again gives the highest combustion duration.



Fig 3.2.2.1


Fig 3.2.2.2
3.2.3. Net Heat release rate

The variation of the net heat release rate with load for the different blends is shown in Fig 3.2.3.1; the heat release rate increases with increasing load. The maximum heat release rate of diesel, K10 and K20 at full load is observed to be 53.2, 47.2 and 41.5 J/deg CA respectively. The heat release rate is analysed from the changes in cylinder pressure with crank angle. The heat release rate of the Karanja oil blends is lower than that

of diesel at all loads. The heat release rate of diesel is higher than that of the oil blends owing to its lower viscosity and the blends' reduced air entrainment and fuel-air mixing rates.
Fig 3.2.3.2 shows the variation of the heat release rate with compression ratio for the different blends: the heat release rate increases at the lower compression ratios and slightly decreases at the higher compression ratio. This may be due to the air entrainment, the lower air/fuel mixing rate, and the effect of the blends' viscosity. The heat release rate of diesel is higher than that of the oil blends because of its lower viscosity and better spray formation.
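
For readers reproducing this kind of analysis, the net heat release rate is conventionally obtained from the measured cylinder pressure and volume histories with the single-zone first-law expression dQn/dθ = [γ/(γ-1)]·p·dV/dθ + [1/(γ-1)]·V·dp/dθ. The sketch below applies this standard textbook formula; it is not the authors' own code, and the constant ratio of specific heats is an assumed value.

import numpy as np

def net_heat_release_rate(theta_deg, p_pa, v_m3, gamma=1.35):
    # Single-zone net heat release rate, J per degree crank angle
    dv_dtheta = np.gradient(v_m3, theta_deg)
    dp_dtheta = np.gradient(p_pa, theta_deg)
    return (gamma / (gamma - 1.0)) * p_pa * dv_dtheta \
         + (1.0 / (gamma - 1.0)) * v_m3 * dp_dtheta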




Fig.3.2.3.1



Fig 3.2.3.2

3.2.4 Mass fraction burnt

The variation of the mass fraction burnt with crank angle for the Karanja oil blends and diesel at compression ratio 18 and full load is given in Fig 3.2.4.1. Owing to the oxygen content of the blends, combustion is sustained in the diffusive combustion phase. Diesel gives a higher mass fraction burnt than the other blends; the highest rate of burning indicates the most efficient combustion. The engine operates with a rich mixture and reaches the stoichiometric region at higher compression ratio; more fuel is accumulated in the combustion phase, which causes rapid heat release.



Fig 3.2.4.1

3.2.5 Ignition delay

The most vital parameter in combustion analysis is the ignition delay. The variation of ignition delay with load for the different blends is shown in Fig 3.2.5.1. It is observed that the ignition delay of the Karanja oil and diesel blends decreases with increase in load. K20 gives a higher ignition delay than diesel, because more fuel is required owing to its lower calorific value.
Fig 3.2.5.2 shows the variation of ignition delay with compression ratio for the different blends: the ignition delay decreases with increase in compression ratio.




Fig.3.2.5.1



Fig 3.2.5.2

3.2.6 Maximum combustion temperature

The variation of the maximum combustion temperature with load for the different blends is given in Fig 3.2.6.1. The combustion temperature increases with load in all cases, and diesel gives a higher combustion temperature than the other blends. The maximum combustion temperature of diesel, K10 and K20 at full load is observed to be 1429.04, 1425.74 and 1400.01 °C respectively.
Fig 3.2.6.2 shows the variation of the maximum combustion temperature with compression ratio for the different blends. It is observed that the combustion temperature increases with compression ratio, due to more fuel being accumulated in the combustion chamber, and diesel gives a higher combustion temperature than all the other blends.



Fig 3.2.6.1




Fig 3.2.6.2



Fig.3.2.6.3

3.3 Emission analysis of Karanja oil

3.3.1 Carbon monoxide emission

Fig 3.3.1.1 shows the variation of the carbon monoxide emission of the blends and diesel with load. CO emission is higher at low load, decreases with increase in load, and increases again at high load. The CO emission of the blends K10 and K20 is higher than that of diesel at low load, which may be due to their higher viscosity and improper spray pattern resulting in incomplete combustion; at full load, diesel gives the highest CO emission.
Fig 3.3.1.2 shows the variation of the CO emission of the blends and diesel with compression ratio. CO emission decreases with increase in compression ratio, and the CO emission of diesel is the lowest compared with K10 and K20; this may be because air-fuel mixing is better at higher compression ratio.


Fig 3.3.1.1




Fig.3.3.1.2

3.3.2 Carbon dioxide emission

The variation of the carbon dioxide emission with load is shown in Fig 3.3.2.1. CO2 emission increases with increase in load. Over the whole engine load range, the CO2 emission of diesel fuel is lower than that of the other fuels; this is because vegetable oil contains oxygen, so its carbon content is relatively lower for the same volume of fuel consumed at the same engine load. A higher amount of CO2 is an indication of complete combustion of the fuel in the combustion chamber. The CO2 emission of the blend K20 is slightly higher than that of diesel at all loads, probably due to the higher oxygen availability.
The variation of the CO2 emission with compression ratio is shown in Fig 3.3.2.2. The blends emit a higher percentage of CO2 than diesel at the lower compression ratios, and vice versa. The CO2 emitted by the combustion of biofuels can be absorbed by plants, so the carbon dioxide level in the atmosphere is kept constant.



Fig 3.3.2.1




Fig 3.3.2.2

3.3.3 Hydrocarbon emission

The variation of the hydrocarbon emissions with load for the different blends is plotted in Fig 3.3.3.1. Increased HC emissions clearly show that combustion in the engine is not proper, and it is very clear that increasing the blend percentage of Karanja oil increases the HC emissions. All blends show higher HC emissions at 50% load, which may be due to poor atomization of the blended fuel because of its higher viscosity; physical properties of the fuel such as density and viscosity influence the HC emissions. The blend K10 has the highest HC emissions at full load.

The variation of the hydrocarbon emission with compression ratio for the different blends is given in Fig 3.3.3.2. The hydrocarbon emissions of the various blends are lower at the higher compression ratios. Blend K20 gives the higher HC emission at lower compression ratio, but at higher compression ratio K10 gives the higher emission.






Fig 3.3.3.1




Fig 3.3.3.2

3.3.4 Nitrogen oxides emission

Fig 3.3.4.1 shows the variation of the nitrogen oxides (NOx) emission with load for the different blends. NOx emission increases with increase in load, probably due to the higher combustion temperature in the engine cylinder with increasing load. It is also observed that increasing the percentage of Karanja oil in the blend tends to decrease the NOx emission: the NOx emission for diesel, K10 and K20 at full load is 550 ppm, 493 ppm and 524 ppm respectively. The limitation of the higher Karanja oil blends is their higher viscosity. The variation of the NOx emission with compression ratio for the different blends is shown in Fig 3.3.4.2. The NOx emission for diesel and the other blends increases with increase in compression ratio; diesel gives a higher NOx emission than the other blends, which closely follow it.




Fig 3.3.4.1



Fig 3.3.4.2

3.3.5 Smoke opacity

Fig 3.3.5.1 shows the variation of smoke opacity with load for the different blends. Smoke opacity increases with increase in load, and at full load K10 and K20 give a higher smoke opacity than diesel; it is observed, however, that K10 and K20 have a smoke opacity lower than that of diesel up to nearly 70% load, so K20 may be considered the better blend in this respect. The smoke opacity for diesel, K10 and K20 at full load is 88.7%, 96% and 97.8% respectively.
The variation of smoke opacity with compression ratio for the different blends is shown in Fig 3.3.5.2. Smoke opacity increases with increase in compression ratio, and K20 gives a higher smoke opacity than K10 and diesel.



Fig.3.3.5.1





Fig 3.3.5.2


IV CONCLUSION

The performance, emission and combustion characteristics of a dual fuel variable compression ratio engine running on Karanja oil and diesel blends have been investigated and compared with those of diesel. The experimental results confirm that the BTE, SFC, exhaust gas temperature, mechanical efficiency and torque of the variable compression ratio engine are functions of the biodiesel blend, the load and the compression ratio. For similar operating conditions, engine performance reduces with increase in the biodiesel percentage in the blend; however, by increasing the compression ratio the engine performance varies and becomes comparable with that of diesel. The following conclusions are drawn from this investigation:
K10 gives a lower BSFC than K20 and diesel. The engine BTE at full load for diesel, K10 and K20 is 24.9%, 26.63% and 24.1% respectively; the BTE of the blend K20 is slightly lower than that of diesel, while that of K10 is higher than diesel.
The highest exhaust gas temperature obtained is 324.53 °C for diesel at full load, whereas it is only 317.93 °C and 310.39 °C for the blends K10 and K20, which may be because the energy content of diesel is higher than that of K10 and K20. The CO and HC emissions of K10 and K20 are lower than those of diesel, while the NOx emission is higher than that of diesel.


















Innovation with TRIZ
N.U. Kakde 1, D.B. Meshram 1, G.R. Jodh 1, A.S. Puttewar 1

1 Faculty, Dr Babasaheb Ambedkar College of Engineering and Research, Nagpur

ABSTRACT- Today, the evolution of science and technology has reached a tremendous rate. Major breakthroughs in science, technology, medicine and engineering make our everyday life more and more comfortable. Today it is nearly impossible to find an engineer who does not use complex mathematical tools for formal modeling of design products, CAD systems for drawings, electronic handbooks and libraries, and the Internet to find necessary data, information, and knowledge.

But what happens when we need to invent a radically new solution? To generate a new idea? To solve a problem when no known problem-solving methods provide results? What tools and methods do we have to cope with these situations? It happens that when it comes to producing new ideas, we still rely heavily on the thousands-of-years-old trial-and-error method. It is good when a new brilliant and feasible idea is born quickly, but what price do we have to pay for it most of the time? Waste of time, money and human resources. Can we afford this today, when competition is accelerating every day and the capability to innovate becomes a crucial factor of survival? Certainly not. But is there anything that can help?

Fortunately, the answer is yes. To considerably improve the innovation process and avoid costly trials and errors, leading innovators use TRIZ, a scientifically based methodology for innovation. Relatively little known outside the former Soviet Union before the 1990s, it rapidly gained popularity at world-leading corporations and organizations, among which are DSM, Hitachi, Mitsubishi, Motorola, NASA, Procter & Gamble, Philips, Samsung, Siemens and Unilever, to name a few. This article presents a brief overview of TRIZ and some of its techniques, with a focus on technological applications of TRIZ.

TRIZ origins
TRIZ (a Russian acronym for the Theory of Inventive Problem Solving) was originated by the Russian scientist and engineer Genrich Altshuller. In the early 1950s, Altshuller started massive studies of patent collections. His goal was to find out whether inventive solutions were the result of chaotic and unorganized thinking, or whether there were certain regularities that governed the process of creating new inventions.

After scanning approximately 400,000 patent descriptions, Altshuller found that only 2% of all patented solutions were really new, meaning that they used some newly discovered physical phenomenon, such as the first radio receiver or photo camera. The remaining 98% of patented inventions used an already known physical principle but differed in its implementation (for instance, both a car and a conveyor may use the air cushion principle). In addition, it appeared that a great number of inventions complied with a relatively small number of basic inventive principles. Therefore, 98% of all new problems can be solved by using previous experience, if such experience is presented in a certain form, for instance as principles or patterns. This discovery gave impetus to further studies, which led to the discovery of the basic principles of invention.

More than thirty years of research resulted in revealing and understanding the origins of the inventive process and in the formulation of general principles of inventive problem solving; at the same time, the first TRIZ techniques were developed.

Later, many researchers and practitioners worldwide united their efforts and largely extended Altshuller's approach with new methods and tools. Today, a number of companies and universities worldwide are involved in enhancing TRIZ techniques and putting them to practical use.

Modern TRIZ
TRIZ offers a number of practical techniques which help to analyze existing products and situations, extract core problems, and generate new solution concepts in a systematic way. TRIZ fundamentally changes our view of solving inventive problems and of innovative design, as shown in Figure 1: instead of randomly generating thousands of alternatives among which only one may work, TRIZ uses a systematic approach to generate new ideas.




Fig 1. Modern TRIZ
Modern TRIZ is a large body of knowledge. It includes such techniques as Inventive Principles, Patterns of standard solutions, Functional Analysis, databases of physical, chemical and geometrical effects, Trends and Patterns of technology evolution, and the Algorithm of Inventive Problem Solving, also known as ARIZ. TRIZ is not easy to learn. However, most of its techniques can be learned and applied independently, which simplifies the processes of learning and implementation. This is shown in Fig. 1.

Common Patterns of Inventions

Let us have a look at how TRIZ works by comparing two problems.

First problem: how to protect a hydrofoil moving at high speed from hydraulic cavitation, in which collapsing air bubbles destroy the metal surface of the foil? Second problem: how to prevent orange plantations from being eaten by apes if installing fences around the plantations would be too expensive?

Are these problems similar? At first glance, not at all. From the TRIZ point of view, however, they are similar, because both problems result in identical problem patterns. In both cases, there are two components interacting with each other, and the result of the interaction is negative.

In the first situation, the water destroys the foil; in the second, an ape eats an orange. And there is no visible and simple way to improve the situations. To solve this type of problem, TRIZ recommends introducing a new component between the existing ones. Well, but how? We tried it, and it did not work: fences are still expensive. What did the best inventors do in this case? Analysis of the best inventions showed that this new component has to be a modification of one of the two existing components!

In TRIZ, the word modification is understood in broad terms. It can be a change of the aggregate state of a substance, or a change of color, structure, etc. What can a modification of the water be? Ice. A refrigerator is installed inside the foil and freezes the water, thus forming an ice layer over the foil surface. Now the cavitation destroys the ice, which is constantly rebuilt. What can be the modification of the orange? A lemon! The ape does not like the taste of the lemon, so it was proposed to surround the orange plantations with lemon trees.


As seen in figure 2, TRIZ offers recommendations for solving new problems according to guidelines drawn from previous experience of tackling similar problems in different areas of technology. Well-known psychological methods for the activation of thinking (brainstorming, for instance) and traditional design methods aim at finding a specific solution to a specific problem. This is difficult: too much information has to be browsed, and there is no guarantee that we are moving in the right direction. TRIZ organizes the translation of the specific problem into an abstract problem and then proposes a generic design principle or pattern relevant to the type of problem. Clearly, by operating at the level of conceptual models, the search space is significantly reduced, which makes it much easier to find the needed solution concept among the patterns TRIZ offers (Fig. 2).






















[Diagram: a specific problem is generalized to an abstract problem; the PRINCIPLES OF TRIZ map the abstract problem to an abstract solution, which is specialized back to a specific solution; direct TRIALS & ERRORS search connects the specific problem to the specific solution through a large SEARCH SPACE.]
Fig. 2 Common Platform of Invention
INVENTION IS A RESULT OF SOLVING A CONTRADICTION
Another discovery of Altshuller was that every inventive solution is the result of the elimination of a contradiction. A contradiction arises when two mutually exclusive design requirements are put on the same object or system. For example, the walls of a space shuttle have to be lightweight to decrease the mass of the shuttle when bringing it to orbit. However, this cannot be done by simply decreasing the thickness of the walls, due to the thermal impact when re-entering the Earth's atmosphere. The problem is difficult because two contrary values of the same design parameter are needed: according to the existing solutions, the walls have to be both heavyweight and lightweight at the same time.

When a designer faces a contradiction that cannot be solved by redesigning the product in a known way, this means that he faces an inventive problem whose solution resides outside the domain the product belongs to. One known method of solving problems with contradicting demands is to find a compromise between the two conflicting parameters or values. But what can be done if no optimum that solves the problem can be reached? TRIZ suggests solving such problems by removing the contradictions.

A comprehensive study of patent collections undertaken by TRIZ researchers, and thorough tests of TRIZ within industry, have proven that if a new problem is represented in terms of a contradiction, a relevant TRIZ principle can be used to find a way to eliminate it. The principle indicates how the same type of contradiction was eliminated in some area of technology before.

The collection of TRIZ inventive principles is the best-known and most widely used TRIZ problem-solving technique. Each principle in the collection is a guideline which recommends a certain method for solving a particular type of inventive problem. There are 40 inventive principles in the collection, organized systematically according to the type of contradiction that arises during attempts to solve the problem. Examples of the inventive principles are:

Variability Principle: Characteristics of the object (or external environment) should change so as to be optimal at each stage of operation; the object is to be divided into parts capable of movement relative to each other; if the object as a whole is immobile, make it mobile or movable.

Segmentation Principle: Divide the object into independent parts; make the object such that it can easily be taken apart; increase the degree of the object's fragmentation (segmentation). Instead of non-fragmented objects, more fragmented objects can be used, as well as granules, powders, liquids and gases.

Access to the principles is provided through a matrix, which consists of 39 rows and columns. Positive effects that have to be achieved (so-called generalized requirements) are listed along the vertical axis, while negative effects, which arise when attempting to achieve the positive effects, are listed along the horizontal axis. Selecting a pair of positive and negative effects indicates which principles should be used to solve the problem.









Table 1
A matrix of principles for engineering contradiction elimination. Numbers indicate which principles have to be used: 1 - Segmentation; 2 - Removing; 10 - Preliminary action; 13 - Other way round; etc.

What to improve   What gets worse as a result of improvement
                  Speed         Force         Stress        .....   Stability
Speed             -             13,28,15,19   6,18,38,40    .....   28,33,1
Force             13,28,15      -             18,21,11      .....   35,10,21
Stress            6,35,36       36,35,21      -             .....   35,2,40
.....             .....         .....         .....         .....   .....
Stability         33,28         10,35,21      2,35,40       .....   -

For instance, a problem is that we need a device to hold an easily breakable part which has a complex shape. If we use a traditional vise with clamping teeth, the contradiction is the following: to hold the part reliably (positive effect), we have to apply sufficient force. However, the force is distributed non-uniformly and the part can be damaged (negative effect). Table 1 shows the matrix of principles used as a TRIZ tool.

To solve this type of contradiction TRIZ recommends using the Segmentation Principle mentioned above. So we must segment the clamping teeth. This can be done by replacing the teeth with a chamber filled with small elastic cylinders and compressing the cylinders by moving the chamber wall, as shown in fig 3. As a result, the contradiction is eliminated: a part of almost any shape can be held by such a device, and the forces are distributed uniformly.
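The matrix lookup itself is mechanical, which is why it is easy to support with software. The sketch below is a toy Python lookup over just the fragment of the contradiction matrix reproduced in Table 1; the dictionary layout, the principle names and the helper suggest() are illustrative choices, not part of any standard TRIZ tool.

```python
# A toy lookup over the Table 1 fragment of the contradiction matrix.
# Only the cells shown in Table 1 are included, not the full 39x39 matrix.
PRINCIPLES = {1: "Segmentation", 2: "Removing",
              10: "Preliminary action", 13: "Other way round"}

MATRIX = {  # (what to improve, what gets worse) -> recommended principles
    ("Speed", "Force"): [13, 28, 15, 19],
    ("Speed", "Stress"): [6, 18, 38, 40],
    ("Speed", "Stability"): [28, 33, 1],
    ("Force", "Speed"): [13, 28, 15],
    ("Force", "Stress"): [18, 21, 11],
    ("Force", "Stability"): [35, 10, 21],
    ("Stress", "Stability"): [35, 2, 40],
    ("Stability", "Stress"): [2, 35, 40],
}

def suggest(improve, worsens):
    """Return (number, name) pairs for the principles the matrix recommends."""
    return [(n, PRINCIPLES.get(n, "principle %d" % n))
            for n in MATRIX.get((improve, worsens), [])]

print(suggest("Speed", "Stability"))  # -> [..., (1, 'Segmentation')]
```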





Fig. 3 Segmentation of the clamping teeth
PHYSICS FOR INVENTORS
Sometimes, just being capable of seeing things differently is not enough. New breakthrough products often result from a synergy of a non-ordinary view of a problem and knowledge of the latest scientific advances. TRIZ suggests searching for new principles by defining what function is needed and then finding which physical principle can deliver that function.

Studies of the patent collections indicated that inventive solutions are often obtained by utilizing physical effects not previously used in a specific area of technology. Knowledge of natural phenomena often makes it possible to avoid the development of complex and unreliable designs. For instance, instead of a mechanical design including many parts for the precise displacement of an object over a short distance, it is possible to apply the effect of thermal expansion to control the displacement.

Finding a physical principle that would be capable of meeting a new design requirement is one of the most important tasks in the early
phases of design. However, it is nearly impossible to use handbooks on physics or chemistry to search for principles for new products.
The descriptions of natural phenomena available there present information on specific properties of the effects from a scientific point
of view, and it is unclear how these properties can be used to deliver particular technical functions.

TRIZ Catalogues of the effects bridge a gap between technology and science. In TRIZ Catalogues, each natural phenomenon is

identified with a number of technical functions that might be achieved on the basis of the phenomenon.

The search for an effect is made possible by formulating the problem in terms of a technical function. Each technical function indicates an operation that can be performed with respect to a physical object or field. Examples of technical functions are "move a loose body", "change density", "generate heat field" and "accumulate energy".

Another example illustrates the use of the TRIZ Catalogues of physical effects. How can we accurately control the distance between a magnetic head and the surface of a tape in a special high-performance digital tape recorder, where the gap should be different during different recording modes and the change must be produced very quickly?

In the TRIZ Catalogue of physical effects, the function to move a solid object refers to several effects. One of them is the physical effect of magnetostriction: a change in the dimensions and shape of a solid body (made of a specific metal alloy) under a changing intensity of the applied magnetic field. This effect is similar to the effect of thermal expansion, but it is caused by a magnetic field rather than a thermal field.

The magnetic head is fixed to a magnetostrictive rod, as shown in figure 4. A coil generating a magnetic field is placed around the rod. A change of the magnetic field's intensity is used to compress and extend the rod by exactly the required distance between the head and the recording surface.

Fig. 4 Solving a problem with the TRIZ pointer to physical effects. Picture A: old design with a screw; Picture B: new design with a magnetostrictive rod and an electromagnetic induction coil
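A catalogue of effects is, in essence, an index from technical functions to candidate physical phenomena. The Python sketch below shows the idea on the handful of functions and effects mentioned in this article; the entries marked as assumed are illustrative fillers, not items from an actual TRIZ catalogue.

```python
# A minimal sketch of querying a catalogue of effects: technical functions
# map to physical effects that might deliver them. Entries are limited to
# effects mentioned in this article plus two assumed illustrative fillers.
CATALOGUE = {
    "move a solid object": ["magnetostriction", "thermal expansion",
                            "piezoelectric effect"],
    "generate heat field": ["Joule heating"],     # assumed illustrative entry
    "accumulate energy": ["capacitive storage"],  # assumed illustrative entry
}

def effects_for(function):
    """Return the candidate physical effects for a technical function."""
    return CATALOGUE.get(function, [])

print(effects_for("move a solid object"))
# -> ['magnetostriction', 'thermal expansion', 'piezoelectric effect']
```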
Trends of the Technology Evolution
Altshuller also discovered that technology evolution is not a random process. Many years of study revealed that a number of general trends govern technology evolution, no matter what area the products belong to.

The practical use of the trends is made possible through specific patterns. Every pattern indicates a line of evolution containing particular transitions between old and new structures of a designed product. In total, TRIZ presents nine trends of technology evolution. One of the trends, Evolution of systems by transitions to more dynamic structures, is shown in Table 2 below.

The significance of knowing the trends of technology evolution is that they can be used to estimate which phases of evolution a system has already passed. As a consequence, it is possible to foresee what changes the system will experience and, more importantly, to produce the forecast in design terms.




















Table 2 Patterns of increasing the degree of system dynamics

- Solid object: traditional mobile phone.
- Solid object divided into two segments with a non-flexible link: mobile phone with a sliding part which contains a microphone.
- Two segments with a flexible link: flip-flop phone of two parts.
- Many segments with flexible links: phone made as a wrist watch, whose bracelet is made of segments containing different parts of the phone.
- Flexible object: a flexible liquid-crystal film, which can be rolled in and out and stored inside a plastic cylinder (it also serves as a mobile videophone).





Practical value of TRIZ
Today, TRIZ and TRIZ software are used in about 5000 companies and government organizations worldwide. For instance, designers at Eastman Kodak used TRIZ to develop a new solution for a camera's flash. The flash has to move precisely to change the angle of lighting. A traditional design includes a motor and a mechanical transmission, which complicates the whole design and makes it difficult to control the displacement precisely. A newly patented solution uses the piezoelectric effect and involves a piezoelectric linear motor, which is more reliable and easier to control.

In general, the use of TRIZ provides the following benefits:

1. A considerable increase of productivity in searching for new ideas and concepts to create new products or solve existing problems. As estimated by the European TRIZ Association experts on the basis of industrial case studies, these processes are usually accelerated 5-10 times. Sometimes, new solutions become possible only through the use of TRIZ.

2. Increasing the ratio of useful to useless ideas during problem solving, by providing immediate access to hundreds of unique innovative principles and thousands of scientific and technological principles stored in TRIZ knowledge bases.

3. Reducing the risk of missing an important solution to a specific problem, due to the broad range of generic patterns of inventive solutions offered by TRIZ.

4. Using the scientifically based trends of technology evolution to examine all possible alternatives for the future evolution of a specific technology or design product, and to select the right direction of evolution.

5. Leveraging the intellectual capital of organizations by increasing the number of high-quality patented solutions.

6. Raising personal creativity by training personnel to approach and solve inventive and innovative problems in a systematic way.

TRIZ is the most powerful and effective practical methodology for creating new ideas available today. However, TRIZ does not replace human creativity; instead, it amplifies it and helps to move in the right direction. As proven during long-term studies, everyone can invent and solve non-trivial problems with TRIZ.
TRIZ IN THE WORLD

Today, TRIZ is widely recognized as a leading method for innovation worldwide. The leading Japanese research organization Mitsubishi Research Institute, which unites the research efforts of 50 major Japanese corporations, invested US$14 mln to bring TRIZ and TRIZ-related software to Japan.

In 1998, the TRIZ Association was formed in France, involving such participants as Renault, Peugeot, EDF and Legrand. In South Korea, LG Electronics uses TRIZ to solve major inventive problems and develop new products. Motorola purchased 2000 packages of TRIZ software, while Unilever has recently released information about investing US$1.2 mln in purchasing TRIZ software and using it as a major tool for achieving competitive leadership.

In 2000, the European TRIZ Association was established, with a global coordination group spanning 26 countries, including representatives from Japan, South Korea and the USA.

In 2004, Samsung Corporation recognized TRIZ as a best practice for innovation after a number of successful TRIZ projects, which
resulted in total economic benefits of 1.5 billion Euros during three years.

Small and medium-sized companies benefit from using TRIZ as well. TRIZ helps to define and solve problems within a short time and with relatively small effort, thus avoiding large R&D investments in approaching solutions and finding new design concepts.

REFERENCES

1. Lawrence D. Miles: Techniques of Value Analysis & Engineering. McGraw-Hill Book Co., London.
2. S. D. Savransky: Engineering of Creativity: Introduction to TRIZ Methodology of Inventive Problem Solving. CRC Press, Boca Raton, Florida, 2000.
3. Darrell Mann: Hands-On Systematic Innovation. Ieper, Belgium, 2002.
4. Thomas W. Ruhe: Anticipating Failures with Substance-Field Inversion. TRIZ-Journal, Feb 2003.










Analysis and Design of Low Voltage Low Power Dynamic Comparator with
Reduced Delay and Power
Dinabandhu Nath Mandal1, Niladri Prasad Mohapatra1, Rajendra Prasad3, Ambika Singh1
1 Research Scholar (M.Tech), Department of Electronics, KIIT University, Bhubaneswar, India
1 Assistant Professor, Department of Electronics, KIIT University, Bhubaneswar, India
Email- mandaldinbandhu@gmail.com
Abstract High-speed devices such as ADCs and operational amplifiers are of great importance, and for such high-speed applications a major thrust is given towards low-power methodologies. Reduction of power consumption in these devices can be achieved by moving towards smaller feature-size processes. A modern ADC requires low power dissipation, low noise, good slew rate, high speed, etc. Dynamic comparators are used extensively in today's A/D converters because they are fast, consume less power, have zero static power consumption and provide a full-swing digital-level output voltage in a shorter time. The back-to-back inverters in these dynamic comparators provide a positive feedback mechanism which converts a small voltage difference into a full-scale digital-level output. A pre-amplifier based comparator can amplify a small input voltage difference to a voltage large enough to overcome the latch offset voltage, and can also reduce the kickback noise. However, the pre-amplifier based comparator suffers from large static power consumption as well as from reduced intrinsic gain, owing to the reduction of the drain-to-source resistance caused by continuous technology scaling. In this paper a delay analysis is presented for different dynamic comparators, and finally a proposed design is given in which the delay is reduced to 263 ps and the average power dissipation is reduced to 1.09 µW. The design has been simulated in 180 nm technology with a supply voltage of 0.8 V.

Keywords High-speed analog-to-digital converters (ADCs), dynamic clocked comparator, low-power analog design, double-tail dynamic comparator, conventional dynamic comparator, preamplifier-based comparators

INTRODUCTION
The comparator is a fundamental building block in analog-to-digital converters (ADCs). In the design of ADCs, comparators with high speed and low power consumption are used. Comparators in ultra-deep submicrometer (UDSM) technologies suffer from low supply voltage; hence the design of a high-speed comparator is a challenge when the supply voltage is low [1]. To achieve high speed in a given technology, more transistors are required, and hence more area and power. Techniques such as the supply boosting method [2], [3] and body-driven transistors [4], [5] have been developed to meet low-voltage design requirements. To address switching problems and input range, two techniques, boosting and bootstrapping, are used. In this paper the delay is analyzed for various dynamic comparator architectures. Based on the double-tail architecture, a new dynamic comparator is presented whose delay is reduced compared to the earlier designs and which does not require a boosted voltage. By adding a small number of transistors, the delay of the latch stage is reduced. As a result, the modified design saves power and can be used for high-speed ADC design.
CLOCK REGENERATIVE COMPARATORS
Clocked regenerative comparators are widely used in the design of high-speed ADCs, as this type of comparator makes fast decisions due to the presence of positive feedback in the latch stage. Many analyses investigate the behavior of the comparator from different respects, such as random decision errors [10], offset voltage [8], [9], noise [7] and kick-back noise [11]. In the following sections the analysis of delay is presented: the delays of the conventional dynamic and conventional double-tail comparators are verified, and based on these the proposed comparator is presented.
I. CONVENTIONAL DYNAMIC COMPARATOR
The conventional dynamic comparator is the most widely used dynamic comparator in the design of analog-to-digital converters. It has rail-to-rail output swing, high input impedance and zero static power consumption. The schematic of the conventional dynamic comparator is shown in fig 1.1, and fig 1.2 shows its transient simulation.


fig 1.1. Schematic of conventional dynamic comparator

fig 1.2 Transient simulation of the conventional dynamic comparator for a voltage difference of 5 mV, Vcm = 0.7 V and supply voltage of 0.8 V
The delay of the above comparator consists of two components, t0 and tlatch, where t0 is the discharging delay of the load capacitance CL and tlatch is the latching delay of the cross-coupled inverters; hence the total delay (tdelay) of the comparator is given as

tdelay = t0 + tlatch = 2CL|Vthp|/Itail + (CL/gm,eff) ln((VDD/2)/ΔV0)   (1)

where CL is the load capacitance, |Vthp| is the threshold voltage of the M2 transistor, gm,eff is the transconductance of the back-to-back inverters, VDD is the supply voltage, Itail is the current of the Mtail transistor, β1,2 is the current factor of the input transistors, and ΔVin is the input voltage difference (ΔV0, the output voltage difference at the start of regeneration, grows with ΔVin and β1,2/Itail). According to equation (1), the delay of the above comparator depends directly on the load capacitance (CL) and inversely on the input difference voltage (ΔVin).
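To get a feel for the two terms, the short Python sketch below evaluates the reconstructed form of Eq. (1); the device values passed in are illustrative placeholders, not figures reported in this paper.

```python
# Numeric read of Eq. (1): discharge term plus regeneration term.
import math

def t_delay(C_L, V_thp, I_tail, g_m_eff, V_DD, dV0):
    t0 = 2 * C_L * abs(V_thp) / I_tail                      # discharge of C_L
    t_latch = (C_L / g_m_eff) * math.log((V_DD / 2) / dV0)  # latch regeneration
    return t0 + t_latch

# Placeholder values: 5 fF load, |Vthp| = 0.4 V, 40 uA tail current,
# g_m,eff = 0.2 mS, 0.8 V supply, 10 mV initial output difference.
print(t_delay(5e-15, 0.4, 40e-6, 0.2e-3, 0.8, 10e-3))  # ~1.9e-10 s
```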
The main advantages of the above architecture are rail-to-rail swing at the output, better robustness against noise and mismatch, and zero static power consumption. The power plot of the conventional dynamic comparator is shown in fig 1.3, and the layout of the comparator is shown in fig 1.4.

Fig 1.3 power plot of conventional dynamic comparator

Fig 1.4 Layout schematic of the conventional dynamic comparator
II. CONVENTIONAL DOUBLE-TAIL DYNAMIC COMPARATOR
The schematic of the double-tail dynamic comparator is shown in fig 1.5. This topology has a larger number of transistors but less stacking, so it can operate at a lower supply voltage than the conventional dynamic comparator. Owing to the presence of two tail transistors, the structure provides a large current at the latching stage (a wide Mtail2 is required for fast latching, independent of Vcm, the common-mode voltage at the input) and a small current at the input stage, as required for low offset [6]. Fig 1.6 shows the transient simulation of the conventional double-tail dynamic comparator for an input voltage difference ΔVin = 5 mV, Vcm = 0.7 V and VDD = 0.8 V.

fig 1.5 Schematic of conventional double-tail dynamic comparator

Fig 1.6 Transient simulation of the conventional double-tail dynamic comparator for input voltage difference ΔVin = 5 mV, Vcm = 0.7 V and VDD = 0.8 V
The delay of the double-tail dynamic comparator comprises two components, t0 and tlatch, similar to the conventional dynamic comparator. Here t0 is the charging time of the load capacitance CLout (at the outn and outp nodes) until the transistors (M9/M10) turn on, at which point latch regeneration starts; this determines t0. The total delay of the comparator is given as

tdelay = t0 + tlatch = VThn·CLout/Itail2 + (CLout/gm,eff) ln((VDD/2)/ΔV0)   (2)

where gmR1,2 is the transconductance of the transistors (MR1 and MR2), Itail2 is the current of the Mtail2 transistor, ΔVin is the voltage difference at the input, and ΔV0 is the output voltage difference at the start of regeneration. Fig 1.7 and fig 1.8 below show the power plot (for calculating power) and the layout (for determining area) of the double-tail dynamic comparator, respectively.

Fig 1.7-Power plot of double-tail dynamic comparator

Fig 1.8 - Layout of double tail dynamic comparator









III. PROPOSED DOUBLE-TAIL DYNAMIC COMPARATOR
The schematic of the proposed design, compared with the double-tail dynamic comparator, is shown in fig 1.9. In the proposed design the lower input stage is replaced by a differential amplifier with a PMOS load.









Fig 1.9 Schematic of the proposed comparator (right) with the double-tail dynamic comparator (left)
The delay of the proposed double-tail dynamic comparator is reduced in comparison to the double-tail dynamic comparator. The power plot and transient simulation of the proposed double-tail dynamic comparator for an input voltage difference ΔVin = 5 mV, Vcm = 0.7 V and VDD = 0.8 V are shown in fig 2.1, and fig 2.2 shows the layout of the proposed double-tail dynamic comparator.

Fig 2.1 Power plot and transient simulation of the modified double-tail dynamic comparator for input voltage difference ΔVin = 5 mV, Vcm = 0.7 V and VDD = 0.8 V


fig 2.2 Layout of proposed double-tail dynamic comparator in 180nm technology
SIMULATION RESULT
A comparison table is presented to compare the results of the proposed comparator with the conventional and double-tail dynamic comparators. The above circuits are simulated in 180 nm CMOS technology.
Comparator structure      Conventional dynamic   Double-tail dynamic   Proposed double-tail
                          comparator             comparator            dynamic comparator
No. of transistors used   9                      14                    16
Supply voltage (V)        0.8                    0.8                   0.8
Delay (ps)                898.2                  293                   263
Energy (fJ)               1.108                  2.125                 866 n
Estimated area            22.7 x 15.7            28 x 13               28.9 x 19.5





CONCLUSION
The newly proposed double-tail comparator shows better performance than the conventional dynamic and double-tail dynamic comparators. The delay of the proposed design is 263 ps, which is lower than that of the earlier designs, and the energy per conversion is reduced from 1.108 in the conventional dynamic comparator to 866 n in the proposed double-tail comparator. The proposed double-tail dynamic comparator can be used in the design of high-speed ADCs, since the reduced delay makes the operation faster. As the proposed structure uses more transistors, the area of the design is larger, which is one disadvantage of this comparator.

REFERENCES

[1] B. Goll and H. Zimmermann, "A comparator with reduced delay time in 65-nm CMOS for supply voltages down to 0.65 V," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 56, no. 11, pp. 810-814, Nov. 2009.

[2] S. U. Ay, "A sub-1 volt 10-bit supply boosted SAR ADC design in standard CMOS," Int. J. Analog Integr. Circuits Signal Process., vol. 66, no. 2, pp. 213-221, Feb. 2011.

[3] A. Mesgarani, M. N. Alam, F. Z. Nelson, and S. U. Ay, "Supply boosting technique for designing very low-voltage mixed-signal circuits in standard CMOS," in Proc. IEEE Int. Midwest Symp. Circuits Syst. Dig. Tech. Papers, Aug. 2010, pp. 893-896.

[4] B. J. Blalock, "Body-driving as a low-voltage analog design technique for CMOS technology," in Proc. IEEE Southwest Symp. Mixed-Signal Design, Feb. 2000, pp. 113-118.

[5] M. Maymandi-Nejad and M. Sachdev, "1-bit quantiser with rail to rail input range for sub-1V ΔΣ modulators," IEEE Electron. Lett., vol. 39, no. 12, pp. 894-895, Jan. 2003.

[6] B. Murmann et al., "Impact of scaling on analog performance and associated modeling needs," IEEE Trans. Electron Devices, vol. 53, no. 9, pp. 2160-2167, Sep. 2006.

[7] R. Jacob Baker, Harry W. Li, and David E. Boyce, CMOS Circuit Design, Layout, and Simulation, IEEE Press Series on Microelectronic Systems, IEEE Press, Prentice Hall of India Private Limited, Eastern Economy Edition, 2002.

[8] Meena Panchore and R. S. Gamad, "Low power high speed CMOS comparator design using 0.18 µm technology," International Journal of Electronic Engineering Research, vol. 2, no. 1, pp. 71-77, 2010.

[9] M. van Elzakker, A. J. M. van Tuijl, P. F. J. Geraedts, D. Schinkel, E. A. M. Klumperink, and B. Nauta, "A 1.9 µW 4.4 fJ/conversion-step 10b 1 MS/s charge-redistribution ADC," ISSCC Dig. Tech. Papers, pp. 244-245, February 2008.

[10] Heungjun Jeon and Yong-Bin Kim, "A novel low-power, low-offset and high-speed CMOS dynamic latched comparator," IEEE, 2010.

[11] Behzad Razavi, Design of Analog CMOS Integrated Circuits, New York: McGraw-Hill, 2001.

[12] Dinabandhu Nath Mandal and Sanjay Kumar, "High speed comparators for analog-to-digital converters," IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE), e-ISSN: 2278-1676, p-ISSN: 2320-3331, vol. 9, issue 2, ver. III (Mar-Apr. 2014), pp. 56-61.





A Novel Blind Hybrid SVD and DCT Based Watermarking Schemes
Samiksha Soni1, Manisha Sharma1
1 Bhilai Institute of Technology Durg, Chhattisgarh
Email- samiksha.soni786@gmail.com

ABSTRACT In recent years SVD has gained wide importance in the field of digital watermarking. In this paper the fundamentals of SVD- and quantization-based watermarking algorithms are discussed and a modified hybrid algorithm is proposed. In this work a cascade combination of DCT and SVD is applied to design a robust watermarking system, exploiting the features of both DCT and SVD. We implemented the same algorithm in three variants, where the variation lies in the embedding procedure for watermark bit 1. Simulation results show that a minor change in the embedding formula has a significant impact on the robustness of the system. To check the robustness of the proposed work, it is subjected to a variety of attacks, and robustness is measured in terms of normalized correlation and bit error rate.
Keywords DCT, SVD, watermarking, quantization, embedding, extraction, singular value, diagonal, orthogonal.
INTRODUCTION
In today's era, the internet has transformed the way we access information and share our ideas. The internet provides excellent means for sharing digital multimedia objects: it is inexpensive, eliminates warehousing and delivery, and is almost instantaneous. But with the advent of information technology there is a threat to the duplication and authentication of multimedia data. Watermarking is a branch of information hiding which is used to embed proprietary information in digital multimedia. The conceptual model [1] of the watermarking system is explained in Fig. 1 and Fig. 2. It comprises two basic modules, the embedding module and the extraction module. The original image acts as the carrier which is to be secured. The watermark embedding module embeds a secondary signal into the original image; this secondary signal, providing the sense of ownership or authenticity, is called the watermark. The optional key is used to enhance the security of the system. The extraction module estimates the hidden secondary signal from the received image with the help of the key and, if required, the original image. Channel noise or illegitimate access may degrade the quality of the watermarked image during transmission. The embedding system should therefore be strong enough that no manipulation can detach the watermark from its cover, except by the authentic user.



Fig. 1 Watermark Embedding Module




Fig. 2 Watermark Extraction Module


[Block diagrams: the embedding module takes the original image, the watermark and the key and produces the watermarked image, which passes through the channel; the extraction module takes the watermarked image and the key (and the original image, if required) and recovers the watermark.]

An effective watermarking scheme [2] should satisfy the following basic requirements:
- Transparency: The watermark embedded in the original signal should not be perceivable by the human eye, and the watermark should not distort the media being protected.
- Security: A watermarking scheme should also ensure that no one can generate bogus watermarks and should provide reliable
evidence to protect the rightful ownership.
- Robustness: It refers to the property of survival of watermark against various attacks such as filtering, geometric transformations,
noise addition, etc.
Image watermarking techniques proposed so far can be broadly categorized according to how the watermark is embedded. The first category is spatial domain techniques [3], which add the digital watermark to the image directly by means of a certain algorithm. The second category is transform domain techniques, which embed the watermark into the transformed image [4-6]. The former has a simpler algorithm and faster computing speed, but the disadvantage is weaker robustness; the latter has better robustness and is resilient to image compression, common filtering and noise, but its problem lies in computing speed. However, because of its better robustness, the transform domain technique has gradually been applied in digital watermarking development and research.
In recent years, singular value decomposition based watermarking techniques and their variations have been proposed. SVD is a mathematical technique used to extract algebraic features from an image. The core idea behind SVD-based approaches is to apply the SVD to the whole cover image or, alternatively, to small blocks of it, and then modify the singular values to embed the watermark. Gorodetski et al. [7] proposed a simple SVD-domain watermarking scheme that embeds the watermark into the singular values of the image to achieve better transparency and robustness. The proposed method is not image adaptive and fails to maintain transparency for different images. Liu et al. [8] presented a scheme where a watermark is added to the singular value matrix of the image in the spatial domain. This scheme offers good robustness against manipulations for protecting rightful ownership. But since the scheme is designed for rightful ownership protection, where robustness against manipulations is desired, it is less suitable for authentication. Makhloghi et al. [9] present blind robust digital image watermarking based on singular value decomposition and the discrete wavelet transform, in which the wavelet coefficients of the host image are modified by inserting bits of the singular values of the watermark image.
In [10] a digital image watermarking scheme based on singular value decomposition using a genetic algorithm (GA) is proposed. The scheme is based on quantization step-size optimization using the genetic algorithm to improve the quality of the watermarked image and the robustness of the watermark. The method of Zhu et al. [11] can deal with rectangular matrices directly and can extract better-quality watermarks; it takes little time to embed and extract the watermark in large images, and it avoids some disadvantages, such as the distortion caused by computing errors when extracting the watermark in the diagonal direction. Modaghegh et al. [12] proposed an adjustable watermarking method based on SVD, the parameters of which were adjusted using the GA in consideration of image complexity and attack resistance; by changing the fitness function, the watermarking method can be converted to any of the robust, fragile, or semi-fragile types. Abdulfetah et al. [13] proposed a robust quantization-based digital image watermarking scheme for copyright protection in the DCT-SVD domain. The watermark is embedded by applying a quantization index modulation process to the largest singular values of image blocks in the DCT domain. To avoid visual degradation, they designed an adaptive quantization model based on the block statistics of the image.
Horng et al. [14] proposed an efficient blind watermarking scheme for e-government document images through a combination of the discrete cosine transform (DCT) and the singular value decomposition (SVD) based on a genetic algorithm (GA). DCT, in this case, is applied to the entire image and mapped in a zigzag manner to four areas from the lowest to the highest frequencies. SVD, meanwhile, is applied in each area, and the singular values of the DCT-transformed host image are subsequently modified in each area with the quantizing value, using the GA to increase the visual quality and the robustness. The host image is not needed in the watermark extraction, which makes the scheme more useful in real-world applications than non-blind ones.
SVD BASED WATERMARKING ALGORITHM
Sun et al. [15] proposed an SVD- and quantization-based watermarking scheme. The property of the diagonal matrix is exploited to embed the watermark: the largest coefficient of the diagonal matrix is selected, and the modification is determined by quantization. After that, the inverse SVD transformation is performed to reconstruct the watermarked image. Because the largest coefficients of the diagonal matrix can resist general image processing, the embedded watermark is not greatly affected. Also, the quality of the watermarked image is controlled by the quantization step, so it can be maintained. To extract the watermark, the SVD transformation is employed and the largest coefficients in the S component are examined; the watermark is then extracted.
The watermark embedding and extracting procedures can be described as follows.
- Watermark embedding procedure
In the first step, partition the host image into blocks. In the second step, perform the SVD transformation. In the third step, extract the largest coefficient Si(1,1) from each S component and quantize it by using a predefined quantization coefficient Q:
Yi = Si(1,1) mod Q
In the fourth step, embed the watermark bit as follows.
When Wi = 0 it will be embedded as follows:
if Yi < 3Q/4, then S'i(1,1) = Si(1,1) + Q/4 - Yi; else S'i(1,1) = Si(1,1) + 5Q/4 - Yi
When Wi = 1 it will be embedded as follows:
if Yi < 3Q/4, then S'i(1,1) = Si(1,1) - Q/4 + Yi; else S'i(1,1) = Si(1,1) + 3Q/4 - Yi
In the fifth step, perform the inverse SVD transformation with the modified S matrix and the U, V matrices of the original image to reconstruct the watermarked image.
- Watermark extraction procedure
In the first step, partition the watermarked image into blocks. In the second step, perform the SVD transformation. In the third step, extract the largest coefficient S(1,1) from each S component and quantize it by using the predefined quantization coefficient Q: let Z = S(1,1) mod Q.
In the fourth step, check: if Z < Q/2, the extracted watermark has a bit value of 0; otherwise, the extracted watermark has a bit value of 1.
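Because the rule only touches the residue of one singular value modulo Q, it can be checked numerically in a few lines. The Python sketch below transcribes the bit-0 branch and the extraction threshold exactly as reconstructed above (the minus signs are ambiguous in the printed formulas, so treat the transcription as an assumption); embed_bit0 and extract are hypothetical helper names.

```python
# Numeric check of the quantization rule above; s stands for a block's
# largest singular value S_i(1,1) and Q for the quantization coefficient.
def embed_bit0(s, Q):
    # Both branches leave s mod Q equal to Q/4, inside the "0" half [0, Q/2)
    y = s % Q
    return s + (Q/4 - y) if y < 3*Q/4 else s + (5*Q/4 - y)

def extract(s, Q):
    # Residues below Q/2 decode as bit 0, the rest as bit 1
    return 0 if s % Q < Q/2 else 1

# Whatever the starting value, an embedded 0 is always read back as 0
assert all(extract(embed_bit0(s, 8.0), 8.0) == 0 for s in (3.1, 7.9, 100.5))
```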
In the proposed work we implemented three variants of quantization-based blind embedding [] which differ minutely from one another. The difference lies in the embedding step for watermark bit 1, and this minor difference creates a significant change in robustness.
PROPOSED SCHEME
In the proposed work we modify the existing method by cascading it with the DCT. A DCT operation is performed on the original image to obtain its frequency components. The DCT components are then reordered in a zigzag manner. After that, a block SVD operation is performed on the scanned DCT coefficients, and the watermark is embedded inside the largest SVs of each block.
- Watermark embedding procedure:
In the first step, convert the original color image into gray scale. Then apply the 2-D DCT to the gray-scale image and perform the zigzag scanning operation on the DCT coefficients, as shown in Eq. (1) and Eq. (2). Let the gray-scale image be A:
Ad = DCT2(A)   (1)
Zd = Zigzag(Ad)   (2)
In the next step a two-dimensional matrix is formed from the zigzag-scanned vector:
M = Con2_matrix(Zd)   (3)
After that, matrix M is fractioned into smaller blocks depending on the payload size, m1, m2, ..., mn = divi(M), where n is equal to the watermark length; then, using Eq. (4), the SVD operation is performed on these blocks:
Ui Si Vi = svd(mi)   (4)
where i = 1, 2, 3, 4, ..., n.
After applying the DCT-SVD operation on the original image, the binary watermark is inserted as follows. Modify the largest singular value of each block as
Yi = Si(1,1) mod Q
where Q is a predefined quantizing value; Q must be selected according to the specification of the image, both to obtain maximum resistance against attacks and to obtain minimum perceptibility.
- First Embedding Procedure:
When Wi = 0 it will be embedded as follows:
if Yi < 3Q/4, then S'i(1,1) = Si(1,1) + Q/4 - Yi; else S'i(1,1) = Si(1,1) + 5Q/4 - Yi
When Wi = 1 it will be embedded as follows:
if Yi < Q/4, then S'i(1,1) = Si(1,1) - Q/4 - Yi; else S'i(1,1) = Si(1,1) + 3Q/4 - Yi

- Second Embedding Procedure:
When Wi = 0 it will be embedded as follows:
if Yi < 3Q/4, then S'i(1,1) = Si(1,1) + Q/4 - Yi; else S'i(1,1) = Si(1,1) + 5Q/4 - Yi
When Wi = 1 it will be embedded as follows:
if Yi < 3Q/4, then S'i(1,1) = Si(1,1) - Q/4 + Yi; else S'i(1,1) = Si(1,1) + 3Q/4 - Yi

- Third Embedding Procedure:
When Wi = 0 it will be embedded as follows:
if Yi < 3Q/4, then S'i(1,1) = Si(1,1) + Q/4 - Yi; else S'i(1,1) = Si(1,1) + 5Q/4 - Yi
When Wi = 1 it will be embedded as follows:
if Yi < 3Q/4, then S'i(1,1) = Si(1,1) - Q/4 + Yi; else S'i(1,1) = Si(1,1) + 3Q/4 + Yi
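The three bit-1 rules are easy to compare side by side in code. The sketch below, which assumes the sign reconstruction used above, prints the residue S'i(1,1) mod Q for a few starting values: only the first procedure pins the residue at 3Q/4 (the "1" half of the interval), which is consistent with its much higher no-attack NC in Table I.

```python
# Residues left by the three bit-1 update rules (Q = 8, so Q/2 = 4).
def first(s, Q):
    y = s % Q
    return s + (-Q/4 - y) if y < Q/4 else s + (3*Q/4 - y)    # always 3Q/4

def second(s, Q):
    y = s % Q
    return s + (-Q/4 + y) if y < 3*Q/4 else s + (3*Q/4 - y)  # (2y - Q/4) mod Q

def third(s, Q):
    y = s % Q
    return s + (-Q/4 + y) if y < 3*Q/4 else s + (3*Q/4 + y)  # varies with y

for rule in (first, second, third):
    print(rule.__name__,
          [round(rule(s, 8.0) % 8.0, 2) for s in (1.0, 3.0, 5.0, 7.0)])
# first  [6.0, 6.0, 6.0, 6.0]  -> every bit 1 decodes correctly
# second [0.0, 4.0, 0.0, 6.0]  -> some residues fall in the "0" half
# third  [0.0, 4.0, 0.0, 4.0]
```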

The next step is to perform the inverse SVD operation on the blocks to obtain the modified DCT coefficients, m'i = ISVD(Ui S'i Vi), and the smaller blocks are recombined by M' = merge(m'1, m'2, ..., m'n). Then an inverse zigzag operation is performed on M' to map the DCT coefficients back to their positions, A'd = IZigzag(M'). The last step is to perform the inverse DCT operation on A'd to obtain the watermarked image A'.

- Watermark extraction procedure
The first step of the watermark-extraction process is to apply the DCT to the watermarked image, as shown in Eq. (5):
A'dr = DCT2(A')   (5)
In step two, using Eq. (6), scan the DCT coefficients in the zigzag manner:
Zdr = Zigzag(A'dr)   (6)
After that, a two-dimensional matrix is formed from the scanned vector using Eq. (7):
Mr = Con2_matrix(Zdr)   (7)
In step three, matrix Mr is fractioned into smaller blocks depending on the payload size, mr1, mr2, ..., mrn = divi(Mr), where n is equal to the watermark length; then the SVD operation is performed on these blocks as shown in Eq. (8):
Uri Sri Vri = svd(mri)   (8)
where i = 1, 2, 3, 4, ..., n. In step four, get the largest singular value from each block and extract the watermark:
Yri = Sri(1,1) mod Q
- Extraction mechanism for the first and second embedding procedures:
If Yri < Q/2, then Wri = 0; else Wri = 1. These extracted bit values are used to construct the extracted watermark.
- Extraction mechanism for the third procedure:
If Yri ≥ Q/2, then Wri = 0; else Wri = 1. These extracted bit values are used to construct the extracted watermark.
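Read end to end, the embedding chain is DCT, zigzag reordering, block partition, SVD, residue quantization, and the inverses in reverse order. The Python sketch below strings these steps together for the first embedding procedure; zigzag_indices, embed and the 8x8 block size are illustrative choices (the exact zigzag convention does not matter, provided embedding and extraction share it), not code from the paper.

```python
# A behavioral sketch of the DCT -> zigzag -> block-SVD embedding chain.
import numpy as np
from scipy.fft import dctn, idctn

def zigzag_indices(n):
    # Visit order of an n x n zigzag scan (anti-diagonals, alternating direction)
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

def embed(A, wbits, Q, block=8):
    n = A.shape[0]
    order = zigzag_indices(n)
    D = dctn(A.astype(float), norm='ortho')                  # 2-D DCT of image A
    M = np.array([D[i, j] for i, j in order]).reshape(n, n)  # Con2_matrix
    per_row = n // block
    for k, bit in enumerate(wbits):                # one block per watermark bit
        r, c = divmod(k, per_row)
        sub = M[r*block:(r+1)*block, c*block:(c+1)*block]
        U, S, Vt = np.linalg.svd(sub)
        y = S[0] % Q                               # residue of the largest SV
        if bit == 0:
            S[0] += (Q/4 - y) if y < 3*Q/4 else (5*Q/4 - y)
        else:                                      # first embedding procedure
            S[0] += (-Q/4 - y) if y < Q/4 else (3*Q/4 - y)
        M[r*block:(r+1)*block, c*block:(c+1)*block] = U @ np.diag(S) @ Vt
    D2 = np.empty_like(D)                          # inverse zigzag (IZigzag)
    for val, (i, j) in zip(M.reshape(-1), order):
        D2[i, j] = val
    return idctn(D2, norm='ortho')                 # watermarked image A'
```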
EXPERIMENTAL RESULTS
To verify the performance of the proposed watermarking algorithm, the MATLAB platform is used and a number of experiments are performed on different images of size 512×512 and binary logos of size 64×64. Here we provide the comparative results for the host image Lena and the binary logo shown in Fig. 3(a) and Fig. 3(b). The extracted watermarks of the three procedures are shown in Fig. 4(a) (first procedure), Fig. 4(b) (second procedure) and Fig. 4(c) (third procedure). The watermarked image quality is measured using the PSNR (Peak Signal-to-Noise Ratio) given by Eq. (9). To verify the presence of the watermark, two parametric measures are used to show the similarity between the original watermark and the extracted watermark: normalized correlation and bit error rate, given by Eq. (10) and Eq. (11).
PSNR = 10·log10[ Σ_{i=1..N} Σ_{j=1..N} (A'(i,j))² / Σ_{i=1..N} Σ_{j=1..N} (A(i,j) - A'(i,j))² ]   (9)

NC = Σ_{i=1..N} Σ_{j=1..N} (w(i,j) - w_mean)(w'(i,j) - w'_mean) / √[ Σ_{i=1..N} Σ_{j=1..N} (w(i,j) - w_mean)² · Σ_{i=1..N} Σ_{j=1..N} (w'(i,j) - w'_mean)² ]   (10)

BER = [ Σ_{i=1..N} Σ_{j=1..N} w(i,j) ⊕ w'(i,j) ] / (N×N)   (11)

where w(i,j) is the original watermark image and w'(i,j) is the extracted watermark.
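Eqs. (9)-(11) translate directly into array code. The Python functions below are a transcription of the three measures as reconstructed above, assuming A and Aw are gray-scale arrays of equal size and w, we are binary (0/1) watermark arrays; the function names are illustrative.

```python
# PSNR, NC and BER as in Eqs. (9)-(11).
import numpy as np

def psnr(A, Aw):
    # Eq. (9): energy of the watermarked image over the error energy
    A, Aw = A.astype(float), Aw.astype(float)
    return 10 * np.log10(np.sum(Aw**2) / np.sum((A - Aw)**2))

def nc(w, we):
    # Eq. (10): mean-removed normalized correlation of the two watermarks
    a = w.astype(float) - w.mean()
    b = we.astype(float) - we.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))

def ber(w, we):
    # Eq. (11): fraction of the N*N watermark bits that differ (XOR)
    return np.sum(w.astype(int) ^ we.astype(int)) / w.size
```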


(a) (b)
Fig.3 Host image and watermark image

(a) (b) (c)
Fig.4 Extracted watermark

In order to check the robustness of the proposed watermarking scheme, the watermarked image is attacked by a variety of attacks, namely average and median filtering, Gaussian noise, random noise, JPEG compression, cropping, resizing, rotation and blur. After these attacks on the watermarked image, the extracted logo is compared with the original one.
- Filtering
The most common manipulation of a digital image is filtering. Here the watermarked image is attacked by applying mean (3×3), median (3×3) and Gaussian low-pass (5×5) filters.
- Addition of noise
Noise addition to the watermarked image is another way of checking the robustness of the system. Noise addition leads to degradation and distortion of the image, which affects the quality of the extracted watermark. Here robustness is checked against salt-and-pepper noise and random noise.
- JPEG compression
Another very common manipulation of a digital image is image compression. To check the robustness against image compression, the watermarked image is tested with JPEG100 and JPEG2000 compression attacks.
- Cropping and resizing
Cropping is the process of selecting and removing a portion of an image to create focus or strengthen its composition. Cropping an image is done by either hiding or deleting rows or columns. In the proposed work three variants of cropping are performed: row-column blanking, row-column copying, and cropping 25% of the area (right bottom corner). To fit an image into a desired size, enlargement or reduction is commonly performed, resulting in information loss in the image, including the embedded watermark. For this attack, the size of the watermarked image is first reduced to 256×256 and then brought back to its original size of 512×512.


- Rotation
In this work the watermarked image is subjected to very minor rotations, i.e. 0.2° and 0.3°, and the results are obtained. When a rotation of larger degree is applied, the watermark fails to resist the attack. However, if the effect of the rotation is reverted in some way, the watermark can be successfully extracted.
- General image processing attacks
We employed motion blur with a pixel length of 3 and an angle of 45° on the watermarked image to check its robustness.
TABLE I
NORMALIZED CORRELATION VALUE OF THREE IMPLEMENTED SCHEMES

Types of attacks      First Embedding   Second Embedding   Third Embedding
                      Procedure         Procedure          Procedure
Without attack        0.9927            0.8956             0.5352
Random noise          0.5930            0.4070             0.3028
Low Pass Filtering    0.5218            0.3854             0.2160
Rotation              0.6316            0.4624             0.2951
Blurred               0.6831            0.5229             0.3064
Average Filtering     0.6004            0.4499             0.2630
Median Filtering      0.7333            0.5805             0.3437
Crop                  0.7396            0.5904             0.1906
JPEG 100              0.9546            0.7606             0.4903
JPEG2000              0.9912            0.9004             0.5340
Salt & Pepper         0.7439            0.5786             0.3917
Row Column Blanking   0.7550            0.6306             0.4232
Row Column Copying    0.7984            0.7535             0.4320
Resizing              0.8328            0.5130             0.4185
TABLE II
PSNR VALUE OF THREE IMPLEMENTED SCHEMES

Types of attacks      First Embedding   Second Embedding   Third Embedding
                      Procedure         Procedure          Procedure
Without attack        47.5090           47.5671            38.8217
Random noise          33.9449           33.9422            32.8873
Low Pass Filtering    33.6067           32.5978            32.1208
Rotation              37.4609           37.4382            35.5961
Blurred               35.4630           35.4561            34.3489
Average Filtering     32.7986           32.7889            32.2431
Median Filtering      35.9154           35.9007            34.6384
Crop                  11.4074           11.8670            11.3010
JPEG 100              44.0591           44.3251            38.2187
JPEG2000              46.3709           47.2910            38.5917
Salt & Pepper         32.0481           32.1717            31.4888
Row Column Blanking   24.1098           26.2981            23.9875
Row Column Copying    28.9098           33.3551            27.0656
Resizing              34.5344           37.9656            33.5479



TABLE III
BER VALUE OF THREE IMPLEMENTED SCHEMES

Types of attacks      First Embedding   Second Embedding   Third Embedding
                      Procedure         Procedure          Procedure
Without attack        0.0037            0.0510             0.4304
Random noise          0.1848            0.2253             0.4614
Low Pass Filtering    0.2261            0.2607             0.5028
Rotation              0.1768            0.2146             0.4695
Blurred               0.1528            0.1868             0.4983
Average Filtering     0.1951            0.2275             0.5029
Median Filtering      0.1328            0.1599             0.4870
Crop                  0.2554            0.1406             0.5012
JPEG 100              0.0225            0.0896             0.4579
JPEG2000              0.0044            0.0496             0.4255
Salt & Pepper         0.1240            0.1587             0.4475
Row Column Blanking   0.1277            0.1365             0.4412
Row Column Copying    0.1030            0.0923             0.4380
Resizing              0.0840            0.2795             0.4882
Conclusion
In this paper three variants of a quantization-based blind watermarking scheme are discussed. The experimental results show that the performance of the first embedding procedure is better in terms of NC, PSNR and BER. The proposed technique shows resilience against a variety of attacks, but it fails to withstand histogram equalization, contrast enhancement and rotation attacks of higher degree. The embedding procedure for inserting watermark bit 0 is common to all the procedures, and the variation exists only in the insertion of watermark bit 1. This variation has a significant impact on watermark retrieval, which is clearly identified by the NC, BER and PSNR values of the three embedding procedures shown in Tables I, II and III.
REFERENCES:
[1] C. I. Podilchuk and E. J. Delp, "Digital watermarking: Algorithms and applications," IEEE Signal Process. Magazine, pp. 33-46, July 2001.
[2] Fernando Pérez-González and Juan R. Hernández, "A tutorial on digital watermarking," IEEE, 1999.
[3] Dipti Prasad Mukherjee, Subhamoy Maitra, and Scott T. Acton, "Spatial domain digital watermarking of multimedia objects for buyer authentication," IEEE Transactions on Multimedia, vol. 6, no. 1, February 2004.
[4] J. R. Hernández, M. Amado, and F. Pérez-González, "DCT-domain watermarking techniques for still images: Detector performance analysis and a new structure," IEEE Trans. Image Process., vol. 9, pp. 55-68, Jan. 2000.
[5] I. J. Cox, J. Kilian, T. Leighton, and T. Shamoon, "Secure spread spectrum watermarking for multimedia," IEEE Trans. Image Processing, vol. 6, pp. 1673-1687, Dec. 1997.
[6] P. Meerwald, "Digital watermarking in the wavelet transform domain," Master's thesis, Dept. Sci. Comput., Univ. Salzburg, Austria, 2001.
[7] V. I. Gorodetski, L. J. Popyack, and V. Samoilov, "SVD-based approach to transparent embedding data into digital images," in Proc. International Workshop MMM-ACNS, St. Petersburg, Russia, pp. 263-274, May 2001.
[8] R. Liu and T. Tan, "An SVD-based watermarking scheme for protecting rightful ownership," IEEE Trans. on Multimedia, vol. 4, pp. 121-128, March 2002.
[9] M. Makhloghi, F. Akhlaghian, and H. Danyali, "Robust digital image watermarking using singular value decomposition," in IEEE International Symposium on Signal Process. and Information Technology, pp. 219-224, 2010.
[10] B. Jagadeesh, S. Srinivas Kumar, and K. Raja Rajeswari, "Image watermarking scheme using singular value decomposition, quantization and genetic algorithm," International Conf. on Signal Acquisition and Process., IEEE Computer Society, pp. 120-124, 2010.
[11] Xinzhong Zhu, Jianmin Zhao, and Huiying Xu, "A digital watermarking algorithm and implementation based on improved SVD," The 18th International Conf. on Pattern Recognition, 2006.
[12] H. Modaghegh, R. H. Khosravi, and T. Akbarzadeh, "A new adjustable blind watermarking based on GA and SVD," in Proc. International Conf. on Innovations in Information Technology, pp. 6-10, 2009.
[13] A. Abdulfetah, X. Sun, and H. Yang, "Quantization based robust image watermarking in DCT-SVD domain," Research Journal of Information Technology, vol. 1, pp. 107-114, 2009.
[14] Shi-Jinn Horng, Didi Rosiyadi, Tianrui Li, Terano Takao, Minyi Guo, and Muhammad Khurram Khan, "A blind image copyright protection scheme for e-government," Pattern Recognition Letters, pp. 1099-1105, 2013.
[15] R. Sun, H. Sun, and T. Yao, "A SVD and quantization based semi-fragile watermarking technique for image authentication," Proc. IEEE International Conf. Signal Process., pp. 1592-1595, 2002.






















Low Power Design of Pre Computation-Based Content-Addressable Memory
SK. Khamuruddeen1, S. V. Devika1, V. Rajath2, Vidhan Vikram Varma2
1 Associate Professor, Department of ECE, HITAM, Hyderabad, India
2 Research Scholar (B.Tech), Department of ECE, HITAM, Hyderabad, India

ABSTRACT - Content-addressable memory (CAM) is a special type of computer memory used in certain very high-speed searching applications. It is also known as associative memory, associative storage, or associative array. CAM is frequently used in applications such as lookup tables, databases, associative computing and networking that require high-speed searches, owing to its ability to improve application performance by using parallel comparison to reduce search time. Although the use of parallel comparison results in reduced search time, it also significantly increases power consumption. In this paper, we propose a Block-XOR approach to improve the efficiency of the low-power precomputation-based CAM (PB-CAM). Compared with the ones-count PB-CAM system, the experimental results show that our proposed approach achieves on average 30% power reduction and 32% power-performance reduction. The major contribution of this paper is that it presents practical proofs to verify that our proposed Block-XOR PB-CAM system can achieve greater power reduction without the need for a special CAM cell design. This implies that our approach is more flexible and adaptive for general designs.

Keywords Content-addressable memory, Block-XOR, precomputation-based CAM

I. INTRODUCTION

1.1 Existing System:
A CAM is a functional memory with a large amount of stored data that compares the input search data with the stored data. Once
matching data are found, their addresses are returned as output. The vast number of comparison operations required by CAMs
consumes a large amount of power.
1.2 Proposed System:
The proposed approach can reduce comparison operations by a minimum of 909 and a maximum of 2339. We propose a new parameter extractor, called Block-XOR, which achieves this requirement.
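The idea behind a PB-CAM parameter extractor can be sketched in software. The Python fragment below contrasts the ones-count extractor with a Block-XOR extractor under the assumption that the latter folds each block of input bits into one parameter bit by XOR; the block size of 4 and the function names are illustrative choices, not details taken from this paper.

```python
# Behavioral sketch of PB-CAM parameter extraction.
def ones_count_param(bits):
    # Baseline extractor: the parameter is the number of 1s in the word
    return sum(bits)

def block_xor_param(bits, block=4):
    # Assumed Block-XOR extractor: XOR the bits inside each block and
    # concatenate one parameter bit per block
    param = 0
    for k in range(0, len(bits), block):
        x = 0
        for b in bits[k:k + block]:
            x ^= b
        param = (param << 1) | x
    return param

word = [1, 0, 1, 1, 0, 0, 1, 0]
print(ones_count_param(word), bin(block_xor_param(word)))  # 4 0b11
```

During a search, only stored words whose precomputed parameter equals the parameter of the search word proceed to the full comparison, which is where the reduction in comparison operations comes from.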
II. CAM OVERVIEW

Content-addressable memory (CAM) compares input search data against a table of stored data and returns the address of the matching data [1]-[5]. CAMs have single-clock-cycle throughput, making them faster than other hardware- and software-based search systems. CAMs can be used in a wide variety of applications requiring high search speeds, and a CAM is a good choice for implementing a lookup operation due to its fast search capability.
However, the speed of a CAM comes at the cost of increased silicon area and power consumption, two design parameters that designers strive to reduce. As CAM applications grow, demanding larger CAM sizes, the power problem is further exacerbated. Reducing power consumption, without sacrificing speed or area, is the main thread of recent research in large-capacity CAMs. Development in the CAM area is surveyed at two levels: the circuit and architecture levels. We can view CAM as the inverse of RAM: when read, RAM produces the data for a given address; conversely, CAM produces an address for a given data word. When searching for data within a RAM block, the search is performed serially, so finding a particular data word can take many cycles. CAM searches all addresses in parallel and produces the address storing a particular word. CAM supports writing "don't care" bits into words of the memory. A don't care bit acts as a mask for CAM comparisons; any bit set to don't care has no effect on matches.
The output of the CAM can be encoded or unencoded. The encoded output is better suited for designs that ensure duplicate data is never written into the CAM; if duplicate data is written into two locations, the CAM's encoded output will not be correct. If the CAM may contain duplicate data, the unencoded output is the better solution, since a CAM with unencoded outputs can distinguish multiple matching locations. The CAM can be pre-loaded with data during configuration, or written during system operation. In most cases, two clock cycles are required to write each word into the CAM; when don't care bits are used, a third clock cycle is required.
2.1 Operation of CAM:

Fig. 1: Conceptual view of a content-addressable memory containing w words
Fig. 1 shows a simplified block diagram of a CAM. The input to the system is the search word, which is broadcast onto the search lines to the table of stored data. The number of bits in a CAM word is usually large, with existing implementations ranging from 36 to 144 bits. A typical CAM employs a table size ranging from a few hundred entries to 32K entries, corresponding to an address space ranging from 7 bits to 15 bits.
Each stored word has a match line that indicates whether the search word and stored word are identical (the match case) or different (a mismatch case, or miss). The match lines are fed to an encoder that generates a binary match location corresponding to the match line that is in the match state; an encoder is used in systems where only a single match is expected.
In addition, there is often a hit signal (not shown in the figure) that flags the case in which there is no matching location in the CAM. The overall function of a CAM is to take a search word and return the matching memory location. One can think of this operation as a fully programmable arbitrary mapping of the large space of the input search word to the smaller space of the output match location. The operation of a CAM is like that of the tag portion of a fully associative cache: the tag portion of a cache compares its input, which is an address, to all addresses stored in the tag memory, and in the case of a match a single match line goes high, indicating the location of the match. Many circuits are common to both CAMs and caches; however, we focus on large-capacity CAMs rather than on fully associative caches, which target smaller capacity and higher speed.
Today's largest commercially available single-chip CAMs are 18 Mbit implementations, although the largest CAMs reported in the literature are 9 Mbit in size. As a rule of thumb, the largest available CAM chip is usually about half the size of the largest available SRAM chip; this rule of thumb comes from the fact that a typical CAM cell consists of two SRAM cells.
2.2 Simple CAM architecture:
Content Addressable Memories (CAMs) are fully associative storage devices. Fixed-length binary words can be stored in any
location in the device. The memory can be queried to determine if a particular word, or key, is stored, and if so, the address at which it
is stored. This search operation is performed in a single clock cycle by a parallel bitwise comparison of the key against all stored
words.

Fig 2. Simple schematic of a model CAM with 4 words having 3 bits each.
We now take a more detailed look at CAM architecture. A small model is shown in Fig. 2. The figure shows a CAM consisting of 4 words, with each word containing 3 bits arranged horizontally (corresponding to 3 CAM cells). There is a match line corresponding to each word (ML0, ML1, etc.) feeding into a match line sense amplifier (MLSA), and there is a differential search-line pair corresponding to each bit of the search word. A CAM search operation begins with loading the search-data word into the search-data registers, followed by precharging all match lines high, putting them all temporarily in the match state.
Next, the search-line drivers broadcast the search word onto the differential search lines, and each CAM core cell compares its stored bit against the bit on its corresponding search lines. Match lines on which all bits match remain in the precharged-high state; match lines that have at least one bit that misses discharge to ground. The MLSA then detects whether its match line has a matching condition or a miss condition. Finally, the encoder maps the match line of the matching location to its encoded address.
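The match-line behaviour just described can be captured in a few lines of software. The following Python sketch is ours and purely behavioural (a real CAM evaluates every word in parallel within one clock cycle); the function name and the optional don't-care mask argument are illustrative assumptions:

def cam_search(stored_words, search_word, dont_care_mask=0):
    # Behavioural model: a word matches when every bit outside the don't-care
    # mask agrees with the search word (its match line would stay precharged
    # high); any disagreeing bit discharges the match line (a miss).
    matches = []
    for address, word in enumerate(stored_words):
        if (word ^ search_word) & ~dont_care_mask == 0:
            matches.append(address)
    return matches

# 4-word, 3-bit model CAM in the spirit of Fig. 2.
table = [0b101, 0b010, 0b110, 0b101]
print(cam_search(table, 0b110))                         # -> [2]
print(cam_search(table, 0b111, dont_care_mask=0b010))   # -> [0, 3]

With unencoded outputs the full list of matching addresses is reported, as here; an encoded output would pass this list through a priority encoder to obtain a single address.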
2.3 LOW POWER PB-CAM
Content addressable memory is frequently used in applications that require high-speed searches because its parallel comparison improves application performance by reducing search time. However, the parallel comparison also significantly increases power consumption. The main CAM design challenge is therefore to reduce the power consumption associated with the large amount of parallel active circuitry, without sacrificing speed or memory density.
2.3.1 Power saving CAM architecture:
An architectural technique for saving power, which applies to binary CAM, is pre-computation. Pre-computation stores some extra information along with each word that is used in the search operation to save power. These extra bits are derived from the stored word and used in an initial search before searching the main word. If this initial search fails, the CAM aborts the subsequent search, thus saving power.

2.4 PB-CAM Architecture:

Fig.3 Memory organization of PB-CAM architecture
Fig. 3 shows the memory organization of the PB-CAM architecture, which consists of data memory, parameter memory, and a parameter extractor, where k << n. To reduce the massive comparison operations for data searches, the operation is divided into two parts. In the first part, the parameter extractor extracts a parameter from the input data, which is then compared in parallel with the parameters stored in the parameter memory. If no match is returned in the first part, the input data cannot match the data related to the stored parameters. Otherwise, the data related to the matching stored parameters are compared in the second part. It should be noted that although the first part must access the entire parameter memory, the parameter memory is far smaller than the data memory of the CAM. Moreover, since the comparisons made in the first part have already filtered out the unmatched data, the second part only needs to compare the data that matched in the first part.
The PB-CAM exploits this characteristic to reduce comparison operations, thereby saving power. The parameter extractor is therefore critical, because it determines the number of comparison operations required in the second part; its design goal is to filter out as much unmatched data as possible to minimize the number of second-part comparisons. Two parameter extractors are discussed, namely the ones-count parameter extractor and the Block-XOR parameter extractor.
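Before looking at the concrete extractors, the two-part search itself can be summarized in a short behavioural sketch (our Python code; the names and data layout are illustrative assumptions, not the authors' implementation):

def pb_cam_search(stored, search_data, extract):
    # stored: list of (parameter, data) pairs with parameter = extract(data);
    # extract: the parameter extractor function (ones count, Block-XOR, ...).
    key = extract(search_data)
    # Part 1: the small parameter memory is compared in parallel.
    candidates = [a for a, (p, _) in enumerate(stored) if p == key]
    # Part 2: full-width data comparison only for the surviving candidates.
    matches = [a for a in candidates if stored[a][1] == search_data]
    return matches, len(candidates)

The returned candidate count is exactly the number of second-part comparison operations that the following subsections set out to minimize.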

2.5 Ones count approach:
For the ones-count approach with an n-bit data length, there are n+1 possible ones counts (from 0 ones to n ones). Further, it is necessary to add an extra code to indicate the availability of stored data. Therefore, the minimal bit length of the parameter is equal to ⌈log2(n+2)⌉. Fig. 4 shows the conceptual view of the ones-count approach: the extra information holds the number of ones in the stored word. For example, in Fig. 4, when searching for the data word 01001101, the pre-computation circuit counts the number of ones (which is four in this case). The number four is compared on the left-hand side to the stored ones counts. Only match lines PML5 and PML7 match, since only they have a ones count of four. In the data-memory stage, only two comparisons actively consume power, and only match line PML5 results in a match. The 14-bit ones-count parameter extractor is implemented with full adders as shown in Fig. 5.
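Behaviourally the extractor is simply a population count; a Python equivalent of the adder tree (our sketch) is:

def ones_count_param(data, n=14):
    # Parameter = number of ones in the n-bit input word, as produced by
    # the full-adder tree of Fig. 5.
    return bin(data & ((1 << n) - 1)).count("1")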

Fig. 4: Conceptual view of the ones-count approach

2.6 Mathematical Analysis:
For 14-bit input data there are 2^14 possible input words, and the number of input words related to the same parameter for the ones-count approach is the binomial coefficient C(14, n), where n is the ones count (from 0 to 14). From this we can compute the average probability that each parameter occurs:

P(n) = C(14, n) / 2^14    (1)


Fig. 5: 14-bit ones-count parameter extractor
TABLE I: NUMBER OF DATA RELATED TO THE SAME PARAMETERS AND AVERAGE PROBABILITIES FOR THE ONES-COUNT APPROACH

Table I lists the number of data related to the same parameter and the average probabilities for input data 14 bits in length. For example, if a match occurs in the first part of the comparison with the parameter 2, the maximum number of required comparison operations for the second part is C(14, 2) = 91. With conventional CAMs, the comparison circuit must compare all stored data, whereas with the ones-count PB-CAM a large amount of unmatched data can be filtered out initially, reducing comparison operations and thus power consumption in some cases. However, the average probabilities of some parameters, such as 0, 1, 2, 12, 13, and 14, are less than 1%.
In Table I, the parameters requiring over 2000 comparison operations range between 5 and 9, and the summation of the average probabilities for these parameters is close to 82%. Although the number of comparison operations required by the ones-count PB-CAM is smaller than that of conventional CAMs, the ones-count PB-CAM fails to reduce the number of second-part comparison operations when the parameter value is between 5 and 9, thereby consuming a large amount of power. Table I also shows that random input patterns under the ones-count approach follow a Gaussian-like distribution, which limits any further reduction of the comparison operations in PB-CAMs.
2.7 Block-XOR approach:
The key idea behind this method is to reduce the number of comparison operations by eliminating the Gaussian distribution. For 14-bit input data, if we could distribute the input data uniformly over the usable parameters, the number of input words related to each parameter would be about 2^14/15 ≈ 1093 (one of the sixteen 4-bit codes is reserved to mark invalid entries), and the maximum number of required comparison operations in the second part would be about 1093 in each case. Compared with the ones-count approach, this reduces comparison operations by a minimum of 909 and a maximum of 2339 (i.e., for parameter values from 5 to 9) in 82% of the cases. Based on these observations, a new parameter extractor called Block-XOR, shown in Fig. 6, is used to achieve this requirement.


Fig. 6: Concept of the n-bit Block-XOR block diagram

In this approach, we first partition the input data bits into several blocks, from which an output bit is computed using an XOR operation over each block. The output bits are then combined to become the input parameter for the second part of the comparison process. To compare with the ones-count approach, we set the bit length of the parameter to ⌈log2(n+2)⌉, where n is the bit length of the input data; the number of blocks is therefore ⌈n/log2(n+2)⌉. Taking a 14-bit input length as an example, the bit length of the parameter is log2(14+2) = 4 bits, and the number of blocks is ⌈14/log2(14+2)⌉ = 4. Accordingly, all the blocks contain 4 bits except the last one, which contains the remaining 2 bits, as shown in the upper part of Fig. 6.
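A behavioural sketch (ours) of this basic Block-XOR extractor for 14-bit inputs follows; taking the blocks from the least significant end is our assumption about the ordering:

def block_xor_param(data, widths=(4, 4, 4, 2)):
    # Each block is XOR-reduced (i.e. its parity) to one parameter bit.
    param = 0
    for w in widths:
        block = data & ((1 << w) - 1)        # next w low-order bits
        data >>= w
        param = (param << 1) | (bin(block).count("1") & 1)
    return param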
However, the basic Block-XOR approach does not provide a valid bit for checking whether the stored data is valid; hence it cannot be applied to the PB-CAM directly. For this reason, a modified architecture is used, as shown in the lower part of Fig. 6, to provide a valid bit while preserving the uniform distribution property of the Block-XOR approach. A multiplexer is added to select the correct parameter.
International Journal of Engineering Research and General Science Volume 2, Issue 3, April-May 2014
ISSN 2091-2730

262 www.ijergs.org


Fig. 7: Structure of Block-XOR approach with valid bit.
The select signal is defined as
S = A3·A2·A1·A0.    (2)
According to (2), if the parameter is 0000 to 1110 (S = 0), the multiplexer transmits the i0 input as the output; in other words, the parameter does not change. Otherwise (A3A2A1A0 = 1111, S = 1), the first block of the input data becomes the new parameter, and 1111 can then be used as the valid code. The case where the first block is itself 1111 need not be considered, because a 1111 block XOR-reduces to 0, forcing one of the four parameter bits to 0 so that S = 1 cannot occur.
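In software the multiplexer of Fig. 7 reduces to a single conditional. The sketch below is ours; it reuses block_xor_param from above and treats the low-order 4-bit nibble as the first block:

def block_xor_param_with_valid(data):
    # If the first block were 1111 its XOR bit would be 0, so p cannot be
    # 1111 in that case and the substituted parameter never collides with
    # the reserved all-ones (invalid) code.
    p = block_xor_param(data)       # raw parameter (A3 A2 A1 A0)
    if p == 0b1111:                 # S = A3*A2*A1*A0 = 1
        return data & 0b1111        # first block becomes the new parameter
    return p                        # S = 0: parameter passes through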
2.8 Comparison between the Two Approaches:
To eliminate the Gaussian distribution, the Block-XOR approach distributes the input data uniformly over the parameters. However, as can be seen from Tables III and IV, when the parameter is 0, 1, 2, 3, 4, 10, 11, 12, 13, or 14, the number of comparison operations required by the ones-count approach is smaller than that of the Block-XOR PB-CAM. Although the Block-XOR PB-CAM is better than the ones-count PB-CAM only for parameters between 5 and 9, the probability that these parameters occur is 82%. For example, when the parameter is 7, there is a 20.95% chance that the Block-XOR PB-CAM results in more than 2280 fewer comparison operations than the ones-count approach. Compared with the ones-count approach, we can reduce the number of comparison operations by more than 1000 in most cases; in other words, the ones-count approach is better than the Block-XOR approach in only 18% of the cases.
The number of comparison operations required for different input bit lengths (4, 8, 14, 16, and 32 bits) is shown in Fig. 8. As can be seen, the Block-XOR PB-CAM becomes more effective in reducing the number of comparison operations as the input bit length increases; the longer the input bit length, the fewer comparison operations are required (i.e., the greater the power reduction), so the Block-XOR PB-CAM is especially suitable for wide-input CAM applications. In addition, the Block-XOR parameter extractor computes its parameter bits in parallel with three XOR gate delays for any input bit length, hence a short, constant delay; on the contrary, as the input bit length increases, the delay of the ones-count parameter extractor increases significantly.

Fig. 8: Comparison operations for different input bit lengths
III. Gate-Block Selection Algorithm:
To make the parameter extractor of the Block-XOR PB-CAM more useful for specific data types, we take the different characteristics of logic gates into account and synthesize parameter extractors for different data types. As can be seen in Fig. 6, if the number of input bits of each partition block is set to l, the bit length of the parameter (i.e., the number of blocks) will be ⌈n/l⌉, where n is the bit length of the input data, and the number of levels in each partition block equals ⌈log2 l⌉. We observe that as the input bits of each partition block decrease, the mismatch rate and the number of comparison operations in each data comparison process decrease (because the number of parameter combinations increases). Although increasing the parameter bit length can decrease the mismatch rate and the number of comparison operations, the parameter memory size must increase, which in turn increases the power consumption of the parameter memory. As stated earlier, when the PB-CAM performs a data search operation it must compare the entire parameter memory. To avoid wasting a large amount of power in the parameter memory, we set the input of each partition block to 8 bits. Fig. 9 shows the proposed parameter extractor architecture. We first partition the input data bits into several blocks; G0~G6 in each block stand for different logic gates, from which an output bit is computed using the synthesized logic operation for each block. The output bits are then combined to become the parameter for the data comparison process.
The objective of our work is to select the proper logic gates in Fig. 9 so that the parameter (Pk-1, ..., P0) reduces the number of data comparison operations as much as possible.

Fig. 9: n-bit block diagram of the proposed parameter extractor architecture.
In our proposed parameter extractor, the bit length of the parameter is set to ⌈n/8⌉, and the number of levels in each partition block equals log2 8 = 3. Suppose we use the basic logic gates (AND, OR, XOR, NAND, NOR, and XNOR) to synthesize a parameter extractor for a specific data type; there are (6^7)^⌈n/8⌉ different logic combinations of the proposed parameter extractor (seven gate positions per 8-bit block, each chosen from six gates). Obviously, the optimal combination of the parameter extractor cannot be found in polynomial time.
To synthesize a proper parameter extractor in polynomial time for a specific data type, we propose a gate-block selection algorithm that finds an approximately optimal combination. We illustrate how to select proper logic gates to synthesize a parameter extractor for a specific data type with the mathematical analysis below.
3.1 Mathematical Analysis:
For a 2-input logic gate, let p be the probability that the output signal Y is in the one state. The probability mass function of the output signal Y is then given by

P(Y = y) = p^y (1 - p)^(1-y),  y ∈ {0, 1}    (3)

Assuming the inputs are independent, if we use any 2-input logic gate as a parameter extractor to generate the parameter for 2-bit data, the average number of comparison operations the PB-CAM requires in each data search operation can be formulated as

Average comparisons = (N0^2 + N1^2) / (N0 + N1)    (4)

where N0 is the number of zero entries and N1 is the number of one entries among the generated parameters. To illustrate this clearly, we use Table II as an example.

TABLE II

Suppose a 2-input AND gate is used to generate the parameter. With the entries of Table II the generated parameters contain N0 = 5 zeros and N1 = 1 one, so by Equ. (4) the average number of comparison operations in each data search operation for the PB-CAM is derived as (5^2 + 1^2)/(5 + 1) = 26/6 ≈ 4.33. In other words, when we use a 2-input AND gate to generate the parameter for this 2-bit data, the average number of comparison operations required for each data search operation in the PB-CAM is 4.33. According to Equ. (4), Table II likewise yields the average number of comparison operations for the six basic logic gates. Obviously, the OR and NOR gates are the best selection for this case, because they require the least average number of comparison operations (which is 3). Moreover, when we use inverse pairs of logic gates (AND/NAND, OR/NOR, and XOR/XNOR) to generate the parameter, the average number of comparison operations required in the PB-CAM is the same, since exchanging N0 and N1 leaves Equ. (4) unchanged. To reduce the complexity of the proposed algorithm without degrading the performance of the parameter extractor, our approach selects only among the NAND, NOR, and XOR gates when synthesizing the parameter extractor, because NAND and NOR are better than AND and OR in terms of area, power, and speed. Based on this mathematical analysis, we construct our proposed gate-block selection algorithm.
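A compact way to see the selection step is the following Python sketch (ours). The closed form used for the score is our reading of Equ. (4), which reproduces both the 4.33 and the 3 of the example above; the per-block minimization is an illustration of the idea, not a verbatim transcription of the authors' algorithm:

def avg_comparisons(gate, bit_pairs):
    # Equ. (4): expected second-part comparisons when queries follow the
    # stored-data distribution; N1 = entries whose parameter bit is 1.
    n1 = sum(gate(a, b) for a, b in bit_pairs)
    n0 = len(bit_pairs) - n1
    return (n0 * n0 + n1 * n1) / (n0 + n1)

GATES = {
    "NAND": lambda a, b: 1 - (a & b),   # scores the same as AND
    "NOR":  lambda a, b: 1 - (a | b),   # scores the same as OR
    "XOR":  lambda a, b: a ^ b,
}

def select_gate(bit_pairs):
    # Keep the gate with the fewest expected comparisons for this bit pair.
    return min(GATES, key=lambda name: avg_comparisons(GATES[name], bit_pairs))

Swapping N0 and N1 leaves the score unchanged, which is why each inverse pair scores identically and the search can be restricted to NAND, NOR, and XOR.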
Note that when the input is random, the synthesized result is the same as the Block-XOR approach; in other words, the Block-XOR approach is a subset of our proposed algorithm. To better understand the proposed approach, consider a simple example in which 4-bit data is assigned as the input. Because the input data is only 4 bits in this example, we set the number of input bits of each partition block to 4, and the number of levels in each partition block then equals log2 4 = 2.
IV. RESULT:

Fig. 5.11: VHDL output showing the data write into the CAM

Fig. 5.12: VHDL output showing the data read from the CAM

Fig. 5.13: VHDL output showing the address read from the CAM

V. CONCLUSION:
In this work, a 14-bit low power precomputation-based content addressable memory (PB-CAM) was simulated in VHDL. Mathematical analysis and simulation results confirmed that the Block-XOR PB-CAM can effectively save power by reducing the number of comparison operations in the second part of the comparison process. In addition, it takes less area compared with the ones-count parameter extractor. This PB-CAM takes data as input and, exactly one clock cycle later, outputs the address pointing to the matching data, so it is flexible and adaptive for low power and high speed search applications.
For synthesis, a gate-block selection algorithm was proposed that can synthesize a proper parameter extractor of the PB-CAM for a specific data type. Mathematical analysis and simulation results confirmed that the proposed PB-CAM effectively saves power by reducing the number of comparison operations in the data comparison process. In addition, the proposed parameter extractor computes the parameter bits in parallel with only three logic gate delays for any input bit length (i.e., a constant search operation delay).

REFERENCES
[1] K. Pagiamtzis and A. Sheikholeslami, "Content-addressable memory (CAM) circuits and architectures: A tutorial and survey," IEEE J. Solid-State Circuits, vol. 41, no. 3, pp. 712-727, Mar. 2006.
[2] H. Miyatake, M. Tanaka, and Y. Mori, "A design for high-speed low-power CMOS fully parallel content-addressable memory macros," IEEE J. Solid-State Circuits, vol. 36, no. 6, pp. 956-968, Jun. 2001.
[3] I. Arsovski, T. Chandler, and A. Sheikholeslami, "A ternary content-addressable memory (TCAM) based on 4T static storage and including a current-race sensing scheme," IEEE J. Solid-State Circuits, vol. 38, no. 1, pp. 155-158, Jan. 2003.
[4] I. Arsovski and A. Sheikholeslami, "A mismatch-dependent power allocation technique for match-line sensing in content-addressable memories," IEEE J. Solid-State Circuits, vol. 38, no. 11, pp. 1958-1966, Nov. 2003.
[5] Y. J. Chang, S. J. Ruan, and F. Lai, "Design and analysis of low power cache using two-level filter scheme," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 11, no. 4, pp. 568-580, Aug. 2003.
[6] K. Vivekanandarajah, T. Srikanthan, and S. Bhattacharyya, "Dynamic filter cache for low power instruction memory hierarchy," in Proc. Euromicro Symp. Digit. Syst. Des., Sep. 2004, pp. 607-610.
[7] R. Min, W. B. Jone, and Y. Hu, "Location cache: A low-power L2 cache system," in Proc. Int. Symp. Low Power Electron. Des., Apr. 2004, pp. 120-125.
[8] K. Pagiamtzis and A. Sheikholeslami, "Using cache to reduce power in content-addressable memories (CAMs)," in Proc. IEEE Custom Integr. Circuits Conf., Sep. 2005, pp. 369-372.
[9] C. S. Lin, J. C. Chang, and B. D. Liu, "A low-power precomputation-based fully parallel content-addressable memory," IEEE J. Solid-State Circuits, vol. 38, no. 4, pp. 622-654, Apr. 2003.
[10] K. H. Cheng, C. H. Wei, and S. Y. Jiang, "Static divided word matching line for low-power content addressable memory design," in Proc. IEEE Int. Symp. Circuits Syst., May 2004, vol. 2, pp. 23-26.
[11] S. Hanzawa, T. Sakata, K. Kajigaya, R. Takemura, and T. Kawahara, "A large-scale and low-power CAM architecture featuring a one-hot-spot block code for IP-address lookup in a network router," IEEE J. Solid-State Circuits, vol. 40, no. 4, pp. 853-861, Apr. 2005.
[12] Y. Oike, M. Ikeda, and K. Asada, "A high-speed and low-voltage associative co-processor with exact Hamming/Manhattan-distance estimation using word-parallel and hierarchical search architecture," IEEE J. Solid-State Circuits, vol. 39, no. 8, pp. 1383-1387, Aug. 2004.
[13] K. Pagiamtzis and A. Sheikholeslami, "A low-power content-addressable memory (CAM) using pipelined hierarchical search scheme," IEEE J. Solid-State Circuits, vol. 39, no. 9, pp. 1512-1519, Sep. 2004.
[14] D. K. Bhavsar, "A built-in self-test method for write-only content addressable memories," in Proc. 23rd IEEE VLSI Test Symp., 2005, pp. 9-14.













Design of Low Power S-Box at Architecture Level Using GF

N. Shanthini¹, P. Rajasekar¹, Dr. H. Mangalam¹

¹Asst. Professor, Department of ECE, Kathir College of Engg, Coimbatore
E-mail: rajasekarkpr@gmail.com
Abstract - Information security has become an important issue in the modern world, and technology is advancing very fast. Data encryption and decryption methods are widely used for real-time secure communication applications, and for this purpose the AES has been proposed. One of the most critical problems in AES is power consumption. This paper presents an optimized composite field arithmetic based S-Box implemented in a four-stage pipeline. We mainly concentrate on the power consumption of the S-Box, which is the most power-consuming block in AES. The construction procedure for implementing a Galois Field (GF) combinational-logic based S-Box is presented. The S-Box operation is divided into GF-based multiplication and inverse operations and illustrated in a step-by-step manner. The XC2VP30 Xilinx FPGA device is used to validate the power of the proposed architecture described in VHDL. Power consumption has been measured with the XPower Analyzer tool in the ISE 14.7 design suite.

Keywords: AES, S-Box, composite field arithmetic, GF, pipelining, FPGA, VHDL
INTRODUCTION
Information is one of the most important assets in the modern world, because without information nothing can be done. The evolution of information technology, and in particular the increase in processing speed and the power consumption of devices, has made it necessary to reconsider the cryptographic algorithms in use; it is therefore necessary to encrypt and decrypt our information. Encryption hides the original message in an unreadable form, while decryption changes the unreadable form back into a readable form for the intended recipient. A cipher system is a security mechanism that protects information from unauthorized or public access. Cipher systems are usually subdivided into block ciphers and stream ciphers: block ciphers encrypt groups of characters simultaneously, whereas stream ciphers usually operate on the individual characters of a plaintext message one at a time. There are two types of encryption algorithm, private (symmetric key) and public: a private scheme uses only one key for both encryption and decryption, whereas a public scheme uses two keys, one for encryption and another for decryption. Substitution-permutation networks (SPNs) are natural constructions for symmetric key cryptosystems that realize confusion and diffusion through substitution and permutation operations, respectively. In SPNs the only non-linear operation is the substitution step, commonly referred to as an S(ubstitution)-box; the construction of the S-box is difficult and is of central importance in AES, since it largely determines whether the block cipher is cryptographically strong and resilient to common attacks, including linear and differential cryptanalysis as well as algebraic attacks.
Claude Shannon's two properties of confusion and diffusion strengthen a symmetric key cryptosystem: confusion is the complexity of the relationship between the secret key and the ciphertext, and diffusion is the degree to which the influence of a single input plaintext bit is spread throughout the resulting ciphertext. The National Institute of Standards and Technology of the United States (NIST), in cooperation with industry and the cryptographic community, worked to create a new cryptographic standard, and the symmetric block cipher Rijndael was standardized by NIST as the AES in November 2001. AES is an Advanced Encryption Standard providing high security compared to other encryption techniques, alongside the RSA model. When introducing AES, NIST publicly called for nominees for the new standard: in total 15 algorithms were submitted, from which 5 finalists were chosen based on presentation, analysis and testing, and of these Rijndael was finally adopted as the symmetric key encryption standard. The algorithm was proposed by the two Belgian cryptographers Vincent Rijmen and Joan Daemen. In the AES symmetric-key block cipher, the construction of cryptographically strong S-boxes with efficient hardware and software implementations has become a topic of active research. The basic difference between the standard AES and Rijndael is that AES fixes the block length to 128 bits and supports key lengths of 128, 192 and 256 bits, whereas in Rijndael the block and key lengths can be independently fixed to any multiple of 32, ranging from 128 to 256 bits. In this paper we investigate a design methodology for a low power S-Box, since the S-Box is the non-linear operation in AES. The FPGA implementation of the architecture is presented along with a comparison against some existing systems.
The remainder of this paper is organized as follows: Section II describes the AES operation, Section III describes the S-Box construction method, Section IV contains the proposed S-Box architecture, and the simulation results and conclusion are drawn in Sections V and VI respectively.
AES Encryption Algorithm
Previously DES was used, but it supports only a 56-bit key. The AES is a symmetric block cipher, which uses the same key for both encryption and decryption. It has been used broadly in different applications such as smart cards, cellular phones, web servers and automated teller machines. Similar to other symmetric ciphers, AES applies round operations iteratively to the plaintext to generate the ciphertext. There are four transformations in a round operation: SubBytes, ShiftRows, MixColumns and AddRoundKey. SubBytes is a non-linear operation in which one byte is substituted for another according to the algorithm in use. In the ShiftRows operation data is shifted within each row: row 0 is not shifted, row 1 is shifted by 1 byte, and likewise for the remaining rows. The MixColumns operation performs mixing of data within columns. The actual key mixing is performed in the AddRoundKey function, where each byte of the state is XORed with the subkey.
The AES process can be classified into three types based on the length of the key used to generate the ciphertext: AES-128, AES-192 and AES-256. The AES cipher maintains an internal 4-by-4 matrix of bytes called the state, consisting of four rows each containing Nb bytes, where Nb is the block length divided by 32; the key length divided by 32 is 4 for a 128-bit key, 6 for a 192-bit key and 8 for a 256-bit key. The number of rounds also differs with the key length: 10 rounds for a 128-bit key, 12 rounds for a 192-bit key and 14 rounds for a 256-bit key. The last round differs from the previous rounds in that it has no MixColumns transformation. The AES encryption and decryption operations are shown in Fig. 1.


Fig 1: AES encryption and decryption algorithm

S-Box Transformation
The SubBytes transformation is a nonlinear byte substitution that operates independently on each byte of the state using a substitution table (S-box). This S-box is invertible, and it can be constructed using two methods:
1. Look-up table
2. Composite field arithmetic
In the look-up table method all the values are predefined in ROM, so the area, memory accesses and latency are high. Our method is therefore based on composite field arithmetic, which contains two main operations:
(1) Perform the multiplicative inverse in GF(2^8).
(2) Perform the affine transformation over GF(2).
GF stands for Galois Field. Arithmetic in a finite field (Galois field) differs from standard integer arithmetic: a finite field contains a limited number of elements. The finite field with p^n elements is denoted GF(p^n), where p is a prime number called the characteristic of the field and n is a positive integer. A particular case is GF(2), which has only two elements (1 and 0), where addition is exclusive OR (XOR) and multiplication is AND. The element "0" is never invertible, while the element "1" is always invertible and is its own inverse. Therefore the only invertible element in GF(2) is "1", and since the multiplicative inverse of "1" is "1", division is an identity function.
The individual bits in a byte representing a GF(2^8) element can be viewed as the coefficients of the power terms of a GF(2^8) polynomial. For instance, {10001011}2 represents the polynomial q^7 + q^3 + q + 1 in GF(2^8). From [2], any arbitrary polynomial can be represented as bx + c, given an irreducible polynomial x^2 + Ax + B.
Thus, an element of GF(2^8) may be represented as bx + c, where b is the most significant nibble and c is the least significant nibble. The multiplicative inverse can then be computed using the equation below:

(bx + c)^-1 = b(b^2·B + bcA + c^2)^-1 x + (c + bA)(b^2·B + bcA + c^2)^-1

With A = 1 and B = λ the equation becomes

(bx + c)^-1 = b(b^2·λ + bc + c^2)^-1 x + (c + b)(b^2·λ + bc + c^2)^-1    (1)
Proposed S-Box Design Method
In this section the multiplicative inverse computation is covered first, and the affine transformation then follows to complete the methodology for constructing the S-Box for the SubBytes operation. For the InvSubBytes operation, the multiplicative inversion module can be reused and combined with the inverse affine transformation. The multiplicative inverse is constructed using equation (1).

Fig. 2: Block diagram of the S-Box
Description of the building blocks of the S-Box:
δ    = isomorphic mapping to the composite field
^2   = squarer in GF(2^4)
xλ   = multiplication with the constant λ in GF(2^4)
⊕    = addition operation in GF(2^4)
^-1  = multiplicative inversion in GF(2^4)
x    = multiplication operation in GF(2^4)
δ^-1 = inverse isomorphic mapping back to GF(2^8)
Affine Transform
The affine transformation is the second building block of the composite field arithmetic based S-Box. The proposed affine transform and inverse affine transform are as follows:
Affine transform:
b'(i) = b(i) ⊕ b((i+4) mod 8) ⊕ b((i+5) mod 8) ⊕ b((i+6) mod 8) ⊕ b((i+7) mod 8) ⊕ d(i)    (2)
where d = {01100011} and i = 0 to 7.

Inverse affine transform:
b'(i) = b((i+2) mod 8) ⊕ b((i+5) mod 8) ⊕ b((i+7) mod 8) ⊕ d(i)    (3)
where d = {00000101} and i = 0 to 7.
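As a quick software cross-check, the two transforms above can be verified to be mutual inverses over all 256 byte values. The Python sketch below is ours (bit i of the integer b holds b(i)):

def affine(b, taps=(0, 4, 5, 6, 7), const=0b01100011):
    # Output bit i = XOR of b[(i+t) mod 8] over the taps, plus constant d(i).
    out = 0
    for i in range(8):
        bit = (const >> i) & 1
        for t in taps:
            bit ^= (b >> ((i + t) % 8)) & 1
        out |= bit << i
    return out

def inv_affine(b):
    return affine(b, taps=(2, 5, 7), const=0b00000101)

assert all(inv_affine(affine(b)) == b for b in range(256))
print(hex(affine(0x00)))   # 0x63: the S-box value of 0x00 (inverse of 0 is taken as 0)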
Isomorphic and Inverse Isomorphic Mapping
The computation of the multiplicative inverse in composite fields cannot be applied directly to an element of GF(2^8); the element must first be decomposed from GF(2^8) into the lower-order fields GF(2), GF(2^2) and GF((2^2)^2). To accomplish this, the following irreducible polynomials are used:

GF(2^2) over GF(2):               x^2 + x + 1
GF((2^2)^2) over GF(2^2):         x^2 + x + φ
GF(((2^2)^2)^2) over GF((2^2)^2): x^2 + x + λ

where φ = {10}2 and λ = {1100}2.
Each element of GF(2^8) is mapped to its composite field representation via an isomorphic function δ. After performing the multiplicative inversion, the result is mapped back to its equivalent in GF(2^8) via the inverse isomorphic function δ^-1. Let q be an element of GF(2^8); then δ and δ^-1 can be represented as 8x8 matrices, where q7 is the most significant bit and q0 the least significant bit. Written as XOR equations:
δ X q:
q7' = q7 ⊕ q5
q6' = q7 ⊕ q6 ⊕ q4 ⊕ q3 ⊕ q2 ⊕ q1
q5' = q7 ⊕ q5 ⊕ q3 ⊕ q2
q4' = q7 ⊕ q5 ⊕ q3 ⊕ q2 ⊕ q1
q3' = q7 ⊕ q6 ⊕ q2 ⊕ q1
q2' = q7 ⊕ q4 ⊕ q3 ⊕ q2 ⊕ q1
q1' = q6 ⊕ q4 ⊕ q1
q0' = q6 ⊕ q1 ⊕ q0

δ^-1 X q:
q7' = q7 ⊕ q6 ⊕ q5 ⊕ q1
q6' = q6 ⊕ q2
q5' = q6 ⊕ q5 ⊕ q1
q4' = q6 ⊕ q5 ⊕ q4 ⊕ q2 ⊕ q1
q3' = q5 ⊕ q4 ⊕ q3 ⊕ q2 ⊕ q1
q2' = q7 ⊕ q4 ⊕ q3 ⊕ q2 ⊕ q1
q1' = q5 ⊕ q4
q0' = q6 ⊕ q5 ⊕ q4 ⊕ q2 ⊕ q0



Arithmetic Operations in the Composite Field
In the Galois field an element q can be split as qH·x + qL, i.e. into its higher and lower order terms.
Addition in GF(2^4)
Addition of two elements in the Galois field translates to a simple bitwise XOR operation between the two elements.
Squaring in GF(2^4)
Take k = q^2, where k and q are elements of GF(2^4) represented by the binary numbers {k3 k2 k1 k0}2 and {q3 q2 q1 q0}2 respectively. From that,
k3 k2 = kH, k1 k0 = kL, q3 q2 = qH, q1 q0 = qL,
so that kH·x + kL = (qH·x + qL)^2.
Using the irreducible polynomial x^2 + x + 1 and setting x^2 = x + 1, the higher and lower order terms are given by
kH = q3(x+1) + q2, i.e. k3·x + k2 = q3·x + (q2 ⊕ q3)    (4)
kL = q3(1) + q2·x + q1(x+1) + q0, i.e. k1·x + k0 = (q2 ⊕ q1)·x + (q3 ⊕ q1 ⊕ q0)    (5)
From equations (4) and (5), the formula for computing the squaring operation in GF(2^4) is:
k3 = q3
k2 = q3 ⊕ q2
k1 = q2 ⊕ q1
k0 = q3 ⊕ q1 ⊕ q0
Multiplication with constant λ
Take k = λq, where k = {k3 k2 k1 k0}2, q = {q3 q2 q1 q0}2 and λ = {1100}2 are elements of GF(2^4). Proceeding in the same manner as above, we get:
k3 = q2 ⊕ q0
k2 = q3 ⊕ q2 ⊕ q1 ⊕ q0
k1 = q3
k0 = q2
GF(2^4) Multiplication
Let k = qw, where k = {k3 k2 k1 k0}2, q = {q3 q2 q1 q0}2 and w = {w3 w2 w1 w0}2 are elements of GF(2^4). Then
k = kH·x + kL = (qH·wH ⊕ qH·wL ⊕ qL·wH)·x + (qH·wH·φ ⊕ qL·wL)
GF(2^2) Multiplication
Let k = qw, where k = {k1 k0}2, q = {q1 q0}2 and w = {w1 w0}2 are elements of GF(2^2). We get
k1 = q1w1 ⊕ q0w1 ⊕ q1w0
k0 = q1w1 ⊕ q0w0
Multiplication with constant φ
Let k = qφ, where k = {k1 k0}2, q = {q1 q0}2 and φ = {10}2 are elements of GF(2^2). Then
k1 = q1 ⊕ q0
k0 = q1
Multiplicative Inversion in GF(2^4)
Let q be an element of GF(2^4) with inverse q^-1 = {q3^-1, q2^-1, q1^-1, q0^-1}. The individual bits of the inverse can be computed as below (juxtaposition denotes AND):
q3^-1 = q3 ⊕ q3q2q1 ⊕ q3q0 ⊕ q2
q2^-1 = q3q2q1 ⊕ q3q2q0 ⊕ q3q0 ⊕ q2 ⊕ q2q1
q1^-1 = q3 ⊕ q3q2q1 ⊕ q3q1q0 ⊕ q2 ⊕ q2q0 ⊕ q1
q0^-1 = q3q2q1 ⊕ q3q2q0 ⊕ q3q1 ⊕ q3q1q0 ⊕ q3q0 ⊕ q2 ⊕ q2q1 ⊕ q2q1q0 ⊕ q1 ⊕ q0
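All of the GF(2^4)/GF(2^2) building blocks above fit in a short behavioural model. The Python sketch below is ours (bit i of an integer holds q(i)); the closing assertions cross-check the inversion, squaring and constant-λ formulas against the general multiplier, a useful sanity test before committing the logic to VHDL:

def bits(q):
    return [(q >> i) & 1 for i in range(4)]              # [q0, q1, q2, q3]

def from_bits(b):
    return b[0] | (b[1] << 1) | (b[2] << 2) | (b[3] << 3)

def gf4_mul(q, w):        # GF(2^2) multiplication, modulo x^2 + x + 1
    q0, q1 = q & 1, (q >> 1) & 1
    w0, w1 = w & 1, (w >> 1) & 1
    return (((q1 & w1) ^ (q0 & w1) ^ (q1 & w0)) << 1) | ((q1 & w1) ^ (q0 & w0))

def gf4_mul_phi(q):       # multiplication by phi = {10}2 in GF(2^2)
    q0, q1 = q & 1, (q >> 1) & 1
    return ((q1 ^ q0) << 1) | q1

def gf16_square(q):
    q0, q1, q2, q3 = bits(q)
    return from_bits([q3 ^ q1 ^ q0, q2 ^ q1, q3 ^ q2, q3])

def gf16_mul_lambda(q):   # multiplication by lambda = {1100}2
    q0, q1, q2, q3 = bits(q)
    return from_bits([q2, q3, q3 ^ q2 ^ q1 ^ q0, q2 ^ q0])

def gf16_mul(q, w):       # GF((2^2)^2) multiplication, modulo x^2 + x + phi
    qH, qL, wH, wL = q >> 2, q & 3, w >> 2, w & 3
    kH = gf4_mul(qH, wH) ^ gf4_mul(qH, wL) ^ gf4_mul(qL, wH)
    kL = gf4_mul_phi(gf4_mul(qH, wH)) ^ gf4_mul(qL, wL)
    return (kH << 2) | kL

def gf16_inv(q):          # the bitwise inversion formulas above
    q0, q1, q2, q3 = bits(q)
    i3 = q3 ^ (q3 & q2 & q1) ^ (q3 & q0) ^ q2
    i2 = (q3 & q2 & q1) ^ (q3 & q2 & q0) ^ (q3 & q0) ^ q2 ^ (q2 & q1)
    i1 = q3 ^ (q3 & q2 & q1) ^ (q3 & q1 & q0) ^ q2 ^ (q2 & q0) ^ q1
    i0 = ((q3 & q2 & q1) ^ (q3 & q2 & q0) ^ (q3 & q1) ^ (q3 & q1 & q0) ^
          (q3 & q0) ^ q2 ^ (q2 & q1) ^ (q2 & q1 & q0) ^ q1 ^ q0)
    return from_bits([i0, i1, i2, i3])

# Sanity checks: q * q^-1 = 1 for every nonzero q, and the dedicated
# squaring and lambda blocks agree with the general multiplier.
assert all(gf16_mul(q, gf16_inv(q)) == 1 for q in range(1, 16))
assert all(gf16_mul_lambda(q) == gf16_mul(q, 0b1100) for q in range(16))
assert all(gf16_square(q) == gf16_mul(q, q) for q in range(16))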
The above discussion covers the operations of the composite field arithmetic based S-Box. Our proposed method implements this S-Box in a four-stage pipeline, so that area, delay and power are reduced. The structure is shown below.

Fig. 3: Proposed pipelined S-Box implementation






Comparison Result
The S-Box design is based on the composite field arithmetic method and is coded in the VHDL hardware description language. The XC2VP30 Xilinx FPGA device is used to validate the power of the proposed architecture, and power is analysed using the XPower Analyzer in Xilinx ISE 14.7. Table 1 shows the comparison of power, delay and slices for the conventional and proposed methods; Fig. 5 shows the power report for the proposed method.

Table 1: Comparison results for the conventional and proposed architectures

Implementation                        No. of 4-input LUTs  No. of occupied slices  Dynamic power (W)  Delay (ns)
Conventional structure (C.S)          76                   40                      8.278              19.866
C.S in 2-stage pipeline               83                   43                      5.072              15.76
C.S, inv replaced by equation         74                   40                      8.278              18.986
C.S, inv rep. equ, 2-stage pipeline   81                   43                      5.076              14.412
C.S, inv rep. equ, 4-stage pipeline   82                   43                      5.064              6.275
C.S, inv replaced by mux              76                   39                      8.277              18.863
C.S, inv rep. mux, 2-stage pipeline   83                   44                      5.16               14.627
C.S, inv rep. mux, 4-stage pipeline   88                   49                      8.36               6.275
Operand-based S-Box (OP)              75                   39                      8.278              18.366
OP in 2-stage pipeline                86                   45                      5.012              18.608
OP in 4-stage pipeline                77                   40                      5.061              6.318
OP, inv replaced by equation          75                   39                      8.277              18.318
OP, inv rep. equ, 4-stage pipeline    79                   40                      5.098              6.318
OP, inv replaced by mux               74                   39                      8.278              18.318
OP, inv rep. mux, 2-stage pipeline    79                   40                      5.066              16.869
OP, inv rep. mux, 4-stage pipeline    76                   40                      5.78               6.318
Proposed architecture                 85                   44                      5.053              6.275


Fig. 4: Simulation result for the proposed structure

Fig. 5: Power report for the proposed architecture
Conclusion
The main aim of this paper is the design and implementation of a composite field arithmetic based S-Box. The proposed method is based on combinational logic, so its power and delay are very low, and it applies a four-stage pipelining technique to the S-Box design. The proposed S-Box uses only XOR, AND, NOT and OR logic gates. The pipelined S-Box has lower power and higher speed than the conventional structure.


Acknowledgment
The authors would like to thank Kathir College of Engineering for making available the laboratory facilities and network resources needed to complete this paper in time. The suggestions and comments of the anonymous reviewers have greatly helped to improve the quality of this paper.
REFERENCES:
[1] Sumio Morioka and Akashi Satoh, "An optimized S-Box circuit architecture for low power AES design," Springer-Verlag Berlin Heidelberg, 2003.
[2] Joon-Ho Hwang, "Efficient hardware architecture of SEED S-Box for the application of smart cards," December 2004.
[3] P. Noo-intara, S. Chantarawong, and S. Choomchuay, "Architectures for MixColumn transform for the AES," ICEP 2004.
[4] George N. Selimis, Athanasios P. Kakarountas, Apostolos P. Fournaris, and Odysseas Koufopavlou, "A low power design for S-box cryptographic primitive of AES for the mobile end-user," 2007.
[5] Xing Ji-peng, Zou Xue-cheng, and Guo Xu, "Ultra-low power S-Box architecture in the AES method," March 2008.
[6] L. Thulasimani and M. Madheswaran, "A single chip design and implementation of AES-128/192/256 encryption algorithm," 2010.
[7] Mohammad Amin Amiri, Sattar Mirzakuchaki, and Mojdeh Mahdavi, "LUT-based QCA realization of a 4x4 S-Box in the AES method," April 2010.
[8] Yong-Sung Jeon, Young-Jin Kim, and Dong-Ho Lee, "A compact memory-free architecture for the AES algorithm using RS methods," 2010.
[9] Muhammad H. Rais and Mohammad H. Al-Mijalli, "Reconfigurable implementation of S-Box using Virtex-5, Virtex-6, Virtex-7 based reduced residue of prime numbers."
[10] Tomoyasu Suzaki, Kazuhiko Minematsu, Sumio Morioka, and Eita Kobayashi, "TWINE: A lightweight block cipher for multiple platforms."
[11] Vincent Rijmen, "Efficient implementation of the Rijndael S-Box," Katholieke Universiteit Leuven, Dept. ESAT, Belgium.
[12] Akashi Satoh, Sumio Morioka, Kohji Takano, and Seiji Munetoh, "A compact Rijndael hardware architecture with S-Box optimization," Springer-Verlag Berlin Heidelberg.
[13] Saurabh Kumar, V. K. Sharma, and K. K. Mahapatra, "Low latency VLSI architecture of S-Box for AES encryption."
[14] Saurabh Kumar, V. K. Sharma, and K. K. Mahapatra, "An improved VLSI architecture of S-Box for AES encryption."
[15] S. Arrag, A. Hamdoun, A. Tragha, and S. E. Khamlich, "Implementation of stronger AES by using dynamic S-Box dependent of master key," Journal of Theoretical and Applied Information Technology, 20th July 2013, vol. 53, no. 2.
[16] Cheng Wang, "Performance characterization of pipelined S-Box implementation for the AES," January 2014.




Single-Phase d-q Transformation Used as an Indirect Control Method for Shunt Active Power Filter

Sachi Sharma¹

¹Research Scholar (M.E.), LDRP-ITR College, Gandhinagar, India
E-mail: spark_sachi@yahoo.com
Abstract - A single-phase shunt active power filter is used mainly for the elimination of harmonics in single-phase AC networks. In this paper a single-phase shunt active power filter based on an indirect control technique is designed and simulated. This control technique is achieved by phase shifting the input signal (voltage/current) by π/2. The overall action of the shunt active power filter in eliminating the harmonics created by a non-linear load on the source side is discussed, and the output of the shunt active power filter is verified using MATLAB/Simulink software.

Keywords: Harmonics, Single-Phase Shunt Active Power Filter

1. Introduction
Because of their tremendous advantages, power electronic based devices and equipment play a vital role in modern power processing. As a result, these devices draw non-sinusoidal current from the utility due to their non-linearity, so in addition to supplying reactive power, a typical distribution system has to take care of harmonics as well [C]. These power quality concerns have led power engineers to think about devices which reduce the harmonics in the supply line [E, F]. Such devices are known as active power filters (APFs) or power conditioners, which are capable of current/voltage harmonic compensation. Active power filters are classified into shunt, series and hybrid active power filters, which can deal with various power quality issues [A, E]. One major advantage of APFs is that they adapt to changes in the network and to load fluctuations, and they consume less space compared with conventional passive filters [H]. Nowadays power quality issues in single-phase systems exceed those in three-phase systems due to the large-scale use of non-linear loads and the increase in newly developed distributed generation systems, like solar photovoltaic and small wind energy systems, in single-phase networks [A, G]. Reactive power and current harmonics are significant in a single-phase network and are major concerns for a power distribution system, because these issues lead to other power quality troubles. In this paper a single-phase shunt active power filter based on an indirect control technique for generating the reference signal is used. Section 2 details the single-phase shunt active power filter, section 3 gives an idea of the indirect control strategy, and these are followed by the simulation study and conclusions.
2. Single-Phase Shunt Active Power Filter
In this topology the active power filter is connected in parallel to the utility and the non-linear load. A pulse width modulated voltage source inverter is used in the shunt active power filter, acting as a current-controlled voltage source. The compensation of current harmonics in a shunt active power filter is achieved by injecting an equal and opposite harmonic compensating current (180 degrees phase shifted). As a result, the harmonics in the line are cancelled out and the source current becomes sinusoidal and in phase with the source voltage. With the help of control strategies, reference signals are generated and then compared with the source current to produce the gating signals for the switches. For reference signal generation there are different control strategies, such as the instantaneous active-reactive power theory (p-q theory) developed by Akagi [K] and Park's d-q or synchronous reference frame theory [D].
These control strategies mainly focus on three-phase systems [I]. The three-phase p-q theory was made applicable to single-phase systems by the work of Liu [J], by phase shifting an imaginary variable, similar to the voltage or current signal, by 90 degrees. Later this concept was extended to the single-phase synchronous d-q reference frame by Zhang [B].

Figure 1: Principle of shunt active power filter.

3. Indirect Control Technique
3.1 Single-phase d-q transformation

Figure 3: Reference signal generation using single-phase d-q transformation.

A single-phase system can be converted directly into the rotating frame without a three-phase matrix transformation. An imaginary variable is obtained by shifting the original signal (voltage/current) by 90 degrees, and the original signal together with this imaginary signal represents the load current in the stationary α-β coordinates.

Applying the Park rotation to the α-β pair gives the load current in the d-q frame. From the resulting d and q components we can derive the fundamental active, fundamental reactive, harmonic active and harmonic reactive parts by using appropriate filters: the DC components, corresponding to the fundamental, are obtained with a low-pass filter (LPF), and the AC components, corresponding to the harmonics, are obtained with a high-pass filter (HPF).
Here the DC component is used for the generation of the reference current, hence the name indirect method. The load should draw from the source only the fundamental active part of the current, so the reference source current is reconstructed from the low-pass-filtered d component. In order to maintain a constant DC voltage across the active filter's DC-link capacitor, a loss-compensation term obtained from the DC-link voltage controller is added to this DC component before the reference signal is transformed back to the stationary frame. The generated reference current is used to derive the gating pulses of the inverter switches, which inject the compensating current into the line.
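An off-line numerical sketch of this reference generation is given below (our Python/NumPy code, not the Simulink model; the sampling rate, harmonic content and the sin/cos convention of the rotation are illustrative assumptions):

import numpy as np

f, fs = 60.0, 20e3                      # line frequency and sampling rate
t = np.arange(0.0, 0.2, 1.0 / fs)
w = 2.0 * np.pi * f

# Example distorted load current: fundamental plus 3rd and 5th harmonics.
i_load = np.sin(w * t) + 0.4 * np.sin(3 * w * t) + 0.2 * np.sin(5 * w * t)

# Orthogonal pair: the signal itself and a copy delayed by T/4 (90 degrees).
quarter = int(round(fs / f / 4.0))
i_alpha = i_load
i_beta = np.concatenate([np.zeros(quarter), i_load[:-quarter]])

# Rotation into the synchronous frame: for the fundamental, i_d is pure DC.
i_d = i_alpha * np.sin(w * t) - i_beta * np.cos(w * t)

# A one-cycle moving average acts as the LPF extracting the DC component.
N = int(round(fs / f))
i_d_dc = np.convolve(i_d, np.ones(N) / N, mode="same")

# Reference source current (fundamental, in phase with a sine-wave source)
# and the compensating current the inverter is gated to inject.
i_s_ref = i_d_dc * np.sin(w * t)
i_c_ref = i_load - i_s_ref

After the first cycle (which contains the delay line's start-up transient), i_s_ref settles to the unit-amplitude fundamental and i_c_ref carries only the harmonic content, which is the current the voltage source inverter must inject; the DC-link regulation term described above would be added to i_d_dc in the full scheme.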

Figure 4: Simulink model of proposed shunt active power filter.

4. Simulation Study
The proposed single-phase shunt active power filter using the indirect control strategy is simulated with the SimPowerSystems toolbox in MATLAB. A 60 Hz source is connected to a non-linear diode rectifier load. Due to the non-linearity of the load, the source current is distorted and its THD is about 38.90%. When the shunt active power filter is connected between the source and the load, it injects the harmonic compensating current into the line, the source current regains its sinusoidal nature, the power factor is much better than without the filter, and the THD is improved to 9.65%.
Figure 5: FFT analysis of distorted source current.

Figure 6: FFT analysis of source current after compensation.

Figure 7: FFT analysis of source voltage.

Table 1: Performance of the indirect control technique for the single-phase SAPF.



5. ACKNOWLEDGEMENT
A great number of people have directly or indirectly helped me at different stages of this work to accomplish it successfully; it is their relentless support and care that has carried me through this memorable journey. Here I would like to express my sincere gratitude towards them.
I am grateful to all faculty members of the Electrical Engineering Department, LDRP Institute of Technology & Research. I also thank the Head of the Electrical Engineering Department, Prof. H. N. Prajapati, for providing the necessary infrastructure to carry out this work at the institute. Last but not least, I would like to express thanks to the Almighty GOD and my family, without whose blessings the successful accomplishment of this work would not have been possible. I will always remember and cherish everyone who has helped me bring this work to this level.
6. Conclusion

A single-phase shunt active power filter based on an indirect control technique is used in this paper. Using this control strategy, the reference signal is generated successfully. The shunt active power filter is found effective in injecting harmonic compensating current, thereby reducing the source current THD and improving the power factor of the line; the THD is reduced from 38.90% to 9.65% after compensation. It is also noticed that a constant voltage appears across the DC-link capacitor, which helps the smooth functioning of the voltage source inverter. The shunt active power filter output is verified successfully with the help of MATLAB software.
REFERENCES:
[1] V. Khadkikar, A. Chandra, and B. N. Singh, "Generalised single-phase p-q theory for active power filtering: simulation and DSP-based experimental investigation," IET Power Electronics, vol. 2, no. 1, pp. 67-78, 2009.
[2] R. Zhang, M. Cardinal, P. Szczesny, and M. Dame, "A grid simulator with control of single-phase power converters in D-Q rotating frame," in Proc. IEEE Power Electronics Specialists Conference (PESC), 2002, pp. 1431-1436.
[3] M. Gonzalez, V. Cardenas, and F. Pazos, "D-Q transformation development for single-phase systems to compensate harmonic distortion and reactive power," in Proc. IEEE Power Electronics Congress, 2004, pp. 177-182.
[4] S. Golestan, M. Joorabian, H. Rastegar, A. Roshan, and J. M. Guerrero, "Droop based control of parallel-connected single-phase inverters in D-Q rotating frame," in Proc. IEEE Industrial Technology, 2009, pp. 1-6.
[5] B. Singh, K. Al-Haddad, and A. Chandra, "A review of active power filters for power quality improvement," IEEE Trans. Ind. Electron., vol. 45, no. 5, pp. 960-971, 1999.
[6] M. El-Habrouk, M. K. Darwish, and P. Mehta, "Active power filters: a review," in Proc. of IEE Elect. Power Appl., vol. 147, no. 5, pp. 403-413, 2000.
[7] L. P. Kunjumuhammed and M. K. Mishra, "Comparison of single phase shunt active power filter algorithms," in Proc. Annu. Conf. IEEE Power India, 2006.
[8] Mohammad H. Rashid, Power Electronics Handbook: Devices, Circuits and Applications, Elsevier, 2nd ed., 2007.
[9] H. Akagi, Y. Kanazawa, and A. Nabae, "Instantaneous reactive power compensators comprising switching devices without energy storage components," IEEE Trans. Ind. Appl., vol. 20, no. 3, pp. 625-630, 1984.
[10] J. Liu, J. Yang, and Z. Wang, "A new approach for single phase harmonic current detecting and its application in a hybrid active power filter," in Proc. Annu. Conf. IEEE Indust. Electronics Soc. (IECON'99), vol. 2, pp. 849-854, 1999.
[11] X. Kestelyn and E. Semail, "A vectorial approach for generation of optimal current references for multiphase permanent-magnet synchronous machines in real time," IEEE.



















A Comparative Study on Feature Extraction Techniques for Language
Identification
Varsha Singh¹, Vinay Kumar Jain², Dr. Neeta Tripathi³
¹Research Scholar, Department of Electronics & Telecommunication, CSVTU University
²Associate Professor, Department of Electronics & Telecommunication, CSVTU University
³Principal, SSITM, CSVTU University, FET, SSGI, SSTC, Junwani, Bhilai, C.G., India
E-mail: varshasingh.40@gmail.com
ABSTRACT - This paper presents a brief survey of feature extraction techniques used in language identification (LID) systems. The objective of a language identification system is to automatically identify the specific language from a spoken utterance. The LID system must also perform quickly and accurately. To fulfill these criteria, the extraction of features from the acoustic signal is an important task, because LID mainly depends on language-specific characteristics. The efficiency of this feature extraction phase is important since it strongly affects the performance and quality of the system. Features commonly used in LID include cepstral coefficients, MFCC, PLP, RASTA-PLP, etc.

Keywords: LID (Language Identification), feature extraction, LPC, cepstral analysis, MFCC, PLP, RASTA-PLP.
INTRODUCTION
Speech is an important and natural form of communication. Over the past three decades there has been tremendous development in the area of speech processing. Applications of speech processing include speech/speaker recognition, language identification, etc. The objective of an automatic speaker recognition system is to extract, characterize and recognize the information about speaker identity [1]. A language identification system automatically identifies the specific language from a spoken utterance. Automatic language identification is therefore an essential component of, and usually the first gateway in, a multi-lingual speech communication/interaction scenario. There are many potential applications of LID. In the area of telephone-based information services, including customer service, phone banking, phone ordering, information hotlines and other call-centre/Interactive Voice Response (IVR) based services, LID systems would be able to automatically transfer an incoming call to the corresponding agent, recorded message, or speech recognition system. An LID system can be made efficient by extracting the language-specific characteristics. In this paper we mainly focus on the language-specific characteristics for language identification systems. Spectral features are those features that characterize the short-time spectrum and are based on the time-varying properties of the speech signal. Temporal features are assumed constant over a short period, and their characteristics are short-time stationary.
LITERATURE REVIEW
Feature extraction is a process of reducing data while retaining speaker-discriminative information. The amount of data generated during speech production is quite large, while the essential characteristics of the speech process change relatively slowly and therefore require less data [2]. We can define requirements that should be taken into account during selection of appropriate speech signal characteristics or features [3, 4]:
- large between-speaker and small within-speaker variability
- not change over time or be affected by the speaker's health
- be difficult to impersonate/mimic
- not be affected by background noise nor depend on the specific transmission medium
- occur naturally and frequently in speech.
It is not possible for a single feature to meet all the criteria listed above. Thus, a large number of features can be extracted and combined to improve the accuracy of the system.
The pitch and formant features of the speech signal are extracted and used to detect three different emotional states of a person [5]. Pitch originates from the vocal cords. When air flows from the glottis through the vocal cords, the vibration of the vocal cords/folds produces pitch harmonics. The rate at which the vocal folds vibrate is the frequency of the pitch. So, when the vocal folds oscillate 300 times per second, they are said to be producing a pitch of 300 Hz. Pitch is useful to differentiate speaker genders: in males, the average pitch falls between 60 and 120 Hz, while a female's pitch typically lies between 120 and 200 Hz [2]. The cepstral analysis method is used for pitch extraction, and the LPC analysis method is used to extract the formant frequencies. Formants are defined as the spectral peaks of the sound spectrum of a person's voice. In speech science and phonetics, formant frequencies refer to the acoustic resonances of the human vocal tract. They are often measured as amplitude peaks in the frequency spectrum of the sound wave. Formant frequencies are very important in the analysis of the emotional state of a person. The linear predictive coding (LPC) technique has been used for estimation of the formant frequencies [5].
LPC is one of the feature extraction methods based on the source-filter model of speech production. B. S. Atal in 1976 [3] used a linear prediction model for parametric representation of speech-derived features. The predictor coefficients and other speech parameters derived from them, such as the impulse response function, the autocorrelation function, the area function, and the cepstrum function, were used as input to an automatic speaker recognition system, and the cepstrum was found to provide the best results for speaker recognition.
Reynolds in 1994 [6] compared different features useful for speaker recognition, such as Mel frequency cepstral coefficients (MFCCs), linear frequency cepstral coefficients (LFCCs), linear predictive cepstral coefficients (LPCCs) and perceptual linear prediction cepstral coefficients (PLPCCs). From the experiments conducted, he concluded that, of these features, MFCCs and LPCCs give better performance than the others. Revised perceptual linear prediction was proposed by Kumar et al. [7] and Ming et al. [8] for the purpose of identifying the spoken language; Revised Perceptual Linear Prediction coefficients (RPLP) were obtained from a combination of MFCC and PLP.
Of all the various spectral features, MFCC, LPCC and PLP are the most recommended features which carry information about the resonance properties of the vocal tract [9].
METHODOLOGY
In this section a comprehensive review of several feature extraction methods for language identification is presented.
LPC: This is one of the important methods for speech analysis because it can provide an estimate of the poles of the vocal tract transfer function (and hence the formant frequencies produced by the vocal tract). LPC (Linear Predictive Coding) analyzes the speech signal by estimating the formants, removing their effects from the speech signal, and estimating the intensity and frequency of the remaining buzz. The process of removing the formants is called inverse filtering, and the remaining signal is called the residue [1]. The basic idea behind LPC coding is that each sample can be approximated as a linear combination of a few past samples. The linear prediction method provides a robust, reliable, and accurate way of estimating the parameters. The computation involved in LPC processing is considerably less than that of cepstrum analysis.
Fig. 1 Block diagram of the LPC algorithm (digital speech signal → pre-emphasis → frame blocking → windowing → autocorrelation analysis → LPC coefficients)
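To make the autocorrelation route of Fig. 1 concrete, the sketch below estimates the LPC coefficients of one windowed frame with the Levinson-Durbin recursion. It is a minimal illustration under assumed defaults (frame length, predictor order), not the exact pipeline of any system cited here.

```python
import numpy as np

def lpc_coefficients(frame, order):
    """Estimate LPC coefficients of one pre-emphasized, windowed speech frame
    via the autocorrelation method and the Levinson-Durbin recursion."""
    # Autocorrelation for lags 0..order
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:])
                  for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0] + 1e-12            # small bias guards against silent frames
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err            # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)
    return a                      # A(z) = 1 + a[1] z^-1 + ... + a[order] z^-order

# Example: 10th-order LPC of a synthetic voiced-like frame
frame = np.hamming(240) * np.sin(2 * np.pi * 0.05 * np.arange(240))
print(lpc_coefficients(frame, 10))
```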
Cepstral Analysis: This analysis is a very convenient way to model the spectral energy distribution. Cepstral analysis operates in a domain in which the glottal frequency is separated from the vocal tract resonances. The low-order coefficients of the cepstrum contain information about the vocal tract, while the higher-order coefficients contain primarily information about the excitation. (Actually, the higher-order coefficients contain both types of information, but the frequency of periodicity dominates.) The word cepstrum was derived by reversing the first syllable of the word spectrum. The cepstrum exists in a domain referred to as quefrency (a reversal of the first syllable of frequency), which has units of time. The cepstrum is defined as the inverse Fourier transform of the logarithm of the power spectrum; it is thus the spectrum of a spectrum, and has certain properties that make it useful in many types of signal analysis [10]. Cepstrum coefficients are calculated in short frames over time. Only the first M cepstrum coefficients are used as features (all coefficients together model the precise spectrum; the coarse spectral shape is
modeled by the first coefficients, precision is selected by the number of coefficients taken, and the first coefficient (energy) is usually discarded). The cepstrum is calculated in two ways: the LPC cepstrum and the FFT cepstrum. The LPC cepstrum is obtained from the LPC coefficients, and the FFT cepstrum is obtained from an FFT. The most widely used parametric representation for speech recognition is the FFT cepstrum derived on a Mel scale [11]. A drawback of the cepstral coefficients is the linear frequency scale. Perceptually, the frequency ranges 100-200 Hz and 10-20 kHz should be approximately equally important, but the standard cepstral coefficients do not take this into account; a logarithmic frequency scale would be better. Mimicking perception is necessary because typically we want to classify sounds according to perceptual dissimilarity or similarity; perceptually relevant features often lead to robust classification, too. It is desirable that a small change in the feature vector leads to a small perceptual change (and vice versa). The Mel-frequency cepstral coefficients fulfill this criterion.

Fig. 2 Cepstral analysis (speech signal → windowing → DFT → log → IDFT → cepstrum)
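The definition above translates directly into a few lines: this minimal sketch takes the inverse transform of the log power spectrum. The FFT size and the small bias term are arbitrary choices rather than values from the paper.

```python
import numpy as np

def real_cepstrum(frame, n_fft=512):
    """Real cepstrum of a windowed frame: the inverse DFT of the log power
    spectrum. Low-quefrency bins describe the vocal tract envelope; a peak
    at higher quefrency reveals the pitch period."""
    power = np.abs(np.fft.rfft(frame, n_fft)) ** 2
    return np.fft.irfft(np.log(power + 1e-12))   # bias avoids log(0)
```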
MFCC: This technique is considered one of the standard methods for feature extraction and is accepted as the baseline. MFCCs are based on the known variation of the human ear's critical bandwidths with frequency; filters spaced linearly at low frequencies and logarithmically at high frequencies have been used to capture the phonetically important characteristics of speech. This is expressed in the Mel-frequency scale (the Mel scale was used by Mermelstein and Davis [11] to extract features from the speech signal for improving recognition performance). MFCCs are the result of the short-term energy spectrum expressed on a Mel-frequency scale [1]. MFCCs have proved more efficient, with better anti-noise ability, than other vocal tract parameters such as LPC. The steps to calculate MFCCs are shown in the figure below:
calculate MFCC are shown in the figure below:

Fig. 3 Block diagram of the MFCC processor (speech signal → sampling & pre-emphasis → framing & windowing → DFT → absolute value → Mel-scaled filterbank → log → DCT → MFCC vectors)
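The sketch below mirrors the Fig. 3 chain for a single frame. The FFT size, number of filters, and number of kept coefficients are common defaults rather than values prescribed by the paper, and the Mel mapping uses the familiar 2595·log10(1 + f/700) formula.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, fs, n_fft=512, n_filters=26, n_coeffs=13):
    """One MFCC vector from one pre-emphasized, windowed frame, following
    the Fig. 3 chain: DFT -> absolute value -> Mel filterbank -> log -> DCT."""
    spectrum = np.abs(np.fft.rfft(frame, n_fft))
    # Triangular filters with centre frequencies spaced evenly on the Mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energy = np.log(fbank @ spectrum + 1e-10)
    # DCT-II decorrelates the log filterbank energies; keep the first n_coeffs
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_filters))
    return basis @ log_energy
```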
LFCC speech features (LFCC-FB40): The methodology of LFCC [11] is the same as that of MFCC; the only difference is that the Mel-frequency filter bank is replaced by a linear-frequency filter bank. Thus, the desired frequency range is covered by a filter bank of 40 equal-width and equal-height linearly spaced filters. The bandwidth of each filter is 164 Hz, and the whole filter bank covers the frequency range [133, 6857] Hz.
Fig. 4 LFCC implementation (speech signal → sampling & pre-emphasis → framing & windowing → DFT → absolute value → linear-frequency filter bank → log → DCT → LFCC coefficients)
HFCC-E of Skowronski & Harris: Skowronski & Harris [12] introduced the Human Factor Cepstral Coefficients (HFCC-E). In the HFCC-E scheme the filter bandwidth is decoupled from the filter spacing; this is in contrast to earlier MFCC implementations, where these were dependent variables. Another difference from MFCC is that in HFCC-E the filter bandwidth is derived from the equivalent rectangular bandwidth (ERB), which is based on the critical-bands concept of Moore and Glasberg's expression rather than on the Mel scale [11]. Still, the centre frequency of the individual filters is computed using the Mel scale. Furthermore, in the HFCC-E scheme the filter bandwidth is further scaled by a constant, which Skowronski and Harris labelled the E-factor. Larger values of the E-factor, E = {4, 5, 6}, were reported [12] to contribute to improved noise robustness.
Fig. 5 HFCC implementation (speech signal → sampling & pre-emphasis → framing & windowing → DFT → absolute value → human-factor filter bank → log → DCT → HFCC coefficients)
PLP: The Perceptual Linear Predictive (PLP) speech analysis technique is based on the short-term spectrum of speech. PLP is a popular representation in speech recognition, and it is designed to find smooth spectra consisting of resonant peaks [13]. PLP parameters are the coefficients that result from standard all-pole modeling [14], which is effective in suppressing speaker-specific details of the spectrum. In addition, the PLP order is smaller than is typically needed by LPC-based speech recognition systems. PLP models human speech based on the concept of the psychophysics of hearing [13]. In PLP the speech spectrum is modified by a set of transformations that are based on models of the human auditory system. The PLP computation steps are critical-band spectral resolution, the equal-loudness hearing curve, and the intensity-loudness power law of hearing. Once the auditory-like spectrum is estimated, it is converted to autocorrelation values by a Fourier transform. The resulting autocorrelations are used as input to a standard linear predictive analysis routine, whose output is perceptually based linear prediction coefficients. Typically, these coefficients are then converted to cepstral coefficients via a standard recursion [14].
Fig. 6 PLP implementation (speech signal → DFT → critical-band spectral resolution → equal-loudness hearing curve → intensity-loudness power law of hearing → IFFT → autoregressive coefficients to LPC → LPC to cepstral coefficients)
RASTA-PLP: A popular speech feature representation is known as RASTA-PLP, an acronym for Relative Spectral Transform - Perceptual Linear Prediction. PLP was originally proposed by H. Hermansky as a way of warping spectra to minimize the differences between speakers while preserving the important speech information [13]. The term RASTA comes from the words RelAtive SpecTrA. RASTA filtering is often coupled with PLP for robust speech recognition. RASTA is a separate technique that applies a band-pass filter to the energy in each frequency subband in order to smooth over short-term noise variations and to remove any constant offset resulting from static spectral coloration in the speech channel, e.g. from a telephone line [15]. In essence, RASTA filtering serves as a modulation-frequency band-pass filter, which emphasizes the modulation frequency range most relevant to speech while discarding lower or higher modulation frequencies.
Fig. 7 RASTA-PLP model (speech signal → DFT → logarithm & RASTA filtering → equal-loudness curve → power law of hearing → inverse logarithm → IDFT → solving of a set of linear equations (Durbin) → cepstral recursion → RASTA-PLP cepstral coefficients)
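The band-pass step at the heart of RASTA can be sketched as follows. The transfer function follows the filter reported by Hermansky and Morgan [15], but the exact pole value (taken here as 0.98) varies between implementations, so treat the constants as illustrative.

```python
import numpy as np
from scipy.signal import lfilter

def rasta_filter(log_spectra):
    """Band-pass filter each critical-band log-energy trajectory over time,
    i.e. along the frame axis. log_spectra: array (n_frames, n_bands).
    H(z) = 0.1 * (2 + z^-1 - z^-3 - 2 z^-4) / (1 - 0.98 z^-1)."""
    b = 0.1 * np.array([2.0, 1.0, 0.0, -1.0, -2.0])   # FIR differentiator part
    a = np.array([1.0, -0.98])                        # leaky integrator pole
    return lfilter(b, a, log_spectra, axis=0)
```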

CONCLUSION
MFCC, PLP and LPC are the acoustic features most commonly proposed for language identification. The accuracy and speed of an LID system can be enhanced by combining more features of the speech signal. The following table summarizes the feature extraction techniques discussed above.

Table No. 1 Concluding highlights of the different feature extraction methods

1. Linear Predictive Coding
   Property: Static feature extraction method; 10 to 16 lower-order coefficients.
   Comments: The LP algorithm is a practical way to estimate the formants of the speech signal, especially at high frequencies. It is used for feature extraction at lower order.

2. Cepstral Analysis
   Property: Static feature extraction method; power spectrum.
   Comments: The cepstrum is a practical way to extract the fundamental frequency of the speech signal. The cepstral algorithm shows some limitations in the localization of formants, especially at high frequencies.

3. Mel Frequency Cepstral Coefficients
   Property: The result of the short-term energy spectrum expressed on a Mel scale, with linear frequency spacing below 1000 Hz and logarithmic spacing above 1000 Hz.
   Comments: MFCC reduces the frequency information of the speech signal to a small number of coefficients. It is easy and relatively fast to compute.

4. Linear Frequency Cepstral Coefficients
   Property: Uses a bank of equal-bandwidth filters with linear spacing of the central frequencies.
   Comments: The equal bandwidth of all filters renders unnecessary the effort of normalizing the area under each filter.

5. Human Factor Cepstral Coefficients
   Property: Uses Moore and Glasberg's expression for critical bandwidth (ERB), a function only of center frequency, to determine filter bandwidth.
   Comments: Larger values of the E-factor contribute to improved noise robustness.

6. Perceptual Linear Predictive Analysis
   Property: The short-term spectrum is modified based on psychophysically based transformations.
   Comments: Lower-order analysis results in better estimates of recognition parameters for a given amount of training data.

7. RASTA-PLP
   Property: Applies a band-pass filter to each spectral component in the critical-band spectrum estimate.
   Comments: These features are best used when there is a mismatch in the analog input channel between the development and fielded systems.

REFERENCES:
[1] Vibha Tiwari, "MFCC and its applications in speaker recognition", International Journal on Emerging Technologies, 1(1): 19-22, 2010.
[2] Premakanthan P. and Mikhael W. B., "Speaker Verification/Recognition and the Importance of Selective Feature Extraction: Review", MWSCAS, Vol. 1, pp. 57-61, 2001.
[3] B. S. Atal, "Automatic Recognition of Speakers from their Voices", Proceedings of the IEEE, Vol. 64, pp. 460-475, 1976.
[4] Douglas A. Reynolds and Richard Rose, "Robust Text Independent Speaker Identification using Gaussian Mixture Speaker Models", IEEE Transactions on Speech and Audio Processing, Vol. 3, No. 1, January 1995.
[5] Bageshree V. Sathe-Pathak and Ashish R. Panat, "Extraction of Pitch and Formants and its Analysis to identify 3 different emotional states of a person", International Journal of Computer Science Issues, Vol. 9, Issue 4, No. 1, July 2012.
[6] D. A. Reynolds, "Experimental evaluation of features for robust speaker identification", IEEE Trans. Speech Audio Process., Vol. 2(4), pp. 639-643, Oct. 1994.
[7] Kumar, P., A. N. Astik Biswas and M. Chandra, "Spoken Language identification using hybrid feature extraction methods", J. Telecomm., 1: 11-15, 2010.
[8] Ming, J., T. Hazen, J. Glass and D. Reynolds, "Robust speaker recognition in noisy conditions", IEEE Trans. Audio Speech Language Proc., 15: 1711-1723, DOI: 10.1109/TASL.2007.899278, 2007.
[9] Hassan Euaidi and Jean Rouaf, "Pitch and MFCC dependent GMM models for speaker identification systems", CCECE, IEEE, 2004.
[10] Childers, D. G., Skinner, D. P., Kemerait, R. C., "The cepstrum: A guide to processing", Proceedings of the IEEE, Vol. 65, Issue 10, Oct. 1977, pp. 1428-1443.
[11] Mermelstein P. and Davis S., "Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences", IEEE Trans. on ASSP, Aug. 1980, pp. 357-366.
[12] Skowronski, M. D., Harris, J. G., "Exploiting independent filter bandwidth of human factor cepstral coefficients in automatic speech recognition", J. Acoustic Soc. Am., 116(3): 1774-1780, 2004.
[13] H. Hermansky, "Perceptual linear predictive (PLP) analysis for speech", J. Acoustic Soc. Am., pp. 1738-1752, 1990.
[14] L. Rabiner and R. Schafer, Digital Processing of Speech Signals, Prentice Hall, Englewood Cliffs, NJ, 1978.
[15] H. Hermansky and N. Morgan, "RASTA Processing of Speech", IEEE Trans. on Speech and Audio Processing, Vol. 2, pp. 578-589, Oct. 1994.

















Design of Reconfigurable FFT/IFFT for Wireless Application
Preeti Mankar¹, L. P. Thakare¹, A. Y. Deshmukh¹
¹Scholar, Department of Electronics Engg., GHRCE, Nagpur
E-mail: preetimankar414@gmail.com

ABSTRACT - Communication is one of the important aspects of life. The field of communication has seen fast growth with advancing age and growing demands. The digital domain is now being used for the transfer of signals in place of the analog domain, and single carrier waves are being replaced by multiple carriers for better transmission. Multi-carrier systems like CDMA and OFDM are nowadays implemented commonly. The orthogonal frequency division multiplexing (OFDM) modulation format has been proposed for a variety of digital communications applications such as DVB-T and wideband wireless communication systems. OFDM requires the use of the FFT and IFFT for conversion of the signal from the time domain to the frequency domain and vice versa, respectively.
The number of FFT/IFFT points required changes from application to application, and from this arises the concept of reconfiguration. This concept of reconfiguration may be used to make the system applicable to various specifications. This paper discusses the use of a reconfigurable FFT in wireless systems to reduce the complexity, cost and power consumption of the system.

Keywords: OFDM, FFT/IFFT, floating point representation, complex multiplier, reconfigurable FFT/IFFT

INTRODUCTION
OFDM can be seen as either a modulation technique or a multiplexing technique. One of the main reasons to use OFDM is to increase
the robustness against frequency selective fading or narrowband interference. Error correction coding can then be used to correct for
the few erroneous subcarriers. The concept of using parallel data transmission and frequency division multiplexing was published in
the mid-1960s [1, 2]. Some early development is traced back to the 1950s [3]. OFDM has been adopted as a standard for various
wireless communication systems such as wireless local area networks, wireless metropolitan area networks, digital audio broadcasting,
and digital video broadcasting. It is widely known that OFDM is an attractive technique for achieving high data transmission rate in
wireless communication systems and it is robust to the frequency selective fading channels.


Figure 1. A basic diagram of an OFDM transceiver
There are many types of FFT architectures used in OFDM systems. They are mainly categorized into three types, namely the parallel architecture, the pipeline architecture and the shared-memory architecture. The high performance of parallel and pipelined architectures is achieved by having more butterfly processing units, but they consume a larger area than the shared-memory architecture. On the other hand, the shared-memory architecture requires only one butterfly processing unit and has the advantage of area efficiency.
The rest of the paper is organized as follows. In Section II, the FFT algorithm is reviewed. Section III includes a comparative study of various methods and architectures available for reconfiguring FFTs for wireless systems. Section IV gives a tabular comparison of all the methods reviewed. Finally, a conclusion is given in Section V.

FFT ALGORITHM

The fast Fourier transform (FFT) has been playing an important role in digital signal processing and wireless communication systems. The choice of FFT size is decided by different operating standards, so it is desirable to make the FFT size changeable according to the operating environment. The Fourier transform is a very useful operator for image and signal processing; it has been extensively studied and the literature on the subject is very rich. The Fourier transform of a discrete-time signal is used for digital signal processing, and its expression is given below:

X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n}   (1)

It appears obvious that this expression cannot be computed in finite time due to the infinite bounds. Hence, the usually computed expression is the N-point fast Fourier transform, given below:

X[k] = \sum_{n=0}^{N-1} x[n] e^{-j 2\pi k n / N}, \quad k = 0, \ldots, N-1   (2)

The expression of the FFT is bounded and computable with a finite algorithmic complexity. This complexity is expressed as an order of multiplications and additions. Computing an N-point FFT without any simplification requires an algorithmic complexity of O(N²) multiplications and O(N²) additions, where O denotes the "order of". Note that the real number of additions is N(N-1), which is O(N²). This complexity is, however, not acceptable for the large FFT sizes that are used in many digital communications standards.
FFT and IFFT algorithms are of three types: fixed-radix FFT, mixed-radix FFT and split-radix FFT [4]. Fixed-radix decompositions are algorithms in which the same decomposition is applied repeatedly to the DFT equation. The most common decompositions are radix-2, radix-4, radix-8 and radix-16. An algorithm of radix r can reduce the order of computational complexity to O(N log_r(N)). Mixed-radix refers to using a variety of radices in succession; one application of this method is to calculate FFTs of irregular sizes. Mixed-radix can also refer to a computation that uses multiple radices with a common factor, such as a combination of radices 2, 4, and 8. These can be ordered in a way that simplifies and optimizes calculations of specific sizes or increases the efficiency of computing FFTs of variable-sized inputs. The split-radix algorithm is a method of blending two or more radix sizes and reordering the sequence of operations in order to reduce the number of computations while maintaining accuracy. Split-radix FFT algorithms assume two or more parallel radix decompositions in every decomposition stage to fully exploit the advantages of different fixed-radix FFT algorithms.
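To make the fixed-radix idea concrete, here is a minimal recursive radix-2 decimation-in-time FFT. It assumes a power-of-two length and is written for clarity, not for the hardware efficiency this paper is concerned with.

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return x
    even = fft_radix2(x[0::2])          # N/2-point FFT of even-index samples
    odd = fft_radix2(x[1::2])           # N/2-point FFT of odd-index samples
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n)   # twiddle factors
    return np.concatenate([even + tw * odd, even - tw * odd])

# Quick check against numpy's reference FFT
x = np.random.randn(32) + 1j * np.random.randn(32)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```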
FLOATING POINT REPRESENTATION

Floating point numbers are one possible way of representing real numbers in binary format; the IEEE 754 standard presents two different floating point formats, the binary interchange format and the decimal interchange format. Fig. 2 shows the IEEE 754 single-precision binary format representation; it consists of a one-bit sign (S), an eight-bit exponent (E), and a twenty-three-bit fraction (M, or mantissa) [5]. If the exponent is greater than 0 and smaller than 255, and there is a 1 in the MSB of the significand, then the number is said to be a normalized number; in this case the real number is represented by

V = (-1)^S \times 1.M \times 2^{E-127}

Figure 2. IEEE single-precision floating point format
Sign Bit: This bit represents whether the number is positive or negative; 0 denotes a positive number and 1 denotes a negative number.
Exponent: This field represents both positive and negative exponents. This is done by adding a bias to the actual exponent in order to get the stored exponent. For IEEE 754 single precision this value is 127.
Mantissa: This field is also known as the significand and represents the precision bits of the number. It comprises the implicit leading bit and the fraction bits.
Table I

In the proposed work, the BCD input is first converted into floating-point format. The processes of addition, subtraction and multiplication in the middle stage, i.e. the complex multiplier, take place in floating-point format only. At the end, the floating-point output is converted back into BCD. The system can process signed, unsigned and decimal numbers, thereby increasing the range.
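As a quick illustration of the field layout of Fig. 2, the snippet below unpacks a value into its sign, biased exponent and mantissa fields. It only demonstrates the representation; it is not the paper's BCD-to-floating-point converter.

```python
import struct

def float_to_ieee754_bits(x):
    """Pack a float into IEEE 754 single precision and split the three fields."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF          # stored exponent, biased by 127
    mantissa = bits & 0x7FFFFF              # 23 fraction bits
    return sign, exponent, mantissa

s, e, m = float_to_ieee754_bits(-6.25)
# value = (-1)^s * 1.m * 2^(e-127) for normalized numbers
print(s, e - 127, hex(m))   # 1 2 0x480000  -> -1.5625 * 2^2 = -6.25
```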

Reconfigurable architecture
A reconfigurable FFT architecture can be implemented by cascading several radix-2 stages in order to accommodate different FFT sizes. The signal-flow graphs for radix-2 to radix-2^4 butterflies are shown in Fig. 3.

Fig. 3 Various butterfly operations
Radix-2
The radix-2 PE applies one stage of radix-2 butterfly computations to its data. It is used when the size of the frame to be processed is 32 or 128 points [7]. The radix-2 PE is realized as a simplified radix-4 PE (Figure 2). The butterfly core is replaced with the simpler radix-2 butterfly network, consisting of two complex adders/subtractors and one complex multiplier. This circuit, though, is optimized further. In the split-radix 128- and 32-point FFT computation, the twiddle factors for all radix-2 butterflies have the constant value of (1 + 0j). Plugging this into the radix-2 butterfly equations, we obtain:

X_0 = x_0 + x_1, \qquad X_1 = x_0 - x_1

Consequently, the complex multiplier (in the butterfly core) and the twiddle generator blocks are omitted.
The general fixed-radix algorithms decompose the FFT by letting r = r₁ = r₂ = … = r_m. The r-point DFT is called the butterfly, which is the basic operation unit. Higher-radix decomposition is often preferred because it can reduce the computational complexity by reducing the number of complex multiplications required. The trade-off is that the hardware complexity of the butterfly grows as the radix becomes higher. However, a fixed-radix algorithm is sometimes found deficient due to its limitation on FFT size (a power of r): as we prefer higher-radix algorithms to reduce the computational complexity, the flexibility of the FFT size becomes limited. Therefore, the mixed-radix algorithm is adopted in our design to keep the architecture flexible while using a high-radix algorithm.
SIMULATION RESULTS

1. BCD to Floating Point Representation:










2. FLOATING POINT REPRESENTATION TO BCD:



3. 2- POINT FFT:




4. 4-POINT FFT:


5. 4-POINT IFFT


CONCLUSIONS

This paper presents various methods for programmable FFT/IFFT processor design for OFDM applications. The paper includes various low-power, reduced-complexity, and low-cost methods of reconfigurable FFT/IFFT design. The method discussed shows that by making use of the floating-point format, the FFT/IFFT of signed, unsigned and decimal numbers can be obtained efficiently. Also, by making use of a reconfigurable architecture, the system itself can be capable of switching to the appropriate radix algorithm according to the provided input and can provide the correct computation result. Using Vedic mathematics for the complex computations helps to increase the speed of the computations and provides efficient results.


REFERENCES:

[1] Baig I., Jeoti V., "DCT precoded SLM technique for PAPR Reduction", Intelligent and Advanced Systems International Conference, 15-17 June 2010.
[2] S. P. Vimal, K. R. Shankar Kumar, "A New SLM Technique for PAPR Reduction in OFDM Systems", European Journal of Scientific Research, ISSN 1450-216X, Vol. 65, No. 2, 2011.
[3] "OFDM Simulation using Matlab", Smart Research Laboratory, faculty advisor Dr. Mary Ann Ingram, Guillermo Acosta, Aug. 2000.
[4] Md Nooruzzaman Khan, M. Mohamed Ismail, Dr. P. K. Jawahar, "An Efficient FFT/IFFT Architecture for Wireless Communication", ICCSP '12.
[5] Preethi Sudha Gollamudi, M. Kamaraju (Gudlavalleru Engineering College, Andhra Pradesh, India), "Design Of High Performance IEEE-754 Single Precision (32 bit) Floating Point Adder Using VHDL", International Journal of Engineering Research & Technology (IJERT), Vol. 2, Issue 7, July 2013.
[6] Sharon Thomas and V. Sarada (SRM University), "Design of Reconfigurable FFT Processor With Reduced Area And Power", ISSN (Print): 2320-8945, Volume 1, Issue 4, 2013.
[7] Konstantinos E. Manolopoulos, Konstantinos G. Nakos, Dionysios I. Reisis and Nikolaos G. Vlassopoulos (Electronics Laboratory, Department of Physics, National and Capodistrian University of Athens), "Reconfigurable Fast Fourier Transform Architecture for Orthogonal Frequency Division Multiplexing Systems".
[8] Anuj Kumar Varshney, Vrinda Gupta (National Institute of Technology, Kurukshetra, Haryana, India), "Power-Time Efficient Algorithm for Computing Reconfigurable FFT in Wireless Sensor Network", International Journal of Computer Science & Engineering Technology (IJCSET).
[9] SPNA071A, November 2006, Implementing Radix-2.













A New Technique for Protecting Confidential information Using
Watermarking
Gayathri. M¹, Pushpalatha. R¹, Yuvaraja. T²
¹PG Scholar, Department of ECE, Kongunadu College of Engineering and Technology, Tamilnadu, India
²Assistant Professor, Department of ECE, Kongunadu College of Engineering and Technology, Tamilnadu, India
E-mail: mgayathri01@gmail.com
ABSTRACT - A new approach to image watermarking based on the RSA encryption technique for lossless medical images is proposed. This paper presents a strategy for attaining maximum embedding capacity in an image: to determine the amount of information to be added in each pixel, the maximum possible number of neighboring pixels is analyzed for their frequencies. The technique provides a seamless insertion of an image into a carrier video, and reduces the error assessment and artifact insertion required to a minimum. Two or more bits in each pixel can be used to embed the message to increase the embedding capacity, at a higher risk of detectability and image degradation. The RSA technique may use a significant-bit insertion scheme in which the number of bits of data added in each pixel remains constant, or a variable least-significant-bit insertion in which the number of bits added in each pixel varies with the surrounding pixels to avoid degrading the image fidelity.
Keywords: watermarking, mean square error, encryption, decryption, SPIHT, wavelet, RSA algorithm.
1. INTRODUCTION
A watermark is a recognizable image or pattern in paper that appears as various shades of lightness/darkness when viewed by transmitted light, caused by density or thickness variations in the paper. Watermarks have been used on currency, postage stamps, and other government documents to discourage counterfeiting. They are often used as security features of passports, banknotes, postage stamps, and other documents to prevent counterfeiting. Encoding an identifying code into digitized video, music, pictures, or another file is known as digital watermarking.

A watermark is made by impressing a water-coated metal stamp or dandy roll onto the paper during manufacturing. Artists can copyright their work by hiding their name within the image. Watermarking is also applicable to other media, such as digital video and audio. There are a number of possible applications for digital watermarking technologies, and this number is increasing rapidly. For example, in data security, watermarks may be used for authentication, certification, and conditional access. Certification is a vital issue for official documents, like identity cards or passports.

2. RSA Algorithm
RSA is an algorithm for public-key cryptography that is based on the presumed difficulty of factoring large integers, the factorization problem. A user of RSA creates and then publishes the product of two large prime numbers, together with an auxiliary value, as their public key. The prime factors must be kept secret. Anyone can use the public key to encrypt a message, but with currently published methods, if the public key is large enough, only someone with knowledge of the prime factors can feasibly decrypt the message. Whether breaking RSA encryption is as hard as factoring is an open question known as the RSA problem.
The RSA algorithm involves three steps, given below:
Key generation
Encryption
Decryption

2.1 Key generation
RSA involves a public key and a private key. The public key is known by everybody and is used for encrypting messages. Messages encrypted with the public key can only be decrypted in a reasonable amount of time using the private key.
2.2 Encryption
For example, Alice transmits her public key (n, e) to Bob and keeps the private key secret. Bob then wishes to send message M to Alice. He first turns M into an integer m, such that 0 ≤ m < n, by using an agreed-upon reversible protocol known as a padding scheme. He then computes the ciphertext c as

c = m^e \bmod n

2.3 Decryption
Alice can recover m from c by using her private key exponent d via computing

m = c^d \bmod n

Given m, she can recover the original message M by reversing the padding scheme.
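A toy walk-through of the three steps with textbook-sized primes; real deployments use large random primes and a padding scheme such as OAEP, so the numbers below are purely illustrative.

```python
# Key generation with tiny primes (illustration only)
p, q = 61, 53
n = p * q                      # modulus: 3233
phi = (p - 1) * (q - 1)        # Euler's totient: 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # modular inverse (Python 3.8+): 2753

m = 65                         # padded message as an integer, 0 <= m < n
c = pow(m, e, n)               # encryption: c = m^e mod n
assert pow(c, d, n) == m       # decryption: m = c^d mod n
```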

3. DISCRETE WAVELET TRANSFORM
The DWT permits image decomposition into several types of coefficients while preserving the image information. Such coefficients coming from different images can be suitably combined to obtain new coefficients, so that the information in the original images is collected appropriately. In the discrete wavelet transform (DWT), a two-channel filter bank is employed. When decomposition is performed, the approximation and detail components are separated; the 2-D discrete wavelet transform converts the image from the spatial domain to the frequency domain.
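A minimal one-level 2-D DWT sketch is shown below using the Haar wavelet; the paper does not name its wavelet, so Haar is an assumption chosen for brevity.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT. Returns the approximation (LL) and the
    horizontal, vertical and diagonal details (LH, HL, HH)."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    # Row transform: averages and differences of adjacent columns
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Column transform on each half-band
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh
```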

4. PEAK SIGNAL TO NOISE RATIO
PSNR is most typically used to measure the quality of reconstruction of lossy compression codecs (e.g., for image compression). The signal in this case is the original data, and the noise is the error introduced by compression. When comparing compression codecs, PSNR is an approximation to human perception of reconstruction quality. Although a higher PSNR generally indicates that the reconstruction is of higher quality, in some cases it may not.

PSNR is most simply defined via the mean squared error (MSE). Given a noise-free m×n monochrome image I and its noisy approximation K, the MSE is defined as:

MSE = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} [I(i,j) - K(i,j)]^2

The PSNR is defined as:

PSNR = 20 \log_{10}(MAX_I) - 10 \log_{10}(MSE)

Here, MAX_I is the maximum possible pixel value of the image. When the pixels are represented using eight bits per sample, this is 255.

For color pictures with three RGB values per pixel, the definition of PSNR is the same except that the MSE is the sum over all squared value differences divided by the image size and by three. Alternately, for color pictures the image is converted to a different color space and PSNR is reported against every channel of that color space.
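The definitions above translate directly into code; this sketch assumes 8-bit images unless max_val is overridden.

```python
import numpy as np

def psnr(reference, distorted, max_val=255.0):
    """PSNR between a reference image I and its approximation K,
    following the MSE-based definition above."""
    err = reference.astype(float) - distorted.astype(float)
    mse = np.mean(err ** 2)
    if mse == 0:
        return float('inf')       # identical images
    return 20 * np.log10(max_val) - 10 * np.log10(mse)
```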

4.1 Testing Topology
Depending on the information made available to the algorithm, video quality assessment algorithms can be divided into three categories:
1. A Full Reference (FR) algorithm has access to and makes use of the original reference sequence for a comparison (i.e. a difference analysis). It can compare each pixel of the reference sequence to each corresponding pixel of the degraded sequence. FR measurements give the highest accuracy and repeatability but tend to be processing intensive.
2. A Reduced Reference (RR) algorithm uses a reduced side channel between the sender and the receiver which is not capable of transmitting the complete reference signal. Instead, parameters are extracted at the sending side that help predict the quality at the receiving side. RR measurements may offer reduced accuracy and represent a working compromise if bandwidth for the reference signal is limited.
3. A No Reference (NR) algorithm only uses the degraded signal for the quality estimation and has no information about the original reference sequence. NR algorithms give low-accuracy estimates only, as the originating quality of the source reference is completely unknown. A common variant of NR algorithms does not analyze the decoded video at the pixel level but works on an analysis of the digital bit stream at the IP packet level only; the measurement is consequently restricted to a transport stream analysis.
Peak Signal to Noise Ratio (PSNR) is a ubiquitously used image processing function for comparing two pictures. It is the most rudimentary estimate of the difference between two images and is based on the mean squared error (MSE).

5. BLOCK DIAGRAM

Fig-1 Block diagram of the system
To hide an image in the carrier video, the image is encoded using SPIHT and the discrete wavelet transform is then applied. Watermarking is used to hide that image in the video. After hiding the image in the video, there is no visible difference between the input video and the watermarked video. The image can be recovered by using SPIHT decoding and the inverse wavelet transform. A simple illustration of the bit-level insertion idea follows.
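The sketch below shows plain one-bit-per-pixel LSB embedding, the simplest instance of the insertion idea mentioned in the abstract. It stands in for, and is not, the paper's SPIHT + DWT + RSA pipeline.

```python
import numpy as np

def embed_lsb(carrier, bits):
    """Write one payload bit into the least significant bit of each pixel.
    carrier: uint8 image; bits: sequence of 0/1, no longer than the pixel count."""
    flat = carrier.flatten()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b      # clear the LSB, then write the bit
    return flat.reshape(carrier.shape)

def extract_lsb(stego, n_bits):
    """Recover the first n_bits payload bits from the stego image."""
    return [int(p & 1) for p in stego.flatten()[:n_bits]]
```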
The output is given below.

Fig-2 Output
6. CONCLUSION
Watermarking is used in covert communication to transport secret information. To hide a secret message in an image, the message is embedded into a smaller matrix of size 8x8 and inserted into the input image. In this paper the RSA algorithm is used to hide an image in a video, with the video acting as the carrier. An improvement to this application would be extending its functionality to support hiding data in other video or file formats.
REFERENCES:
[1] Ayman Ibaida, Ibrahim Khalil (2013), "Wavelet Based ECG Steganography for Protecting Patient Confidential Information in Point-of-Care Systems", IEEE Transactions on Biomedical Engineering, pp. 1-9.
[2] Golpira, H. and Danyali, H. (2009), "Reversible blind watermarking for medical images based on wavelet histogram shifting", IEEE, pp. 31-36.
[3] Ibaida, A., Khalil, I., and Sufi, F. (2010), "Cardiac abnormalities detection from compressed ECG in wireless telemonitoring using principal components analysis (PCA)", pp. 207-212.
[4] Kaur, S., Singhal, R., Farooq, O., and Ahuja, B. (2010), "Digital Watermarking of ECG Data for Secure Wireless Communication", pp. 140-144.
[5] Lee, W. and Lee, C. (2001), "A cryptographic key management solution for HIPAA privacy/security regulations", Vol. 12, No. 1, pp. 34-41.
[6] Malasri, K. and Wang, L. (2007), "Addressing security in medical sensor networks", ACM, p. 12.
[7] Marvel, L., Boncelet, C., and Retter, C. (1999), "Spread spectrum image steganography", Vol. 8, No. 8, pp. 1075-1083.
[8] Ming Li, Shucheng Yu, Yao Zheng, Kui Ren, and Wenjing Lou (2013), "Scalable and Secure Sharing of Personal Health Records in Cloud Computing Using Attribute-Based Encryption", Vol. 24, No. 1, pp. 131-143.
[9] Wang, H., Peng, D., Wang, W., Sharif, H., Chen, H., and Khoynezhad, A. (2010), "Resource-Aware secure ECG healthcare monitoring through body sensor networks", Vol. 17, No. 1, pp. 12-19.
[10] Zheng, K. and Qian, X. (2008), "Reversible Data Hiding for Electrocardiogram Signal Based on Wavelet Transform", CIS08, Vol. 1.
[11] Fei Hu, Meng Jiang (2007), "Privacy-Preserving Telecardiology Sensor Networks: Toward a Low-Cost Portable Wireless Hardware/Software Codesign", Vol. 11, No. 6.
[12] Y. Lin, I. Jan, P. Ko, Y. Chen, J. Wong (2004), "A wireless PDA-based physiological monitoring system for patient transport", Vol. 8, No. 4, pp. 439-447.



Design of Substrate Integrated Waveguide Bandpass Filter with CSRRs in the Microstrip Line
DAMOU Mehdi¹,², NOURI Keltouma¹,², Tayeb Habib Chawki BOUAZZA¹, Meghnia Feham²
¹Laboratoire de Technologies de Communications (LTC), Faculté de Technologie, Université Dr Moulay Tahar, BP 138 Ennasr, Saïda, Algérie
²Laboratoire de recherche Systèmes et Technologies de l'Information et de la Communication (STIC), Faculté des Sciences, Université de Tlemcen, BP 119 Tlemcen, Algérie
E-mail: bouazzamehdi@yahoo.fr
Abstract - A novel band-pass substrate integrated waveguide (SIW) filter based on complementary split-ring resonators (CSRRs) is presented in this work: an X-band wideband bandpass filter based on a novel SIW-CSRR cell. In the cell, the CSRRs are etched on the top plane of the SIW with high accuracy, so that the performance of the filter is kept as good as possible. The filter, consisting of three cascaded cells, is designed for compact size; three different CSRR cells are etched in the top plane of the SIW for transmission-zero control. A demonstration band-pass filter is designed, and it agrees well with the simulated results. The structure is designed with the numerical Method of Moments (MoM) using CST on a single substrate of RT/Duroid 5880. Simulated results are presented and discussed.
Index Terms: substrate integrated waveguide, complementary split-ring resonators (CSRRs), band-pass, via, SIW, simulation

Introduction: Very recently, complementary split-ring resonator (CSRR) elements have been proposed for the synthesis of negative-permittivity and left-handed (LH) metamaterials in planar configuration [1] (see Fig. 1). As explained in [2], CSRRs are the dual counterparts of split-ring resonators (SRRs), also depicted in Fig. 1, which were proposed by Pendry in 1999. It has been demonstrated that CSRRs etched in the ground plane or in the conductor strip of planar transmission media (microstrip or CPW) provide a negative effective permittivity to the structure, and signal propagation is precluded (stopband behavior) in the vicinity of their resonant frequency [2]. CSRRs have been applied to the design of compact band-pass filters with high performance and controllable characteristics [3]. Recently, a new concept, the substrate integrated waveguide (SIW), has attracted much interest in the design of microwave and millimeter-wave integrated circuits. The SIW is synthesized by placing two rows of metallic via-holes in a substrate. The field distribution in an SIW is similar to that in a conventional rectangular waveguide. Hence, it has the advantages of low cost, high Q-factor, etc., and can easily be integrated into microwave and millimeter-wave integrated circuits [4]. This technology is also feasible for waveguides in low-temperature co-fired ceramic (LTCC). SIW components such as filters, multiplexers, and power dividers have been studied by researchers in [5]. In this paper, a band-pass SIW filter based on CSRRs is proposed for the first time. The filter consists of the input and output coupling lines with the CSRR-loaded SIW. Using the high-pass characteristic of the SIW and the band-stop characteristic of the CSRRs, a bandpass SIW filter is designed. In this paper we will carry out a detailed investigation of CSRR-based stop-band filters: starting with a single CSRR etched in the microstrip line, finding its stop-band characteristics and quality factor; then the effect of the number of CSRR etchings and their periodicity on the stop-band filter performance will be investigated.
ANALYSIS OF SIW-CSRRs CELL

The proposed SIW-CSRR cell is shown in Fig. 1. Since the CSRRs are etched into the top metal cover of the SIW, it is quite convenient for system integration. For this proposed SIW-CSRR cell, the bandpass function is of the composite high-low (Hi-Lo) type, i.e., it is a combination of the highpass guided-wave function of the SIW and the bandgap function of the CSRRs.
PARAMETER DESIGN OF SIW
The SIW is constructed from the top and bottom metal planes of the substrate and two arrays of via holes in both side walls, as shown in Fig. 2. Each via hole must be shorted to both planes in order to provide vertical current paths; otherwise the propagation characteristics of the SIW will be significantly degraded. Since the vertical metal walls are replaced by via holes, the propagating modes of the SIW are very close to, but not exactly the same as, those in a rectangular waveguide [6].







By using the equivalent resonance frequency, the size of the SIW cavity is determined from [7]:

f_{101} = \frac{c}{2\sqrt{\varepsilon_r}} \sqrt{\left(\frac{1}{w_{eff}}\right)^2 + \left(\frac{1}{l_{eff}}\right)^2}   (1)

Fig. 1 Geometries of the CSRRs and the SRRs; grey zones represent the metallization

Fig. 2 Topology of the substrate integrated waveguide

This is to ensure that the SIW filter is able to support the TE10 mode in the operating frequency range. The TE-field distribution in the SIW is just like that in a conventional rectangular waveguide. The effective width and length of the SIW cavity can be determined from:

w_{eff} = w - \frac{D^2}{0.95\,P}, \qquad l_{eff} = l - \frac{D^2}{0.95\,P}   (2)

where w and l are the real width and length of the SIW cavity, D is the diameter and P is the pitch, i.e. the distance between the centers of adjacent via holes, shown in Fig. 3.

Figure 3: Via hole

Via holes form a main part of the SIW in order to realize the bilateral edge walls; the shrinking and large-scale integration of electronic devices place a remarkable demand on multilayer geometries, which is also important for discontinuities in multilayered circuits. The diameter and pitch are given by:

d < \frac{\lambda_g}{5}   (3)

p \leq 2d   (4)

In order to minimize the leakage loss between nearby holes, the pitch needs to be kept as small as possible, based on (3) and (4) above. The diameter of the via hole also contributes to the losses. As a consequence, the ratio d/p becomes more critical than the pitch size alone, because the pitch and diameter are interrelated and may degrade the return loss of the waveguide section seen from its input port [21, 11]. SIW components can be initially designed by using the equivalent rectangular waveguide model in order to reduce design complexity. The effective width of the SIW can be defined by:

a_{eff} = a - \frac{D^2}{0.95\,P}   (5)

Substrate Integrated Waveguide
The SIW features high-pass characteristics. It was demonstrated in [8] that a TE10-like mode in the SIW has dispersion characteristics that are almost identical with the mode of a dielectric-filled rectangular waveguide of equivalent width. This equivalent width is the effective width of the SIW, and can be approximated as follows:

a_{eqv} = a - \frac{D^2}{0.95\,P}   (6)

Then, the cutoff frequency of the SIW can be defined as f_c = c / (2\sqrt{\varepsilon_r}\, a_{eqv}), in which c is the light velocity in vacuum. Based on this property, existing design techniques for rectangular waveguide can be used in a straightforward way to analyze and design various components, just knowing a_eqv of the SIW. In this case, the SIW geometry size can be initially designed from (6).
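As a numerical check of equations (5)-(6), the snippet below computes the equivalent width and cutoff frequency using the dimensions quoted later in this paper (a = 14 mm, D = 0.8 mm, P = 1.6 mm, RT/Duroid 5880 with εr = 2.22); the resulting cutoff of roughly 7.4 GHz sits below the X-band passband, as one would expect.

```python
import math

c0 = 299792458.0                  # speed of light in vacuum (m/s)
a, d, p = 14e-3, 0.8e-3, 1.6e-3   # via-row spacing, via diameter, pitch (m)
eps_r = 2.22                      # RT/Duroid 5880 relative permittivity

a_eqv = a - d**2 / (0.95 * p)     # equivalent rectangular waveguide width, eq. (6)
fc = c0 / (2.0 * math.sqrt(eps_r) * a_eqv)   # TE10 cutoff frequency

print(f"a_eqv = {a_eqv*1e3:.2f} mm, fc = {fc/1e9:.2f} GHz")
```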
CSRR-Loaded SIW
Fig. 4 shows the layout of an SIW with CSRRs etched in the top substrate.

Let us now analyze the CSRR-loaded SIW. Since the CSRRs are etched in the centre of the top layer, and they are mainly excited by the electric field induced by the SIW, this coupling can be modeled by connecting the SIW capacitance to the CSRRs. According to this, the proposed lumped-element equivalent circuit for the CSRR-loaded SIW is depicted in Fig. 5. As long as the electrical size of the CSRRs is small, the structure can be described by means of lumped elements. In these models, L is the SIW inductance and C is the coupling capacitance between the SIW and the CSRR. The resonator is described by means of a parallel tank [9], Lc and Cc being the reactive elements and R accounting for losses.




Figure 4. Layout of an SIW with CSRRs etched on the top substrate side (top layer)

Fig. 5 The equivalent circuit model


In order to demonstrate the viability of the proposed technique, we have applied it to the determination of the electrical parameters of
the single cell CSSRs loaded SIW.
First Design Example
The specifications for the design example are:
Frequency Band : 2 to 15 GHz
Substrate : Duroid (cr = 2.2, h = 0.254 mrn)








The dimensions to the SIW are: a = 14 mm. The equivalent width of microstrip line w = 0.8 mm. The taper of microstrip line of
length equal to 5.5 mrn. and SIW dimensions are a = 14mm, D = 0.8mm and P = 1.6 mm, respectively. The width of the access lines
is 0.76 mm. The simulated (using CST Microwave Studio) S-parameters of Figure. 5 are shown in Fig. 6. It can be clearly found that
these structures exhibit similar characteristics except Figure. 6 Excellent results are also obtained for this transition, as shown in fig. 7

RESULTS AND DISCUSSION
A CSRR structure is designed to resonate at 9.17 GHz in the X-band microwave frequency region. The dimensions of the CSRR structure are c = 4 mm, d = 2 mm, f = 0.3 mm, s = 0.2 mm and g = 0.4 mm. The dependence of the resonant frequency on the dimensions of the CSRR structure is observed as follows: with an increase of the ring width (c) and gap width (d), the resonant frequency increases. The CSRR structure is placed in the microstrip line exactly below the center of a ground plane of width 2.89 mm on an RT/Duroid 5880 substrate (dielectric constant εr = 2.22, thickness h = 0.254 mm and tan δ = 0.002), as shown in Fig. 6. The same substrate is used for all other later designs. All the designs are simulated using CST Microwave Studio [8]. The simulation results for a single CSRR etched in a microstrip line are shown in Fig. 7.
Fig. 6 Topology of the substrate Integrated Waveguide













The plots of the scattering parameters versus frequency (GHz) show narrow stop-band characteristics at the resonant frequency of the CSRR at 8.3 GHz. By placing a single CSRR structure in the strip line, we can obtain a narrow stop band with a very low insertion-loss level, which is not possible with conventional microstrip resonators; it is difficult to achieve such a good narrow stop-band response with a single element of conventional resonators. The stop bandwidth of the above single-CSRR-loaded microstrip line filter is approximately 456 MHz at the resonant frequency of 9.17 GHz.
Design of the proposed transition
In order to combine SIW and microstrip technologies, SIW-microstrip transitions are required [10]-[11]. The SIW filter and tapered transition shown in Fig. 8 have been studied. The structure is simulated on an RT/Duroid 5880 substrate, which has a permittivity of 2.22 and a height of 0.254 mm; the distance between the rows of the via centres is w = 15 mm, the diameter of the metallic vias is D = 0.8 mm and the period of the vias is P = 1.6 mm. The width of the taper Wt is 1.72 mm, its length Lt is 5.5 mm, and the thickness of the ground plane and microstrip line is t = 0.035 mm.







Fig 7. Simulated frequency response corresponding to the basic cell

Fig 8. Configuration of the proposed SIW filter



Table 1: The simulated performance of this structure




















Our concern here is to enhance the stop band filter characteristics by increasing the number of CSRR structures in the ground plane. This is achieved by periodically placing more CSRRs with the same resonant frequency. Such a stop band filter structure is shown in Fig. 8; it has three CSRR structures under the strip line, all resonating at the same frequency of 8.3245 GHz. The distance between the centres of any two adjacent CSRRs is known as the period, and it is 6 mm for this filter. The simulation results are shown in Fig. 9.
CSRRs1 dimensions: c = 3.7 mm, d = 1.85 mm, f = 0.3 mm, s = 0.2 mm, g = 0.4 mm
CSRRs2 dimensions: c = 4 mm, d = 2 mm, f = 0.3 mm, s = 0.2 mm, g = 0.4 mm
CSRRs3 dimensions: c = 3.8 mm, d = 1.9 mm, f = 0.3 mm, s = 0.2 mm, g = 0.4 mm
SIW dimensions: Lt = 5.5 mm, Wt = 1.72 mm, WSIW = 0.8 mm, LSIW = 1.9 mm, D = 0.8 mm, P = 1.6 mm, a = 14 mm, L = 32 mm


The simulation results depicted in Fig. 9 show a stop band at 8.3245 GHz with a stop bandwidth of approximately 1.75 GHz (1750 MHz).























Fig 9. Simulation results for the proposed filter SIW-CSRRs cell with different values




Fig 10. Simulated S11 for the proposed SIW-CSRRs filter cell for t = 0.015, 0.025 and 0.035 mm



Fig 11. Simulated S21 for the proposed SIW-CSRRs filter cell for t = 0.015, 0.025 and 0.035 mm



In order to achieve a low-loss broadband response, the transition is designed by simultaneously considering both impedance matching and field matching. Thus, owing to the electric field distribution in the SIW, each transition is connected at the centre of the SIW width, where the electric field of the fundamental mode is maximum [8]. The transition is optimized through electromagnetic simulations by varying the dimensions (Lt, Wt) of the stepped geometry. After optimization, the retained dimensions are Wt = 1.72 mm and Lt = 5.5 mm.
The distribution of the electric field is given in Fig. 12.























Fig 12. Electric field distribution of proposed filter with three cascaded SIW-CSRRs cells (a) bottom layer, (b) top layer.










DESIGN OF SIW FILTER
Filter Configuration
Fig. 13 shows the proposed filter design; it includes two tapered microstrip transitions and four SIW resonator cavities.









Table 2: Dimensions of the proposed structure












Fig. 13. Configuration for the proposed SIW Filter: d = 2 mm, s = 0.2 mm, g = 0.4 mm, a = 14 mm, D = 0.8 mm and P = 1.6 mm.





CSRRs dimensions: c = 1.5 mm, f = 0.3 mm, d = 1 mm, s = 0.1 mm, f = 0.15 mm, g = 0.2 mm, L = 4 mm, x = 2 mm
SIW dimensions: Lt = 5.5 mm, Wt = 1.72 mm, WSIW = 0.8 mm, LSIW = 1.9 mm, D = 0.8 mm, P = 1.6 mm, a = 14 mm, L = 32 mm


Since the field distribution of the SIW mode has dispersion characteristics similar to the mode of a conventional dielectric waveguide, the design of the proposed SIW band-pass filter uses the same design method as a dielectric waveguide filter; the filter can be designed according to the specifications [9]-[10]. Fig. 14 shows the simulation results of the band-pass filter structure shown in Fig. 13. The scattering parameters (S11 and S12) are plotted against frequency from 1 GHz to 3 GHz. These results show a stop band with a mid-band frequency of 1.9 GHz; the stop bandwidth ranges from 8 GHz to 12 GHz, approximately 4 GHz. The period of the CSRR-based stop band filter is changed to 6 mm; the number of CSRRs in the ground plane is the same as in the previous design.









Its simulated S-parameters are shown in Fig. 14. From the simulated results, the filter has a centre frequency of 10 GHz, a fractional bandwidth of 72% and a return loss better than 20 dB over the whole passband.
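For reference, the fractional bandwidth follows from the band edges as FBW = (f_high − f_low)/f0; a minimal sketch, with band edges chosen hypothetically so that they reproduce the quoted 72% at 10 GHz:

```python
# Fractional bandwidth from band edges; the edges here are illustrative,
# not measured values from the paper.
f_low, f_high = 6.4e9, 13.6e9        # hypothetical edges of a 72% band at 10 GHz
f0 = (f_low + f_high) / 2
fbw = (f_high - f_low) / f0
print(f"f0 = {f0/1e9:.1f} GHz, FBW = {fbw:.0%}")   # 10.0 GHz, 72%
```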






Fig. 14. Scattering parameters of the stop band filter having 3 CSRRs in the stripline













Fig 15. Electric field distribution of proposed filter with three cascaded SIW-CSRRs cells (a) top layer, (b) bottom layer.



CONCLUSION
Using the sub-wavelength resonator component of left-handed metamaterials, namely the CSRR, more compact planar microstrip stop band filters can be realized. In this paper, a Substrate Integrated Waveguide (SIW) filter based on Complementary Split Ring Resonators (CSRRs) has been designed for X-band applications. The structure is simulated using CST software. This type of filter is suitable for high-density integrated microwave and millimetre-wave applications. The design method is discussed, and the effect of the coupling aperture width on coupling and isolation is studied. Using SIW techniques, the CSRR-based filter is compact and easy to integrate with other planar circuits, compared with a conventional waveguide implementation. A single CSRR particle under the microstrip line gives a very narrow stop band at its resonant frequency with an extremely high Q factor, while periodically placing these CSRR structures gives wide stop bands. This is especially beneficial for the growing number of microwave circuits required in compact integrated circuits (ICs) for wireless communications.

REFERENCES:
[1] David M. Pozar, Microwave Engineering, Third Edition, John Wiley & Sons Inc, 2005.
[2] Djerafi, T. and Ke Wu, "Super-Compact Substrate Integrated Waveguide Cruciform Directional Coupler," IEEE Microwave and Wireless Components Letters, Vol. 17, No. 11, pp. 757-759, Nov. 2007.
[3] Peng Chen, Guang Hua, De Ting Chen, Yuan Chun Wei and Wei Hong, "A double layer crossed over Substrate Integrated Waveguide wideband directional coupler," Microwave Conference, APMC 2008, Asia Pacific, pp. 1-4, 16-20 Dec. 2008.
[4] Pendry, J. B., A. J. Holden, D. J. Robbins and W. J. Stewart, "Magnetism from conductors and enhanced nonlinear phenomena," IEEE Trans. Microw. Theory Tech., Vol. 47, No. 11, Nov. 1999.
[5] Falcone, F., T. Lopetegi, J. D. Baena, R. Marques, F. Martin and M. Sorolla, "Effective negative-ε stop-band microstrip lines based on complementary split ring resonators," IEEE Microw. Wireless Compon. Lett., Vol. 14, No. 6, pp. 280-282, Jun. 2004.
[6] Burokur, S. N., M. Latrach and S. Toutain, "Analysis and design of waveguides loaded with split-ring resonators," Journal of Electromagnetic Waves and Applications, Vol. 19, No. 11, pp. 1407-1421, 2005.
[7] Xu, W., L. W. Li, H. Y. Yao, T. S. Yeo and Q. Wu, "Left-handed material effects on waves modes and resonant frequencies: filled waveguide structures and substrate-loaded patch antennas," Journal of Electromagnetic Waves and Applications, Vol. 19, No. 15, pp. 2033-2047, 2005.
[8] Bonache, J., I. Gil, J. Garcia-Garcia and F. Martin, "Novel microstrip bandpass filters based on complementary split-ring resonators," IEEE Trans. Microw. Theory Tech., Vol. 54, No. 1, pp. 265-271, Jan. 2006.

[9] Bonache, J., F. Martin, I. Gil, J. Garcia-Garcia, R. Marques and M. Sorolla, "Microstrip bandpass filters with wide bandwidth and compact dimensions," Microw. Opt. Technol. Lett., Vol. 46, No. 4, pp. 343-346, Aug. 2005.
[10] Cassivi, Y., L. Perregrini, P. Arcioni, M. Bressan, K. Wu and G. Conciauro, "Dispersion characteristics of substrate integrated rectangular waveguide," IEEE Microw. Wireless Compon. Lett., Vol. 12, No. 9, pp. 333-335, Sep. 2002.
[11] Lee, J. H., P. Stephane, P. J. Papapolymerou, L. Joy and M. M. Tentzeris, "Low-loss LTCC cavity filters using system-on-package technology at 60 GHz," IEEE Trans. Microwave Theory Tech., Vol. 53, No. 12, pp. 3817-3824, Dec. 2005.





















Comparative Study on Hemispherical Solar Still with Black Ink Added
Ajayraj S Solanki¹, Umang R Soni², Palak Patel¹
¹Research Scholar (M.E), Department of Mechanical Engineering, Sardar Patel Institute of Technology, Piludara, Mehsana
²Research Scholar (PhD), Department of Mechanical Engineering, PAHER, Udaipur, Rajasthan; Gayatrinagar Society, Gujarat
E-mail: soniur@gmail.com
Abstract - Water is the basic need for sustaining life on earth. With the passage of time, technical usage and waste disposal, along with human ignorance, have caused water pollution, which has led the world towards water scarcity. Solar distillation is one of the best available techniques to address this problem, but because of its low productivity it has not become commercial in the market, so much work is being done to improve solar still efficiency and productivity. This experimental study was carried out to measure the effect of black ink on a hemispherical solar still. Operation at different water depths with a constant proportion of ink in the water, and at the same depth with an increasing proportion of ink, was compared with a simple hemispherical solar still. From this experimental study, it was observed that the productivity of the hemispherical solar still increased with decreasing water depth. With 1.25% black ink added, productivity increased by 17% to 20%, and with 2% black ink added it increased by up to 25%.
Keywords - passive, hemispherical solar still, black ink, polycarbonate glass, condensing glass cover, active solar still, absorbing material
INTRODUCTION
Water is the basic need for sustaining life on the earth. With the passage of time, technical usage and waste disposal, along with human ignorance, have caused water pollution. This has led the world towards water scarcity. Due to water pollution, surface and underground water reservoirs are now highly contaminated. Most human diseases are due to brackish water; around 1.5 to 2 million children die and 35 to 40 million people are affected by water-borne diseases. Increasing industrial activity may lead to a situation whereby countries need to reconsider their options with respect to the management of their water resources. Only around 3% of the world's water is potable, and this amount is not evenly distributed on the earth, so both developed and developing countries face the problem of potable water.
Distillation is one of the oldest techniques for converting brackish or salty water into potable water. Various desalination technologies were invented from time to time and were accepted by people without knowing their future environmental consequences. Desalination techniques such as vapour compression distillation, reverse osmosis and electrolysis use electricity as input energy. In recent years, most countries in the world have been significantly affected by the energy crisis because of heavy dependence on conventional energy sources (coal power plants, fossil fuels, etc.), which has directly affected the environment and economic growth of these countries. The changing climate is one of the major challenges the entire world is facing today. The gradual rise in global average temperatures, the increase in sea level, and the melting of glaciers and ice sheets have underlined the immediate need to address the issue.
All these problems can be solved only through efficient and effective utilization of renewable energy resources such as solar, wind, biomass, tidal and geothermal energy. An alternative solution to this problem is a solar distillation system; a device that works on solar energy to distil water is called a solar still. A solar still is very simple to construct, but due to its low productivity and efficiency it is not popular in the market. A solar still works on sunlight, which is free of cost, but it requires more space.
SOLAR DISTILLATION SYSTEM
G. N. Tiwari et al. reviewed the present status of solar distillation systems for both passive and active models. A large group of authors in this field reported that the passive solar distillation system is a slow process for purification of brackish water. The yield of such a still is about 2 L/day per m² of still area, which is much less and may not be economically useful. However, the yield can be increased by integrating a solar collector into the basin; this is generally referred to as an active solar still. The collector may be a flat plate collector, solar concentrator or evacuated collector, producing temperatures within the range of 80-120 °C depending on the type of solar collector. However, the temperature within the solar still is reduced to about 80 °C due to the high heat capacity of the water mass within the basin. Hence there is a practical application of such active systems to extract the essence of medicinal plants placed under the solar still at about 80 °C, and systems used for this purpose have become economical. [1]
Salah Abdallah et al. measured the effect of various absorbing materials on the thermal performance of solar stills. From this experiment they found a strong need to improve the thermal performance of the single slope solar still and increase the production rate of distilled water. Different types of absorbing materials were used to examine their effect on the yield of solar stills.

These absorbing materials are of two types: coated and uncoated porous media (called metallic wiry sponges), and black volcanic rocks. Four identical solar stills were manufactured using locally available materials. The first three solar stills contained black coated and uncoated metallic wiry sponges made from AISI 430 steel, and black rocks collected from the Mafraq area in north-eastern Jordan. The fourth still was used as a reference still containing no absorbing materials (only a black painted basin). The results showed that the uncoated sponge gave the highest water collection during daytime, followed by the black rocks and then the coated metallic wiry sponges. [2] On the other hand, the overall average gains in collected distilled water, taking the overnight collections into consideration, were 28%, 43% and 60% for coated metallic wiry sponges, uncoated metallic wiry sponges and black rocks respectively.

V. K. Dwivedi et al. compared the internal heat transfer coefficients in passive solar stills obtained from different thermal models through experimental validation. In their paper, an attempt was made to evaluate the internal heat transfer coefficient of single and double slope passive solar stills in summer as well as winter climatic conditions for three different water depths (0.01, 0.02 and 0.03 m) using various thermal models. The experimental validation of distillate yield using different thermal models was carried out for the composite climate of New Delhi, India (latitude 28°35′N, longitude 77°12′E). By comparing theoretical values of hourly yield with experimental data, it was observed that Dunkle's model gives better agreement between theoretical and experimental results. Further, Dunkle's model was used to evaluate the internal heat transfer coefficient for both single and double slope passive solar stills. With the increase in water depth from 0.01 m to 0.03 m there was a marginal variation in the values of the convective heat transfer coefficients. It was also observed that on an annual basis the output of a single slope solar still is better (499.41 l/m²) compared with a double slope solar still (464.68 l/m²). [3]
Sangeeta Suneja et al. measured the effect of water depth on the performance of an inverted absorber double basin solar still. They presented a transient analysis of a double basin solar still; explicit expressions were derived for the temperatures of the various components of the inverted absorber double basin solar still and its efficiency. The effect of water depth in the lower basin on the performance of the system was investigated comprehensively. To illustrate the analytical results, numerical calculations were made using meteorological parameters for a typical winter day in Delhi. They observed that the daily yield of an inverted absorber double basin solar still increases with the increase of water depth in the lower basin for a given water mass in the upper basin. [4]
G. N. Tiwari et al. worked on computer modelling of passive/active solar stills using the inner glass temperature. Expressions for water and glass temperatures, hourly yield and instantaneous efficiency for both passive and active solar distillation systems were derived. The analysis is based on the basic energy balance for both systems. A computer model was developed to predict the performance of the stills based on both the inner and the outer glass temperatures of the solar stills. In this work two sets of values of C and n (C_inner, n_inner and C_outer, n_outer), obtained from the experimental data of January 19, 2001 and June 16, 2001 under Delhi climatic conditions, were used. It was concluded that (i) there is a significant effect of the operating temperature range on the internal heat transfer coefficients, and (ii) by considering the inner glass cover temperature there is reasonable agreement between the experimental and predicted theoretical results. [5]
Bhagwan Prasad and G. N. Tiwari presented an analysis of a double effect active solar distillation unit, incorporating the effect of climatic and design parameters. Based on an energy balance in a quasi-steady condition, an analytical expression for the hourly yield of each effect was derived. Numerical computations were carried out for a typical day in Delhi, and the results were compared with a single effect active solar distillation unit. It was observed that there is a significant improvement in performance for a minimum flow rate of water in the upper basin. [6]
T. Arunkumar et al. reported an experimental study on a hemispherical solar still. This work presents a new design of solar still with a hemispherical top cover for water desalination, with and without water flowing over the cover. The daily distillate output of the system was increased by lowering the temperature of the cover with water flowing over it. The fresh water production performance of this new still was observed at Sri Ramakrishna Mission Vidhyalaya College of Arts and Science, Coimbatore (11° North, 77° East), India. The efficiency was 34%, increasing to 42% with the top cover cooling effect. Diurnal variations of a few important parameters were observed during field experiments, such as water temperature, cover temperature, air temperature, ambient temperature and distillate output; the solar radiation incident on the still is also discussed. [7]
Basel I. Ismail presented the design and performance of a transportable hemispherical solar still. A simple transportable hemispherical solar still was designed and fabricated, and its performance was experimentally evaluated outdoors under Dhahran climatic conditions. It was found that over the hours of experimental testing through daytime, the daily distilled water output from the still ranged from 2.8 to 5.7 l/m² per day. The daily average efficiency of the still reached as high as 33%, with a corresponding conversion ratio near 50%. It was also found that the average efficiency of the still decreased by 8% when the saline water depth increased by 50%. [8]

S. Siva Kumar et al. worked on a single basin double slope solar still made of mild steel plate with different sensible heat storage materials such as quartzite rock, red brick pieces, cement concrete pieces, washed stones and iron scraps. Of the different energy storing materials used, quartzite rock was the most effective. [9]


Yousef H. Zurigat et al. worked on a regenerative solar still. They observed that insulation has a greater effect on the regenerative still than on a simple still, and that productivity increases by up to 50% if the wind speed is increased from 0 to 10 m/s. [10]

Hiroshi Tanaka et al. presented a theoretical analysis of a basin type solar still with an internal reflector (two side walls and the back wall). They observed that the benefit of a vertical external reflector would be small or even negligible; the daily productivity with the internal reflector was 16% greater than that with the vertical external reflector. [11]

Badshah Alam et al. presented a comparative evaluation of the annual performance of single slope passive and hybrid (PVT) active solar stills. A higher yield was obtained from the active solar still, and the ratio depends on the climatic conditions during the year. An efficiency of 9.1-19.1% was obtained by the active solar still, while the passive solar still performed at 9.8-28.4% during the year. [12]
EXPERIMENTAL STUDY OF SOLAR STILL
Experimental measurements were performed to evaluate the performance of the solar still outdoors under Mehsana climatic conditions; Mehsana lies at latitude 23°13′N and longitude 72°39′E, and the tests were conducted in the campus area. The entire assembly was made airtight with the help of silicone gel. The basin of the solar still was constructed from 14-gauge galvanized iron sheet. The condensing cover was made of clear polycarbonate material with a thickness of 2 mm. The basin liner is black oil paint on the inner surface of the basin. The flat-based circular basin has an effective absorber area of 0.08 m². The insulation thickness was 10 mm on each side, with thermocol used as the insulating material to minimize heat loss through the sides of the basin. One water inlet, a condensate outlet and an excess water outlet were provided in the basin. After the black coating of the basin, a scale was fixed to the basin wall with adhesive to measure the water depth. Thermocouples were inserted through the water inlet hole and located at different places in the still before fixing the glass cover; they record the temperatures of the inner surface of the glass cover, the basin water, the vapour inside the still and the atmosphere outside the still.

Fig 2 Hemispherical Polycarbonate Condensing Cover diagram

Before the commencement of each test, the basin was filled with saline water through the inlet port and the hemispherical cover was cleaned of dust. The water depth was kept at 0.5 cm, 1 cm and 1.5 cm respectively, and ink was added in a proportion of 1.25% of the water for depths of 0.5 cm, 1 cm and 1.5 cm respectively. The experiments were carried out on sunny days. The glass cover, vapour, inside water and atmospheric temperatures were measured by J-type thermocouples and recorded. Daily solar radiation was measured by a solarimeter in W/m². The distilled water was collected hourly in a measuring jar. Experiments were carried out in the months of March and April, running from 9 AM to 5 PM on sunny days. The maximum amount of potable water was collected between 1:00 pm and 2:30 pm. The results of the simple hemispherical still were then compared with those obtained with ink added to the water.

PHOTOGRAPH OF EXPERIMENTAL SETUP

Fig 3 Photograph of Experimental Setup
RESULT AND DISCUSSION
Typical variations of the saline water temperature, glass cover temperature and ambient temperature were measured during a representative day of testing. The temperature difference between the water and the cover shows a similar trend: it increases in the morning hours to a maximum value around noon and starts to decrease late in the afternoon. This is due to the increase of solar irradiance in the morning and its decrease after 2:00 pm. After assembling the solar still, a set of experiments was performed to test its efficiency and productivity per hour and per day. The experiments were carried out on bright sunny days. The amount of water and the temperature values were measured from 9:00 AM to 5:00 PM in the campus area. Factors affecting an active solar still include solar radiation intensity, ambient temperature, wind velocity, humidity, condensing glass cover inclination, solar collector inclination, solar collector area, absorber material, etc.
The quantity of fresh water obtained from the solar still was 1.5 L/m². On the 2nd of March, with 1.5 cm water depth in the 0.08 m² still, the total mass of water obtained was 255 ml, corresponding to 3.180 L/m². On the 9th of March the hemispherical still of 0.08 m² area produced 0.5 L/m² per day, with a total mass of 270 ml. Figures 5.1 to 5.8 show the hourly variation of solar radiation and the mass of distilled water (ml) during the 2nd and 9th of March 2014. The maximum solar radiation occurs between 12:00 and 14:00 and the maximum ambient temperature between 13:00 and 14:00 of the day period; drastic changes in the solar radiation show the weather effect.
A hemispherical solar still was fabricated and tested with and without black ink. Black ink is an absorptive material, so when it is mixed in proportion with the water it helps to increase the productivity of the solar still. It was the best absorbing material used in terms of water productivity, and an enhancement of about 60% is hoped for.
(Fig 3 labels: measuring jar, temperature indicator, solar power meter, hemispherical glass, stand.)

WITHOUT ABSORBER INK

Fig 4 hourly variations in temperature and productivity during the day water depth 0.5 cm

Fig 5 hourly variations in temperature and productivity during the day water depth 1 cm
(Plot data omitted: temperature (°C) and productivity (ml/m²) versus time, 9:00 am to 5:00 pm, for 9 March 2014 and 1 March 2014; curves Tv, Tw, Tg, Ta and productivity in ml/m².)


Fig 6 hourly variations in temperature and productivity during the day water depth 1.5 cm

Fig 7 hourly variations in temperature and productivity during the day water depth 2 cm
Figure 4, Figure 5, Figure 6 and Figure 7 show, without the absorber ink, the hourly production rate of distilled water during the day from 9:00 to 17:00. Maximum production in the solar still occurred between 13:00 and 14:00. From the observations over 9:00 to 17:00, the temperature difference between the basin water surface and the inner surface of the glass cover increases from 9:00 and is maximum between 13:00 and 14:00; after the peak, the ambient temperature and the solar radiation intensity reduce, so the temperatures of the basin water and the glass cover also decrease. From this part of the study it is observed that as the temperature difference between the basin water and the inner surface of the glass cover increases, the productivity of the still increases. The average productivity was 2.8 l/m²/day. After the peak output, the productivity decreased continuously.
(Plot data omitted: temperature (°C) and productivity (ml/m²) versus time for 2 March 2014 and 5 March 2014; curves Tv, Tw, Tg, Ta and productivity in ml/m².)

WITH ABSORBER INK

Fig 8 hourly variations in temperature and productivity during the day water depth 0.5 cm with 1.25% ink

Fig 9 hourly variations in temperature and productivity during the day water depth 1 cm with 1.25% ink

(Plot data omitted: temperature (°C) and productivity (ml/m²) versus time for 19 March 2014 and 18 April 2014; curves Tv, Tw, Tg, Ta and productivity in ml/m².)


Fig 10 hourly variations in temperature and productivity during the day water depth 1.5 cm with 1.25% ink

Fig 11 hourly variations in temperature and productivity during the day water depth 2 cm with 1.25% ink
Figure 8, Figure 9, Figure 10 and Figure 11 show the variation in productivity of distilled water with respect to variation in temperature and time. The productivity of this solar still is converted into ml/m². Compared with the still without ink, the productivity of the hemispherical still increased by up to 17 to 20% with only 1.25% ink added. The still with ink added also gives its maximum productivity between 1:00 pm and 2:00 pm, and the temperature difference between the water and the glass surface increased compared with the still without ink.
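The percentage gains quoted here follow from a simple ratio of daily outputs; a minimal sketch with hypothetical values (not measured data from this paper):

```python
# Percentage productivity gain of the ink-added still over the plain still.
plain_ml, ink_ml = 255.0, 300.0   # hypothetical daily outputs in ml
gain = (ink_ml - plain_ml) / plain_ml * 100
print(f"improvement = {gain:.1f}%")   # 17.6%, within the reported 17-20% range
```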
(Plot data omitted: temperature (°C) and productivity (ml/m²) versus time for 7 April 2014 and 20 April 2014; curves Tv, Tw, Tg, Ta and productivity in ml/m².)


Fig 12 hourly variations in temperature and solar radiation during the day, water depth 1 cm with 2% ink
Figure 12 shows the increase of the various temperatures and of the still productivity with the increase in solar radiation during the day. From this figure it is observed that the temperature difference between the water and the glass cover increased with increasing ink proportion in the water. The productivity also increased with increasing ink proportion in the water fed to the hemispherical solar still.
CONCLUSION
This experimental study concludes that the productivity of the hemispherical still increases as the water depth decreases. The productivity of the solar still increased due to the increase in temperature difference between the water surface and the inner surface of the condensing glass cover, and the maximum productivity of drinking water was collected between 1:00 and 2:00 pm on sunny days. From this experiment it was observed that, with increasing ink proportion in the water, the productivity of the hemispherical solar still increases compared with no ink added: productivity increased by up to 25% with 2% ink added and by 17% to 20% with 1.25% ink added. The distilled output water was tested in the laboratory, and the results show that it is suitable for drinking.
FUTURE SCOPE
- This work will be carried out with a 1 m² area and compared with a slope type solar still.
- Measure the effect of different proportions of ink added to the water for different water depths.
- Measure the effect of added ink in an active hemispherical solar still and compare the results with the passive hemispherical solar still.
- Compare acrylic glass and polycarbonate glass for the flat circular base hemispherical solar still.
REFERENCES:
[1] G. N. Tiwari, H. N. Singh and Rajesh Tripathi, "Present status of solar distillation," Solar Energy, Elsevier, Vol. 75, 367-371, 2003.
[2] Salah Abdallah, Mazen M. Abu-Khader and Omar Badran, "Effect of various absorbing materials on the thermal performance of solar stills," Desalination, Elsevier, Vol. 242, 128-137, 2009.
[3] Sangeeta Suneja and G. N. Tiwari, "Effect of water depth on the performance of an inverted absorber double basin solar still," Applied Energy, Vol. 77, 317-325, 2004.
[4] Sangeeta Suneja, G. N. Tiwari and S. N. Rai, "Parametric study of an inverted absorber double-effect solar distillation system," Energy Conversion & Management, Vol. 40, 1999.
[5] G. N. Tiwari, S. K. Shukla and I. P. Singh, "Computer modeling of passive/active solar stills by using inner glass temperature."
[6] Bhagwan Prasad and G. N. Tiwari, "Analysis of double effect active solar distillation."
[7] T. Arunkumar, R. Jayaprakash, D. Denkenberger, Amimul Ahsan, M. S. Okundamiya, Sanjay Kumar, Hiroshi Tanaka and H. Aybar, "An experimental study on a hemispherical solar still."
[8] Basel I. Ismail, "Design and performance of a transportable hemispherical solar still."

[9] K. Kalidasa Murugavel, S. Sivakumar, J. Riaz Ahamed, Kn K. S. K. Chockalingam and K. Srithar, "Single basin double slope solar still with minimum basin depth and energy storing materials," Applied Energy, Vol. 87, p. 514, 2010.
[10] Yousef H. Zurigat and Mousa K. Abu-Arabi, "Modelling and performance analysis of a regenerative solar desalination unit," Applied Thermal Engineering, Vol. 24, p. 1061, 2004.
[11] Hiroshi Tanaka, Yasuhito Nakatake and Masahito Tanaka, "Indoor experiments of the vertical multiple-effect diffusion-type solar still coupled with a heat-pipe solar collector," Desalination, Vol. 177, 291-302, 2005.
[12] Badshah Alam, Emran Khan and Shiv Kumar, "Annual performance of passive and hybrid (PVT) active solar stills," VSRD MAP, Vol. 2 (6), 223-231, 2012.





























Coupling-based BigData Analysis: Reusability of Datasets
Thirunavukarasu B¹, Vasanthakumar U¹, Vijay S¹, Dr Kalaikumaran T, Dr Karthik S
¹Research Scholar (B.E), Department of Computer Science and Engineering, SNS College of Technology, Coimbatore, India
E-mail: bs.thirunavukarasu@gmail.com
Abstract - We are presently in the BigData era. Many organizations and enterprises deal with massive sets of data, which must be analyzed for various factors. Many methods are used for easy and effective data analysis. Here we propose coupling-based BigData analysis: the datasets are initially coupled so that optimization can be achieved. Before analysis of a massive data set is performed, the datasets are coupled or grouped based on predefined methodologies, and previously extracted datasets are reused for quicker execution.

Keywords - BigData, coupling, analysis of BigData, coupling analysis, optimized analysis, predictive analysis
INTRODUCTION
Every action carried out generates data, and without data one cannot do anything. Due to the increased generation of data from various sources, organizations must store large amounts of data. Simply storing massive amounts of data is useless if those data are never used; the stored data should always be analyzed to obtain predictions and other outputs. For analyzing such large data sets, the Hadoop tool is used. Hadoop is a distributed data management system for BigData.

Fig 1. BigData Hadoop Tool Architecture

BIGDATA
Big data is an unstructured, large set of data, often beyond the petabyte scale. Unstructured data is data in a form other than the common form of tables with rows, for example item logs; big data is digital data. Data analysis becomes more

complicated because of their massive set of large amount of data. There are many availability of tools that can be easily used for
analysis of this massive datasets. Predictions, analysis, requirements etc., are the main things that should be done using the
unstructured big data. Big data is a combination of three vs those are namely volume, velocity and variety. Big data will basically
processed by the powerful computer. But due to some scalable properties of the computer, the processing gets limited. Organizations
and entrepreneurs are now forced to work with larger amount of data that are generated during their work. This data are the primary
thing that could be used for many improving actions that needed to be taken in the near future.
BIGDATA ANALYSIS
Big data analytics is the application of advanced analytic techniques to very large, diverse data sets that often include varied data types and streaming data. Big data analytics explores the granular details of business operations and customer interactions that seldom find their way into a data warehouse or standard report, including unstructured data coming from sensors, devices, third parties, Web applications and social media, much of it sourced in real time on a large scale. Using advanced analytics techniques such as predictive analytics, data mining, statistics and natural language processing, businesses can study big data to understand the current state of the business and track evolving aspects such as customer behaviour. New methods of working with big data, such as Hadoop and MapReduce, also offer alternatives to traditional data warehousing.
Analytics provides deep insights on big data to optimize every customer touch point. Using personalized workspaces and self-service templates, analytics are rapidly assembled, customized and shared across business teams.
COUPLING
Coupling defines the integration between elements to serve a particular user need; the coupling of design elements represents the strength of connectivity between them. Coupling is of different types, among which highly-coupled and loosely-coupled types play an important role in grouping components. If a particular element does not fully depend on other elements in the system, the connectivity is of the loosely-coupled type; otherwise it is called highly-coupled.

TYPES OF COUPLING
Apart from the major types of coupling mentioned above, the following are deeper coupling types:
1. Import coupling
2. Export coupling
Import coupling groups the elements that are referred to in order to support other components, while export coupling indicates the group of components that need other components for support.

COUPLING ANALYSIS - PROPOSED METHODOLOGY
In coupling analysis, the datasets are analyzed according to the data recommended for grouping them. In the coupling-based method, the coupling value and its type are considered for clustering the datasets; stability can also be considered, to obtain a good quality repository with stable datasets. In the coupling-based approach, the type of coupling may be considered only to categorize the datasets, but in this proposed approach both the import and export coupling of a component are used.

Component reusability can be achieved by implementing coupling. Reusability of data means using previously processed data sets, rather than creating new datasets from the same gathered data for every analysis execution. The reusable components are found with the help of the coupling factors. This coupling facilitates improved reusability and thus minimizes the overall execution time. The steps are followed as usual until the data are processed by MapReduce.

The following flow diagram indicates the sequence of operations to be carried out for optimized analysis of a massive data set.








FLOW OF PROPOSED METHODOLOGY



















Fig 2. Sequence of Proposed Method


In this proposed methodology, the datasets are initially gathered from various resources. Once the resources are gathered, the data are divided into blocks; this blocking of the data is done to enhance the distributed system. The blocks are then placed in the DataNodes based on the alignment given by the NameNode. In the DataNode, the MapReduce algorithm is executed: Map divides the data again, and Reduce executes the divided data separately.

Once the Reduce step has executed, the separate results are grouped together into the resultant data, which is sent back to the client over a TCP connection. Once received, it is used for analysis. At the analysis phase, the coupling is implemented: the generated resultant data is checked for the coupling factors, from which one can tell how this particular data set depends on, or is coupled with, the other datasets. If the resultant data is highly coupled, it can be considered a reusable data set. This reusability is determined by the coupling value, a numerical value that indicates how strongly a component is coupled with other components.

Dataset reusability is achieved only when the coupling value is high. A highly coupled data set is replicated and stored separately for data history use. When a later analysis needs the previous datasets, and coupling exists with the stored data set, re-processing the entire data is avoided: only the new data is executed to obtain its resultant data set, which is then coupled with the previously stored reusable data set, and the analysis is made effectively. The previously executed datasets are stored in a data mart; by storing them there, repeated data execution can be largely avoided. From the proposed method we obtain the result that the transmission time falls as the volume of data falls; the transmission time is directly proportional to the data volume. A minimal illustration of the coupling-based reuse decision is given after the flow summary below.
(Flow of Fig. 2: Collection of Data Resource → Implementing MapReduce → Resultant Data Generation → Analysing the Data sets → Coupling of Data sets → Reusability of Data sets.)
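Since the paper does not define the coupling value numerically, the sketch below is purely illustrative: it uses the overlap (Jaccard) ratio between record sets as a stand-in coupling value, and reuses the stored result set when the value crosses an assumed threshold.

```python
# Illustrative only: Jaccard overlap as a stand-in coupling value, with an
# assumed reuse threshold; record names are hypothetical.
def coupling_value(a: set, b: set) -> float:
    """Fraction of shared records between two datasets (0 = uncoupled)."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

stored = {"r1", "r2", "r3", "r4"}        # previously executed (reusable) dataset
incoming = {"r3", "r4", "r5"}            # new analysis request

if coupling_value(stored, incoming) >= 0.3:      # threshold is an assumption
    fresh = incoming - stored                    # run MapReduce on new data only
    print("reuse stored results; process only:", fresh)
else:
    print("low coupling: process the full incoming dataset")
```

Only the fresh records would then go through the MapReduce execution, which is how the reduced data volume, and hence the reduced transmission time, is obtained.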


ACKNOWLEDGMENT
I heartily thank the Department of CSE, SNS College of Technology, for its effective encouragement in achieving some milestones in BigData research. I also thank Dr. S N Subbramanian, Chairman, SNS College of Technology, for his endless support and guidance.

CONCLUSION
Thus, by the coupling of datasets, their reusability is achieved. As a result, the analysis of data is made easy and fair. Dataset coupling avoids the repeated execution of data sets: only the newly generated data are passed to execution by the MapReduce algorithm. Since the new data alone are executed, the massive volume of data is considerably reduced, so the execution time and transmission time are also reduced. The datasets are reused without circular execution. Thus optimization of BigData analysis is achieved.
















Selective Mapping Algorithm Based LTE System with Low Peak to Average
Power Ratio in OFDM
P. Kalaiarasi¹, R. Sujitha¹, V. Krishnakumar²
¹Research Scholar (PG), Department of ECE, Kongunadu College of Engg & Tech, Trichy, Tamilnadu, India
²Asst. Professor, Department of ECE, Kongunadu College of Engg & Tech, Trichy, Tamilnadu, India

ABSTRACT - This paper proposes a high performance LTE system. The performance of an LTE system is enhanced in two stages: the first stage reduces the high peak-to-average power ratio of the OFDM signal, and the second stage improves the channel estimation. PAPR reduction is based on the selective mapping algorithm. The channel is estimated via the least squares method, using a wavelet-based de-noising method to reduce additive white Gaussian noise and inter-carrier interference (ICI). OFDM systems suffer from an inherently high peak-to-average power ratio (PAPR) due to the coherent addition of the subcarriers. Optimal PAPR reduction is obtained using the selective mapping algorithm with low complexity. The proposed system reduces the PAPR and improves the bit error rate in OFDM.
Keywords: OFDM (orthogonal frequency division multiplexing), PAPR (peak to average power ratio), SLM (selective mapping algorithm), CP (cyclic prefix)
1 INTRODUCTION
Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier transmission technique based on orthogonal carriers. It has become one of the most promising developments in modern broadband wireless networks and wireline digital communication systems because of its high speed data transmission, great spectral efficiency, high quality of service, and robustness to selective fading and narrowband interference. The high peak-to-average power ratio (PAPR) of transmitted OFDM signals is one of the major problems: high PAPR in an OFDM system requires a high power amplifier (HPA) with a large dynamic range, but such amplifiers are very expensive and are a major cost component of the OFDM system. OFDM is widely applied to mobile communication systems due to its robustness against the frequency selective fading channel and its high data rate transmission capability.


Fig. 1: OFDM block diagram
In an OFDM scheme, a large number of sub-channels or sub-carriers are used to transmit digital data. Each sub-channel is orthogonal to every other; they are closely spaced and narrowband, and the separation of the sub-channels is as small as possible to obtain high spectral efficiency. OFDM is used because of its capability to handle multipath interference at the receiver; the two main effects of multipath propagation are frequency selective fading and inter-symbol interference (ISI). In OFDM, the

large number of narrowband sub-carriers provides sufficiently flat channels, so the fading can be handled by simple equalization techniques for each channel. Furthermore, the large number of carriers can provide the same data rate as single-carrier modulation at a lower symbol rate.
2 PEAK TO AVERAGE POWER RATIO
One of the most serious problems with OFDM transmission is that it exhibits a high peak-to-average ratio; in other words, there is a problem of extreme amplitude excursions of the transmitted signal. The OFDM signal is basically a sum of N complex random variables, each of which can be considered as a complex modulated signal at a different frequency. In some cases all the signal components add up in phase and produce a large output, and in some cases they cancel each other, producing zero output. Thus the peak-to-average ratio (PAR) of the OFDM system is very large.
The problem of the peak-to-average ratio is most serious in the transmitter. In order to avoid clipping of the transmitted waveform, the power amplifier at the transmitter front end must have a wide linear range that includes the peaks of the transmitted waveform. Building power amplifiers with such wide linear ranges is costly and also results in high power consumption; the DACs and ADCs must likewise have a wide range to avoid clipping. Due to the large number of sub-carriers in typical OFDM systems, the amplitude of the transmitted signal has a large dynamic range, leading to in-band distortion and out-of-band radiation when the signal passes through the nonlinear region of the power amplifier. Although this can be avoided by operating the amplifier in its linear region, doing so inevitably reduces power efficiency. A high peak-to-average power ratio reduces accuracy and produces a high error rate. Various PAPR reduction techniques can be used in OFDM.
The PAPR of the transmit signal s(t) is defined as

PAPR = \frac{\max_{0 \le t \le T} |s(t)|^2}{\frac{1}{T}\int_{0}^{T} |s(t)|^2 \, dt}

By the central limit theorem, for a large number of subcarriers N the real and imaginary parts of s(t) become Gaussian distributed. The signal amplitude therefore has a Rayleigh distribution, with zero mean and a variance of N times the variance of one complex sinusoidal signal.
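A minimal numeric check of this definition, assuming NumPy; the subcarrier count follows Table 1, while the 4-QAM mapping is an illustrative choice:

```python
# Sketch: PAPR = max|s|^2 / mean|s|^2 of one OFDM symbol, in dB.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                             # subcarriers, as in Table 1
bits = rng.integers(0, 2, (N, 2))
X = (2*bits[:, 0] - 1) + 1j*(2*bits[:, 1] - 1)     # 4-QAM frequency-domain symbols
s = np.fft.ifft(X) * np.sqrt(N)                    # OFDM time-domain signal

papr_db = 10*np.log10(np.max(np.abs(s)**2) / np.mean(np.abs(s)**2))
print(f"PAPR = {papr_db:.2f} dB")
```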
3 SELECTIVE MAPPING METHOD

Fig. 2: Block diagram of SLM
Selected mapping (SLM) is a promising PAPR reduction technique for OFDM systems. The main idea of SLM is to generate a number of candidate OFDM symbols carrying the same information, formed at the transmitter from different data blocks (independent phase sequences), and then to select the one with the lowest PAPR for actual transmission. In the SLM method, the vectors of the original frequency-domain OFDM signal are rotated based on a set of predefined phase arrays; for each signal variant obtained, the corresponding PAPR is evaluated, and the one with the lowest PAPR is chosen for transmission. The block diagram of the SLM scheme is shown in Fig. 2.
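A sketch of the SLM selection just described, assuming NumPy; the candidate count U and the ±1 phase sequences are illustrative choices, and side information identifying the chosen sequence would still need to be conveyed to the receiver:

```python
# Sketch: SLM candidate generation and minimum-PAPR selection.
import numpy as np

def papr_db(s):
    p = np.abs(s)**2
    return 10*np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
N, U = 64, 8                                       # subcarriers, candidate count
bits = rng.integers(0, 2, (N, 2))
X = (2*bits[:, 0] - 1) + 1j*(2*bits[:, 1] - 1)     # original 4-QAM block

phases = rng.choice([1.0, -1.0], size=(U, N))      # predefined phase sequences
phases[0] = 1.0                                    # keep the unrotated block too
candidates = np.fft.ifft(X * phases, axis=1)       # one IFFT per candidate
best = min(range(U), key=lambda u: papr_db(candidates[u]))
print(f"selected sequence {best}: PAPR = {papr_db(candidates[best]):.2f} dB")
```

Increasing U improves the expected PAPR reduction at the cost of one extra IFFT per candidate, which is the complexity trade-off noted in the abstract.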


4 PROPOSED BER SCHEME

Fig. 3: Proposed scheme
OFDM is widely applied in mobile communication due to its robustness against the frequency selective fading channel and its high data rate transmission. A pilot extraction method is used to prevent frequency and phase shift errors. Channel estimation techniques for an OFDM system can be grouped into two categories: blind and non-blind. The blind channel estimation method exploits the statistical behavior of the received signals, while the non-blind channel estimation method utilizes some or all portions of the transmitted signals, i.e., pilot tones or training sequences, which are available to the receiver for channel estimation.
In the non-blind category there are two classical pilot-based channel estimation algorithms, namely LS (least squares) and MMSE (minimum mean-square error) estimation. Since LS estimation is simpler to implement, as it does not need any information about channel statistics, it has been widely used. However, LS estimation is sensitive to additive white Gaussian noise (AWGN); especially when the signal-to-noise ratio (SNR) is low, the performance degrades significantly. MMSE estimation is more robust against noise and performs better than LS.
Wavelet denoising is a method to remove the noise contained in the LS estimate, reducing the additive white Gaussian noise and inter-carrier interference. The general wavelet denoising procedure is as follows:

[1] Apply the wavelet transform to the noisy signal to produce the noisy wavelet coefficients at the required level.

[2] Select an appropriate threshold limit at each level and a threshold method (hard or soft thresholding) to best remove the noise.

[3] Inverse wavelet transform the thresholded wavelet coefficients to obtain a denoised signal.
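A sketch of steps [1]-[3] applied to an LS channel estimate, assuming NumPy and the PyWavelets package (pywt); the wavelet, decomposition level and the universal-threshold rule are assumptions, not the paper's exact settings:

```python
# Sketch: LS estimate at pilot tones (H_ls = Y/X), then wavelet soft-thresholding.
import numpy as np
import pywt

def denoise(x, wavelet="db4", level=3):
    coeffs = pywt.wavedec(x, wavelet, level=level)          # step [1]
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # MAD noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))               # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]  # step [2]
    return pywt.waverec(coeffs, wavelet)[: len(x)]          # step [3]

def ls_estimate_denoised(Y, X):
    """LS channel estimate at pilots, denoised per real/imaginary part."""
    H_ls = Y / X
    return denoise(H_ls.real) + 1j * denoise(H_ls.imag)

# Example with a toy channel over 64 pilot tones (hypothetical values).
rng = np.random.default_rng(0)
X = np.ones(64)                                   # known pilot symbols
H = np.exp(-0.05j * np.arange(64))                # toy channel frequency response
Y = H * X + 0.1 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
H_hat = ls_estimate_denoised(Y, X)
```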

Channel estimation can be performed either by inserting pilot tones into all subcarriers of the OFDM symbols with a specific period, or by inserting pilot tones into each OFDM symbol. The first approach, block-type pilot channel estimation, was developed under the assumption of a slow fading channel; even with a decision feedback equalizer, it assumes that the channel transfer function does not change very rapidly. Estimation of the channel for this block-type pilot arrangement can be based on least squares (LS) or minimum mean-square error (MMSE); the MMSE estimate has been shown to give a 10-15 dB gain in signal-to-noise ratio (SNR) over the LS estimate for the same mean square error of channel estimation.

5 SIMULATION RESULT

The simulation is based on the LTE standard. In this section, computer simulations using MATLAB demonstrate the capability of the proposed scheme.





Fig. 5: PAPR reduction
OFDM system parameters used in the simulation are indicated in Table 1. We assume perfect synchronization,

Fig. 6: BER calculation
since the aim is to observe channel estimation performance. Moreover, we have chosen the guard interval to be greater than the maximum delay spread in order to avoid inter-symbol interference. Simulations are carried out for different signal-to-noise (SNR) ratios. The simulation parameters used to obtain these results are shown in Table 1.

Table 1: System parameters

Number of sub-carriers (N): 64
Oversampling factor (OF): 8
Modulation scheme: QAM
Number of sub-blocks used in SLM: 2, 4, 8, 16, 32, 64
Total number of IFFTs for weighting factor 1 or 2: 256

6 CONCLUSION
A modified OFDM scheme with high performance has been proposed. The proposed scheme achieves low PAPR and low BER based on the LTE system. The improvement in PAPR is achieved by selective mapping, where the trigonometric transformation with minimum PAPR is selected for each partitioned block. The improvement in BER is achieved by enhanced channel estimation, based on an enhanced least squares estimator using a wavelet-based de-noising method to reduce additive white Gaussian noise and inter-carrier interference (ICI).











Performance Enhancement of 3 IM Drive using Fuzzy Logic Based DTC
Technique
K. Satheeshkumar¹, P. Rajarajeswari²

¹Research Scholar (PG), Department of EEE, Mahendra College of Engineering, Salem
²Assistant Professor, Department of EEE, Mahendra College of Engineering, Salem
E-mail: satheeshkumargceb@gmail.com

ABSTRACT - This paper presents direct flux and torque control (DTC) of a three-phase induction motor drive (IMD) using PI and
fuzzy logic controllers (FLC) for speed regulation (SR) and low torque ripple. This control method is based on DTC operating
principles. DTC is one of the most effective direct control strategies for the stator flux and torque ripple of an IMD. The key issue in
DTC is the strategy of selecting proper stator voltage vectors to force the stator flux and developed torque to stay within a prescribed band.
Due to the nature of the hysteresis control adopted in DTC, there is no difference in control action between a large torque error and a small
one, which results in high torque ripple. It is better to divide the torque error into different intervals and give a different control voltage
for each of them; to deal with this issue a fuzzy logic controller has been introduced. The main drawbacks of conventional DTC
of an IMD are high stator flux and torque ripples, and the speed of the IMD drops under transient and dynamic operating
conditions. These drawbacks are reduced by the proposed system: the speed is regulated by a PI controller and the torque is controlled by a fuzzy
logic controller. The amplitude of the reference stator flux is kept constant at the rated value. Simulation results in MATLAB/SIMULINK
show that the proposed DTC gives lower stator flux linkage and torque ripples and better speed regulation than the conventional DTC technique.

Keywords- Direct Torque Control (DTC), Fuzzy Logic Control (FLC), Induction Motor Drive (IMD), Space Vector Modulation
(SVM).

I.INTRODUCTION

Nowadays around 70% of electric power is consumed by electric drives. These drives are mainly classified into AC
and DC drives. During the last four decades AC drives have become more and more popular, especially induction motor drives (IMD),
because of their robustness, high efficiency, high performance, rugged structure and ease of maintenance; they are therefore widely used in
industrial applications such as paper mills, robotics, steel mills, servos, transportation systems, elevators and machine tools. Commonly used
techniques for speed control of an induction motor drive are V/F ratio control, Direct Torque Control (DTC) and vector control. In the
scalar (V/F ratio) control technique there is no control over the torque or flux of the machine. Torque and flux control is possible
with vector control of the induction motor drive; however, vector control is highly computationally complex, and hence the DTC
technique, with less computational complexity along with control of torque and flux, is preferred in many applications. Compared with
FOC, DTC has a simple control scheme and very low computational requirements; current controllers and coordinate
transformations are not required. The main features of DTC are its simple structure, good dynamic behaviour, and high performance and
efficiency [1,2,3]. New control strategies were proposed to replace motor linearization and decoupling via coordinate transformation with
torque and flux hysteresis controllers [4]. This method is referred to as conventional DTC [5].

The conventional DTC has some drawbacks, such as variable switching frequency, high torque and flux ripples, problems
during starting and low-speed operating conditions, flux and current distortion caused by the stator flux vector changing with the
sector position [5], and variation of the IMD speed under transient and dynamic operating conditions. In order to overcome
these problems, the proposed DTC with PI and FLC is used: the PI controller is used for speed control in the SR loop, and the
FLC is used for stator flux and torque ripple reduction in the torque control loop [6]. The simulation results of the conventional
and proposed DTC of the IMD are presented and compared.

II. DIRECT TORQUE CONTROL

The conventional DTC of an IMD is supplied by a three-phase, two-level voltage source inverter (VSI). The main aim is to
directly control the stator flux linkage (or rotor flux linkage) and the electromagnetic torque by selecting proper voltage switching states

of the inverter. The schematic diagram of conventional DTC of an IMD is shown in Fig. 1. It consists of torque and
flux hysteresis band comparators (ΔT, Δψ), voltage vector sector selection, stator flux and torque estimators (ψs, Te), the induction motor,
a speed controller, and a voltage source inverter (VSI) [7].

A. Voltage Source Inverter (VSI)

The three-phase, two-level VSI is shown in Fig. 2. It has eight possible voltage space vectors: six active voltage
vectors (U1-U6) and two zero voltage vectors (U7, U8), according to the combination of the switching states S_a, S_b and S_c. When
the upper switch of a leg is ON the switching value is 1, and when the lower switch is ON the switching value is 0. The
stator voltage vector is written as in equation (1):

U_s,k = (2/3) U_DC (S_a + a S_b + a² S_c)        (1)

where U_DC is the dc link voltage of the inverter and a = e^(j2π/3).
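Equation (1) can be illustrated with a short sketch that enumerates the eight switching states of the two-level VSI; the DC-link value of 400 V is an assumed placeholder.

import numpy as np

a = np.exp(1j * 2.0 * np.pi / 3.0)

def stator_voltage_vector(sa, sb, sc, udc=400.0):
    # Equation (1): U_s = (2/3) * U_DC * (Sa + a*Sb + a^2*Sc)
    return (2.0 / 3.0) * udc * (sa + a * sb + a * a * sc)

# Enumerate the eight switching states: six active vectors plus two
# zero vectors of the two-level VSI.
for state in range(8):
    sa, sb, sc = (state >> 2) & 1, (state >> 1) & 1, state & 1
    print((sa, sb, sc), stator_voltage_vector(sa, sb, sc))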

The inverter output voltages U_a^s, U_b^s and U_c^s are converted to U_ds^s and U_qs^s by equations (2) and (3):

U_ds^s = (2/3) S_a - (1/3) S_b - (1/3) S_c        (2)
U_qs^s = -(1/√3) S_b + (1/√3) S_c                 (3)

The behaviour of an induction motor drive using DTC can be described in terms of the space vector model written in the stator
stationary reference frame [11],[12]:

U_ds^s = R_s i_ds^s + (d/dt) ψ_ds^s               (4)
U_qs^s = R_s i_qs^s + (d/dt) ψ_qs^s               (5)
0 = R_r i_qr^s + (d/dt) ψ_qr^s - ω_r ψ_dr^s       (6)
0 = R_r i_dr^s + (d/dt) ψ_dr^s + ω_r ψ_qr^s       (7)
ψ_s = L_s i_s + L_m i_r                           (8)
ψ_r = L_r i_r + L_m i_s                           (9)



Figure 1. Schematic diagram of direct torque control of induction motor.

B. Direct Flux Control
The implementation of the DTC scheme requires torque and flux linkage computation and generation of the inverter switching
states through feedback control of the flux and torque directly, without inner current loops. The stator flux in the stationary reference
frame (d^s-q^s) can be estimated as [10]:

ψ_ds^s = ∫ (U_ds^s - i_ds^s R_s) dt               (10)
ψ_qs^s = ∫ (U_qs^s - i_qs^s R_s) dt               (11)

The estimated stator flux magnitude, ψ_s, is given by:

ψ_s = ((ψ_ds^s)² + (ψ_qs^s)²)^(1/2)               (12)
U_s = (d/dt) ψ_s,  i.e.  Δψ_s = U_s · Δt          (13)
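Equations (10)-(12) amount to integrating the stator back-EMF; a minimal discrete-time sketch is given below (a plain cumulative sum stands in for the integral, and all names are hypothetical).

import numpy as np

def estimate_stator_flux(u_ds, u_qs, i_ds, i_qs, rs, dt):
    # Equations (10)-(11): integrate the stator back-EMF.
    psi_ds = np.cumsum(u_ds - rs * i_ds) * dt
    psi_qs = np.cumsum(u_qs - rs * i_qs) * dt
    # Equation (12): flux magnitude.
    psi_mag = np.hypot(psi_ds, psi_qs)
    return psi_ds, psi_qs, psi_mag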

Figure 2. Schematic diagram of the voltage source inverter.
The change in input to the flux hysteresis controller can be written as:

Δψ_s = ψ_s* - ψ_s                                 (14)

The flux hysteresis loop controller has a two-level digital output, according to the relation shown in Table 1.

TABLE 1. SWITCHING LOGIC FOR FLUX ERROR

Condition                    Flux hysteresis output (Δψ)
(ψs* - ψs) >  Δψs             1
(ψs* - ψs) < -Δψs            -1

C. Direct Torque Control

The torque hysteresis loop controller has a three-level digital output, with the relations shown in Table 2.

TABLE 2. SWITCHING LOGIC FOR TORQUE ERROR

Condition                         Torque hysteresis output (ΔT)
(Te* - Te) >  ΔTe                  1
-ΔTe < (Te* - Te) < ΔTe            0
(Te* - Te) < -ΔTe                 -1

When the torque hysteresis output is ΔT = 1 the torque is increased, when ΔT = 0 the torque is held, and when ΔT = -1 the torque is decreased.
The instantaneous electromagnetic torque and the flux angle in terms of the stator flux linkage are given in equations (15) and (16):

T_e = (3/2)(P/2) (ψ_ds^s i_qs^s - ψ_qs^s i_ds^s)  (15)
θ_e(k) = tan⁻¹(ψ_qs^s / ψ_ds^s)                   (16)

The electromagnetic torque error can be written as:

ΔT_e = T_e* - T_e                                 (17)
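The two comparators of Tables 1 and 2 can be sketched as follows; holding the previous output inside the flux band is an assumption for the sketch, as in a standard hysteresis comparator.

def flux_comparator(psi_ref, psi, band, prev=1):
    # Table 1: two-level flux hysteresis.
    err = psi_ref - psi
    if err > band:
        return 1
    if err < -band:
        return -1
    return prev  # hold the previous state inside the band (assumption)

def torque_comparator(te_ref, te, band):
    # Table 2: three-level torque hysteresis.
    err = te_ref - te
    if err > band:
        return 1
    if err < -band:
        return -1
    return 0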

The eight possible voltage vector switching configurations are shown in Fig. 3. The voltage vector is selected according to whether the
torque or flux needs to be increased or decreased, as indicated by the three-level and two-level hysteresis comparators for torque and stator flux
respectively. The selection for increasing and decreasing the stator flux and torque is shown in Table 3. Fig. 3 illustrates the
optimized voltage vectors in the six sectors, which are selected from the six active and two zero voltage vector switching
configurations using the voltage vector selection table shown in Table 4.




Figure 3. Eight possible switching configurations of the voltage source inverter.


TABLE 3. GENERAL SELECTION FOR DTC

k-th sector         Increase                    Decrease
Stator flux (ψ)     Uk, Uk+1, Uk-1              Uk+2, Uk+3, Uk-2
Torque (T)          Uk, Uk+1, Uk+2              Uk+3, Uk-2, Uk-1

TABLE 4. VOLTAGE VECTOR SELECTION

Hysteresis controller          Sector selection θe(k)
Δψ      ΔT      θe(1)   θe(2)   θe(3)   θe(4)   θe(5)   θe(6)
 1       1       U2      U3      U4      U5      U6      U1
 1       0       U7      U8      U7      U8      U7      U8
 1      -1       U6      U1      U2      U3      U4      U5
-1       1       U3      U4      U5      U6      U1      U2
-1       0       U8      U7      U8      U7      U8      U7
-1      -1       U5      U6      U1      U2      U3      U4
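Table 4 maps directly onto a small look-up structure; the sketch below is an illustration with the vector names kept as strings, not the authors' implementation.

# Table 4 as a lookup: key = (flux_state, torque_state), value = the
# vector for sectors 1..6 (U7/U8 are the zero vectors).
VECTOR_TABLE = {
    (1, 1):   ["U2", "U3", "U4", "U5", "U6", "U1"],
    (1, 0):   ["U7", "U8", "U7", "U8", "U7", "U8"],
    (1, -1):  ["U6", "U1", "U2", "U3", "U4", "U5"],
    (-1, 1):  ["U3", "U4", "U5", "U6", "U1", "U2"],
    (-1, 0):  ["U8", "U7", "U8", "U7", "U8", "U7"],
    (-1, -1): ["U5", "U6", "U1", "U2", "U3", "U4"],
}

def select_vector(flux_state, torque_state, sector):
    # flux_state and torque_state come from the hysteresis comparators.
    return VECTOR_TABLE[(flux_state, torque_state)][sector - 1]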


III. PROPOSED FUZZY LOGIC CONTROLLER

Fuzzy logic control is one of the controllers in the artificial intelligence techniques. Fig. 4 shows the schematic model of
fuzzy-based DTC for an IMD. In this work a Mamdani-type FLC is used. The DTC of an IMD using a PI-controller-based SR (speed
regulator) requires a precise mathematical model of the system and appropriate PI gain values to achieve a high-performance
drive; therefore, an unexpected change in load conditions would produce overshoot, oscillation of the IMD speed, long
settling time, high torque ripple and high stator flux ripple. To overcome this problem, a fuzzy control rule look-up table is designed
from the torque response of the DTC of the IMD. According to the torque error and the change in torque error, the
proportional gain values are adjusted on-line [8].

The fuzzy controller is characterized as follows:
1) Seven fuzzy sets for each input and output variables,
2) Fuzzification using continuous universe of discourse,
3) Implication using Mamdani's min operator,
4) De-fuzzification using the centroid method.

International Journal of Engineering Research and General Science Volume 2, Issue 3, April-May 2014
ISSN 2091-2730

338 www.ijergs.org


Figure 4. Proposed Structure of FLC Based Direct Torque Control.

Fuzzification: the process of converting a numerical variable (real number) to a linguistic variable (fuzzy number) is
called fuzzification.

De-fuzzification: the rules of the FLC generate the required output as a linguistic variable (fuzzy number); according to real-world
requirements, the linguistic variables have to be transformed to a crisp output (real number).

Database: the database stores the definitions of the membership functions required by the fuzzifier and defuzzifier [10].

A. Fuzzy Variables

The crisp variables, the torque error and the change in torque error, are converted into fuzzy variables Te(k) and ΔTe(k) that
can be identified by the level of membership in the fuzzy set. The fuzzy sets are defined with triangular membership
functions.

B. Fuzzy Control Rules

There are two input variables, and each input variable has seven linguistic values, so
7x7 = 49 fuzzy control rules are used in the fuzzy reasoning, as shown in Table 5; the flowchart of the FLC is shown in Fig. 6.



Figure 5. Fuzzy membership functions of input variables (a) torque error, (b) change in torque error, and (c) output variable.



Figure 6. Flowchart of Fuzzy logic controller



TABLE 5. FUZZY LOGIC CONTROL RULES

ΔTe \ Te   NL   NM   NS   ZE   PS   PM   PL
NL         NL   NL   NL   NL   NM   NS   ZE
NM         NL   NL   NL   NM   NS   ZE   PS
NS         NL   NL   NM   NS   ZE   PS   PM
ZE         NL   NM   NS   ZE   PS   PM   PL
PS         NM   NS   ZE   PS   PM   PL   PL
PM         NS   ZE   PS   PM   PL   PL   PL
PL         ZE   PS   PM   PL   PL   PL   PL

A FLC converts a linguistic control strategy into an automatic control strategy, and the fuzzy rules are constructed from expert
knowledge or an experience database. First, the torque error Te(k) and the change in torque error ΔTe(k) are taken as the
input variables of the FLC. The output variable of the FLC is the control change in torque ΔTe.
To convert these numerical variables into linguistic variables, the following seven fuzzy levels or sets are chosen: NL (negative
large), NM (negative medium), NS (negative small), ZE (zero), PS (positive small), PM (positive medium), and PL (positive large), as
shown in Fig. 5.
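A minimal sketch of such a Mamdani controller is given below; the universe of discourse [-1, 1], the triangular half-width, and the weighted-average (height) defuzzification are simplifying assumptions made for the sketch (the paper uses the centroid method).

import numpy as np

LEVELS = ["NL", "NM", "NS", "ZE", "PS", "PM", "PL"]
CENTERS = dict(zip(LEVELS, np.linspace(-1.0, 1.0, 7)))  # assumed universe

def tri(x, c, w=1.0 / 3.0):
    # Triangular membership centered at c with half-width w (assumed).
    return max(0.0, 1.0 - abs(x - c) / w)

# RULES[i] gives the output levels for torque-error level i over the
# seven change-in-error levels, as in Table 5.
RULES = [
    "NL NL NL NL NM NS ZE", "NL NL NL NM NS ZE PS", "NL NL NM NS ZE PS PM",
    "NL NM NS ZE PS PM PL", "NM NS ZE PS PM PL PL", "NS ZE PS PM PL PL PL",
    "ZE PS PM PL PL PL PL",
]

def fuzzy_output(err, derr):
    num = den = 0.0
    for i, li in enumerate(LEVELS):
        for j, lj in enumerate(LEVELS):
            w = min(tri(err, CENTERS[li]), tri(derr, CENTERS[lj]))  # Mamdani min
            c = CENTERS[RULES[i].split()[j]]
            num += w * c  # weighted average of rule-consequent centers
            den += w
    return num / den if den else 0.0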

IV. SIMULATION AND RESULTS

The conventional and proposed DTC MATLAB models were developed for a 3 hp IMD. The simulation results of the conventional
and proposed DTC for forward motoring operation are shown in Fig. 7 and Fig. 8; they show the stator currents, stator flux, developed
torque at no load and full load, speed, and the stator dq-axis flux.











Figure 7. Simulation results of conventional DTC: Stator Currents, Stator flux, Electromagnetic load torque of 30 N.m is applied at 0.6 sec and removed at 0.85 sec,
rotor Speed from 0 to 1200rpm, and stator flux dq-axis of IMD.












Figure 8. Simulation results of proposed DTC: Stator Currents, Stator flux, Electromagnetic load torque of 30 N.m is applied at 0.6 sec and removed at 0.85 sec, rotor
Speed from 0 to 1200rpm, and Stator flux dq-axis of IMD.

V. ACKNOWLEDGMENT
We take this opportunity to express our deepest gratitude and appreciation to all those who have helped us directly or
indirectly towards the successful completion of this paper.





VI. CONCLUSION

In this paper, an effective control technique is presented for direct flux and torque control of a three-phase IMD. In the
proposed control technique the PI controller regulates the speed of the IMD and the fuzzy logic controller reduces the stator flux and
torque ripples. A decoupled space vector control between the stator flux and electromagnetic torque hysteresis controllers is proposed
for generating the pulses for the VSI. Two independent torque and flux hysteresis band controllers are used in order to control the
limits of the torque and flux. Simulation results in MATLAB/SIMULINK are presented for both the conventional and proposed DTC
of the three-phase IMD; the proposed control technique is superior, giving good speed regulation and low stator flux linkage and torque
ripples under transient and dynamic operating conditions. The main advantage is the improvement of the torque and flux ripple
characteristics in the low-speed region, which allows motor operation with minimum switching loss and noise.

REFERENCES:

[1] M. Depenbrock, "Direct self-control (DSC) of inverter-fed induction machine," IEEE Trans. on Power Electronics, volume 22, no. 5, pp. 820-827, September/October 1986.

[2] Tang L., Zhong L., Rahman M. F., Hu Y., "A novel direct torque controlled interior permanent magnet synchronous machine drive with low ripple in flux and torque and fixed switching frequency," IEEE Transactions on Power Electronics, 19(2), pp. 346-354, 2004.

[3] I. Takahashi and Y. Ohmori, "High-performance direct torque control of an induction motor," IEEE Trans. Ind. Appl., vol. 25, no. 2, pp. 257-264, 1989.

[4] P. Vas, Sensorless Vector and Direct Torque Control, Oxford University Press, 1998.

[5] C. F. Hu, R. B. Hong, and C. H. Liu, "Stability analysis and PI controller tuning for a speed sensorless vector-controlled induction motor drive," 30th Annual Conference of IEEE Ind. Electronics Society (IECON), vol. 1, 2-6 Nov. 2004, pp. 877-882.

[6] M. N. Uddin, T. S. Radwan, and M. A. Rahman, "Performance of fuzzy-logic-based indirect vector control for induction motor drive," IEEE Trans. Ind. Appl., vol. 38, no. 5, pp. 1219-1225, September/October 2002.

[7] B. K. Bose, Modern Power Electronics and AC Drives, Prentice Hall, 2006.

[8] F. Blaschke, "The principle of field orientation as applied to the new TRANSVECTOR closed loop control system for rotating field machines," Siemens Review XXXIX, (5), pp. 217-220, 1972.

[9] D. Casadei, G. Grandi, G. Serra, A. Tani, "Effects of flux and torque hysteresis band amplitude in direct torque control of induction machines," IEEE-IECON-94, 1994, pp. 299-304.

[10] Suresh Mikkili, A. K. Panda, "PI and Fuzzy Logic Controller based 3-phase 4-wire Shunt Active Filter for mitigation of Current harmonics with Id-Iq Control Strategy," Journal of Power Electronics (JPE), vol. 11, no. 6, Nov. 2011.

[11] R. Krishnan, Electric Motor Drives: Modeling, Analysis, and Control, Pearson Education, First Indian Reprint, 2003.

[12] P. C. Krause, Analysis of Electric Machinery, McGraw-Hill Book Company, 1986.







A Review on Multi Sensor Image Fusion Techniques
Priyanka Chaudhari¹, Prof. M. B. Chaudhari², Prof. S. D. Panchal²

¹Research Scholar (ME), CSE, Government Engineering College, Gandhinagar, Gujarat
E-mail: priyanka07.chaudhari@gmail.com

ABSTRACT - Most Earth observation satellites are not able to acquire high spatial and high spectral resolution data simultaneously
because of design or observational constraints. To overcome such limitations, image fusion techniques are used. Image fusion is the
process of combining different satellite images on a pixel-by-pixel basis to produce fused images of higher value. The value adding is
meant in terms of information extraction capability, reliability and increased accuracy. The objective of this paper is to describe the basics
of image fusion and various pixel-level image fusion techniques, and to evaluate and assess the performance of these fusion algorithms.

Keywords -Image Fusion, Pixel Level, Multi-sensor, IHS, PCA, Multiplicative, Brovey, DCT, DWT.
INTRODUCTION
Image fusion is the process of combining two different images acquired by different sensors or by a single sensor. The output image
contains more information than the input images and is more suitable for human visual perception or for machine perception. The objective of
an image fusion scheme is to extract all the useful information from the source images.



Figure 1.1 Pre-processing of image fusion [1].

Image fusion is applicable in different fields: defense systems, remote sensing and geosciences, robotics and industrial
engineering, and medical imaging. The goal of image registration is to find a transformation that aligns one image to another. In image
registration, one dataset is regarded as the reference data and the other as the sensed data; the sensed data is matched relative to the reference
data. This is image registration at a very basic level.
Image re-sampling [2] is the process of producing a new image of a different size. Increasing the size is called up-sampling; decreasing the size is called down-sampling. Note that the spatial resolution does not
change after the re-sampling procedure, whether up-sampling or down-sampling. In multi-sensor image fusion, the images of the same scene
come from different sensors of different resolutions. In multi-focus image fusion, images of the same scene coming from the same
sensor are combined to produce an image in which all the objects are in focus.

Pohl & Genderen (1998) present three types of image fusion levels: pixel, feature, and decision. In this paper, we are only
concerned with pixel-level fusion.



Figure 1.2 Level of Image Fusion [3].

Image Fusion Techniques

Image fusion techniques are classified into several categories, which are described below.

Figure 1.3 The categorization of pixel level image fusion techniques [4].

IHS (Intensity, Hue, Saturation)

IHS is a color space: intensity relates to the total amount of light that reaches the eye, hue is defined as the predominant wavelength of
a color, and saturation is defined as the total amount of white light in a color.

Steps
1. First convert the RGB image into an intensity (I) component and two spectral components (v1, v2):

   [ I  ]   [  1/3     1/3     1/3   ] [ R ]
   [ v1 ] = [ -√2/6   -√2/6   2√2/6  ] [ G ]
   [ v2 ]   [  1/√2   -1/√2    0     ] [ B ]

   from which the hue (H) and saturation (S) components are obtained.

2. Replace I by the high-resolution (panchromatic) image I'.

3. Reverse the IHS transform, converting the IHS components back into RGB colors:


   [ R' ]   [ 1   -1/√2    1/√2 ] [ I' ]
   [ G' ] = [ 1   -1/√2   -1/√2 ] [ v1 ]
   [ B' ]   [ 1    √2      0    ] [ v2 ]
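A compact sketch of the three steps, using the forward and reverse matrices above (the PAN band would normally be histogram-matched to I first), might look as follows; all names are illustrative.

import numpy as np

SQ2 = np.sqrt(2.0)
M = np.array([[1/3.0, 1/3.0, 1/3.0],
              [-SQ2/6.0, -SQ2/6.0, 2.0*SQ2/6.0],
              [1.0/SQ2, -1.0/SQ2, 0.0]])
M_INV = np.array([[1.0, -1.0/SQ2,  1.0/SQ2],
                  [1.0, -1.0/SQ2, -1.0/SQ2],
                  [1.0,  SQ2,      0.0]])

def ihs_fuse(rgb, pan):
    # rgb: (H, W, 3) up-sampled MS image; pan: (H, W) high-resolution image.
    iv = np.tensordot(rgb, M.T, axes=1)        # per-pixel [I, v1, v2]
    iv[..., 0] = pan                           # step 2: replace I with PAN
    return np.tensordot(iv, M_INV.T, axes=1)   # step 3: reverse IHS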


The merit of IHS is its simplicity and high sharpening ability: it separates the spatial information as an intensity (I) component from the
spectral information represented by the hue (H) and saturation (S) components. Its demerit is that it only processes three
multispectral bands and causes color distortion [5].

Figure 1.4 Block Diagram of the IHS fusion method [6].

PCA (PRINCIPAL COMPONENT ANALYSIS)

PCA maintains image clarity; its spectral information loss is slightly lower than that of the IHS fusion method.
Steps
1. Produce column vectors from the input images.
2. Calculate the covariance matrix of the two column vectors formed in step 1.
3. Calculate the eigenvalues and eigenvectors of the covariance matrix.
4. Normalize the column vectors.
5. The normalized eigenvector acts as the weight values, which are multiplied with each pixel of the input images.
6. The sum of the two scaled matrices is the fused image matrix.



Figure 1.5 Block Diagram of the PCA fusion method [7].

The merit of PCA is that it transforms a number of correlated variables into uncorrelated variables; its demerit is that, as a spatial-domain
fusion method, it may produce spectral degradation.
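The six steps can be sketched as follows, assuming two equally sized grayscale source images; the absolute value on the eigenvector is a small robustness convenience added for the sketch.

import numpy as np

def pca_fuse(img1, img2):
    # Steps 1-2: column vectors and their 2x2 covariance matrix.
    v = np.stack([img1.ravel(), img2.ravel()]).astype(float)
    cov = np.cov(v)
    # Step 3: eigenvalues and eigenvectors of the covariance.
    vals, vecs = np.linalg.eigh(cov)
    # Steps 4-5: the principal eigenvector, normalized so its entries
    # sum to one, supplies the fusion weights.
    w = np.abs(vecs[:, np.argmax(vals)])
    w = w / w.sum()
    # Step 6: weighted sum of the two source images.
    return w[0] * img1 + w[1] * img2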

Multiplicative

The algorithm is derived from the four-component technique described by Crippen (1989). Of the four possible arithmetic methods
for combining an intensity image with a chromatic image (addition, subtraction, division, and multiplication), only
multiplication is unlikely to distort the color.


The merit of this method is that it is straightforward and simple [8]. The multiplicative algorithm can be used to merge PAN and MS images,
although special attention has to be given to color preservation. This method produces spectral bands of a higher correlation, which
means that it does alter the spectral characteristics of the original image data. The demerit is that the resulting image does not retain the
radiometry of the input multispectral image.

Red   = LR Band1 * HR Band1
Green = LR Band2 * HR Band2
Blue  = LR Band3 * HR Band3   [9]
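A one-line sketch of the band-wise product, assuming the MS image has already been resampled to the PAN grid:

import numpy as np

def multiplicative_fuse(ms, pan):
    # ms: (H, W, 3) low-resolution multispectral image, pan: (H, W)
    # high-resolution band; each output band is LR_band * HR_band.
    return ms.astype(float) * pan[..., None]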

Brovey

The Brovey transformation overcomes the demerits of the multiplicative method. Brovey is also called the color normalization
transform because it involves a red-green-blue (RGB) color transform method.

The merit of Brovey is that it holds the corresponding spectral feature of each pixel and transforms all the brightness information into a
panchromatic image of high resolution. The Brovey transform should not be used where preserving the original scene radiometry matters. It is good for
producing RGB images with a higher degree of contrast in the low and high ends of the image histogram and for producing visually
appealing images.

Red   = (Band1 / Band n) * High Resolution Band
Green = (Band2 / Band n) * High Resolution Band
Blue  = (Band3 / Band n) * High Resolution Band
where the high resolution band = PAN [9].
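A sketch of the Brovey computation is given below; "Band n" is interpreted here as the sum of the MS bands (a common choice), and the small eps guards against division by zero.

import numpy as np

def brovey_fuse(ms, pan, eps=1e-9):
    # Each band is scaled by its share of the total MS intensity,
    # then modulated by the high-resolution PAN band.
    total = ms.sum(axis=-1, keepdims=True).astype(float) + eps
    return ms / total * pan[..., None]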

DCT (DISCRETE COSINE TRANSFORM)

The 2D discrete cosine transform of an image or 2D signal x(n1, n2) of size N1 x N2 is defined as

X(k1, k2) = α(k1) α(k2) Σ_{n1=0}^{N1-1} Σ_{n2=0}^{N2-1} x(n1, n2) cos[π(2n1+1)k1 / (2N1)] cos[π(2n2+1)k2 / (2N2)]

for 0 ≤ k1 ≤ N1-1 and 0 ≤ k2 ≤ N2-1,

where [10]

α(k1) = √(1/N1) for k1 = 0, and √(2/N1) for 1 ≤ k1 ≤ N1-1
α(k2) = √(1/N2) for k2 = 0, and √(2/N2) for 1 ≤ k2 ≤ N2-1.



k1 and k2 are discrete frequency variables; n1 and n2 are pixel indices.

The 2D inverse discrete cosine transform is defined as

x(n1, n2) = Σ_{k1=0}^{N1-1} Σ_{k2=0}^{N2-1} α(k1) α(k2) X(k1, k2) cos[π(2n1+1)k1 / (2N1)] cos[π(2n2+1)k2 / (2N2)]

for 0 ≤ n1 ≤ N1-1 and 0 ≤ n2 ≤ N2-1.

A merit of the DCT is that it provides excellent results in the spectral domain [11]; however, it is complex and time-consuming, which
makes it hard to use in real-time applications.
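For illustration, the orthonormal 2D DCT pair is available in SciPy, and a simple magnitude-based fusion rule (an assumption for the sketch, not a rule prescribed above) can be applied in the transform domain:

import numpy as np
from scipy.fft import dctn, idctn

def dct_fuse(img1, img2):
    # Orthonormal 2D DCT-II ("ortho" matches the alpha(k) terms above).
    A = dctn(img1.astype(float), norm="ortho")
    B = dctn(img2.astype(float), norm="ortho")
    # Illustrative rule: keep, per coefficient, the larger-magnitude one.
    return idctn(np.where(np.abs(A) >= np.abs(B), A, B), norm="ortho")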

DISCRETE WAVELET TRANSFORM (DWT)

The wavelet transform contains the low-high bands, the high-low bands and the high-high bands of the image at different scales. A
fusion rule is then selected, and in this way the fusion takes place at all resolution levels. A typical scheme is (a sketch follows the list):

1. Generate one PAN image for each MS band, histogram-matched to that band.
2. Apply one of the DWTs described above to both the MS and the new PAN images.
3. Add the detail images from the transformed PAN images to those of the transformed MS images.
4. Perform the inverse transform on the MS images with the added PAN detail; alternatively, the resulting wavelet planes can be added
   directly to the MS image, in which case no inverse transform is needed [12].
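A minimal sketch of steps 2-4 using PyWavelets (wavelet and decomposition level chosen arbitrarily) is given below; it fuses one MS band at a time.

import numpy as np
import pywt

def dwt_fuse(ms_band, pan, wavelet="db2", level=2):
    # Step 2: transform both images.
    c_ms = pywt.wavedec2(np.asarray(ms_band, float), wavelet, level=level)
    c_pan = pywt.wavedec2(np.asarray(pan, float), wavelet, level=level)
    # Step 3: add the PAN detail sub-bands to the MS detail sub-bands,
    # keeping the MS approximation (the spectral content).
    fused = [c_ms[0]]
    for ms_d, pan_d in zip(c_ms[1:], c_pan[1:]):
        fused.append(tuple(m + p for m, p in zip(ms_d, pan_d)))
    # Step 4: inverse transform.
    return pywt.waverec2(fused, wavelet)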

An advantage of the DWT fusion method is that it may outperform the standard fusion methods in terms of minimizing spectral distortion; it also
provides a better signal-to-noise ratio than the pixel-based approach. A disadvantage is that in this method the final fused image has a lower spatial
resolution.

CONCLUSION:

The review shows that the suitable selection of a proper pixel-level fusion algorithm depends on the application characteristics. The combination of
existing fusion methods and the further development of new techniques are expected to guide and improve fusion performance, making the result
more appropriate for human visual perception or for machine perception; choosing a suitable method will improve the quality of the image, as
determined by visual analysis and quantitative analysis. DCT combined with filters that have not yet been used in image fusion could be explored to
improve quality.

REFERENCES:

[1] R. J. Sapkal, S. M. Kulkarni, "Image Fusion based on Wavelet Transform for Medical Application," International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, Vol. 2, Issue 5, pp. 624-627, September-October 2012.

[2] Nisthula P, Mr. Yadhu R. B, "A Novel Method To Detect Bone Cancer Using Image Fusion And Edge Detection," International Journal Of Engineering And Computer Science, ISSN: 2319-7242, Volume 2, Issue 6, June 2013.

[3] H. B. Mitchell, Image Fusion: Theories, Techniques and Applications, Springer-Verlag Berlin Heidelberg, ISBN 978-3-642-11215-7, 2010.

[4] Nemir Al-Azzawi and Wan Ahmed K. Wan Abdullah, "Medical Image Fusion Schemes using Contourlet Transform and PCA Based," Biomedical Electronics Group, Universiti Sains Malaysia, Penang, Malaysia, June 2011.

[5] Wen Dou, Yunhao Chen, "An Improved IHS Image Fusion Method With High Spectral Fidelity," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B7, Beijing, 2008.

[6] Leila Fonseca, Laercio Namikawa, Emiliano Castejon, "Image Fusion for Remote Sensing Applications," National Institute for Space Research (INPE), Sao Paulo State University, Unesp, Brazil, Jun 24, 2011.

[7] Jagdeep Singh, Vijay Kumar Banga, "An Enhanced DCT based Image Fusion using Adaptive Histogram Equalization," International Journal of Computer Applications (0975-8887), Volume 87, No. 12, February 2014.

[8] Sascha Klonus, Manfred Ehlers, "Performance of evaluation methods in image fusion," 12th International Conference on Information Fusion, Seattle, WA, USA, July 6-9, 2009.

[9] Rohan Ashok Mandhare, Pragati Upadhyay, Sudha Gupta, "Pixel-Level Image Fusion Using Brovey Transform And Wavelet Transform," International Journal Of Advanced Research In Electrical, Electronics And Instrumentation Engineering, Vol. 2, Issue 6, June 2013.

[10] VPS Naidu, "Discrete Cosine Transform based Image Fusion Techniques," Journal of Communication, Navigation and Signal Processing, Vol. 1, No. 1, pp. 35-45, January 2012.

[11] Mandeep Kaur Sandhu, Ajay Kumar Dogra, "A Detailed Comparative Study Of Pixel Based Image Fusion Techniques," International Journal of Recent Scientific Research, Vol. 4, Issue 12, pp. 1949-1951, December 2013.

[12] Song Qiang, Wang Jiawen, Zhang Hongbin, "An Overview on Fusion of Panchromatic Image and Multispectral Image," IEEE, 2009.

















Performance Evaluation of Adders using LP-HS Logic in CMOS Technologies
Linet K¹, Umarani P¹, T. Ravi¹

¹Scholar, Department of ECE, Sathyabama University
E-mail: linetk2910@gmail.com

ABSTRACT - This paper presents a modified approach to the constant delay logic style, named LP-HS logic. The constant delay logic
style is examined against the LP-HS logic by analysis through simulation. It is shown that the proposed LP-HS logic has lower power,
delay and power delay product than the existing constant delay logic style. Addition is one of the fundamental operations of any digital
system. In this paper an 8-bit ripple carry adder and an 8-bit carry select adder are analysed using both CD logic and LP-HS logic. The
simulations were done using the HSPICE tool in 45nm, 32nm, 22nm and 16nm CMOS technologies, and the performance parameters of
power, delay and power delay product were compared. The adders using LP-HS logic are better in terms of power, delay and power
delay product when compared to the constant delay logic style.
Keywords - CMOS, MOSFET, VLSI, Power Consumption, Delay, Power Delay Product (PDP), Constant Delay Logic (CD logic)
INTRODUCTION
The need for high-performance devices is increasing day by day, and rapid growth in VLSI technology is enhancing these
features from generation to generation. The three most widely accepted parameters to measure the quality of a circuit, or to compare
various circuit styles, are area, delay and power. Advances in CMOS technology have led to improvements in performance in terms
of area, power and delay; there always exists a trade-off between area, power and delay in a circuit [2]. The power delay product is a
figure of merit for comparing logic circuit technologies or families [1]. Different types of logic are present in CMOS; the most
common classification is static versus dynamic [9], which is further divided into other subdivisions.
One of the newly developed logics is the constant delay logic style [7]. This high-performance, energy-efficient logic style has
been used to implement complicated logic expressions. In this paper some modifications have been made to the constant delay logic
style to reduce the power consumption and to improve the speed. The proposed technique is known as the LP-HS logic.

CONSTANT DELAY LOGIC STYLE
Designers of digital circuits often desire the fastest performance, which means that the circuit needs a high clock frequency. Due to
the continuous demand for increased operating frequency, energy-efficient logic styles are always important in VLSI. One of the efficient
logics under CMOS dynamic domino logic is feedthrough logic (FTL) [3][4][5]. Dynamic logic circuits are important
as they provide better speed and require fewer transistors when compared to static CMOS logic circuits. Feedthrough logic has
low dynamic power consumption and lower delay when compared to other dynamic logic styles [11][13][14].
To mitigate the problems associated with feedthrough logic, a new high-performance logic known as the constant delay (CD)
logic style has been designed. It outperforms other logic styles with better energy efficiency. This high-performance, energy-efficient
logic style has been used to implement complicated logic expressions. It exhibits a unique characteristic: the output is pre-evaluated
before the input from the preceding stage is ready [7]. The constant delay logic style, which is used for high-speed applications, is
shown in Fig 1.

Fig 1: Constant Delay Logic Style [7]
CD logic consists of two extra blocks when compared to feedthrough logic: the timing block (TB) and the
logic block (LB). The timing block consists of a self-reset technique and a window adjustment technique; this enables robust logic operation
with lower power consumption and higher speed. The logic block reduces unwanted glitches and also makes cascading of CD logic
feasible. The unique characteristic of this logic is that the output is pre-evaluated before the inputs from the preceding stage are ready.
An nMOS pull-down network is placed where the inputs are applied. Based on the logic implemented in the pull-down network we will

get the corresponding output. A buffer circuit implemented using CD logic is shown below; the expanded diagrams for the timing block
and the logic block are also shown in Fig 2.


Fig 2: Buffer Using CD Logic [7]
The chain of inverters acts as the local window technique and the NOR gate as the self-reset circuit. The length of the inverter
chain varies according to the circuit being designed; the prime aim of the inverter chain is to provide a delayed clock. The
contention problem, one of the disadvantages of feedthrough logic, is reduced with the help of this window adjustment. In
the self-reset circuit, one input of the NOR gate is the intermediate output node X and the other is the clock. The logic block
is simply a static inverter, as in the case of dynamic domino logic. Since the above circuit is a buffer, the nMOS pull-down network
consists of only one nMOS transistor.
The timing diagram for constant delay logic is shown in Fig 3.CD logic works under two modes of operation.

i. Predischarge mode (CLK=1)
ii. Evaluation mode (CLK=0)



Fig 3: Timing Diagram of CD Logic [7]

Predischarge mode happens when CLK is high and evaluation mode occurs when CLK is low. During predischarge mode, X
and Out are predischarged and precharged to GND and VDD respectively. During evaluation mode, three different conditions, namely
contention, C-Q delay and D-Q delay, take place in CD logic. Contention happens when IN = 1 for the entire evaluation
period; during this time a direct-path current flows from the pMOS to the PDN, X rises to a non-zero voltage level, and Out experiences a
temporary glitch. C-Q (clock-to-out) delay occurs when IN goes to 0 before CLK transitions low; at this time X rises to logic 1, Out
is discharged, and the delay is measured from CLK to Out. D-Q delay happens when IN goes to 0 after CLK transitions low;
during this time X initially enters contention mode and later rises to logic 1, and the delay is measured from IN to Out.
PROPOSED LP-HS LOGIC

The proposed LP-HS logic is derived from the existing constant delay logic. Compared to CD logic there are three
major differences in the LP-HS logic: the window adjustment technique is eliminated; the evaluation transistor is changed

to a pMOS transistor instead of an nMOS; and transistors M2 and M3 are added in parallel below the pull-down
network.
The proposed logic helps to reduce the power and delay, which in turn reduces the power delay product. The circuit diagram
for the proposed logic is shown in Fig 4.




Fig 4: Proposed LP-HS Logic

Transistors M0 and M1, whose gates are driven by CLK and by the output of the NOR gate, are connected in series; this
increases the resistance, which in turn helps to reduce the power. M4 acts as the evaluation transistor. The NOR gate which
behaves as the self-resetting logic is constituted by transistors M5, M6, M7 and M8; M5, M6 and M7, M8 are driven by CLK and
by the intermediate output node X. The IN values are applied to the nMOS pull-down network, which is constructed according to the circuit
being designed. Transistors M2 and M3 are connected in parallel and placed below the nMOS pull-down network; these
transistors help to reduce the power delay product. The gate of M2 is driven by the clock and M3 is at ground. Transistor M2 increases
the dynamic resistance of the pull-down network, which in turn helps to reduce the power consumption. Transistors M9 and M10
together form the static inverter, which is used to make cascading the logic more feasible.

The circuit works under two modes of operation.

i. Precharge mode (CLK=0)
ii. Evaluation mode (CLK=1)

Precharge mode occurs when the clock is low and evaluation mode happens when the clock is high. When the clock is low, transistor
M4 turns ON and provides a high value at node X, which in turn provides a low value at the output node OUT. When the clock is high,
transistor M2 turns ON and the nMOS pull-down network is evaluated and gives the output. During this time the transistor M0, whose
gate is driven by CLK, is in the OFF condition; due to this, the contention mode is eliminated in the evaluation condition, which in
turn allows the elimination of the window adjustment technique in the proposed logic. One of the reasons for the power and delay
reduction in the circuit is the elimination of the window adjustment technique. During the evaluation mode the pull-down network and
the transistor M2 are ON, which provides a high dynamic resistance that further reduces the power. Transistor M3 is in the always-ON
condition, which offers an easy discharge path to ground.




8 BIT RIPPLE CARRY ADDER
An 8-bit ripple carry adder is constructed by cascading full-adder (FA) blocks in series. One full adder is responsible for the addition of two binary
digits at any stage of the ripple carry; the carry-out of one stage is fed directly to the carry-in of the next stage. An 8-bit ripple carry
adder structure is shown in Fig 5, where x0-x7 and y0-y7 represent the two sets of inputs and C0 represents the carry input. The output sum and
carry are shown as S0-S7 and C7 respectively.



Fig 5: 8 Bit Ripple Carry Adder
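As a behavioural illustration of the ripple structure (plain bit-level Python, not the transistor-level HSPICE design evaluated in this paper):

def full_adder(a, b, cin):
    # One full-adder stage: sum and carry-out.
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(x, y, c0=0):
    # x, y: equal-length bit lists, LSB first (x0..x7 for 8 bits).
    s, c = [], c0
    for xi, yi in zip(x, y):
        si, c = full_adder(xi, yi, c)
        s.append(si)
    return s, c  # S0..S7 and the final carry C7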


SIMULATION RESULTS
The figures below show the output waveforms of the 8-bit ripple carry adder using both the existing and the proposed logic in
different nanometer CMOS technologies. V(30,31,32,33,34,35,36,37) and V(38,39,40,41,42,43,44,45) are the two input signals, and V(46)
is the carry input. The output sum and carry are represented by V(55,56,57,58,59,60,61,62) and V(54) respectively.

45nm CMOS TECHNOLOGY. Fig 6: Output of existing 8-bit RCA; Fig 7: Output of proposed 8-bit RCA.
32nm CMOS TECHNOLOGY. Fig 8: Output of existing 8-bit RCA; Fig 9: Output of proposed 8-bit RCA.

22nm CMOS TECHNOLOGY. Fig 10: Output of existing 8-bit RCA; Fig 11: Output of proposed 8-bit RCA.
16nm CMOS TECHNOLOGY. Fig 12: Output of existing 8-bit RCA; Fig 13: Output of proposed 8-bit RCA.

8 BIT CARRY SELECT ADDER
The concept of the carry select adder (CSA) is to compute alternative results in parallel and subsequently select the correct
result with a single- or multiple-stage hierarchical technique. In order to enhance its speed performance, the carry select adder
increases its area requirements. In carry select adders both sum and carry bits are calculated for the two alternative input carries, 0
and 1. Once the carry-in is delivered, the correct computation is chosen (using a MUX) to produce the desired output. Therefore,
instead of waiting for the carry-in to calculate the sum, the sum is correctly found as soon as the carry-in arrives. The time otherwise taken to
compute the sum is avoided, which results in a good improvement in speed. An 8-bit carry select adder structure is shown in
Fig 14, where x0-x7 and y0-y7 represent the two sets of inputs and C0 represents the carry input. The output sum and carry are shown as S0-S7 and C7
respectively.



Fig 14: 8 Bit Carry Select Adder
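The same idea can be sketched behaviourally, reusing ripple_carry_add from the previous sketch as the block adder; the 4-bit block size is an assumption for the sketch.

def carry_select_add8(x, y, c0=0, block=4):
    # Each block is computed for both possible carry-ins, and the
    # incoming carry acts as the MUX select.
    s, c = [], c0
    for i in range(0, 8, block):
        xb, yb = x[i:i + block], y[i:i + block]
        s0, cout0 = ripple_carry_add(xb, yb, 0)  # precomputed for cin = 0
        s1, cout1 = ripple_carry_add(xb, yb, 1)  # precomputed for cin = 1
        s += s1 if c else s0                     # the MUX selection
        c = cout1 if c else cout0
    return s, c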
SIMULATION RESULTS
The figures below show the output waveforms of the 8-bit carry select adder using both the existing and the proposed logic in
different nanometer technologies. V(2,3,4,5,6,7,8,9) and V(10,11,12,13,14,15,16,17) are the two input signals, and V(20) is the carry
input. The output sum and carry are represented by V(21,22,23,24,25,26,27,28) and V(39) respectively.

45nm CMOS TECHNOLOGY. Fig 15: Output of existing 8-bit CSA; Fig 16: Output of proposed 8-bit CSA.
32nm CMOS TECHNOLOGY. Fig 17: Output of existing 8-bit CSA; Fig 18: Output of proposed 8-bit CSA.

22nm CMOS TECHNOLOGY. Fig 19: Output of existing 8-bit CSA; Fig 20: Output of proposed 8-bit CSA.
16nm CMOS TECHNOLOGY. Fig 21: Output of existing 8-bit CSA; Fig 22: Output of proposed 8-bit CSA.

PERFORMANCE ANALYSIS

Here the performance analysis of power, delay and power delay product of the 8-bit RCA and the 8-bit CSA using CD logic as
well as LP-HS logic has been carried out, and the results are compared in the tables below.

Table 1: Power, delay and PDP analysis of the 8-bit RCA in different nanometer technologies

Technology   CD logic: Power (µW)  Delay (ps)  PDP (fJ)   LP-HS logic: Power (µW)  Delay (ps)  PDP (fJ)
45nm         77.10                 8.35        0.64       13.46                    8.30        0.11
32nm         58.15                 38.64       2.24       8.39                     35.35       0.30
22nm         88.04                 11.85       1.04       6.92                     21.21       0.15
16nm         8.52                  9.30        0.08       2.41                     17.86       0.04

Table 2: Power, delay and PDP analysis of the 8-bit CSA in different nanometer technologies

Technology   CD logic: Power (µW)  Delay (ps)  PDP (fJ)   LP-HS logic: Power (µW)  Delay (ps)  PDP (fJ)
45nm         164.30                38.96       6.40       44.92                    21.17       0.95
32nm         125.50                73.69       9.24       29.91                    67.61       2.02
22nm         207.60                27.46       5.70       21.77                    40.65       0.88
16nm         18.86                 42.11       0.79       5.61                     61.52       0.35

CONCLUSION

The concept of constant delay logic has been modified and a new logic, known as the LP-HS logic, has been developed. Adders were
designed using both the existing and the proposed logic, simulated in 45nm, 32nm, 22nm and 16nm CMOS technologies, and the
performance parameters power, delay and power delay product were compared. The simulations for the 45nm, 32nm and 22nm CMOS
technologies were carried out at 0.9 V, while the 16nm CMOS technology was simulated at 0.6 V. The operating frequency for all the
technologies was kept at 1 GHz.
From the results it is found that the power delay product is improved by 82.81% for the 8-bit RCA and 85.15% for the 8-bit
CSA using the proposed logic in 45nm CMOS technology. An improvement of 86.60% is found for the 8-bit RCA and 78.13% for the 8-bit
CSA using the proposed logic in 32nm CMOS technology. Similarly, improvements of 85.57% for the 8-bit RCA and 84.56% for the 8-bit
CSA are found using the proposed logic in 22nm CMOS technology. Finally, improvements of 50% and 55.69% are found for the 8-bit
RCA and the 8-bit CSA in 16nm CMOS technology.

REFERENCES:
[1] Anantha P. Chandrakasan, Samuel Sheng, Robert W. Brodersen (1992), "Low Power CMOS Digital Design," IEEE Journal of Solid-State Circuits, vol. 27, no. 4.
[2] Chetana Nagendra, Robert Michael Owens and Mary Jane Irwin (1994), "Power-Delay Characteristics of CMOS Adders," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 2, no. 3, pp. 377-381.
[3] Deepika Gupta, Nitin Tiwari, Sarin R. K. (2013), "Analysis of Modified Feedthrough Logic with Improved Power Delay Product," International Journal of Computer Applications, vol. 69, no. 5, pp. 214-219.
[4] Lakshmi M., Nareshkumar K., Sagara Pandu (2013), "Analysis and Implementation of Modified Feedthrough Logic for High Speed and Low Power Structures," International Journal of Computer Applications, vol. 82, no. 18, pp. 29-31.
[5] Laxmiprava Samal and Tejaswini R. Chowdri (2013), "Low Power Modified Feed-Through Logic Circuit for Ultra-low Voltage Arithmetic Circuits," International Journal of Emerging Technology and Advanced Engineering, vol. 3, no. 12, pp. 440-444.
[6] Neha Agarwal and Sathyajit Anand (2012), "Study and Comparison of VLSI Adders Using Logical Effort Delay Model," International Journal of Advanced Technology & Engineering Research, vol. 2, no. 6, pp. 10-12.
[7] Pierce Chuang, David Li, Manoj Sachdev (2013), "Constant Delay Logic Style," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 21, no. 3, pp. 554-565.
[8] Pierce Chuang, David Li, Manoj Sachdev (2009), "Design of a 64-Bit Low-Energy High-Performance Adder Using Dynamic Feedthrough Logic," in Proc. IEEE Int. Circuits Syst. Symp., pp. 3038-3041.
[9] Rajaneesh Sharma and Shekhar Verma (2011), "Comparative Analysis of Static and Dynamic CMOS Logic Design," IEEE International Conference on Computing & Communication Technologies, pp. 231-234.
[10] Saradindu Panda, Banerjee A., Maji B., Dr. Mukhopadhyay A. K. (2012), "Power and Delay Comparison between Different Types of Full Adder Circuits," International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, vol. 1, no. 3, pp. 168-172.
[11] Sauvagya Ranjan Sahoo and Kamala Kanta Mahapatra (2012), "Performance Analysis of Modified Feedthrough Logic for Low Power and High Speed," IEEE International Conference on Advances in Engineering, Science and Management, pp. 1-5.
[12] Sauvagya Ranjan Sahoo and Kamala Kanta Mahapatra (2012), "Design of Low Power and High Speed Ripple Carry Adder Using Modified Feedthrough Logic," International Conference on Communications, Devices and Intelligent Systems (CODIS), pp. 377-380.
[13] Sauvagya Ranjan Sahoo and Kamala Kanta Mahapatra (2012), "An Improved Feedthrough Logic for Low Power Design," 1st International Conference on Recent Advances in Information Technology (RAIT).
[14] Sauvagya Ranjan Sahoo and Kamala Kanta Mahapatra (2012), "Modified Circuit Design Technique for Feedthrough Logic," National Conference on Computing and Communication Systems (NCCCS), pp. 105-108.
[15] Uma R., Vidya Vijayan, Mohanapriya M., Sharon Paul (2012), "Area, Delay and Power Comparison of Adder Topologies," International Journal of VLSI Design & Communication Systems, vol. 3, no. 1, pp. 153-168.













Analysis of OFDM and OWDM System with various Wavelets Families
Sourabh Mahajan¹, Parveen Kumar², Anita Suman³

¹Assistant Professor, SSIET Dinanagar, PTU Jalandhar, Punjab, India
²Associate Professor, BCET Gurdaspur, PTU Jalandhar, Punjab, India
³Assistant Professor, BCET Gurdaspur, PTU Jalandhar, Punjab, India

ABSTRACT - To increase the data rate of wireless standards, orthogonal frequency division multiplexing (OFDM) is used; it is a
powerful method that uses an Inverse Fast Fourier Transform (IFFT) at the transmitter to modulate a high bit-rate signal onto a number of
sub-carriers. The problem with this technique is that it inherently requires a complex IFFT core. This paper delivers an examination of an
alternative method to OFDM, called Orthogonal Wavelet Division Multiplexing (OWDM), which
uses a Discrete Wavelet Transform (DWT) in place of the IFFT at the transmitter to produce the output and increases the
flexibility of the system. In this research we compare the bit error rate of OFDM and OWDM. OWDM is realized using
various wavelets of the Daubechies family. Since there are different ways of applying the Daubechies family, we
calculated the bit error rate using OFDM, then simulated the same setup using various Daubechies wavelets, and compared the results.
The simulations are done in MATLAB.

KEYWORD: OFDM, OWDM, IFFT, FFT, BER, SNR, FDM, DAB, DVB, WLAN, MMAC

INTRODUCTION
The concept of using parallel data transmission by means of frequency division multiplexing (FDM) was published in the mid-1960s [1-2].
OFDM can be simply defined as a form of multicarrier modulation scheme in which the carrier spacing is carefully selected so that each
subcarrier is orthogonal to the other subcarriers [1]. OFDM is a method widely used in wireless communication systems due to its high
data rate transmission capability with high bandwidth efficiency and its robustness to multi-path fading, without requiring
complex equalization techniques [3-4]. OFDM has been adopted in a number of wireless applications including Digital Audio Broadcasting
(DAB), Digital Video Broadcasting (DVB), and Wireless Local Area Network (WLAN) standards such as IEEE 802.11g and Long Term
Evolution (LTE) [5-6]. As is known, orthogonal signals can be separated at the receiver by correlation techniques; hence, inter-symbol
interference among channels can be rejected.

ORTHOGONALFREQUENCY DIVISION MULTIPLEXING (OFDM)
OFDM is of great interest to researchers and research laboratories all over the world. It has already been adopted for the new
wireless local area network standards IEEE 802.11a, High Performance LAN type 2 (HIPERLAN/2) and Mobile Multimedia Access
Communication (MMAC) systems, and it is also expected to be used for wireless broadband multimedia communications. Data rate is
what broadband is really about: the new standards allow bit rates of up to 54 Mbps. Such high rates require large bandwidth, thus
pushing carriers to values higher than the UHF band; for instance, IEEE 802.11a has frequencies allocated in the 5- and 17-GHz bands [8].



OFDM BLOCK DIAGRAM
Orthogonal Frequency Division Multiplexing is the MCM technique that is most widely accepted and most frequently used today. In an
OFDM system, the modulation and demodulation can be implemented easily by means of the IDFT and DFT operators. In such a system,
however, the input data bits are effectively windowed by a rectangular window, and the envelope of the spectrum takes the form of
sinc(ω), which creates rather high side lobes. This leads to rather high interference when the channel impairments cannot be fully
compensated. [9]
ORTHOGONALITY
In order to assure high spectral efficiency, the sub-channel waveforms must have overlapping transmit spectra. Nevertheless, to
enable simple separation of these overlapping sub-channels at the receiver they need to be orthogonal. Orthogonality is the property
that allows the signals to be transmitted perfectly over a common channel and detected without interference; loss of
orthogonality results in blurring between these information signals and degradation in communications. This is the result of the
symbol time being equal to the inverse of the carrier spacing. The sinc shape has a narrow main lobe with many side lobes that decay
slowly with the magnitude of the frequency difference away from the centre. Every carrier has a maximum at its centre frequency
and nulls evenly spaced with a frequency separation equal to the carrier spacing. [10]
DWT
The foundations of the DWT go back to 1976, when Croisier, Esteban, and Galand devised a method to decompose discrete-time signals.
Crochiere, Webber, and Flanagan did parallel work on coding of speech signals in the same year; they named their
analysis scheme sub-band coding. In 1983, Burt defined a technique very similar to sub-band coding and named it pyramidal
coding, which is also known as multiresolution analysis. Later, in 1989, Vetterli and Le Gall made some improvements to the sub-band
coding scheme, eliminating the redundancy in the pyramidal coding scheme. Complete coverage of the discrete
wavelet transform and the theory of multiresolution analysis can be found in a number of articles and books available on this
topic, and it is beyond the scope of this paper [11]. The discrete wavelet transform (DWT) delivers sufficient
information both for analysis and synthesis of the original signal, with a significant reduction in computation time. The DWT is
considerably easier to implement compared to the CWT. The basic concepts of the DWT are introduced in this section along with

its properties and the algorithms used to compute it; as in the previous chapters, examples are provided to aid in the interpretation
of the DWT. [11]
OWDM
OWDM is a modulation front-end that has been proposed as an alternative to OFDM. In DWT-OWDM, the modulation and
demodulation are realized by wavelets rather than by the Fourier transform [12]. The OFDM scheme built from IFFTs and FFTs has some
problems: OFDM suffers from ISI, which is frequently taken care of by adding a cyclic prefix longer than the
channel impulse response, but this may not always be possible. The ISI arises from the loss of orthogonality due to channel properties.

OWDM BLOCK DIAGRAM
OFDM requires time and frequency synchronization to achieve a low bit error rate; an offset between
the carrier frequency and the frequency of the local oscillator causes a large bit error rate. One alternative is the wavelet transform,
which has been proposed by many authors: it has a higher degree of side-lobe suppression, and the reduced loss of orthogonality
leads to less ISI and ICI. In wavelet OFDM, the FFT and IFFT are replaced by the DWT and IDWT respectively; in DWT-OWDM, the
modulation and demodulation are implemented by wavelets rather than by the Fourier transform. [7]
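A minimal sketch of such a DWT front-end is given below, using PyWavelets: the data symbols are treated as wavelet-domain coefficients and the inverse DWT plays the role of the IFFT. The sketch assumes real-valued symbols (for QAM, the I and Q rails can be processed separately), a db1 (Haar) wavelet, and a power-of-two block length so that the coefficient count matches exactly.

import numpy as np
import pywt

def owdm_modulate(symbols, wavelet="db1", level=3):
    # Learn the per-level coefficient sizes from a zero template, then
    # fill the coefficient slots with the data symbols.
    template = pywt.wavedec(np.zeros(len(symbols)), wavelet, level=level)
    coeffs, pos = [], 0
    for t in template:
        coeffs.append(np.asarray(symbols[pos:pos + len(t)], dtype=float))
        pos += len(t)
    return pywt.waverec(coeffs, wavelet)  # the IDWT replaces the IFFT

def owdm_demodulate(signal, wavelet="db1", level=3):
    # The forward DWT recovers the transmitted coefficients.
    return np.concatenate(pywt.wavedec(signal, wavelet, level=level))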




SIMULATION & RESULTS

The OWDM scheme is shown to be overall quite similar to Orthogonal Frequency Division Multiplexing but with some additional features
and value-added characteristics; the aim of this work is to examine the effect of different wavelets on the performance of the Orthogonal
Wavelet Division Multiplexing system.


Bit Error Rate vs. Signal to Noise Ratio for OFDM and OWDM (db1, db2, db3, db4)


OFDM is a dominant technique that uses an IFFT at the source to modulate a high bit-rate signal onto a number of carriers, but it is inherently inflexible and needs a complex IFFT core. To overcome this problem a new system is proposed that employs the flexible nature of the Discrete Wavelet Transform, using OWDM in place of OFDM.
The four wavelets of the Daubechies family used in OWDM (db1, db2, db3, db4) are examined with increasing order to determine which wavelet transform is the most suited for use in an AWGN channel, and to measure the performance in terms of variance and signal-to-noise ratio (SNR) over the AWGN channel in comparison with OFDM, providing a next-level examination of the new system across different wavelets. To compare the different wavelets, the buffered quadrature-modulated block (containing the same information for each trial) was passed through the different wavelet filters. The output from each filter was passed through the AWGN channel with decreasing signal-to-noise ratio (SNR) and then demodulated.
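As a hedged sketch of this experiment (the exact filters, modulation and channel settings of the paper are not reproduced; NumPy and PyWavelets are assumptions of this write-up), the same BPSK block can be sent through an IFFT-based OFDM chain and a db1 IDWT-based OWDM chain over AWGN and the bit error rates compared:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
N, snr_db = 1024, 6
bits = rng.integers(0, 2, N)
sym = 2.0 * bits - 1.0                       # BPSK mapping

def awgn(x, snr_db):
    # Add complex white Gaussian noise at the requested SNR.
    p = np.mean(np.abs(x) ** 2)
    sigma = np.sqrt(p / (2 * 10 ** (snr_db / 10)))
    return x + sigma * (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape))

# OFDM: IFFT modulates the block, FFT demodulates it.
rx_ofdm = np.fft.fft(awgn(np.fft.ifft(sym), snr_db))
ber_ofdm = np.mean((rx_ofdm.real > 0) != (bits == 1))

# OWDM: IDWT modulates (the two halves of the block act as the sub-bands),
# DWT demodulates; 'db1' can be swapped for db2, db3 or db4.
tx = pywt.idwt(sym[:N // 2], sym[N // 2:], 'db1')
cA, cD = pywt.dwt(awgn(tx, snr_db).real, 'db1')
ber_owdm = np.mean((np.concatenate([cA, cD]) > 0) != (bits == 1))
print(ber_ofdm, ber_owdm)
```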

Peak to Average power vs. Signal to Noise Ratio for OFDM and OWDM (db1, db2, db3, db4)
[Figure: BER vs SNR of OFDM and OWDM; x-axis: Signal To Noise Ratio (1-11); y-axis: Bit Error Rate (0-0.25); curves: ofdm, owdm-db1, owdm-db2, owdm-db3, owdm-db4]

[Figure: PAPR vs SNR of OFDM and OWDM; x-axis: Signal To Noise Ratio (1-11); y-axis: Peak To Average Power Ratio (2-8); curves: ofdm, owdm-db1, owdm-db2, owdm-db3, owdm-db4]

CONCLUSION
In this study a low-complexity technique, OWDM, is proposed to perform better than the earlier OFDM technology: OFDM uses large-size FFTs, which make the system complex, so the new technique, OWDM, is less demanding, and with the help of Daubechies wavelets the results are shown to be better. A set of simulations and assessments of different wavelet filters of OWDM against OFDM has been carried out. From these results it is recommended that the db1 wavelet (the first wavelet of the Daubechies family) is the most suited for OWDM because of its lower variance against channel noise, while db4 (the fourth wavelet of the Daubechies family) is the least suited because it has high variance. The results showed that there were OWDM configurations whose variance outperformed that of OFDM, and that the db1 wavelet achieved the best performance compared to the other wavelets db2, db3 and db4 as well as OFDM; we finally conclude that, in terms of both peak-to-average power ratio and BER versus SNR, the db1 wavelet is the most suited for OWDM compared to OFDM.

REFERENCES:
[1] R. W. Chang, "Synthesis of band-limited orthogonal signals for multichannel data transmission," Bell Syst. Tech. J., vol. 45, pp. 1775-1796, Dec. 1966.
[2] B. R. Saltzberg, "Performance of an efficient parallel data transmission system," IEEE Trans. Commun. Technol., vol. COM-15, pp. 805-813, Dec. 1967.
[3] R. Van Nee and R. Prasad, OFDM for Wireless Multimedia Communications. Artech House, 2000.
[4] A. R. Sheikh Bahai, B. R. Saltzberg, and M. Ergen, Theory and Applications of OFDM. Springer, 2004.
[5] Sobia Baig, Fazal-ur-Rehman, and M. Junaid Mughal, "Performance comparison of DFT, discrete wavelet packet and wavelet transforms in an OFDM transceiver for multipath fading channel," IEEE Communication Magazine, 2004.
[6] A. R. Lindsey, "Wavelet packet modulation theory," pp. 392-396, 1995.
[7] Rama Kanti and Dr. Manish Rai, "Comparative analysis of different wavelets in OWDM with OFDM for DVB-T," International Journal of Advancements in Research & Technology, vol. 2, issue 3, ISSN 2278-7763, March 2013.
[8] Anibal Luis Intini, "Orthogonal frequency division multiplexing for wireless networks," University of California Santa Barbara, December 2000.
[9] H. Umadevi and K. S. Gurumurthy, "OFDM technique for multi-carrier modulation (MCM) signaling," Journal of Emerging Trends in Engineering and Applied Sciences (JETEAS) 2(5): 787-794, ISSN: 2141-7016, Scholarlink Research Institute Journals, 2011.
[10] Abhishek Arun Dash, "OFDM systems and PAPR reduction techniques in OFDM systems," Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, 2006-2010.
[11] Robi Polikar, "The Wavelet Tutorial: the engineer's ultimate guide to wavelet analysis," index to series of tutorials on the wavelet transform, 2006.
[12] Mandeep Kaur and Vikramjeet Singh, "Analysis of DVB-T system using OWDM with various wavelet families," International Journal of Engineering Trends and Technology (IJETT), vol. 4, issue 4, April 2013.




An Approach to Global Gesture Recognition Translator
Apeksha Agarwal¹, Parul Yadav²
¹Amity School of Engineering & Technology, Lucknow, Uttar Pradesh, India
²Department of Computer Science and Engineering, Amity Lucknow, Lucknow, India
E-mail- pyadav@lko.amity.edu
Abstract - Hand gestures play a vital role in communication between people during their daily lives. The major use of hand gestures as a means of communication is found mostly in the form of sign languages. Sign language is a popular communication method used between deaf and dumb people, and a translator is definitely needed when a person wants to communicate with a deaf one; sign language is the only mode of communication between deaf/dumb and normal human beings. The major difficulty of sign language recognition stems from the fact that a variety of sign language sets exist in the world even for a single language such as English. No global gesture recognition translator has been proposed to overcome this difficulty, and therefore it is impossible for users of different sign language groups to understand each other. In this research paper, we propose an approach to a global sign language recognition translator.

Keywords - Master gesture (MG), Canonical frame, Sign Language (SL), Kohonen, Translator, Eigen, Degree of freedom (DOF).

INTRODUCTION
Human-Computer Interaction (HCI) is becoming increasingly important as computers' influence on our lives grows more significant [1]. With the advancement of computers, the already-existing HCI devices (the mouse and the keyboard, for example) no longer satisfy the increasing demands. Designers are working to make HCI faster, easier, and more natural-looking [2]. To achieve this, human-to-human interaction techniques are being introduced into the field of Human-Computer Interaction. One of the most fertile human-to-human interaction fields is the use of hand gestures. People use hand gestures mainly to communicate and to express ideas [3]. The importance of using hand gestures for communication becomes clearer when sign language is considered. Sign language is the fundamental communication method for people who suffer from hearing imperfections. A sign language is a collection of gestures, movements, postures, and facial expressions corresponding to letters and words in natural languages [4]. For an ordinary person to communicate with deaf people, an interpreter is usually needed to translate sign language into natural language and vice versa. In recent years, the idea of designing a GSL translator has become an attractive research area.
We can express our feelings and thoughts through gestures, and gestures can go beyond this point: hostility and enmity can be expressed during speech, and approval and emotion are also expressed by gestures [5]. The development of a user interface requires a good understanding of the structure of human hands in order to specify the kinds of postures and gestures. To clarify the difference between hand postures and gestures: a hand posture is a static form of hand pose, an example being the "stop" hand sign; it is also called a static gesture, or static recognition [6]. A hand gesture, on the other hand, is comprised of a sequence of static postures that form one single gesture presented within a specific time period; an example of such a gesture is the orchestra conductor applying many gestures to coordinate the concert. This is also called dynamic recognition, or a dynamic gesture. Some gestures have both static and dynamic characteristics, as in sign languages [7]. We can define a gesture as a meaningful physical movement of the fingers, hands, arms, or other parts of the body made with the purpose of conveying information or meaning for environment interaction. Gesture recognition needs a good interpretation of hand movement as effectively meaningful commands [15]. For a human-computer interaction (HCI) interpretation system there are two common approaches:

a. Data glove approaches: these methods employ mechanical or optical sensors attached to a glove that transforms finger flexions into electrical signals to determine the hand posture [16]. The data are collected by one or more data-glove instruments, which provide different measures of the joint angles of the hand and the degrees of freedom (DOF), containing data on the position and orientation of the hand used for tracking. However, this method requires a wearisome device with a load of cables connected to the computer, which hampers the naturalness of user-computer interaction.
b. Vision-based approaches: these techniques are based on how a person perceives information about the environment. They usually work by capturing an input image using camera(s). The architecture is generally divided into two parts, i.e. feature extraction and recognition [17]. The recognizer uses and implements machine-learning algorithms; the Artificial Neural Network (ANN) and the Hidden Markov Model (HMM) are the most common tools used. In order to create the database for the

gesture system, the gestures should be selected with their relevant meanings, and each gesture may contain multiple samples to increase the accuracy of the system.

2. RELATED WORK
Work on automatic recognition of sign language became visible in the 90s. Research on hand gestures has identified two major classes. The first category relies on electromechanical devices (sensors) used to observe different gesture parameters such as hand position, angle, and the location of the fingertips, giving numerical values [1]; systems that use such devices are called wearable or glove-based systems. To avoid this inconvenience, the second class exploits image-processing techniques to create visual-based hand gesture recognition systems. Visual gesture recognition systems are again classified into two categories. The first depends on specially designed gloves with visual markers, called vision-based glove-marker gestures (VBGMG), which help in determining the hand position and postures [2]; but using gloves and markers is limiting and does not provide the natural feeling required in human-computer interaction systems, and coloured gloves also increase the processing complexity. The alternative kind of visual-based gesture recognition can be called natural visual-based gesture (NVBG), meaning visual-based gesture recognition without gloves and markers [3]; this type tries to provide the ultimate convenience and naturalness by using images of bare hands. Many researchers have been trying hard to introduce hand gestures into the human-machine interaction field. Year 1992: Charayaphan and Marble developed a way to understand American Sign Language using image processing. Previous work on sign language recognition focused primarily on finger-spelling recognition and isolated sign recognition; some work uses neural networks.
For the work to apply to continuous ASL recognition, the problem of explicit temporal segmentation must be solved; HMM-based approaches take care of this problem implicitly. Mohammed Waleed Kadous used Power Gloves to recognize a set of 95 isolated Auslan signs with 80% accuracy, with an emphasis on computationally inexpensive methods. There is very little previous work on continuous ASL recognition. Thad Starner and Alex Pentland used a view-based approach to extract two-dimensional features as input to HMMs, with a 40-word vocabulary and a strongly constrained sentence structure consisting of a pronoun, verb, noun, adjective, and pronoun in sequence. Annelies Braffort describes ARGo, an architecture for LSF recognition based on linguistic principles and HMMs, but provides limited experimental results. Yanghee Nam and KwangYoen Wohn used three-dimensional data as input to HMMs for continuous recognition of a very small set of gestures.
Year 1996: Grobel and Assan used HMMs to recognize isolated signs, and Braffort presented a recognition system for sentences of French Sign Language. Year 1997: Vogler and Metaxas used computer vision methods and HMMs. Year 1998: Yoshinori, Kang-Hyun, Nobutaka, and Yoshiaki used coloured gloves and showed that with coloured gloves, faster and easier hand feature extraction can be done than with no gloves at all. Year 1998: Liang and Ouhyoung developed continuous recognition of Taiwanese Sign Language using HMMs with a vocabulary between 71 and 250 signs, using a data glove as the input device; however, their system required the gestures performed by the signer to be slow so that word boundaries could be detected. Year 1999: Yang and Ahuja developed dynamic gesture recognition, using skin-colour detection and transforms of skin regions in motion to figure out the motion paths of ASL signs; using a neural network they recognized 40 ASL gestures with a success rate of around 96%, but their technique has a very high computational cost whenever wrong skin regions are detected. Year 2000: Halawani used a subtractive clustering algorithm and a least-squares estimator to identify a fuzzy inference system [7]. Taniguchi, Arita, and Igi used an appearance-based eigen method to detect hand shapes; using a clustering technique they generated clusters of hands in an eigenspace and achieved accuracy of around 93%. Year 2000: Symeonidis used orientation histograms to detect static hand gestures, specifically a subset of American Sign Language. Year 2003: Nielsen and Tezera [9] developed a vision-based hand posture detection and recognition approach.
Later, a project at Oxford University used a red wrist band to recognize the hand in an image, and fast template matching was used for pattern recognition with an accuracy of 99.1% [10].


3. PROPOSED METHOD

The proposed method uses a 2-D image captured from a normal digital camera or webcam as input; an example of an input image is shown in Fig. 1. Using the watch detection method, the watch is detected in the image and the image is preprocessed. If a watch is not detected then different methods are used; if a watch is detected then our proposed method can be applied to the preprocessed image. The image is then matched using pattern recognition techniques, after which quantization of the image is done. The tree technique, described in detail below, is used to reduce time complexity and unnecessary matchings, and accordingly a winning neuron is chosen. The output can be audio or textual [10].


3.1 WATCH DETECTION

For detecting the watch in the image, three sets, namely A, B, and C, are taken.

Set A = {all pixels of the image}
Set B = {all skin-colour pixels}
Set C = {wrist-watch-colour pixels}
Then,
Preprocessed image = A - (B ∪ C)
Wrist-watch pixels can be detected because the lines passing through the adjacent pixels of skin colour and of a different colour (the watch) will be nearly parallel to each other, as shown below [10].
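A small illustrative sketch of these set operations using NumPy boolean masks is given below; the colour thresholds here are hypothetical placeholders, not values from the paper:

```python
import numpy as np

img = np.zeros((120, 160, 3), dtype=np.uint8)  # stand-in for a camera frame

# B: skin-coloured pixels; C: wrist-watch-coloured pixels (toy RGB rules).
B = (img[..., 0] > 95) & (img[..., 1] > 40) & (img[..., 2] > 20)
C = (img[..., 0] < 60) & (img[..., 1] < 60) & (img[..., 2] < 60)

# Preprocessed image = A - (B ∪ C): suppress the pixels that lie in B or C.
preprocessed = img.copy()
preprocessed[B | C] = 0
```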


FIG. 1 Wrist Watch


3.2 TREE TECHNIQUE

Many symbols are similar, differing only slightly, while some are totally different from others. In this technique we group visually similar symbols together, and the input image is matched with the group to which it is most likely to belong. This increases the chance of a correct match.


Fig.2 Master Gesture
Here we have divided the 26 alphabets into 9 sets with 3 or fewer gestures per set. The 3 gestures within a set are superimposed over each other to give a master gesture representing that set. The input is then matched with the 9 master gestures of all sets. The closest match with a master gesture provides assurance that the gesture lies in that particular set. The best case requires 2 comparisons for an output, the worst case 11 to 12 comparisons for 26 alphabets, and the average case 6-7 comparisons [10].
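A minimal sketch of this two-stage matching (master gestures first, then members of the winning set) follows; the random templates and squared-distance measure are placeholders for the paper's preprocessed gesture images, not its actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
sets = {k: rng.random((3, 32, 32)) for k in range(9)}    # 9 sets x 3 gestures
masters = {k: v.mean(axis=0) for k, v in sets.items()}   # superimposed masters

def match(inp):
    # Stage 1: closest master gesture (at most 9 comparisons).
    best_set = min(masters, key=lambda k: np.sum((inp - masters[k]) ** 2))
    # Stage 2: winning member within that set (<= 3 comparisons).
    members = sets[best_set]
    best = min(range(len(members)), key=lambda i: np.sum((inp - members[i]) ** 2))
    return best_set, best

print(match(sets[4][1]))  # likely (4, 1), since the input belongs to set 4
```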
We took four countries, namely India, Australia, Ireland and South Africa. The difficulty of SL recognition is aggravated by the fact that many varieties of SL sets exist in the world even for a single language such as English; each country has its own symbols and gestures, so it is impossible for users of different SL groups to understand each other. Defining a global SL translator system capable of recognizing and interpreting signs from different regions/countries will therefore be useful for deaf communities. Without this approach we would have to perform 104 comparisons for the gestures of the four countries, which is obviously a time-consuming process and would not be applicable in a real-time system; so we make use of the proposed method to reduce the number of comparisons and make the process real time.


3.3 ARCHITECTURAL DESIGN

Fig.4 Architectural Design
For every country, we know that SL gestures are different, so we are developing a language-platform recognizer. We consider four languages here. In 3 of the languages all alphabets are the same except 6 alphabets which are different; that means 20 alphabets are the same, and the remaining 6 different alphabets are given below:
G, H, K, L, P, Q

Fig.3 Signs Language Gestures
In three countries the gestures for these alphabets are the same, so we take one image in our data set for matching them; in one country there are different gestures for various alphabets, so for those alphabets we take two images, one representing the three countries in which the gesture is the same and one for the different gesture. We then have to compare these six alphabets against two images each, as discussed above. Further, we take up the concept of the master gesture: five master gestures can be made for fifteen alphabets, while five alphabets (I, V, W, Y, Z) remain so different that they cannot form an MG set.
The MG sets are:
1. A S T
2. C D O
3. E M N
4. R U X

5. J F B
So in total we have: five alphabets which are the same in all four countries but for which no master gesture exists; six alphabets which have two images in the dataset for their recognition; and fifteen alphabets in five different master gesture sets. The total of one hundred four gestures of four different countries can now be compared in:
Best case: 2 comparisons.
Worst case: 22 comparisons.
Average case: 12 comparisons.
3.4 WINNING NEURON

After the selection of the group in which the input lies comes the choice of the winning neuron. In the winning-neuron technique only one output, the one most similar in nature, is produced, using the Kohonen algorithm. The output can be audio or textual, whatever is desired.

4. MATCHING PROCESS


CONCLUSION

This research presents a new idea for the recognition of global sign languages with an attempt to minimize computational complexity. The proposed approach overcomes various limitations of previously used techniques such as gloves, multicoloured gloves, and sensor gloves. The method summarized in this paper is helpful in human-machine interfaces. By using the tree technique, time complexity is reduced, which results in a reduction in the number of comparisons.



FUTURE SCOPE

The methodology proposed in this paper will be implemented in our next research paper. Vision-based control and access can overcome several basic limitations of computer peripherals like the mouse and keyboard.

REFERENCES:
[1] Syed Atif Mehdi and Yasir Niaz Khan, "Sign language recognition using sensor gloves," Proceedings of the 9th International Conference on Neural Information Processing (ICONIP'02), Vol. 5.
[2] M. B. Waldron and S. Kim, "Isolated ASL sign recognition system for deaf persons," IEEE Transactions on Rehabilitation Engineering, 3(3):261-71, September 1995.
[3] T. Starner and A. Pentland, "Visual recognition of American Sign Language using Hidden Markov Models," Technical Report TR306, Media Lab, MIT, 1995.
[4] Kim, W. Jang and Z. Bien, "A dynamic gesture recognition system for the Korean Sign Language (KSL)," IEEE Trans. on Systems, Man, and Cybernetics, Vol. 26, No. 2, pp. 354-359, 1996.
[5] R.-H. Liang, "Continuous Gesture Recognition System for Taiwanese Sign Language," Ph.D. Thesis, National Taiwan University, Taiwan, R.O.C., 1997.
[6] Shin Han Yu, Chung-Lin Huang, Shih-Chung Hsu, Hung-Wei Lin, and Hau-Wei Wang, "Vision based continuous sign language recognition using product HMM," IEEE, 2011.
[7] Omar Al-Jarrah and Alaa Halawani, "Recognition of gestures in Arabic sign language," Elsevier Science, 2001.
[8] B. Lekhashri and A. Arun Pratap (2011), "Use of Motion-Print in Sign Language Recognition," Proceedings of the National Conference on Innovations in Emerging Technology-2011, Kongu Engineering College, Perundurai, Erode, Tamilnadu, India, 17 & 18 February 2011, pp. 99-102.
[9] Ray Lockton, Balliol College, Oxford University, "Hand Gesture Recognition Using Computer Vision."
[10] Sonal Singh and Alpika Tripathi, "Gesture recognition using wrist watch," International Journal of Scientific & Engineering Research.
[11] V. I. Pavlovic, R. Sharma, T. S. Huang, "Visual interpretation of hand gestures for human-computer interaction: a review," IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (1997) 677-695.
[12] Iwan Njoto Sandjaja and Nelson Marcos (2009), "Sign Language Number Recognition," 2009 Fifth International Joint Conference on INC, IMS and IDC.
[13] B. Lekhashri and A. Arun Pratap (2011), "Use of Motion-Print in Sign Language Recognition," Proceedings of the National Conference on Innovations in Emerging Technology-2011, Kongu Engineering College, Perundurai, Erode, Tamilnadu, India, 17 & 18 February 2011, pp. 99-102.
[14] Ray Lockton, Balliol College, Oxford University, "Hand Gesture Recognition Using Computer Vision."
[15] A. Braffort, "ARGo: An architecture for sign language recognition and interpretation," in P. A. Harling, A. D. N. Edwards (eds.), Progress in Gestural Interaction: Proceedings of Gesture Workshop '96, pp. 17-30, Springer, Berlin, New York, 1997.
[16] Ankit Chaudhary, J. L. Raheja, Karen Das, Sonia Raheja, "Intelligent Approaches to interact with Machines using Hand Gesture Recognition in Natural way," International Journal of Computer Science & Engineering Survey (IJCSES), Vol. 2, No. 1, Feb 2011.
[17] Rafiqul Zaman Khan and Noor Adnan Ibraheem, "Hand Gesture Recognition," International Journal of Artificial Intelligence & Applications (IJAIA), Vol. 3, No. 4, July 2012.

















A Comparative Study of high Resolution Weather Model WRF & ReGCM
Weather Model
Jyotismita Goswami¹, Alok Choudhury²
¹Scholar, Department of Computer Science & IT and Engineering, Assam Don Bosco University (Assam), India
²Assistant Professor, Department of Computer Science & IT and Engineering, Assam Don Bosco University (Assam), India
E-mail- jyotizgoswami09@gmail.com
Abstract: The science of numerical weather forecasting [1] is as old as the advent of ENIAC. Earth system models, or atmospheric models, are built on the basis of the interdependence among the prognostic variables and their effect on the atmosphere. These models have succeeded in predicting future weather conditions provided initial weather inputs are fed to the system. Remarkable progress has been seen in this field over the past 50 years, giving a clearer understanding of climate change, and it continues to grow with the invention of advanced, complex prediction methodologies. These models have found applications in a variety of fields including climate prediction, data assimilation, case studies, and theoretical and sensitivity studies. In this paper, a survey is made of the two most used weather models, WRF and RegCM: their evolution, followed by their architectures, applications and advantages.

Keywords: Numerical Weather Forecasting, Atmospheric Models, WRF, RegCM.

I. INTRODUCTION

Populations in economically developing nations like India depend extensively on climate for their welfare (e.g., agriculture, water resources, power generation, industry) and are likewise vulnerable to variability in the climate system. Weather forecasting deals with the methodologies providing timely and accurate weather forecasts, which is highly crucial for agriculture-based countries. Its origin dates to about the 19th century, when the great American meteorologist Cleveland Abbe concluded from his experiments that meteorology is essentially the application of hydrodynamics and thermodynamics to the atmosphere [3]. Climate models, both global and regional, are the primary tools that aid our understanding of the many processes that govern the climate system. These models [2] use differential equations and conservation laws (Fig 2), formulated from the factors governing the physical behaviour of the atmosphere, dividing the Earth into a 3D grid coordinate system (Fig 1). The interaction of the variables (wind components, surface pressure, temperature, mixture of cloud water, ice, snow, etc.) with the adjacent grid cells helps in the calculation of future atmospheric conditions.



Fig 1: Atmospheric model schematic

A. Model Types

1. Cloud-Resolving Models (CRMs)
2. Mesoscale Models
3. Numerical Weather Prediction (NWP) Models
4. Regional Climate Models (RCMs)
5. Global Circulation Models (GCMs)


The seasonal predictions made by these Earth models are generally computed using high performance computing architectures. Scientific simulations [12] are typically compute-intensive in nature: it can take days or weeks to obtain results if an ordinary single-processor system is used. For example, in predicting weather the amount of computation is so large that it could take an ordinary computer weeks if not months. To make a simulation more feasible the use of High Performance Computing (HPC) is essential. HPC is the use of supercomputers and complex algorithms to do parallel computing, i.e. to divide large problems into smaller ones and distribute them among computers so as to solve them simultaneously.

B. Important Issues to be Considered While Building Models

1. Purpose: the model should reflect its purpose, e.g. whether it is built for NWP, climate simulation, or climate and weather case and process studies.
2. Efficiency vs. accuracy: these two factors play a major role in prediction, so the model should utilize the available resources well to produce better results.
3. Domain and resolution: the area of interest (domain) must be large enough to protect the main region of study from boundary effects, and the resolution denotes the scale of features which can be simulated with the model. Generally, regional models offer higher resolution over smaller domains than global ones.


Fig 2: Climate Model Equations

The sequence of the process, starting from the observations obtained from radar and satellites through to the final predicted or simulated results, is a long chain when all the pieces are put together, as illustrated by the diagram (Fig 3).

Fig 3 From observations to model simulation/prediction

II. WRF MODEL
A. Definition.


The Weather Research and Forecasting (WRF) model is a mesoscale community model [15] developed jointly by the National Center for Atmospheric Research (NCAR) Advanced Research WRF (ARW) and the National Centers for Environmental Prediction's (NCEP) Non-hydrostatic Mesoscale Model (NMM). This dynamic model inherits all the enhanced features and dynamical cores of the above communities, making it a full-fledged end-to-end forecasting system. WRF software has been designed for real-time simulation of the atmosphere, air quality modeling, wildfire simulation, and advanced hurricane and tropical storm prediction, intensifying the bond between the research and operational forecasting communities.

B. Model Software Architecture

Software development and maintenance costs add a lot to the total cost of a large numerical simulation code. Moreover, the cost of computational resources is greatly reduced with a code able to run on multiple high-performance computing platforms. The modular and hierarchical architecture (Fig 4) of WRF facilitates grouping multiple dynamic cores and plug-compatible physics in a single code over diverse parallel platforms, thus providing performance portability, extensibility, usability, run-time configurability and interoperability among different Earth models. It is neutral with respect to external packages for I/O and makes effective use of computer-aided software engineering (CASE) tools.
The WRF model is presented in a three-level hierarchical structure (Fig 4):
1. The highest level corresponds to the driver layer, which holds responsibility for top-level control of initialization, time-stepping, I/O, instantiating domains, maintaining the nesting between domain-type instances, decomposition, parallelism and processor topologies.
2. The model layer is the lowest layer, containing the subroutines required for the actual model computations.
3. The mediation layer acts as the interface between the model and driver layers. It encompasses the features of inheritance, encapsulating details that are of no concern to the other layers.

Fig 4: WRF 3-Level Hierarchy Structure

C. WRF System Model
The following figure (Fig 5) depicts the system architecture of a WRF model with a brief description of its components.


Fig 5:WRF System Model

i. Model components: preprocessors for producing initial and lateral boundary conditions for idealized, real-data, and one-way nested forecasts; postprocessors meant for analysis and visualization; and a three-dimensional variational data assimilation (3DVAR) program for obtaining three-dimensional input data. Each of the preprocessors and 3DVAR are parallel programs implemented using the WRF Advanced Software Framework (ASF). Data streams between the programs are input and output through the ASF's I/O and Model Coupling API. The WRF model (the large box in the figure) contains two dynamical cores (ARW and NMM), providing flexibility, and initialization programs for real and (for ARW) idealized data (real.exe/ideal.exe).
D. Working of WRF Model.
Some of the details of running WRF are architecture dependent. For distributed-memory runs, it is usually necessary to use some form of the mpirun command, for example: mpirun -np 4 wrf.exe. In general, however, for single-processor or shared-memory parallel runs, the command is ./wrf.exe. Options can be specified through the WRF namelist.input file. To run WRF it is necessary to first generate a set of initial conditions, which will be read in from the file wrfinput. We need to set the io_form option in the namelist.input file to 1 along with the other options such as grid dimensions, dx, dt, etc., and then type ./ideal.exe in the run directory; this generates the wrfinput file. As long as the basic grid specifications are not changed in the namelist.input file, the wrfinput file can be reused for multiple runs of the wrf.exe code. After that we edit the namelist.input file to set run-specific items such as the number of time steps, output frequency, etc., and then run the wrf.exe code using the procedure for the system in use. WRF will read the namelist.input file and also the wrfinput file of initial conditions; as it runs it will write to the wrfoutput file. During a run of the WRF model, one of its components, the WRF Pre-processing System (WPS), plays a major role: its main function is to interpolate the real data in numerical prediction cases, along with adding more observations for analysis. The diagram (Fig 6) depicts the workflow coordination between WPS and WRF.



Fig 6: WPS AND WRF Program Workflow
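The run sequence above can be scripted; the following is a hedged sketch only, assuming an already compiled WRF run directory with a prepared namelist.input (the path and the MPI process count are illustrative, not prescribed by the model):

```python
import subprocess

RUN_DIR = "WRF/run"  # hypothetical path to the compiled run directory

# Step 1: generate the wrfinput initial-conditions file from namelist.input.
subprocess.run(["./ideal.exe"], cwd=RUN_DIR, check=True)

# Step 2: run the model; distributed-memory builds go through mpirun.
subprocess.run(["mpirun", "-np", "4", "./wrf.exe"], cwd=RUN_DIR, check=True)
# Single-processor or shared-memory alternative:
# subprocess.run(["./wrf.exe"], cwd=RUN_DIR, check=True)
```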
E. Basic Software Requirement

i. Fortran 90/95 compiler -- code uses standard F90 (portable).
ii. C compiler -- registry-based automatic Fortran code generation (for argument lists, declarations, nesting functions, I/O routines).
iii. Perl -- configure/compile scripts.
iv. NetCDF library -- for I/O operations and machine independence.
v. MPI, if distributed memory is used.

F. Merits and Demerits of WRF Model
The table below (Table 1) gives an overview of the merits and demerits of the WRF model.


TABLE 1: MERITS and DEMERITS of WRF MODEL.


In 2004 a WRF benchmark (Fig 7) was developed in order to demonstrate the computational performance and scaling of this model on different architectures. It was run on multiple platforms, changing the number of processes, and performance in terms of simulated seconds was noted.

Fig 7: WRF Benchmark (2004)


III. REGCM MODEL

RegCM is an open-source regional climate model (a limited-area model), originally developed by Giorgi et al. [8, 9] and then modified, improved and discussed by Giorgi and Mearns [10]. It uses the downscaling [11] method to obtain clear, high-resolution (Fig 8) weather information (e.g., giving a better representation of the underlying topography at a scale of 50 km or even less) compared with the relatively coarse-resolution information of global climate models (GCMs).

However, the RCM is susceptible to any systematic errors in the driving fields provided by the GCM. High-frequency, i.e. 12- or 6-hourly, time-dependent GCM fields are required to provide the boundary conditions for the RCM. Table 2 lists the various dynamical and physical packages of successive versions of the RegCM system.


Fig 8: Snapshot of RegCM Resolution



TABLE2: DESCRIPTION of the PROGRESSION of the VERSIONS of the REGCM SYSTEM


A. RegCM Model Description

The different types of global data needed in order to localize the RegCM model are given by the following diagram (Fig 9), followed by the typical RegCM architecture (Fig 10) and a description of its components.


Fig 9: RegCM Global Data




Fig 10: RegCM Architecture Model

1. Model components: the Terrain file is used for creating the domain file, consisting of localized topography and projection information. The SST file is created using the sst program and contains the sea surface temperatures to be used in generating the ICBC for the model. Lastly, the ICBC files, created using the icbc program, contain surface pressure, temperature, horizontal wind components and the time resolution for the input file.

After a successful run, the model generates output files in the output directory, including:
i. ATM - contains the atmospheric status of the model.
ii. SRF - contains surface diagnostic variables.
iii. RAD - contains radiation information.

B. Software Requirements.

i. Unix or Linux OS.
ii. FORTRAN 90/95 compiler, python language interpreter.
iii. Make utility (GNU make).
iv. NetCDF library.
v. MPI (for parallel runs).
vi. Graphics packages (GrADS, FERRET, NCL) for visualization.

C. Applications of Regional Climate Modeling

i. Model development and validation with a high resolution for smaller areas.

ii. Used in process studies such as topographic effects and regional hydrologic budgets.
iii. Climate change studies.
iv. Paleoclimate studies (climate effects of aerosols).
v. Seasonal prediction
vi. Impact studies

D. Advantages and Disadvantages of RCM Modelling


TABLE 3: MERITS and DEMERITS of RCM MODELLING


E. Basic Issues in RCM Modeling

The ratio of the forcing-field resolution to the model resolution should not exceed 6-8. For a successful RCM simulation it is thus critical that the driving large-scale boundary conditions be of good quality. If the model physics happen to be the same in both the nested RCM and the driving GCM, better interpretation of the model results is obtained. The model resolution should be sufficient to capture the relevant forcings and to provide useful information for the given applications.

IV. IMPORTANCE OF RCM OVER GCM.

Global Climate Models (GCMs) are mainly used for simulating the global climate system, providing estimates of climate variables [6]. Owing to its coarse resolution and lack of fine features, the accuracy of a GCM normally decreases over a small area. To overcome the limitations of GCMs, dynamical downscaling using high-resolution Regional Climate Models (RCMs) nested (Fig 12) in GCMs is used. These RCMs lead to better estimates of future climate conditions since their horizontal resolutions are much finer than those of GCMs [5]. Fig 11 gives a practical view of the distortion made by a GCM over a limited area compared with an RCM.

Fig 11: GCM and RCM Resolution View Over a Small Region.



Fig 12: Nesting



A. Comparison of the Two Models
The table below (TABLE 4) gives some points differentiating the two models with respect to their use and behaviour.


TABLE 4: COMPARISON of RCM AND GCM

We also include a table (TABLE 5) giving a brief inter-comparison of all the related features of the different available climate models.



TABLE 5: INTER-COMPARISON of DIFFERENT MODELS


V. CONCLUSION

Thus, in the discussion made so far, we have gained a clear idea of these two models regarding their history, applications, working, software requirements, and a brief comparison of them. GCMs and RCMs have increased our level of understanding of the ongoing atmospheric processes and the long-range prediction of the same. Though climate modeling has made sufficient improvements in the last decades, researchers are still experimenting in different areas: increasing scale while lowering cost, making finer resolutions feasible, coupling RCMs with other climate models, more extensive comparative studies and, last but not least, the development of a two-way nesting mechanism. These areas hold a great deal of work as future scope in climate system modeling.


REFERENCES:

[1] Peter Lynch, "The origins of computer weather prediction and climate modeling," Meteorology and Climate Centre, University College Dublin, Journal of Computational Physics, 2007.
[2] Kit K. Szeto, "An Overview of Atmospheric Models," MAGS Model Cross-Training Workshop, York University, 5-6 September 2002.
[3] E. P. Willis, W. H. Hooke, "Cleveland Abbe and American meteorology, 1871-1901," Bull. Am. Met. Soc. 87 (2006) 315-326.
[4] J. Michalakes, J. Dudhia, D. Gill, T. Henderson, J. Klemp, W. Skamarock, W. Wang, "The Weather Research and Forecast Model: Software Architecture and Performance."
[5] IPCC (2007). Climate Change 2007: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Parry, Canziani M L, Palutikof O F et al. (Eds.), Cambridge University Press, Cambridge, UK, 976.
[6] Ghosh S and Mujumdar P P (2006). "Future rainfall scenario over Orissa with GCM projections by statistical downscaling," Current Science, vol 90(3), 396-404.
[7] Xu C Y (1999). "Climate change and hydrologic models: a review of existing gaps and recent research developments," Water Resources Management, vol 13(5), 369-382.
[8] Giorgi F, Marinucci M R et al. (1993a). "Development of a second generation regional climate model (RegCM2), Part I: Boundary layer and radiative transfer processes," Monthly Weather Review, vol 121, 2794-2813.
[9] Giorgi F and Mearns L O (1999). "Introduction to special section: regional climate modeling revisited," Journal of Geophysical Research, vol 104(D6), 6335-6352.
[10] Pal J S, Small E E et al. (2000). "Simulation of regional-scale water and energy budgets: representation of subgrid cloud and precipitation processes within RegCM," Journal of Geophysical Research, vol 105(D24), 29579-29594.
[11] Nellie Elguindi, Laura Mariotti and Fabien Solmon, "Regional Climate Modeling and RegCM4: A tool for downscaling," Summer School on Climate Modelling, Turunc, Turkey, August 2010.
[12] Bibrak Qamar, Jahanzeb Maqbool, "Implementation and Evaluation of Scientific Simulations on High Performance Computing Architectures," National University of Sciences and Technology (NUST), Islamabad, Pakistan, 2011.
[13] P. Goswami and K. C. Gouda, "Evaluation of a Dynamical Basis for Advance Forecasting of the Date of Onset of Monsoon Rainfall over India," Council of Scientific and Industrial Research, Centre for Mathematical Modelling and Computer Simulation, Bangalore, India, 2009.
[14] Haltiner, G. J. and R. T. Williams, 1980: Numerical Prediction and Dynamic Meteorology, Wiley and Sons, Inc., New York, 477 pp.
[15] Pielke, R. A., 1984: Mesoscale Meteorological Modelling, Academic Press, New York, 612 pp.








Design of Floating Point Multiplier for Fast Fourier Transform Using Vedic
Sutra
Yashkumar M. Warkari¹, Prof. L. P. Thakare², Dr. A. Y. Deshmukh³
¹Research Scholar (M.Tech), Department of Electronics Engineering, G.H. Raisoni College of Engineering, Nagpur
²Assistant Professor, Department of Electronics Engineering, G.H. Raisoni College of Engineering, Nagpur
³Professor, Department of Electronics Engineering, G.H. Raisoni College of Engineering, Nagpur
E-mail- yashwarkari@gmail.com
Abstract- Multipliers are a very important element in mathematics, and likewise in the technical applications based on it, where multipliers are used a great many times. The floating point multiplier is one of the essential designs used in FFTs, digital filters, various transforms, etc. The aim of the design proposed in this paper is to perform multiplication of floating point numbers in the least possible time with greater accuracy. The shortcomings of the conventional method have been removed with the aid of a Vedic sutra, reducing the complexity of the design as well as of the functionality of the entire circuit. The target device for the proposed design is 7a30tcsg324-3.
Keywords- DSP, VHDL, IEEE, MSB, CLA, Rounding, Normalize
1. INTRODUCTION
Rudimentary designs exhibiting behaviour similar to floating point multipliers have been used for computation with high-range numerical values. Basically, any digital design has to proceed through some fixed steps of the design method. Conventional methods are used most frequently, from very large numbers down to smaller ones. In mathematical domains like algebra and calculus, one recurring operation is popularly known as multiplication of floating point numbers. It is easy to solve such multiplications with paper and pen, but when it comes to digital circuits the decimal point in a floating number plays a vital role. Vedic mathematics is one such division that involves thinking, where the mind is used at its best. In India most students studying conventional mathematics can solve problems that are taught to them at school, but they are unable to solve problems that are new and not taught to them. A comparison and description of the basics of multiplication and of different algorithms for designing multiplier-based circuits is proposed in [1].
It is noted that p-bit multiplication yields a 2p-bit result, so to bring the result back into the p-bit range rounding techniques play a vital role; the different algorithms for sticky bit generation are elucidated and discussed in [2].
A method and apparatus for obtaining efficient results, the importance of the distinct blocks in producing more appropriate results, and rounding and control operations in the design are discussed in [3].

2. IMPORTANCE OF VEDIC MATHEMATICS

High-speed arithmetic operations are very important in many signal processing applications. The speed of a digital signal processor (DSP) is largely determined by the speed of its multipliers. In fact, multipliers are the most important part of all digital signal processors; they are essential in realizing many important functions such as fast Fourier transforms and convolutions. Since a processor spends a considerable amount of time performing multiplication, an improvement in multiplication speed can greatly improve system performance. Multiplication can be implemented using many algorithms, such as array multiplication, Booth multiplication, carry-save adders, and Wallace tree algorithms.
The multiplier architecture is based on the Urdhva Tiryakbhyam sutra. The prime advantage of this algorithm is that the partial products and their sums are calculated in parallel. This parallelism makes the multiplier independent of the processor clock. The other main advantage of this multiplier compared to other multipliers is its regularity of structure.

nature the lay out design will be easy. The defined architecture can be explained with aid of two eight bit numbers i.e. the first
multiplier number and second multiplicand number are eight bit numbers.
Urdhava Tiryakbhyam is a Sanskrit word which means vertically defined as urdhavya and crosswise as triyakbhayam in English. The
method is a general multiplication formula applicable to all cases of multiplication examples. It is based on a novel & rudimentary
concept through which all kind of partial products are generated concurrently. Demonstrates a 4 x 4 binary multiplication using this
method.
The method can be generalized for any N x N bit multiplication. This type of multiplier is independent of the
clock frequency of the processor because the partial products and their sums are calculated in parallel manner. The net advantage is
that it reduces the need of microprocessors to operate at increasingly higher possible clock frequencies. As the depending operating
frequency of a processor increases the number of switching instances also increases. There are again several methods that can be
followed to reduce logical expression such as Boolean algebra , tabulation method etc. It is tedious task to make a use of basic
Boolean reducing rules to apply over a huge logical terms in such a lengthy expression. It proposed the significance of methods based
on vedic sutra In[4].
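A behavioural sketch of the "vertically and crosswise" idea is shown below (an illustration for this write-up, not the paper's VHDL): every output digit position collects all of its contributing partial products in one step, which is what allows a parallel hardware realisation.

```python
def urdhva_multiply(a, b, base=2):
    """Multiply two digit lists (least significant digit first)."""
    cols = [0] * (len(a) + len(b) - 1)
    for i, da in enumerate(a):
        for j, db in enumerate(b):
            cols[i + j] += da * db      # vertical and crosswise products
    # Carry propagation turns the column sums into digits.
    digits, carry = [], 0
    for c in cols:
        carry, d = divmod(c + carry, base)
        digits.append(d)
    while carry:
        carry, d = divmod(carry, base)
        digits.append(d)
    return digits

# 4 x 4 bit example: 13 (1101) x 11 (1011) = 143.
p = urdhva_multiply([1, 0, 1, 1], [1, 1, 0, 1])  # LSB-first digit lists
print(sum(d << i for i, d in enumerate(p)))      # 143
```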

3. METHODOLOGY OF FLOATING POINT MULTIPLIER




Basically, the circuit delineated above is used for the multiplication of two floating point numbers. There is no definite logic level for the representation of a decimal point in a digital circuit, so it is difficult to store the decimal point in storage elements like flip-flops, registers and memories in its true form; we therefore have to consider how a floating number can be stored. For this we have the IEEE formats for different ranges, such as single precision, double precision and quad precision. In this paper the single precision format is preferred.



Fig 1 - Floating multiplier circuit
Fig 2 - Single precision format

Ex-1: Convert 6.75 into single precision format.
6 = 110 in binary
.75 * 2 = 1.5
.5 * 2 = 1.0
.0 * 2 = 0.0
.0 * 2 = 0.0
110.11000000000000000000000 = 1.1011000000000000000000000 * 2^2
Exponent = 127 + 2 = 129, or 10000001 in binary
Mantissa = 10110000000000000000000

6.75 in 32-bit IEEE floating point representation:
01000000110110000000000000000000
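The conversion can be sanity-checked with Python's struct module (a convenience of this write-up, not part of the paper's VHDL design):

```python
import struct

# Pack 6.75 as an IEEE-754 single precision value and print the 32-bit pattern.
bits = int.from_bytes(struct.pack('>f', 6.75), 'big')
print(f'{bits:032b}')  # 01000000110110000000000000000000
```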
In the above circuit, the mantissas of the two input numbers are multiplied by a 24-bit Vedic multiplier. The normaliser follows the Vedic multiplier and is in turn followed by the rounding block. The exponent is computed by de-normalisation followed by a 2:1 mux, which uses the MSB of the Vedic multiplier output as the select line. Finally, the sign of the result is determined by XORing the two sign bits of the given input numbers.
4. METHODOLOGY OF 24-BIT VEDIC MULTIPLIER
The 24-bit Vedic multiplier is a block formed from a progression of four 12-bit Vedic multiplier blocks. The 12-bit multiplier is modeled first from lower-range Vedic multipliers; the final design is then obtained by structural modeling in VHDL code.



Number A of 24 bits: a23 ... a12 (AH) and a11 ... a0 (AL)
Number B of 24 bits: b23 ... b12 (BH) and b11 ... b0 (BL)

Equation => (AH * BH) + (AH * BL) + (AL * BH) + (AL * BL)

The two 24-bit numbers are each split into two parts, giving AH, AL, BH and BL. Each subpart is of 12 bits, which again means four 12-bit Vedic multipliers.

When the 12-bit Vedic multipliers are properly mapped, each of the four 12-bit Vedic units in Fig 3 yields a 24-bit output. After that the adders come into the picture, as expressed in the equation elucidated above.
The outputs of the middle two 12-bit Vedic units are fed to 24-bit CLA adder-1. All of its output bits excluding the carry are used as one input of 24-bit CLA adder-2, whose second input is framed by concatenating the 12 MSB output bits of the last 12-bit Vedic multiplier unit with twelve leading zeroes. The output of the first 12-bit Vedic unit is fed to 24-bit CLA adder-3 as one input; its second input is framed by concatenating the 12 MSB output bits of 24-bit CLA adder-2 with eleven leading zeroes and the output of an OR gate. The output carries of 24-bit CLA adder-1 and 24-bit CLA adder-2 are used as the inputs of the OR gate. Eventually the 48-bit output is obtained by concatenating the bits shown.
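The recombination can be checked behaviourally with the sketch below (integer arithmetic standing in for the four 12-bit Vedic units and the CLA adders; this is an illustration, not the paper's VHDL):

```python
def mult24(a, b, half=12):
    mask = (1 << half) - 1
    ah, al = a >> half, a & mask      # AH, AL
    bh, bl = b >> half, b & mask      # BH, BL
    # Four 12-bit sub-products, as produced by the four Vedic units.
    hh, hl, lh, ll = ah * bh, ah * bl, al * bh, al * bl
    # Recombine with 12-bit shifts: (hh << 24) + ((hl + lh) << 12) + ll.
    return (hh << (2 * half)) + ((hl + lh) << half) + ll

a, b = 0xABCDEF, 0x123456
assert mult24(a, b) == a * b          # 48-bit product matches
print(hex(mult24(a, b)))
```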




Fig 3 - The 24 x 24 bit Vedic multiplier




5. NORMALIZE & ROUNDING
Normalization is an essential block for the entire design. When the 48-bit product is obtained from the Vedic multiplier block, the entire 2p-bit result has to be normalized first in order to get the correct answer. The output should be in 1.x form, whereas in the raw product the binary point lies after the first two MSB bits; accordingly, the output has to be transformed into the form alluded to above. When the binary point shifts one position towards the left-hand side of the result, 1 is added to the output of the denormalizer; if no shift is needed, nothing is added.

EX - The output of the Vedic multiplier block is as follows:
10.0111010111110101111010100001000000000000000000
The output of the normalizer:
1.00111010111110101111010100001000000000000000000

In the above example the binary point has shifted left by one bit position in order to bring the result into the normalized form alluded to above; therefore an addition of 1 to the output of the denormalizer is needed.

Rounding also plays a vital role in the computation of the mantissa. The main motive of rounding is to remove the surplus bits so that only the desired number of bits represents the output. In this case the answer should be 23 bits only, but the actual output of the Vedic multiplier unit is 48 bits, so the extra 25 bits need to be removed somehow. There are several methods for rounding. Accurate rounding of transcendental mathematical functions is difficult because the number of extra digits that need to be calculated to resolve whether to round up or down cannot be known in advance; this problem is known as "the table-maker's dilemma".
The simplest method of rounding is rounding towards zero: keep only the part to the left of the cut-off point and ignore the rest of the number. This technique is also known as truncation.
Another technique, taught at school level, is popularly known as rounding at the halfway point. In this technique a base number is used as a reference, and the part after the point is compared with the reference value: if it is found to be greater than or equal to the reference value, the number is rounded up to its next adjacent greater value; otherwise it is rounded down to its next adjacent smaller value. In this design, rounding towards zero has been used for rounding the 48-bit result into 23 bits.


Table No - 1. Rounding table

Y       | Round down (towards -∞) | Round up (towards +∞) | Round towards zero | Round away from zero | Round to nearest
+23.67  | +23                     | +24                   | +23                | +24                  | +24
+23.50  | +23                     | +24                   | +23                | +24                  | +24
+23.35  | +23                     | +24                   | +23                | +24                  | +23
+23.00  | +23                     | +23                   | +23                | +23                  | +23


EX - The output of the normalizer is as follows:
1.01101001100001011100001000000000000000000000000
The output of rounding:
01101001100001011100001

In the above example the first 23 bits after the binary point are kept and the surplus bits are discarded; the desired mantissa bit range for the single precision format is thereby achieved.
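The normalize-and-truncate step can be sketched on an integer holding the 48-bit product of two 24-bit mantissas (each of the form 1.x, so the raw binary point sits after the top two bits); this is an illustration under those assumptions, not the paper's hardware:

```python
def normalize_round(prod48):
    if prod48 >> 47:                        # product is 1x.xxx...: point moves
        exp_inc = 1                         # left, so 1 goes to the exponent
        frac = (prod48 >> 24) & 0x7FFFFF    # 23 bits after the leading 1
    else:                                   # product is already 1.xxx...
        exp_inc = 0
        frac = (prod48 >> 23) & 0x7FFFFF
    return frac, exp_inc                    # round towards zero: extra bits drop

# 1.5 * 1.5 = 2.25 -> normalised mantissa 1.001..., exponent bumped by 1.
m = 0xC00000                                # 1.5 as a 24-bit mantissa (1.100...)
frac, inc = normalize_round(m * m)
print(f'{frac:023b}', inc)                  # 00100000000000000000000 1
```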

6. EXPONENT UNIT
The exponents of the two input numbers are computed with the aid of an exponent unit. The exponent of the actual decimal-point answer should match the answer obtained from the exponent unit. The exponent is mainly linked with the decimal point: shifting operations change the value of the exponent, and according to the shift the unit produces the desired output. The denormalizer plays a vital role in this computation.




The exponent unit computes the result exponent. During the conversion to single precision format, the count of the number of shifted positions of the decimal point is added to the bias, which is 127 in the case of the single precision format.
Fig 4 - The Exponent unit

At the time of computation, the bias always has to be subtracted from the exponents of the input numbers; eventually the correct exponent is obtained. The governing equation is ER = EA + EB - 127. The exponent unit also has a link with the mantissa unit, and the mux establishes this link between the two units. While normalizing the result of the Vedic multiplier it is sometimes found that the binary point has to shift one bit position left, and during this shift 1 has to be added to the exponent of the result; sometimes the result is already normalized and there is no need to add 1. The mux delineated in the main floating multiplier determines whether the 1 is needed. Thus the resultant exponent is obtained.

Ex - 8.5 * 240.1 = 2040.85

8.5 = 1000.101 (binary) = 1.000101 * 2^3
EA = 127 + 3 = 130

240.1 ≈ 11110000.01 (binary) = 1.1110000... * 2^7
EB = 127 + 7 = 134

2040.85 ≈ 11111111000.1010101 (binary) = 1.11111110001010101 * 2^10
ER = 127 + 10 = 137

Equation as per the elucidation:
ER = (EA - 127) + (EB - 127) + 127
   = (130 - 127) + (134 - 127) + 127
ER = 137

7. SIGN GENERATING LOGIC
The sign of the result is obtained by XORing the signs of the input numbers. A sign bit of logic 1 represents a negative number, whereas a sign bit of logic 0 represents a positive number. If the two numbers are of different sign, the result is negative; otherwise the result takes the same sign as the inputs.

Table No - 2. Sign table

 PRODUCT TERMS    | SA | SB | SR = SA xor SB
 1. 4 * 5 = 20    | 0  | 0  | 0
 2. -3 * 4 = -12  | 1  | 0  | 1
 3. 3 * -4 = -12  | 0  | 1  | 1
 4. -5 * -5 = 25  | 1  | 1  | 0

where SA is the sign of the first number, SB is the sign of the second number, and SR is the sign of the result.
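The sign and exponent logic together amount to a few integer operations; a behavioural sketch follows (plain Python illustration, not the hardware description; norm_shift stands for the mux-controlled +1 adjustment described in Section 6):

# Sketch: result sign and biased exponent for single-precision multiplication.
# ea, eb are the biased input exponents; norm_shift is 1 when normalization
# shifted the product's decimal point one place left, else 0.
def sign_and_exponent(sa: int, sb: int, ea: int, eb: int, norm_shift: int):
    sr = sa ^ sb                                     # XOR of input signs
    er = (ea - 127) + (eb - 127) + 127 + norm_shift  # biased result exponent
    return sr, er

print(sign_and_exponent(0, 0, 130, 134, 0))  # (0, 137), as in the example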



Table No -3. Synthesis report values





The results shown below are the simulation outputs of the floating-point multiplier for two input numbers in IEEE single precision format, where atotal and btotal are the input numbers and fmresulttotal is the output.

1. 48.12 * 18.5 = 890.22

2. -6 * 1.414 = -8.484

3. -1 * -1.414 = 1.414

4. 6 * 0 = 0



Fig 5 - RTL of floating multiplier
Fig 6 - Simulation results of floating multiplier

8. CONCLUSION

Thus the floating point multiplier has been designed using Vedic sutras. Different adder designs can be used according to the requirement of a particular application, so the desired choice of design can be obtained. The use of the OR gate is beneficial for eliminating errors in the multiplication operation. Normalization plays a vital role in producing the correct mantissa of the result, and the extra LSB bits can be trimmed from the mantissa bit stream.

REFERENCES:
[1] Rajkumar Singh, Shivananda Reddy, "Floating point multipliers simulation & synthesis using VHDL".
[2] Xia Hong, Jia Jingping, "Research & optimization on rounding algorithms for floating point multiplier", International Conference on Computer Science and Electronics Engineering, 2012.
[3] Jeffrey D. Brown, Roy R. Faget, Scott A. Hilker, "Apparatus for determining sticky bit value in arithmetic operations", United States statutory invention registration, Aug 3, 1993.
[4] Swami Bharati Krishna Tirtha, Vedic Mathematics, Motilal Banarsidass Publication, 1992.
[5] Daniel J. Bernstein, "Fast mathematics and its applications", Algorithmic Number Theory, MSRI Publications, Volume 44, 2008.
[6] Robert K. Yu and Gregory B. Zyner, "167 MHz Radix-4 Floating Point Multiplier", Proc. 12th Symp. Computer Arithmetic, 1995, pp. 149-154.
[7] Stuart Franklin Oberman, "Design Issues in High Performance Floating Point Arithmetic Units", Technical Report CSL-TR-96-711, 1996.
[8] G. Even, S.M. Mueller, and P.M. Seidel, "A Dual Mode IEEE Multiplier", Proc. Second IEEE Int'l Conf. Innovative Systems in Silicon, 1997, pp. 282-289.
[9] Behrooz Parhami, Computer Arithmetic: Algorithms and Hardware Designs, Oxford University Press, New York, 2000.
[10] Shlomo Waser and Michael J. Flynn, Introduction to Arithmetic for Digital Systems Designers, CBS College Publishing, New York, 1982, p. 139.
[11] M. Nagarjuna, R. Surya Prakash, B. Vijay Bhaskar, "High speed ASIC design of complex multiplier using Vedic mathematics", IJERA, Vol. 3, Issue 1, January-February 2013, pp. 1079-1084.
[12] Vaijyanath Kunchigi, Linganagouda Kulkarni, Subhash Kulkarni, "High speed & area efficient Vedic multiplier", Jawaharlal Nehru Technological University.











Experimental and Numerical Study of Retrofitted RC Beams Using FRP

Dhanu M.N¹, Revathy D¹, Lijina Rasheed¹, Shanavas S²

¹Scholar (B.Tech), Department of Civil Engineering, YCEW, Kerala, India
²Associate Professor, Department of Mechanical Engineering, YCEW, Kerala, India
E-mail- shanumech@yahoo.com
ABSTRACT - This paper deals with an experimental and numerical study of retrofitted reinforced concrete beams using Fibre Reinforced Polymer (FRP). Retrofitting means modifying existing structures to increase their resistance against seismic activity. The objective of the current study is to investigate the improvements in the structural behaviour of RC beams when retrofitted using various types of FRP. The fibres used for the study were glass fibres and coir fibres. Experimental tests were conducted on RC beams and on RC beams retrofitted with various FRPs such as GFRP and coir FRP. For the numerical study, RC beams and RC beams retrofitted with GFRP were considered, and ANSYS software was used to build a 3D model of the beams and to analyse the beam structure. The results show that retrofitting RC beams with Glass Fibre Reinforced Polymer makes the structure more resistant to seismic activity.

Keywords - Retrofitting, Strengthening, FEA, ANSYS, FRP, RC Beam, Structures.

1. INTRODUCTION

In the field of structural engineering, contemporary research is carried out using advanced materials in order to strengthen structures. With new innovations, plain cement concrete was reinforced with steel members, which gives quite satisfactory results; the problem is that the steel members embedded in the plain cement concrete may corrode if affected by moisture. To overcome this, new ideas emerged, and one such idea is retrofitting. Retrofitting can be applied to old structures, and to structures in seismic zones, to resist structural collapse. Retrofitting means the further modification of anything after it has been manufactured, and it can be achieved by using composite materials. By carrying out the retrofitting process effectively, we can improve the strength of existing structures against seismic activity.
Composite materials are materials made from two or more constituent materials with significantly different physical and chemical properties that, when combined, produce a material with characteristics different from the individual components. The individual components remain separate and distinct within the finished structure. Technologically, the most important composites are those in which the dispersed phase is in the form of fibre. The fibres are either long or short: long, continuous fibres are easy to orient and process, whereas short fibres cannot be fully controlled for proper orientation. The principal fibres in commercial use are various types of glass, carbon, graphite and Kevlar. All these fibres are incorporated in a matrix either in continuous or in discontinuous lengths. The polymer is most often epoxy, but other polymers, such as polyester, vinyl ester or nylon, are sometimes used. The properties of FRP depend on the layout of the fibres, the proportion of fibres relative to the polymer, and the processing method.
The experimental study involves the determination of the flexural (ultimate) load by subjecting the beams to loading; three-point loading is carried out. From the ultimate load obtained, and by providing a suitable factor of safety, the permissible load is calculated; this permissible load is then taken for the numerical study. The numerical study is carried out using FEM (Finite Element Modelling) with the aid of ANSYS software. In Finite Element Modelling the meshing process is carried out first, i.e., the structure is divided into a finite number of elements and each element is considered in the analysis. Then the boundary conditions, selected from the load and the support, are applied. The load can be applied as a force, torque, weight, etc., and the support can be simply supported or fixed. Here weight is applied as the load and the support is assumed to be fixed. The permissible load from the experimental study is applied as the load and the stresses are calculated.

2. MATERIALS

Ordinary Portland cement of grade 53 satisfying the requirements of IS 12269-1987 was used for the investigation. The initial setting time was 30 minutes, with a specific gravity of 3.1. Clear river sand passing through a 4.75 mm sieve was used as fine aggregate. The coarse aggregate was machine-crushed broken stone with angular shape, with a maximum aggregate size of 20 mm. Ordinary clear potable water, free from suspended particles and chemical substances, was used for mixing and curing the concrete. A design concrete mix of 1:1.87:2.79 by weight was used, with a water-cement ratio of 0.5. Three cube specimens were cast and tested to determine the compressive strength. Mild steel bars of 8 mm diameter, glass fibre fabric and coir fibre sheets were used.












Figure 1: 0/90° PW style

Figure 2: Glass fibre sheet    Figure 3: Coir sheet

3. PREPARATION OF COMPOSITE

The materials used for the preparation of the composite were plain weave (PW) glass fibre fabric, epoxy resin and hardener. Fibre fabrics are sheets of layers of fibre made by mechanical interlocking of the fibres themselves or with a secondary material that binds the fibres together and holds them in place, giving the assembly sufficient integrity to be handled. Fabric types are categorized by the orientation of the fibres: unidirectional, 0/90°, multiaxial, and other/random. The orientation and weave style of the fibre fabric are chosen to optimize the strength and stiffness properties of the resulting material. The most commonly used weave style of 0/90° fabric is plain weave (PW), which gives much strength.

4. CASTING

The design mix ratio was adopted for designing the beams. Nine under-reinforced beams were cast, 3 as control specimens and 6 beams for retrofitting. The dimensions of all the beams are identical: the length of the beams was 500 mm and the cross-sectional dimensions were 100 × 100 mm. Mild steel bars of 8 mm diameter were used for longitudinal reinforcement.

5. RETROFITTING OF BEAMS

The hand lay-up method was used for retrofitting the beams. After curing, the surface of the beam was roughened and then cleaned with water to remove all dirt for proper bonding with the fibre. The beam was then allowed to dry for 24 hours. The fibre sheets were cut to size. After that, the epoxy resin primer was mixed in a plastic container to produce a uniform mix and coated on the surface of the beam for effective bonding of the fibre sheets with the concrete surface. The fibre sheets were then placed on top of the epoxy resin, and another coating of resin was applied on top of the fibre sheets. This operation was carried out at room temperature and the assembly was allowed to set under sunlight for 6 hours.

5.1. RETROFITTING BY GLASS FIBRE REINFORCED EPOXY

Initially the required PW glass fibre fabric is cut from the fabric sheet to make a U-wrap around the lateral faces of the RC beam; three fabric pieces of 500 × 300 mm are chosen for retrofitting a specimen. Epoxy resin is mixed with hardener and the solution is applied on the selected surface of the RC beam with brushes. The glass fibre fabric is placed over the reinforced concrete beam (specimen), making a U-wrap, and the solution is again impregnated on the fabric with brushes. The process is repeated, layering the fabric one piece at a time. Finally the three-layered retrofitted RC beam is left to cure under standard atmospheric conditions. Three such specimens are prepared for the test.

Figure 4: RC Beam Retrofitted with Glass Fibre sheet

5.2. RETROFITTING BY COIR FIBRE REINFORCED EPOXY

Initially the required coir fibre is cut from the fabric sheet for retrofitting one side of the RC beam; one fabric piece of 500 × 100 mm is chosen for retrofitting a specimen. Epoxy resin is mixed with hardener and the solution is applied on the surface of the RC beam with brushes. One coating of epoxy is applied and then the coir fibre is placed on the surface of the beam. Finally the one-layered retrofitted RC beam is left to cure under standard atmospheric conditions. Three such specimens are prepared for the test.

Figure 5: RC Beam Retrofitted with Coir Fibre sheet





6. EXPERIMENTAL STUDY

All the specimens are tested in the Universal Testing Machine; the test procedure is the same for all specimens. After the 28-day curing period is over, the control beams are washed and their surfaces cleaned for clear visibility of cracks, while the other sets of beams are strengthened with FRP. The loading arrangement for testing all sets of beams consists of central point loading, as shown in the figure.



Figure 6: Test on RC beam Fig. 7: Test on RC beam Retrofitted with GFRP


Figure 8: Test on RC beam Retrofitted with coir FRP

From the experimental work the rupture load is obtained. From that, the ultimate bending stress can be found using the following equation:

σb = 3PL / (2bt²)

where P is the rupture load, L is the gauge length, b is the width of the beam, and t is the thickness of the beam.
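As a quick numerical check of this formula (a Python sketch; the 100 × 100 mm section is from Section 4, and the 400 mm gauge length is an assumption obtained by back-calculating from the stresses reported in Table 3 below):

# Sketch: three-point-bending ultimate stress, sigma_b = 3*P*L / (2*b*t^2).
def bending_stress(P_newton, L_mm, b_mm, t_mm):
    return 3 * P_newton * L_mm / (2 * b_mm * t_mm**2)  # N/mm^2 == MPa

# 100 x 100 mm section; an assumed 400 mm gauge length reproduces Table 3:
for P in (17.0e3, 24.66e3, 34.08e3):         # plain, coir FRP, GFRP loads
    print(bending_stress(P, 400, 100, 100))  # 10.2, ~14.8, ~20.45 MPa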

7. DENSITY MEASUREMENT

The mass of a GFRP sheet of (40 × 40 × 1) mm³ was found. The density was then found from the relation

Density = Mass / Volume

Observed reading: Mass = 3.46 g, so Density = 3.46 g / 1.6 cm³ ≈ 2162.5 kg/m³ (the GFRP density used in Table 2).

8. MATERIAL MODEL

The following assumptions have been made for modelling:
1. The material is assumed to behave as linear elastic.
2. Heat generation and thermal stresses are ignored.
3. The material is isotropic and homogeneous in nature.

The Young's modulus and Poisson's ratio of the E-glass fibre composite were found by laminar theory (rule of mixtures):

E = Ef·Vf + Em·(1 − Vf)
ν = νf·Vf + νm·(1 − Vf)

where E = Young's modulus of the composite, Ef = Young's modulus of the fibre, Em = Young's modulus of the matrix (epoxy), and Vf = volume fraction of fibres. The composition of the composite is 60% fibre by volume, i.e., Vf = 0.60.

Table 1

 PROPERTIES                          | E-Glass Fibre | Epoxy
 Density, ρ (kg/m³)                  | 2540          | 1360
 Young's Modulus, E (GPa)            | 72.5          | 3.792
 Poisson's ratio, ν                  | 0.21          | 0.4
 Ultimate tensile strength, σu (MPa) | 2450          | 82.75

The materials used for the finite element analysis and their properties are tabulated in Table 2:

Table 2

 PROPERTIES                      | STRUCTURAL STEEL | CONCRETE | GFRP
 Density (kg/m³)                 | 7850             | 2300     | 2162.5
 Young's Modulus (GPa)           | 200              | 30       | 45
 Poisson's ratio                 | 0.3              | 0.18     | 0.28
 Ultimate tensile strength (MPa) | 460              | 5        | 1080
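A minimal sketch of the rule-of-mixtures calculation (plain Python, using the Table 1 constituent values; it reproduces the GFRP modulus and Poisson's ratio listed in Table 2):

# Sketch: rule of mixtures for the E-glass/epoxy composite, Vf = 0.60.
def mixture(prop_fibre, prop_matrix, vf):
    return prop_fibre * vf + prop_matrix * (1 - vf)

E  = mixture(72.5, 3.792, 0.60)   # GPa -> ~45.0 (Table 2 GFRP modulus)
nu = mixture(0.21, 0.40, 0.60)    # -> 0.286, listed as 0.28 in Table 2
print(E, nu)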

9. THREE DIMENSIONAL MODELLING OF FULL SCALE BEAM

A 3D model of the specimen was generated using ANSYS 13. An RC beam of cross-section 300 × 300 mm² and span 4000 mm, reinforced with 4 structural steel rods of 16 mm diameter each, was modelled. The RC beam and the RC beam retrofitted with a U-wrap GFRP of 2 mm thickness were considered for the study.

10. MESH GENERATION

After 3D modelling, meshing of the model is necessary for the analysis. Mesh generation is done by selecting the element type Beam 3-node 189. The total number of nodes is 268903 and the total number of elements is 98471.






Figure 9: 3D Model Figure 10: Mesh generation


11. BOUNDARY CONDITION

The boundary condition is a very important step for the analysis of structures in FEM. Here the load is applied in the form of a uniformly distributed load with a total magnitude of 5,000 N, and the beam is supported by fixed supports at its ends.

Figure 11: Boundary conditions

12. EXPERIMENTAL RESULT

Table 3

 MATERIAL              | ULTIMATE LOAD (kN) | ULTIMATE BENDING STRESS (MPa)
 RC beam without FRP   | 17                 | 10.2
 RC beam with coir FRP | 24.66              | 14.8
 RC beam with GFRP     | 34.08              | 20.45

13. FEM RESULT OF FULL SCALE BEAM


Figure 12: Equivalent stress of RC without FRP



Figure 13: Equivalent stress of RC with GFRP



Figure 14: Total deformation of RC without FRP



Figure 15: Total deformation of RC with GFRP

Table 4

 FEM Result                    | Induced Equivalent Stress (Von Mises) (MPa) | Total Deformation (m)
 RC beam without retrofitting  | 2.9877                                      | 8.4195 × 10⁻⁵
 RC beam retrofitted with GFRP | 1.2978                                      | 4.0407 × 10⁻⁵


14. CONCLUSION

From the results the following conclusions are obtained

- The flexural strength and ultimate load capacity of the beams can be improved by retrofitting.
- Retrofitting using E-glass fibre sheets gives more load-carrying capacity than coir sheets.
- GFRP sheets are also economical, since their cost is much lower than that of carbon fibre sheets; the cost of GFRP (E-glass) sheet is only Rs. 75/m².













Improvement in Performance of Chip-Multiprocessor using Effective Dynamic Cache Compression Scheme

Poonam Aswani¹, Prof. B. Padmavathi¹

¹Department of Computer Engineering, GH Raisoni College, Pune, India
E-mail- aswanipoonam41@gmail.com
Abstract - Chip Multiprocessors (CMPs) combine multiple cores on a single die, typically with private level-one caches and a shared level-two cache. The gap between processor and memory speed is alleviated primarily by using caches. However, the increasing number of cores on a single chip increases the demand on a critical resource: the shared L2 cache capacity. In this dissertation work, a lossless compression algorithm is introduced for fast data compression and, ultimately, improved CMP performance. Cache compression stores compressed lines in the cache, potentially increasing the effective cache size, reducing off-chip misses and improving performance. On the downside, decompression overhead can lengthen cache hit latencies, possibly degrading performance. While compression can have a positive impact on CMP performance, practical implementations raise a few concerns: compression algorithms have high overhead to implement at the cache level; decompression overhead can degrade performance; compression algorithms are generally not effective in compressing small blocks; and hardware modification is required. In this dissertation work we make contributions that address the above concerns. We propose a compressed L2 cache design based on an effective compression algorithm with a low decompression overhead. We develop a dynamic cache compression scheme that dynamically adapts to the costs and benefits of cache compression, and employs compression only when it will enhance performance. We show that cache compression improves CMP performance for different workloads.
Keywords: Cache Compression, Compression Ratio, LRU, ERP, Off-chip Memory, Memory Latency, L1 Cache, L2 Cache
1. Introduction
The widening gap between processor and memory speeds results from tight constraints on the amount of on-chip cache memory and the high latency of off-chip memory, such as dynamic random access memory. Accessing off-chip memory takes far longer than accessing an on-chip cache. Hence, to improve memory system efficiency, cache hierarchies are incorporated on chip, but they are constrained by die area and cost. Cache compression is one such technique: data in last-level on-chip caches (e.g., L2) is stored in compressed form, resulting in larger usable caches. In the past, researchers have reported that cache compression can improve the performance of uniprocessors. However, past work requires complex hardware for cache compression and does not consider the performance, area and power consumption requirements. In this dissertation, we explore using compression to effectively increase these resources and, ultimately, overall system throughput. To achieve this goal, we identify a distinct and complementary design point where compression can help improve CMP performance: cache compression. Cache compression stores compressed lines in the L2 cache, potentially increasing the effective cache size, reducing off-chip misses, and improving performance. Moreover, cache compression can also allow CMP designers to spend more transistors on processor cores. On the downside, decompression overhead can lengthen cache hit latencies, which degrades performance for applications that would fit in an uncompressed cache. Such negative side-effects motivate a compression scheme that avoids compressing cache lines when compression is not beneficial.
The ideal cache compression technique would be fast, simple, and effective in saving storage space. Clearly, the resulting compression ratio should be large enough to provide a significant upside, and the hardware complexity of implementing the scheme should be low enough that its area and power overheads do not offset its benefits. Perhaps the biggest obstacle to the adoption of cache compression in commercial microprocessors is decompression latency. Unlike cache compression, which takes place in the background upon a cache fill (after the workload is supplied), cache decompression is on the critical path of a cache hit, where minimizing latency is extremely important for performance. Here we consider compression of the L2 caches. Since the three desired goals of fast, simple, and effective cache compression are at odds with each other (e.g., a very simple scheme may yield too small a compression ratio, or a scheme with a very high compression ratio may be too slow, etc.), the challenge is to find the right balance between these goals. To achieve significant compression ratios while minimizing hardware complexity and decompression latency, we propose a new cache compression technique called dynamic cache compression. It dynamically adapts to the costs and benefits of cache compression, and implements compression only when it helps performance. We propose and evaluate a CMP design that implements cache compression.
2. Related Works
Jang-Soo Lee et al. proposed a selective compressed memory system based on the selective compression technique, a fixed space allocation method, and several techniques for reducing the decompression overhead. The proposed system provides on average a 35% decrease in the on-chip cache miss ratio as well as on average a 53% decrease in data traffic. However, the authors could not control the problems of long DRAM latency and limited bus bandwidth. Charles Lefurgy et al. presented a method for decompressing programs using software. It relies on a software-managed instruction cache under control of the decompressor, achieved by employing a simple cache management instruction that allows explicit writing into a cache line. It also considers selective compression (determining which procedures in a program should be compressed) and shows that selection based on cache miss profiles can substantially outperform the usual execution-time-based profiles for some benchmarks. This technique achieves high performance in part through the addition of a simple cache management instruction that writes decompressed code directly into an instruction cache line. The study focuses on designing a fast decompressor (rather than generating the smallest code size) in the interest of performance. The paper showed that a simple, highly optimized dictionary compression performs even better than CodePack, but at a cost of 5 to 25% in the compression ratio.
Prateek Pujara et al. investigated restrictive compression techniques for the level-one data cache, to avoid an increase in the cache access latency. The basic technique, all words narrow (AWN), compresses a cache block only if all the words in the cache block are of narrow size. The AHS extension stores a few additional upper half-words in a cache block to accommodate a small number of normal-sized words. Further, the authors not only make the AHS technique adaptive, where the additional half-word space is adaptively allocated to the various cache blocks, but also propose techniques to reduce the increase in tag space that is inevitable with compression techniques. Overall, the techniques in that paper increase the average L1 data cache capacity (in terms of the average number of valid cache blocks per cycle) by about 50% compared to a conventional cache, with no or minimal impact on the cache access time. In addition, the techniques have the potential of reducing the average L1 data cache miss rate by about 23%. Martin et al. showed that it is possible to use larger block sizes without increasing the off-chip memory bandwidth by applying compression techniques to cache/memory block transfers. Since bandwidth is reduced by up to a factor of three, the work proposes to use larger blocks; while compression/decompression ends up on the critical memory access path, the work evaluates its negative impact on the memory access latency. The proposed scheme dynamically chooses a larger cache block when advantageous, given the spatial locality, in combination with compression. This combined scheme consistently improves performance, on average by 19%.
Xi Chen et al. (2009) presented a lossless compression algorithm designed for fast on-line data compression, and cache compression in particular. The algorithm has a number of novel features tailored for this application, including combining pairs of compressed lines into one cache line and allowing parallel compression of multiple words while using a single dictionary and without degradation in compression ratio. The algorithm is based on pattern matching and partial dictionary coding. Its hardware implementation permits parallel compression of multiple words without degradation of the dictionary match probability. The proposed algorithm yields an effective system-wide compression ratio of 61%, and permits a hardware implementation with a maximum decompression latency of 6.67 ns. Martin et al. [30] present and evaluate FPC, a lossless, single-pass, linear-time compression algorithm. FPC targets streams of double-precision floating-point values and uses two context-based predictors to sequentially predict each value in the stream. FPC delivers a good average compression ratio on hard-to-compress numeric data. Moreover, it employs a simple algorithm that is very fast and easy to implement with integer operations. The authors claim that FPC compresses and decompresses 2 to 300 times faster than special-purpose floating-point compressors. FPC delivers the highest geometric-mean compression ratio and the highest throughput on hard-to-compress scientific data sets, achieving individual compression ratios between 1.02 and 15.05.
3. Cache Compression in Chip Multiprocessors
The increasing number of processor cores on a chip increases demand on shared caches. Cache compression addresses the increased demand on both of these critical resources in a CMP. In this project, we propose a CMP design that supports cache compression. CMP cache compression can increase the effective shared cache size, potentially decreasing the miss rate and improving system throughput. In addition, cache compression can decrease demand on pin bandwidth due to the decreased miss rate.

Due to the significant impact of the memory wall on performance, many existing uniprocessor and CMP systems implement hardware prefetching to tolerate memory latency. Prefetching is successful for many workloads on a uniprocessor system. For a CMP, however, prefetching further increases demand on both shared caches and pin bandwidth, potentially degrading performance for many workloads. This negative impact of prefetching grows as the number of processor cores on a chip increases.

Figure: A Single-Chip p-core CMP with Compression Support
4. Cache Compression Technique

Compression is achieved by two means:

(1) statically decided, compact encodings for frequently appearing data words, and
(2) encoding via a dynamically updated dictionary, allowing adaptation to other frequently appearing words.










Figure :Cache Compression
The dictionary supports partial word matching as well as full word matching. The Pattern column describes frequently appearing patterns, where 'z' represents a zero byte, 'm' represents a byte matched against a dictionary entry, and 'x' represents an unmatched byte. In the Output column, 'B' represents a byte and 'b' represents a bit. During one iteration, each word is first compared with the patterns zzzz and zzzx. If there is a match, the compression output is produced by combining the corresponding code and the unmatched bytes; otherwise the compressor compares the word with all dictionary entries and finds the one with the most matched bytes. The compression result is then obtained by combining the code, the dictionary entry index, and the unmatched bytes, if any. Words that fail pattern matching are pushed into the dictionary. In each output, the code and the dictionary index, if any, are enclosed in parentheses.

Although a 4-word dictionary could be used, the dictionary size is set to 64 B in our implementation. The dictionary is updated after each word insertion. During decompression, the decompressor first reads the compressed words and extracts the codes for analysing the pattern of each word, which are then compared against the known codes.
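A behavioural sketch of this pattern-plus-dictionary scheme follows (illustrative Python, not the hardware encoder; the two patterns and the FIFO dictionary update follow the description above, while the match threshold and the output packing are simplifying assumptions):

# Sketch: compress one 32-bit word using the zzzz / zzzx patterns and a
# small FIFO dictionary with most-matched-bytes lookup (simplified).
from collections import deque

dictionary = deque(maxlen=16)  # 16 entries x 4 bytes = 64 B, as in the text

def compress_word(word: bytes):
    assert len(word) == 4
    if word == b"\x00\x00\x00\x00":
        return ("zzzz",)                       # all-zero word: code only
    if word[:3] == b"\x00\x00\x00":
        return ("zzzx", word[3])               # three zero bytes + 1 literal
    best_idx, best_hits = -1, 0                # entry with most matched bytes
    for i, entry in enumerate(dictionary):
        hits = sum(a == b for a, b in zip(word, entry))
        if hits > best_hits:
            best_idx, best_hits = i, hits
    if best_hits >= 2:                         # partial or full match (assumed)
        entry = dictionary[best_idx]
        literals = bytes(b for a, b in zip(entry, word) if a != b)
        return ("dict", best_idx, literals)
    dictionary.append(word)                    # unmatched word enters the dict
    return ("raw", word)

print(compress_word(b"\x00\x00\x00\x07"))      # -> ('zzzx', 7)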
5. Block diagram of Dynamic compression policy
Here the address is generated by the CPU in the form of a process id (pid) and the page number of the corresponding process (pno); thus the address consists of a combination of pid and pno.
Once the address is generated, the CPU first searches for the address in the page-mapped table of the private cache. Here there are two possibilities:
a) a page hit occurs;
b) a page miss occurs.

If the page is present in the page-mapped table of the private cache, we say that a page hit occurs. If so, the respective frame number is first looked up using the search-in-private method, the required data is fetched using the get-page-private method, and the access completes. If the page is available in neither the private cache nor the shared cache, it must be fetched from main memory. In this project an adaptive policy decides whether to compress the data before storing it in the shared cache. So we first observe whether the miss is of the avoidable or unavoidable type: if any of the tags in the shared cache is 0, it means the present miss might have been avoided if that page had been stored compressed, so the value of the global compression predictor is incremented. Once this is done, the page is searched for in main memory and, before transferring it to the shared cache, the policy decides whether to compress the data.
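A minimal sketch of this adaptive decision (illustrative Python; the saturating-counter form, the penalty event and the threshold are assumptions layered on the description above, not the dissertation's exact predictor):

# Sketch: global compression predictor driving the compress-on-fill decision.
class CompressionPredictor:
    def __init__(self, threshold=0, limit=1024):
        self.counter = 0              # benefit (avoidable misses) minus cost
        self.threshold = threshold
        self.limit = limit

    def avoidable_miss(self):         # miss that compression would have avoided
        self.counter = min(self.counter + 1, self.limit)

    def wasted_decompression(self):   # hit that paid decompression needlessly
        self.counter = max(self.counter - 1, -self.limit)

    def should_compress(self) -> bool:
        return self.counter > self.threshold

predictor = CompressionPredictor()
predictor.avoidable_miss()
print(predictor.should_compress())    # True -> store the fill compressed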
6. Cache Replacement Policy
ERP tries to replace the page that is not referenced often. To implement the ERP cache replacement policy, we created a pointer called Replace_Ptr and an array of Hit_Bit reference bits, one for each cache block in a cache set. The ERP policy uses a circular buffer, with Replace_Ptr pointing to the cache block that is to be replaced when a cache miss occurs. The use of the circular queue avoids moving cache blocks from the head of the queue to the tail; instead, a block is replaced by advancing Replace_Ptr to the next cache block in the circular queue. Replace_Ptr advances, resetting the hit bit as it goes, only while the hit bit of the block it points to is 1. During a cache hit, the ERP policy sets the Hit_Bit of the accessed cache block to 1 to indicate that the block has been hit. When a cache miss occurs and the Hit_Bit of the block pointed to by Replace_Ptr is 0, Replace_Ptr does not advance and the new cache block is placed at the Replace_Ptr position. Initially the reference bit is 0; the policy sets it to 1 as soon as the corresponding cache block is referenced. A reference bit of 0 means the cache block has not been referenced and hence can be replaced; a reference bit of 1 means the block has been referenced and is likely to be used again soon, so it is not replaced. The main purpose in developing ERP is to create a cache replacement policy that has a lower maintenance cost compared to LRU replacement policies.
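This is essentially a clock-style second-chance scheme; a compact sketch under that reading is given below (illustrative Python, not the dissertation's code; advancing the pointer past the newly inserted block is an assumed detail):

# Sketch: ERP victim selection with a circular Replace_Ptr and hit bits.
class ERPSet:
    def __init__(self, ways=4):
        self.blocks = [None] * ways   # cached page ids (None = empty way)
        self.hit_bit = [0] * ways
        self.ptr = 0                  # Replace_Ptr

    def access(self, page):
        if page in self.blocks:                   # cache hit: set the hit bit
            self.hit_bit[self.blocks.index(page)] = 1
            return True
        while self.hit_bit[self.ptr] == 1:        # recently hit blocks survive
            self.hit_bit[self.ptr] = 0
            self.ptr = (self.ptr + 1) % len(self.blocks)
        self.blocks[self.ptr] = page              # replace at Replace_Ptr
        self.ptr = (self.ptr + 1) % len(self.blocks)
        return False

s = ERPSet()
for p in (1, 2, 1, 3):
    s.access(p)
print(s.blocks)  # [1, 2, 3, None]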
7. Implementation Details
Two policies, never compress and always compress, were developed. From these two policies we can see that with compression the number of hits is greater than with an uncompressed cache format.
A) Mathematical Model
- U = set of all modules in the system = {P, CM, MM}, where P is the processor, CM the cache memory, and MM the main memory.
- CM contains the shared and private caches: CM = {SC, PC}, where SC is the shared cache and PC the private cache.
- P generates page numbers and process ids.
- The replacement policies are {LRU, ERP}, where LRU is Least Recently Used and ERP is the Efficient Replacement Policy.
- PNo → {SC, PC}, where PNo is the page number generated by the processor.

A page number generated by P is first searched for in the cache memory. If it is found there, a hit occurs; the page status is then checked (compressed/uncompressed), and if it is compressed, the page is decompressed and provided to the processor. If the page is not present in the cache memory, the main memory is searched for the page; the page is then transferred from main memory to cache memory and fetched by the processor for processing.
8. Result
To understand the utility of dynamic compression we compare it with the two extreme policies, never and always. Under never, data is never stored in compressed form; under always, data is always stored in compressed form. Thus never tries to reduce hit latency, while always tries to reduce the miss rate. Dynamic compression uses compression only when it predicts that the benefits outweigh the overhead. In the results, we compare the number of hits under different workloads. It is observed that for a small workload (workload 1), never and dynamic compression work better than always compression, because they incur no decompression overhead. For a workload whose size is slightly greater than the size of the level-2 cache (workload 2), and for a memory-intensive workload (workload 3), dynamic gives better performance than never and always, because always incurs more decompression overhead and never generates fewer hits.
For a very large workload, always compression gives more hits but also requires more decompression overhead; it increases the effective cache size in comparison with never and dynamic compression, but always pays the decompression cost. Comparing the LRU and ERP replacement policies for page replacement in the level-2 cache, performance under ERP is better than under the conventional LRU policy for all three compression techniques. For small workloads (workload 1), the number of hits is the same under all three policies. For workload 2, and for workload 3 (memory-intensive), the number of hits under the dynamic policy is greater than under the other two policies. Decompression overhead is another factor in comparing the three policies: it is zero for never compression, and greater for always than for dynamic compression.







Figure : Hits In Shared Cache
9. Conclusion and Future Work
A) Conclusion
Chip multiprocessors (CMPs) combine multiple processors on a single die. The increasing number of processor cores on a single chip increases the demand on the shared cache capacity, and demand on this critical resource can be further exacerbated. In this project, we explored using compression to effectively increase cache size and, ultimately, CMP performance. Cache compression stores compressed lines in the cache, potentially increasing the effective cache size, reducing off-chip misses, and improving performance. On the downside, decompression overhead can lengthen cache hit latencies, possibly degrading performance. While compression can have a positive impact on CMP performance, practical implementations raise a few concerns (e.g., compression overhead). We proposed a compressed shared cache design based on a simple compression scheme with a low decompression overhead; such a design can double the effective cache size. We developed an adaptive compression algorithm that dynamically adapts to the costs and benefits of cache compression and uses compression only when it helps performance. We presented a simple analytical model that helps provide results of applying compression in a multiprocessor system.
B) Future Scope
Different compression algorithms can be used to compress the data, for example for compressing audio, video or image data. Multiprocessing can be used to run several processes and generate addresses simultaneously. The scheme can be extended to server farms where memory becomes the bottleneck. The performance of the system can be analysed using industry benchmarks, and the system can be implemented at the kernel level.
REFERENCES:
[1] Alaa R. Alameldeen and David A. Wood, "Adaptive Cache Compression for High-Performance Processors", 31st Annual International Symposium on Computer Architecture (ISCA-31), Munich, Germany, June 19-23, 2004.
[2] Alaa R. Alameldeen and David A. Wood, "Variability in Architectural Simulations of Multi-threaded Workloads", Proceedings of the Ninth IEEE Symposium on High-Performance Computer Architecture, pp. 7-18, February 2003.
[3] Alaa R. Alameldeen and David A. Wood, "Frequent Pattern Compression: A Significance-Based Compression Scheme for L2 Caches", Technical Report 1500, Computer Sciences Department, University of Wisconsin-Madison, April 2004.
[4] A. R. Alameldeen and D. A. Wood, "Interactions between compression and prefetching in chip multiprocessors", Proc. Int. Symp. High-Performance Computer Architecture, pp. 228-239, Feb. 2007.
[5] A. Moffat, "Implementing the PPM data compression scheme", IEEE Trans. Commun., vol. 38, no. 11, pp. 1917-1921, Nov. 1990.
[6] E. G. Hallnor and S. K. Reinhardt, "A compressed memory hierarchy using an indirect index cache", Proc. Workshop on Memory Performance Issues, pp. 9-15, 2004.
[7] Sharada Guptha M N, H. S. Pradeep and M Z Kurian, "A VLSI Approach for Cache Compression in Microprocessor", International Journal of Instrumentation, Control and Automation (IJICA), ISSN 2231-1890, Volume-1, Issue-2, 2011.
[8] Bulent Abali, Hubertus Franke, Dan E. Poff, Robert A. Saccone Jr., Charles O. Schulz, Lorraine M. Herger, and T. Basil Smith, "Memory Expansion Technology (MXT): Software Support and Performance", IBM Journal of Research and Development, 45(2):287-301, March 2001.
[9] Jacob Ziv and Abraham Lempel, "A Universal Algorithm for Sequential Data Compression", IEEE Transactions on Information Theory, 23(3):337-343, May 1977.
[10] Jacob Ziv and Abraham Lempel, "Compression of Individual Sequences via Variable-Rate Coding", IEEE Transactions on Information Theory, 24(5):530-536, September 1978.
[11] Jang-Soo Lee, Won-Kee Hong, and Shin-Dug Kim, "Design and Evaluation of a Selective Compressed Memory System", International Conference on Computer Design (ICCD), 1999.
[12] Charles Lefurgy, Eva Piccininni, and Trevor Mudge, "Reducing Code Size with Run-time Decompression", Proceedings of the 6th International Symposium on High Performance Computer Architecture (HPCA), 2002, pp. 218-228.





Design and Optimization of High Frequency Lowpass Filter on RO4003C Microstrip Using Maximally-Flat (Butterworth) Technique

Ahmad Aminu¹

¹Department of Electrical/Electronic Engineering, School of Technology, Kano State Polytechnic, Kano, Nigeria
E-mail- ahmadaisha2008@gmail.com
Abstract - In this paper, the design and optimization of a lowpass filter using the maximally-flat (Butterworth) technique is proposed. The realization of a seventh-order lowpass filter on microstrip transmission lines was carried out. MATLAB and AWR software were used for the implementation.
Keywords: Lowpass filter, RO4003C microstrip, MSTEPX$, attenuation, lumped elements, frequency, scattering parameters
INTRODUCTION
Filters have important roles in communication/radar systems, and their usage is unavoidable when rejection of an unwanted frequency range is required. Filtering is also a major approach in electromagnetic compatibility (EMC) engineering, for the cancellation of noise and interference. Functionally, filters can be grouped into four categories: low-pass filters (LPF), high-pass filters (HPF), band-pass filters (BPF), and band-stop filters (BSF). There are various sets of analytical functions that satisfy given filter specifications, but Butterworth, Chebyshev, Cauer, and Bessel functions, with their pros and cons, are the functions widely used in RF/microwave filter design. For example, Butterworth filters are maximally flat in the pass band, but their out-of-band attenuation slopes are not good. Chebyshev filters have sharper attenuation slopes (compared to Butterworth filters), but the payoff is the ripple inside the pass band. Elliptic filters have the sharpest out-of-band attenuation, but they have undesired ripples both in and out of the pass band.

Today, most microwave filter designs are done with computer-aided design (CAD) packages, such as Advancing the Wireless Revolution (AWR), Ansoft Designer, etc., based on the insertion loss method. In this work, the Butterworth lowpass filter design is taken into consideration.

The aim of this work is to design and optimize a lowpass filter using the maximally-flat (Butterworth) technique with the following specifications:
Source impedance Zo = 50 Ω, load impedance ZL = 50 Ω. The dielectric substrate to be used in the microstrip is RO4003C with a height of 1.52 mm. The typical parameters of this dielectric are εr = 3.38, tan δ = 0.0027, and metal cladding of 35 μm. The highest practical characteristic impedance to be implemented on the microstrip is 130 Ω, and the lowest is 18 Ω. The filter should have a cut-off frequency (where attenuation is 3 dB) of 2.4 GHz and give a minimum of 30 dB attenuation at 4.08 GHz.
The objectives are:
(i) Calculations and MATLAB simulations of the filter with lumped elements (approximate solution);
(ii) Calculations and MATLAB simulations of the filter with microstrip transmission lines (almost exact solution);
(iii) Implementation of the filter design in AWR Microwave Design with optimization/tuning (engineering solution);
(iv) Implementation of the filter design in AWR Microwave Design considering discontinuities, with optimization/tuning (practical solution).


DESIGN OF THE FILTER
(A) First phase (Lumped element approach)
A procedure called the insertion loss method is used here, which uses network synthesis techniques to design a filter with a completely specified frequency response. The design is simplified by using low-pass filter prototypes that are normalized in terms of impedance and frequency. The normalized element values for the maximally flat filter design are obtained from Table 2.1 below.

From the given specification on the insertion loss, the filter order can be obtained from

|ω/ωc| − 1 = (2π × 4.08 GHz)/(2π × 2.4 GHz) − 1 = 0.7;

from Fig. 2.1 below it is found that N = 7 will be sufficient. Table 2.1 then gives the prototype elements as: g0 = 1.0000, g1 = 0.4450, g2 = 1.2470, g3 = 1.8019, g4 = 2.0000, g5 = 1.8019, g6 = 1.2470, g7 = 0.4450, g8 = 1.0000.
The un-normalized values can be obtained from the normalized values using the following formulae:

L'k = Lk Ro / ωc        (1)
C'k = Ck / (Ro ωc)      (2)

Table 2.1: Element values for maximally flat low-pass filter prototypes (go = 1, ωc = 1, N = 1 to 10)


Fig. 2.1: Attenuation versus normalized frequency for maximally flat filter prototype

MATLAB was used to obtain the following results from equations (1) and (2) above:

C1 = 0.5902 pF, L2 = 4.1347 nH, C3 = 2.3898 pF, L4 = 6.6315 nH, C5 = 2.3898 pF, L6 = 4.1347 nH, C7 = 0.5902 pF.
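The same scaling as a short sketch (plain Python standing in for the paper's MATLAB script; prototype values and formulas as above):

# Sketch: un-normalize the N = 7 Butterworth prototype, Ro = 50 ohm,
# fc = 2.4 GHz, using L'k = gk*Ro/wc and C'k = gk/(Ro*wc).
import math

g = [0.4450, 1.2470, 1.8019, 2.0000, 1.8019, 1.2470, 0.4450]  # g1..g7
Ro, wc = 50.0, 2 * math.pi * 2.4e9

for k, gk in enumerate(g, start=1):
    if k % 2:                                    # shunt capacitor positions
        print(f"C{k} = {gk / (Ro * wc) * 1e12:.4f} pF")
    else:                                        # series inductor positions
        print(f"L{k} = {gk * Ro / wc * 1e9:.4f} nH")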
The filter circuit is drawn using AWR software and is shown in fig. 2.2 below

Fig. 2.2, Low pass, maximally flat with N=7, filter circuit
MATLAB is used to obtain the scattering parameter graphs (S11 vs. frequency and S21 vs. frequency) shown in Figure 2.3 below.

Fig. 2.3, the graph of S11 and S21 versus frequency

Comment:
All frequencies lower than the cut-off frequency (2.4 GHz) have attenuation values below 3 dB, and at 4.08 GHz there is 32.26 dB attenuation, which is higher than 30 dB.

(B) Second phase (Microstrip transmission line approach)

In this phase, the lengths and characteristic impedances of the microstrip transmission line sections are calculated. The highest practical characteristic impedance implemented on the microstrip is 130 Ω and the lowest is 18 Ω. The effective dielectric permittivities for the 130 Ω and 18 Ω lines are calculated using AWR software:

εeff = 2.35 for 130 Ω, and 3.04 for 18 Ω.

The lengths are calculated using the following formulae for the inductive and capacitive sections respectively:

ℓind,k = L'k · Vp1 / Zhigh
ℓcap,k = C'k · Vp2 · Zlow

where Vp1 = 3×10⁸/√2.35 = 195.7×10⁶ m/s and Vp2 = 3×10⁸/√3.04 = 172.06×10⁶ m/s are the phase velocities at 130 Ω and at 18 Ω respectively.
Therefore, the lengths of the microstrip lines are:

ℓcap,1 = 1.83 mm, ℓind,2 = 6.22 mm, ℓcap,3 = 7.4 mm, ℓind,4 = 9.98 mm, ℓcap,5 = 7.4 mm, ℓind,6 = 6.22 mm, ℓcap,7 = 1.83 mm.
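The corresponding length calculation as a sketch (again Python standing in for the MATLAB computation; values as above):

# Sketch: microstrip section lengths from the lumped element values.
import math

c0 = 3e8
vp_high = c0 / math.sqrt(2.35)   # 130-ohm line
vp_low  = c0 / math.sqrt(3.04)   # 18-ohm line
Zhigh, Zlow = 130.0, 18.0

L = {2: 4.1347e-9, 4: 6.6315e-9, 6: 4.1347e-9}               # henries
C = {1: 0.5902e-12, 3: 2.3898e-12, 5: 2.3898e-12, 7: 0.5902e-12}

for k, Lk in L.items():
    print(f"l_ind,{k} = {Lk * vp_high / Zhigh * 1e3:.2f} mm")  # 6.22, 9.98, 6.22
for k, Ck in C.items():
    print(f"l_cap,{k} = {Ck * vp_low * Zlow * 1e3:.2f} mm")    # 1.83, 7.40, ...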
Using the values of the characteristic impedances and lengths obtained above, MATLAB is used to calculate the overall input impedance and reflection coefficient (S11) using a cascade approach, and the unitary property of the scattering matrix is then used to calculate S21.

Fig. 2.5, graph of S11 and S21 versus frequency.
Figure 2.6 below shows the comparison between the S21 of the first phase and that of the second phase.


Fig. 2.6, the graph of S21 versus frequency for the first and second phases.

Comment:
All frequencies lower than 2.32 GHz, which is less than the cut-off frequency (2.4 GHz), have attenuation values below 3 dB, and at 4.08 GHz there is 28.11 dB attenuation, which is less than the specified minimum of 30 dB.
Figure 2.6 shows that the first approach (phase) gives a better result compared to the second phase.

(C) Third phase (Simulation approach)
In this phase, the microwave circuit design is implemented and simulated with the AWR Microwave Design Environment software. The physical lengths of the transmission lines and the characteristic impedances calculated in the second phase are used. The S11 vs. frequency and S21 vs. frequency graphs were drawn on the same figure with this program, and screenshots were taken as shown in figures 2.7(a) and (b) below.


Fig. 2.7(a), AWR simulation of the third phase without optimization

Fig. 2.7(b), S11 and S21 vs. frequency graph without optimization

Comment:
All frequencies lower than the cut-off frequency (2.4 GHz) have attenuation values below 3 dB, and at 4.08 GHz there is 28.26 dB attenuation, which is less than 30 dB. This result is better than that of the second phase, since there is no shift in the cut-off frequency, but an improvement is still needed.

By using the tuning tool of AWR, the performance of the filter was improved as much as possible, as shown in fig. 2.8 below. The characteristic impedances were kept constant while the lengths of the sections were tuned. Table 2.2 below shows the original and optimized values of the lengths of the transmission line sections.

Fig. 2.8(a) simulation of the third phase with optimization

Fig.2.8 (b), S11 and S21 Vs frequency graph with optimization





Table 2.2, original and optimized values of the lengths of the transmission line (mm)

 Length of the line | Without optimization | With optimization
 ℓcap,1             | 1.83                 | 2.01
 ℓind,2             | 6.22                 | 6.84
 ℓcap,3             | 7.40                 | 7.61
 ℓind,4             | 9.98                 | 9.00
 ℓcap,5             | 7.40                 | 7.53
 ℓind,6             | 6.22                 | 6.84
 ℓcap,7             | 1.83                 | 2.00

Comment:
With optimization (tuning), a better result is obtained for the third phase simulation: at 4.08 GHz there is 30.04 dB attenuation, matching the specified minimum of 30 dB, while the cut-off frequency of 2.4 GHz is maintained at 3 dB.

(D) Fourth phase (Production approach)
Microwave circuits and networks often consist of transmission lines with various types of discontinuities. In some cases discontinuities are an unavoidable result of mechanical or electrical transitions from one medium to another. Although approximate equivalent circuits have been developed for some printed transmission line discontinuities, many do not lend themselves to easy or accurate modelling and must be treated by numerical analysis. Modern CAD tools are usually capable of accurately modelling such problems.

The AWR simulations realized in the third phase do not take these discontinuities into account unless special transition elements of AWR are placed between the sections. In this phase, special transition elements (MSTEPX$) are inserted between the sections of the transmission lines to account for the discontinuities. Figure 2.9 shows this arrangement.


Fig. 2.9(a), AWR simulation of the design with MSTEPX$ between sections

Fig. 2.9(b), graph of S11 and S21 vs frequency with MSTEPX$ between sections

Looking at the above graph, we can see that the cut-off frequency has shifted from 2.4 GHz to 2.149 GHz. In trying to bring the cut-off frequency back to its original value by reducing the lengths of the transmission lines, figures 2.9(c) and (d) were obtained, as shown below.


Fig. 2.9(c), AWR simulation with reduced lengths of the transmission lines


Fig. 2.9(d), graph of S11 and S21 vs frequency with reduced lengths of the transmission lines

By inserting MSTEPX$ between the sections of the third phase with optimization, the following figures 2.9(e) and (f) were obtained.


Fig.2.9 (e), AWR simulation of the design with MSTEPX$ between sections


Fig. 2.9(f), graph of S11 and S21 vs frequency with MSTEPX$ between sections

Finally, by fine tuning to improve the performance of the filter simulation in the third phase with optimization and inserting MSTEPX$ between the sections, figures 2.9(g) and (h) were obtained, as shown below.



Fig. 2.9 (g), AWR simulation with reduced lengths and MSTEPX$ between sections

Fig. 2.9 (h), graph of S11 and S21 vs frequency with MSTEPX$ between sections
Comment:
With optimization (tuning), a better result is obtained: at 4.08 GHz there is 32.06 dB attenuation, which is higher than the specified minimum of 30 dB, while the cut-off frequency of 2.4 GHz is maintained at 3 dB. Table 2.3 below shows the variation of the lengths with insertion of MSTEPX$ between the sections and optimization of the third phase.
Table 2.3, optimized and reduced length values of the transmission line (mm)

 Length of the line | Optimized lengths | Reduced lengths
 ℓcap,1             | 2.01              | 1.995
 ℓind,2             | 6.84              | 6.82
 ℓcap,3             | 7.61              | 6.47
 ℓind,4             | 9.00              | 7.86
 ℓcap,5             | 7.53              | 7.53
 ℓind,6             | 6.84              | 6.50
 ℓcap,7             | 2.00              | 1.92

CONCLUSION
In this work, a 7-section maximally-flat high-frequency low-pass filter was designed and simulated in four phases: (i) calculation and MATLAB simulation with lumped elements (approximate solution); (ii) calculation and MATLAB simulation with microstrip transmission lines (almost exact solution); (iii) implementation of the design in Advancing the Wireless Revolution (AWR) microwave design software with optimization/tuning (engineering solution); and (iv) implementation of the design in AWR considering discontinuities, with optimization/tuning (practical solution).
ACKNOWLEDGMENT
I would like to acknowledge the contribution and advice of Dr. Mustafa Secmen. My gratitude goes to my parents, family and friends for their support and good wishes. I also thank Engr. Dr. Rabiu Musa Kwankwaso for sponsoring my postgraduate education.


Simulation & Performance Parameters Analysis of Single- Phase Full Wave
Controlled Converter using PSIM
Amit Solanki
Assistant Professor, EEE Department, LNCT Indore (M.P.), India
E-mail- Solanki_amit04@yahoo.co.in

Abstract- This paper focuses on the modelling, simulation and analysis of power electronics converter circuits on the basis of performance parameters. It deals with the analysis and simulation of a single-phase full-wave AC-to-DC converter, analysed on the basis of performance parameters and simulated with different types of loads. The simulation results, which include a study of performance parameters such as PF, FF, V_avg, V_rms, I_avg, I_rms and efficiency, agree with the theoretical results. The developed model is useful for computer-aided analysis and design of the full converter, including its firing circuits. A phase-controlled converter is an integral part of any power supply unit used in electronic equipment; it is also used as an interface between the utility and most power electronics equipment. Single-phase converters are also used to drive induction motors.
Keywords- Power Simulator Software (PSIM), single-phase full controlled converter, performance parameters, R, L, C loads.

INTRODUCTION - A power electronics controlled converter is a type of semiconductor converter used for the conversion of AC to DC, DC to AC, AC to AC, or DC to DC power. Power electronics is used to change the characteristics of electrical power to suit a particular application; it is an interdisciplinary technology. The thyristor can be triggered at any angle in the positive half cycle, and thus the output voltage can be controlled; the thyristor blocks during the negative half cycle. The waveforms for the different types of loads are shown in the figures below. A full-wave controlled converter provides a higher DC output voltage.


Fig-1 Rectifier quadrant operation.
PERFORMANCE PARAMETERS - The analysis of the full-wave converter is done by considering the following parameters:
1. Form Factor: FF = V_rms / V_dc
2. Ripple Factor: RF = sqrt(FF^2 - 1)
3. Transformer Utilization Factor: TUF = P_dc / (V_s * I_s)
4. Efficiency: η = P_dc / P_ac
5. Power Factor
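As an illustration of how these parameters follow from the output waveform, the fragment below is a minimal MATLAB sketch, assuming an ideal full-wave phase-controlled rectifier with a resistive load; the supply amplitude and firing angle are illustrative, so the numbers will not exactly reproduce the PSIM tables:

    Vm    = 110*sqrt(2);                     % peak of a 110 V (rms) supply
    alpha = pi/3;                            % firing angle of 60 degrees
    wt    = linspace(0, pi, 1e5);            % one output period (full wave)
    vo    = Vm*sin(wt) .* (wt >= alpha);     % thyristors conduct from alpha to pi
    Vavg  = trapz(wt, vo)/pi;                % average (dc) output voltage
    Vrms  = sqrt(trapz(wt, vo.^2)/pi);       % rms output voltage
    FF    = Vrms/Vavg;                       % form factor
    RF    = sqrt(FF^2 - 1);                  % ripple factor
    eta   = Vavg^2/Vrms^2;                   % Pdc/Pac for a purely resistive load
    fprintf('Vavg=%.1f V  Vrms=%.1f V  FF=%.2f  RF=%.2f  eta=%.1f%%\n', ...
            Vavg, Vrms, FF, RF, 100*eta);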

SIMULATION - The PSIM simulation model of the single-phase full-wave rectifier is shown in Fig 3, while Fig 4 and Fig 5 show the firing pulse generation and the waveforms for performance parameter calculation, respectively. PSIM is simulation software specially designed for power electronics and motor drive modules. With fast simulation and a user-friendly interface, PSIM provides a powerful simulation environment for power electronics. The PSIM simulation environment consists of the circuit schematic program PSIM, the simulator engine and the waveform processing program Simview. The simulation process is shown as follows.


Fig-2 PSIM Environment
Advantages of PSIM:

1) With PSIM's interactive simulation capability you can change parameter values and view voltages/currents in the middle of a simulation.
2) You can design and simulate digital power supplies using PSIM's Digital Control Module. The digital control can be implemented in either block diagram form or custom C code.
3) PSIM has a built-in C compiler which allows you to enter your own C code into PSIM without external compiling. This makes it very easy and flexible to implement your own functions or control methods.


Fig-3 PSIM Simulated Model and Simulation results for FW Rectifier with R-load





Fig-4 Simulated Model and Simulation Results for FW Rectifier with RL-load




Fig-5 Simulated Model and Simulation Results for FW Rectifier with RLE-load

SIMULATION RESULTS - The tabulated results for the R, RL and RLE loads at different firing angles are as follows:

Table 1.1 R Load (V_in = 110 volts, R = 5 ohms)

α (deg)          30      60      90      120
V_avg            136.6   109.78  73.09   36.63
I_avg            13.66   10.97   7.30    3.66
V_rms            160.2   145.8   114.88  71.95
I_rms            16.02   14.58   11.48   7.19
Form Factor      1.17    1.32    1.57    1.96
Ripple Factor    0.17    0.32    0.57    0.96
Power Factor     0.99    0.98    0.879   0.499
Efficiency       72.2%   56.5%   40.45%  25.48%

Table 1.2 RL Load (V_in = 110 volts, R = 10 ohms, L = 10 mH)

α (deg)          30      60      90      120
V_avg            134.1   107.27  52.43   21.15
I_avg            13.24   10.56   5.10    2.02
V_rms            160.6   146.24  96.43   55.28
I_rms            15.19   13.37   8.10    4.18
Form Factor      1.19    1.36    1.83    2.61
Ripple Factor    0.19    0.36    0.83    1.61
Power Factor     1.0     0.98    0.6     0.4
Efficiency       72.77%  58.35%  34.18%  5.5%

Table 1.3 RLE Load (V_in = 110 volts, R = 10 ohms, L = 10 mH, E = 100 volts)

α (deg)          30      60      90      120
V_avg            101.4   101.4   100.6   99.9
I_avg            14.6    14.6    0.063   0.05
V_rms            101.5   101.5   100.6   99.9
I_rms            28.9    28.9    0.15    0.10
Form Factor      1.0     1.0     1       1
Ripple Factor    0       0       0       0
Power Factor     0.99    0.98    0.85    0.89
Efficiency       50.51%  50.51%  42.2%   49.94%

CONCLUSION-
The design of an AC-to-DC single-phase full controlled converter was simulated in the PSIM software. The firing circuit was designed, and the different waveforms needed for the measurement of the performance parameters were generated. The performance parameters were calculated with the help of the waveforms developed in the PSIM simulator, and they matched the theoretical performance parameters. The parameters were then tabulated for various loads and different firing angles and analysed. Such converters are used in many industrial applications where controllable DC power is required; the simulation is also useful for educational purposes, for engineering students and laboratory experiments.


Moving Object Counting in Video Signals
Ganesh Raghtate, Abhilasha K Tiwari
Scholar, RTMNU, Nagpur, India
E-mail- gsraghate@rediffmail.com
Abstract - Object detection and tracking is important in the field of video processing. The increasing need for automated video analysis has generated a great deal of interest in object tracking algorithms. The input video clip is analyzed in three key steps: frame extraction, background estimation and detection of foreground objects. Object tracking and counting, in this case of cars, is pertinent to the task of traffic monitoring. Traffic monitoring is important to direct traffic flow, to measure traffic density and to check the rules of traffic at traffic signals. In this paper we present a technique to avoid human monitoring and automate the video surveillance system. The system avoids the need for a background image of the traffic: for a given input video signal, frames are extracted, and selected frames are used to estimate the background. This background image is subtracted from each input video frame to obtain the foreground objects. After a post-processing step, counting is done.
Keywords - Background estimation, background subtraction, car tracking, frame difference, object counting, object detection
1. INTRODUCTION
The efficient counting and tracking of multiple moving objects is a challenging and important task in the area of computer vision. It has applications in video surveillance, security, traffic-rule violation detection and human-computer interaction. Recently, a significant number of tracking systems have been proposed. The major hurdles for monitoring algorithms are changing light intensities, especially in late evenings and at night, and weather changes such as foggy atmospheres. Vehicle counting is important for computing traffic congestion and for keeping track of vehicles that use state-aid streets and highways. Even in large metropolitan areas, there is a need for data about the vehicles that use a particular street. A system like the one proposed here can provide an important and efficient counting mechanism to monitor vehicle (car) density on highways.
Objects are defined as vehicles moving on roads. Cars and buses can be differentiated, and the different traffic components can be counted and observed for violations such as lane crossing, vehicles parked in no-parking zones and even stranded vehicles that are blocking the roads. Vision-based video monitoring systems offer many more advantages: surveillance and video analysis provide quick and practical information, resulting in increased safety and traffic flow. The algorithm does not require a background image of the road; the background image is estimated from randomly selected input video frames, which is the greatest advantage of this method.
The organization of the paper is as follows. Section 2 gives a brief summary of the literature survey. Section 3 describes the architecture and modeling of the current technique of background estimation; its subsections detail frame extraction, background estimation, background subtraction and car detection. The technique of counting is described in Section 4. Section 5 discusses the simulation software. Section 6 gives the results obtained. Section 7 outlines the future work on this system, and Section 8 gives the conclusions.

2. LITERATURE SURVEY
A brief survey of the related work in the area of video segmentation and traffic surveillance is presented. Sikora [1] used this concept for intelligent signal processing and content-based video coding: an image scene consists of video objects, and the attempt is to encode the sequence in a way that allows separate decoding and reconstruction of the objects. Nack et al. [2] and Salembier et al. [3] have discussed multimedia content description related to the generation of region-based representations with respect to MPEG-4 and MPEG-7. In a detection-based approach, Y. Yokoyama et al. [4] discuss the concept of initial segmentation applied to the first frame of the video sequence: spatial segmentation partitions the first frame into homogeneous regions based on intensity; motion estimation is then computed to determine the motion parameters of each region; finally, motion-based region merging is performed by grouping the regions to obtain the moving objects. L. Wu et al. [5] explain in detail how temporal tracking is performed after initial segmentation. Neri et al. [6] describe a solution that eliminates the uncovered background region by applying motion estimation on regions with significant frame difference; the foreground object is identified when a good match is found between two frame differences, and the remaining region is discarded as unwanted area. Stauder et al. [7] consider the effect of the shadow of an object on the background region, which affects the change-detection output.

In [8], tracking and counting pedestrians using a single camera is proposed: the image sequences are segmented using background subtraction, and the resulting regions are then connected, grouped together as pedestrians and tracked. Dailey et al. [9] present a background subtraction and modeling technique that estimates traffic speed using a sequence of images from an uncalibrated camera; the combination of moving cameras and lack of calibration makes speed estimation a challenging job. Vibha L and Chetana Hegde [10] have used background subtraction and compared it with a foreground registration technique.

3. ARCHITECTURE AND MODELING
Figure 1 shows the architecture of the background estimation method. Frames are selected randomly from the input video clip and are used for background estimation. The background frame is subtracted from each input video frame to obtain the image of the foreground objects. Then, using post-processing such as median filtering and a morphological closing operation, a clear foreground object image is obtained, after which object tuning for object identification is done.

3.1 Frames extraction
Frames are extracted from the input video clip. Here 6-7 video clips are examined, covering different atmospheric conditions, different light intensities and different traffic densities. The frames are converted into gray frames. The total number of frames, which depends on the length of the input video clip, is also determined.

Fig 1: Architecture diagram of Background estimation technique




3.2 Background estimation

Frames are selected randomly from the input video clip. Their number may vary according to the traffic density: for higher traffic densities this number may need to be larger than for lower densities. All of these frames are then averaged by an averaging filter. The resulting image is that of the background, since the moving objects are averaged out. This method is called background estimation. By using it, object tracking can be done on any video clip, even if a background image of the clip is not available. Figure 7 below shows the background image obtained by averaging 6-7 input video frames.

Fig 7 Background Image by background estimation
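A minimal MATLAB sketch of this step is shown below, assuming a clip readable by VideoReader; the file name and the number of sampled frames are illustrative:

    v   = VideoReader('trafficcctv.avi');      % input clip (illustrative name)
    idx = randperm(v.NumFrames, 7);            % pick 7 frames at random
    bg  = zeros(v.Height, v.Width);
    for k = idx
        f  = im2double(rgb2gray(read(v, k)));  % convert each frame to gray
        bg = bg + f/numel(idx);                % running average of the frames
    end
    imshow(bg);  % moving objects average out, leaving the estimated background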

3.3 Background subtraction

The background frame is subtracted from each input video frame; both frames are gray images. The result of the background subtraction is a foreground object image. Fig 2 shows the result of background subtraction.

Figure 2 Background Subtraction Figure 3: Post Processing Technique



3.4 Post processing

The post-processing steps are applied to remove noise from the camera images. These include filtering techniques such as median filtering: the image boundaries are smoothed and noise is removed. This filter replaces the value of a pixel by the median of the gray levels in the neighborhood S_xy of that pixel:

f(x, y) = median{ g(s, t) : (s, t) ∈ S_xy }     (1)

3.5 Object tuning

This post-processing step is applicable in traffic monitoring and traffic surveillance systems. Its output is a binary and clearer image of the object. To get a clearer image of the foreground object, a morphological closing operation is performed on the binary image. Fig 5 shows the binary image obtained by object tuning.

Figure 5: Object Tuning
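The post-processing and object-tuning steps of Sections 3.4 and 3.5 can be sketched in MATLAB as follows; the threshold and structuring-element size are illustrative assumptions, and frame and bg denote the gray frame and the estimated background from above:

    fg = abs(frame - bg);                % background subtraction (Section 3.3)
    fg = medfilt2(fg, [3 3]);            % median filtering of Eq. (1)
    bw = imbinarize(fg, 0.15);           % fixed threshold gives a binary image
    bw = imclose(bw, strel('disk', 5));  % morphological closing (object tuning)
    imshow(bw);                          % clear binary foreground object image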

4. Object counting

The tracked binary image forms the input image for counting. An imaginary or hypothetical line is assumed to be present across the Y-axis of the image frame. When any moving object (vehicle) crosses the line, it is registered and the count is incremented. One variable, count, keeps track of the number of vehicles. When a new object is encountered, count is incremented as soon as it crosses the line; otherwise it is treated as a part of an already existing object and its presence is neglected. This concept is applied to the entire image, and the final number of objects is held in the variable count. A very good counting accuracy is achieved, although sometimes, due to occlusions, two objects are merged together and treated as a single entity.
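A minimal sketch of the line-crossing count for one binary frame is given below; the line position and minimum blob area are illustrative, and a full implementation must also track blobs across frames so the same vehicle is not counted twice:

    lineX = 160;                              % virtual line position, pixels
    stats = regionprops(bw, 'Centroid', 'Area');
    for s = stats'                            % examine every detected blob
        if s.Area > 200 && abs(s.Centroid(1) - lineX) < 2
            count = count + 1;                % a vehicle has crossed the line
        end
    end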

5. Simulation software
Simulation is performed using MATLAB software. MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. It is a tool for formulating solutions to many technical computing problems, especially those involving matrix representations, and it provides a convenient environment for digital image processing. MATLAB is a very efficient tool for image processing.

6. Results

The algorithm for counting moving objects was run on the traffic videos cctv01.avi to cctv04.avi and trafficcctv.avi, and the results obtained are shown in Fig 6. The videos cover all conditions of light, foggy atmosphere and traffic density. Figs 6a and 6c show results for high traffic density in dim light; Fig 6b shows results for very low traffic density and brighter light; Figs 6d and 6e show the results of our algorithm in moderate traffic. The algorithm successfully counts moving objects when frames are displayed consecutively in series. Table 1 gives the count values for the different traffic videos.








TABLE 1 Count Values

Sr. No.   Traffic Video     Count Value Obtained (CO)   Actual Count (CA)
1         cctv01.avi        9                           8
2         cctv02.avi        5                           5
3         cctv03.avi        12                          12
4         cctv04.avi        7                           7
5         trafficcctv.avi   13                          13



Fig 6a) trafficcctv.avi   Fig 6b) cctv04.avi   Fig 6c) cctv03.avi
Fig 6d) cctv02.avi   Fig 6e) cctv01.avi

7. Future work

Future work consists of a real-time implementation of this project with simultaneous display of the number of moving objects. This will be very helpful for weight-sensitive bridges and for controlling traffic congestion and traffic jams.


8. Conclusion

In this paper we propose an efficient algorithm for counting moving objects using a background elimination technique. We first compute the frame difference (FD) between each frame Fi and the background frame; the moving object is thereby isolated from the background. In the post-processing step, the noise and shadow regions present in the moving-object image are eliminated using a morphological operation with a median filter, without disturbing the object shape. This could be used in real-time applications involving multimedia communication systems. It remains to be shown in further work that the clarity of the image obtained using the background elimination technique is much better than that obtained using a background registration technique. Good segmentation quality is achieved efficiently. The paper also discusses an application of the system to traffic surveillance.


REFERENCES:
[1] Sikora, T., "The MPEG-4 Video Standard Verification Model", IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, pp. 19-31, Feb. 1997.
[2] Nack, F. and Lindsay, A. T., "Everything you Wanted to Know about MPEG-7: Part 2", IEEE Multimedia, vol. 6, pp. 64-73, Dec. 1999.
[3] Salembier, P. and Marques, F., "Region-based Representations of Image and Video: Segmentation Tools for Multimedia Services", IEEE Transactions on Circuits and Systems for Video Technology, vol. 9, pp. 1147-1169, Dec. 1999.
[4] Yokoyama, Y., Miyamoto, Y. and Ohta, M., "Very Low Bit Rate Video Coding Using Arbitrarily Shaped Region-Based Motion Compensation", IEEE Transactions on Circuits and Systems for Video Technology, vol. 5, pp. 500-507, Dec. 1995.
[5] Wu, L., Benoise-Pineau, J., Delagnes, P. and Barba, D., "Spatio-temporal Segmentation of Image Sequences for Object-Oriented Low Bit-Rate Image Coding", Signal Processing: Image Communication, vol. 8, pp. 513-543, 1996.
[6] Neri, A., Colonnese, S., Russo, G. and Talone, P., "Automatic Moving Object and Background Separation", Signal Processing, vol. 66, pp. 219-232, Apr. 1998.
[7] Stauder, J., Mech, R. and Ostermann, J., "Detection of Moving Cast Shadows for Object Segmentation", IEEE Transactions on Multimedia, vol. 1, pp. 65-76, Mar. 1999.
[8] Masoud, O. and Papanikolopoulos, N. P., "Robust Pedestrian Tracking Using a Model-based Approach", in Proceedings of the IEEE Conference on Intelligent Transportation Systems, pp. 338-343, Nov. 1997.
[9] Dailey, D. J., Cathey, F. and Pumrin, S., "An Algorithm to Estimate Mean Traffic Speed Using Uncalibrated Cameras", IEEE Transactions on Intelligent Transportation Systems, vol. 1, no. 2, pp. 98-107, June 2000.
[10] Vibha L, Chetana Hegde, P Deepa Shenoy, Venugopal K R, L M Patnaik, "Dynamic Object Detection, Tracking and Counting in Video Streams for Multimedia Mining", IAENG International Journal of Computer Science, 2008.

















Photovoltaic Power Injected to the Grid with a Quasi Impedance Source Inverter
M. Gobi, Scholar (PG), Erode Sengunthar Engineering College, Thudupathi, Erode
P. Selvan, Professor, Erode Sengunthar Engineering College, Thudupathi, Erode
E-mail- gobi1985.m@gmail.com

Abstract - The Z-source inverter (ZSI) with battery operation can balance the stochastic fluctuations of photovoltaic (PV) power injected into the grid/load, but the existing topology has a power limitation due to a wide range of discontinuous conduction mode during battery discharge. This paper proposes a new topology of the energy-stored ZSI to overcome this disadvantage. Two control strategies, with the related design principles, are proposed for the new energy-stored ZSI when applied to the PV power system; they make the inverter output power track the PV panel's maximum power. Voltage boost, inversion and energy storage are integrated in a single-stage inverter. The obtained results verify the theoretical analysis and prove the effectiveness of the proposed control of the inverter's input and output powers and the battery power, regardless of the charging or discharging condition.
Keywords - quasi-Z-source inverter (qZSI), photovoltaic (PV), sinusoidal pulse width modulation (SPWM), maximum power point tracking (MPPT).
INTRODUCTION

This paper deals with the [13] photovoltaic energy-stored quasi-Z-source inverter connected to the grid. With the worsening of the world's energy shortage and environmental pollution problems, protecting energy resources and the environment has become a major concern for human beings. Thus the development and application of clean renewable energy, such as solar, wind, fuel cells, tides and geothermal heat, is getting more and more attention. Among them, solar power will be dominant because of its availability [12]. The worldwide installed photovoltaic (PV) power capacity shows a nearly exponential increase due to decreasing costs and improvements in solar energy technology. Power converter topologies employed in PV power generation systems are mainly characterized as two-stage or single-stage inverters. The single-stage inverter is an attractive solution due to its compactness, low cost and reliability; however, its conventional structure must be oversized to cope with the wide PV voltage variation derived from changes of irradiation and temperature.

BLOCK DIAGRAM

Fig 1.1 shows the basic block diagram for PV power injected into the grid. The PV power is observed by MPPT [6] and processed for conversion by the qZSI. SPWM is applied to the qZSI to boost the voltage [6]. The boosted voltage is then stepped up for transmission by a step-up transformer and tied to the grid for distribution.

Fig 1.1 Block diagram of PV power injected to the grid

ENERGY CONVERSION EFFICIENCY

η = P_m / (E × A)     (1.1)

Equation (1.1) expresses the energy conversion efficiency: the percentage of absorbed light power converted into electrical energy when a solar cell is connected to an electrical circuit. It is calculated as the ratio of the maximum power P_m to the product of the input light irradiance E (in W/m², under standard test conditions, STC) and the area A of the solar cell.

MAXIMUM POWER
The maximum power point corresponds to the load for which the cell delivers maximum electrical power at the given level of irradiation:

P_m = V_m × I_m     (1.2)

where P_m is the maximum power, V_m the maximum-power voltage and I_m the maximum-power current.
SOLAR MODULE AND ARRAY MODEL
Since a typical PV cell produces less than 2 W at approximately 0.5 V, the cells must be connected in a series-parallel configuration in a module to produce a sufficiently high power. A PV array is a group of several PV modules which are electrically connected in series and parallel circuits to generate the required current and voltage. The equivalent circuit for the solar module is arranged with N_p cells in parallel and N_s cells in series. The terminal equation for the current and voltage of the generalized model is

I = N_p I_ph − N_p I_s [ exp( q (V/N_s + I R_s/N_p) / (k T_c A) ) − 1 ]     (1.3)

Neglecting the series resistance R_s, the equivalent circuit is described by

I = N_p I_ph − N_p I_s [ exp( q V / (N_s k T_c A) ) − 1 ]     (1.4)

where N_s is the number of series cells and N_p the number of parallel cells of the PV array, I_ph is the photocurrent, I_s the diode saturation current, q the electron charge, k Boltzmann's constant, T_c the cell temperature and A the diode ideality factor.
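A minimal MATLAB sketch of equation (1.4) is given below; all parameter values are illustrative assumptions, not the authors' panel data:

    q  = 1.602e-19;  k = 1.381e-23;   % electron charge, Boltzmann constant
    Np = 4;  Ns = 72;                 % parallel and series cell counts
    Iph = 5;                          % photocurrent, A (depends on irradiance)
    Is  = 1e-6;                       % diode saturation current, A
    Tc  = 298;  Af = 1.5;             % cell temperature (K), ideality factor A
    V  = linspace(0, 45, 500);        % terminal voltage sweep
    I  = Np*Iph - Np*Is*(exp(q*V/(Ns*k*Tc*Af)) - 1);   % equation (1.4)
    I(I < 0) = 0;                     % the array does not sink current here
    P  = V.*I;                        % output power
    [Pm, m] = max(P);                 % maximum power point, cf. equation (1.2)
    fprintf('MPP: %.1f W at %.1f V\n', Pm, V(m));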
CONVENTIONAL METHOD
Fig 2.1 shows the circuit diagram of the conventional method. This conventional structure must be oversized to cope with the wide PV voltage variation derived from changes of irradiation and temperature.

Fig 2.1 Circuit diagram for the conventional method

The two-stage inverter topology applies a boost DC/DC converter to minimize the required kVA rating of the inverter and to boost the wide-range input voltage to a [11] constant desired output value. However, the switch in the DC/DC converter increases the cost and decreases the efficiency. Most existing energy storage system (ESS) solutions add a bidirectional DC/DC converter to manage the batteries, which makes the system complex, increases its cost and decreases its reliability.
PROPOSED METHOD

Fig 3.1 shows the circuit diagram of the proposed method. In the proposed system the battery is connected across a capacitor, so a constant DC current and voltage can be drawn.

Fig. 3.1 Circuit diagram for the proposed method

The Z-source inverter (ZSI) presents a single-stage structure that achieves the voltage boost/buck characteristic in a single power conversion stage; [9] this type of converter can handle PV DC voltage variations over a wide range without overrating the inverter. The component count and system cost are reduced, with improved reliability due to the allowed shoot-through state. The recently proposed quasi-Z-source inverter (qZSI) has some new attractive advantages that make it more suitable for application in PV systems, as follows:

- The qZSI draws a constant current from the PV panel, and thus there is no need for extra filtering capacitors.
- The qZSI features lower component (capacitor) ratings.
- The qZSI reduces switching ripples.

Modes of Operation
The quasi-Z-source inverter has two operating modes during battery charging, discharging and energy storage in the photovoltaic power system: continuous conduction mode (CCM) and discontinuous conduction mode (DCM). Pulse-width modulation (PWM) methods are essential to operate the qZSI properly. The SPWM-based techniques for the qZSI can be divided into simple boost control, [8] maximum boost control and maximum constant boost control. They are simple to implement, but suffer from high switching frequency and additional switching operations, resulting in incremental losses [3].
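As an illustration of simple boost control, the MATLAB fragment below inserts shoot-through states whenever the triangular carrier exceeds a constant envelope; the frequencies, modulation index and envelope level are illustrative assumptions, not the authors' values:

    f  = 50; fc = 5e3; M = 0.8;             % line and carrier frequency, mod. index
    t  = 0:1e-6:0.04;                       % two fundamental cycles
    vm = M*sin(2*pi*f*t);                   % sinusoidal reference (phase a)
    vc = sawtooth(2*pi*fc*t, 0.5);          % triangular carrier
    Vp = 0.9;                               % shoot-through envelope (Vp >= M)
    sA = vm >= vc;                          % ordinary SPWM gate signal
    st = (vc > Vp) | (vc < -Vp);            % shoot-through intervals inserted
    D0 = mean(st);                          % resulting shoot-through duty ratio
    B  = 1/(1 - 2*D0);                      % ideal qZSI boost factor
    fprintf('D0 = %.3f, boost factor B = %.2f\n', D0, B);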
Mode 1
This mode short-circuits the inverter via any one phase leg, a combination of any two phase legs, or all three phase legs, which is referred to [7] as the shoot-through state. As a result, the diode D_z is turned off due to the reverse-bias voltage. During this time interval the circuit behaves as shown in Fig 3.2.

Fig 3.2 Mode 1 operation

- Inductor L1 is charged.
- The voltage drop appears across capacitor C2.
- The diode does not conduct, so the C1 current is not injected.
- The current flows into capacitor C1; the maximum output voltage is controlled by the shoot-through duty cycle.
Mode 2

Fig 3.3 Mode 2 operation

- The diode conducts because of the currents i_L1, i_C2 and i_L2.
- E_b (or V_b), the battery power, is lower in this mode.
- The voltage V_C1 on capacitor C1 falls towards V_b while the diode conducts.
- The output is shorted, so the current flows through i_E.

RESULTS AND DISCUSSION
OVERALL SIMULATION CIRCUIT

Fig 4.1 Simulation circuit for the qZSI output tied to the grid

The qZSI output is compared with the sinusoidal PWM reference, and the energy balance is maintained during continuous conduction. A step-up transformer rated 125 V/2 kV is used to raise the voltage. The qZSI output is then connected to the grid for distribution [9], as shown in Fig 4.1.
WAVEFORM FOR QZSI OUTPUT
The output from the qZSI is 131.4 V, as shown in Fig 4.2. The three-phase supply is not tied directly to the grid, because an energy imbalance is carried over from the PV power; this is optimized during the shoot-through state. The PV power can be controlled by the shoot-through duty cycle, and the inverter output by the modulation index, for stable, smooth operation and to deliver the full power.


Fig 4.2 Waveform for qzsi output
WAVEFORM FOR VOLTAGE ACROSS THE SECONDARY
Fig 4.3 shows the transformer voltage at the secondary. This output voltage is maximized by the shoot-through duty cycle, and the result is obtained with low switching losses.



Fig 4.3 Waveform for voltage across at secondary
WAVEFORM FOR LOAD CURRENT
The load current waveform is shown in Fig 4.4. The real and reactive power is limited by the load current under the different states of charge of the battery.

Fig 4.4 Waveform for load current

WAVEFORM FOR VOLTAGE ACROSS LOAD
The output voltage across the load is shown in Fig 4.5. This output voltage causes a high voltage stress from the secondary of the step-up transformer, rated 230 V/2 kV.


Fig 4.5 Waveform for voltage across load
CONCLUSION
This paper has proposed an energy-stored qZSI, achieved by a new technique, to overcome the shortcomings of the existing solutions for PV power systems. Two strategies have been proposed to control the new circuit topology, and their design methods have been presented using a small-signal model. The proposed energy-stored qZSI and the two control methods were implemented under different operating conditions. The theoretical analysis and the simulation results presented in this paper clearly demonstrate that the proposed energy-stored qZSI with the two suggested control methods can track the maximum power of the PV module and inject active and reactive power into the grid through the inverter, independently, as well as control the battery power flow. These features are applicable to various PV power systems, ensuring system accuracy and proving the maximum output.




REFERENCES:
[1] R. A. Mastromauro, M. Liserre, T. Kerekes, and A. Dell'Aquila, "A single-phase voltage-controlled grid-connected photovoltaic system with power quality conditioner functionality", IEEE Trans. Ind. Electron., vol. 56, no. 11, pp. 4436-4444, Nov. 2009.
[2] F. Bradaschia, M. C. Cavalcanti, P. E. P. Ferraz, F. A. S. Neves, E. C. dos Santos, and J. H. G. M. da Silva, "Modulation for three-phase transformerless Z-source inverter to reduce leakage currents in photovoltaic systems", IEEE Trans. Ind. Electron., vol. 58, no. 12, pp. 5385-5395, Dec. 2011.
[3] J. Chavarria, D. Biel, F. Guinjoan, C. Meza, and J. Negron, "Energy balance control of PV cascaded multilevel grid-connected inverters for phase-shifted and level-shifted pulse-width modulations", IEEE Trans. Ind. Electron., vol. 60, no. 1, pp. 98-111, Jan. 2013.
[4] H. Abu-Rub, A. Iqbal, S. Moin Ahmed, F. Z. Peng, Y. Li, and G. Baoming, "Quasi-Z-source inverter-based photovoltaic generation system with maximum power tracking control using ANFIS", IEEE Trans. Sustainable Energy, vol. 4, no. 1, pp. 11-20, Jan. 2013.
[5] R. Kadri, J. Gaubert, and G. Champenois, "An improved maximum power point tracking for photovoltaic grid-connected inverter based on voltage-oriented control", IEEE Trans. Ind. Electron., vol. 58, no. 1, pp. 66-75, Jan. 2011.
[6] Baoming Ge, Haitham Abu-Rub, Fang Zheng Peng, Qin Lei, Anibal T. de Almeida, Fernando J. T. E. Ferreira, Dongsen Sun, and Yushan Liu, "An energy-stored quasi-Z-source inverter for application to photovoltaic power system", IEEE Transactions on Industrial Electronics, vol. 60, no. 10, October 2013.
[7] R. Kadri, J. Gaubert, and G. Champenois, "An improved maximum power point tracking for photovoltaic grid-connected inverter based on voltage-oriented control", IEEE Trans. Ind. Electron., vol. 58, no. 1, pp. 66-75, Jan. 2011.
[8] J. Chavarria, D. Biel, F. Guinjoan, C. Meza, and J. Negron, "Energy balance control of PV cascaded multilevel grid-connected inverters for phase-shifted and level-shifted pulse-width modulations", IEEE Trans. Ind. Electron., vol. 60, no. 1, pp. 98-111, Jan. 2013.
[9] D. Vinnikov and I. Roasto, "Quasi-Z-source-based isolated DC/DC converters for distributed power generation", IEEE Trans. Ind. Electron., vol. 58, no. 1, pp. 192-201, Jan. 2011.
[10] R. Carbone, "Grid-connected photovoltaic systems with energy storage", in Proc. Int. Conf. Clean Elect. Power, Capri, Italy, Jun. 2009, pp. 760-767.














Adaptive Viterbi Decoder for Space Communication Application
R. V. Babar, Dr. M. S. Gaikwad, Pratik L. Parsewar
Assistant Professor, Sinhgad Institute of Technology, Department of Electronics and Telecommunication, Lonavala, Pune
Principal, Sinhgad Institute of Technology, Lonavala, Pune
Research Scholar (M.E.), Sinhgad Institute of Technology, Department of Electronics and Telecommunication, Lonavala, Pune
E-mail- pratik.parsewar@gmail.com
Abstract - The need for higher data transmission rates in wireless communication systems is increasing rapidly. The Viterbi algorithm is known as the optimum decoding algorithm for convolutional codes and has often served as a standard technique in digital communication systems for maximum-likelihood sequence estimation. In this paper, starting from the existing well-known Viterbi algorithm, an adaptive Viterbi algorithm based on strongly connected trellis decoding is proposed. Using this algorithm, the design and a field-programmable gate array implementation of a low-power adaptive Viterbi decoder with a constraint length of 9 and a code rate of 1/2 are presented. It is shown that the proposed algorithm can reduce the average number of ACS computations by up to 70% compared with the non-adaptive Viterbi algorithm, without degradation in the error performance. The proposed adaptive Viterbi decoder can be used in high-speed decoding applications such as space communication. This results in lowering the switching activities of the logic cells, with a consequent reduction in dynamic power. Also in this paper, with the help of MATLAB, the BERs of the adaptive Viterbi decoder and the Viterbi decoder are compared.

Keywords - Convolutional codes, adaptive Viterbi decoder, ACS unit, field-programmable gate array (FPGA) implementation.

I. INTRODUCTION
CONVOLUTIONAL codes and the Viterbi algorithm are known to provide a strong forward error correction (FEC) scheme, which has been widely utilized in digital communication applications. As the error-correcting capability of convolutional codes is improved by employing codes with larger constraint lengths K, the complexity of the decoders increases. The Viterbi algorithm (VA), which is the most extensively employed decoding algorithm for convolutional codes, is effective in achieving noise tolerance, but the cost is an exponential growth in memory, computational resources and power consumption. To overcome this problem, the reduced-complexity adaptive Viterbi algorithm (AVA) has been developed. The average number of computations per decoded bit for this algorithm is substantially reduced versus the VA, while comparable bit-error rates (BER) are preserved.
It has been shown that the larger the constraint length used in a convolutional encoding process, the more powerful the code produced. However, the complexity of the Viterbi decoding process increases steeply for constraint lengths greater than 9. As a consequence, a hardware implementation of such a Viterbi decoder would struggle to meet the requirements on power, speed and area. In recent years, Viterbi decoders have mostly been used in mobile systems that require portable battery operation, thus making power consumption a critical concern for designers.

II. Background

The idea behind the Viterbi Decoder (VD) is quite simple, in spite of its inherent implementation difficulty. Moreover, there is a
wide gap in complexity with the transmission side, where convolutional encoding can easily be implemented. Since convolutional
codes are represented by a state trellis, the decoder is a finite state machine that explores the transitions between states, stores them in
a large memory, and comes to a final decision on a sequence of transitions after some latency due to the constraint length of the input
code. Decisions are usually taken by considering the transition metrics among states, which are updated in terms of either Euclidean or
Hamming distance with the error-corrupted received sequence. The performance of convolutional codes strongly depends on their
minimum distance, which in turn depends on the constraint length and coding rate. As a consequence, in order to increase the gain
with respect to the uncoded case, there is a continuous trend towards increasing such parameters. Thus, complexity may grow up to a
limit where classic implementation techniques are no longer viable. Recently, Adaptive Viterbi Decoding (AVD) for the algorithmic
part and systolic architectures for the implementation aspects are increasing their popularity in the technical literature. In the AVD
approach, only a subset of the states is stored and processed, significantly reducing computation and storage resources at the expense
of a small performance loss.

III. Viterbi Algorithm
The Viterbi algorithm proposed by A. J. Viterbi is known as a maximum-likelihood decoding algorithm for convolutional codes: it finds the path through the code trellis that most likely corresponds to the transmitted one. The algorithm is based on calculating the Hamming distance for every branch; the most likely path through the trellis is the one that minimizes the accumulated Hamming distance, i.e., maximizes the likelihood metric. The algorithm reduces complexity by eliminating the least likely path at each transmission stage. The path with the best metric is known as the survivor, while the other entering paths are non-survivors. If the best metric is shared by two or more paths, the survivor is selected from among the best paths at random.
The selection of survivors lies at the heart of the Viterbi Algorithm and ensures that the algorithm terminates with the maximum
likelihood path. The algorithm terminates when all of the nodes in the trellis have been labeled and their entering survivors are
determined. We then go to the last node in the trellis and trace back through the trellis. At any given node, we can only continue
backward on a path that survived upon entry into that node. Since each node has only one entering survivor, our trace-back operation
always yields a unique path. This path is the maximum likelihood estimate that predicts the most likely transmitted sequence.
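The trace-back operation can be sketched in a few lines of MATLAB. The arrays pm (final path metrics), prev (surviving predecessor of each state at each stage) and inp (input bit on each surviving branch), as well as the length L, are hypothetical names for quantities the decoder would have stored:

    [~, s] = min(pm);            % start from the state with the best final metric
    bits = zeros(1, L);
    for n = L:-1:1               % walk backwards through the trellis
        bits(n) = inp(s, n);     % decoded bit along the unique survivor path
        s = prev(s, n);          % step to the surviving predecessor state
    end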
Various coding schemes are used in the wireless packet data networks of different standards, such as GPRS, EDGE and WiMAX, to maximize the channel capacity.

IV. Architecture of Viterbi Decoder
The architecture of the Viterbi decoder is illustrated in Fig. 1.

Fig. 1. Basic building blocks of the Viterbi decoder.

A. The Branch Metric Computer (BMC)
This is typically based on a look-up table containing the various bit metrics. The computer looks up the n-bit metrics associated
with each branch and sums them to obtain the branch metric. The result is passed along to the path metric update and storage unit. The
dashed rectangle in Fig. 2 shows the BMC.

B. The Path Metric Updating and Storage
This takes the branch metrics computed by the BMC and computes the partial path metrics at each node in the trellis. The
surviving path at each node is identified, and the information-sequence updating and storage unit notified accordingly. Since the entire
trellis is multiple images of the same simple element, a single circuit called Add-Compare-Select may be assigned to each trellis state.

C. Add-Compare-Select (ACS)
ACS is used repeatedly in the decoder. A separate ACS circuit can be dedicated to every element in the trellis, resulting in a fast, massively parallel implementation. For a given code with rate 1/n and total memory M, the number of ACS operations required to decode a received sequence of length L is L·2^M. In our implementation we combined both the BMC and the ACS in one unit representing a single wing of each trellis butterfly, as illustrated in Fig. 2.

D. Survivor Memory Management (SMM)
This is responsible for keeping track of the information bits associated with the surviving paths designated by the path metric updating and storage unit.




V. Adaptive Viterbi Decoder

The well-known VA has been described in literature extensively. The data path of the Viterbi Decoder is composed of three major
components: Branch Metric Calculation Unit (BMU), ACS and Survivor Memory Unit (SMU) as shown in Fig 1. The branch metrics
are calculated from the received channel symbols in BMU and then fed into the ACS which performs Add-Compare-Select for all the
states. The decision bits generated in ACS are stored and retrieved in the SMU in order to finally decode the source bits along the final
survivor path. The state metrics of the current iteration are stored into the Path Metric Memory Unit (PMU) and read out for the use of
the next iteration.


Fig. 3 Top Level Diagram of Viterbi Decoder

In the ACS unit, the VA examines all possible paths in the trellis graph and determines the most likely one. The AVA only keeps a number of the most likely states instead of the whole set of 2^(K-1) states, where K is the constraint length of the convolutional encoder; the rest of the states are all discarded. The selection is based on the likelihood or metric value of the paths, which for a hard-decision decoder is the Hamming distance and for a soft-decision decoder is the Euclidean distance. The rules for selecting the survivor paths are:

1. Every surviving path at trellis level n is extended and Its successors at level n+1 are kept if their path metric are smaller or equal
to P
Mmin
n + T, where P
Mmin
n is the minimum path metric of the surviving path at stage n+1, and T is the discarding threshold
configured by the user.
2. The total number of survivor paths per trellis stage is up bounded to a fixed number: N
max
, which is preset prior to the start of
the communication. In order to illustrate how the AVA operates, an example using a code rate R = 1/2, constraint length K = 3 is
given in Fig 2. The threshold T is set to 1 and Nmax is set to 3 respectively. Initially at t = 0, we set the P
Mmin
n Equal to 0 and the
decoder states equal to 00. The received Sequence is {01, 10, 11, 01, and 00}. The symbol X represents the discarded path and bold
line represents the final decision path by the AVA algorithm. For the sake of the simplicity, the minimum path metric of the nth
iteration P
Mmin
n is denoted by dm. It can be seen that at each trellis stage, the number of the survivor states is smaller than the VA
(2K1) and gets the same decision paths as the VA. The optimal selection strategy for architecture parameter Nmax and T is
discussed. In this paper, a range of T from 20 to 30 and a range of Nmax up to 2K2 are considered. The top level block diagram of
ACS unit of AVA Decoder is shown in Fig 4. Path Metric Adder and State Merge Unit correspond to the operation of Add and
Compare-Select Operation in VA respectively. Compared to conventional VA, two additional processing units are inserted into the
data path of the VA: Threshold Selection and Survivor Contender which correspond to the AVA rule 1 and rule 2 respectively in AVA
architecture. The Threshold Selection unit discards the paths exceeding the sum of the preset value T and the minimum path metric of
the last iteration P
Mmin
and the survivor contender is responsible for sifting N
max
states out of 2Nmax states. In addition, Min Path
Calculation unit is responsible for calculating the minimum path of the current iteration.


Fig. 4 Trellis Graph of adaptive Viterbi decoding

Fig. 5 block Diagram of Adaptive ACS Architecture

The conventional Threshold Selection architecture is shown in Fig 5. At time step n, the path metric PM_i^n of state i and the branch metric BM_ij from the BMU, associated with a state transition from i to j, are added in the Path Metric Adder. The accumulated path metric PM_j^(n+1) of state j is compared with the sum of PM_min^n and the preset constant T; those exceeding it are discarded. In parallel with the operation of the Threshold Selection unit and the Survivor State Contender, the path metric PM_j^(n+1) of state j is fed into the Min Path Calculation unit to determine the minimum path metric PM_min^(n+1) of the current iteration, which is stored for use in the next iteration.
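The two pruning rules can be made concrete with a short MATLAB sketch of one trellis stage. The K = 3, R = 1/2 tables, the received symbol and the values of T and N_max below are illustrative assumptions; discarded states are marked with an infinite metric:

    T = 1; Nmax = 3;                        % discarding threshold, survivor cap
    nxt = [0 2; 0 2; 1 3; 1 3] + 1;         % next state for input bit 0/1 (K = 3)
    out = [0 3; 3 0; 2 1; 1 2];             % 2-bit encoder output per branch
    pm  = [0 inf inf inf];                  % path metrics; start in state 00
    rx  = 1;                                % received pair "01" as an integer
    newpm = inf(1, 4);
    for s = find(isfinite(pm))              % extend every surviving path
        for u = 1:2                         % hypothesized input bit 0 or 1
            bm = sum(bitget(bitxor(out(s,u), rx), 1:2));  % Hamming branch metric
            newpm(nxt(s,u)) = min(newpm(nxt(s,u)), pm(s) + bm);
        end
    end
    newpm(newpm > min(newpm) + T) = inf;    % rule 1: threshold discarding
    [~, order] = sort(newpm);               % rule 2: keep at most Nmax states
    newpm(order(Nmax+1:end)) = inf;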
VI. RESULTS
In order to evaluate the performance, the ACS unit shown in Fig 3 was implemented with both the conventional and the reformulated scheme in Verilog models and mapped onto standard-cell-based ASIC and LUT-based FPGA technologies, respectively. The BMU and SMU are not considered here because these two components are the same in the different approaches. The specifications of the implementations are:

- 64 states, constraint length K = 9
- Code rate R = 1/2
- 3-bit, 8-level soft-decision inputs
- N_max = 16, T = 20
- ASIC approach: UMC 0.18 um standard-cell library
- FPGA approach: Xilinx Virtex 600E

An improvement in speed is achieved; significant improvement can be observed both in the standard-cell approach and in the LUT approach. In addition, the power efficiency is enhanced compared with the basic comparison unit. A further reduction in power can be attributed to the reduced complexity of the Min Path Calculation unit and the Path Metric Adder.
Below, the BER (bit error rate) is calculated with the help of MATLAB. The comparison graph shown below demonstrates that the BER is improved in the adaptive Viterbi decoder (Fig 6):

Fig. 6 BER of the adaptive Viterbi and Viterbi decoders for K = 9
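A standard-VA baseline for such a BER comparison can be generated with the MATLAB Communications Toolbox; the adaptive decoder itself is custom, so only the conventional baseline is sketched here, with an illustrative channel error probability:

    trel = poly2trellis(9, [561 753]);               % K = 9, rate-1/2 code
    msg  = randi([0 1], 1e5, 1);                     % random source bits
    code = convenc(msg, trel);                       % convolutional encoding
    rxb  = bsc(code, 0.02);                          % binary symmetric channel
    dec  = vitdec(rxb, trel, 5*9, 'trunc', 'hard');  % conventional VA decoding
    fprintf('BER = %.2e\n', mean(dec ~= msg));       % bit error rate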
VII. CONCLUSION AND FUTURE WORK
In this work, a high-speed implementation of an adaptive Viterbi decoder which uses a modified T-algorithm is presented. The use of error-correcting codes has proven to be an effective way to overcome data corruption in digital communication channels. The conclusions drawn from the design are as follows. An efficient reformulation-based architecture for Threshold Selection in adaptive Viterbi decoding is presented. The reformulated architecture exploits the inherent parallelism between the add-compare-select operation and the rescale operation in adaptive Viterbi decoding. Through the reformulation, the hardware complexity of the threshold selection in adaptive Viterbi decoding is significantly reduced in both ASIC and FPGA technologies, which leads to a corresponding significant reduction in area, power and delay. It should be noted that the proposed technique will also achieve similar power, area and speed efficiency with different specifications, e.g. K = 9.
Power and area have been reduced by dividing the trellis coding structure into two segments, and a significant amount of power has been saved in the design by modifying the branch metric architecture.
In the future, we plan to consider the decoding benefits of using a hybrid microprocessor and FPGA device. The tight integration of sequential control with parallel decoding may provide further run-time power benefits.

