Special thanks to all the reviewers who have taken hours out of their
precious time to review the papers sent to them. One must not forget that a
published article also includes the contribution of the reviewers, in the
form of proof-reading, identification of typos, grammar correction,
improvements to the paper's presentation, and the spotting of weaknesses —
as a whole, help in improving the quality of the final article before its
publication.
Dr Tristan Vanrullen
Chief Editor
LPL, Laboratoire Parole et Langage - CNRS - Aix en Provence, France
LABRI, Laboratoire Bordelais de Recherche en Informatique - INRIA - Bordeaux, France
LEEE, Laboratoire d'Esthétique et Expérimentations de l'Espace - Université d'Auvergne, France
Dr Constantino Malagôn
Associate Professor
Nebrija University
Spain
Dr Mokhtar Beldjehem
Professor
Sainte-Anne University
Halifax, NS, Canada
Dr Pascal Chatonnay
Assistant Professor
Maître de Conférences
Laboratoire d'Informatique de l'Université de Franche-Comté
Université de Franche-Comté
France
Dr Yee-Ming Chen
Professor
Department of Industrial Engineering and Management
Yuan Ze University
Taiwan
Dr Vishal Goyal
Assistant Professor
Department of Computer Science
Punjabi University
Patiala, India
Dr Natarajan Meghanathan
Assistant Professor
REU Program Director
Department of Computer Science
Jackson State University
Jackson, USA
Dr Navneet Agrawal
Assistant Professor
Department of ECE,
College of Technology & Engineering,
MPUAT, Udaipur 313001 Rajasthan, India
Prof N. Jaisankar
Assistant Professor
School of Computing Sciences,
VIT University
Vellore, Tamilnadu, India
IJCSI Reviewers Committee 2010
4. Significant Interval and Frequent Pattern Discovery in Web Log Data [pg 29-36]
Kanak Saxena, Computer Application Department, R.G.P.V., S.A.T.I., Vidisha, M.P., India
Rahul Shukla, Samrat Ashok Technological Institute, Vidisha (M.P.), India
8. ICT in Universities of the Western Himalayan Region of India II: A Comparative SWOT
Analysis [pg 62-72]
Dhirendra Sharma, University Institute of Information Technology, Himachal Pradesh
University, Shimla, Himachal Pradesh 171 005, India
Vikram Singh, Department of Computer Science and Engg, Ch. Devi Lal University, Sirsa,
Haryana 125 055, India
9. Stochastic Model Based Proxy Servers Architecture for VoD to Achieve Reduced Client
Waiting Time [pg 73-80]
T. R. Gopalakrishnan Nair, Director, Research and Industry Incubation Centre, DSI, Bangalore, India
M. Dakshayini, Dr. MGR University; working with the Dept. of ISE, BMSCE, Bangalore; Member,
Multimedia Research Group, Research Centre, DSI, Bangalore, India
10. Modified EESM Based Link Adaptation Algorithm for Multimedia Transmission in
Multicarrier Systems [pg 81-86]
R. Sandanalakshmi, Athilakshmi and K. Manivannan, Department of Electronics and
Communication Engineering, Pondicherry Engineering College, Pondicherry, India
11. Reliable Mining of Automatically Generated Test Cases from Software Requirements
Specification (SRS) [pg 87-91]
Lilly Raamesh, Research Scholar, Anna University, Chennai 25, India
G. V. Uma, CSE, Anna University, Chennai-25, India
12. Understanding Formulation of Social Capital in Online Social Network Sites (SNS)
[pg 92-96]
S. S. Phulari, Santosh Khamitkar, Nilesh Deshmukh, Parag Bhalchandra, Sakharam Lokhande
and A. R. Shinde, School of Computational Sciences, Swami Ramanand Teerth Marathwada
University, Nanded, MS, India, 431606
IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 1, No. 3, January 2010 1
ISSN (Online): 1694-0784
ISSN (Print): 1694-0814
1. Introduction

RFID stands for Radio Frequency Identification and is a term that describes a system of identification [1]. RFID is based on storing and remotely retrieving information or data, and a system consists of an RFID tag, an RFID reader and a back-end database [2]. RFID tags store the unique identification information of objects and communicate with readers so as to allow remote retrieval of their IDs. RFID technology depends on the communication between the RFID tags and the RFID readers. The range of a reader depends upon its operational frequency. Usually the readers have their own software running in their ROM and also communicate with other software to manipulate these uniquely identified tags [3]. Basically, the application which manipulates tag detection information for the end user communicates with the RFID reader to get the tag information through antennas. Many researchers have addressed issues that are related to RFID reliability and capability [2]. RFID continues to become popular because it increases efficiency and provides better service to stakeholders [1]. RFID technology has been

Fig. 1 RFID evolution over the past few decades, adapted from [6]
3. How an RFID System Works

Most RFID systems consist of tags that are attached to the objects to be identified. Each tag has its own "read-only" or "rewrite" internal memory depending on its type and application [7]. The typical configuration of this memory is to store product information, such as an object's unique ID, date of manufacture, etc. The RFID reader generates magnetic fields that enable the RFID system to locate objects (via the tags) that are within its range [5]. The high-frequency electromagnetic energy and query signal generated by the reader trigger the tags to reply to the query; the query frequency could be up to 50 times per second [4]. As a result, communication between the main components of the system, i.e. tags and reader, is established [6], and large quantities of data are generated. Supply chain industries control this problem by using filters that are routed to the back-end information systems; in other words, software such as Savant is used, which acts as a buffer between the information technology systems and the RFID reader [6, 7].

Several protocols manage the communication process between the reader and tag. These protocols (ISO 15693 and ISO 18000-3 for HF, or ISO 18000-6 and EPC for UHF) begin the identification process when the reader is switched on. These protocols work on selected frequency bands (e.g. 860–915 MHz for UHF or 13.56 MHz for HF). If the reader is on and a tag arrives in the reader's field, the tag automatically wakes up, decodes the signal and replies to the reader by modulating the reader's field [7]. All the tags in the reader's range may reply at the same time; in this case the reader must detect the signal collision (an indication of multiple tags) [6]. A signal collision is resolved by applying an anti-collision algorithm, which enables the reader to sort the tags and select/handle each tag (between 50 and 200 tags, depending on the frequency range and the protocol used) [7]. The reader can then perform certain operations on the tags, such as reading the tag's identifier number and writing data into a tag [7]. The reader performs these operations one by one on each tag. A typical RFID system work cycle can be seen in Figure 2.

4. Components of an RFID System

The RFID system consists of various components which are integrated in the manner described in the above section. This allows the RFID system to detect the objects (tags) and perform various operations on them. The integration of the RFID components enables the implementation of an RFID solution [8]. The RFID system consists of the following five components (as shown in Figure 3):

• Tag (attached to an object; unique identification).
• Antenna (tag detector; creates a magnetic field).
• Reader (receiver and manipulator of tag information).
• Communication infrastructure (enables the reader/RFID to work through the IT infrastructure).
• Application software (user database/application/interface).

Fig. 3 Components of an RFID System
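The anti-collision procedure described in Section 3 can be illustrated with a small simulation. This is only a sketch under assumptions: the text does not name the exact algorithm the readers apply, so a framed slotted-ALOHA scheme (a common choice in UHF systems) is assumed, and the tag IDs and frame size are invented for illustration.

```python
import random

def slotted_aloha_inventory(tag_ids, frame_size=8, seed=0):
    """Simulate framed slotted-ALOHA anti-collision: in each frame every
    unidentified tag picks a random slot; a slot holding exactly one tag
    is read successfully, and collided slots are retried next frame."""
    rng = random.Random(seed)
    pending = set(tag_ids)
    identified = []
    rounds = 0
    while pending:
        rounds += 1
        slots = {}
        for tag in pending:
            slots.setdefault(rng.randrange(frame_size), []).append(tag)
        for occupants in slots.values():
            if len(occupants) == 1:          # singleton slot: successful read
                tag = occupants[0]
                identified.append(tag)
                pending.discard(tag)
        # collided slots (more than one occupant) simply retry next frame
    return identified, rounds

tags = [f"TAG-{n:03d}" for n in range(20)]
read, rounds = slotted_aloha_inventory(tags)
print(f"identified {len(read)} tags in {rounds} frames")
```

Each frame resolves the singleton slots only, so the number of frames grows with the tag population — which is why the text notes that the practical limit per read cycle depends on the frequency range and protocol.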
5. Tags

Tags contain microchips that store the unique identification (ID) of each object. The ID is a serial number stored in the RFID memory. The chip consists of an integrated circuit embedded in silicon [7]. The RFID memory can be permanent or changeable depending on its read/write characteristics. Read-only and rewrite circuits differ: a read-only tag contains fixed data that cannot be changed without electronic reprogramming [5]. Rewrite tags, on the other hand, can be programmed through the reader at any time without any limit. RFID tags come in different sizes and shapes.

Fig. 2 A typical RFID System [7]
6. Antennas

RFID antennas collect data and are used as the medium for tag reading [7]. An antenna consists of the following:
7. RFID Reader

The RFID reader works as the central element of the RFID system. It reads tag data through the RFID antennas at a certain frequency [7, 9]. Basically, the reader is an electronic apparatus which produces and accepts radio signals [15]. The antennas contain an attached reader; the reader translates the tags' radio signals through the antenna, depending on the tags' capacity [16]. The readers contain built-in anti-collision schemes, and a single reader can operate on multiple frequencies. As a result, these readers are expected to collect data from — or, where applicable, write data onto — tags and pass it to computer systems. For this purpose readers can be connected to the computer system using RS-232, RS-485 or USB cable as wired options (called serial readers), or can use WiFi as a wireless option (known as network readers) [8, 12]. Readers are electronic devices which can be used standalone or be integrated with other devices, and incorporate the following components/hardware [12]: (1) power for running the reader, (2) communication interface, (3) microprocessor, (4) channels, (5) controller, (6) receiver, (7) transmitter, (8) memory.

7.1 Tag Standards

Readers use near-field and far-field methodologies to communicate with the tag through its antennas [7]. For a tag to respond to the reader, the tag needs to receive energy and communicate with the reader. For example, passive tags use either one of the two following methods [7, 11].

Fig. 9 RFID near field methodology [7]

The distinction between far-field and near-field RFID systems is that near-field systems use the LF (low frequency) and HF (high frequency) bands [17, 18], while far-field RFID systems usually use the longer-read-range UHF and microwave bands [17].

8. Advantages & Disadvantages of RFID Systems

Table 1: Comparison of RFID System

Advantage                        Disadvantage
High speed                       Interference
Multipurpose and many formats    High cost
Reduced man-power                Some materials may create signal problems
High accuracy                    Overloaded reading (failure to read)
Complex duplication
Multiple reading (tags)
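The near-field/far-field distinction above can be captured in a small helper. This is a rough sketch: the thresholds follow the frequencies quoted in the text (13.56 MHz for HF, 860–915 MHz for UHF) and the usual LF/HF/UHF band conventions, not any formal standard, and the function name is invented.

```python
def coupling_for_frequency(freq_hz):
    """Classify an RFID operating frequency into its band and the coupling
    typically used: LF/HF readers couple in the near field (inductive),
    UHF/microwave readers in the far field (rough band boundaries)."""
    if freq_hz < 30e6:            # LF (e.g. 125 kHz) and HF (e.g. 13.56 MHz)
        band = "LF" if freq_hz < 3e6 else "HF"
        return band, "near-field (inductive coupling)"
    band = "UHF" if freq_hz < 3e9 else "microwave"
    return band, "far-field (backscatter)"

print(coupling_for_frequency(13.56e6))   # HF tag -> near field
print(coupling_for_frequency(900e6))     # UHF tag -> far field
```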
• The number of paramedical staff involved in patients' movement processes.

• The number of actions performed in patients' movement processes.

• The resources involved in patients' movement processes.

• The finite number of locations used for patients' movement processes.

• The process of integrating patients' movement information with an existing IT infrastructure.

The system should enable the integration and optimization of resources while improving accuracy and minimizing patients' transition time, leading to improvements in patients' services.

10.1 Types of RFID Applications

In the previous section the components were identified for the hospital case; however, the detailed investigation is yet to be carried out.

RFID applications in healthcare could save important resources that can further contribute to better patient care. RFID applications could reduce the number of errors by tagging medical objects in the healthcare setting, such as patients' files, and tracking medical equipment in a timely manner. RFID further improves patients' care by integrating the medical objects involved throughout their care. RFID-based timely information about the location of objects would increase the efficiency and effectiveness of paramedical staff, leading to an improved patients' experience [4, 16].

10.3 Security & Control Applications

RFID tags can be attached to equipment and to users' personal/official belongings, such as organization ID cards and vehicles. By applying RFID in secure zones, permission can not only be granted to and revoked from the users/persons in a particular zone, but individual access and the length of each stay can also be recorded. This is also useful for audit trails. These types of application
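The access-control behaviour sketched in 10.3 — granting and revoking zone permissions and recording each stay for audit — could look roughly like this. The class, method names and record layout are invented for illustration; the paper does not define a data model.

```python
from dataclasses import dataclass, field

@dataclass
class SecureZone:
    """Toy model of an RFID-secured zone: checks tag permissions at the
    door and logs entry/exit times so stays can be audited later."""
    name: str
    allowed: set = field(default_factory=set)
    _inside: dict = field(default_factory=dict)   # tag_id -> entry time
    audit_log: list = field(default_factory=list)

    def grant(self, tag_id):
        self.allowed.add(tag_id)

    def revoke(self, tag_id):
        self.allowed.discard(tag_id)

    def on_tag_read(self, tag_id, timestamp):
        if tag_id not in self.allowed:
            self.audit_log.append((tag_id, "DENIED", timestamp, None))
            return False
        if tag_id in self._inside:                # second read = leaving
            entered = self._inside.pop(tag_id)
            self.audit_log.append((tag_id, "STAY", entered, timestamp - entered))
        else:                                     # first read = entering
            self._inside[tag_id] = timestamp
        return True

zone = SecureZone("server room")
zone.grant("ID-42")
zone.on_tag_read("ID-42", 100)   # permitted tag enters
zone.on_tag_read("ID-42", 160)   # same tag leaves; stay of 60 units logged
zone.on_tag_read("ID-99", 170)   # unpermitted tag: denied and logged
```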
consider time and flow carefully, as this is a very important aspect [19].

10.4 Patrolling Log Applications

RFID is also used for auditing and monitoring security personnel themselves. The application provides checkpoints for patrolling security guards. A checkpoint is basically an RFID tag which the security guard needs to scan with the reader during a sequential patrol. The reader maintains a record of the time and point at which the security guard swiped his card. This not only helps a security firm's administration to check the performance of its security guards, but can also be used as a reference to track events. This application can also help to improve the patrolling process, e.g. by identifying the need to increase patrols or checkpoints in a patrolled area.

10.5 Baggage Applications

The airline, package and delivery industries lose a lot of money on lost or late delivery of baggage/packages. Handling a large number of packages travelling from many places to various destinations on different routes can be very complex. In this scenario an RFID application provides better resource management, effective operation, and efficient transfer of packages. RFID helps to identify the packages and provides records that can advise the industry on possible areas that may require improvement. It also keeps customers informed about their packages.

10.6 Toll Road Applications

RFID applications improve toll collection/charging and traffic flow, since otherwise cars/vehicles cannot pass through toll stations without stopping for payment. RFID is used to automatically identify the account holder and make transactions faster. This application helps to keep traffic flowing well and to identify traffic patterns using data-mining techniques that can inform the administration or decision support systems. For example, the information can be used to report traffic conditions or to extend and develop future policies [19].

11. Conclusions

This study has identified and explained the nature of RFID technology evolution with respect to RFID applications. RFID technology will open new doors to make organisations and companies more secure, reliable and accurate. The first part of this paper explained and described RFID technology and its components, and the second part discussed the main considerations of RFID technology in terms of its advantages and a study model. The last part explores RFID technology applications. The paper considers RFID technology as a means to provide new capabilities and efficient methods for several applications, for example in healthcare, access control, the analysis of inventory information, and business processes. RFID technology needs to develop its capability to be used with computing devices; this will allow businesses to obtain the real potential benefits of RFID technology. This study facilitates the adoption of location detection technology (RFID) in a healthcare environment and shows the importance of the technology in a real scenario and application, in connection with resource optimization and improved effectiveness. There is no doubt that in the future many companies and organisations will benefit from RFID technology.

Future Work

Our work continues to develop an enterprise architectural framework for managing contextual knowledge by exploiting object location detection technologies such as RFID in healthcare processes that involve the movement of patients. Such a framework is intended to help healthcare managers adopt RFID for patient care, resulting in improvements in clinical process management and healthcare services.

Acknowledgments

This research/project is funded by SaTH NHS Trust, UK, and this RFID applications study is one of many objectives set in the larger project on "Context based knowledge management in healthcare".

References
[1] J. Bohn, "Prototypical implementation of location-aware services based on a middleware architecture for super-distributed RFID tag infrastructures", Personal and Ubiquitous Computing, (2008), 12:155-166.
[2] J. Schwieren, G. Vossen, "A Design and Development Methodology for Mobile RFID Applications based on the ID-Services Middleware Architecture", IEEE Computer Society, (2009), Tenth International Conference on Mobile Data Management: Systems, Services and Middleware.
[3] B. Glover & H. Bhatt, RFID Essentials, O'Reilly Media, Inc, Sebastopol, (2006), ISBN 0-596-00944-5.
[4] K. Ahsan, H. Shah, P. Kingston, "Context Based Knowledge Management in Healthcare: An EA Approach", AMCIS 2009, available at the AIS library.
[5] S. Garfinkel, B. Rosenberg, "RFID Applications, Security, and Privacy", USA, (2005), ISBN: 0-321-29096-8.
[6] L. Srivastava, "RFID: Technology, Applications and Policy Implications", Presentation, International Telecommunication Union, Kenya, (2005).
[7] Application Notes, "Introduction to RFID Technology", CAENRFID: The Art of Identification, (2008).
[8] L. Sandip, "RFID Sourcebook", IBM Press, USA, (2005), ISBN: 0-13-185137-3.
[9] T. Frank, H. Brad, M. Anand, B. Hersh, C. Anita, K. John, "RFID Security", (2006), ISBN: 1-59749-047-4.
[10] A. Narayanan, S. Singh & M. Somasekharan, "Implementing RFID in Library: Methodologies, Advantages and Disadvantages", (2005).
[11] Intermec, "ABCs of RFID: Understanding and using radio frequency identification", White Paper, (2009).
[12] E. Zeisel & R. Sabella, "RFID+", Exam Cram, (2006), ISBN: 0-7897-3504-0.
[13] US Department of Homeland Security, "Additional Guidance and Security Controls are needed over Systems using RFID at DHS", Department of Homeland Security (Office of Inspector General), (2006), OIG-06-53.
[14] US Department of Homeland Security, "Enhanced Security Controls needed for US-Visit's System using RFID Technology", Department of Homeland Security (Office of Inspector General), (2006), OIG-06-39.
[15] US Government Accountability Office, "Information Security: Radio Frequency Identification Technology in the Federal Government", (2005), Report to Congressional Requesters, GAO-05-551.
[16] K. Ahsan, H. Shah, P. Kingston, "Role of Enterprise Architecture in healthcare IT", Proceedings of ITNG 2009, (2009), IEEE.
[17] Y. Meiller & S. Bureau, "Logistics Projects: How to Assess the Right System? The Case of RFID Solutions in Healthcare", Americas Conference on Information Systems (AMCIS) 2009 Proceedings, Association for Information Systems, 2009.
[18] R. Parks, W. Yao & C. H. Chu, "RFID Privacy Concerns: A Conceptual Analysis in the Healthcare Sector", Americas Conference on Information Systems (AMCIS) 2009 Proceedings, Association for Information Systems, 2009.
[19] S. Shepard, "RFID: Radio Frequency Identification", (2005), USA, ISBN: 0-07-144299-5.

Paul Kingston is Professor of Health Sciences and Director of the Centre for Ageing and Mental Health at Staffordshire University. Professor Kingston has 34 years of experience as a healthcare researcher and practitioner. He has presented 101 conference papers in a number of different countries. He has co-authored or authored 7 books and contributed 11 chapters, has had 33 journal articles published, and has produced 10 reports/monographs and 3 working papers. Additionally, Paul has been involved in the development of training material in the area of adult protection and was a key founder of the charity Action on Elder Abuse. Furthermore, he is a consulting editor for the Journal of Elder Abuse and Neglect. Over the last ten years Paul has developed a strong and respected track record for developing research activity; during these ten years he was either in receipt of, co-managing, or leading awards totaling £1,785,633.
develop the education to reach a new era. The results from many studies (Zhang, 2003; Lin, 2005; Neo, 2006) also show that the teaching content should interact with the students, whether through multimedia network teaching or the more recent virtual reality auxiliary teaching systems, in order to reach the best learning performance. The computer is not a machine for data analysis only: well-designed digital teaching materials not only promote the interest of the learners but also improve learning efficiency. Furthermore, learners all over the world can use the teaching materials easily through the network. Many domestic and foreign researchers on network Cooperative Learning wish to establish a more effective learning environment through the Internet in order to improve the learning ability and performance of the students (Shi, 2002; Chen, 2004; Dong, 2004). The existing systems can be classified into three types according to the tools and technologies of their design. The first type is the network Cooperative Learning system that mainly provides documents (Johnson, 1994; Maier, 1994); this type concerns the design of the shared document database and the system infrastructure of network Cooperative Learning. The second type is the Cooperative Learning system that provides a video conferencing function; as the name suggests, it mainly provides the mechanism for the members of a group to discuss face-to-face over the network. The third type of Cooperative Learning system provides a virtual reality environment (Maier, 1994); this type mainly provides a learning environment of simulated virtual reality to the members of the group. However, most of the existing network Cooperative Learning systems provide only a convenient network environment for the students to conduct online Cooperative Learning of theoretical courses. It is rarely discussed how a Cooperative Learning environment for operational training on online-simulated instruments and circuit current measurement can be established.

This paper proposes a Web-based Cooperative Learning system for the remote operation of 3D virtual electronic instruments with a circuit-measuring function. The system does not only provide cooperative operation practice of simulated instruments over the network for the learners; it also improves the learners' interest in practical courses and their learning performance through system functions including real-time text chatting and discussion, and actual circuit measurement. The remaining chapters of this article cover the embedded system and its applications, virtual reality, system infrastructure and implementation, the Cooperative Learning of virtual instrument operation and circuit measurement, and conclusions.

2. Embedded System and Applications

The "embedded system" integrates application-specific software and hardware. Applications of embedded systems can be found everywhere, including everyday facilities such as mobile phones, electric toys and audio-video instruments, as well as transportation systems and factory automation. The trend in the functionality of common embedded systems is simplification: the software and hardware include only the modules required for the specific functions. Peripheral products from different manufacturers are combined into a complete embedded system according to the provided Intellectual Property (IP). Compared with common computers, the building cost can be reduced significantly. Besides, owing to their application domains, most embedded systems are characterized by miniaturization and low power consumption. Since the system includes only the specific functions, its design can be optimized to ensure the stability of the system (Kao, 2007).

2.1 Development Approach of Embedded Systems

Since an embedded system is developed for specific functions, the development tools include an In-Circuit Emulator (ICE), a Development Board, an Integrated Development Environment (IDE) and a compiler (Microtime Computer, 2004). The Development Board is a design sample provided by the hardware manufacturer. The debugging mechanism of the In-Circuit Emulator, together with the pre-set messages of the compiler, can simulate the program running on the hardware. Product developers may refer to their circuit design and integrate it into the corresponding development environment. Therefore, most Development Boards are built with many peripheral function modules, such as a network card, seven-segment display, parallel port, LEDs and DIP switches, for the convenience of product development and to speed up the product development cycle. The development environment and compiler together constitute the development software on the developer's personal computer. System developers may therefore design the working platforms and the corresponding applications of the embedded system on the personal computer and complete the compiling; through the parallel port or serial port, the resulting software can be loaded onto the target board, and the cross-platform development process is finished. Since common low-cost multimeters cannot communicate with a computer, this study has made use of a low-cost 8-bit 89C51 chip to develop an Embedded RS-232 Module to capture the data shown on the panel of the multimeter.
Figure 1 Embedded RS-232 module

2.2 Embedded RS-232 Module

Since most of the low-cost multimeters used by students do not include a serial communication function with the computer, this study makes use of the 8-bit 89C51 single chip to implement an Embedded RS-232 Module. It is then integrated with a low-cost multimeter to replace a high-cost multimeter. The 89C51 single chip has the following advantages: (1) the program is easy to learn, (2) low cost, (3) simple circuitry, (4) small size. The infrastructure of the single chip is shown in Figure 2. The corresponding specification is as follows:
(1) CPU: a high-performance 8-bit CPU for automatic control implements the whole operation of the computer.
(2) Program memory: ROM or EPROM is used to store the program and constants. Different memories have different codes.
(3) Data memory: RAM is used to store the data that changes dynamically while the program is running. The data can be accessed in the memory by the CPU, and it disappears when the power is off.
(4) Oscillator: an oscillator is included in the MCS-51 series single chip; it produces the clock for the whole system when connected to a quartz crystal.
(5) I/O pins: there are 32 input/output pins available.
(6) Timer/Counter: commands are used to set the 16-bit Timer or 16-bit Counter.

2.3 Embedded Broker

As network technology and applications develop rapidly, large numbers of servers for different network services are built, including file servers, web servers and email servers. These services are built on high-end servers; according to the functions configured by the users, these powerful machines become specific-function servers. Their hardware is complicated and expensive, and it directly increases the building cost of the system. When a network server is built on an embedded system, the relevant functions can be provisioned according to the requirements of the service. For example, the file-operation performance and hard-disk space of a file server should be increased, but a CPU of relatively low performance can be used. This can reduce cost effectively and increase competitiveness. This paper has adopted an ARM7 development board (as shown in Figure 3) to develop an embedded Broker to replace the expensive server.

Figure 3 ARM7TDMI development board
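On the host side, the PC must decode whatever the Embedded RS-232 Module sends from the multimeter panel. The paper does not specify the module's frame format, so the sketch below assumes a hypothetical ASCII frame such as `V:12.34\r\n` (quantity code, colon, panel value); the helper name and format are illustrative only.

```python
# Hypothetical quantity codes for the assumed frame format.
QUANTITIES = {"V": "voltage (V)", "A": "current (A)", "R": "resistance (ohm)"}

def parse_panel_frame(frame: bytes):
    """Decode one assumed ASCII frame from the Embedded RS-232 Module,
    e.g. b"V:12.34\r\n" -> ("voltage (V)", 12.34)."""
    text = frame.decode("ascii").strip()
    code, _, value = text.partition(":")
    if code not in QUANTITIES or not value:
        raise ValueError(f"malformed frame: {frame!r}")
    return QUANTITIES[code], float(value)

print(parse_panel_frame(b"V:12.34\r\n"))
```

In the real system such frames would arrive over the serial port from the 89C51; here the parser is exercised with literal byte strings only.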
3. Application of Virtual Reality Technology

Virtual Reality (VR) is a term from the science and technology domain. Virtual reality includes many components, such as vision, hearing, and even the senses of touch and smell. In virtual reality, the users can freely determine the path along which they browse the information.
Therefore, virtual reality can be regarded as a complete multimedia system. The major characteristics of a virtual reality system are interaction and real-time response. In applications, the users can operate the computer freely; they can observe the products from any angle and position. Virtual reality technology makes use of computer drawing or image synthesis, together with sound processing and the senses of touch and taste, to construct the virtual reality. In this virtual world, the 3D simulated object can be a familiar thing, something that we cannot see, or a simulated imagined space. Virtual reality is an integrated technology whose aim is to provide a higher level of man-machine interface (Kao, 2007). Dong (2004) has proposed three important topics for virtual reality: immersion, interaction and imagination (3I). These three topics are explained as follows:

(1) Immersion: the users can directly feel the scene of the virtual reality produced by the computer. The display system provides a simulated, first-person virtual world and scene, which can be controlled by the users directly; the users can really "immerse" themselves in the world of virtual reality.

(2) Interaction: the users interact with the objects in the virtual reality world. For example, the users can walk in the virtual world, wear a glove to catch objects in the virtual world, or even interact with other users in the same virtual world.

(3) Imagination: the virtual reality provides an imaginative space to the users. If virtual reality technology is applied in the interactive environment of a virtual group, then multiple users can log on remotely and simultaneously, share and interact in the same space, and meet the characteristics of virtual reality. This brings better efficiency to the interaction and communication of the virtual group; the users can talk and interact as if they were in the real world.

The application and development of virtual reality technology in different professional domains are increasing gradually. One of the reasons is that the functionality of virtual reality software has been enhanced; more importantly, the cost of virtual reality on a personal computer (PC) is relatively low, and it is widely accepted by all trades and professions. 3D Webmaster and Virtools Dev 3.0 have been adopted in this study to develop the relevant virtual instruments.

3D Webmaster does not only provide an editing function for 3D virtual objects; it can also transform the DXF or VRML2.0 format into other 3D formats and export to the VRML2.0 format, with audio files mainly in wav format. Direct3D is supported, and a Z-buffer is used for image processing, so that the images look real and smooth. The system has no strict hardware limitations and it supports most virtual reality peripheral devices. In addition, the SDK (Superscape Developers Kit) can be compiled into Dynamic Linking Libraries (.dll files) that plug into the virtual scene, allowing developers to write their own drivers. The virtual reality tool can create online virtual reality with multiple users' connections over the network. It can also exchange data dynamically with other applications for information sharing. It provides a complete virtual scene to the users, who can browse the virtual scene from the network after installing an ActiveX control in the browser. This software includes different editing tools, such as a model editor, an image editor, and a 3D scene and 3D virtual object editor. In order to increase the control over 3D virtual objects and the flexibility of their interactive design, 3D Webmaster provides the Superscape Control Language (SCL), which is similar to the C programming language. Therefore, the users can write in the control language within the virtual scene designed by themselves (Superscape, 1996), as shown in Figure 4.

Figure 4 SCL program editor of 3D Webmaster

4. System Infrastructure and Implementation

Network technology has developed rapidly. For example, web languages have progressed from HTML, SMIL and VRML to XML; the media have progressed from text, pictures and images to streaming media; and the transmission channels have progressed from modems, ISDN and T1 leased lines to ADSL. This change has caused a revolutionary development in teaching methods, teaching styles and interactive mechanisms. In order to embody the characteristics of Cooperative Learning, the system of this study provides functions including group real-time discussion, real-time interaction, and operation of 3D virtual electronic instruments that include
IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 1, No. 3, January 2010 12
ISSN (Online): 1694-0784
ISSN (Print): 1694-0814
transmission technology of remote control parameters and circuit measurement.

4.1 System infrastructure

The functions of the system are divided into four major parts, as shown in Figure 5.
(1) Cooperative Learning system: the learners can download the installation file from the link on the portal website. After the installation, the Cooperative Learning system can be entered.
(2) Cooperative Digital Learning: in the Cooperative Learning system, many items of Cooperative Digital Learning (system brief, 3D virtual lab, a 3D puzzle that trains up initiative and interaction, 3D virtual multimeter and 3D virtual power supply) can be found. The learners can click and start Cooperative Digital Learning.
(3) Teaching material: after the learners have started the Cooperative Learning system, they can start learning from the digital teaching content. The users must study the operation of the electronic instruments first; then they can make use of the real-time discussion to practice operating the instruments and start Cooperative Learning with the group members.
(4) Circuit measurement: the learners of the same group can carry out Cooperative Learning of the multimeter with the embedded RS-232 module for circuit measurement. The members of the same group can see the data of the circuit obtained by the measuring team on the 3D virtual instrument simultaneously.

4.2 Cooperative Learning of 3D virtual instruments with circuit-measuring function

The proposed cooperative practice environment with circuit-measurement function sends the measured value from the multimeter's embedded RS-232 module to the 3D virtual multimeters of the learners in the same group. Any learner can operate their own multimeter to carry out cooperative practice. As shown in Figure 6, when the user sends a control command (or gets data from a circuit measurement) to the 3D virtual instrument through the Operating Interface, it is interpreted and processed by the embedded control program, and a control parameter is obtained. Then, with the remote-control-parameter transmission technology, the Embedded Broker transmits the control parameter to the other learners of the same group, so that the 3D virtual instruments of the learning partners can be updated simultaneously. The detailed transmission procedure for the remote control parameter is as follows:
(1) The user sends the control command (or receives the measured data of the circuit) on the Operating Interface. The obtained control parameter is first sent to the 3D virtual instrument, which responds in accordance with the control parameter.
(2) The control parameter is sent to the embedded TCP/IP control function.
(3) The TCP/IP control function sends the control parameter to the Embedded Broker.
(4) The Embedded Broker transmits the control parameter to the TCP/IP control functions of the other users.
(5) The other users send the received control parameter to the embedded control program.
(6) The embedded control program of each of the other users then sends the control parameter to the 3D virtual instrument, which responds in accordance with the control parameter.
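The six-step relay above can be sketched with plain TCP sockets. This is a minimal illustration of the broker's broadcast role only, not the authors' embedded implementation; the class name and the message format are assumptions.

```python
import socket
import threading

class EmbeddedBrokerSketch:
    """Relay a control parameter from one learner to the rest of the group."""

    def __init__(self, host="127.0.0.1", port=0):
        self.server = socket.create_server((host, port))
        self.port = self.server.getsockname()[1]
        self.clients = []                 # sockets of connected learners
        self.lock = threading.Lock()

    def serve_forever(self):
        while True:
            conn, _ = self.server.accept()
            with self.lock:
                self.clients.append(conn)
            threading.Thread(target=self._relay, args=(conn,), daemon=True).start()

    def _relay(self, conn):
        # Steps (3)-(4): receive a control parameter from one learner's TCP/IP
        # control function and forward it to every other learner in the group.
        while True:
            data = conn.recv(1024)
            if not data:
                return
            with self.lock:
                for other in self.clients:
                    if other is not conn:
                        other.sendall(data)
```

Each learner-side TCP/IP control function is then just a socket that sends the obtained parameter (step 2) and applies whatever arrives to the local 3D virtual instrument (steps 5-6).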
(2) In the real-time discussion, the users can transmit messages to the members of the group for real-time discussion. The corresponding learning records can be saved for the teacher to assess the learning performance.
(3) The user inputs the IP of the Embedded Broker and the assigned port in order to connect to the Embedded Broker.
(4) After grouping, team study via networking is available in addition to login learning.
(5) The users can make use of the virtual-machine operating interface to operate the 3D virtual instrument.

utilized: Ethereal (Network Protocol Analyzer, version 0.10.10).
In Figures 12 and 13, the x-axis stands for the number of learners, while the y-axis indicates the average response time of the 3D virtual Cooperative Learning system (unit: ms, millisecond). While 40 learners use the system simultaneously, the response time for each learner will differ, because the received status data, calculated by averaging the sum of each learner's response times, are updated in order via the Embedded Broker to each learner's PC. Under the PC and Embedded Broker architectures, the experimental results for the system's response time against an increasing number of learners are listed below:
(1) Under the system with the PC Broker, while learning is done by grouping, the average response times for Groups 1, 2, 3 and 4 are respectively 132 ms, 144 ms, 148 ms and 152-155 ms, as illustrated in Figure 12.
(2) Under the system with the Embedded Broker, while learning is done by grouping, the average response times for Groups 1, 2, 3 and 4 are respectively 130 ms, 141 ms, 147 ms and 152-155 ms, as illustrated in Figure 13.
From the above results, the proposed Embedded Broker provides service capabilities similar to those of a high-end server system, while significantly reducing system costs.
6. Conclusions

Most of the existing network Cooperative Learning systems provide a convenient network environment for students to conduct online Cooperative Learning of theoretical courses. It is rarely discussed how a Cooperative Learning environment for operation training on online-simulated instruments and circuit measurement can be established. As a result, the existing network Cooperative Learning systems cannot implement the characteristics of Cooperative Learning in a learning environment on the Internet.
In this study, the Cooperative Learning environment of instrument practice is implemented on the Internet. The proposed Cooperative Learning system of 3D virtual instruments with circuit-measuring function integrates virtual reality technology, embedded system technology and the transmission technology of the remote control parameter. The Cooperative Learning environment for the operation of instruments on the Internet is thus established completely, providing Cooperative Learning practice of circuit measurement that learners can carry out online at any time. Besides, the simulated training provided by the 3D virtual instruments and the online grouped discussion not only increase the learners' interest in practice courses and train up their initiative and interaction in Cooperative Learning, but also decrease the damage rate of traditional instruments and reduce the purchasing cost of school facilities. In addition, the proposed Embedded Broker provides service capabilities similar to those of a high-end server system, while significantly reducing system costs.

Acknowledgment

This paper has been supported in part by grant NSC95-2520-S-212-001-MY3 from the National Science Council of Taiwan.

References
[1] Chen, S.J. (1997). Discussion on Applying Coaching Cooperative Learning Strategy on Reading Teaching in Primary School. Journal of National Taichung Teachers College, Vol. (11), 65-111.
[2] Chen, X.Z., Liu, J.R., & Ke, J.J. (2004). Performance Assessment for Cooperative Learning on Information Ethics. Journal of Kao Yuan University, Vol. (10), 161-168.
[3] Dong, W.F. (2004). Applying Cooperative Learning on Teaching the English Class. http://www.chinaetr.com/.
[4] Huang, Z.J. & Lin, P.X. (1996). Cooperative Learning. Wu Nan Book Inc.
[5] Johnson, D.W. & Johnson, R.T. (1994). The New Circles of Learning: Cooperation in the Classroom and School. ASCD.
[6] Kao, F.C. (2005). Design of SCORM Learning Management System with Embedded 3D Virtual Instrument. OOTA2005, 286-290.
[7] Kao, F.C. (2007). Design of 3D Virtual Instrument for Cooperative Learning System. TWELF2007, 602-606.
[8] Kao, F.C., Chiang, K.Y. & Kuo, C.L. (2006). The Design of Load-Balancing Computer-Assisted Instruction System with Embedded 3D Virtual Instruments. ICIMB, 30.
[9] Kao, F.C., Tseng, C.W. & Ji, J.H. (2007). The Design of Embedded LCMS Broker with Load-Balancing Function. ICMLC, 3770-3776.
[10] Lin, D.S. (2005). Studying on Applying Conceptual Graph to Junior High Students' Cooperative Learning. Nanhua General Education Research, Vol. (2), 39-67.
[11] Liu, X.M. (1998). Teaching Strategy of Cooperative Learning. Bulletin of Civic and Moral Education, Vol. (5), 285-294.
[12] Laurillard, D. (2002). Rethinking University Teaching, 2nd ed. Routledge Falmer.
[13] Maier, M.H. & Keenan, D. (1994). Teaching Tools: Cooperative Learning in Economics. Economic Inquiry, Vol. 32, 358-361.
[14] Microtime Computer Inc., "Implementation of Embedded uClinux on PreSOCes", Chuan Hwa Book Co., Ltd, ISBN 957-21-4533-9.
[15] Neo, S.J. & Yang, G.X. (2006). The History and Existing Development Analysis of Network Education. http://www.edu.cn.
[16] Parker, D.B. (1985). Learning Logic. Technical Report TR-47, Center for Computational Research in Economics and Management Science, Massachusetts Institute of Technology, Cambridge, MA, USA.
[17] Rosenberg, M.J. (2001). E-Learning: Strategies for Delivering Knowledge in the Digital Age. New York, McGraw-Hill.
[18] Scardamalia, M. and Bereiter, C. (2003). Computer Support for Knowledge-Building Communities. The Journal of the Learning Sciences, Vol. 3, pp. 265-283.
[19] Shi, D.Q. (2002). Discussion on Improving the Reading and Understanding Ability from the Cooperative Learning Group Discussion. Practice Teachers Quarterly, Vol. (24), 70-76.
[20] Slavin, R.E. (1985). Learning to Cooperate, Cooperating to Learn. 147-172.
[21] Wei, J.L. & Chen, Y.M. (1994). The Method Breaking Through the Teaching Predicament of Normal Class: Cooperative Learning. Educational Research & Information, Vol. (35), 59-65.
[22] Zhang, J.H. (2003). Enlightenment of Cooperative Learning Theory on Teaching the English Class. Journal of Liaocheng University, Philosophy and Social Science Edition, Vol. (5), 120-122.
2 Department of Electrical and Electronics Engineering, St. Peters Engineering College, Chennai, India.
In Fig (4), TGS = Steam Governor time constant and TTS = Steam Turbine time constant. Fig (4) represents the model of the Steam Governor and Turbine (non-reheat type) to be equipped with the Heffron-Phillips generator model; i.e., the output (∆Tm) of the Steam GT model is given as input to the Heffron-Phillips generator model. All the abbreviations for the constants and variables involved in the model are given in Appendix-II.

The PSS model consists of the Gain block, the cascaded identical Phase Compensation blocks and the Washout block. The input to the controller is the rotor speed deviation (∆ω) and the output is the supplementary control signal (∆UE) given to the generator excitation system, where
  Ks = PSS gain
  Tw = washout time constant
  T1, T2, T3, T4 = PSS time constants, with T1 = T3 and T2 = T4 (identical compensator blocks)
  ξEM = damping ratios of all the electromechanical modes of oscillation.
Hence Ks, T1 and T2 are the PSS parameters, which are computed using the conventional lead-lag stabilizer and optimally tuned using GAPSS and PSOPSS. The washout time constant (Tw) is in the range of 1 to 20 seconds; in this work, Tw is taken as 10 seconds.
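The symbol list above corresponds to the conventional washout-plus-lead-lag PSS structure. As a sketch (the displayed transfer function itself is not reproduced in this extracted text, so the form shown here is the standard one implied by the listed blocks, not a quotation from the paper):

```latex
\Delta U_E(s) \;=\; K_s \,\frac{s T_w}{1 + s T_w}\,
\left(\frac{1 + s T_1}{1 + s T_2}\right)^{2} \Delta\omega(s)
```

The squared term represents the two cascaded identical compensator blocks obtained from T1 = T3 and T2 = T4.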
The design problem, including the constraints imposed on the various PSS parameters, is given as follows:
Step 2. Create an initial population of individuals randomly.
to those obtained in the current generation; otherwise, go to Step 5.
Step 5. Based on the fitness, select the best

Particle Swarm Optimization has more advantages over the Genetic Algorithm, as follows:
(a) PSO is easier to implement and there are fewer parameters to adjust.
(b) In PSO, every particle remembers its own previous best value as well as the neighbourhood best; therefore, it has a more effective memory capability than GA.
(c) PSO is more efficient in maintaining the diversity of the swarm, since all the particles use the information related to the most successful particle in order to improve themselves, whereas in the Genetic Algorithm the worse solutions are discarded and only the new ones are saved; i.e., in GA the population evolves around a subset of the best individuals.

PSO utilizes a population of particles that fly through the problem space with given velocities. Each particle has a memory and is capable of remembering the best position in the search space it has ever visited. The position corresponding to the best fitness is called Pbest, and the overall best out of all the particles in the population is called gbest. At each iteration, the velocities of the individual particles are updated according to the best position of the particle itself and the neighborhood best position.

The velocity of each agent can be modified by the following equation:

  V_i^(k+1) = W·V_i^(k) + C1·rand1·(Pbest_i − S_i^(k)) + C2·rand2·(gbest − S_i^(k))    (11)

The current position can be modified by the following equation:

  S_i^(k+1) = S_i^(k) + V_i^(k+1)    (13)

The algorithmic steps involved in the Particle Swarm Optimization algorithm are as follows:

Step 1: Select the various parameters of PSO.
Step 2: Initialize a population of particles with random positions and velocities in the problem space.
Step 3: Evaluate the desired optimization fitness function for each particle.
Step 4: For each individual particle, compare the particle's fitness value with its Pbest. If the current value is better than the Pbest value, then set this value as the Pbest for agent i.
Step 5: Identify the particle that has the best fitness value. The value of its fitness function is identified as gbest.
Step 6: Compute the new velocities and
positions of the particles according to equations (11) and (13).
Step 7: Repeat Steps 3-6 until the stopping criterion of maximum generations is met.

5. Simulation Results

For all the computation, simulation and analysis of the results in this work, the MATLAB 7.0 / SIMULINK platform was used. The two main analyses involved in the simulation are:
(1) Small-signal stability analysis.
(2) Nonlinear time-domain analysis.
Here e(t) refers to the error involving the rotor speed deviation (∆ω) and the power angle deviation (∆δ), and Ts represents the simulation time.
The state-space modeling of the SMIB model including the steam governor-turbine dynamics was performed, and the system open-loop eigenvalues and damping ratios were computed, as listed in Table 1 and Table 2. The electromechanical modes of oscillation indicate that the test system is unstable, having eigenvalues with positive real parts located in the right half of the complex s-plane. Also, the time-domain analysis involving a load-change disturbance (∆PL) in Fig (6) and Fig (7) reveals that the open-loop system without PSS has oscillatory responses with huge overshoots and large settling times, thus making the system unstable.
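Equations (11) and (13) and Steps 1-7 above can be sketched as follows. The parameter defaults mirror Table 4 (swarm size 20, C1 = C2 = 1.0, inertia weight decreasing from 1 to 0.5); the fitness function is a placeholder, not the paper's eigenvalue-based PSS objective.

```python
import random

def pso(fitness, dim, bounds, swarm=20, iters=10,
        w_max=1.0, w_min=0.5, c1=1.0, c2=1.0):
    """Minimize `fitness` using the velocity/position updates of eqs. (11), (13)."""
    lo, hi = bounds
    # Step 2: initialize random positions and velocities in the problem space.
    S = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    pbest = [s[:] for s in S]                        # Step 4 memory
    pcost = [fitness(s) for s in S]
    g = min(range(swarm), key=pcost.__getitem__)
    gbest, gcost = pbest[g][:], pcost[g]             # Step 5
    for k in range(iters):                           # Step 7: stop at max generations
        # Inertia weight decreasing linearly from w_max to w_min (Table 4).
        w = w_max - (w_max - w_min) * k / max(1, iters - 1)
        for i in range(swarm):
            for d in range(dim):
                # Eq. (11): inertia + cognitive (Pbest) + social (gbest) terms.
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - S[i][d])
                           + c2 * random.random() * (gbest[d] - S[i][d]))
                # Eq. (13): position update, clamped to the search bounds.
                S[i][d] = min(hi, max(lo, S[i][d] + V[i][d]))
            cost = fitness(S[i])                     # Step 3
            if cost < pcost[i]:                      # Step 4: update Pbest
                pcost[i], pbest[i] = cost, S[i][:]
                if cost < gcost:                     # Step 5: update gbest
                    gcost, gbest = cost, S[i][:]
    return gbest, gcost
```

For the PSS tuning problem, `fitness` would evaluate the closed-loop objective over the Ks, T1, T2 search space (No. of Variables = 3 in Table 4).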
Table 1. Computed eigenvalues for the open loop without PSS, and with CPSS, GAPSS and PSOPSS

Operating condition 1: P = 0.4, Q = 0.008, ∆PL = 0.1 p.u
  Open loop without PSS: -13.1440; 0.1218 ± j5.452; -6.5964; -3.2215 ± j4.7092; -2.0101; -1.3302
  CPSS:   -16.7162 ± j7.1748; -0.3956 ± j8.6327; -0.8315 ± j3.4119; -4.950 ± j0.7994; -0.0500; -2.2377; -1.3260
  GAPSS:  -15.3069 ± j5.6824; -0.7011 ± j7.2045; -1.3476 ± j3.9158; -5.4757 ± j1.0014; -0.0501; -2.2702; -1.3303
  PSOPSS: -15.8761 ± j6.8962; -0.8895 ± j8.4064; -5.8273 ± j0.7754; -2.1101 ± j2.4145; -0.0504; -1.6176; -1.8079

Operating condition 2: P = 0.4, Q = 0.06, ∆PL = 0.2 p.u
  Open loop without PSS: -13.1452; 0.1231 ± j5.405; -6.6024; -3.2247 ± j4.7140; -2.0000; -1.3292
  CPSS:   -16.6293 ± j7.0840; -0.4355 ± j8.5255; -0.8501 ± j3.4180; -4.9452 ± j0.8198; -0.0501; -2.2343; -1.3233
  GAPSS:  -14.8144 ± j5.1911; -0.5502 ± j6.7845; -1.6839 ± j3.9568; -5.7962 ± j0.9729; -0.0501; -2.2542; -1.3364
  PSOPSS: -15.7799 ± j4.7389; -0.6361 ± j7.0777; -2.9594 ± j2.6991; -0.0504; -1.6220 ± j0.2209; -2.2438; -1.3253

Operating condition 3: P = 0.4, Q = 0.06, ∆PL = 0.3 p.u, +10% increase in line reactance Xe
  Open loop without PSS: -13.0586; 0.1272 ± j5.6292; -6.5683; -3.2617 ± j4.6158; -2.0552; -1.3371
  CPSS:   -17.1748 ± j7.6603; -0.2107 ± j9.1457; -0.7241 ± j3.3516; -4.7499 ± j0.6580; -0.0500; -2.2364; -1.3329
  GAPSS:  -0.3440 ± j5.8868; -12.1442 ± j1.6734; -3.2217 ± j4.0608; -9.0214; -6.2420; -0.0501; -1.3637; -2.2169
  PSOPSS: -16.7640 ± j7.1869; -0.5372 ± j8.7165; -0.7460 ± j3.5708; -4.8116 ± j0.7338; -0.0500; -2.2373; -1.3334
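The damping ratios reported in Table 2 follow directly from these eigenvalues: for a mode σ ± jω, ζ = -σ / √(σ² + ω²). A quick sketch, checked against the open-loop electromechanical mode 0.1218 ± j5.452 of condition 1:

```python
import math

def damping_ratio(sigma, omega):
    """zeta = -sigma / sqrt(sigma^2 + omega^2) for an eigenvalue sigma +/- j*omega."""
    return -sigma / math.hypot(sigma, omega)

# Open-loop electromechanical mode of condition 1 (Table 1): 0.1218 +/- j5.452.
# Its positive real part yields a negative damping ratio, i.e. an unstable mode.
```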
Table 2. Computed Damping Ratios for Open loop without PSS, GAPSS and PSOPSS
Table 3. Parameters selected for GA implementation
  Population Size: 20
  No. of Generations: 10
  Selection Operator: Roulette Wheel Selection
  Generation Gap: 0.9
  Crossover Probability: 0.95
  Mutation Probability: 0.10
  Termination Method: Maximum Generations

Table 4. Parameters selected for PSO implementation
  Swarm Size: 20
  wmax, wmin: 1, 0.5
  C1, C2: 1.0, 1.0
  No. of Generations: 10
  No. of Variables: 03
  Termination Method: Maximum Generations

better damping to the oscillatory modes. Though CPSS and GAPSS provide good damping, the PSO-based controller (PSOPSS) provides better damping to the oscillatory modes, with damping ratios above the threshold level of damping (ξT = 0.06) for all the conditions involved (last column of Table 2).

Nonlinear time-domain simulations involving wide variations in operating points and system model parameters have been carried out to show the robustness of the proposed controllers in damping the low-frequency oscillations.
Parameter sensitivity analysis refers to analyzing the system stability performance whenever the system parameters involved in the system model and the system operating conditions are varied over a wide range with respect to the normal operating point.

Fig. 8. Speed deviation response for the (0.4, 0.008), ∆PL = 0.1 p.u condition with CPSS, GAPSS and PSOPSS.
Fig (10) and Fig (11) indicate the speed deviation and power angle responses of the system with the operating condition (P = 0.4, Q = 0.06, ∆PL = 0.2 p.u). These responses reveal the dominance of the bio-inspired optimal damping controllers in damping out the low-frequency oscillations; in particular, the PSO-based controller damps the oscillations with reduced overshoot and quick settling time compared to the CPSS and the genetic-based PSS (GAPSS).

In order to enhance the system stability, the CPSS, GAPSS and PSOPSS reduce the oscillation overshoot and also make the oscillations settle in a quicker settling time. For instance, in Fig (13), the maximum overshoot is 0.06 p.u for CPSS and 0.054 p.u for the GA-based PSS, whereas for PSOPSS the maximum oscillation overshoot is only 0.038 p.u. This shows the optimal tuning and effective damping exerted by the PSOPSS.

Fig. 10. Speed deviation response for the (0.4, 0.06), ∆PL = 0.2 p.u condition with CPSS, GAPSS and PSOPSS.
Fig. 12. Speed deviation response for the (0.4, 0.06), ∆PL = 0.3 p.u and 10% increase in line reactance (Xe) condition with CPSS, GAPSS and PSOPSS.
International Journal of Electrical Power and Energy Systems Engineering", 1:4, Fall 2008, pp. 226-233.
[3] C.Y. Chung, K.W. Wang, C.T. Tse, X.Y. Bian and A.K. David, "Probabilistic Eigenvalue Sensitivity Analysis and PSS Design in Multimachine Systems", IEEE Transactions on Power Systems, Vol. 18, No. 4, 2003, pp. 1439-1445.
[4] D.K. Chaturvedi, O.P. Malik and K. Kalra, "Experimental Studies with a Generalized Neuron Based Power System Stabilizer", IEEE Transactions on Power Systems, Vol. 19, No. 3, Aug 2004, pp. 1445-1453.
[5] R. Segal, M.L. Kothari and S. Madani, "Radial Basis Function (RBF) Network Adaptive Power System Stabilizer", IEEE Transactions on Power Systems, Vol. 15, May 2000, pp. 722-727.
[6] M.J. Gibbard, "Robust Design of Fixed Parameter Power System Stabilizers", IEEE Transactions on Power Systems, Vol. 6, 1991, pp. 794-800.
[7] M.A. Abido, "Robust Design of Multimachine Power System Stabilizers Using Simulated Annealing", IEEE Transactions on Energy Conversion, Vol. 15, No. 3, 2003, pp. 297-304.
[8] M.A. Abido, Y.L. Abdel-Magid, "Optimal Design of Power System Stabilizers Using Evolutionary Programming", IEEE Transactions on Energy Conversion, Vol. 17, No. 4, 2002, pp. 429-436.
[9] S. Mishra, M. Tripathy, J. Nanda, "Multimachine Stabilizer Design by Rule Based Bacteria Foraging", Electric Power Systems Research, Vol. 77, 2007, pp. 1595-1607.
[10] Bikash Pal, Balarko Chaudhuri, "Robust Control in Power Systems", Springer Series, 2005.
[11] Yao Nan Yu, "Electric Power System Dynamics", Academic Press, New York, 1983.
[12] A. Andreoiu, K. Bhattacharya, "Robust Tuning of PSS Using a Lyapunov Method Based Genetic Algorithm", IEE Proceedings on Generation, Transmission and Distribution, Vol. 149, No. 5, Sep 2002, pp. 585-592.
[13] K. Sebaa, M. Boudour, "Optimal Locations and Tuning of Robust Power System Stabilizers Using Genetic Algorithms", Electric Power Systems Research, Vol. 79, 2009, pp. 406-419.
[14] J. Kennedy and R. Eberhart, "Particle Swarm Optimization", in Proceedings, IEEE International Conference on Neural Networks (ICNN), Vol. 4, Nov 1995, pp. 1942-1948.
[15] R. Eberhart and J. Kennedy, "A New Optimizer Using Particle Swarm Theory", in Proceedings, 6th International Symposium on Micro Machine and Human Science (MHS), Oct 1995, pp. 39-43.
[16] Jong Bae Park, Ki Song Lee, Joong Rin Shin and Kwang Y. Lee, "A Particle Swarm Optimization for Economic Dispatch with Non-Smooth Cost Functions", IEEE Transactions on Power Systems, Vol. 20, No. 1, Feb 2005, pp. 34-42.
[17] M.A. Abido, "Optimal Design of Power System Stabilizers Using Particle Swarm Optimization", IEEE Transactions on Energy Conversion, Vol. 17, No. 3, Sep 2002, pp. 406-413.
[18] Graham Rogers, "Power System Oscillations", Kluwer Academic Publishers, USA, 2000.

Assistant Professor in the Electrical and Electronics Engineering Department. He is now working towards his PhD in Computational Intelligence at Anna University, Chennai. His areas of research include Artificial Intelligence, Optimization Techniques, FACTS and Power System Stability Analysis. He is an annual member of IEEE and a Life Member of ISTE.

Dr. R. Lakshimpathi received the B.E. degree in 1971 and the M.E. degree in 1973 from the College of Engineering, Guindy, Chennai. He received his PhD degree in High Voltage Engineering from the Indian Institute of Technology, Chennai, India. He has 36 years of teaching experience in various Government Engineering Colleges in Tamilnadu and retired as Principal and Regional Research Director at Alagappa Chettiar College of Engineering and Technology, Karaikudi. He is now working as a Professor in the Electrical and Electronics Engineering Department at St. Peters Engineering College, Chennai. His areas of research include HVDC Transmission, Power System Stability and Electrical Power Semiconductor Drives.
2 M-Tech (Software System) Student, R.G.P.V., S.A.T.I., Vidisha, M.P., India.
time points that satisfy the criteria of minimum confidence and maximum interval length specified by the user. These significant intervals are used to discover the frequent patterns in Web log data.
In this paper, we have used the One Pass SI algorithm [4] and the One Pass AllSI algorithm [4] for generating the significant intervals. These are single-pass algorithms that use a main-memory approach to discover significant intervals in the Web log data. This approach not only makes use of a reduced data set, by compressing point-based data to interval-based data, but also makes only a single pass over the entire data set. We also perform an analysis of the One Pass SI algorithm [4], the One Pass AllSI algorithm [4] and the One Pass FED algorithm [5].
The remainder of the paper is organized as follows. Section 2 discusses the related work; Section 3 defines the related terms and the significant interval; Section 4 describes the process of significant interval discovery with an example; Section 5 discusses frequent patterns with an example of frequent pattern discovery; Section 6 shows the experimental analysis of significant intervals and frequent patterns; finally, Section 7 discusses the conclusion and future work.

2. Related Work

Significant intervals have been used for lock and unlock operations; such significant intervals can then be used to find an association or a relationship between websites. We have worked on dynamic data (i.e., Web log data), which depends on the interests of the users; these intervals are used to discover frequent patterns/episodes. Though the concept of intervals has been used for finding association rules ([11] Miller & Yang, 1997; [12] Srikant & Agrawal, 1996b), not many data mining algorithms discuss the formation of intervals on time-series data based on the interaction of events. WinEpi (Mannila et al., 1995) makes multiple passes over the data for counting the support of the candidates in each pass. Algorithm [4] is closer to MinEpi (Mannila et al., 1995), as we obtain the event count in a single pass over the data and use it to obtain support for the intervals. Regarding timing constraints, WinEpi and MinEpi find all sequences that satisfy the time constraint of maximum span, defined as the maximum allowed time difference between the latest and earliest occurrences of events in the entire sequence, and minimum support, counted with the one-occurrence-per-span-window method. They applied this approach to static data, whereas we apply it to dynamic data.

3. Terms and Definitions and Significant Interval

We apply the One Pass SI algorithm [4], One Pass AllSI [4] and One Pass FED [5] to Web log data, so we need the different website names, such as citeseer.com, sports.com, newsworld.com, etc. The time series can be represented with a website-timestamp model. A website w (for example, citeseer.com is accessed, sports.com is not accessed, etc.) is associated with a sequence of timestamps {T1, T2, ..., Tn} that describes its access over a period of time. The notion of periodicity (such as daily, weekly, monthly, etc.) is used to group the website accesses. For each website, the number of accesses at each time point can be obtained by grouping on the timestamp (or periodicity attribute). We term the number of accesses of each website the access count (ac). Thus the time-series data can be represented as <w, {T1, a1}, {T2, a2}, {T3, a3}, ..., {Tn, an}>, where Ti represents a timestamp associated with the website w and ai represents the number of accesses; ai can be referred to as the access count of the website w at timestamp Ti. This is referred to as folding of the time-series data using periodicity and time granularity (e.g., daily on seconds, daily on minutes, weekly on minutes).

3.1 Significant Interval

Intervals associated with a website are characterized by confidence and density. The confidence of a time point is the ratio (expressed as a percentage) of the access count at that point to the periodicity of data collection (the number of days or weeks, N). When the interval consists of several time points, the total access count of the interval (the sum of the access counts of the points that form the interval) is used. The density of an interval is the ratio of the total access count of the interval to the interval length. We define a window as a time interval wd [Ts, Te], where Ts <= Te, Ts is the start time and Te is the end time of the interval. An interval associated with a website is represented as [Ts, Te, ac, l, d, c], where ac represents the total access count of the interval, l denotes the length of the interval (Te - Ts + 1), d indicates the density, and c represents the confidence of the interval (ac/N * 100). Given a time sequence T, a minimum confidence min-conf and a maximum interval length max-Len, we define the interval wd [Ts, Te] as a Significant Interval in T if:
1) the confidence (c) of wd >= min-conf;
2) the length (l) of wd <= max-Len;
3) there is no window wd' = [Ts', Te'] in T such that Ts' >= Ts and Te' <= Te that satisfies (1) and (2).

3.2 Valid Significant Interval

Significant intervals can be of unit size, overlapping or
disjoint. All valid intervals should be discovered by a 4. Process of Significant Interval Discovery
significant interval discovery (SID) algorithm. Figure 3.1
shows all possible valid significant intervals. That B The Raw data is collected and then apply the
combines with C since B started after A. The Figure3.1 preprocessing steps The Preprocessing steps are
taken from [4]. 1) Data Cleaning: In this step we are separate the data into
1) Unit Significant Interval: It is a significant interval with the different website record and use the different views to
the same start and end time; that is, Ts = Te. show the record of each website. This process is called
2) Disjoint Significant Intervals: They are defined as two Data cleaning
significant intervals (that is, wd [Ts, Te] and wd` [Ts`, 2) Data Folding: In the preprocessing phase, the input data
Te`]), which do not overlap (that is, Ts` >= Te and Te` is set is combined to periodicity of interest (example, daily
not in the interval [Ts, Te] or Te`> Ts and Ts` is not in the or weekly). This process is called Data Folding.
interval [Ts, Te]). Now One Pass SI or One Pass AllSI [4] is applied for
3) Overlapping Significant Intervals: They are defined as generate the significant Interval. Those Time points
two significant Intervals wd [Ts, Te] and wd’ [Ts’, Te’]. satisfied the Minimum Constraint (i.e. min-conf and max
(if Ts<=Te’<Te and Ts’<Ts) or (Ts<Ts’<=Te and Te’> Len for One Pass SI[4] or min-Conf for One Pass
Te). AllSI[4]) generate the Significant Interval are Discovered.
Figure 3.1 Valid Significant Interval

An invalid significant interval is identified by the third condition of the definition of a significant interval; that is, a significant interval cannot have an embedded valid significant interval (including a unit significant interval).

Figure 3.2 Invalid Significant Interval

4.1 Example

Before applying the One Pass SI or One Pass AllSI algorithm [4], the preprocessing steps must be carried out. Consider a 7-day data set. This data set contains the website name, its access status (Access/Not Access), and a timestamp (this combination is treated as a unique record). There are two different websites in Table 4.1 (raw data), so after Data Cleaning, Table 4.1 is broken up into two tables: Table 4.2 Data Cleaning (Citeseer.com) and Table 4.3 Data Cleaning (Rgtu.net).

Table 4.1 Raw Data

Website        Access Status   Timestamp
Citeseer.com   Access          4/15/2009 2:05 pm
Rgtu.net       Access          4/15/2009 2:10 pm
Citeseer.com   Access          4/16/2009 2:10 pm
Citeseer.com   Access          4/17/2009 2:40 pm
Citeseer.com   Access          4/18/2009 2:40 pm
Rgtu.net       Access          4/19/2009 2:10 pm
Citeseer.com   Access          4/19/2009 2:05 pm
Rgtu.net       Access          4/20/2009 2:20 pm
Rgtu.net       Access          4/20/2009 2:10 pm
Rgtu.net       Access          4/21/2009 2:05 pm
Citeseer.com   Access          4/21/2009 2:05 pm
Citeseer.com   Access          4/21/2009 2:10 pm
Rgtu.net       Access          4/22/2009 2:05 pm
Citeseer.com   Access          4/22/2009 2:10 pm
Rgtu.net       Access          4/22/2009 2:20 pm
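The Data Cleaning and Data Folding steps can be sketched on the Table 4.1 data as follows. This is a minimal illustration with our own variable names; since every row has status 'Access' and folding is daily, only the time-of-day matters here.

```python
from collections import Counter

# Table 4.1 raw data as (website, time-of-day) pairs, in table order.
raw = [("Citeseer.com", "2:05"), ("Rgtu.net", "2:10"), ("Citeseer.com", "2:10"),
       ("Citeseer.com", "2:40"), ("Citeseer.com", "2:40"), ("Rgtu.net", "2:10"),
       ("Citeseer.com", "2:05"), ("Rgtu.net", "2:20"), ("Rgtu.net", "2:10"),
       ("Rgtu.net", "2:05"), ("Citeseer.com", "2:05"), ("Citeseer.com", "2:10"),
       ("Rgtu.net", "2:05"), ("Citeseer.com", "2:10"), ("Rgtu.net", "2:20")]

# Data Cleaning: separate the log into one record set per website.
cleaned = {}
for site, t in raw:
    cleaned.setdefault(site, []).append(t)

# Data Folding: fold each record set to daily periodicity, counting how
# many days each time point was accessed (Tables 4.4 and 4.5).
folded = {site: Counter(times) for site, times in cleaned.items()}
print(folded["Citeseer.com"])   # 2:05 -> 3, 2:10 -> 3, 2:40 -> 2
```

The resulting counts match Tables 4.4 and 4.5: for example, Citeseer.com is accessed at 2:05 on three different days.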
Table 4.2 Data Cleaning (Citeseer.com)

Website        Access Status   Timestamp
Citeseer.com   Access          2/21/2009 2:05 pm
Citeseer.com   Access          2/21/2009 2:10 pm
Citeseer.com   Access          2/22/2009 2:10 pm

Table 4.3 Data Cleaning (Rgtu.net)

Website    Access Status   Timestamp
Rgtu.net   Access          2/15/2009 2:10 pm
Rgtu.net   Access          2/19/2009 2:10 pm
Rgtu.net   Access          2/20/2009 2:20 pm
Rgtu.net   Access          2/20/2009 2:10 pm
Rgtu.net   Access          2/21/2009 2:05 pm
Rgtu.net   Access          2/22/2009 2:05 pm
Rgtu.net   Access          2/22/2009 2:20 pm

Now the Data Folding step is applied to Table 4.2 (for Citeseer.com) and Table 4.3 (for Rgtu.net). The result of the Data Folding step is given in Table 4.4 and Table 4.5. These tables contain the time points at which each website was accessed and the number of times (over the entire data set) it was accessed at that time point. For example, Citeseer.com was accessed at 2:05 three times (on three different days) in the seven-day data set.

Table 4.4 Data Folding (Citeseer.com)

Website        Access Status   Time of Access   Access Count
Citeseer.com   Access          2:05             3
Citeseer.com   Access          2:10             3
Citeseer.com   Access          2:40             2

Table 4.5 Data Folding (Rgtu.net)

Website    Access Status   Time of Access   Access Count
Rgtu.net   Access          2:05             2
Rgtu.net   Access          2:10             3
Rgtu.net   Access          2:20             2

Consider the 7-day data set, which gives the folded tables shown in Tables 4.4 and 4.5. For a 60% minimum confidence (min-conf), a maximum interval length (max-Len) of 20 minutes, and daily periodicity, the One Pass SI algorithm [4] performs the following steps for Table 4.4:

1) The algorithm starts significant interval discovery at time point 2:05 and combines it with time point 2:10, computing a confidence of 85.71% ((3+3)*100/7). The required 60% confidence is achieved by combining these two time points, and max-Len is also satisfied because the interval length is 5 minutes. This leads to the discovery of a significant interval (say A) with start time 2:05, end time 2:10, and confidence 85.71%.

2) Next, the algorithm starts significant interval discovery at time point 2:10 and combines it with time point 2:40, giving a confidence of 71.42% ((3+2)*100/7) with an interval length of 30 minutes. Even though this interval satisfies the min-conf constraint, it is not a significant interval because it does not satisfy the max-Len constraint.

3) Next, the algorithm starts significant interval discovery at time point 2:40, which cannot be combined with any more time points. Since all time points in the data set have been considered, the algorithm stops. The significant intervals are given in Table 4.6.

Table 4.6 Significant Intervals Using One Pass SI Algorithm

Website        Start Time   End Time   Interval Confidence
Citeseer.com   2:05         2:10       85.71%
Rgtu.net       2:05         2:10       71.42%
Rgtu.net       2:10         2:20       71.42%

Now, for a 60% minimum confidence (min-conf) with daily periodicity (max-Len is not required in One Pass AllSI [4]), the One Pass AllSI algorithm [4] performs the following steps for Table 4.4:

1) The algorithm starts significant interval discovery at time point 2:05 and combines it with time point 2:10, computing a confidence of 85.71%. Since the required 60% confidence is achieved by combining these two time points, a significant interval (say A) is discovered with start time 2:05, end time 2:10, and confidence 85.71%.

2) Next, the algorithm starts significant interval discovery at time point 2:10 and combines it with time point 2:40, giving a confidence of 71.42% with an interval length of 30 minutes. This interval satisfies the min-conf constraint, and it is a significant interval because there is no max-Len constraint. As we saw previously, when the One Pass SI algorithm was used for significant interval discovery, this interval was not discovered as a significant interval even though it satisfied min-conf, because it did not satisfy max-Len. Since we are now using the One Pass AllSI algorithm [4], this interval (say B) is classified as a significant interval with start time 2:10, end time 2:40, and confidence 71.42%.

3) Next, the algorithm starts significant interval discovery at time point 2:40, which cannot be combined with any more time points. Since all time points in the data set have been considered, the algorithm stops. The significant intervals are given in Table 4.7.

Table 4.7 Significant Intervals Using One Pass AllSI Algorithm

Website        Start Time   End Time   Interval Confidence
Citeseer.com   2:05         2:10       85.71%
Citeseer.com   2:10         2:40       71.42%
Rgtu.net       2:05         2:10       71.42%
Rgtu.net       2:10         2:20       71.42%
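The worked example above can be reproduced with a short sketch. This only pairs adjacent folded time points, which is enough for this example; it is not the full One Pass SI / One Pass AllSI algorithm of [4], and the function name is our own.

```python
def significant_intervals(folded, n_days, min_conf, max_len=None):
    """folded: sorted list of (minutes-from-midnight, access count) pairs.
    Combines each time point with the next one and keeps the interval when
    its confidence meets min_conf and (if given) its length meets max_len."""
    out = []
    for (t1, c1), (t2, c2) in zip(folded, folded[1:]):
        conf = round((c1 + c2) * 100 / n_days, 2)
        if conf >= min_conf and (max_len is None or t2 - t1 <= max_len):
            out.append((t1, t2, conf))
    return out

citeseer = [(125, 3), (130, 3), (160, 2)]   # 2:05, 2:10, 2:40 (Table 4.4)
print(significant_intervals(citeseer, 7, 60, max_len=20))  # like One Pass SI
print(significant_intervals(citeseer, 7, 60))              # like One Pass AllSI
```

With max-Len = 20 only the 2:05-2:10 interval (85.71%) survives, while without max-Len the 2:10-2:40 interval (about 71.4%) is also kept, matching Tables 4.6 and 4.7.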
IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 1, No. 3, January 2010 33
ISSN (Online): 1694-0784
ISSN (Print): 1694-0814
Table 5.2 Third Level Frequent Pattern

Website1        Website2       Website3   Start Time   End Time   Confidence (%)
Newsworld.com   Citeseer.com   Rgtu.net   2:00         2:15       70

Finally, the frequent episodes have been detected and are given in Table 5.1 and Table 5.2.

The data set has been taken from a cyber café which provides services to its customers 24 hours a day.

Data set used for experiments:

No of Days   No of Tuples   No of Websites
90           1700           5

Figure 6.1

Figure 6.2 [chart: number of significant intervals (550-575) vs. max-Len (10-60 minutes), at confidence 40]

Figure 6.3 [chart: number of significant intervals (0-800) vs. confidence (10-90), at max-Len = 20 minutes]
One Pass AllSI [4] is not constrained by max-Len, but One Pass SI has this limitation. When the confidence increases beyond 60, One Pass SI [4] generates more significant intervals than One Pass AllSI [4]; this actually depends on the data. The parameter for the experiment was max-Len = 20 minutes for One Pass SI [4]. The comparison between One Pass SI [4] and One Pass AllSI [4] is given in Figure 6.4.

Figure 6.4 [chart: comparison between One Pass SI and One Pass AllSI, number of significant intervals vs. confidence]

As the One Pass AllSI algorithm [4] produces a superset of the significant intervals produced by its One Pass SI [4] counterpart, we wanted to check the extra time taken for finding all significant intervals at the same min-conf. An experiment was carried out to analyze the difference in the time taken by each of these algorithms. As seen from Figure 6.5, the One Pass AllSI algorithm [4] takes more time than the One Pass SI algorithm [4] for the different min-conf values.

Figure 6.5

6.3 Effect of Varying Sequential Window Length on the One Pass FED Algorithm

In the first phase of the paper we generated the significant intervals using the One Pass SI algorithm [4]; we then applied the One Pass FED algorithm [5] to generate the frequent patterns/episodes. In this experiment we test the effect of changing the sequential window length user parameter for weekly periodicity. Figure 6.6 shows the effect of varying the sequential window length.

Figure 6.6

6.4 Contribution of Websites in Frequent Pattern Discovery

In the final phase we show the interest of users in particular websites during particular periods of time. We have taken the data for the months of April, May, and June. Figure 6.7 shows the contribution of each website to frequent pattern/episode discovery. Table-a gives the full names of the websites used in Figure 6.7.
Table-a Website abbreviations

SC   Sports.com
RN   Rgtu.net
EC   Election.com

At the start of April, Citeseer.com, Newsworld.com, Sports.com, and Election.com are heavily accessed, because this is the time of the election and the IPL tournament, and M.Tech research students are also doing their research. In the month of May, people are also interested in Rgtu.net, because this is the time of admission to the different courses, and the results of different courses are also declared at this time. In the month of June, interest in Rgtu.net increases a little compared to the previous month.

Figure 6.7 Contribution of Websites in Frequent Pattern Discovery

7. Conclusion and Future Work

In this paper, we have presented the discovery of significant intervals in web log data using the One Pass SI and One Pass AllSI algorithms [4]; these significant intervals are then used to generate frequent patterns. The frequent patterns are generated by the One Pass FED algorithm [5]. These patterns can be used to forecast trends. We applied [4] to web log data, which is dynamic in nature, performed the analysis on it, and the experimental results show the outcome of this analysis. Currently, we are working on frequent pattern generation using semantics on time-series web log data. This process can also be applied to other domains, such as traffic analysis.

References

[1] Renate Iváncsy and István Vajk, "Frequent Pattern Mining in Web Log Data", Acta Polytechnica Hungarica, Vol. 3, No. 1, 2006.
[2] Michael H. Böhlen, Renato Busatto, and Christian S. Jensen, "Point- versus interval-based temporal data models", in ICDE, pages 192-200, 1998.
[3] Jiawei Han and Micheline Kamber, Data Mining: Concepts and Techniques, chapter 1.2, page 5, Morgan Kaufmann Publishers, 2005.
[4] Sagar Savla and Sharma Chakravarthy, "A Single Pass Algorithm for Discovering Significant Intervals in Time-Series Data", International Journal of Data Warehousing and Mining, Volume 3, Issue 3, IGI Global.
[5] Sagar Savla and Sharma Chakravarthy, "An Efficient Single Pass Approach to Frequent Episode Discovery in Sequence Data", IET 4th International Conference, 2008.
[6] John F. Roddick and Myra Spiliopoulou, "A survey of temporal knowledge discovery paradigms and methods", IEEE Transactions on Knowledge and Data Engineering, 14(4):750-767, Jul/Aug 2002.
[7] Cook, D. J., Youngblood, G. M., III, E. O. H., Gopalratnam, K., Rao, S., Litvin, A., et al. (2003).
[8] Mannila, H., Toivonen, H., and Verkamo, A. I., "Discovering frequent episodes in sequences", in KDD, pp. 210-215, 1995.
[9] Srikant, R., and Agrawal, R. (1996a), "Mining sequential patterns".

Dr. Kanak Saxena is a Professor in the Computer Application Department of Samrat Ashok Technological Institute, Vidisha (M.P.), India. The institute is affiliated with Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal (M.P.), India.

Mr. Rahul Shukla is an M.Tech (Software Systems) student at Samrat Ashok Technological Institute, Vidisha (M.P.), India.
2 Dean Academic, Department of Computer Science, K.S.Rangasamy College of Technology, Tiruchengode – 637 215, Tamilnadu, India
3 Research Scholar, Department of Computer Applications, K.S.R College of Engineering, Tiruchengode – 637 215, Tamilnadu, India
4 Reader, Periyar University, Salem, Tamilnadu, India
Highly dynamic topology: Vehicular network scenarios are very different from classic ad hoc networks. In VANETs, vehicles can move fast and can join and leave the network much more frequently than in MANETs. Since the radio range is small compared with the high speed of vehicles (typically, the radio range is only 250 meters while the speed of vehicles on a freeway can be 30 m/s), the topology in VANETs changes much more frequently.

Predictable mobility: Unlike classic mobile ad hoc networks, where it is hard to predict the nodes' mobility, vehicles tend to have very predictable movements that are (usually) limited to roadways. The movement of nodes in VANETs is constrained by the layout of roads. Roadway information is often available from positioning systems and map-based technologies such as GPS. Each pair of nodes can communicate directly when they are within radio range.

Potentially large scale: Unlike most ad hoc networks studied in the literature, which usually assume a limited network size, vehicular networks can extend over the entire road network and include many participants.

Partitioned network: Vehicular networks will be frequently partitioned. The dynamic nature of traffic may result in large inter-vehicle gaps in sparsely populated scenarios and hence in several isolated clusters of nodes.

Network connectivity: The degree to which the network is connected is highly dependent on two factors: the range of the wireless links and the fraction of participant vehicles, since only a fraction of the vehicles on the road may be equipped with wireless interfaces.

2.2 Routing protocols in MANET

The routing protocols in MANETs can be classified by their properties. On one hand, they can be classified into two categories, proactive and reactive.

Proactive routing algorithms employ classical routing strategies such as distance-vector routing (e.g., DSDV [1]) or link-state routing (e.g., OLSR [2] and TBRPF [3]). They maintain routing information about the available paths in the network even if these paths are not currently used. The main drawback of these approaches is that the maintenance of unused paths may occupy a significant part of the available bandwidth if the topology of the network changes frequently [4]. Since a network between cars is extremely dynamic, we did not further investigate proactive approaches.

Reactive routing protocols such as DSR [5], TORA [6], and AODV [7] maintain only the routes that are currently in use, thereby reducing the burden on the network when only a small subset of all available routes is in use at any time. It can be expected that communication between cars will only use a very limited number of routes, therefore reactive routing seems to fit this application scenario. As a representative of the reactive approaches we have chosen DSR, since it has been shown to be superior to many other existing reactive ad-hoc routing protocols in [8].

Position-based routing algorithms require information about the physical position of the participating nodes. This position is made available to the direct neighbors in the form of periodically transmitted beacons. A sender can request the position of a receiver by means of a location service. The routing decision at each node is then based on the destination's position contained in the packet and the position of the forwarding node's neighbors. Position-based routing does not require the establishment or maintenance of routes. Examples of position-based routing algorithms are face-2 [9], GPSR [10], DREAM [11], and terminodes routing [12]. As a representative of the position-based algorithms we have selected GPSR (which is algorithmically identical to face-2), since it seems to be scalable and well suited for very dynamic networks.

2.3. Routing protocols in VANET

The following is a summary of representative VANET routing algorithms.

2.3.1 GSR (Geographic Source Routing)

Lochert et al. in [13] proposed GSR, a position-based routing with topological information. This approach employs greedy forwarding along a pre-selected shortest path. The simulation results show that GSR outperforms topology-based approaches (AODV and DSR) with respect to packet delivery ratio and latency using realistic vehicular traffic. But this approach neglects the case where there are not enough nodes for forwarding packets when the traffic density is low. Low traffic density will make it difficult to find an end-to-end connection along the pre-selected path.

2.3.2 GPCR (Greedy Perimeter Coordinator Routing)

To deal with the challenges of city scenarios, Lochert et al. designed GPCR in [14]. This protocol employs a restricted greedy forwarding procedure along a preselected path. When choosing the next hop, a coordinator (a node on a junction) is preferred to a non-coordinator node, even if it is not the geographically closest node to the destination. Similar to GSR, GPCR neglects the case of low traffic density as well.

2.3.3 A-STAR (Anchor-based Street and Traffic Aware Routing)

To guarantee an end-to-end connection even in a vehicular network with low traffic density, Seet et al. proposed A-STAR [15]. A-STAR uses information on city bus routes to identify an anchor path with high connectivity for
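As a rough illustration of the greedy mode that GPSR-style position-based routing relies on, the following sketch picks a next hop from beaconed neighbor positions. This is our own simplified rendering, not the full GPSR protocol (it omits perimeter-mode recovery entirely).

```python
import math

def greedy_next_hop(current, dest, neighbors):
    """Greedy geographic forwarding: from the beaconed (x, y) positions of
    the neighbors, pick the one geographically closest to the destination,
    but only if it improves on the current node's own distance (otherwise
    greedy forwarding is stuck at a local maximum)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    best = min(neighbors, key=lambda n: dist(n, dest), default=None)
    if best is None or dist(best, dest) >= dist(current, dest):
        return None   # local maximum: GPSR would switch to perimeter mode
    return best

# node at (0, 0), destination at (100, 0), two neighbors within range
print(greedy_next_hop((0, 0), (100, 0), [(40, 10), (-20, 0)]))  # (40, 10)
```

Returning None models the local-maximum case that the street-aware protocols discussed next (GSR, GPCR, A-STAR) try to avoid by routing along road topology.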
packet delivery. By using an anchor path, A-STAR guarantees to find an end-to-end connection even in the case of low traffic density. This position-based scheme also employs a route recovery strategy when packets are routed to a local optimum, by computing a new anchor path from the local maximum to which the packet is routed. The simulation results show that A-STAR achieves an obvious network performance improvement compared with GSR and GPSR. But the routing path may not be optimal because it follows the anchor path, which results in large delay.

2.3.4 MDDV (Mobility-Centric Data Dissemination Algorithm for Vehicular Networks)

To achieve reliable and efficient routing, Wu et al. proposed MDDV [16], which combines opportunistic forwarding, geographical forwarding, and trajectory-based forwarding. MDDV takes the traffic density into account. A forwarding trajectory is specified extending from the source to the destination (trajectory-based forwarding), along which a message is moved geographically closer to the destination (geographical forwarding). The selection of the forwarding trajectory uses geographical knowledge and traffic density. MDDV assumes the traffic density is static. Messages are forwarded along the forwarding trajectory through intermediate nodes, which store and forward messages opportunistically. This approach focuses on reliable routing. The trajectory-based forwarding will lead to large delay if the traffic density varies over time.

2.3.5 VADD

To guarantee an end-to-end connection in a sparse network with tolerable delay, Zhao and Cao proposed VADD [17], based on the idea of carry and forward, using the predictable mobility specific to sparse networks. Instead of routing along a pre-selected path, VADD chooses the next hop based on the highest pre-defined direction priority, selecting the node closest to the destination. The simulation results show that VADD outperforms GPSR in terms of packet delivery ratio, data packet delay, and traffic overhead. This approach predicts the directions of vehicle movement, but it does not predict future changes in the environment.

2.3.6 PDGR (Predictive Directional Greedy Routing)

Jiayu Gong proposed PDGR [18], in which a weighted score is calculated for the current neighbors and the possible future neighbors of the packet carrier. With Predictive DGR, the weighted scores of immediate nodes 2 hops away are also calculated beforehand. Here the next-hop selection is done on prediction, and it is not reliable in all situations. Due to the high dynamics of vehicles, it does not guarantee delivery of the packet to the node present at the edge of the transmission range of the forwarding node, which is considered the most suitable next hop. This leads to low packet delivery ratio, high end-to-end delay, and increased packet drops.

The various routing protocols of MANETs and VANETs are analyzed, and the drawbacks of those routing protocols are described in Table 1.

Table 1 Drawbacks of routing protocols in MANET and VANET

Routing Protocol   Drawbacks
GPSR               Frequent network disconnection; routing loops; too many hops; routing in the wrong direction.
GSR                End-to-end connection is difficult in low traffic density.
GPCR               End-to-end connection is difficult in low traffic density.
A-STAR             Routing paths are not optimal, resulting in large delay of packet transmission.
MDDV               Large delay if the traffic density varies by time.
VADD               Large delay due to varying topology and varying traffic density; too many hops.
PDGR               Large delay if the traffic density is high; low packet delivery ratio; frequent network disconnection.

3.1. Edge Node Based Greedy Routing Algorithm (EBGR)

EBGR is a reliable greedy position-based routing algorithm designed for sending messages from any node to any other node (unicast) or from one node to all other nodes (broadcast/multicast) in a vehicular ad hoc network. The general design goals of the EBGR algorithm are to optimize packet behavior for ad hoc networks with high mobility and to deliver messages with high reliability.

The EBGR algorithm has six basic functional units. First is Neighbor Node Identification (NNI), second is Distance Calculation (DC), third is Direction of Motion Identification (DMI), fourth is Reckoning Link Stability (RLS), fifth is Potential Score calculation (PS), and sixth is Edge Node Selection (ENS). The NNI collects information about all neighbor nodes present within the transmission range of the source/forwarder node at any time. The DC calculates the closeness of the next hop using distance information from GPS. The DMI identifies the neighbor nodes that are moving towards the
direction of the destination. The RLS identifies the link stability between the source/forwarder node and its neighbor nodes. The PS calculates the potential score and identifies the neighbor node having the highest potential for further forwarding of a particular packet towards the destination. The ENS selects an edge node having the highest potential score in the different levels of transmission range. In the following sections, the general assumptions of the EBGR algorithm are briefly discussed and then the functional units of the EBGR algorithm are discussed in detail.

3.2. Assumptions

The algorithm design is based on the following assumptions. All nodes are equipped with GPS receivers, digital maps, optional sensors, and On Board Units (OBU). The location information of all vehicles/nodes can be identified with the help of the GPS receivers. The only communication paths available are via the ad-hoc network, and there is no other communication infrastructure. Node power is not the limiting factor for the design. Communications are message oriented. The Maximum Transmission Range (MTR) of each node in the environment is 250 m.

The neighbor nodes are identified using periodic beacon messages, and the neighbor node closest to the destination node is determined. The closeness of the next hop is identified by the mathematical model [18] shown in Fig. 1:

    DC = (1 - D_i / D_c)

Here,
    D_i : shortest distance from edge node i to destination D
    D_c : shortest distance from packet forwarding node c to destination D
    D_i / D_c : closeness of the next hop

Fig. 1 Distance Calculation in EBGR

3.5. Direction of Motion Identification (DMI)

The appropriate neighbor node that is moving towards the direction of the destination node is identified using the mathematical model [18] shown in Fig. 2:

    DMI = cos(v_i, l_{i,d})

Here,
    v_i : velocity vector of edge node i
    l_{i,d} : vector from the location of edge node i to the location of destination node D
    cos(v_i, l_{i,d}) : cosine of the angle made between these vectors

For Reckoning Link Stability (RLS), D1 (the distance between nodes i and j) and the link duration Δt are estimated using the initial positions of i and j, (X_i0, Y_i0) and (X_j0, Y_j0), and their initial speeds V_i and V_j respectively:

    D1² = ((X_i0 + Vx_i Δt) - (X_j0 + Vx_j Δt))² + ((Y_i0 + Vy_i Δt) - (Y_j0 + Vy_j Δt))²

    D1² = A Δt² + B Δt + C

where
    A = (Vx_i - Vx_j)² + (Vy_i - Vy_j)²
    B = 2[(X_i0 - X_j0)(Vx_i - Vx_j) + (Y_i0 - Y_j0)(Vy_i - Vy_j)]
    C = (X_i0 - X_j0)² + (Y_i0 - Y_j0)²

Solving the equation A Δt² + B Δt + C - R² = 0 (where R is the transmission range of i), we can find Δt:

    LinkTime[i, j] = Δt

    LS[i, j] = LinkTime[i, j] / σ

Here, LS[i, j] = 1 when LinkTime[i, j] ≥ σ, and LS[i, j] is the link stability between the nodes i and j.

Fig. 3 Reckoning Link Stability in EBGR

Once LS is calculated for each neighboring vehicle, EBGR selects the node corresponding to the highest LS (corresponding to the most stable neighboring link) as the next hop for data forwarding. This approach should help in minimizing the risk of broken links and in reducing packet loss.

For the Potential Score (PS) calculation, the three measures above are combined:

    PS_i = ρ (1 - D_i / D_c) + ω cos(v_i, l_{i,d}) + λ LS_{c,i}

Here,
    PS_i : potential score of node i
    ρ, ω, λ : potential factors, with ρ + ω + λ = 1, λ > ρ and λ > ω
    D_i : shortest distance from edge node i to destination D
    D_c : shortest distance from packet forwarding node c to destination D
    LS_{c,i} : link stability between packet forwarding node c and edge node i

Fig. 4 Potential Score Calculation in EBGR

3.8. Edge Node Selection (ENS)

In Edge Node Selection, edge nodes are selected for the packet forwarding event. An edge node is a node which has the shortest distance to the destination D compared to all other nodes within a given level of the transmission range of the source/packet forwarding node.
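The scoring pipeline above (link time from the quadratic, normalized link stability, and the weighted potential score) can be sketched as follows. The weights and node values are illustrative assumptions, not taken from the paper; only the formulas mirror the text.

```python
import math

def link_time(p_i, v_i, p_j, v_j, R):
    """Time until nodes i and j (positions p, velocities v) drift out of
    range R, from the quadratic A*t^2 + B*t + C - R^2 = 0 in the text."""
    dx, dy = p_i[0] - p_j[0], p_i[1] - p_j[1]
    dvx, dvy = v_i[0] - v_j[0], v_i[1] - v_j[1]
    A = dvx**2 + dvy**2
    B = 2 * (dx * dvx + dy * dvy)
    C = dx**2 + dy**2
    if A == 0:                      # identical velocities: link never breaks
        return math.inf
    disc = B * B - 4 * A * (C - R * R)
    if disc < 0:
        return 0.0
    return max((-B + math.sqrt(disc)) / (2 * A), 0.0)  # larger root = exit time

def potential_score(D_i, D_c, cos_dir, ls, rho=0.2, omega=0.3, lam=0.5):
    """PS_i = rho*(1 - D_i/D_c) + omega*cos + lam*LS, with rho+omega+lam = 1
    and lam the largest weight, as required in the text."""
    return rho * (1 - D_i / D_c) + omega * cos_dir + lam * ls

sigma = 10.0                        # assumed link-stability threshold
lt = link_time((0, 0), (20, 0), (100, 0), (10, 0), R=250)   # 35 s
ls = min(lt / sigma, 1.0)           # LS = 1 once LinkTime >= sigma
print(round(potential_score(D_i=80, D_c=100, cos_dir=0.9, ls=ls), 2))  # 0.81
```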
The different levels of transmission range are considered to avoid packet loss due to the high-speed mobility of vehicles. An edge node has the responsibility of saving received data packets in a forwarding table and transferring them later when those nodes meet new neighbors. The overall objective of the algorithm is to forward the packet as soon as possible, to increase the packet delivery ratio, minimize the end-to-end delay, and avoid packet loss. The MTR of a vehicle/node is 250 m, and the other levels of transmission range are considerably less than the MTR. The different levels of transmission range are shown in Fig. 5, which includes:

    Maximum Transmission Range (MTR = 250 m)
    Level 1 transmission range (L1TR = 200 m)
    Level 2 transmission range (L2TR = 150 m)
    Level 3 transmission range (L3TR = 100 m)
    Level 4 transmission range (L4TR = 50 m)

Notation used in the pseudo code:

    MTR : Maximum Transmission Range = 250 m
    L1TR : Level 1 Transmission Range = 200 m
    L2TR : Level 2 Transmission Range = 150 m
    L3TR : Level 3 Transmission Range = 100 m
    L4TR : Level 4 Transmission Range = 50 m
    currentnode : the current packet carrier
    loc_c : the location of the current node
    v_c : speed vector of the current node
    dest : destination of the packet
    loc_d : the location of the destination
    nexthop : the node selected as next hop
    neigh_i : the i-th neighbor
    loc_i : the location of the i-th neighbor
    v_i : the speed vector of the i-th neighbor

Fig. 6 Pseudo code of EBGR Algorithm

1.  loc_c ← getlocation(currentnode)
2.  v_c ← getspeed(currentnode)
3.  loc_d ← getlocation(dest)
4.  D_c = distance(loc_c, loc_d)
5.  l_{c,d} = loc_d - loc_c
6.  PS = ω × cos(v_c, l_{c,d})
7.  nexthop = currentnode
8.  for all neighbors of currentnode do
9.      loc_i ← getlocation(neigh_i)
10.     v_i ← getspeed(neigh_i)
11.     D_i = distance(loc_d, loc_i)
12.     D_ci = distance(loc_c, loc_i)
13. for all neighbors of currentnode with D_ci do
14.     if (D_ci < MTR && D_ci > L1TR)
15.         l_{i,d} = loc_d - loc_i
16.         PS_i = ρ(1 - D_i/D_c) + ω cos(v_i, l_{i,d}) + λ LS_{c,i}
17.         for neigh_i with greatest PS_i do
18.             PS = PS_i
19.             nexthop = neigh_i
20.         end for
21.     else if (D_ci < L1TR && D_ci > L2TR)
22.         l_{i,d} = loc_d - loc_i
23.         PS_i = ρ(1 - D_i/D_c) + ω cos(v_i, l_{i,d}) + λ LS_{c,i}
24.         for neigh_i with greatest PS_i do
25.             PS = PS_i
26.             nexthop = neigh_i
27.         end for
28.     else if (D_ci < L2TR && D_ci > L3TR)
29.         l_{i,d} = loc_d - loc_i
30.         PS_i = ρ(1 - D_i/D_c) + ω cos(v_i, l_{i,d}) + λ LS_{c,i}
31.         for neigh_i with greatest PS_i do
32.             PS = PS_i
33.             nexthop = neigh_i
34.         end for
35.     else if (D_ci < L3TR && D_ci > L4TR)
36.         l_{i,d} = loc_d - loc_i
37.         PS_i = ρ(1 - D_i/D_c) + ω cos(v_i, l_{i,d}) + λ LS_{c,i}
38.         for neigh_i with greatest PS_i do
39.             PS = PS_i
40.             nexthop = neigh_i
41.         end for
42.     else if (D_ci < L4TR)
43.         l_{i,d} = loc_d - loc_i
44.         PS_i = ρ(1 - D_i/D_c) + ω cos(v_i, l_{i,d}) + λ LS_{c,i}
45.         for neigh_i with greatest PS_i do
46.             PS = PS_i
47.             nexthop = neigh_i
48.         end for
49.     else
50.         carry the packet with currentnode
51.     end if
52. end for
53. end for

Step 1: Neighbor nodes at a distance between 250 m and 200 m from the current node fall between the MTR and L1TR. The potential scores of all nodes present between the transmission ranges MTR and L1TR are calculated, and the node having the highest potential score is considered the edge node of the MTR, so the packet from the current node is forwarded to that edge node. If no node is present between the MTR and L1TR, then L1TR and L2TR are considered.

Step 2: Neighbor nodes at a distance between 200 m and 150 m from the current node fall between L1TR and L2TR. The potential scores of all nodes present between the transmission ranges L1TR and L2TR are calculated, and the node having the highest potential score is considered the edge node of the L1TR, so the packet from the current node is forwarded to that edge node. If no node is present between L1TR and L2TR, then L2TR and L3TR are considered.

Step 3: Neighbor nodes at a distance between 150 m and 100 m from the current node fall between L2TR and L3TR. The potential scores of all nodes present between the transmission ranges L2TR and L3TR are calculated.
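The ring-by-ring edge-node selection of Steps 1-3 can be rendered as a short sketch. This is a simplified illustration of the Fig. 6 logic under our own assumptions: the potential-score function is taken as given, and the ring boundaries are treated as half-open.

```python
# Transmission-range rings from the text: try the outermost ring first,
# falling back to inner rings when a ring holds no neighbors.
RINGS = [(200, 250), (150, 200), (100, 150), (50, 100), (0, 50)]

def select_edge_node(neighbors, potential_score):
    """neighbors: list of (node_id, distance_from_current) pairs.
    Returns the highest-potential-score node in the outermost non-empty
    ring, or None (the current node carries the packet itself)."""
    for lo, hi in RINGS:
        ring = [n for n, d in neighbors if lo < d <= hi]
        if ring:
            return max(ring, key=potential_score)
    return None

# toy example: scores are illustrative, not computed from the PS formula
scores = {"a": 0.4, "b": 0.7, "c": 0.9}
picked = select_edge_node([("a", 220), ("b", 230), ("c", 120)], scores.get)
print(picked)   # "b": the 200-250 m ring is non-empty, and b beats a there
```

Note that node c, although it has the highest score overall, is never considered while the outer ring has candidates; this mirrors the algorithm's preference for forwarding as far as possible per hop.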
Fig. 10 [chart: packet delivery ratio (0-1) vs. mobility (0-25 metres/second)]
distance information from GPS. By using EBGR, the number of hops is reduced considerably and the packet delivery ratio is improved by about 15.5% in comparison with PDGR at the different levels of transmission range.

4.5 Packet Delivery Ratio vs. Mobility

In this part, we compare the packet delivery ratio with varying speed of vehicles, as shown in Fig. 10. When the speed of the vehicles increases, the packet delivery ratio of GPSR and PDGR decreases much faster than that of the others. The high speed of vehicles leads to packet loss at the edge of the MTR. By using EBGR, the packet loss at the edge of the MTR is minimized considerably and the packet delivery ratio is improved by about 12.5% in comparison with PDGR as the speed of vehicles increases.

5. Conclusion

In this paper we have investigated routing aspects of VANETs. We have identified the properties of VANETs and reviewed previous studies on routing in MANETs and VANETs, commenting on their contributions and limitations. By exploiting the uniqueness of VANETs, we have proposed the Revival Mobility Model and a new position-based greedy routing approach, EBGR. Our simulation results show that EBGR outperforms GPSR and PDGR significantly in terms of improving the packet delivery ratio. In the future, our approach requires modifications that take into account city environment characteristics and different mobility models with obstacles. Comparison of the proposed EBGR approach with other existing approaches shows that our routing algorithm is considerably better than the other routing algorithms in improving the packet delivery ratio.

References

[1]. Charles E. Perkins and Pravin Bhagwat, "Highly dynamic destination-sequenced distance-vector routing (DSDV)," in Proceedings of ACM SIGCOMM'94 Conference on Communications Architectures, Protocols and Applications, 1994.
[2]. T. H. Clausen and P. Jacquet, "Optimized Link State Routing (OLSR)," RFC 3626, 2003.
[3]. Richard G. Ogier, Fred L. Templin, Bhargav Bellur, and Mark G. Lewis, "Topology broadcast based on reverse-path forwarding (TBRPF)," Internet Draft, draft-ietf-manet-tbrpf-03.txt, work in progress, November 2001.
[4]. S. R. Das, R. Castaneda, and J. Yan, "Simulation based performance evaluation of mobile, ad hoc network routing protocols," ACM/Baltzer Mobile Networks and Applications (MONET) Journal, pp. 179-189, July 2000.
[5]. David B. Johnson and David A. Maltz, "Dynamic source routing in ad hoc wireless networks," in Mobile Computing, Tomasz Imielinske and Hank Korth, Eds., vol. 353. Kluwer Academic Publishers, 1996.
[6]. Vincent D. Park and M. Scott Corson, "A highly adaptive distributed routing algorithm for mobile wireless networks," in Proceedings of IEEE INFOCOM, 1997, pp. 1405-1413.
[7]. Charles E. Perkins and Elizabeth M. Royer, "Ad hoc on-demand distance vector routing," in Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications, February 1999, pp. 1405-1413.
[8]. Josh Broch, David A. Maltz, David B. Johnson, Yih-Chun Hu, and Jorjeta Jetcheva, "A performance comparison of multi-hop wireless ad hoc network routing protocols," in Proceedings of the Fourth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom '98), Dallas, Texas, U.S.A., October 1998, pp. 85-97.
[9]. P. Bose, P. Morin, I. Stojmenovic, and J. Urrutia, "Routing with guaranteed delivery in ad hoc wireless networks," in Proc. of 3rd ACM Intl. Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications (DIAL M99), 1999, pp. 48-55.
[10]. Brad Karp and H. T. Kung, "GPSR: Greedy perimeter stateless routing for wireless networks," in Proceedings of the 6th Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom 2000), Boston, MA, U.S.A., August 2000, pp. 243-254.
[11]. Stefano Basagni, Imrich Chlamtac, Violet R. Syrotiuk, and Barry A. Woodward, "A distance routing effect algorithm for mobility (DREAM)," in ACM MOBICOM '98. ACM, 1998, pp. 76-84.
[12]. Ljubica Blazevic, Silvia Giordano, and Jean-Yves Le Boudec, "Self-organizing wide-area routing," in Proceedings of SCI 2000/ISAS 2000, Orlando, July 2000.
[13]. C. Lochert, H. Hartenstein, J. Tian, D. Herrmann, H. Füßler, and M. Mauve, "A routing strategy for vehicular ad hoc networks in city environments," IEEE Intelligent Vehicles Symposium (IV2003).
[14]. C. Lochert, M. Mauve, H. Füßler, and H. Hartenstein, "Geographic routing in city scenarios" (poster), MobiCom 2004, ACM SIGMOBILE Mobile Computing and Communications Review (MC2R) 9 (1), pp. 69-72, 2005.
[15]. B.-C. Seet, G. Liu, B.-S. Lee, C. H. Foh, K. J. Wong, and K.-K. Lee, "A-STAR: A mobile ad hoc routing strategy for metropolis vehicular communications," NETWORKING 2004.
[16]. H. Wu, R. Fujimoto, R. Guensler, and M. Hunter, "MDDV: A mobility-centric data dissemination algorithm for vehicular networks," ACM VANET 2004.
[17]. J. Zhao and G. Cao, "VADD: Vehicle-assisted data delivery in vehicular ad hoc networks," InfoCom 2006.
[18]. Jiayu Gong, Cheng-Zhong Xu, and James Holle, "Predictive directional greedy routing in vehicular ad hoc networks," (ICDCSW '07).
[19]. F. Granelli, G. Boato, and D. Kliazovich, "MORA: A movement-based routing algorithm for vehicle ad hoc networks," in IEEE Workshop on Automotive Networking and Applications (AutoNet 2006), San Francisco, U.S.A., Dec. 2006.
[20]. The Network Simulator: ns-2, http://www.isi.edu/nsnam/ns/.
formative evaluation of the tools. Such a framework was proposed in [8]. So, the major contribution of this paper is to show how the framework can be applied to compare the visualization tools, which is presented in Section 4. The framework for visualizing Model Driven Software Evolution falls into seven key areas (views): Context View, Inter-model View, City View, Metric View, Transformation View, Evolution View and Evaluation View [9], and 22 key features are identified across all key areas. The framework is used to evaluate visualization tools, and it is also used to assess tool appropriateness from a variety of stakeholder perspectives.

This paper is structured as follows: Section 2 discusses the related work. Section 3 summarizes the framework. Section 4 discusses an application of the framework by considering different visualization/CASE tools. Section 5 outlines the conclusions and gives an outlook on future work.

2. Related Work

This section reviews the literature related to the fields of Software Visualization, Software Evolution Visualization and Model Driven approaches.

Source Viewer 3D (sv3D) [6] is a Software Visualization framework that builds on the SeeSoft metaphor. sv3D can show large amounts of source code in one view. Object-based manipulation methods and simultaneous alternative mappings are available to the user. The types of user tasks and interactions supported by sv3D are not directly related to solving or visualizing specific software engineering tasks, which is a prerequisite for a software visualization tool.

The Architecture to Support Model Driven Software Visualization [7] borrows from the field of Model Driven Engineering (MDE) to assist with the creation of highly customizable interfaces for Software Visualization. In order to validate the architecture, the MDV framework for Eclipse was developed. Model Driven Visualization (MDV) is intended to address the customization of information visualization tools, especially in the program comprehension domain. The MDV architecture describes how to leverage the work done in the Model Driven Engineering community and apply it to the problem of designing visualization tools.

The Graphical Modeling Framework (GMF) [12] project for Eclipse has facilities to allow modelers to define graphical editors for their data. These graphical editors can be used as viewers; however, the views they support are limited to simple graphs with containers. The GMF project currently lacks the ability to specify "Query Result" visualizations.

An open framework for visual mining of CVS-based software repositories [10] has three major aspects: data extraction, analysis and visualization. An approach was proposed for CVS data extraction and analysis. A CVS data acquisition mediator is used to extract the data from CVS repositories. Analysis techniques are used to analyze the raw data retrieved from the CVS repositories through CVS querying. It also provides a comparison of open source projects. CVSgraph is a software tool used for visualizing a project at file level. This open framework does not provide visualization of models; it provides for programs at file level only.

CVSscan [11] is a tool in which a new approach for visualization of software evolution was developed. The main audience targeted here is the software maintenance community. The main goal is to provide support for program and process understanding. This approach uses multiple correlated views on the evolution of a software project. The overall evolution of code structure, semantics, and attributes is integrated into an orchestrated environment to offer detail-on-demand. It also provides a code text display that gives a detailed view of both the composition of a fragment of code and its evolution in time. It is focused on the evolution of individual files.

2.1 Motivation for Framework and its Application

A number of frameworks exist in the literature for the comparison and assessment of various CASE tools. Comparison of these tools is essential to understand their differences, to ease their replication studies, and to discover what tools are lacking. Such a comparison is difficult because there is no well-defined, comprehensive and common comparative study for different categories of tools. For design recovery tools, a comparative framework [14] was derived; this framework comprises eight concerns, which were further divided into fifty-three criteria and applied successfully to ten design recovery tools. Another framework [7] also exists in the literature for the comparison and assessment of software architecture visualization tools. Software architecture is the gross structure of a system; as such, it presents a different set of problems for visualization than those of visualizing the software at a lower level of abstraction. Six visualization tools were evaluated in this framework, which consists of seven key areas and 31 key features for the assessment of software architecture visualization tools. Both the
IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 1, No. 3, January 2010 49
ISSN (Online): 1694-0784
ISSN (Print): 1694-0814
describe the features of the tools, and the comparison of those tools and their responses are shown in Table 3.

Table 2: Possible Responses (Metrics)

Response   Meaning
Y          Full support
Y?         Mainly supported
N?         Mainly not supported
N          No support
NA         Not applicable (not in the scope)
?          Unable to determine

4.1 ArgoUML Tool

ArgoUML (ARUML) is a free UML diagramming tool [5], [17] released under the open source BSD License. It is a Java-based UML tool that helps users to design using UML. It is able to create and save most of the nine standard UML diagrams. ArgoUML is not only a free UML modeling tool; it is also an open source project to which anyone can contribute, to extend or customize the features of the tool. It is a powerful yet easy-to-use, interactive, graphical software design environment that supports the design, development and documentation of object-oriented software applications. The users of ArgoUML are software designers, architects, software developers, business analysts, system analysts and other professionals involved in the analysis, design and development of software applications. The first version was released in April 1998, and the recent version is 0.26.2, released in November 2008. All nine UML 1.4 diagrams are supported, along with many other features, but the major weakness is the lack of support for UML 2. The key features that make ArgoUML different from other tools are: it makes use of ideas from cognitive psychology, it is based on open standards, and it is 100% pure Java.

The Explorer View in ArgoUML has 9 perspectives, which satisfy features of the framework such as CV1, CV2, CV3, IMV1, and IMV2. This is indicated with the response 'Y' in Table 3. Integration of the models (IMV3) is not supported, so the response is 'N'. Features CiV1 and CiV2 are not mainly supported because there is no geographical view of a complete project as such, but the tool provides all the models in a project in a hierarchical tree view; hence the response is 'N?' in Table 3. All the features in the Metric View (MeV1, MeV2, MeV3, and MeV4) are not applicable here because the tool is not intended to calculate the metric values of models; this is shown as the 'NA' response. The response for the Transformation features TV1, TV2, and TV3 is 'N?' because transformation from model to code is partially available, but not the other kinds of transformation such as model to model or code to model. Transformation rules and language (TV4 and TV5) are not applicable, so the response is 'NA'. Compared to the other two tools, ArgoUML is particularly inspired by three theories within cognitive psychology. The designers of a complex system do not conceive a design fully formed; instead, they must construct a partial design, evaluate, reflect on, and revise it, until they are ready to extend it further. The responses for these features are shown in Table 3 as EV1 - Y?, EV2 - N, EaV1 - N, EaV2 - Y?, EaV3 - Y.

4.2 MetricView Evolution Tool

The MetricView Evolution (MVE) tool [2], [3], [15] is a research activity within the Empirical Analysis of Architecture and Design Quality Project (EmpAnAda). This project is an activity of the System Architecture and Networking group at the Eindhoven University of Technology, Netherlands. The MetricView Evolution tool provides features such as metric calculations within the tool, several views to explore and navigate UML models, and visualization of evolution data. It is an extension of the MetricView tool which includes more features. MetricView Evolution also supports analysis of model quality and model evolution. Due to some limitations in this research activity, and since the entire UML specification is quite complex, not all the information is available in each diagram; only the necessary elements are extracted and displayed in the tool. Even with these limitations, the reasons to select this tool are that it is a research activity, it is easily downloadable, and its features are close to the framework.

The MetricView Evolution tool has full support (Y) for key features such as CV1, CV2, CV3, IMV1, IMV2, CiV1, CiV2, MeV1, MeV2, MeV3, EV1, EaV1, and EaV2. Feature IMV3 (i.e. integration of models) is not supported, but there is scope for integrating the models. Features TV1, TV2, and TV3 are not applicable because they are not in the scope of the tool: the purpose of the MetricView Evolution tool is the quality and evolution of UML models, not the transformation of models such as model to model, code to model and model to code. The EV2 and EaV2 features are not mainly supported (N?) because the purpose of the evolution view in the tool is to enable the user to spot trends in the values of quality attributes and/or metrics at multiple abstraction levels, not to cover multiple dimensions of evolution. The responses for the stakeholder concerns (key features or questions) are shown in Table 3 in terms of Y, N, Y?, N? and NA.

4.3 Visual Paradigm for UML

Visual Paradigm for UML 6.4 (VP-UML) [16] is a powerful visual UML CASE tool. It is designed for a wide range of users, including software engineers, system
IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 1, No. 3, January 2010 51
ISSN (Online): 1694-0784
ISSN (Print): 1694-0814
analysts, business analysts, and system architects who are interested in building software systems reliably through the use of the object-oriented approach. VP-UML can run on different operating systems. It supports more than 20 diagram types including UML 2.1, BPMN, SysML, ERD, DFD and more. Different editions are available: Enterprise, Professional, Standard, Modeler, and Personal are commercial editions, while Community and Viewer are non-commercial editions. It supports a rich array of tools. One special feature is the Resource-Centric interface, which lets the user access modeling tools easily without referring back and forth from the workspace to various toolbars. Users can draw diagrams or models as with pen and paper, executing complicated modifications with just a click and drag, creating a completely visual environment.

It is observed that the names of the features in VP-UML differ from the features of the framework, but the purpose and intention of the features are the same; so the tool has full support for those features, labeled as 'Y' in Table 3. Transformation of models such as model to model, model to code and code to model is available in the tool, but transformation rules and languages are not available. Hence, features TV4 and TV5 are not applicable (NA). The MeV1, MeV2, MeV3 and MeV4 features for metrics of the models are not in the VP-UML tool, which is shown in Table 3 as 'NA'. Features EV2 and EaV3 are not mainly supported in the tool, i.e. shown in Table 3 as 'N?'. Visualization of the models using different diagrams is possible, but the techniques are not available; so the response is 'N?' for MeV2. Stakeholder feedback (EaV3 - N?) is not mainly provided, but the user can store their opinions/ideas about the evolution of the models.

design problems and many design issues and rules is also available. The remaining two features, 'transformation rules' and 'transformation languages', are not applicable and not supported by these three tools. By comparing the tools under this common framework, a stakeholder can easily understand and assess the tools and can find out the flaws in a particular tool.

From the comparison of the various features of the three tools, it is observed that there is still a need to consider a few more possible visualization/CASE tools which exist in the literature. It is possible to check whether the unsatisfied features of the three tools can be satisfied by other tools, and also to learn the role of the visualization tools in MoDSE. From the comparison of a number of possible tools, the framework can be strengthened further. Another application of the framework is to evaluate the stakeholder concerns considered in the framework against the concerns of software practitioners (stakeholders) from diverse organizations. These are the subjects of future work.
Dr. Anand Rao Akepogu received a B.Sc (M.P.C) degree from Sri Venkateswara University, Andhra Pradesh, India. He received a B.Tech degree in Computer Science & Engineering from the University of Hyderabad, Andhra Pradesh, India, and an M.Tech degree in A.I & Robotics from the University of Hyderabad, Andhra Pradesh, India. He received a PhD degree from the Indian Institute of Technology, Madras, India. He is currently working as Professor & HOD of the Computer Science & Engineering Department and also as Vice-Principal of JNTU College of Engineering, Anantapur, Jawaharlal Nehru Technological University, Andhra Pradesh, India. Dr. Rao has published more than twenty research papers in international journals and conferences. His main research interests include software engineering and data mining.
2 MIS Department, Al-Zaytoonah University, Amman, Jordan
2. Coding Theory

The theory and practice of error-correction coding is concerned with the protection of digital information against the errors that occur during data transmission or storage. Many ingenious error-correcting techniques based on a rigorous mathematical theory have been developed and have many important and frequent applications. The current problem with any high-speed data communication system, such as a storage medium, is how to control the errors that occur while storing data. In order to achieve reliable communication, designers should develop good codes and efficient decoding algorithms [11].

There are three types of faults: transient, intermittent, and permanent. Transient faults are likely to cause a limited number of symmetric errors or multiple unidirectional errors. Intermittent faults, because of their short duration, are also expected to cause a limited number of errors. Permanent faults, on the other hand, cause either symmetric or unidirectional errors, depending on the nature of the faults. The most likely faults in some of the recently developed LSI/VLSI, ROM, and RAM memories (such as faults that affect address decoders, word lines, the power supply, and stuck faults in a serial bus, etc.) cause unidirectional errors. The number of unidirectional errors caused by the above-mentioned faults can be fairly large [12].

The errors that can occur because of noise are many and varied. However, they can be classified into three main types: symmetric, asymmetric, and unidirectional errors [7].

2.1 Error Control for Computer Main Memories

Error-correcting codes have been used to enhance the reliability and data integrity of computer memory systems. The error correction can be incorporated into the hardware. In particular, the class of single error-correcting and double error-detecting (SEC-DED) binary codes has been successfully used to correct and detect errors associated with failures in semiconductor memories. The most effective organization is the so-called 1-bit-per-chip organization, in which all bits of a code word are stored in different chips. Any type of failure in a chip can corrupt at most 1 bit of the code word; as long as the errors do not line up in the same code word, multiple errors in the memory are correctable. Large scale integration (LSI) and very large scale integration (VLSI) memory systems offer significant advantages in size, speed, and weight over earlier memory systems. These memories are normally packaged with a multiple-bit (or byte) per chip organization [13].

Coding techniques play a major role in segmenting the information into m blocks, each block of k bits, or the information may be taken as a single block of length k (k = 256, 512, 1024, 2048, 8192, 16384, 32768, 65536, 131072, 262144, 524288) according to the organization of the memory system in our research. BCH and RS codes are two powerful approaches to error control coding in memory systems. Segmenting the information is the first step when information is written to a computer memory. The k-bit block is then encoded into an n-bit code word consisting of the k information bits and r parity-check bits (n = k + r). This code word is stored in memory. The decoding method is used to obtain the information k with no errors, according to the coding technique, when a code word is fetched from storage.

2.2 Reed-Solomon Codes (RS Codes)

An RS code is a class of non-binary BCH codes. It is also a cyclic symbol-error-correcting code. RS codes represent a very important class of algebraic error-correcting codes, which have been used for improving the reliability of compact discs, digital audio tape and other data storage systems [14]. Secure communications systems commonly use RS codes as one method of protection against jamming. RS codes are also used for error control in data storage systems such as magnetic drums and photo-digital storage systems.

An RS code is a block sequence over the finite field GF(2^m) of 2^m binary symbols, where m is the number of bits per symbol. This sequence of symbols can be viewed as the coefficients of a code polynomial C(x) = c_0 + c_1x + c_2x^2 + ... + c_{n-1}x^{n-1}, where the field elements c_i are from GF(2^m) [10]. A t-error-correcting RS code with symbols from GF(2^m) has the following parameters:

Code length: n = 2^m - 1
Number of information digits: k = n - 2t
Number of parity-check digits: n - k = 2t
Minimum distance: d_min = 2t + 1

In the following, we shall consider Reed-Solomon codes with code symbols from the Galois field GF(2^m). The generator polynomial of a t-error-correcting Reed-Solomon code of length 2^m - 1 is g(x) = (x + a)(x + a^2)...(x + a^2t), where a is a primitive element of GF(2^m), and the coefficients g_i, 0 <= i <= 2t, are also from GF(2^m). An (n, k) RS code generated by g(x) is an (n, n - 2t) cyclic code whose code vectors are multiples of g(x) [14, 15]. Consider RS codes with symbols from GF(2^m), where m is the number of bits per symbol.
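The parameter relations just stated (n, k, n - k, d_min as functions of m and t) can be checked with a few lines of arithmetic. This is purely an illustrative sketch of the formulas above, not code from the paper:

```python
def rs_parameters(m, t):
    """Parameters of a t-error-correcting RS code over GF(2^m):
    code length n = 2^m - 1, information symbols k = n - 2t,
    parity-check symbols n - k = 2t, minimum distance d_min = 2t + 1."""
    n = 2 ** m - 1
    k = n - 2 * t
    return n, k, n - k, 2 * t + 1

# Example: 8-bit symbols (m = 8) with t = 16 correctable symbol errors
# give the well-known (255, 223) code with 32 check symbols.
print(rs_parameters(8, 16))  # (255, 223, 32, 33)
```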
Let d(x) = c_{n-k}x^{n-k} + c_{n-k+1}x^{n-k+1} + ... + c_{n-1}x^{n-1} be the information polynomial and p(x) = c_0 + c_1x + ... + c_{n-k-1}x^{n-k-1} be the check polynomial. Then the encoded RS code polynomial is expressed by:

c(x) = p(x) + d(x)    (1)

where c_i, 0 <= i <= n - 1, are field elements in GF(2^m). Thus, a vector of n symbols (c_0, c_1, ..., c_{n-1}) is a code word if and only if its corresponding polynomial c(x) is a multiple of the generator polynomial g(x). The common method of encoding a cyclic code is to find p(x) from d(x) and g(x), which results in an irrelevant quotient q(x) and an important remainder y(x). That is,

d(x) = q(x)g(x) + y(x)    (2)

Substituting Eq. (1) into (2) gives:

c(x) = p(x) + q(x)g(x) + y(x)    (3)

If we define the check digits as the negatives of the coefficients of y(x), i.e. p(x) = -y(x), it follows that:

c(x) = q(x)g(x)    (4)

This ensures that the code polynomial c(x) is a multiple of g(x). Thus, the RS encoder will perform the above division process to obtain the check polynomial p(x) [14].

Theorem 1: A Reed-Solomon code is a maximum distance code, and the minimum distance is n - k + 1.

This tells us that for fixed (n, k), no code can have a larger minimum distance than an RS code. This is often a strong justification for using RS codes. RS codes always have a relatively short block length as compared to other cyclic codes over the same alphabet [16].

In decoding an RS code (or any non-binary BCH code), the same three steps used for decoding a binary BCH code are required; in addition, a fourth step involving calculation of the error value is required. The error value at the location corresponding to B_l is given by the following equation:

e_l = Z(B_l^-1) / PROD_{i != l} (1 + B_i B_l^-1)    (5)

where Z(x) = 1 + (s_1 + o_1)x + (s_2 + o_1s_1 + o_2)x^2 + ... + (s_v + o_1s_{v-1} + o_2s_{v-2} + ... + o_v)x^v.

The decoding method of RS codes is worth mentioning because of its considerable theoretical interest, even though it is impractical [15].

3. Byte-Per-Chip Memory Organization

In many computer memory and VLSI circuits, unidirectional errors are known to be predominant; protection must be against combinations of unidirectional and random errors, because random byte errors also appear from intermittent faults in memories. Thus it is very important to have such codes for the protection of byte-organized memories. Table 1 shows the parameters of the modified RS code after shortening. This code is optimal; thus it is the only SbEC-DbEC code with three check bytes, but for a given size b (b < 16) there are only one or two values of information.

Table 1: The parameters of the shortened modified RS code

b    n      n       k
5    16     79      64
6    46     274     256
7    77     533     512
8    131    1048    1024
9    231    2075    2048
10   823    8222    8192
11   1493   16417   16384
12   2734   32804   32768
13   5045   65575   65536
14   9366   131114  131072
15   17480  262189  262144

Let the two codes whose H0 matrices are denoted Hv and Hw have minimum Hamming distance d_min = 4 over GF(2^b), and let v_i, i = 0, 1, ..., n-1, denote a column vector in the matrix Hv. Preserving the minimum distance, the matrix Hw is converted to a matrix having an all-'I' row vector. Next, this all-'I' row vector is removed from the matrix, and the resultant matrix is again called Hw. Let w_j, j = 0, 1, ..., m-1, denote a column vector of the matrix Hw. The new code has a parity check matrix H1 in which each column is defined by the following equation:

(C_ij)^T = (V_i W_j)    (6)

for i = 0, 1, ..., n-1 and j = 0, 1, ..., m-1. The d_min of this code is four over GF(2^b).

For example, let b = 2 and let Hw equal

     | I  I    I    I  0  0 |
Hw = | I  T    T^2  0  I  0 |    (7)
     | I  T^2  T    0  0  I |

where

I = |1 0|    T1 = |0 1|    T2 = |1 1|    0 = |0 0|
    |0 1|         |1 1|         |1 0|        |0 0|

This matrix can be converted to a new form whose top row vector has all 'I' elements. The conversion can be carried out in the following manner. The second row of Hw is multiplied by an arbitrary non-zero element T^a. The multiplied result and the third row vector are added to the first row vector in Hw. If the added row vector has a non-zero element, each column can be normalized so that the first-row element is an 'I' element. It can be derived that the number of T^a elements is 2^b - 1. If T^a = T' is chosen, then:
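The systematic encoding step of Eqs. (2)-(4) above — divide the shifted information polynomial by g(x) and append the remainder as check digits so that the code word is an exact multiple of g(x) — can be sketched as follows. For brevity this sketch works over GF(2) (XOR arithmetic, as in a binary cyclic code) rather than GF(2^m); the division structure is the same, and the (7,4) code and helper names used are illustrative, not from the paper:

```python
def poly_divmod_gf2(dividend, divisor):
    """Remainder of binary polynomial division; polynomials are
    represented as integer bit-masks (bit i = coefficient of x^i)."""
    dlen = divisor.bit_length()
    rem = dividend
    while rem.bit_length() >= dlen:
        # Cancel the current leading term of the remainder.
        rem ^= divisor << (rem.bit_length() - dlen)
    return rem

def systematic_encode(data, gen, r):
    """Eq. (2)-(4): shift d(x) past the r check positions, compute the
    remainder y(x) modulo g(x), and add it so c(x) = q(x) g(x)."""
    shifted = data << r           # x^r * d(x)
    return shifted ^ poly_divmod_gf2(shifted, gen)

# Example with the (7,4) cyclic Hamming code, g(x) = x^3 + x + 1 (0b1011):
cw = systematic_encode(0b1101, 0b1011, 3)
assert cw == 0b1101001                       # data bits, then 3 check bits
assert poly_divmod_gf2(cw, 0b1011) == 0      # c(x) is a multiple of g(x)
```

Note how the code word keeps the information bits intact in the high positions, which is exactly what "systematic" means in the encoding described above.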
[4] Z. Zhang and C. Tu, On the Construction of Systematic TEC/AUED Codes, IEEE Transactions on Information Theory, Vol. 39, No. 5, Sep. 1993, pp. 1662-1669.
[5] S. Al-Bassam, Another Method for Constructing t-EC/AUED Codes, IEEE Transactions on Computers, Vol. 49, No. 9, Sep. 2000, pp. 964-967.
[6] G. Umanesan and E. Fujiwara, A Class of Random Multiple Bits in a Byte Error Correcting and Single Byte Error Detecting (S[sub t/b]EC-S[sub b]ED) Codes, IEEE Transactions on Computers, Vol. 52, No. 7, Jul. 2003, pp. 835-848.
[7] B. Bose, S. Elmougy, and L. G. Tallini, Systematic t-Unidirectional Error-Detecting Codes over Zm, IEEE Transactions on Computers, Vol. 56, No. 7, July 2007, pp. 876-880.
[8] S. Krishnan, R. Panigrahy and S. Parthasarathy, Error-Correcting Codes for Ternary Content Addressable Memories, IEEE Transactions on Computers, Vol. 58, No. 2, Feb. 2009, pp. 275-279.
[9] I. Naydenova and T. Kløve, Some Optimal Binary and Ternary t-EC-AUED Codes, IEEE Transactions on Computers, Vol. 55, No. 11, Nov. 2009, pp. 4898-4904.
[10] G. Fang and H. C. A. Van Tilborg, "Bounds and Constructions of Asymmetric or Unidirectional Error Codes", Applicable Algebra in Engineering, Communication and Computing (AAECC), 3, 1992, pp. 269-300.
[11] M. Y. Rhee, Error Correcting Coding Theory, McGraw-Hill, New York, 1989.
[12] D. J. Lin and B. Bose, Theory and Design of t-Error Correcting and d (d>t)-Unidirectional Error Detecting (t-EC/d-UED) Codes, IEEE Transactions on Computers, Vol. 37, No. 4, Apr. 1988, pp. 433-439.
[13] C. L. Chen, Error-Correcting Codes with Byte Error-Detection Capability, IEEE Transactions on Computers, Vol. C-32, No. 7, July 1983, pp. 615-621.
[14] M. Morii and M. Kasahara, Generalized Key Equation of Remainder Decoding Algorithm for Reed-Solomon Codes, IEEE Transactions on Information Theory, Vol. 38, No. 6, Nov. 1992, pp. 1801-1807.
[15] S. Lin, An Introduction to Error-Correcting Codes, Prentice-Hall, Electrical Engineering Series, New Jersey, 1970.
[16] R. E. Blahut, Theory and Practice of Error Control Codes, Addison-Wesley, 1983.
[17] C. L. Chen, Byte-Oriented Error-Correcting Codes for Semiconductor Memory Systems, IEEE Transactions on Computers, Vol. C-35, No. 7, July 1986, pp. 646-648.

Muzhir Shaban Al-Ani received a Ph.D. in Computer & Communication Engineering Technology, ETSII, Valladolid University, Spain, 1994. He was Assistant of Dean at Al-Anbar Technical Institute (1985), Head of the Electrical Department at Al-Anbar Technical Institute, Iraq (1985-1988), Head of the Computer and Software Engineering Department at Al-Mustansyria University, Iraq (1997-2001), and Dean of the Computer Science (CS) & Information System (IS) faculty at the University of Technology, Iraq (2001-2003). On 15 September 2003 he joined the Electrical and Computer Engineering Department, College of Engineering, Applied Science University, Amman, Jordan, as Associate Professor. On 15 September 2005 he joined the Management Information System Department, Amman Arab University, Amman, Jordan, as Associate Professor, and on 15 September 2008 he joined the Computer Science Department at the same university.

Qeethara Kadhim Abdul Rahman Al-Shayea received a Ph.D. in Computer Science, Computer Science Department, University of Technology, Iraq, 2005. She received her M.Sc degree in Computer Science from the Computer Science Department, University of Technology, Iraq, 2000, and her High Diploma degree in Information Security from the Computer Science Department, University of Technology, Iraq, 1997. From 15 September 2001 to 2006 she was with the Computer Science Department, University of Technology, Iraq, as assistant professor. On 15 September 2006 she joined the Department of Management Information Systems, Faculty of Economics & Administrative Sciences, Al-Zaytoonah University of Jordan, as assistant professor. She is interested in Coding Theory, Computer Vision and Artificial Intelligence.
2 Department of Computer Science and Engg, Ch. Devi Lal University, Sirsa, Haryana 125 055, India
the context of information systems. Supported by their historical perspective, they elaborated on policies, their implementation and strategic management in those universities. The historical perspective gives a bird's eye view of the evolution of the university in the ICT field, which can be quite revealing. A retrospective analysis of this perspective may be useful in documenting key changes over time. With this, the SWOT analysis may provide directions to assist in making decisions and strategies about the relative merits of different activities in the ICT universities.

In order to undertake a SWOT analysis with rigour, an essential prerequisite is that the primary data should be collected through persons who have a deep understanding of the organization, including its historical perspective. This enables one to identify its strengths, weaknesses and opportunities, as well as to gain a sound understanding of the internal and external environment, which may act positively as opportunities or negatively as harmful effects and threats [8].

1.2. National Mission on Education through ICT

One of the most crucial challenges facing Indian higher education is its quality, for which the Government of India has an ambitious goal for the eleventh plan. Recently, the Government of India, through the Ministry of Human Resource Development (MHRD), has developed a holistic approach in the National Mission on Education through ICT (NMEICT) [15]. NMEICT has brought out a document, which had already been triggered during the period of the tenth plan (phase I). As per its strategy, its future vision, planning and developmental activities will form phase II and phase III during the eleventh five-year plan period.

country with a view to achieving a critical mass of skilled human resources/researchers in any given field.

These guiding principles are expected to lead to various important steps in planning and implementation, as follows:

• ICT technology should reach each learner.
• Generation of quality e-content and question banks as module-based learning.
• Development of interface modules for physically challenged learners.
• Facility of Geographical Information System (GIS) for planning up to the village level.
• Improvement in course curricula and teacher training programs.
• Efficient and effective knowledge transfer to the learner with proper interaction.
• Voice over Internet Protocol (VoIP) supported communication between learner and teacher.
• Enterprise Resource Planning (ERP) and e-governance for education, coordination and synergy in implementation of the policies, setting up of virtual laboratories, and support for the creation of virtual technical universities.
• Performance optimization of e-resources.
• Certification of attainments of any kind at any level.

All these factors are supposed to contribute towards the SWOT analysis of any higher educational institution. In the present paper the SWOT analysis will be carried out through a 2x2 matrix worksheet, as given in Table I, of ICT in six universities of the western Himalayan region of India.

Table I

The paper is organized in five sections as follows. Having presented the introduction in Section 1, the SWOT analysis of the technical universities is presented in Section 2. Section 3 deals with the SWOT analysis of the agriculture and horticulture universities. Section 4 is devoted to the regular multi-faculty universities and their SWOT analysis. The conclusion is given in the final section.

2. Technical Universities:

There are two technical universities among the six selected from the western Himalayan region. One is J.P. University of Information Technology, Solan, and the other is the National Institute of Technology, Hamirpur (Deemed University).

2.1. Historical perspective:

The historical perspectives of J.P. University and NIT Hamirpur are presented in Tables III A and III B respectively.

Table III A. Historical Perspective of J.P. University

Faculty                  Department                       IT Course      Year
Faculty of Engineering   Computer Science & Engg,         B.Tech         2002
                         Information Technology,          M.Tech, Ph.D   2006
                         Electronics and Communication,
                         Civil, Bio-Informatics

Both of these technical universities have a single faculty of engineering with different engineering departments, including computer science and electronics and communication, and have been running B.Tech, M.Tech and Ph.D programs.

J.P. University in addition has departments of Information Technology and Bio-informatics. These programs have been running in the respective departments since 2002, whereas in NIT Hamirpur the computer centre was established in 1986, the department of electronics and communication was started in 1988 and the computer science department was set up in 1989. The department of management came into existence in the year 2008.

In general, at both universities the courses in computer science are compulsory for all students belonging to the different branches of engineering, in order to learn ICT skills. But emphasis on the ICT programs is given through the curriculum in the departments of computer science, electronics and communication, information technology and bio-informatics. The department of management in NIT Hamirpur has also been conducting courses on information systems.

2.2 SWOT Analysis:

The SWOT analysis of these two technical universities (J.P. University and NIT Hamirpur) is given in Tables IV A and IV B.

External Opportunities: Effective interaction with alumni and industries. Collaboration with universities and industries. Entrepreneurship programs. Solutions to environmental disasters. Development of skilled professionals.

External Threats: Government policy and norms. Growing digital divide. Threat from other private/foreign university players.

Table IV B. SWOT Analysis of NIT Hamirpur

Internal strengths:
Vision - Effective leadership & ICT governance.
ICT Infrastructure - LAN (wired and wireless), effective website, Internet security, video conferencing, IP telephony within campus, maintenance of networks, informative website, IS, ERP, alumni portal/association, ICT technical staff, maintenance, ICT budget, e-library, e-content, well qualified faculty.
Activities - Problem based teaching & learning, greater industry interaction.
Performance - Research collaboration, actual placement, training for faculties.

Internal weaknesses:
ICT Infrastructure - Lack of redundancy feature in the campus wide network backbone and firewall.

Opportunities:
More ICT based training programs for professionals. More effective contact with alumni. Development of new ICT tools for teaching and e-learning.

Threats:
Threat from security, government policy, and private/foreign universities.

Strengths: As per our general framework, the vision, ICT planning and the various initiatives belonging to tier I are reflected through good ICT infrastructure, networking, internet security, information system/ERP, e-library system, effective websites and video conferencing facilities. Both universities have good maintenance of networks and computers, close academic-industry interaction, e-placement and alumni associations/portals. It is interesting to point out that NIT Hamirpur has an edge over J.P. University because of:

a) the availability of the best financial resources, from the Government of India (GOI);
b) NIT Hamirpur draws better faculty and technical staff in sufficient numbers, who are encouraged to improve their technical skills and qualifications at the national level through quality improvement programs organized by AICTE;
c) NIT Hamirpur has better facilities for IP telephony, wireless networking and student counselling;
d) NIT Hamirpur organizes application and training/extension programs using its ICT facilities for the faculty of different universities and engineering colleges;
e) better ICT facilities, including a concentration on problem-based learning, make the teaching programs better, improving the quality of the students.

Both universities, owing to greater industry interaction, have better opportunities of placement for outgoing students at the national and international level. Nevertheless, J.P. University encourages students to join the J.P. group itself. In overall performance, NIT Hamirpur has an edge over the other.

Weaknesses: Both technical universities lack campus wide network backbone redundancy features. NIT Hamirpur also lacks redundancy of its firewall. J.P. University is supposed to be weak in student-teacher interaction due to the larger number of students in the classroom as compared to NIT Hamirpur, and it also lacks mobile computing, IP telephony and video conferencing.

Opportunities: Opportunities can be divided into two groups: one coming from internal factors and another from external/environmental factors. Most of the internal weaknesses of an institution can become attractive opportunities, manifested mainly in the form of performance.

In this respect, the redundancy feature of the campus wide network backbone and close contact with the alumni, which J.P. University may be able to handle in a better way, are some of the opportunities for both of them. In particular, it is the alumni who always prove very helpful to the organizations and in providing placement to outgoing students. These technical universities also have a greater responsibility towards finding solutions to environmental disasters and developing skilled professionals along with developing entrepreneurs.

Threats: Being financially sound, there is no internal threat as such at the level of ICT infrastructure and activities. At the performance level both the institutions
IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 1, No. 3, January 2010 67
www.IJCSI.org
4. Regular Multi-faculty Universities:

There are two universities in this category: Himachal Pradesh University, Shimla, and Jammu University, Jammu.

4.1 Historical perspective

The historical perspectives of these two universities are given in Tables VII A and VII B respectively.

Table VII A. Historical Perspective of H. P. University, Shimla

Table VII B. Historical Perspective of Jammu University, Jammu

Faculty            Department           Courses                       Year
Physical Science   Computer Centre      (Central facility)            1987
Physical Science   Computer Science     Diploma in Computer Science   1987
Physical Science   Computer Science     MCA                           1995
Physical Science   Computer Science     Ph.D
Management         Management Studies   PG Diploma in Mgt Studies
Management         Management Studies   MBA

Himachal Pradesh University, Shimla started functioning in 1971 and has been running various PG programs in more than a dozen faculties, including the faculty of agriculture and horticulture/forestry. As a result of trifurcation in the year 1977, a full-fledged agriculture university at Palampur and a horticulture university at Nauni, Solan, came into existence in the state of Himachal Pradesh. Presently, Himachal Pradesh University has more than 30 teaching departments on its campus.

The computer centre at Himachal Pradesh University, Shimla was established in 1987, under the faculty of physical science, with a diploma course in computer applications. In the year 1989, the MCA program was started and the DCA was upgraded to a PG diploma in computer applications (PGDCA). In the year 2004, VSAT connectivity (512 kbps) was installed, with a bouquet of more than 4000 e-journals for the academic community of the university. This facility was centrally located on the university campus. In the year 2007, terabyte optical fibre backbone connectivity was commissioned. All the teaching faculty members, various teaching laboratories and administrative officers got the internet facility right in their offices, and the total number of users on campus became 810. This internet facility provided access to all e-journals through INFLIBNET for the teachers and researchers of this university. In the year 2008, the 512 Kbps connectivity was upgraded to a 2 Mbps (1:1) leased line. The ICT infrastructure developed on the campus is being used by the whole of the university.

The B.Tech program in information technology was started in the year 2000, with the University Institute of Information Technology. The MBA (Information System) was triggered in the year 2002 under the faculty of commerce and management. The M.Tech in computer science began in the year 2006, under the faculty of physical sciences.

The University of Jammu came into existence in 1969. Its computer centre was established in the year 1987. The university optical fibre backbone was established in 2003, and in the year 2005 a comprehensive website of this university became functional. The DCA program was launched in 1987, followed by the MCA program in 1995, under the department of computer science. The department of management studies has been conducting the MBA (IT) PG program on the campus. It is pertinent to mention that the whole of the academic community is benefiting from these ICT facilities.

4.2 SWOT Analysis:

Himachal Pradesh University and Jammu University are state universities of the states of Himachal Pradesh and Jammu & Kashmir respectively in the western Himalayan region. As a result, they have constraints on financial resources. Both are multi-faculty universities and are working in diverse disciplines. The SWOT analysis of these two universities is given in Tables VIII A and VIII B.

Table VIII A. SWOT Analysis of H. P. University, Shimla

Internal strengths:
Vision - ICT planning.
ICT Infrastructure - LAN facility, Internet and firewall security, redundancy feature in OFB.
Activities - Training/Extension programs (ASC).
Performance - Research using ICT.

Internal weaknesses:
Vision - ICT oriented leadership.
ICT Infrastructure - Non-availability of web server/mail server/e-content delivery system/DNS services; redundancy feature in firewall; videoconferencing; IP telephony; mobile computing (wireless); ERP and e-governance; ICT technologies for teaching; qualified & sufficient technical staff; maintenance of ICT infrastructure; ICT budget; e-library system/automation/e-contents.
Activities - E-placement and alumni association; problem based learning/teaching approach; collaboration with other universities; synergy with multi-disciplinary activities; training of faculty/skilled professionals; problem oriented education.

Opportunities:
Collaboration with industries. Entrepreneurship. Solution of ICT disaster. Close contact with alumni.

Threats:
Migration of students to other universities. Presence of private/foreign universities. Threat from government policies.

Table VIII B. SWOT Analysis of Jammu University, Jammu

Internal strengths:
Vision - ICT planning.
ICT Infrastructure - LAN facility, Internet and firewall security, IP telephony, mobile computing.
Impact of ICT - research and placement.
Activities - Extension programs (ASC).
Performance - Research, industry-university interaction.

Internal weaknesses:
Vision - Motivating leadership.
ICT Infrastructure - Redundancy feature in backbone and firewall; video conference facility; IS/ERP; ICT technologies in teaching; ICT technical staff; e-library system; e-content; ICT support system.
Activities - Research performance; problem oriented training/faculty.

Opportunities:
Close contact with alumni. University-industry interaction. Collaboration with foreign universities.

Threats:
Wi-Fi is not secured. Threat from other private players. Threat from government policies.

Strengths: In vision and planning, Himachal Pradesh University has a little advantage over Jammu University; in ICT infrastructure, networking and security, both universities are at par. In addition, Jammu University has its own web server, mail server and DNS facility, better mobile computing, and more internet bandwidth as compared to Himachal Pradesh University.

Weaknesses: Himachal Pradesh University lacks the facilities of a web server, mail server, DNS, and video conferencing. Mobile computing facilities, ERP, e-content and e-governance are not available in either university. They lack sufficient ICT technologies for teaching, e-library, e-placement, ICT support systems, maintenance of computers, and sufficient well-qualified teaching faculty. Nevertheless, Jammu University has a certain advantage over Himachal Pradesh University in respect of video conferencing and better bandwidth connectivity.

Opportunities: The weaknesses in respect of ICT are to be converted into opportunities in infrastructure and activities. The most crucial step is to adopt problem-solving orientation in learning as an internal factor, and to face the challenges due to external factors through collaboration with other universities and industries and the development of ICT applications at the advanced and professional level. Establishment of close contact with the alumni will also be helpful for both universities.

Threats: Threats come from government policies and from private & foreign universities. ICT security is the major threat in both universities.

5. Conclusions and Suggestions

We have presented a comparative SWOT analysis in respect of ICT of six universities placed in three categories, supported by their historical perspectives. This has been done within the four-tier framework of ICT [15] and on the basis of primary data/feedback obtained from the different universities. The findings of this paper are along the lines of those of the National Assessment and Accreditation Council (NAAC), an autonomous body of the University Grants Commission, as far as the regular multi-faculty universities are concerned. ICT activities have a crucial role to play, as per the NMEICT directions/policies to be adopted by the universities, in order to achieve quality and excellence in the higher education system of the region.

On the basis of this SWOT analysis, answers to some of the glaring questions regarding ICT ingredients may be briefly mentioned as follows:
[10] Mintzberg, H. (1994), The Rise and Fall of Strategic Planning, New York: Prentice Hall.
[11] Narayana Murthy, N.R. (2009), Tribune India, November 16, http://www.tribuneindia.com/2009/20091117/j&k.htm#1
[12] NAAC (2007), National Assessment & Accreditation Council, New Methodology of Assessment & Accreditation.
[13] NMEICT (2009), National Mission on Education through ICT, www.sakshat.ac.in and http://www.education.nic.in/dl/MissionDocument.pdf
[14] Pearce, J. A. and Robinson, R. B. (1997), Strategic Management: Formulation, Implementation and Control, 6th Edition, Chicago: Irwin.
[15] Sharma, D. and Singh, V. (2009), ICT in Universities of the Western Himalayan Region of India: Initiative, Status and Performance - An Assessment, International Journal of Computer Science Issues, Vol. 6, No. 2, pp. 44-52.
[16] Weihrich, H. (1982), "The TOWS Matrix - A Tool for Situational Analysis," Long Range Planning, 15(2), pp. 54-66.

(India), since 2004 onwards. Earlier he worked with Kurukshetra University, Kurukshetra. His areas of interest are computer networks, e-Governance and system simulation tools. He has more than 17 years of teaching/research experience, with more than 30 publications in international and national journals/conference proceedings, along with three books on the subject.
2
Research Scholar, Dr. MGR University. Working with Dept. of ISE, BMSCE, Bangalore.
Member, Multimedia Research Group, Research Centre, DSI,
Bangalore, India
Ref. [3] proposed GWQ (Global Waiting Queue), which reduces the initial startup delay by sharing the videos in a distributed, loosely coupled VoD system, balancing the load between the lightly loaded and heavily loaded proxy servers. Whenever the local server is busy, the request is serviced from a remote server; this introduces additional network traffic flowing from the remote servers. They replicated the videos evenly in all the servers, for which the storage capacity of each individual proxy server must be very large, which may not allow each server to store replicas of a larger number of videos. Our proposed scheme replicates only regionally (locally and globally) popular videos, using the dynamic buffer allocation algorithm [2], thereby utilizing the proxy server storage space more efficiently to store replicas of more videos.

In [4], Gonzalez, Navarro and Zapata proposed a more realistic partial replication and load sharing algorithm, PRLS, to distribute the load in a distributed VoD system. They demonstrated that their algorithm maintains a small initial startup delay using servers of smaller storage capacity by allowing partial replication of the videos, storing the locally requested videos in each server. Our work differs by caching the initial portion of the video as prefix-1 at the proxy and the next part of the video as prefix-2 at the tracker, based on local and global popularity, using the dynamic buffer allocation algorithm [2].

S.-H. Gary Chan and Fouad Tobagi in [7] consider the exchange of cached contents with the neighboring proxy server without any coordinator. Our approach differs in that we form a group of proxy servers with a coordinator (tracker) to make the sharing of videos more efficient. Another approach to reduce the aggregated transmission cost has been discussed in [6], by caching the prefix and the prefix of the suffix at the proxy and the client respectively. Since the clients are not trustable, and can fail or leave the network at any time without notice, they adopted an additional mechanism to verify the client and the data cached at the client, which increases the overhead of such verification. Both the searching of the video in the whole cluster of proxy servers and the verification process increase the client's waiting time.

So, in order to minimize the client waiting time and the network traffic in the VoD system, in this paper we present a novel 3-layer architecture of distributed proxy servers for serving videos, with a target of optimizing the client waiting time. This architecture consists of a main multimedia server (MMS), which is very far away from the user and is connected to a set of trackers (TRs). Each tracker is in turn connected to a group of proxy servers (PSs), and these proxy servers are assumed to be interconnected in a ring pattern; this arrangement of a cluster of proxy servers is called a Local Proxy Servers Group (LPSG, denoted Lp). Each such LPSG, which is connected to the MMS, is in turn connected to its left and right neighboring LPSGs in a ring fashion through its tracker.

We also propose an efficient regional popularity based prefix caching and load sharing algorithm (RPPCL). This algorithm efficiently allocates the cache blocks to the videos according to their local popularity and also shares the videos present among the PSs of the LPSG. Hence our approach increases the video hit rate and reduces the client waiting time and the network usage on the MMS-PS path.

The main aim of arranging the group of proxy servers in the form of an LPSG is to provide the following advantages:

• Reduced client waiting time: replicating the videos at the PSs of Lp based on their local popularity, and sharing these videos among the PSs of Lp, can provide the service to the clients immediately as they request.
• Increased aggregate storage space: by distributing a large number of videos across the PSs and TR of Lp, a high cache hit rate can be achieved. For example, if 10 PSs within an LPSG managed 500 Mbytes each, the total space available is 5 GB; 200 proxies of LPSGs could store about 100 GB of movies.
• Load reduction: replication of the videos among the PSs of Lp based on their regional popularity allows more clients to be serviced from Lp. This reduces the communication with the main multimedia server and in turn its load.
• Scalability: by adding more PSs, the capacity of the system can be expanded. Interconnected TRs increase the system throughput.

The organization of the rest of the paper is as follows. In Section 3 we present a model of the problem; Section 4 describes the proposed approach and algorithm in detail; in Section 5 we present a simulation model; Section 6 presents the simulation results and a comparison of the RPPCL, GWQ and PRLS algorithms; finally, in Section 7 we conclude the paper and refer to further work.

3. Stochastic Model of the Problem

Let N be a stochastic variable representing the group of videos. It may take different values for the videos Vi (i = 1, 2, ..., N), and the probability of the video Vi being requested is p(Vi). Let the set of values p(Vi) be the probability mass function. Since the variable must take
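The simulation model in Section 5 assumes that requests follow a Zipf-like distribution, so the probability mass function p(Vi) above can be sketched as follows. This is an illustrative sketch only: the skew parameter `theta` and the function name are assumptions, not values taken from the paper.

```python
def zipf_pmf(n_videos, theta=0.73):
    """Return [p(V1), ..., p(Vn)]: a Zipf-like pmf over videos ranked
    by popularity, with p(Vi) proportional to 1 / i**theta.
    `theta` is an illustrative skew parameter, not from the paper."""
    weights = [1.0 / (i ** theta) for i in range(1, n_videos + 1)]
    total = sum(weights)
    # Normalize so the probabilities sum to 1 (a valid pmf).
    return [w / total for w in weights]

pmf = zipf_pmf(100)
```

Under such a distribution the most popular videos account for most requests, which is what makes prefix caching of regionally popular videos at the PSs effective.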
Proposed algorithm

When there is a request for a video Vreq at a particular proxy PSq of Lp, do the following:

if (Vreq ∈ PSq)
    (pref-1)Vreq is streamed immediately to the user
    wtVreq = wt(p-1)Vreq
    (y(p-u) = time required to stream (pref-1) from proxy to user)
else pass the request to the TR(Lp):
    if (Vreq ∈ PS(Lp))
        if (PS(Lp) is the left or right NBR(PSq))
            SMTR streams (pref-1)Vreq from NBR(PSq), (pref-2)Vreq from its cache, and the remaining portion from MMS
            wtVreq = wt(p-1)Vreq
            (y(p-p)+(p-u) = time required to stream (pref-1) from proxy to proxy and proxy to user)
        else
            SMTR streams (pref-1)Vreq from OTR(PSq), (pref-2)Vreq from its cache, and the remaining portion from MMS to the user through PSq using the optimal path found
            wtVreq = wt(p-1)Vreq
            (y(p-p)+(p-u) = time required to stream (pref-1) from proxy to proxy and proxy to user)
    else pass the request to the left or right TR(NBR(Lp)):
        if (Vreq ∈ NBR(Lp))
            TR(NBR(Lp)) streams Vreq from NBR(Lp) to the user through TR(Lp)
            wtVreq = wt[(p-1)+(p-2)]Vreq
            (y(t-t)+(t-p)+(p-u) = time required to stream (pref-1) from tracker to tracker, tracker to proxy, and proxy to user)
        else
            TR(Lp) downloads the complete Vreq from MMS and streams it to the user
            wtVreq = wt(S)Vreq
            (y(s-t)+(t-p)+(p-u) = time required to stream from MMS to TR, TR to PS, and PS to user)
            It also caches (pref-1) and (pref-2) of Vreq at PSq using the dynamic buffer allocation algorithm [2].
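The tiered lookup above can be sketched as follows. The function and parameter names (`locate_video`, `local_cache`, `lpsg_index`, `neighbour_lpsg_index`) are hypothetical stand-ins for the state held by the paper's SM/IM/VDM modules; the streaming, optimal-path selection, and waiting-time bookkeeping are abstracted away.

```python
def locate_video(v_req, local_cache, lpsg_index, neighbour_lpsg_index):
    """Return the tier that serves a request, cheapest path first.
    The four return values mirror the algorithm's four branches."""
    if v_req in local_cache:            # pref-1 cached at the local proxy
        return "PSq"
    if v_req in lpsg_index:             # held by some proxy of the same LPSG
        return "Lp"
    if v_req in neighbour_lpsg_index:   # held by a neighbouring LPSG
        return "NBR(Lp)"
    return "MMS"                        # fall back to the main multimedia server

# Each successive tier implies a longer streaming path, hence a larger
# startup delay, matching the waiting-time terms in the algorithm:
DELAY_ORDER = ["PSq", "Lp", "NBR(Lp)", "MMS"]
```

The essential design point the sketch captures is that the request escalates outward (local proxy, LPSG, neighbouring LPSG, MMS) and only reaches the MMS when every cheaper tier misses.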
corresponding entry is updated in its database at the TR. Whenever a client at PSq wishes to play a video Vi, it first sends a request to its parent proxy PSq; the SMPSq immediately starts streaming the (pref-1) of the requested video to the client if it is present in its cache, so the waiting time is almost negligible. It also informs the SMTR to initiate the streaming of (pref-2) of Vi, and the IMTR then coordinates with the MMS to download the remaining portion (S-(pref-1)-(pref-2))Vi of the video Vi.

If it is not present in its cache, the IMPSq forwards the request to its parent TR; the VDM at the TR searches its database using perfect hashing to see whether the video is present in any of the PSs of that Lp. If Vi is present in any of the PSs of that Lp, the VDM checks whether the PS in which Vi was found is a neighbor of the requesting PSq [NBR(PSq)]. If so, the VDM intimates the same to the SMTR, which initiates the streaming of (pref-1)Vi from that NBR(PSq) and (pref-2)Vi from its own cache to the requesting PSq, and the same is intimated to the requesting PSq. The IMTR then coordinates with the MMS to download the remaining portion (S-(pref-1)-(pref-2))Vi, and hence the client waiting time is very small.

Otherwise, if it is not in NBR(PSq) but is present in more than one PS of Lp, the SMTR selects one PS such that the path from the selected PS to PSq is optimal, and initiates the streaming of (pref-1)Vi from the selected PS and (pref-2)Vi from its cache to the requesting PSq through the optimal path found by the SMTR; the same is intimated to the requesting PSq. The client waiting time here is relatively higher, but acceptable with high QoS.

If Vi is not present in any of the PSs of that Lp, the IMTR passes the request to the tracker of NBR(Lp). The VDM(NBR(Lp)) then checks its database using perfect hashing to see whether Vi is present in any of the PSs of its Lp. If it is present in one or more PSs, the SM(NBR(Lp)) selects the optimal streaming path from the selected PS(NBR(Lp)) to the requesting PSq and intimates the same to the IM(Lp). The SM(Lp) in turn initiates the streaming of Vi to the requesting PSq through the optimal path, and the same is intimated to the requesting PSq. The client waiting time is comparatively high but acceptable, because it bypasses the downloading of the complete video from the MMS over the MMS-PS WAN bandwidth.

If Vi is not present in any of the PSs of its NBR(Lp) either, the TR(Lp) modules decide to download Vi from the MMS to PSq. The IMTR coordinates with the MMS to download Vi, and hence the waiting time is very high; but the probability of downloading the complete video from the MMS is very low, as shown by our simulation results.

Whenever sufficient buffer and bandwidth are not available in the above operations, the user request is rejected.

5. Simulation Model

In our simulation model we have a single MMS and a group of 6 TRs. All these TRs are interconnected among themselves in a ring fashion. Each TR is in turn connected to a set of 6 PSs, and these PSs are again interconnected among themselves in a ring fashion. To each PS, 25 clients are connected. We use the video hit ratio (VHR), the average client waiting time y, and the network usage as parameters to measure the performance of our proposed approach, comparing the results of the RPPCL, GWQ and PRLS algorithms. In addition, we also use the WAN bandwidth usage on the MMS-PS path and the probability of accessing the main server as performance metrics.

Table 1: Simulation Values

Notation   System Parameter            Value
S          Video size                  25 to 1120 min
CMMS       Cache size (MMS)            2000 blocks
CTR        Cache size (TR)             800 (40%)
CPS        Cache size (PS)             300 (15%)
λ          Mean request arrival rate   45 reqs/hr

We assume that the request distribution of the videos follows a Zipf-like distribution. The user request rate at each PS is 35-50 requests per hour. The ratio of cache sizes at the different elements MMS, TR and PS is set to CMMS : CTR : CPS = 10:4:2. The transmission delay between the proxy and the client, proxy to proxy, and TR to PS is 120 sec; the transmission delay between the main server and the proxy is 480 to 600 sec; the transmission delay between tracker and tracker is 240 sec; and the size of the cached [(pref-1)+(pref-2)] video is 280 MB to 1120 MB (25 min to 1 hr), in proportion to its popularity.

6. Simulation Results

The simulation results presented below are an average of several simulations conducted on the model. Our main focus was to minimize the client waiting time by exploiting load sharing among the PSs of Lp. Fig. 9 shows the total number of requests served from the system: the average number of requests served immediately at PSq is 51%, the average number of requests served from (Lp+NBR[Lp]) is 34%, and the
average number of requests served from the MMS is only 15%, which is very small. The corresponding average waiting times required for serving (pref-1) immediately from the PS, from other PSs of Lp (Lp+NBR[Lp]), and from the MMS are shown in Fig. 5.

As the (pref-1) of the most frequently requested videos has been cached and streamed from the PSq of Lp and NBR[Lp], with the cooperation of the various modules of the PSs and the coordination of the modules of the TR of Lp, our scheme has achieved a very high video hit ratio (86%), as shown in Fig. 6. So the local and global popularity based replication of the most frequently accessed videos at the respective

The MMS has been contacted for very few videos (15-25% of the videos), when Vi is neither present in that Lp nor in NBR(Lp). Even though the initial startup delay and transmission cost then seem to be higher, this is acceptable because on average (pref-1) and (pref-2) of
client waiting time for the videos requested at PSq, [4] S. González, A. Navarro, J. López and E.L. Zapata, “Load
average network traffic of the system, and also the load of Sharing in Distributed VoD Systems”, Int'l Conf. on
MMS by the regional popularity based replication of most Advances in Infrastructure for e- Education, e-Science, and
popular videos at appropriate PSs of Lp. And sharing of e-Medicine on the Internet (SSGRR 2002w), L'Aquila, Italy,
January 21-27, 2002.
these videos among the proxies of the system with the
[5] Yuewei Wang, Zhi-Li Zhang, David H.C. Du, and Dongli Su
“A Network-Conscious Approach to End-to-End Video
Delivery over Wide Area Networks Using Proxy Servers”,
IEEE INFOCOM, pp 660-667, April 1998.
[6] Alan T.S lp, Jiangchuan Liu, John C.S.Lui COPACC: An
Architectureof cooperative proxy-client caching System for
On-Demand Media Streaming. A technical report.
[7] S.-H. Gary Chan, Fouad Tobagi, “distributed Servers
Architecture for Networked Video Services”, IEEE.
Transactions on networking, vol.9, No. 2, Aprol.
[8] S. Acharya and B. C. Smith, “Middleman: A video caching
proxy server”. in Proc. of NOSSDAV, June 2000.
[9] A. Feldmann, R. Caceres, “Performance of Web Proxy
Caching in Heterogeneous Bandwidth Environments”, In
Proc. Of IEEE INFOCOM ’99, March 1999.
[10]P. A. Chou and Z. Miao. Rate-distortion optimized streaming
of packetized media. Technical Report MSR-TR-2001-35,
Microsoft Research Center, February 2001.
[11]Dr.Mahmood Ashraf Khan, Prf.Go-Hasegawa, Yoshiaki
Taniguchi “QoS Multimedia Network Architecture” murata
Laboratory, Osaka University, Japan.
[12] Lian Shen, Wei Tu, and Eckehard Steinbach “A Flexible
Starting Point based Partial Caching Algorithm For Video
On Demand”, 1-4244-1017-7/07@2007 IEEE.
[12] B. Wang, S. Sen, M. adler, and D. Towsley, “ Optmal Proxy
Cache Allocation For Efficient Streaming Media
Distribution,” In IEEE INFOCOM, june 2002.IEEE.
References
[1] Bing Wang, Subhabrata Sen, Micah Adler and don Towsley
“Optmal Proxy cache Alloction for Efficient Streaming
Media Distribution” IEEE Transaction on multimedia, vol. 6,
No. 2, April 2004.
[2] H.S.Guruprasad,M Dakshayini et. el “Dynamic Buffer
Allocation for VoD System Based on Popularity”
proceedings of NCIICT 2006, PSG College of Technology,
Coibatore, 13- 17, WWW. psgtech. edu/ NCIICT/ files/
NCIICT06.
[3] Y.C Tay and HweeHwa Pang, “Load Sharing in Distributed
Multimedia-On-Demand Systems”, IEEE Transactions on
Knowledge and data Engineering, Vol.12, No.3, May/June
2000.
3
Department of Electrical & Electronics Engineering
Pondicherry Engineering College
Pondicherry, India
…the calibration parameter that is typically different for each MCS. The procedure used for link adaptation with EESM is listed herewith.

2. EESM for Unequal Constellation in OFDM Block

In wavelet analysis we represent low-frequency information by the approximation coefficients cAn, and high-pass spatial-frequency data (horizontal, vertical, and diagonal detail) by cHn, cVn, and cDn. To achieve a target SNR, we allocate the approximation coefficients a low modulation order and the detail coefficients a high modulation order, based on the channel-estimation feedback message from the receiver. The EESM method has been identified as one of the fast link-adaptation techniques for multicarrier-based systems. For fast link adaptation of multimedia transmission, where the audio and video are transmitted in different constellations on the same OFDM block, the EESM method has been modified and used for performance prediction (e.g., the FER metric) under the current channel conditions. In the subcarrier block, two modulation orders are taken in such a way that the low-frequency components (audio) are transmitted with the low modulation order and the high-frequency components (video) with the high modulation order. This method maps a set of per-subcarrier SNRs {γ1, …, γN2} to a single effective SNR (SNReff):

$$\gamma_{\mathrm{eff}} = -\beta \,\ln\!\left[\frac{1}{N_1+N_2}\left(\sum_{i=1}^{N_1} e^{-\gamma_i/\beta_1} + \sum_{i=N_1+1}^{N_2} e^{-\gamma_i/\beta_2}\right)\right] \qquad (1)$$

[Fig. 2. Instantaneous SNReff vs. β1 and SNReff vs. β2 curves (4QAM and 16QAM) with the corresponding quadratic approximations.]

3. Vertical Shift Method

To enable the method described above, the quadratic approximations of the SNReff vs. β1 and SNReff vs. β2 curves would have to be sent potentially as often as every frame, in order to track changes in the SNR due to fading. This represents less feedback than sending the entire channel response {γ1, …, γN}, but is still a significant amount of feedback.
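The mapping in Eq. (1) is straightforward to compute. The sketch below is an illustrative implementation under the paper's notation; the SNR values and β parameters used in the example are made-up inputs, not values from the paper:

```python
import math

def eesm_effective_snr(snrs_low, snrs_high, beta, beta1, beta2):
    """Map per-subcarrier SNRs (linear scale) to one effective SNR, Eq. (1).

    snrs_low  -- SNRs of the N1 subcarriers carrying the low-order
                 constellation (audio / approximation coefficients)
    snrs_high -- SNRs of the N2 subcarriers carrying the high-order
                 constellation (video / detail coefficients)
    beta, beta1, beta2 -- per-MCS calibration parameters
    """
    n_total = len(snrs_low) + len(snrs_high)
    acc = sum(math.exp(-g / beta1) for g in snrs_low)
    acc += sum(math.exp(-g / beta2) for g in snrs_high)
    return -beta * math.log(acc / n_total)

# On a flat channel with beta == beta1 == beta2, the mapping returns
# the common per-subcarrier SNR itself, which is a quick sanity check.
flat = eesm_effective_snr([4.0] * 8, [4.0] * 8, 2.0, 2.0, 2.0)
```

Because the exponential average weights deep per-subcarrier fades heavily, the effective SNR of a frequency-selective channel is pulled below its arithmetic mean, which is exactly the pessimism the β calibration compensates for.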
E(·) represents the average and Γ(·) represents the gamma function; Ω/2 is the average power of the signal, and m is named the fading figure, a parameter related to the fading range. In the case of correlated non-identical Nakagami-m fading channels, the PDF of the branch SNR is

$$p_a(\gamma_1) = \frac{1}{2\sqrt{\rho}\,\bar{\gamma}}\left\{\exp\!\left[-\frac{\gamma_1}{(1+\sqrt{\rho})\,\bar{\gamma}}\right] - \exp\!\left[-\frac{\gamma_1}{(1-\sqrt{\rho})\,\bar{\gamma}}\right]\right\}, \quad \gamma_1 \ge 0 \qquad (8)$$

where Iυ(·) denotes the υth-order modified Bessel function, and the power correlation coefficient is

$$\rho = \frac{\operatorname{cov}\!\left(r_1^2, r_2^2\right)}{\sqrt{\operatorname{var}\!\left(r_1^2\right)\operatorname{var}\!\left(r_2^2\right)}}, \quad 0 \le \rho < 1 \qquad (5)$$

with

$$\alpha' = \frac{m(\bar{\gamma}_1+\bar{\gamma}_2)}{2\bar{\gamma}_1\bar{\gamma}_2(1-\rho)}, \qquad \beta' = \frac{m\left[(\bar{\gamma}_1+\bar{\gamma}_2)^2 - 4\bar{\gamma}_1\bar{\gamma}_2(1-\rho)\right]^{1/2}}{2\bar{\gamma}_1\bar{\gamma}_2(1-\rho)} \qquad (3)$$

$$\alpha = \frac{\sigma_1+\sigma_2}{2\sigma_1\sigma_2(1-\rho)}, \qquad \beta^2 = \frac{(\sigma_1-\sigma_2)^2 + 4\sigma_1\sigma_2\rho}{4\sigma_1^2\sigma_2^2(1-\rho)^2} \qquad (6)$$

The MGF of the combined SNR is approximated by

$$M_a(s;\bar{\gamma}_1,\bar{\gamma}_2;m;\rho) \cong M_a(s) = \left[1 - \frac{\bar{\gamma}_1+\bar{\gamma}_2}{m}\,s + \frac{\bar{\gamma}_1\bar{\gamma}_2(1-\rho)}{m^2}\,s^2\right]^{-m}, \quad s \ge 0 \qquad (9)$$
f(θ/R) is the p.d.f. of the detection error, and po(R) is the p.d.f. of the envelope of the fading signal [10]. After approximation [14],

$$P_e = \frac{\Gamma(2m+\tfrac{1}{2})}{\varepsilon_m\sqrt{\pi}\,\Gamma(2m+1)} \cdot \frac{1}{(1-k)} \cdot \left[\frac{1}{2(\bar{\gamma}/m)\sin^2(\pi/m)}\right]^{2m} \qquad (11)$$

Table 1. Simulation parameters for unequal modulation

Guard interval: 800 ns
OFDM symbol duration: 4 µs
Channel bandwidth: 20 MHz
4. Simulation Results

For the above-mentioned parameters, the simulation has been conducted in a Rayleigh channel for unequal modulation.

[Fig. 6. Performance of SNR (dB) vs. BER for 4QAM, 16QAM, and unequal (4QAM and 16QAM) modulation, for different values of ρ.]
2
Asst. Professor/CSE, Anna University,
Chennai-25, India
Abstract
Writing requirements is a two-way process. In this paper we classify Functional Requirement (FR) and Non-Functional Requirement (NFR) statements from Software Requirements Specification (SRS) documents. These are systematically transformed into state charts considering all relevant information. The current paper outlines how test cases can be automatically generated from these state charts; the application of the states yields the different test cases as solutions to a planning problem. The test cases can be used for automated or manual software testing at system level. The paper also presents a method for reduction of the test suite by using mining methods, thereby facilitating mining and knowledge extraction from test cases.
Keywords: SRS, FR, NFR, State model, Test case, Test suite, Mining.

1. Introduction

Our approach is as follows:
i. Generation of classification rules.
ii. Generation of test cases from the UML state machine.
iii. Finally, data mining techniques are applied on the generated test cases in order to further reduce the test suite size.

2. Our Approach

[Figure: the user requirements and a training set are fed to the classifier, whose rule generator produces the classification rules.]

2. Generation of Classification Rules

In the current paper, we provide the Software Requirements Specification to the classifier system. For classification we use Weka. The Weka classifier is initially trained with a training set; later it is provided with the SRS. It classifies the SRS into functional and non-functional requirements by generating classification rules. The classification rules are applied to the SRS to get the FR and NFR. From the NFR we derive the state machine. State machines specify the behaviour of a system/subsystem.

…tester. Once the test data corresponding to a particular predicate are determined, the steps are repeated by selecting the next predicate on the state machine diagram. The process is repeated until all predicates on the state machine diagram have been considered.

3.1. Predicate selection

For selecting a predicate, a traversal of the state diagram is performed using depth-first (DFS) or breadth-first (BFS) traversal, so that every transition is considered for predicate selection. DFS traversal is used here. During traversal, the conditional predicates on each of the transitions are examined. Corresponding to each conditional predicate, test data are generated.

The test data are generated for each predicate corresponding to the true or false values of the conditional predicate, satisfying the prefix path condition.
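The DFS-based predicate selection of Section 3.1 can be sketched as follows. The encoding of the state machine as (source, target, predicate) triples, and the example login machine, are illustrative assumptions rather than the paper's notation:

```python
def collect_predicates(transitions, start):
    """Depth-first traversal of a state machine, gathering the
    conditional predicate attached to every reachable transition."""
    adjacency = {}
    for src, dst, pred in transitions:
        adjacency.setdefault(src, []).append((dst, pred))

    predicates, visited, stack = [], set(), [start]
    while stack:
        state = stack.pop()
        if state in visited:
            continue  # each state's outgoing transitions are examined once
        visited.add(state)
        for dst, pred in adjacency.get(state, []):
            if pred is not None:  # unconditional transitions carry no predicate
                predicates.append(pred)
            stack.append(dst)
    return predicates

# Hypothetical login state machine for illustration.
machine = [
    ("Idle", "Validating", "len(password) > 0"),
    ("Validating", "LoggedIn", "password == stored"),
    ("Validating", "Idle", "attempts < 3"),
    ("LoggedIn", "Idle", None),  # unconditional logout
]
preds = collect_predicates(machine, "Idle")
```

Each predicate returned here would then be handed to the test-data generator, which derives inputs for both its true and false outcomes.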
for many other types of data, such as float, double, array, pointer, and so on. However, the method may not work when the variable assumes only a discrete set of values. Each predicate in a path can be considered to be a constraint. A path will not be traversed for some input data value if the corresponding constraint is not satisfied. If a path P is not traversed for some data value, then we say that a constraint violation has taken place for that data value. We compute the value of F when each input datum is modified by Sxi. If the function F decreases for the modified data, and constraint violation does not occur, then the given data variable and the appropriate direction are selected for minimising F further. Here, the appropriate direction refers to whether we increase or decrease the data variable xi so that F is minimised. We start searching for a minimum with one input variable, while keeping all other input variables constant, until the solution is found (the predicate function becomes negative) or the positive minimum of the predicate function is located. In the latter case, the search continues from this minimum value with the next input variable. Two data values Iin (inside boundary) and Iout (outside boundary) are generated using the search procedure mentioned. These two points are on different sides of the boundary.

For finding these two data points, a series of moves is made in the same direction, determined by the search procedure mentioned above, and the value of F is computed after each move. The size of the step is doubled after each successful move. This makes the search for the test data quick. A successful move is one where the value computed by the predicate function F is reduced. When the minimisation function becomes negative (or zero), the required data values Iin and Iout are noted. These points are refined further to generate a data value which corresponds to a minimum value of the minimisation function along the last processed direction. This refinement is done by reducing the size of the step and comparing the value of F with the previous value. Also, the distance between the data points is minimised by reducing the step size. For each conditional predicate in the state machine diagram, we generate the test data. The generated test data are stored in a file. A test executor can use these test cases later for automatic testing.

The above-said procedure produces a test suite of somewhat smaller size, but we can further reduce the size by using mining techniques.

4. Mining Techniques for Test Suite Reduction

Data mining is the process of extracting patterns from data. As more data are gathered, with the amount of data doubling every three years, data mining is becoming an increasingly important tool to transform these data into information. It is commonly used in a wide range of profiling practices, such as marketing, surveillance, fraud detection, and scientific discovery.

While data mining can be used to uncover patterns in data samples, it is important to be aware that the use of non-representative samples of data may produce results that are not indicative of the domain. Similarly, data mining will not find patterns that may be present in the domain if those patterns are not present in the sample being "mined". There is a tendency for insufficiently knowledgeable "consumers" of the results to attribute "magical abilities" to data mining, treating the technique as a sort of all-seeing crystal ball. Like any other tool, it only functions in conjunction with the appropriate raw material: in this case, indicative and representative data that the user must first collect. Further, the discovery of a particular pattern in a particular set of data does not necessarily mean that the pattern is representative of the whole population from which the data was drawn. Hence, an important part of the process is the verification and validation of patterns on other samples of data.

The term data mining has also been used in a related but negative sense, to mean the deliberate searching for apparent but not necessarily representative patterns in large amounts of data. To avoid confusion with the other sense, the terms data dredging and data snooping are often used. Note, however, that dredging and snooping can be (and sometimes are) used as exploratory tools when developing and clarifying hypotheses [6].

4.1. Applying data mining concepts

There are many methods available for mining different kinds of data, including association rules, characterization, classification, clustering, etc. We can utilize any of these techniques based on:

• What kind of databases to work on
• What kind of knowledge is to be mined
• What kind of techniques are to be utilized

We can apply association or clustering techniques for test case mining.
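The Iin/Iout boundary search with step doubling described above can be sketched as follows. The predicate function F, its starting point, and the step size are illustrative assumptions, not values from the paper:

```python
def boundary_search(f, x0, step=1.0, max_iter=100):
    """Search along one input variable for the boundary where the
    predicate function F crosses zero.

    Returns (i_in, i_out): the last point with F > 0 and the first
    point with F <= 0, i.e. values on opposite sides of the boundary.
    """
    # Pick the direction in which F decreases.
    direction = 1 if f(x0 + step) < f(x0) else -1
    x = x0
    for _ in range(max_iter):
        nxt = x + direction * step
        if f(nxt) <= 0:        # minimisation function became non-positive
            return x, nxt      # (Iin, Iout) straddle the boundary
        if f(nxt) < f(x):      # successful move: double the step size
            x, step = nxt, step * 2
        else:                  # unsuccessful move: shrink the step instead
            step /= 2
    raise ValueError("no boundary found within max_iter moves")

# Example predicate F(x) = 10 - x: the boundary lies at x = 10.
i_in, i_out = boundary_search(lambda x: 10 - x, 0.0)
```

A refinement pass, as the text describes, would then repeat the search between `i_in` and `i_out` with progressively smaller steps to tighten the pair around the boundary.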
Clustering is the process of grouping data into classes or clusters so that objects within a cluster have high similarity to one another but are dissimilar to objects in other clusters. It doesn't require class label information about the data set, because it is inherently a data-driven approach. It is the process of grouping physical or abstract objects into classes of similar objects.

Among all the mining techniques, clustering is the most effective technique, and it is the one we are going to use for test case mining.

Clustering analysis helps construct a meaningful partitioning of a large set of objects based on a "divide and conquer" methodology, which decomposes a large-scale system into smaller components to simplify design and implementation. As a data mining task, data clustering identifies clusters, or densely populated regions, according to some distance measurement, in large, multidimensional data. Given a large set of multidimensional data points, the data space is usually not uniformly occupied by the data points. Data clustering identifies the sparse and the crowded places, and hence discovers the overall distribution patterns of the data set.

For cluster analysis to work efficiently and effectively, as much of the literature has presented, there are the following typical requirements of clustering in data mining:

o Scalability
o Ability to deal with different types of attributes
o Discovery of clusters with arbitrary shape
o Minimal requirements for domain knowledge to determine input parameters

In this paper, a new approach to automatically generate test cases from the SRS and to mine test cases has been discussed: firstly, a formal transformation of a detailed SRS to a UML state model; secondly, the generation of test cases from the state model; and lastly, the mining of the test cases. The introduction of agents can bring enhancement.

References
1. Zhijie Xu, Laisheng Wang, Jiancheng Luo, and Jianqin Zhang, "A Modified Clustering Algorithm for Data Mining", Beijing 100101, China.
2. Ming-Syan Chen, Jiawei Han, and Philip S. Yu, "Data Mining: An Overview from a Database Perspective".
3. P. Samuel, R. Mall, and A. K. Bothra, "Automatic Test Case Generation Using Unified Modeling Language (UML) State Diagrams", Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur 721302, West Bengal, India. E-mail: philips@cusat.ac.in
4. Dae-Kyoo Kim and Jon Whittle, "Generating UML Models from Domain Patterns", USA.
5. Tao Xie, Jian Pei, and Ahmed E. Hassan, "Mining Software Engineering Data".
6. Supaporn Kansomkeat, "A Comparative Evaluation of Tests Generated from Different UML Diagrams", Department of Computer Science, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90112, Thailand.
7. Sarma M., "System State Model Generation from UML 2.0 Design", Technical Report TR-04-07, Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur, April 2007.
8. Castejon H. N., "Synthesizing State-machine Behavior from UML Collaboration and Use Case Maps", Lecture Notes in Computer Science, Vol. 3530, Springer, June 2005.
9. Gupta A., "Automated Object's Statechart Generation and Testing from Class Method Contracts", 3rd Intl Workshop on
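As an illustration of the clustering-based test-suite reduction discussed in this section, the leader-style sketch below groups test cases by Jaccard similarity of their coverage sets and keeps one representative per cluster. The coverage encoding, the similarity threshold, and the example suite are assumptions for illustration, not the paper's algorithm:

```python
def jaccard(a, b):
    """Jaccard similarity of two coverage sets (1.0 for two empty sets)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def reduce_suite(test_cases, threshold=0.5):
    """Keep one representative test case per cluster of similar cases.

    test_cases -- mapping of test-case name to the set of state-machine
                  transitions (or other coverage items) it exercises
    """
    representatives = {}
    for name, cov in test_cases.items():
        for rep_cov in representatives.values():
            if jaccard(cov, rep_cov) >= threshold:
                break  # near-duplicate of an existing cluster leader: drop it
        else:
            representatives[name] = cov  # dissimilar to all leaders: keep it
    return list(representatives)

suite = {
    "t1": {"a", "b", "c"},
    "t2": {"a", "b"},   # overlaps heavily with t1 -> same cluster
    "t3": {"x", "y"},   # exercises distinct behaviour -> new cluster
}
reduced = reduce_suite(suite)
```

Raising the threshold makes the reduction more conservative (fewer cases merged), which is the usual trade-off between suite size and fault-detection ability.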
6
Department of Computer Science,
N.S.B. College, Nanded, MS, India, 431601
example, a user befriending another, a user buying a DVD, a user tagging a photo, etc.

This study is important today in order to find out the behavior patterns of users connected via social networks. It also helps us to predict the dynamics of the network system. At the same time, this study helps us to talk about the new norms of behavior, technology, influences, or ideas we can discover after analyzing data, by questioning and analyzing: how similar is the behavior of connected users? How similar are the structures of the services and technologies that social network sites have? We can also devise key indicators of social networks from such a study.

A popular social network researcher, Wellman [1], argued that the concept of "social networks" is difficult to define. Usually a social network is defined as relations among people who deem other network members to be important or relevant to them in some way [2]. Using media to develop and maintain social networks is established in practice. Nowadays, this concept is popularly implemented in terms of Internet sites. These websites allow participants to construct a public or semi-public profile within the system and formally articulate their relationship to other users in a way that is visible to anyone who can access their profile [3, 4]. We feel that this definition does not specify the closeness of any given connection, but only that participants are linked in some fashion.

Besides being just gatherings of like-minded people, brought together in cyberspace by shared interests or linkage, Social Network Sites (SNSs) such as MySpace allow individuals to present themselves, articulate their social networks, and establish or maintain connections with others. These sites can be oriented towards work-related contexts like LinkedIn.com, romantic relationship initiation like Friendster.com, connecting those with shared interests such as music or politics, or the college student population like Facebook.com.

The SNS operational style can be online or offline. The online style has no face-to-face communication. The online social network application enables its users to present themselves in an online profile, accumulate "friends" who can post comments on each other's pages, and view each other's profiles. For example, Facebook members can also join virtual groups based on common interests, see what classes they have in common, and learn each other's hobbies, interests, musical tastes, and romantic relationship status through the profiles. Online SNSs support both the maintenance of existing social ties and the formation of new connections. Early research on online communities assumed that individuals using these systems would be connecting with others outside their pre-existing social group or location, liberating them to form communities around shared interests, as opposed to shared geography [2, 5]. This could be interesting for predicting social capital. A hallmark of this early research is the presumption that when online and offline social networks overlapped, the directionality was online to offline: online connections resulted in face-to-face meetings. Although early work on SNSs addressed many interesting findings, we find no traces of social capital. This is because there is little empirical research that addresses whether members use SNSs to maintain existing ties or to form new ones; the social capital implications of these services are unknown.

An SNS like Facebook constitutes a rich site for researchers interested in the affordances of social networks, due to its heavy usage patterns and technological capacities that bridge online and offline connections. Research suggests that Facebook users engage in "searching" for people with whom they have an offline connection more than they "browse" for complete strangers to meet [4]. We believe that this approach in Facebook represents an understudied offline-to-online trend, where it originally, primarily served a geographically bound community (the campus). However, as social capital researchers we have another perspective to look into: findings on how social capital as a place-based community facilitates the generation of social capital.

We are optimistic about finding many social capitals, and if a regression analysis is conducted on them, then it will definitely suggest a strong association between use of SNSs and the type of social capital, with the strongest relationship being to bridging social capital. In addition, we could also find hidden aspects that interact with measures of psychological well-being. This discovery may help in suggesting that social capital and SNSs can provide greater benefits for users experiencing low self-esteem and low life satisfaction. This is because a depressed person has a greater tendency to express feelings than a normal person [6]. Such a study brings a requirement to statistically and analytically process SNS data. However, a social network will definitely look different depending upon how one measures it: counting the number of interactions between members (say "a"), rating the closeness of relationships (say "b"), "a versus b", etc.

2. An Overview of Facebook

Created in 2004, by 2007 Facebook was reported to have more than 21 million registered members generating 1.6 billion page views each day. The site is tightly integrated into the daily media practices of its users: the
typical user spends about 20 minutes a day on the site, and two-thirds of users log in at least once a day [7]. Capitalizing on its success, it created separate versions for high school students and communities for commercial organizations.

Facebook is widely used for SNA due to its openness and large customer base. Much of the existing academic research on Facebook has focused on identity presentation and privacy concerns [8]. Looking at the amount of information Facebook participants provide about themselves, the relatively open nature of that information, and the lack of privacy controls enacted by the users, these studies argue that users may be putting themselves at risk both offline (e.g., stalking) and online (e.g., identity theft). Other recent Facebook research examines student perceptions of instructor presence and self-disclosure [9], temporal patterns of use [10], and the relationship between profile structure and friendship articulation [4].

The literature survey [7] shows that Facebook is used more by college-age students and was significantly associated with measures of social capital. We use Facebook as a research context in order to determine whether knowledge about social capital patterns can be generated by enacting some theorems or hypotheses.

3. Defining Social Capital

Social capital is an elastic term. It broadly refers to the resources accumulated through the relationships among people, and it has a variety of definitions [11]. It is often conceived as both a cause and an effect of social networking. For individuals, social capital allows a person to draw on resources from other members of the networks to which he or she belongs. These resources can take the form of useful information, personal relationships, or the capacity to organize groups [12]. This also gives the capability to access individuals outside one's close circle and helps to get non-redundant information, resulting in benefits such as employment connections [13].

Social capital has been linked to a variety of positive social outcomes, such as better public health, lower crime rates, and more efficient financial markets [13]. When social capital declines, a community experiences increased social disorder, reduced participation in civic activities, and potentially more distrust among community members. Greater social capital increases commitment to a community and the ability to mobilize collective actions, among other benefits. Social capital may also be used for negative purposes, but in general it is seen as a positive effect of interaction among participants in a social network [14]. The Internet has been linked both to increases and decreases in social capital [1]. We can probe into social capital in two ways:

Bridging (weak ties): loose connections between individuals which may provide useful information or new perspectives for one another.

Bonding (strong ties): strong connections between individuals; usually emotionally close relationships, such as family and close friends.

Recently, researchers have emphasized the importance of Internet-based linkages for the formation of weak ties, which serve as the foundation of bridging social capital. Because online relationships may be supported by technologies like distribution lists, photo directories, and search capabilities [15], it is possible that new forms of social capital and relationship building will occur in online social network sites. Bridging social capital might be augmented by such sites, which support loose social ties, allowing users to create and maintain larger, diffuse networks of relationships from which they could potentially draw resources [10].

4. Understanding Formulation of Social Capital

After briefly describing, to the extent of the literature [9, 11, 16, 17, 18], the forms of social capital and the impact of the Internet, we introduce some basic hypotheses for social capital which speak to the ability to maintain valuable connections as one progresses through life changes.

4.1. Hypothesis 1: Use of SNSs will be proportional to a user's perceived bonding social capital.

Explanation: Bonding social capital reflects strong ties with family and close friends. Day by day, the Internet is maturing and providing new means and connections to connect and come closer. These new connections may result in an increase in social capital, as users might be in a position to provide emotional support whenever needed. Thus, as the tendency to share interests or regional goals increases, users start bonding the social capitals, and use of SNSs will increase subsequently. This is the reason why terrorists frequently use SNSs like Orkut and Facebook. Online users are more likely to have a larger network of close ties than non-Internet users, and Internet users are more likely than non-users to receive help from core network members. This supports our hypothesis.
The topics suggested by this issue can be discussed in terms of concepts, surveys, state of the
art, research, standards, implementations, running experiments, applications, and industrial
case studies. Authors are invited to submit complete unpublished papers, which are not under
review in any other conference or journal, in the following, but not limited to, topic areas.
See the authors' guide for manuscript preparation and submission guidelines.
Accepted papers will be published online, authors will be provided with printed copies,
and the papers will be indexed by Google Scholar, Cornell University Library,
ScientificCommons, CiteSeerX, Bielefeld Academic Search Engine (BASE), SCIRUS,
and more.
All submitted papers will be judged based on their quality by the technical committee and
reviewers. Papers that describe research and experimentation are encouraged.
All paper submissions will be handled electronically and detailed instructions on submission
procedure are available on IJCSI website (www.IJCSI.org).
The journal also provides a venue for researchers, students, and professionals to submit
ongoing research and developments in these areas. Authors are encouraged to
contribute to the journal by submitting articles that illustrate new research results,
projects, surveying works, and industrial experiences that describe significant advances
in the field of computer science.
Indexing of IJCSI:
1. Google Scholar
2. Bielefeld Academic Search Engine (BASE)
3. CiteSeerX
4. SCIRUS
5. Docstoc
6. Scribd
7. Cornell University Library
8. SciRate
9. ScientificCommons
© IJCSI PUBLICATION
www.IJCSI.org