
International Journal of Computer Science

and Business Informatics


(IJCSBI.ORG)

ISSN: 1694-2507 (Print)


ISSN: 1694-2108 (Online)
VOL 11, NO 1, MARCH 2014
IJCSBI.ORG
Table of Contents VOL 11, NO 1 MARCH 2014

Energy-Aware QoS Based Routing Protocols for Heterogeneous WSNs: A Survey ............................... 1
Sridevi S., Rumeniya G. and Usha M.

Optimization of Outsourcing ICT Projects in Public Organizations; Case Study: Public Center of Iranian
ICT Studies ................................................................................................................................................. 20
Majid Nili Ahmadabadi, Abbas Bagheri and Fariba Abolghasemi

An Optimized CBIR Using Particle Swarm Optimization Algorithm ......................................................... 40


Subhakala S., Bhuvana S. and Radhakrishnan R.

Study of Satisfaction Assessment Techniques for Textual Requirements .............................................. 56


K. S. Divya, R. Subha and Dr. S. Palaniswami

Survey of MAC Protocols for Heterogeneous Traffic in Wireless Sensor Networks ............................... 67
Sridevi S., Priyadharshini R. and Usha M.

Harnessing Social Media for Business Success. Case Study of Zimbabwe ............................................... 80
Musungwini Samuel, Zhou Tinashe Gwendolyn, Zhou Munyaradzi and Ruvinga Caroline

Quality Platforms for Innovation and Breakthrough ................................................................................ 90


Dr. Hima Gupta

Development of Virtual Experiment on Waveform Conversion Using Virtual Intelligent SoftLab ...... 107
Bhaskar Y. Kathane
International Journal of Computer Science and Business Informatics

IJCSBI.ORG

Energy-Aware QoS Based Routing Protocols for Heterogeneous WSNs: A Survey
Sridevi S.
Associate Professor, Department of Computer Science and Engineering,
Sona College of Technology,
Salem, India

Rumeniya G.
PG Scholar, Department of Computer Science and Engineering,
Sona College of Technology,
Salem, India

Usha M.
Professor & Dean, Department of Computer Science and Engineering,
Sona College of Technology,
Salem, India

ABSTRACT
WSNs (Wireless Sensor Networks) are large collections of sensor nodes with limited battery power and limited computational capacity. The power limitation causes nodes to die prematurely, so node power should be used efficiently to prolong the network lifetime. In time-critical applications, the data should reach the destination within a deadline and without packet loss, which means that QoS metrics such as reliability and delay are essential for delivering data to the destination. One of the vital challenges for research in wireless sensor networks is the implementation of routing protocols that achieve both Quality of Service (QoS) and energy efficiency. The main task of a routing protocol is to discover and maintain routes to transmit data over the network. At present, multipath routing techniques are widely used rather than single-path routing in order to increase network performance, achieve load balancing and provide fault tolerance. We present a review of existing routing protocols for WSNs with respect to energy efficiency and QoS. We focus on the main motivation behind the development of each protocol and explain the function of the various protocols in detail. We compare the protocols based on energy efficiency and QoS metrics. Finally, we conclude the study by giving future research directions.

Keywords
WSNs, Routing Protocol, Multipath Routing, Fault Tolerance, Cross Layer Module.

1. INTRODUCTION
A wireless sensor network consists of a number of sensor nodes deployed in a target area to gather information, collaborate with each other and send the gathered data to the sink node in a multi-hop fashion [1]. In traditional
methods, sensor nodes send their data directly to the sink node in a single-hop approach. This has many drawbacks, such as high cost and faster
energy depletion, since the sensing nodes may be far away from the sink node [2]. To overcome this drawback, a multi-hop approach over a short communication radius is used, which saves energy and reduces communication interference. Due to the dense deployment of the nodes, multiple paths are available for data transmission from the source nodes to the sink [3].

Many applications require QoS, for example military applications, fire detection and biomedical applications. On the battlefield, sensors can be used to detect unfriendly objects, vehicles, aircraft and personnel. In health care applications [4], [5] and [6], smart wearable and companion wireless devices can be attached to the human body, or sensors can be implanted inside it, to observe the patient's vital signs. Routing protocols are required to choose the best path that satisfies the QoS requirements as well as improves the lifetime of the network. The characteristics of WSNs, namely rapid deployment, self-organization and fault tolerance, make them suitable for both real-time and non-real-time applications [7].

2. MOTIVATION
Sensor nodes have limited energy, storage capacity and bandwidth. The energy of a sensor node is consumed by sensing, processing and transmission, so it should be used efficiently to avoid early node death. In recent years, WSNs have been used in mission-critical applications. For example, in a fire detection application, when an event is detected the sensor node must immediately gather and transmit information about the event to the sink within the deadline and without packet loss. In many cases, however, packets fail to reach the sink in time or are lost. The main reasons for this are the limited functionality and the inaccurate observation or low reporting rate of the sensor nodes.

Many of the applications require QoS delivery for data transmission. It is well known that QoS conflicts with energy efficiency, since designs require more energy to minimize packet errors or failures and to reduce latency. Many existing routing protocols try to minimize packet errors through retransmission, which requires more energy, and finding the best routing path for real-time data requires further operations that also consume energy. Hence, a thorough study has to be made of the trade-off between energy efficiency and QoS. The purpose of this survey is to focus on how WSNs provide QoS and energy efficiency for real-time applications.


3. DIFFERENT KINDS OF ROUTING SCHEMES


The routing protocols are classified into three types according to their characteristics: proactive, reactive and hybrid routing [8]. They can also be classified according to their operation: route construction, network structure, communication model, number of paths and QoS [9]. The routing protocols based on network structure are further classified into flat routing and hierarchical routing. The communication-model-based routing protocols can be further classified in three ways: query-based, coherent and non-coherent based, and negotiation-based [9].

3.1 Classification of routing protocols according to route construction


Three different routing strategies are identified in wireless networks: proactive, reactive and hybrid. In proactive routing, all paths are constructed by periodically broadcasting control messages before they are actually needed, and the constructed path information is stored in the routing table of each node. In reactive routing, paths are constructed between source and destination only when needed, relying on dynamic route search. The hybrid routing strategy relies on both proactive and reactive routing to achieve stability and scalability in large networks.

3.2 Classification of Routing Protocols based on Network Structure


The nodes in a sensor network can be organized in one of three ways: flat, hierarchical and location based. In flat routing protocols all nodes are treated in the same way, with minimal overhead to maintain the infrastructure between the interacting nodes. In the hierarchical routing strategy, the nodes are grouped into clusters. Each member of a cluster sends data to the corresponding cluster head, which aggregates the data and forwards it to the sink through multiple hops. An election algorithm selects the cluster heads based on parameters such as residual energy and distance. The cluster head has the additional responsibility of coordinating the activities of its members and forwarding data from one cluster to another.

3.3 Classification of Routing Protocols based on communication model


The routing protocols based on the communication model can be classified into two types according to their operation: negotiation-based routing and query-based routing. Negotiation-based protocols try to eliminate redundant data by including high-level data descriptors in the data transmission. In query-based protocols, the sink node starts the communication by distributing a query for data over the network [10].


3.4 Classification of Routing Protocols based on number of paths


Based on the number of paths used to route data from the sensor nodes to the sink node, routing protocols are divided into single-path and multipath routing protocols. In single-path routing, one path is constructed from source to sink to route the data; the nodes on the selected path may therefore die soon, reducing the network lifetime. To improve network lifetime and reliability, multipath routing protocols construct multiple paths to achieve load balancing and fault tolerance. Wireless sensor network routing can be made very efficient and robust by incorporating different kinds of local state information, such as link quality, distance between nodes, residual energy and position information. Disjoint-path routing protocols [11] construct multiple disjoint paths between source and destination in one of two ways. Link-disjoint paths: the paths between source and destination have no common link. Node-disjoint paths: the paths between source and destination have no common node. Both link-disjoint and node-disjoint schemes keep one active path and a number of backup paths; a service flow is redirected to a backup path if the active path fails. Load balancing is another important aspect, used to avoid network congestion, optimize network throughput and prolong the network lifetime.

3.5 Classification of Routing Protocols based on QoS


Quality-of-Service (QoS) provisioning in WSNs is a challenging task for two reasons. First, resource constraints, dynamic network topology, unbalanced traffic, data redundancy, scarcity of node energy, energy consumption for computation, and bandwidth pose challenges for the design of QoS-supporting routing protocols in WSNs [12]. Second, there are wide differences in traffic generation rate, latency and reliability among the data packets. QoS-based protocols aim to achieve QoS metrics such as reliability, delay, energy efficiency and throughput [13].

The rest of the paper is structured as follows. Section 4 describes the taxonomy of recently proposed routing protocols for wireless sensor networks, Section 5 compares the studied protocols based on QoS metrics, energy efficiency and path selection criteria, and Section 6 concludes and gives future research directions.

4. TAXONOMY OF EXISTING ROUTING PROTOCOLS FOR WSNS


4.1 Energy efficient and QoS based routing protocol (EQSR)
The Energy efficient and QoS based routing protocol (EQSR) [7] is designed to satisfy the QoS requirements of real-time applications. To increase reliability, EQSR uses multipath routing and an XOR-based Forward

Error Correction (FEC) technique, which provides data redundancy during data transmission. To meet delay requirements, EQSR employs a queuing model which classifies the traffic into real-time and non-real-time traffic through a service differentiation technique. To find a path, EQSR executes three phases: an initialization phase, a primary path discovery phase and an alternative paths discovery phase. During the initialization phase, each sensor node broadcasts a HELLO message to its neighbor nodes. The HELLO message includes fields for source ID, hop count, residual energy, free buffer and link quality, which are used to calculate the link cost given by equation (1).

cost(x, y) = E_resd,y + B_buffer,y + I_interference,xy (1)

In the primary path discovery phase, the sink node starts to find routes by sending an RREQ message to its preferred neighbor, chosen by equation (2). This process continues until the source node receives the RREQ message.

Next_hop = max_{y ∈ N_x} { E_resd,y + B_buffer,y + I_interference,xy } (2)

where N_x is the neighbor set of node x, E_resd,y and B_buffer,y denote the residual energy and free buffer size at neighbor y, respectively, and I_interference,xy is the signal-to-noise ratio between node x and node y.
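As an illustration of the greedy choice in equation (2), the following sketch (not the authors' code; the field names are hypothetical) picks the neighbour with the largest link cost from HELLO-advertised values:

```python
def next_hop(neighbors):
    """Pick the neighbour maximising equation (2)'s link cost, built
    from HELLO-advertised residual energy, free buffer and SNR."""
    return max(neighbors,
               key=lambda y: y["e_resd"] + y["b_buffer"] + y["i_interference"])

candidates = [
    {"id": "a", "e_resd": 0.6, "b_buffer": 0.3, "i_interference": 0.2},
    {"id": "b", "e_resd": 0.4, "b_buffer": 0.5, "i_interference": 0.4},
]
print(next_hop(candidates)["id"])  # "b" (cost 1.3 vs 1.1)
```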

EQSR constructs node-disjoint multiple paths during the alternative paths discovery phase. In this phase, after the construction of the primary path, the sink sends an RREQ message to its next most preferred one-hop neighbor to construct alternative paths. To construct node-disjoint paths, EQSR restricts each node to accept only one RREQ message: each node accepts the first RREQ message and discards the remaining ones. The number of required paths k can be estimated from the need to successfully deliver a message to the sink by using equation (3).

k = x_α · √( Σ_{i=1}^{N} p_i (1 − p_i) ) + Σ_{i=1}^{N} p_i (3)

where x_α is the corresponding bound from the standard normal distribution for various confidence levels α, and p_i is the probability of successfully delivering a message to the sink over path i.
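Equation (3) can be read as a normal approximation to the number of successful deliveries over N candidate paths: the expected number of successes plus a standard-normal safety margin. A small sketch under that interpretation, with hypothetical values:

```python
import math

def required_paths(p, x_alpha):
    """p: per-path delivery probabilities; x_alpha: standard-normal
    bound for the desired confidence level (e.g. 1.645 for 95%)."""
    mean = sum(p)                                    # expected successes
    std = math.sqrt(sum(pi * (1 - pi) for pi in p))  # their std deviation
    return math.ceil(x_alpha * std + mean)

print(required_paths([0.9, 0.8, 0.85, 0.7], 1.645))  # 5
```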

EQSR calculates the transmission delay of the paths by measuring the propagation delay of the RREQ messages, assigning the best paths to real-time traffic and the remaining paths to non-real-time traffic. The algorithm finds

k node-disjoint paths, out of which l paths are used for sending real-time data and m paths for non-real-time data. Finally, error correction codes (ECC) for the data packets are calculated by a lightweight XOR-based FEC algorithm. EQSR improves QoS metrics such as reliability and delay, but it introduces high control overhead because of the FEC mechanism, which performs encoding and decoding operations. Simulations were done in ns-2, and the results show that EQSR performs better than the MCMP protocol for real-time traffic. MCMP, however, outperforms EQSR for non-real-time traffic, since additional delay is introduced in EQSR by the queuing model. EQSR offers lower energy efficiency than MCMP because some energy is spent calculating the FEC. The packet delivery ratio is higher in EQSR than in MCMP because EQSR uses the forward error correction (FEC) technique.

4.2 Localized Multi Objectives Routing protocol (LOCALMOR)


The localized multi objectives routing protocol [14] differentiates data traffic according to its QoS requirements. It classifies traffic into critical packets, delay-sensitive packets, reliability-sensitive packets and regular packets. For each data packet, the protocol tries to satisfy the required QoS in an energy-efficient way. To improve reliability, it uses a multi-sink single-path approach. A neighbor manager is responsible for handling HELLO packets, implementing the estimation methods and running the other modules. The neighbor table is updated by HELLO packets, which carry each node's current position, residual energy, estimated packet reception ratio and transmission delay. The sending node v_i considers a time window specified in terms of the number of packets transmitted, and the receiving node v_j updates its current window in terms of the number of packets successfully received, denoted r, and the number of packets known to be missed, denoted f. The numbers of transmitted and received packets are computed from the sequence number of each packet. When the current window size equals the main window size, the link reliability (packet reception ratio) between node v_i and node v_j, prr_{vi,vj}, is updated at regular intervals by the estimator called Window Mean Exponentially Weighted Moving Average (WMEWMA) shown in equation (4). The initial value of prr_{vi,vj} is zero.
prr_{vi,vj} = α · prr_{vi,vj} + (1 − α) · r / (r + f) (4)

Here, α is a tunable parameter of the moving average. The delay is calculated using equations (5) and (6) with the help of an EWMA estimator. The delay estimate considers both queuing delay and transmission

delay. The protocol uses several queues, one per packet type. The queuing delay differs for each packet type and is calculated via a local time stamp as the exact waiting time of each packet.

w_{vi}[packet.type] = α · w_{vi}[packet.type] + (1 − α) · δ (5)

dtr_{vi} = α · dtr_{vi} + (1 − α) · (t_ACK − size(ACK)/bw − t_0) (6)

where w_{vi}[packet.type] is the queuing delay for each packet type, δ the measured waiting time of the packet, dtr_{vi} the transmission delay, t_0 the time the packet is ready for transmission, t_ACK the time of reception of the acknowledgment (ACK) packet, bw the bandwidth, and size(ACK) the size of the ACK packet. The initial values of w_{vi}[packet.type] and dtr_{vi} are zero.
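The two estimators above can be sketched as plain update rules; the names and the smoothing factor are illustrative, not taken from the paper:

```python
def update_prr(prr, r, f, alpha=0.6):
    # Equation (4), WMEWMA: blend the old estimate with the fraction of
    # packets received (r) out of packets known sent (r + f).
    return alpha * prr + (1 - alpha) * (r / (r + f))

def update_dtr(dtr, t_ack, t0, ack_size, bw, alpha=0.6):
    # Equation (6), EWMA over the measured transmission delay: ACK
    # arrival minus ready time, minus the ACK's own air time.
    return alpha * dtr + (1 - alpha) * (t_ack - ack_size / bw - t0)

prr = 0.0  # initial value, as in the paper
for r, f in [(9, 1), (8, 2), (10, 0)]:
    prr = update_prr(prr, r, f)
print(round(prr, 3))  # 0.722
```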

The protocol has three modules: an energy module, a reliability module and a latency module. The energy module considers both the transmission cost and the residual energy of routers to attain power efficiency, using a min-max approach to find the most energy-efficient node. The reliability module achieves the required reliability by sending a copy of the data packet to both the primary and secondary sinks. When more than one node has the same maximum reliability, the most power-efficient node is selected by the energy module. The latency module calculates the required speed by dividing the distance by the time remaining to the deadline, rt. The remaining time to deadline rt is calculated by equation (7).
rt = rt_rec − (t_tr − t_rec + size/bw) (7)

where t_rec is the reception time, t_tr the transmission time, and rt_rec the previous value of rt. If the incoming packet is delay sensitive, the protocol selects a node that meets the required deadline; if more than one node qualifies, the most energy-efficient node is selected. If the incoming packet is critical, the protocol calls the reliability module first, then the latency module and the energy module. Finally, the queuing manager uses a multi-queue priority policy with four separate queues, one per packet type. Critical packets have the highest priority, followed by delay-sensitive packets, while reliability-sensitive packets have the lowest priority. To avoid starvation, a timeout policy is applied to each lower-priority queue: when a packet arrives at a queue, a timeout value is assigned, and when the timer expires the packet is moved to the highest-priority queue.
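The multi-queue priority policy with timeout promotion described above can be sketched as follows; the four priority levels and the timeout value are illustrative, not from the paper:

```python
import heapq
import itertools

CRITICAL, DELAY, RELIABLE, REGULAR = 0, 1, 2, 3

class QueueManager:
    """Four priority levels in one heap; a starved lower-priority packet
    is promoted to CRITICAL once its timeout expires."""

    def __init__(self, timeout=5):
        self.seq = itertools.count()   # tie-breaker for equal keys
        self.heap = []                 # (priority, enqueue_time, seq, packet)
        self.timeout = timeout

    def put(self, packet, priority, now):
        heapq.heappush(self.heap, (priority, now, next(self.seq), packet))

    def get(self, now):
        # Promote any lower-priority packet whose timer has expired.
        expired = [e for e in self.heap
                   if e[0] > CRITICAL and now - e[1] >= self.timeout]
        if expired:
            for e in expired:
                self.heap.remove(e)
            self.heap.extend((CRITICAL, t, s, p) for _, t, s, p in expired)
            heapq.heapify(self.heap)   # restore the heap invariant
        return heapq.heappop(self.heap)[3] if self.heap else None

q = QueueManager(timeout=5)
q.put("regular-1", REGULAR, now=0)
q.put("critical-1", CRITICAL, now=1)
print(q.get(now=2))   # critical-1: highest priority wins
print(q.get(now=6))   # regular-1: promoted after its 5-unit timeout
```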


Simulation results show that the packet reception ratio increases roughly linearly from 86 to 87 percent for critical packets and from 86 to 98 percent for reliability-sensitive packets, while it remains in the 80 to 83 percent range for delay-sensitive packets. Moreover, above 96 percent of packets were successfully transmitted with reasonable delay. The energy deviation is small for low and moderate numbers of critical packets but increases gradually as the number of critical packets grows. Overall, the LOCALMOR protocol achieves a better lifetime than the other existing protocols studied.

4.3 QoS-aware Peering Routing Protocol for Reliability Sensitive Data (QPRR)
Zahoor et al. proposed a novel routing protocol that considers the QoS requirements of body area network (BAN) data. The QoS-aware Peering Routing Protocol for Reliability Sensitive Data (QPRR) [4] improves the reliability of critical BAN data while transferring it from source to destination. For sending reliability-sensitive packets (RSPs), the protocol calculates the reliability of all possible paths, which can be obtained from neighbor table information. The routing table can hold up to the three most reliable paths among all possible paths.

To transmit RSP data between source and destination, the following criteria are applied. If the first path alone can meet the reliability requirement, the source node transmits the RSP through it. If the first path's reliability is lower than required, QPRR aggregates the reliability of the two best paths and compares the aggregate with the requirement; if it is sufficient, a copy of the RSP is transmitted over each of the two paths. Otherwise QPRR aggregates the reliability of three paths and compares again; if sufficient, copies are transmitted over the three paths, and otherwise the packet is dropped. The path reliability from source i to destination Dst is calculated using equation (8).
R_path(i,Dst) = R_link(i,j) · R_path(j,Dst) (8)

The link reliability between node i and node j is calculated using the EWMA (Exponentially Weighted Moving Average) formula as follows:

R_link(i,j) = (1 − α) · R_link(i,j) + α · X_i (9)


The average probability of successful transmission is calculated using equation (10).

X_i = N_Acks / N_Trans (10)

where R_path(i,Dst) is the path reliability from node i to the destination, R_link(i,j) the link reliability between node i and node j, and R_path(j,Dst) the path reliability from node j to the destination. α is the averaging weight factor, satisfying 0 < α ≤ 1; this protocol takes α as 0.4. N_Acks is the number of acknowledgements received and N_Trans the number of packets transmitted.
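The escalating one/two/three-path decision can be sketched as follows, assuming (the paper does not state the aggregation formula explicitly) that paths fail independently, so the aggregate reliability of n paths is 1 − ∏(1 − R_i):

```python
def paths_to_use(path_rel, required):
    """path_rel: up to three path reliabilities, most reliable first.
    Returns how many paths to copy the RSP onto, or 0 to drop it."""
    agg_fail = 1.0
    for n, r in enumerate(path_rel, start=1):
        agg_fail *= (1 - r)          # probability that all n paths fail
        if 1 - agg_fail >= required:
            return n
    return 0

print(paths_to_use([0.7, 0.6, 0.5], 0.85))  # 2: 1 - 0.3*0.4 = 0.88
print(paths_to_use([0.7, 0.6, 0.5], 0.95))  # 0: even 3 paths give only 0.94
```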

Simulation results show that QPRR's reliability is above 75% for sparse deployments and above 74% for dense deployments, and that it uses low transmission power while providing a better transmission rate. QPRR provides better reliability, but at the cost of an increased network traffic load.
4.4 Energy Efficient Node Disjoint Multipath Routing Protocol (EENDMRP)
The Energy Efficient Node Disjoint Multipath Routing Protocol (EENDMRP) [15] provides a reliability analysis of route redundancy in WSNs. EENDMRP considers route redundancy at the level of a single node over a single path, of multiple nodes over a single path, and of multiple levels with multiple nodes in a single path. EENDMRP is a proactive protocol that considers the number of stages between source and destination. The sink node is at stage zero, its one-hop neighbors are at stage one, and likewise a stage is assigned to each node towards the source node; this avoids constructing paths with loops. During path construction, only nodes whose residual energy is greater than a threshold energy are considered.

To construct routes, the nodes exchange route construction (RCON) packets. If an RCON packet is received by a node that is not yet on a route to the sink, the node processes it. If it is received by a node already on a route to the sink, the node compares its own hop-count value with the packet's hop-count value: if the node's hop count is greater and its residual energy exceeds the threshold, the RCON is processed; otherwise the packet is dropped. Each node's routing table, which holds fields such as node ID and hop count, is updated on receiving an RCON packet. Finally, all possible node-disjoint paths are constructed between source and destination. If any node in the


path fails to transmit packets due to node death or dislocation, EENDMRP reports to the source node by sending a route error (RERR) packet. The source node removes the failed path from its routing table and invokes the route maintenance phase, which provides an alternate path between the node that created the RERR packet and the sink node.
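The RCON acceptance rule can be sketched as a small predicate; the field names and the energy threshold are hypothetical:

```python
def accept_rcon(node, pkt, e_threshold):
    """Process an RCON packet? Nodes already on a route to the sink only
    re-process it when their hop count exceeds the packet's."""
    if node["residual_energy"] <= e_threshold:
        return False                  # too little energy to join a path
    if not node["on_route"]:
        return True
    return node["hop_count"] > pkt["hop_count"]

fresh = {"on_route": False, "hop_count": 0, "residual_energy": 0.9}
routed = {"on_route": True, "hop_count": 4, "residual_energy": 0.8}
print(accept_rcon(fresh, {"hop_count": 2}, e_threshold=0.2))   # True
print(accept_rcon(routed, {"hop_count": 5}, e_threshold=0.2))  # False
```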
4.5 Lifetime Maximizing Dynamic Energy Efficient Routing Protocol
In [16], the authors proposed an energy-efficient routing protocol that balances energy consumption among nodes and avoids premature node death. The proposed protocol has three phases: an initialization phase, a next-hop selection and DEERT generation phase, and a tree maintenance phase. During the initialization phase, a level is assigned to each node based on its hop distance from the sink node, which is at level 0. A node can select its next hop from a lower level or from the same level, and data packets are transmitted from higher-level nodes to lower-level nodes. Every node selects its next-hop neighbor based on the cost of the link between itself and its neighbor and on the neighbor's load. The link cost between nodes u and v is calculated by equation (11).

C_uv = min{ RE_u / E_tx , RE_v / E_rx } (11)

where E_tx is the transmission cost of a node, E_rx the reception cost, and RE_u and RE_v the residual energies of nodes u and v, respectively.
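A sketch of the next-hop choice implied by equation (11), assuming the link cost measures how many more packets the link can carry (residual energy divided by per-packet cost); all names and values here are illustrative:

```python
def link_cost(re_u, re_v, e_tx, e_rx):
    # Hypothetical reading of equation (11): the bottleneck of how many
    # packets each endpoint can still afford to send / receive.
    return min(re_u / e_tx, re_v / e_rx)

def pick_next_hop(u, neighbors, e_tx, e_rx):
    # Candidates must sit at the same or a lower level (closer to the sink).
    eligible = [v for v in neighbors if v["level"] <= u["level"]]
    return max(eligible,
               key=lambda v: link_cost(u["re"], v["re"], e_tx, e_rx))

u = {"level": 2, "re": 2.0}
nbrs = [{"id": "a", "level": 1, "re": 0.5},
        {"id": "b", "level": 1, "re": 0.9},
        {"id": "c", "level": 3, "re": 2.0}]   # higher level: not eligible
print(pick_next_hop(u, nbrs, e_tx=0.01, e_rx=0.005)["id"])  # "b"
```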

The load of a node is calculated as the sum of the energy consumed for transmitting a packet to a neighbor, the energy consumed for receiving packets from the child nodes, and the energy used for overhearing. In the tree construction phase, a distinct energy-efficient routing tree rooted at the sink node is constructed, based on the link cost, for efficiently routing the data. After a fixed amount of time, the tree is reconstructed.

The tree maintenance algorithm reconstructs the tree in the following cases:
- If there is no response from a neighboring node, that node is considered dead.
- If the residual energy of the neighbor node is lower than the threshold value.
- If there is no appropriate next-hop node, the source node transmits its data directly to the sink node and updates its level and other parameters accordingly.

Simulation results show that DEERT performs better than SBT, DEBR and aggregation-tree-based routing in terms of the number of nodes alive after a certain number of rounds, thus improving the lifetime of the

network. Initially, the DEBR end-to-end delay in terms of hop count is smaller than that of the proposed protocol, but DEBR's delay increases as the number of rounds grows. This protocol concentrates only on energy efficiency and does not support QoS.

4.6 Braided multipath routing protocol


The braided multipath routing protocol [17] transmits data packets from source to destination while giving the network the ability to adapt to fluctuations and failures. The source node constructs a path after detecting the target: once the target is detected, the node sends its ID in a packet declaring that it has acquired a target. A node receiving this packet responds with its own ID to the source node, declaring the preceding node part of its path. The new node then broadcasts its ID to the next hop, which responds and forwards the message, and the process continues until it reaches the sink, so that several paths are created from source to destination. The destination node assigns priority numbers to the paths and selects the path with the minimum number of hops to the source; the nodes on that path are informed of their selected backup nodes.

To save the energy of all other nodes in the network, nodes enter an energy-saving mode and wake from time to time to check for changes in the network. If a packet is transmitted from the target to the sink, the sink checks its own route to this target and updates its path when the received one has fewer hops than the stored one.

Simulation results show that the backup nodes used by the braided algorithm improve fault tolerance in the network. It is possible that only one or two backup nodes are established per path, leaving the other nodes without backup and making the path vulnerable. In denser networks, the backup nodes improve fault tolerance at low cost.

4.7 Link Quality estimation based Routing protocol (LQER)


The LQER (Link Quality Estimation based Routing) protocol [18] is designed to improve reliability and energy efficiency in WSNs. It incorporates a minimum hop count field and a dynamic window concept (m, k). A path is constructed between the source and sink nodes based on hop-count values: the sink node broadcasts an advertisement (ADV) message to its neighbors with its hop-count value set to zero, and for every other node in the network the hop-count value is the number of hops from that node to the sink.


If the current hop-count value is equal to or greater than the next hop-count value, the node is added as a forwarding node on the path; otherwise the message is rejected. Here, m is the number of data packets successfully transmitted and k is the total number of packets transmitted. The dynamic window records the historical link status of data packets based on m and k, keeping a word of k bits; sufficient reliability can be achieved by using this historical link status information. If a data transmission is unsuccessful the corresponding bit is 0, otherwise it is 1. The leftmost bit is the oldest and the rightmost the newest: when a new packet is transmitted, the word of k bits is shifted one position to the left and one bit is appended in the rightmost position to indicate the current status. The quality of a link, p, is calculated by equation (12).

p = m / k (12)

The historical link table can be updated dynamically with low computational cost and complexity. When routing data is ready to transmit, LQER lists all the neighbor nodes of the current node and chooses the path with the largest value of p to transmit the routing data.
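The k-bit history window and equation (12) can be sketched directly with bit operations; the window size is illustrative:

```python
K = 8  # window size in bits (illustrative)

def update_window(word, success):
    # Shift the history left and record the newest outcome in the
    # rightmost bit (1 = success, 0 = failure), keeping K bits.
    return ((word << 1) | (1 if success else 0)) & ((1 << K) - 1)

def link_quality(word):
    # Equation (12): p = m / k, with m the number of set bits.
    return bin(word).count("1") / K

w = 0
for ok in [True, True, False, True, True, True, False, True]:
    w = update_window(w, ok)
print(link_quality(w))  # 0.75 (6 successes out of 8)
```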

Simulation results show that the successful transmission rate of LQER is greater than that of MHFR and MCR. As the number of nodes increases, the deviation remains small in LQER, indicating good scalability of data delivery effectiveness, whereas the successful transmission rate decreases rapidly in MHFR and MCR.

4.8 QoS-aware Peering Routing Protocol for Delay Sensitive Data (QPRD)
The QoS-aware Peering Routing Protocol for Delay Sensitive Data (QPRD)
[5] is designed to handle delay-sensitive packets. It calculates the node
delay and path delay of every constructed path between the source and the
destination and finds the best path among all possible paths according to
the delay requirement. Each node has a routing table that contains the
next-hop information with the lowest end-to-end delay. A delay-sensitive
packet (DSP) is transmitted over a path only if the latency of the path is
less than or equal to the delay requirement of the packet.

QPRD uses several modules to choose the best path for transmitting a
packet: the MAC receiver module, delay module, packet classifier module,
Hello protocol module, routing service module, QoS-aware queuing module
and MAC transmitter. The MAC receiver forwards a packet only


if the packet's MAC address matches its own MAC address. The delay
module calculates the node delay using equation (13).

DLnode(i) = DLtrans(i) + DLqueue+channel(i) + DLproc(i) (13)

where DLqueue+channel(i) is the queuing and channel delay, DLtrans(i) is
the transmission time of a packet and DLproc(i) is the processing delay of
the node. The transmission time is calculated by dividing the total number
of bits in each packet by the data rate. An Exponentially Weighted Moving
Average (EWMA) formula is used to estimate the queuing and channel delay.
The path delay from node i to the destination, DLpath(i,Dst), is
calculated using equation (14), where j is the next hop of node i.

DLpath(i,Dst) = DLnode(i) + DLpath(j,Dst) (14)

The packet classifier differentiates data packets from Hello packets, and
the packets are processed according to their type. Hello packets are
broadcast to each neighbor node. In the Hello protocol module, the
neighbor table constructor builds the neighbor table based on the node
delay and path delay. The routing services module is responsible for
creating the routing table and classifying data packets into
Delay-Sensitive Packets (DSPs) and Ordinary Packets (OPs). For a DSP it
chooses the path with the minimum end-to-end delay; for an OP it chooses
the energy-efficient next hop. The QoS-aware Queuing Module (QQM)
separates data packets into DSPs and OPs and maintains a separate queue
for each type. DSPs have higher priority than OPs: the OP queue can
transmit its data only when the DSP queue is empty, and a timeout policy
is used for fair treatment of the lowest-priority data. Finally, the MAC
transmitter receives all packets, stores them in a queue and transmits
them in first-in-first-out order.
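The delay bookkeeping of equations (13) and (14) can be sketched as below. The EWMA smoothing weight, the example data rate and all function names are illustrative assumptions, not values from the QPRD paper.

```python
# Sketch of QPRD-style delay estimation (equations 13 and 14 above).

ALPHA = 0.2  # EWMA smoothing weight (assumed value)

def transmission_delay(packet_bits, data_rate_bps):
    # DLtrans(i): packet size in bits divided by the data rate.
    return packet_bits / data_rate_bps

def ewma(previous_estimate, new_sample, alpha=ALPHA):
    # EWMA used to estimate the queuing-plus-channel delay component.
    return alpha * new_sample + (1 - alpha) * previous_estimate

def node_delay(dl_trans, dl_queue_channel, dl_proc):
    # Equation (13): DLnode(i) = DLtrans(i) + DLqueue+channel(i) + DLproc(i)
    return dl_trans + dl_queue_channel + dl_proc

def path_delay(dl_node_i, dl_path_j_dst):
    # Equation (14): DLpath(i,Dst) = DLnode(i) + DLpath(j,Dst),
    # where j is node i's chosen next hop toward the destination.
    return dl_node_i + dl_path_j_dst

def eligible_for_dsp(path_latency, delay_requirement):
    # A delay-sensitive packet may use a path only if the path latency
    # does not exceed the packet's delay requirement.
    return path_latency <= delay_requirement

# Example: a 1000-bit packet at a 250 kbit/s data rate.
dl_t = transmission_delay(1000, 250_000)                  # 0.004 s
dl_qc = ewma(previous_estimate=0.010, new_sample=0.020)   # smoothed estimate
dl_n = node_delay(dl_t, dl_qc, dl_proc=0.001)
dl_p = path_delay(dl_n, dl_path_j_dst=0.030)
print(eligible_for_dsp(dl_p, delay_requirement=0.050))
```

Because DLpath(j,Dst) is carried in the neighbor's Hello messages, each node can compute its own path delay recursively without global knowledge.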

Simulation results show that in a static environment 94% of the DSPs are
delivered within their deadline limits, and in a mobile environment QPRD
provides an improvement of 35% over DMQoS.

4.9 Energy Aware Peering Routing Protocol (EPR)
The energy-aware peering routing protocol (EPR) [6] is designed to reduce
the network traffic load and to improve energy efficiency and reliability.
It selects the next hop that has higher battery power and a shorter
distance to the sink. It has three main parts, namely the hello message
module, the neighbor table construction module and the routing table
creation module. The hello message module updates neighbor node
information such as the destination location, destination ID, sender
node's ID, distance from the next hop to the destination and residual
energy of the neighbor node.


The neighbor node information is added to the sender node's neighbor
table by the hello protocol. If a node does not receive any hello message
from a neighbor for a certain time, it assumes that the neighbor has moved
away or the link to the neighbor has broken down. The distance between
node i and the destination DST is calculated by equation (15).

D(i,DST) = √((Xi − XDST)² + (Yi − YDST)²) (15)

where Xi, Yi denote the X, Y coordinates of node i and XDST, YDST denote
the X, Y coordinates of the destination. The communication cost is
calculated from parameters such as the distance between the two nodes and
the node's residual energy. The routing table module selects the neighbor
with the lowest communication cost from the neighbor table.
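The next-hop selection can be sketched as follows. Equation (15) is as given above; the concrete cost formula (distance divided by residual energy) is an illustrative assumption, since the survey only states that the cost combines these two parameters.

```python
# Sketch of EPR-style next-hop selection by communication cost.
import math

def distance(xi, yi, x_dst, y_dst):
    # Equation (15): D(i,DST) = sqrt((Xi - XDST)^2 + (Yi - YDST)^2)
    return math.sqrt((xi - x_dst) ** 2 + (yi - y_dst) ** 2)

def comm_cost(neighbor, dst):
    # Assumed cost model: lower cost means a shorter distance to the
    # destination and a higher residual battery energy.
    d = distance(neighbor["x"], neighbor["y"], dst["x"], dst["y"])
    return d / neighbor["residual_energy"]

def select_next_hop(neighbor_table, dst):
    # The routing table keeps the neighbor with the lowest cost.
    return min(neighbor_table, key=lambda n: comm_cost(n, dst))

dst = {"x": 0.0, "y": 0.0}
neighbors = [
    {"id": "n1", "x": 3.0, "y": 4.0, "residual_energy": 0.5},  # d=5,  cost=10
    {"id": "n2", "x": 6.0, "y": 8.0, "residual_energy": 2.0},  # d=10, cost=5
]
print(select_next_hop(neighbors, dst)["id"])  # n2
```

Note how n2 wins despite being farther away: its larger residual energy outweighs the extra distance, which is exactly the trade-off EPR makes to balance load away from depleted nodes.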
Simulation results show that EPR decreases the average traffic load by
about 34% and increases the data transmission rate by about 23% compared
with other similar protocols.
4.10 Integrated Link Quality Estimation-based Routing Protocol (I-LQER)
I-LQER (integrated link quality estimation-based routing protocol) [19]
is designed to provide quality of service and to reduce power consumption.
I-LQER assigns different weights to the link quality records and
calculates link stability from these weighted values. The link quality is
estimated by considering the weighting factor along with m / k, where m is
the number of data packets successfully transmitted and k is the total
number of packets transmitted. It selects the node with the greatest link
quality.

It assumes that the most recent transmissions are the most relevant to
the current transmission. If a node has a high probability of maintaining
its current link quality, it is taken as a good-stability node; if it has
a low probability, it is taken as a low-stability node. I-LQER compares
the nodes' record status in the most recent period and, based on that,
selects the best node to forward the data. For example, consider two nodes
P and Q with link quality records 00 0011 1111 1111 and 11 1111 0100 0000
respectively, where 1 denotes good link quality and 0 denotes bad link
quality. Node P has better link quality stability than node Q, because its
successful transmissions are the more recent ones.
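The comparison of nodes P and Q above can be reproduced with a simple weighted estimator. The linearly increasing weights are an assumption for illustration; the survey does not specify the exact weighting scheme I-LQER uses.

```python
# Sketch of I-LQER's idea: recent transmission outcomes weigh more than
# old ones when estimating link quality and stability.

def weighted_quality(record):
    """record: string of '0'/'1' bits, leftmost oldest, rightmost newest."""
    k = len(record)
    weights = range(1, k + 1)  # newest bit gets the largest weight (assumed)
    total = sum(weights)
    return sum(w for w, bit in zip(weights, record) if bit == "1") / total

# The two nodes from the text: P's successes are recent, Q's are old.
p = "00001111111111"  # 00 0011 1111 1111
q = "11111101000000"  # 11 1111 0100 0000

print(weighted_quality(p) > weighted_quality(q))  # True: P is more stable
```

An unweighted m/k estimate would treat every bit equally; the recency weighting is what lets I-LQER prefer links that are good now over links that were good long ago.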

Simulation results depict that the performance of I-LQER is superior to
the LQER protocol in terms of end-to-end delay. For a network with 10
sensor nodes, I-LQER gives an average delay of 9.00 ms and LQER gives an
average delay of 10.63 ms. When the number of nodes is increased to 100,
I-LQER offers an average delay of 19.80 ms and LQER gives an


average delay of 28.03 ms. This shows that I-LQER has better scalability
than LQER.

5. COMPARISON OF DIFFERENT ROUTING PROTOCOLS

We compare the studied protocols based on reliability, delay, energy
efficiency and load balancing. Most of the protocols studied in this paper
construct a single path to deliver data from the source to the sink, while
some construct multiple paths, and each protocol uses different criteria
for path selection. Almost all the protocols studied focus on
energy-efficient routing. Tables 1 and 2 give the results of our
comparison. Only a few protocols, such as LOCALMOR and EQSR, provide QoS
support for heterogeneous traffic based on the traffic type.
Table 1. Comparison of the routing protocols based on energy efficiency and QoS.

Scheme | Reliability | Delay (timely delivery) | Energy efficiency | Traffic differentiation | Mobility support
QPRR | Yes | No | Yes | OP, RSD | Good
QPRD | No | Yes | Yes | OP, DSP | Good
LOCALMOR | Yes | Yes | Yes | CSP, DSP, RSP | Low
EQSR | Yes | Yes | Yes | Real time, Non-real time | No
DEERT | No | No | Yes | - | -
EPR | No | No | Yes | OP | Good
I-LQER | Yes | No | Yes | Yes | -
Braided multipath algorithm | Yes | No | No | - | -
EENDMRP | Yes | No | Yes | - | -
LQER | Yes | No | Yes | - | -

Table 2. Comparison of the routing protocols based on multipath support.

Scheme | Number of paths | Path reconstruction | Path metric | Load balancing | Path chooser
QPRR | Up to three paths | No | End-to-end reliable path | - | Source node
QPRD | Single path | No | Least end-to-end delay path | - | Source node
LOCALMOR | Single path | No | Minimum delay, maximum reliability and maximum residual energy according to the packet requirement | Yes | Source node
EQSR | Multipath | No | Minimum end-to-end delay path for real-time traffic | Yes | Source node
DEERT | Single path | Yes | Maximum battery power | Yes | Source node
EPR | Single path | No | End-to-end energy-efficient path | Yes | Source node
I-LQER | Single path | No | Reliable path by considering link quality and link stability | No | Source node
Braided multipath algorithm | Multipath | No | The path with minimum number of hops | Yes | Sink node
EENDMRP | Multipath | Yes | The path with minimum number of hops, maximum residual energy and maximum path cost | Yes | Source node
LQER | Single path | No | Reliable path by considering link quality | - | Source node

6. ERROR RECOVERY SCHEMES


6.1 Automatic Repeat Request (ARQ)
ARQ is an error recovery mechanism that uses the cyclic redundancy check
(CRC) technique to detect erroneous packets; the sender retransmits an
erroneous packet until it is received error-free at the receiver side. If
a packet is successfully received, the receiver sends a positive
acknowledgement (ACK) to the sender; otherwise it sends a negative
acknowledgement (NACK). If the sender does not receive an ACK within the
timeout period, it retransmits the packet. The drawback of ARQ is that
retransmissions induce additional cost.
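The stop-and-wait flavor of the scheme can be sketched as below. The channel model and retry limit are illustrative assumptions; CRC-32 stands in for whatever checksum a real link layer uses.

```python
# Minimal stop-and-wait ARQ sketch: CRC-based error detection plus
# retransmission until the receiver acknowledges the frame.
import zlib

def make_frame(payload: bytes) -> bytes:
    # Append a CRC-32 checksum so the receiver can detect bit errors.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def receive(frame: bytes):
    """Return (ack, payload): ack is True iff the CRC check passes."""
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return (zlib.crc32(payload) == crc), payload

def send_with_arq(payload: bytes, channel, max_retries=5):
    """Retransmit until the receiver ACKs or the retry budget is spent."""
    for _attempt in range(max_retries):
        ack, received = receive(channel(make_frame(payload)))
        if ack:          # positive acknowledgement: delivery succeeded
            return received
        # NACK (or timeout in a real system): retransmit the same frame.
    raise RuntimeError("delivery failed after retries")

# Example channel: corrupts the first transmission, then behaves.
state = {"first": True}
def flaky_channel(frame):
    if state["first"]:
        state["first"] = False
        return frame[:-1] + bytes([frame[-1] ^ 0xFF])  # flip the last byte
    return frame

print(send_with_arq(b"sensor-reading", flaky_channel))  # b'sensor-reading'
```

The retry loop is exactly the "additional cost" the text mentions: every corrupted frame costs a full extra transmission and its energy.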

6.2 Forward Error Correction (FEC)

The FEC mechanism is mostly preferable in multi-hop WSNs; it controls
packet transmission errors by adding error-correcting codes (ECCs) to the
transmitted data. The receiver can detect and correct a limited number of bit


errors with the help of the error-correcting codes, so retransmissions are
avoided. However, the computational cost is high, since FEC performs
encoding and decoding operations which consume additional energy.
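A toy example of the encode/decode principle is a rate-1/3 repetition code, the simplest ECC. Real WSN FEC schemes (e.g. BCH or Reed-Solomon codes) are far more efficient; this sketch only shows how errors are corrected without retransmission.

```python
# Toy forward-error-correction sketch: rate-1/3 repetition code.

def fec_encode(bits):
    # Send every bit three times.
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_decode(coded):
    # Majority vote over each triple corrects any single bit error
    # per triple, with no retransmission needed.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

data = [1, 0, 1, 1]
coded = fec_encode(data)
coded[1] ^= 1   # channel flips one bit...
coded[9] ^= 1   # ...and another, in a different triple
assert fec_decode(coded) == data  # both errors corrected at the receiver
```

The trade-off the text describes is visible here: the code triples the transmitted bits and adds decode work at every hop, which is where FEC's extra energy goes.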
7. CROSS LAYER MODULE
The concept of a cross-layer module is to incorporate the functionalities
of different classical layers into a single functional protocol while the
classical layer structure is preserved, i.e., the functionality of each
layer remains intact. Many cross-layer modules have been implemented to
improve communication reliability, improve energy efficiency and avoid
congestion. Most existing research integrates the MAC and physical layers
to reduce energy consumption and improve reliability; the MAC and routing
layers to extend the network lifetime; the routing and physical layers to
optimize network throughput; the transport and physical layers to control
congestion [20]; and the application and MAC layers to provide QoS [21].
A cross-layer module improves the network performance, reduces the
implementation complexity and outperforms the classical layered model
[22]. The network performance can be further improved by combining
multipath routing, the FEC mechanism and a cross-layer module.
8. CONCLUSION AND FUTURE DIRECTIONS
The invention of smart, lightweight sensors has made wireless sensor
networks popular. Regarding routing protocols, reduced energy consumption,
QoS, scalability and fault tolerance are the main constraints in wireless
sensor networks. This paper presents a study of how recently proposed
routing protocols are adapted to these characteristics of WSNs. Although
energy-efficient and QoS-based routing has been examined through various
studies in the past years, numerous significant research issues remain to
be explored. The promising areas can be summarized as follows: 1) much
research work remains to be done on multipath routing protocols that
support both energy efficiency and QoS; 2) a cross-layer module and
multipath routing with a forward error correction (FEC) technique can be
used to increase the network performance.

REFERENCES
[1] Feng Shan, Weifa Liang, Jun Luo, and Xiaojun Shen, "Network lifetime
maximization for time-sensitive data gathering in wireless sensor
networks", Elsevier, Computer Networks, pp. 1063-1077, 2013.
[2] Kazem Sohraby, Daniel Minoli and Taieb Znati, "Wireless Sensor
Networks: Technology, Protocols, and Applications", Wiley, John Wiley &
Sons, ISBN: 978-0-471-74300-2, 2007.
[3] Ahmed E.A.A. Abdulla, Hiroki Nishiyama, and Nei Kato, "Extending the
lifetime of wireless sensor networks: a hybrid routing algorithm",
Elsevier, Computer Communications, vol. 35, pp. 1056-1063, 2012.
[4] Zahoor A. Khan, Shyamala Sivakumar, William Phillips, and Bill
Robertson, "A QoS-aware routing protocol for reliability sensitive data
in hospital body area networks", Elsevier, in proc. ANT, pp. 171-179,
2013.
[5] Zahoor A. Khan, Shyamala Sivakumar, William Phillips, and Bill
Robertson, "A QoS-aware routing protocol for delay sensitive data in
hospital body area networks", IEEE, in proc. BWCCA, pp. 178-185, 2012.
[6] Zahoor A. Khan, Nauman Aslam, Shyamala Sivakumar, and William
Phillips, "Energy-aware peering routing protocol for indoor hospital body
area network communication", Elsevier, in proc. ANT, pp. 188-196, 2012.
[7] Jalel Ben-Othman and Bashir Yahya, "Energy efficient and QoS based
routing protocol for wireless sensor networks", Elsevier, J. Parallel
Distrib. Comput. 70, pp. 849-857, 2010.
[8] Marjan Radi, Behnam Dezfouli, Kamalrulnizam Abu Bakar, and Malrey
Lee, "Multipath routing in wireless networks: survey and research
challenges", Sensors, pp. 650-685, 2012.
[9] Nikolaos A. Pantazis, Stefanos A. Nikolidakis, and Dimitrios D.
Vergados, "Energy-efficient routing protocols in wireless sensor
networks: a survey", IEEE Communications Surveys & Tutorials, vol. 15,
no. 2, pp. 551-591, 2013.
[10] Mustafa Ilhan Akbas and Damla Turgut, "Lightweight routing with
dynamic interests in wireless sensor and actor networks", Elsevier, Ad
Hoc Networks, pp. 1-15, 2013.
[11] Yuchun Guo, Fernando Kuipers, and Piet Van Mieghem, "Link-disjoint
paths for reliable QoS routing", International Journal of Communication
Systems, pp. 779-798, 2003.
[12] Srihari Nelakuditi, Srivatsan Varadarajan, and Zhi-Li Zhang, "On
localized control in QoS routing", IEEE Trans. Automatic Control, vol.
47, no. 6, pp. 1026-1032, 2002.
[13] Mohammad Hammoudeh and Robert Newman, "Adaptive routing in wireless
sensor networks: QoS optimisation for enhanced application performance",
Elsevier, Information Fusion, pp. 1-13, 2013.
[14] Djamel Djenouri and Ilangko Balasingham,
"Traffic-differentiation-based modular QoS localized routing for wireless
sensor networks", IEEE Transactions on Mobile Computing, vol. 10, no. 6,
pp. 797-809, 2011.
[15] Shiva Murthy G, R. J. D'Souza, and Varaprasad G, "Reliability
analysis of route redundancy model for energy efficient node disjoint
multipath routing in wireless sensor networks", Elsevier, in proc. ICMOC,
pp. 1487-1494, 2012.
[16] Sanghita Bhattacharjee and Subhansu Bandyopadhyay, "Lifetime
maximizing dynamic energy efficient routing protocol for multi hop
wireless network", Elsevier, Simulation Modelling Practice and Theory,
pp. 15-29, 2013.
[17] Carlos Velasquez-Villada and Yezid Donoso, "Multipath routing
network management protocol for resilient and energy efficient wireless
sensor networks", Elsevier, in proc. ITQM, pp. 387-394, 2013.
[18] Jiming Chen, Ruizhong Lin, Yanjun Li, and Youxian Sun, "LQER: a
link quality estimation based routing for wireless sensor networks",
Sensors, pp. 1025-1038, 2008.
[19] Wei Quan, Fu-Teo Zhao, Jian-Feng Guan, Chang-Qiao Xu, and Hong-Ke
Zhang, "An integrated link quality estimation-based routing for wireless
sensor networks", Elsevier, pp. 28-33, 2011.
[20] Tommaso Melodia, Mehmet C. Vuran and Dario Pompili, "The state of
the art in cross-layer design for wireless sensor networks", Springer,
pp. 78-92, 2006.
[21] Lucas D. P. Mendes and Joel J. P. C. Rodrigues, "A survey on
cross-layer solutions for wireless sensor networks", Elsevier, Journal of
Network and Computer Applications, pp. 1-12, 2010.
[22] Ian F. Akyildiz, Mehmet C. Vuran and Ozgur B. Akan, "A cross-layer
protocol for wireless sensor networks", IEEE, pp. 1102-1107, 2006.

This paper may be cited as:


Sridevi S., Rumeniya G. and Usha M., 2014. Energy-aware QoS Based
Routing Protocols for Heterogeneous WSNs: A Survey. International
Journal of Computer Science and Business Informatics, Vol. 11, No. 1,
pp. 1-19.


Optimization of Outsourcing ICT Projects in Public Organizations; Case
Study: Public Center of Iranian ICT Studies
Majid Nili Ahmadabadi
Department of Management,
Qom University, Qom, Iran

Abbas Bagheri
Department of Industrial engineering,
Islamic Azad University, Firuzkuh Branch,
Tehran, Iran

Fariba Abolghasemi
Department of Management
Payam Noor University, Tehran Branch,
Tehran, Iran

ABSTRACT
Outsourcing is a strategic decision and consequently has a fundamental impact on the
performance and costs of an organization. If its intangible and economic costs are not
assessed appropriately, it will erode competitive advantage and drag the organization
to the verge of destruction. In this paper, drawing on over a decade of outsourcing
experience in a national center as well as the models presented in this field, an
integrated model is presented which could be of great help in remarkable cost
reduction and would result in high productivity in national projects. This contribution
is based on a knowledge management module. In this paper, Momma, J. and Hvolby's
four-stage model of outsourcing is first introduced as the base model. Then, through
interviews on the outsourcing of research projects in the Public Center of Iranian ICT
Studies and their analysis, the results and knowledge obtained are discussed within a
model framework for research project outsourcing. In the end, the points required for
using the proposed model and the benefits of its usage are introduced.
Keywords
Outsourcing, Public center, Research management, Research projects.

1. INTRODUCTION
Moving from an industrial society to the information society, the turning
of national economies into a global economy, centralization giving way to
decentralization and, finally, hierarchical structures giving way to
networked systems are obvious signs of fundamental changes in today's
environment, and answering them undoubtedly requires new solutions and
strategies. One of

these solutions is outsourcing, which covers a wider range every day, so
that not only the government but also the private sector has warmly
welcomed the approach. Given the key role of outsourcing in developing and
enhancing skills, predictions by valid statistical and scientific centers
suggest that the volume of the global outsourcing market in the current
year will reach one trillion dollars. Moreover, according to the
predictions made, the volume of design work and outsourced research and
development grew from 179 billion dollars in 2004 to 345 billion dollars
in 2009 [1]. According to the Gartner research group, the global market
for IT outsourcing in 2008 was equivalent to U.S. $253.1 billion and will
grow 7.2% per year. Meanwhile, Forrest predicts that European
organizations will spend more than 238 billion Euros on IT outsourcing in
2008. Outsourcing has acquired particular complexities with the growth and
evolution of other management issues. The concept of outsourcing therefore
becomes a complex issue in conjunction with organizational growth: not
only financial savings but also other prospects, including remaining in a
competitive market, the need to join global markets, rising customer
expectations and market competitiveness, make outsourcing a need, a
requirement and a pressing issue for an organization [4].
Outsourcing has its own characteristics, and different sectors of
activity can be outsourced in different industries. Thus, the perception
of outsourcing as a strategy, a guideline and a method raises the
questions of what characteristics the transferred activities have and what
the outcomes of outsourcing are for the organization. Therefore, according
to the subject of outsourcing, its characteristics can be studied and an
appropriate name given to it. In most current industrial and manufacturing
companies, a part of the production process has usually been outsourced,
whereas between research centers outsourcing takes the form of bilateral
cooperation. The prerequisite for large and innovative companies is an
open and competitive economy. On the other hand, outsourcing occurs in an
organization when a management need is felt within it; without management
belief and resource allocation, outsourcing will not occur as it should.
Iran has various experiences of outsourcing, but this kind of outsourcing
is fundamentally different from what is done in companies such as Cisco,
IBM and Microsoft, which have changed from product-oriented to
service-oriented companies [4]. Furthermore, some measures have been taken
to change organizational strategy without prior planning for the entire
process and its consequences, and poor results have been obtained.
In this paper, outsourcing will first be studied as a strategic activity
in global organizations. Since the models presented for outsourcing are very
rare, the one most consistent with conditions in Iran will be introduced
and then examined through the Alpha center. The experiences of this center
over a decade of outsourcing with various strategies will be analyzed and,
finally, a developed model of outsourcing activities for research centers
will be presented. In the end, the parameters, inputs and outputs of the
presented model and the benefits of its usage will be presented.
2. BACKGROUND STUDY AND AN OVERVIEW ON RESEARCH
OUTSOURCING AND ITS GLOBAL ADVANTAGES

2.1. Definitions
In its general sense, outsourcing means organizing that part of one's
redundant activities that are not involved in the organization's value
chain and should be transferred outside the organization. Outsourcing in
its particular sense means organizing tasks and activities that are
involved in the organization's value chain [4]. In the field of research,
research management means managing one or more research projects. To
manage a research project in an organization, the major research
approaches must be defined in line with the organization's major goals,
multiple smaller research projects must be defined along each approach,
and the required budgets must be allocated to them. Then a capable staff
member should take responsibility for the research management of an
approach and, with a group of colleagues, carry out the activities related
to outsourcing its projects [5].
2.2. Research Outsourcing
The outsourcing of R&D activities became popular from the late 1990s
among pharmaceutical companies. One reason for this is filling the
research gap among organizations. Another reason for using R&D
outsourcing is that producing new products requires a long time in the
process of innovation and market introduction. Other incentives that lead
companies to outsource research and development include:
- increased R&D productivity (reduced costs and increased revenues);
- the success of similar companies in outsourcing research and
development;
- the acquisition of knowledge generated by the outsourcing partner;
- multilateral engagement of the organization with colleagues,
competitors and customers, and thus more information for doing research;
- the ability to access the above-mentioned data through the outsourcing
partner organization.
In such outsourcing, the general outsourcing process is used and no
specific principle or stage is added or removed [6].

The first advantage of the state's research outsourcing is reducing the
costs and time of such projects. The second advantage concerns the laws
that have been enacted for public agencies: since the number of these laws
reduces the speed of state organizations in these matters and raises
problems in budget allocation, outsourcing these projects is a way to
overcome these legal problems. The static nature of government
organizations, people's lack of commitment to the organization, the need
for mobility in research projects, the need for coordination and
integration of minds in such projects, limited and slow governmental
monitoring (which often addresses the form, not the content) and similar
problems can be solved by outsourcing research projects in governments
[7].
3. RELATED WORKS
In 2003, research by students of operations research integrated strategic
management and organizational theory and applied it to
inter-organizational communications [16]. Using the above research,
Holcomb, H. and Hitt, M. proposed a theoretical model for strategic
outsourcing with resource-based and transaction-based approaches in 2007.
In his article written in 2008, McIvor, R. presented as a model the
strategic decisions toward outsourcing or not outsourcing these
activities [18].
Several theories have previously been presented for outsourcing, but they
have not often led to a structural model. The existence of a scientific
model can be effective in successful outsourcing. The static and
non-intelligent nature of the information produced by current tools and
techniques supports neither the management and control of dynamic
processes nor the key activities at the operational and tactical levels.
Therefore, practitioners must rely on on-the-job training, tacit
knowledge, colleagues' suggestions, expert advice and, finally, trial and
error. Many authors have explored the impacts of outsourcing on
flexibility and on the value creation within that flexibility; value
creation can be obtained through the combination of strategic, economic,
technological and human factors [19, 20, 21]. A significant issue is that,
for a successful outsourcing, the production characteristics and market
considerations should be consistent with the legal personality, functional
strategies and business of the company. The evidence indicates that only a
small number of researchers have attempted to develop a scientific model
at an experimental level, since their aim was to publicize their results
in books and in scientific, non-applied communities [22, 23]. Momma, J.
and Hvolby have indicated that there is no outsourcing framework
consistent with the harsh conditions of a real business atmosphere, and
they themselves presented a model and framework for outsourcing [24].

4. METHODOLOGY
In this article, the model and framework introduced by Momma, J. and
Hvolby are introduced as the base model, criticized, and then developed in
the light of the Alpha center's experience. This model includes a wide
range of tools and techniques for searching for suitable suppliers,
monitoring and improving them, as well as performing outsourcing projects
and managing communications within such projects. Overall, the tools and
techniques help implement the model, and collecting, structuring and
accessing the required data supports its stages. Companies' outsourcing
decisions are based on three criteria:
- The company does not outsource a product whose production draws on its
vital resources and capabilities.
- Outsourcing is assigned to suppliers who create competitive advantages
(such as a larger scale, lower costs or greater efficiency).
- Sometimes outsourcing is a way to improve production efficiency, create
staff commitment and, as a result, increase the competitiveness and
profitability of the company.
Figure 1 shows the stages and outsourcing methods of Momma, J. and
Hvolby's model.


Figure 1: Momma, J. and Hvolby's four-phase model [24]


According to this model, the characteristics of the suppliers are
collected for each candidate. Important information about history, price,
quality, maximum production capacity, production and delivery time,
warranty, and all such information about the organization can be an
important determinant. The next stage is like passing the suppliers
through a primary filter. At the next stage, the agreements on the details
are made, some candidates are eliminated, and one or more final suppliers
are selected. The last stage of this model indicates that, based on the
supplier's performance and observation of his working process, a decision
can be made about continuing the cooperation or replacing him.
4.1 Review of a decade of outsourcing in the Public Center of Iranian
ICT Studies

In this part, the outsourcing activities of the center (hereinafter
briefly called "the center") are examined from 79 to 86 of the Iranian
calendar (roughly 2000 to 2007). The center's activities can be divided
into three periods. In the first period, by changing strategy with the aim
of privatizing the activities, some of the center's main activities (such
as PCBs and workshops) were removed from the main chain of activities and
outsourced. In addition, some major activities, such as the control center
for research projects, human resource management, etc., were also
outsourced. In the second period, the center's strategy changed wholly
from conducting projects to steering projects.

Table 1: Outsourcing in the Public Center of Iranian ICT Studies across the three periods (columns of the original table: Year; Center's macro strategy; Outsourcing issue; Outsourcing aim; The process and working method; Results and outcomes)

First period (2000-2003)
- Center's macro strategy: conducting Ministry of ICT research projects; insourcing the Digital Research Project (a project conducted by the contractor's personnel); conducting the projects with an emphasis on fundamental projects by center personnel in cooperation with universities.
- Outsourcing issue: contract-side activities such as transportation, services and restaurants; selling research projects and subsidiaries; contractual human resource management; circuit boards; controlling the projects; consultancy and projects abroad.
- Outsourcing aim: privatization; reducing the manpower associated with the main center; enhancing international relations.
- The process and working method: financial assistance for the establishment of satellite companies by staff and other internal and external people; selling the stock repository and material; selling various workshops; transferring transportation, restaurant and service activities to a contractor; assigning project management and informal human resource management to the contractor; obtaining advice from, and carrying out joint projects with, authorities abroad.
- Results and outcomes: the center lost the independent conduct of applied projects; it lost its workshops and facilities and thus had to assign these activities outside the center, causing wide disappointment among personnel in areas such as the taxation of projects.

Second period (2004-2005)
- Center's macro strategy: emphasis on conducting effective ICT projects for the country, along with determining the ministry's guidelines in purchasing, assigning, decision-making, policy-making, etc.
- Outsourcing issue: almost all research projects are outsourced and the colleagues only keep the management of the research; the field of study in this period is research activities related to ICT at the national level (other ministries and the three branches of government).
- Outsourcing aim: attracting the macro research budgets and outsourcing projects independently of whether there are final customers in the ministry demanding the results.
- The process and working method: outsourcing projects and designs at a wide, macro level to external companies; administering contracts with formal and informal personnel of the center; voiding the precedence of the previous satellite companies; obtaining consultation for, and doing shared projects, abroad; doing projects for the Judiciary, banks, etc.; performing outsourcing as a special activity through center personnel (with special, unstructured payments), with wide outsourcing of projects, wide attraction of budgets, and wide participation of universities and others in doing the projects.
- Results and outcomes: some personnel at the center remained interested in fundamental and developmental projects; dissatisfaction among the personnel over the special payments to people; an accumulation of conducted or semi-conducted projects without a customer; insufficient transparency in payments; multiple managed or unmanaged costs; dissatisfaction of the companies receiving outsourced work over the insufficiency of outsourcing management.

Third period (2006- )
- Center's macro strategy: doing various strategic, fundamental and developmental projects for the ICT ministry and its subsidiary organizations, only when there are existing requests and a final customer in the subsidiary organization.
- Outsourcing issue: doing research management with a new organization; outsourcing research activities.
- Outsourcing aim: attracting the research budgets of the center; doing the projects requested by the ministry and its subsidiary organizations to meet their needs.
- The process and working method: outsourcing projects and designs at a more limited level to universities and domestic companies; performing outsourcing as a special activity by center personnel (with special, and to some extent structured, payments); building an elite center for the macro management of the company's strategies (comprised of ordinary personnel of the center, though under different regulations); outsourcing research projects to external parties.
- Results and outcomes: some personnel interested in fundamental and developmental designs remained at the center; dissatisfaction among the personnel over the special payments to people; inability to attract budgets and conduct the required research projects, owing to harsh regulation applied to the customer and the recovery of working costs from them; a few strategic activities carried out by the center's personnel without any result, together with extensive office work; exhaustion of research personnel from doing work without result; unmanaged costs; limited outsourcing of projects alongside wide attraction of budgets; limited participation of universities and companies in the projects due to the loss of trust caused by the insufficiency of outsourcing management.
In Table 2, the condition of the Public Center of Iranian ICT Studies is considered and analyzed in terms of eight important factors in project outsourcing. Examining the six aspects of Momme and Hvolby's model in Table 2 shows that the research center did not follow a specific pattern: a few concise regulations were considered sufficient and trial-and-error methods relying on traditional ways were applied; therefore, multiple problems arose.
Table 2: The Public Center of Iranian ICT Studies in terms of the important factors affecting Momme and Hvolby's outsourcing process

- The prospect and written organizational strategy. Period 1: did not exist. Period 2: provided for the first time. Period 3: existed but was not updated.
- Organizational memory / continuous improvement. Periods 1-3: did not exist.
- Competitive analysis. Period 1: done only in the minds of managers. Period 2: done in the minds of managers; the experience is not delivered to other levels of the organization. Period 3: done in the minds of managers; the experience is delivered to other levels of the organization.
- Evaluation and approval of suppliers. Period 1: done only at the level of managers; does not involve the staff. Period 2: done as expert work (by scholars who have no expertise in this regard and have not been trained), with some decisions made at the management level; a primary guideline has been prepared for this issue. Period 3: done as scholarly work (by scholars who have previous experience in this regard but have likewise not been trained), with some decisions made at the management level; a vague guideline has been prepared for this issue.
- Negotiation for contract. Period 1: done only at the management level; does not involve the staff. Periods 2 and 3: the significant contents of the contract are provided to the service presenters at the scholarly level, but the decisions are made at the level of top managers.
- Project implementation and knowledge transfer. Period 1: the project is done, but knowledge management is not transparent. Periods 2 and 3: the project is done, but knowledge management is not transparent, or some inattentiveness arises.
- Communications management. Period 1: communications management is also outsourced and is done by the project control unit; project control covers only the temporal control of the project (reporting delays to the management), without regard to the project contents, the reasons for falling behind, or their analysis. Periods 2 and 3: two organizational units, the center for planning and studying research designs and the office for supervision and evaluation, perform communications management imperfectly under the research branch.
- Contract termination. Period 1: done on the management's opinion alone. Periods 2 and 3: done with the opinion of research management and the middle centers in the research ministry, and finally with the approval of top management.

4.2 The suggested model for outsourcing the research projects
With respect to what was indicated above, it was observed that the management of the Public Center of Iranian ICT Studies made some wrong decisions in the field of outsourcing, and the results of implementing them were not satisfactory. One reason is the instability of top managers in this center: managers tend to be changed within a short period, and the new managers lacked the management skills that the previous managers had built up during their tenure. Moreover, because the center lacks an organizational memory, new managers could not obtain the experiences of their predecessors as they should, despite their willingness to use them, and as a result people's individual memories were relied on instead. Obviously, this method was replete with mistakes and thus was not fruitful. A unit named "organizational memory" in an organization can preserve old, useful information; it makes it possible for the top managers to devise more useful outsourcing strategies for the organization, which leads to continuous improvement in this regard.
Momme and Hvolby's four-stage model focuses on the customer, and useful parameters are considered for the primary identification and evaluation, the choice and approval of the outsourcing service supplier, approving contracts and performing the projects, and finally evaluation and improvement; but in this model there is no relationship between the outsourcing implementation phases and organizational strategy or organizational memory. In addition, continuous improvement should occur in all sectors and working processes of an organization, yet the model pays no attention to continuous improvement in phases 1 and 2. These are the main weaknesses of the above model. In the same article, Momme and Hvolby presented, as a framework for project outsourcing, a process that was proposed by Laudon in 1998 for outsourcing activities and completed by Momme in 2001. This framework comprises six consecutive phases: 1) competitive analysis, 2) evaluation and approval, 3) negotiation for contract, 4) project implementation and knowledge transfer, 5) communications management, and 6) contract termination. This process has also been presented within the four-stage model, but it does not run as a repetitive cycle, and again no attention is paid to organizational strategy and organizational memory. It is obvious that if the working results do not feed back into changing and modifying organizational strategies, continuous improvement will not be obtained.
The developed model for outsourcing the research projects, shown in Figure (3), consists of six stages, each with key activities along with the related assessors and desirable outputs (known as decision variables). These six stages are obtained from competitive outsourcing thinking and from implementing the ideas of Bragg [22]; as previously mentioned, they were presented in the framework suggested by Momme and Hvolby [24]. These six factors are proposed as the main components of the outsourcing process and were shown in Figure (2). In addition, two further factors, namely the organizational prospect and strategy as well as the organizational memory, can be observed in the figure. The organizational memory plays the main role in making the organization a learner and in maintaining organizational records; it is related to all units and stages, so that newly obtained information and knowledge are preserved in memory and the previous records (information and knowledge created in the organization) are given to the associated units to improve current work. This unit can also give newly created knowledge to the top managers to be used in improving the macro organizational prospect and strategy. Since insourced projects, too, can be a source of knowledge creation and added value for the organization, they are also considered in this model; but since that issue does not relate to outsourcing, a general block is sufficient in this regard. In what follows, the outsourcing process, its key activities, performance assessors and expected results are explained:
[Figure (3): The developed model of outsourcing for the research projects. The outsourcing process runs through six blocks: competitive analysis; evaluation and supplier approval; negotiation for making the contract; project implementation and knowledge transfer; communication management with the supplier; results evaluation and contract termination. Feeding into the process is the design of the perspective and organizational strategy, planning and tactics; beneath it sit the organizational memory/controlling standards and the insourcing processes.]

The process of research project outsourcing comprises six stages, as follows:

Stage 1: competitive analysis
In this stage, other research organizations in the relevant scientific area are surveyed and information about them is gathered. The key activities, performance indices and expected outputs in this stage are as follows:
- Key activities: strategic analysis, SWOT analysis, vital and non-vital competencies, mapping, etc.

- Performance indices: increase in competitive advantages, amount of added value, information related to final consumers, level of agreement about outsourcing in the organization, etc.
- Expected outputs: suitable strategic steering, increased knowledge of competitiveness and profitability, level of shared understanding of probable advancement opportunities.

Stage 2: evaluation and approval of suppliers


In this stage, the candidates identified in the previous stage are evaluated using indices such as the number of the organization's researchers; the number of articles, inventions and other achievements; the organization's record in research; and the organization's scientific grade and rank. The key activities, performance indices and expected outputs in this stage are as follows:
- Key activities: defining the vital indices for evaluation (quantitative and qualitative), surveying the details of supplier features, evaluation of real performance, etc.
- Performance indices: the feasibility of the assessment criteria, the number of approved suppliers, the closeness of the suppliers' geographical location to the customer's place, determining a well-informed and experienced observer, the observer's agreement on the project contents, etc.
- Expected outputs: reduced risk in choosing the supplier, better review of and access to vital production competencies, level of understanding of the customers' opinions through the suppliers and their higher performance, assurance of obtaining suitable results in the observer's opinion, enhanced internal processes for supplier evaluation and the related guidelines, etc.

Stage 3: negotiation for making a contract


In this stage, a number of meetings are held with the research organization's representatives, in which the parties try to reach a primary agreement. The key activities, performance indices and expected outputs in this stage are as follows:
- Key activities: defining the projects and legal and business
regulations, negotiation about the duration of contract and the time
of its delivery and bilateral agreements etc.

- Performance indices: level of agreement about the terms, legal and
business provisions, the ability to determine delivery condition, the
inclination and effort of the supplier for cooperation during the
negotiation.
- Expected outputs: close, intimate and long-term relationship,
bilateral agreement regarding contract provisions, bilateral will for a
fair cooperation for the two sides, paying attention to the secrecy of
working results, etc.

Stage 4: project implementation and knowledge transfer


In this stage, the research work begins. Ensuring a shared understanding of the expected outputs is among the important points to be considered here; otherwise, the project will start with deviations, and revising them costs both money and time. The key activities, performance indices and expected outputs in this stage are as follows:
- Key activities: establishment of the suppliers' incorporation principles, defining the method of interactions, the compatibility of the organization with the suppliers' performance, etc.
- Performance indices: the ability to perform changing processes, level
of profitability and flexibility capacity, the ability in defining
interaction costs (cost curve), etc.
- Expected outputs: more capital and more accessible resources,
increasing further engineering benefits, logical balance between
domestic production and outsourcing

Stage 5: relationship management with the supplier


Choosing an observer who is competent both scientifically and executively, and who supervises the research outputs in due time against the services specified in the contract, is vital in this stage. In research projects in which more than one area of expertise is used, multiple observers should be appointed. The key activities, performance indices and expected outputs in this stage are as follows:
- Key activities: creating communications, supervising systems and information, the relationship among developmental projects, continuous evaluation of performance, etc.
- Performance indices: the ability to measure the minimal impacts on the relations, the product life-cycle curve and the timing of later market entry, innovation and changes in the customers' habits, costing structures, the ability to deliver, and the final quality of the product.
- Expected outputs: the percentage of the final quality of the product, reduced construction cost relative to other samples or better control over the costs, less time in presenting new products, etc.

Stage 6: results evaluation and contract termination


One of the important points in this stage is the transfer of the new knowledge created in the project from the supplier to the organization. This knowledge is not merely the project documentation; sometimes certain skills must be transferred with it. For this purpose, training periods and knowledge management guidelines can be used. The key activities, performance indices and expected outputs in this stage are as follows:
- Key activities: evaluating the options for continuing the contract, changing the supplier or returning to domestic production, making review bases in the competitive strategy, etc.
- Performance indices: the need to achieve the determined objectives in the outsourced competitive area, the ability to obtain assurance of passing the critical stage, the ability to replace the supplier or the outsourcing, etc.
- Expected outputs: awareness of the right time for making long-term contracts, replacing the supplier or reconsidering outsourcing, a better basis for strategic planning, etc.
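For readers who want to operationalize the model, the six stages and their attached key activities, performance indices and expected outputs can be represented as a small data structure. The following Python sketch is purely illustrative (the class and field names are ours, not the authors'); it shows how a completed stage could deposit a record into the organizational memory described in the next section, only the first two stages being filled in for brevity:

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    """One stage of the outsourcing process (illustrative field names)."""
    name: str
    key_activities: list = field(default_factory=list)
    performance_indices: list = field(default_factory=list)
    expected_outputs: list = field(default_factory=list)

# The six stages of the developed model, in order.
PROCESS = [
    Stage("competitive analysis",
          ["strategic analysis", "SWOT analysis"],
          ["increase in competitive advantages"],
          ["suitable strategic steering"]),
    Stage("evaluation and approval of suppliers",
          ["defining vital evaluation indices"],
          ["number of approved suppliers"],
          ["reduced risk in choosing the supplier"]),
    Stage("negotiation for making a contract"),
    Stage("project implementation and knowledge transfer"),
    Stage("relationship management with the supplier"),
    Stage("results evaluation and contract termination"),
]

def run_process(memory: list) -> None:
    """Walk the stages in order, appending each stage's record to the
    organizational memory so that later projects can reuse it."""
    for stage in PROCESS:
        memory.append({"stage": stage.name,
                       "outputs": stage.expected_outputs})

memory: list = []
run_process(memory)
print(len(memory))  # one memory record per stage, i.e. 6
```

The point of the sketch is the last step: every pass through the process leaves a trace in the memory, which is exactly the feedback loop the four-stage model lacks.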

4.3 Organizational Memory Unit

The databases of this unit save information on the capabilities of the suppliers and on how they improved during their cooperation with the company in current and past projects. These databases help distinguish among suppliers within a wide pool of raw-material, product and technology suppliers. Furthermore, the unit records a history of the potential outsourcing partners that are useful to work with now and will remain useful in the future. The existence of an organizational memory unit in the process of research project outsourcing preserves the decision-making and negotiation skills and the mastery of concepts and methods built up in the organization during this process, and these develop over time. In this way, experienced and knowledgeable people who are well aware of the methodologies and concepts are involved in the outsourcing process, and the organization can ultimately learn how to organize and manage it. A capable organizational memory can help the organization in the following areas:
- Risk management
- Determining the key capability of the receiving company
- Determining the key capability of the company giving services, and balancing the two key capabilities
- Outsourcing project management (e.g. according to the PMBOK standard)
- Changing and revising the prospect of the organization's managers
- Surveying the suggestions and determining the best one from the viewpoint of an independent supervisor
- Determining the criteria of professional ethics for preserving secret information on the part of the service giver
- Determining the new paradigms of the service-giving organization in the field of service outsourcing
- Helping revise the organizational culture to suit outsourcing
- Reviewing the political aftermath of outsourcing (especially international outsourcing)
- Determining the indices of technology alignment between the service giver and the service receiver
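As a rough illustration of such a unit, the supplier-history databases described above can be sketched as a small in-memory store. All class, method and supplier names here are hypothetical, not part of the paper's model; the sketch only shows how recorded history supports the supplier evaluation of stage 2:

```python
from collections import defaultdict

class OrganizationalMemory:
    """Minimal sketch of an organizational-memory unit: it records each
    supplier's performance per project and answers history queries, so
    that new managers can reuse past outsourcing experience."""

    def __init__(self):
        # supplier name -> list of past project records
        self._records = defaultdict(list)

    def record(self, supplier: str, project: str, score: float, lessons: str):
        """Store one project outcome, including the lessons learned."""
        self._records[supplier].append(
            {"project": project, "score": score, "lessons": lessons})

    def history(self, supplier: str):
        """Past records of one supplier (consulted in stage 2, evaluation)."""
        return self._records[supplier]

    def best_suppliers(self, n: int = 3):
        """Suppliers ranked by mean past performance score."""
        ranked = sorted(
            self._records,
            key=lambda s: (sum(r["score"] for r in self._records[s])
                           / len(self._records[s])),
            reverse=True)
        return ranked[:n]

om = OrganizationalMemory()
om.record("University A", "protocol study", 0.9, "delivered on time")
om.record("Company B", "circuit board design", 0.6, "weak documentation")
print(om.best_suppliers(1))  # -> ['University A']
```

A real unit would of course persist these records and attach the negotiation and contract documents, but even this toy version captures the paper's point: the ranking survives a change of managers.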

5. The impacts of using the suggested model for outsourcing management in research organizations
The results of the performance indices of the above processes, along with the degree of access to the expected outputs, are sent to the managers and authorities. Given the obtained results, these people can change the organization's strategy, revise the prospect, or issue orders for operational corrective measures. The following are presented as examples in this regard: improving the determination of competence for supervising outsourced projects by research management personnel; improving the observance of security systems and professional ethics by the outsourcing service giver; promoting education (training other personnel and research managers in outsourcing, teaching them the working results, and ensuring the absorption of knowledge); promoting the analysis of performance costs; deciding to grow or shrink inefficient organizational units that add no value to the organization's outsourcing activity (changing the structure); initiating or changing the composition of the supervision council; promoting or changing the strategy for knowledge attraction and transfer; and reviewing the performance of various sectors (group, faculty, viceroys, ...) in the working value cycle in order to prevent task projection, and the like.

6. RESULTS
As mentioned, drawing on more than a decade of outsourcing experience in a national center and on the models presented in this area resulted in an integrated and unprecedented scientific model. Its main contribution is based on a knowledge management module: using this module and integrating it with the previous modules yields a kind of intelligence that automatically prevents the recurrence of previous mistakes. This article aimed to present a pattern for outsourcing research projects. Although many studies have been conducted on outsourcing and its models, organizational memory and the creation of continuous improvement in outsourcing have received far less attention. The effective role of this unit in giving access to the past and in the organizational learning achieved through it was given particular attention in this article, and a model based on it was presented. The experiences gained by the Public Center of Iranian ICT Studies were used as a case study. These experiences also show the need for an organizational memory unit in the outsourcing company and the devastating impact of neglecting it. Outsourcing in any case, and especially for research projects, can be managed using the model above and improved over time. The benefits of using this model for outsourcing organizations are as follows:
- Following a specific framework and establishing discipline in outsourcing
- Being aware of the infrastructure required for applying and managing the outsourcing process
- The optimal design of organizational units and their relationships for a successful outsourcing application
- Reducing costs and increasing the material and non-material revenues brought by a proper selection of the outsourcing partner
- Learning in the field of outsourcing, and the satisfaction of domestic and foreign customers brought by the organization's progress
- The ability to counsel and guide other organizations in the field of outsourcing and to share knowledge and experience with them
- Using the organizational memory for cases other than outsourcing in the organization (improving other units and organizational dimensions)
- Other benefits of the organization becoming a learner in the field of outsourcing

Relative to current models and methods, this method has no particular limitation or disadvantage; the only additional organizational infrastructure it requires is the organizational memory unit.

7. REFERENCES
[1] Carbone, J., "EMS profits get squeezed," Purchasing 134, 2005, pp. 29-33.
[2] Pei, Z., Zhen-xiang, Z. and Chun-ping, H., "Study on Critical Success Factors for IT Outsourcing Lifecycle," International Conference on Wireless Communications, Networking and Mobile Computing (WiCom 2007), 21-25 Sept. 2007, pp. 4379-4382.
[3] Gonzalez, R., Gasco, J. and Llopis, J., "Information Systems Outsourcing: A Literature Analysis," Information & Management, Vol. 43, No. 7, 2006, pp. 821-834.
[4] "Outsourcing, the strategic decision of successful enterprises," a meeting with the presence of management authorities, Tadbir magazine, Issue 166, Esfand 2005.
[5] Executive guidelines of research management, Iran Telecommunications Research Center, Mordad 2006.
[6] Higgins, M., "The outsourcing of R&D through acquisitions in the pharmaceutical industry," Journal of Financial Economics 80, 2006, pp. 351-383.
[7] Ulset, S., "R&D outsourcing and contractual governance: An empirical study of commercial R&D projects," Journal of Economic Behavior & Organization, Vol. 30, 1996, pp. 63-82.
[8] Gilley, K.M. and Rasheed, A., "Making more by doing less: an analysis of outsourcing and its effects on firm performance," Journal of Management 26 (4), 2000, pp. 763-790.
[9] Billi, J.E., Pai, C.W. and Spahlinger, D.A., "Strategic outsourcing of clinical services: a model for volume-stressed academic medical centers," Health Care Management Review 29 (4), 2004, pp. 291-297.
[10] Roberts, V., "Managing strategic outsourcing in the healthcare industry," Journal of Healthcare Management 46 (4), 2001, pp. 239-249.
[11] Chen, I.J. and Paulraj, A., "Towards a theory of supply chain management: the constructs and measurements," Journal of Operations Management 22 (2), 2004, pp. 119-150.
[12] Shy, O. and Stenbacka, R., "Strategic outsourcing," Journal of Economic Behavior and Organization 50 (2), 2003, pp. 203-224.
[13] Fine, C.H. and Whitney, D.E., "Is the make-buy decision a core competence?" In: Muffato, M. and Pawar, K. (Eds.), Logistics in the Information Age, Servizi Grafici Editoriali, Padova, Italy, 1999, pp. 31-63.
[14] Quinn, J.B. and Hilmer, F.G., "Strategic outsourcing," Sloan Management Review 35 (4), 2004, pp. 43-55.
[15] Quinn, J.B., "Strategic outsourcing: leveraging knowledge capabilities," Sloan Management Review 40 (4), 1999, pp. 9-21.
[16] Grover, V. and Malhotra, M.K., "Transaction cost framework in operations and supply chain management research: theory and measurement," Journal of Operations Management 21 (4), 2003, pp. 457-473.
[17] Holcomb, H. and Hitt, M., "Toward a model of strategic outsourcing," Journal of Operations Management 25, 2007, pp. 464-481.
[18] McIvor, R., "What is the right outsourcing strategy for your process?" European Management Journal, Vol. 26, 2008, p. 24.
[19] Lei, D. and Hitt, M.A., "Strategic restructuring and outsourcing: the effect of mergers and acquisitions and LBOs on building firm skills and capabilities," Journal of Management 21 (5), 1995, pp. 835-859.
[20] Brandes, H., Lilliecreutz, J. and Brege, S., "Outsourcing: success or failure?" European Journal of Purchasing & Supply Management 3 (2), 1997, pp. 63-75.
[21] Lonsdale, C., "Effectively managing vertical supply relationships: a risk management model for outsourcing," Supply Chain Management: An International Journal 4 (4), 1999, pp. 176-183.
[22] Bragg, S.M., Outsourcing: A Guide to Selecting the Correct Business Unit, Negotiating the Contract, Maintaining Control of the Process. Wiley, New York, USA, 1998.
[23] Wasner, R., "The outsourcing process: strategic and operational realities," Ph.D. Thesis, Department of Management and Economics, Division of Industrial Marketing, Linkoping University, Sweden, 1999.
[24] Momme, J. and Hvolby, H., "An outsourcing framework: action research in the heavy industry sector," European Journal of Purchasing & Supply Management, No. 8, 2002, pp. 185-196.
[25] Laudon, K.C. and Laudon, J.P., Management Information Systems: New Approaches to Organization and Technology, 5th Edition. Prentice-Hall International, Englewood Cliffs, NJ, 1998.
[26] Momme, J., "Framework for outsourcing: based on theoretical review and empirical findings from Danish heavy industry," In: Hvolby, H.H. (Ed.), The Fourth SMESME International Conference, Department of Production, Aalborg University, Denmark, 2001, pp. 265-274.
[27] Momme, J. and Hvolby, H., "How Core Competence Thinking and Outsourcing Interrelate," Proceedings of the 13th IPS Fuglsø Research Seminar, Department of Production, Aalborg University, Denmark, 1998, pp. 233-260.

This paper may be cited as:
Ahmadabadi, M. N., Bagheri, A. and Abolghasemi, F., 2014. Optimization of Outsourcing ICT Projects in Public Organizations; Case Study: Public Center of Iranian ICT Studies. International Journal of Computer Science and Business Informatics, Vol. 11, No. 1, pp. 20-39.


An Optimized CBIR Using Particle Swarm Optimization Algorithm

Subhakala S.
Sri Krishna College of Technology,
Coimbatore, India.

Bhuvana S.
Sri Krishna College of Technology,
Coimbatore, India.

Radhakrishnan R.
Sri Shakthi Institute of Engineering and Technology,
Coimbatore, India.

ABSTRACT
Storage and retrieval of images over a large database is an important issue, and Content Based Image Retrieval systems provide a solution for it. In Content Based Image Retrieval (CBIR), similar images are retrieved using low-level features, such as color, texture and edges, that are extracted both from the query image and from the database images. In CBIR, low retrieval time combined with high accuracy is a desired property, and the proposed system achieves it by using the Particle Swarm Optimization algorithm. The proposed system consists of the following phases: (i) color feature extraction using the YUV (luminance (Y), blue chrominance (U), red chrominance (V)) method; (ii) texture feature extraction using the Grey Level Co-occurrence Matrix; (iii) edge feature extraction using the Edge Histogram Descriptor; (iv) measurement of the similarity between the query image and the database images using the Euclidean distance; and (v) optimization of the retrieved result using Particle Swarm Optimization. In comparison with the existing approach, the proposed approach significantly improves the precision and recall of the retrieval system.
Keywords
Accuracy, Particle Swarm Optimization, Luminance, Chrominance, Edge Histogram Descriptor.
1. INTRODUCTION
Content Based Image Retrieval is a method which uses visual contents to
search for images in large image repositories, and it has been an active
research area over the last few years. Users are exploiting the
opportunity [1] to access remotely stored images in all kinds of new and
exciting ways. However, this raises the problem of locating a desired
image in a large and varied collection, which has led to the rise of a
new research and development area, CBIR: the retrieval of images on the
basis of features automatically extracted from the images themselves.
The increase in computing power and electronic storage capacity has led
to an exponential increase in the amount of digital content available to users
in the form of images and video, which form the basis of many
entertainment, educational and commercial applications. Consequently,
searching for relevant information in the large space of image and video
databases has become more challenging. How to achieve accurate retrieval
results is still an unsolved problem and an active research area.
Currently available CBIR techniques retrieve stored images from a
collection by comparing features [6][7] automatically extracted from the
images themselves. The most commonly used features are color, shape and
texture.
The proposed system uses color, texture [11] and edge feature extraction.
From the extracted features, similarity is measured using Euclidean
distance, and the results are optimized using Particle Swarm
Optimization. This system achieves better retrieval accuracy.
1.1 Feature Extraction
Feature extraction is a form of dimensionality reduction: it reduces the
input size, which is helpful when images are large. A reduced feature
representation is required for tasks such as query matching and
similarity retrieval. Feature extraction [3] is very different from
feature selection: feature extraction consists of transforming arbitrary
data, such as text or images, into numerical features usable for machine
learning.

1.2 Similarity Measurement
To compare the similarity between images, the distance between those
images is measured. Examples of similarity measures include Euclidean
distance, histogram intersection, etc.

1.3 Optimization
Optimization [5] is defined as a set of methods and techniques for
designing and using technical systems as effectively as possible within
given parameters. Optimization can be classified into two categories,
local and global; the basic difference between them is the size of the
region where the optimality conditions hold. A local optimum has an
extreme function value compared to the points contained in a small
neighbourhood, whereas the global optimum has the extreme function value
amongst all the points in the whole design space. Even though clustering
algorithms are simple and effective, they are sensitive to initialization
and easily trapped in local optima. Optimization algorithms such as
Particle Swarm Optimization can overcome these drawbacks of clustering
techniques.

2. RELATED WORKS
In recent years many studies have been performed in Content Based Image
Retrieval (CBIR). Lu et al. proposed an image retrieval technique based
on the color features and bitmap of an image. To retrieve more similar
images effectively from digital image databases, the proposed system uses
color distributions: mean and standard deviation are the global
characteristics of the images, while color, shape and texture are the
local characteristics. To improve the retrieval accuracy a bitmap is
used, and the technique performs well in terms of image retrieval
accuracy and category retrieval ability. The system uses RGB for color
extraction, but RGB is not very efficient when dealing with real-world
images, and creating keywords for each image is time consuming because of
the size of the database.
S.-B. Cho et al. presented an image retrieval process that deals with
human preference and emotion by using an interactive genetic algorithm
(IGA). In this method features are extracted from images using the
wavelet transform, providing the means to retrieve images from a large
database. The algorithm works by creating a population of individuals
represented by chromosomes, with crossover and mutation operators used to
induce variations in the population. It requires several genetic
operators, such as crossover and mutation, for good performance, and it
is difficult to retrieve an image that cannot be explicitly specified
because the method deals with emotion.
X. S. Zhou et al. proposed a Genetic Programming framework to discover a
combination of descriptors. Color, shape, and texture descriptors are
used to represent the content of database images, and the Local
Similarity Pattern (LSP) is used in the retrieval process. Image
descriptors, characterized by feature extraction and similarity
computation, are used for the image searching process. A new relevance
feedback method for interactive image search is proposed which adopts a
genetic programming approach to learn user preferences in a query
session. The system allows only a linear combination of similarity
values; however, a more complex combination may be needed to express the
user's needs, and the system requires many iterations for user
satisfaction.
James et al. proposed wavelet-based image indexing and searching, a new
technique for image indexing with a partial-sketch image searching
capability for large image databases. The algorithm characterizes color
variations; the features extracted from the image are the wavelet
coefficients and their variances. To improve the retrieval accuracy a
two-stage algorithm is proposed: in the first stage a crude selection is
performed for the query image, and in the second stage the search is
refined by matching features against the selected images. For better
accuracy in

searching, two-level multi-resolution matching may also be used. Masks
are used for partial-sketch queries.
2.1 Motivation
In the user-oriented CBIR system, the color feature is extracted using
the HSV method, which requires more retrieval time. To address this
problem the YUV method is used, which reduces the retrieval time compared
to RGB and HSV. The YUV method uses mean and standard deviation values to
extract the Y, U and V components. The Interactive Genetic Algorithm uses
genetic operators such as mutation and crossover, which incur a high
computational cost for retrieving images. To bypass this problem the
Particle Swarm Optimization technique is used.

3. PROPOSED WORK
The proposed system applies a user-oriented CBIR approach in which
features [2] are extracted from the query image as well as from the
database. Low-level features such as color, texture and edge are
extracted both from the query image and from the database. In order to
improve the retrieval performance and accuracy, the system uses the YUV
(luminance (Y), blue chrominance (U), red chrominance (V)) method for
color feature extraction. The texture feature is extracted using the Grey
Level Co-occurrence Matrix method and the edge feature is extracted using
the Edge Histogram Descriptor method. After extracting these features,
similarity is computed between the query image and the images in the
database using Euclidean distance, and the retrieved images are optimized
further using Particle Swarm Optimization. The proposed system consists
of the following phases: (i) color feature extraction, (ii) texture and
edge feature extraction, (iii) similarity computation, (iv) PSO
optimization.
3.1 Color Feature Extraction
In this module, the YUV [10] color space is used to extract the color
feature from the query image as well as from the database. After
extraction, the mean and standard deviation are calculated for the YUV
image. The mean of the pixel colors, the average obtained by adding up
all the values and dividing by the number of values, states the principal
color of the image, while the standard deviation of the pixel colors
depicts the variation of the pixel colors. The sample mean and standard
deviation are given in eqns (1) and (2) respectively.

Sample mean = Σx / N                                                   (1)


Standard deviation = √( (Σx² − (Σx)² / n) / (n − 1) )                  (2)
where
Σx is the sum of all data values,
N is the number of data items in the population, and
n is the number of data items in the sample.
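Eqns (1) and (2) can be sketched in a few lines of pure Python (a hedged illustration, not the paper's MATLAB implementation; the helper name channel_stats is hypothetical):

```python
import math

def channel_stats(values):
    """Sample mean (eqn 1) and sample standard deviation (eqn 2)
    of a flat list of pixel values from one color channel."""
    n = len(values)
    mean = sum(values) / n
    # eqn (2): s = sqrt((sum of x^2 - (sum of x)^2 / n) / (n - 1))
    sum_sq = sum(v * v for v in values)
    variance = (sum_sq - sum(values) ** 2 / n) / (n - 1)
    return mean, math.sqrt(variance)

# Example: four pixel values from one channel
mean, sd = channel_stats([10, 12, 14, 16])
```

For a real image, `values` would be the flattened Y, U or V channel of the converted image.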
Algorithm:
Input: Query image
Output: YUV image
Steps:
The query image is given as input.
The color image is read as an RGB image and then converted into YUV
using the rgb2yuv function in MATLAB.
For each row and column, a constant matrix is multiplied with the RGB
values to obtain the YUV components of the image. The formula is
given by
Y =  0.30 R + 0.59 G + 0.11 B
U = −0.15 R − 0.29 G + 0.44 B
V =  0.62 R − 0.51 G − 0.10 B

The mean and standard deviation of the YUV image are calculated using
built-in MATLAB functions.
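A per-pixel version of the conversion can be sketched as follows (pure Python; MATLAB's rgb2yuv operates on whole images, so this scalar helper is only an illustration of the matrix multiplication above):

```python
def rgb_to_yuv(r, g, b):
    """Apply the YUV conversion matrix above to a single RGB pixel."""
    y = 0.30 * r + 0.59 * g + 0.11 * b
    u = -0.15 * r - 0.29 * g + 0.44 * b
    v = 0.62 * r - 0.51 * g - 0.10 * b
    return y, u, v

# White (255, 255, 255) has full luminance and near-zero chrominance
y, u, v = rgb_to_yuv(255, 255, 255)
```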
3.2 Texture Feature Extraction
In this module [9], the Grey Level Co-occurrence Matrix method is used to
extract the texture feature from an image. Texture features such as
energy, which measures the uniformity of the grey-scale image; entropy,
which captures the textural information in an image; auto-correlation;
and homogeneity are extracted from the image.
For a given offset, the co-occurrence matrix is a matrix defined over
an image as the distribution of co-occurring intensity values.
Mathematically, a co-occurrence matrix C is defined over an n × m image
I, parameterized by an offset (Δx, Δy), as:

C(i, j) = Σp=1..n Σq=1..m [ 1 if I(p, q) = i and I(p + Δx, q + Δy) = j,
                            0 otherwise ]                              (3)

where i and j are the image intensity values, p and q are the spatial
positions in the image I, and the offset (Δx, Δy) depends on the
direction used and the distance d at which the matrix is computed. The
'value' of the image originally referred to the grayscale value of the specified

pixel, but could be anything, from a binary on/off value to 32-bit color
and beyond. Note that 32-bit color will yield a 2³² × 2³² co-occurrence
matrix.

[Figure 1 (flowchart): the query image and the image database feed the
feature extraction stage, which comprises color extraction (YUV), texture
extraction (GLCM) and edge extraction (EHD); the extracted features go to
similarity computation (Euclidean distance method), whose retrieved
results are refined by PSO optimization into the final optimized results.]

Figure 1. Overview of the proposed system


Algorithm:
Input: Query image

Output: Texture features
Steps:
The query image is given as input.
Texture features such as entropy, auto-correlation, contrast, etc. are
extracted from the co-occurrence matrix.
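Eqn (3) can be sketched directly in pure Python; the function names glcm and energy are illustrative, and a real implementation (e.g. MATLAB's graycomatrix/graycoprops) would typically also normalize and symmetrize the matrix:

```python
def glcm(image, dx, dy, levels):
    """Grey Level Co-occurrence Matrix of eqn (3): counts how often
    intensity j occurs at offset (dx, dy) from intensity i."""
    rows, cols = len(image), len(image[0])
    C = [[0] * levels for _ in range(levels)]
    for p in range(rows):
        for q in range(cols):
            pp, qq = p + dy, q + dx
            if 0 <= pp < rows and 0 <= qq < cols:
                C[image[p][q]][image[pp][qq]] += 1
    return C

def energy(C):
    """Energy texture feature: sum of squared normalized co-occurrences."""
    total = sum(sum(row) for row in C)
    return sum((c / total) ** 2 for row in C for c in row)

# 2-level toy image, horizontal offset (dx=1, dy=0)
C = glcm([[0, 0, 1], [1, 1, 0]], dx=1, dy=0, levels=2)
```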
3.3 Edge Feature Extraction
The Edge Histogram Descriptor (EHD) describes [8] five edge types in the
image, namely horizontal, vertical, two diagonal, and non-directional.
Algorithm:
Input: Query image
Output: Edge features
The image space is divided into 16 (4x4) non-overlapping sub-images.
For each sub-image a histogram with five edge bins is generated.
In total, 80 bins are generated for the entire image.
The role of the EHD is to provide primitive information on the edge
distribution in the image.
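The steps above can be sketched as follows (pure Python; the 2x2 directional filters follow the usual MPEG-7 EHD formulation, and the threshold value is an assumption):

```python
import math

# Five edge filters applied to a 2x2 pixel block (a b / c d):
# vertical, horizontal, 45-degree, 135-degree, non-directional
FILTERS = [
    (1, -1, 1, -1),
    (1, 1, -1, -1),
    (math.sqrt(2), 0, 0, -math.sqrt(2)),
    (0, math.sqrt(2), -math.sqrt(2), 0),
    (2, -2, -2, 2),
]

def edge_histogram(image, threshold=10.0):
    """80-bin EHD sketch: 4x4 sub-images x 5 edge bins each."""
    h, w = len(image), len(image[0])
    bins = [0] * 80
    for y in range(0, h - 1, 2):
        for x in range(0, w - 1, 2):
            block = (image[y][x], image[y][x + 1],
                     image[y + 1][x], image[y + 1][x + 1])
            scores = [abs(sum(f * p for f, p in zip(flt, block)))
                      for flt in FILTERS]
            k = max(range(5), key=lambda i: scores[i])
            if scores[k] >= threshold:          # weak blocks are ignored
                sub = (y * 4 // h) * 4 + (x * 4 // w)  # which 4x4 sub-image
                bins[sub * 5 + k] += 1
    return bins

# An 8x8 image with one vertical intensity edge
img = [[0, 0, 0, 100, 100, 100, 100, 100] for _ in range(8)]
bins = edge_histogram(img)
```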
3.4 Similarity Computation
In this module, the Euclidean distance method is used to compute the
similarity between the query image and the database images according to
the aforementioned low-level visual features. The method retrieves and
presents a sequence of images ranked in decreasing order of similarity;
as a result the user is able to find relevant images among the top-ranked
images first. The Euclidean distance formula is

d(p, q) = √( Σi=1..n (qi − pi)² )                                      (4)

where p and q are the two feature vectors being compared.
Input: Query image
Output: Set of relevant images
Steps:
The query image is given as input.
Similarity is computed using the Euclidean distance method.
The sequence of images is displayed.
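This ranking step can be sketched in pure Python (illustrative helper names; a real system would compare the full concatenated color, texture and edge feature vectors):

```python
import math

def euclidean(p, q):
    """Eqn (4): Euclidean distance between two feature vectors."""
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

def rank(query_vec, database):
    """Return image names ordered by increasing distance to the query,
    i.e. decreasing similarity, so relevant images appear first."""
    return sorted(database, key=lambda name: euclidean(query_vec, database[name]))

# Toy database of 2-dimensional feature vectors
db = {"a": [0, 0], "b": [3, 4], "c": [1, 1]}
order = rank([0, 0], db)
```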
3.5 Particle Swarm Optimization
Particle swarm optimization (PSO) [5] is an optimization technique
inspired by the social behaviour of bird flocking or fish schooling.
In PSO each particle remembers the best solution it has found so far;
this value is called pBest. When a particle takes the whole population as
its topological neighbours, the best value is a global best and is called
gBest.

The PSO method adheres to the five basic principles of swarm
intelligence:
1. Proximity: the swarm must be able to perform simple space and time
computations.
2. Quality: it should be able to respond to quality factors.
3. Diverse response: it should not commit its activities along
excessively narrow channels.
4. Stability: it should not change its behaviour every time.
5. Adaptability: it should be able to change its behaviour when needed.
Applications of Particle Swarm Optimization
Non-convex search spaces
Particle swarm optimization is able to deal with local minima and to
find the global optimum.
Integer or discontinuous search spaces
PSO does not require the search space to be continuous, but precautions
need to be taken.
In PSO, particles are spread throughout the search space randomly and are
assumed to be flocking through it. The velocity and position of each
particle are updated iteratively, and each particle possesses its own
local memory. Each particle is considered a potential solution to the
optimization problem. The position of a particle is represented by
Xi = (xi1, xi2, …, xiD) in the D-dimensional space, and its velocity by
Vi = (vi1, vi2, …, viD). Each particle has a local memory (pBest) which
keeps the best position that particle has visited, while the globally
shared memory (gBest) holds the best position found by the whole swarm.
The flying velocity of each particle is given in eqn (5), and the
particle position update in eqn (6).
vid = vid + c1 · rand · (pid − xid) + c2 · rand · (pgd − xid)          (5)
xid = xid + vid                                                        (6)

where c1 and c2 are constants that weight the relative influences of the
particle's own best position and the global best position. Introducing an
inertia factor ω into eqn (5) gives eqn (7), which improves the
performance and search precision of the swarm:

vid = ω · vid + c1 · rand · (pid − xid) + c2 · rand · (pgd − xid)      (7)

where ω is the inertia factor and rand denotes a uniform random number.
Algorithm:
Steps:
Initialize each particle.
Calculate the fitness value of each particle.
Compare each particle's fitness value with its best fitness value pBest,
updating pBest when the new value is better.
Repeat the above steps for all particles.
Set the particle with the best fitness value among the pBest values as
gBest.
Calculate the velocity of each particle and update the particle
positions.
Continue the above steps until the optimized results are retrieved.
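The loop above, with the velocity update of eqn (7) and the position update of eqn (6), can be sketched as follows (pure Python; the swarm size, inertia factor and acceleration constants are illustrative assumptions, and the sketch minimizes a generic fitness function rather than the paper's image-retrieval fitness):

```python
import random

def pso(fitness, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO sketch: minimize `fitness` over a dim-dimensional space."""
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                      # per-particle best position
    pbest_f = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]       # globally shared best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]                                  # eqn (7)
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]                                      # eqn (6)
            f = fitness(X[i])
            if f < pbest_f[i]:                     # update local memory
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:                    # update global memory
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f

random.seed(0)
best, best_f = pso(lambda x: sum(v * v for v in x), dim=2)
```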
4. EXPERIMENTAL RESULTS
To show the effectiveness of the proposed system, experiments were
performed on the SIMPLIcity database. In our experiments, the database
contains images from different categories such as natural scenes and
beaches. The database is partitioned into ten categories: village, beach,
buildings, buses, dinosaurs, elephants, flowers, horses, mountains and
food, and each category contains 100 images. An image from the database
is taken as the query image. Figure 2 depicts the query image.

Figure 2. Query image

Color features are extracted using the YUV method: the Y, U and V
components are extracted, and the mean and standard deviation are
computed. The query image is given as input; the color image is read as
RGB and then converted into YUV using the rgb2yuv function in MATLAB.
Figures 3, 4 and 5 depict the Y, U and V components respectively.


Figure 3. Y Component of query image

Figure 4. U Component of query image

Figure 5. V Component of query image

The texture feature is extracted using the Grey Level Co-occurrence
Matrix method: texture features such as auto-correlation, entropy,
energy, homogeneity and contrast are extracted from the query image and
from the images in the database. The edge feature is extracted using the
Edge Histogram Descriptor method. Texture feature extraction is shown in
Figure 6, and edge feature extraction in Figure 7: the image space is
divided into 16 (4x4) non-overlapping sub-images, a histogram with five
edge bins is generated for each sub-image, and in total 80 bins are
generated for the entire image.

Figure 6. Texture feature extraction

Figure 7. Edge Feature extraction


In this image retrieval method the Euclidean distance method is used to
compute the similarity between the query image and the database images
according to the aforementioned low-level visual features. The method
retrieves and presents a sequence of images ranked in decreasing order of
similarity. As a result the user is able to find relevant images by
getting the top
ranked images first. Figures 8 and 9 show the query image and the
retrieved results of the feature-based image retrieval process. Figures
10 and 11 show the query image and the retrieved results of the PSO
optimization method.

Figure 8. Query image for feature-based image retrieval

Figure 9. Feature-based image retrieval result

Figure 10. Query image for PSO-based image retrieval


Figure 11. PSO-based image retrieval result


The retrieved images are further optimized using PSO, which uses the
fitness value of each particle. The images with the best fitness values
are retrieved as the result of the optimization.
5. PERFORMANCE EVALUATION
The performance of the proposed system is demonstrated on the MATLAB
2010b platform. The retrieval efficiency, namely precision and recall,
was calculated using natural color images from the Corel image database.
Precision is defined as the proportion of the number of relevant images
retrieved to the total number of retrieved images, and recall is defined
as the number of retrieved relevant images over the total number of
relevant images available in the database. The standard formulas for
precision and recall are given in eqns (8) and (9).

Precision = (number of relevant images retrieved) /
            (total number of images retrieved)                         (8)

Recall = (number of relevant images retrieved) /
         (total number of relevant images in the database)             (9)
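Eqns (8) and (9) reduce to simple set arithmetic; the following pure-Python sketch (illustrative names) computes both over image identifiers:

```python
def precision_recall(retrieved, relevant):
    """Eqns (8) and (9) computed over collections of image identifiers."""
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved)
    recall = hits / len(relevant)
    return precision, recall

# Tiger row of Table 1: 4 retrieved, all relevant, 5 relevant in the database
p, r = precision_recall(["t1", "t2", "t3", "t4"],
                        ["t1", "t2", "t3", "t4", "t5"])
```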

The results obtained for the various categories are tabulated in Table 1.
Figures 12 and 13 show the average retrieval precision and recall of the
proposed system.

Table 1. Precision and recall values for different categories of images

Query    No. of relevant   Relevant images   No. of retrieved   Precision   Recall
image    images retrieved  in database       images
Tiger           4                 5                 4              100%       80%
Bird            8                10                10               80%       80%
Flower          8                11                 9             88.8%     72.7%
Road            7                10                10               70%       70%

Figure 12. Average retrieval precision per category


Figure 13. Average retrieval recall per category

6. CONCLUSION
In the proposed method the color feature is extracted using the YUV
method and the texture feature using the Grey Level Co-occurrence Matrix
method; these features are extracted both from the query image and the
images in the database. The edge feature is extracted using the Edge
Histogram Descriptor method. Similarity is then computed between the
query image and the images in the database, and the results are optimized
using Particle Swarm Optimization.

REFERENCES
[1] Allan M., and Verbeek J., Ranking User-Annotated Images for Multiple Query
Terms, In Proceedings of BMVC 2009.
[2] Guillaumin M., Verbeek J., and Schmid C., Multimodal Semi-Supervised Learning
for Image Classification, In Proceedings of CVPR 2010.
[3] Li X., Snoek C., and Worring M., Unsupervised Multi-Feature Tag Relevance
Learning for Social Image Retrieval, In Proceedings of CIVR 2010.
[4] Natsev A., Naphade M., and Tesic J., Learning the Semantics of Multimedia Queries
and Concepts from a Small Number of Examples, In Proceedings of ACM Multimedia
2005.
[5] Shahri Asta and Sima Uyar, A Novel Particle Swarm Optimization Algorithm, In
Proceedings of the 10th International Conference on Artificial Evolution, 2011.
[6] Snoek C., Huurnink B., Hollink L., De Rijke M., Schreiber G., and Worring M.,
Adding Semantics to Detectors for Video Retrieval, IEEE Trans. Multimedia,
vol. 9, no. 5, pp. 975-986, 2007.
[7] Ulges A., Schulze C., Koch M., and Breuel T., Learning Automatic Concept
Detectors from Online Video, Comput. Vis. Image Understand., vol. 114, no. 4, pp.
429-438, 2010.
[8] Van de Sande K., Gevers T., and Snoek C., Evaluating Color Descriptors for Object
and Scene Recognition, IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 9, pp.
1582-1596, 2010.
[9] Wei X.Y., Jiang Y.J., and Ngo C.W., Concept-Driven Multimodality Fusion for
Video Search, IEEE Trans. Circuits Syst. Video Technol., vol. 21, no. 1, pp. 62-73,
2011.
[10] Xirong Li, Cees G. M. Snoek, Marcel Worring, and Arnold W. M. Smeulders,
Harvesting Social Images for Bi-Concept Search, IEEE Transactions on Multimedia,
vol. 14, no. 4, 2012.
[11] Yu H., Li M., Zhang H.J., and Feng J., Color Texture Moment for Content-Based
Image Retrieval, In Proceedings of ICIP, 2002.

This paper may be cited as:

Subhakala S., Bhuvana S. and Radhakrishnan R., 2014. An Optimized
CBIR Using Particle Swarm Optimization Algorithm. International Journal
of Computer Science and Business Informatics, Vol. 11, No. 1, pp. 40-55.


Study of Satisfaction Assessment
Techniques for Textual Requirements
K. S. Divya
P.G. Scholar, Department of CSE,
Sri Krishna College of Technology, Coimbatore, India

R. Subha
Assistant Professor, Department of CSE,
Sri Krishna College of Technology, Coimbatore, India

Dr. S. Palaniswami
Principal, Government College of Engineering,
Bodinayakanur, India

ABSTRACT
Requirements satisfaction is an important part of software development: the right
product can only be developed if all the requirements are satisfied. Satisfaction
assessment is the process of determining whether all the requirements are satisfied in
the design documents, and it is performed in order to find the satisfied requirements.
There are many satisfaction assessment techniques for finding the satisfied
requirements in design documents. This paper presents a study of satisfaction
assessment techniques for textual requirements. A new method is also proposed which
applies semantic diversity to perform the satisfaction assessment; semantic diversity
uses contextual tracing while performing the candidate satisfaction mapping between
the requirements document and the design documents.
Keywords
Software Engineering, Requirements Engineering, Requirements tracing, Satisfaction
assessment, Term frequency, Parts of speech tagging

1. INTRODUCTION
Software engineering is the application of a systematic and disciplined
approach to the design, development, operation, and maintenance of
software. Requirements engineering is a sub-discipline of software
engineering.

1.1 Requirements Engineering

Requirements are statements about the system's activities, behavior,
properties, qualities and constraints. Requirements engineering is a
field of study within software engineering that defines the use of systematic and
disciplined techniques that ensure that requirements are complete,
consistent and correct. Requirements engineering consists of many
activities, such as requirements elicitation, analysis, specification,
verification, and management, where:
Requirements elicitation is the process of determining and
understanding the needs of the customers.
Requirements analysis is the process of checking the customer
needs.
Requirements specification is the process of representing the
customer needs in the document format.
Requirements verification is the process of checking that the
system requirements are complete, correct, consistent, and clear.
Requirements management is the process of scheduling,
coordinating, and documenting all the requirements engineering
activities (that is, elicitation, analysis, specification, and
verification)
1.2 Problems in Requirements Engineering
Requirements engineering is recognized as a critical task, since many
software failures originate from inconsistent, incomplete or simply
incorrect requirements specifications. Many of the most common and
serious problems associated with software development are related to
requirements. The main problems occur in the requirements elicitation
process. The problems are:
Problems of scope
These problems occur when there is too little information or too
much information. Sometimes unnecessary design information may
also be given in the requirements document.
Problems of understanding
These problems occur when users have an incomplete understanding
of their needs and hold views that conflict with those of others.
Problems of volatility
These problems occur when the requirements change due to change
in time.
The problems can also arise from requirements specification and
requirements validation and verification.

1.3 Requirements Traceability

Requirements traceability is a sub-discipline of requirements management
within software development and systems engineering. It is concerned with
documenting requirements and providing bi-directional traceability
between the various associated requirements, enabling users to find the
origin of each requirement and to track every change that was made to it.

1.4 Satisfaction Assessment

Satisfaction assessment is defined as the process of performing a
satisfaction mapping of portions of textual requirements to design
elements represented in natural language. A satisfaction mapping contains
a satisfaction decision that has been made about a set of textual
requirement elements and a set of corresponding textual design elements.
Satisfaction assessment determines whether every element in the
requirements document is addressed by elements in the design document.

Table 1: Satisfaction assessment steps

Step   Task
1      Identify each requirement
2      Assign a unique identifier to each requirement
3      For each high-level requirement, determine all matching
       low-level requirements
4      For each low-level requirement, determine a parent requirement
       in the high-level document
5      Determine whether each high-level requirement has been
       completely satisfied

Formal Definition

Given a set of requirements decomposed into terms (R = {tr1, tr2, …}) and
a set of design element terms (D = {td1, td2, …}), a satisfaction mapping
is a set of pairs of terms (trn, tdm), where trn is a term in the set of
requirements, tdm is a term in the set of design elements, and trn is
directly correlated to tdm.
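In its simplest form, a candidate mapping pairs terms that match directly or through a synonym list. The following pure-Python sketch (hypothetical function names; real assessors also weight terms, e.g. with TF-IDF) additionally reports requirement terms left without any correlated design term:

```python
def candidate_mapping(req_terms, design_terms, synonyms=None):
    """Naive candidate satisfaction mapping: pair each requirement term
    with every design term it matches directly or via a synonym list."""
    synonyms = synonyms or {}
    pairs = []
    for tr in req_terms:
        for td in design_terms:
            if tr == td or td in synonyms.get(tr, ()):
                pairs.append((tr, td))
    return pairs

def unmatched(req_terms, pairs):
    """Requirement terms with no correlated design term (unsatisfied candidates)."""
    return set(req_terms) - {tr for tr, _ in pairs}

pairs = candidate_mapping(["store", "retrieve", "encrypt"],
                          ["store", "fetch"],
                          synonyms={"retrieve": ("fetch",)})
```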

This paper proceeds as follows. In the next section, the related works in this
field are described. Section 3 describes satisfaction assessment techniques.
Section 4 describes the proposed system. Section 5 provides the conclusion.

2. RELATED WORKS
Elizabeth Ashlee Holbrook, Jane Huffman Hayes, Alex Dekhtyar, and Wenbin
Li [1] explain various methods for the satisfaction assessment of textual
requirements. Satisfaction assessment helps in identifying unsatisfied
requirements. First the requirements traceability matrix for the data set
is constructed and the requirement and design text is split into chunks.
Stop-word removal and stemming are performed on the chunks, which are
then tokenized into individual terms. The synonym pairs for the terms

are determined. For the TF-IDF and naive satisfaction methods, the
threshold values are predefined; for the NLP satisfaction method, rules
are generated. Finally the candidate satisfaction assessment mapping is
performed to determine the satisfied candidates.

Holbrook E. A., Hayes J. H., and Dekhtyar A. [2] explain automatic
methods for satisfaction assessment. The system introduces the automation
of satisfaction assessment, which is the process of performing the
satisfaction mapping of textual requirements to design elements
represented in natural language. The satisfaction assessment approach is
described algorithmically, and the effectiveness of two proposed
information retrieval (IR) methods is then evaluated in two industrial
studies. The work mainly focuses on assessing whether requirements have
been satisfied by lower-level artifacts such as design.

Hayes J. H., Dekhtyar A., Sundaram S., Holbrook A., and Vadlamudi S. [3]
describe a tool for requirements tracing that addresses the recovery of
traceability for artifacts containing unstructured textual narrative.
RETRO uses information retrieval (IR) and text mining methods to
construct candidate traces; the task is to find documents in the
collection that are deemed relevant to the query. Vector space retrieval
with tf-idf term weighting is the default tracing technique in RETRO, and
stop-word removal is performed for every document and query.
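The tf-idf weighting and cosine ranking used by such tracing tools can be sketched in pure Python (an illustration of the general vector-space technique, not RETRO's actual code):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists. Returns one sparse tf-idf vector per doc."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))   # document frequency
    vecs = []
    for doc in docs:
        tf = Counter(doc)                               # term frequency
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy corpus of already-tokenized artifact texts
docs = [["user", "login"], ["user", "logout"], ["report"]]
vecs = tfidf_vectors(docs)
```

Candidate traces are then the document pairs whose cosine similarity exceeds a chosen threshold.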

Jane Huffman Hayes, Alex Dekhtyar, and Senthil Karthikeyan Sundaram [4]
explain candidate link generation for requirements tracing. Goals for a
tracing tool, based on the analyst's responsibilities in the tracing
process, are defined, and several new measures for validating that the
goals have been satisfied are introduced. Analyst feedback in the tracing
process is implemented, and a prototype tool, RETRO (REquirements TRacing
On-target), is presented to address the goals. The methods and tool can
be used to trace any textual artifact to any other textual artifact. A
requirements tracing tool is defined as special-purpose software that
takes as input two or more documents in the project document hierarchy
and outputs a traceability matrix: a mapping between the requirements of
the input documents. Two IR algorithms, TF-IDF vector retrieval and
vector retrieval with a simple thesaurus, and one newly implemented
method, Latent Semantic Indexing (LSI), are used for determining
requirement similarity. LSI is a dimensionality-reduction method which
captures the similarity of underlying concepts rather than simple keyword
matches.

ISSN: 1694-2108 | Vol. 11, No. 1. MARCH 2014 59


International Journal of Computer Science and Business Informatics

IJCSBI.ORG
Robinson W N [5] explains the implementation of rule based monitors for
continuous requirements monitoring. A language for requirements and
monitor definitions is defined by the framework, together with a
methodology for defining requirements, identifying potential requirements
obstacles, and analyzing monitor feedback. The framework addresses three
interrelated monitoring issues: formalization of high-level goals,
requirements, and their monitors; automation of monitor generation,
deployment, and optimization; and traceability between high-level
descriptions and lower-level run-time events. The monitoring approach
integrates requirements language research
with commercial business process monitoring. The approach defines the
logical monitoring model. The goals and requirements are defined. Potential
requirements obstacles are uncovered and their monitors are derived. The
monitoring architecture and implementation are defined. The requirements
of the monitoring event sources and sinks are defined. A logical-physical
mapping to ensure traceability of events back to requirements is defined.
The monitoring system is implemented and deployed. High-level feedback on
the system's actions and requirements compliance is provided, and
compensation and adaptation rules are executed when violations occur.
High-level feedback on the monitoring system itself is also provided,
yielding historical information used to define new monitoring optimization
rules.

Marcus A, Maletic J I [6] explains the latent semantic indexing method. A
method to recover traceability links between documentation and source
code, using an information retrieval method, namely Latent Semantic
Indexing (LSI) is presented. The traceability links based on similarity
measures are identified. The method utilizes all the comments and identifier
names within the source code to produce semantic meaning with respect to
the entire input document space. The vector space model (VSM) is a widely
used classic method for constructing vector representations for documents.
Latent Semantic Indexing (LSI) is a VSM based method for inducing and
representing aspects of the meanings of words and passages reflective in
their usage. LSI uses a user constructed corpus to create a term-by-
document matrix. New document vectors (and query vectors) are obtained
by orthogonally projecting the corresponding vectors in a VSM space
(spanned by terms) onto the LSI subspace. Since the LSI subspace captures
the most significant factors (i.e., those associated with the largest
singular values) of the term-by-document matrix, it is expected to capture
the relations of the most frequently co-occurring terms.

Cleland-Huang J, Chang C K, Sethi G, Javvaji K, Haijian H U, Jinchun Xia
[7] explains the event based requirements traceability. An activity that is
of critical importance to handling and managing changing requirements
effectively is described. A method for establishing and utilizing traceability
links between requirements and performance models is proposed.
Traceability links are established through the use of a dynamic traceability
scheme capable of speculatively driving the impacted models whenever a
quantitative requirement is changed. Key values from within the individual
performance models representing probabilities, rates, counts and sizes etc
are placed in the central requirements repository. Finely tuned links are then
established between the data-values in the models and those in the
repository. The process of analyzing the impact of a proposed change upon
the performance of the system through dynamic re-execution of requirement
dependent models is supported.

Giuliano Antoniol, Gerardo Canfora, Gerardo Casazza, Andrea De Lucia,


Ettore Merlo [8] explains the traceability between code and documentation.
A method based on information retrieval to recover traceability links
between source code and free text documents is proposed. The method
proposed ranks the free-text documents against queries constructed from the
identifiers of source code components and can be customized to work with
different IR models. Both a probabilistic and a vector space information
retrieval model are applied. In the probabilistic model, free-text documents
are ranked according to the probability of being relevant to a query
computed on a statistical basis. A language model for each document or
identifiable section is estimated and uses a Bayesian classifier to score the
sequences of mnemonics extracted from each source code component
against the models. The vector space model treats documents and queries as
vectors in an n-dimensional space. Documents are ranked against queries by
computing a distance function between the corresponding vectors. The
documents are ranked according to a widely used distance function, i.e., the
cosine of the angle between the vectors. The construction of the vocabulary
and the indexing of the documents are preceded by a text normalization
phase performed in three steps. In the first step, all capital letters are
transformed into lower case letters. In the second step, stop-words (such as
articles, punctuation, numbers, etc.) are removed. In the third step, a
morphological analysis is used to convert plurals into singulars and to
transform conjugated forms of verbs into infinitives. The construction of a
query consists of three steps. Identifier extraction parses the source code
component and extracts the list of its identifiers. Identifier separation splits
identifiers composed of two or more words into separate words. Text
normalization applies the three steps described above for document
indexing. Finally, a classifier computes the similarity between queries and
documents and returns a ranked list of documents for each source code
component.

3. SATISFACTION ASSESSMENT TECHNIQUES

3.1 Naive Satisfaction Assessment

The naive satisfaction method is based on a simple idea of tracking and
thresholding the percentage of common terms between the two chunks. This
method is simple and easy to implement. The naive satisfaction method
determines the root of the elements in the requirements document and the
design document. If the terms in the requirement chunk and the design
chunk contain the same root, or the root of a synonym, then the terms are
considered a match pair. The similarity value for each element pair is
determined.
similarity = (number of times the term occurs) / (total number of terms in the document) (1)
Threshold values from 0.01 to 0.09 are used to filter the chunks. The chunks
with similarity values below the threshold values are excluded from the
candidate satisfaction assessment mapping.
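As an illustration, the matching and similarity computation of equation (1) can be sketched as follows. This is not the authors' code: the root function below is a crude stand-in for the real stemming and synonym lookup the method would use.

```python
# Illustrative sketch of the naive satisfaction method: terms are reduced
# to crude "roots", matched between a requirement chunk and a design
# chunk, and the similarity follows equation (1).

def root(term):
    # Simplistic stemmer stand-in: lowercase and strip a trailing "s".
    term = term.lower()
    return term[:-1] if term.endswith("s") and len(term) > 3 else term

def naive_similarity(req_chunk, design_chunk):
    req_roots = {root(t) for t in req_chunk.split()}
    design_terms = design_chunk.split()
    matches = sum(1 for t in design_terms if root(t) in req_roots)
    # Fraction of matching terms over total terms, per equation (1).
    return matches / len(design_terms) if design_terms else 0.0

def satisfied(req_chunk, design_chunk, threshold=0.05):
    # Chunks below the threshold (0.01 to 0.09 in the study) are excluded
    # from the candidate satisfaction assessment mapping.
    return naive_similarity(req_chunk, design_chunk) >= threshold
```

For example, `naive_similarity("store user data", "the system stores data")` matches two of the four design terms and yields 0.5.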

Drawbacks
The naive satisfaction approach only determines textual similarity:
polysemous words are also treated as textually similar words.

3.2 TF-IDF Satisfaction Assessment

The TF-IDF satisfaction assessment method is based on vector space
information retrieval using TF-IDF (term frequency - inverse document
frequency) term weighting. TF-IDF method is the traditional information
retrieval method which is commonly used in requirements tracing. TF-IDF
is the measure of the importance of a term within a document. Term
frequency (TF) is the number of times a particular term appears within a
document. Inverse document frequency (IDF) of a term is the logarithm of
the ratio of the total number of documents in a collection to the number of
documents that contain the term. Each requirement and design element
chunk is considered an individual document within the document collection.
TF-IDF similarity scores are calculated between pairs of requirement
chunks and design chunks.
IDF = log(N / DF) (2)
TF-IDF = TF * IDF (3)
Threshold values from 0.01 to 0.09 and 0.1 to 0.9 are used to filter the
chunks. The chunks with similarity score below the threshold values are
excluded from the candidate satisfaction assessment mapping.

Drawbacks
The TF-IDF satisfaction approach only determines similarity based on
the importance of the terms within the document. Polysemous words
are also treated as similar words in the TF-IDF approach.

3.3 NLP Rule Based Satisfaction Assessment

NLP rule based satisfaction assessment is based on certain rules that are
defined by the user. For the NLP rule based satisfaction assessment the parts
of speech for all the elements are determined using parts of speech tagging.
Parts of speech represent the structure of the sentences. Rules are created to
help in identifying the requirement element and design element matching
pair. The rule set can be created manually using the text editor. A rule is
specified in the following format:

[Element1Position]|[Element1PartofSpeech]|[Element1Type]|
[Element2Position]|[Element2PartofSpeech]|[Element2Type]|
[MinSimilarity]|[Confidence]|[Enabled]

For example, the rule:


Any|NP|RE|First|VP|DE|45|20|True

specifies that if any noun phrase in a requirement chunk is at least a 45%
match based on lexical similarity with the first verb phrase in a design
element, then the requirement chunk and design element chunk should be
paired with 20% confidence.
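A minimal parser for the pipe-delimited rule format above might look like the following sketch. The field names and type conversions are assumptions based on the format shown, not the tool's actual code.

```python
# Hypothetical parser for the rule format
# [Element1Position]|[Element1PartofSpeech]|...|[Confidence]|[Enabled]

FIELDS = ("element1_position", "element1_pos_tag", "element1_type",
          "element2_position", "element2_pos_tag", "element2_type",
          "min_similarity", "confidence", "enabled")

def parse_rule(line):
    parts = line.strip().split("|")
    if len(parts) != len(FIELDS):
        raise ValueError("rule must have %d fields" % len(FIELDS))
    rule = dict(zip(FIELDS, parts))
    rule["min_similarity"] = int(rule["min_similarity"])   # percentage
    rule["confidence"] = int(rule["confidence"])           # percentage
    rule["enabled"] = rule["enabled"] == "True"
    return rule
```

Applied to the example rule, `parse_rule("Any|NP|RE|First|VP|DE|45|20|True")` yields a 45% minimum similarity and 20% confidence with the rule enabled.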

Drawbacks
The NLP rule based approach determines similarity based only on sentence
structure. Polysemous words are also treated as similar words in the
NLP rule based approach.

3.4 Candidate Link Generation Method

Requirement tracing first begins with the parsing of documents. Candidate
link generation is performed to determine the matching pair of requirement
elements and the design elements. Then the candidate link evaluation is
performed to evaluate the measurement of candidate link lists. To determine
the candidate mapping, first the elements are extracted from the
requirements document as well as from the design document. Then
keywords are assigned to each requirement document and each design
document. The keyword assignment can be performed either manually or
using the search functions from the word processor or spreadsheets. Then
the candidate links are determined: whether a design element matches a
requirement element is decided by a keyword matching algorithm. Recall and
precision are the two measures used to perform the candidate link
evaluation, which determines whether the candidate links are true links or
false links.
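The keyword matching step and the recall/precision evaluation can be sketched as follows. This is a hypothetical illustration of the idea, not the original algorithm; the "shared keyword" matching criterion is an assumption.

```python
# Sketch of candidate link generation by keyword matching, plus the
# recall and precision measures used for candidate link evaluation.

def candidate_links(req_keywords, design_keywords):
    """Both arguments map element IDs to keyword sets; a requirement and a
    design element are linked when they share at least one keyword."""
    return {(r, d)
            for r, r_kw in req_keywords.items()
            for d, d_kw in design_keywords.items()
            if r_kw & d_kw}

def recall_precision(candidate, true_links):
    hits = len(candidate & true_links)
    recall = hits / len(true_links) if true_links else 1.0
    precision = hits / len(candidate) if candidate else 1.0
    return recall, precision
```

Recall measures how many true links were recovered; precision measures how many of the generated candidates are true links.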

Drawbacks
The candidate link generation approach determines similar words using
the keyword matching algorithm. Polysemous words are also treated as
similar words in the candidate link generation approach.

3.5 RETRO Tool

REquirements TRacing On-target (RETRO) is a special purpose requirements
tracing tool. RETRO uses information retrieval (IR) and text mining
methods to construct candidate traces. The core part of RETRO consists of the
IR toolbox, the feedback processing methods, and the GUI front end. The
methods for building representations of traced documents are also included.
There are two modes for tracing the requirements in RETRO. In automatic
tracing mode, the candidate links are generated using automated methods.
The manual tracing mode provides the ability to browse high- and low-level
documents for the purpose of discovery of any links not found by the
automated tools. The filtering tool allows the analyst to reduce the
displayed candidate link lists. A threshold value is specified by the analyst.

The low-level documents whose weight value is greater than the threshold
value are displayed. The threshold is controlled by a slider bar that can be
moved between 0 and 1 in intervals of 0.01. The filter that is
selected could have global effect or local effect. When the filter is in global
effect, the current filter value is applied to the candidate link lists
belonging to all high-level elements. When the filter is in local effect,
the current filter value is applied only to the candidate link list
belonging to the currently selected
high-level element. The candidate links could be displayed in three ways.
First, the links could be displayed one at a time. Second, all the candidate
links could be displayed in the order they appear in the design document.
Third, the candidate links could be displayed in their relevance order.
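The threshold filter with global and local scope described above might be sketched as follows. The data layout is an assumption for illustration, not RETRO's actual code.

```python
# Sketch of the filtering tool: candidate links whose weight exceeds the
# slider threshold are kept, either for every high-level element (global
# scope) or only for the currently selected one (local scope).

def filter_links(link_lists, threshold, scope="global", selected=None):
    """link_lists: {high_level_id: [(low_level_id, weight), ...]}."""
    filtered = {}
    for high, links in link_lists.items():
        if scope == "global" or high == selected:
            filtered[high] = [(low, w) for low, w in links if w > threshold]
        else:
            filtered[high] = list(links)      # untouched in local scope
    return filtered
```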

Drawbacks
The RETRO approach determines similar words using information
retrieval and text mining methods. Polysemous words are also treated as
similar words in the RETRO tool approach.

4. PROPOSED WORK
The existing satisfaction assessment techniques such as naive satisfaction
assessment, TF-IDF satisfaction assessment, NLP rule based satisfaction
assessment, candidate link generation, and the RETRO tool perform
candidate satisfaction mapping. These methods only determine satisfied
candidates using similar words; semantic similarity is not determined by
the existing systems. The proposed system applies the concept of semantic
diversity to determine the satisfied candidates. To determine semantic
similarity, latent semantic analysis is performed on the requirements
document and the design documents. Then the semantic diversity of the
elements is calculated. Semantic diversity is the degree to which the
different contexts associated with a given word vary in their meanings.
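A sketch of the proposed idea follows, assuming latent semantic analysis via a truncated SVD of a term-by-document matrix and a simple diversity score; the scoring function here (mean pairwise cosine dissimilarity of the contexts containing a term) is an assumption for illustration, not the paper's final definition.

```python
import numpy as np

def lsa_doc_vectors(term_doc, k=2):
    # Truncated SVD of the term-by-document matrix; rows of (S_k @ V_k)
    # transposed are the k-dimensional document (context) vectors.
    u, s, vt = np.linalg.svd(term_doc, full_matrices=False)
    return (np.diag(s[:k]) @ vt[:k]).T

def semantic_diversity(term_row, doc_vecs):
    # Contexts of a term = reduced vectors of documents containing it;
    # diversity = 1 minus the mean pairwise cosine similarity.
    contexts = doc_vecs[term_row > 0]
    if len(contexts) < 2:
        return 0.0
    norms = np.linalg.norm(contexts, axis=1, keepdims=True)
    unit = contexts / np.where(norms == 0, 1, norms)
    sims = unit @ unit.T
    n = len(contexts)
    mean_sim = (sims.sum() - n) / (n * (n - 1))   # off-diagonal mean
    return 1.0 - mean_sim
```

A term whose contexts all point in similar directions in the LSI subspace scores near zero, while a term used in very different contexts scores higher.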

5. CONCLUSION
In the software development, requirements satisfaction plays an important
role. All the requirements specified by the users should be satisfied.
Satisfaction assessment determines whether all the requirements are
addressed in the design document. There are many methods to perform the
satisfaction assessment to identify the satisfied requirements. Naive
satisfaction assessment method is based on the textual similarity. TF-IDF
satisfaction assessment is based on the importance of a term in a document.
NLP Rule based satisfaction assessment is based on certain user defined
rules. Candidate link generation identifies the satisfied candidates using a
keyword matching algorithm. The RETRO tool is used to perform automated
requirements tracing. The existing systems do not perform any contextual
tracing. Semantic diversity could be
implemented to perform the contextual tracing to determine the satisfied
candidates.
REFERENCES
[1] Holbrook E.A., Hayes J.H., Dekhtyar A., Wenbin Li, 2013. A study of methods for
textual satisfaction assessment. Springer Empirical Software Engineering, 18(1): 139-176.
[2] Holbrook E.A., Hayes J.H., Dekhtyar A., 2009. Toward automating requirements
satisfaction assessment. In: Proceedings of the IEEE International Conference on
Requirements Engineering, pp. 149-158.
[3] Hayes J.H., Dekhtyar A., Sundaram S., Holbrook A., Vadlamudi S., 2007.
Requirements Tracing on Target (RETRO): Improving software maintenance through
traceability recovery. Springer Innovations in Systems and Software Engineering,
3(3): 193-202.
[4] Hayes J.H., Dekhtyar A., Sundaram S., 2006. Advancing candidate link generation for
requirements tracing: the study of methods. IEEE Transactions on Software
Engineering, 32(1): 4-19.

[5] Robinson W.N., 2005. Implementing rule-based monitors within a framework for
continuous requirements monitoring. In: Proceedings of the Annual Hawaii International
Conference on System Sciences, 188a.
[6] Marcus A., Maletic J.I., 2003. Recovering documentation-to-source code traceability
links using latent semantic indexing. In: Proceedings of the International Conference
on Software Engineering, pp. 125-135.
[7] Cleland-Huang J., Chang C.K., Sethi G., Javvaji K., Haijian H.U., Jinchun Xia, 2002.
Automating speculative queries through event-based requirements traceability. In:
Proceedings of the IEEE Joint Conference on Requirements Engineering, pp. 289-296.
[8] Giuliano Antoniol, Gerardo Canfora, Gerardo Casazza, Andrea De Lucia, Ettore Merlo,
2002. Recovering traceability links between code and documentation. IEEE Transactions
on Software Engineering, 28(10): 970-983.
[9] Roger S Pressman, 2005. Software Engineering: A Practitioner's Approach. 6th edition,
McGraw-Hill, New York.
[10] Phillip A Laplante. Requirements Engineering for Software and Systems. 2nd edition,
CRC Press, New York.

This paper may be cited as:

Divya, K. S., Subha, R. and Palaniswami, S., 2014. Study of Satisfaction
Assessment Techniques for Textual Requirements. International Journal of
Computer Science and Business Informatics, Vol. 11, No. 1, pp. 56-66.


Survey of MAC Protocols for
Heterogeneous Traffic in Wireless
Sensor Networks

Sridevi S.
Associate Professor, Department of Computer Science and Engineering,
Sona College of Technology,
Salem, India

Priyadharshini R.
PG Scholar, Department of Computer Science and Engineering,
Sona College of Technology,
Salem, India

Usha M.
Professor & Dean, Department of Computer Science and Engineering,
Sona College of Technology,
Salem, India

ABSTRACT
Wireless Sensor Networks (WSNs) consist of multiple sensor nodes, which are deployed
randomly to collect periodic data, process the data and forward it to the sink node. The
main challenges that WSNs face are severe energy constraints, robustness, responsiveness,
self-configuration, etc. Among these, the main challenge is energy efficiency. In order
to tackle all these challenges, new protocols in all the layers of communication stack need
to be designed. Designing a MAC protocol is of crucial importance because it influences
the transceiver unit of the sensor node. The Quality of Service (QoS) at the MAC layer
matters as it governs medium sharing and supports reliable communication. In WSNs, nodes
generate heterogeneous traffic which have different QoS requirements like reliability and
delay deadline with different priority requirements that vary according to the application. In
this work, a variety of MAC protocols for WSNs are surveyed, with a special focus on
traffic classification and priority assignment. In the existing TDMA based MAC
protocols, only one timeslot is allocated to all the sensor nodes in each frame. In
contrast, our work first classifies the sensed data according to its priority and then
allocates slots variably, based on how urgently the data must reach the sink node, to
enable faster rescue operations. A
comparison of different MAC protocols with various parameters and future research
directions are also included.

Keywords:
Wireless Sensor Networks, energy efficiency, MAC protocol, traffic classification, priority
assignment

1 INTRODUCTION
Wireless Sensor Networks (WSNs) are becoming more popular and they are
used in numerous applications like industry, academia, military, forest fire
monitoring, medical and health, and so on. All these kinds of applications require data
delivery with QoS as opposed to best-effort performance in classical
monitoring applications. Reliable and real-time delivery of collected data is
important in the sensor network operation.

A sensor node has a limited battery capacity of less than 0.5 Ah. With this
limited capacity, it plays the role of both data originator and data router.
Sensing, communicating and processing of data all consume battery power,
but communication consumes 100 times more power than sensing and
processing [1]. So, optimization of energy consumption is required in WSNs
to improve the network lifetime.

1.1. Medium Access Control (MAC)


MAC is responsible for providing a communication link between large
numbers of sensor nodes and for sharing the medium fairly and efficiently [2].
Let us discuss some attributes of a good MAC protocol. The first is
energy efficiency: since recharging the battery is usually impractical, it is
better to replace the sensor nodes, so the protocol must spend energy
sparingly. To get access to the channel, many sensor nodes will compete
with each other; the MAC protocol should be able to avoid collisions
among these nodes.

The MAC layer is responsible for correcting the errors that occur at the
physical layer. It also performs activities like framing, physical
addressing, and flow and error control. It resolves channel access
conflicts among different nodes and also addresses issues like node
mobility and the unreliable, time-varying channel [3].

As already said, MAC is responsible for controlling the transceiver unit of a
sensor node, and this has effects on the other side as well. If a sensor node
turns off its radio, it cannot communicate with other sensor nodes. If it
switches to the listen state, it must wait for other nodes to also switch to
the listen state [4]. In
order to save energy, the node usually goes to sleep mode and wakes up
according to its planned schedule [5]. This is called duty cycling or sleep
scheduling.
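The duty cycling idea can be sketched numerically as follows; the listen and frame lengths below are illustrative assumptions, not values from any surveyed protocol.

```python
# Toy sketch of duty cycling (sleep scheduling): a node is awake for
# `listen_ms` out of every `frame_ms`; the duty cycle is the awake
# fraction, and the schedule says whether the node is awake at time t.

def duty_cycle(listen_ms, frame_ms):
    return listen_ms / frame_ms

def is_awake(t_ms, listen_ms, frame_ms, offset_ms=0):
    # The node listens at the start of each frame, then sleeps.
    return (t_ms - offset_ms) % frame_ms < listen_ms
```

With a 10 ms listen period in a 100 ms frame, the radio is on only 10% of the time, which is where most of the energy saving of sleep scheduling comes from.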

1.2. Types Of MAC Schemes


The various types of MAC schemes include Time Division Multiple Access
(TDMA), Code Division Multiple Access (CDMA), and packet-based
protocols like ALOHA and Carrier Sense Multiple Access (CSMA).

1.2.1. CSMA VS. TDMA SCHEME

TDMA-based MAC schemes are contention-free. They divide time into
slots and allocate the slots to the nodes. Each node then communicates in its
allocated slots without any collision. This provides a correct sleep-listen
schedule for all the nodes to save energy. TDMA protocols require proper
synchronization of nodes.
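Contention-free slot allocation can be illustrated with a greedy assignment that keeps two-hop neighbors in different slots (a rule also used by protocols discussed later in this survey). The topology and algorithm below are an illustrative sketch, not a specific protocol's scheme.

```python
# Greedy TDMA slot assignment: give each node the smallest slot not used
# within its two-hop neighborhood, so two-hop neighbors never share a slot.

def assign_slots(adj):
    """adj: {node: set of one-hop neighbors}. Returns {node: slot}."""
    slots = {}
    for node in sorted(adj):
        two_hop = set(adj[node])
        for nb in adj[node]:
            two_hop |= adj[nb]          # add neighbors of neighbors
        two_hop.discard(node)
        used = {slots[n] for n in two_hop if n in slots}
        slot = 0
        while slot in used:
            slot += 1
        slots[node] = slot
    return slots
```

In a three-node chain a-b-c, nodes a and c are two-hop neighbors, so all three end up in distinct slots and can transmit without collision.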

CSMA-based MAC schemes are contention-based. They do not require any
additional information about the network. As nodes do not follow any
transmission schedule, they can handle bursty and sporadic traffic.
However, collisions are possible, which give rise to extra delivery latency
and retransmissions. Techniques like RTS/CTS are required to provide a
certain level of service quality.

1.2.2. SENDER-DRIVEN VS. RECEIVER-DRIVEN MAC


In the sender-driven TDMA-based MAC scheme, the owners of the timeslots are
the sender nodes. A sender node sends a control message to inform the
intended receiver to wake up and receive data at specified slots. Each node
is assigned two slots: 1) a transmit slot for data transmission, and 2) a
wake-up slot to receive control packets. The sender node sends control
packets in the wake-up slot of the intended receiver to inform it to wake up
in the transmit slot of the sender node in the next frame. This scheme
assigns slots to the nodes for message transmission. It eliminates collision
of data messages, but energy is wasted due to message overhearing.

In the receiver-driven TDMA-based MAC scheme, the owners of the timeslots
are the receiver nodes. This scheme assigns slots to the nodes for message
reception. A schedule of the timeslots in which the receiver nodes must wake
up is constructed and exchanged between neighbor nodes. Receiver nodes have
to wake up in their own timeslots, while neighbors of the slot owners have
to contend for the medium. This eliminates message overhearing. Contention
overhead and packet collision among sender nodes are the main drawbacks
of this scheme [6].

The remaining part of the paper is organized as follows: in section 2, we
present the various MAC protocols existing in the literature; in section 3, we
compare the surveyed MAC protocols based on certain performance criteria
and in section 4, we discuss possible directions for further research and
conclude the paper.

2. CLASSIFICATION OF MAC PROTOCOLS
The MAC protocol classification includes QoS-based MAC protocols,
cross-layer MAC protocols, sender-driven MAC protocols, receiver-driven
MAC protocols and various other kinds of MAC protocols. Let us discuss
these classes one by one.

2.1. QoS-based MAC Protocols


Although all the layers of the OSI stack are responsible for QoS provisioning,
the MAC layer is of particular importance as it solves the problem of medium
sharing and supports reliable communication. MAC also handles additional
challenges like severe energy constraints by duty cycling and unpredictable
environmental conditions by retransmission [7]. The performance of a
QoS-based MAC protocol depends entirely upon the requirements of the
application. The designed MAC protocol must be energy efficient and
scalable, and must cope with limited memory, processing capability and
bandwidth.

2.1.1. PQ-MAC
Hoon Kim et al. [4] have designed the PQ-MAC (priority-based QoS MAC)
protocol to maintain energy efficiency and solve the transmission
latency problem simultaneously. It also provides data type classification
and scheduling scheme for fast transmission of event data. This fast
transmission is provided by additional listen time and priority queue
scheduling. First the data is classified into four priority levels. Level 0 has
the highest priority and this mainly focuses on transmission delay rather
than energy efficiency.

Level 1 has the second highest priority and this may have more delay than
level 0. Periodic event data belongs to the Level 2, which has low
importance and fault tolerant characteristics. Level 3 has the lowest priority
data and may have delay tolerant scheduling. Each sensor node has two
priority queues, one for higher priority and another for lower priority. It
sends high priority data from high priority queue first. If the queue is empty,
then it sends low priority data from low priority queue. The priority queue
guarantees faster transmission of high-priority data compared to low-
priority data.
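The two-queue scheme described above can be sketched as follows. The structure is assumed for illustration (levels 0-1 in the high-priority queue, levels 2-3 in the low-priority queue), not taken from the authors' code.

```python
from collections import deque

# Sketch of PQ-MAC's per-node priority queues: the low-priority queue is
# served only when the high-priority queue is empty.

class PriorityQueues:
    def __init__(self):
        self.high = deque()
        self.low = deque()

    def enqueue(self, packet, level):
        # Levels 0-1 are urgent; levels 2-3 are delay tolerant.
        (self.high if level <= 1 else self.low).append(packet)

    def next_packet(self):
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None
```

Even if periodic data arrived first, an urgent event packet enqueued later is still transmitted first, which is exactly the faster-transmission guarantee described above.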

PQ-MAC also provides three schemes: 1) the doubling scheme, 2) the advanced
wake-up scheme and 3) the Dynamic Priority Listen (DPL) scheme. The
doubling scheme doubles the listen time for high priority data. This scheme is based
on data priority and provides more opportunity to send higher priority data
than lower priority data. The four data priorities discussed above can be
used in this scheme. If a node has only one priority, it has one transmission
opportunity in the frame time; with two priorities it has two chances, with
three priorities four chances, and so on, doubling at each level.

In normal MAC protocols, the sensor node wakes up in the middle of sleep
time to receive data and energy is wasted. If the probability of data
reception is known well in advance, the energy wastage can be reduced. In the
advanced wake-up scheme, an additional field is added to the RTS/CTS message
to convey the probability of receiving high priority data. The Dynamic Priority
Listen (DPL) scheme changes the listen/sleep periods according to network
traffic conditions. This scheme suits well for dynamic traffic environment.

Parameters considered:
The simulation is done in ns-2. The simulation results show that the
protocol manages scheduling by adaptively controlling network traffic and
priority level. High priority data is given less waiting time. It reduces
latency and has good energy efficiency.

2.1.2. Diff-MAC
M. Aykut Yigitel et al. [8] proposed Diff-MAC, which aims to increase
channel utilization by differentiating the traffic and to provide fast
delivery of data. In case of MAC failures, delivering a video frame as a
single packet is expensive, so Diff-MAC divides the video frame into many
small fragments. All the video frame fragments are sent as a burst once the
medium is reserved.

Diff-MAC monitors the network periodically and calculates the probability
of collision as the ratio of the number of collisions to the total number of
transmission attempts. To provide service differentiation, the size of the
contention window is increased for lower priority traffic and decreased for
higher priority traffic. To give precedence to higher priority traffic, and
to improve the throughput and decrease the latency, the authors set
CW_RT < CW_NRT < CW_BE, where RT, NRT and BE represent real-time multimedia
traffic, non real-time traffic and best effort traffic respectively.
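The service differentiation idea can be sketched as follows: contention windows are kept ordered so that real-time traffic contends most aggressively, and each class's window reacts to the measured collision probability. The adjustment rule and the constants are assumptions for illustration, not Diff-MAC's actual parameters.

```python
# Per-class contention window bounds, ordered CW_RT < CW_NRT < CW_BE
# (illustrative values).
CW_MIN = {"RT": 8, "NRT": 16, "BE": 32}
CW_MAX = {"RT": 64, "NRT": 128, "BE": 256}

def adjust_cw(cw, traffic_class, collision_prob, target=0.1):
    if collision_prob > target:
        cw = min(cw * 2, CW_MAX[traffic_class])    # congested: back off
    else:
        cw = max(cw // 2, CW_MIN[traffic_class])   # calm: shrink window
    return cw
```

Because the RT bounds are the smallest, real-time packets pick shorter backoffs on average and therefore win the medium more often than NRT and BE packets.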

Diff-MAC adapts its duty cycle dynamically to reduce both packet latency and
idle listening. The processed packets are classified according to traffic
class to set the node's active time based on the currently dominating
traffic class. If the total number of processed packets is lower than a
threshold, the active period of a node is set small, since the traffic on
the node is negligible. Diff-MAC assigns priority to packets dynamically
based on the number of hops traversed. High precedence is given to packets
for which energy, bandwidth and memory have already been spent, since
dropping those packets would waste that cost. Packets are scheduled based on
the traffic class and the number of hops traversed.

Parameters considered:
The protocol is first simulated and then implemented on the Crossbow
Imote2 platform. The results show that the protocol provides fast delivery
of data with fewer collisions and lower packet latency.

2.1.3. AMPH MAC


M. Souil et al. [9] proposed AMPH, an adaptive MAC protocol for
heterogeneous wireless sensor networks. It follows a hybrid
channel access method to achieve high channel utilization. The hybrid
behavior is the combination of both contention-based and schedule-based
techniques. This hybrid method allows slot-stealing and also adapts to
variable traffic loads. For meeting the needs of real-time traffic, it uses a
prioritization scheme.

It is based on a TDMA mechanism, and in order to increase channel
utilization and reduce latency, the nodes can transmit during any
timeslot. It follows fixed timeslot allocation. Nodes that are two-hop
neighbors of each other are not assigned the same slot. Nodes assigned to a
given slot are called its owners; all other nodes are called non-owners.
Nodes with the same priority level have equal chances of stealing unused
slots.

The transmission process has three states: init, wait and backoff. A node
is in the init state during the setup phase. When the node reaches the end
of a slot, it is in the wait state. If the node has packets to send at the
beginning of a slot, it enters the backoff state. To improve channel
utilization, more packets can be sent within the timeslot.

This paper uses a backoff mechanism and a strict priority scheduler which
always favors real time traffic. This may result in starvation for the best
effort traffic: nodes having real time traffic have higher priority than
nodes having best effort traffic, as backoff values are smaller for real
time than for best effort. This can be applied only in networks with high
data rate, continuous real time traffic.

Parameters considered:
The protocol is simulated in OMNeT++ and the results prove that it
provides higher channel utilization and lower latency for both BE and RT
traffic. It offers 100% reliability for real time packet transmission. It
also provides QoS support and fair delivery of data for heterogeneous
traffic.

ISSN: 1694-2108 | Vol. 11, No. 1. MARCH 2014 72


International Journal of Computer Science and Business Informatics

IJCSBI.ORG
2.2. Cross-Layer MAC Protocols
The cross-layer interaction is defined as back-and-forth information flows,
merging of adjacent layers, design coupling without a common interface,
and vertical calibration across layers. The implementations for cross-layer
interactions include explicit interfaces between different layers, shared
database, and heap organization. [2]

Advantages of Cross-Layered Protocols:
Both the information and the functionalities of traditional communication layers are merged into a single protocol. This provides informed scheduling decisions reflecting the current network status, and dynamically optimized scheduling [10].

2.2.1. CL-MAC
Mohamed S. Hefeida et al. [11] proposed CL-MAC to efficiently handle multi-hop and multi-flow traffic patterns in heterogeneous wireless sensor networks. Multi-flow traffic is generated by sensor nodes equipped with different sensors, each sensing a different type of data.

CL-MAC is useful for both homogeneous and heterogeneous traffic. It uses a unique flow setup packet (FSP) scheme that schedules multiple packets over multiple multi-hop flows in a single cycle, whereas other MAC protocols support only single multi-hop flows. Each FSP can operate as an RTS to up to K different destinations. Before setting up a flow, CL-MAC scheduling considers the packets pending in the routing-layer buffer and any pending flow setup requests.

The advantages of this setup are that it allows nodes to make better flow-setup decisions, optimizes scheduling, minimizes control overhead per data packet, and detects traffic-load variations by providing information about the current network status. It follows dynamic timeslot allocation and accommodates more FSP requests by adopting an early-acknowledgement scheme.
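The FSP batching described above can be sketched as follows. The buffer contents, node names, and the cap K are hypothetical, and CL-MAC's actual FSP format and scheduling logic are richer than this; the sketch only shows how one packet can solicit several destinations at once.

```python
from collections import deque

K = 3  # illustrative cap: one FSP acts as an RTS to up to K destinations

def build_fsp(routing_buffer, pending_requests, k=K):
    """Batch up to k distinct destinations, drawn from packets pending in
    the routing-layer buffer and from pending flow-setup requests, into
    one flow setup packet (FSP)."""
    destinations = []
    for dst in list(routing_buffer) + list(pending_requests):
        if dst not in destinations:
            destinations.append(dst)
        if len(destinations) == k:
            break
    return {"type": "FSP", "destinations": destinations}

buffer = deque(["node-B", "node-C", "node-B"])   # next hops of queued packets
fsp = build_fsp(buffer, ["node-D", "node-E"])    # plus pending flow requests
print(fsp["destinations"])  # ['node-B', 'node-C', 'node-D']
```

Batching destinations this way is what lets CL-MAC amortize one control packet over several flows, reducing control overhead per data packet.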

Parameters considered:
The protocol is simulated in ns-2 and the results prove that CL-MAC makes good scheduling decisions and reduces end-to-end latency. It detects traffic-load variations by monitoring the current network status, and it minimizes control overhead through the FSP scheme.

2.2.2. XLP
Mehmet C. Vuran et al. [12] proposed XLP (cross-layer protocol), which considers all layers from the physical to the transport layer and achieves congestion control, routing, and medium access control. It uses the concept of initiative determination, a node's willingness to participate in the communication, which enables receiver-based contention, initiative-based forwarding, and distributed duty-cycle operation.

Each node initiates data transmission by listening to the channel. If the channel is idle, the node sends an RTS packet, which also serves as a link-quality indicator. A neighbor receiving this packet checks the source and destination; neighbors closer to the sink than the sender form the feasible region, and they perform initiative determination and receiver-based contention.

Receiver-based contention is performed only if the initiative determination is 1. A packet is forwarded based on the routing level of each node, which reflects the progress the packet makes toward the sink: nodes offering longer progress have higher priority. If two nodes want to send an RTS packet to the same node at the same time, the one with the longer progress is allowed to. A sender may fail to get a response to its RTS packet for three reasons: 1) the initiative determination is not 1, 2) there is no feasible region, or 3) CTS packets collide.
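The initiative determination can be sketched as a conjunction of threshold checks, one per concern (link quality, relay load, buffer room, residual energy). The argument names and threshold values below are illustrative assumptions, not the values used in the XLP paper.

```python
def initiative(rts_snr, relay_rate, buffer_occupancy, residual_energy,
               snr_th=10.0, relay_th=5.0, buffer_max=0.8, energy_min=0.1):
    """Return 1 (willing to participate) only if every condition holds;
    any single failing check sets the initiative to 0."""
    return int(rts_snr >= snr_th                  # RTS heard with good SNR
               and relay_rate <= relay_th         # not already congested
               and buffer_occupancy <= buffer_max # room to queue the packet
               and residual_energy >= energy_min) # enough energy to relay

print(initiative(12.0, 2.0, 0.5, 0.6))   # 1: all four checks pass
print(initiative(12.0, 2.0, 0.95, 0.6))  # 0: buffer too full
```

Tying the decision to local link, load, and energy state is what lets XLP fold congestion control and medium access into one distributed rule.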

Distributed duty-cycle operation controls the transceiver unit of each node to save energy; a node's buffer occupancy builds up while it sleeps. XLP achieves good network performance with low implementation complexity.

Parameters considered:
The XLP protocol is simulated in ns-2 and the results show that it achieves uniform energy consumption throughout the network. Each node performs distributed duty-cycle operation, which helps to improve network performance and energy consumption.

2.2.3. EEDS PROTOCOL
Tayseer Alkhdour et al. [13] proposed an ILP (Integer Linear Programming) model for the EEDS (Energy-Efficient Distributed Schedule-based) protocol, which constructs a routing tree and a TDMA schedule to maximize the network lifetime. EEDS reduces the energy consumed in the idle-listening state and is designed mainly for periodic data-collection applications. It considers both the data-link and routing layers.

The ILP model adopts the EEDS assumptions, which cover energy consumption and transmission range, and its cost functions aim at reducing energy consumption and increasing network lifetime. To improve network lifetime, every node selects a parent with high residual energy so as to build an energy-efficient tree: a node with high energy has more children, and vice versa.
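The parent-selection rule above reduces to a one-line maximization, sketched here over a hypothetical neighbour table:

```python
def choose_parent(candidates):
    """Pick the candidate parent with the highest residual energy, so
    high-energy nodes attract more children in the routing tree."""
    return max(candidates, key=lambda n: n["energy"])["id"]

# Hypothetical neighbour table with normalized residual energies.
neighbors = [{"id": "n1", "energy": 0.4},
             {"id": "n2", "energy": 0.9},
             {"id": "n3", "energy": 0.7}]
print(choose_parent(neighbors))  # n2
```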

EEDS timeframes are divided into rounds. Each round has three phases: 1) building the tree, 2) building the schedule, and 3) data transmission. In the first phase, a tree rooted at the sink is built. In the second phase, the TDMA schedule is built from the tree: each parent node prepares a schedule and broadcasts it to its children, and the children's data are aggregated and forwarded to the parent. A parent node does not transmit immediately after receiving packets from a single child, as doing so would eliminate data aggregation.

In the third phase, data is transmitted from the source nodes to the sink. Different frequencies are used to avoid interference. Every node is ON only in its own slots: a leaf node is ON for one slot, whereas a non-leaf node is ON for its own slot and for its children's slots. This phase can be repeated many times, as long as the nodes have the required energy.

Parameters considered:
The ILP problem is solved using the LINGO solver tool. The EEDS simulations are compared with the ILP model, and the results prove better in terms of throughput, energy consumption, and transmission range. Sensor-node deployment and resource scarcity are the main drawbacks of this work.

2.3. Other MAC Protocols

2.3.1. RMAC
Wee Lum Tan et al. [6] proposed the receiver-driven MAC (RMAC) protocol, which focuses mainly on timeslot-stealing and timeslot re-assignment mechanisms for optimizing channel utilization and handling traffic-load variations. Each timeslot is assigned a pair of nodes: a primary sender node and a secondary sender node.

In timeslot stealing, lightly loaded sensor nodes, the primary sender nodes, do not fully utilize the timeslots assigned to them, while heavily loaded sensor nodes, the secondary sender nodes, lack the timeslots they need. Using the stealing mechanism, a secondary sender node can steal the unused timeslots of a primary sender node. This enhances the protocol's throughput under varying traffic loads and handles shorter-timescale changes in traffic patterns.

A heavily loaded sender node listens to the channel to check whether the lightly loaded node is transmitting; this is called Clear Channel Assessment (CCA). For the stealing mechanism to work, the secondary sender node must not be a hidden node to the primary sender node. Stealing reduces average packet latency and improves channel utilization, and pairing primary and secondary sender nodes to a timeslot improves the performance of the stealing mechanism.
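The stealing decision reduces to a pair of boolean checks, sketched here with hypothetical flags: the secondary sender may use the slot only if CCA finds the primary silent and the secondary is not hidden from the primary.

```python
def may_steal(primary_transmitting, hidden_from_primary):
    """A secondary sender steals the current slot only when CCA shows the
    primary owner idle and the hidden-node condition does not hold."""
    return (not primary_transmitting) and (not hidden_from_primary)

print(may_steal(primary_transmitting=False, hidden_from_primary=False))  # True
print(may_steal(primary_transmitting=True, hidden_from_primary=False))   # False
```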

In the timeslot re-assignment procedure, timeslots are redistributed among the sender nodes according to traffic load: one slot is assigned to each lightly loaded sender node and multiple timeslots to each heavily loaded sender node. Sender nodes monitor the number of packets backlogged in their buffers, while receiver nodes monitor the number of times a timeslot goes unutilized.

Parameters considered:
The protocol is simulated in ns-2 and the results show that energy consumption is low and channel utilization is high. It eliminates message overhearing but suffers contention overhead and packet collisions among sender nodes.

2.3.2. DSA
Hoon Oh et al. [14] proposed DSA, which allocates timeslots based on the bandwidth demand of each node. It allocates a sequence of receiving slots followed by a disjoint sequence of sending slots, which increases bandwidth by removing wasted slots, reduces power consumption at the lower depths since nodes switch between states less often, and provides better data aggregation and filtering.

RTS/CTS messages are exchanged between parent and child within a slot, which removes link breakages, supports reliable data transmission, and updates the synchronization time. Before forwarding packets to its parent, each node filters and aggregates the received packets. DSA also handles the clock-drift problem by considering a SYNC_DELAY parameter; clock speed is not constant at all times but varies with the clock's quality and power, which is called the clock-drift problem.
The slot demand is the number of slots a node needs to send its own packets and its children's packets. The sink starts slot scheduling and allocates slots to its children so that no slot in the super frame is wasted. The slot demand is calculated using equation (1) below:

Di^p(i) = Σ_{k ∈ ch(i)} Dk^i + |T(i)|    (1)

where Di^p(i) is the slot demand of node i with respect to its parent p(i), ch(i) is the set of children of node i, and |T(i)| is the number of nodes in the tree that originates from node i.

In equation (1), the first term is the number of slots the node distributes to its children, and the second term is the number of slots it reserves for transmitting to its parent. For the sink node, the first term is the super frame and the second term is 0, where the super frame is the total number of slots received at the root node.
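Equation (1) can be evaluated as a recursion over the routing tree. The tree encoding and node names below are illustrative: a node's demand is its children's demands (slots allocated to their subtrees) plus one sending slot per packet in its own subtree.

```python
def subtree_size(node, children):
    """|T(node)|: number of nodes in the subtree rooted at node."""
    return 1 + sum(subtree_size(k, children) for k in children.get(node, []))

def slot_demand(node, children):
    """Equation (1): D_i = sum of children's demands + |T(i)| sending slots."""
    kids = children.get(node, [])
    return sum(slot_demand(k, children) for k in kids) + subtree_size(node, children)

# Hypothetical chain topology: sink <- a <- b.
tree = {"sink": ["a"], "a": ["b"], "b": []}
print(slot_demand("b", tree))  # 1: leaf b only sends its own packet
print(slot_demand("a", tree))  # 3: b's 1-slot region plus 2 sends (own + b's packet)
```

For the sink itself the sending term is 0, so its allocation is simply the sum of its children's demands, which is the super frame.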

Parameters considered:
The DSA protocol is implemented in ns-2 and the results show better performance in terms of network lifetime, energy efficiency, reliability, bandwidth utilization, and balanced power consumption.

3. SUMMARY OF MAC PROTOCOLS
The summary shows that many authors have considered parameters such as energy efficiency, throughput, bandwidth utilization, latency, scheduling efficiency, and traffic-load adaptivity. They have also indicated the type of MAC used (TDMA, CSMA, or hybrid) and whether the timeslot size is fixed or variable.

Table 1. Comparison of the surveyed MAC protocols

The protocols fall into three schemes: QoS-based MAC (DIFFMAC [8], PQMAC [4], AMPH [9]), cross-layer MAC (XLP [12], CL-MAC [11], EEDS [13]), and other MAC (DSA [14], RMAC [6]). They are compared on energy efficiency, priority, throughput, reliability, bandwidth utilization, SYNC_DELAY, latency, scheduling efficiency, traffic-load adaptivity, fairness, and duty cycle.

Type: DIFFMAC - CSMA/CA; PQMAC - hybrid; AMPH - hybrid; XLP - CDMA; CL-MAC - TDMA; EEDS - TDMA; DSA - hybrid; RMAC - TDMA.

Timeslot size: fixed for AMPH and variable (dynamic) for CL-MAC, as described in the corresponding protocol sections.

4. CONCLUSION AND FUTURE RESEARCH DIRECTIONS
In this paper, we have surveyed various MAC protocols for WSNs and presented a comparison across different parameters. The majority of the protocols classify traffic by data type and treat packets according to their requirements. Certain protocols instead adapt MAC parameters to the networking conditions and provide QoS support indirectly. Finally, we have chosen priority as the main parameter for future work, as it is considered by only a few authors. Each node will select a neighbor to forward its sensed data, a traffic classifier will differentiate the types of traffic, and priority will be assigned to the sensed data according to its QoS requirement. With this traffic classification and priority assignment, prioritized data will reach the destination faster, so that rescue operations can be performed as soon as possible.

REFERENCES
[1] Ranjana Thalore, Jyoti Sharma, Manju Khurana, M. K. Jha, "QoS evaluation of energy-efficient ML-MAC protocol for wireless sensor networks", International Journal of Electronics and Communications (AEÜ), Elsevier, pp. 1-6, June 2013.
[2] Rajesh Yadav, Shrishu Varma, N. Malaviya, "A survey of MAC protocols for wireless sensor networks", UbiCC Journal, Volume 4, Number 3, pp. 827-833, August 2009.
[3] Sunil Kumar, Vineet S. Raghavan, Jing Deng, "Medium Access Control protocols for ad hoc wireless networks: a survey", Ad Hoc Networks, Elsevier, pp. 1-33, 2004.
[4] Hoon Kim and Sung-Gi Min, "Priority-based QoS MAC protocol for wireless sensor networks", IPDPS '09: Proceedings of the 2009 IEEE International Symposium on Parallel & Distributed Processing, IEEE Computer Society, Washington, DC, USA, pp. 1-8, Dec 2009.
[5] GholamHossein Ekbatanifard, Reza Monsefi, Mohammad H. Yaghmaee M., Seyed Amin Hosseini S., "Queen-MAC: A quorum based energy-efficient medium access control protocol for wireless sensor networks", Computer Networks, Elsevier, pp. 2221-2236, 2012.
[6] Wee Lum Tan, Wing Cheong Lau, On Ching Yue, "Performance analysis of an adaptive, energy-efficient MAC protocol for wireless sensor networks", Journal of Parallel and Distributed Computing, Elsevier, pp. 504-514, Feb 2012.
[7] M. Aykut Yigitel, Ozlem Durmaz Incel, Cem Ersoy, "QoS-aware MAC protocols for wireless sensor networks: A survey", Computer Networks, Elsevier, pp. 1982-2004, Feb 2011.
[8] M. Aykut Yigitel, Ozlem Durmaz Incel, Cem Ersoy, "Design and implementation of a QoS-aware MAC protocol for Wireless Multimedia Sensor Networks", Computer Communications, Elsevier, pp. 1991-2001, June 2011.
[9] M. Souil, A. Bouabdallah, A. E. Kamal, "Efficient QoS provisioning at the MAC layer in heterogeneous wireless sensor networks", Computer Communications, Elsevier, pp. 1-15, Feb 2014.
[10] Christophe J. Merlin, "Adaptability in Wireless Sensor Networks Through Cross-Layer Protocols and Architectures", Ph.D. Thesis, University of Rochester, Rochester, New York, 2009.
[11] Mohamed S. Hefeida, Turkmen Canli, Ashfaq Khokhar, "CL-MAC: A Cross-Layer MAC Protocol for Heterogeneous Wireless Sensor Networks", Ad Hoc Networks, Elsevier, pp. 213-225, May 2013.
[12] Mehmet C. Vuran, Ian F. Akyildiz, "XLP: A Cross-Layer Protocol for Efficient Communication in Wireless Sensor Networks", IEEE Transactions on Mobile Computing, Jan 2010.
[13] Tayseer Alkhdour, Uthman Baroudi, Elhadi Shakshuki, Shokri Selim, "An Optimal Cross-Layer Scheduling for Periodic WSN Applications", The 4th International Conference on Ambient Systems, Networks and Technologies, Elsevier, pp. 88-97, June 2013.
[14] Hoon Oh and Trung-Dinh Han, "A demand-based slot assignment algorithm for energy-aware reliable data transmission in wireless sensor networks", Springer, pp. 523-534, Feb 2012.

This paper may be cited as:

Sridevi S., Priyadharshini R. and Usha M., 2014. Survey of MAC Protocols for Heterogeneous Traffic in Wireless Sensor Networks. International Journal of Computer Science and Business Informatics, Vol. 11, No. 1, pp. 67-79.


Harnessing Social Media for Business Success. Case Study of Zimbabwe
Musungwini Samuel
Computer Science and Information Systems
Midlands State University
Gweru Zimbabwe

Zhou Tinashe Gwendolyn
Computer Science and Information Systems
Midlands State University
Gweru Zimbabwe

Zhou Munyaradzi
Computer Science and Information Systems
Midlands State University
Gweru Zimbabwe

Ruvinga Caroline
Computer Science and Information Systems
Midlands State University
Gweru Zimbabwe

ABSTRACT
The purpose of this research was to establish the impact of harnessing social media on Zimbabwean businesses, with particular reference to Facebook. The researchers reviewed literature from other researchers to guide them, and used a focus group discussion and questionnaires to elicit information from the subjects. Participants in the questionnaire research were Facebook users actively running Facebook profiles, mainly those participating in Zimbabwean business promotional campaigns on the social platform at the time of the research. The focus group discussion participants included MSc Information Systems Management students at Midlands State University. In Zimbabwe at the present moment, social media is still evolving and its potential in business remains to be seen. Nevertheless, social media creates a real connection between companies and customers, and that connection creates a trend toward purchase intensity. The contribution of this research to the body of knowledge is that social media is imperative for any business in today's world and every business should therefore embrace it, though in doing so caution must be exercised.

Keywords
Social media, Netizens, Social network, ICTs, techno-savvy.

1. INTRODUCTION
There are a number of definitions of social media (http://econsultancy.com). This implies that social media can be defined in a number of ways, depending on how one understands it and what it can be used to accomplish. Social media is the collective of online communications channels dedicated to community-based input, interaction, content-sharing and collaboration. Kaplan and Haenlein (2010), cited in Cox (2012), describe social media as "a group of Internet-based applications that build on the ideological and technological foundations of Web 2.0, and that allow the creation and exchange of user-generated content". Although the nature and nomenclature of these connections may vary from site to site (Boyd & Ellison, 2007), the impact is nearly uniform. This media, which first came into existence in 1997 with the first website SixDegrees.com, has since developed to become the media of choice. With a reported 4.2 billion mobile users accessing social sites from their mobiles, it has become imperative for businesses to harness this powerful media for business success.

These writers looked at the impact of harnessing these technologies on Zimbabwean businesses, with particular reference to Facebook. ICTs have both a positive and a negative impact on business. These technologies have the prospect of raking in revenue for any business that uses them properly in an enabling environment. Although Zimbabwe has a history of playing catch-up in technological matters, it equally has a rich history of hitting the ground running. This may be attributed to the fact that Zimbabwe has a literacy rate second to none in Africa, according to an article by The African Economist published on July 6, 2013. Around the middle of 2013 we witnessed a number of Zimbabwean companies taking their business operations to Facebook.

LITERATURE REVIEW
a. What is Social media?
Social networking is a subject that divides opinion: while some people think it is a remarkable tool, others are equally worried about the impact it has on people's lives. Lots of people from all walks of life use social media sites for both work and pleasure. The most visited online social network is Facebook, founded in 2004, with over 600 million users and a presence in over 70 countries (Carlson, 2011; Techtree News Staff, 2008). According to Ofcom's research in 2012, six in ten adult Internet users had their own social networking profile. Social media is surely altering the way people engage in conversations and exchange knowledge about the kind of service they get, the quality of the products they buy, and how they generally want to be treated as consumers, citizens and employees. Businesses are increasingly recognising the influence social media can have

on their businesses and are aligning their marketing approaches and
investing resources accordingly.

Social media implementation within organizations the world over is occurring at a rapid pace (Baker & Green, 2008). The global consulting firm McKinsey found in a survey that 65% of companies reported the use of Web 2.0 technologies in their organizations (Bughin & Chui, 2010). Forrester Research forecast that corporate spending on enterprise social media would reach more than $4.6 billion annually by 2013 (Young et al., 2008, as cited by Treem & Leonardi, 2012). In Zimbabwe, firms are scrambling for this media, which has seen the current young generation become highly techno-savvy. Nevertheless, regardless of the increased adoption of social media by firms, the implications of these new technologies for organizational processes are not yet well understood by business people. Across the globe, different academics have suggested that social media adoption in organizations is outpacing practical understanding of the use of these technologies.

b. What is Facebook?
Facebook is the world's most popular social networking website. It makes it easy for users to connect and share with family and friends online. It is arguably believed that Facebook has helped the web become not only more open but also more social. The Facebook social networking site has reached close to 700 million users (eBIZ MBA, 2011); looking at the number of users, if Facebook were a country, it would be the third largest (Hardaker, 2011). According to a brochure released by Websense, Facebook has an annual growth rate of 41%, and Twitter is growing at 85% year after year (Websense, 2011). Facebook has more than 800 million active users, with over 50% of active users logging on every day (Facebook, 2011). This means at least 400 million people log onto their Facebook accounts every day, which is sweet music to all business people because there is power in numbers. Facebook captured the number one ranking by time spent in August 2010, accounting for 12.3% of time spent online in the United States (ComScore, 2010). Because the implications of social media use in organizations are not well understood the world over, and in Zimbabwe in particular, in this paper we explored social media and the Zimbabwean business landscape.

2. RESEARCH OBJECTIVES
In this paper the researchers' objectives were to establish the effects of harnessing these social platforms and to give recommendations for the businesses in question and other companies that intend to take their businesses to these social platforms. To accomplish the objectives, the researchers used the following questions:

a. What are the benefits that accrue to a business as a result of harnessing the power of social media?
b. What can business people do to maximize the benefits of social media?

In the process of answering these questions, new practical knowledge was made available for the businesses in question as well as other businesses in Zimbabwe and the world at large. This information is a boon to business people in general and to those in Zimbabwe in particular.

3. METHODOLOGY
The researchers used a focus group discussion and questionnaires to elicit information from the subjects. Participants in the questionnaire research were Facebook users actively running Facebook profiles, especially those participating in Zimbabwean business promotional campaigns on this social platform. The focus group discussion participants were some of the MSc Information Systems Management students at Midlands State University. A questionnaire is a research instrument consisting of a series of questions and other prompts for the purpose of gathering information from respondents (Chaudhuri, Ghosh & Mukhopadhyay, 2010). The primary purpose of a survey is to elicit information which, after evaluation, results in a profile or statistical characterization of the population sampled (Chaudhuri et al., 2010).

In the last quarter of 2013, the Zimbabwean Facebook community was abuzz with the "comment and likes" competition, whereby individuals were asked to answer a question or comment on a business organisation's website, and the comment which got the most likes at the cut-off time was announced the winner on the organisation's website. Such business organisations were prioritised for consideration in this research. A focus group method was also used in this study because of its long history of use in market research. This was defined by Wimmer and Dominick (1997) as "a research strategy for understanding audience/consumer attitudes and behaviour" (p. 97). Calder (1977) suggested that focus group interviews or discussions are a suitable method for explorative studies. Jarvenpaa and Lang (2005) have also demonstrated the feasibility of focus group discussions in studying innovative mobile services. Most researchers prefer a homogeneous group with the common threads being the issues for discussion (Vaughn, Schumm, & Sinagub, 1996). The researchers therefore chose 8 students from the MSc Information Systems Management programme at Midlands State University and requested them to look into the issues of Facebook and its impact on business before coming for the session.

4. FINDINGS
a. Results from stage one: Focus group discussion.

Every research project is conducted to fulfil a particular purpose. For this research the researchers were guided by two questions.

i. What are the benefits that accrue to a business as a result of harnessing the power of social media?
In the focus group discussion there was consensus among participants on the benefits of this platform to business; however, it is important to note that one of the participants in the focus group discussion was not using any social network at all. According to participants, one benefit of social media is that it increases website traffic: netizens will troop to an organisation's website once the organisation takes to social media. Social media "enables a company to network with customers in order to build relationships and achieve a better understanding of customer needs" (Cox, 2012, p. 18). As a result, the organisation becomes more visible on the network landscape. These sentiments confirm the findings of the Harvard Business Review, which says that the exponential growth of social media, from blogs, Facebook and Twitter to LinkedIn and YouTube, offers organisations the chance to join a conversation with millions of customers around the globe every day (2010, p. 1).

Participants concurred that within any social network there is a segment of the population that an organisation wants to see its messages and to be familiar with its content when a critical activity occurs, and the organisation wants this reciprocated. Hence social networks provide the platform upon which organisations build relationships and provide content to support their goals. To maximize this reach, a business must have a presence where customers are hanging out (Cox, 2012). All participants in the focus group discussion agreed that harnessing social networks has a huge effect on an organisation's marketing budget, which is why there is so much promotional activity going on on companies' Facebook pages in Zimbabwe. We believe that any penny saved is a penny gained, hence this is very vital for any business.

Six out of the 8 participants agreed that taking to the social platforms enables an organisation to forge new partnerships with other organisations; currently in Zimbabwe we are witnessing Handy Andy, Omo, Netone and others posting messages on each other's Facebook pages. This, the participants suggested, has the effect of enabling non-competing companies to market other companies to their clients. Participants further suggested that this is likely to have an impact on an organisation's sales as a result of new clients.

ii. What can Zimbabwean business people do to maximize the benefits of social media?

Since social media is available to anyone with an internet connection, and even more so now to everyone with a smartphone, it is a platform that can be recommended to business people to increase their brand awareness and facilitate direct feedback from their customers. A business that understands the advantage of social media is well aware that social media is essential in developing new business in the current competitive, online-driven marketplace. Business is about clients, and therefore where clients hang out becomes important; increasingly, they are hanging out on social networking sites (Halligan, Shah, & Scott, 2009). As the new kid on the technology block, this platform lets business organisations reach out to the people accessing social media regularly.

Business organisations can develop a social media policy to educate their employees, provide better understanding, keep their social media activities within certain parameters, and enlighten them on the implications of their participation in the social space. Keep it as informal as possible: if you want to use social media successfully, don't take the "social" out of social media. Security is another concern, as social media access in a corporate environment carries many security risks. Social media sites are fertile ground for attackers because of their huge numbers of users and the availability of information. A mistaken sense of acquaintance and trust can entice workers to share an organization's sensitive information with outsiders. Participants indicated the need for every organisation to be aware of these dangers, as they have a double effect, on the business as well as on the customers.

Many organizations take a dichotomous decision of consenting to, or forbidding, social media across the organization, and an organization may find it challenging to decide the degree of freedom allowed in its social media policy. Therefore, decision-making capacity on this matter is needed in any organisation. Legal and privacy issues in social networking also need to be addressed if an organisation is to realise the maximum benefits of social networks. Participants noted that social networks have a worldwide presence cutting across geographical boundaries, and yet there are various laws and regulations related to privacy in different geographical locations. One issue is that laws and regulations are not able to catch up with the rate at which technology is evolving, and there are different expectations and sensitivity levels with respect to privacy in different geographical regions.

b. Stage two: Questionnaire


We in-boxed the questionnaire to 100 Zimbabwean Facebook users who
were selected on the bases of their participation on promotional activities of
businesses as evidenced by their actions on the promotional Facebook
profiles of companies. Out of these 87 responded to our questionnaire and
out of these 53 were female while 34 were male. That makes it a ratio of

ISSN: 1694-2108 | Vol. 11, No. 1. MARCH 2014 85


International Journal of Computer Science and Business Informatics

IJCSBI.ORG
60.92% females to 39.08% males. This gave us an insight that more women use social media than men in Zimbabwe. This is supported by ComScore's comprehensive 2010 review of digital usage in the United States of America, which concluded that women spent more of their time on social networking sites (16.8%) than men did (12%). On the issue of age, the bulk of participants fell in the 25-and-below age group, which accounted for 45 participants (51.72%). This is supported by research conducted in Australia, which found that almost all young Australians are online, with 90% of 16-29 year olds using the internet daily (Nielson 2010a:139). The 25-39 age group constituted 36 participants (41.38%). The 40+ age group constituted a paltry 6 participants (6.90%); it is also notable that this age group contained no male respondents. Having looked at the demographic data, we now move to the subject-specific questions.
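The demographic percentages above follow directly from the respondent counts; a minimal sketch of the arithmetic (counts taken from the survey figures reported here, note that 6 of 87 works out to 6.90%):

```python
# Respondent counts reported in the survey (total of 87 responses).
counts = {"female": 53, "male": 34}
age_groups = {"<=25": 45, "25-39": 36, "40+": 6}

total = sum(counts.values())  # 87 respondents

def pct(n, total):
    """Share of the total as a percentage, rounded to two decimal places."""
    return round(100 * n / total, 2)

gender_pct = {k: pct(v, total) for k, v in counts.items()}
age_pct = {k: pct(v, total) for k, v in age_groups.items()}

print(gender_pct)  # {'female': 60.92, 'male': 39.08}
print(age_pct)     # {'<=25': 51.72, '25-39': 41.38, '40+': 6.9}
```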

On how often the respondents usually go online on their social platforms: the first group, which we called the social networkaholics, were more resident on the network than anywhere else. This group spends on average 6 hours on their Facebook pages, and the bulk of these people are in the <25 age group; 31 respondents (35.63%) composed this group. The largest number of respondents, 48 (55.17%), visit the social network at least once a day. The remaining 10 (11.5%) are occasional and irregular Facebook visitors. This was again supported by the same research in Australia, which established that young people spend more time online (an average of 22 hours per week) than any other age group (Nielson 2010a:78).

On whether Facebook is influencing their buying behaviour: 59 respondents (67.82%) indicated that Facebook is highly influencing their buying behaviour. The remaining 28 (32.18%) indicated that they were not persuaded by Facebook in their buying decisions, describing themselves as brand loyalists who stick to their brand regardless of circumstances. Women were more highly affected in this regard than men. Participants were also asked to state any products that they had previously not been buying at all, or had been buying in smaller quantities, which they now buy or buy in greater quantities. Two products were prominent: Omo washing powder and Handy Andy. 43 respondents (49.43%) indicated that they now buy Omo washing powder or have increased the quantities they buy, while 31 (35.63%) now buy Handy Andy or buy more of it.

5. DISCUSSION

In this paper we found that social media increases website traffic: social network dwellers will troop to an organisation's website once the organisation takes its business to the social media platform. This tallies with previous research, which implied that social networks enable information dissemination to occur not only between companies and the customer, but also between networks of customers (Mangold & Faulds, 2009). The resulting increased visibility on the electronic business landscape, coupled with promotional activities, will result in greater sales volumes for the business. This corresponds with other studies of social media in organizations, which have noted that the visibility of content is seen as an effective way for employees to get a feel for what is happening in an organization (Brzozowski, 2009; Zhao & Rosson, 2009).

Promotional activities on Facebook have a direct influence on customers' buying behaviour in Zimbabwe, with a product like Omo washing powder now filching other washing powders' clients as a result of Facebook promotion. It therefore becomes imperative for every business to adopt social networks for business. Our findings also indicate that social media reduces the advertising budget significantly. This is supported by Reijonen (2010). Social media enables firms to engage consumers in a timely and direct manner at relatively low cost and with higher levels of efficiency than more traditional communication tools (Cox, 2012).

It should also be noted that, because social media has become a communication medium of choice, it can cause massive damage to the corporate image of an organisation through bad publicity. Jeremy Wagstaff, a commentator on technology, has observed that the most effective way to get satisfactory service these days is to tweet about how bad it is. We also found that females are highly influenced by social media in their buying behaviour. The social platforms enable organisations to forge new partnerships with other organisations. There are more women than men on the social platforms in Zimbabwe, and users are mostly of the techno-savvy generation. Although there is not much literature to support these results, we believe them to be a true representation of the Zimbabwean landscape.

6. LIMITATIONS
This research was conducted at a time when social network use in Zimbabwean businesses was in its infancy, and hence this was an exploratory study of the Zimbabwean case. Future research can be conducted when use of the technology has matured. In this paper we confined our research to Facebook; we believe future research can be extended to other social networks. We also believe sector-specific research can be conducted for better understanding.

7. CONCLUSION
The aim of this study was to establish the effects of harnessing social platforms, Facebook in particular, in business in Zimbabwe. Although social media in Zimbabwe is still evolving and its potential in business remains to be seen, we established that social media creates a real connection between companies and customers, and that connection creates a trend towards purchase intensity. During the 2010 FIFA football World Cup, Nike placed an ad with Facebook, and within a few minutes an average of 8 million viewers had registered with Facebook (kevthefont, 2010). An ongoing connection and relationship with potential customers will eventually turn them into real customers, while at the same time turning them into unofficial network ambassadors. Social media is a cost-effective method for marketing activities (Paridon & Carraher, 2009). Businesses which want to stay ahead of the curve need to invest today in the media that is fast emerging as the future of internet marketing (Aggarwal, 2010). In our research we found that most of the respondents on social networks in Zimbabwe were young, techno-savvy, and female, and more women were also influenced by social networks in their buying behaviour. The contribution of this study to the body of knowledge is that social media is imperative for any business in today's world, and every business should therefore embrace it, but in doing so caution must be exercised.

ACKNOWLEDGEMENT
The researchers would like to acknowledge the cooperation from a number
of Facebook users scattered in various places of Zimbabwe who were
approached by these researchers through their Facebook profile inboxes, for
providing data that enabled this study to be carried out. Their contribution is
greatly appreciated.

REFERENCES
Age distribution on social media sites (2010, February).
Baker, S. & Green, H. Beyond Blogs. Accessed on 13 January 2014 from http://www.businessweek.com/stories/2008-05-21/beyond-blogs
Boyd, D. M., & Ellison, N. B. (2007). Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication, 13(1), 210-230.
Brzozowski, M., Sandholm, T., & Hogg, T. (2009). Effects of feedback and peer pressure on contributions to enterprise social media. Proceedings of the 2009 International Conference on Supporting Group Work (pp. 61-70). New York: ACM. doi:10.1145/1531674.1531684
Bughin, J. & Chui, M. (2010). The rise of the networked enterprise: Web 2.0 finds its payday. McKinsey Quarterly. Accessed on 21 December 2013 from http://www.mckinsey.com/insights/high_tech_telecoms_internet/the_rise_of_the_networked_enterprise_web_20_finds_its_payday
Carlson, N. Facebook has more than 600 million users, Goldman tells clients. Accessed on 6 January 2014 from http://www.businessinsider.com/facebook-has-more-than-600-million-users-goldman-tells-clients-2011-1#ixzz2x2jZHSFv

Bullas, J. 12 awesome social media facts and statistics for 2013. Accessed on 7 January 2014 from http://www.jeffbullas.com/2013/09/20/12-awesome-social-media-facts-and-statistics-for-2013/
ComScore 2010 US Digital Year in Review (2011, February).
Cox, S. L. Social media marketing in a small business: A case study.
Jackson, A., Yates, J., & Orlikowski, W. (2007). Corporate blogging: Building community through persistent digital talk. Proceedings of the 40th Annual Hawaii International Conference on System Sciences. Los Alamitos, CA: IEEE Computer Society Press. doi:10.1109/HICSS.2007.155
Gupta, R. (2010). Top 10 strategies to promote hotels on social media channels.
Treem, J. W. & Leonardi, P. M. (2012). Social media use in organizations: Exploring the affordances of visibility, editability, persistence, and association.
Jarvenpaa, S. L. & Lang, K. R. (2005). Managing the paradoxes of mobile technology.
Lake, C. What is social media? Here are 34 definitions. Accessed on 17 December 2013 from http://econsultancy.com/zw/blog/3527
Mangold, W. G. & Faulds, D. J. (2009). Social media: The new hybrid element of the promotion mix. Business Horizons, 52(4), 357-365.
Nielson (2010a). The Australian Internet & Technology Report, Edition 12. The Nielson Company.
Oatay, A. The strengths and limitations of interviews as a research technique for studying television viewers. Accessed 5 December 2013 from http://www.aber.ac.uk/media/Students/aeo9702.html
Paridon, T. & Carraher, S. M. (2009). Entrepreneurial marketing: Customer shopping value and patronage behavior. Journal of Applied Management & Entrepreneurship, 14(2), 3-28.
Percy, L. (1982). Using qualitative focus groups in generating hypotheses for subsequent quantitative validation and strategy development. Advances in Consumer Research, Volume 9, pp. 57-61. Accessed on 7 December 2013 from http://www.acrwebsite.org/search/view-conference-proceedings.aspx?Id=5901
The African Economist. Ranking of African countries by literacy rate: Zimbabwe No. 1. Accessed on 9 December 2013 from http://theafricaneconomist.com/ranking-of-african-countries-by-literacy-rate-zimbabwe-no-1/
Vaughn, S., Schumm, J. S., & Sinagub, J. M. (1996). Focus Group Interviews in Education and Psychology.
Zhao, D. & Rosson, M. B. (2009). How and why people Twitter: The role that microblogging plays in informal communication at work. Proceedings of the 2009 International Conference on Supporting Group Work (pp. 243-252). New York: ACM. doi:10.1145/1531674.1531710
https://www.facebook.com/handyandyzimbabwe?ref=br_tf
https://www.facebook.com/pages/Hammer-Tongues-Auctioneers/253464304680602
https://www.facebook.com/omozimbabwe
http://kevthefont.wordpress.com/author/kevthefont/page/2/
http://stakeholders.ofcom.org.uk/market-data-research/market-data/communications-market-reports/cmr12/

This paper may be cited as:


Samuel, M., Gwendolyn, Z. T., Munyaradzi, Z. and Caroline, R., 2014.
Harnessing Social Media for Business Success. Case Study of Zimbabwe.
International Journal of Computer Science and Business Informatics, Vol.
11, No. 1, pp. 80-89.


Quality Platforms for Innovation


and Breakthrough
Dr. Hima Gupta
Jaypee Business School
Jaypee Institute of Information Technology
A-10, Sector 62
Noida (UP), India 201 307
Tel: +91 (120) 2400974-5, Ext 315 or 137
Fax: +91 (120) 2400986

ABSTRACT
This paper focuses on key issues of innovation and quality. Both 'quality' and 'innovation' play vital roles in keeping businesses competitive. Quality aims for high and sustainable performance in existing business areas, while innovation aims for breakthroughs. Opportunity recognition is the bridge that connects a breakthrough idea to the initial innovation evaluation process, which in turn leads to the formation of a formally established commercialization effort. This research paper examines how innovation leads to quality in an organization and describes the relationship between quality and innovation. In this paper, secondary data in the form of research papers has been used. The findings of the research are useful to today's organizations.
Purpose - This paper documents the importance of quality as regards innovation, based on the literature available in other reports, showing how quality affects innovation and what companies are choosing to innovate on, which will ultimately lead to breakthroughs.
Design/methodology/approach - The procedure applied here is based entirely on secondary data, after reviewing various research papers and reports on organizational capability to innovate and to impart quality in the products delivered to customers.
Findings - Dynamic innovation capabilities have an inverted-U-shaped relationship with breakthrough innovation, which is substantiated by the data and facts cited in this paper. At the same time, there are strong possibilities of adopting open-source innovation methods to sustain the positive consequences of dynamic innovation capabilities on breakthrough results.
Research limitations/implications - Open innovation develops effective coordination and cooperation among employees and leaders, providing a competitive edge to companies. These facts and figures are highlighted with the support of the existing literature.
Practical implications - The findings given here can be of use to manufacturing and service organizations for improving their bottom line and directing their employees to bring innovativeness through continuous improvement and dedication.


Originality/value - This study shows that firms fail miserably when they do not keep pace with developing novel products or services in the current competitive situation.

Keywords
Quality, Innovation, Breakthrough Innovation, Business

1. INTRODUCTION
Innovation focuses on thinking in a different way: adopting an imaginative outlook and generating responses that may influence social and economic values. Innovation is of great significance not only for creating competitive or collaborative advantage; it also addresses many other needs, such as mitigating problems faced by the general public, improving governance, and creating academic value.
In an industrial or commercial setting, innovation or creativity is generally described as a proven increase in the worth of any product or service. The process may be incremental or breakthrough, and may take place occasionally or systematically; it may be achieved by adopting any of the following tactics:
- launching better goods or services in terms of design or features;
- employing innovative or superior working methods; and/or
- applying novel or better organizational/administrative measures.
This will result in improved market share, competitiveness and quality, along with reduced costs.
In today's globalized world, innovation encompasses new and unique applications of old technologies, design to develop new products and services, new procedures and structures to improve performance in diverse areas, organizational creativity, and society's initiatives to enhance the delivery of services. Further, innovation is also seen as a way to create sustainable and cost-effective solutions for and by people at the bottom of the pyramid, to provide inclusive growth in developing economies. The innovation system is also focusing on absorbing hidden innovations in the services sector, creative industries and grassroots activities.
The profitable growth of a company is achieved by working on quality and innovation as mapped out by quality professionals. People working in pioneering companies know the customers' needs and the organization's capabilities, which constitute an intrinsic advantage in being innovative and competitive. To extract the crux, they only need an innovation methodology that actually utilizes the prospects offered by customers.


2. LITERATURE REVIEW

Many firms invest heavily in developing innovation capabilities, which come through their resources. Radical innovations that produce new products are the result of open innovation. Drawing on the absorptive-capacity perspective, the organizational inertia presumption, and open innovation, the authors argue that dynamic innovation capabilities have a curvilinear effect on breakthrough innovation that is moderated by open innovation activities [1]. By contrast, organizational inertia theory suggests that long-held dynamic innovation capabilities may eventually hinder breakthrough innovation.
Researchers have conflicting views on firms' use of open innovation [9]. Some firms adopt an innovation process in collaboration with their partners and have preferences for exercising control over breakthrough innovation [10].
Organizational learning processes and routines lead to dynamic innovation capabilities, which are rooted in innovation knowledge and become part of the transformation of a firm's innovation knowledge resources. In some situations customers play a vital role and pass an innovative idea to the firm for the development of attractive quality creation [24].
The data collected for that research was taken from 30 respondents of a company designing and manufacturing microwave ovens. It is very clear that novelty in a product can be produced through the theory of attractive quality. It has been clearly noted that the idea of attractive quality can be captured only at the early stage of the product or service development process. The literature review found that components of the value chain such as marketing, research and development, procurement, and operations processes are basically linked to product quality and its innovation [16].
A firm's capabilities, used as part of a strategy to provide an edge over its competitors, are basically rooted in organizational expertise in all sorts of value chain activities, following the value chain concept proposed by Porter (1985). In actual business scenarios, core distinctive competencies form the base for building greater skill in performing those value chain activities, by directing more resources to the activities themselves. Finally, at a later stage of an organization's life cycle, such a competency becomes part of its sustainable competitive advantage. Plenty of examples in industry substantiate this concept. For instance, Honda, Intel, and Du Pont are well known for their exceptional research and development competencies. Similarly, Sony, Black and Decker and Toyota are noted for their excellent manufacturing


competencies, while Gillette has been praised for its effective promotion of branded products and Wal-Mart for an effective distribution system. A basic notion prevalent across businesses is that innovative activities manifested as value chain activities can easily be copied and imitated by competitors, but this is not true in the case of services. Further, analysis of value chain activities is the only way to find how to perform them in a new or innovative way through the proper use of a firm's resources. Because these types of innovation are firm specific - that is, they are based upon the firm's unique way of combining its resources and capabilities - they are difficult to interpret and measure ([47] Hitt et al., 1996). The present study addresses this shortcoming by directly focusing on competencies and capabilities in a way that is consistent with theory.
Data on Australian firms was collected from 194 managers as respondents, and structural equation modelling was used to test the hypotheses. The results are limited by the sample size and geography of the survey.
According to Fred (2007) [9], cost control and product quality are the primary sources of competitive advantage in the global market when innovating products or services. According to the author, innovation is preceded by process, and through quality tools the performance of existing processes is continuously improved, highlighting effectiveness and innovation as a whole. Thus, innovation includes advances in a firm's products, production processes, management systems, organizational structures, and strategies.
The study of Fred [9] shows that Total Quality Management (TQM) plays a vital role in forming contemporary management practices. In a knowledge-based society, quality alone does not drive the innovation process. Accordingly, sustainable competitive advantage has shifted towards innovation rather than quality, innovation being considered a basic component of entrepreneurship.
Several studies have also recognized a direct relationship between TQM and innovation; for example, speed to market in new product development arising from innovation was examined ([25] Flynn, 1994) by taking organizational performance as the dependent variable and TQM practices as independent variables for a large random sample of manufacturing companies surveyed by the researcher. The positive impact of innovation, measured in terms of new products or services introduced in a fixed period of time, could not by itself justify the claim that innovation affects the performance of an organisation. These studies also discovered that not all TQM practices
improve the firm's performance in the form of innovativeness. The study of Pratibha (2005) [12] supports the fact that employees' perception of organizational culture is important.

The organization should focus on the key components for creating a supportive culture: an outward-looking focus, breaking down barriers, creating cross-functional teams, and learning by doing rather than thinking (Exxon, GE, and 3M). Technologically intensive radical innovations are possible through defined business and management practices (Veryzer Jr, 1998; O'Connor; Tushman and Anderson, 1986) [22]. Rather than simply employing people with specialized knowledge and developing a new product from an idea that finally goes into production, organizations actually require three kinds of people.
The first category is the "arrow-shooters", who carry ideas into formerly uncharted parts of the forest - for example, the creation of Photoshop at a time when no one dealt with digital imagery. Then you must have a couple of "path finders" - fast programmers who can develop a minimum prototype model to give the idea an identity. Finally, you should look for people who can work as "road builders" - engineering teams; these people give the product its final shape by detailing the requisite processes and other inputs. When creativity (C) occurs within the right organization culture (OC), it results in innovation (I).
The study of Wang and Ahmed (2002) [23] highlights the role of creativity and value innovation as a quality in itself. The proposed cross-disciplinary model, which brings knowledge and wisdom into the excellence and novelty procedure, is a holistic process. The 5-S model of creative quality and value innovation comprises the following components, which stress how to bring quality and innovation into the present system:
1. Satisfying - the essence of innovative quality, compared with traditional quality, is ultimately to satisfy customer needs.
2. Surprising - creative quality stresses the anticipation and internalisation of customer preferences and thereby creates new products.
3. Surpassing - creative quality and value innovation focus on customers.
4. Superposing - innovators surpass the traditional with the innovative by superposing organisational competency and building up new layers of organisational competencies.
5. Stimulating - value innovators capture the core of the marketplace. They expand the market by creating new demand and new customer preferences.


As a result, the proposed model enhances the competitive advantage of the firm itself.
In order to compete in the world, companies must study leading examples of innovation and quality (Liu, 2001) [11]. Japanese firms have already identified that a firm's success depends on process innovation and quality, with Total Quality Management as the driving force that will create changes in today's organizations. Since the basic guiding principle of Total Quality Management prescribes continuous improvement of processes, it is to be expected that new ideas for sustaining innovation and quality will come forward to guide organisations into the next century and beyond.

All associated entities achieve success using the Principles of Completeness, as these make employees successful, make suppliers successful, and make customers successful [4]. As companies adopt the Principle of Completeness as the basis for quality management and integrate it into every Total Quality Management process, the tasks covered become broader and consequently management faces tougher challenges. Organizations in which innovation and quality become routine achieve long-term success. They follow more horizontal, organic, and decentralised structures, and corporate management then truly becomes the small entrepreneurial units dreamt of by many leading management experts.
The paper by Kunst (2000) analyses the factors associated with the success or high performance of an organization, whether connected to quality or to something else. To sight innovation capabilities, three service sectors were chosen for the study - hospitals, transport and banking - in three European countries (Spain, the UK and the Netherlands) early in 1995, with concentration on the hospital sector. In general, the conclusion drawn was that TQM is the practice that leads to higher and more profitable results in any service sector, improving efficiency and making units cost effective. Not only was effectiveness improved, but the standard of perceived quality also rose significantly when service companies followed TQM along with innovative practices.

3. THEORETICAL BACKGROUND

Many types of innovation exist in any organizational setup, ranging from incremental to radical or breakthrough, architectural, and modular innovations.
Broadly, incremental and radical innovations differ in the degree of innovation and the newness deployed in the product or process.

Simple improvements in products, slight changes in technology, and line extensions can be part of incremental innovations, which enhance the existing performance of a product. By contrast, breakthrough innovation uses a new technology that offers considerably greater benefits to customers than existing products, creating substantial changes in consumption or usage patterns (Chandy and Tellis, 2000; O'Connor and De Martino; De Visser et al., 2010) [6]. For that matter, a new knowledge base and a great amount of innovation capability (Song and Di Benedetto, 2008) [21] are required to achieve a breakthrough innovation (Rogers, 2003) [18].

4. OBJECTIVES
The objectives of this paper are to explore the connection between quality and breakthrough innovation. Accordingly:
1. To explore the relationship between quality and innovation.
2. To develop strategies to enhance innovations, built on a solid foundation of quality.

5. RESEARCH METHODOLOGY
Research Design - The research design chosen is descriptive. Descriptive design is helpful in obtaining information about the variables to be researched and in getting a description of the topic concerned. Efforts were made to study innovation and quality using research papers. The variables used in this paper are technology, market, resources, environment, and organizational inertia, which bear on the innovative ideas that ultimately result in a breakthrough for an organization.
The prevailing model in most organizations rests on the dimensions listed below:
1. Guidance
2. Customs
3. Endowment
4. Ecological unit
5. Procedure
6. Group
7. Control
8. Organisation
9. Financial support
10. Metrics and aspirator

Data Type - Secondary data has been used to conduct the research. Secondary data in the form of a literature review was used to gain insight into quality management and innovation. This research paper summarizes various scholars' research.


Analytical tool - The analytical tool is the researchers' observations, used to find the dependence of quality on innovation and how breakthrough innovation brings wider impact.

6. DATA ANALYSIS
In the research paper of Colin & Shen (2013) [1], data were taken from the top 1,000 Taiwanese firms in terms of total revenue (China Credit Information Service, 2009). As in similar studies of innovation and dynamic capabilities (e.g. Morgan et al., 2009; Zhou et al., 2005; Narver, 2004) [12, 13], senior managers were selected. The researchers first called each firm to identify a senior manager as the key respondent, then screened the key respondent to ensure that he or she possessed sufficient knowledge about the firm's various functional areas and was committed to cooperating with the research project.
The firms' annual sales ranged from $US2.3m to $US8.3bn, and the number of employees ranged from 1,534 to 26,473, with 70.6 percent of units reporting more than 1,000 full-time employees. The previous literature highlights the role of dynamic capabilities in new product success, because a firm's dynamic capabilities for dealing with rapid changes in the environment are critical for product innovation (e.g. Morgan et al.; Verona and Ravasi, 2003; Danneels, 2004). Extending this logic, dynamic innovation capabilities have an inverted-U-shaped relationship with breakthrough innovation: in the early stages, dynamic innovation capabilities relate to the highest degree of breakthrough innovation, whereas in later stages they prevent breakthrough innovation (see Figure 1).


More precisely, the longer firms hold dynamic innovation capabilities, the more rooted they may become in existing environments, such that they might ignore emerging changes in the environment; moreover, the longer firms hold dynamic innovation capabilities, the less able they become to manage changes in the environment. Organizational inertia further obstructs breakthrough innovation. As a result, the longer firms hold dynamic innovation capabilities, the less they tend to develop breakthrough innovation.
Wittel Lars, 2010 [24] In this research project; we worked together with a
manufacturer of microwave ovens. In the microwave oven manufacturers
sell their different products under a variety of different brands, ranging from
low-priced ovens with few functions to high-end ovens with many
functions. Both the technology and the market are mature and there is an
interest for manufacturers to identify new ways of delivering customer
value in order to survive in the long term.
The early phases of the product development process often include a phase
of idea generation. In this case, idea generation consists of four main
activities:


(1) Generation;
(2) Screening;
(3) Identification; and
(4) Evaluation (see Figure 1).
In the survey, the 21 most promising customer ideas were tested. The
questionnaire consisted of three parts. The first section included questions
regarding customers' usage of microwave ovens; then came a section with
questions based on the theory of attractive quality; and finally a section
where the customer judged the value of the different ideas. In total, 87
adults participated in the study. In this study, attractive ideas that do not
exist in the market are more original and provide higher customer value than
the ideas perceived as indifferent. Research on the theory of attractive
quality has previously focused on evaluating existing products, whereas this
study focuses on evaluating ideas that do not yet exist in the market. This
change of focus is necessary to realise the full potential of the theory of
attractive quality. An interesting finding is that customers and product
developers judge value differently. One might argue that product developers
judge value based on the characteristics of the product or service, while
customers judge value based on value in use, i.e. how value is co-created
during use in the customer's context.
In Prajogo's 2008 [16] paper, the survey sample was derived from the database
of individuals subscribed to the membership of the Australian Organization
for Quality (AOQ), encompassing both manufacturing and non-manufacturing
sectors. A single business unit was selected as the unit of analysis (e.g. a
plant for a manufacturing firm), for the reason that operations and practices
are homogeneous at this level. The respondents selected for this survey were
managers with knowledge of past and present organisational practices relating
to continuous improvement and innovation at the site.
The first insight drawn from these results is the uniqueness of the role of
each function within a value chain in determining the performance of a firm.
The marketing function, through the customer focus construct, shows a
significant relationship with product quality performance, and this is
consistent with past studies (Dow et al., 1999; Grandzol and Gershon, 1997;
Samson and Terziovski, 1999).

Supplier management shows a strong association with both product quality and
product innovation. Firms that involve their suppliers early in the product
development process enhance their product innovation performance
considerably.


The empirical analysis evokes a number of important findings. First, the
results suggest that each value chain function has a different relationship
with different types of competitive performance, specifically quality and
innovation.
A second finding from this study suggests that R&D is only significantly
related to product innovation. The relationship between procurement and
innovation was also significant.
Additionally, quality and innovation showed a positive and significant
relationship with each other.
As per the research paper of Hoang Dinh (2006) [8], to examine TQM practices
a model was developed based on 10 parameters and verified by 14 specialists
and academics in Vietnam. It built on the results of previous studies and was
well justified against the criteria defined in the MBNQA (1999) and the
Vietnam Quality Award.
In the final survey, 11 sets of parameters were taken, with the following
characteristics:
the hard and soft aspects of TQM practices are well covered by these
practices;
the most prestigious quality awards comprise direction, tactical
arrangements, concentration on customers and the market, and well-defined
practices as the most important dimensions related to the TQM philosophies
followed by researchers and practitioners;
TQM implementation in both manufacturing and service organizations requires
significant training to understand the full deployment process.
All these procedures are related to Quality Award criteria and are therefore
suitable for testing in the Vietnamese industry context.
The findings across various industries show that TQM practices are directly
and positively associated with the level of novelty imparted and with the
number of new products and services that can be produced in a fixed time
horizon.
The survey results of the Vietnamese companies highlight (see Table 1) that
top management commitment was ranked 4.02 and came second after customer
focus, followed by employee involvement, teamwork, open organization,
strategic planning, and service culture, with mean values ranging from 3.5 to
3.9. The dimensions of analytical processing of the information gathered,
proper guidance, full authorization of employees, and process management were
graded below 3.5 and were considered to be the lowest. The results depicted
by Loan (2004) were quite comparable and justified with suitable explanation.
Table 1: TQM dimensions (mean and standard deviation)

TQM dimensions                    Mean   S.D.
Customer focus                    4.08   0.69
Top management commitment         4.02   0.69
Service culture                   3.88   0.68
Strategic planning                3.87   0.79
Open organization                 3.73   0.80
Teamwork                          3.66   0.84
Employee involvement              3.56   0.75
Process management                3.49   0.80
Employee empowerment              3.49   0.81
Education & training              3.49   0.85
Information and analysis system   3.39   0.90

The above data were checked against the hypothesis that the population from
which the samples were drawn has a mean of 4.5 with a standard deviation of
0.24; the null hypothesis was rejected at the 5% level of significance.
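The test described above can be reproduced as a one-sample z-test. This is a sketch under the stated assumptions (hypothesised mean 4.5, population standard deviation 0.24, and the 11 dimension means from Table 1); the paper does not state the exact SPSS procedure used.

```python
import math

def z_statistic(sample, mu0, sigma):
    """One-sample z statistic: (sample mean - mu0) / (sigma / sqrt(n))."""
    n = len(sample)
    mean = sum(sample) / n
    return (mean - mu0) / (sigma / math.sqrt(n))

# the 11 TQM dimension means from Table 1
means = [4.08, 4.02, 3.88, 3.87, 3.73, 3.66, 3.56, 3.49, 3.49, 3.49, 3.39]
z = z_statistic(means, mu0=4.5, sigma=0.24)
# |z| is far beyond the 5% two-sided critical value of 1.96,
# so the null hypothesis is rejected
```

Since the sample mean (about 3.70) lies well below 4.5, the statistic is large and negative, consistent with rejection at the 5% level.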
To check the reliability of the above results, SPSS was run on these figures
and the following results were obtained.
Table 2: Case Processing Summary

                  N    %
Cases  Valid      11   100
       Excluded   0    0
       Total      11   100

Table 3: Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
0.555              0.955                                          2

Table 4: Inter-item Correlation Matrix

        Mean    SD
Mean    1.000   0.913
SD      0.913   1.000
As is evident, the Cronbach's alpha is 0.555, which is acceptable to some
extent. The correlation between the items considered here is very high,
i.e. 0.913.
Companies that think differently with regard to competitiveness, strategy and
organization accomplish the position of leader (Pratibha Malaviya, 2005)
[12]. The strategy adopted by a company should take into consideration how an
industrial revolution can occur, which is nothing but an extension of
intellectual and emotional commitment (Hamel and Prahalad, 1994) [7]. To
break away from global competition, a company has to follow radical and
nonlinear innovation. A company's financial performance can be improved if it
persistently performs the activities related to the discovery, development
and commercialization of breakthrough innovations (Hamel, 2000; Pethick and
Ciacchella, 1998) [12]. An integrated, company-wide, holistic approach is
required for innovation, and the involvement of each and every individual is
obligatory in creative idea processing and its management (Tucker, 2002).
According to Wang Catherine, 2002 [23], the quality paradigm has developed
through an evolutionary trajectory, which takes into account the shifts and
instability in the competitive setting. By simply relying on traditional
quality competencies, building up a sustainable competitive advantage is
impossible to achieve. The 5-S model consisting of innovative quality and
value provides a framework for those companies aspiring to move to a higher
quality platform.
As per the study of a health care unit in the US (Liu, 2001) [11], seven
principles have been identified:


(1) effective relationships; (2) empowerment and decentralisation; (3)
accountability and teamwork; (4) measurable, observable results; (5) process
management; (6) customer satisfaction; and (7) collaboration. Any
organisation following TQM practice must adhere to the above seven
principles, which are complementary and significant in every day-to-day
process. Organizations where innovation and quality are considered routine
have to function in a horizontal, organic and decentralised manner,
comparable to the small entrepreneurial units envisioned by many leading
management experts.
The study (O'Connor, 2001) [6] was conducted on 10 large US firms, whereby
the results of 12 radical innovation projects were discussed. The opportunity
recognition process, both within and external to a firm, originates in
individual initiative as well as informal networks, and appears to be very
much required. As breakthrough innovations are generally associated with a
high degree of technical and market uncertainty, the understanding changes
over a period of time; therefore this process is redefined repeatedly, and,
as in the example of General Motors, multiple levels of opportunity
recognition are involved.


7. RECOMMENDATIONS
Managers need to encourage creativity and make innovation part of the
organizational culture: treat employees fairly, insist on integrity, value
diversity, communicate openly and honestly, provide honest feedback on
performance, encourage risk taking and innovation, work as a team, motivate
employees to do their best, provide opportunities for career advancement, and
compensate work equitably.
1. Firms should promote TQM so that innovation can be introduced. If
innovation comes into the firm, the quality of products and processes will be
enhanced.
2. Firms with strong dynamic innovation capabilities could use open
innovation activities to coordinate their resources with outsourcing
agencies. As a result, actions related to novelty and developing something
original provoke greater breakthrough innovation.
To develop new products or services, what is required is to align the
innovation objectives with the business objectives.
Accordingly, what comes out of this study is:
Distinguish the significance of innovation.
Innovate with rationale (with business outcomes in mind).
Have a logical approach.
Consider innovation as another management procedure.
Experiment with novel innovation working models.
Employ social media, which will assist in constructing new products or
services.

8. CONCLUSION
This research paper explores how quality is linked to innovation and studies
their relationship. Breakthrough innovation, which is a must for any
organization, is also studied to depict its importance. Various research
papers have been used as secondary data to find the relationship between
quality and innovation. Through this research, it can be said that innovation
brings quality into an organization, and that to introduce innovation,
quality matters. In short, the two terms are interdependent. Many
organizations have not yet put breakthrough innovation into practice, which
is the need of the hour. An organization that innovates on a regular basis
can survive in a competitive world by bringing quality into its processes and
products.
In this paper we have scrutinized if and how the process of innovation is
being carried out in different industries, and how quality can lead toward
innovation in the company itself. The insights of various researchers have
been cited here for the purpose of coming to a conclusion. Some of the
studies shown clearly indicate that quality plays a major role in cultivating
innovation in organizations. However, the type of internal processes in the
value chain also has a very crucial role in shaping process innovation, while
product innovation largely accrues through the creativity of employees at
large. Sometimes a framework like 5-S also governs the innovation situation.
Through studying different facts and figures, it is clear that growth in
revenue can be achieved mainly through innovation, digitization, visibility
in the customers' eyes and, lastly, globalization. Innovation has become a
competitive necessity for any organization to survive and compete with the
rest of the world. Innovations help companies transform and also provide a
tool of competitive edge. Once a company achieves the status of a successful
company, the highest degree of fear sets in about leaving the tried and
tested philosophy for the sake of innovation.
The right leadership, environment and culture are the most important
ingredients for success in innovative new regimes.
The end of all knowledge should be in virtuous action.
Philip Sydney
REFERENCES
[1] Cheng C.J. Colin and Chen Ja. Shen, (2013), Breakthrough innovation: the roles
of dynamic innovation capabilities and open innovation activities, Journal of Business
& Industrial Marketing, Vol.28, No 3, pp 444 -454.
[2] Christensen, C.M. (1997), The Innovator's Dilemma, Harvard Business
School Press, Boston, MA.
[3] Crosby, Philip B. (1990), "21st Century Leadership", Journal for Quality
and Participation, Vol. 15, No. 4, pp. 24-27.
[4] Davenport, T., Leibold, M. and Voelpel, S. (2006), Strategic Management in the
Innovation Economy. Strategy Approaches and Tools for Dynamic Innovation
Capabilities, Wiley, New York, NY.
[5] Gassmann, O., Enkel, E. and Chesbrough, H. (2010), The future of open
innovation, R&D Management, Vol. 40 No. 3, pp. 213-221.
[6] Gina Colarelli O'Connor; Rice, Mark P (2001), Opportunity recognition and
breakthrough innovation in large established firms, California Management Review
43.2 (Winter 2001), pp 95-116.
[7] Hamel, Gary and Prahalad, C.K. (1994), Strategic Intent, Harvard Business Review.
[8] Hoang, Thai Dinh and Igel, Barbara (2005), The impact of total quality
management on innovation, International Journal of Quality & Reliability
Management, Vol. 23, No. 9, pp. 1072-1117.


[9] Levesque, Justin; Walker, H Fred (2007), The Innovation Process and Quality
Tools, Quality Progress, Vol .40, No 7, pp 18-22.
[10] Lichtenthaler, U. (2011), "Open innovation: past research, current debates, and
future directions", The Academy of Management Perspectives, Vol.25 No.1, pp.75-93.
[11] Liu, Vincent C, Kleiner, Brian H (2001), Global trends in managing innovation
and quality, Management Research News, Vol. 24, No. , pp 13-16.
[12] Malaviya, Pratibha; Wadhwa, Subhash (2005), Innovation Management in
Organizational Context: An Empirical Study, Global Journal of Flexible Systems
Management, Vol. 6, No 2, pp 1-14.
[13] Morgan, N., Vorhies, D., Mason, C. (2009), "Market orientation, marketing
capabilities, and firm performance", Strategic Management Journal, Vol. 30 No.8,
pp.909-920.
[14] Narver, J., Slater, S., MacLachlan, D. (2004), "Low and high market orientation
and new product success", Journal of Product Innovation Management, Vol. 21 No.5,
pp.334-347.
[15] Motwani, J. (2001), Critical factors and performance measures of TQM, The
TQM Magazine, Vol. 13 No. 4, pp. 292-300.
[16] Prajogo, Daniel I; McDermott, Peggy; Goh, Mark (2008), Impact of value chain
activities on quality and innovation, International Journal of Operations & Production
Management 28.7, 615-635.
[17] Prajogo, D.I. and Sohal, A.S. (2003b), The relationship between TQM practices,
quality performance, and innovation performance: an empirical examination,
International Journal of Quality & Reliability Management, Vol. 20 No. 8, pp. 901-18.
[18]Rogers, E. (2003), Diffusion of Innovations, 5th ed., The Free Press, New York.
[19] Sila, I. and Ebrahimpour, M. (2002), An investigation of the total quality
management survey based research published between 1989 and 2000 a literature
review, International Journal of Quality & Reliability Management, Vol. 19 No. 7,
pp. 902-70.
[20] Singh, P.J. and Smith, A.F.R. (2004), Relationship between TQM and
innovation: an empirical study, Journal of Manufacturing Technology Management,
Vol. 15 No. 5, pp. 394-401.
[21] Song, L.Z., Song, Michael & Benedetto, A.C. Di (2009), A staged service
innovation model, Decision Sciences, Vol. 40, No 3, pp 571-599.
[22] Veryzer, Robert W. and O'Connor, Gina Colarelli (1998), Using
mini-concepts to identify opportunities for really new product functions,
Journal of Consumer Marketing, Vol. 15, pp. 525-543.
[23] Wang L. Catherine, Ahmed K Pervaiz (2002), Learning through quality and
innovation, Managerial Auditing Journal 17/7, pp 417-423.
[24] Witell Lars (2011), Identifying ideas of attractive quality in the innovation
process, The TQM Journal, Vol. 23 No. 1, pp. 87-99.

This paper may be cited as:
Gupta, H., 2014. Quality Platforms for Innovation and Breakthrough.
International Journal of Computer Science and Business Informatics, Vol. 11,
No. 1, pp. 90-106.


Development of Virtual Experiment on Waveform Conversion
Using Virtual Intelligent SoftLab
Bhaskar Y. Kathane
V.M.V. College, Wardhaman Nagar,
Nagpur (MS), India

ABSTRACT
Waveform conversion is a difficult task for students to grasp during their
studies. Virtual Intelligent SoftLab (VIS) gives an easy implementation of
waveform conversion using virtual instruments. The study of waveform
conversion is important in Electronics, Computer Science and Engineering. The
Virtual Intelligent SoftLab converts a rectangular waveform to sawtooth,
digital and pulse waveforms using virtual instruments. This model will help
students perform the experiment at any time and anywhere, without a
traditional laboratory. The screen shows the virtual waveform using virtual
input instruments, and the converted waveform is observed using virtual
output instruments. In this model we learn circuit connection without
physical damage. There is a facility for the user to change the voltage and
observe the outputs on the screen.

Keywords
SoftLab, VIS Model, Waveform, Sine wave, Sawtooth, Rectangular wave.

1. INTRODUCTION
The basic concept of the VIS (Virtual Intelligent SoftLab) model of an
experiment is to provide a virtual platform for learners to perform the
experiment with their own selections. The effort is towards reproducing the
working procedure of a real laboratory and its environment in the virtual
workbench. Virtual experiments are designed and sequenced in such a manner as
to give a real feel of performing the experiment. During the experiment, the
learner can save and edit the desired data for his/her analysis. Apart from
this, the focus also aims to embed the maximum number of learning components
in virtual experiments. Virtualization of experiments can be broadly
classified based on the form of data used for performing the experiment. The
SoftLab philosophy facilitates linking the physical laboratory experiment
with its theoretical simulation model within a unified and interactive
environment. The goal for each instance of a SoftLab laboratory is to create
a software environment where experimental research, simulation and education
coexist and interact with each other. As a part of the SoftLab project, we
have designed various experiments for Electronics, Computer Science and
Engineering students. This model describes how the experiments are performed
by the user using virtual instruments. The VIS forces us to address the
challenge of solving experiments. Virtual Intelligent SoftLab does not
require a wide range of expertise to perform the experiment. The SoftLab
framework should provide the infrastructure and facilities that serve the
needs of basic research. SoftLab is such a flexible laboratory environment.
Its goal is to simulate a laboratory space having a well-equipped storeroom
of instruments and a variety of materials. Using SoftLab, a student may be
guided by an instructor to perform an experiment, or the student might
conceive of one on his own. The student may choose a substance to study, take
out the instruments he needs, connect them together, make his measurements,
and record and plot his results. The computer screen is the laboratory room.
The experimental possibilities open to the student are certainly limited by
the ability of the developers to maximize flexibility in a practicable way
[1].

2. WAVEFORM CONVERSION
With operational amplifiers we can convert sine waves to rectangular waves,
rectangular waves to triangular waves, and so on. This experiment is about
some basic circuits that convert an input waveform to an output waveform of a
different shape [2].
2.1 Sine wave to Rectangular wave
When the input signal is periodic, the Schmitt trigger produces a rectangular
output. When the input voltage exceeds the Upper Trip Point (UTP) on the
upward swing of the positive half cycle, the output voltage switches
negative. One half cycle later, the input voltage becomes more negative than
the Lower Trip Point (LTP), and the output switches back to positive. A
Schmitt trigger always produces a rectangular output, regardless of the shape
of the input signal.

Fig: 1
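The hysteresis behaviour described above can be sketched in software. This is a hedged illustration: the trip points, output levels and sample counts below are illustrative values, not taken from the paper's circuit.

```python
import math

def schmitt_trigger(samples, utp=0.5, ltp=-0.5, high=1.0, low=-1.0):
    """Inverting Schmitt trigger: output goes low when the input rises
    above UTP and returns high when it falls below LTP."""
    out, state = [], high
    for v in samples:
        if v > utp:
            state = low       # upward swing crossed the Upper Trip Point
        elif v < ltp:
            state = high      # downward swing crossed the Lower Trip Point
        out.append(state)     # between the trip points the state is held
    return out

# two full sine cycles sampled at 100 points per period
sine = [math.sin(2 * math.pi * t / 100) for t in range(200)]
rect = schmitt_trigger(sine)  # rectangular output with only two levels
```

Because the two trip points differ, noise between LTP and UTP cannot cause spurious switching, which is the practical advantage of a Schmitt trigger over a plain comparator.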
2.2 Sine wave to Sawtooth wave
The capacitor charges toward the supply voltage; when the capacitor voltage
reaches +10 V, the diode breaks over. This discharges the capacitor,
producing the flyback (sudden voltage drop) of the output waveform. When the
voltage is ideally zero, the diode opens and the capacitor begins to charge
again. In this way, we get the ideal sawtooth waveform.
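The charge-and-breakover cycle can be sketched numerically. This is a sketch under assumptions: a 15 V supply and the +10 V breakover level mentioned above; the RC constant and step size are illustrative, not the paper's component values.

```python
def sawtooth_rc(v_supply=15.0, v_break=10.0, rc=100.0, dt=1.0, n=400):
    """RC relaxation sketch: the capacitor charges exponentially toward
    v_supply and is reset (diode breakover) whenever it reaches v_break,
    producing the flyback of a sawtooth."""
    out, vc = [], 0.0
    for _ in range(n):
        vc += (v_supply - vc) * dt / rc   # one discrete RC charging step
        if vc >= v_break:
            vc = 0.0                      # sudden discharge (flyback)
        out.append(vc)
    return out

wave = sawtooth_rc()  # several ramp-and-flyback cycles
```

Because the capacitor resets well below the supply voltage, only the early, nearly linear part of the RC curve is used, which is what makes the ramp look like an ideal sawtooth.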

Fig: 2
2.3 Rectangular wave to Triangular wave
A rectangular wave is the input to an integrator. Since the input voltage has
a dc or average value of zero, the dc or average value of the output is also
zero. The output is decreasing during the positive half cycle of the input
voltage and increasing during the negative half cycle. Therefore, the output
is a triangular wave with the same frequency as the input.

Fig: 3
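The integrator action amounts to a running sum of the input. A simplified discrete sketch (ignoring the sign inversion an op-amp integrator introduces; sample counts and amplitudes are illustrative):

```python
def integrate(samples, dt=1.0):
    """Running integral (cumulative sum). A rectangular input with zero
    average yields a triangular output at the same frequency."""
    out, acc = [], 0.0
    for v in samples:
        acc += v * dt
        out.append(acc)
    return out

# zero-mean rectangular wave: 50 samples high, 50 samples low, two periods
rect = ([1.0] * 50 + [-1.0] * 50) * 2
tri = integrate(rect)  # ramps up for each positive half, down for each negative half
```

Because the input averages to zero over a period, the ramps cancel exactly and the triangle stays centred rather than drifting.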

2.4 Triangular wave to Pulse wave
With this circuit, we can move the trip point from zero to a positive level.
When the triangular input voltage exceeds the trip point, the output is high.
Since Vref is adjustable, we can vary the width of the output pulse, which is
equivalent to changing the duty cycle.

Fig: 4
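The adjustable trip point can be sketched as a plain comparator on a triangular input; raising Vref narrows the output pulse and so lowers the duty cycle. The values below are illustrative, not the paper's.

```python
def comparator(samples, vref=0.0, high=1.0, low=0.0):
    """Output is high while the input exceeds Vref; on a triangular input
    the fraction of high samples is the duty cycle."""
    return [high if v > vref else low for v in samples]

# symmetric triangle rising from 0 to 1 and back over a 100-sample period
tri = [t / 50 if t < 50 else (100 - t) / 50 for t in range(100)]
wide = comparator(tri, vref=0.25)    # low trip point -> wide pulse
narrow = comparator(tri, vref=0.75)  # high trip point -> narrow pulse
```

Counting the high samples in each output confirms that the pulse width, and hence the duty cycle, shrinks as Vref rises.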

2.5 Half Wave Rectifier
The half wave rectifier is a simple and low-cost rectifier circuit. It is
used where high-quality DC is not required, for example to operate a night
lamp, a radio circuit, etc. A diode is connected in series with the load RL
and the output is taken across RL. In the first (positive) half cycle of the
AC voltage, the diode becomes forward biased and acts as a closed switch;
current flows through the circuit and through RL. Thus an output voltage is
developed across RL similar to the positive half cycle of the AC input. In
the next (negative) half cycle, the diode becomes reverse biased and acts as
an open switch; current through the circuit is blocked by the diode, i.e.
Vout = I x RL = 0 x RL = 0 V. Thus the diode conducts only during the
positive half cycle and blocks the negative half cycle.

Fig: 5


2.6 Full Wave Rectifier
To utilize the negative half cycle, one more diode is connected with a
special type of transformer called a center-tap transformer, in which the
middle terminal is tapped. In a center-tap rectifier the diodes conduct in
alternate cycles, so that the current through RL flows in the same direction
for both half cycles.

Fig: 6
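The ideal behaviour of the two rectifiers above can be sketched as follows, assuming ideal diodes with no forward voltage drop (an idealization the circuit descriptions do not make explicit):

```python
def half_wave(samples):
    """Series diode: passes the positive half cycle, blocks the negative
    one (output clamped to 0 V)."""
    return [v if v > 0 else 0.0 for v in samples]

def full_wave(samples):
    """Center-tap rectifier: the diodes conduct on alternate half cycles,
    so the load sees |Vin| in both halves."""
    return [abs(v) for v in samples]

half = half_wave([5.0, -5.0])   # negative half cycle blocked
full = full_wave([5.0, -5.0])   # both half cycles reach the load
```

The full-wave output delivers power in both half cycles, which is why it produces higher average DC for the same input than the half-wave circuit.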

3. TOOLS & TECHNOLOGY
The Virtual Intelligent SoftLab model is designed with Visual Basic as the
front-end and Microsoft Access as the back-end. This language provides an
integrated development environment to the user. Visual Basic helps us
construct programs that perform all virtual operations without physical
instruments. The language is relatively easy to learn, and its graphical
features are easy to use. Visual Basic connects easily with the database. A
programmer can put together the components provided with Visual Basic itself
to develop an application. The language not only allows programmers to create
simple GUI applications, but also to develop complex applications.
Programming in Visual Basic is a combination of visually arranging components
or controls on a form and specifying the attributes and actions of those
components. Visual Basic can create executables (EXE files), ActiveX controls
or DLL files, but is primarily used to develop Windows applications. The
beauty of this VIS model is that it does not require the database to manage
data [3].

4. VIS MODEL
We have constructed the programs in VB such that all the blocks in the model
can be fully visualized on the screen. This model can demonstrate the
activities of waveform conversion visually. Inputs are accepted through
software and the virtual output is observed on screen. In an experiment we
can provide different input values and observe the output. This model
provides a circuit connection facility; the user must make the connections
properly, otherwise the result is not generated.

4.1 DESIGN SPECIFICATIONS
A program is constructed for the conduct of the Waveform Conversion
experiment in VIS such that all the blocks in the model can be fully
visualized on the screen. This model can also demonstrate the activities of
waveform conversion, including circuit connection, visually. Inputs are
accepted through a virtual waveform generator, and the resultant virtual
output waveform is observable on screen. In an experiment, one can provide
different amplitude and frequency values for the waveform signal and observe
the results. This model provides a circuit connection facility so that the
user can practice circuit connection as well.

Procedure:
Connect the circuit shown in Fig. 7.
Set the sine wave generator frequency and amplitude.
Change the amplitude and frequency and observe the output waveform.

Fig. 7: VIS Experiment on Waveform Conversion (panels: Sine to Rectangular,
Sine to Sawtooth, Rectangular to Triangular wave, Triangular to Pulse wave,
Half Rectifier, Full Rectifier)


4.2 IMPLEMENTATIONS
Once the VIS is ready, we implement the circuits using the following steps.
The circuit connection steps are:
Connect the AC socket to the DC converter device
Connect the DC power supply to the IC VCC pin
Connect the ground socket to the IC ground pin
Connect the output IC pin to the output switches
Connect the input IC pin to the input switches

The experiment implementation steps are:
Make a connection by selecting two switches using the mouse
Click on the Check button to verify the connection
Click on the Reset button if the connections are wrong
Click on the Help button if you need connection help
Click on the Menu button if you want to perform other experiments

5. RESULTS
The virtual outputs are fully animated in software, and the actual outputs
are observed virtually using virtual instruments.

6. CONCLUSIONS
Virtual Intelligent SoftLab helps Electronics, Computer Science and
Engineering students perform and practice experiments to improve their
understanding of the subject. The design of the VIS model is effective and
realistic, as the necessary variable inputs and outputs are visible on the
monitor screen. This model, created as a client-based system, can be
converted into a client-server based application. The virtual experiment
provides practice for students for the touch-and-feel part they have already
performed in the laboratory.

7. ACKNOWLEDGEMENTS
We are very much thankful to Dr. M. G. Chandekar, Dr. P. K. Butey and
Dr. D. A. Deshpande for their valuable inputs, constant guidance and their
extensive support and encouragement in this work.

8. REFERENCES
[1] Tiwari, R. & Singh, K. (2011), Virtualization of engineering discipline experiments for an
Internet-based remote laboratory. Australasian Journal of Educational Technology, 27(4),
671-692. http://www.ascilite.org.au/ajet/ajet27/tiwari.html.
[2] Malvino, A., Electronic Principles, Tata McGraw-Hill, Sixth Edition, 1999.
[3] B.Y. Kathane, P.B. Dahikar (Sept 2011), Virtual Intelligent SoftLab for
p-n junction Experiment, Journal of the Instrument Society of India, ISSN
0970-9983, Vol. 41, No. 3, pp. 161-162.

[4] Digit FastTrack to Virtualization Volume 07, issue 04, April 2012.
[5] Physical Science Resource Center (PSRC) http://www.psrc-online.org/, Dec 2012.
[6] Remote Dynamical Systems Laboratory, Stevens Institute of Technology,
http://www.stevens.edu/remotelabs/, Dec 2012.
[7] Mercer University Online Interactive Chaotic Pendulum,
http://physics.mercer.edu/pendulum, Retrieved Dec 2012.
[8] http://www.cage.curtin.edu.au/mechanical/info/vibrations, Retrieved Dec 2012.
[9] http://www.lci.kent.edu/ALCOM/alcom.html, Retrieved Dec 2012.

This paper may be cited as:
Kathane, B. Y., 2014. Development of Virtual Experiment on Waveform
Conversion Using Virtual Intelligent SoftLab. International Journal of
Computer Science and Business Informatics, Vol. 11, No. 1, pp. 107-114.
