
International Journal of Emerging Trends & Technology in Computer Science (IJETTCS)

Web Site: www.ijettcs.org Email: editor@ijettcs.org, editorijettcs@gmail.com


Volume 2, Issue 4, July–August 2013
ISSN 2278-6856

An analytical survey of various congestion control and avoidance algorithms in WSN
Nilima Rani Das1, Rashmi Rekha Sahoo2 and Debashree Mishra Sar3
1,2,3

Siksha O Anusandhan University, School of Computer Science,


Department of CA, ITER, Bhubaneswar, Odisha

Abstract: One of the most important challenges in wireless sensor networks (WSN) is how to address the congestion problem in such environments. With the expansion of application scope and scale, WSNs face serious congestion because of their limited storage capacity and large number of nodes. In sensor networks, congestion causes overall channel quality to degrade and loss rates to rise, leads to buffer drops and increased delays (as in wired networks), and tends to be grossly unfair towards nodes whose data has to traverse a larger number of radio hops. Therefore, congestion in WSNs has to be controlled for high energy efficiency, to lengthen system lifetime, and to improve fairness and quality of service (QoS) in terms of throughput, packet loss ratio and packet delay.

Keywords: wireless sensor network, congestion avoidance, congestion degree, Local Minimum Spanning Tree algorithm.

1. INTRODUCTION
Network congestion occurs when offered traffic load
exceeds available capacity at any point in a network. In
wireless sensor networks, congestion causes overall
channel quality to degrade and loss rates to rise. It also
leads to buffer drops and increased delays. Congestion
can occur due to simultaneous packet transmission by
multiple nodes as a result of event detection. If the data
sending rate of the sender is much higher than the data
handling or processing capacity of the receiver, or if the
buffer space at the receiving node is not enough, the
buffer may overflow and cause data packet loss. Thus
limited bandwidth, high data sending rates, and the
convergent, event-driven nature of the network are
important factors that cause congestion. To
address these challenges, a number of congestion control
schemes have been proposed.
Congestion control algorithms in WSNs are divided into
two categories: Traffic Control and Resource Control.
Many congestion control algorithms in WSNs use traffic
control techniques; they control or avoid congestion by
reducing the rate at which sources inject packets into the
network until congestion is removed. However, these are
not suitable in situations where data generation is very
high. In WSNs, algorithms that use a single path to
transmit data can soon exhaust the power of the nodes
and create holes in the network. To address these
challenges, a
number of congestion control algorithms have been

proposed that use the approach of resource control. In


this case, the sources don't have to reduce their data rate.
Instead the excess packets are routed to the sink through
alternative or multiple paths. Actually, these algorithms
turn on the sensor nodes that are currently in dormant
state (sleeping) to increase the capacity of the network to
accommodate the higher incoming traffic. In this paper
the performance of some congestion control algorithms
is studied in terms of network lifetime and energy
utilization.
2. ALGORITHMS ON CONGESTION CONTROL
AND AVOIDANCE
In this section a literature survey of some existing
protocols for congestion control and avoidance in WSNs
has been presented. The general working principle and
design approach of the protocols have been studied here.
Congestion Detection and Avoidance (CODA). It deals
with various degrees of congestion depending on the
sensing application [1]. It suggests an open-loop hop-by-hop mechanism and a closed-loop multi-source regulation
mechanism. CODA uses a combination of the present and
past channel loading conditions, and the current buffer
occupancy, to infer accurate detection of congestion at
each receiver with low cost. Sensor networks must know
the state of the channel, since the transmission medium is
shared and may be congested with traffic between other
devices in the neighborhood. Listening to the channel to
measure local loading incurs high energy costs if
performed all the time. Therefore, CODA uses a sampling
scheme that activates local channel monitoring at the
appropriate time to minimize cost while forming an
accurate estimate. Once congestion is detected, nodes
signal their upstream neighbors (towards the source) via a
back-pressure mechanism (open-loop hop-by-hop back-pressure). The back-pressure message is sent to all
neighbors in order to reduce their rate or drop packets. A
node broadcasts back-pressure messages as long as it
detects congestion. The nodes that receive back-pressure
signals can throttle their sending rates or drop packets
based on the local congestion policy (e.g., packet drop,
AIMD, etc.). When an upstream node receives a back-pressure message it decides whether or not to further
propagate the back-pressure upstream, based on its own
local network conditions. The closed loop regulation
operates over a slower time scale and is capable of
asserting congestion control over multiple sources from a
single sink in the event of persistent congestion. When
the source event rate is less than some fraction of the
maximum theoretical throughput of the channel, the
source regulates itself. When this threshold is exceeded, a
source is more likely to contribute to congestion, so
closed-loop congestion control is triggered and the source
enters sink regulation. At this point
a source requires constant, slow time-scale feedback (e.g.,
ACK) from the sink to maintain its rate. The reception of
ACKs at sources serves as a self-clocking mechanism
allowing sources to maintain their current event rates. In
contrast, failure to receive ACKs forces a source to reduce
its own rate.
There are many disadvantages of this algorithm: 1) when
congestion happens, the back-pressure message may
increase congestion due to high channel loading, 2) the
explicit ACK wastes much energy, and the loss of an
ACK due to poor link quality may give a false congestion
signal to the source and affect the network throughput,
3) it does not take into consideration the different classes
of traffic flows, and 4) it also does not consider event
fairness and packet reliability.
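CODA's detection and throttling behavior described above can be sketched as follows; the thresholds, the multiplicative decrease factor, and the function names are illustrative assumptions, not values from [1].

```python
# Sketch of CODA-style congestion handling; thresholds and the
# decrease factor are illustrative assumptions, not values from [1].

def congested(channel_load, buffer_occupancy,
              load_threshold=0.8, buffer_threshold=0.9):
    """Infer congestion from sampled channel load and current buffer
    occupancy, both normalized to [0, 1]."""
    return channel_load > load_threshold or buffer_occupancy > buffer_threshold

def on_backpressure(rate, min_rate=1.0, decrease_factor=0.5):
    """Throttle the sending rate when a back-pressure message is received
    (one possible local congestion policy, akin to the MD step of AIMD)."""
    return max(min_rate, rate * decrease_factor)
```

A node would call `congested()` at each sampling instant and broadcast a back-pressure message while it returns true; receivers apply `on_backpressure()` or drop packets according to their local policy.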

Rate-based congestion control. It proposes a simple and
distributed algorithm [2] that ensures the fair delivery of
packets to a central node, or base station, and can be used
in many-to-one multi-hop routing. Because it is simple, it
can be implemented on sensor nodes that have limited
resources such as memory and computational power, and
because it is distributed, no global coordinating entity is
required. Global coordination of nodes would require
additional control packets that consume more energy,
and is less robust because the coordinating entity is a
single point of failure in the network. According to this
algorithm there are two types of congestion.
Type A congestion: in a particular area, many nodes
within range of one another attempt to transmit
simultaneously, resulting in packet losses (interference
occurs at a listening node that is within range of the
transmitting nodes) and thereby reducing the throughput
of all nodes in the area.
Type B congestion: within a particular node, the queue
or buffer used to hold packets awaiting transmission
overflows, causing packet loss.
The basic concept behind controlling congestion in this
scheme consists of the following steps, run repeatedly at
each node:
1. The average rate r at which packets can be sent from a
node that experiences congestion is measured.
2. The rate r is divided among the number of children
nodes downstream, n, to give the per-node data packet
generation rate rdata = r/n. The rate has to be adjusted if
queues are overflowing or about to overflow.
3. The rate rdata is compared with the rate rdata,parent
sent from the parent, and the smaller rate is used and
propagated downstream.
Because the mean packet generation rate of all
downstream nodes is obtained by measuring the average
rate and dividing it by the number of downstream nodes,
the rate assignment is fair. By reducing the transmission
rates of all downstream nodes when a node's queue is full
or about to become full, the queue is allowed to empty,
minimizing type B congestion. In eliminating type A
congestion the system forms a closed loop and is in
constant oscillation: with reference to Figure 1, suppose
congestion initially occurs at the grey-colored node,
which then informs its downstream nodes to reduce their
transmission rate. This gets propagated through the
network, and ultimately the queues of this node and its
neighbors empty. Fewer nodes then transmit, and the
time taken to successfully transmit a packet falls. The
node then propagates a new, increased rate, which brings
about congestion again. The cycle repeats, and the
instantaneous rate fluctuates around the actual rate.
Every node is able to control the rate of its downstream
nodes. This allows the root node to reduce the generation
rates of all downstream motes (the back-pressure effect),
which is important because it is wrong to assume that
congestion occurs only at the base station.
This algorithm is scalable, requiring only per-neighbor
state, and by piggy-backing control information on data
packets it needs no additional control packets. Finally,
since congestion control and fairness are handled in the
transport layer, the solution is independent of the routing
in the network layer as well as the MAC protocol in the
data-link layer. However, the problem of unfairness
exists in small networks.

Figure 1 (a) The grey-colored node experiences congestion. (b)
It informs downstream nodes to reduce transmission rates. (c)
When congestion is reduced, the time taken to transmit a packet
reduces. (d) The node informs the other nodes to increase
transmission rates. (a), (b), (c) and (d) form a continuous cycle.
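The per-node rate computation of this scheme can be sketched directly from the rule rdata = min(r/n, rdata,parent); the function and parameter names are illustrative.

```python
def assign_rate(avg_rate, n_downstream, parent_rate=None):
    """Per-node generation rate rdata = r/n, taking the smaller of the
    locally computed rate and the rate advertised by the parent."""
    rdata = avg_rate / n_downstream
    if parent_rate is not None:
        rdata = min(rdata, parent_rate)
    return rdata
```

A node measuring r = 12 packets/s with 4 downstream children would assign 3 packets/s per child, unless its parent advertises a smaller rate, in which case the parent's rate is propagated instead.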

Adaptive resource control scheme to alleviate


congestion. In this scheme [3] the authors proposed an
adaptive resource control scheme to alleviate congestion
that has two phases: increasing resource provisioning as
soon as congestion occurs, and reducing the resource
budget as soon as congestion subsides. The scheme has
managed to balance the need of reliably shipping data
packets to the sink during a crisis state and the need of
conserving energy consumption. The scheme adjusts
resource provisioning based on the congestion level:
capacity is increased, by having more sensor nodes
forward data or by adding routing paths, to alleviate
congestion, and reduced after the congestion subsides to
conserve energy. Upon detecting congestion, two options
are available: throttling the source traffic volume
(referred to as traffic control) and increasing the network
resource (referred to as resource control). According to
the authors, a natural way of using resources is to
conserve them as much as possible during a dormant
state while using them wisely during crises. This
paper investigates a framework to alleviate congestion by
turning on more resources as soon as the congestion is
detected. The authors investigate the approach of using
adaptive resource control strategies to alleviate
congestion. More routing paths are set up in order to
share the load after the flow level congestion is detected.
Sometimes, they need to wake up the sleeping nodes
(these nodes were in sleep state to conserve energy) to
form new routing paths. The extra resources need to be
turned off as soon as the source traffic decreases.
This algorithm involves three steps: 1) Congestion
Detection: The upstream nodes of a flow periodically
notify their congestion level to downstream nodes by
embedding their perceived congestion level into the
header of data packets. After detecting that the flow-level
congestion exceeds a predefined congestion threshold, the
first node whose congestion level is below the hotspot
proximity threshold becomes the initiator and starts
creating multiplexing paths. 2) Alternative Path Creation
(building multiplexing paths): As soon as flow-level
congestion is detected, these nodes are incorporated
into routing by forming one or more additional paths
called multiplexing paths. 3) Traffic Multiplexing: The
node where the multiplexing paths meet the original
path is referred to as the traffic dispatcher. The dispatcher
will evenly distribute the traffic between multiple paths in
a round robin fashion so that the congestion on the
original path can be alleviated.
This scheme guarantees data delivery even under severe
sensor network congestion by increasing the resource
provisioning that participates in the data delivery. It also
reduces the resource provisioning after removing the
congestion.
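The traffic dispatcher's round-robin multiplexing in step 3 can be sketched as below; the function and path names are illustrative, not from [3].

```python
from itertools import cycle

def dispatch_round_robin(packets, paths):
    """Evenly distribute packets across the original and multiplexing
    paths in round-robin order, as the traffic dispatcher does."""
    assignment = {path: [] for path in paths}
    for packet, path in zip(packets, cycle(paths)):
        assignment[path].append(packet)
    return assignment
```

With one multiplexing path, alternate packets leave on the original path and the detour, halving the load on the congested segment.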
Datagram Congestion Control Protocol (DCCP). In
DCCP [4] to avoid congestion, the upstream sensors must
redirect packets to other paths. If all forwarding paths
from an active sensor to the base stations are congested,
the sensor must generate data at a reduced rate. For each


active sensor, the authors try to find the highest possible
rate that does not cause congestion at a downstream node.
They also want all active sensors to have equal access to
the transmission capacity of the network, no matter how
different their forwarding paths are. The purpose of
DCCP is to provide a standard way to introduce
congestion control and congestion control negotiations
into multimedia applications. This new transport protocol
is designed for deployment as a standard feature in end
hosts (PCs, VoIP codecs, and other internet-enabled
multimedia appliances). DCCP is a new protocol
designed for applications that require the flow-based
semantics of TCP, but prefer timely delivery to in-order
delivery, or a congestion control mechanism different
from what TCP provides. DCCP provides two main
functions: The establishment, maintenance and teardown
of an unreliable packet flow, and Congestion control of
that packet flow. The architecture of DCCP is shown in
the Figure 2.
DCCP aims to be a minimal-overhead, general-purpose transport-layer protocol.
Figure 2 The architecture of DCCP: media/session control (SIP, H.323, RTSP) and media codecs (RTP+RTCP) sit above the transport layer interface; DCCP, with congestion control profiles CCID2, CCID3 and CCID4, runs over IPv4 or IPv6.


Hierarchical medium access control (HMAC). It
proposes an energy efficient congestion avoidance
protocol that includes source count based hierarchical and
load adaptive medium access control and weighted round
robin packet forwarding [5]. It gives proportional access
to the medium, i.e. a node carrying higher amount of
traffic gets more access to the medium than others.
Therefore, downstream nodes obtain higher access to the
medium than the upstream nodes. This access pattern is
controlled with local values and is made load adaptive to
cope with various application scenarios. The
congestion process can be normalized when there is
synchronization between upstream and downstream
nodes. The method that promotes this strategy is
Weighted Round Robin Forwarding (WRRF). It avoids
packet drops due to congestion by not allowing upstream
nodes to transmit when no buffer space is available.
In each round, a downstream node allows all of its
upstream nodes to transmit their weighted-share amount
of packets. If the downstream node allows its upstream
nodes to transmit at most R packets in total in a round,
any upstream node u can calculate its weighted-share
number of packets as follows:
Sw(u) = R × SCu / SCd
where SCu and SCd are the source count values (the total
number of sources for which a node forwards data) of the
upstream and downstream node, respectively. Each
downstream node controls rounds individually; no
synchronization is required among nodes. For round
control, a single-bit field, round control, is appended to
each packet the downstream node forwards. Rounds cycle
between 0 and 1: a new round starts with 0, and the bit
remains unchanged until the downstream node receives
the weighted-share number of packets from all of its
upstream nodes.
Upstream nodes obtain the round value by snooping
packets transmitted by their downstream node. An
upstream node restricts itself from transmitting any
further packets once it completes its share in that round.
Thereafter, the downstream node switches the round-control
bit to 1 and allows upstream nodes to transmit
further packets. Thus, WRRF provides fair packet
delivery along each routing path. It decreases channel
contention and also balances energy consumption by
allowing equal numbers of packets from all nodes.

Figure 3 Weighted round robin packet forwarding
The combined effort of HMAC and WRRF highly
increases the data transmission reliability of the network
as well as the level of fairness significantly. It ensures
that loss of energy due to packet drops (collision and
buffer drops) is greatly reduced. It also saves energy by
reducing the number of retransmissions at each
downstream node. Finally, the use of snoop based
acknowledgement further reduces energy consumption.
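The weighted-share computation and the snooped round-control bit can be sketched as follows; the class and method names are assumptions, and the share is truncated to a whole number of packets.

```python
def weighted_share(R, sc_upstream, sc_downstream):
    """Sw(u) = R x SCu / SCd: packets upstream node u may send per round,
    truncated to an integer packet count."""
    return R * sc_upstream // sc_downstream

class UpstreamNode:
    """Tracks the round-control bit snooped from the downstream node and
    stops transmitting once its weighted share for the round is used up."""
    def __init__(self, share):
        self.share = share
        self.sent = 0
        self.round_bit = 0

    def can_send(self, snooped_bit):
        if snooped_bit != self.round_bit:  # downstream started a new round
            self.round_bit = snooped_bit
            self.sent = 0
        return self.sent < self.share

    def send_one(self):
        self.sent += 1
```

An upstream node forwarding for 2 of the downstream node's 5 sources, with R = 10 packets per round, gets a share of 4 packets and then waits for the round bit to flip.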
Topology Aware Resource Adaptation (TARA). It is
a topology-aware resource adaptation strategy to alleviate
congestion that can adapt network resources based on
the congestion level [6]. Hence, it can serve the dual
purposes of alleviating congestion during crisis states and
conserving energy during dormant states. It activates
appropriate sensor nodes whose radio is off (that is,
sleeping nodes) to form a new topology that has just
enough capacity to handle the increased traffic, satisfying
both fidelity (or accuracy) and energy requirements at
the same time. TARA's distinct ability to find an optimal
topology in response to congestion under the fidelity and
energy requirements is enabled by a capacity analysis
model that can efficiently estimate the capacity of various
network topologies using a graph-theoretic approach. The
capacity analysis model in TARA can efficiently estimate
the end-to-end throughput of different topologies and


capture the degree of interference of a given topology.

Figure 4 Illustration of TARA


TARA proposes the use of resource control: increasing
capacity by enabling more nodes to become active during
periods of congestion. It puts extra nodes, referred to as
backup nodes, into sleep mode (by turning off their radios)
so that the overall resource (for example, energy) is saved
during a dormant state. However, these nodes need to
wake up periodically to check whether they are needed,
such as when a node on the routing path runs out of
battery. Every time a backup node wakes up, it measures
its congestion level. If it observes that the current
congestion level is greater than the previous
measurement, it will shorten its sleep interval and wake
up more often. As soon as its local congestion level is
above a threshold, it will significantly reduce its sleep
interval and becomes alert. During a dormant state, the
congestion level around a backup node is extremely low
because no traffic is routed through it. As soon as there is
a hot spot, the backup nodes around the hot spot will first
become active. Once they become active, they
communicate with each other, and the generated traffic
will increase the perceived congestion level of those
backup nodes that are nearby. Similarly, a large number
of backup nodes that are not too far away from the hot
spot can thus be woken up and become ready to be picked
to form detour paths. The framework of TARA is
illustrated in Figure 4, where the hot spot is around node
B. As soon as the hot spot node detects that its congestion
level is above the upper watermark, it needs to quickly
locate two important nodes: the distributor (node G) and
the merger (node J). Then, a detour path can be
established, starting at the distributor and ending at the
merger. To alleviate congestion, the distributor should
split the outgoing traffic between the original path and
the detour path, whereas the merger merges these two
flows. The choice of these two nodes, the establishment of
the detour path, and the distribution of loads between the
two paths are topology-aware. If TARA fails to create a
detour path, the distributor's timer expires and it notifies
the hot spot node of this failure. Upon receiving this
message, the hot spot node resorts to the traditional traffic
control approach by sending a back-pressure message to
its upstream neighbors. Due to TARA's topology
awareness, the frequency of resource adaptations is
minimized.
TARA can achieve data delivery rate and energy
consumption that is close to an ideal offline resource
control algorithm. However the time complexity of the
capacity model used in TARA is greatly affected by the
number of nodes in the topology. As a result, it may be
hard to apply it over a large topology.
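A backup node's sleep-interval adaptation, as described above, can be sketched as follows; the halving policy and the constants are illustrative assumptions rather than TARA's actual parameters.

```python
def next_sleep_interval(interval, congestion, prev_congestion,
                        alert_threshold=0.8, min_interval=1.0):
    """Backup node policy: sleep less as local congestion rises, and drop
    to the minimum interval (alert mode) once congestion exceeds the
    threshold. Constants are hypothetical."""
    if congestion > alert_threshold:
        return min_interval                      # alert: wake up constantly
    if congestion > prev_congestion:
        return max(min_interval, interval / 2)   # rising: wake up more often
    return interval                              # quiet: keep sleeping
```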
Queue Based Congestion Control Protocol with
Priority Support (QCCP-PS). It uses the queue length
as an indication of congestion degree [7]. The rate
assignment to each traffic source is based on its priority
index as well as its current congestion degree. This
approach is motivated by the apparent limitations of
existing popular schemes, such as the PCCP. The
simulation results confirm that the PCCP performs very
poorly in providing relative priority in the case of random
service time. The PCCP increases the scheduling rate and
source rate of all traffic sources without paying any
attention to their priority index. In the case of high
congestion, PCCP decreases the sending rate of all traffic
sources based on their priority index. The QCCP-PS
protocol solves this problem by a proper adjustment of the
rate at each node. In QCCP-PS, the sending rate of each
traffic source is increased or decreased depending on its
congestion condition and its priority index. Figure 5
shows the architecture of QCCP-PS. Similar to the other
congestion control protocols, QCCP-PS consists of three
parts namely, Congestion Detection Unit (CDU),
Congestion Notification Unit (CNU), and Rate
Adjustment Unit (RAU). The CDU is responsible for
detecting any congestion in advance. The CDU uses the
queue length as the congestion indicator. The output of
CDU is a congestion index, which is a number between 0
and 1. Two fixed thresholds thmax and thmin are
defined. When the queue length q is less than thmin, the
congestion index is very low and the source node may
increase its rate. On the other hand, when the queue
length is greater than thmax, the congestion index is
high and the traffic source should decrease its rate to
avoid packet loss. When the queue length lies between
thmin and thmax, the congestion index is linearly related
to the queue length. In each predefined time interval T, each parent
node calculates the sending rate of all its child traffic
sources as well as its local traffic source. As each sensor
node may have different priorities since sensor nodes
might be installed with different kinds of sensors in an
environment, the upstream node also considers the
priority of each of its child nodes in calculating the rate of
the child nodes. Based on the current congestion index
and the source traffic priority, the RAU calculates the
new rate of each child traffic sources as well as its local
traffic source. The new rate is sent to the CNU unit which
is responsible for notifying all the child nodes of the new
rate. To decrease energy consumption, CNU uses an
implicit congestion notification by adding the new rate of
each child node to the sending data of each sensor node.
When a node receives a congestion notification message
from its upstream node, the node is expected to adjust its


traffic rate accordingly.
QCCP-PS performs better than PCCP, as it can provide a
better priority index.
QCCP-PS adjusts the traffic rate of each node based on its
degree of congestion, and thus can avoid unnecessary
packet loss. This results in a high throughput and better
achieved priority index. However it supports only single
path routing.
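The CDU's mapping from queue length to a congestion index in [0, 1], as described above, can be sketched as:

```python
def congestion_index(q, th_min, th_max):
    """Map queue length q to a congestion index in [0, 1]: low below
    thmin, high above thmax, and linear in between."""
    if q <= th_min:
        return 0.0
    if q >= th_max:
        return 1.0
    return (q - th_min) / (th_max - th_min)
```

The RAU would then scale each child's rate up or down according to this index and the child's priority.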

Figure 5 The structure of QCCP-PS


Fairness for Extend DCCP Congestion Control in
Wireless Sensor Networks. It utilizes DCCP with the
congestion control mechanism specified in a new
Congestion Control Identifier (CCID) [8]. An optional
ACK based reliability layer has been added on top of the
DCCP connection, similar to TCP's reliability scheme.
The new CCID profile defines when acknowledgments
are sent and how to identify the true reasons of packet
loss. To implement reliable transmission based on DCCP
and provide a comparable level of reliability as TCP does,
the following functions have been added to DCCP:
Buffering of received packets at the receivers,
retransmission of lost or corrupted packets by the senders,
detection and deletion of duplicated packets at the
receivers, and in-order delivery of received packets to the
application program at the receivers.
In the extended protocol, the sender has four states: Normal
State, Congestion State, Failure State (route change or
link failure) and Error State (transmission error). Rate
based congestion control is used to avoid the frequent
slow starts. To determine the available end-to-end
bandwidth, the authors adopted the delay based rate
estimation mechanism. The sender maintains two RTT
values, one is base RTT (baseRTT), which is the
minimum recorded RTT, and the other is exponentially
averaged RTT (avgRTT). Each time the sender goes into
the failure state, the baseRTT will be reset by the round
trip time of a probe packet and its corresponding
acknowledgment, after being temporarily saved as old
baseRTT. The sending rate after the route establishment
is proportional to baseRTT/Old baseRTT. In the Normal
State, the sender adjusts the rate proportional to
baseRTT/avgRTT. In the Congestion State, the rate
adjustment is the same as in the Normal State, but when
packet loss happens, the sending rate is halved. This
idea is based on FAST TCP for high-speed long-distance networks, which showed proportional fairness
under uncongested or mildly congested conditions when
packet loss occurs infrequently. In the Failure State, probe
packets are sent out to monitor the network situation. In
the Error State, the rate is set to α × rate, calculated using
the above scheme, where α ranges from 0.5 to 1,
according to the error rate.
This algorithm shows better performance than standard
DCCP.
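The delay-based rate adjustment across the states can be sketched as follows; the particular value of α and the function shape are assumptions consistent with the description above, not the authors' exact formulation.

```python
def adjust_rate(rate, base_rtt, avg_rtt,
                packet_lost=False, error_state=False, alpha=0.75):
    """Normal/Congestion State: scale the rate by baseRTT/avgRTT;
    halve it on packet loss; in the Error State multiply by alpha,
    which ranges from 0.5 to 1 according to the error rate."""
    new_rate = rate * base_rtt / avg_rtt
    if packet_lost:
        new_rate /= 2
    if error_state:
        new_rate *= alpha
    return new_rate
```

When queues build up, avgRTT grows above baseRTT and the ratio baseRTT/avgRTT pulls the rate down before loss occurs.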
A fairness-aware congestion control scheme in
wireless sensor networks (FACC). It is a rate-based
fairness-aware congestion control (FACC) protocol,
which controls congestion and achieves approximately
fair bandwidth allocation for different flows [9]. In
FACC, intermediate relaying sensor nodes are
categorized into near-source nodes and near-sink nodes.
Near-source nodes maintain a per-flow state and allocate
an approximately fair rate to each passing flow. On the
other hand, near-sink nodes do not need to maintain a
per-flow state and use a lightweight probabilistic
dropping algorithm based on queue occupancy and hit
frequency.
The scheme is shown in Figure 6. First, the near-sink
node sends a warning message (WM) back to the near-source nodes once a packet is dropped at this node.
Second, the near-source nodes calculate and allocate the
approximately fair rate share for each passing flow.
Finally, the near-source node sends a control message
(CM) to notify the designated source node of the updated
sending rate. Simulations demonstrate that this
congestion control scheme is capable of automatically
adapting the sensors' data rate according to the network
conditions and achieving a better congestion-free rate than
other schemes. The total source rate is defined as the total
number of data packets generated by all data sources per
second. During the course of congestion control, the total
source rate is reduced. Simulation results show that this
congestion control scheme achieves higher throughput
than the back-pressure algorithms. This is because back-pressure is hard to adapt to an appropriate level, while
this scheme assigns the exact available bandwidth share
to each flow and, thus, efficiently utilizes the available
bandwidth.

Figure 6 Logic framework of FACC

The average energy expenditure is
defined as the total number of transmissions in the
network divided by the number of packets successfully
delivered to the sinks. This scheme is more energy
efficient than the back-pressure scheme because of more
efficient bandwidth utilization.
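One way to realize the near-sink nodes' lightweight probabilistic dropping is sketched below; the linear weighting of queue occupancy and hit frequency is an illustrative assumption, not the exact formula from [9].

```python
import random

def drop_probability(queue_occupancy, hit_frequency, weight=0.5):
    """Combine normalized queue occupancy and per-flow hit frequency
    (both in [0, 1]) into a drop probability; weighting is hypothetical."""
    return min(1.0, weight * queue_occupancy + (1 - weight) * hit_frequency)

def should_drop(queue_occupancy, hit_frequency, rng=random.random):
    """Probabilistically drop a packet; no per-flow state is required."""
    return rng() < drop_probability(queue_occupancy, hit_frequency)
```

Because the decision depends only on local queue state and recent hit counts, a near-sink node never needs to track individual flows.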
Enhanced Congestion Detection and Avoidance
(ECODA). It is an energy efficient congestion control
scheme for sensor networks [10]. The key ideas of
ECODA are as follows: 1) ECODA adopts a technique to
measure congestion, which uses dual buffer thresholds
and weighted buffer difference for congestion detection. It
could differentiate congestion level and deal with them
correspondingly. 2) The flexible queue scheduler can
dynamically select the next packet to send. Moreover, it
adopts a novel method to filter packets according to
channel loading and packet priority when congestion
happens. 3) Transient and persistent congestion are
differentiated and dealt with differently. For transient
congestion, a hop-by-hop implicit back-pressure
mechanism is used. For persistent congestion,
bottleneck-node-based source sending rate control and
multi-path load balancing are proposed. This method
does not need an explicit ACK from the sink; using it,
bottleneck nodes can be identified and the source sending
rate can be dynamically adjusted more accurately.
In order to precisely measure local congestion level at
each node, this scheme proposes dual buffer thresholds
and weighted buffer difference for congestion detection.
The buffer is defined by three states, an accept state, a
filter state and a reject state, as Figure 7 indicates. Two
thresholds, Qmin and Qmax, border the different buffer
states. Different buffer states reflect different channel
loading, and a corresponding strategy is adopted to
accept or reject packets in each state. The reject state
means that most packets will be rejected because buffer
utilization is too high. If a node's buffer occupancy
exceeds a certain threshold and its data has higher
priority among neighborhood, the corresponding
congestion level bit in the outgoing packet header is set.
When congestion occurs, packets are dropped to alleviate
congestion. It drops a low priority packet rather than the
high priority packet. The ECODA protocol achieves
fairness through its Flexible Queue Scheduler. There are two sub-queues: one for locally generated traffic and one for route-through traffic. In the route-through queue, packets are grouped by source, and for every source, packets are sorted by dynamic priority from high to low. When sending the next packet, a round-robin algorithm is adopted: to ensure fairness, it scans the route-through queue from head to tail, sends one packet per source from the route-through queue, and then sends one locally generated packet.
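The dual-threshold buffer-state logic and drop policy described above can be sketched as follows. This is a minimal illustration rather than ECODA's actual implementation: the Q_MIN/Q_MAX values and the priority floor are hypothetical choices.

```python
# Sketch of ECODA-style dual-threshold congestion detection.
# Q_MIN and Q_MAX are illustrative values, not from the paper.

Q_MIN = 8   # below this occupancy: accept state (assumed value)
Q_MAX = 24  # at/above this occupancy: reject state (assumed value)

def buffer_state(occupancy: int) -> str:
    """Map the current queue length to one of the three buffer states."""
    if occupancy < Q_MIN:
        return "accept"
    if occupancy < Q_MAX:
        return "filter"
    return "reject"

def admit(occupancy: int, priority: int, min_priority: int = 1) -> bool:
    """Decide whether to enqueue an incoming packet.

    Accept state: every packet is queued. Filter state: only packets
    above a priority floor pass. Reject state: packets are dropped,
    mirroring the rule that low-priority packets go first.
    """
    state = buffer_state(occupancy)
    if state == "accept":
        return True
    if state == "filter":
        return priority > min_priority
    return False
```

For example, with the assumed thresholds, a buffer occupancy of 10 puts the node in the filter state, where only packets with priority above the floor are admitted.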
Due to the convergence nature of the sensor network, even though the data sending rate of near-source nodes is low, congestion can still arise near the sink node. To
solve this problem, the authors propose a method called bottleneck-node-based source data sending rate control. To indicate the status of the path from a particular node to the sink, the maximum of a node's own data forwarding delay and its parent node's data forwarding delay is piggybacked in the data packet header, and this value is propagated back to the source node so that the path status can be determined. When a source node or forwarding node receives a back-pressure message, it reduces its data sending rate, or redistributes the rate if multiple paths exist. According to the authors, if no back-pressure message is received, the source data sending rate is increased additively.
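The source-rate reaction just described can be sketched as a simple additive-increase, multiplicative-decrease update: back off when a back-pressure message arrives, probe upward otherwise. The ALPHA and BETA constants are illustrative assumptions, not values from the ECODA paper.

```python
# Sketch of back-pressure-driven source rate adjustment (AIMD-style).
# ALPHA and BETA are hypothetical tuning constants.

ALPHA = 1.0  # additive increase step in packets/s (assumed)
BETA = 0.5   # multiplicative decrease factor (assumed)

def adjust_rate(rate: float, back_pressure: bool,
                max_rate: float = 100.0) -> float:
    """Return the new sending rate for one control interval."""
    if back_pressure:
        return rate * BETA                 # back off on congestion
    return min(rate + ALPHA, max_rate)     # otherwise probe for bandwidth
```

With these assumed constants, a source sending at 10 packets/s drops to 5 on back-pressure and rises to 11 when no back-pressure is heard.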

Figure 7 Buffer state


ECODA deals with transient congestion and persistent
congestion efficiently. For transient congestion, it adopts
hop-by-hop congestion control scheme. For persistent
congestion, it adopts bottleneck node based source data
sending rate control. Simulations show that ECODA
achieves high link utilization and flexible fairness. It can reduce packet loss, improve energy efficiency, and lower delay. ECODA achieves efficient congestion control and flexible weighted fairness for different classes of traffic.
Therefore it leads to higher energy efficiency and better
QoS in terms of throughput, fairness, and delay. The most
important improvement of ECODA is that it provides fairness to different classes of traffic.
DPCC: Dynamic Predictive Congestion Control in
Wireless Sensor Networks. The DPCC [11] is a predictive congestion control algorithm: it predicts congestion at a node and distributes traffic across the entire network fairly and dynamically. The DPCC
protocol tries to increase throughput and reduce packet
loss while guaranteeing distributed priority based fairness
with lower control overhead. The congestion control
scheme of sensor node i is shown in Figure 8. DPCC
protocol consists of three components: backward and
forward nodes selection (BFS), predictive congestion
detection (PCD) and dynamic priority-based rate
adjustment (DPRA), which together are responsible for precise congestion discovery and weighted-fair congestion control. Node i selects a forward node for itself according to the rate adjustment values received from its set of candidate forward nodes: it selects the one from which the received rate value is maximum, and then sends a notification to the selected forward node. To increase throughput, the forward nodes of node i that were not selected adjust new rates for their other backward nodes. To detect congestion,
a congestion index (CI_i), reflecting the current congestion level at each sensor node i, is determined from its unoccupied buffer size (UBS_i) and traffic rate (TR_i) at the MAC layer as follows:
CI_i = UBS_i - TR_i
UBS_i = MBS_i - OBS_i
TR_i = (r_i^s + Σ_(j∈b(i)) r_ji - Σ_(k∈f(i)) r_ik) × T
Here, MBS_i and OBS_i are the maximal buffer size and current queue length of node i, f(i) is the set of forward nodes of i, and b(i) is the set of backward nodes of i. If the index value is less than 0, congestion may occur at node i under this traffic rate. In this state, the Dynamic Priority-based Rate Adjustment component must adjust the traffic rates of the backward nodes to avoid congestion. Accordingly, the total traffic priority (TP_i) at each sensor node i is:
TP_i = Σ_(j∈b(i)) TP_j + SP_i
Here, SP_i and TP_j are the local source traffic priority of node i and the total traffic priority of node j, a member of b(i), respectively. The traffic priority ratio of node i (TPR_i) and of its backward nodes (TPR_ji, j ∈ b(i)) within one hop are obtained as follows:
TPR_i = SP_i / TP_i
TPR_ji = TP_j / TP_i
According to the above equations, the source traffic rate of node i and each transit traffic rate of this node are allocated according to traffic priority as follows:
r_i^s_new = TPR_i × CI_i × (1/T)
r_ji_new = TPR_ji × CI_i × (1/T)
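The congestion-index and rate-allocation formulas above can be exercised with a small sketch; the variable names mirror the text (UBS, TR, TP, TPR), while all numeric values in the example are made up.

```python
# Sketch of the DPCC congestion index and priority-based rate
# allocation for a single node i, following the formulas in the text.

def congestion_index(max_buf, cur_queue, r_src, r_in, r_out, T):
    """CI_i = UBS_i - TR_i, where UBS_i = MBS_i - OBS_i and
    TR_i = (r_i^s + sum of incoming rates - sum of outgoing rates) * T."""
    ubs = max_buf - cur_queue
    tr = (r_src + sum(r_in) - sum(r_out)) * T
    return ubs - tr

def allocate_rates(ci, T, sp_i, tp_backward):
    """Split the sustainable rate among local and transit traffic by
    priority: r_new = TPR * CI * (1/T)."""
    tp_i = sp_i + sum(tp_backward)              # TP_i: total traffic priority
    r_src_new = (sp_i / tp_i) * ci / T          # local source rate
    r_transit_new = [(tp / tp_i) * ci / T for tp in tp_backward]
    return r_src_new, r_transit_new
```

For example, with max_buf=50, cur_queue=10, r_src=2, incoming rates [3, 5], outgoing rate [4], and T=1, the congestion index evaluates to 34, and allocate_rates(34, 1.0, 2, [3, 5]) yields roughly (6.8, [10.2, 17.0]).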

Figure 8 Structure of DPCC


Evaluations show that this protocol improves network throughput. The DPCC reduces congestion and improves
performance over Congestion Detection and Avoidance
(CODA) and IEEE 802.11 protocols. With the addition of
a fair scheduling algorithm, the scheme guarantees
desired quality of service (QoS) and weighted fairness for
all flows even during congestion and fading channels.
Hierarchical Tree Alternative Path (HTAP)
Algorithm for Congestion Control in Wireless Sensor
Networks. HTAP is a resource control algorithm that attempts, through simple steps and minor computations, to control congestion in wireless sensor networks by creating
dynamic alternative paths to the sink [12].
HTAP consists of four different schemes:
Topology control
Hierarchical Tree Creation
Alternative Path Creation
Handling of Powerless (dead) nodes
The Local Minimum Spanning Tree (LMST) algorithm is used as the initial topology control that runs on the network. LMST preserves network connectivity using minimal power, the degree of any node in the resulting topology is restricted to six, and the resulting topology uses only bidirectional links. The variation HTAP introduces to LMST concerns the selection of the neighbor list: instead of selecting any node that fulfils the criteria as a neighbor (at most six), the modified LMST used by HTAP keeps as neighbors only those nodes that reside one level closer to the sink than the node itself. The hierarchical tree
creation algorithm runs on top of the topology control algorithm, and only when a node becomes a source (starts sensing a phenomenon). A hierarchical tree is created beginning at the source node. After the end of the topology control phase, each node can be connected to at most six nodes, each only one hop away from itself. During this phase, each node that becomes a source assigns itself level 0 and sends a level discovery message to the six neighbors selected during the topology control phase. Nodes that receive this packet are considered children of the source node and are set to level 1.

Figure 9(a) Network connectivity after topology control and 9(b) level placement procedure.
Each of these nodes broadcasts the level discovery packet again, and the pattern continues with the level 2 nodes, and so on. This procedure iterates until all nodes are assigned a level, and stops when the level discovery packets reach the sink. When the procedure finishes, the sink may receive more than one level discovery packet from different nodes, each possibly carrying a different level value; this indicates that disjoint paths reach the sink. An example of the operation of the hierarchical tree algorithm, and the placement of nodes in levels, is illustrated in Figures 9(a) and 9(b).
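The level-discovery flood just described can be sketched as a plain breadth-first traversal over the topology-controlled neighbor lists. This is a simplification of HTAP: here each node keeps the first level it hears, which is one possible tie-breaking rule, not necessarily the one HTAP uses.

```python
# Sketch of hierarchical-tree level placement as a BFS flood of
# "level discovery" messages, starting from a source at level 0.

from collections import deque

def assign_levels(neighbors, source):
    """neighbors: dict mapping each node to its neighbor list
    (the output of the topology control phase)."""
    level = {source: 0}           # the source self-assigns level 0
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nxt in neighbors.get(node, []):
            if nxt not in level:  # keep the first level heard
                level[nxt] = level[node] + 1
                queue.append(nxt)
    return level
```

On a toy topology where source 'S' reaches 'a' and 'b', both of which reach 'c', the procedure assigns 'a' and 'b' level 1 and 'c' level 2.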

The level placement procedure takes the output of the topology control phase as its input and attempts to place nodes in levels from each source to the sink. During this
procedure, if a node receives a packet from more than one node, it keeps a connection only with the node that assigns it the higher level. For example, after the end of the topology control phase, node 7 is connected with nodes 6 and 4. During the level placement procedure, it receives a packet from node 6 asking it to become a level 3 node, and a packet from node 4 asking it to become a level 2 node. In this case, node 7 becomes a level 2 node and asks its own neighbor nodes to become level 3 nodes. Finally, in this figure, node 17 represents the case where a node has no upstream node to which to transmit a packet; in such a case, the level placement algorithm removes this node from the table of upstream nodes of node 13. The Alternative Path Creation algorithm runs
when congestion is likely to occur at a specific node in the network. Although the employed topology control algorithm counteracts collisions in the medium by choosing the smallest transmission power (one-hop nodes), congestion can still happen when a node receives packets at a higher rate than it can transmit (buffer-based congestion). In a wireless sensor network where all nodes except the sink are identical, this can happen if a node receives packets from at least two flows, or if the nodes to which it must forward packets cannot accept any more packets. When the buffer of a node starts filling, the node has to take action. In such a case, each node is programmed to run a lightweight local congestion detection (CD) algorithm: when the buffer reaches a buffer-level threshold value, the CD algorithm starts counting the rate at which packets are reaching the node.
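The CD step can be sketched as follows; the buffer threshold ratio and the rate-measurement window are illustrative assumptions, not HTAP's exact procedure.

```python
# Sketch of HTAP-style local congestion detection (CD): once buffer
# occupancy crosses a threshold, start measuring the incoming packet
# rate. Threshold ratio and windowing are assumed, not from the paper.

class CongestionDetector:
    def __init__(self, buf_capacity=32, threshold_ratio=0.75):
        self.threshold = int(buf_capacity * threshold_ratio)
        self.window = []          # arrival timestamps while monitoring

    def on_packet(self, occupancy, now):
        """Record a packet arrival at time `now` (seconds).

        Returns the measured arrival rate in packets/s while the buffer
        is at or above the threshold, or None when not monitoring or
        when fewer than two spaced arrivals have been seen.
        """
        if occupancy < self.threshold:
            self.window.clear()   # below threshold: stop monitoring
            return None
        self.window.append(now)
        span = self.window[-1] - self.window[0]
        if span <= 0:
            return None
        return (len(self.window) - 1) / span
```

With the default capacity of 32 and ratio 0.75, monitoring starts once occupancy reaches 24; two arrivals one second apart then yield a measured rate of 1 packet/s.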
Special care is taken in the HTAP algorithm in the handling of powerless (dead) nodes. When the power of a node reaches the power extinction limit, it immediately broadcasts this fact to the nodes around it. The nodes that receive this packet remove the node's ID from their neighbor lists. If this node was part of an active path (a path relaying packets to the sink), the nodes that were sending packets to it and receive the power extinction message apply the alternative path algorithm and find another path to forward packets to the sink.
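The dead-node handling step can be sketched as a neighbor-list update. The fallback choice of "any surviving neighbor" below is a placeholder for HTAP's alternative path algorithm, which the sketch does not reproduce.

```python
# Sketch of handling a "power extinction" broadcast: neighbors drop
# the dying node from their lists, and any node using it as a next
# hop reroutes. The rerouting rule here is a simplified placeholder.

def handle_power_extinction(dead, neighbor_lists, next_hop):
    """neighbor_lists: node -> set of neighbors; next_hop: node -> next hop."""
    for node, nbrs in neighbor_lists.items():
        nbrs.discard(dead)                       # forget the dead node
        if next_hop.get(node) == dead:
            # pick any surviving neighbor as the alternative path,
            # or None when the node is left with no upstream at all
            next_hop[node] = next(iter(nbrs), None)
    return neighbor_lists, next_hop
```

For instance, if node 'a' was forwarding through 'd' but also neighbors 'b', it reroutes via 'b'; a node whose only neighbor was 'd' ends up with no next hop.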
HTAP is affected by different node placements, and its performance improves when nodes are densely deployed
near possible hot spots like sources and sinks.

3. CONCLUSION
Congestion control in wireless sensor networks is a very significant area of study. The algorithms discussed here share the common objective of extending the lifetime of wireless sensor networks. We have highlighted the different strategies and working principles, as well as the pros and cons, of these protocols. The algorithms that use the resource control technique are the adaptive resource control scheme to alleviate congestion
and the Hierarchical Tree Alternative Path (HTAP) algorithm for congestion control in wireless sensor networks; the rest of the algorithms discussed here are traffic-control-based algorithms.
Although these congestion control techniques are promising, each has been developed under certain environmental assumptions, and none of them controls congestion completely. A scheme is needed that can fully control congestion with minimum packet loss, minimum power consumption, and low delay. To meet the needs of all WSN applications, a practical study of the actual working environments of WSNs must be made, and based on the different working conditions of WSNs, different modes of congestion control schemes should be constructed.

References
[1] C. Y. Wan, S. B. Eisenman, and A. T. Campbell, "CODA: congestion detection and avoidance in sensor networks," in Proc. 1st International Conference on Embedded Networked Sensor Systems, 2003.
[2] C. T. Ee and R. Bajcsy, "Congestion Control and Fairness for Many-to-one Routing in Sensor Networks," in Proc. ACM SenSys, Nov. 2004.
[3] J. Kang, B. Nath, and Y. Zhang, "Adaptive resource control scheme to alleviate congestion in sensor networks," IEEE Workshop on Broadband Advanced Sensor Networks, San Jose, CA, 2004.
[4] E. Kohler, M. Handley, and S. Floyd, "Datagram Congestion Control Protocol (DCCP)," RFC 4340, IETF, March 2006.
[5] Md. Mamun-Or-Rashid, M. Mahbub Alam, Md. Abdur Razzaque, and C. Seon Hong, "Congestion Avoidance and Fair Event Detection in Wireless Sensor Network," 2007.
[6] J. Kang, Y. Zhang, and B. Nath, "TARA: Topology-Aware Resource Adaptation to Alleviate Congestion in Sensor Networks," IEEE Transactions on Parallel and Distributed Systems, 18(7), 919–931, 2007.
[7] M. H. Yaghmaee and D. Adjeroh, "A New Priority Based Congestion Control Protocol for Wireless Multimedia Sensor Networks," in Proc. International Symposium on a World of Wireless, Mobile and Multimedia Networks, 2008.
[8] Liu Yong-Min, Jiang Xin-Hua, and Nian Xiao-Hong, "Fairness for Extend DCCP Congestion Control in Wireless Sensor Networks," IEEE Chinese Control and Decision Conference, 4732–4737, 2009.
[9] X. Yin, X. Zhou, R. Huang, and Y. Fang, "A fairness-aware congestion control scheme in wireless sensor networks," IEEE Transactions on Vehicular Technology, vol. 58, no. 9, Nov. 2009.
[10] L. Q. Tao and F. Q. Yu, "ECODA: Enhanced Congestion Detection and Avoidance for Multiple Class of Traffic in Sensor Networks," IEEE Transactions on Consumer Electronics, vol. 56, no. 3, August 2010.
[11] S. Rasouli Heikalabad, A. Ghaffari, M. A. Hadian, and H. Rasouli, "DPCC: Dynamic Predictive Congestion Control in Wireless Sensor Networks," IJCSI International Journal of Computer Science Issues, vol. 8, issue 1, January 2011.
[12] C. Sergiou, V. Vassiliou, and A. Paphitis, "Hierarchical Tree Alternative Path (HTAP) Algorithm for Congestion Control in Wireless Sensor Networks," 2012.
