
9.3 Congestion Control Algorithms


When too many packets are present in (part of) a subnet, performance degrades. This situation
is called congestion.

Fig 9.? When too much traffic is offered, congestion sets in and performance degrades sharply

When the number of packets dumped into the subnet by the hosts is within the subnet's
carrying capacity, they are all delivered (except for a few that suffer transmission errors),
and the number delivered is proportional to the number sent.
As traffic increases too far, the routers are no longer able to cope and begin losing
packets. At very high traffic, performance collapses completely and almost no packets are
delivered.
The following factors are responsible for congestion:
a) Many input lines require the same output line
If, all of a sudden, streams of packets begin arriving on three or four input lines and all need
the same output line, a queue will build up. If there is insufficient memory to hold all of them,
packets will be lost.
Nagle discovered that even giving routers an infinite amount of memory does not solve the
congestion problem: by the time packets get to the front of the queue, they have already timed
out and their duplicates have been sent by the sender. These packets are then dutifully
forwarded to the next router, increasing the load all the way to the destination.
b) Slow Processors
If a router's CPU is slow, it requires more time to process each packet and to perform other
tasks such as queuing buffers and updating tables. As a result, queues build up even
though there is enough buffer space.

Upgrading the lines (bandwidth) without changing the processors, or vice versa, helps a little
but frequently just shifts the bottleneck. The solution is to upgrade all components and keep
them balanced.

The presence of congestion means that the load is (temporarily) greater than the resources (in
part of the system) can handle.
Two solutions are possible:
a) Increase Resources
Possible ways to increase resources:
• The subnet may start using dial-up telephone lines to temporarily increase the bandwidth
between certain points.
• Traffic can be split over multiple routes instead of always using the best one.
• Spare routers kept as backups can be brought online to give more capacity.
• On satellite systems, increasing transmission power often gives higher bandwidth.
However, sometimes it is not possible to increase capacity, or it has already been increased
to the limit.
b) Decrease Load
Possible ways to decrease load:
• Deny service to some users.
• Degrade service to some or all users.
• Have users schedule their demands in a more predictable way.

Congestion control is often confused with flow control, as their relationship is subtle.

Congestion Control
It has global scope. Congestion control has to do with making sure the subnet is able to carry
the offered traffic.
It involves the behavior of all hosts, all routers, the store-and-forward processing within the
routers, and all other factors that tend to diminish the carrying capacity of the subnet.
For example, consider a store-and-forward network with 1-Mbps lines and 1000 large
computers, half of which are trying to transfer files at 100 kbps to the other half. The problem
is not that of fast senders overpowering slow receivers, but that the total offered traffic
exceeds the network's handling capacity.

Flow Control
It has local scope. It relates to the point-to-point traffic between a given sender and receiver.
Its job is to make sure that a fast sender cannot continuously transmit data faster than the
receiver is able to absorb it.
It frequently involves some direct feedback from the receiver to the sender to slow it down.
For example, consider a fiber-optic network with a capacity of 1000 Gbps on which a
supercomputer is trying to transfer a file to a personal computer at 1 Gbps. Although there is
no congestion, flow control is needed to force the supercomputer to stop frequently and give
the personal computer a chance to catch up.

9.3.1 General Principles of Congestion Control


Many problems in complex systems such as computer networks can be viewed from a
control-theory point of view. This approach leads to dividing all solutions into two groups:
a) Open Loop
b) Closed Loop
Fig 9.? Classification of Congestion Control Policies

Open Loop
It is an avoidance strategy: solve the problem by good design, i.e., make sure congestion does
not occur in the first place. Once the system is up and running, midcourse corrections are not
made.
Tools for open-loop control include:
• Deciding when to accept new traffic.
• Deciding when to discard packets, and which ones.
• Making scheduling decisions at various points in the network.
All of these tools have in common that they make decisions without regard to the current
state of the network.

Closed Loop
It is a repair strategy, based on the concept of a feedback loop. This approach has three
parts:
a) Monitor the system to detect when and where congestion occurs.
A variety of metrics can be used to monitor the subnet for congestion, such as the percentage
of all packets discarded for lack of buffer space, the average queue length, the number of
packets that time out and are retransmitted, the average packet delay, and the standard
deviation of packet delay. In all cases, a rising number indicates growing congestion.
b) Pass congestion information to places where action can be taken.
A router that detects congestion can send packets to the traffic source or sources announcing
the congestion problem. Alternatively, a bit or field can be reserved in every packet that a
router fills in whenever it detects congestion, warning its neighbors. Hosts or routers can
also periodically send probe packets out to explicitly ask about congestion.
c) Adjust system operation to correct the problem.
For a feedback scheme to work properly, the time scale must be adjusted carefully: the
response of the congestion control policy should be neither too fast nor too slow. To work
well, some kind of averaging is needed.

Closed-loop algorithms are classified according to their feedback:

Explicit Feedback
Packets are sent back from the point of congestion to the source.
Implicit Feedback
The source deduces the existence of congestion by making local observations, such as the
time needed for acknowledgements to come back.

9.3.2 Congestion Prevention Policies (Based on the Open-Loop Concept)


Open loop is a congestion-avoidance policy: the aim is to prevent congestion in the first
place rather than let it happen and take action afterwards.
The following table shows how policies at the data link, network, and transport layers
create or affect congestion:
Layer      Policies
Data Link  i. Retransmission policy
           ii. Out-of-order caching policy
           iii. Acknowledgement policy
           iv. Flow control policy
Network    i. Virtual circuit versus datagram inside the subnet
           ii. Packet queuing and service policy
           iii. Packet discard policy
           iv. Routing algorithm
           v. Packet lifetime management
Transport  i. Retransmission policy
           ii. Out-of-order caching policy
           iii. Acknowledgement policy
           iv. Flow control policy
           v. Timeout determination
Data Link Layer
i. Retransmission Policy
It is concerned with how fast a sender times out and what it transmits upon
timeout.

ii. Out-of-order Caching Policy


A jumpy sender that times out quickly and retransmits all outstanding frames using
Go-Back-N puts a heavier load on the system.
Instead of Go-Back-N, selective repeat (which requires the receiver to cache
out-of-order frames) is the better approach for congestion prevention.

iii. Acknowledgement Policy


If every packet is acknowledged, the acknowledgements generate extra traffic.
Piggybacking can be used to reduce this by inserting acknowledgements into the
reverse traffic, but extra timeouts and retransmissions may result.

iv. Flow Control Policy


A tight flow control scheme, such as a small window, reduces the data rate.

Network Layer
i. Virtual Circuits vs. Datagrams inside the subnet
Many congestion control algorithms work only with virtual-circuit subnets, so the
choice of virtual circuits or datagrams affects congestion.

ii. Packet Queuing & Service Policy


It relates to whether routers have one queue per input line, one queue per output line,
or both.
It also relates to the order in which packets are processed (e.g., round robin or
priority based).

iii. Packet Discard Policy


This policy determines which packet is dropped when there is no space. A good policy
can help alleviate congestion; a bad one can make it worse.
iv. Routing Algorithm
A good routing algorithm spreads the traffic over all the lines, whereas a bad one can
send too much traffic over already-congested lines.

v. Packet Lifetime Management


It deals with how long a packet may live before being discarded.
If the lifetime is too long, lost packets may clog up the works for a long time; if it is
too short, packets may time out before reaching their destination, inducing
retransmissions.

Transport Layer
The issues are the same as at the data link layer, but determining the timeout interval is
harder: because transport entities run in end systems, the transit time across the network is
less predictable than the transit time over a wire between two routers.
If the timeout interval is too short, extra packets will be sent unnecessarily. If it is too long,
congestion will be reduced, but response time will suffer whenever a packet is lost.
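The trade-off above suggests making the timeout adaptive. The following is an illustrative sketch (not part of the original text) in the style of Jacobson's algorithm, keeping an exponentially weighted average of the round-trip time and its deviation; the class name and the constants are assumptions.

```python
# Sketch: adaptive retransmission timeout from smoothed RTT + deviation.
class TimeoutEstimator:
    def __init__(self, alpha=0.125, beta=0.25):
        self.alpha = alpha      # weight given to each new RTT sample
        self.beta = beta        # weight given to each new deviation sample
        self.srtt = None        # smoothed round-trip time
        self.rttvar = None      # smoothed RTT deviation

    def update(self, sample):
        """Fold one measured RTT sample into the estimate."""
        if self.srtt is None:
            self.srtt = sample
            self.rttvar = sample / 2
        else:
            self.rttvar = (1 - self.beta) * self.rttvar \
                          + self.beta * abs(sample - self.srtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * sample

    def timeout(self):
        """Retransmission timeout: smoothed RTT plus a safety margin."""
        return self.srtt + 4 * self.rttvar

est = TimeoutEstimator()
for rtt in [100, 120, 90, 300, 110]:    # RTT samples in milliseconds
    est.update(rtt)
print(round(est.timeout(), 1))
```

A sudden large sample (300 ms here) inflates the deviation term, so the timeout backs off well past the average rather than firing spuriously.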

9.3.3 Congestion Control in Virtual-Circuit Subnets


It is based on the closed-loop concept.
The following are approaches to dynamically controlling congestion in virtual-circuit subnets.
i) Admission Control
The basic idea is that once congestion has been signaled, no more virtual circuits are set up
until the problem has gone away. Thus, attempts to set up new transport-layer connections
fail. This approach is crude but simple and easy to implement. For example, in the telephone
system, when a switch gets overloaded it applies admission control by not giving dial tones.
ii) Allow new virtual circuits, but route them carefully
For example, consider the following subnet, in which two routers are congested.
Fig 9.? a) Congested subnet b) Redrawn subnet that eliminates the congestion

Select alternative routes to avoid the part of the network that is overloaded, i.e., temporarily
rebuild your view of the network.
Suppose that a host attached to router A wants to set up a connection to a host attached to
router B. Normally this connection would pass through one of the congested routers. To
avoid this, the subnet is redrawn with the congested routers and their lines omitted. The
dashed line shows a possible route for the virtual circuit that avoids the congested routers.
iii) Reservation (negotiation at virtual-circuit setup)
Negotiate the quality of the connection in advance, so that the network provider can reserve
buffers and other resources that are then guaranteed to be there.
Advantages
With this approach congestion is unlikely, as the necessary resources are guaranteed to be
available.
Disadvantages
It tends to waste resources.

9.3.4 Congestion Control in Datagram Subnet


This closed-loop congestion control is applicable to both virtual-circuit and datagram
subnets.
Each router can easily monitor the utilization of its output lines. Each output line has a
variable u (0 <= u <= 1) indicating its recent utilization. The router samples the instantaneous
line usage f periodically (f = 0 when the line is idle, f = 1 when it is busy) and updates u as

u_new = a * u_old + (1 - a) * f

where a (0 <= a <= 1) determines to what extent "history" is taken into account.
Whenever u moves above a threshold, the output line enters a warning state.
Instead of line utilization, other variables can be used, such as queue lengths or buffer
utilization.
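The update rule above can be sketched as follows; the weight, the threshold, and the sample sequence are made-up values for illustration.

```python
# Sketch of the smoothed utilization estimate: u = a*u_old + (1-a)*f,
# where f is the instantaneous sample (0 or 1) and a controls how much
# history is remembered.
def update_utilization(u_old, f, a=0.9):
    """One periodic update of the smoothed line utilization."""
    return a * u_old + (1 - a) * f

WARNING_THRESHOLD = 0.8     # assumed threshold for the warning state

u = 0.0
samples = [1, 1, 1, 1, 0, 1, 1, 1, 1, 1]    # line busy (1) or idle (0)
for f in samples:
    u = update_utilization(u, f)
print(round(u, 3), u > WARNING_THRESHOLD)
```

With a large a the estimate reacts slowly: even a mostly busy line takes many samples to push u toward the threshold, which is exactly the averaging the feedback time-scale discussion calls for.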
When a newly arrived packet wants to use an outgoing line that is in the warning state, some
action is taken. These actions amount to congestion detection and recovery, and may be as
follows:
i. Warning Bit
ii. Choke Packets
iii. Hop-by-Hop Choke Packets

i) Warning Bit
A special bit in the packet header is set by the router to warn the source when congestion is
detected. When the packet arrives at its destination, the bit is copied into the
acknowledgement, which is piggybacked on the reverse traffic and sent to the sender. The
source then cuts back on traffic.
As long as the router is in the warning state, it continues to set the warning bit, and the
source continues to receive acknowledgements with the bit set. The source monitors the
fraction of acknowledgements with the bit set and adjusts its transmission rate accordingly.
When the warning bits slow down, the source increases its transmission rate.
Advantages
No bandwidth overhead for control packets; only one bit is set in an existing packet.
Disadvantages
It takes a full round trip to tell the source to slow down, so there is a long delay before the
sender can start reducing traffic and thus relieve the congested router.
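A source's reaction to warning bits might be sketched as below. The cut fraction, increase step, and threshold are illustrative assumptions, not values given in the text.

```python
# Hypothetical sketch: cut the rate multiplicatively when many ACKs in a
# window carry the warning bit, increase additively when few do.
def adjust_rate(rate, acks_with_bit, total_acks,
                cut_fraction=0.5, increase_step=1.0, threshold=0.5):
    """Return the new sending rate after one observation window."""
    if total_acks == 0:
        return rate
    fraction = acks_with_bit / total_acks
    if fraction > threshold:
        return rate * cut_fraction      # many warnings: slow down sharply
    return rate + increase_step         # few warnings: probe for more capacity

rate = 10.0
rate = adjust_rate(rate, acks_with_bit=8, total_acks=10)   # congested window
rate = adjust_rate(rate, acks_with_bit=1, total_acks=10)   # clear window
print(rate)   # 5.0 after the cut, then 6.0 after the increase
```

The asymmetry (halve quickly, grow slowly) keeps the source from oscillating straight back into congestion.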

ii) Choke Packets


A router detecting congestion sends a warning packet (a choke packet) directly to the sender.
The original packet is tagged so that it will not generate any more choke packets farther
along the path, and is then forwarded in the usual way.
Basic idea: the router checks the status of each output line; if it is too busy, it sends a choke
packet to the source. The host is assumed to be cooperative and will slow down. When the
source gets a choke packet, it cuts its rate by half and ignores further choke packets for the
same destination for a fixed period.
After that period has expired, the host listens for more choke packets. If one arrives, the host
cuts its rate by half again. If no choke packet arrives, the host may increase its rate.
Problem with basic choke packets: in high-speed WANs, the return path for a choke packet
may be so long that many more packets have already been sent before the source notices the
congestion and takes action.
A variation on this approach: routers can maintain several threshold values. Depending on
which threshold has been crossed, the choke packet can contain a mild warning, a stern
warning, or an ultimatum.

Fig 9.? Functioning of choke packets: a) Heavy traffic between nodes P and Q, b) Node Q sends
a choke packet to P, c) Choke packet reaches P, d) P reduces the flow and sends a reduced flow
out, e) Reduced flow reaches node Q

Advantage:
It provides a quicker warning to the sender to reduce traffic.
Disadvantages:
It is difficult to decide by how much a host should slow down, because the answer depends
on how much traffic the host is sending, how much of the congestion it is responsible for,
and the total capacity of the congested region. Such information is not readily available in
practice.
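The host-side behaviour described above (halve the rate, then ignore choke packets for a fixed period) can be sketched as follows; the simulated clock and the constants are illustrative assumptions.

```python
# Sketch of a cooperative host reacting to choke packets.
class ChokeReactingHost:
    def __init__(self, rate, ignore_period=5.0):
        self.rate = rate
        self.ignore_period = ignore_period
        self.ignore_until = 0.0     # time until further chokes are ignored

    def on_choke_packet(self, now):
        if now < self.ignore_until:
            return                  # still inside the ignore period
        self.rate /= 2              # cut the rate by half
        self.ignore_until = now + self.ignore_period

    def on_quiet_period(self):
        self.rate += 1.0            # no choke packets heard: cautiously increase

host = ChokeReactingHost(rate=8.0)
host.on_choke_packet(now=0.0)   # first choke: rate halved to 4.0
host.on_choke_packet(now=2.0)   # within ignore period: no effect
host.on_choke_packet(now=6.0)   # period expired: halved again to 2.0
host.on_quiet_period()          # listening period passes quietly: 3.0
print(host.rate)
```

The ignore period prevents a burst of choke packets, all triggered by the same congestion episode, from collapsing the rate to nearly zero.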

iii) Hop-by-Hop Choke Packets


This technique is an improvement over the basic choke-packet method. At high speeds or
over long distances, sending a choke packet all the way to the source host does not work well
because the reaction is so slow.

Figure 9.? Hop-by-hop choke packets: (a) Heavy traffic between nodes P and Q, (b) Node Q
sends a choke packet to P, (c) The choke packet reaches R, and the flow between R and Q is
curtailed, (d) The choke packet reaches P, and P reduces the flow out

In the hop-by-hop scheme, a router sends a choke packet to the previous router, which
reduces the rate on the line (network interface) through which the warning was received.
Only if necessary does that router forward the warning on to its own previous hop.
Advantage:
It provides very quick relief at the point of congestion.
Load Shedding (Discarding Packets)
This is one of the simplest and most effective techniques. Whenever a router finds that the
network is congested, it simply starts dropping packets. Which packet should be dropped
depends on the application; there are several methods for deciding which packets to drop.
i) Discard Randomly
This method picks packets at random to discard. It is not a good method.
ii) Discard Younger Ones (Wine Policy)
This is suitable for file transfer. Older packets cannot be discarded, since doing so would
cause a gap in the received data.
For example, in a 12-packet file, dropping packet 6 may require packets 7 through 12 to be
retransmitted (if the receiver routinely discards out-of-order packets), whereas dropping
packet 10 may require only packets 10 through 12 to be retransmitted.
iii) Discard Older Ones (Milk Policy)
This is suitable for multimedia. For multimedia, a new packet is more important than an old
one; for real-time voice or video, it is probably better to throw away old data and keep new
packets.
iv) Discard Less Important Ones (Intelligent Policy)
Support from the senders is required: they must mark their packets with priority classes to
indicate how important they are (e.g., very important — never discard).
For example, in the MPEG video standard, an entire frame is transmitted periodically,
followed by subsequent frames encoded as differences from the last full reference frame.
Dropping a packet that is part of a difference is preferable to dropping one that contains part
of the reference frame.
As another example, consider transmitting a document containing ASCII text and pictures:
losing a line of pixels in some image is far less damaging than losing a line of readable text.
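A discard decision combining the priority and wine policies might look like the sketch below; the packet fields and the tie-breaking rule are illustrative assumptions.

```python
# Sketch: when the buffer is full, drop the lowest-priority packet, and
# among equal priorities the youngest one (the "wine" rule).
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int          # sequence number (higher = younger)
    priority: int     # higher = more important

def packet_to_drop(buffer):
    """Pick the victim: lowest priority first, youngest first among ties."""
    return min(buffer, key=lambda p: (p.priority, -p.seq))

buffer = [Packet(1, 2), Packet(2, 1), Packet(3, 1), Packet(4, 2)]
victim = packet_to_drop(buffer)
print(victim.seq)   # packet 3: low priority, and younger than packet 2
```

For the milk policy the tie-breaker would simply be reversed, preferring to drop the oldest packet instead.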

Random Early Detection (RED)


By having routers drop packets before the situation becomes hopeless, there is time for
action to be taken before it is too late. This observation leads to the idea of discarding
packets before all the buffer space is really exhausted. The algorithm for doing this is
called RED.
To determine when to start discarding, routers maintain a running average of their queue
lengths. When the average queue length on some line exceeds a threshold, the line is said to
be congested and action is taken.
As packets are discarded, the source will eventually notice the lack of acknowledgements
and take action. Since it knows that lost packets are generally caused by congestion and
discards, it responds by slowing down instead of trying harder. This is, in effect, implicit
feedback to the source.
RED works only in wired networks, because there lost packets are mostly due to buffer
overruns rather than transmission errors. In wireless networks, where most losses are due to
noise on the air links, this approach cannot be used.
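RED's early-drop decision can be sketched as follows. The thresholds, the averaging weight, and the linear drop-probability ramp between them are assumptions in the spirit of the algorithm, not values from the text.

```python
# Sketch: EWMA of the queue length; between the two thresholds, drop
# arrivals with a probability that grows with the average.
import random

MIN_TH, MAX_TH, MAX_P, WEIGHT = 5, 15, 0.1, 0.2

avg_queue = 0.0

def should_drop(queue_len, rng=random.random):
    """Update the average queue length and decide whether to drop."""
    global avg_queue
    avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * queue_len
    if avg_queue < MIN_TH:
        return False                    # no congestion: accept
    if avg_queue >= MAX_TH:
        return True                     # severe congestion: always drop
    # linear ramp of drop probability between the two thresholds
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return rng() < p

# a burst: queue stays short, then persistently long
drops = sum(should_drop(q) for q in [2, 4, 20, 20, 20, 20, 20, 20, 20, 20])
print(drops)
```

Because the decision is driven by the averaged queue length, a short burst passes untouched while a sustained backlog triggers drops with increasing probability, signalling sources early through their missing acknowledgements.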

Jitter Control
For applications such as audio and video streaming, it does not matter much whether packets
take 20 msec or 30 msec to be delivered, as long as the transit time is constant.
The variation in packet arrival times is called jitter. For example, some packets taking
20 msec and others taking 30 msec to arrive will give an uneven quality to the sound or movie.
Two approaches to overcoming the jitter problem:
i. Check the packet's schedule at each router along the path
When a packet arrives at a router, the router checks to see how much the packet is
behind or ahead of its schedule. This information is stored in the packet and updated
at each hop. If the packet is ahead of schedule, it is held long enough to get back on
schedule. If it is behind schedule, the router tries to get it out the door quickly.
ii. Buffering
Buffer the data at the receiver and then fetch data for display from the buffer instead
of from the network in real time. This is suitable for video on demand, but not for
real-time interactive applications such as Internet telephony and video conferencing.
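Receiver-side buffering can be sketched as a playout schedule: each packet is held until its nominal playout slot, set by an initial delay. The timings below are made-up values for illustration.

```python
# Sketch: compute playout times for packets that arrive with jitter.
def playout_times(arrivals, start_delay, interval):
    """arrivals: list of (arrival_time, seq). Returns (seq, playout_time)
    pairs; a packet plays at its slot or on arrival, whichever is later."""
    schedule = []
    for arrival, seq in arrivals:
        slot = start_delay + seq * interval     # nominal playout slot
        schedule.append((seq, max(slot, arrival)))
    return sorted(schedule)

# packets 0..3 sent every 20 msec but arriving with jitter
arrivals = [(25, 0), (42, 1), (71, 2), (80, 3)]
for seq, t in playout_times(arrivals, start_delay=50, interval=20):
    print(seq, t)   # evenly spaced playout despite uneven arrivals
```

The initial delay is the trade-off the text alludes to: a larger buffer absorbs more jitter but adds latency, which is why this works for video on demand and not for interactive telephony.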
