VII Semester
2013-14
Unit-2
PART A (2 Marks)
1. State the key characteristics to be considered for deriving the analytic equations for the
queuing model.
Nov/Dec 2011
Key characteristics for deriving the analytic equations for the queuing model are:
Item population
Queue size
Dispatching discipline
2. What is meant by choke packets?
Nov/Dec 2011
A choke packet is a control packet generated at a congested node and transmitted back to a
source node to restrict traffic flow.
Example: Internet Control Message Protocol Source Quench packet.
3. What are the advantages of packet over circuit switching?
May/June 2012
Greater line efficiency (a single node-to-node link can be dynamically shared by many packets), data-rate conversion between stations operating at different speeds, packets are accepted even under heavy traffic (delivery delay increases rather than calls being blocked), and priorities can be used.
4. What is meant by congestion?
May/June 2012
Congestion occurs when the number of packets being transmitted through a network begins to approach the packet-handling capacity of the network.
5. List and explain the parameters for a single server queue.
Nov/Dec 2012
Parameter                        Explanation
Arrival rate (λ)                 Items arriving per second
Waiting time (Tw)                The average time that an item waits in the queue
Service time (Ts)                The average time the server spends serving an item
Utilization (ρ)                  The fraction of time that the server is busy
Items resident in system (r)     The average number of items resident in the system, including the item being served and the items waiting
Residence time (Tr)              The average time that an item spends in the system, waiting and being served
Nov/Dec 2012
May/June 2013
Congestion is a state occurring in a network when the load on the network is greater than the capacity of the network.
8. What are the types of queuing models?
May/June 2013
The two basic queuing models are:
The single-server queue
The multiserver queue
Backpressure can be used only in a connection-oriented network that allows hop-by-hop flow control.
Neither frame relay nor ATM has any capability for restricting flow on a hop-by-hop basis.
12. State the discard strategy.
Discard strategy deals with the most fundamental response to congestion. When congestion becomes severe enough, the network is forced to discard frames, and it must be done in a way that is fair to all users.
13. What is meant by Kendall's notation?
Nov/Dec 2013
Kendall's notation is a standard shorthand for describing a queuing model; in the form (a/b/c) : (d/e) used here, it specifies the arrival distribution, the service-time distribution, the number of servers, the system capacity, and the queue discipline.
14. Mention the congestion control techniques used in packet switching networks. Nov/Dec 2013
The main techniques are backpressure, choke packets, implicit congestion signaling, and explicit congestion signaling.
Or
4. (a) Explain in detail about single server queues and its application. (8 Marks) May/June 2012
Or
6. (a) Explain the single server queuing model and its applications. (8 Marks)
Nov/Dec 2012
Or
7. (a) Explain Single Server Queuing model in detail. (10 Marks)
May/June 2013
The simplest queuing system is depicted in above Figure. The central element of the system
is a server, which provides some service to items. Items from some population of items
arrive at the system to be served. If the server is idle, an item is served immediately.
Otherwise, an arriving item joins a waiting line. When the server has completed serving
an item, the item departs.
If there are items waiting in the queue, one is immediately dispatched to the server. The
server in this model can represent anything that performs some function or service for a
collection of items.
Examples: a processor provides service to processes; a transmission line provides a transmission service to
packets or frames of data; an I/O device provides a read or write service for I/O requests.
Queue Parameters
The above Figure also illustrates some important parameters associated with a queuing
model. Items arrive at the facility at some average rate (items arriving per second). At any
given time, a certain number of items will be waiting in the queue (zero or more); the
average number waiting is w, and the mean time that an item must wait is Tw.
Tw is averaged over all incoming items, including those that do not wait at all. The server
handles incoming items with an average service time Ts; this is the time interval between
the dispatching of an item to the server and the departure of that item from the server.
Utilization is the fraction of time that the server is busy, measured over some interval of
time.
Finally, two parameters apply to the system as a whole. The average number of items
resident in the system, including the item being served (if any) and the items waiting (if
any), is r; and the average time that an item spends in the system, waiting and being served,
is Tr; we refer to this as the mean residence time. If we assume that the capacity of the
queue is infinite, then no items are ever lost from the system; they are just delayed until
they can be served. Under these circumstances, the departure rate equals the arrival rate. As the arrival rate λ, which is the rate of traffic passing through the system, increases, so does the utilization ρ = λTs, and the server saturates when ρ reaches 1. Thus, the theoretical maximum input rate that can be handled by the system is:
λmax = 1/Ts
In practice, any queue is finite. In many cases, this will make no substantive difference to the analysis. We address this issue briefly below.
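Under the common M/M/1 assumption (Poisson arrivals, exponential service times, a single server), the parameters above can be computed directly; the following is a minimal sketch, and the rate values used are illustrative assumptions, not figures from the text:

```python
def mm1_metrics(lam, ts):
    """Single-server queue metrics under M/M/1 assumptions.
    lam: arrival rate (items/second), ts: mean service time Ts."""
    rho = lam * ts                  # utilization: fraction of time server is busy
    if rho >= 1:
        raise ValueError("arrival rate exceeds the theoretical maximum 1/Ts")
    tr = ts / (1 - rho)             # mean residence time Tr
    tw = tr - ts                    # mean waiting time Tw
    r = lam * tr                    # mean items resident in system (Little's law)
    w = lam * tw                    # mean items waiting (Little's law)
    return {"rho": rho, "Tr": tr, "Tw": tw, "r": r, "w": w}

m = mm1_metrics(lam=0.5, ts=1.0)    # arrivals at half the maximum rate 1/Ts
```

Note that r = λTr and w = λTw (Little's law) hold regardless of the distribution assumptions; only the Tr formula is specific to M/M/1.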
Dispatching discipline: When the server becomes free, and if there is more than one item
waiting, a decision must be made as to which item to dispatch next. The simplest approach
is first-in, first-out; this discipline is what is normally implied when the term queue is used.
Another possibility is last-in, first-out. One that you might encounter in practice is a
dispatching discipline based on service time. For example, a packet-switching node may
choose to dispatch packets on the basis of shortest first (to generate the most outgoing
packets) or longest first (to minimize processing time relative to transmission time).
Unfortunately, a discipline based on service time is very difficult to model analytically.
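A small sketch can illustrate why a service-time-based discipline differs from FIFO; the job service times below are hypothetical:

```python
def avg_wait(service_times):
    """Average waiting time when all jobs are queued at time 0
    and served in the given order."""
    wait, elapsed = 0.0, 0.0
    for ts in service_times:
        wait += elapsed           # this job waits for every job dispatched before it
        elapsed += ts
    return wait / len(service_times)

jobs = [5.0, 1.0, 2.0]                 # hypothetical service times, all present at once
fifo_wait = avg_wait(jobs)             # first-in, first-out order
sjf_wait = avg_wait(sorted(jobs))      # shortest service time first
```

Serving the shortest jobs first lowers the average wait, at the cost of needing service-time knowledge in advance, which is also what makes such disciplines hard to model analytically.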
1. (b) Explain the Kendall's notation and the common distributions for the queuing model.
(6 Marks)
Nov/Dec 2011
Kendall's Notation
Kendall proposed a set of notations for queuing models, which is widely used in the literature. The common pattern of notation for a queuing model is:
(a/b/c) : (d/e)
where:
a : Arrival (inter-arrival time) distribution
b : Service-time distribution
c : Number of servers
d : Capacity of the system
e : Queue discipline
The common symbols used for the arrival and service-time distributions are:
Symbol   Name                      Description
M        Markovian                 Exponential (memoryless) inter-arrival or service times
D        Degenerate distribution   Deterministic (constant) times
Ek       Erlang distribution       Erlang distribution with shape parameter k
G        General distribution      Arbitrary distribution
PH       Phase-type distribution   Times described by a phase-type process
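As a hypothetical illustration (the helper name and the descriptor string are assumptions, not part of the notation itself), a short routine can expand the first three fields of a Kendall descriptor using the symbol table above:

```python
# Symbol table for the distribution field of a Kendall descriptor.
DISTRIBUTIONS = {
    "M": "Markovian (exponential)",
    "D": "Degenerate (deterministic)",
    "Ek": "Erlang",
    "G": "General",
    "PH": "Phase-type",
}

def describe(kendall):
    """Expand 'a/b/c' into (arrival distribution, service distribution, servers)."""
    a, b, c = kendall.split("/")
    return (DISTRIBUTIONS[a], DISTRIBUTIONS[b], int(c))

arrival, service, servers = describe("M/M/1")
```

For example, M/M/1 denotes Poisson (Markovian) arrivals, exponential service times, and a single server.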
2. (a) Explain in detail the explicit and implicit congestion signaling. (8 Marks) Nov/Dec 2011
Explicit Congestion Signaling
Explicit congestion control techniques operate over connection oriented networks and
control the flow of packets over individual connections. Explicit congestion signaling
approaches can work in one of two directions
Backward
Notifies the source that congestion avoidance procedures should be initiated where
applicable for traffic in the opposite direction of the received packet. It indicates that the
packets that the user transmits on this logical connection may encounter congested
resources.
Backward information is transmitted either by altering bits in a data packet headed for the
source to be controlled or by transmitting separate control packets to the source.
Forward
Notifies the user that congestion avoidance procedure should be initiated where applicable
for traffic in the same direction as the received packet. It indicates that this packet, on this
logical connection, has encountered congested resources.
Again, this information may be transmitted either as altered bits in data packets or in
separate control packets.
Explicit congestion signaling approaches are divided into three general categories
Binary
A bit is set in a data packet as it is forwarded by the congested node. When a source
receives a binary indication of congestion on a logical connection, it reduces its traffic flow.
Credit Based:
The credit indicates how many octets or how many packets the source may transmit.
Rate Based
These schemes are based on providing an explicit data rate limit to the source over a logical connection. The source may transmit data at a rate up to the set limit.
Implicit Congestion Signaling
With implicit congestion signaling, the network gives no explicit notification; instead, when congestion occurs, transmission delays increase and packets may be discarded. A source that detects the increased delays and packet losses reduces its flow. Implicit signaling is useful in connectionless (datagram) networks, where the network itself cannot regulate flow on individual connections.
Technique                                   Type                    Key elements
Discard strategy                            Discard                 DE bit
Backward explicit congestion notification   Congestion avoidance    BECN bit
Forward explicit congestion notification    Congestion avoidance    FECN bit
Implicit congestion notification            Congestion recovery     Sequence numbers in higher-layer PDU (frame loss)
The above table lists the congestion control techniques defined in various ANSI documents.
Discard Strategy deals with the most fundamental response to congestion. When congestion
becomes severe enough, the network is forced to discard frames.
Congestion avoidance procedures are used at the onset of congestion to minimize the effect
on the network. Thus, there must be some explicit signaling mechanism from the network
that will trigger the congestion avoidance.
Congestion recovery procedures are used to prevent network collapse in the face of severe
congestion. These procedures are typically initiated when the network has begun to drop
frames due to congestion. Such dropped frames will be reported by some higher layer of software and serve as an implicit signaling mechanism.
Objectives of frame relay congestion control are:
Minimize frame discard.
Minimize the possibility that one end user can monopolize network resources at the expense of other end users.
Be simple to implement, and place little overhead on either the end user or the network.
Create minimal additional network traffic.
Distribute network resources fairly among end users.
3. (b) Explain about traffic management in packet switching. (8 Marks) May/June 2012
When a node is saturated and must discard packets, it can apply some simple rule, such as
discard the most recent arrival. However, other considerations can be used to refine the
application of congestion control techniques and discard policy.
Fairness
As congestion develops, flows of packets between sources and destinations will experience increased delays and, with high congestion, packet losses. In the absence of other requirements, we would like to assure that the various flows suffer from congestion equally.
To simply discard on a last-in, first-discarded basis may not be fair.
As an example of a technique that might promote fairness, a node can maintain a separate
queue for each logical connection or for each source-destination pair.
If all of the queue buffers are of equal length, then the queues with the highest traffic load
will suffer discards more often, allowing lower-traffic connections a fair share of the
capacity.
Quality of Service
Different traffic flows have different priorities; for example, network management traffic,
particularly during times of congestion or failure, is more important than application traffic.
It is particularly important during periods of congestion that traffic flows with different
requirements be treated differently and provided a different quality of service (QoS).
For example, a node might transmit higher-priority packets ahead of lower-priority packets
in the same queue. Or a node might maintain different queues for different QoS levels and
give preferential treatment to the higher levels.
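The preferential-treatment idea can be sketched with a priority queue; the priority levels and packet labels below are illustrative assumptions, not values from the text:

```python
import heapq

class QosQueue:
    """Priority-based dispatching: lower number = higher priority.
    A sequence counter keeps FIFO order within one priority level."""
    def __init__(self):
        self._heap = []
        self._seq = 0

    def enqueue(self, priority, packet):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dispatch(self):
        return heapq.heappop(self._heap)[2]

q = QosQueue()
q.enqueue(2, "app-data-1")
q.enqueue(0, "network-management")  # highest priority, e.g. during congestion
q.enqueue(2, "app-data-2")
first = q.dispatch()
```

The same structure also covers the per-QoS-level multiple-queue variant, since the priority field effectively selects which logical queue is served first.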
Reservations
One way to avoid congestion and also to provide assured service to applications is to use
a reservation scheme. Such a scheme is an integral part of ATM networks. When a logical
connection is established, the network and the user enter into a traffic contract, which
specifies a data rate and other characteristics of the traffic flow.
The network agrees to give a defined QoS so long as the traffic flow is within contract
parameters; excess traffic is either discarded or handled on a best-effort basis, subject to
discard.
If the current outstanding reservations are such that the network resources are inadequate to
meet the new reservation, then the new reservation is denied.
One aspect of a reservation scheme is traffic policing. A node in the network, the node to
which the end system attaches, monitors the traffic flow and compares it to the traffic
contract. Excess traffic is either discarded or marked to indicate that it is liable to discard or
delay.
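Traffic policing of this kind is often sketched as a token bucket; the rate and burst values below are illustrative assumptions, not parameters of any actual traffic contract:

```python
class TokenBucket:
    """Token-bucket policer: tokens accrue at `rate` octets/second
    up to a maximum of `burst`; a packet conforms if enough tokens remain."""
    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = 0.0

    def conforms(self, now, size):
        """True if a packet of `size` octets arriving at time `now`
        is within the contract; excess traffic returns False."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False   # excess traffic: discard, or mark as liable to discard

tb = TokenBucket(rate=100.0, burst=200.0)
ok1 = tb.conforms(0.0, 150)   # within the initial burst allowance
ok2 = tb.conforms(0.0, 150)   # only 50 tokens left: non-conforming
ok3 = tb.conforms(2.0, 150)   # bucket refilled after 2 seconds
```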
May/June 2012
Or
7. (b) Discuss briefly the effect of congestion in networks. (6 Marks)
May/June 2013
Let us consider what happens in a network with finite buffers if no attempt is made to
control congestion or to restrain input from end systems.
At light loads, throughput and hence network utilization increases as the offered load increases. As the load continues to increase, a point is reached beyond which the throughput of the network increases at a rate slower than the rate at which offered load is increased. This is because the network has entered a state of moderate congestion. In this region, the network continues to cope with the load, although with increased delays.
The departure of throughput from the ideal is accounted for by a number of factors. The
load is unlikely to be uniformly spread throughout the network. Therefore, while some
nodes may experience moderate congestion, others may be experiencing severe congestion
and may need to discard traffic.
In addition, as the load increases, the network will attempt to balance the load by routing
packets through areas of lower congestion.
For the routing function to work, an increased number of routing messages must be
exchanged between nodes to alert each other to areas of congestion; this overhead reduces
the capacity available for data packets.
As the load on the network continues to increase, the queue lengths of the various nodes
continue to grow. Eventually, a point is reached beyond which throughput actually drops
with increased offered load.
The reason for this is that the buffers at each node are of finite size. When the buffers at a node become full, it must discard packets. Discarded packets are retransmitted by their sources, so as more and more packets are retransmitted, the load on the system grows, and more buffers become saturated.
While the system is trying desperately to clear the backlog, end systems are pumping old
and new packets into the system. Even successfully delivered packets may be retransmitted
because it takes too long, at a higher layer, to acknowledge them. Under these circumstances,
the effective capacity of the system is virtually zero.
5. (a) Explain in detail the following congestion control techniques.
Nov/Dec 2012
Backward information is transmitted either by altering bits in a data packet headed for the
source to be controlled or by transmitting separate control packets to the source.
Forward
Notifies the user that congestion avoidance procedure should be initiated where applicable
for traffic in the same direction as the received packet. It indicates that this packet, on this
logical connection, has encountered congested resources.
Again, this information may be transmitted either as altered bits in data packets or in
separate control packets.
Explicit congestion signaling approaches are divided into three general categories
Binary
A bit is set in a data packet as it is forwarded by the congested node. When a source
receives a binary indication of congestion on a logical connection, it reduces its traffic flow.
Credit Based:
The credit indicates how many octets or how many packets the source may transmit.
Rate Based
These schemes are based on providing an explicit data rate limit to the source over a logical
connection. The source may transmit data at a rate up to the set limit.
5. (b) Explain the Kendall's notation in detail. (4 Marks)
Nov/Dec 2012
Kendall proposed a set of notations for queuing models, which is widely used in the literature. The common pattern of notation for a queuing model is:
(a/b/c) : (d/e)
where a is the arrival distribution, b the service-time distribution, c the number of servers, d the capacity of the system, and e the queue discipline.
6. (b) Explain about traffic rate management in frame relay networks. (8 Marks) Nov/Dec 2012
Model-2
Traffic rate management:
The simplest way to cope with congestion is for the frame-relaying network to discard
frames arbitrarily, with no regard to the source of a particular frame. In that case, because
there is no reward for restraint, the best strategy for any individual end system is to transmit
frames as rapidly as possible.
To provide for a fairer allocation of resources, the frame relay bearer service includes the concept of a committed information rate (CIR). This is a rate, in bits per second, that the network agrees to support for a particular frame-mode connection.
Any data transmitted in excess of the CIR is vulnerable to discard in the event of
congestion. Despite the use of the term committed, there is no guarantee that even the CIR
will be met.
In case of extreme congestion, the network may be forced to provide a service at less than
the CIR for a given connection.
However, when it comes time to discard frames, the network will choose to discard frames
on connections that are exceeding their CIR before discarding frames that are within their
CIR.
In theory, each frame-relaying node should manage its affairs so that the aggregate of CIRs
of all the connections of all the end systems attached to the node does not exceed the
capacity of the node.
In addition, the aggregate of the CIRs should not exceed the physical data rate across the user-network interface, known as the access rate. The limitation imposed by access rate can be expressed as follows:
Σi CIRi,j ≤ AccessRatej
where CIRi,j is the committed information rate for connection i on channel j, and AccessRatej is the data rate of user channel j.
The CIR by itself does not provide much flexibility in dealing with traffic rates. So two additional parameters, assigned on permanent connections and negotiated on switched connections, are needed:
Committed burst size (Bc):
The maximum amount of data that the network agrees to transfer, under normal conditions, over a measurement interval T. These data may not be contiguous; that is, they may appear in one frame or in several frames.
Excess burst size (Be):
The maximum amount of data in excess of Bc that the network will attempt to transfer,
under normal conditions, over a measurement interval T. These data are uncommitted in
the sense that the network does not commit to delivery under normal conditions. Put
another way, the data that represent Be are delivered with lower probability than the
data within Bc.
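The Bc/Be rule over one measurement interval T can be sketched as follows: frames whose cumulative octets stay within Bc are committed, those within Bc + Be are marked discard-eligible (DE), and anything beyond is discarded. The frame sizes and parameter values are illustrative assumptions:

```python
def classify_frames(frame_sizes, bc, be):
    """Classify each frame in one measurement interval T by cumulative octets:
    within Bc -> commit, within Bc + Be -> mark DE, beyond -> discard."""
    cumulative, decisions = 0, []
    for size in frame_sizes:
        cumulative += size
        if cumulative <= bc:
            decisions.append("commit")
        elif cumulative <= bc + be:
            decisions.append("mark-DE")
        else:
            decisions.append("discard")
    return decisions

d = classify_frames([400, 400, 400, 400], bc=1000, be=500)
```

With these illustrative numbers, the third frame pushes the cumulative count past Bc (so it is marked DE), and the fourth exceeds Bc + Be (so it is discarded).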
May/June 2013
With packet switching, data are transmitted in short blocks, called packets. A typical upper
bound on packet length is 1000 octets (bytes). If a source has a longer message to send, the
message is broken up into a series of packets.
Each packet contains a portion of the user's data plus some control information.
The control information, at a minimum, includes the information that the network requires
to be able to route the packet through the network and deliver it to the intended destination.
At each node, the packet is received, stored briefly, and passed on to the next node.
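A minimal packetization sketch, assuming the 1000-octet upper bound mentioned above; the header field names are illustrative, not a real protocol format:

```python
MAX_DATA = 1000   # assumed upper bound on the data portion, in octets

def packetize(message: bytes, dest: str):
    """Break a long message into packets of at most MAX_DATA octets,
    each carrying control information needed for routing and reassembly."""
    chunks = [message[i:i + MAX_DATA] for i in range(0, len(message), MAX_DATA)]
    packets = []
    for seq, chunk in enumerate(chunks):
        header = {"dest": dest, "seq": seq, "total": len(chunks)}
        packets.append((header, chunk))   # control information + user data
    return packets

pkts = packetize(b"x" * 2500, dest="node-B")
```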
[Figure: frame relay protocol architecture, showing LAPF control and LAPF core above the physical layer]
Frame relay involves the physical layer and a data link control protocol known as LAPF (Link Access Procedure for Frame Mode Bearer Services). There are two versions of LAPF defined. All frame relay networks involve the implementation of the LAPF core protocol on all subscriber systems and on all frame relay nodes. LAPF core provides a minimal set of data link control functions, consisting of the following:
Frame delimiting, alignment, and transparency
Frame multiplexing/demultiplexing using the address field
Inspection of the frame to ensure that it consists of an integral number of octets
Inspection of the frame to ensure that it is neither too long nor too short
Detection of transmission errors
Congestion control functions
Above this, the user may choose to select additional data link or network-layer end-to-end
functions. One possibility is known as the LAPF control protocol. LAPF control is not part
of the frame relay service but may be implemented only in the end systems to provide flow
and error control.
The frame relay service using LAPF core has the following properties for the transmission
of data.
Preservation of the order of frame transfer from one edge of the network to the other
A small probability of frame loss
Identify the parameters of the system, such as the arrival rate, service time, Queue
capacity, and perhaps draw a diagram of the system.
Identify the system states. (A state will generally represent the integer number of
customers, people, jobs, calls, messages, etc. in the system and may or may not be
limited.)
Draw a state transition diagram that represents the possible system states and
identify the rates to enter and leave each state. This diagram is a representation of a
Markov chain.
Because the state transition diagram represents the steady state situation between
states there is a balanced flow between states so the probabilities of being in
adjacent states can be related mathematically in terms of the arrival and service
rates and state probabilities.
Express all the state probabilities in terms of the empty state probability, using the
inter-state transition relationships.
Determine the empty state probability by using the fact that all state probabilities
always sum to 1.
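The steps above can be sketched for a single-server queue with finite capacity K (a birth-death Markov chain with arrival rate λ and service rate μ); the parameter values are illustrative assumptions:

```python
def state_probabilities(lam, mu, K):
    """Steady-state probabilities for an M/M/1 queue with capacity K.
    Balanced flow between adjacent states gives p_n = rho**n * p0 (steps 4-5);
    p0 follows from the probabilities summing to 1 (step 6)."""
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]   # p_n / p0 for n = 0..K
    p0 = 1.0 / sum(weights)                      # empty-state probability
    return [w * p0 for w in weights]

probs = state_probabilities(lam=1.0, mu=2.0, K=10)
```

Each probability p_n is the long-run fraction of time the system holds exactly n items.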
The above figure shows a generalization of the simple model we have been discussing for
multiple servers, all sharing a common queue. If an item arrives and at least one server is
available, then the item is immediately dispatched to that server.
It is assumed that all servers are identical; thus, if more than one server is available, it
makes no difference which server is chosen for the item.
If all servers are busy, a queue begins to form. As soon as one server becomes free, an item
is dispatched from the queue using the dispatching discipline in force.
If we have N identical servers, then ρ is the utilization of each server, and we can consider Nρ to be the utilization of the entire system; this latter term is often referred to as the traffic intensity, u.
Thus, the theoretical maximum utilization is N × 100%, and the theoretical maximum input rate is:
λmax = N/Ts
The key characteristics typically chosen for the multiserver queue correspond to those for the
single-server queue. That is, we assume an infinite population and an infinite queue size, with a
single infinite queue shared among all servers. Unless otherwise stated, the dispatching discipline
is FIFO.
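Under these standard M/M/N assumptions, the probability that an arriving item finds all servers busy and must wait is given by the Erlang C formula; the parameter values in this sketch are illustrative assumptions:

```python
import math

def erlang_c(n_servers, lam, ts):
    """Erlang C: probability an arrival must wait in an M/M/N queue.
    u = lam * ts is the traffic intensity; rho = u / N the per-server utilization."""
    u = lam * ts
    rho = u / n_servers
    if rho >= 1:
        raise ValueError("input rate exceeds the theoretical maximum N/Ts")
    top = u ** n_servers / math.factorial(n_servers)
    denom = (1 - rho) * sum(u ** k / math.factorial(k)
                            for k in range(n_servers)) + top
    return top / denom

p_wait_1 = erlang_c(n_servers=1, lam=0.5, ts=1.0)   # reduces to rho for N = 1
p_wait_2 = erlang_c(n_servers=2, lam=0.5, ts=1.0)   # adding a server cuts waiting
```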
12. (a) (i) Explain with an example the implementation of single server queues. (8)
(Or)
(b) (i) Explain the effects of congestion in packet switching networks. (8)
(ii) Explain how congestion avoidance is done in frame relay networks. (8)