
VII Sem ECE

CS2060 High Speed Networks

DHANALAKSHMI SRINIVASAN COLLEGE OF ENGINEERING

(An ISO 9001:2008 Certified Institution)


Coimbatore-641 105

B.E./B.Tech. DEGREE EXAMINATION

VII Semester

Electronics and Communication Engineering

CS2060 HIGH SPEED NETWORKS

QUESTION & ANSWER BANK

2013-14


Unit-2
PART A (2 Marks)
1. State the key characteristics to be considered for deriving the analytic equations for the
queuing model.

Nov/Dec 2011

Key characteristics for deriving the analytic equations for the queuing model are:
Item population
Queue size
Dispatching discipline
2. What is meant by choke packets?

Nov/Dec 2011

A choke packet is a control packet generated at a congested node and transmitted back to a
source node to restrict traffic flow.
Example: Internet Control Message Protocol Source Quench packet.
3. What are the advantages of packet over circuit switching?

May/June 2012

Advantages of packet switching over circuit switching are:


Line efficiency is greater, because a single node-to-node link can be dynamically shared by
many packets over time.
A packet switching network can carry out data-rate conversion. Two stations of different
data rates can exchange packets, because each connects to its node at its proper data rate.
4. Why congestion occurs in the networks?

May/June 2012

Congestion occurs when the number of packets being transmitted through a network
begins to approach the packet-handling capacity of the network.
5. List and explain the parameters for a single server queue.

Nov/Dec 2012

Parameter      Explanation
Arrival rate   Items arriving per second
Waiting time   A certain number of items will be waiting in the waiting line
Service time   Time interval between the dispatching of an item to the server and the
               departure of that item from the server
Utilization    Fraction of time that the server is busy


Parameter                          Explanation
Items resident in queuing system   The average number of items resident in the system,
                                   including the item being served and the items waiting
Residence time                     The average time that an item spends in the system

6. What is meant by BECN?

Nov/Dec 2012

Backward Explicit Congestion Notification (BECN) notifies the user that
congestion avoidance procedures should be initiated, where applicable, for traffic in the
opposite direction of the received frame.
It indicates that the frames that the user transmits on this logical connection may
encounter congested resources.
7. What is meant by the term Congestion in networks?

May/June 2013

Congestion is a state occurring in a network when the load on the network is greater than the
capacity of the network.
8. What are the types of queuing models?

May/June 2013

Types of queuing models are:


The Single Server Queue
The Multiserver Queue
9. State the categories of congestion control.

Model-1

Categories of congestion control are:


Open loop (prevention)
Closed loop (Removal)
10. What are the characteristics of queuing process?
Characteristics of queuing process are:
Arrival Pattern
Service Pattern
Number of servers
System Capacity
Queue discipline

Model-1


11. What is the drawback of back pressure?

Model-2

It can be used only in a connection-oriented network that allows hop-by-hop flow
control.
Neither frame relay nor ATM has any capability for restricting flow on a hop-by-hop
basis.
12. State the discard strategy.

Model-2

It deals with the most fundamental response to congestion. When congestion becomes
severe enough, the network is forced to discard frames, and this must be done in a way that is
fair to all users.
13. What is meant by Kendall's notation?

Nov/Dec 2013

Kendall's notation is a standard shorthand for describing a queuing model, written
(a/b/c) : (d/e), where a is the inter-arrival time distribution, b the service time distribution,
c the number of servers, d the maximum number of customers allowed in the system, and
e the queue discipline.
14. Mention the congestion control techniques used in packet switching networks.

Nov/Dec 2013

Congestion control techniques used in packet-switching networks are: backpressure,
choke packets, implicit congestion signaling, and explicit congestion signaling.


PART B (16 Marks)


1. (a) Describe the single server queuing model with its structures and parameters. (10 Marks)
Nov/Dec 2011

Or
4. (a)Explain in detail about single server queues and its application. (8 Marks) May/June 2012
Or
6. (a) Explain the single server queuing model and its applications. (8 Marks)

Nov/Dec 2012

Or
7. (a) Explain Single Server Queuing model in detail. (10 Marks)

May/June 2013

The simplest queuing system is depicted in the figure above. The central element of the system
is a server, which provides some service to items. Items from some population of items
arrive at the system to be served. If the server is idle, an item is served immediately.
Otherwise, an arriving item joins a waiting line. When the server has completed serving
an item, the item departs.
If there are items waiting in the queue, one is immediately dispatched to the server. The
server in this model can represent anything that performs some function or service for a
collection of items.


Examples: a processor provides service to processes; a transmission line provides a transmission service to
packets or frames of data; an I/O device provides a read or write service for I/O requests.
Queue Parameters
The above figure also illustrates some important parameters associated with a queuing
model. Items arrive at the facility at some average rate λ (items arriving per second). At any
given time, a certain number of items will be waiting in the queue (zero or more); the
average number waiting is w, and the mean time that an item must wait is Tw.
Tw is averaged over all incoming items, including those that do not wait at all. The server
handles incoming items with an average service time Ts; this is the time interval between
the dispatching of an item to the server and the departure of that item from the server.
Utilization is the fraction of time that the server is busy, measured over some interval of
time.
Finally, two parameters apply to the system as a whole. The average number of items
resident in the system, including the item being served (if any) and the items waiting (if
any), is r; and the average time that an item spends in the system, waiting and being served,
is Tr; we refer to this as the mean residence time. If we assume that the capacity of the
queue is infinite, then no items are ever lost from the system; they are just delayed until
they can be served. Under these circumstances, the departure rate equals the arrival rate. As
the arrival rate, which is the rate of traffic passing through the system, increases, the
utilization increases and, with it, congestion. Thus, the theoretical maximum input rate that
can be handled by the system is:

λmax = 1/Ts

To proceed, we need to make some assumption about this model:


Item population: Typically, we assume an infinite population. This means that the arrival
rate is not altered by the loss of population. If the population is finite, then the population
available for arrival is reduced by the number of items currently in the system; this would
typically reduce the arrival rate proportionally.
Queue size: Typically, we assume an infinite queue size. Thus, the waiting line can grow
without bound. With a finite queue, it is possible for items to be lost from the system. In

practice, any queue is finite. In many cases, this will make no substantive difference to the analysis. We
address this issue briefly, below.
Dispatching discipline: When the server becomes free, and if there is more than one item
waiting, a decision must be made as to which item to dispatch next. The simplest approach
is first-in, first-out; this discipline is what is normally implied when the term queue is used.
Another possibility is last-in, first-out. One that you might encounter in practice is a
dispatching discipline based on service time. For example, a packet-switching node may
choose to dispatch packets on the basis of shortest first (to generate the most outgoing
packets) or longest first (to minimize processing time relative to transmission time).
Unfortunately, a discipline based on service time is very difficult to model analytically.
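The single-server parameters above, together with Little's formula (r = λTr, w = λTw), can be sketched numerically. This is a minimal illustration of the M/M/1 case with made-up arrival and service values, not figures from the text:

```python
# Minimal sketch of the single-server (M/M/1) queue parameters described
# above: utilization rho = lambda * Ts, residence time Tr, waiting time Tw,
# and the mean item counts r and w via Little's formula. The numeric
# arrival rate and service time are illustrative assumptions.

def mm1_metrics(arrival_rate, service_time):
    """Return (rho, Tr, Tw, r, w) for an M/M/1 queue."""
    rho = arrival_rate * service_time          # utilization (must be < 1)
    if rho >= 1.0:
        raise ValueError("queue is unstable: rho must be < 1")
    Tr = service_time / (1.0 - rho)            # mean residence time
    Tw = Tr - service_time                     # mean waiting time
    r = arrival_rate * Tr                      # items in system (Little's law)
    w = arrival_rate * Tw                      # items waiting   (Little's law)
    return rho, Tr, Tw, r, w

rho, Tr, Tw, r, w = mm1_metrics(arrival_rate=8.0, service_time=0.1)
print(rho, round(Tr, 3), round(Tw, 3), round(r, 3), round(w, 3))
```

Note how r and w follow from the time-based quantities purely through Little's formula, without any extra distributional assumptions.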
1. (b)Explain the Kendalls notation and the common distributions for the queuing model.
(6 Marks)

Nov/Dec 2011

Kendall's Notation
Kendall proposed a set of notations for queuing models, which is widely used in the
literature. The common pattern of notation for a queuing model is:
(a/b/c) : (d/e)
where:
a : Probability distribution of the inter-arrival time
b : Probability distribution of the service time
c : Number of servers in the queuing model
d : Maximum allowed customers in the system
e : Queue discipline

Common distributions for the queuing model:

Symbol  Name                      Description
M       Markovian                 Exponential service time
D       Degenerate distribution   A deterministic or fixed service time
Ek      Erlang distribution       An Erlang distribution with k as the shape parameter
G       General distribution      Refers to independent service times
PH      Phase-type distribution   Some of the above distributions are special cases of the
                                  phase-type, often used in place of a general distribution

2. (a) Explain in detail the explicit and implicit congestion signaling. (8 Marks) Nov/Dec 2011
Explicit Congestion Signaling
Explicit congestion control techniques operate over connection-oriented networks and
control the flow of packets over individual connections. Explicit congestion signaling
approaches can work in one of two directions:
Backward
Notifies the source that congestion avoidance procedures should be initiated where
applicable for traffic in the opposite direction of the received packet. It indicates that the
packets that the user transmits on this logical connection may encounter congested
resources.
Backward information is transmitted either by altering bits in a data packet headed for the
source to be controlled or by transmitting separate control packets to the source.
Forward
Notifies the user that congestion avoidance procedure should be initiated where applicable
for traffic in the same direction as the received packet. It indicates that this packet, on this
logical connection, has encountered congested resources.
Again, this information may be transmitted either as altered bits in data packets or in
separate control packets.
Explicit congestion signaling approaches are divided into three general categories:
Binary
A bit is set in a data packet as it is forwarded by the congested node. When a source
receives a binary indication of congestion on a logical connection, it reduces its traffic flow.
Credit Based:
The credit indicates how many octets or how many packets the source may transmit.
Rate Based
These schemes are based on providing an explicit data rate limit to the source over a logical
connection. The source may transmit data at a rate up to the set limit.
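From the source's side, a rate-based limit can be enforced with a token bucket so the source never exceeds the signaled rate. This is a minimal sketch; the class name, rate, and burst values are illustrative assumptions, not part of any standard:

```python
# Sketch of rate-based explicit congestion signaling from the source's
# side: the network hands the source a rate limit, and the source paces
# itself with a token bucket so transmissions stay within that limit.

class RateLimitedSource:
    def __init__(self, rate_limit_bps, bucket_bits):
        self.rate = rate_limit_bps      # limit signaled by the network
        self.capacity = bucket_bits     # burst tolerance
        self.tokens = bucket_bits
        self.last = 0.0                 # time of last send attempt

    def try_send(self, now, frame_bits):
        """Send frame_bits at time now if the signaled rate allows it."""
        # Refill tokens for the time elapsed, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_bits <= self.tokens:
            self.tokens -= frame_bits
            return True                 # within the signaled rate
        return False                    # would exceed the limit; hold frame

src = RateLimitedSource(rate_limit_bps=1000.0, bucket_bits=500.0)
print(src.try_send(0.0, 400))   # True: within the burst allowance
print(src.try_send(0.1, 400))   # False: tokens not yet replenished
print(src.try_send(0.5, 400))   # True: bucket has refilled
```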


Implicit Congestion Signaling


Implicit signaling is an effective congestion control technique in connectionless or
datagram configurations such as IP-based internets. In such cases there are no logical
connections through the internet on which flow can be regulated.
However, between the two end systems, logical connections are established at the TCP
level. TCP includes mechanisms for acknowledging receipt of TCP segments and for
regulating the flow of data between source and destination on a TCP connection. TCP
congestion control techniques are based on the ability to detect increased delay and segment
loss.
Implicit signaling can also be used in connection-oriented networks, for example, in frame
relay networks. The LAPF control protocol includes facilities similar to those of TCP for
flow and error control. LAPF control is capable of detecting lost frames and adjusting the
flow of data accordingly.
2. (b) List and explain the frame relay congestion control techniques. (8 Marks) Nov/Dec 2011
Or
3. (a) Explain in detail about frame relay congestion control technique. (8 Marks)
May/June 2012

Model-2

Frame Relay Congestion Control Techniques

Technique                                   Type                  Function                                        Key elements
Discard control                             Discard strategy      Provides guidance to network concerning         DE bit
                                                                  which frames to discard
Backward explicit congestion notification   Congestion avoidance  Provides guidance to end systems about          BECN bit
                                                                  congestion in network
Forward explicit congestion notification    Congestion avoidance  Provides guidance to end systems about          FECN bit
                                                                  congestion in network
Implicit congestion notification            Congestion recovery   End system infers congestion from frame loss    Sequence numbers in
                                                                                                                  higher-layer PDU


The above table lists the congestion control techniques defined in various ANSI documents.
Discard Strategy deals with the most fundamental response to congestion. When congestion
becomes severe enough, the network is forced to discard frames.
Congestion avoidance procedures are used at the onset of congestion to minimize the effect
on the network. Thus, there must be some explicit signaling mechanism from the network
that will trigger the congestion avoidance.
Congestion recovery procedures are used to prevent network collapse in the face of severe
congestion. These procedures are typically initiated when the network has begun to drop
frames due to congestion. Such dropped frames will be reported by some higher layer of
software and serve as an implicit signaling mechanism.
Objectives of frame relay congestion control are:
Minimize frame discard.
Maintain, with high probability and minimum variance, an agreed quality of service.
Minimize the possibility that one end user can monopolize network resources at the
expense of other end users.
Be simple to implement, and place little overhead on either end user or network.
3. (b) Explain about traffic management in packet switching. (8 Marks) May/June 2012
Model-2
When a node is saturated and must discard packets, it can apply some simple rule, such as
discard the most recent arrival. However, other considerations can be used to refine the
application of congestion control techniques and discard policy.
Fairness
As congestion develops, flows of packets between sources and destinations will experience
increased delays and, with high congestion, packet losses. In the absence of other
requirements, we would like to assure that the various flows suffer from congestion equally.
Simply discarding on a last-in, first-discarded basis may not be fair.
As an example of a technique that might promote fairness, a node can maintain a separate
queue for each logical connection or for each source-destination pair.

If all of the queue buffers are of equal length, then the queues with the highest traffic load
will suffer discards more often, allowing lower-traffic connections a fair share of the
capacity.
Quality of Service
Different traffic flows have different priorities; for example, network management traffic,
particularly during times of congestion or failure, is more important than application traffic.
It is particularly important during periods of congestion that traffic flows with different
requirements be treated differently and provided a different quality of service (QoS).
For example, a node might transmit higher-priority packets ahead of lower-priority packets
in the same queue. Or a node might maintain different queues for different QoS levels and
give preferential treatment to the higher levels.
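The per-level queue idea above can be sketched as follows; the two-level setup and the packet labels are illustrative assumptions:

```python
# Sketch of QoS dispatching with one queue per priority level: the
# dispatcher always serves the highest-priority non-empty queue, so
# management traffic overtakes application traffic during congestion.

from collections import deque

class PriorityDispatcher:
    def __init__(self, levels):
        self.queues = [deque() for _ in range(levels)]  # index 0 = highest

    def enqueue(self, packet, level):
        self.queues[level].append(packet)

    def dispatch(self):
        """Return the next packet, preferring higher-priority queues."""
        for q in self.queues:
            if q:
                return q.popleft()
        return None  # nothing queued

d = PriorityDispatcher(levels=2)
d.enqueue("app-data-1", level=1)
d.enqueue("mgmt-ping", level=0)   # network management traffic
d.enqueue("app-data-2", level=1)
print(d.dispatch())  # management traffic goes first despite arriving later
```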
Reservations
One way to avoid congestion, and also to provide assured service to applications, is to use
a reservation scheme. Such a scheme is an integral part of ATM networks. When a logical
connection is established, the network and the user enter into a traffic contract, which
specifies a data rate and other characteristics of the traffic flow.
The network agrees to give a defined QoS so long as the traffic flow is within contract
parameters; excess traffic is either discarded or handled on a best-effort basis, subject to
discard.
If the current outstanding reservations are such that the network resources are inadequate to
meet the new reservation, then the new reservation is denied.
One aspect of a reservation scheme is traffic policing. A node in the network, the node to
which the end system attaches, monitors the traffic flow and compares it to the traffic
contract. Excess traffic is either discarded or marked to indicate that it is liable to discard or
delay.


4. (b) Describe about effect of congestion. (8 Marks)

May/June 2012

Model-2

Or
7. (b) Discuss briefly the effect of congestion in networks. (6 Marks)

May/June 2013

Let us consider what happens in a network with finite buffers if no attempt is made to
control congestion or to restrain input from end systems.
At light loads, throughput and hence network utilization increase as the offered load
increases. As the load continues to increase, a point is reached beyond which the throughput
of the network increases at a rate slower than the rate at which offered load is increased.
This is due to the network entering a region of moderate congestion. In this region, the network
continues to cope with the load, although with increased delays.


The departure of throughput from the ideal is accounted for by a number of factors. The
load is unlikely to be uniformly spread throughout the network. Therefore, while some
nodes may experience moderate congestion, others may be experiencing severe congestion
and may need to discard traffic.
In addition, as the load increases, the network will attempt to balance the load by routing
packets through areas of lower congestion.
For the routing function to work, an increased number of routing messages must be
exchanged between nodes to alert each other to areas of congestion; this overhead reduces
the capacity available for data packets.
As the load on the network continues to increase, the queue lengths of the various nodes
continue to grow. Eventually, a point is reached beyond which throughput actually drops
with increased offered load.
The reason for this is that the buffers at each node are of finite size. When the buffers at a
node become full, it must discard packets. So as more and more packets are retransmitted,
the load on the system grows, and more buffers become saturated.
While the system is trying desperately to clear the backlog, end systems are pumping old
and new packets into the system. Even successfully delivered packets may be retransmitted
because it takes too long, at a higher layer, to acknowledge them. Under these circumstances,
the effective capacity of the system is virtually zero.
5. (a) Explain in detail the following congestion control techniques.

Nov/Dec 2012

(1) Back pressure. (4 Marks)


(2) Choke packet. (4 Marks)
(3) Explicit congestion signaling. (4 Marks)
(1) Back pressure
Backpressure can be selectively applied to logical connections, so that the flow from
one node to the next is only restricted or halted on some connections, generally the ones
with the most traffic. In this case, the restriction propagates back along the connection
to the source.


Backpressure is of limited utility. It can be used in a connection-oriented network that
allows hop-by-hop flow control. X.25-based packet-switching networks typically
provide this feature.
However, neither frame relay nor ATM has any capability for restricting flow on a
hop-by-hop basis. In the case of IP-based internets, there have traditionally been no
built-in facilities for regulating the flow of data from one router to the next along a path
through the internet.
(2) Choke Packet
A choke packet is a control packet generated at a congested node and transmitted back to a
source node to restrict traffic flow.
An example of a choke packet is the ICMP (Internet Control Message Protocol)
Source Quench packet. Either a router or a destination end system may send this
message to a source end system, requesting that it reduce the rate at which it is sending
traffic to the internet destination.
On receipt of a source quench message, the source host should cut back the rate at
which it is sending traffic to the specified destination until it no longer receives source
quench messages.
The source quench message can be used by a router or host that must discard IP
datagrams because of a full buffer.
In that case, the router or host will issue a source quench message for every datagram
that it discards. The choke packet is a relatively crude technique for controlling
congestion.
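How a source might react to choke packets can be sketched as follows. The halving and additive-recovery constants are illustrative assumptions, since ICMP does not mandate a specific reduction policy:

```python
# Sketch of a source reacting to choke packets (e.g. ICMP Source Quench):
# cut the sending rate sharply on each quench, then recover gradually
# once quenches stop arriving. The constants are illustrative.

class QuenchedSource:
    def __init__(self, rate_pps):
        self.rate = rate_pps

    def on_source_quench(self):
        self.rate = max(1.0, self.rate / 2.0)   # multiplicative decrease

    def on_quiet_interval(self):
        self.rate += 10.0                        # gradual additive recovery

src = QuenchedSource(rate_pps=100.0)
src.on_source_quench()      # congested router complained
src.on_source_quench()
print(src.rate)             # 25.0
src.on_quiet_interval()     # no quench this interval
print(src.rate)             # 35.0
```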
(3) Explicit congestion signaling
Explicit congestion control techniques operate over connection-oriented networks and
control the flow of packets over individual connections. Explicit congestion signaling
approaches can work in one of two directions:
Backward
Notifies the source that congestion avoidance procedures should be initiated where
applicable for traffic in the opposite direction of the received packet. It indicates that the
packets that the user transmits on this logical connection may encounter congested
resources.

Backward information is transmitted either by altering bits in a data packet headed for the
source to be controlled or by transmitting separate control packets to the source.
Forward
Notifies the user that congestion avoidance procedure should be initiated where applicable
for traffic in the same direction as the received packet. It indicates that this packet, on this
logical connection, has encountered congested resources.
Again, this information may be transmitted either as altered bits in data packets or in
separate control packets.
Explicit congestion signaling approaches are divided into three general categories:
Binary
A bit is set in a data packet as it is forwarded by the congested node. When a source
receives a binary indication of congestion on a logical connection, it reduces its traffic flow.
Credit Based:
The credit indicates how many octets or how many packets the source may transmit.
Rate Based
These schemes are based on providing an explicit data rate limit to the source over a logical
connection. The source may transmit data at a rate up to the set limit.
5. (b) Explain Kendall's notation in detail. (4 Marks)

Nov/Dec 2012

Kendall proposed a set of notations for queuing models, which is widely used in the
literature. The common pattern of notation for a queuing model is:
(a/b/c) : (d/e)
where:
a : Probability distribution of the inter-arrival time
b : Probability distribution of the service time
c : Number of servers in the queuing model
d : Maximum allowed customers in the system
e : Queue discipline

Some standard notation for distributions is:
M for a Markovian (exponential) distribution
Ek for an Erlang distribution with k phases
D for a deterministic (constant) distribution
G for a general distribution
PH for a phase-type distribution

6. (b) Explain about traffic rate management in frame relay networks. (8 Marks) Nov/Dec 2012
Model-2
Traffic rate management:
The simplest way to cope with congestion is for the frame-relaying network to discard
frames arbitrarily, with no regard to the source of a particular frame. In that case, because
there is no reward for restraint, the best strategy for any individual end system is to transmit
frames as rapidly as possible.
To provide for a fairer allocation of resources, the frame relay bearer service includes the
concept of a committed information rate (CIR). This is a rate, in bits per second, that the
network agrees to support for a particular frame-mode connection.
Any data transmitted in excess of the CIR is vulnerable to discard in the event of
congestion. Despite the use of the term committed, there is no guarantee that even the CIR
will be met.
In case of extreme congestion, the network may be forced to provide a service at less than
the CIR for a given connection.
However, when it comes time to discard frames, the network will choose to discard frames
on connections that are exceeding their CIR before discarding frames that are within their
CIR.
In theory, each frame-relaying node should manage its affairs so that the aggregate of CIRs
of all the connections of all the end systems attached to the node does not exceed the
capacity of the node.
In addition, the aggregate of the CIRs should not exceed the physical data rate across the
user-network interface, known as the access rate. The limitation imposed by access rate can
be expressed as follows:

Σi CIRi,j ≤ AccessRatej

where:
CIRi,j is the committed information rate for connection i on channel j
AccessRatej is the data rate of user access channel j

The CIR by itself does not provide much flexibility in dealing with traffic rates. So
two additional parameters, assigned on permanent connections and negotiated on
switched connections, are needed:
Committed burst size (Bc):
The maximum amount of data that the network agrees to transfer, under normal conditions,
over a measurement interval T. These data may not be contiguous; that is, they may appear
in one frame or in several frames.
Excess burst size (Be):
The maximum amount of data in excess of Bc that the network will attempt to transfer,
under normal conditions, over a measurement interval T. These data are uncommitted in
the sense that the network does not commit to delivery under normal conditions. Put
another way, the data that represent Be are delivered with lower probability than the
data within Bc.
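The CIR/Bc/Be policing described above can be sketched as a simple classifier over one measurement interval T = Bc/CIR. The numeric rates below are illustrative, and the label strings are assumptions for the sketch:

```python
# Sketch of frame relay traffic policing over one measurement interval
# T = Bc / CIR: cumulative data within Bc is forwarded normally, data
# between Bc and Bc + Be is marked discard eligible (DE), and data beyond
# that is dropped. Return values are illustrative labels.

def police(bits_in_interval, CIR, Bc, Be):
    """Classify the cumulative bits sent in one interval T = Bc / CIR."""
    if bits_in_interval <= Bc:
        return "forward"              # within committed burst
    if bits_in_interval <= Bc + Be:
        return "forward-DE"           # mark discard eligible
    return "discard"                  # beyond excess burst

print(police(30_000, CIR=64_000, Bc=64_000, Be=32_000))   # forward
print(police(80_000, CIR=64_000, Bc=64_000, Be=32_000))   # forward-DE
print(police(120_000, CIR=64_000, Bc=64_000, Be=32_000))  # discard
```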


8. Write notes on congestion control techniques used in


(i) Packet Switching Networks (8 Marks)
(ii) Frame relay Networks (8 Marks)

May/June 2013

Model-1

(i) Packet Switching Networks


The packet switching network is a distributed collection of packet switching nodes. Ideally,
all packet-switching nodes would always know the state of the entire network.
However, because the nodes are distributed, there is always a time delay between a change
in status in one portion of the network and the knowledge of that change elsewhere.

With packet switching, data are transmitted in short blocks, called packets. A typical upper
bound on packet length is 1000 octets (bytes). If a source has a longer message to send, the
message is broken up into a series of packets.
Each packet contains a portion of the user's data plus some control information.
The control information, at a minimum, includes the information that the network requires
to be able to route the packet through the network and deliver it to the intended destination.
At each node, the packet is received, stored briefly, and passed on to the next node.


Advantages over circuit switching


Line efficiency is greater, because a single node-to-node link can be dynamically shared by
many packets over time. The packets are queued up and transmitted as rapidly as possible over
the link.
A packet-switching network can carry out data-rate conversion. Two stations of different data
rates can exchange packets, because each connects to its node at its proper data rate
When traffic becomes heavy on a circuit-switching network, some calls are blocked; that is, the
network refuses to accept additional connection requests until the load on the network
decreases. On a packet-switching network, packets are still accepted, but delivery delay
increases.
Priorities can be used. Thus, if a node has a number of packets queued for transmission, it can
transmit the higher-priority packets first. These packets will therefore experience less delay
than lower-priority packets.
(ii) Frame relay Networks
Frame Relay Network
Frame relaying is designed to eliminate much of the overhead that X.25 imposes on end
user systems and on the packet-switching network. The key differences between frame
relaying and a conventional X.25 packet-switching service are as follows:
Call control signaling is carried on a separate logical connection from user data. Thus,
intermediate nodes need not maintain state tables or process messages relating to call
control on an individual per-connection basis.
Multiplexing and switching of logical connections takes place at layer 2 instead of layer 3,
eliminating one entire layer of processing.
There is no hop-by-hop flow control and error control. End-to-end flow control and error
control are the responsibility of a higher layer, if they are employed at all.
Thus, with frame relay, a single user data frame is sent from source to destination, and an
acknowledgement, generated at a higher layer, is carried back in a frame. There is no
hop-by-hop exchange of data frames and acknowledgements.

Frame relay protocol layers:
LAPF control (implemented by end system but not network)
LAPF core (implemented by end system and network)
Physical layer

Frame relay involves the physical layer and a data link control protocol known as LAPF
(Link Access Procedure for Frame Mode Bearer Services). There are two versions of LAPF
defined. All frame relay networks involve the implementation of the LAPF core protocol
on all subscriber systems and on all frame relay nodes. LAPF core provides a minimal set
of data link control functions, consisting of the following:
Frame delimiting, alignment and transparency

Frame multiplexing/demultiplexing using the address field

Inspection of the frame to ensure that it consists of an integer number of


octets prior to zero bit insertion or following zero bit extraction
Inspection of the frame to ensure that it is neither too long nor too short

Detection of transmission errors

Congestion control functions

Above this, the user may choose to select additional data link or network-layer end-to-end
functions. One possibility is known as the LAPF control protocol. LAPF control is not part
of the frame relay service but may be implemented only in the end systems to provide flow
and error control.
The frame relay service using LAPF core has the following properties for the transmission
of data.
Preservation of the order of frame transfer from one edge of the network to the other
A small probability of frame loss

9. (a) Explain the need for Queuing Analysis (10 Marks)

Model-1

In queueing theory, a queueing model is used to approximate a real queueing situation or
system, so the queueing behaviour can be analysed mathematically. Queueing models allow
a number of useful steady-state performance measures to be determined, including: the
average number in the queue or the system; the average time spent in the queue or the
system; the statistical distribution of those numbers or times; the probability the queue is
full or empty; and the probability of finding the system in a particular state.
These performance measures are important as issues or problems caused by queueing
situations are often related to customer dissatisfaction with service or may be the root cause
of economic losses in a business. Analysis of the relevant queueing models allows the
cause of queueing issues to be identified and the impact of any changes that might be
wanted to be assessed.
Notation
Queueing models can be represented using Kendall's notation:
A/B/S/K/N/Disc
where:
A is the interarrival time distribution
B is the service time distribution
S is the number of servers
K is the system capacity
N is the calling population
Disc is the service discipline assumed
Some standard notation for distributions (A or B) is:
M for a Markovian (exponential) distribution
Ek for an Erlang distribution with k phases
D for a deterministic (constant) distribution
G for a general distribution
PH for a phase-type distribution

Construction and analysis


Queueing models are generally constructed to represent the steady state of a queueing
system, that is, the typical, long run or average state of the system.
As a consequence, these are stochastic models that represent the probability that a queueing
system will be found in a particular configuration or state.
A general procedure for constructing and analyzing such queuing models is:

Identify the parameters of the system, such as the arrival rate, service time, Queue
capacity, and perhaps draw a diagram of the system.

Identify the system states. (A state will generally represent the integer number of
customers, people, jobs, calls, messages, etc. in the system and may or may not be
limited.)

Draw a state transition diagram that represents the possible system states and
identify the rates to enter and leave each state. This diagram is a representation of a
Markov chain.

Because the state transition diagram represents the steady state situation between
states there is a balanced flow between states so the probabilities of being in
adjacent states can be related mathematically in terms of the arrival and service
rates and state probabilities.

Express all the state probabilities in terms of the empty state probability, using the
inter-state transition relationships.

Determine the empty state probability by using the fact that all state probabilities
always sum to 1.
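For the simplest case, the procedure above can be carried out in a few lines for an M/M/1/K queue, where the balance equations give p_n = (λ/μ)^n · p_0 and p_0 follows from the probabilities summing to 1. The rates and K below are illustrative:

```python
# Sketch of the construction-and-analysis steps above for an M/M/1/K
# queue: states 0..K count items in the system, the balanced flow between
# adjacent states gives p_n = (lam/mu)^n * p_0, and p_0 is fixed by
# requiring all state probabilities to sum to 1.

def mm1k_state_probs(lam, mu, K):
    """Steady-state probabilities p_0..p_K for an M/M/1/K queue."""
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]   # unnormalized p_n / p_0
    p0 = 1.0 / sum(weights)                      # probabilities sum to 1
    return [p0 * w for w in weights]

probs = mm1k_state_probs(lam=1.0, mu=2.0, K=3)
print([round(p, 4) for p in probs])   # p_K is the blocking probability
```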


9. (b) Explain Multiserver Queue in detail (6 marks)

Model-1

The above figure shows a generalization of the simple model we have been discussing for
multiple servers, all sharing a common queue. If an item arrives and at least one server is
available, then the item is immediately dispatched to that server.
It is assumed that all servers are identical; thus, if more than one server is available, it
makes no difference which server is chosen for the item.
If all servers are busy, a queue begins to form. As soon as one server becomes free, an item
is dispatched from the queue using the dispatching discipline in force.
If we have N identical servers, then ρ is the utilization of each server, and we can consider
Nρ to be the utilization of the entire system; this latter term is often referred to as the traffic
intensity, u.
Thus, the theoretical maximum utilization is N × 100%, and the theoretical maximum input
rate is:

λmax = N/Ts

The key characteristics typically chosen for the multiserver queue correspond to those for the
single-server queue. That is, we assume an infinite population and an infinite queue size, with a
single infinite queue shared among all servers. Unless otherwise stated, the dispatching discipline
is FIFO.
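Under those assumptions, the probability that an arriving item must wait in the M/M/N queue can be sketched with the Erlang C formula; the input values below are illustrative:

```python
# Sketch of the multiserver (M/M/N) queue described above: traffic
# intensity u = lam * Ts, per-server utilization rho = u / N, and the
# Erlang C formula for the probability that an arrival finds all servers
# busy and must wait.

from math import factorial

def erlang_c(lam, Ts, N):
    """Probability that an arriving item must wait (all N servers busy)."""
    u = lam * Ts                       # traffic intensity
    rho = u / N                        # per-server utilization (must be < 1)
    if rho >= 1.0:
        raise ValueError("system unstable: lam * Ts must be < N")
    summation = sum(u ** k / factorial(k) for k in range(N))
    busy_term = (u ** N / factorial(N)) * (1.0 / (1.0 - rho))
    return busy_term / (summation + busy_term)

p_wait = erlang_c(lam=8.0, Ts=0.2, N=2)
print(round(p_wait, 4))   # about 0.7111 for these illustrative inputs
```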

12. (a) (i) Explain with an example the implementation of single server queues. (8 Marks)
(ii) Explain in detail about Jackson's theorem. (8 Marks)
(Or)
(b) (i) Explain the effects of congestion in packet switching networks. (8 Marks)
(ii) Explain how congestion avoidance is done in frame relay networks. (8 Marks)
