Reviewers:
Dr. József Bíró
Doctor of the Hungarian Academy of Sciences, Full Professor
Budapest University of Technology and Economics
Dr. Zalán Heszberger
PhD, Associate Professor
Budapest University of Technology and Economics
Contents

Preface

I Basic Queueing Theory

1 Fundamental Concepts of Queueing Theory
2 Infinite-Source Queueing Systems

II Exercises

4 Infinite-Source Systems . . . 119
5 Finite-Source Systems . . . 137

III . . . 141

6 Relationships . . . 143
6.1 Notations and Definitions . . . 143
6.2 Relationships between random variables . . . 145

Bibliography
Preface
Modern information technologies require innovations that are based on modeling, analyzing, designing and finally implementing new systems. The whole development process assumes well-organized teamwork of experts including engineers, computer scientists, mathematicians and physicists, to mention just a few. Modern infocommunication networks are among the most complex systems, where the reliability and efficiency of the components play a very important role. For a better understanding of the dynamic behavior of the involved processes one has to construct mathematical models which describe the stochastic service of randomly arriving requests. Queueing theory is one of the most commonly used mathematical tools for the performance evaluation of such systems.
The aim of the book is to present the basic methods and approaches, at a Markovian level, for the analysis of not too complicated systems. The main purpose is to understand how models can be constructed and how to analyze them. It is assumed that the reader has been exposed to a first course in probability theory; however, the text gives a refresher and states the most important principles needed later on. My intention is to show what is behind the formulas and how we can derive them. It is also essential to know which kinds of questions are reasonable and then how to answer them.
My experience and advice are to solve, whenever possible, the same problem in different ways and to compare the results. Sometimes very nice closed-form, analytic solutions are obtained, but the main problem is that we cannot compute them for higher values of the involved variables. In such cases algorithmic or asymptotic approaches can be very useful. My intention is to find the balance between mathematical and practitioner needs. I feel that a satisfactory middle ground has been established for understanding and applying these tools to practical systems. I hope that after understanding this book the reader will be able to create his or her own formulas if needed.
It should be underlined that most of the models are based on the assumption that the involved random variables are exponentially distributed and independent of each other. We must confess that this assumption is artificial, since in practice the exponential distribution is not so frequent. However, mathematical models based on the memoryless property of the exponential distribution greatly simplify the solution methods, resulting in computable formulas. By using these relatively simple formulas one can easily foresee the effect of a given parameter on the performance measures, and hence the trends can be forecast. Clearly, instead of the exponential distribution one can use other distributions, but in that case the mathematical models become much more complicated. The analytic results can also help us in validating results obtained by stochastic simulation. That approach is quite general when analytic expressions cannot be expected; in this case not only the model construction but also the statistical analysis of the output is important.
The primary purpose of the book is to show how to create simple models for practical problems, and that is why the general theory of stochastic processes is omitted. It uses only the most important concepts and sometimes states theorems without proof, but each time the related references are cited.
I must confess that the style of the following books greatly influenced me, even if they are at a different level and more comprehensive than this material: Allen [2], Jain [41], Kleinrock [48], Kobayashi and Mark [51], Stewart [74], Tijms [91], Trivedi [94].
This book is intended not only for students of computer science, engineering, operations research and mathematics, but also for those who study at business, management and planning departments. It covers more than one semester and has been tested by graduate students at the University of Debrecen over the years. It gives a very detailed analysis of the involved queueing systems by giving the density function, distribution function, generating function and Laplace-transform, respectively. Furthermore, Java applets are provided to calculate the main performance measures immediately when the pdf version of the book is used in a WWW environment. Of course, these applets cannot be run from the printed version.
I have attempted to provide examples for better understanding, and a collection of exercises with detailed solutions helps the reader in deepening her/his knowledge. I am convinced that the book covers the basic topics in stochastic modeling of practical problems and that it supports students all over the world.
I am indebted to Professors József Bíró and Zalán Heszberger for their reviews, comments and suggestions, which greatly improved the quality of the book. I am also very grateful to Tamás Török, Zoltán Nagy and Ferenc Veres for their help in editing.
All comments and suggestions are welcome at:
sztrik.janos@inf.unideb.hu
http://irh.inf.unideb.hu/user/jsztrik
Debrecen, 2012.
János Sztrik
Part I
Basic Queueing Theory
Chapter 1
Fundamental Concepts of Queueing
Theory
Queueing theory deals with one of the most unpleasant experiences of life: waiting. Queueing is quite common in many fields, for example in a telephone exchange, in a supermarket, at a petrol station, in computer systems, etc. I have mentioned the telephone exchange first because the first problems of queueing theory were raised by calls, and Erlang was the first who treated congestion problems at the beginning of the 20th century, see Erlang [21, 22].
His works inspired engineers and mathematicians to deal with queueing problems using probabilistic methods. Queueing theory became a field of applied probability, and many of its results have been used in operations research, computer science, telecommunication, traffic engineering and reliability theory, just to mention some areas. It should be emphasized that it is a living branch of science where experts publish a lot of papers and books; the easiest way to verify this statement is to search Google Scholar for queueing-related items. A Queueing Theory Homepage has been created where readers are informed about relevant sources, for example books, software, conferences, journals, etc. I highly recommend visiting it at
http://web2.uwindsor.ca/math/hlynka/queue.html
There are only a few books and lecture notes published in the Hungarian language; I would mention the works of Györfi and Páli [33], Jereb and Telek [43], Kleinrock [48], Lakatos, Szeidl and Telek [55] and Sztrik [84, 83, 82, 81]. However, it should be noted that Hungarian engineers and mathematicians have contributed effectively to the research and its applications. First of all we have to mention Lajos Takács, who wrote his pioneering and famous book about queueing theory [88]. Other researchers are J. Tomkó, M. Arató, L. Györfi, A. Benczúr, L. Lakatos, L. Szeidl, L. Jereb, M. Telek, J. Bíró, T. Do, and J. Sztrik. The Library of the Faculty of Informatics, University of Debrecen, Hungary offers a valuable collection of queueing and performance modeling related books in English and Russian, too. Please visit:
http://irh.inf.unideb.hu/user/jsztrik/education/05/3f.html
I may draw your attention to the books of Takagi [85, 86, 87], where a rich collection of references is provided.
1.1
If ρ > 1 then the system is overloaded, since the requests arrive faster than they are served. It shows that more servers are needed.
Let χ(A) denote the characteristic function of the event A, that is

χ(A) = 1 if A occurs, and χ(A) = 0 if A does not occur;

furthermore, let N(t) = 0 denote the event that at time t the server is idle, that is, there is no customer in the system. Then the utilization of the server during time T is defined by

(1/T) ∫_0^T χ(N(t) ≠ 0) dt,

and for an ergodic system

lim_{T→∞} (1/T) ∫_0^T χ(N(t) ≠ 0) dt = 1 − P_0 = E(δ) / (E(δ) + E(i)),

where P_0 is the steady-state probability that the server is idle, and E(δ), E(i) denote the mean busy period and the mean idle period of the server, respectively.
This formula is a special case of the following relationship, valid for ergodic continuous-time Markov chains and proved in Tomkó [93].

Theorem 1 Let X(t) be an ergodic Markov chain and let A be a subset of its state space. Then, with probability 1,

lim_{T→∞} (1/T) ∫_0^T χ(X(t) ∈ A) dt = Σ_{i∈A} P_i = m(A) / (m(A) + m(Ā)),

where m(A) and m(Ā) denote the mean sojourn times of the chain in A and Ā during a cycle, respectively. The ergodic (stationary, steady-state) distribution of X(t) is denoted by P_i.
In an m-server system the mean number of arrivals to a given server during time T is λT/m, given that the arrivals are uniformly distributed over the servers. Thus the utilization of a given server is

U_s = λ/(mμ).

The other important measure of the system is the throughput of the system, which is defined as the mean number of requests serviced during a time unit. In an m-server system the mean number of completed services per unit time is mμU_s, and thus

throughput = mμU_s = λ.
However, for a tagged customer the waiting and response times are more important than the measures defined above. Let us denote by W_j and T_j the waiting time and the response time of the jth customer, respectively. Clearly the waiting time is the time a customer spends in the queue waiting for service, and the response time is the time a customer spends in the system, that is

T_j = W_j + S_j,

where S_j denotes its service time. Of course, W_j and T_j are random variables, and their means, denoted by W̄_j and T̄_j, are appropriate for measuring the efficiency of the system. It is not easy in general to obtain their distribution functions.
Other characteristics of the system are the queue length and the number of customers in the system. Let the random variables Q(t) and N(t) denote the number of customers in the queue and in the system at time t, respectively. Clearly, in an m-server system we have

Q(t) = max{0, N(t) − m}.

The primary aim is to get their distributions, but this is not always possible; many times we have only their mean values or their generating functions.
1.2
Kendall's Notation
Before starting the investigation of elementary queueing systems, let us introduce a notation originated by Kendall to describe a queueing system.
Let us denote a system by

A / B / m / K / n / D,

where
A: distribution function of the interarrival times,
B: distribution function of the service times,
m: number of servers,
K: capacity of the system, the maximum number of customers in the system including the one being serviced,
n: population size, the number of sources of customers,
D: service discipline.
Exponentially distributed random variables are denoted by M, meaning Markovian or memoryless.
Furthermore, if the population size and the capacity are infinite and the service discipline is FIFO, then these entries are omitted from the notation.
1.3
Since birth-death processes play a very important role in modeling elementary queueing systems, let us consider some useful relationships for them. Clearly, arrivals mean births and services mean deaths.
As we have seen earlier, the steady-state distribution of a birth-death process with birth rates λ_k and death rates μ_k can be obtained in a very nice closed form, that is

P_i = (λ_0 λ_1 ⋯ λ_{i−1}) / (μ_1 μ_2 ⋯ μ_i) · P_0,   i = 1, 2, …,  (1.1)

P_0^{−1} = 1 + Σ_{i=1}^∞ (λ_0 λ_1 ⋯ λ_{i−1}) / (μ_1 μ_2 ⋯ μ_i).  (1.2)
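The product form (1.1)–(1.2) is easy to evaluate numerically. As an illustration (a sketch added here, not part of the original text), the following Python function computes the steady-state distribution of a birth-death process with given rate functions, truncating the infinite state space at a chosen level; for the constant M/M/1 rates λ_k = λ, μ_k = μ it reproduces the geometric distribution (1 − ρ)ρ^k.

```python
def bd_steady_state(lam, mu, levels):
    """Steady-state distribution of a birth-death process via (1.1)-(1.2).

    lam(k): birth rate in state k (k >= 0), mu(k): death rate in state k
    (k >= 1); 'levels' truncates the infinite state space.
    """
    # Unnormalized terms: x_0 = 1, x_i = prod_{k=0}^{i-1} lam(k)/mu(k+1)
    x = [1.0]
    for i in range(1, levels):
        x.append(x[-1] * lam(i - 1) / mu(i))
    norm = sum(x)  # approximates 1/P_0, cf. (1.2)
    return [t / norm for t in x]

# Constant rates lam_k = 0.7, mu_k = 1.0 give the M/M/1 case with rho = 0.7
P = bd_steady_state(lambda k: 0.7, lambda k: 1.0, 200)
```

With 200 levels the truncation error is negligible here, since the tail of the geometric distribution decays as 0.7^k.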
Similarly, the steady-state distribution of the process at the moments of deaths (departures) is

D_k = lim_{h→0} (μ_{k+1}h + o(h)) P_{k+1} / Σ_{j=1}^∞ (μ_j h + o(h)) P_j = μ_{k+1} P_{k+1} / Σ_{j=1}^∞ μ_j P_j.  (1.3)

Since

P_{k+1} = (λ_k / μ_{k+1}) P_k,   k = 0, 1, …,

we thus have

D_k = λ_k P_k / Σ_{i=0}^∞ λ_i P_i = Π_k,   k = 0, 1, …,  (1.4)

where Π_k denotes the steady-state distribution at the moments of births (arrivals).
In words, the above relation states that the steady-state distributions at the moments of births and deaths are the same. It should be underlined that this does not mean that it is equal to the steady-state distribution at a random point, as we will see later on.
A further essential observation is that in steady state the mean birth rate is equal to the mean death rate. This can be seen as follows

Σ_{i=0}^∞ λ_i P_i = Σ_{i=0}^∞ μ_{i+1} P_{i+1} = Σ_{k=1}^∞ μ_k P_k.  (1.5)

1.4
Queueing Software
To solve practical problems the first step is to identify the appropriate queueing system and then to calculate the performance measures. Of course, the level of modeling heavily depends on the assumptions. It is recommended to start with a simple system and then, if the results do not fit the problem, continue with a more complicated one. Various software packages help the interested reader at different levels. The following link is worth a visit:
http://web2.uwindsor.ca/math/hlynka/qsoft.html
For practically oriented teaching courses we have also developed a collection of Java applets calculating the performance measures not only for elementary but also for more advanced queueing systems. It is available at
http://irh.inf.unideb.hu/user/jsztrik/education/09/english/index.html
For simulation purposes I recommend
http://www.win.tue.nl/cow/Q2/
If these ready-made systems are not suitable for your problem, then you have to create your own queueing system; helping this process is one of the primary aims of the present book.
For further reading the interested reader is referred to the following books: Allen [2], Bose [9], Daigle [18], Gnedenko and Kovalenko [31], Gnedenko, Belyayev and Solovyev [29], Gross and Harris [32], Jain [41], Jereb and Telek [43], Kleinrock [48], Kobayashi [50, 51], Kulkarni [54], Nelson [59], Stewart [74], Sztrik [81], Tijms [91], Trivedi [94]. The present book has used some parts of Allen [2], Gross and Harris [32], Kleinrock [48], Kobayashi [50], Sztrik [81], Tijms [91] and Trivedi [94].
Chapter 2
Infinite-Source Queueing Systems
Queueing systems can be classified according to the cardinality of their sources, namely finite-source and infinite-source models. In finite-source models the arrival intensity of the requests depends on the state of the system, which makes the calculations more complicated. In the case of infinite-source models the arrivals are independent of the number of customers in the system, resulting in a mathematically tractable model. In queueing networks each node is a queueing system, and the nodes can be connected to each other in various ways. The main aim of this chapter is to understand how these nodes operate.
2.1
An M/M/1 queueing system is the simplest non-trivial queue where the requests arrive according to a Poisson process with rate λ, that is, the interarrival times are independent, exponentially distributed random variables with parameter λ. The service times are also assumed to be independent and exponentially distributed with parameter μ. Furthermore, all the involved random variables are supposed to be independent of each other.
Let N(t) denote the number of customers in the system at time t, and let us say that the system is in state k if N(t) = k. Since all the involved random variables are exponentially distributed, and consequently have the memoryless property, N(t) is a continuous-time Markov chain with state space 0, 1, ….
In the next step let us investigate the transition probabilities during time h. It is easy to see that

P_{k,k+1}(h) = (λh + o(h))(1 − (μh + o(h))) + Σ_{j=2}^∞ (λh + o(h))^j (μh + o(h))^{j−1},   k = 0, 1, 2, …

By the independence assumption the first term is the probability that during h one customer has arrived and no service has been finished. The summation term is the probability that during h at least 2 customers have arrived and at the same time at least 1 has been serviced. It is not difficult to verify that the second term is o(h), due to the properties of the Poisson process. Thus

P_{k,k+1}(h) = λh + o(h).

Similarly, the transition probability from state k into state k − 1 during h can be written as

P_{k,k−1}(h) = (μh + o(h))(1 − (λh + o(h))) + Σ_{j=2}^∞ (λh + o(h))^{j−1} (μh + o(h))^j = μh + o(h).

Furthermore, for non-neighboring states we have

P_{k,j}(h) = o(h),   |k − j| ≥ 2.

In summary, the introduced random process N(t) is a birth-death process with rates

λ_k = λ,   k = 0, 1, 2, …,
μ_k = μ,   k = 1, 2, 3, …,

that is, all the birth rates are λ and all the death rates are μ.
As noted, the system capacity is infinite and the service discipline is FIFO.
To get the steady-state distribution, let us substitute these rates into formula (1.1) obtained for general birth-death processes. Thus we get

P_k = P_0 Π_{i=0}^{k−1} (λ/μ) = P_0 (λ/μ)^k,   k ≥ 0.

By using the normalization condition we can see that this geometric sum is convergent iff λ/μ < 1, and

P_0 = (1 + Σ_{k=1}^∞ (λ/μ)^k)^{−1} = 1 − λ/μ = 1 − ρ,

where ρ = λ/μ. Thus

P_k = (1 − ρ)ρ^k,   k = 0, 1, 2, …

The mean number of customers in the system is

N̄ = Σ_{k=0}^∞ k P_k = (1 − ρ)ρ Σ_{k=1}^∞ k ρ^{k−1} = (1 − ρ)ρ (d/dρ) Σ_{k=1}^∞ ρ^k = (1 − ρ)ρ (d/dρ)(ρ/(1 − ρ)) = ρ/(1 − ρ).
Variance

Var(N) = Σ_{k=0}^∞ (k − N̄)² P_k = Σ_{k=0}^∞ k² P_k − N̄²
= Σ_{k=0}^∞ k(k − 1) P_k + N̄ − N̄²
= (1 − ρ)ρ² (d²/dρ²) Σ_{k=0}^∞ ρ^k + ρ/(1 − ρ) − ρ²/(1 − ρ)²
= 2ρ²/(1 − ρ)² + ρ/(1 − ρ) − ρ²/(1 − ρ)²
= ρ/(1 − ρ)².
Mean queue length

Q̄ = Σ_{k=1}^∞ (k − 1) P_k = Σ_{k=1}^∞ k P_k − Σ_{k=1}^∞ P_k = N̄ − (1 − P_0) = N̄ − ρ = ρ²/(1 − ρ).

Variance

Var(Q) = Σ_{k=1}^∞ (k − 1)² P_k − Q̄² = ρ²(1 + ρ − ρ²)/(1 − ρ)².
Server utilization

U_s = 1 − P_0 = ρ = E(δ)/(1/λ + E(δ)),

where E(δ) is the mean busy period length of the server and 1/λ is its mean idle time, since the server stays idle until a new request arrives, and the interarrival time is exponentially distributed with parameter λ. Hence

1 − ρ = (1/λ)/(1/λ + E(δ)),

and thus

E(δ) = (1/λ) · ρ/(1 − ρ) = N̄/λ = 1/(μ(1 − ρ)) = 1/(μ − λ).
In the next few lines we show how this performance measure can be obtained in a different way. To do so we need the following notation: let E(N_A) and E(N_D) denote the mean number of customers that arrive and depart, respectively, during a busy period of the server, and let E(N_S) denote the mean number of customers that arrive during a service time. Clearly

E(N_A) = λE(δ),   E(N_D) = μE(δ),   E(N_S) = λ/μ = ρ,

and since each busy period is started by an arriving customer,

E(N_A) + 1 = E(N_D).

After substitution we get

E(δ) = 1/(μ − λ).

Consequently

E(N_D) = μE(δ) = 1/(1 − ρ),   E(N_A) = λE(δ) = ρ/(1 − ρ).
Because of the Poisson arrivals, the distribution of the number of customers found in the system by an arriving (tagged) customer is the same as the steady-state distribution, that is

Π_k = P_k,   k = 0, 1, …
If a customer arrives and finds the server idle, which happens with probability P_0, its waiting time is 0. Assume that upon the arrival of a tagged customer the system is in state n. This means that the request has to wait for the residual service time of the customer being serviced plus the service times of the customers in the queue. As we assumed, the service is carried out in the order of arrival of the requests. Since the service times are exponentially distributed, the remaining service time has the same distribution as the original service time. Hence the waiting time of the tagged customer is Erlang distributed with parameters (n, μ), and the response time is Erlang distributed with parameters (n + 1, μ). Just to remind you, the density function of an Erlang distribution with parameters (n, μ) is

f_n(x) = μ(μx)^{n−1} e^{−μx}/(n − 1)!,   x ≥ 0.

Hence, applying the theorem of total probability, for the density function of the response time we have

f_T(x) = Σ_{n=0}^∞ (1 − ρ)ρ^n · μ(μx)^n e^{−μx}/n! = μ(1 − ρ) e^{−μx} Σ_{n=0}^∞ (μρx)^n/n! = μ(1 − ρ) e^{−μ(1−ρ)x}.

Its distribution function is

F_T(x) = 1 − e^{−μ(1−ρ)x},

that is, the response time is exponentially distributed with parameter μ(1 − ρ) = μ − λ. Hence the expectation and variance of the response time are

T̄ = 1/(μ(1 − ρ)),   Var(T) = 1/(μ(1 − ρ))².
Furthermore

T̄ = 1/(μ(1 − ρ)) = 1/(μ − λ) = E(δ),

that is, the mean response time equals the mean busy period length of the server.
For the density of the waiting time, for x > 0 we similarly have

Σ_{n=1}^∞ μ(μx)^{n−1} e^{−μx}/(n − 1)! · ρ^n (1 − ρ) = (1 − ρ)ρμ e^{−μx} Σ_{k=0}^∞ (μρx)^k/k! = ρμ(1 − ρ) e^{−μ(1−ρ)x}.

Thus

f_W(0) = 1 − ρ,
f_W(x) = ρμ(1 − ρ) e^{−μ(1−ρ)x},   x > 0.

Hence

F_W(x) = 1 − ρ + ρ(1 − e^{−μ(1−ρ)x}) = 1 − ρ e^{−μ(1−ρ)x}.
The mean waiting time is

W̄ = ∫_0^∞ x f_W(x) dx = ρ/(μ(1 − ρ)) = ρE(δ) = N̄/μ.

Since T = W + S, where the service time S of the tagged customer is independent of its waiting time W, we have

1/(μ(1 − ρ))² = Var(T) = Var(W) + 1/μ²,

thus

Var(W) = 1/(μ(1 − ρ))² − 1/μ² = (1 − (1 − ρ)²)/(μ(1 − ρ))² = ρ(2 − ρ)/(μ(1 − ρ))².
In addition

λT̄ = λ/(μ(1 − ρ)) = ρ/(1 − ρ) = N̄,  (2.1)

and furthermore

λW̄ = λρ/(μ(1 − ρ)) = ρ²/(1 − ρ) = Q̄.  (2.2)

Relations (2.1), (2.2) are called Little's formulas, or Little's theorem, or Little's law, which remain valid under much more general conditions.
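As the Preface suggests, results like these are worth cross-checking by stochastic simulation. The following short sketch (an illustration added here, not part of the original text) simulates the waiting times of an M/M/1 FIFO queue via the classical Lindley recursion W_{n+1} = max(0, W_n + S_n − τ_{n+1}) and compares the sample means with W̄ = ρ/(μ(1 − ρ)) and T̄ = 1/(μ − λ).

```python
import random

def simulate_mm1(lam, mu, n, seed=1):
    """Sample mean waiting and response times of an M/M/1 FIFO queue
    using the Lindley recursion W_{n+1} = max(0, W_n + S_n - tau_{n+1})."""
    rng = random.Random(seed)
    w, total_w, total_t = 0.0, 0.0, 0.0
    for _ in range(n):
        s = rng.expovariate(mu)      # service time of the current customer
        total_w += w
        total_t += w + s             # response time T = W + S
        # waiting time of the next customer
        w = max(0.0, w + s - rng.expovariate(lam))
    return total_w / n, total_t / n

lam, mu = 0.7, 1.0                   # rho = 0.7
w_bar, t_bar = simulate_mm1(lam, mu, 200_000)
# Theory: W = rho/(mu(1 - rho)) = 7/3, T = 1/(mu - lam) = 10/3
```

With 200,000 customers the sample means should agree with the closed forms to within a few percent.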
Let us examine the states of an M/M/1 system at the departure instants of the customers. Our aim is to calculate the distribution of the interdeparture times of the customers. As was proved in (1.4), at departures the distribution is

D_k = λ_k P_k / Σ_{i=0}^∞ λ_i P_i.

In the case of Poisson arrivals λ_k = λ, k = 0, 1, …, hence D_k = P_k.
Now we are able to calculate the Laplace-transform of the interdeparture time d. Conditioning on the state of the system at the departure instant, by using the theorem of total Laplace-transforms we have

L_d(s) = ρ · μ/(μ + s) + (1 − ρ) · λ/(λ + s) · μ/(μ + s),

since if the server is idle, for the next departure a request has to arrive first. Hence

L_d(s) = μ(ρ(λ + s) + (1 − ρ)λ) / ((λ + s)(μ + s)) = μ(λ + ρs) / ((λ + s)(μ + s)) = λ(s + μ) / ((λ + s)(μ + s)) = λ/(λ + s),

which shows that the interdeparture time is exponentially distributed with parameter λ, and not with μ as one might expect. The independence of the interdeparture times follows from the memoryless property of the exponential distribution. This means that the departure process is a Poisson process with rate λ.
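The algebraic identity behind this result can also be checked mechanically. The sketch below (added for illustration) evaluates both sides of L_d(s) = ρ·μ/(μ+s) + (1−ρ)·λ/(λ+s)·μ/(μ+s) = λ/(λ+s) with exact rational arithmetic at several points s.

```python
from fractions import Fraction

def ld(lam, mu, s):
    """L_d(s): Laplace-transform of the interdeparture time, conditioning
    on the state of the system at a departure instant. Arguments are
    Fractions, so the comparison is exact."""
    rho = lam / mu
    busy = rho * mu / (mu + s)                          # server busy: one service
    idle = (1 - rho) * lam / (lam + s) * mu / (mu + s)  # idle: arrival, then service
    return busy + idle

lam, mu = Fraction(7, 10), Fraction(1)
checks = [ld(lam, mu, Fraction(k, 4)) == lam / (lam + Fraction(k, 4))
          for k in range(1, 9)]
```

Exact equality at eight distinct points of two rational functions of this degree already pins down the identity.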
This observation is very important in investigating tandem queues, that is, when several simple M/M/1 queueing systems as nodes are connected in series with each other. By the above result, at each node the arrival process is a Poisson process with parameter λ, and the nodes operate independently of each other. Hence, if the service times at node i are exponentially distributed with parameter μ_i, each node behaves as an independent M/M/1 system with parameters λ and μ_i.
Now let us consider an M/G/1 system, and let us ask under which service time distribution the interdeparture time is exponentially distributed with parameter λ. First we prove that the utilization of the system is U_S = ρ = λE(S). As is easy to see, for any stationary, stable G/G/1 queueing system the mean number of departures during the mean busy period of the server is one more than the mean number of arrivals during the mean busy period. That is

E(δ)/E(S) = 1 + E(δ)/E(τ),

where E(τ) denotes the mean interarrival time. Hence

E(δ) = E(τ)E(S)/(E(τ) − E(S)) = E(S)/(1 − ρ),   where ρ = E(S)/E(τ).

Clearly

U_S = E(δ)/(E(τ) + E(δ)) = (E(S)/(1 − ρ)) / (E(τ) + E(S)/(1 − ρ)) = E(S)/(E(τ)(1 − ρ) + E(S)) = E(S)/E(τ) = ρ < 1.
Thus the utilization of an M/G/1 system is ρ. It should be noted that for an M/G/1 system D_k = P_k also holds, and that is why our question can be formulated as

λ/(λ + s) = ρL_S(s) + (1 − ρ) · λ/(λ + s) · L_S(s) = L_S(s)(ρ(λ + s) + (1 − ρ)λ)/(λ + s) = L_S(s) λ(1 + sE(S))/(λ + s),

using ρ = λE(S). Thus

L_S(s) = 1/(1 + sE(S)),

which is the Laplace-transform of an exponential distribution with mean E(S). In summary, only exponentially distributed service times assure that Poisson arrivals involve Poisson departures with the same rate.
Solution:
Let the time unit be one hour. Then λ = 7, μ = 10 and ρ = 7/10, thus

N̄ = ρ/(1 − ρ) = 7/3,

Q̄ = N̄ − ρ = 7/3 − 7/10 = (70 − 21)/30 = 49/30,

P(n > 3) = 1 − P_0 − P_1 − P_2 − P_3 = 1 − (1 − ρ)(1 + ρ + ρ² + ρ³) = ρ⁴ = 0.7⁴ = 0.2401,

W̄ = N̄/μ = 7/30 hour ≈ 14 minutes,

P(W > 1/3) = 1 − F_W(1/3) = ρ e^{−μ(1−ρ)·(1/3)} = 0.7 · e^{−10·0.3·(1/3)} = 0.7 e^{−1} ≈ 0.257.
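The numbers of this solution can be reproduced with a few lines of Python (added here for illustration):

```python
from math import exp

lam, mu = 7.0, 10.0          # arrivals and services per hour
rho = lam / mu

N = rho / (1 - rho)                  # mean number in the system
Q = N - rho                          # mean queue length
p_more_than_3 = rho**4               # P(n > 3) = rho^4
W = N / mu                           # mean waiting time (hours)
p_wait_over_20min = rho * exp(-mu * (1 - rho) / 3)   # P(W > 1/3 hour)
```

Printing these values gives 7/3, 49/30, 0.2401, 14 minutes and about 0.257, in agreement with the solution.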
2.2
Let us consider an M/M/1 system in which a customer that arrives when there are k customers in the system joins the system only with probability b_k, k = 0, 1, … Hence the number of customers in the system is a birth-death process with rates λ_k = λb_k and μ_k = μ.
Clearly, there are various candidates for b_k, but we have to find probabilities which result in not too complicated formulas for the main performance measures. Keeping in mind this criterion, let us consider

b_k = 1/(k + 1),   k = 0, 1, …

Thus by (1.1)

P_k = (λ/μ)^k/k! · P_0,   k = 0, 1, …,

and then using the normalization condition we get

P_k = ρ^k e^{−ρ}/k!,   k = 0, 1, …,

where ρ = λ/μ. Notice that the system is stable for any ρ > 0, that is, we do not need the condition ρ < 1 as in an M/M/1 system.
Notice also that the number of customers in the system follows a Poisson law with parameter ρ, hence the main performance measures can be computed as

U_S = 1 − P_0 = 1 − e^{−ρ},

and since

U_S = E(δ)/(1/λ + E(δ)),

we have

E(δ) = (1/λ) · U_S/(1 − U_S) = (1/λ)(e^ρ − 1).

Furthermore

N̄ = ρ,   Var(N) = ρ,

Q̄ = N̄ − U_S = ρ − (1 − e^{−ρ}) = ρ + e^{−ρ} − 1.

For the variance of the queue length, first

E(Q²) = Σ_{k=1}^∞ (k − 1)² P_k = Σ_{k=1}^∞ k² P_k − 2 Σ_{k=1}^∞ k P_k + Σ_{k=1}^∞ P_k = E(N²) − 2N̄ + U_S = ρ + ρ² − 2ρ + 1 − e^{−ρ} = ρ² − ρ + 1 − e^{−ρ}.

Thus

Var(Q) = E(Q²) − (E(Q))² = ρ² − ρ + 1 − e^{−ρ} − (ρ + e^{−ρ} − 1)² = ρ + (1 − 2ρ)e^{−ρ} − e^{−2ρ} = e^{−ρ}(ρe^ρ + 1 − 2ρ − e^{−ρ}).
To get the distribution of the response and waiting times we have to know the distribution of the system at the instant when an arriving customer joins the system. By applying the Bayes rule it is not difficult to see that this distribution is

Π_k = (P_k/(k + 1)) / Σ_{i=0}^∞ P_i/(i + 1) = (ρ^{k+1} e^{−ρ}/(k + 1)!) / Σ_{i=0}^∞ ρ^{i+1} e^{−ρ}/(i + 1)! = P_{k+1}/(1 − e^{−ρ}),   k = 0, 1, …

Hence the mean response time is

T̄ = Σ_{k=0}^∞ ((k + 1)/μ) Π_k = (1/(μ(1 − e^{−ρ}))) Σ_{k=0}^∞ (k + 1) P_{k+1} = N̄/(μ(1 − e^{−ρ})) = ρ/(μ(1 − e^{−ρ})),

and consequently

W̄ = T̄ − 1/μ = (ρ + e^{−ρ} − 1)/(μ(1 − e^{−ρ})).
To verify Little's law, the mean arrival (joining) rate is

λ̄ = Σ_{k=0}^∞ λ_k P_k = Σ_{k=0}^∞ (λ/(k + 1)) P_k = μ Σ_{k=1}^∞ P_k = μ(1 − e^{−ρ}),

thus

λ̄T̄ = μ(1 − e^{−ρ}) · ρ/(μ(1 − e^{−ρ})) = ρ = N̄,

λ̄W̄ = μ(1 − e^{−ρ}) · (ρ + e^{−ρ} − 1)/(μ(1 − e^{−ρ})) = ρ + e^{−ρ} − 1 = Q̄.
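These closed forms are easy to confirm numerically by summing the Poisson probabilities directly. The sketch below (added for illustration, with arbitrarily chosen rates) compares the summed measures with the formulas of the text and checks Little's law.

```python
from math import exp, factorial

lam, mu = 3.0, 2.0           # note rho = 1.5 > 1 is allowed in this model
rho = lam / mu
K = 80                       # truncation level for the Poisson sums

P = [rho**k * exp(-rho) / factorial(k) for k in range(K)]

# Directly summed measures
N_sum = sum(k * p for k, p in enumerate(P))
lam_bar_sum = sum(lam / (k + 1) * p for k, p in enumerate(P))

# Closed forms from the text
U = 1 - exp(-rho)
N = rho
Q = rho + exp(-rho) - 1
T = rho / (mu * (1 - exp(-rho)))
W = (rho + exp(-rho) - 1) / (mu * (1 - exp(-rho)))
lam_bar = mu * (1 - exp(-rho))
```

The truncation error at K = 80 is far below floating-point resolution for moderate ρ.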
T = (1 e )
fT (x|k) k =
k=0
X
(x)k ex
k=0
k!
k+1
e
(k + 1)! 1 e
e(+x) X (x)k
,
1 e k=0 k!(k + 1)!
which is difficult to calculate. We have the same problems with fW (x), too.
However, the Laplace-transforms LT (s) and LW (s) can be obtained and the hence
the higher moments can be derived.
Namely

L_T(s) = Σ_{k=0}^∞ L_T(s | k) Π_k = Σ_{k=0}^∞ (μ/(μ + s))^{k+1} · ρ^{k+1} e^{−ρ}/((k + 1)!(1 − e^{−ρ})) = (e^{−ρ}/(1 − e^{−ρ})) Σ_{k=0}^∞ (1/(k + 1)!)(ρμ/(μ + s))^{k+1} = (e^{−ρ}/(1 − e^{−ρ}))(e^{ρμ/(μ+s)} − 1),

and

L_W(s) = L_T(s)(μ + s)/μ,

since T = W + S and L_S(s) = μ/(μ + s). Let us find T̄ with the help of L_T(s) to check the formula. It is easy to see that

L_T′(s) = (e^{−ρ}/(1 − e^{−ρ})) e^{ρμ/(μ+s)} (−ρμ(μ + s)^{−2}),

L_T′(0) = −(e^{−ρ}/(1 − e^{−ρ})) e^ρ ρ/μ = −ρ/(μ(1 − e^{−ρ})).

Hence

T̄ = −L_T′(0) = ρ/(μ(1 − e^{−ρ})),

as before.
To obtain the variance, write

L_T(s) = (e^{−ρ}/(1 − e^{−ρ}))(e^{ρμ/(μ+s)} − 1).

Thus

L_T′(s) = (e^{−ρ}/(1 − e^{−ρ})) e^{ρμ/(μ+s)} ρμ(−1)(μ + s)^{−2},

therefore

L_T″(s) = (e^{−ρ}/(1 − e^{−ρ})) e^{ρμ/(μ+s)} ((ρμ)²(μ + s)^{−4} + 2ρμ(μ + s)^{−3}).

Hence

L_T″(0) = (e^{−ρ}/(1 − e^{−ρ})) e^ρ (ρ²/μ² + 2ρ/μ²) = (ρ² + 2ρ)/(μ²(1 − e^{−ρ})).

Consequently

Var(T) = L_T″(0) − (L_T′(0))² = (ρ² + 2ρ)/(μ²(1 − e^{−ρ})) − ρ²/(μ²(1 − e^{−ρ})²) = ((ρ² + 2ρ)(1 − e^{−ρ}) − ρ²)/(μ²(1 − e^{−ρ})²) = ρ(2 − (ρ + 2)e^{−ρ})/(μ²(1 − e^{−ρ})²).
However, W (and hence T) can also be considered as a random sum, namely W = S_1 + ⋯ + S_{N_a}, where N_a is the number of customers found in the system by a joining customer and the S_i are independent service times. Hence

Var(W) = E(N_a)(1/μ²) + Var(N_a)(1/μ)² = (1/μ²)(E(N_a) + Var(N_a)).

Here

E(N_a) = Σ_{k=1}^∞ k Π_k = Σ_{k=1}^∞ k P_{k+1}/(1 − e^{−ρ}) = (1/(1 − e^{−ρ}))(Σ_{k=0}^∞ (k + 1)P_{k+1} − Σ_{k=0}^∞ P_{k+1}) = (ρ + e^{−ρ} − 1)/(1 − e^{−ρ}).
Since Var(N_a) = E(N_a²) − (E(N_a))², and

E(N_a²) = Σ_{k=1}^∞ k² Π_k = (1/(1 − e^{−ρ})) Σ_{k=0}^∞ ((k + 1)² − 2(k + 1) + 1) P_{k+1}
= (1/(1 − e^{−ρ}))(Σ_{k=0}^∞ (k + 1)² P_{k+1} − 2 Σ_{k=0}^∞ (k + 1) P_{k+1} + Σ_{k=0}^∞ P_{k+1})
= (ρ + ρ² − 2ρ + 1 − e^{−ρ})/(1 − e^{−ρ}) = (ρ² − ρ + 1 − e^{−ρ})/(1 − e^{−ρ}),

we get

Var(N_a) = (ρ² − ρ + 1 − e^{−ρ})/(1 − e^{−ρ}) − ((ρ + e^{−ρ} − 1)/(1 − e^{−ρ}))²
= ((ρ² − ρ + 1 − e^{−ρ})(1 − e^{−ρ}) − (ρ + e^{−ρ} − 1)²)/(1 − e^{−ρ})²
= ρ(1 − (ρ + 1)e^{−ρ})/(1 − e^{−ρ})².
Finally

Var(W) = (1/μ²)((ρ + e^{−ρ} − 1)/(1 − e^{−ρ}) + ρ(1 − (ρ + 1)e^{−ρ})/(1 − e^{−ρ})²)
= ((ρ + e^{−ρ} − 1)(1 − e^{−ρ}) + ρ(1 − (ρ + 1)e^{−ρ}))/(μ(1 − e^{−ρ}))².

Thus

Var(T) = Var(W) + 1/μ²
= ((ρ + e^{−ρ} − 1)(1 − e^{−ρ}) + ρ(1 − (ρ + 1)e^{−ρ}) + (1 − e^{−ρ})²)/(μ(1 − e^{−ρ}))²
= ρ(2 − (ρ + 2)e^{−ρ})/(μ(1 − e^{−ρ}))²,

which is the same as we obtained earlier.
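As a further check (added for illustration), the variance formula can be compared with a direct computation of E(N_a) and Var(N_a) from the distribution Π_k = P_{k+1}/(1 − e^{−ρ}) via the random-sum route:

```python
from math import exp, factorial

lam, mu = 3.0, 2.0
rho = lam / mu
K = 80                       # truncation level

# Distribution seen by a joining customer: Pi_k = P_{k+1} / (1 - e^{-rho})
Pi = [rho**(k + 1) * exp(-rho) / factorial(k + 1) / (1 - exp(-rho))
      for k in range(K)]

ENa = sum(k * p for k, p in enumerate(Pi))
ENa2 = sum(k * k * p for k, p in enumerate(Pi))
var_W = (ENa + (ENa2 - ENa**2)) / mu**2   # W as a random sum of service times
var_T_num = var_W + 1 / mu**2

# Closed form from the text
var_T = rho * (2 - (rho + 2) * exp(-rho)) / (mu * (1 - exp(-rho)))**2
```

The two values agree to floating-point accuracy.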
2.3
In the following let us consider an M/M/1 system with priorities. This means that we have two classes of customers. The two types of requests arrive according to independent Poisson processes with rates λ_1 and λ_2, respectively. The service times of both classes are assumed to be exponentially distributed with parameter μ. The system is stable if

ρ_1 + ρ_2 < 1,

where ρ_i = λ_i/μ, i = 1, 2.
Let us assume that class 1 has priority over class 2. This section is devoted to the investigation of preemptive and non-preemptive systems, and some mean values are calculated.

Preemptive Priority

According to this discipline the service of a customer belonging to class 2 is never carried out while there is a customer belonging to class 1 in the system. In other words, class 1 preempts class 2: if a class 2 customer is under service when a class 1 request arrives, the service stops and the service of the class 1 request starts. The interrupted service is resumed only when there is no class 1 customer left in the system.
Let N_i denote the number of class i customers in the system, and let T_i stand for the response time of class i requests. Our aim is to calculate E(N_i) and E(T_i) for i = 1, 2.
Since type 1 always preempts type 2, the service of class 1 customers is independent of the number of class 2 customers. Thus we have

E(T_1) = (1/μ)/(1 − ρ_1),   E(N_1) = ρ_1/(1 − ρ_1).  (2.3)
Since for all customers the service time is exponentially distributed with the same parameter, the total number of customers does not depend on the order of service. Hence for the total number of customers we get, as in an M/M/1 system,

E(N_1) + E(N_2) = (ρ_1 + ρ_2)/(1 − ρ_1 − ρ_2).  (2.4)

Therefore

E(N_2) = (ρ_1 + ρ_2)/(1 − ρ_1 − ρ_2) − ρ_1/(1 − ρ_1) = ρ_2/((1 − ρ_1)(1 − ρ_1 − ρ_2)),

and by Little's law

E(T_2) = E(N_2)/λ_2 = (1/μ)/((1 − ρ_1)(1 − ρ_1 − ρ_2)).
Example 2 Let us compare the difference when the preemptive priority discipline is applied instead of FIFO. Let λ_1 = 0.5, λ_2 = 0.25 and μ = 1. For the FIFO M/M/1 system

E(T) = 4.0,   E(W) = 3.0,   E(N) = 3.0,

while under preemptive priority

E(T_1) = 2.0,   E(W_1) = 1.0,   E(N_1) = 1.0,
E(T_2) = 8.0,   E(W_2) = 6.0,   E(N_2) = 2.0.
Non-preemptive Priority

The only difference between the two disciplines is that in this case the arrival of a class 1 customer does not interrupt the service of a class 2 request. That is why this discipline is sometimes called HOL (Head Of the Line). Of course, after the service in progress is finished, the service of class 1 starts.
By using the law of total expectations, the mean response time for class 1 can be obtained as

E(T_1) = E(N_1)(1/μ) + 1/μ + ρ_2(1/μ).

The last term shows the situation when an arriving class 1 customer finds the server busy servicing a class 2 customer. Since the service time is exponentially distributed, the residual service time has the same distribution as the original one. Furthermore, because of the Poisson arrivals, the distribution at arrival moments is the same as at random moments, that is, the probability that the server is busy with a class 2 customer is ρ_2.
By using Little's law

E(N_1) = λ_1 E(T_1),

after substitution we get

E(T_1) = ((1 + ρ_2)/μ)/(1 − ρ_1),   E(N_1) = (1 + ρ_2)ρ_1/(1 − ρ_1).

To get the means for class 2 the same procedure can be performed as in the previous case, that is, using (2.4); after substitution we obtain

E(N_2) = (1 − ρ_1(1 − ρ_1 − ρ_2))ρ_2 / ((1 − ρ_1)(1 − ρ_1 − ρ_2)),

E(T_2) = (1 − ρ_1(1 − ρ_1 − ρ_2))/μ / ((1 − ρ_1)(1 − ρ_1 − ρ_2)).
Example 3 Now let us compare the difference between the two priority disciplines. Let λ_1 = 0.5, λ_2 = 0.25 and μ = 1; then

E(T_1) = 2.5,   E(W_1) = 1.5,   E(N_1) = 1.25,
E(T_2) = 7.0,   E(W_2) = 6.0,   E(N_2) = 1.75.

Of course, knowing the mean response times and the mean numbers of customers in the system, the mean waiting times and the mean numbers of waiting customers can be obtained in the usual way.
Java applets for direct calculations can be found at
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MMcPrio/MMcPrio.html
http://www.win.tue.nl/cow/Q2/
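The mean value formulas of both disciplines can also be evaluated with a short script; the following sketch (an illustration added here, not part of the original text) reproduces the numbers of Examples 2 and 3.

```python
def preemptive(lam1, lam2, mu):
    """Mean response times/numbers under preemptive priority, cf. (2.3)-(2.4)."""
    r1, r2 = lam1 / mu, lam2 / mu
    ET1 = (1 / mu) / (1 - r1)
    EN1 = r1 / (1 - r1)
    EN2 = r2 / ((1 - r1) * (1 - r1 - r2))
    ET2 = (1 / mu) / ((1 - r1) * (1 - r1 - r2))
    return ET1, EN1, ET2, EN2

def non_preemptive(lam1, lam2, mu):
    """Mean response times/numbers under non-preemptive (HOL) priority."""
    r1, r2 = lam1 / mu, lam2 / mu
    ET1 = ((1 + r2) / mu) / (1 - r1)
    EN1 = (1 + r2) * r1 / (1 - r1)
    EN2 = (1 - r1 * (1 - r1 - r2)) * r2 / ((1 - r1) * (1 - r1 - r2))
    ET2 = (1 - r1 * (1 - r1 - r2)) / mu / ((1 - r1) * (1 - r1 - r2))
    return ET1, EN1, ET2, EN2

pre = preemptive(0.5, 0.25, 1.0)     # (E(T1), E(N1), E(T2), E(N2))
hol = non_preemptive(0.5, 0.25, 1.0)
```

For λ_1 = 0.5, λ_2 = 0.25 and μ = 1 the preemptive means are (2.0, 1.0, 8.0, 2.0) and the HOL means are (2.5, 1.25, 7.0, 1.75), as in the examples.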
2.4
Let K be the capacity of an M/M/1 system, that is, the maximum number of customers in the system including the one under service. It is easy to see that the number of customers in the system is a birth-death process with rates λ_k = λ, k = 0, …, K − 1, and μ_k = μ, k = 1, …, K. For the steady-state distribution we have

P_k = ρ^k / Σ_{i=0}^K ρ^i,   k = 0, …, K,

that is,

P_0 = 1 / Σ_{i=0}^K ρ^i = (1 − ρ)/(1 − ρ^{K+1}),   ρ ≠ 1.

It should be noted that the system is stable for any ρ > 0 when K is fixed. However, if K → ∞ then the stability condition is ρ < 1, since the distribution of the M/M/1/K system converges to the distribution of the M/M/1 system. This can be verified analytically, since as K → ∞ with ρ < 1 we have P_0 → 1 − ρ.
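For numerical work the finite distribution is simply summed. The sketch below (added for illustration) computes P_k for an M/M/1/K system, checks the closed form of P_0, and confirms the convergence to the M/M/1 value 1 − ρ for large K when ρ < 1.

```python
rho, K = 0.7, 25

# P_k proportional to rho^k over the finite state space 0..K
weights = [rho**k for k in range(K + 1)]
norm = sum(weights)
P = [w / norm for w in weights]

P0_closed = (1 - rho) / (1 - rho**(K + 1))   # closed form, valid for rho != 1
```

The tail factor ρ^{K+1} controls how close P_0 already is to the M/M/1 limit 1 − ρ.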
Similarly to the M/M/1 system, after reasonable modifications the performance measures can be computed as

U_S = 1 − P_0,   E(δ) = (1/λ) U_S/(1 − U_S).
N=
K
X
k P0 = P0
k=1
K
X
kk1
k=1
K
X
= P0
!0
= P0
k=1
1 K
0
= P0
K+1
1
0
P0
(1 )2
P0 1 (K + 1)K + (K + 1)K+1 + K+1
=
(1 )2
P0 1 (K + 1)K + KK+1
=
(1 )2
1 (K + 1)K + KK+1
.
=
(1 )(1 K+1 )
=
1 (K + 1)K (1 ) + K+1
$$\overline{Q} = \sum_{k=1}^{K}(k-1)P_k = \sum_{k=1}^{K}kP_k-\sum_{k=1}^{K}P_k = \overline{N}-U_S.$$
To obtain the distribution of the response and waiting times we have to know the distribution of the system at the moment when the tagged customer enters the system. It should be underlined that the customer has to enter the system, which is not the same as arriving: an arriving customer can join the system or can be lost because the system is full. By using the Bayes theorem it is easy to see that the distribution seen by an entering customer is
$$\Pi_k = \frac{P_k}{\sum_{i=0}^{K-1}P_i} = \frac{P_k}{1-P_K}, \qquad k = 0,\ldots,K-1.$$
Similarly to the investigations we carried out for an M/M/1 system, the mean and the density function of the response time can be obtained by the help of the law of total means and the law of total probability, respectively. For the expectation we have
$$\overline{T} = \sum_{k=0}^{K-1}\frac{k+1}{\mu}\,\Pi_k = \sum_{k=0}^{K-1}\frac{k+1}{\mu}\,\frac{P_k}{1-P_K} = \frac{1}{\lambda(1-P_K)}\sum_{k=0}^{K-1}(k+1)P_{k+1} = \frac{\overline{N}}{\lambda(1-P_K)},$$
where we used that $P_{k+1} = \rho P_k$.
Consequently
$$\overline{W} = \overline{T}-\frac{1}{\mu} = \frac{\overline{N}}{\lambda(1-P_K)}-\frac{1}{\mu}.$$
We would like to show that Little's law is valid in this case, and at the same time we can check the correctness of the formula. It can easily be seen that the average arrival rate into the system is $\bar\lambda = \lambda(1-P_K)$ and thus
$$\bar\lambda\,\overline{T} = \lambda(1-P_K)\,\frac{\overline{N}}{\lambda(1-P_K)} = \overline{N}.$$
Similarly
$$\bar\lambda\,\overline{W} = \lambda(1-P_K)\left(\frac{\overline{N}}{\lambda(1-P_K)}-\frac{1}{\mu}\right) = \overline{N}-\rho(1-P_K) = \overline{N}-U_S = \overline{Q},$$
since
$$\rho(1-P_K) = \frac{\bar\lambda}{\mu} = U_S.$$
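These relations are easy to check numerically; the following sketch (helper name is ours) computes the M/M/1/K measures from the reconstructed formulas and verifies Little's law:

```python
# Steady-state distribution and mean measures of an M/M/1/K queue.
def mm1k(lam, mu, K):
    rho = lam / mu
    norm = sum(rho**i for i in range(K + 1))
    P = [rho**k / norm for k in range(K + 1)]
    N = sum(k * p for k, p in enumerate(P))     # mean number in system
    lam_bar = lam * (1 - P[K])                  # effective arrival rate
    T = N / lam_bar                             # mean response time
    W = T - 1 / mu                              # mean waiting time
    Q = N - (1 - P[0])                          # mean queue length, N - U_S
    return P, N, T, W, Q
```

For example, with lam = 1, mu = 2, K = 5 one can check that lam*(1-P[K])*W equals Q, as derived above.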
Now let us find the density function of the response and waiting times. By using the theorem of total probability we have
$$f_T(x) = \sum_{k=0}^{K-1}\frac{\mu(\mu x)^k}{k!}e^{-\mu x}\,\frac{P_k}{1-P_K},$$
$$F_T(x) = \sum_{k=0}^{K-1}\int_0^x\frac{\mu(\mu t)^k}{k!}e^{-\mu t}\,dt\;\frac{P_k}{1-P_K} = \sum_{k=0}^{K-1}\left(1-\sum_{i=0}^{k}\frac{(\mu x)^i}{i!}e^{-\mu x}\right)\frac{P_k}{1-P_K}$$
$$= 1-\sum_{k=0}^{K-1}\sum_{i=0}^{k}\frac{(\mu x)^i}{i!}e^{-\mu x}\,\frac{P_k}{1-P_K}.$$
These formulas are more complicated than in the case of an M/M/1 system due to the finite summation, but it is not difficult to see that in the limiting case as $K\to\infty$ we have
$$f_T(x) = \mu(1-\rho)e^{-\mu(1-\rho)x}.$$
For the density and distribution function of the waiting time we obtain
$$f_W(0) = \Pi_0 = \frac{P_0}{1-P_K},$$
$$f_W(x) = \sum_{k=1}^{K-1}\frac{\mu(\mu x)^{k-1}}{(k-1)!}e^{-\mu x}\,\frac{P_k}{1-P_K}, \qquad x>0,$$
$$F_W(x) = \frac{P_0}{1-P_K}+\sum_{k=1}^{K-1}\left(1-\sum_{i=0}^{k-1}\frac{(\mu x)^i}{i!}e^{-\mu x}\right)\frac{P_k}{1-P_K} = 1-\sum_{k=1}^{K-1}\sum_{i=0}^{k-1}\frac{(\mu x)^i}{i!}e^{-\mu x}\,\frac{P_k}{1-P_K}.$$
The probability of loss, that is the probability that an arriving customer finds the system full, is
$$P_B(K,\rho) = \frac{P_K}{\sum_{k=0}^{K}P_k} = P_K = \frac{\rho^K}{\sum_{k=0}^{K}\rho^k}.$$
Notice that
$$P_B(K,\rho) = \frac{\rho^K}{\sum_{k=0}^{K-1}\rho^k+\rho^K} = \frac{\rho\,P_B(K-1,\rho)}{1+\rho\,P_B(K-1,\rho)}.$$
By the help of this recursion the smallest capacity K can be found for which
$$P_B(K,\rho) < \hat{P},$$
where $\hat{P}$ is a predefined limit value for the probability of loss. To find the value of K without recursion we have to solve the inequality
$$\frac{\rho^K(1-\rho)}{1-\rho^{K+1}} < \hat{P}.$$
Since for $\rho < 1$
$$\frac{\rho^K(1-\rho)}{1-\rho^{K+1}} < \sum_{k=K}^{\infty}\rho^k(1-\rho) = \rho^K,$$
thus if
$$\rho^K < \hat{P},$$
then $P_B(K,\rho) < \hat{P}$. That is
$$K\ln\rho < \ln\hat{P}, \qquad K > \frac{\ln\hat{P}}{\ln\rho}.$$
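Both approaches are straightforward to code; a small sketch (function names are ours) finds the smallest capacity by the recursion and compares it with the logarithmic bound:

```python
from math import ceil, log

# Smallest K with P_B(K, rho) < p_hat, using the recursion
# P_B(K) = rho*P_B(K-1) / (1 + rho*P_B(K-1)), P_B(0) = 1.
def min_capacity(rho, p_hat):
    pb, K = 1.0, 0                 # K = 0: every arriving customer is lost
    while pb >= p_hat:
        pb = rho * pb / (1 + rho * pb)
        K += 1
    return K, pb

# The bound K > ln(p_hat)/ln(rho) is sufficient but may overshoot slightly.
def capacity_bound(rho, p_hat):
    return ceil(log(p_hat) / log(rho))
```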
Now let us turn our attention to the Laplace-transform of the response and waiting times. First let us compute it for the response time. Similarly to the previous arguments we have
$$L_T(s) = \sum_{k=0}^{K-1}\left(\frac{\mu}{\mu+s}\right)^{k+1}\frac{P_k}{1-P_K} = \frac{P_0}{1-P_K}\,\frac{\mu}{\mu+s}\sum_{k=0}^{K-1}\left(\frac{\lambda}{\mu+s}\right)^k = \frac{\mu P_0}{1-P_K}\cdot\frac{1-\left(\frac{\lambda}{\mu+s}\right)^K}{\mu-\lambda+s}.$$
The Laplace-transform of the waiting time can be obtained as
$$L_W(s) = \sum_{k=0}^{K-1}\left(\frac{\mu}{\mu+s}\right)^{k}\frac{P_k}{1-P_K} = \frac{P_0}{1-P_K}\sum_{k=0}^{K-1}\left(\frac{\lambda}{\mu+s}\right)^k = \frac{P_0}{1-P_K}\cdot\frac{(\mu+s)\left(1-\left(\frac{\lambda}{\mu+s}\right)^K\right)}{\mu-\lambda+s}.$$
By the help of the Laplace-transforms the higher moments of the involved random variables can be computed, too.
Java applets for direct calculations can be found at
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MM1K/MM1K.html
2.5 The M/M/∞ Queue
Similarly to the previous systems it is easy to see that the number of customers in the system, that is the process $(N(t),\,t\ge 0)$, is a birth-death process with rates
$$\lambda_k = \lambda, \quad k = 0,1,\ldots, \qquad \mu_k = k\mu, \quad k = 1,2,\ldots.$$
Hence the steady-state distribution can be obtained as
$$P_k = \frac{\varrho^k}{k!}P_0, \qquad\text{where}\qquad P_0^{-1} = \sum_{k=0}^{\infty}\frac{\varrho^k}{k!} = e^{\varrho}.$$
That is
$$P_k = \frac{\varrho^k}{k!}e^{-\varrho},$$
showing that N follows a Poisson law with parameter $\varrho$. It is easy to see that the performance measures can be computed as
$$\overline{N} = \varrho, \qquad \overline{T} = \frac{1}{\mu}, \qquad \overline{W} = 0, \qquad U_r = 1-e^{-\varrho},$$
where $U_r$ denotes the probability that the system is not empty. Since
$$U_r = \frac{E(\delta_r)}{E(\delta_r)+1/\lambda},$$
for the mean busy period of the system we have
$$E(\delta_r) = \frac{1}{\lambda}\,\frac{U_r}{1-U_r} = \frac{e^{\varrho}-1}{\lambda}.$$
It can be proved that these formulas remain valid for an M/G/∞ system as well, where $E(S) = 1/\mu$.
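As a quick numerical illustration (the function name is ours), the Poisson form of the distribution can be checked directly:

```python
from math import exp, factorial

# M/M/infinity: N is Poisson(rho); truncate the support for a numerical check.
def mm_inf(lam, mu, kmax=80):
    rho = lam / mu
    P = [rho**k / factorial(k) * exp(-rho) for k in range(kmax)]
    N = sum(k * p for k, p in enumerate(P))     # mean, should equal rho
    Ur = 1 - P[0]                               # P(system is not empty)
    return P, N, Ur
```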
2.6 The M/M/n/n Queue, the Erlang Loss System
This system is the oldest and thus the most famous system in queueing theory. Traffic theory, or congestion theory, originated with the investigation of this system, and Erlang was the first who obtained his well-reputed formulas, see for example Erlang [21, 22].
By assumption, customers arrive according to a Poisson process and the service times are exponentially distributed. However, if all n servers are busy when a new customer arrives, it will be lost because the system is full. The most important question is what proportion of the customers is lost.
The process $(N(t),\,t\ge 0)$ is said to be in state k if k servers are busy, which is the same as k customers being in the system. It is easy to see that $(N(t),\,t\ge 0)$ is a birth-death process with rates
$$\lambda_k = \begin{cases}\lambda, & \text{if } k<n,\\ 0, & \text{if } k\ge n,\end{cases} \qquad \mu_k = k\mu, \quad k = 1,2,\ldots,n.$$
Clearly the steady-state distribution exists since the process has a finite state space. The stationary distribution can be obtained as
$$P_k = \begin{cases}\dfrac{\varrho^k}{k!}P_0, & \text{if } k\le n,\\[4pt] 0, & \text{if } k>n.\end{cases}$$
Due to the normalizing condition we have
$$P_0 = \left(\sum_{k=0}^{n}\frac{\varrho^k}{k!}\right)^{-1}.$$
By using the Bayes rule it is easy to see that $P_n$ is the probability that an arriving customer is lost, which is denoted by $B(n,\varrho)$. For moderate n the probability $P_0$ can easily be computed. For large n and small $\varrho$ we have $P_0 \approx e^{-\varrho}$, and thus
$$P_k \approx \frac{\varrho^k}{k!}e^{-\varrho},$$
that is the Poisson distribution. For large n and large $\varrho$
$$\sum_{j=0}^{n}\frac{\varrho^j}{j!} \neq e^{\varrho}.$$
However, in this case the central limit theorem can be used, since the denominator is the sum of the first $(n+1)$ terms of a Poisson distribution with mean $\varrho$. Thus by the central limit theorem this Poisson distribution can be approximated by a normal law with mean $\varrho$ and variance $\varrho$, hence
$$P_n \approx \frac{\Phi(s)-\Phi\!\left(s-\tfrac{1}{\sqrt{\varrho}}\right)}{\Phi(s)} = 1-\frac{\Phi\!\left(s-\tfrac{1}{\sqrt{\varrho}}\right)}{\Phi(s)},$$
where
$$\Phi(s) = \int_{-\infty}^{s}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{x^2}{2}}\,dx \qquad\text{and}\qquad s = \frac{n+\frac{1}{2}-\varrho}{\sqrt{\varrho}}.$$
Another way to calculate $B(n,\varrho)$ is to find a recursion. This can be obtained as follows
$$B(n,\varrho) = \frac{\frac{\varrho^n}{n!}}{\sum_{i=0}^{n}\frac{\varrho^i}{i!}} = \frac{\frac{\varrho}{n}\,\frac{\varrho^{n-1}}{(n-1)!}}{\sum_{i=0}^{n-1}\frac{\varrho^i}{i!}+\frac{\varrho}{n}\,\frac{\varrho^{n-1}}{(n-1)!}} = \frac{\frac{\varrho}{n}\,B(n-1,\varrho)}{1+\frac{\varrho}{n}\,B(n-1,\varrho)} = \frac{\varrho\,B(n-1,\varrho)}{n+\varrho\,B(n-1,\varrho)},$$
starting with $B(1,\varrho) = \dfrac{\varrho}{1+\varrho}$.
Due to the great importance of $B(n,\varrho)$ in practical problems, so-called calculators have been developed which can be found at
http://www.erlang.com/calculator/
To compare the approximations and the exact values we have also developed our own Java script which can be used at
http://jani.uw.hu/erlang/erlang.html
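Such a calculator is easy to reproduce; a minimal sketch (the function name is ours) implementing the recursion above:

```python
# Erlang B loss probability via B(n, rho) = rho*B(n-1, rho)/(n + rho*B(n-1, rho)),
# starting from B(0, rho) = 1; numerically stable even for large n and rho.
def erlang_b(n, rho):
    b = 1.0
    for k in range(1, n + 1):
        b = rho * b / (k + rho * b)
    return b
```

For rho = 30 this gives B(41, 30) ≈ 0.0104, matching the exact value quoted in the example below.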
Now let us determine the main performance measures of this M/M/n/n system.
The mean number of customers in the system, which is the same as the mean number of busy servers, is
$$\overline{N} = \bar n = \sum_{j=0}^{n}jP_j = \sum_{j=1}^{n}\frac{\varrho^j}{(j-1)!}P_0 = \varrho\sum_{i=0}^{n-1}\frac{\varrho^i}{i!}P_0 = \varrho(1-P_n).$$
Since each idle server is chosen with the same probability, the utilization of a given server is
$$U_s = \frac{\bar n}{n} = \frac{\varrho}{n}(1-P_n).$$
For the mean busy period of the system notice that the mean idle period is $1/\lambda$, hence
$$1-P_0 = \frac{E(\delta_r)}{E(\delta_r)+1/\lambda}, \qquad E(\delta_r) = \frac{1}{\lambda}\,\frac{1-P_0}{P_0} = \frac{1}{\lambda}\sum_{i=1}^{n}\frac{\varrho^i}{i!}.$$
Example 4 Let $\lambda = 10$, $1/\mu = 3$, that is $\varrho = 30$, and let us find n such that the probability of loss is $P_n = 0.01$. By the normal approximation
$$P_n = 0.01 \approx 1-\frac{\Phi\!\left(\frac{n-\frac{1}{2}-30}{\sqrt{30}}\right)}{\Phi\!\left(\frac{n+\frac{1}{2}-30}{\sqrt{30}}\right)}, \qquad\text{that is}\qquad 0.99\,\Phi\!\left(\frac{n+\frac{1}{2}-30}{\sqrt{30}}\right) = \Phi\!\left(\frac{n-\frac{1}{2}-30}{\sqrt{30}}\right).$$
It is not difficult to verify by using the table of the standard normal distribution that n = 41. Thus the approximate value of $P_{41}$ is 0.009917321712214377, while the exact value is 0.01043318100246811.
For example, if $\lambda = 2$ and $P_0 = 0.606$, then the mean busy period of the system is
$$E(\delta_r) = \frac{1-P_0}{\lambda P_0} = \frac{0.394}{2\cdot 0.606} = \frac{0.394}{1.212} = 0.32\ \text{minutes}.$$
Similarly, if $\varrho = 50\cdot 0.5 = 25$ and $P_n = 0.01$, then the mean number of busy servers is
$$\bar n = \varrho(1-P_n) = 25\,(1-0.01) = 24.75.$$
Heterogeneous Servers
In the case of a heterogeneous M/M/n/n system the service time distribution depends on the index of the server, that is, the service time is exponentially distributed with parameter $\mu_i$ for server i. An arriving customer chooses randomly among the idle servers, that is, each idle server is chosen with the same probability. Since the servers are heterogeneous it is not enough to know the number of busy servers, but we have to identify them by their index. It means that we have to deal with general Markov processes.
Let $(i_1,\ldots,i_k)$ denote the indexes of the busy servers, which are combinations of n objects taken k at a time without replacement. Thus the state space of the Markov chain is the set of these combinations, that is $\left(0,\ (i_1,\ldots,i_k)\in C_k^n,\ k = 1,\ldots,n\right)$.
Let us denote by
$$P_0 = P(0), \qquad P(i_1,\ldots,i_k) = P((i_1,\ldots,i_k)), \qquad (i_1,\ldots,i_k)\in C_k^n,\ k = 1,\ldots,n,$$
the steady-state distribution of the chain, which exists since the chain has a finite state space and it is irreducible. The set of steady-state balance equations can be written as
$$\lambda P_0 = \sum_{j=1}^{n}\mu_jP(j), \tag{2.5}$$
$$\left(\lambda+\sum_{j=1}^{k}\mu_{i_j}\right)P(i_1,\ldots,i_k) = \frac{\lambda}{n-k+1}\sum_{j=1}^{k}P(i_1,\ldots,i_{j-1},i_{j+1},\ldots,i_k)+\sum_{j\neq i_1,\ldots,i_k}\mu_jP(i_1',\ldots,i_k',j'), \tag{2.6}$$
$$\sum_{j=1}^{n}\mu_jP(1,\ldots,n) = \lambda\sum_{j=1}^{n}P(1,\ldots,j-1,j+1,\ldots,n), \tag{2.7}$$
where $(i_1',\ldots,i_k',j')$ denotes the ordered set of the indexes $i_1,\ldots,i_k,j$; $i_0$ and $i_{k+1}$ are not defined. Despite the large number of unknowns, which is $2^n$, the solution is quite simple, namely
$$P(i_1,\ldots,i_k) = (n-k)!\prod_{j=1}^{k}\varrho_{i_j}\,C, \tag{2.8}$$
where $\varrho_j = \dfrac{\lambda}{\mu_j}$, $j = 1,\ldots,n$, and the constant C can be obtained from the normalizing condition
$$P_0+\sum_{k=1}^{n}\sum_{(i_1,\ldots,i_k)\in C_k^n}P(i_1,\ldots,i_k) = 1.$$
Let us check the balance equations. For (2.5), since $P_0 = n!\,C$, we have
$$\sum_{j=1}^{n}\mu_jP(j) = \sum_{j=1}^{n}\mu_j\varrho_j(n-1)!\,C = \sum_{j=1}^{n}\lambda(n-1)!\,C = \lambda\,n!\,C = \lambda P_0.$$
Similarly, for (2.7)
$$\sum_{j=1}^{n}\mu_jP(1,\ldots,n) = \sum_{j=1}^{n}\mu_j\varrho_j\prod_{i\neq j}\varrho_i\,C = \lambda\sum_{j=1}^{n}\prod_{i\neq j}\varrho_i\,C = \lambda\sum_{j=1}^{n}P(1,\ldots,j-1,j+1,\ldots,n).$$
Finally let us check the most complicated one, the second set of equations (2.6), namely
$$\left(\lambda+\sum_{j=1}^{k}\mu_{i_j}\right)(n-k)!\prod_{l=1}^{k}\varrho_{i_l}\,C = \frac{\lambda}{n-k+1}\sum_{j=1}^{k}(n-k+1)!\prod_{l\neq j}\varrho_{i_l}\,C+\sum_{j\neq i_1,\ldots,i_k}\mu_j(n-k-1)!\,\varrho_j\prod_{l=1}^{k}\varrho_{i_l}\,C.$$
Since $\lambda\prod_{l\neq j}\varrho_{i_l} = \mu_{i_j}\prod_{l=1}^{k}\varrho_{i_l}$ and $\mu_j\varrho_j = \lambda$, the right-hand side equals
$$(n-k)!\sum_{j=1}^{k}\mu_{i_j}\prod_{l=1}^{k}\varrho_{i_l}\,C+(n-k)(n-k-1)!\,\lambda\prod_{l=1}^{k}\varrho_{i_l}\,C = (n-k)!\left(\lambda+\sum_{j=1}^{k}\mu_{i_j}\right)\prod_{l=1}^{k}\varrho_{i_l}\,C,$$
which is exactly the left-hand side.
The utilization of server j can be obtained as
$$U_j = \sum_{k=1}^{n}\;\sum_{(i_1,\ldots,i_k)\ni j}P(i_1,\ldots,i_k),$$
and thus
$$U_j = \frac{1/\mu_j}{1/\mu_j+E(e_j)},$$
where $E(e_j)$ is the mean idle period of the jth server. Hence
$$E(e_j) = \frac{1}{\mu_j}\,\frac{1-U_j}{U_j}.$$
The mean number of busy servers is
$$\overline{N} = \sum_{j=1}^{n}U_j,$$
and the throughput of the system is $\sum_{j=1}^{n}U_j\mu_j$.
In the homogeneous case $\mu_j = \mu$, that is $\varrho_j = \varrho = \lambda/\mu$, summing over all $\binom{n}{k}$ states with k busy servers we get back the Erlang distribution
$$P_k = \binom{n}{k}(n-k)!\,\varrho^k\,C = \frac{n!}{k!}\,\varrho^k\,C = \frac{\varrho^k}{k!}\,P_0 = \frac{\frac{\varrho^k}{k!}}{\sum_{j=0}^{n}\frac{\varrho^j}{j!}}.$$
2.7 The M/M/n Queue
It is a variation of the classical queue assuming that the service is provided by n servers operating independently of each other. This modification is natural, since if the mean arrival rate is greater than the service rate the system will not be stable; that is why the number of servers should be increased. However, in this situation we have parallel services and we are interested in the distribution of the first service completion. That is why we need the following observation: if $X_1,\ldots,X_r$ are independent exponentially distributed random variables with parameters $\mu_1,\ldots,\mu_r$, then
$$P\left(\min(X_1,\ldots,X_r) < x\right) = 1-\prod_{i=1}^{r}P(X_i\ge x) = 1-e^{-\left(\sum_{i=1}^{r}\mu_i\right)x},$$
that is, the minimum is exponentially distributed with parameter $\sum_{i=1}^{r}\mu_i$.
Similarly to the earlier investigations, it can easily be verified that the number of customers in the system is a birth-death process with the following transition probabilities
$$P_{k,k-1}(h) = (1-(\lambda h+o(h)))(\mu_kh+o(h))+o(h) = \mu_kh+o(h),$$
$$P_{k,k+1}(h) = (\lambda h+o(h))(1-(\mu_kh+o(h)))+o(h) = \lambda h+o(h),$$
where
$$\mu_k = \min(k,n)\mu = \begin{cases}k\mu, & \text{for } 0\le k\le n,\\ n\mu, & \text{for } n<k.\end{cases}$$
Hence for $k\le n$
$$P_k = P_0\prod_{i=0}^{k-1}\frac{\lambda}{(i+1)\mu} = P_0\,\frac{\varrho^k}{k!},$$
while for $k>n$
$$P_k = P_0\prod_{i=0}^{n-1}\frac{\lambda}{(i+1)\mu}\prod_{j=n}^{k-1}\frac{\lambda}{n\mu} = P_0\,\frac{\varrho^k}{n!\,n^{k-n}}.$$
In summary
$$P_k = \begin{cases}P_0\,\dfrac{\varrho^k}{k!}, & \text{for } k\le n,\\[4pt] P_0\,\dfrac{a^{k-n}\varrho^n}{n!}, & \text{for } k>n,\end{cases} \qquad\text{where}\qquad a = \frac{\varrho}{n} = \frac{\lambda}{n\mu} < 1.$$
This a is exactly the utilization of a given server. Furthermore
$$P_0 = \left(1+\sum_{k=1}^{n-1}\frac{\varrho^k}{k!}+\sum_{k=n}^{\infty}\frac{\varrho^k}{n!\,n^{k-n}}\right)^{-1},$$
and thus
$$P_0 = \left(\sum_{k=0}^{n-1}\frac{\varrho^k}{k!}+\frac{\varrho^n}{n!}\,\frac{1}{1-a}\right)^{-1}.$$
Since the arrivals follow a Poisson law, the distribution of the system at arrival instants equals the distribution at random moments, hence the probability that an arriving customer has to wait is
$$P(\text{waiting}) = \sum_{k=n}^{\infty}P_k = P_0\sum_{k=n}^{\infty}\frac{\varrho^k}{n!\,n^{k-n}} = P_0\,\frac{\varrho^n}{n!}\,\frac{1}{1-a} = \frac{\frac{\varrho^n}{n!}\,\frac{n}{n-\varrho}}{\sum_{k=0}^{n-1}\frac{\varrho^k}{k!}+\frac{\varrho^n}{n!}\,\frac{n}{n-\varrho}}.$$
This probability is frequently used in different practical problems, for example in telephone systems and call centers, just to mention some of them. It is also a very famous formula which is referred to as Erlang's C formula, or Erlang's delay formula, and it is denoted by $C(n,\varrho)$, where $\varrho = \lambda/\mu$.
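A direct evaluation of C(n, ϱ) is straightforward (a sketch; the function name is ours):

```python
from math import factorial

# Erlang C (delay) probability for an M/M/n queue, rho = lam/mu < n.
def erlang_c(n, rho):
    a = rho / n
    tail = rho**n / (factorial(n) * (1 - a))    # sum of P_k/P_0 for k >= n
    p0_inv = sum(rho**k / factorial(k) for k in range(n)) + tail
    return tail / p0_inv
```

For instance, erlang_c(3, 0.9) ≈ 0.070, the waiting probability appearing in the runway example below.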
The main performance measures of the system can be obtained as follows. For the mean queue length we have
$$\overline{Q} = \sum_{k=n}^{\infty}(k-n)P_k = \sum_{j=0}^{\infty}jP_{n+j} = \sum_{j=0}^{\infty}j\,\frac{\varrho^{n+j}}{n!\,n^j}\,P_0 = P_0\,\frac{\varrho^n}{n!}\,a\,\frac{d}{da}\sum_{j=0}^{\infty}a^j$$
$$= P_0\,\frac{\varrho^n}{n!}\,\frac{a}{(1-a)^2} = \frac{a}{1-a}\,C(n,\varrho).$$
It is easy to check that the distribution is properly normalized, since
$$\sum_{k=0}^{\infty}P_k = \left(\sum_{k=0}^{n-2}\frac{\varrho^k}{k!}+\frac{\varrho^{n-1}}{(n-1)!}+\frac{\varrho^{n-1}}{(n-1)!}\,\frac{a}{1-a}\right)P_0 = \left(\sum_{k=0}^{n-1}\frac{\varrho^k}{k!}+\frac{\varrho^n}{n!}\,\frac{1}{1-a}\right)P_0 = \frac{1}{P_0}\,P_0 = 1.$$
For the mean number of customers in the system we have
$$\overline{N} = \sum_{k=0}^{\infty}kP_k = \sum_{k=0}^{n-1}kP_k+\sum_{k=n}^{\infty}(k-n)P_k+n\sum_{k=n}^{\infty}P_k = \overline{S}+\overline{Q} = \varrho+\frac{a}{1-a}\,C(n,\varrho),$$
where
$$\overline{S} = \sum_{k=0}^{n-1}kP_k+n\sum_{k=n}^{\infty}P_k = \varrho$$
is the mean number of busy servers; hence $\overline{N}-\overline{Q} = \overline{S} = \varrho$.
Distribution of the waiting time
An arriving customer has to wait if at his arrival the number of customers in the system is at least n. In this case the time until the first service completion is exponentially distributed with parameter $n\mu$; consequently, if there are n+j customers in the system the waiting time is Erlang distributed with parameters $(j+1,\,n\mu)$. By applying the theorem of total probability, for the density function of the waiting time we have, for $x>0$,
$$f_W(x) = \sum_{j=0}^{\infty}P_{n+j}\,(n\mu)^{j+1}\,\frac{x^j}{j!}\,e^{-n\mu x} = P_0\,\frac{\varrho^n}{n!}\,n\mu\,e^{-n\mu x}\sum_{j=0}^{\infty}\frac{(an\mu x)^j}{j!}$$
$$= P_0\,\frac{\varrho^n}{n!}\,n\mu\,e^{-n\mu(1-a)x} = P(\text{waiting})\,n\mu(1-a)\,e^{-n\mu(1-a)x}.$$
Hence
$$P(W>x) = \int_x^{\infty}f_W(u)\,du = P(\text{waiting})\,e^{-n\mu(1-a)x} = C(n,\varrho)\,e^{-(n\mu-\lambda)x}.$$
Therefore the distribution function can be written as
$$F_W(x) = 1-P(\text{waiting})+P(\text{waiting})\left(1-e^{-n\mu(1-a)x}\right) = 1-C(n,\varrho)\,e^{-n\mu(1-a)x},$$
and the mean waiting time is
$$\overline{W} = \int_0^{\infty}xf_W(x)\,dx = P_0\,\frac{\varrho^n}{n!\,(1-a)^2\,n\mu} = \frac{C(n,\varrho)}{\mu(n-\varrho)}.$$
For the density function of the response time T = W+S first we compute the convolution
$$f_{W+S}(z) = \int_0^zf_W(x)\,\mu e^{-\mu(z-x)}\,dx = P(\text{waiting})\,n\mu(1-a)\,\mu e^{-\mu z}\int_0^ze^{-\mu(n-1-\varrho)x}\,dx$$
$$= P_0\,\frac{\varrho^n}{n!}\,\frac{n\mu}{n-1-\varrho}\,e^{-\mu z}\left(1-e^{-\mu(n-1-\varrho)z}\right), \qquad n-1-\varrho\neq 0.$$
Therefore, taking into account the atom of W at 0,
$$f_T(x) = (1-P(\text{waiting}))\,\mu e^{-\mu x}+f_{W+S}(x) = \mu e^{-\mu x}\left(1+\frac{C(n,\varrho)}{n-1-\varrho}\left(1-(n-\varrho)\,e^{-\mu(n-1-\varrho)x}\right)\right).$$
Consequently, for the complement of the distribution function of the response time we have
$$P(T>x) = \int_x^{\infty}f_T(y)\,dy = e^{-\mu x}\left(1+\frac{C(n,\varrho)}{n-1-\varrho}\left(1-e^{-\mu(n-1-\varrho)x}\right)\right).$$
For the mean response time we obtain
$$\overline{T} = \int_0^{\infty}xf_T(x)\,dx = \frac{1}{\mu}+P_0\,\frac{\varrho^n}{n!\,n\mu}\,\frac{1}{(1-a)^2} = \frac{1}{\mu}+\overline{W},$$
as it was expected.
In the stationary case the mean number of arriving customers should be equal to the mean number of departing customers, so the mean number of customers in the system is equal to the number of customers arriving during a mean response time. That is
$$\lambda\overline{T} = \overline{N} = \overline{Q}+\varrho, \qquad \lambda\overline{W} = \overline{Q}.$$
These are Little's formulas, which can be proved by simple calculations. As we have seen
$$\overline{N} = \varrho+P_0\,\frac{\varrho^n}{n!\,(1-a)^2}\,a.$$
Since
$$\overline{T} = \frac{1}{\mu}+P_0\,\frac{\varrho^n}{n!\,n\mu}\,\frac{1}{(1-a)^2},$$
thus
$$\lambda\overline{T} = \varrho+P_0\,\frac{\varrho^n}{n!}\,\frac{a}{(1-a)^2} = \overline{N},$$
because $\lambda/(n\mu) = a$. Furthermore
$$\overline{Q} = \lambda\overline{W},$$
since
$$\lambda\overline{W} = \lambda\,\frac{C(n,\varrho)}{\mu(n-\varrho)} = \frac{a}{1-a}\,C(n,\varrho) = \overline{Q}.$$
Overall utilization of the servers can be obtained as follows. The utilization of a single server is
$$U_s = \sum_{k=1}^{n-1}\frac{k}{n}\,P_k+\sum_{k=n}^{\infty}P_k = \frac{\overline{S}}{n} = \frac{\varrho}{n} = a.$$
For the mean busy period of the whole system, since the mean idle period is $1/\lambda$, we have
$$P_0 = \frac{1/\lambda}{1/\lambda+E(\delta_r)}, \qquad E(\delta_r) = \frac{1}{\lambda}\,\frac{1-P_0}{P_0}.$$
If the individual servers are considered, then we assume that a given server becomes busy earlier if it became idle earlier. Hence if j < n customers are in the system then the number of idle servers is n-j.
Let us consider a given server. On the condition that at the instant when it became idle the number of customers in the system was j, its mean idle time is
$$\bar e_j = \frac{n-j}{\lambda},$$
and the probability of this condition is
$$a_j = \frac{P_j}{\sum_{i=0}^{n-1}P_i}.$$
Then applying the law of total expectation for its mean idle period we have
$$\bar e = \sum_{j=0}^{n-1}a_j\bar e_j = \sum_{j=0}^{n-1}\frac{(n-j)P_j}{\lambda\sum_{i=0}^{n-1}P_i} = \frac{n-\overline{S}}{\lambda\,P(e)},$$
where $P(e)$ denotes the probability that an arriving customer finds an idle server. Since
$$U_s = a = \frac{E(\bar\delta)}{\bar e+E(\bar\delta)},$$
where $E(\bar\delta)$ denotes the mean busy period of the server, thus
$$a\bar e = (1-a)E(\bar\delta), \qquad E(\bar\delta) = \frac{a}{1-a}\,\frac{n-\overline{S}}{\lambda\,P(e)}.$$
In particular, for n = 1 we have $P(e) = P_0 = 1-a$, thus
$$E(\bar\delta) = \frac{a}{1-a}\,\frac{1-\varrho}{\lambda(1-\varrho)} = \frac{1}{\mu-\lambda},$$
the well-known mean busy period of the M/M/1 system.
The delay probability can also be expressed by the Erlang B formula, namely
$$C(m,\varrho) = \frac{\frac{\varrho^m}{m!}\,\frac{1}{1-\frac{\varrho}{m}}}{\sum_{k=0}^{m-1}\frac{\varrho^k}{k!}+\frac{\varrho^m}{m!}\,\frac{1}{1-\frac{\varrho}{m}}} = \frac{B(m,\varrho)}{1-\frac{\varrho}{m}\left(1-B(m,\varrho)\right)}.$$
As we have seen in the previous investigations, the delay probability $C(n,\varrho)$ plays an important role in determining the main performance measures. Notice that the above formula can be rewritten as
$$C(n,\varrho) = \frac{nB(n,\varrho)}{n-\varrho+\varrho B(n,\varrho)},$$
moreover it can be proved that there exists a recursion for it, namely
$$C(n,\varrho) = \frac{(n-1-\varrho)\,\varrho\,C(n-1,\varrho)}{(n-1)(n-\varrho)-\varrho\,C(n-1,\varrho)},$$
starting with $C(1,\varrho) = \varrho = B(1,\varrho)(1+\varrho)$.
Let
$$k = \frac{n-\varrho}{\sqrt{\varrho}}, \qquad\text{thus}\qquad n = \varrho+k\sqrt{\varrho}.$$
For large $\varrho$ the Erlang B formula can be approximated by $B(n,\varrho)\approx\frac{\varphi(k)}{\sqrt{\varrho}\,\Phi(k)}$, where $\varphi$ denotes the standard normal density. Hence
$$C(n,\varrho) = \frac{nB(n,\varrho)}{n-\varrho+\varrho B(n,\varrho)} \approx \frac{\left(\varrho+k\sqrt{\varrho}\right)\frac{\varphi(k)}{\sqrt{\varrho}\,\Phi(k)}}{k\sqrt{\varrho}+\varrho\,\frac{\varphi(k)}{\sqrt{\varrho}\,\Phi(k)}} \approx \frac{\frac{\varphi(k)}{\Phi(k)}}{k+\frac{\varphi(k)}{\Phi(k)}} = \left(1+k\,\frac{\Phi(k)}{\varphi(k)}\right)^{-1}.$$
That is, if we would like to find such an $n^*$ for which $C(n^*,\varrho) < \epsilon$, then we have to solve the equation
$$\left(1+k^*\,\frac{\Phi(k^*)}{\varphi(k^*)}\right)^{-1} = \epsilon,$$
which can be rewritten as
$$k^*\,\frac{\Phi(k^*)}{\varphi(k^*)} = \frac{1-\epsilon}{\epsilon}.$$
If $k^*$ is given, then
$$n^* = \varrho+k^*\sqrt{\varrho}.$$
It should be noted that the search for $k^*$ is independent of the values of $\varrho$ and n, thus it can be calculated in advance for various values of $\epsilon$. For example, if $\epsilon = 0.8,\ 0.5,\ 0.2,\ 0.1$, then the corresponding $k^*$ values are $0.1728,\ 0.5061,\ 1.062,\ 1.420$.
The following values illustrate how well the approximation $n^* = \varrho+k^*\sqrt{\varrho}$ (with $\epsilon = 0.5$, $k^* = 0.5061$) tracks the exact $n^*$:

$\varrho$:     1   5   10   50   100   250   500   1000
exact $n^*$:   2   7   12   54   106   259   512   1017

It can also be shown that a common queue performs better than separate queues for the same total load; that is the reason that the joint queue is used in practice.
$C(n,\varrho)$ is of great importance in practical problems, hence so-called calculators have been developed and can be used at the link
developed and can be used at the link
http://www.erlang.com/calculator/
Java applets for direct calculations can be found at
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MMc/MMc.html
Example 6 Consider a service center with 4 servers, where $\lambda = 6$, $\mu = 2$. Find the performance measures of the system.
Solution: $\varrho = 3$, $a = 3/4$, and
$$P_0 = 0.0377, \quad \overline{Q} = 1.528, \quad \overline{N} = 4.528, \quad \overline{S} = \bar n = 3, \quad U_n = \frac{3}{4},$$
$$\overline{T} = 0.755 \text{ time unit}, \quad \bar e = 0.35 \text{ time unit}, \quad E(\bar\delta) = 1 \text{ time unit}, \quad U_r = 0.9623.$$
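The numbers of Example 6 can be reproduced with a short script (names are ours), assembling the measures derived in this section:

```python
from math import factorial

# Main M/M/n performance measures, cross-checked against Example 6.
def mmn_measures(lam, mu, n):
    rho = lam / mu
    a = rho / n
    p0 = 1 / (sum(rho**k / factorial(k) for k in range(n))
              + rho**n / (factorial(n) * (1 - a)))
    C = p0 * rho**n / (factorial(n) * (1 - a))   # P(waiting), Erlang C
    Q = a / (1 - a) * C                          # mean queue length
    N = rho + Q                                  # mean number in system
    W = C / (mu * (n - rho))                     # mean waiting time
    T = W + 1 / mu                               # mean response time
    return p0, C, Q, N, W, T
```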
Example 7 Find the number of runways of an airport in such a way that the probability of waiting of an airplane should not exceed 0.1. The arrivals are supposed to follow a Poisson process with rate $\lambda = 27$ per hour, and the service times are exponentially distributed with a mean of 2 minutes.
Solution: First use the same time unit for the rates; let us compute in hours. Hence $\mu = 30$, and for stability we need $\frac{\lambda}{n\mu} < 1$, which results in $n \ge 1$. Denote by $P_i(W>0)$ the probability of waiting for i runways. By applying the corresponding formulas we get
$$P_2(W>0) = 0.278, \qquad P_3(W>0) = 0.070, \qquad P_4(W>0) = 0.014.$$
Thus 3 runways are sufficient, and in this case $\overline{Q} = 0.03$.
Example 8 Consider a fast food shop where the customers arrive according to a Poisson law, one customer in 6 seconds on average. The service time is exponentially distributed with a mean of 20 seconds. Assuming that the hourly maintenance cost of a server is 100 Hungarian Forints and the hourly waiting cost of a customer is the same, find the number of servers which minimizes the mean cost per hour.
Solution: By Little's law the waiting cost per hour is $100\,\overline{Q} = 100\,\lambda\overline{W}$, so the total cost to be minimized is $100\,n+100\,\overline{Q}$. With $\lambda = 1/6$ and $\mu = 1/20$ we have $\varrho = 20/6$, thus for stability $n \ge 4$. Computing for the values $n = 4, 5, 6, 7, 8$ we have found that the minimum is achieved at n = 5. In this case the performance measures are
$$\overline{W} = 3.9 \text{ s}, \quad E(\bar\delta) = 29.7 \text{ s}, \quad \bar e = 14.9 \text{ s}, \quad \overline{Q} = 0.65,$$
$$\overline{N} = 3.15, \quad \overline{S} = 2.5, \quad P(e) = 0.66, \quad P(W>0) = 0.34.$$
2.8 The M/M/c/K Queue, Multiserver Systems with Finite Capacity
This queue is a variation of the multiserver system where at most K customers are allowed to stay in the system. As earlier, the number of customers in the system is a birth-death process with appropriate rates, and for the steady-state distribution we have
$$P_n = \begin{cases}\dfrac{\varrho^n}{n!}\,P_0, & \text{for } 0\le n\le c,\\[4pt] \dfrac{\varrho^n}{c^{n-c}\,c!}\,P_0, & \text{for } c\le n\le K.\end{cases}$$
From the normalizing condition for $P_0$ we have
$$P_0^{-1} = \sum_{n=0}^{c-1}\frac{\varrho^n}{n!}+\sum_{n=c}^{K}\frac{\varrho^n}{c^{n-c}\,c!}.$$
To simplify this expression let $\varrho = \lambda/\mu$ and $a = \varrho/c$. Then
$$\sum_{n=c}^{K}\frac{\varrho^n}{c^{n-c}\,c!} = \frac{\varrho^c}{c!}\sum_{n=c}^{K}a^{n-c} = \begin{cases}\dfrac{\varrho^c}{c!}\,\dfrac{1-a^{K-c+1}}{1-a}, & \text{if } a\neq 1,\\[4pt] \dfrac{\varrho^c}{c!}\,(K-c+1), & \text{if } a = 1.\end{cases}$$
Thus
$$P_0 = \begin{cases}\left(\sum\limits_{n=0}^{c-1}\dfrac{\varrho^n}{n!}+\dfrac{\varrho^c}{c!}\,\dfrac{1-a^{K-c+1}}{1-a}\right)^{-1}, & \text{if } a\neq 1,\\[10pt] \left(\sum\limits_{n=0}^{c-1}\dfrac{\varrho^n}{n!}+\dfrac{\varrho^c}{c!}\,(K-c+1)\right)^{-1}, & \text{if } a = 1.\end{cases}$$
For the mean queue length, in the case $a\neq 1$, we have
$$\overline{Q} = \sum_{n=c+1}^{K}(n-c)P_n = \frac{P_0\,\varrho^c}{c!}\sum_{n=c+1}^{K}(n-c)\,a^{n-c} = \frac{P_0\,\varrho^c\,a}{c!}\sum_{i=1}^{K-c}i\,a^{i-1} = \frac{P_0\,\varrho^c\,a}{c!}\,\frac{d}{da}\!\left(\frac{1-a^{K-c+1}}{1-a}\right),$$
which results in
$$\overline{Q} = \frac{P_0\,\varrho^c\,a}{c!\,(1-a)^2}\left[1-a^{K-c+1}-(1-a)(K-c+1)\,a^{K-c}\right].$$
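The closed form can be cross-checked against the direct sum over the distribution; a sketch (names are ours):

```python
from math import factorial

# Mean queue length of an M/M/c/K queue: closed form vs. direct summation.
def mmck_Q(lam, mu, c, K):
    rho, a = lam / mu, lam / (c * mu)
    w = [rho**n / factorial(n) for n in range(c + 1)]     # unnormalized P_n
    w += [w[c] * a**(n - c) for n in range(c + 1, K + 1)]
    P0 = 1 / sum(w)
    Q_direct = P0 * sum((n - c) * w[n] for n in range(c + 1, K + 1))
    Q_closed = (P0 * rho**c * a / (factorial(c) * (1 - a)**2)
                * (1 - a**(K - c + 1) - (1 - a) * (K - c + 1) * a**(K - c)))
    return Q_closed, Q_direct
```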
The distribution function of the waiting time can be obtained as
$$F_W(t) = F_W(0)+\sum_{n=c}^{K-1}\Pi_n\int_0^t\frac{c\mu(c\mu x)^{n-c}}{(n-c)!}\,e^{-c\mu x}\,dx,$$
where $\Pi_n = P_n/(1-P_K)$ denotes the distribution at the instants when customers enter the system. Since
$$\int_0^t\frac{\mu(\mu x)^m}{m!}\,e^{-\mu x}\,dx = 1-\sum_{i=0}^{m}\frac{(\mu t)^ie^{-\mu t}}{i!},$$
thus
$$\int_0^t\frac{c\mu(c\mu x)^{n-c}}{(n-c)!}\,e^{-c\mu x}\,dx = 1-\sum_{i=0}^{n-c}\frac{(c\mu t)^ie^{-c\mu t}}{i!},$$
and therefore
$$F_W(t) = F_W(0)+\sum_{n=c}^{K-1}\Pi_n\left(1-\sum_{i=0}^{n-c}\frac{(c\mu t)^ie^{-c\mu t}}{i!}\right) = 1-\sum_{n=c}^{K-1}\Pi_n\sum_{i=0}^{n-c}\frac{(c\mu t)^ie^{-c\mu t}}{i!}.$$
The Laplace-transforms of the waiting and response times can be derived similarly, by using the law of total Laplace-transforms.
2.9 The M/G/1 Queue
So far systems with exponentially distributed service times have been treated. We must admit that this is a restriction, since in many practical problems these times are not exponentially distributed, and thus the investigation of queueing systems with generally distributed service times is natural. It is not the aim of this book to give a detailed analysis of this important system; I concentrate only on the mean value approach, and some practice-oriented theorems are stated without proofs. A simple proof of Little's law is also given.
Little's Law
As a first step of the investigations let us give a simple proof of Little's theorem (Little's law, Little's formula), which states a relation between the mean number of customers in the system, the mean arrival rate and the mean response time. A similar version can be stated for the mean queue length, mean arrival rate and mean waiting time.
Let $\nu(t)$ denote the number of customers arriving into the system in the time interval $(0,t)$, and let $\delta(t)$ denote the number of departing customers in $(0,t)$. Supposing that $N(0) = 0$, the number of customers in the system at time t is $N(t) = \nu(t)-\delta(t)$.
Let the mean arrival rate into the system during $(0,t)$ be defined as
$$\bar\lambda_t := \frac{\nu(t)}{t}.$$
Let $\chi(t)$ denote the overall sojourn time of the customers until t, and let $\overline{T}_t$ be defined as the mean sojourn time of a request. Clearly
$$\overline{T}_t = \frac{\chi(t)}{\nu(t)}.$$
Finally, let $\overline{N}_t$ denote the mean number of customers in the system during the interval $(0,t)$, that is
$$\overline{N}_t = \frac{\chi(t)}{t}.$$
From these relations we have
$$\overline{N}_t = \bar\lambda_t\,\overline{T}_t.$$
Supposing that the following limits exist
$$\bar\lambda = \lim_{t\to\infty}\bar\lambda_t, \qquad \overline{T} = \lim_{t\to\infty}\overline{T}_t,$$
we get
$$\overline{N} = \bar\lambda\,\overline{T},$$
which is called Little's law. A similar version is
$$\overline{Q} = \bar\lambda\,\overline{W}.$$
In a system where the customers arrive one by one and depart one by one, the distribution seen at arrival instants equals the distribution seen at departure instants, that is $\Pi_k = D_k$. Thus for an M/G/1 system
$$\Pi_k = P_k = D_k,$$
that is, in the stationary case these 3 types of distributions are the same. Due to their importance we prove them. Let us consider first Statement 1. Introduce the following notation
$$P_k(t) := P(N(t) = k),$$
$$\Pi_k(t) := P(\text{an arriving customer at instant } t \text{ finds } k \text{ customers in the system}).$$
Let $A(t,t+\Delta t)$ be the event that one arrival occurs in the interval $(t,t+\Delta t)$. Then
$$\Pi_k(t) = \lim_{\Delta t\to 0}P\left(N(t) = k \mid A(t,t+\Delta t)\right) = \lim_{\Delta t\to 0}\frac{P\left(A(t,t+\Delta t)\mid N(t) = k\right)P(N(t) = k)}{P\left(A(t,t+\Delta t)\right)}.$$
Due to the memoryless property of the exponential distribution the event $A(t,t+\Delta t)$ does not depend on the number of customers in the system, nor even on t itself, thus
$$P\left(A(t,t+\Delta t)\mid N(t) = k\right) = P\left(A(t,t+\Delta t)\right),$$
hence
$$\Pi_k(t) = P(N(t) = k) = P_k(t).$$
This holds for the limiting distribution as well, namely
$$\Pi_k = \lim_{t\to\infty}\Pi_k(t) = \lim_{t\to\infty}P_k(t) = P_k. \tag{2.9}$$
Furthermore, if the total number of departures is denoted by $D(t)$, and the total number of arrivals is denoted by $R(t)$, then
$$D(t) = R(t)+N(0)-N(t).$$
The distribution at the departure instants can be written as
$$D_k = \lim_{t\to\infty}\frac{D_k(t)}{D(t)},$$
where $D_k(t)$ denotes the number of departures leaving k customers behind, and $R_k(t)$ the number of arrivals finding k customers in the system. Since the customers arrive and depart one by one, $|D_k(t)-R_k(t)|\le 1$, and after simple algebra we have
$$\frac{D_k(t)}{D(t)} = \frac{R_k(t)+\left(D_k(t)-R_k(t)\right)}{R(t)+N(0)-N(t)}.$$
Since $N(0)$ is finite and $N(t)$ is also finite due to the stationarity, from (2.9) and $R(t)\to\infty$ with probability one it follows that
$$D_k = \lim_{t\to\infty}\frac{D_k(t)}{D(t)} = \lim_{t\to\infty}\frac{R_k(t)}{R(t)} = \Pi_k.$$
A customer arriving at the system has to wait for the residual service time R of the customer under service (if any), plus the full service times of the customers waiting in the queue. Hence, by using $\Pi_k = P_k$,
$$E(W) = \sum_{k=1}^{\infty}\left(E(R)+(k-1)E(S)\right)\Pi_k = E(R)\sum_{k=1}^{\infty}P_k+E(S)\sum_{k=1}^{\infty}(k-1)P_k = \varrho\,E(R)+E(S)E(Q).$$
Since $E(Q) = \lambda E(W)$, we obtain
$$E(W) = \frac{\varrho\,E(R)}{1-\varrho}, \tag{2.11}$$
where the mean residual service time can be written as
$$E(R) = \frac{E(S^2)}{2E(S)} = \frac{Var(S)+E^2(S)}{2E(S)} = \frac{C_S^2+1}{2}\,E(S), \tag{2.12}$$
where $C_S^2$ is the squared coefficient of variation of the service time S. It should be noted that the mean residual service time depends on the first two moments of the service time. Thus for the mean waiting time we have
$$E(W) = \frac{\varrho\,E(R)}{1-\varrho} = \frac{\varrho}{2(1-\varrho)}\left(C_S^2+1\right)E(S).$$
By using Little's law, for the mean queue length we get
$$E(Q) = \frac{\varrho^2}{1-\varrho}\cdot\frac{C_S^2+1}{2}.$$
Clearly, the mean response time and the mean number of customers in the system can be expressed as
$$E(T) = \frac{C_S^2+1}{2}\cdot\frac{\varrho}{1-\varrho}\,E(S)+E(S), \qquad E(N) = \varrho+\frac{\varrho^2}{1-\varrho}\cdot\frac{C_S^2+1}{2}.$$
In particular, for an M/M/1 system $C_S^2 = 1$, and we get back the well-known formulas
$$E(W) = \frac{\varrho}{1-\varrho}\,E(S), \qquad E(Q) = \frac{\varrho^2}{1-\varrho}, \qquad E(T) = \frac{1}{1-\varrho}\,E(S), \qquad E(N) = \frac{\varrho}{1-\varrho}.$$
Example 10 In the case of deterministic service time $C_S^2 = 0$, thus $E(R) = E(S)/2$. Consequently we have
$$E(W) = \frac{\varrho}{2(1-\varrho)}\,E(S), \qquad E(T) = \frac{\varrho}{2(1-\varrho)}\,E(S)+E(S),$$
$$E(Q) = \frac{\varrho^2}{2(1-\varrho)}, \qquad E(N) = \varrho+\frac{\varrho^2}{2(1-\varrho)}.$$
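The Pollaczek-Khinchine mean-value formulas above translate directly into code; a minimal sketch (names are ours):

```python
# M/G/1 mean measures from the first two moments of the service time;
# cs2 is the squared coefficient of variation of S.
def mg1_means(lam, ES, cs2):
    rho = lam * ES
    EW = rho / (2 * (1 - rho)) * (cs2 + 1) * ES
    ET = EW + ES
    return EW, ET, lam * EW, lam * ET      # W, T, Q = lam*W, N = lam*T
```

With cs2 = 1 it reproduces the M/M/1 results; with cs2 = 0 it gives the deterministic case of Example 10, that is, half the M/M/1 waiting time.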
Since in the stationary case $\Pi_k = P_k = D_k$, $k = 0,1,\ldots$, the generating function of the number of customers in the system is equal to the generating function of the number of customers at departure instants. Furthermore, it is clear that the number of customers at a departure instant equals the number of customers arriving during the response time of the departing customer. In summary we have
$$D_k = P_k = \int_0^{\infty}\frac{(\lambda x)^k}{k!}\,e^{-\lambda x}f_T(x)\,dx.$$
Hence for the generating function
$$G_N(z) = \sum_{k=0}^{\infty}z^k\int_0^{\infty}\frac{(\lambda x)^k}{k!}\,e^{-\lambda x}f_T(x)\,dx = \int_0^{\infty}\sum_{k=0}^{\infty}\frac{(\lambda xz)^k}{k!}\,e^{-\lambda x}f_T(x)\,dx = \int_0^{\infty}e^{-\lambda(1-z)x}f_T(x)\,dx = L_T\left(\lambda(1-z)\right),$$
that is, it can be expressed by the help of the Laplace-transform of the response time T. By applying the properties of the generating function and the Laplace-transform we have
$$G_N^{(k)}(1) = \lambda^k\,E(T^k),$$
that is, the kth factorial moment of N equals $\lambda^kE(T^k)$.
Now let us determine the distribution of the residual service time. The tagged customer arrives into a service time of length X with a length-biased density, thus
$$f_X(x) = \frac{xf_S(x)}{E(S)},$$
hence
$$E(X) = \int_0^{\infty}xf_X(x)\,dx = \frac{1}{E(S)}\int_0^{\infty}x^2f_S(x)\,dx = \frac{E(S^2)}{E(S)}.$$
Since the tagged customer arrives randomly in the service time S, the mean residual time can be obtained as
$$E(R) = \frac{E(X)}{2} = \frac{E(S^2)}{2E(S)}.$$
Example 11 Let the service time be Erlang distributed with parameters $(n,\mu)$; then
$$E(S) = \frac{n}{\mu}, \qquad Var(S) = \frac{n}{\mu^2},$$
thus
$$E(S^2) = Var(S)+E^2(S) = \frac{n(1+n)}{\mu^2},$$
hence
$$E(R) = \frac{1+n}{2\mu}.$$
It is easy to see that using this approach the density function of the residual service time can also be calculated. Given that the tagged customer arrives in a service time of length x, the arrival moment will be a random point within this service time, that is, it will be uniformly distributed within the service time interval $(0,x)$. Thus we have
$$P\left(x\le X\le x+dx,\ y\le R\le y+dy\right) = \frac{dy}{x}\,f_X(x)\,dx, \qquad 0\le y\le x.$$
After substituting $f_X(x)$ and integrating over x we get the desired density function of the residual service time, that is
$$f_R(y) = \frac{1-F_S(y)}{E(S)}.$$
Hence
$$E(R) = \int_0^{\infty}xf_R(x)\,dx = \int_0^{\infty}x\,\frac{1-F_S(x)}{E(S)}\,dx = \frac{E(S^2)}{2E(S)}.$$
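The density f_R can be checked numerically; a sketch (the function name and the Erlang(2, mu) choice are ours) integrating x·f_R(x) with a midpoint rule:

```python
from math import exp

# E(R) for an Erlang(2, mu) service time via f_R(y) = (1 - F_S(y))/E(S),
# where 1 - F_S(y) = exp(-mu*y)*(1 + mu*y); expected result (1+n)/(2*mu) = 3/(2*mu).
def mean_residual_erlang2(mu, h=1e-4, x_max=50.0):
    ES = 2.0 / mu
    total, x = 0.0, h / 2
    while x < x_max:
        total += x * exp(-mu * x) * (1 + mu * x) / ES * h
        x += h
    return total
```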
The last equality uses the following fact: if $E(X^n)$ exists, then
$$\lim_{y\to\infty}y^n\left(1-F(y)\right) = 0.$$
Indeed,
$$\int_0^{\infty}x^nf(x)\,dx = \int_0^yx^nf(x)\,dx+\int_y^{\infty}x^nf(x)\,dx,$$
and since
$$\int_y^{\infty}x^nf(x)\,dx \ge y^n\int_y^{\infty}f(x)\,dx = y^n\left(1-F(y)\right) \ge 0,$$
moreover $\int_y^{\infty}x^nf(x)\,dx\to 0$ as $y\to\infty$, therefore
$$\lim_{y\to\infty}y^n\left(1-F(y)\right) = 0.$$
After this, using integration by parts and keeping in mind the above relation, we get
$$\int_0^{\infty}x^{n-1}\left(1-F(x)\right)dx = \int_0^{\infty}\frac{x^n}{n}\,f(x)\,dx = \frac{E(X^n)}{n};$$
in particular $\int_0^{\infty}x\left(1-F_S(x)\right)dx = \frac{E(S^2)}{2}$, which gives $E(R) = \frac{E(S^2)}{2E(S)}$ again.
Finally, combining $G_N(z) = L_T(\lambda(1-z))$ with the Pollaczek-Khinchine transform of the response time, one gets
$$G_N(z) = \frac{(1-\varrho)(1-z)\,L_S\left(\lambda(1-z)\right)}{L_S\left(\lambda(1-z)\right)-z},$$
by the help of which, in principle, the distribution of the number of customers in the system and the density functions of the response and waiting times can be obtained. Of course, this time we must be able to invert the involved Laplace-transforms.
Takács Recurrence Theorem
For the moments of the waiting time Takács proved that
$$E(W^k) = \frac{\lambda}{1-\varrho}\sum_{i=1}^{k}\binom{k}{i}\frac{E(S^{i+1})}{i+1}\,E(W^{k-i}),$$
that is, moments of the waiting time can be obtained in terms of lower moments of the waiting time and moments of the service time. It should be noted that to get the kth moment of W the (k+1)th moment of the service time should exist. Since W and S are independent and T = W+S, the kth moment of the response time can also be computed by
$$E(T^k) = \sum_{l=0}^{k}\binom{k}{l}E(W^l)E(S^{k-l}).$$
For instance, for the second moment of the queue length we have
$$E(Q^2) = \sum_{k=1}^{\infty}(k-1)^2P_k = \sum_{k=1}^{\infty}k^2P_k-2\sum_{k=1}^{\infty}kP_k+\sum_{k=1}^{\infty}P_k = E(N^2)-2E(N)+\varrho.$$
Now let us turn our attention to the Laplace-transform of the busy period of the server. Lajos Takács proved that it satisfies the functional equation
$$L_{\delta}(s) = L_S\left(s+\lambda-\lambda L_{\delta}(s)\right),$$
which is usually impossible to invert. However, by applying this equation the moments of the busy period can be calculated. First determine $E(\delta)$. Differentiating at s = 0 and using the properties of the Laplace-transform we have
$$E(\delta) = \left(1+\lambda E(\delta)\right)E(S),$$
hence
$$E(\delta) = \frac{E(S)}{1-\varrho},$$
which was obtained earlier by the well-known relation
$$\frac{E(\delta)}{1/\lambda+E(\delta)} = \varrho.$$
Furthermore, it can be proved that
$$Var(\delta) = \frac{Var(S)+\varrho\left(E(S)\right)^2}{(1-\varrho)^3}.$$
Now let us consider the generating function of the number of customers served during a busy period, $N_d(\delta)$. It can be proved that
$$G_{N_d(\delta)}(z) = z\,L_S\left(\lambda-\lambda G_{N_d(\delta)}(z)\right),$$
which is again a functional equation, but by using derivatives the higher moments can be computed. Thus for the mean number we have
$$E(N_d(\delta)) = 1+\varrho\,E(N_d(\delta)), \qquad E(N_d(\delta)) = \frac{1}{1-\varrho},$$
which also follows from $E(N_d(\delta)) = E(\delta)/E(S) = \frac{1}{1-\varrho}$. It can be proved that
$$Var(N_d(\delta)) = \frac{\varrho(1-\varrho)+\lambda^2E(S^2)}{(1-\varrho)^3}.$$
It is interesting to note that the computation of $Var(\delta)$ and $Var(N_d(\delta))$ does not require the existence of $E(S^3)$, as it does in the case of $Var(N)$, $Var(Q)$, $Var(T)$, $Var(W)$.
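The busy-period moments above are immediate to evaluate; a sketch (names are ours):

```python
# Mean and variance of the M/G/1 busy period from E(S) and Var(S).
def busy_period_moments(lam, ES, VarS):
    rho = lam * ES
    E_delta = ES / (1 - rho)
    Var_delta = (VarS + rho * ES**2) / (1 - rho)**3
    return E_delta, Var_delta
```

For the M/M/1 case (ES = 1/mu, VarS = 1/mu²) the mean reduces to the familiar 1/(mu - lam).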
As this is one of the most widely used queueing systems, the calculation of the main performance measures is of great importance. This can be done by the help of our Java applets.
Java applets for direct calculations can be found at
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MH21/MH21.html
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MGamma1/MGamma1.html
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MEk1/MEk1.html
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MD1/MD1.html
Chapter 3
Finite-Source Systems
So far we have been dealing with queueing systems where arrivals followed a Poisson process, that is, the source of customers is infinite. In this chapter we are focusing on finite-source population models. They are also very important from a practical point of view, since in many situations the source is finite. Let us investigate the example of the so-called machine interference problem, treated by many experts.
Let us consider n machines that operate independently of each other. The operation times and service times are supposed to be independent random variables with given distribution functions. After a failure the broken machine is repaired by a single or multiple repairmen according to a certain discipline. Having been repaired, the machine starts operating again and the whole process is repeated.
This simple model has many applications in various fields, for example in manufacturing, computer science, reliability theory and management science, just to mention some of them. For detailed references on finite-source models and their applications the interested reader is recommended to visit the following link
3.1 The M/M/r/r/n Queue, the Engset Loss System
The number of customers in the system is a birth-death process with rates
$$\lambda_k = (n-k)\lambda, \quad 0\le k<r, \qquad \mu_k = k\mu, \quad 1\le k\le r,$$
hence for the steady-state distribution we have
$$P_k = \frac{\binom{n}{k}\varrho^k}{\sum_{i=0}^{r}\binom{n}{i}\varrho^i}, \qquad 0\le k\le r.$$
In the special case r = n this reduces to
$$P_k = \frac{\binom{n}{k}\varrho^k}{\sum_{i=0}^{n}\binom{n}{i}\varrho^i} = \frac{\binom{n}{k}\varrho^k}{(1+\varrho)^n} = \binom{n}{k}\left(\frac{\varrho}{1+\varrho}\right)^k\left(\frac{1}{1+\varrho}\right)^{n-k},$$
that is, we have a binomial distribution with success parameter
$$p = \frac{\varrho}{1+\varrho}.$$
That is, p is the probability that a given request is in the system. It is easy to see that this distribution remains valid even for a G/G/n/n/n system, since
$$p = \frac{E(S)}{E(S)+E(\tau)} = \frac{\varrho}{1+\varrho}, \qquad\text{where}\qquad \varrho = \frac{E(S)}{E(\tau)},$$
and $\tau$ denotes the operation (source) time.
The main performance measures can be obtained as
$$\overline{N} = \sum_{k=0}^{r}kP_k = \bar r, \qquad U_S = \frac{\overline{N}}{r}, \qquad U_t = \frac{\overline{m}}{n} = \frac{n-\overline{N}}{n},$$
where $\overline{m}$ denotes the mean number of customers staying in the source and $U_t$ the utilization of a source. This helps us to calculate the mean number of retrials $E(N_R)$ of a customer from the source to enter the system; the mean number of rejections is then $E(N_R)-1$.
The blocking probability, that is the probability that a customer finds the system full at his arrival, by the help of the Bayes theorem can be calculated as
$$P_B(n,r) = \lim_{h\to 0}\frac{(n-r)\lambda h\,P_r(n,r)}{\sum_{i=0}^{r}(n-i)\lambda h\,P_i(n,r)} = \frac{(n-r)P_r(n,r)}{\sum_{i=0}^{r}(n-i)P_i(n,r)}.$$
Using the form of the distribution and $(n-i)\binom{n}{i} = n\binom{n-1}{i}$ we obtain
$$P_B(n,r) = \frac{(n-r)\binom{n}{r}\varrho^r}{\sum_{i=0}^{r}(n-i)\binom{n}{i}\varrho^i} = \frac{n\binom{n-1}{r}\varrho^r}{n\sum_{i=0}^{r}\binom{n-1}{i}\varrho^i} = \frac{\binom{n-1}{r}\varrho^r}{\sum_{i=0}^{r}\binom{n-1}{i}\varrho^i} = P_r(n-1,r),$$
that is, an arriving customer sees the stationary distribution of a system with one source less.
Let $E(n,r,\varrho)$ denote the blocking probability, that is $E(n,r,\varrho) = P_r(n-1,r)$, which is called the Engset loss formula. In the following we show a recursion for this formula. Using $\binom{n-1}{r} = \frac{n-r}{r}\binom{n-1}{r-1}$ we have
$$E(n,r,\varrho) = \frac{\binom{n-1}{r}\varrho^r}{\sum_{i=0}^{r-1}\binom{n-1}{i}\varrho^i+\binom{n-1}{r}\varrho^r} = \frac{\frac{n-r}{r}\,\varrho\,E(n,r-1,\varrho)}{1+\frac{n-r}{r}\,\varrho\,E(n,r-1,\varrho)} = \frac{(n-r)\,\varrho\,E(n,r-1,\varrho)}{r+(n-r)\,\varrho\,E(n,r-1,\varrho)}.$$
The recursion can be started from
$$E(n,1,\varrho) = \frac{(n-1)\varrho}{1+(n-1)\varrho}.$$
It is clear that as
$$n\to\infty, \qquad \varrho\to 0, \qquad n\varrho\to\varrho_0,$$
we have
$$E(n,r,\varrho)\to B(r,\varrho_0),$$
which can be seen formally, too. Moreover, as $(n-r)\varrho\to\varrho_0$ the well-known recursion for $B(r,\varrho_0)$ is obtained, which also justifies the correctness of the recursion for $E(n,r,\varrho)$.
In the special case r = n there is no blocking at all, and the performance measures reduce to
$$U_S = \frac{\varrho}{1+\varrho}, \qquad \overline{m} = \frac{n}{1+\varrho}, \qquad U_t = \frac{1}{1+\varrho}, \qquad E(N_R) = 1, \qquad P_B = 0, \qquad \overline{T} = \frac{1}{\mu},$$
and the mean arrival rate into the system is
$$\bar\lambda = \lambda\sum_{k=0}^{r-1}(n-k)P_k.$$
Let us consider the distribution of the system at the instant when an arriving customer enters the system. By using the Bayes law we have
$$\Pi_k = \lim_{h\to 0}\frac{(\lambda_kh+o(h))P_k}{\sum_{i=0}^{r-1}(\lambda_ih+o(h))P_i} = \frac{\lambda_kP_k}{\sum_{i=0}^{r-1}\lambda_iP_i}, \qquad k = 0,\ldots,r-1.$$
Moreover, for the mean response time Little's law gives
$$\bar\lambda\,\overline{T} = \overline{N}.$$
3.2 The M/M/1/n/n Queue, the Classical Machine Interference Problem
This is the traditional machine interference problem, where the broken machines have to wait and the single repairman fixes the failed machines in FIFO order. Assume that the operating times are exponentially distributed with parameter $\lambda$ and the repair times with parameter $\mu$. All random variables are supposed to be independent of each other.
Let $N(t)$ denote the number of customers in the system at time t, which is a birth-death process with rates
$$\lambda_k = (n-k)\lambda, \quad 0\le k<n, \qquad \mu_k = \mu, \quad 1\le k\le n.$$
Thus for the distribution we have
$$P_k = \frac{n!}{(n-k)!}\,\varrho^kP_0 = (n-k+1)\,\varrho\,P_{k-1},$$
where $\varrho = \lambda/\mu$ and
$$P_0 = \left(\sum_{k=0}^{n}\frac{n!}{(n-k)!}\,\varrho^k\right)^{-1}.$$
Since the state space is finite, the steady-state distribution always exists, but if $\varrho > 1$ then more repairmen are needed.
Let P (k; ) a denote the Poisson distribution with parameter ) and let Q(k; ) denote
its cumulative distribution function, that is
P (k; ) =
Q(k; ) =
k
e ,
k!
k
X
P (i; ),
0 k < ;
0 k < .
i=0
P (n k; R)
,
Q(n; R)
73
0 k n,
where
R=
= %1 .
Indeed,
$$\frac{P(n-k;R)}{Q(n;R)} = \frac{\frac{R^{n-k}}{(n-k)!}\,e^{-R}}{\sum_{i=0}^{n}\frac{R^{n-i}}{(n-i)!}\,e^{-R}} = \frac{\frac{n!}{(n-k)!}\,\varrho^k}{\sum_{i=0}^{n}\frac{n!}{(n-i)!}\,\varrho^i} = P_k,$$
after multiplying the numerator and the denominator by $n!\,R^{-n}$. In particular, for the utilization of the repairman we have
$$U_s = 1-P_0 = 1-\frac{P(n;R)}{Q(n;R)} = \frac{Q(n-1;R)}{Q(n;R)}.$$
For the mean number of customers in the system we get
$$\overline{N} = \sum_{k=0}^{n}kP_k = n-\sum_{k=0}^{n}(n-k)P_k = n-\frac{1}{\varrho}\sum_{k=0}^{n-1}P_{k+1} = n-\frac{1-P_0}{\varrho} = n-\frac{U_s}{\varrho},$$
where we used that $(n-k)\varrho P_k = P_{k+1}$. In other form
$$\overline{N} = n-\frac{R\,Q(n-1;R)}{Q(n;R)}.$$
For the mean queue length we have
$$\overline{Q} = \sum_{k=1}^{n}(k-1)P_k = \sum_{k=1}^{n}kP_k-\sum_{k=1}^{n}P_k = \overline{N}-(1-P_0) = n-\left(1+\frac{1}{\varrho}\right)U_s.$$
The mean number of operating machines is
$$\overline{m} = \sum_{k=0}^{n}(n-k)P_k = n-\overline{N} = \frac{U_s}{\varrho}.$$
For the mean busy period of the repairman, notice that an idle period ends when one of the n operating machines breaks down, hence its mean is $\frac{1}{n\lambda}$ and
$$U_s = \frac{E(\delta)}{E(\delta)+\frac{1}{n\lambda}}, \qquad E(\delta) = \frac{1-P_0}{n\lambda P_0} = \frac{U_s}{n\lambda(1-U_s)}.$$
In computer science and reliability theory applications we often need the following measure.

Utilization of a given source (machine, terminal): the utilization of the ith source is defined by
$$U^{(i)}=\lim_{T\to\infty}\frac{1}{T}\int_0^T\chi(\text{at time }t\text{ the }i\text{th source is active})\,dt.$$
Then
$$U^{(i)}=P(\text{there is a request in the }i\text{th source}).$$
By homogeneity
$$\sum_{k=0}^{n}(n-k)P_k=\bar m=\frac{1-P_0}{\varrho},\qquad U_t=\frac{\bar m}{n}=\frac{1-P_0}{n\varrho}.$$
This can be obtained in the following way as well:
$$U^{(i)}=\sum_{k=0}^{n}\frac{n-k}{n}P_k=\frac{\bar m}{n},$$
and, considering one cycle of a machine,
$$U^{(i)}=\frac{\frac1\lambda}{\frac1\lambda+\bar W+\frac1\mu}=\frac{\bar m}{n}.$$
Thus
$$\bar m=\frac{\frac n\lambda}{\frac1\lambda+\bar W+\frac1\mu},$$
and
$$\lambda\bar m\bar W=n-\bar m(1+\varrho)=n-\left(1+\frac{1}{\varrho}\right)U_S=\bar Q,$$
which is Little's law for the mean waiting time. Hence
$$\bar W=\frac{\bar Q}{\lambda\bar m}=\frac{1}{\mu}\left(\frac{n}{U_S}-\frac{1+\varrho}{\varrho}\right),$$
since
$$\lambda\bar m=\lambda\,\frac{1-P_0}{\varrho}=\mu U_S.$$
It is easy to prove that
$$\lambda\bar m\bar T=\bar N,$$
which is Little's law for the mean response time. Clearly we have
$$\lambda\bar m\left(\bar W+\frac1\mu\right)=\bar Q+\varrho\bar m=n-\left(1+\frac{1}{\varrho}\right)U_S+U_S=n-\frac{U_S}{\varrho}=\bar N.$$
Further relations:
$$U_S=1-P_0=n\varrho\,U_t=\bar m\varrho,\qquad \lambda\bar m=\mu U_S=\bar\lambda.$$
It should be noted that the utilization of the server plays a key role in the calculation of
all the main performance measures.
For the distribution at arrival instants we have
$$\Pi_k(n)=\lim_{h\to0}\frac{(\lambda_k h+o(h))P_k}{\sum_{j=0}^{n-1}(\lambda_j h+o(h))P_j}=\frac{\lambda_k P_k}{\sum_{j=0}^{n-1}\lambda_j P_j}=\frac{(n-k)P_k(n)}{\sum_{j=0}^{n-1}(n-j)P_j(n)}$$
$$=\frac{\frac{(n-1)!}{(n-1-k)!}\varrho^{k}}{\sum_{j=0}^{n-1}\frac{(n-1)!}{(n-1-j)!}\varrho^{\,j}}=P_k(n-1),$$
irrespective of the number of servers. It should be noted that this relation shows a very important result, namely that at arrival instants the distribution of the system containing n sources is not the same as its distribution at random points, but equals the random-point distribution of a system with n−1 sources.
For the distribution at departure instants we similarly obtain
$$D_k(n)=\lim_{h\to0}\frac{(\mu_{k+1}h+o(h))P_{k+1}}{\sum_{j=1}^{n}(\mu_j h+o(h))P_j}=\frac{\mu_{k+1}P_{k+1}}{\sum_{j=1}^{n}\mu_j P_j}=\frac{P_{k+1}(n)}{1-P_0(n)}=P_k(n-1),\qquad k=0,\ldots,n-1,$$
and in particular
$$D_0(n)=\frac{P_1(n)}{1-P_0(n)}=P_0(n-1),$$
that is, the distributions at arrival and departure instants coincide.
Recursive Relations

Similarly to the previous arguments it is easy to see that the density function of the response time can be obtained as
$$f_T(x)=\sum_{k=0}^{n-1}f_T(x\mid k)\,\Pi_k(n)=\sum_{k=0}^{n-1}\frac{\mu(\mu x)^k}{k!}e^{-\mu x}P_k(n-1),$$
hence the mean response time is
$$\bar T(n)=\sum_{k=0}^{n-1}\frac{k+1}{\mu}P_k(n-1)=\frac{1}{\mu}\left(\bar N(n-1)+1\right).$$
Similarly, for the waiting time
$$f_W(x)=\sum_{k=1}^{n-1}f_W(x\mid k)\,\Pi_k(n)=\sum_{k=1}^{n-1}\frac{\mu(\mu x)^{k-1}}{(k-1)!}e^{-\mu x}P_k(n-1),$$
and
$$\bar W(n)=\sum_{k=0}^{n-1}\frac{k}{\mu}P_k(n-1)=\frac{1}{\mu}\bar N(n-1),$$
which is clear.
We want to verify the correctness of the formula
$$\bar T(n)=\frac{1}{\mu}\left(\bar N(n-1)+1\right).$$
As we have shown earlier the utilization can be expressed by the Erlang loss formula, hence
$$\bar N(n)=n-\frac{1-B\!\left(n,\tfrac1\varrho\right)}{\varrho}.$$
Since
$$\bar N(n-1)=n-1-\frac{1-B\!\left(n-1,\tfrac1\varrho\right)}{\varrho},$$
thus
$$\varrho\bar N(n-1)=(n-1)\varrho-1+B\!\left(n-1,\tfrac1\varrho\right),\qquad B\!\left(n-1,\tfrac1\varrho\right)=1+\varrho\bar N(n-1)-(n-1)\varrho.$$
By the recursion for the Erlang loss formula
$$B\!\left(n,\tfrac1\varrho\right)=\frac{B\!\left(n-1,\tfrac1\varrho\right)}{n\varrho+B\!\left(n-1,\tfrac1\varrho\right)}=\frac{1+\varrho\bar N(n-1)-(n-1)\varrho}{1+\varrho\bar N(n-1)+\varrho}.$$
Therefore
$$\bar N(n)=\frac{n\varrho-1+B\!\left(n,\tfrac1\varrho\right)}{\varrho}=n-\frac{n}{1+\varrho\bar N(n-1)+\varrho}.$$
Finally
$$n-\bar N(n)=\frac{n}{1+\varrho(\bar N(n-1)+1)},\qquad 1+\varrho(\bar N(n-1)+1)=\frac{n}{n-\bar N(n)},\qquad \varrho(\bar N(n-1)+1)=\frac{\bar N(n)}{n-\bar N(n)},$$
and thus
$$\bar T(n)=\frac{1}{\mu}\left(\bar N(n-1)+1\right)=\frac{\bar N(n)}{\lambda\left(n-\bar N(n)\right)}.$$
For the utilization we have
$$U_S(n)=1-B\!\left(n,\tfrac1\varrho\right)=1-\frac{\tfrac1\varrho B\!\left(n-1,\tfrac1\varrho\right)}{n+\tfrac1\varrho B\!\left(n-1,\tfrac1\varrho\right)}=\frac{n\varrho}{n\varrho+B\!\left(n-1,\tfrac1\varrho\right)}=\frac{n\varrho}{n\varrho+1-U_S(n-1)},$$
that is, there is a recursion for the utilization as well. It is also very important, because by using this recursion all the main performance measures can be obtained. Thus if λ, μ, n are given, we can use the recursion for $U_S(n)$ and finally substitute it into the corresponding formula. In particular
$$U_S(n-1)=n\varrho+1-\frac{n\varrho}{U_S(n)}=1+n\varrho\left(1-\frac{1}{U_S(n)}\right).$$
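The utilization recursion above can be checked numerically against the direct computation of $U_S=1-P_0$. A small Python sketch under these assumptions (function names are ours):

```python
def us_direct(n, rho):
    """U_S = 1 - P_0 computed from the explicit M/M/1/n/n distribution."""
    a, total = 1.0, 1.0
    for k in range(1, n + 1):
        a *= (n - k + 1) * rho
        total += a
    return 1.0 - 1.0 / total

def us_recursive(n, rho):
    """U_S(n) via U_S(m) = m*rho / (m*rho + 1 - U_S(m-1)), U_S(0) = 0."""
    us = 0.0
    for m in range(1, n + 1):
        us = m * rho / (m * rho + 1.0 - us)
    return us
```

Note that the recursion starts from $U_S(1)=\varrho/(1+\varrho)$, which is reproduced by taking $U_S(0)=0$.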
Since
$$\bar N(n-1)=n-1-\frac{U_S(n-1)}{\varrho},$$
we proceed
$$\bar T(n)=\frac{1}{\mu}\left(\bar N(n-1)+1\right)=\frac{1}{\mu}\left(n-\frac{U_S(n-1)}{\varrho}\right)=\frac{1}{\mu}\left(n-\frac{1+n\varrho\left(1-\frac{1}{U_S(n)}\right)}{\varrho}\right)=\frac{1}{\mu}\left(\frac{n}{U_S(n)}-\frac{1}{\varrho}\right),$$
which shows the correctness of the formula.
In the following let us show how to compute $\bar T(n)$, $\bar W(n)$, $\bar N(n)$ recursively. As we have seen
$$\bar T(n)=\frac{1}{\mu}\left(\bar N(n-1)+1\right),\qquad \bar W(n)=\bar T(n)-\frac{1}{\mu}=\frac{1}{\mu}\bar N(n-1),$$
$$\bar N(n)=\frac{n\lambda\bar T(n)}{1+\lambda\bar T(n)},\qquad \bar N(1)=\frac{\varrho}{1+\varrho}.$$
That is, starting from $\bar N(1)$ we alternately use
$$\bar T(n)=\frac{1}{\mu}+\bar W(n),\qquad \bar N(n)=\frac{n\lambda\bar T(n)}{1+\lambda\bar T(n)},$$
in other words a double iteration. The main advantage is that only the mean values are needed. This method is referred to as mean value analysis.
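The double iteration above can be sketched in a few lines of Python (the function name and return structure are ours):

```python
def mva(n, lam, mu):
    """Mean value analysis for the M/M/1/n/n queue.
    Double iteration: W(k) = Nbar(k-1)/mu, T(k) = 1/mu + W(k),
    Nbar(k) = k*lam*T(k) / (1 + lam*T(k)), starting from Nbar(0) = 0."""
    nbar, t, w = 0.0, 1.0 / mu, 0.0
    for k in range(1, n + 1):
        w = nbar / mu               # mean waiting time for population k
        t = 1.0 / mu + w            # mean response time
        nbar = k * lam * t / (1.0 + lam * t)  # mean number in system
    return {"N": nbar, "T": t, "W": w}

res = mva(6, 1 / 40.0, 1 / 4.0)
```

Since only three scalars are updated per step, the method is well suited to large n.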
In the previous section we derived a recursion for $U_S(n)$, and thus we may expect that there is a direct recursive relation for the other mean values as well, since they depend on the utilization. As a next step we find a recursion for the mean number of customers in the source, $\bar m(n)$. It is quite easy, since
$$\bar m(n)=\frac{U_S(n)}{\varrho}=\frac{n}{n\varrho+1-U_S(n-1)}=\frac{n}{n\varrho+1-\varrho\,\bar m(n-1)}.$$
By using this relation, the utilization of a source can be expressed as
$$U_t(n)=\frac{\bar m(n)}{n}=\frac{1}{n\varrho+1-\varrho\,\bar m(n-1)}=\frac{1}{n\varrho+1-\varrho(n-1)U_t(n-1)}.$$
$$\bar N(n)=n-\frac{U_S(n)}{\varrho}=n-\frac{n}{n\varrho+1-U_S(n-1)}=\frac{n^2\varrho-nU_S(n-1)}{n\varrho+1-U_S(n-1)}=\frac{n\varrho\left(n-\frac{U_S(n-1)}{\varrho}\right)}{n\varrho+1-U_S(n-1)}.$$
Since
$$\bar N(n-1)=n-1-\frac{U_S(n-1)}{\varrho},\qquad U_S(n-1)=\varrho\left(n-(\bar N(n-1)+1)\right),$$
thus after substitution we get
$$\bar N(n)=\frac{n\varrho\left(\bar N(n-1)+1\right)}{1+\varrho\left(\bar N(n-1)+1\right)}.$$
Finally, let us find the recursion for the mean response time. Starting with
$$\bar T(n)=\frac{1}{\mu}\left(\frac{n}{U_S(n)}-\frac{1}{\varrho}\right),$$
and using that
$$\bar T(n-1)=\frac{1}{\mu}\left(\frac{n-1}{U_S(n-1)}-\frac{1}{\varrho}\right),\qquad\text{that is}\qquad U_S(n-1)=\frac{\lambda(n-1)}{\mu\left(1+\lambda\bar T(n-1)\right)},$$
we obtain
$$\bar T(n)=\frac{1}{\mu}\left(\bar N(n-1)+1\right)=\frac{1}{\mu}\left(n-\frac{U_S(n-1)}{\varrho}\right)=\frac{1}{\mu}\left(n-\frac{n-1}{1+\lambda\bar T(n-1)}\right),$$
with initial value $\bar T(1)=1/\mu$.
By the help of the arrival distribution, the density of the response time can be written in closed form:
$$f_T(n,x)=\sum_{k=0}^{n-1}\frac{\mu(\mu x)^k}{k!}e^{-\mu x}\,\frac{\frac{R^{\,n-1-k}}{(n-1-k)!}e^{-R}}{Q(n-1,R)}=\mu\,\frac{\frac{(\mu x+R)^{n-1}}{(n-1)!}e^{-(\mu x+R)}}{Q(n-1,R)}=\frac{\mu P(n-1,\mu x+R)}{Q(n-1,R)}.$$
Similarly,
$$f_W(n,x)=\sum_{k=1}^{n-1}\frac{\mu(\mu x)^{k-1}}{(k-1)!}e^{-\mu x}\,\frac{\frac{R^{\,n-1-k}}{(n-1-k)!}e^{-R}}{Q(n-1,R)}=\frac{\mu P(n-2,\mu x+R)}{Q(n-1,R)}.$$
To get the distribution function we have to calculate the integral
$$F_T(n,x)=\int_0^x f_T(n,t)\,dt.$$
Substituting $y=\mu t+R$, that is $t=(y-R)/\mu$, $dt/dy=1/\mu$, we get
$$F_T(n,x)=\frac{1}{Q(n-1,R)}\int_R^{\mu x+R}\frac{y^{n-1}}{(n-1)!}e^{-y}\,dy=1-\frac{Q(n-1,\mu x+R)}{Q(n-1,R)},$$
where we used that
$$\int_a^\infty\frac{y^{n-1}}{(n-1)!}e^{-y}\,dy=\sum_{i=0}^{n-1}\frac{a^i}{i!}e^{-a}=Q(n-1,a).$$
The distribution function can also be obtained directly:
$$F_T(x)=\sum_{k=0}^{n-1}\left(1-\sum_{j=0}^{k}\frac{(\mu x)^j}{j!}e^{-\mu x}\right)P_k(n-1)=1-\sum_{k=0}^{n-1}Q(k,\mu x)P_k(n-1)$$
$$=1-\sum_{k=0}^{n-1}Q(k,\mu x)\,\frac{\frac{R^{\,n-1-k}}{(n-1-k)!}e^{-R}}{Q(n-1,R)}=1-\frac{Q(n-1,\mu x+R)}{Q(n-1,R)}.$$
In the last step we used the identity
$$\sum_{j=0}^{l}Q(j,\alpha)\,\frac{\beta^{\,l-j}}{(l-j)!}e^{-\beta}=Q(l,\alpha+\beta),$$
which can be proved as follows. Since
$$Q(j,\alpha)=\sum_{i=0}^{j}\frac{\alpha^i}{i!}e^{-\alpha}=\int_\alpha^\infty\frac{t^j}{j!}e^{-t}\,dt,$$
the sum
$$\sum_{j=0}^{l}\frac{\beta^{\,l-j}}{(l-j)!}e^{-\beta}\sum_{i=0}^{j}\frac{\alpha^i}{i!}e^{-\alpha}$$
can be written as
$$\sum_{j=0}^{l}\frac{\beta^{\,l-j}}{(l-j)!}e^{-\beta}\int_\alpha^\infty\frac{t^j}{j!}e^{-t}\,dt=\int_\alpha^\infty\frac{(t+\beta)^l}{l!}e^{-(t+\beta)}\,dt=\int_{\alpha+\beta}^\infty\frac{y^l}{l!}e^{-y}\,dy=Q(l,\alpha+\beta).$$
During the calculations we could see that the derivative of Q(k, t) with respect to t is −P(k, t), which can be used to find the density function, that is
$$f_T(x)=\frac{\mu P(n-1,\mu x+R)}{Q(n-1,R)}.$$
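The two forms of $F_T$, the closed Poisson-tail form and the mixture over the arrival distribution, should agree for every x. A short Python check under these assumptions (helper names are ours):

```python
from math import exp, factorial

def P(k, a):
    """Poisson probability P(k; a)."""
    return a**k * exp(-a) / factorial(k)

def Q(k, a):
    """Poisson cumulative distribution function Q(k; a)."""
    return sum(P(i, a) for i in range(k + 1))

def ft_cdf_closed(x, n, lam, mu):
    """F_T(x) = 1 - Q(n-1, mu*x + R)/Q(n-1, R), with R = mu/lam."""
    R = mu / lam
    return 1.0 - Q(n - 1, mu * x + R) / Q(n - 1, R)

def ft_cdf_mixture(x, n, lam, mu):
    """F_T(x) = 1 - sum_k Q(k, mu*x) P_k(n-1), P_k(n-1) = P(n-1-k; R)/Q(n-1; R)."""
    R = mu / lam
    return 1.0 - sum(Q(k, mu * x) * P(n - 1 - k, R) / Q(n - 1, R)
                     for k in range(n))
```

The agreement of the two functions is exactly the convolution identity proved above with $l=n-1$, $\alpha=\mu x$, $\beta=R$.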
The generating function of the number of customers in the system is
$$G_N(s)=\sum_{k=0}^{n}s^kP_k=\frac{s^n}{Q(n,R)}\sum_{k=0}^{n}\frac{(R/s)^{\,n-k}}{(n-k)!}e^{-R}=s^n e^{R\left(\frac1s-1\right)}\,\frac{Q\!\left(n,\frac Rs\right)}{Q(n,R)}.$$
This could be derived in the following way as well. Let us denote by F the number of customers in the source. As we have proved earlier, its distribution is the same as the distribution of the number of customers in an Erlang loss system with traffic intensity $1/\varrho=R$. Since the generating function of this system has already been obtained, we can use this fact. Thus
$$G_N(s)=E\!\left(s^{\,n-F}\right)=s^nG_F\!\left(\frac1s\right)=s^ne^{R\left(\frac1s-1\right)}\,\frac{Q\!\left(n,\frac Rs\right)}{Q(n,R)}.$$
To verify the formula let us compute the mean number of customers in the system. By the property of the generating function we have $\bar N(n)=G_N'(1)$. Since
$$G_N'(s)=n\,s^{n-1}G_F\!\left(\frac1s\right)-s^{n-2}G_F'\!\left(\frac1s\right),$$
thus
$$\bar N(n)=nG_F(1)-G_F'(1)=n-\frac{1-B\!\left(n,\tfrac1\varrho\right)}{\varrho}=n-\frac{U_S(n)}{\varrho}.$$
Solution 1

By the theorem of total expectation for Laplace transforms,
$$L_T(s)=\sum_{k=0}^{n-1}\left(\frac{\mu}{\mu+s}\right)^{k+1}P_k(n-1)=\sum_{k=0}^{n-1}\left(\frac{\mu}{\mu+s}\right)^{k+1}\frac{\frac{R^{\,n-1-k}}{(n-1-k)!}e^{-R}}{Q(n-1,R)}$$
$$=\left(\frac{\mu}{\mu+s}\right)^{n}\frac{e^{-R}}{Q(n-1,R)}\sum_{j=0}^{n-1}\frac{\left(\frac{(\mu+s)R}{\mu}\right)^{j}}{j!}=\left(\frac{\mu}{\mu+s}\right)^{n}e^{\frac{s}{\lambda}}\,\frac{Q\!\left(n-1,\frac{\mu+s}{\lambda}\right)}{Q\!\left(n-1,\frac{\mu}{\lambda}\right)}.$$
Solution 2

Let us calculate $L_T(s)$ by the help of the density function. Since the denominator is a constant, we have to determine the Laplace transform of the numerator, that is
$$L_{Num}(s)=\int_0^\infty\mu\,\frac{(\mu x+R)^{n-1}}{(n-1)!}\,e^{-(\mu x+R)}e^{-sx}\,dx=e^{-R}\int_0^\infty\mu\,\frac{(\mu x+R)^{n-1}}{(n-1)!}\,e^{-(\mu+s)x}\,dx.$$
By the binomial theorem
$$L_{Num}(s)=e^{-R}\sum_{k=0}^{n-1}\frac{R^{\,n-1-k}}{(n-1-k)!}\int_0^\infty\mu\,\frac{(\mu x)^k}{k!}e^{-(\mu+s)x}\,dx=e^{-R}\sum_{k=0}^{n-1}\frac{R^{\,n-1-k}}{(n-1-k)!}\left(\frac{\mu}{\mu+s}\right)^{k+1}$$
$$=e^{-R}\left(\frac{\mu}{\mu+s}\right)^{n}\sum_{j=0}^{n-1}\frac{\left(\frac{(\mu+s)R}{\mu}\right)^{j}}{j!}=\left(\frac{\mu}{\mu+s}\right)^{n}e^{\frac{s}{\lambda}}\,Q\!\left(n-1,\frac{\mu+s}{\lambda}\right).$$
Since
$$L_T(s)=\frac{L_{Num}(s)}{Q\!\left(n-1,\frac{\mu}{\lambda}\right)},$$
thus
$$L_T(s)=\left(\frac{\mu}{\mu+s}\right)^{n}e^{\frac{s}{\lambda}}\,\frac{Q\!\left(n-1,\frac{\mu+s}{\lambda}\right)}{Q\!\left(n-1,\frac{\mu}{\lambda}\right)}.$$
Solution 3

The Laplace transform of the numerator can be obtained as
$$L_{Num}(s)=\int_0^\infty\mu\,\frac{(\mu x+R)^{n-1}}{(n-1)!}\,e^{-(\mu x+R)}e^{-sx}\,dx.$$
Substituting $t=\mu x+R$ we get $x=(t-R)/\mu$, $dx/dt=1/\mu$, and thus
$$L_{Num}(s)=\int_R^\infty\frac{t^{n-1}}{(n-1)!}\,e^{-t}e^{-\frac{s(t-R)}{\mu}}\,dt=e^{\frac{sR}{\mu}}\int_R^\infty\frac{t^{n-1}}{(n-1)!}\,e^{-\left(1+\frac{s}{\mu}\right)t}\,dt.$$
Substituting again $y=\frac{\mu+s}{\mu}t$, thus
$$L_{Num}(s)=e^{\frac{sR}{\mu}}\left(\frac{\mu}{\mu+s}\right)^{n}\int_{\frac{(\mu+s)R}{\mu}}^\infty\frac{y^{n-1}}{(n-1)!}\,e^{-y}\,dy=e^{\frac{s}{\lambda}}\left(\frac{\mu}{\mu+s}\right)^{n}Q\!\left(n-1,\frac{\mu+s}{\lambda}\right),$$
therefore
$$L_T(s)=\frac{L_{Num}(s)}{Q\!\left(n-1,\frac{\mu}{\lambda}\right)}=e^{\frac{s}{\lambda}}\left(\frac{\mu}{\mu+s}\right)^{n}\frac{Q\!\left(n-1,\frac{\mu+s}{\lambda}\right)}{Q\!\left(n-1,\frac{\mu}{\lambda}\right)}.$$
That is, all three solutions give the same result. Thus, in principle, the higher moments of the response time can be evaluated as well. Since $L_T(s)=\frac{\mu}{\mu+s}L_W(s)$, we obtain
$$L_W(s)=\left(\frac{\mu}{\mu+s}\right)^{n-1}e^{\frac{s}{\lambda}}\,\frac{Q\!\left(n-1,\frac{\mu+s}{\lambda}\right)}{Q\!\left(n-1,\frac{\mu}{\lambda}\right)}.$$
Example 12 Consider 6 machines with mean lifetime of 40 hours. Let their mean repair time be 4 hours. Find the performance measures.

Solution: $\lambda=\frac{1}{40}$, $\mu=\frac14$, $\varrho=\frac{4}{40}=0.1$, $n=6$, $P_0=0.4845$.

Failed machines:   0      1      2      3      4      5      6
Waiting machines:  0      0      1      2      3      4      5
$P_k$:             0.484  0.290  0.145  0.058  0.017  0.003  0.000

Hence
$$U_S=0.516,\qquad \bar m=\frac{U_S}{\varrho}=5.16,\qquad U_g=U_t=\frac{\bar m}{n}=0.86,\qquad \bar N=6-5.16=0.84,$$
$$\bar Q=\bar N-U_S=0.33,\qquad \bar W=\frac{\bar Q}{\lambda\bar m}=2.56\ \text{hours},\qquad \bar T=\bar W+\frac1\mu=6.56\ \text{hours},$$
and the mean busy period of the repairman is
$$E(\delta)=\frac{1-P_0}{n\lambda P_0}=\frac{0.516\cdot40}{6\cdot0.484}\approx7.1\ \text{hours}.$$
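The numbers of this example can be reproduced with a few lines of Python (variable names are ours):

```python
lam, mu, n = 1 / 40.0, 1 / 4.0, 6
rho = lam / mu                      # 0.1
a = [1.0]
for k in range(1, n + 1):
    a.append((n - k + 1) * rho * a[-1])
P0 = 1.0 / sum(a)                   # probability of an empty system
US = 1.0 - P0                       # repairman utilization
m = US / rho                        # mean number of operating machines
N = n - m                           # mean number of failed machines
Qbar = N - US                       # mean number of waiting machines
W = Qbar / (lam * m)                # mean waiting time, hours
T = W + 1.0 / mu                    # mean broken time, hours
```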
Example 13 Change the mean lifetime to 2 hours in the previous Example. Find the performance measures.

Solution: $\frac1\lambda=2$, $\frac1\mu=4$, $\varrho=2$, $n=6$, $P_0=\frac{1}{75973}$, which shows that a single repairman is not enough. We should increase the number of repairmen.

Failed machines:   0        1         2      3      4      5      6
Waiting machines:  0        0         1      2      3      4      5
$P_k$:             1/75973  12/75973  0.001  0.012  0.075  0.303  0.606

Hence
$$U_S\approx0.999,\qquad \bar m\approx0.5,\qquad U_g\approx0.08,\qquad \bar N\approx5.5,\qquad \bar Q\approx4.5,$$
$$\bar W\approx18\ \text{hours},\qquad \bar T\approx22\ \text{hours},\qquad E(\delta)=\frac{1-P_0}{n\lambda P_0}\approx25\,300\ \text{hours}.$$
All these measures demonstrate what we have expected, because ϱ is greater than 1. To decide how many repairmen are needed there are different criteria, as we shall see in Section 3.4.
3.3 Heterogeneous Queues

The results of this section have been published in the paper of Csige and Tomkó [16]. The reason for its introduction is to show the importance of the service discipline.

Let us consider n heterogeneous machines with exponentially distributed operating and repair times with parameters $\lambda_k>0$ and $\mu_k>0$, respectively, for the kth machine, k = 1,…,n. The process
$$X(t)=\left(\nu(t);\,x_1(t),\ldots,x_{\nu(t)}(t)\right),\qquad t\ge0,$$
is a continuous-time Markov chain, where the ordering of $x_1(t),\ldots,x_{\nu(t)}(t)$ depends on the service discipline. Since X(t) is a finite-state Markov chain, if the parameters $\lambda_k$, $\mu_k$ (1 ≤ k ≤ n) are all positive, then it is ergodic and hence the steady-state distribution exists. Of course this heavily depends on the service discipline.
3.3.1 The M⃗/M⃗/1/n/n/PS Queue

Let $P_{i_1,\ldots,i_k}$ denote the steady-state probability that the failed machines are exactly those with indexes $i_1,\ldots,i_k$. Under Processor Sharing each failed machine is repaired at rate $\mu_{i_r}/k$, so the balance equations are
$$\left(\sum_{r\notin\{i_1,\ldots,i_k\}}\lambda_r+\frac1k\sum_{r=1}^{k}\mu_{i_r}\right)P_{i_1,\ldots,i_k}=\sum_{r=1}^{k}\lambda_{i_r}P_{i_1,\ldots,i_{r-1},i_{r+1},\ldots,i_k}+\sum_{r\notin\{i_1,\ldots,i_k\}}\frac{\mu_r}{k+1}P_{i_1',i_2',\ldots,i_{k+1}'},$$
for $k=1,\ldots,n-1$, where $(i_1',\ldots,i_{k+1}')$ denotes the lexicographically ordered version of $\{i_1,\ldots,i_k,r\}$, and
$$\frac1n\sum_{r=1}^{n}\mu_r\,P_{1,\ldots,n}=\sum_{r=1}^{n}\lambda_r\,P_{1,\ldots,r-1,r+1,\ldots,n}.$$
It can be verified by direct substitution that the solution is
$$P_{i_1,\ldots,i_k}=k!\prod_{r=1}^{k}\varrho_{i_r}\,P_0,\qquad \varrho_i=\frac{\lambda_i}{\mu_i},$$
where $P_0$ follows from the normalizing condition
$$P_0\left(1+\sum_{k=1}^{n}\sum_{(i_1,\ldots,i_k)\in C_k^n}k!\prod_{r=1}^{k}\varrho_{i_r}\right)=1.$$
Performance Measures

Utilization of the server:
$$U_s=\frac{E(\delta)}{E(\delta)+\dfrac{1}{\sum_{i=1}^{n}\lambda_i}}=1-P_0.$$
Utilization of the ith machine:
$$U^{(i)}=\frac{\frac{1}{\lambda_i}}{\frac{1}{\lambda_i}+\bar T_i}=1-P^{(i)},$$
where $\bar T_i$ denotes the mean response time for machine i, that is, the mean time while it is broken, and
$$P^{(i)}=\sum_{k=1}^{n}\ \sum_{i\in(i_1,\ldots,i_k)}P_{i_1,\ldots,i_k},\qquad \bar T_i=\frac{P^{(i)}}{\lambda_i\left(1-P^{(i)}\right)},\qquad \bar W_i=\bar T_i-\frac{1}{\mu_i}.$$
Furthermore, it is easy to see that the mean number of failed machines can be obtained as
$$\bar N=\sum_{i=1}^{n}P^{(i)}.$$
In addition
$$\sum_{i=1}^{n}\lambda_i\left(1-P^{(i)}\right)\bar T_i=\sum_{i=1}^{n}P^{(i)}=\bar N,$$
which is Little's formula for heterogeneous customers. In particular, for the homogeneous case we have
$$\lambda(n-\bar N)\bar T=\bar N,$$
which was proved earlier.
Various generalized versions of the machine interference problem with heterogeneous machines can be found in Pósafalvi and Sztrik [62, 63]. Let us see some sample numerical results illustrating the influence of the service discipline on the main performance measures.
Input parameters and machine utilizations under the FIFO, PROCESSOR SHARING and PRIORITY disciplines:

Case 1 (n = 3, homogeneous machines): $\lambda_1=\lambda_2=\lambda_3=0.3$, $\mu_1=\mu_2=\mu_3=0.7$. Under all three disciplines each machine has utilization 0.57, and the overall machine utilization is 1.72.

Case 2 (n = 3, heterogeneous machines): $\lambda_1=0.5$, $\mu_1=0.9$; $\lambda_2=0.3$, $\mu_2=0.7$; $\lambda_3=0.2$, $\mu_3=0.5$. The individual machine utilizations now depend on the discipline (the values range roughly from 0.44 to 0.77), while the overall machine utilization stays close to 1.67 in each case.

Case 3 (n = 4, heterogeneous machines): $\lambda_1=0.5$, $\mu_1=0.9$; $\lambda_2=0.4$, $\mu_2=0.7$; $\lambda_3=0.3$, $\mu_3=0.6$; $\lambda_4=0.2$, $\mu_4=0.5$. Again the per-machine utilizations vary noticeably with the discipline (roughly between 0.24 and 0.64), the overall machine utilization being about 1.75 to 1.81.

In the homogeneous case the disciplines are indistinguishable, while for heterogeneous machines the choice of service discipline noticeably redistributes the individual machine utilizations.
3.4 The M/M/r/n/n Queue

Consider the homogeneous finite-source model with r (r ≤ n) independent servers. Denoting by N(t) the number of customers in the system at time t, similarly to the previous sections it can easily be seen that it is a birth-death process with rates
$$\lambda_k=\lambda(n-k),\qquad 0\le k\le n-1,$$
$$\mu_k=\begin{cases}k\mu, & 1\le k\le r,\\ r\mu, & r<k\le n.\end{cases}$$
The steady-state distribution can be obtained as
$$P_k=\binom{n}{k}\varrho^k P_0,\qquad 0\le k\le r,$$
$$P_k=\binom{n}{k}\frac{k!}{r!\,r^{\,k-r}}\,\varrho^k P_0,\qquad r\le k\le n,$$
with normalizing condition
$$\sum_{k=0}^{n}P_k=1.$$
Let $a_k=P_k/P_0$. Using the relation between consecutive elements of a birth-death process, our procedure operates as follows:
$$a_0=1,$$
$$a_k=\frac{n-k+1}{k}\,\varrho\,a_{k-1},\qquad 1\le k\le r,$$
$$a_k=\frac{n-k+1}{r}\,\varrho\,a_{k-1},\qquad r<k\le n.$$
Since
$$\sum_{k=0}^{n}P_k=1,$$
dividing by $P_0$ we get
$$\frac{1}{P_0}=1+\sum_{k=1}^{n}a_k,$$
hence
$$P_0=\frac{1}{1+\sum_{k=1}^{n}a_k}.$$
Finally
$$P_k=a_kP_0.$$
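The a_k procedure above translates directly into code. A minimal Python sketch (the function name is ours):

```python
def mmrnn_distribution(n, r, rho):
    """Steady-state distribution of the M/M/r/n/n queue from the
    birth-death recursion a_k = (n-k+1)/min(k, r) * rho * a_{k-1}."""
    a = [1.0]
    for k in range(1, n + 1):
        a.append((n - k + 1) / min(k, r) * rho * a[-1])
    total = sum(a)
    return [w / total for w in a]

p = mmrnn_distribution(20, 3, 0.1)
```

For n = 20, r = 3, ϱ = 0.1 this gives $P_0\approx0.13625$, the value used in the worked example later in this section.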
Let us determine the main performance measures.

Mean number of customers in the system:
$$\bar N=\sum_{k=0}^{n}kP_k.$$
Mean queue length:
$$\bar Q=\sum_{k=r+1}^{n}(k-r)P_k=\frac{r^rP_0}{r!}\sum_{k=r+1}^{n}\binom{n}{k}\frac{(k-r)\,k!}{r^k}\,\varrho^k.$$
Mean number of customers in the source:
$$\bar m=n-\bar N.$$
Utilization of the system:
$$U_r=1-P_0,$$
and the mean busy period of the system is
$$E(\delta)=\frac{1-P_0}{n\lambda P_0}=\frac{U_r}{n\lambda P_0}.$$
Mean number of busy servers:
$$\bar r=\sum_{k=1}^{r}kP_k+\sum_{k=r+1}^{n}rP_k=\sum_{k=1}^{r-1}kP_k+r\sum_{k=r}^{n}P_k.$$
Furthermore, the utilization of the servers is
$$U_s=\frac{\bar r}{r}.$$
Since
$$\bar N=\sum_{k=1}^{r}kP_k+\sum_{k=r+1}^{n}(k-r)P_k+r\sum_{k=r+1}^{n}P_k=\bar Q+\bar r,$$
we also have $\bar N=\bar Q+\bar r=n-\bar m$.
Utilization of a source:
$$U_t=U^{(i)}=\sum_{k=0}^{n}\frac{n-k}{n}P_k=\frac{\bar m}{n}=\frac{\frac1\lambda}{\frac1\lambda+\bar W+\frac1\mu}.$$
From this
$$\frac1\lambda+\bar W+\frac1\mu=\frac{n}{\lambda\bar m},\qquad \bar T=\bar W+\frac1\mu=\frac{1}{\lambda}\,\frac{n-\bar m}{\bar m}=\frac{\bar N}{\lambda\bar m},$$
consequently we get
$$\lambda\bar m\bar T=\bar N,$$
which is the well-known Little formula. Thus we get
$$\lambda\bar m\left(\bar W+\frac1\mu\right)=\bar Q+\bar r,$$
that is
$$\lambda\bar m\bar W+\varrho\bar m=\bar Q+\bar r.$$
Let us show that
$$\bar r=\varrho\bar m,$$
because from this follows
$$\lambda\bar m\bar W=\bar Q,$$
which is Little's formula for the waiting time.
Since
$$P_{k+1}=\frac{\lambda(n-k)}{\mu_{k+1}}P_k,\qquad\text{where}\qquad \mu_j=\begin{cases}j\mu, & j\le r,\\ r\mu, & j>r,\end{cases}$$
and, as it is well known,
$$\bar r=\sum_{k=1}^{r-1}kP_k+r\sum_{k=r}^{n}P_k,$$
we can proceed as
$$\varrho\bar m=\sum_{k=0}^{n}\varrho(n-k)P_k=\sum_{k=0}^{r-1}\varrho(n-k)P_k+\sum_{k=r}^{n-1}\varrho(n-k)P_k=\sum_{k=0}^{r-1}(k+1)P_{k+1}+r\sum_{k=r}^{n-1}P_{k+1}$$
$$=\sum_{j=1}^{r}jP_j+r\sum_{j=r+1}^{n}P_j=\sum_{j=1}^{r-1}jP_j+r\sum_{j=r}^{n}P_j=\bar r.$$
Finally we get
$$\varrho\bar m=\bar r,\qquad\text{or in another form}\qquad \lambda\bar m=\mu\bar r,$$
that is

mean arrival rate = mean service rate,

which was expected because the system is in steady state. Consequently
$$\bar W=\frac{\bar Q}{\lambda\bar m}=\frac{\bar Q}{\mu\bar r}.$$
Denoting by $\bar S=r-\bar r$ the mean number of idle servers, the mean idle period of a server can be computed by the help of the theorem of total expectation, namely
$$\bar e=\sum_{j=1}^{r}\frac{j}{\lambda}\,\frac{P_{r-j}}{P(e)}=\frac{\bar S}{\lambda P(e)},$$
where
$$P(e)=\sum_{j=0}^{r-1}P_j=1-P(W>0).$$
Similarly, for the mean busy period of a server
$$E(\delta)=\frac{r-\bar S}{\lambda P(e)}=\frac{\bar r}{\lambda P(e)}=\frac{\varrho\bar m}{\lambda P(e)}=\frac{\bar m}{\mu P(e)}.$$
That is
$$E(\delta)=\frac{\bar m}{\mu P(e)}.$$
For k ≥ r the distribution can be rewritten by the help of the Poisson distribution. Namely, with $z=\mu/\lambda=\varrho^{-1}$,
$$P_k=\binom{n}{k}\frac{k!}{r!\,r^{\,k-r}z^k}\,P_0=\frac{r^r}{r!}\,\frac{n!}{(n-k)!\,(rz)^k}\,P_0=\frac{r^r}{r!}\,\frac{P(n-k,rz)}{P(n,rz)}\,P_0,\qquad r\le k\le n.$$
Since $\Pi_k(n)=P_k(n-1)$, thus
$$\Pi_k(n)=\frac{r^r}{r!}\,\frac{P(n-1-k,rz)}{P(n-1,rz)}\,P_0(n-1),\qquad\text{if } k=r,\ldots,n-1.$$
Hence the probability of waiting is
$$P_W=\sum_{k=r}^{n-1}\Pi_k(n)=\frac{r^r}{r!}\,\frac{P_0(n-1)}{P(n-1,rz)}\sum_{i=0}^{n-1-r}P(i,rz)=\frac{r^r}{r!}\,\frac{Q(n-1-r,rz)}{P(n-1,rz)}\,P_0(n-1).$$
We show that the distribution function of the waiting time can be calculated as
$$F_W(x)=1-\frac{r^r\,Q\!\left(n-1-r,r(z+\mu x)\right)}{r!\,P(n-1,rz)}\,P_0(n-1),$$
and thus
$$F_W(0)=1-\frac{r^r\,Q(n-1-r,rz)}{r!\,P(n-1,rz)}\,P_0(n-1)=1-P_W,$$
which is the probability that an arriving customer finds an idle server. For the density function we have
$$f_W(0)=1-P_W,$$
$$f_W(x)=r\mu\,\frac{r^r}{r!}\,\frac{P\!\left(n-1-r,r(z+\mu x)\right)}{P(n-1,rz)}\,P_0(n-1),\qquad x>0.$$
If we calculate the integral, substituting $y=r(z+\mu x)$,
$$\int_{0+}^{\infty}f_W(x)\,dx=\frac{r^rP_0(n-1)}{r!\,P(n-1,rz)}\int_{rz}^{\infty}\frac{y^{\,n-1-r}}{(n-1-r)!}\,e^{-y}\,dy=\frac{r^r\,Q(n-1-r,rz)}{r!\,P(n-1,rz)}\,P_0(n-1)=P_W,$$
that is
$$\int_0^\infty f_W(x)\,dx=f_W(0)+\int_{0+}^{\infty}f_W(x)\,dx=1.$$
Indeed,
$$f_W(x)=\sum_{k=r}^{n-1}\frac{r\mu(r\mu x)^{k-r}}{(k-r)!}e^{-r\mu x}\,\Pi_k(n)=\sum_{k=r}^{n-1}\frac{r\mu(r\mu x)^{k-r}}{(k-r)!}e^{-r\mu x}\,\frac{r^r}{r!}\,\frac{P(n-1-k,rz)}{P(n-1,rz)}\,P_0(n-1)$$
$$=r\mu\,\frac{r^r}{r!}\,\frac{P\!\left(n-1-r,r(z+\mu x)\right)}{P(n-1,rz)}\,P_0(n-1).$$
For the tail of the waiting time we have
$$P(W>x)=\int_x^{\infty}f_W(t)\,dt=\frac{r^rP_0(n-1)}{r!\,P(n-1,rz)}\int_{r(z+\mu x)}^{\infty}\frac{y^{\,n-1-r}}{(n-1-r)!}\,e^{-y}\,dy=\frac{r^r\,Q\!\left(n-1-r,r(z+\mu x)\right)}{r!\,P(n-1,rz)}\,P_0(n-1).$$
In particular, for r = 1
$$P(W>x)=\frac{P_0(n-1)\,Q(n-2,z+\mu x)}{P(n-1,z)},$$
but
$$P_0(n-1)=\frac{P(n-1,z)}{Q(n-1,z)},$$
thus
$$P(W>x)=\frac{Q(n-2,z+\mu x)}{Q(n-1,z)}.$$
The derivation of the distribution function of the response time is analogous. Because the calculation is rather lengthy it is omitted, but it can be found in the Solution Manual of Kobayashi [50]. As can be seen in Allen [2] and Kobayashi [50], the following formulas are valid for r ≥ 2:
$$F_T(x)=1-C_1e^{-\mu x}+C_2\,Q\!\left(n-r-1,r(z+\mu x)\right),$$
where
$$C_1=1+C_2\,Q(n-r-1,rz),\qquad C_2=\frac{r^r\,P_0(n-1)}{r!\,(r-1)(n-r-1)!\,P(n-1,rz)}.$$
The normalizing constant can also be computed recursively in n for fixed r. Namely,
$$P_0^{-1}(n)=1+\frac{n}{rz}\left[P_0^{-1}(n-1)+\sum_{i=0}^{r-1}\binom{n-1}{i}\frac{1}{z^{\,i}}\,\frac{r-i-1}{i+1}\right],\qquad n>r,$$
with initial value
$$P_0^{-1}(r)=\left(1+\frac1z\right)^{r}.$$
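This recursion can be validated against the direct summation of the birth-death weights. A Python sketch (function names are ours; $z=\mu/\lambda$):

```python
from math import comb

def p0_inv_direct(n, r, z):
    """1/P_0 for M/M/r/n/n by direct summation (rho = 1/z)."""
    a, total = 1.0, 1.0
    for k in range(1, n + 1):
        a *= (n - k + 1) / (min(k, r) * z)
        total += a
    return total

def p0_inv_recursive(n, r, z):
    """1/P_0(n) from the recursion in n for fixed r, started at n = r."""
    val = (1.0 + 1.0 / z) ** r
    for m in range(r + 1, n + 1):
        corr = sum(comb(m - 1, i) * z**(-i) * (r - i - 1) / (i + 1)
                   for i in range(r))
        val = 1.0 + m / (r * z) * (val + corr)
    return val
```

Note that for r = 1 the correction sum vanishes and the recursion reduces to $P_0^{-1}(n)=1+\frac nz P_0^{-1}(n-1)$.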
The Laplace transform of the waiting time can be computed analogously to that of the response time: transforming the density term by term and applying the substitution i = k − r, then collecting all terms we get
$$L_W(s)=1-P_W+\frac{r^r\,P_0(n-1)}{r!\,P(n-1,rz)}\left(\frac{r\mu}{r\mu+s}\right)^{n-r}e^{\frac{s}{\lambda}}\,Q\!\left(n-1-r,\frac{r\mu+s}{\lambda}\right).$$
In the case r = 1 this reduces to
$$L_W(s)=P_0(n-1)+\left(\frac{\mu}{\mu+s}\right)^{n-1}e^{\frac{s}{\lambda}}\,\frac{Q\!\left(n-2,\frac{\mu+s}{\lambda}\right)}{Q(n-1,z)},$$
where we used that $P_0(n-1)=P(n-1,z)/Q(n-1,z)$. Since
$$\left(\frac{\mu}{\mu+s}\right)^{n-1}e^{\frac{s}{\lambda}}\,P\!\left(n-1,\frac{\mu+s}{\lambda}\right)=\frac{z^{n-1}}{(n-1)!}\,e^{-z}=P(n-1,z),$$
we obtain
$$L_W(s)=\left(\frac{\mu}{\mu+s}\right)^{n-1}\frac{e^{\frac{s}{\lambda}}}{Q(n-1,z)}\left[P\!\left(n-1,\frac{\mu+s}{\lambda}\right)+Q\!\left(n-2,\frac{\mu+s}{\lambda}\right)\right]=\left(\frac{\mu}{\mu+s}\right)^{n-1}e^{\frac{s}{\lambda}}\,\frac{Q\!\left(n-1,\frac{\mu+s}{\lambda}\right)}{Q(n-1,z)},$$
as we got earlier.
Keeping in mind the relation between the waiting time and the response time and the properties of the Laplace transform, we have
$$L_T(s)=\frac{\mu}{\mu+s}\,L_W(s),$$
which in the case of r = 1 reduces to
$$L_T(s)=\left(\frac{\mu}{\mu+s}\right)^{n}e^{\frac{s}{\lambda}}\,\frac{Q\!\left(n-1,\frac{\mu+s}{\lambda}\right)}{Q(n-1,z)}.$$
Example 14 Consider n = 20 machines and r = 3 repairmen, with mean operating time 1/λ = 50 hours and mean repair time 1/μ = 5 hours. Find the performance measures.

Solution: $\lambda=\frac{1}{50}$, $\mu=\frac15$, $\varrho=\frac{5}{50}=0.1$. Then
$$a_0=1,\qquad a_1=\frac{20-0}{0+1}\cdot0.1\cdot1=2,\qquad a_2=\frac{20-1}{1+1}\cdot0.1\cdot2=1.9,$$
$$a_3=\frac{20-2}{2+1}\cdot0.1\cdot1.9=1.14,\qquad a_4=\frac{20-3}{3}\cdot0.1\cdot1.14=0.646,$$
and so on. Hence
$$P_0=\frac{1}{1+\sum_{k=1}^{n}a_k}=\frac{1}{1+6.3394}=0.13625,$$
and
$$P_1=a_1P_0=2\cdot0.13625=0.2725,\qquad P_2=a_2P_0=1.9\cdot0.13625=0.2589,\ \text{etc.}$$
The distribution can be seen in the next table for n = 20, r = 3, ϱ = 0.1.
k    Busy repairmen (machines under repair)    Steady-state distribution $P_k$
0    0                                          0.13625
1    1                                          0.27250
2    2                                          0.25888
3    3                                          0.15533
4    3                                          0.08802
5    3                                          0.04694
6    3                                          0.02347
7    3                                          0.01095
8    3                                          0.00475
9    3                                          0.00190
10   3                                          0.00070
11   3                                          0.00023
12   3                                          0.00007
$\bar S=1.213$ is the mean number of idle repairmen, $\bar r=3-\bar S=1.787$ the mean number of busy repairmen, thus
$$\bar N=\bar Q+\bar r=0.339+1.787=2.126,\qquad \bar m=20-2.126=17.874,$$
$$P(W>0)=0.3323,\qquad P(e)=0.6677,$$
$$\bar W=\frac{\bar Q}{\lambda\bar m}=0.95\ \text{hours}\approx57\ \text{minutes},\qquad \bar T=\bar W+\frac1\mu=5.95\ \text{hours},$$
$$U_S=\frac{\bar r}{r}=\frac{1.787}{3}=0.595,\qquad U_g=\frac{\bar m}{n}=\frac{17.874}{20}\approx0.894,$$
$$E(\delta)_{sys}=\frac{1-P_0}{n\lambda P_0}\approx15.8\ \text{hours (mean busy period of the repair facility)},$$
$$\bar e=\frac{\bar S}{\lambda P(e)}=\frac{1.213\cdot50}{0.6677}\approx90.8\ \text{hours},\qquad E(\delta)=\frac{\bar r}{\lambda P(e)}=\frac{1.787\cdot50}{0.6677}\approx133.9\ \text{hours}.$$
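The measures of this example can be reproduced directly from the a_k recursion (variable names are ours):

```python
n, r = 20, 3
lam, mu = 1 / 50.0, 1 / 5.0
rho = lam / mu                                        # 0.1
a = [1.0]
for k in range(1, n + 1):
    a.append((n - k + 1) / min(k, r) * rho * a[-1])
P0 = 1.0 / sum(a)
Pk = [w * P0 for w in a]
N = sum(k * p for k, p in enumerate(Pk))              # mean failed machines
rbar = sum(min(k, r) * p for k, p in enumerate(Pk))   # mean busy repairmen
Qbar = N - rbar                                       # mean queue length
m = n - N                                             # machines in the source
W = Qbar / (lam * m)                                  # mean waiting time, hours
T = W + 1.0 / mu                                      # mean broken time, hours
```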
It is instructive to compare the system of Example 12 with the present one. Here $K_2=\bar S/r$ is the fraction of time a repairman is idle, and $K_1=\bar W/(\frac1\lambda+\frac1\mu+\bar W)$ is the fraction of time a machine spends waiting.

Number of machines:                        6        20
Number of repairmen:                       1        3
Number of machines per repairman:          6        6 2/3
Waiting coefficient for the servers K2:    0.4845   0.4042
Waiting coefficient for the machines K1:   0.0549   0.01694

Although each repairman has to serve more machines in the larger system, both coefficients are smaller there.
Example 15 Let us continue the previous Example with a cost structure. Assume that the waiting cost is 18 000 Euro/hour and the cost of an idle repairman is 600 Euro/hour. Find the optimal number of repairmen. It should be noted that different cost functions can be constructed.

Solution: The mean cost per hour as a function of r, $E(\text{Cost})=18\,000\,\bar Q+600\,\bar S$, can be seen in the next table, calculated by the help of the distributions listed below for r = 3, 4, 5, 6, 7.

r   P0     P1     P2     P3     P4     P5     P6     P7     P8     $\bar Q$   $\bar S$   E(Cost) Euro
3   0.136  0.272  0.258  0.155  0.088  0.047  0.023  0.011  0.005  0.32       1.20       6480
4   0.146  0.292  0.278  0.166  0.071  0.028  0.010  0.003  0.001  0.06       2.18       2388
5   0.148  0.296  0.281  0.168  0.071  0.022  0.006  0.001  0.000  0.01       3.17       2082
6   0.148  0.297  0.282  0.169  0.072  0.023  0.006  0.001         0          4.17       2502
7   0.148  0.297  0.282  0.169  0.072  0.023  0.006                0          5.16       3096

Hence the optimal number of repairmen is r = 5.
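The cost optimization can be sketched in Python; the cost coefficients below are the ones assumed in the example, and the function name is ours:

```python
def repair_cost(n, rho, r, c_wait, c_idle):
    """Expected hourly cost of an M/M/r/n/n repair shop:
    waiting machines cost c_wait per hour, idle repairmen c_idle per hour."""
    a = [1.0]
    for k in range(1, n + 1):
        a.append((n - k + 1) / min(k, r) * rho * a[-1])
    P0 = 1.0 / sum(a)
    Pk = [w * P0 for w in a]
    Qbar = sum((k - r) * p for k, p in enumerate(Pk) if k > r)   # waiting machines
    Sbar = sum((r - k) * p for k, p in enumerate(Pk) if k < r)   # idle repairmen
    return c_wait * Qbar + c_idle * Sbar

costs = {r: repair_cost(20, 0.1, r, 18000.0, 600.0) for r in range(3, 8)}
best_r = min(costs, key=costs.get)
```

The cost is first driven down by the rapidly shrinking queue, then driven back up by the idle-repairman term, so a unique interior minimum appears.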
3.5 The M/M/r/K/n Queue

In this case the capacity of the system is K (r ≤ K ≤ n): an arriving customer that finds K customers in the system is blocked and returns to the source. The number of customers in the system is a birth-death process with rates
$$\lambda_k=\lambda(n-k),\qquad 0\le k<K,$$
$$\mu_k=\begin{cases}k\mu, & 1\le k\le r,\\ r\mu, & r<k\le K.\end{cases}$$
The steady-state distribution is
$$P_k(n,r,K)=\binom{n}{k}\varrho^kP_0(n,r,K),\qquad 0\le k<r,$$
$$P_k(n,r,K)=\binom{n}{k}\frac{k!}{r!\,r^{\,k-r}}\,\varrho^kP_0(n,r,K),\qquad r\le k\le K,$$
with normalizing condition
$$\sum_{i=0}^{K}P_i(n,r,K)=1.$$
The main performance measures are
$$\bar N=\sum_{k=0}^{K}kP_k,\qquad \bar Q=\sum_{k=r}^{K}(k-r)P_k,\qquad \bar r=\sum_{k=1}^{r-1}kP_k+r\sum_{k=r}^{K}P_k,\qquad \bar m=n-\bar N,$$
$$U_t=\frac{n-\bar N}{n}=\frac{E(\tau)}{E(\tau)+\bar T},\qquad E(\tau)=\frac1\lambda,\qquad U_S=\frac{\bar r}{r},$$
$$\bar\lambda=\mu\bar r,\qquad \bar W=\frac{\bar Q}{\bar\lambda},\qquad \bar T=\frac{\bar N}{\bar\lambda}=\bar W+\frac1\mu,\qquad \bar N_R=\bar\lambda\,E(\tau).$$
By using the Bayes rule it is easy to see that for the probability of blocking we have
$$P_B(n,r,K)=\frac{(n-K)P_K(n,r,K)}{\sum_{i=0}^{K}(n-i)P_i(n,r,K)}=P_K(n-1,r,K).$$
In particular, if K = n, then
$$\bar\lambda=\lambda(n-\bar N)=\mu\bar r,$$
thus
$$\bar T=\frac{\bar N}{\lambda(n-\bar N)},\qquad E(\tau)=\frac1\lambda,\qquad P_B=0,$$
as it was expected.
Furthermore, by elementary calculations it can be seen that the normalizing constant $P_0(n,r,K)$ can be expressed recursively with respect to K for fixed r, n. Namely we have
$$P_0^{-1}(n,r,K)=P_0^{-1}(n,r,K-1)+\binom{n}{K}\frac{K!}{r!\,r^{\,K-r}}\,\varrho^{K},$$
with initial value
$$P_0^{-1}(n,r,r)=\sum_{i=0}^{r}\binom{n}{i}\varrho^{\,i}.$$
By using the Bayes rule it is easy to see that the probability that an arriving customer finds k customers in the system is
$$\Pi_k(n,r,K)=P_k(n-1,r,K),\qquad k=0,\ldots,K,$$
but the probability that a customer entering the system finds k customers there is
$$\tilde\Pi_k(n,r,K)=\frac{(n-k)P_k(n,r,K)}{\sum_{i=0}^{K-1}(n-i)P_i(n,r,K)}=\frac{P_k(n-1,r,K)}{1-P_K(n-1,r,K)},\qquad k=0,\ldots,K-1.$$
Hence the probability of waiting and the density function of the waiting time can be expressed as
$$P_W(n,r,K)=\sum_{k=r}^{K-1}\tilde\Pi_k(n,r,K),\qquad f_W(0)=1-P_W(n,r,K),$$
$$f_W(x)=\sum_{k=r}^{K-1}\frac{r\mu(r\mu x)^{k-r}}{(k-r)!}e^{-r\mu x}\,\tilde\Pi_k(n,r,K),$$
and analogously to the earlier arguments, for the density function we obtain
$$f_W(x)=r\mu\,\frac{r^r}{r!}\,\frac{P\!\left(K-1-r,r(z+\mu x)\right)}{P(K-1,rz)}\cdot\frac{P_0(n-1,r,K)}{1-P_K(n-1,r,K)}.$$
In particular, if K = n, that is, when every customer may enter the system, then $P_K(n-1,r,K)=0$ and we get the formulas derived before. By analogous modifications, for the distribution function we have
$$F_W(x)=1-\frac{r^r}{r!}\,\frac{Q\!\left(K-1-r,r(z+\mu x)\right)}{P(K-1,rz)}\cdot\frac{P_0(n-1,r,K)}{1-P_K(n-1,r,K)},$$
and for the Laplace transform
$$L_W(s)=1-P_W+\frac{r^r}{r!}\left(\frac{r\mu}{r\mu+s}\right)^{K-r}e^{\frac{s}{\lambda}}\,\frac{Q\!\left(K-1-r,\frac{r\mu+s}{\lambda}\right)P_0(n-1,r,K)}{P(K-1,rz)\left(1-P_K(n-1,r,K)\right)}.$$

3.6 The M/G/1/n/n/PS Queue
The requests arrive from a finite source where they spend an exponentially distributed time with parameter λ. The required service time S is a generally distributed random variable with E(S) < ∞. Let us denote by G(x) and g(x) its distribution function and density function, respectively, assuming that G(0+) = 0. The service discipline is Processor Sharing, that is, all customers in the service facility are served simultaneously, each receiving an equal share of the service capacity.

The method of supplementary variables is used for the description of the behavior of the system. Let us introduce the following random variables. Let ν(t) denote the number of customers in the system at time t, and for ν(t) > 0 let $\xi_1(t),\ldots,\xi_{\nu(t)}(t)$ denote the elapsed service times of the requests. The stochastic process
$$X(t)=\left(\nu(t);\,\xi_1(t),\ldots,\xi_{\nu(t)}(t)\right)$$
is a continuous-time Markov process with discrete and continuous components, a so-called piecewise-linear Markov process. It should be noted that many practical problems can be modeled by the help of these processes, and the interested reader is referred to the book of Gnedenko and Kovalenko [31].
Let
$$P_k(t,x_1,\ldots,x_k)\,dx_1\ldots dx_k=P\left(\nu(t)=k;\ x_i\le\xi_i(t)<x_i+dx_i,\ i=1,\ldots,k\right),$$
that is, $P_k(t,x_1,\ldots,x_k)$, $k=1,\ldots,n$, denotes the density of the event that at time t there are k customers in the system and their elapsed service times are $x_1,\ldots,x_k$. Writing down the usual balance relations over a small interval of length δ (note that under Processor Sharing each elapsed service time increases at rate 1/k, and the equations for $k=1,\ldots,n-1$ contain the arrival term $(n-k)\lambda$ and a departure term involving $g(x_{k+1})$), and introducing the normed densities
$$q_k(x_1,\ldots,x_k)=\lim_{t\to\infty}P_k(t;x_1,\ldots,x_k)\Big/\prod_{i=1}^{k}\left[1-G(x_i)\right],$$
one obtains in steady state, in particular,
$$\frac1n\sum_{i=1}^{n}\frac{\partial}{\partial x_i}\,q_n(x_1,\ldots,x_n)=0.$$
Besides these equations we need the boundary conditions, which are
$$q_1(0)=n\lambda P_0,\qquad q_k(0,x_1,\ldots,x_{k-1})=(n-k+1)\lambda\,q_{k-1}(x_1,\ldots,x_{k-1}),\qquad k=2,\ldots,n.$$
The solution to this set of integro-differential equations is surprisingly simple, namely
$$q_k(x_1,\ldots,x_k)=P_0\,\lambda^k\,\frac{n!}{(n-k)!},$$
which can be proved by direct substitution.
Consequently
$$P_k(x_1,\ldots,x_k)=P_0\,\frac{n!}{(n-k)!}\,\lambda^k\prod_{i=1}^{k}\left[1-G(x_i)\right],\qquad k=1,\ldots,n.$$
Let us denote by $P_k$ the steady-state probability of the number of customers in the system. Clearly we have
$$P_k=\int_0^\infty\!\!\cdots\!\int_0^\infty P_k(x_1,\ldots,x_k)\,dx_1\ldots dx_k=P_0\,\frac{n!}{(n-k)!}\,\left(\lambda E(S)\right)^k,$$
with normalizing condition $\sum_{i=0}^{n}P_i=1$. Recall that this is the same as the distribution in the M/M/1/n/n system if $\varrho=\lambda E(S)$.
It is not difficult to see that for this M/G/1/n/n/PS system the performance measures can be calculated as

(i) $\bar N=\sum_{k=1}^{n}kP_k$,

(ii) $U^{(i)}=\dfrac{\frac1\lambda}{\frac1\lambda+\bar T}=\dfrac{n-\bar N}{n}$,

thus
$$\bar T=\frac{1}{\lambda}\,\frac{\bar N}{n-\bar N},$$
hence
$$\lambda(n-\bar N)\bar T=\bar N,$$
which is Little's formula. Clearly, due to the Processor Sharing discipline, the response time is longer than the required service time, and there is no waiting time since each customer is being served. The difference is $\bar T-E(S)$.
It can be proved, see Cohen [14], that for a G⃗/G⃗/1/n/n/PS system the steady-state probability that the customers with indexes $i_1,\ldots,i_k$ are in the system can be written as
$$P(i_1,\ldots,i_k)=C\,k!\prod_{j=1}^{k}\varrho_{i_j},\qquad \varrho_i=\frac{E(S_i)}{E(\tau_i)},\qquad i=1,\ldots,n.$$
3.7 The G⃗/M/r/n/n/FIFO Queue

This section is devoted to a generalized version of the finite-source model with multiple servers, where the customers are supposed to have heterogeneous, generally distributed source times and homogeneous, exponentially distributed service times. They are served according to the order of their arrivals. The detailed description of this model can be found in Sztrik [77].

Customers arrive from a finite source of size n and are served by one of r (r ≤ n) servers at a service facility according to the first-come, first-served (FIFO) discipline. If there is no idle server, then a waiting line is formed and the units are delayed. The service times of the units are supposed to be identically and exponentially distributed random variables with parameter μ. After completing service, the customer with index i returns to the source and stays there for a random time $\tau_i$ having general distribution function $F_i(x)$ with density $f_i(x)$. All random variables are assumed to be independent of each other. Notice that the M/M/r/n/n queue is the special case in which the source times $\tau_i$, $i=1,\ldots,n$, are exponential.

To use the supplementary variable technique, let us introduce the supplementary variable $\xi_i(t)$ to denote the elapsed source time of the request with index i, i = 1,…,n. Define X(t) as the process recording which customers are in the source, their elapsed source times, and the order of arrival of the customers staying at the service facility. This is a multicomponent piecewise-linear Markov process.

Let $V_k^n$ and $C_k^n$ denote the set of all variations and combinations of order k of the integers 1, 2,…,n, respectively, ordered lexicographically. Then the state space of the process $(X(t),\,t\ge0)$ consists of the set of points
$$(i_1,\ldots,i_k;\,x_1,\ldots,x_k;\,j_1,\ldots,j_{n-k}),$$
where $(i_1,\ldots,i_k)\in C_k^n$, $(j_1,\ldots,j_{n-k})\in V_{n-k}^n$, $x_i\in\mathbb{R}_+$, $i=1,\ldots,k$, $k=0,1,\ldots,n$. The process X(t) is in state $(i_1,\ldots,i_k;x_1,\ldots,x_k;j_1,\ldots,j_{n-k})$ if k customers with indexes $(i_1,\ldots,i_k)$ have been staying in the source for times $(x_1,\ldots,x_k)$, respectively, while the rest need service and their indexes in the order of arrival are $(j_1,\ldots,j_{n-k})$.
To derive the Kolmogorov equations we should consider the transitions that can occur in an arbitrary time interval (t, t + h). For 0 ≤ n − k < r (every customer at the facility is in service) the transition probabilities are the following:
$$P\left[X(t+h)=(i_1,\ldots,i_k;x_1+h,\ldots,x_k+h;j_1,\ldots,j_{n-k})\mid X(t)=(i_1,\ldots,i_k;x_1,\ldots,x_k;j_1,\ldots,j_{n-k})\right]$$
$$=\left(1-(n-k)\mu h\right)\prod_{l=1}^{k}\frac{1-F_{i_l}(x_l+h)}{1-F_{i_l}(x_l)}+o(h),$$
while for r ≤ n − k ≤ n (all servers are busy)
$$=\left(1-r\mu h\right)\prod_{l=1}^{k}\frac{1-F_{i_l}(x_l+h)}{1-F_{i_l}(x_l)}+o(h).$$
The transitions corresponding to a service completion (a customer joins the source with elapsed source time 0) and to an arrival (customer $j_{n-k}$ leaves the source after an elapsed source time y and joins the end of the queue) can be written down analogously; in the latter case $(i_1',\ldots,j_{n-k}',\ldots,i_k')$ denotes the lexicographical order of the indexes $(i_1,\ldots,i_k,j_{n-k})$, while $(x_1',\ldots,y,\ldots,x_k')$ indicates the corresponding elapsed times.

Let
$$Q_{i_1,\ldots,i_k;j_1,\ldots,j_{n-k}}(x_1,\ldots,x_k)=\lim_{t\to\infty}Q_{i_1,\ldots,i_k;j_1,\ldots,j_{n-k}}(x_1,\ldots,x_k;t).$$
Notice that X(t) belongs to the class of piecewise-linear Markov processes subject to discontinuous changes treated by Gnedenko and Kovalenko [31]; our statement follows from a theorem on page 211 of this monograph. Since by assumption $F_i(x)$ has a density function, for fixed k Theorem 2 provides the existence and uniqueness of the limits
$$q_{i_1,\ldots,i_k;j_1,\ldots,j_{n-k}}(x_1,\ldots,x_k)\,dx_1\ldots dx_k,\qquad k=1,\ldots,n,$$
where $q_{i_1,\ldots,i_k;j_1,\ldots,j_{n-k}}(x_1,\ldots,x_k)$ denotes the density function of state $(i_1,\ldots,i_k;x_1,\ldots,x_k;j_1,\ldots,j_{n-k})$ as $t\to\infty$.
Let us introduce the so-called normed density functions defined by
$$\tilde q_{i_1,\ldots,i_k;j_1,\ldots,j_{n-k}}(x_1,\ldots,x_k)=\frac{q_{i_1,\ldots,i_k;j_1,\ldots,j_{n-k}}(x_1,\ldots,x_k)}{\prod_{l=1}^{k}\left[1-F_{i_l}(x_l)\right]}.$$
Then we have

Theorem 3 The normed density functions satisfy the following system of integro-differential equations (3.1), (3.3) with boundary conditions (3.2), (3.4):
$$\left(\frac{\partial}{\partial x_1}+\cdots+\frac{\partial}{\partial x_k}\right)\tilde q_{i_1,\ldots,i_k;j_1,\ldots,j_{n-k}}(x_1,\ldots,x_k)=-(n-k)\mu\,\tilde q_{i_1,\ldots,i_k;j_1,\ldots,j_{n-k}}(x_1,\ldots,x_k)$$
$$+\int_0^\infty\tilde q_{i_1',\ldots,j_{n-k}',\ldots,i_k';j_1,\ldots,j_{n-k-1}}(x_1',\ldots,y,\ldots,x_k')\,f_{j_{n-k}}(y)\,dy,\qquad (3.1)$$
for $0\le n-k<r$, together with the boundary conditions (3.2) obtained at $x_l=0$, $l=1,\ldots,k$, where the summation extends over the variations $V^{i_l}_{j_1,\ldots,j_{n-k}}$; and
$$\left(\frac{\partial}{\partial x_1}+\cdots+\frac{\partial}{\partial x_k}\right)\tilde q_{i_1,\ldots,i_k;j_1,\ldots,j_{n-k}}(x_1,\ldots,x_k)=-r\mu\,\tilde q_{i_1,\ldots,i_k;j_1,\ldots,j_{n-k}}(x_1,\ldots,x_k)$$
$$+\int_0^\infty\tilde q_{i_1',\ldots,j_{n-k}',\ldots,i_k';j_1,\ldots,j_{n-k-1}}(x_1',\ldots,y,\ldots,x_k')\,f_{j_{n-k}}(y)\,dy,\qquad (3.3)$$
for $r\le n-k\le n-1$, with the analogous boundary conditions (3.4); furthermore
$$r\mu\,Q_{0;j_1,\ldots,j_n}=\int_0^\infty\tilde q_{j_n;j_1,\ldots,j_{n-1}}(y)\,f_{j_n}(y)\,dy.$$
Proof: Since the process $(X(t),\,t\ge0)$ is Markovian, its densities must satisfy the Kolmogorov equations. The derivation is based on the examination of the sample paths of the process during an infinitesimal interval of width h. For $0\le n-k<r$ the following relation holds:
$$q_{i_1,\ldots,i_k;j_1,\ldots,j_{n-k}}(x_1+h,\ldots,x_k+h)=q_{i_1,\ldots,i_k;j_1,\ldots,j_{n-k}}(x_1,\ldots,x_k)\left(1-(n-k)\mu h\right)\prod_{l=1}^{k}\frac{1-F_{i_l}(x_l+h)}{1-F_{i_l}(x_l)}$$
$$+\int_0^\infty q_{i_1',\ldots,j_{n-k}',\ldots,i_k';j_1,\ldots,j_{n-k-1}}(x_1',\ldots,y,\ldots,x_k')\prod_{l=1}^{k}\frac{1-F_{i_l}(x_l+h)}{1-F_{i_l}(x_l)}\,\frac{f_{j_{n-k}}(y)\,h}{1-F_{j_{n-k}}(y)}\,dy+o(h),$$
and a similar relation, with $(1-r\mu h)$ in place of $(1-(n-k)\mu h)$, holds for $r\le n-k\le n-1$. Finally
$$Q_{0;j_1,\ldots,j_n}=Q_{0;j_1,\ldots,j_n}\left(1-r\mu h\right)+\int_0^\infty q_{j_n;j_1,\ldots,j_{n-1}}(y)\,\frac{f_{j_n}(y)\,h}{1-F_{j_n}(y)}\,dy+o(h).$$
Dividing both sides of the equations by $\prod_{l=1}^{k}\left[1-F_{i_l}(x_l+h)\right]$, taking into account the definition of the normed density functions, and letting $h\to0$, the statement of the theorem can easily be obtained.

If we try the constant solution
$$\tilde q_{i_1,\ldots,i_k;j_1,\ldots,j_{n-k}}(x_1,\ldots,x_k)=c_k,\qquad (j_1,\ldots,j_{n-k})\in V_{n-k}^{n},\qquad k=1,\ldots,n,$$
then by direct substitution it can easily be verified that it satisfies these equations together with the boundary conditions. Moreover, the constants $c_k$ can be obtained by the help of $c_n$, namely
$$c_k=\frac{1}{r!\,r^{\,n-r-k}\,\mu^{\,n-k}}\,c_n,\qquad 0\le k\le n-r,$$
$$c_k=\frac{1}{(n-k)!\,\mu^{\,n-k}}\,c_n,\qquad n-r\le k\le n.$$
Since these equations completely describe the system, this is the required solution.
Let $Q_{i_1,\ldots,i_k;j_1,\ldots,j_{n-k}}$ denote the steady-state probability that the customers with indexes $(i_1,\ldots,i_k)$ are in the source and the order of arrivals of the rest to the service facility is $(j_1,\ldots,j_{n-k})$; furthermore, denote by $Q_{i_1,\ldots,i_k}$ the stationary probability that the requests with indexes $(i_1,\ldots,i_k)$ are staying in the source. Integrating the densities we get
$$Q_{i_1,\ldots,i_k;j_1,\ldots,j_{n-k}}=c_k\prod_{l=1}^{k}\frac{1}{\lambda_{i_l}},\qquad k=1,\ldots,n,$$
where $1/\lambda_i$ denotes the mean of $\tau_i$, and summing over all $(n-k)!$ arrival orders,
$$Q_{i_1,\ldots,i_k}=(n-k)!\,c_k\prod_{l=1}^{k}\frac{1}{\lambda_{i_l}},\qquad (i_1,\ldots,i_k)\in C_k^n.$$
Furthermore
$$c_n=\bar Q_n\,\lambda_1\cdots\lambda_n,\qquad \bar Q_k=\sum_{(i_1,\ldots,i_k)\in C_k^n}Q_{i_1,\ldots,i_k},\qquad k=0,\ldots,n,$$
where $\bar Q_n$ can be obtained by the help of the normalizing condition $\sum_{k=0}^{n}\bar Q_k=1$. In the homogeneous case ($\lambda_i=\lambda$) this yields
$$\bar Q_k=\frac{n!}{r!\,k!\,r^{\,n-k-r}}\,\varrho^{\,n-k}\,\bar Q_n,\qquad 0\le k\le n-r,$$
$$\bar Q_k=\binom{n}{k}\varrho^{\,n-k}\,\bar Q_n,\qquad n-r\le k\le n,$$
which is the result of the paper of Bunday and Scraton [12], and for r = 1 these are the formulas obtained by Schatte [72]. Thus the distribution of the number of customers at the service facility is
$$P_k=\binom{n}{k}\varrho^kP_0,\qquad 0\le k\le r,$$
$$P_k=\frac{n!}{r!\,(n-k)!\,r^{\,k-r}}\,\varrho^kP_0,\qquad r\le k\le n.$$
This is exactly the same result that we got for the M/M/r/n/n model. It should be underlined that the distribution of the number of customers in the system does not depend on the form of $F_i(x)$, only on its mean $1/\lambda_i$; that is, it is robust (insensitive).
Performance Measures

Utilization of the sources

Let Q^{(i)} denote the steady-state probability that source i is busy with generating a new customer, that is

Q^{(i)} = \sum_{k=1}^{n} \sum_{(i_1,\dots,i_k)\ni i} Q_{i_1,\dots,i_k}.

Consequently the mean response time \bar T_i of the ith request can be calculated as

\bar T_i = \bar W_i + \frac{1}{\mu} = \left(1 - Q^{(i)}\right)\left(\lambda_i Q^{(i)}\right)^{-1}, \qquad i = 1, \dots, n.

Since

\sum_{i=1}^{n}\left(1 - Q^{(i)}\right) = \bar N,

where \bar N denotes the mean number of customers at the service facility, this can be rewritten as

\sum_{i=1}^{n} \lambda_i \bar T_i Q^{(i)} = \bar N,

which is Little's formula for the ~G/M/r/n/n/FIFO queueing system. It should be noted that, using the terminology of the machine interference problem, U^{(i)}, \bar W_i and \bar T_i denote the utilization, the mean waiting time and the mean time spent in a broken state of the ith machine, respectively. This model can be generalized in such a way that the service intensities depend on the number of customers in the source, see Sztrik [79, 80].
Part II

Exercises
Chapter 4
Infinite-Source Systems
Exercise 1 Solve the following system of equations by the help of difference equations:

\lambda P_0 = \mu P_1,
(\lambda + \mu)P_n = \lambda P_{n-1} + \mu P_{n+1}, \qquad n \ge 1.

Solution:
It is easy to see that the system can be rewritten as

\lambda P_{n-1} - (\lambda + \mu)P_n + \mu P_{n+1} = 0, \qquad n = 1, 2, \dots,

which is a second-order difference equation with constant coefficients. Its general solution can be obtained in the form

P_n = c_1 x_1^n + c_2 x_2^n, \qquad n = 1, 2, \dots,

where x_1, x_2 are the solutions to

\mu x^2 - (\lambda + \mu)x + \lambda = 0.

It can easily be verified that x_1 = 1, x_2 = \varrho, and thus

P_n = c_1 + c_2\varrho^n, \qquad n = 1, 2, \dots.

However, P_1 = \varrho P_0, and because \sum_{n=0}^{\infty}P_n = 1, thus c_1 = 0 and c_2 = P_0 = 1 - \varrho.
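The solution of Exercise 1 can be checked numerically. The following sketch (the rates lam and mu are illustrative values, not taken from the text) verifies that P_n = (1 - \varrho)\varrho^n satisfies both balance equations:

```python
def mm1_distribution(lam, mu, n_max):
    """Return P_0..P_{n_max} for a stable M/M/1 queue (rho = lam/mu < 1)."""
    rho = lam / mu
    return [(1 - rho) * rho**n for n in range(n_max + 1)]

lam, mu = 2.0, 5.0            # example rates, chosen for illustration
P = mm1_distribution(lam, mu, 50)

# boundary equation: lam*P_0 = mu*P_1
assert abs(lam * P[0] - mu * P[1]) < 1e-12
# interior equations: (lam+mu)*P_n = lam*P_{n-1} + mu*P_{n+1}
for n in range(1, 49):
    assert abs((lam + mu) * P[n] - (lam * P[n - 1] + mu * P[n + 1])) < 1e-12
# the truncated sum is very close to 1 since rho**51 is negligible
assert abs(sum(P) - 1.0) < 1e-6
```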
Exercise 2 Find the generating function of the number of customers in the system for
an M/M/1 queueing system by using the steady-state balance equations. Then derive the
corresponding distribution.
Solution:
Starting with the set of equations

\lambda P_0 = \mu P_1,
(\lambda + \mu)P_n = \lambda P_{n-1} + \mu P_{n+1}, \qquad n \ge 1,

multiply the equation for P_n by s^n and sum over n \ge 1. With G_N(s) = \sum_{n=0}^{\infty}P_ns^n this gives

(\lambda + \mu)\left(G_N(s) - P_0\right) = \lambda s\,G_N(s) + \frac{\mu}{s}\left(G_N(s) - P_0 - sP_1\right).

Using \mu P_1 = \lambda P_0 and rearranging,

G_N(s)\left(\lambda + \mu - \lambda s - \frac{\mu}{s}\right) = \mu P_0\,\frac{s-1}{s}.

Since \lambda + \mu - \lambda s - \mu/s = \frac{(s-1)(\mu - \lambda s)}{s}, we obtain

G_N(s) = \frac{\mu P_0}{\mu - \lambda s} = \frac{P_0}{1 - \varrho s},

and the normalizing condition G_N(1) = 1 yields P_0 = 1 - \varrho. That is,

G_N(s) = \frac{1 - \varrho}{1 - \varrho s},

which is exactly the generating function of a modified geometric distribution with parameter (1 - \varrho). It can be proved as follows: if

P(N = k) = (1 - \varrho)\varrho^k, \qquad k = 0, 1, \dots,

then its generating function is

G_N(s) = \sum_{k=0}^{\infty}s^k(1 - \varrho)\varrho^k = \frac{1 - \varrho}{1 - \varrho s}.
Exercise 3 Find the generating function of the number of customers waiting in the queue for an M/M/1 queueing system.

Solution:
Clearly

G_Q(s) = (P_0 + P_1)s^0 + \sum_{k=2}^{\infty}s^{k-1}P_k = P_0 + \sum_{k=1}^{\infty}s^{k-1}P_k
= 1 - \varrho + \sum_{k=1}^{\infty}s^{k-1}\varrho^k(1-\varrho) = 1 - \varrho + \frac{\varrho(1-\varrho)}{1 - \varrho s}.

Hence the mean queue length is

\bar Q = G_Q'(1) = \frac{\varrho^2}{1-\varrho}.
The Laplace-transform of the response time T is

L_T(s) = \sum_{k=0}^{\infty}\left(\frac{\mu}{\mu+s}\right)^{k+1}\varrho^k(1-\varrho)
= (1-\varrho)\,\frac{\mu}{\mu+s}\cdot\frac{1}{1 - \frac{\varrho\mu}{\mu+s}}
= \frac{(1-\varrho)\mu}{(1-\varrho)\mu + s},

which was expected, since T follows an exponential distribution with parameter \mu(1-\varrho). To get the Laplace-transform of W we have

L_W(s) = \sum_{k=0}^{\infty}\left(\frac{\mu}{\mu+s}\right)^{k}\varrho^k(1-\varrho)
= 1 - \varrho + \sum_{k=1}^{\infty}\left(\frac{\mu}{\mu+s}\right)^{k}\varrho^k(1-\varrho)
= 1 - \varrho + \frac{\varrho(1-\varrho)\mu}{(1-\varrho)\mu + s},

which should be

L_T(s)\,\frac{\mu+s}{\mu},

since L_T(s) = L_W(s)\,\frac{\mu}{\mu+s}. Indeed,

\frac{\mu+s}{\mu}\cdot\frac{(1-\varrho)\mu}{(1-\varrho)\mu+s} = \frac{(1-\varrho)(\mu+s)}{(1-\varrho)\mu+s} = 1 - \varrho + \varrho\,\frac{(1-\varrho)\mu}{(1-\varrho)\mu+s}.

Hence the means are

\bar T = \frac{1}{\mu(1-\varrho)}, \qquad \bar W = \frac{\varrho}{\mu(1-\varrho)}.
Exercise: Show that for \varrho < 1 the mean number of customers \bar N of an M/M/1/K system tends to \varrho/(1-\varrho) as K \to \infty.

Solution:
It is well-known that if \varrho < 1 then \lim_{K\to\infty}\varrho^K = 0. Since

\bar N = \frac{\varrho\left(1 - (K+1)\varrho^K + K\varrho^{K+1}\right)}{(1-\varrho)(1-\varrho^{K+1})},

and by L'Hospital's rule

\lim_{K\to\infty}K\varrho^K = \lim_{K\to\infty}\frac{K}{\varrho^{-K}} = \lim_{K\to\infty}\frac{1}{-\varrho^{-K}\ln\varrho} = 0,

we obtain \lim_{K\to\infty}\bar N = \varrho/(1-\varrho).
Exercise: Verify that for the M/M/1/K system the Laplace-transform

L_T(s) = \frac{\mu P_0}{1 - P_K}\cdot\frac{1 - \left(\frac{\lambda}{\mu+s}\right)^{K}}{\mu - \lambda + s}

satisfies L_T(0) = 1.

Solution:

L_T(0) = \frac{\mu P_0}{1-P_K}\cdot\frac{1-\varrho^K}{\mu-\lambda} = \frac{P_0\left(1-\varrho^K\right)}{(1-P_K)(1-\varrho)} = \frac{\frac{1-\varrho}{1-\varrho^{K+1}}\left(1-\varrho^K\right)}{\frac{1-\varrho^K}{1-\varrho^{K+1}}\,(1-\varrho)} = 1,

where we used P_0 = \frac{1-\varrho}{1-\varrho^{K+1}} and 1 - P_K = \frac{1-\varrho^K}{1-\varrho^{K+1}}.
Furthermore, differentiating

L_T(s) = \frac{\mu P_0}{1 - P_K}\cdot\frac{1 - \left(\frac{\lambda}{\mu+s}\right)^{K}}{\mu - \lambda + s}

and substituting s = 0 we get

-L_T'(0) = \frac{P_0\left(1 - (K+1)\varrho^K + K\varrho^{K+1}\right)}{\mu(1-P_K)(1-\varrho)^2} = \frac{\bar N}{\lambda(1-P_K)},

that is

\bar T = \frac{\bar N}{\lambda(1-P_K)}.
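The relation \bar T = \bar N/(\lambda(1-P_K)) can be cross-checked against the arrival-theorem form \bar T = \sum_n \Pi_n(n+1)/\mu; the rates and K below are illustrative values, not from the text:

```python
def mm1k_probs(lam, mu, K):
    """Steady-state probabilities of an M/M/1/K queue."""
    rho = lam / mu
    w = [rho**n for n in range(K + 1)]
    s = sum(w)
    return [x / s for x in w]

lam, mu, K = 3.0, 4.0, 10
P = mm1k_probs(lam, mu, K)
N_bar = sum(n * p for n, p in enumerate(P))

# distribution seen by an accepted arrival: Pi_n = P_n / (1 - P_K)
Pi = [P[n] / (1 - P[K]) for n in range(K)]
# an accepted arrival finding n customers stays an Erlang(n+1, mu) time
T_arrival = sum(Pi[n] * (n + 1) / mu for n in range(K))
T_little = N_bar / (lam * (1 - P[K]))

assert abs(T_arrival - T_little) < 1e-12
```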
For the M/M/n/n (Erlang loss) system, P_k = \frac{\varrho^k}{k!}P_0 with

P_0 = \left(\sum_{k=0}^{n}\frac{\varrho^k}{k!}\right)^{-1},

hence the generating function of the number of customers is

G_N(s) = \frac{\sum_{k=0}^{n}\frac{(s\varrho)^k}{k!}}{\sum_{k=0}^{n}\frac{\varrho^k}{k!}} = e^{-\varrho(1-s)}\,\frac{Q(n,\varrho s)}{Q(n,\varrho)},

where Q(n,x) = e^{-x}\sum_{k=0}^{n}\frac{x^k}{k!}. Since \frac{\partial Q(n,x)}{\partial x} = -P(n,x) with P(n,x) = \frac{x^n}{n!}e^{-x},

G_N'(s) = \varrho\,e^{-\varrho(1-s)}\,\frac{Q(n,\varrho s) - P(n,\varrho s)}{Q(n,\varrho)},

hence

G_N'(1) = \varrho - \varrho B(n,\varrho) = \varrho\left(1 - B(n,\varrho)\right),

which was obtained earlier.
For the second moment we have

E(N^2) = \sum_{k=1}^{n}k^2P_k = \sum_{k=1}^{n}\left(k(k-1) + k\right)P_k = \sum_{k=2}^{n}k(k-1)\frac{\varrho^k}{k!}P_0 + E(N)
= \varrho^2\sum_{i=0}^{n-2}\frac{\varrho^i}{i!}P_0 + E(N) = \varrho^2\left(1 - P_n - P_{n-1}\right) + E(N)
= \varrho^2\left(1 - P_n\left(1 + \frac{n}{\varrho}\right)\right) + E(N).
Exercise 11 Show that B(m, a) is a monotone decreasing sequence and its limit is 0.

Solution:
By the recursion

B(m,a) = \frac{aB(m-1,a)}{m + aB(m-1,a)} < \frac{a}{m},

the inequality B(m,a) < B(m-1,a) is equivalent to

\frac{aB(m-1,a)}{m + aB(m-1,a)} - B(m-1,a) < 0,

\frac{B(m-1,a)\left(a - m - aB(m-1,a)\right)}{m + aB(m-1,a)} < 0,

a - m - aB(m-1,a) < 0,

B(m-1,a) > \frac{a-m}{a}.

This is certainly satisfied when m \ge a, since then (a-m)/a \le 0 \le B(m-1,a); for m < a it also holds, because the carried load cannot exceed the number of servers, a\left(1 - B(m-1,a)\right) \le m-1, hence B(m-1,a) \ge (a-m+1)/a > (a-m)/a. Thus B(m,a) is monotone decreasing in m, which was expected since as the number of servers increases the probability of loss should decrease. Finally, from B(m,a) < a/m it follows that \lim_{m\to\infty}B(m,a) = 0.
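The recursion used in the proof gives a numerically stable way to compute Erlang's B formula; the sketch below (with an illustrative offered load a = 8) also confirms the monotone decrease and the limit:

```python
def erlang_b(m, a):
    """Erlang's B (loss) formula via B(k,a) = a*B(k-1,a)/(k + a*B(k-1,a)), B(0,a)=1."""
    b = 1.0
    for k in range(1, m + 1):
        b = a * b / (k + a * b)
    return b

a = 8.0                                          # example offered load
values = [erlang_b(m, a) for m in range(1, 30)]
assert all(x > y for x, y in zip(values, values[1:]))   # strictly decreasing in m
assert values[-1] < 1e-3                                # tends to 0 as m grows
```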
Exercise 12 Find a recursion for Erlang's C formula

C(m,a) = \frac{B(m,a)}{1 - \frac{a}{m}\left(1 - B(m,a)\right)}.

Solution:
We should write a recursion for C(m,a), since B(m,a) can be obtained recursively. First we show how B(m-1,a) can be expressed by the help of C(m-1,a), and then substitute into the recursion

B(m,a) = \frac{aB(m-1,a)}{m + aB(m-1,a)}

to get the desired formula. So let us express B(m,a) via C(m,a); from

C(m,a) = \frac{mB(m,a)}{m - a\left(1 - B(m,a)\right)}

we get

B(m,a) = \frac{(m-a)C(m,a)}{m - aC(m,a)},

which is positive since m > a is the stability condition for an M/M/m queueing system. This shows that

B(m,a) < C(m,a),

which was expected because of the nature of the problem. Consequently

B(m-1,a) = \frac{(m-1-a)C(m-1,a)}{m-1 - aC(m-1,a)},
and m-1 > a is also valid due to the stability condition. Let us first express C(m,a) by the help of B(m-1,a) and then substitute. To do so,

C(m,a) = \frac{m\,\frac{aB(m-1,a)}{m+aB(m-1,a)}}{m - a\left(1 - \frac{aB(m-1,a)}{m+aB(m-1,a)}\right)}
= \frac{aB(m-1,a)}{m + aB(m-1,a) - a} = \frac{aB(m-1,a)}{m - a\left(1 - B(m-1,a)\right)}.

Now let us substitute C(m-1,a) into here. Let us express the numerator and denominator in a simpler form, namely

NUM = a\,\frac{(m-1-a)C(m-1,a)}{m-1 - aC(m-1,a)},

DENOM = m - a\left(1 - \frac{(m-1-a)C(m-1,a)}{m-1 - aC(m-1,a)}\right)
= m - a\,\frac{m-1 - aC(m-1,a) - (m-1)C(m-1,a) + aC(m-1,a)}{m-1 - aC(m-1,a)}
= m - a\,\frac{(m-1)\left(1 - C(m-1,a)\right)}{m-1 - aC(m-1,a)}
= \frac{m(m-1) - maC(m-1,a) - a(m-1)\left(1 - C(m-1,a)\right)}{m-1 - aC(m-1,a)}
= \frac{m(m-1) - a(m-1) - aC(m-1,a)}{m-1 - aC(m-1,a)}
= \frac{(m-1)(m-a) - aC(m-1,a)}{m-1 - aC(m-1,a)}.

Thus

C(m,a) = \frac{a(m-1-a)C(m-1,a)}{(m-1)(m-a) - aC(m-1,a)},

and the initial value is C(1,a) = a. Thus the probability of waiting can be computed recursively, which is important because the main performance measures depend on this value.
Now, let us show that C(m,a) is a monotone decreasing sequence and tends to 0 as m increases, which is expected. It is not difficult to see that

C(m,a) < \frac{a(m-1-a)C(m-1,a)}{(m-1)(m-a) - a},

so C(m,a) < C(m-1,a) holds whenever a(m-1-a) < (m-1)(m-a) - a, that is whenever

m^2 - (1+2a)m + a(a+1) > 0.

The roots of this parabola are

m_{1,2} = \frac{1 + 2a \pm \sqrt{(1+2a)^2 - 4a(a+1)}}{2} = \frac{1 + 2a \pm 1}{2},

that is m_1 = a and m_2 = a+1, so if m > a+1 then the values of the parabola are positive. However, this condition is satisfied since the stability condition is m-1 > a. Furthermore, since

C(m,a) = \frac{B(m,a)}{1 - \frac{a}{m}\left(1 - B(m,a)\right)},

therefore \lim_{m\to\infty}C(m,a) = 0, which was expected. This can be proved by direct calculations as well, since

C(m,\varrho) = \frac{\varrho^m}{m!}\cdot\frac{m}{m-\varrho}\,P_0(m),

and from

\lim_{m\to\infty}\frac{\varrho^m}{m!}\cdot\frac{m}{m-\varrho} = 0, \qquad \lim_{m\to\infty}P_0(m) = e^{-\varrho},

the statement follows.
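The derived recursion can be verified against the direct definition C(m,a) = B(m,a)/(1 - (a/m)(1 - B(m,a))); the load a = 3 below is an illustrative value:

```python
def erlang_b(m, a):
    """Erlang's B formula via the standard recursion."""
    b = 1.0
    for k in range(1, m + 1):
        b = a * b / (k + a * b)
    return b

def erlang_c(m, a):
    """Erlang's C (delay) formula, computed from B."""
    b = erlang_b(m, a)
    return b / (1 - (a / m) * (1 - b))

a = 3.0
# the recursion C(m,a) = a(m-1-a)C(m-1,a)/((m-1)(m-a) - aC(m-1,a)) for m-1 > a
for m in range(5, 15):
    c_prev = erlang_c(m - 1, a)
    rhs = a * (m - 1 - a) * c_prev / ((m - 1) * (m - a) - a * c_prev)
    assert abs(erlang_c(m, a) - rhs) < 1e-12

# monotone decrease, as shown above
assert all(erlang_c(m, a) > erlang_c(m + 1, a) for m in range(4, 20))
```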
Exercise 13 Verify that the distribution function of the response time for an M/M/r queueing system in the case of r = 1 reduces to the formula obtained for an M/M/1 system.

Solution:

P(T > x) = e^{-\mu x}\left(1 + C(r,\varrho)\,\frac{1 - e^{-(r-1-\varrho)\mu x}}{r - 1 - \varrho}\right),

so for r = 1, using C(1,\varrho) = \varrho,

P(T > x) = e^{-\mu x}\left(1 + \varrho\,\frac{1 - e^{\varrho\mu x}}{-\varrho}\right) = e^{-\mu x}\left(1 - 1 + e^{\varrho\mu x}\right) = e^{-(1-\varrho)\mu x}.

Thus

F_T(x) = 1 - e^{-(1-\varrho)\mu x}.
Exercise 14 Show that the Pollaczek–Khinchine generating function of the M/G/1 system satisfies \lim_{z\to1}G_N(z) = 1.

Solution:
Since

\lim_{z\to1}G_N(z) = \lim_{z\to1}(1-\varrho)L_S(\lambda-\lambda z)\cdot\lim_{z\to1}\frac{1-z}{L_S(\lambda-\lambda z) - z},

and the second limit is of the form 0/0, by L'Hospital's rule

\lim_{z\to1}\frac{1-z}{L_S(\lambda-\lambda z) - z} = \lim_{z\to1}\frac{-1}{-\lambda L_S'(\lambda-\lambda z) - 1} = \frac{-1}{\lambda E(S) - 1} = \frac{1}{1-\varrho},

and thus

\lim_{z\to1}G_N(z) = (1-\varrho)\cdot1\cdot\frac{1}{1-\varrho} = 1.
Exercise 15 Show that if the residual service time in an M/G/1 queueing system is denoted by R, then its Laplace-transform can be obtained as L_R(t) = \frac{1 - L_S(t)}{tE(S)}.

Solution:

L_R(t) = \int_0^{\infty}e^{-tx}\,\frac{1 - F_S(x)}{E(S)}\,dx.

Integrating by parts, with \frac{d}{dx}\left(1 - F_S(x)\right) = -f_S(x),

\int_0^{\infty}e^{-tx}\left(1 - F_S(x)\right)dx = \frac{1}{t} - \frac{1}{t}\int_0^{\infty}e^{-tx}f_S(x)\,dx = \frac{1 - L_S(t)}{t},

hence L_R(t) = \frac{1 - L_S(t)}{tE(S)}.
Exercise 16 Show that for an exponentially distributed service time with parameter \mu the residual service time R is also exponentially distributed with parameter \mu.

Solution:

L_R(t) = \frac{1 - \frac{\mu}{\mu+t}}{tE(S)} = \frac{\frac{t}{\mu+t}}{\frac{t}{\mu}} = \frac{\mu}{\mu+t},

thus R \sim Exp(\mu).
Exercise 17 By the help of the formulas for an M/G/1 system derive the corresponding formulas for an M/M/1 system.

Solution:
In this case

L_S(t) = \frac{\mu}{\mu+t},

so

L_T(t) = L_S(t)\,\frac{t(1-\varrho)}{t - \lambda + \lambda L_S(t)} = \frac{\mu}{\mu+t}\cdot\frac{t(1-\varrho)}{t - \lambda + \frac{\lambda\mu}{\mu+t}} = \frac{t(1-\varrho)\mu}{t(\mu + t - \lambda)} = \frac{(1-\varrho)\mu}{(1-\varrho)\mu + t},

and

G_N(z) = L_S(\lambda-\lambda z)\,\frac{(1-\varrho)(1-z)}{L_S(\lambda-\lambda z) - z} = \frac{(1-\varrho)(1-z)\mu}{\mu - z(\mu + \lambda - \lambda z)} = \frac{(1-\varrho)(1-z)\mu}{(1-z)(\mu - \lambda z)} = \frac{1-\varrho}{1 - \varrho z}.
Furthermore,

\bar W = \frac{\varrho\bar S}{1-\varrho}\cdot\frac{1 + C_S^2}{2} = \frac{\varrho\bar S}{1-\varrho}\cdot\frac{1+1}{2} = \frac{\varrho}{\mu(1-\varrho)},

\bar T = \bar W + \frac{1}{\mu} = \frac{1}{\mu}\left(\frac{\varrho}{1-\varrho} + 1\right) = \frac{1}{\mu(1-\varrho)}.

For the second moment of W,

E(W^2) = 2\bar W^2 + \frac{\lambda E(S^3)}{3(1-\varrho)} = 2\,\frac{\varrho^2}{\mu^2(1-\varrho)^2} + \frac{\lambda\,\frac{3!}{\mu^3}}{3(1-\varrho)} = \frac{2\varrho^2 + 2\varrho(1-\varrho)}{\mu^2(1-\varrho)^2} = \frac{2\varrho}{\mu^2(1-\varrho)^2},

thus

Var(W) = \frac{2\varrho}{\mu^2(1-\varrho)^2} - \frac{\varrho^2}{\mu^2(1-\varrho)^2} = \frac{(2-\varrho)\varrho}{\mu^2(1-\varrho)^2},

and

Var(T) = Var(W) + Var(S) = \frac{(2-\varrho)\varrho}{\mu^2(1-\varrho)^2} + \frac{1}{\mu^2} = \frac{2\varrho - \varrho^2 + 1 - 2\varrho + \varrho^2}{\mu^2(1-\varrho)^2} = \frac{1}{\mu^2(1-\varrho)^2}.
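These M/M/1 moment identities are easy to confirm numerically; the rates below are example values:

```python
# Consistency check of the M/M/1 results of Exercise 17.
lam, mu = 1.5, 2.5            # illustrative rates
rho = lam / mu

W_bar = rho / (mu * (1 - rho))
EW2 = 2 * rho / (mu**2 * (1 - rho)**2)            # E(W^2)
var_W = EW2 - W_bar**2
assert abs(var_W - rho * (2 - rho) / (mu**2 * (1 - rho)**2)) < 1e-12

var_T = var_W + 1 / mu**2                          # Var(T) = Var(W) + Var(S)
assert abs(var_T - 1 / (mu**2 * (1 - rho)**2)) < 1e-12

T_bar = W_bar + 1 / mu
assert abs(T_bar - 1 / (mu * (1 - rho))) < 1e-12
```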
Similarly, for the variance of the number of customers in the system,

Var(N) = \frac{\lambda^3E(S^3)}{3(1-\varrho)} + \left(\frac{\lambda^2E(S^2)}{2(1-\varrho)}\right)^2 + \frac{\lambda^2(3-2\varrho)E(S^2)}{2(1-\varrho)} + \varrho(1-\varrho)
= \frac{2\varrho^3}{1-\varrho} + \frac{\varrho^4}{(1-\varrho)^2} + \frac{\varrho^2(3-2\varrho)}{1-\varrho} + \varrho(1-\varrho)
= \frac{2\varrho^3(1-\varrho) + \varrho^4 + \varrho^2(3-2\varrho)(1-\varrho) + \varrho(1-\varrho)^3}{(1-\varrho)^2}
= \frac{\varrho}{(1-\varrho)^2},

and for the variance of the queue length,

Var(Q) = \frac{\lambda^3E(S^3)}{3(1-\varrho)} + \left(\frac{\lambda^2E(S^2)}{2(1-\varrho)}\right)^2 + \frac{\lambda^2E(S^2)}{2(1-\varrho)}
= \frac{2\varrho^3(1-\varrho) + \varrho^4 + \varrho^2(1-\varrho)}{(1-\varrho)^2} = \frac{\varrho^2(1 + \varrho - \varrho^2)}{(1-\varrho)^2}.

These verifications help us to see that the complicated formulas reduce to the simple ones.
Exercise 18 Using the Pollaczek–Khinchine transform equation

G_N(z) = (1-\varrho)L_S(\lambda-\lambda z)\,\frac{1-z}{L_S(\lambda-\lambda z) - z},

find \bar N.

Solution:
It is well-known that \bar N = G_N'(1), that is why we have to calculate the derivative of the right-hand side. However, the term \frac{1-z}{L_S(\lambda-\lambda z)-z} takes an indeterminate value at z = 1, hence L'Hospital's rule is used. Let us first define the function

f(z) = \frac{L_S(\lambda-\lambda z) - z}{1-z}.

Hence one can see that

G_N(z) = (1-\varrho)\,\frac{L_S(\lambda-\lambda z)}{f(z)}.

Since

L_S(\lambda-\lambda z) = 1 + \sum_{k=1}^{\infty}\frac{(-1)^kE(S^k)}{k!}(\lambda-\lambda z)^k,

we have

f(z) = \frac{1 + \sum_{k=1}^{\infty}\frac{(-1)^kE(S^k)}{k!}(\lambda-\lambda z)^k - z}{1-z} = 1 - \lambda E(S) + \frac{\lambda^2E(S^2)(1-z)}{2} - \dots,

so f(1) = 1 - \varrho and f'(1) = -\frac{\lambda^2E(S^2)}{2}. Hence

\bar N = G_N'(1) = (1-\varrho)\,\frac{\lambda E(S)f(1) + \frac{\lambda^2E(S^2)}{2}}{(1-\varrho)^2} = \frac{(1-\varrho)\left(\varrho(1-\varrho) + \frac{\lambda^2E(S^2)}{2}\right)}{(1-\varrho)^2} = \varrho + \frac{\lambda^2E(S^2)}{2(1-\varrho)} = \varrho + \frac{\varrho^2}{1-\varrho}\cdot\frac{1 + C_S^2}{2}.

Exercise 19 Derive \bar W and Var(W) for an M/G/1 system from the Laplace-transform

L_W(s) = \frac{s(1-\varrho)}{s - \lambda + \lambda L_S(s)}.

Solution:
Let us define the function

f(s) = \frac{s - \lambda + \lambda L_S(s)}{s} = 1 - \lambda\,\frac{1 - L_S(s)}{s},

which, after expansion using L_S(s) = \sum_{i=0}^{\infty}\frac{(-1)^iE(S^i)}{i!}s^i, can be written as

f(s) = 1 - \lambda E(S) + \frac{\lambda E(S^2)}{2}s - \frac{\lambda E(S^3)}{3!}s^2 + \dots.

Therefore

f'(s) = \frac{\lambda E(S^2)}{2} - \frac{2\lambda E(S^3)}{3!}s + \frac{3\lambda E(S^4)}{4!}s^2 - \dots,

f''(s) = -\frac{2\lambda E(S^3)}{3!} + \frac{2\cdot3\,\lambda E(S^4)}{4!}s - \dots.

Hence

f(0) = 1 - \varrho, \qquad f'(0) = \frac{\lambda E(S^2)}{2}, \qquad f''(0) = -\frac{\lambda E(S^3)}{3}.

Consequently, because

L_W(s) = \frac{1-\varrho}{f(s)},

we have

L_W'(s) = -(1-\varrho)\,\frac{f'(s)}{(f(s))^2}, \qquad L_W''(s) = -(1-\varrho)\,\frac{f''(s)(f(s))^2 - 2f(s)(f'(s))^2}{(f(s))^4}.

Thus

E(W) = -L_W'(0) = (1-\varrho)\,\frac{f'(0)}{(f(0))^2} = \frac{(1-\varrho)\,\frac{\lambda E(S^2)}{2}}{(1-\varrho)^2} = \frac{\lambda E(S^2)}{2(1-\varrho)} = \frac{\varrho\bar S}{1-\varrho}\cdot\frac{1 + C_S^2}{2}.

Similarly

E(W^2) = L_W''(0) = -(1-\varrho)\,\frac{f''(0)(f(0))^2 - 2f(0)(f'(0))^2}{(f(0))^4} = \frac{(1-\varrho)\left((1-\varrho)^2\,\frac{\lambda E(S^3)}{3} + 2(1-\varrho)\left(\frac{\lambda E(S^2)}{2}\right)^2\right)}{(1-\varrho)^4}
= 2\left(E(W)\right)^2 + \frac{\lambda E(S^3)}{3(1-\varrho)}.

Thus

Var(W) = E(W^2) - (E(W))^2 = 2(E(W))^2 + \frac{\lambda E(S^3)}{3(1-\varrho)} - (E(W))^2 = (E(W))^2 + \frac{\lambda E(S^3)}{3(1-\varrho)}.

Finally

Var(T) = Var(W + S) = Var(W) + Var(S).
Exercise 20 Show that the moments of the residual service time R in an M/G/1 system are

E(R^k) = \frac{E(S^{k+1})}{(k+1)E(S)}, \qquad k = 1, 2, \dots.

Solution:
As we have seen earlier,

L_R(s) = \frac{1 - L_S(s)}{sE(S)}.

Using the expansions

L_S(s) = \sum_{i=0}^{\infty}\frac{L_S^{(i)}(0)}{i!}s^i = \sum_{i=0}^{\infty}\frac{(-1)^iE(S^i)}{i!}s^i, \qquad L_R(s) = 1 + \sum_{k=1}^{\infty}\frac{(-1)^k}{k!}E(R^k)s^k,

we get

L_R(s) = \frac{1 - \sum_{i=0}^{\infty}\frac{(-1)^iE(S^i)}{i!}s^i}{sE(S)} = 1 + \sum_{k=1}^{\infty}\frac{(-1)^kE(S^{k+1})}{(k+1)!\,E(S)}\,s^k = 1 + \sum_{k=1}^{\infty}\frac{(-1)^k}{k!}\cdot\frac{E(S^{k+1})}{(k+1)E(S)}\,s^k.

Consequently

E(R^k) = \frac{E(S^{k+1})}{(k+1)E(S)}, \qquad k = 1, 2, \dots.
Exercise 21 Find the generating function of the number of customers arriving during a service time for an M/G/1 system.

Solution:
By applying the theorem of total probability we have

P(A(S) = k) = \int_0^{\infty}\frac{(\lambda x)^k}{k!}e^{-\lambda x}f_S(x)\,dx.

Hence its generating function can be written as

G_{A(S)}(z) = \sum_{k=0}^{\infty}z^k\int_0^{\infty}\frac{(\lambda x)^k}{k!}e^{-\lambda x}f_S(x)\,dx = \int_0^{\infty}\sum_{k=0}^{\infty}\frac{(z\lambda x)^k}{k!}e^{-\lambda x}f_S(x)\,dx
= \int_0^{\infty}e^{z\lambda x}e^{-\lambda x}f_S(x)\,dx = \int_0^{\infty}e^{-\lambda x(1-z)}f_S(x)\,dx = L_S(\lambda(1-z)).
Chapter 5
Finite-Source Systems
Exercise 22 If P(k,a) = \frac{a^k}{k!}e^{-a} and Q(k,a) = \sum_{i=0}^{k}\frac{a^i}{i!}e^{-a}, prove the important formula

\sum_{j=0}^{k}P(k-j, a_1)\,Q(j, a_2) = Q(k, a_1 + a_2).

Solution:
It is well-known that

Q(n,a) = \int_a^{\infty}P(n,y)\,dy,

therefore

\sum_{j=0}^{k}P(k-j,a_1)Q(j,a_2) = \sum_{j=0}^{k}\frac{a_1^{k-j}}{(k-j)!}e^{-a_1}\int_{a_2}^{\infty}\frac{y^j}{j!}e^{-y}\,dy
= \int_{a_2}^{\infty}\frac{(y+a_1)^k}{k!}e^{-(a_1+y)}\,dy = \int_{a_1+a_2}^{\infty}\frac{t^k}{k!}e^{-t}\,dt = Q(k, a_1+a_2),

where we introduced the substitution t = y + a_1.
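The identity of Exercise 22 can be verified directly (a₁ and a₂ below are arbitrary example values):

```python
from math import exp, factorial

def P(k, a):
    """Poisson probability P(k,a) = a^k e^{-a} / k!."""
    return a**k * exp(-a) / factorial(k)

def Q(k, a):
    """Cumulative Poisson probability Q(k,a) = sum_{i<=k} P(i,a)."""
    return sum(P(i, a) for i in range(k + 1))

a1, a2 = 1.7, 2.4
for k in range(12):
    lhs = sum(P(k - j, a1) * Q(j, a2) for j in range(k + 1))
    assert abs(lhs - Q(k, a1 + a2)) < 1e-12
```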
Exercise 23 Find the mean response time for an M/M/1/n/n queueing system by using the Laplace-transform.

Solution:
It is well-known that \bar T = -L_T'(0), that is why let us calculate L_T'(0). With z = \mu/\lambda,

L_T(s) = \left(\frac{\mu}{\mu+s}\right)^n e^{s/\lambda}\,\frac{Q\left(n-1, \frac{\mu+s}{\lambda}\right)}{Q\left(n-1, \frac{\mu}{\lambda}\right)},

so

L_T'(s) = L_T(s)\left(-\frac{n}{\mu+s} + \frac{1}{\lambda} - \frac{1}{\lambda}\cdot\frac{P\left(n-1, \frac{\mu+s}{\lambda}\right)}{Q\left(n-1, \frac{\mu+s}{\lambda}\right)}\right).

Thus

-L_T'(0) = \frac{n}{\mu} - \frac{1}{\lambda} + \frac{1}{\lambda}B\left(n-1, \frac{\mu}{\lambda}\right) = \frac{n}{\mu} - \frac{1}{\lambda}U_S(n-1),

where U_S(n-1) = 1 - B(n-1, \mu/\lambda) is the server utilization with n-1 sources, and hence

\bar T(n) = \frac{n}{\mu} - \frac{U_S(n-1)}{\lambda}.

Since

\frac{1}{\mu}\left(\bar N(n-1) + 1\right) = \frac{1}{\mu}\left(n - \frac{\mu}{\lambda}U_S(n-1)\right) = \frac{n}{\mu} - \frac{U_S(n-1)}{\lambda},

we have \bar T(n) = \frac{1}{\mu}\left(\bar N(n-1) + 1\right), which was obtained earlier. The higher moments of T(n) can be obtained in the same way, and hence further measures can be calculated. Similarly, the moments of W(n) can be derived.
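The equality \bar T(n) = n/\mu - U_S(n-1)/\lambda = (\bar N(n-1)+1)/\mu can be checked from the steady-state probabilities P_k \propto \frac{n!}{(n-k)!}\varrho^k of the machine interference model; the parameters below are illustrative:

```python
from math import factorial

def mm1nn_probs(lam, mu, n):
    """Steady-state distribution of the M/M/1/n/n (machine interference) model."""
    rho = lam / mu
    w = [factorial(n) // factorial(n - k) * rho**k for k in range(n + 1)]
    s = sum(w)
    return [x / s for x in w]

lam, mu, n = 0.5, 3.0, 6          # example rates and number of sources
P1 = mm1nn_probs(lam, mu, n - 1)  # system with n-1 sources
N1 = sum(k * p for k, p in enumerate(P1))   # mean N(n-1)
US1 = 1 - P1[0]                             # server utilization U_S(n-1)

assert abs((n / mu - US1 / lam) - (N1 + 1) / mu) < 1e-12
```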
Exercise 24 Find the mean response time \bar T for an M/M/1/n/n system by using the density function approach.

Solution:
With z = \mu/\lambda the density of T is f_T(x) = \mu P(n-1; z+\mu x)/Q(n-1, z), hence

\bar T = \frac{1}{Q(n-1,z)}\int_0^{\infty}x\mu\,\frac{(\mu x+z)^{n-1}}{(n-1)!}\,e^{-(\mu x+z)}\,dx
= \frac{e^{-z}}{Q(n-1,z)}\sum_{k=0}^{n-1}\frac{z^{\,n-1-k}}{(n-1-k)!}\int_0^{\infty}x\mu\,\frac{(\mu x)^k}{k!}\,e^{-\mu x}\,dx
= \frac{e^{-z}}{Q(n-1,z)}\sum_{k=0}^{n-1}\frac{z^{\,n-1-k}}{(n-1-k)!}\cdot\frac{k+1}{\mu}
= \sum_{k=0}^{n-1}\frac{k+1}{\mu}\,P_k(n-1) = \frac{1}{\mu}\left(\bar N(n-1) + 1\right).

The mean waiting time can be obtained similarly, starting the summation from 1 and taking an Erlang distribution with one phase less.
Exercise 25 Find the variance of the number of customers in the system for an M/M/1/n/n queueing system.

Solution:
Let us denote by F the number of customers in the source. Hence F + N = n, and thus Var(N) = Var(F). As we have proved, the distribution of F equals the distribution of the number of customers in an M/M/n/n system with traffic intensity 1/\varrho, hence

Var(N) = \frac{1}{\varrho}U_S - \frac{1}{\varrho}B\left(n, \frac{1}{\varrho}\right)\left(n - \frac{U_S}{\varrho}\right), \qquad U_S = 1 - B\left(n, \frac{1}{\varrho}\right).

This result helps us to determine Var(T(n)) and Var(W(n)). Since W(n) can be considered as a random sum, where the summands are the exponentially distributed service times with parameter \mu and the counting process is the number of customers in the system at the arrival instant of a customer, denoted by N_A^{(n)}, we can use the formula obtained for the variance of a random sum, namely

Var(W(n)) = \bar N_A^{(n)}\,\frac{1}{\mu^2} + \frac{1}{\mu^2}\,Var\left(N_A^{(n)}\right) = \bar N(n-1)\,\frac{1}{\mu^2} + \frac{1}{\mu^2}\,Var\left(N(n-1)\right) = \frac{1}{\mu^2}\left(\bar N(n-1) + Var(N(n-1))\right),

where

\bar N(n-1) = n-1 - \frac{U_s(n-1)}{\varrho}, \qquad Var(N(n-1)) = \frac{1}{\varrho}U_s(n-1) - \frac{1}{\varrho}B\left(n-1, \frac{1}{\varrho}\right)\left(n-1 - \frac{U_s(n-1)}{\varrho}\right).

Finally, since Var(S) = 1/\mu^2,

Var(T(n)) = Var(W(n)) + \frac{1}{\mu^2}.
Part III

Queueing Theory Formulas
Chapter 6
Relationships
6.1 Notations and Definitions
ϱ : Server utilization.
ϱ_i : Utilization of component i in a queueing network.
A(t) : Distribution function of the interarrival time, A(t) = P[τ ≤ t].
B : Random variable describing the busy period for a server.
B[c, ϱ] : Erlang's B formula. Probability all servers are busy in an M/M/c/c system. Also called Erlang's loss formula.
c : Number of servers in a service facility.
C[c, ϱ] : Erlang's C formula. Probability all servers are busy in an M/M/c system. Also called Erlang's delay formula.
C_X² : Squared coefficient of variation of a positive random variable, C_X² = Var[X]/E[X]².
D : Symbol for constant (deterministic) interarrival or service time.
E_k : Symbol for Erlang-k distribution of interarrival or service time.
E[Q|Q > 0] : Expected (mean or average) queue length of nonempty queues.
E[W|W > 0] : Expected queueing time of customers who must wait.
FCFS : First Come First Served queue discipline.
FIFO : First In First Out queue discipline. Identical with FCFS.
F_T(t) : The distribution function of T, F_T(t) = P[T < t].
F_W(t) : The distribution function of W, F_W(t) = P[W < t].
G : Symbol for general probability distribution of service time. Independence usually assumed.
GI : Symbol for general independent interarrival time distribution.
H2 : Symbol for two-stage hyperexponential distribution. Can be generalized to k stages.
K : Maximum number of customers allowed in the queueing system. Also size of the population in finite population models.
λ, λ_a, λ_b : Mean arrival rates.
μ, μ_a, μ_b : Mean service rates.
LCFS : Last Come First Served queue discipline.
LIFO : Last In First Out queue discipline. Identical with LCFS.
M : Symbol for exponential (Markov) interarrival or service time distribution.
N : Number of customers in the system; N[t] the number at time t; \bar N its mean.
N_s : Number of customers in service; N_s[t] at time t; \bar N_s its mean.
O : Random variable describing the operating time in finite-source (machine interference) models; E[O] its mean.
P_n : Steady-state probability of n customers in the system; P_n[t] the probability at time t.
PRI : Priority queue discipline.
PS : Processor Sharing queue discipline.
p_i : Branch probability of stage i in a hyperexponential distribution.
π_X[r] : The r-th percentile of the random variable X.
Q : Number of customers waiting in the queue; \bar Q its mean.
RSS : Random Selection for Service queue discipline.
S : Random variable describing the service time; \bar S = E[S].
SIRO : Service In Random Order queue discipline. Identical with RSS.
T : Random variable describing the time spent in the system (response time); \bar T = E[T].
W : Random variable describing the waiting (queueing) time; \bar W = E[W].
W_0 : Waiting time of customers who must wait, i.e. W conditioned on W > 0.
6.2 Relationships between random variables
Chapter 7
Basic Queueing Theory Formulas
7.1
M/M/1 Formulas
P_n = P[N = n] = (1-\varrho)\varrho^n, \qquad n = 0, 1, \dots.

P[N \ge n] = \varrho^n, \qquad n = 0, 1, \dots.

\bar N = E[N] = \lambda\bar T = \frac{\varrho}{1-\varrho}, \qquad Var(N) = \frac{\varrho}{(1-\varrho)^2}.

\bar Q = \lambda\bar W = \frac{\varrho^2}{1-\varrho}, \qquad Var(Q) = \frac{\varrho^2(1 + \varrho - \varrho^2)}{(1-\varrho)^2}.

E[Q|Q > 0] = \frac{1}{1-\varrho}, \qquad Var[Q|Q > 0] = \frac{\varrho}{(1-\varrho)^2}.

F_T(t) = P[T \le t] = 1 - \exp\left(-\frac{t}{\bar T}\right), \qquad P[T > t] = \exp\left(-\frac{t}{\bar T}\right).

\bar T = E[T] = \frac{\bar S}{1-\varrho} = \frac{1}{\mu(1-\varrho)}, \qquad Var(T) = \bar T^2.

\pi_T[r] = \bar T\ln\frac{100}{100-r}, \qquad \pi_T[90] = \bar T\ln10, \qquad \pi_T[95] = \bar T\ln20.

F_W(t) = P[W \le t] = 1 - \varrho\exp\left(-\frac{t}{\bar T}\right), \qquad P[W > t] = \varrho\exp\left(-\frac{t}{\bar T}\right).

\bar W = \frac{\varrho\bar S}{1-\varrho}, \qquad Var(W) = \frac{(2-\varrho)\varrho\bar S^2}{(1-\varrho)^2}.

\pi_W[r] = \max\left\{\bar T\ln\frac{100\varrho}{100-r},\; 0\right\}, \qquad \pi_W[90] = \max\{\bar T\ln(10\varrho),\; 0\}.
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MM1/MM1.html
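The section 7.1 formulas, including the percentile expressions, translate directly into code; the rates below are example values:

```python
from math import exp, log

def mm1_measures(lam, mu):
    """M/M/1 performance measures from section 7.1."""
    rho = lam / mu
    T = 1 / (mu * (1 - rho))
    return {"rho": rho, "N": rho / (1 - rho), "T": T,
            "W": rho * T,
            "piT90": T * log(10),
            "piW90": max(T * log(10 * rho), 0.0)}

m = mm1_measures(4.0, 5.0)
# the 90th percentile of T leaves exactly 10% probability in the tail
assert abs(exp(-m["piT90"] / m["T"]) - 0.10) < 1e-12
# P[W > t] = rho*exp(-t/T̄) equals 0.10 at t = piW90 (since rho > 0.1 here)
assert abs(m["rho"] * exp(-m["piW90"] / m["T"]) - 0.10) < 1e-12
# Little's law: N̄ = lam * T̄
assert abs(m["N"] - 4.0 * m["T"]) < 1e-12
```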
7.2
M/M/1/K Formulas
P_n = \begin{cases}\dfrac{(1-\varrho)\varrho^n}{1-\varrho^{K+1}} & \text{if } \varrho \ne 1,\\[1ex] \dfrac{1}{K+1} & \text{if } \varrho = 1,\end{cases} \qquad n = 0, 1, \dots, K, \quad \text{where } \varrho = \lambda\bar S.

\bar\lambda = \lambda(1 - P_K).

\bar N = \begin{cases}\dfrac{\varrho\left[1 - (K+1)\varrho^K + K\varrho^{K+1}\right]}{(1-\varrho)(1-\varrho^{K+1})} & \text{if } \varrho \ne 1,\\[1ex] \dfrac{K}{2} & \text{if } \varrho = 1.\end{cases}

\bar Q = \bar N - (1 - P_0), \qquad \Pi_n = \frac{P_n}{1-P_K}, \qquad n = 0, 1, \dots, K-1.

F_T(t) = 1 - \sum_{n=0}^{K-1}\Pi_n\,Q[n; \mu t], \qquad \text{where } Q[n; x] = e^{-x}\sum_{k=0}^{n}\frac{x^k}{k!}.

\bar T = \frac{\bar N}{\bar\lambda}, \qquad \bar W = \frac{\bar Q}{\bar\lambda}.

F_W(t) = 1 - \sum_{n=0}^{K-2}\Pi_{n+1}\,Q[n; \mu t].

E[W|W > 0] = \frac{\bar W}{1 - \Pi_0},

and the actual server utilization is 1 - P_0 = \varrho(1 - P_K).
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MM1K/MM1K.html
7.3
M/M/c Formulas
a = \frac{\varrho}{c} = \frac{\lambda\bar S}{c}.

P_0 = \left[\sum_{n=0}^{c-1}\frac{\varrho^n}{n!} + \frac{\varrho^c}{c!(1-a)}\right]^{-1}.

P_n = \begin{cases}\dfrac{\varrho^n}{n!}P_0 & \text{if } n \le c,\\[1ex] \dfrac{\varrho^n}{c!\,c^{n-c}}P_0 & \text{if } n \ge c.\end{cases}

P[N \ge n] = \begin{cases}\left[\displaystyle\sum_{k=n}^{c-1}\frac{\varrho^k}{k!} + \frac{\varrho^c}{c!(1-a)}\right]P_0 & \text{if } n < c,\\[1ex] \dfrac{\varrho^c a^{n-c}}{c!(1-a)}P_0 = P[N \ge c]\,a^{n-c} & \text{if } n \ge c.\end{cases}

P[N \ge c] = C[c,\varrho] = \frac{\dfrac{\varrho^c}{c!}}{(1-a)\displaystyle\sum_{n=0}^{c-1}\frac{\varrho^n}{n!} + \frac{\varrho^c}{c!}}.

\bar Q = \lambda\bar W = \frac{\varrho\,P[N \ge c]}{c(1-a)}, \qquad Var(Q) = \frac{a\,C[c,\varrho]\left(1 + a - a\,C[c,\varrho]\right)}{(1-a)^2}.

\bar N = \lambda\bar T = \bar Q + \varrho, \qquad Var(N) = Var(Q) + \varrho\left(1 + P[N \ge c]\right).

P[W = 0] = 1 - P[N \ge c], \qquad \bar W = \frac{P[N \ge c]\,\bar S}{c(1-a)}, \qquad Var(W) = \frac{\left(2 - C[c,\varrho]\right)C[c,\varrho]\,\bar S^2}{c^2(1-a)^2}.

\pi_W[r] = \max\left\{0,\; \frac{\bar S}{c(1-a)}\ln\frac{100\,C[c,\varrho]}{100-r}\right\},
\qquad \pi_W[90] = \max\left\{0,\; \frac{\bar S}{c(1-a)}\ln\left(10\,C[c,\varrho]\right)\right\},
\qquad \pi_W[95] = \max\left\{0,\; \frac{\bar S}{c(1-a)}\ln\left(20\,C[c,\varrho]\right)\right\}.

P[W \le t\,|\,W > 0] = 1 - \exp\left(-\frac{ct(1-a)}{\bar S}\right), \qquad t > 0,

E[W|W > 0] = \frac{\bar S}{c(1-a)}, \qquad Var[W|W > 0] = \left(\frac{\bar S}{c(1-a)}\right)^2.

F_T(t) = \begin{cases}1 + C_1e^{-\mu t} + C_2e^{-c\mu(1-a)t} & \text{if } \varrho \ne c-1,\\[1ex] 1 - \left(1 + C[c,\varrho]\,\mu t\right)e^{-\mu t} & \text{if } \varrho = c-1,\end{cases}

where

C_1 = \frac{P[N \ge c]}{1 - c(1-a)} - 1, \qquad C_2 = \frac{P[N \ge c]}{c(1-a) - 1}.

\bar T = \bar W + \bar S.

E[T^2] = \begin{cases}2\bar S^2 + \dfrac{2P[N \ge c]\left[1 - c^2(1-a)^2\right]\bar S^2}{(\varrho + 1 - c)\,c^2(1-a)^2} & \text{if } \varrho \ne c-1,\\[1ex] 2\left\{2P[N \ge c] + 1\right\}\bar S^2 & \text{if } \varrho = c-1.\end{cases}

Var(T) = E[T^2] - \bar T^2, \qquad \pi_T[90] \approx \bar T + 1.3D(T).
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MMc/MMc.html
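A direct implementation of the main section 7.3 formulas (with illustrative parameters), checked for internal consistency with Little's law:

```python
from math import factorial

def mmc_measures(lam, mu, c):
    """M/M/c measures: P0, Erlang C probability, mean wait, queue and number."""
    rho = lam / mu
    a = rho / c
    assert a < 1, "stability requires lam < c*mu"
    P0 = 1 / (sum(rho**n / factorial(n) for n in range(c))
              + rho**c / (factorial(c) * (1 - a)))
    C = rho**c / (factorial(c) * (1 - a)) * P0     # P[N >= c], Erlang's C
    W = C / (c * mu * (1 - a))                     # mean wait, with S̄ = 1/mu
    Q = lam * W
    N = Q + rho
    return P0, C, W, Q, N

P0, C, W, Q, N = mmc_measures(lam=8.0, mu=1.0, c=10)
assert 0 < C < 1
assert abs(Q - 8.0 * W) < 1e-12        # Q̄ = lam · W̄
assert abs(N - (Q + 8.0)) < 1e-12      # N̄ = Q̄ + rho
```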
7.4
M/M/2 Formulas
a = \frac{\varrho}{2} = \frac{\lambda\bar S}{2}, \qquad P_0 = \frac{1-a}{1+a}.

P_n = 2P_0a^n, \qquad n = 1, 2, 3, \dots, \qquad P[N \ge n] = \frac{2a^n}{1+a}, \qquad n = 1, 2, \dots.

\bar Q = \lambda\bar W = \frac{2a^3}{1-a^2}, \qquad Var(Q) = \frac{2a^3\left[(1+a)^2 - 2a^3\right]}{(1+a)^2(1-a)^2}.

\bar N = \lambda\bar T = \bar Q + \varrho = \frac{2a}{1-a^2}, \qquad Var(N) = Var(Q) + \frac{2a(1 + a + 2a^2)}{1+a}.

P[W = 0] = \frac{1 + a - 2a^2}{1+a}, \qquad F_W(t) = 1 - \frac{2a^2}{1+a}\exp\left[-2\mu t(1-a)\right].

\bar W = \frac{a^2\bar S}{1-a^2}, \qquad Var(W) = \frac{a^2(1 + a - a^2)\bar S^2}{(1-a^2)^2}.

\pi_W[r] = \max\left\{0,\; \frac{\bar S}{2(1-a)}\ln\frac{200a^2}{(100-r)(1+a)}\right\}.

P[W \le t\,|\,W > 0] = 1 - \exp\left(-\frac{2t(1-a)}{\bar S}\right), \qquad t > 0,

E[W|W > 0] = \frac{\bar S}{2(1-a)}, \qquad Var[W|W > 0] = \left(\frac{\bar S}{2(1-a)}\right)^2.

F_T(t) = \begin{cases}1 - \dfrac{1-a}{1 - a - 2a^2}\,e^{-\mu t} + \dfrac{2a^2}{1 - a - 2a^2}\,e^{-2\mu t(1-a)} & \text{if } \varrho \ne 1,\\[1ex] 1 - \left\{1 + \dfrac{\mu t}{3}\right\}e^{-\mu t} & \text{if } \varrho = 1.\end{cases}

\bar T = \bar W + \bar S = \frac{\bar S}{1-a^2}.

E[T^2] = \begin{cases}2\bar S^2 + \dfrac{a^2\left[1 - 4(1-a)^2\right]\bar S^2}{(1+a)(2a-1)(1-a)^2} & \text{if } \varrho \ne 1,\\[1ex] \dfrac{10}{3}\bar S^2 & \text{if } \varrho = 1.\end{cases}

Var(T) = E[T^2] - \bar T^2, \qquad \pi_T[90] \approx \bar T + 1.3D(T), \qquad \pi_T[95] \approx \bar T + 2D(T).
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MM2/MM2.html
7.5
M/M/c/c Formulas
P_n = \frac{\dfrac{\varrho^n}{n!}}{1 + \varrho + \dfrac{\varrho^2}{2!} + \dots + \dfrac{\varrho^c}{c!}}, \qquad n = 0, 1, \dots, c.

B[c,\varrho] = P_c = \frac{\dfrac{\varrho^c}{c!}}{1 + \varrho + \dfrac{\varrho^2}{2!} + \dots + \dfrac{\varrho^c}{c!}}.

\bar\lambda = \lambda\left(1 - B[c,\varrho]\right), \qquad a = \frac{\bar\lambda\bar S}{c}.

\bar N = \bar\lambda\bar S, \qquad \bar T = \frac{\bar N}{\bar\lambda} = \bar S.

F_T(t) = 1 - \exp\left(-\frac{t}{\bar S}\right).

All of the formulas except the last one are true for the M/G/c/c queueing system. For this system we have F_T(t) = F_S(t).
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MMcc/MMcc.html
7.6
M/M/c/K Formulas
#
Kc n 1
c X
+
.
n!
c! n=1 c
c
X
n
n=0
P0
if n = 1, 2, . . . , c,
n!n nc
Pn =
P0 if n = c + 1, . . . , K.
c! c
The average arrival rate of customers who actually
enter the system is = (1 PK ).
The actual mean server utilization, a, is given by:
S
.
c c
rP0
Q=
1 + (K c)rKc+1 (K c + 1)rKc ,
2
c!(1 r)
a=
where
r= .
c
N = Q + E[Ns ] = Q +
c1
X
nPn + c 1
n=0
c1
X
n=0
By Littles Law
W =
Q
,
n =
Pn
,
1 PK
T =
N
.
n = 0, 1, 2, . . . , K 1,
155
!
Pn .
W
c1
X
.
n
n=0
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MMcK/MMcK.html
156
7.7
M/M/ Formulas
n
e ,
n!
n = 0, 1, . . . .
N
= S.
157
7.8
M/M/1/K/K Formulas
E[O]
.
S
k!
a = 1 P0 .
=
a
.
S
K
E[O].
N = T.
T =
W = T S.
158
n = 0, 1, 2, . . . , K 1,
k!
Q(K 1; z + t)
,
Q(K 1; z)
t 0,
where
Q(n; x) = ex
n
X
xk
k=0
k!
FT (t) = P [W t] = 1
E[W |W > 0] =
Q(K 2; z + t)
,
Q(K 1; z)
t 0,
W
.
1 0
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MM1KK/MM1KK.html
159
7.9
M/G/1/K/K Formulas
where
n = 0,
1
n
Q
Bn =
1S [i]
n = 1, 2, . . . , K 1.
S [i]
i=1
a
.
S
T =
K
E[O].
N = T.
W = T S.
Q = W.
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MMHyper1KK/MHyper1KK.html
160
7.10
M/M/c/K/K Formulas
#1
k!
K k
z k +
z
,
kc
k
k
c!c
k=c+1
c
X
K
k=0
K
X
where
z=
E[O]
.
S
Pn =
K
X
Q=
K
z n P0
n
K
n!
z n P0
c!cnc n
(n c)Pn .
n=c+1
W =
=
T =
Q(E[O] + S)
.
K Q
K
.
E[O] + W + S
K
E[O].
N = T.
161
n = 0, 1, . . . , c,
n = c + 1, . . . , K.
n =
where n is the probability that a machine which breaks down finds n inoperable machines already in the repair facility. We denote n by n [K] to emphasize the fact that
there are K machines. It can be shown that
n [K] = Pn [K 1],
Pn [K 1] =
n = 0, 1, . . . , K 1.
cc p(K n 1; cz)
P0 [K 1],
c! p(K 1; cz)
where, of course,
p(k; ) =
k
e .
k!
FT (t) = P [W t] = 1
cc Q(K c 1; cz)P0 [K 1]
, t 0,
c!p(K 1; cz)
where
Q(k; ) = e
k
X
n
n=0
n!
FT (t) = P [T t] = 1 C1 exp(t/S) + C2
t 0,
where C1 = 1 + C2 s
C2 =
cc Q(K c 1; cz)
P0 [K 1].
c!(c 1)(K c 1)!p(K 1; cz)
The probability that a machine that breaks down must wait for repair is given by
D=
K1
X
n = 1
n=c
E[W |W > 0] =
c1
X
n .
n=0
W
.
D
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MMcKK/MMcKK.html
162
7.11
D/D/c/K/K Formulas
1
.
1
.
a = min{1,
K
},
c(1 + z)
where
z=
E[O]
.
S
= ca =
T =
ca
.
S
K
E[O].
N = T.
W = T S.
Q = W.
The equations for this model are derived in "A straightforward model of computer
performance prediction" by John W. Boyse es David R. Warn in ACM Comput. Surveys,
7(2), (June 1972).
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/DDcKK/DDcKK.html
163
7.12
M/G/1 Formulas
GN (z) =
Pn z n =
n=0
S [(1 z)] z
(1 )S []
,
W [] =
+ S []
W [] =
(1 )
.
+ S []
Each of the three transforms above is called the Pollaczek-Khintchine transform equation by various authors. The probability, P0 , of no customers in the system has the simple
and intuitive equation P0 = 1 , where the server utilization = S. The probability
that the server is busy is P [N 1] = .
S
E[S 2 ]
=
W =
2(1 )
1
1 + CS2
2
(Pollaczek formula).
Q = W .
2
2 E[S 2 ]
2 E[S 2 ]
+
.
2(1 )
2(1 )
S
1 + CS2
.
E[W |W > 0] =
1
2
E[S 3 ]
2
E[W 2 ] = 2W +
.
3(1 )
3 E[S 3 ]
V ar(Q) =
+
3(1 )
V ar(W ) = E[W 2 ] W .
T = W + S.
N = T = Q + .
2
2
2 (3 2)E[S 2 ]
3 E[S 3 ]
E[S 2 ]
V ar(N ) =
+
+
+ (1 ).
3(1 )
2(1 )
2(1 )
E[S 2 ]
E[T 2 ] = E[W 2 ] +
.
1
164
V ar(T ) = E[T 2 ] T .
T [90] T + 1.3D(T ),
T [95] T + 2D(T ).
165
X
z2
z1
+ C2
,
GN (z) =
Pn z n = C 1
z1 z
z2 z
n=0
where z1 and z2 are the roots of the next equation
a1 a2 z 2 (a1 + a2 + a1 a2 )z + 1 + a1 + a2 a = 0,
where
a = S,
ai =
,
i
C1 =
i = 1, 2,
and
C2 =
n = 0, 1, . . ..
Specifically, P0 = 1 a.
z2n+1
z1n+1
C2
.
P [N n] = C1
z1 1
z2 1
Additionally,
P [N 1] = a.
FT (t) = P [W t] = 1 C5 et C6 ebt , t 0,
where = 1 , b = 2 , 1 , 2 are the solutions of the
2 + (1 + 2 ) + 1 2 (1 a) = 0,
equation,
166
C5 =
and
C6 =
E[S 2 ]
aS
W =
=
2(1 a)
1a
S
E[W |W > 0] =
1a
2
E[W 2 ] = 2W +
1 + CS2
. (Pollaczek-formula)
2
1 + CS2
.
2
E[S 3 ]
.
3(1 a)
E[S 3 ] =
6p1 6p2
+ 3,
31
2
then
2
V ar(W ) = E[W 2 ] W .
FT (t) = P [T t] = 1 a ea t b eb t ,
where
a = C1
z1
,
z1 1
b = C2
z2
,
z2 1
167
t 0,
a = (z1 1),
and
b = (z2 1).
T = W + S.
E[T 2 ] = E[W 2 ] +
E[S 2 ]
,
1a
where of course
E[S 2 ] =
2p1 2p2
+ 2.
21
2
2
V ar(T ) = E[T 2 ] T .
CT2 =
E[T 2 ]
T
1.
a2
Q=W =
1a
1 + CS2
.
2
3 E[S 3 ]
+
V ar(Q) =
3(1 a)
2 E[S 2 ]
2(1 a)
2
+
2 E[S 2 ]
.
2(1 a)
2 (3 2a)E[S 2 ]
+ a(1 a).
2(1 a)
N = T = Q + a.
3 E[S 3 ]
V ar(N ) =
+
3(1 a)
2 E[S 2 ]
2(1 a)
2
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MH21/MH21.html
168
( + 1) . . . ( + n 1)
,
n
n = 1, 2, . . ..
1
CS2 = ,
so
2
E[S 2 ] = S (1 + CS2 ),
3
E[s ] = S
n1
Y
(1 + kCS2 ),
n = 1, 2, . . ..
k=1
This time
aS
E[S 2 ]
=
W =
2(1 a)
1a
1 + CS2
,
2
Q = W,
a2 (1 + CS2 )
a2 (1 + CS2 ) 2a(1 + 2CS2 )
V ar(Q) =
+
1+
,
2(1 a)
2(1 a)
3
S
1 + CS2
E[W |W > 0] =
,
1a
2
2
V ar(W ) = E[W 2 ] W ,
N = T = Q + a,
T = W + S,
a2 (1 + CS2 )
2(1 a)
2
+
a2 (3 2a)(1 + CS2 )
+ a(1 a).
2(1 a)
S (1 + CS2 )
E[T ] = E[W ] +
.
1a
2
V ar(T ) = E[T 2 ] T .
T [90] T + 1.3D(T ),
T [95] T + 2D(T ).
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MGamma1/MGamma1.html
169
n = 1, 2, . . ..
so
1
,
E[S ] = S 1 +
k
and
2
1
3
3
E[s ] = S 1 +
1+
.
k
k
2
This time
E[s2 ]
aS
W =
=
2(1 a)
1a
1+
2
1
k
. (Pollaczeks formula)
Q = W.
a2 (1 + k)
a2 (1 + k) 2a(k + 2)
V ar(Q) =
1+
+
.
2k(1 a)
2k(1 a)
3k
1 + k1
S
E[W |W > 0] =
.
1a
2
2
aS (k + 1)(k + 2)
E[W ] = 2W +
.
3k 2 (1 a)
2
V ar(W ) = E[W 2 ] W .
N = T = Q + a.
T = W + S,
a3 (k + 1)(k + 2)
V ar(N ) =
+
3k 2 (1 a)
a2 (1 + k1 )
2(1 a)
2
a2 (3 2a)(1 + k1 )
+
+ a(1 a).
2(1 a)
S (1 + k1 )
E[T ] = E[W ] +
.
1a
2
V ar(T ) = E[T 2 ] T .
T [90] T + 1.3D(T ),
T [95] T + 2D(T ).
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MEk1/MEk1.html
170
E[S n ] = S ,
n = 1, 2, . . .,
so
GN (z) =
(1 a)(1 z)
.
1 zea(1z)
We suppose that
|zea(1z) | < 1,
X
a(1z) j
gN (z) = (1 a)(1 z)
ze
.
j=0
Pn = (1 a)
n
X
(1)nj (ja)nj1 (ja + n j)eja
(n j)!
j=1
Additionally
FT (t) =
k1
X
n=0
Pn + P k
t (k 1)S
,
S
k = 1, 2, . . ..
171
n = 2, 3, . . ..
aS
.
2(1 a)
FW [W |W > 0] =
S
.
2(1 a)
2
aS
E[W ] = 2W +
.
3(1 a)
2
V ar(W ) = E[W 2 ] W .
Q = W =
a2
.
2(1 a)
2
a3
a2
a2
V ar(Q) =
+
.
+
3(1 a)
2(1 a)
2(1 a)
FT (t) =
if
t < S,
k1
P
Pn + Pk tkS
if
S
t S.
n=0
where
kS t < (k + 1)S,
k = 1, 2, . . ..
T = W + S.
2
S
E[T ] = E[W ] +
.
1a
2
V ar(T ) = E[T 2 ] T .
N = T = Q + a.
a3
V ar(N ) =
+
3(1 a)
a2
2(1 a)
2
+
a2 (3 2a)
+ a(1 a).
2(1 a)
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MD1/MD1.html
172
7.13
GI/M/1 Formulas
0
0
(1 0 )a
Q=
.
0
a(1 0 )(2 0 a(1 0 ))
V ar(Q) =
.
20
1
.
E[Q|Q > 0] =
0
S
T =
.
0
N=
FT (t) = P [T t] = 1 exp(t/T ).
100
T [r] = T ln
.
100 r
T [90] = T ln 10,
W = (1 0 )
T [95] = T ln 20.
S
.
0
V ar(W ) = (1
20 )
S
0
2
.
FT (t) = P [W t] = 1 (1 0 ) exp(t/T ).
100(1 0 )
W [r] = max 0, T ln
.
100 r
W 0 , the queueing time for those who must, has the same distribution as T .
173
E2
0.970820
0.906226
0.821954
0.724695
0.618034
0.504159
0.384523
0.260147
0.131782
0.066288
0.026607
0.001333
E3
0.987344
0.940970
0.868115
0.776051
0.669467
0.551451
0.626137
0.289066
0.147390
0.074362
0.029899
0.001500
U
0.947214
0.887316
0.817247
0.734687
0.639232
0.531597
0.412839
0.284028
0.146133
0.074048
0.029849
0.001500
D
0.999955
0.993023
0.959118
0.892645
0.796812
0.675757
0.533004
0.371370
0.193100
0.098305
0.039732
0.001999
H2
0.815535
0.662348
0.536805
0.432456
0.343070
0.263941
0.191856
0.124695
0.061057
0.030252
0.012039
0.000600
H2
0.810575
0.624404
0.444949
0.281265
0.154303
0.081265
0.044949
0.024404
0.010495
0.004999
0.001941
0.000095
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/GIM1/GIM1.html
174
7.14
GI/M/c Formulas
n =
c1
P
(1)in ni Ui , n = 0, 1, . . . , c 2,
i=n
D nc ,
n = c 1, c, . . . ,
where is the unique solution of the equation = A [c(1 )] such that 0 < < 1,
where A [] is the Laplas-Stieltjes transform of r,
gj = A [j],
j = 1, 2, . . . , c,
j = 0,
1,
j
Q gi
Cj =
, j = 1, 2, . . . , c,
1gi
i=1
#1
c
c
X
1
c(1 gj ) j)
j
D=
+
1 j=1 Cj (1 gj ) c(1 ) j
"
and
Un = DCn
c
X
c
j
j=n+1
Cj (1 gj )
c(1 gj ) j)
c(1 ) j
175
n = 0, 1, . . . , c 1.
P [W > 0] =
DS
D
S
. W =
.
. E[W |W > 0] =
2
1
c(1 )
c(1 )
If c(1 ) 6= 1, then
FT (t) = P [ t] = 1 + (G 1)et Gec(1)t ,
t 0,
where
G=
D
.
(1 )[1 c(1 )]
Dt
FT (t) = P [ t] = 1 1 +
et ,
1
t 0.
We also have
T = W + S.
c1
X
S
P0 = 1
S
j1
c
j=1
1 1
.
j
c
(
Pn =
Sn1
n,
Sn1
c,
n = 1, 2, . . . , c 1,
n = c, c + 1, . . . .
176
7.15
1
2
n
E[S 2 ] = E[S12 ] + E[S22 ] + . . . + E[Sn2 ],
S=
and
E[S 3 ] =
1
2
n
E[S13 ] + E[S23 ] + . . . + E[Sn3 ],
By Pollaczeks formula,
W =
E[S 2 ]
.
2(1 a)
i = 1, 2, . . . , n.
1
2
n
T 1 + T 2 + . . . + T n.
E[S 3 ]
2 (E[S 2 ])2
+
.
3(1 a)
4(1 a)2
i = 1, 2, . . . , n.
E[Ti2 ] = V ar(Ti ) + T i ,
i = 1, 2, . . . , n.
177
1
2
n
E[T12 ] + E[T22 ] + . . . + E[Tn2 ],
and
2
V ar(T ) = E[T 2 ] T .
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MG1NoPrio/MG1NoPrio.html
178
= 1 + 2 + . . . + n .
S=
1
2
n
E[S1 ] + E[S2 ] + . . . + E[Sn ],
E[S 2 ] =
1
2
n
E[S12 ] + E[S22 ] + . . . + E[Sn2 ],
and
E[S 3 ] =
1
2
n
E[S13 ] + E[S23 ] + . . . + E[Sn3 ],
Let
j = 1, 2, . . . , n,
n = = S.
W j = E[Wj ] =
j = 1, 2, . . . , n,
E[S 2 ]
,
2(1 j1 )(1 j )
0 = 0.
179
j = 1, 2, . . . , n.
2
n
1
E[W1 ] + E[W2 ] + . . . + E[Wn ].
j = 1, 2, . . . , n,
j = 1, 2, . . . , n.
V ar(Tj ) = V ar(Sj ) +
i=1
4(1 j1 )2 (1 j )2
E[S 2 ]
j1
P
i E[Si2 ]
i=1
2(1 j1 )3 (1 j )
j = 1, 2, . . . , n.
180
2
1
2
2
[V ar(T1 ) + T 1 ] + [V ar(T2 ) + T 2 ]
n
2
2
[V ar(Tn ) + T n ] T .
j = 1, 2, . . . , n.
2
j = 1, 2, . . . , n,
so
E[W 2 ] =
1
2
n
E[W12 ] + E[W22 ] + . . . + E[Wn2 ].
Finally
2
V ar(W ) = E[W 2 ] W .
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MG1Relativ/MG1Relativ.html
181
1
2
n
E[S1 ] + E[S2 ] + . . . + E[Sn ],
E[S 2 ] =
1
2
n
E[S12 ] + E[S22 ] + . . . + E[Sn2 ],
E[S 3 ] =
1
2
n
E[S13 ] + E[S23 ] + . . . + E[Sn3 ].
Let
j = 1 E[S1 ] + 2 E[S2 ] + . . . + j E[Sj ],
j = 1, 2, . . . , n,
1
T j = E[Tj ] =
1 j1
0 = 0,
j
P
i E[Si2 ]
i=1
E[Sj ] +
,
2(1 j )
j = 1, 2, . . . , n.
182
j = 1, 2, . . . , n.
Q j = j W j ,
j = 1, 2, . . . , n.
W =
2
n
1
E[W1 ] + E[W2 ] + . . . + E[Wn ].
The mean number of customers staying in the system for each class is
N j = j W j ,
j = 1, 2, . . . , n.
T =
1
2
n
T 1 + T 2 + . . . + T n = W + S.
Q = W,
N = T.
183
V ar(Sj )
V ar(Tj ) =
+
(1 j1 )2
j
P
i E[Si3 ]
i=1
)2 (1
E[Sj ]
j1
P
i E[Si2 ]
i=1
(1 j1 )3
2
j
P
i E[Si2 ]
i=1
3(1 j1
j ) 4(1 j1 )2 (1 j )2
j
j1
P
P
2
2
i E[Si ]
i E[Si ]
i=1
i=1
, 0 = 0, j = 1, 2, . . . , n.
+
2(1 j1 )3 (1 j )
The overall variance
V ar(T ) =
+... +
1
2
2
2
[V ar(T1 ) + T 1 ] + [V ar(T2 ) + T 2 ]
n
2
2
[V ar(Tn ) + T n ] T .
j = 1, 2, . . . , n.
Because,
2
E[Wj2 ] = V ar(Wj ) + W j ,
j = 1, 2, . . . , n,
so
E[W 2 ] =
1
2
n
E[W12 ] + E[W22 ] + . . . + E[Wn2 ].
Finally
2
V ar(W ) = E[W 2 ] W .
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MG1Absolute/MG1Absolute.html
184
7.16
,
1
E[T |S = t] =
S
t
, and T =
.
1
1
Finally
E[W |S = t] =
t
S
, and W =
.
1
1
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MG1Process/MG1Process.html
185
7.17
S
= ,
c
c
C[c, ]S
W W1 =
,
c(1 1 S/c)
a=
W j = W j + S,
i=1
j = 1, 2, . . . , n.
Q j = j W j ,
j = 1, 2, . . . , n.
N j = j T j ,
j = 1, 2, . . . , n.
W =
j = 2, . . . , n.
n
1 2
+
+ ... + .
Q = W.
T = W + S.
N = T.
http://irh.inf.unideb.hu/user/jsztrik/education/03/EN/MMcPrio/MMcPrio.html
186
Bibliography
[1] Adan, I., and Resing, J. Queueing Theory.
http://web2.uwindsor.ca/math/hlynka/qonline.html.
[2] Allen, A. O. Probability, statistics, and queueing theory with computer science
applications, 2nd ed. Academic Press, Inc., Boston, MA, 1990.
[3] Anisimov, V., Zakusilo, O., and Donchenko, V. Elements of queueing theory
and asymptotic analysis of systems. Visha Skola, Kiev, 1987.
[4] Artalejo, J., and Gmez-Corral, A. Retrial queueing systems. Springer,
Berlin, 2008.
[5] Asztalos, D. Finite source queueing systems and their applications to computer
systems ( in Hungarian ). Alkalmazott Matemaika Lapok (1979), 89101.
[6] Asztalos, D. Optimal control of finite source priority queues with computer
system applications. Computers & Mathematics with Applications 6 (1980), 425
431.
[7] Begain, K., Bolch, G., and Herold, H. Practical perfromance modeling, Application of the MOSEL language. Wiley & Sons, New York, 2001.
[8] Bolch, G., Greiner, S., de Meer, H., and Trivedi, K. Queueing networks
and Markov chains, 2nd ed. Wiley & Sons, New York, 2006.
[9] Bose, S. An introduction to queueing systems. Kluwer Academic/Plenum Publishers, New York, 2002.
[10] Breuer, L., and Baum, D. An introduction to queueing theory and matrixanalytic methods. Springer, 2005.
[11] Brockmeyer, E., Halstrom, H., and Jensen, A. The life and works of a.k.
erlang. Academy of Technical Sciences, Copenhagen (1948).
[12] Bunday, B., and Scraton, R. The G/M/r machine interference model. European
Journal of Operational Research 4 (1980), 399402.
[13] Chee-Hock, N., and Boon-He, S. Queueing modelling fundamentals, 2nd ed.
Wiley & Son, Chichester, 2002.
187
[14] Cohen, J. The multiple phase service network with generalized processor sharing.
Acta Informatica 12 (1979), 245284.
[15] Cooper, R. Introduction to Queueing Theory, 3-rd Edition. CEE Press, Washington, 1990.
http://web2.uwindsor.ca/math/hlynka/qonline.html.
[16] Csige, L., and Tomk, J. Machine interference problem with exponential distributions ( in Hungarian ). Alkalmazott Matematikai Lapok (1982), 107124.
[17] Daigle, J. Queueing theory with applications to packet telecommunication.
Springer, New York, 2005.
[18] Daigle, J. N. Queueing theory for telecommunications. Addison-Wesley, Reading,
MA, 1992.
[19] Dattatreya, G. Performance analysis of queuing and computer networks. CRC
Press, Boca Raton, 2008.
[20] Dshalalow, J. H. Frontiers in queueing : Models and applications in science and
engineering. CRC Press., Boca Raton, 1997.
[21] Erlang, A. The theory of probabilities and telephone conversations. Nyt Tidsskrift
for Matematik B 20 (1909), 3339.
[22] Erlang, A. Solution of some problems in the theory of probabilities of significance
in automatic telephone exchanges. The Post Office Electrical Engineers Journal
10 (1918), 189197.
[23] Falin, G., and Templeton, J. Retrial queues. Chapman and Hall, London,
1997.
[24] Fazekas, I. Theory of probability ( in Hungarian ). Kossuth Egyetemi Kiad,
Debrecen, 2000.
[25] Franken, P., König, D., Arndt, U., and Schmidt, V. Queues and point
processes. Akademie Verlag, Berlin, 1981.
[26] Gebali, F. Analysis of computer and communication networks. Springer, New
York, 2008.
[27] Gelenbe, E., and Mitrani, I. Analysis and synthesis of computer systems.
Academic Press, London, 1980.
[28] Gelenbe, E., and Pujolle, G. Introduction to queueing networks. Wiley &
Sons, Chichester, 1987.
[29] Gnedenko, B., Belyayev, J., and Solovyev, A. Mathematical methods of
reliability theory (in Hungarian). Műszaki Könyvkiadó, Budapest, 1970.
[30] Gnedenko, B., Belyayev, Y., and Solovyev, A. Mathematical methods of
reliability theory. Academic Press, New York, London, 1969.
[32] Gross, D., Shortle, J., Thompson, J., and Harris, C. Fundamentals of
queueing theory, 4th ed. John Wiley & Sons, New York, 2008.
[33] Győrfi, L., and Páli, I. Queueing theory in informatics systems (in Hungarian).
Műegyetemi Kiadó, Budapest, 1996.
[34] Haghighi, A., and Mishev, D. Queueing models in industry and business. Nova
Science Publishers, Inc., New York, 2008.
[35] Hall, R. W. Queueing methods for services and manufacturing. Prentice Hall,
Englewood Cliffs, NJ, 1991.
[36] Haribaskaran, G. Probability, queueing theory and reliability engineering. Laxmi
Publications, Bangalore, 2006.
[37] Haverkort, B. Performance of computer communication systems: A model-based
approach. Wiley & Sons, New York, 1998.
[38] Hlynka, M. Queueing Theory Page.
http://web2.uwindsor.ca/math/hlynka/queue.html.
[39] Ivcsenko, G., Kastanov, V., and Kovalenko, I. Theory of queueing systems.
Nauka, Moscow, 1982.
[40] Iversen, V. Teletraffic Engineering Handbook. ITC in Cooperation with ITU-D
SG2, 2005.
http://web2.uwindsor.ca/math/hlynka/qonline.html.
[41] Jain, R. The art of computer systems performance analysis. Wiley & Sons, New
York, 1991.
[42] Jaiswal, N. Priority queues. Academic Press, New York, 1969.
[43] Jereb, L., and Telek, M. Queueing systems (in Hungarian). Teaching material,
BME Department of Telecommunications.
http://webspn.hit.bme.hu/telek/notes/sokfelh.pdf.
[44] Karlin, S., and Taylor, H. Stochastic processes (in Hungarian). Gondolat
Kiadó, Budapest, 1985.
[45] Karlin, S., and Taylor, H. An introduction to stochastic modeling. Harcourt,
New York, 1998.
[46] Khintchine, A. Mathematical methods in the theory of queueing. Hafner, New
York, 1969.
[47] Kleinrock, L. Queueing systems. Vol. I. Theory. John Wiley & Sons, New York,
1975.
[81] Sztrik, J. An introduction to queueing theory and its applications (in Hungarian).
Kossuth Egyetemi Kiadó, Debrecen, 2000.
http://irh.inf.unideb.hu/user/jsztrik/education/eNotes.htm.
[82] Sztrik, J. A key to queueing theory with applications (in Hungarian). Kossuth
Egyetemi Kiadó, Debrecen, 2004.
http://irh.inf.unideb.hu/user/jsztrik/education/eNotes.htm.
[83] Sztrik, J. Practical queueing theory. Teaching material, University of Debrecen,
Faculty of Informatics, 2005.
http://irh.inf.unideb.hu/user/jsztrik/education/09/index.html.
[84] Sztrik, J. Performance modeling of informatics systems (in Hungarian). EKF
Líceum Kiadó, Eger, 2007.
[85] Takagi, H. Queueing analysis. A foundation of performance evaluation. Volume
1: Vacation and priority systems, part 1. North-Holland, Amsterdam, 1991.
[86] Takagi, H. Queueing analysis. A foundation of performance evaluation. Volume
2: Finite Systems. North-Holland, Amsterdam, 1993.
[87] Takagi, H. Queueing analysis. A foundation of performance evaluation. Volume
3: Discrete-Time Systems. North-Holland, Amsterdam, 1993.
[88] Takács, L. Introduction to the theory of queues. Oxford University Press, New
York, 1962.
[89] Takács, L. Combinatorial methods in the theory of stochastic processes. John
Wiley & Sons, 1977.
[90] Tijms, H. Stochastic Modelling and Analysis: A Computational Approach. Wiley
& Sons, New York, 1986.
[91] Tijms, H. A first course in stochastic models. Wiley & Son, Chichester, 2003.
[92] Tomkó, J. On sojourn times for semi-Markov processes. Proceedings of the 14th
European Meeting of Statisticians, Wroclaw (1981).
[93] Tomkó, J. Sojourn time problems for Markov chains (in Hungarian). Alkalmazott
Matematikai Lapok (1982), 91–106.
[94] Trivedi, K. Probability and statistics with reliability, queuing, and computer
science applications, 2nd ed. Wiley & Sons, New York, 2002.
[95] Ushakov, I. A., and Harrison, R. A. Handbook of reliability engineering.
Transl. from the Russian. Updated ed. John Wiley & Sons, New York, NY, 1994.
[96] van Hoorn, M. Algorithms and approximations for queueing systems. Centrum
voor Wiskunde en Informatica, Amsterdam, 1984.
[97] Virtamo, J. Queueing Theory.
http://www.netlab.tkk.fi/opetus/s383143/kalvot/english.shtml.
[98] Wentzel, E., and Ovcharov, L. Applied problems in probability theory. Mir
Publishers, Moscow, 1986.
[99] White, J. Analysis of queueing systems. Academic Press, New York, 1975.
[100] Wolff, R. Stochastic modeling and the theory of queues. Prentice-Hall, 1989.
[101] Yashkov, S. Processor-sharing queues: some progress in analysis. Queueing Systems: Theory and Applications 2 (1987), 1–17.