
1. PERFORMANCE MODELING by OYEWOLE, Mopelola O. Mat No: 090805054
2. MODELLING OF COMPUTER SYSTEMS NETWORKS by Kesa Oluwafunmilola Elizabeth, Mat No: 090805026
3. Modelling Computer Systems Networks by Okoro Ugochukwu Christian, Mat No: 090805043
4. DISCRETE EVENT SIMULATION by OSINUGA OLUWASEUN, Mat No: 090805049
5. ANALYSIS OF SIMULATION OUTPUT by MATTHEW OMOLABAKE O., Mat No: 090805028
6. VERIFICATION AND VALIDATION OF SIMULATION MODELS by AMUDA, Tosin Joseph, Mat No: 090805009
7. VERIFICATION AND VALIDATION OF SIMULATION MODELS by SUN FAPOHUNDA, Mat No: 100805032
8. ANALYSIS OF SINGLE SERVER QUEUE AND QUEUING NETWORK by ADIBIOLOGUN FUNKE OLUWASEUN, Matric No: 090805005
9. ANALYSIS OF A SINGLE SERVER QUEUE AND QUEUE NETWORKS by ADEYEMI MONSURAT ADEOLA, Matric No: 100805008









CSC 524:
DISCRETE EVENT SIMULATION
OSINUGA OLUWASEUN
090805049
LECTURER: Dr. Adewole

Discrete Event Simulation
First, let us define the following terms before delving into what discrete event simulation is all about.

Discrete System:
A discrete system is a system with a countable number of states which change instantaneously at separated points in time. For example, the customer service system in a bank is discrete, since its state variable (the number of customers) changes only when a customer arrives or when a customer finishes being served and departs.

Event: An event is an occurrence which is instantaneous and may change the state of the
system.

Simulation:
Computer simulation is the discipline of designing a model of an actual or theoretical physical
system, conducting experiments with the model, executing the model on a computer, and
analyzing the execution output for the purpose either of understanding the behaviour of the
system or of evaluating various strategies (within the limits imposed by a criterion or set of
criteria) for the operation of a system.

What then is Discrete Event Simulation (DES)?

A discrete-event simulation (DES) models the operation of a system as a discrete sequence of events in time. Each event occurs at a particular instant in time and marks a change of state in the system. Between consecutive events, no change in the system is assumed to occur; thus the simulation can jump directly in time from one event to the next.

Discrete event simulation uses a mathematical/logical model of a physical system that portrays state changes at precise points in simulated time. A DES models a system whose state may change only at discrete points in time.


Advantages of Simulation
1. Simulation helps us to study new designs without interrupting real system.
2. Simulation helps us to study new designs without needing extra resources
3. Simulation is less dangerous / expensive / intrusive.
4. Simulation helps us to improve the understanding of the system.
5. Simulation helps us to manipulate time.

Simulation is performed using Models.



MODEL
A model in science is anything used as a representation of an object, law, theory or event, serving as a tool for understanding the scientific world.
The representations can either be
a. Algorithmic (sequence of steps)
b. Mathematical (equations) or
c. Logical (conditions)

There are two main techniques for building models:
a. Abstraction: leaving out unnecessary details, e.g. representing only selected attributes of a customer.

b. Idealisation: replacing real things by concepts, e.g. replacing a set of measurements by a function.

We model because models are often cheaper, easier, faster, and/or safer to build and experiment on than the actual system.




MODEL TAXONOMY

MODELLING A SYSTEM

*VALIDATION: This is the process of ensuring that the right model is built, i.e. that the model is an accurate representation of the real system.

*VERIFICATION: This is the act of writing a correct program, i.e. building the model right.

Components of a Discrete Event Simulation Model

1. System state: This is the collection of state variables necessary to describe the system at a
particular time.

2. Simulation clock: This clock is a variable that shows the current value of simulated time.

3. Event list: This list contains the next time at which each type of event will occur. It contains all scheduled events, arranged in chronological order.

4. Statistical counters: These are variables used for storing statistical information about system performance.

5. Initialization routine: This routine is used to initialize the simulation model at time 0.

6. Timing routine: This routine determines the next event from the event list and then advances the simulation clock to the time when that event is to occur.

7. Report generator: This is a subprogram that computes estimates (from the statistical
counters) of the desired measures of performance and produces a report when the simulation
ends.

8. Event routine: This is a subprogram that updates the system state when a particular type of event occurs. There is one event routine for each event type.

9. Library routines: A set of subprograms used to generate random observations from the probability distributions that were determined as part of the simulation model.

10. Main program: This is a subprogram that invokes the timing routine to determine the next event, transfers control to the corresponding event routine to update the system state appropriately, and finally invokes the report generator when the simulation is over.
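To make these components concrete, here is a minimal sketch in Python of how they fit together for a single-server system. It is an illustration rather than part of the original write-up: the arrival and service rates, the schedule helper and the 8-hour run length are all assumed values.

```python
import heapq
import random

# Minimal discrete event simulation skeleton (illustrative sketch).
clock = 0.0                 # simulation clock
event_list = []             # future event list (FEL), kept as a heap
queue_length = 0            # system state: number of customers present
served = 0                  # statistical counter

def schedule(time, kind):
    """Insert an event notice (time, kind) into the FEL."""
    heapq.heappush(event_list, (time, kind))

def arrival():
    """Event routine: a customer arrives; schedule the next arrival."""
    global queue_length
    queue_length += 1
    schedule(clock + random.expovariate(1 / 4.0), "arrival")       # assumed mean inter-arrival: 4 min
    if queue_length == 1:                                          # server was idle: start service
        schedule(clock + random.expovariate(1 / 3.0), "departure") # assumed mean service: 3 min

def departure():
    """Event routine: a customer finishes service and departs."""
    global queue_length, served
    queue_length -= 1
    served += 1
    if queue_length > 0:                                           # start serving the next waiting customer
        schedule(clock + random.expovariate(1 / 3.0), "departure")

# Initialization routine: schedule the first arrival at time 0.
schedule(0.0, "arrival")

# Main program / timing routine: advance the clock to the next event.
while event_list and clock < 480:            # simulate one 8-hour day (minutes)
    clock, kind = heapq.heappop(event_list)
    arrival() if kind == "arrival" else departure()

print("Customers served:", served)           # report generator (trivial)
```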









FLOWCHART FOR AN EVENT SIMULATION MODEL



A SIMULATION CLASSIC

The Single Server Queue



a. Problem formulation:
Customers wait too long in my bank

b. Objectives:
Determine the effect of an additional cashier on the mean queue length

c. Data needed:
I. Inter-arrival times of customers
ii. Service times

d. Entities: customers; server

e. Attributes of a customer: service required

f. Attributes of the server: the server's skill (its service rate)

g. Events: arrival of a customer; departure of a customer

h. Activities: serving a customer, waiting for a new customer


The Event List

The (future) event list (FEL) controls the simulation.

The FEL contains all future events that are scheduled.

The FEL is ordered by increasing time of event notice.

Example FEL (at some simulation time t1):



Conditional and Primary Events

A primary event is an event whose occurrence is scheduled at a certain time
E.g. The arrivals of customers.

A conditional event is an event which is triggered by a certain condition becoming true
E.g. A customer moving from queue to service.


The Event List

The event list consists of the pending event set. It contains all scheduled events, arranged in chronological order. In the simulator, the event list is a data structure, e.g. a list or a tree.

Example: simulation of a Mensa (university cafeteria).
Some state variables:
# People in line 1
# People at meal line 1 & 2
# People at cashier 1 & 2
# People eating at tables

Operations that can be performed on an event list:
a. Insert an event into FEL (at appropriate position!)
b. Remove first event from FEL for processing
c. Delete an event from the FEL

The event list is usually stored as a linked list so that we can traverse the list forwards and backwards.
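As a sketch of these three operations, assuming Python's heapq module is used instead of a hand-coded linked list, insertion and remove-first are cheap, and arbitrary deletion is commonly handled lazily by marking cancelled events:

```python
import heapq
import itertools

# Sketch of a future event list supporting insert, remove-first and delete.
# Cancellation is handled lazily by marking entries, a common idiom with heapq.
fel = []                       # heap of (time, sequence, event) entries
cancelled = set()              # ids of events deleted before being processed
seq = itertools.count()        # tie-breaker so equal times compare cleanly

def insert(time, event):
    """(a) Insert an event into the FEL at its time-ordered position."""
    heapq.heappush(fel, (time, next(seq), event))

def remove_first():
    """(b) Remove and return the imminent (earliest) event for processing."""
    while fel:
        time, n, event = heapq.heappop(fel)
        if n not in cancelled:
            return time, event
    return None

def delete(n):
    """(c) Delete a scheduled event (identified by its sequence id) from the FEL."""
    cancelled.add(n)

insert(7.0, "departure")
insert(5.0, "arrival")
print(remove_first())          # -> (5.0, 'arrival'), the earliest event notice
```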


SIMULATING THE BANK MANUALLY

Simulation Clock: 15 minutes.

Arrival Interval   Customer Arrives   Begin Service   Service Duration   Service Complete
       5                  5                 5                 2                  7
       1                  6                 7                 4                 11
       3                  9                11                 3                 14
       3                 12                14                 1                 15
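The hand simulation above can be reproduced mechanically. The short sketch below is illustrative (not part of the original write-up) and recomputes the begin-service and service-complete columns from the two input columns:

```python
# Recompute the bank hand-simulation table from its two input columns.
arrival_intervals = [5, 1, 3, 3]     # minutes between successive arrivals
service_durations = [2, 4, 3, 1]     # minutes of service per customer

clock_arrival = 0
server_free_at = 0
print("Arrives  Begins  Duration  Completes")
for gap, service in zip(arrival_intervals, service_durations):
    clock_arrival += gap                          # customer arrival time
    begin = max(clock_arrival, server_free_at)    # wait if the server is busy
    server_free_at = begin + service              # service completion time
    print(f"{clock_arrival:7}  {begin:6}  {service:8}  {server_free_at:9}")
```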




REFERENCES

1. Graham: Introduction to Simulation
2. Law, A.M. and Kelton, W.D. (2000), Simulation Modeling and Analysis, 3rd ed., McGraw-Hill
3. Wikipedia.org


ANALYSIS OF SIMULATION OUTPUT DATA

ADEFEMI FOLAMOLUWA

MATRIC NO: 090805002


SIMULATION
Simulation can be defined as designing a model of a proposed or existing system, executing this model on a computer, and analyzing the output obtained from the execution of the model.
Most times, simulation is carried out because the physical system does not exist, the cost of building an actual system is high, or because measuring an actual system is time consuming. In all, the aim of simulating systems, proposed or existing, is to analyze and predict to a great extent the functionality of the system.

SIMULATION OUTPUT
Simulation also corrects errors of systems intended to be built. Therefore, for correct analysis of the system, the output of the system simulation has to be correctly analyzed. If the output is analyzed wrongly, the system will not behave as expected, and this can invalidate all results.
In simulation studies, a lot of time and money is spent on model development and programming, but not as much is spent on analyzing simulation output in an appropriate manner. Sometimes the output of a single simulation run of arbitrary length is treated as the actual characteristic of the system.
Outputs of simulation must, however, be regarded as random, since simulations are statistical sampling experiments. A statistical approach must therefore be taken to the analysis of output data, and this is done through discrete event computer simulation. Simulation executions yield estimates of measures of system performance, not actual measured values. These estimates are error prone (subject to sampling errors), and this should be taken into consideration to make correct inferences about system performance.
Simulation output almost never produces raw independent (data from simulation runs are usually correlated), identically distributed, normal data. Classical statistical techniques based on independent, identically distributed observations therefore cannot be applied directly for correct system inferences.
USING CLASSICAL STATISTICAL METHODS TO ANALYZE SIMULATION OUTPUT DATA
It is believed that all outputs of simulations are autocorrelated. For example, if the xth customer to arrive at a bank waits for a long time, it is highly likely that the (x+1)th customer will also wait for a long period of time.
Simulation outputs are also non-stationary rather than identically distributed, as it is not possible to choose the initial conditions for the simulation to be representative of the typical operation of the simulated system.
For this reason, classical statistical methods should not be used directly to analyze simulation output data.
TYPES OF SIMULATIONS
With respect to output analysis, there are two types of simulations: finite horizon (terminating) and steady-state simulations.

Finite horizon simulations:
The termination of a finite horizon simulation takes place at a specific time or is caused by the occurrence of a specific event. Examples are:
- A mass transit system during rush hour.
- A production system until a set of machines breaks down.
- The start-up phase of any system.

Steady-state simulations:
In this type of simulation, the long-term behaviour of systems is analyzed. A performance measure is called a steady-state parameter if it is a characteristic of the equilibrium distribution of an output stochastic process. Examples are:
- A continuously operating communication system where the objective is the computation of the mean delay of a packet in the long run.
- A distribution system over a long period of time.

FINITE HORIZON SIMULATIONS
A system of interest over a finite time horizon is simulated in this case. Assume we obtain discrete simulation output Y1, Y2, ..., Ym, where the number of observations, m, can be a constant or a random variable.
Example: The experimenter can specify the number, m, of customer waiting times Y1, Y2, ..., Ym to be taken from a queuing simulation. Or m could denote the random number of customers observed during a specified time period [0, T].
Alternatively, we might observe continuous simulation output {Y(t), 0 ≤ t ≤ T} over a specified interval [0, T].
Example: If we are interested in estimating the time-averaged number of customers waiting in a queue during [0, T], the quantity Y(t) would be the number of customers in the queue at time t.
The goal is to estimate the expected value of the sample mean of the observations, E[Ȳm].
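A common way to estimate E[Ȳm] for a terminating simulation is to average the sample means of several independent replications and attach a confidence interval to that grand mean. The sketch below is illustrative; run_simulation is a hypothetical stand-in for one complete replication, and the hard-coded t-value assumes the default of 10 replications:

```python
import random
import statistics

def replicate(run_simulation, k=10):
    """Estimate E[Y-bar_m] from k independent replications of a terminating simulation.

    run_simulation() is a hypothetical stand-in that performs one complete run
    and returns the within-run sample mean Y-bar_m.
    """
    means = [run_simulation() for _ in range(k)]      # one Y-bar_m per replication
    grand_mean = statistics.mean(means)
    std_error = statistics.stdev(means) / k ** 0.5
    half_width = 2.262 * std_error                    # t quantile for 95% CI, valid for k = 10
    return grand_mean, half_width

# Example with a toy "simulation" that just returns a noisy sample mean:
est, hw = replicate(lambda: statistics.mean(random.expovariate(0.5) for _ in range(100)))
print(f"E[Y-bar] ~ {est:.3f} +/- {hw:.3f}")
```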

STEADY STATE SIMULATIONS
Now assume that we have on hand stationary (steady-state) simulation output Y1, Y2, ..., Yn.
Our goal is to estimate some parameter of interest, e.g., the mean customer waiting time or the expected profit produced by a certain factory configuration. In particular, suppose the mean of this output is the unknown quantity μ. We use the sample mean Ȳn to estimate μ.
As in the case of terminating simulations, we must accompany the value of any point estimator with a measure of its variance.
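One standard way to attach such a variance estimate to the sample mean of correlated steady-state output is the method of batch means. The sketch below is illustrative and assumes the warm-up portion of the output has already been discarded; the number of batches is an arbitrary choice:

```python
import statistics

def batch_means(output, num_batches=20):
    """Sketch of the batch-means method for steady-state output analysis.

    `output` is the post-warm-up sequence Y1..Yn. The means of long,
    non-overlapping batches are treated as approximately independent
    observations, which gives a usable standard error for Y-bar_n.
    """
    batch_size = len(output) // num_batches
    means = [statistics.mean(output[i * batch_size:(i + 1) * batch_size])
             for i in range(num_batches)]
    point_estimate = statistics.mean(means)
    std_error = statistics.stdev(means) / num_batches ** 0.5
    return point_estimate, std_error
```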



Bibliography
Goldsman, D. (2010, May 26). Simulation Output Analysis, 1-55.
Law, A. M. (1983). Statistical Analysis of Simulation Output Data. Operations Research, 1-48.

NAME
ADIBIOLOGUN FUNKE OLUWASEUN
MATRIC NO
090805005
COURSE CODE
CSC524

ASSIGNMENT
A WRITEUP ON ANALYSIS OF SINGLE
SERVER QUEUE AND QUEUING NETWORK



LECTURER IN CHARGE
DR ADEWOLE

ANALYSIS OF SINGLE SERVER QUEUE AND QUEUING NETWORK
ANALYSIS OF A SINGLE QUEUE
WHAT DOES QUEUE MEAN?
A queue occurs when a potential customer arrives at a system that offers a certain service that the customer wishes to use. A queue works on much the same methodology used at banks or supermarkets, where customers are treated according to their order of arrival.
In computer systems, many jobs share system resources such as the CPU, disks, and other devices. Since generally only one job can use a resource at any time, all other jobs wanting to use that resource wait in queues.

THE SINGLE SERVER QUEUE
It is a queuing model that has only one queue. This kind of model can be used to analyze
individual resources in computer systems. For example if all jobs waiting for the CPU are
kept in one queue, the CPU can be modeled using results that apply to single queues.
This is one of the most prevalent forms of queuing, in which customers enter a system, are given priority in order of their arrival, and are served by a single server. With proper enforcement, a high level of social justice is maintained, and the ability to make social comparisons is emphasized, particularly in situations where queues are visible to the customers, such as queuing at a bus stop.
The central element of the system is a server, which provides some service to the items.
Items from some population of items arrive at the system to be served. If the server is
idle, an item is served immediately. Otherwise, an arriving item joins a waiting line. When
the server has completed serving an item, the item departs. If there are items waiting in the queue, one is immediately dispatched to the server. The server in this model can represent anything that performs some function or service for a collection of items. For example, a processor provides service to processes; an I/O device provides a read or write service for I/O requests; a transmission line provides transmission service to packets of data.

Formulas of a Single Server Queue
Table below provides some equations for single server queues that follow the M/G/1
model. That is, the arrival rate is Poisson and the service time is general. Making use of a
scaling factor, A, the equations for some of the key output variables is straightforward.
Note that the key factor in the scaling parameter is the ratio of the standard deviation of
service time to the mean. No other information about the service time is needed. Two
special cases are of some interest. When the standard deviation is equal to the mean, the
service time distribution is exponential (M/M/1).
This is the simplest case and the easiest one for calculating results. Table 3b shows the
simplified versions of equations for the standard deviation of r and Tr, plus some other
parameters of interest. The other interesting case is a standard deviation of service time
equal to zero, that is, a constant service time (M/D/1). The corresponding equations are
shown in Table 3c.
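Since the tables themselves are not reproduced here, the sketch below simply evaluates the standard textbook mean-value formulas for the two special cases (M/M/1 and M/D/1). The formulas are the well-known ones rather than copies of the missing tables, and the numeric rates are illustrative:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Standard M/M/1 results (exponential service times)."""
    rho = arrival_rate / service_rate                    # utilization
    mean_in_system = rho / (1 - rho)                     # E[n], jobs in the system
    mean_response = 1 / (service_rate - arrival_rate)    # E[Tr], mean response time
    return rho, mean_in_system, mean_response

def md1_mean_wait(arrival_rate, service_rate):
    """Standard M/D/1 mean waiting time (constant service time)."""
    rho = arrival_rate / service_rate
    return rho / (2 * service_rate * (1 - rho))

print(mm1_metrics(arrival_rate=0.8, service_rate=1.0))    # rho=0.8, E[n]=4, E[Tr]=5
print(md1_mean_wait(arrival_rate=0.8, service_rate=1.0))  # half the M/M/1 waiting time
```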

BIRTH-DEATH PROCESSES: These are used to model a system in which jobs arrive one at a time. The state of the system can be represented by the number of jobs n in the system. The arrival of a new job changes the state to n+1; this is called a birth. Similarly, the departure of a job changes the system state to n-1; this is called a death. Therefore the number of jobs in such a system can be modeled as a birth-death process.
SUMMARY OF SOME CLASSICAL RESULTS FOR THE SINGLE SERVER QUEUE
We focus now on the case s = 1. Quantities of interest are:
- the arrival rate λ, the intensity of the arrival process An (mean number of customer arrivals per second, also equal to the inverse of the mean inter-arrival time (Chapter 11));
- the server utilization ρ = λS̄, where S̄ is the mean service time (Palm expectation of Sn);
- the residence time Rn = Dn − An and the waiting time Wn = Rn − Sn for customer n;
- the number of customers in the system N(t) and the number of customers waiting Nw(t), given by
  N(t) = Σn 1{An ≤ t} · 1{Dn > t}
  Nw(t) = (N(t) − 1)+

STABILITY: An important issue in the analysis of the single server queue is stability. In mathematical terms, it means whether N(t) is stationary. When the system is unstable, a typical behaviour is that the backlog grows to infinity.
The single server queue is unstable for ρ > 1 and stable for ρ < 1.
The first part says that a necessary condition for stability is ρ ≤ 1. A heuristic explanation for this necessary condition is as follows: if the system is stable, all customers eventually enter service, so the mean number of beginnings of service per second is λ. From Little's law applied to the server (see Section 6.3), ρ equals the probability that the server is busy, which is ≤ 1.
The proof of the second statement is more complex; see [Baccelli88-book] for details. The boundary case ρ = 1 depends on the details of the arrival and service processes.
Be careful that this intuitive stability result holds only for a single queue.

QUEUING NETWORK
A model in which jobs departing from one queue arrive at another queue, or possibly the same queue, is called a queuing network. It describes the system as a set of interacting resources.
A network of queues is a collection of service centers, which represent system resources, and customers, which represent users or transactions. It is a network consisting of interconnected queues.
Examples
Customers go from one queue to another in post office, bank, and supermarket.
Data packets traverse a network moving from a queue in a router to the queue in
another router.
Queuing networks can be classified into three types:
Open
Closed
Mixed

(1) An open queuing network is one that has external arrivals and departures, i.e. it receives customers from an external source and sends them to an external destination. A job enters the system as IN and exits as OUT. The number of jobs in the system varies with time.



In analyzing this system, we assume the throughput is equal to the arrival rate.
Analysis of open queuing network
Open queuing network models are used to represent transaction processing systems such as airline reservation systems or banking systems. In these systems, the transaction arrival rate does not depend on the load on the computer system. The transaction arrivals are modeled as a Poisson process with a mean arrival rate λ.



(2) A closed queuing network has no external arrivals or departures. It has a constant number of customers (a finite population). This fixed population moves between the queues but never leaves the network; the jobs keep circulating from one queue to the next.

The jobs exiting the system immediately reenter the system. The flow of jobs in the Out-
to-In link defines the throughput of the closed system. In analyzing a closed system, we
assume that the number of jobs is given, and we attempt to determine the throughput (or
the job completion rate).

(3) Mixed queuing networks are networks that are open for some workloads and closed for others, i.e. open for some classes of jobs and closed for others. The figure below shows an example of a mixed system with two classes of jobs. The system is closed for interactive jobs and open for batch jobs. The term class refers to types of jobs. All jobs of a single class have the same service demands and transition probabilities. Within each class, the jobs are indistinguishable.



Queuing Network Models for Computer Systems
Two of the earliest queuing models of computer systems are the machine repairman model and the central server model, shown in Figures 6 and 7 respectively. The machine repairman model, as the name implies, was originally developed for modeling machine repair shops. It has a number of working machines and a repair facility with one or more servers (repairmen). Whenever a machine breaks down, it is put in the queue for repair and serviced as soon as a repairman is available. Scherr (1967) used this model to represent a time-sharing system with n terminals. Users sitting at the terminals generate requests (jobs) that are serviced by the system, which serves as the repairman. After a job is done, it waits at the user terminal for a random "think time" interval before cycling again.
The central server model shown in Figure 7 was introduced by Buzen (1973). The CPU in the model is the central server that schedules visits to the other devices. After service at the I/O devices, the jobs return to the CPU for further processing and leave it when the next I/O is encountered or when the job is completed.

FIG 6: A Machine Repairman Model
FIG 7: A Central Server Model

Types of Service Centers
In computer systems modeling, we encounter three kinds of devices. Most devices have a single server whose service time does not depend upon the number of jobs in the device. Such devices are called fixed-capacity service centers. For example, the CPU in a system may be modeled as a fixed-capacity service center. Then there are devices that have no queuing, and jobs spend the same amount of time in the device regardless of the number of jobs in it. Such devices can be modeled as a center with infinite servers and are called delay centers or IS (infinite server). A group of dedicated terminals is usually modeled as a delay center. Finally, the remaining devices are called load-dependent service centers, since their service rates may depend upon the load or the number of jobs in the device. An M/M/m queue (with m ≥ 2) is an example of a load-dependent service center: its total service rate increases as more and more servers are used. A group of parallel links between two nodes in a computer network is an example of a load-dependent service center.

Definition - What does Queue mean?
A queue, in computer networking, is a collection of data packets collectively waiting to be transmitted by a network device using a pre-defined structure or methodology.
A queue consists of a number of packets. These packets are bound to be routed over the network, lined up in a sequential way with a changing header and trailer, and taken out of the queue for transmission by a network device using some defined packet processing algorithm such as first in, first out (FIFO) or last in, first out (LIFO). The queue dequeues, or takes out, a data packet from the head when it needs to transmit, while new data packets are added at the tail, which is known as enqueuing.

A queue works on much the same methodology used at banks or supermarkets, where the customer is treated according to its arrival, for example FIFO, or some other priority if it is a privileged customer. Similarly, a network queue processes data packets based on their arrival, priority, smallest task first, multitasking, FIFO, LIFO, and pre-emption.

CSC 524
VERIFICATION AND VALIDATION OF SIMULATION MODELS






AMUDA, Tosin Joseph
090805009




UNIVERSITY OF LAGOS

Abstract
Simulation models are increasingly used to solve difficult scientific and social problems and to aid in decision-making. The developers and users of these models, the decision makers using information obtained from the results of these models, and the individuals affected by decisions based on such models are all rightly concerned with whether a model and its results are correct.
Consequently, no model can be accepted unless it has passed the tests of validation. It is therefore essential to carry out the procedure of validation to ascertain the credibility of a simulation model. This usually involves a twin process: validation and verification. The rest of this article reviews the literature on how to verify and validate simulation models in order to ensure a model's credibility to an acceptable level.


Table of Contents
Abstract
Section 1: Introduction
Section 2: Verification
  2.1: Good Programming Practice
Section 3: Validation
  3.1 Face Validity
  3.2 Validation of Model Assumptions
    Structural Assumptions
    Data Assumptions
  3.3 Validating Input-Output Transformations
    Hypothesis Testing
    Model Accuracy as a Range
    Confidence Intervals
Section 4: Conclusion
References



Section 1: Introduction
There is always a need to evaluate and improve the performance of a system that evolves over time. First, the behaviour of such a system must be studied. To study the behaviour of a system, one must first come up with a representation (a close approximation) of that system. This representation of the construction and working of a system of interest is known as a model. Experiments are then carried out on the model in order to imitate the operations of the actual system. This process, usually carried out on a computer, is known as simulation. Generally, a model intended for a simulation study is a mathematical model
developed with the help of simulation software.
Simulation models are approximate imitations of real-world systems built with several assumptions, and they never exactly imitate the real-world system. Because of these assumptions and approximations, an important issue in modelling is model validity. Therefore, a model should be verified and validated to the degree needed for the model's intended purpose or application.
This concern for quantifying and building credibility in simulation models is addressed by Verification and Validation (V & V). This paper uses the definitions of V & V given in the classic simulation textbook by Law and Kelton (1991, p.299): "Verification is determining that a simulation computer program performs as intended, i.e., debugging the computer program... Validation is concerned with determining whether the conceptual simulation model (as opposed to the computer program) is an accurate representation of the system under study".
Both verification and validation are processes that accumulate evidence of a model's correctness or accuracy for a specific scenario; thus, V & V cannot prove that a model is correct and accurate for all possible scenarios, but, rather, it can provide evidence that the model is sufficiently accurate for its intended use.

Another popular author on V & V in simulation relates the various phases of modelling with V & V in Figure 1. Sargent (1991, p.38) states: "the conceptual model is the mathematical/logical/verbal representation (mimic) of the problem entity developed for a particular study; and the computerized model is the conceptual model implemented on a computer. The conceptual model is developed through an analysis and modelling phase, the computerized model is developed through a computer programming and implementation phase, and inferences about the problem entity are obtained by conducting computer experiments on the computerized model in the experimentation phase".

Figure 1: Simplified Version of the Modelling Process
There is no standard theory on V&V; therefore, there exist a number of philosophical theories, statistical techniques, software practices, and so on. However, the emphasis of this article is on statistical techniques, which may yield reproducible, objective, quantitative data about the quality of simulation models.
quality of simulation models.
This article is organized as follows. Section 2 discusses verification. Section 3 examines
validation. Section 4 provides conclusions. It is followed by a list of references.

Section 2: Verification
Once the simulation model has been programmed, the analysts/programmers must check if this
computer code contains any programming errors ('bugs') to ensure that the conceptual model
is reflected accurately in the computerized representation. The objective of model verification
is to ensure that the implementation of the model is correct.
Various processes and techniques are used to assure the model matches specifications and
assumptions with respect to the model concept. Many common-sense suggestions are
applicable, but none is perfect, for example: 1) general good programming practice such as
object oriented programming, 2) checking of intermediate simulation outputs through tracing
and statistical testing per module, 3) comparing (through statistical tests) final simulation
outputs with analytical results, and 4) animation.
Many software engineering techniques used for software verification are applicable to
simulation model verification.
2.1: Good Programming Practice
Software engineers have developed numerous procedures for writing good computer programs and for verifying the resulting software in general (not specifically in simulation). A few of the best software engineering practices are: object-oriented programming, formal technical reviews, structured walk-throughs, and correctness proofs.
There are many software engineering testing and quality assurance techniques that can be utilized to verify a model. These include, but are not limited to, having the model checked by an expert (e.g. a chief programmer), making logic flow diagrams that include each logically possible action, examining the model output for reasonableness under a variety of settings of the input parameters, and using an interactive debugger.

Section 3: Validation
Once the simulation model is programmed correctly, we face the next question: is the conceptual simulation model (as opposed to the computer program) an accurate representation of the system under study?
There are many approaches described in the literature that can be used to validate a computer model. The approaches range from subjective reviews to objective statistical tests. By objective, we mean using some type of mathematical procedure or statistical test, e.g., hypothesis tests or confidence intervals. One approach that is commonly used is to have the model builders determine the validity of the model through a series of tests.
Naylor and Finger [1967] formulated a three-step approach to model validation that has been
widely followed:
Step 1. Build a model that has high face validity.
Step 2. Validate model assumptions.
Step 3. Compare the model input-output transformations to corresponding input-output
transformations for the real system.
3.1 Face Validity
A model that has face validity appears to be a reasonable imitation of a real-world system to
people who are knowledgeable of the real world system.

Face validity is tested by having users
and people knowledgeable with the system examine model output for reasonableness and in
the process identify deficiencies. An added advantage of having the users involved in validation
is that the model's credibility to the users and the user's confidence in the model increases.
Sensitivity to model inputs can also be used to judge face validity. For example, if a simulation of a fast-food restaurant drive-through were run twice with customer arrival rates of 20 per hour and 40 per hour, then model outputs such as average wait time or maximum number of customers waiting would be expected to increase with the arrival rate.
3.2 Validation of Model Assumptions
Assumptions made about a model generally fall into two categories: structural assumptions
about how system works and data assumptions.
Structural Assumptions
Assumptions made about how the system operates and how it is physically arranged are
structural assumptions. For example, the number of servers in a fast food drive through lane
and if there is more than one how are they utilized? Do the servers work in parallel where a
customer completes a transaction by visiting a single server or does one server take orders and
handle payment while the other prepares and serves the order. Many structural problems in the
model come from poor or incorrect assumptions. If possible the workings of the actual system
should be closely observed to understand how it operates. The systems structure and operation
should also be verified with users of the actual system.
Data Assumptions
There must be a sufficient amount of appropriate data available to build a conceptual model
and validate a model. Lack of appropriate data is often the reason attempts to validate a model
fail. Data should be verified to come from a reliable source. A typical error is assuming an
inappropriate statistical distribution for the data. The assumed statistical model should be tested
using goodness-of-fit tests and other techniques. Examples of goodness-of-fit tests are the Kolmogorov-Smirnov test and the chi-square test. Any outliers in the data should be checked.
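As an illustration of such a check, the sketch below tests an assumed exponential inter-arrival distribution with a Kolmogorov-Smirnov test. It assumes SciPy is available, and the data and the rate estimate are made up for the example:

```python
import numpy as np
from scipy import stats

# Sketch: Kolmogorov-Smirnov goodness-of-fit test for an assumed
# exponential inter-arrival distribution (scale estimated from the data).
interarrivals = np.array([2.1, 0.4, 3.7, 1.2, 0.9, 2.8, 1.5, 0.6, 4.2, 1.1])
mean_estimate = interarrivals.mean()

statistic, p_value = stats.kstest(interarrivals, "expon", args=(0, mean_estimate))
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3f}")
# A small p-value would be evidence against the assumed distribution.
# (Strictly, estimating the parameter from the same data calls for an
# adjusted test such as Lilliefors'; this is only an illustration.)
```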



3.3 Validating Input-Output Transformations
The model is viewed as an input-output transformation for these tests. The validation test consists of comparing outputs from the system under consideration to model outputs for the same set of input conditions. Data recorded while observing the system must be available in order to perform this test. The model output that is of primary interest should be used as the measure of performance. For example, if the system under consideration is a fast-food drive-through, where the input to the model is customer arrival times and the output measure of performance is average customer time in line, then the actual arrival times and time spent in line for customers at the drive-through would be recorded. The model would be run with the actual arrival times, and the model's average time in line would be compared to the actual average time spent in line using one or more tests.

Hypothesis Testing
Statistical hypothesis testing using the t-test can be used as a basis to accept the model as valid
or reject it as invalid.
The hypothesis to be tested is
H0: the model measure of performance = the system measure of performance
versus
H1: the model measure of performance ≠ the system measure of performance.
The test is conducted for a given sample size n and level of significance α. To perform the test, a number n of statistically independent runs of the model are conducted and an average or expected value, E(Y), for the variable of interest is produced. Then the test statistic t0 is computed from the given α, n, E(Y) and the observed value for the system, μ0, and the critical value t(α/2, n−1) for α and n−1 degrees of freedom is calculated.
If |t0| > t(α/2, n−1), reject H0: the model needs adjustment.
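A concrete, illustrative version of this test, assuming SciPy is available and that the system value μ0 and the replication data are known, is the one-sample t-test below:

```python
import numpy as np
from scipy import stats

# Sketch: validate a model output against a known system value mu0
# using a one-sample t-test over n independent model replications.
model_outputs = np.array([4.3, 4.9, 4.5, 5.1, 4.7, 4.4, 4.8, 5.0])  # e.g. average time in line
mu0 = 4.3                                 # observed system measure of performance
alpha = 0.05

t0, p_value = stats.ttest_1samp(model_outputs, mu0)
critical = stats.t.ppf(1 - alpha / 2, df=len(model_outputs) - 1)

if abs(t0) > critical:                    # equivalently: p_value < alpha
    print("Reject H0: the model needs adjustment")
else:
    print("Fail to reject H0: no evidence of invalidity at this alpha")
```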

Model Accuracy as a Range
A statistical technique where the amount of model accuracy is specified as a range has recently
been developed. The technique uses hypothesis testing to accept a model if the difference
between a model's variable of interest and a system's variable of interest is within a specified
range of accuracy. A requirement is that both the system data and model data be approximately
Normally Independent and Identically Distributed (NIID). The t-test statistic is used in this
technique. If the mean of the model is μm and the mean of the system is μs, then the difference between the model and the system is D = μm − μs. The hypothesis to be tested is whether D is within the acceptable range of accuracy.

Confidence Intervals
Confidence intervals can be used to evaluate if a model is "close enough" to a system for some variable of interest. The difference between the known model value, μ0, and the system value, μ, is checked to see if it is less than a value, ε, small enough that the model is considered valid with respect to that variable of interest. To perform the test, a number, n, of statistically independent runs of the model are conducted, and a mean or expected value, E(Y), for the simulation output variable of interest Y, with a standard deviation S, is produced.
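A sketch of the corresponding confidence-interval check is shown below; it assumes SciPy for the t quantile, and the data, the system value and the accuracy limit ε are illustrative:

```python
import numpy as np
from scipy import stats

# Sketch: judge whether the model is "close enough" to the system by
# comparing a confidence interval for (model mean - system value) with
# an acceptable difference epsilon chosen by the analyst.
model_outputs = np.array([4.3, 4.9, 4.5, 5.1, 4.7, 4.4, 4.8, 5.0])
system_value = 4.6
epsilon = 0.5                              # largest difference deemed unimportant

n = len(model_outputs)
mean, s = model_outputs.mean(), model_outputs.std(ddof=1)
half_width = stats.t.ppf(0.975, df=n - 1) * s / np.sqrt(n)
low, high = mean - system_value - half_width, mean - system_value + half_width

print(f"95% CI for the difference: [{low:.3f}, {high:.3f}]")
print("Close enough" if max(abs(low), abs(high)) <= epsilon else "Check further")
```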


Section 4: Conclusion
This paper surveyed verification and validation (V&V) of simulation models. It emphasized
statistical techniques that yield reproducible, objective, quantitative data about the quality of
simulation models.
For verification it discussed the following techniques (see Section 2):
1) General good programming practice such as object-oriented programming;
2) Checking of intermediate simulation outputs through tracing and statistical testing per module;
3) Comparing final simulation outputs with analytical results for simplified simulation models,
using statistical tests;
4) Animation.
For validation it discussed the following techniques (see Section 3):
1). Building a model that has high face validity.
2). Validating model assumptions.
3). Comparing the model input-output transformations to corresponding input-output
transformations for the real system.






References

1. Banks, Jerry; Carson, John S.; Nelson, Barry L.; Nicol, David M. Discrete-Event System
Simulation Fifth Edition, Upper Saddle River, Pearson Education, Inc. 2010 ISBN
0136062121
2. Sargent, Robert G. VERIFICATION AND VALIDATION OF SIMULATION MODELS.
Proceedings of the 2011 Winter Simulation Conference. http://www.informs-
sim.org/wsc11papers/016.pdf
3. Carson, John, MODEL VERIFICATION AND VALIDATION. Proceedings of the 2002 Winter
Simulation Conference. http://informs-sim.org/wsc02papers/008.pdf
4. Naylor, T. H., and J. M. Finger (1967), Verification of Computer Simulation Models, Management Science, Vol. 14, No. 2, pp. B92-B101, cited in Banks, Jerry; Carson, John S.;
Nelson, Barry L.; Nicol, David M. Discrete-Event System Simulation Fifth Edition, Upper
Saddle River, Pearson Education, Inc. 2010 p. 396 ISBN
0136062121,http://mansci.journal.informs.org/content/14/2/B-92
5. Sargent, R. G. 2010. A New Statistical Procedure for Validation of Simulation and Stochastic
Models. Technical Report SYR-EECS-2010-06, Department of Electrical Engineering and
Computer Science, Syracuse University, Syracuse, New York.
6. Law, A.M., and Kelton, W.D. (1991), Simulation Modeling and Analysis, 2nd ed., McGraw-Hill,
New York.
7. Jack P.C (1992), Theory and Methodology: Verification and validation of simulation models,
European Journal of Operational Research 82 (1995) 145-162

DOKAI ANN UNOR
090805020

SINGLE SERVER QUEUE AND
QUEUEING NETWORKS









Single Server Queues
The basic scenario for a single queue is that customers, who belong to some population, arrive at
the service facility. The service facility has one or more servers who are capable of performing
the service required by customers. If a customer cannot gain access to a server it must join a
queue, in a buffer, until a server is available. When service is complete the customer departs, and
the server selects the next customer from the buffer according to the service discipline.

[Figure: arrival to queue -> buffer -> server -> departure from queue]

In order to describe a service facility accurately we need to know details about each of the terms
emphasized above:
Arrival Pattern of Customers: The ability of the service facility to provide service for an arriving stream of customers depends not only on the mean arrival rate λ, but also on the pattern in which they arrive, i.e. the distribution function of the inter-arrival times.
Service Time Distribution: The service time is the time which a server spends satisfying a customer. As with the inter-arrival stream, the important characteristics of this time are both its average duration and its distribution function. If the average duration of a service interaction between a server and a customer is 1/μ, then μ is the service rate.
Server: In a single server queue, the service facility can only serve one customer at a time,
waiting customers will stay in the buffer until chosen for service; how the next customer is
chosen will depend on the service discipline.
Buffer Capacity: Customers who cannot receive service immediately due to the unavailability of the server must wait in the buffer. This can lead to the buffer filling up if it has a finite capacity. If the buffer gets filled up there are two possibilities:
Either, the fact that the facility is full is passed back to the arrival process and arrivals are
suspended until the facility has spare capacity (created by the completion of a customer
who is currently being served);
Or, arrivals continue and arriving customers are turned away and lost until the facility has
spare capacity again.
In some systems the buffer capacity is so large as to never affect the behavior of the
customers; in this case the buffer capacity is assumed to be infinite.
Service Discipline: When more than one customer is waiting for service there has to be a rule for
selecting which of the waiting customers will be the next one to gain access to a server. The
commonly used service disciplines are:
FCFS: first-come-first-served (or FIFO, first-in-first-out).
LCFS: last-come-first-served (or LIFO, last-in-first-out).
RSS: random-selection-for-service.
PRI: priority. The assignment of different priorities to elements of a population is one way in which classes are formed.
Population: The characteristic of the population which we are interested in is usually the size.
Clearly, if the size of the population is fixed, at some value N, no more than N customers will
ever be requiring service at any time. When the population is finite the arrival rate of customers
will be affected by the numbers who are already in the service facility. The arrival rate will be
zero when all the population is already in the facility.
A shorthand notation for these six characteristics of a system is provided by Kendall's notation for classifying queues. In this notation a queue is represented as A/S/c/m/N/SD:
A denotes the arrival process; usually M for Markov (exponential), G for general, or D for deterministic distributions.
S denotes the service time distribution and uses the same distribution identifiers.
c denotes the number of servers available in the service facility; c = 1 in the single server case.
m denotes the capacity of the service facility (buffer + server(s)), infinite by default.
N denotes the customer population, also infinite by default.
SD denotes the service discipline, FCFS by default.
The single server queue is stable if on the average, the service time is less than the inter-arrival
time i.e. mean service time < mean inter-arrival time.
Throughput of a single server queue:
Throughput is the average number of completed jobs per unit time.
The throughput of a single server queue is the average number of jobs that depart from the queue per unit time (after they have been serviced).
Example:
The mean service time = 10 mins.
- What is the maximum throughput (per hour)?
- What is the throughput (per hour) if the mean inter-arrival time is:

Traffic Intensity
The two most important features of a single queue are the arrival rate of customers, λ, and the service rate of the server(s), μ. These are combined into a single parameter which characterises a single or multiple server system, the traffic intensity:

Traffic intensity: ρ = λ / (c·μ)

Many of the important performance measures associated with a queue can be expressed in terms of ρ. These are measures such as utilisation (the proportion of time the server is busy), residence time (the average time spent at the service facility by a customer, both queuing and receiving service), waiting time (the average time spent queuing), queue length (the average number of customers at the service facility, both waiting and receiving service), and throughput (the rate at which customers pass through the service facility).
For the system to be stable, ρ must be less than 1: that is, the arrival rate of customers must be less than the rate at which they can be served.

Queuing Networks
A queuing network refers to a system where there are several stations of service (identical or
non-identical) and a customer undergoes service at all or few service stations,

There are two types of queuing networks:
1. Open queuing networks: An open queuing network refers to a network in which the customers arriving to the system leave the system after completion of service. The experiments presented in the lab take into consideration a special class of open queuing networks called Jackson networks.
A queuing network is a Jackson network if it satisfies the following conditions:
- The network is open and any external arrivals to node i form a Poisson process.
- All service times are exponentially distributed and the service discipline at all queues is FCFS.
- A customer completing service at queue i will either move to some new queue j with probability Pij or leave the system with probability 1 − Σ(j=1..m) Pij, which is non-zero for some subset of the queues.
- The utilization of all of the queues is less than one.
2. Closed queuing networks: In closed queuing networks, work pieces circulate through the
system. The arrival stream at the first station conforms to the departure process at the last
station. In contrast to open queuing systems, the last station of a closed queuing network
may become blocked if the buffer in front of the input station is full of work pieces. This
holds true under the assumption that the buffer capacities within the system are finite.
Furthermore, the first station may become starved if no carriers with work pieces are
available in the buffer in front of the input station.

Little's Law
Little's law says that under steady state conditions, the average number of items in a queuing system equals the average rate at which items arrive multiplied by the average time that an item spends in the system. Letting
L = average number of items in the queuing system,
W = average waiting time in the system for an item, and
λ = average number of items arriving per unit time, the law is
L = λW
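For instance, with illustrative numbers, if items arrive at λ = 12 per hour and each spends W = 0.25 hours in the system, Little's law gives the average number in the system directly:

```python
arrival_rate = 12       # items per hour (lambda)
time_in_system = 0.25   # hours per item (W)
L = arrival_rate * time_in_system
print(L)                # -> 3.0 items in the system on average
```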






NAME: DURU DUMEBI JULIAN
MATRIC: 090805021
COURSE: CSC 524

Modelling Computer Systems Networks

Abstract:
Traffic in computer networks is a very complicated system to model. Simulation of these systems is very difficult because they show non-linear behaviour. However, the process is important because computer network administrators need to know the capabilities of their networks. They do not want a server to be overloaded as a result of receiving more requests than it can handle. The rest of this write-up is dedicated to the modelling of computer networks.

Introduction:
In large scale networks, data can follow a variety of routes in getting from the source to the destination. The route followed depends heavily on the amount of information nodes have about their neighbours. Important parameters like bottlenecks, congestion, data rates, bandwidth, etc. need to be modelled and studied. Inefficient operation of a network often traces back to congestion.

Network Elements:
Elements in a computer network are divided into two broad groups: on the one hand the nodes, i.e. the storage units; on the other hand the transfer media, which connect the nodes to each other. Important factors include:
- Geometrical conditions:
  - Distance between nodes.
  - Materials of the transfer medium.
  - Number of users.
- Seasonal effects:
  - The number of users using the network moves on a very wide scale.
- Diverse external factors:
  - Weather.
  - Electromagnetic storms.

Nodes:
Nodes are the active elements of a network that communicate with each other.

Transfer media:
The process of communication is implemented through the transfer medium. The communication is very fast; this makes the transfer of possibly large amounts of data feasible within a short time. One-way communication is usually enforced to avoid data collisions.

Communication:
Communication between the nodes of a computer network is always dynamic. The changes of the dynamic message transmissions are modelled with the help of a communication matrix. The task of this matrix is to take into consideration all elements of the network as well as their features, which regulate the data traffic between the nodes. More parameters have to be considered, for instance: the length and number of data transfer sections, the physical parameters of the medium, the sizes of storage units in the nodes, the degree of utilisation, etc.

Some characteristics of computer networks relevant to faultless data transfer:
- In the course of data transfer, although the transfer velocity of one data bit is constant in time, transmission times of packets of the same length may differ.
- In the course of data transfer, data transfer sections running in parallel with each other do not affect each other directly, but in the nodes, for example, the appearance of multiplied packets has disturbing effects.
- Two-way traffic does not exist.
- The intensity of internal communication between directly connected nodes changes in time. In the case of wrongly chosen parameters, for example, this internal communication is capable of creating peak load on the examined network even without traffic arriving from the external network.
- To control message transmission, an internal communication system works between the nodes connected to each other. For example, the receiver can wait for a message in vain if the transmitter does not have a message to send.

Traffic simulation model
Transactions in computer networks show non-linear features. For example, sometimes there is no data transfer between two nodes because the buffer of the receiving node is full, even though the sender is capable of transferring data. A communication graph is the output of the simulation model that imitates the physical arrangement of the computer network. The nodes are connected by lines called edges, which are actually the transfer media.

COMMUNICATION GRAPH
The communication graph above represents a simple communication graph with routers and terminals. Each node has a buffer for storing messages. Its capacity is measured by the data density b(t), which gives the ratio of the current data quantity to the maximal data quantity in node i.

b(t) = (number of data bits currently in the buffer of node i) / (buffer size of node i) = N(t) / A

The data density gives an idea of the number of data bits that can be transferred in the next time unit. N(t) and b(t) are vectors of n elements:

N(t) = A · b(t)

If the elements of N(t) are known, the total number of data bits stored in the observed area is calculated as a vector scalar product:

N(t) = A · b(t) = diag(A) · b(t)

Now examine the state of the network at time t + Δt. The amount of data stored in node i changes as:

N(t + Δt) = N(t) + N_internal + N_input − N_output

In the network, the transmission speed means how many bits can be transported per second between two nodes. The transmission speed may differ in some parts of the network, but from node j to node i the same value is assumed and marked v_ij. After summarising the data changes in all nodes, the total data change in the network is given as:

N(t + Δt) − N(t) = ∫ from t to t+Δt of Ṅ(τ) dτ

Communication function
Some of the parameters to consider regarding communication are:
- Internal connections of the network
- Environmental connections of the network
- Size of internal buffers in nodes
- Capabilities of nodes to send and receive data

The subfunction of internal or environmental connection, marked k, grants the connection when node j can communicate with node i. Each node is examined, and if a physical connection exists between node j and node i, f_ij is set to 1, otherwise it is zero. Hence f_ij is a symmetrical matrix, because if node i is connected to node j, then node j is connected to node i. If there exists no physical connection between these two nodes, then k is equal to 0.

k = f_ij · p_ij   if a connection exists between node j and node i
k = 0             if no connection exists between nodes j and i
where i ≠ j and 1 ≤ i, j ≤ n.
S(t) is an internal subfunction of the transmitter node with values of 1 or 0. It shows whether node j has any message to send or not. The connection is disabled if the data density of node j, b_j(t), is equal to 0.

S(t) = 1 if b_j(t) > 0
S(t) = 0 if b_j(t) = 0

R(t) is another internal subfunction, of the receiver node, with values of 1 or 0. The connection is enabled if b_i(t), the data density of node i, is smaller than 1; otherwise it is 0. Zero means that the buffer of node i has been overloaded, so node i closes its communication port towards node j and therefore does not receive any message from node j.

R(t) = 1 if b_i(t) < 1
R(t) = 0 if b_i(t) = 1

The communication function between node j and node i is the product of the three subfunctions:

C = k(t) · S(b_j(t), t) · R(b_i(t), t)


Queueing Model
Queueing theory helps determine the time jobs spend in the queue. It also helps predict response time. Two of the earliest queueing models of computer systems are the machine repairman model and the central server model. The machine repairman model, as the name implies, was originally developed for modeling machine repair shops. It has a number of working machines and a repair facility with one or more servers (repairmen). Whenever a machine breaks down, it is put in the queue for repair and serviced as soon as a repairman is available. Scherr (1967) used this model to represent a time-sharing system with n terminals. Users sitting at the terminals generate requests (jobs) that are serviced by the system, which serves as the repairman. After a job is done, it waits at the user terminal for a random "think time" interval before cycling again.
The central server model was introduced by Buzen (1973). The CPU in the model is the central server that schedules visits to other devices. After service at the I/O devices, the jobs return to the CPU for further processing and leave it when the next I/O is encountered or when the job is completed.
In computer systems modeling, we encounter three kinds of devices. Most devices have a single server whose service time does not depend upon the number of jobs in the device. Such devices are called fixed-capacity service centers. For example, the CPU in a system may be modeled as a fixed-capacity service center. Then there are devices that have no queueing, and jobs spend the same amount of time in the device regardless of the number of jobs in it. Such devices can be modeled as a center with infinite servers and are called delay centers or IS (infinite server). A group of dedicated terminals is usually modeled as a delay center. Finally, the remaining devices are called load-dependent service centers, since their service rates may depend upon the load or the number of jobs in the device. An M/M/m queue (with m ≥ 2) is an example of a load-dependent service center: its total service rate increases as more and more servers are used. A group of parallel links between two nodes in a computer network is an example of a load-dependent service center. Unless specified otherwise, it is assumed that the service times for all servers are exponentially distributed.
[MODEL DIAGRAM: terminals and system]
Queuing analysis requires recognition of the following parameters:
- Population size
- Number of servers
- System capacity
- Arrival process
- Service time distribution
- Service discipline

Population size: The number of potential customers that can enter the system. This number can be infinite or finite, depending on the implementation of the system.

Number of servers: Can be one or more. Assume each server implements a similar queuing system.

System capacity: The number that can wait plus the number that can be served. Most systems have a finite queue length, though it is easier to analyse an infinite queue length.

Arrival process: Jobs arrive at times t1, t2, ..., tn. Inter-arrival times are given by τj = tj − tj−1. Most arrivals are Poisson distributed, i.e. the inter-arrival times are exponentially distributed: f(x) = λ·e^(−λx).

Service time distribution: The amount of time each customer spends at the server.

Service discipline: The order in which customers are serviced. The most common order is FCFS (First Come, First Served).

These parameters are represented in Kendall's notation:

A/S/m/B/K/SD

A  - Arrival process
S  - Service time distribution
m  - Number of servers
B  - Number of buffers
K  - Population size
SD - Service discipline

Network of queues

The flow of requests through a system may involve a number of different service stations and navigation paths. Examples:
- Flow of IP packets through a computer network
- Flow of orders through a manufacturing system
- Flow of requests/messages through a web service system
- Flow of paperwork through an administration office

The above systems can be implemented as networks of queues. Queues can be linked together to form a network of queues which reflects the flow of customers through a number of different service stations.

Traffic equation
For M nodes, if γi is the external arrival rate into node i and pij is the branching probability from node i to node j, then the effective arrival rates are given by:

λ1 = γ1 + Σj pj1·λj
...
λM = γM + Σj pjM·λj
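These traffic equations form a linear system λ = γ + Pᵀλ, which can be solved directly once the routing probabilities are known. The sketch below is illustrative (the 3-node routing matrix is invented for the example) and uses NumPy:

```python
import numpy as np

# Sketch: solve the traffic equations lambda = gamma + P^T lambda
# for a 3-node open network with illustrative routing probabilities.
gamma = np.array([5.0, 0.0, 0.0])        # external arrival rates into each node
P = np.array([[0.0, 0.6, 0.4],           # P[i, j] = probability of going from node i to node j
              [0.0, 0.0, 0.5],           # the remainder of each row leaves the system
              [0.2, 0.0, 0.0]])

lam = np.linalg.solve(np.eye(3) - P.T, gamma)
print(lam)                                # effective arrival rate at each node
```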

MODELLING OF COMPUTER SYSTEMS NETWORKS


SYSTEM PERFORMANCE EVALUATION
CSC 524


By



Kesa Oluwafunmilola Elizabeth
090805026

Lecturer: Dr. A.P. Adewole


Introduction

What is performance?
Performance is the quantitative measure of a system. If you are unable to measure the
performance of a system, then you will be unable to control and manage the system. Taking
the performance measurements of an existing running system is relatively simple compared to
when a system totally or partially does not exist. How do we take the performance measurement
of a non-existing system? In this case, a formal way to take the measurements will be to
estimate the performance measurements by means of some kind of mathematical performance
model (Ramon, 2003).

What is a performance model?
A performance model is a mathematical representation of the system behaviour in which we try to represent accurately the most significant mechanisms of the system's evolution over time, while neglecting or simplifying the representation of the remaining mechanisms (Ramon, 2003).
Performance model is created to define the significant aspects of the way in which a proposed
or actual system operates in terms of resources consumed, contention for resources, and delays
introduced by processing or physical limitations (such as speed, bandwidth of communications,
access latency, etc.) (John, 2004). It should be noted, however, that a model is always an approximation. The only perfect model of the system is the system itself.
Computer networks are telecommunications networks that allow computers to exchange data.
In computer networks, networked computing devices transmit data to each other through
network links (either cable or wireless).
Network success is dependent on several performance attributes. The type and location of the
network deployment will influence performance. Network performance is usually measured by
the quality of service. Typically, the parameters that affect the network performance include
throughput, latency, bit error rate and jitter.

Kinds of Models
A brief overview of the kinds of models is given below
1. Physical model:
A physical model is one which is usually a physical replica, often on a reduced scale,
of the system it represents. A physical model looks like the object it represents and
is also called an Iconic Model. For instance, a model of an airplane (scaled down), a
model of the atom (scaled up).

2. Simulation model:
Simulation is the act of executing, experimenting with or exercising a model for a
specific objective such as acquisition, analysis, education, entertainment, research or
training.

3. Analytical model:
Analytical model is one which is solved by using the deductive reasoning of
mathematical theory. An M/M/1 queuing model, a Linear Programming model, a
nonlinear optimization model are examples of analytical models.


Performance modelling techniques

The main goal of describing the behaviour of some system is to evaluate the time needed by
any entity to cross the system. This time has two main components: the strict time needed for
its execution in the different hardware components and the time spent waiting to use shared resources.
Modelling may be implemented as a simulator (an operation abstraction) or as an abstract
mathematical representation of the system behaviour. The main existing mathematical
techniques are based on the following formalisms: Queuing networks, Petri nets and Process
algebras.

Queuing networks
Queuing theory is the key analytical modelling technique used for computer systems
performance analysis. A queue can be considered as a service facility with customers from
some population or source entering to receive some type of this service. The concept of
customer is used in the generic sense and therefore may be a person, a job, an inquiry, a message, a packet, a program, etc. The service facility has one or more servers (entities that provide service to customers in a certain time). If all servers are busy when a customer arrives
at the system, it must join the queue until a server is free. Note, the servers associated with
queues correspond to resources such as CPU, disks and other devices and customers that enter
queues correspond to the elements that constitute the workload of the system itself. Therefore,
a simple queue consists of an arrival process, a buffer where customers await service, and a number of servers which must be retained by each customer for the service period. Queuing theory helps in determining the time that customers spend in the various queues of the system.

Queuing Notation
Kendall's notation is the standard system used to describe and classify a queuing node. A queue
is described as A/S/c/k/m/D where A is the arrival process or interarrival time distribution, S is
the service time distribution, c is the number of servers, k is the system capacity, m is the
population size, and D is the service discipline.
The six parameters identified in Kendall's notation are briefly described below.
1. Arrival Process: If the customers arrive at times t_1, t_2, ..., t_n, the random variables tau_j = t_j - t_{j-1} are called the interarrival times. The most common arrival process is Poisson arrivals, which simply means that the interarrival times are Independent and Identically Distributed (IID) and exponentially distributed.
2. Service Time Distribution: Service time is the time each customer spends at the
terminal. It is common to assume that the service times are random variables, which are
IID. The distribution most commonly used is the exponential distribution.
3. Number of Servers: The terminal room may have one or more terminals, all of which
are considered part of the same queuing system and any terminal can be assigned to any
customer. If all the servers are not identical, they are usually divided into groups of
identical servers with separate queues for each group.
4. System Capacity: The maximum number of customers who can stay may be limited
due to space availability and also to avoid long waiting times. This number is called the
system capacity. In most systems, the capacity is finite. The system capacity includes
those waiting for service as well as those receiving service.
5. Population Size: The total number of potential customers who can ever come to the
system is the population size.
6. Service Discipline: The order in which the customers are served is called the service
discipline. The most common discipline is First Come, First Served (FCFS). Other
disciplines are Last Come, First Served (LCFS), Last Come, First Served with Preempt
and Resume (LCFS-PR).
A queuing network can be formalised as a directed graph in which the nodes are queues, often
called service centres, each representing a resource in the system. Customers representing the
jobs, users or tasks in the system, flow through the model and compete for these resources. The
arcs of the network represent the topology of the system, and together with the routing
specification (e.g. routing probabilities), determine the paths that customers take through the
network.

Petri nets
Petri nets were first introduced by Carl Adam Petri in 1962. A Petri net is a diagrammatic tool to
model concurrency and synchronization in distributed systems. Petri nets are represented with
directed graphs with two types of nodes, places (circles) and transitions (rectangles), and
unidirectional arcs (arrows) between them. Places represent possible states of the system;
transitions are events or actions that cause the change of state; and every arc simply connects
a place with a transition and a transition with a place.
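A toy sketch of the place/transition mechanics just described (the two-place, one-transition net is invented purely for illustration): a transition is enabled when every input place holds at least as many tokens as the corresponding arc weight, and firing moves tokens from input places to output places.

```python
# Minimal place/transition Petri net sketch (illustrative only).
marking = {"p_wait": 2, "p_served": 0}          # tokens currently in each place

# Transition "serve": consumes one token from p_wait, produces one in p_served.
transition = {"inputs": {"p_wait": 1}, "outputs": {"p_served": 1}}

def enabled(t, m):
    """A transition is enabled if every input place has enough tokens."""
    return all(m[p] >= w for p, w in t["inputs"].items())

def fire(t, m):
    """Firing removes tokens from input places and adds them to output places."""
    assert enabled(t, m), "transition not enabled"
    for p, w in t["inputs"].items():
        m[p] -= w
    for p, w in t["outputs"].items():
        m[p] += w

while enabled(transition, marking):
    fire(transition, marking)
    print(marking)
```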




Solving techniques

The formalisms discussed previously have an associated set of analytical techniques used to derive their performance metrics. The solving techniques include mathematical methods
such as analytical methods and Markov chains. In addition to these mathematical methods,
simulation is used for its easy and intuitive understanding.

Analytical methods
Depending on the demand for the resources and the service rate that the customers experience,
contention may arise, leading to the formation of a queue of waiting customers. Besides the number of customers in the queue, other processes of interest could be the actual waiting time of the nth customer in the queue, the busy period, and the output process. These quantities can be obtained by using analytical or simulation techniques, or by transforming the model into its underlying Markov chain.
One of the most successful modelling methods in recent years uses so-called Generalised
Stochastic Petri Nets (GSPN). GSPNs can be solved by using analytical or simulation
techniques, or by transforming them into their underlying Markov chain if this is not too large.

Figure 1: Example of a Petri Net
Markov chains
Markov chain refers to the sequence of random variables such a process moves through, with
the Markov property defining serial dependence only between adjacent periods. It can thus be
used for describing systems that follow a chain of linked events, where what happens next
depends only on the current state of the system. The term is reserved for a process with a
discrete set of times (i.e. a discrete-time Markov chain (DTMC)) although the terminology is
also used to refer to continuous-time Markov chain (CTMC). DTMC or CTMC are state
transition systems in which each transition has an associated probability (in DTMCs) or rate
(in CTMCs). They can therefore be used to model a wide class of concurrent systems that
satisfy the Markov property viz. that the evolution of the system after a given time instant
depends only on the state at that instant and not on any past history.
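As a small illustration of a DTMC (the two-state transition matrix below is hypothetical), the long-run behaviour can be approximated by repeatedly applying the transition matrix until the state distribution stops changing:

```python
import numpy as np

# Hypothetical 2-state DTMC: rows are the current state, columns the next state.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

pi = np.array([1.0, 0.0])        # start in state 0 with probability 1
for _ in range(1000):            # power iteration toward the stationary distribution
    pi = pi @ P

print("approximate stationary distribution:", pi)   # roughly [0.833, 0.167]
```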

Simulation
Simulation implies writing a programme describing the timing behaviour of the system. The
main advantage of a simulation is that it does not have any theoretical limitation and allows us
to study systems that cannot be modelled in such a way that some analytical method could be
applicable. In addition, simulation is very easy to learn and to apply. So, it is a very popular
modelling technique.
Its main disadvantages are the effort (mainly time) spent in developing and debugging the
simulation programme and its execution time.

Practical Hints

How to use modelling for evaluating the system performance?
Very frequently modelling is used to analyse a system that does not exist. In this case it is impossible to validate the model, and it is convenient to proceed by stepwise refinement.
1. Start with a simple model, analytical if possible, because analytical models are easier to debug than simulated ones.
2. Build a simulation model, debug it and check that its results agree with the results of step 1.
3. Include some new mechanisms or refine some of those already represented, debug the new model and check the consistency of the results with those previously obtained.
With these steps, we can refine our model and arrive at a much more accurate representation of the system.

Modelling example

Let us consider a TCP/IP system composed of two ISPs connected by a link at 2 Mbit/s. Each ISP manages 5 email stations and a sixth station delivering files. The links between the stations and the ISP are at 256 Kbit/s. Each email station generates 10 messages/s with a mean size of 1.5 Kbytes, exponentially distributed, with the destination uniform among the other 9 stations. Each file transfer station generates 0.5 file/minute with a mean size of 2 MB, exponentially distributed, with the other file transfer station as destination. It can be assumed that the maximum message size allowed in the network is 1 KB.


An important point for studying this system is to decide the size of the buffers. As we have no
reference, initially we will study the system by means of an analytical model. Taking into
account the symmetry of the system, we will consider just half of it. Also, as we cannot
represent exactly the mechanisms of datagram generation, we will do some approximations:
one (optimistic, Model 1 in Annex) assuming that the messages for transferring the files and
sending the emails are generated at random with a size of 1KB and the other (pessimistic,
Model 2 in Annex), assuming that each file and each email is sent in just one message of its mean size. The queue sizes in KB are the following:

            LINK        ISP         EMAIL       FTP
Mean 1      0.3043      0.1765      0.9231      1.143
Max 1       5           4           9           10
Mean 2      177.75      1.3458      1.4331      2341
Mean 3      129.1       7.1365      3.2328      1705
Losses 4    6.5 x 10^-3 6.7 x 10^-3 1.2 x 10^-6 0.09

Now we have an idea of the sizes to be allocated to the different buffers, and we can build a simulation model with a better representation of the datagram generation procedure. Initially, in order to decrease its difficulty, we will consider that all buffers have infinite capacity (Model 3 in Annex). The main difference between this model and the previous ones is the representation of the load, because now what follows a Poisson arrival is the generation of an email message or a file to be transferred; each file or email is then decomposed into messages of 1 KB and a residual message of at least 128 bytes. The next step will be to include the limited capacity of the buffers (Model 4 in Annex) to evaluate the losses of each service type for some fixed set of buffer sizes. The respective capacities are: 150 at each ISP, 3000 at each LINK, 160 at each EMAIL downloading line and 3000 at each FTP downloading line.

[Figure 2. Case study network: two ISPs connected by a 2 Mbit/s link, each serving email stations (1-5 and 6-10) and a file transfer station (A and B).]
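Before building any queueing model, it can help to sanity-check the offered load implied by the stated traffic parameters. The sketch below is only a back-of-the-envelope check under the stated assumptions (it is not one of the annexed models, and the assumption that roughly five of the nine email destinations are on the remote ISP is my own reading of the scenario):

```python
# Back-of-the-envelope offered load for the case-study parameters.
EMAIL_RATE = 10            # messages per second, per email station
EMAIL_SIZE = 1.5 * 8       # kilobits per message (1.5 KB, approx.)
FTP_RATE = 0.5 / 60        # files per second, per file-transfer station
FTP_SIZE = 2_000 * 8       # kilobits per file (2 MB, approx. 2000 KB)
ACCESS_LINK = 256          # kbit/s station-to-ISP link
ISP_LINK = 2_000           # kbit/s link between the two ISPs

email_load = EMAIL_RATE * EMAIL_SIZE    # kbit/s offered by one email station
ftp_load = FTP_RATE * FTP_SIZE          # kbit/s offered by one file station
print(f"email access-link utilization ~ {email_load / ACCESS_LINK:.0%}")   # ~47%
print(f"ftp   access-link utilization ~ {ftp_load / ACCESS_LINK:.0%}")     # ~52%

# Assumption: of the 9 uniform email destinations, about 5 are on the remote
# ISP, and all file-transfer traffic crosses the inter-ISP link.
cross_traffic = 5 * email_load * (5 / 9) + ftp_load   # kbit/s, one direction
print(f"inter-ISP link utilization ~ {cross_traffic / ISP_LINK:.0%}")      # ~23%
```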

Conclusion

I have analysed the main aspects of the performance modelling of computer networks, i.e. the communication network. I have also analysed two main methods of modelling computer networks and the associated solving techniques. Finally, a simple example has shown the utility of performance modelling and the way to proceed in order to reduce the debugging difficulties.

References

John Daintith. "Performance model." A Dictionary of Computing. 2004. Retrieved May 12, 2014 from Encyclopedia.com: http://www.encyclopedia.com/doc/1O11-performancemodel.html
Ramon Puigjaner. 2003. Performance modelling of computer networks. In Proceedings of the 2003 IFIP/ACM Latin America Conference (LANC '03). ACM, New York, NY, USA, 106-123. DOI=10.1145/1035662.1035672, http://doi.acm.org/10.1145/1035662.1035672
Modeling and Simulation Glossary. Retrieved May 13, 2014 from ACM SIGSIM: www.acm-sigsim-mskr.org/glossary.htm

090805028
MATTHEW OMOLABAKE O.

CSC524 ASSIGNMENT



ANALYSIS OF SIMULATION OUTPUT


MAY 2014

INTRODUCTION
A computer Simulation is the discipline of designing a model of a real life or
hypothetical situation using a computer so that it can be studied to see how the
system works.

The greatest disadvantage of simulation is that you don't get exact answers; results are only estimates. Therefore, careful design and analysis are needed to make the estimates as valid and precise as possible and to interpret their meaning properly. It
is important that the length of the simulation be properly chosen. If the simulation
is too short, the results may be highly variable. On the other hand, if the simulation
is too long, computing resources and manpower may be unnecessarily wasted.

Output analysis is the stage of a simulation study that focuses on the analysis of simulation results. It gives estimates of system performance measures (e.g. cycle time) obtained from the simulation. Statistical methods are used, but it is difficult to apply classical statistical techniques to the analysis of simulation output because simulations almost never produce raw output that is independent and identically distributed normal data. For example,
Customer waiting times from a queuing system:
(1) Are not independent: typically, they are serially correlated. If one customer
at the post office waits in line a long time, then the next customer is also
likely to wait a long time,
(2) Are not identically distributed: customers showing up early in the morning
might have a much shorter wait than those who show up just before the
closing time.
(3) Are not normally distributed: they are usually skewed to the right (and are certainly never less than zero).

The purpose of this analysis is therefore to give methods to perform statistical analysis of simulation output by:
estimating the standard error or a confidence interval, and
figuring out the number of observations required to achieve a desired error.

There are two types of simulations with respect to output analysis: terminating and
non-terminating (steady state). The type of analysis depends on the goal of the
study.

A terminating simulation is one where there is a specific starting and stopping condition that is part of the model, e.g. a bank with an opening time of 8 a.m. and a closing time of 5 p.m. A steady-state simulation is one where there are no specific starting and ending conditions, e.g. an emergency room; here we are interested in the steady-state behaviour of the system.

Terminating simulations
Also known as a transient simulation, it is one that runs for some duration of time T_E, where E is a specified event (or set of events) which stops the simulation. When simulating a terminating system, the initial conditions of the system at time 0 must be specified, and the stopping time or event T_E must be well defined.
Examples:
A bank that operates from 9 a.m. to 4.30 p.m. daily and starts empty and idle at the beginning of each day. The output of interest may be the average wait time of the first 50 customers in the system. Then T_E = 450 minutes.
A military confrontation between a blue force and a red force. The output of interest may be the probability that the red force loses half its strength before the blue force loses half of its strength.
A communication system consisting of several components. Consider the system over a period of time T_E, until the system fails, where E = {A fails, or D fails, or (B and C both fail)}.
The objective of analysis of terminating simulations is to obtain a point estimate
(sample mean) and confidence interval for some parameter (average time in system
for n customers, machine utilization, work-in-progress, etc.). Confidence intervals for terminating simulations are usually computed from independent replications.
Analysis for terminating simulations
Make n independent replications of the model.
Let Y_i be the performance measure from the ith replication, e.g.
  Y_i = average time in the system, or
  Y_i = work-in-progress, or
  Y_i = utilization of a critical facility.
The performance measures from the different replications, Y_1, Y_2, ..., Y_n, are independent and identically distributed, but only one sample is obtained from each replication.
Apply classical statistics to the Y_i's, not to the observations within a run.
Select a confidence level 1 - a (0.90, 0.95, etc.).
An approximate 100(1 - a)% confidence interval for the mean is then

  Ybar(n) +/- t_{n-1, 1-a/2} * S(n) / sqrt(n)

where Ybar(n) = (1/n) * sum of Y_i is an unbiased estimator of the mean,
S^2(n) = (1/(n-1)) * sum of (Y_i - Ybar(n))^2 is an unbiased estimator of Var(Y_i),
the interval covers the true mean with probability approximately (1 - a), and
t_{n-1, 1-a/2} * S(n) / sqrt(n) is the half-width expression.

Example:
Consider the single server (M/M/1) queue. The objective is to calculate a
confidence interval for the delay of customers in the queue.

n = 10 replications of the single-server queue
Y_i = average delay in queue from the ith replication
Y_i's = 2.02, 0.73, 3.20, 6.23, 1.76, 0.47, 3.89, 5.45, 1.44, 1.23

For a 90% confidence interval:
Ybar(10) = 2.64, S^2(10) = 3.96, t_{9,0.95} = 1.833

The approximate 90% confidence interval is

  2.64 +/- 1.833 * sqrt(3.96/10) = 2.64 +/- 1.15, or [1.49, 3.79]
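A minimal sketch of this calculation in code, reproducing the numbers above (the t quantile 1.833 is taken directly from the example rather than computed):

```python
import math
from statistics import mean, stdev

# Average delay in queue from each of the n = 10 independent replications.
y = [2.02, 0.73, 3.20, 6.23, 1.76, 0.47, 3.89, 5.45, 1.44, 1.23]
n = len(y)

ybar = mean(y)                       # point estimate, about 2.64
s = stdev(y)                         # sample standard deviation, s^2 about 3.96
t_9_095 = 1.833                      # t quantile for 90% CI with 9 d.o.f.
half_width = t_9_095 * s / math.sqrt(n)

print(f"{ybar:.2f} +/- {half_width:.2f}  ->  "
      f"[{ybar - half_width:.2f}, {ybar + half_width:.2f}]")   # about [1.49, 3.79]
```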

Interpretation
Approximately 100(1 - a)% of confidence intervals formed in this way will contain the true mean delay; the single interval obtained here is [1.49, 3.79].
Limitations
The analysis outlined above assumes that the distributions are normal or near
normal in form. If the distributions are seriously skewed (e.g Weibull distributions)
then calculated confidence intervals may deviate significantly from the calculated
(1-a) values.
Non-terminating or steady state Simulations
A non-terminating system is a system that runs continuously or at least over a very
long period of time. A steady state simulation is a simulation whose objective is to
study long run or steady state behavior of a non-terminating system.
The stopping time or event, T_E, is determined not by the nature of the problem but rather by the simulation analyst, either arbitrarily or with certain statistical precision in mind.
Examples:
An emergency room
A communication system where service must be provided continuously.
An ATM machine
A manufacturing company that operates 16 hours a day. The system here is a
continuous process where the ending condition for one day is the initial
condition for the next day. The output of interest here may be the expected
long-run daily production
Problems that may arise
How should the simulation be started (initialization at time zero)

How long must it run before data representative of steady state can be
collected
A steady-state condition implies that a simulation has reached a point in time where the state of the model is independent of the initial start-up condition. Before a simulation can be run, one must provide initial values for all of the simulation's state variables. Since the experimenter may not know what initial values are appropriate for the state variables, these values might be chosen somewhat arbitrarily.
For instance, we might decide that it is most convenient to initialize a queue as empty and idle. Such a choice of initial conditions can have a significant but unrecognized impact on the simulation run's outcome.
Thus, the initialization bias problem can lead to errors, particularly in steady-state output analysis.
Examples of problems concerning simulation initialization:
Visual detection of initialization effects is sometimes difficult especially in
the case of stochastic processes having high intrinsic variance such as
queuing systems
How should the simulation be initialized? Suppose that a machine shop closes at a certain time each day, even if there are jobs waiting to be served. One must therefore be careful to start each day with a demand that depends on the number of jobs remaining from the previous day.
Initialization bias can lead to point estimators for steady state parameters
having high mean squared error as well as confidence interval having poor
coverage.
Since it raises important concerns, we have to detect and deal with it. We first list
methods to detect it.
Attempt to detect the bias visually by scanning a realization of the simulated
process. This might not be easy, since visual analysis can miss bias. Further,
a visual scan can be tedious. To make the visual analysis more efficient, one
might transform the data (e.g take logs or square roots), smooth it, average it
across several independent replications etc

Conduct statistical test for initialization bias: Various procedures check to
see if the mean or variance of the process changes over time, e.g. change-point detection methods from the statistical literature.

If initialization bias is detected, one may want to do something about it. Two
simple methods for dealing with bias include
1. Truncate the output by allowing the simulation to warm up before
data are retained for analysis. Experimenter hopes that the remaining
data are representative of the steady state system. But how can one
find a good truncation point? If the output is truncated too early, significant bias might exist in the remaining data. If it is truncated too late, then good observations might be wasted. A reasonable practice is to average observations across several replications and then visually choose a truncation point based on the averaged run (a small sketch of this idea appears after this list).

2. Make a long run to overwhelm the effects of initialization bias. This
method of bias control is conceptually simple to carry out and may
yield point estimators having lower mean squared errors than the
analogous estimators from truncated data. However a problem with
this approach is that it can be wasteful with observations; for some
systems, an excessive run length might be required before the
initialization effects are rendered negligible.
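The sketch below illustrates the averaging-across-replications idea for choosing a truncation point. The synthetic replications and the simple "within tolerance of the second-half mean" rule are illustrative assumptions only, not a prescribed procedure:

```python
import numpy as np

def averaged_run(replications):
    """Average observation j across all replications (Welch-style plot data)."""
    return np.mean(np.asarray(replications), axis=0)

def suggest_truncation(avg, tol=0.05):
    """Crude, illustrative rule: truncate at the first point whose averaged
    value lies within `tol` (relative) of the mean of the second half of the
    run, which is assumed to be close to steady state."""
    steady = avg[len(avg) // 2:].mean()
    for j, v in enumerate(avg):
        if abs(v - steady) <= tol * abs(steady):
            return j
    return len(avg) - 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic output: a warm-up transient decaying toward a steady level of 10.
    reps = [10 + 8 * np.exp(-np.arange(200) / 30) + rng.normal(0, 1, 200)
            for _ in range(5)]
    avg = averaged_run(reps)
    print("suggested truncation point:", suggest_truncation(avg))
```

In practice one would plot the averaged run and choose the truncation point visually rather than rely on a single automatic rule.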

Analysis for steady state simulation
The objective is to estimate the steady-state mean

  v = lim (i -> infinity) E(Y_i)
The basic question here is should you do many short runs or one long run?
Many short runs
The analysis is exactly the same as for terminating systems. The (1 - a)% confidence interval is computed as before. The problem here is that, because of
initial bias, the sample mean may no longer be an unbiased estimator for the
steady state mean.

Advantages
1. Simple analysis, similar to the analysis of terminating systems
2. The data from different replications are independent and identically
distributed
Disadvantage
1. Initial bias is introduced several times

One long run
Make just one long replication so that the initial bias is only introduced
once. This way, you will not be throwing out a lot of data. The problem
here is how you estimate the variance because there is only one run.
Advantages
1. Less initial bias
2. No restarts
Disadvantages
1. Sample size of 1
2. Difficult to get a good estimate of the variance
A number of methodologies have been proposed to address the problems of the one-long-run approach; they include:
1. Batch means: The idea of batch means is to divide one long simulation
run into a number of contiguous batches, and then appeal to a central
limit theorem to assume that the resulting batch sample means are
approximately independent and identically distributed normal
The two important issues here are:
How do we choose the batch size k? We choose k large enough so
that the batch means are approximately uncorrelated.
How many batches n? Due to autocorrelation, splitting the run into
a larger number of smaller batches degrades the quality of each
individual batch. Therefore, 20 to 30 batches are sufficient.


Divide a run of length m into n adjacent batches of length k, where m = nk.
Let Ybar_j be the sample (batch) mean of the jth batch.
The grand sample mean is computed as

  Ybar = (1/n) * sum over j of Ybar_j

The sample variance is computed as

  S^2(n) = (1/(n-1)) * sum over j of (Ybar_j - Ybar)^2

The approximate 100(1 - a)% confidence interval is

  Ybar +/- t_{n-1, 1-a/2} * S(n) / sqrt(n)


2. Standardized time series: One often uses the central limit theorem to
standardize independent and identically distributed random variables into
an (asymptotically) normal random variable.
Schruben and various colleagues generalized this idea in many ways
using a process central limit theorem to standardize a stationary
simulation process into a Brownian bridge process.
Properties of Brownian bridges are then used to calculate a number of good estimators for Var(Ybar_n) and confidence intervals for the mean E(Y_i). This method is easy to apply and has some asymptotic advantages over batch means.

3. Spectral analysis: This approach operates in the so-called frequency domain, whereas batch means uses the time domain.
4. Regeneration: Many simulations can be broken into independent and
identically distributed blocks that probabilistically start over at certain

regeneration points. This method effectively eliminates any initialization
problems but on the other hand it may be difficult to define natural
regeneration points, and extremely long simulation runs are often needed
to obtain a reasonable number of independent and identically distributed
blocks.






















References
Dave Goldsman (May 26, 2010). Simulation Output Analysis.
Analysis of Simulation Experiments. Output-analysis.ppt, from http://www.cs.bc.edu
Output Analysis for a Single Model. Lecture-9.ppt, from http://www.personal.Cityu.edu.hk
Raj Jain (1991). The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling. Wiley Computer Publishing, John Wiley & Sons, Inc.


CSC 524
OGBUAGU NKECHINYERE OGONNA

ANALYSIS OF SIMULATION OUTPUT DATA USING STATISTICAL ANALYSIS

Table of Contents
INTRODUCTION
  What is Simulation?
  Why Do We Simulate?
  What is the Problem of Simulations?
  What is the Aim?
  What may we want to know about the system?
SIMULATION OUTPUT DATA ANALYSIS
  FINITE HORIZON/TERMINATING SIMULATION
    ANALYZING FINITE HORIZON SIMULATION OUTPUT DATA
  STEADY-STATE SIMULATION
    ANALYSIS OF STEADY-STATE OUTPUT DATA
  THE INITIALIZATION PROBLEM
CONCLUSION
REFERENCES


INTRODUCTION
What is Simulation?
Simulation is about learning by doing. It is a (logical or physical) imitation of the operation of a real-world process or system.
Simulations are abstractions of reality which deliberately emphasize one part of reality at the expense of another in order to focus on an important aspect of the system being studied. Simulation can be said to be the application of models to arrive at some outcome.
A simulation study consists of several steps such as data collection, coding and verification, model validation, experimental design, output data analysis, and implementation.
Why Do We Simulate?
The primary purpose of most simulation exercises is to approximate prescribed system parameters with the objective of identifying parameter values that optimize some system performance measures.
What is the Problem of Simulations?
Simulations are known to hardly ever produce raw output that is independent and identically distributed normal data. We can obtain different output data from two different runs of the exact same model.
What is the Aim?
The aim is to give methods to analyze output data from discrete-event computer simulations, because improper analysis can make all results invalid.
What may we want to know about the system?
Average time in system, worst (longest) time in system, average and worst time in queue(s), maximum length of queue(s), etc.

SIMULATION OUTPUT DATA ANALYSIS
As the simulation progresses through time, there are two kinds of processes that are observed:
1. Discrete-time process: this can only be observed as it happens (i.e. the number of observations is a fixed, countable value m). Suppose we have a data set y_1, y_2, ..., y_m, where m is the total number of observations recorded during time T. Here we estimate the expected value of the sample mean of the observations, mu = E[ybar(m)], where

  ybar(m) = (1/m) * sum over i of y_i     (e.g. average time in system)
  y*(m) = max over i = 1, ..., m of y_i   (e.g. maximum time in system)

2. Continuous-time process: this can be observed at any point in time (i.e. the observation is a continuous record {Y(t) | 0 <= t <= T} over a specified time interval [0, T]). For example, y(t) = number of parts in a particular queue at time t. We run the simulation for T units of simulated time and compute

  ybar(T) = (1/T) * integral from 0 to T of y(t) dt   (time-average length of the queue)
  y*(T) = max over 0 <= t <= T of y(t)                (maximum queue length)
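A small sketch of computing these two kinds of statistics. The event record below is an invented toy trace of pairs (event time, queue length just after the event), and the per-customer times are also invented:

```python
# Toy piecewise-constant record of queue length: (event time, level after event).
events = [(0.0, 0), (1.2, 1), (2.0, 2), (3.5, 1), (5.0, 0)]
T = 6.0                                   # total simulated time

# Continuous-time statistics: time-average and maximum queue length.
time_avg = 0.0
for (t0, level), (t1, _) in zip(events, events[1:] + [(T, None)]):
    time_avg += level * (t1 - t0)         # integral of y(t) dt over [t0, t1)
time_avg /= T
max_len = max(level for _, level in events)

# Discrete-time statistics, e.g. per-customer times in system.
waits = [0.8, 1.5, 0.9]                   # invented per-customer observations
print(f"time-average queue length = {time_avg:.3f}, max = {max_len}")
print(f"mean time in system = {sum(waits) / len(waits):.3f}, max = {max(waits)}")
```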

There are two (2) types of simulations with respect to output data analysis.

FINITE HORIZON/TERMINATING SIMULATION:
These are for systems of interest which never reach a steady state and which terminate after a specified, finite time interval [0, T]. Parameters are estimated based on the model's specific initial and stopping conditions.

ANALYZING FINITE HORIZON SIMULATION OUTPUT DATA
Whatever period is chosen for the analysis, the basic procedure is the same. The simulation is run n times, each time using a different random number stream to ensure independent trials.
An unbiased estimator for the mean (e.g. the average time in system) is the sample mean

  ybar(m) = (1/m) * sum over i of y_i

We also need to estimate Var(ybar(m)), but the classical sample-variance estimator

  (1/(m(m-1))) * sum over i of (y_i - ybar(m))^2

may be severely biased for Var(ybar(m)), since the y_i are not necessarily independent and identically distributed random variables, and the unbiasedness of variance estimators follows from the independence of the data, which does not hold within a single simulation run.
Therefore an unbiased estimate of Var(ybar(m)) can be obtained by making n independent replications of the whole simulation, such that each replication consists of m observations. We apply classical statistics to the replication means ybar_j and not to the observations y_i within a simulation run. An approximate 100(1 - a)% confidence interval for mu is then obtained from

  Ybar(n) = (1/n) * sum over j of ybar_j, an unbiased estimator of the mean, and
  S^2(n) = (1/(n-1)) * sum over j of (ybar_j - Ybar(n))^2, an unbiased estimator of Var(ybar_j).

The basic ingredients of the analysis are the performance measures from the different, independent replications. If the number of observations per replication, m, is large enough, a central limit theorem tells us that the replicate sample means are approximately independent and identically distributed normal. We then have

  Ybar(n) +/- t_{n-1, 1-a/2} * S(n) / sqrt(n)

as the confidence interval covering mu with probability approximately 1 - a.

STEADY-STATE SIMULATION:
There is no realistic way in which the model terminates. Here we estimate the normal operation of the system in the long run. If the performance measure of interest is a characteristic of a steady-state distribution of the process, it is a steady-state parameter of the model. This generally does not depend on the initial conditions; therefore, we must ensure that the simulation run is long enough that the effects of the initial conditions are gone.
ANALYSIS OF STEADY-STATE OUTPUT DATA
This is much more difficult than analyzing finite-horizon simulations. Assume we have on hand a stationary simulation output y_1, y_2, ..., y_n and we would like to estimate some parameter of interest mu.
There are a number of methodologies proposed for analyzing the output of a steady-state simulation, but we will look at batch means.
Batch means: This is often used to estimate Var(Ybar(n)) or to calculate confidence intervals for mu. The idea is to divide one long simulation run into a number of contiguous batches, and then appeal to a central limit theorem to assume that the resulting batch sample means are approximately independent and identically distributed normal.
Suppose we partition y_1, y_2, ..., y_n into b non-overlapping contiguous batches, each consisting of m observations: Y_1, ..., Y_m | Y_{m+1}, ..., Y_{2m} | ... | Y_n. The batch mean Z_i is the sample mean of the m observations in batch i:

  Z_i = (1/m) * sum over j = 1, ..., m of Y_{(i-1)m + j}

The batch estimator of Var(Z_i) is

  Vhat = (1/(b-1)) * sum over i of (Z_i - Zbar)^2,  where Zbar = (1/b) * sum over i of Z_i is the grand batch mean.

If the number of observations m in each batch is large enough, the batch means are approximately independent and identically distributed normal data, and the confidence interval for the mean is given as

  Zbar +/- t_{a/2, b-1} * sqrt(Vhat / b)

Problems can come up if there is initialization bias, or if the batch means are not normal or not independent; these problems can usually be solved by increasing m.
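A minimal sketch of the batch-means procedure above. The autocorrelated output series is synthetic, used only to exercise the formulas, and scipy is assumed to be available for the t quantile:

```python
import numpy as np
from scipy.stats import t

def batch_means_ci(y, b=20, alpha=0.10):
    """Split one long run y into b contiguous batches and return the
    batch-means point estimate and 100(1-alpha)% confidence interval."""
    y = np.asarray(y, dtype=float)
    m = len(y) // b                             # observations per batch
    z = y[: b * m].reshape(b, m).mean(axis=1)   # batch means Z_1 ... Z_b
    grand = z.mean()                            # grand batch mean Zbar
    var_hat = z.var(ddof=1)                     # (1/(b-1)) * sum (Z_i - Zbar)^2
    half = t.ppf(1 - alpha / 2, b - 1) * np.sqrt(var_hat / b)
    return grand, (grand - half, grand + half)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic correlated output with true mean 5 (AR(1)-like process).
    y, prev = [], 0.0
    for _ in range(20_000):
        prev = 0.8 * prev + rng.normal()
        y.append(5.0 + prev)
    est, ci = batch_means_ci(y)
    print(f"estimate {est:.3f}, CI ({ci[0]:.3f}, {ci[1]:.3f})")
```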
THE INITIALIZATION PROBLEM
Before we run a simulation, one must provide initial values for all of the simulation's state variables. Since the experimenter may not know what initial values are appropriate for the state variables, these values might be chosen somewhat arbitrarily. For instance, we might decide that it is most convenient to initialize a queue as empty and idle. Such a choice of initial conditions can have a significant but unrecognized impact on the simulation run's outcome. Thus, the initialization bias problem can lead to errors, particularly in steady-state output analysis.
If initialization bias is detected, one may want to do something about it. Two simple methods for dealing with bias are:
(a) Truncate the output by allowing the simulation to warm up before data are retained for analysis; the experimenter hopes that the remaining data are representative of the steady-state system. Output truncation is probably the most popular method for dealing with initialization bias, and all of the major simulation languages have built-in truncation functions.
The problem with this is that if the output is truncated too early, significant bias might still exist in the remaining data. If it is truncated too late, then good observations might be wasted. A reasonable practice is to average observations across several replications and then visually choose a truncation point based on the averaged run.
(b) Make a very long run to overwhelm the effects of initialization bias. This method of bias control is conceptually simple to carry out and may yield point estimators having lower mean squared errors than the analogous estimators from truncated data.
However, a problem with this approach is that it can be wasteful with observations; for some systems, an excessive run length might be required before the initialization effects are rendered negligible.
CONCLUSION
Simulation output data analysis is most useful in comparing competing systems or alternative configurations of a system. There are many techniques that can be used: (i) classical statistical confidence intervals, (ii) common random numbers, (iii) antithetic variates, and (iv) ranking and selection procedures.
The future holds more for us:
Use of more sophisticated variance estimators
Automated sequential run-control procedures that control for initialization bias and deliver valid confidence intervals of specified length
Change-point detection algorithms for initialization bias tests
Incorporating combinations of variance reduction tools
Multivariate confidence intervals
Better ranking and selection techniques

REFERENCES
1. http://www2.isye.gatech.edu/~sman/courses/Mexico2010; Module 09, OutputAnalysis_100526.pdf
2. http://www.cyut.edu.tw/~hchorng/downdata/1st/SS9_Output%20Analysis.pdf; Output Data Analysis for a Single System
3. http://minitorn.tlu.ee/~jaagup/uk/ds/chp11/CHAP11A.HTM; Chapter 11, Simulation Output Analysis
4. http://www.ist.ucf.edu/background.htm; Simulation and Its Types

CSC 524
Systems Performance and Evaluation
Okoro Ugochukwu Christian
090805043
Dr A. P Adewole
May 6, 2014

Modelling Computer Systems Networks

Introduction
Computer networks have become essential to the survival of businesses, organizations, and
educational institutions, as the number of network users, services, and applications has increased
alongside advancements in information technology. Given this, efforts have been put forward by
researchers, designers, managers, analysts, and professionals to optimize network performance and
satisfy the varied groups that have an interest in network design and implementation. The
optimization of network performance to satisfy groups of people or organizations that have interest
in network design and implementation can only be achieved by careful study and/or observation of
network infrastructures. For this to happen, professionals and analysts have to model and simulate
network infrastructures. Thus the importance of modelling computer systems networks.
In this write-up, I discuss the different applications of modelling and simulation in the design of
networked environments, the network modelling life cycle and particular considerations when
modelling network infrastructures. Before I go into details I would like to clear up the common
misconception of modelling and simulation being single entity. Modelling and simulation are
distinctly different as:
A model is logical representation of a complex entity, system, phenomena, or process. In the
context of communications and networking, a model often an analytical representation of
some phenomena.
A simulation is an imitation of a complex entity, system, phenomena, or process meant to
reproduce behaviour. . Within the context of a communications network, a simulation is most
often computer software that to some degree of accuracy functionally reproduces the
behaviour of the real entity or process often through the employment of one or more models
over time.
In the area of computer network design and optimization, software simulators (network simulators)
are a valuable tool given today's complex networks and their protocols, architectures, and dynamic
topologies. With these simulators, systems performance analysts can carry out performance-related
studies without the trial and error burdens of hardware implementations. A typical network model
or simulator can provide an engineer, programmer, or analyst with the abstraction of multiple threads
of control.

Network Modelling and Simulation Process
The schematic diagram below shows refinements of a block diagram describing the typical phases in
the modelling and simulation process.




[Block diagram of the modelling and simulation process, with blocks:
1. Physical system (existing or proposed)
2. Conceptual modelling
3. Mathematical modelling of the conceptual model
4. Computer programming of the discrete model
5. Discretization and algorithm selection for the mathematical model
6. Numerical solution of the computer program model
7. Representation of the numerical solution]

Within the diagram above,
Block 1 represents determination of the actual system to be modelled and simulated.
Blocks 2 and 3 constitute modelling.
Blocks 4 and 5 constitute model implementation.
Block 6 represents the actual simulation execution.
Block 7 represents analysis of simulation data.

Developing Models
The modelling process shown in blocks 2 and 3 frequently requires making assumptions and approximations to reduce the model's complexity or to simplify the model. These activities can be
carried out in two levels:
1. Modelling: At this level, the functional description of network elements is simplified. For example, we may assume that packet transmission is error free, or we might consider
transmission channel contention to be negligible. Such simplifications generally fall into 3
categories namely:
a. System modelling: Simplifications in this category occur at the highest level of
description of the interaction between elements of the simulation.
b. Element/Device modelling: Simplifications occur at the level of description of the
behaviour within a single element of the simulation.
c. System models and element models: these are frequently expressed with reference to random processes.
2. Performance Evaluation: Here, the measurements being made of the simulation are simplified in order to provide less precise but more useful estimates of the system's behaviour.
The modelling and simulation life cycle elaborates further on this process:

[Figure: the modelling and simulation life cycle]
A number of network modelling and simulation tools exist. Some of which include:
REAL
INSANE
NetSim
Maisie
OPNET
SimJava
Network simulator (ns2)
Fast ns2 simulator
Simulink
The OPNET modelling and simulation tool shall be used as a case study in this write-up.
OPNET (Optimized Network Engineering Tool) is an object-oriented simulation environment that meets all the requirements of a network modelling tool and is among the most powerful general-purpose network simulators available. OPNET's comprehensive analysis tool is especially suited to interpreting and synthesizing output data. A discrete-event simulation of call and routing signalling was developed using a number of OPNET's unique features, such as the dynamic allocation of processes
to model virtual circuits transiting through an ATM switch. Moreover, its built-in Proto-C language
support gives it the ability to realize almost any function and protocol. OPNET provides a
comprehensive development environment for the specification, simulation, and performance analysis
of communication networks. A large range of communication systems from a single LAN to global
satellite networks can be supported. Discrete-event simulations are used as the means of analysing
system performance and behaviour. Key features of OPNET include:
o Modelling and simulation cycle: OPNET provides powerful tools to assist users in going through three of the five phases in a design cycle (i.e., the building of models, the execution of a simulation, and the analysis of the output data).
o Hierarchical modelling: OPNET employs a hierarchical structure to modelling. Each
level of the hierarchy describes different aspects of the complete model being simulated.
o Specialized in communication networks: Detailed library models provide support for
existing protocols and allow researchers and developers to either modify these existing models or
develop new models of their own.
o Automatic simulation generation: OPNET models can be compiled into executable code. An
executable discrete-event simulation can be debugged or simply executed, resulting in output data.
This sophisticated package comes complete with a range of tools that allow developers to specify
models in detail, identify the elements of the model of interest, execute the simulation, and analyse
the generated output data.
OPNET follows a generic approach to network modelling by using the OSI Reference model as its
basis, as shown in the figure below.

This approach allows the implementation of different network protocols which are compatible with
the OSI layer boundaries. OPNET models are composed of three primary model layers namely
1. Process layer
This is the lowest modelling layer. It uses a state transition diagram for the generation of
packets.


2. Node Layer
Each element in the Node model is either a predefined OPNET artefact or defined by its own
STD.
3. Network Layer
This is the highest modelling layer in the OPNET model hierarchy. This model may represent a hierarchy of subnetworks; it may be used to model a single network, subnet, or segment.
References
1. http://en.wikipedia.com
2. http://www.opnet.com/
3. Mohsen Guizani, Ammar Rayes, Bilal Khan, Ala Al-Fuqaha. Network Modeling and Simulation: A Practical Perspective.
4. Raj Jain. The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling.

NAME: OPEOLUWA, JOSEPH O.

MATRIC: 090805048

COURSE TITLE: SYSTEM
PERFORMANCE EVALUATION

COURSE CODE: CSC524

LEVEL: 500

LECTURER: DR. A.P. ADEWOLE



QUEUEING NETWORKS
WHAT IS A QUEUEING NETWORK?
A model in which jobs departing from one queue arrive at another queue (or possibly the
same queue)
There are a number of systems that consist of several queues. A job may receive service at
one or more queues before exiting from the system. Such systems are modeled by queueing
networks.
OPEN AND CLOSED QUEUEING NETWORKS
OPEN QUEUEING NETWORKS
An open queueing network has external arrivals and departures. The jobs enter the system at
In and exit at Out. The number of jobs in the system varies with time. In analyzing an
open system, we assume that the throughput is known (to be equal to the arrival rate), and the
goal is to characterize the distribution of number of jobs in the system.
CLOSED QUEUEING NETWORKS
A closed queueing network has no external arrivals or departures. The jobs in the system
keep circulating from one queue to the next. The total number of jobs in the system is
constant. It is possible to view a closed system as a system where the Out is connected back
to the In. The jobs exiting the system immediately reenter the system.
The flow of jobs in the Out-to-In link defines the throughput of the closed system. In
analyzing a closed system, we assume that the number of jobs is given, and we attempt to
determine the throughput (or the job completion rate).
It is also possible to have mixed queueing networks that are open for some workloads and
closed for others.
A SIMPLE QUEUEING NETWORK CONSISTING OF M/M/1 QUEUES IN SERIES
If the arrival rate is lambda and the service rate of the ith server is mu_i, the utilization of the ith server is rho_i = lambda / mu_i, and the probability of n_i jobs in the ith queue is

  P(n_i) = (1 - rho_i) * rho_i^(n_i)

The joint probability of the queue lengths of the M queues can be computed simply by multiplying the individual probabilities. For example:

  p(n_1, n_2, ..., n_M) = (1 - rho_1) rho_1^(n_1) * (1 - rho_2) rho_2^(n_2) * ... * (1 - rho_M) rho_M^(n_M)
                        = p_1(n_1) p_2(n_2) ... p_M(n_M)

This queueing network is therefore a product form network. In general, the term applies to any queueing network in which the expression for the equilibrium probability has the following form:

  p(n_1, n_2, ..., n_M) = (1/G(N)) * product over i = 1, ..., M of f_i(n_i)

where f_i(n_i) is some function of the number of jobs at the ith facility, and G(N) is a normalizing constant that is a function of the total number of jobs in the system.
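A short sketch of this product-form calculation for a tandem of M/M/1 queues. The arrival rate, service rates and the particular states evaluated are arbitrary illustrative values:

```python
# Joint state probability for M M/M/1 queues in series (product form).
lam = 2.0                     # arrival rate into the tandem
mu = [3.0, 4.0, 5.0]          # service rate of each of the M = 3 servers
rho = [lam / m_i for m_i in mu]          # utilizations rho_i = lam / mu_i

def p_state(n):
    """P(n_1, ..., n_M) = product over i of (1 - rho_i) * rho_i ** n_i."""
    prob = 1.0
    for r, n_i in zip(rho, n):
        prob *= (1.0 - r) * r ** n_i
    return prob

print(p_state((0, 0, 0)))     # probability the whole tandem is empty (0.1 here)
print(p_state((2, 1, 0)))     # probability of 2, 1 and 0 jobs at the servers
```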
Baskett, Chandy, Muntz, and Palacios (1975) showed that product form solutions exist for an
even broader class of networks. This class consists of networks satisfying the following
criteria:
1. Service Disciplines: All service centers have one of the following four types of service
disciplines:
First Come, First Served (FCFS), Processor Sharing (PS), Infinite Servers (ISs or delay
centers), and Last Come, First Served Preemptive Resume (LCFS-PR).
2. Job Classes: The jobs belong to a single class while awaiting or receiving service at a
service center but may change classes and service centers according to fixed probabilities at
the completion of a service request.
3. Service Time Distributions: At FCFS service centers, the service time distributions must be
identical and exponential for all classes of jobs. At other service centers, where the service
times should have probability distributions with rational Laplace transforms, different classes
of jobs may have different distributions.
4. State-dependent Service: The service time at a FCFS service center can depend only on the
total queue length of the center. The service time for a class at PS, LCFS-PR, and IS centers
can also depend on the queue length for that class, but not on the queue length of other
classes. Moreover, the overall service rate of a subnetwork can depend on the total number of
jobs in the subnetwork.
5. Arrival Processes: In open networks, the time between successive arrivals of a class should be exponentially distributed. No bulk arrivals are permitted. The arrival rates may be state dependent. A network may be open with respect to some classes of jobs and closed with respect to other classes of jobs.
Networks satisfying these criteria are referred to as BCMP networks after the authors of the
criteria.
Denning and Buzen (1978) further extended the class of product form networks to non-
Markovian networks with the following conditions:
1. Job Flow Balance: For each class, the number of arrivals to a device must equal the
number of departures from the device.
2. One-Step Behavior: A state change can result only from single jobs entering the system,
moving between pairs of devices in the system, or exiting from the system. This assumption
asserts that simultaneous job moves will not be observed.
3. Device Homogeneity: A device's service rate for a particular class does not depend on the state of the system in any way except for the total device queue length and the designated class's queue length.
This assumption implies the following:
(a) Single-Resource Possession: A job may not be present (waiting for service or receiving
service) at two or more devices at the same time.
(b) No Blocking: A device renders service whenever jobs are present; its ability to render service is not controlled by any other device.
(c) Independent Job Behavior: Interaction among jobs is limited to queueing for physical
devices; for example, there should not be any synchronization requirements.
(d) Local Information: A device's service rate depends only on the local queue length and not on
the state of the rest of the system.
(e) Fair Service: If service rates differ by class, the service rate for a class depends only on
the queue length of that class at the device and not on the queue lengths of other classes. This
means that the servers do not discriminate against jobs in a class depending on the queue
lengths of other classes.
(f) Routing Homogeneity: The job routing should be state independent. In the last condition, the term routing is used to denote a job's path in the network. The routing homogeneity
condition implies that the probability of a job going from one device to another device does
not depend upon the number of jobs at various devices.
QUEUEING NETWORK MODELS OF COMPUTER SYSTEMS
Two of the earliest queueing models of computer systems are the machine repairman model and the central server model. The machine repairman model, as the name implies, was
originally developed for modeling machine repair shops. It has a number of working
machines and a repair facility with one or more servers (repairmen). Whenever a machine
breaks down, it is put in the queue for repair and serviced as soon as a repairman is available.
In computer systems modeling, we encounter three kinds of devices. Most devices have a
single server whose service time does not depend upon the number of jobs in the device.
Such devices are called fixed-capacity service centers. For example, the CPU in a system
may be modeled as a fixed-capacity service center.
Then there are devices that have no queueing, and jobs spend the same amount of time in the
device regardless of the number of jobs in it. Such devices can be modeled as a center with
infinite servers and are called delay centers or IS (infinite server). A group of dedicated
terminals is usually modeled as a delay center.
Finally, the remaining devices are called load-dependent service centers since their service
rates may depend upon the load or the number of jobs in the device. An M/M/m queue (with m >= 2) is an example of a load-dependent service center. Its total service rate increases as
more and more servers are used. A group of parallel links between two nodes in a computer
network is an example of a load-dependent service center.



NAME: OYEWOLE, Mopelola O.
MATRIC NO.: 090805054
DEPARTMENT: COMPUTER SCIENCES
COURSE: CSC 524
ASSIGNMENT: PERFORMANCE MODELING
LECTURER: ADEWOLE, A. P. (DR.)

PERFORMANCE MODELING
Performance modeling is a structured and repeatable approach to modeling the performance of our software. It begins during the early stages of application design and continues throughout the application life cycle.
The goal of performance modeling is to gain understanding of a computer system's
performance on various applications, by means of measurement and analysis, and then to
wrap up these characteristics in a compact formula. The resulting model can be used to gain
greater understanding of the performance phenomena involved and to project performance to
other system/application combinations.
Application scenarios and performance objectives are identified when performance models
are created. Our performance objectives are our measurable criteria, such as response time,
throughput (how much work in how much time), and resource utilization (CPU, memory,
disk I/O, and network I/O). We break down our performance scenarios into steps and assign
performance budgets. Our budget defines the resources and constraints across our
performance objectives.
Upfront performance modeling is not a replacement for scenario-based load testing or
prototyping to validate our design. In fact, we have to prototype and test to determine what
things cost and to see if our plan makes sense. Data from our prototypes can help us evaluate
early design decisions before implementing a design that will not allow us to meet our
performance goals.
Why Do We Model Performance?
A performance model provides a path to discover what we do not know. The benefits of
performance modeling include the following:
Performance becomes a feature of our development process and not an afterthought.
Modeling helps answer the question "Will our design support our performance
objectives?" We can evaluate our tradeoffs earlier in the life cycle before we actually
build and analyze models.
We know explicitly what design decisions are influenced by performance and the
constraints performance puts on future design decisions. If these decisions are not
captured, it can lead to maintenance efforts that work against our original goals.

Surprises are avoided in terms of performance when our application is released into
production.
We end up with a document of itemized scenarios that help us to quickly see what is
important. That translates to where to instrument, what to test for, and how to know
whether we are trending toward or away from the performance goals throughout our
application life cycle.
Modeling allows us to evaluate our design before investing time and resources to implement
a flawed design. Having the processing steps for our performance scenarios laid out enables
us to understand the nature of our application's work. We can make more informed decisions
by knowing the nature of this work and the constraints affecting that work.
Our model can reveal the following about our application:
The relevant code paths and how they affect performance.
Where the use of resources or computations affect performance.
The most frequently executed code paths. This helps us identify where to spend time
tuning.
The key steps that access resources and lead to contention.
Where our code is in relation to resources (local, remote).
The tradeoffs we have made for performance.
The components that have relationships to other components or resources.
Where our synchronous and asynchronous calls are.
What our I/O-bound work and CPU-bound work are.
And the model can reveal the following about our goals:
What the priority and achievability of different performance goals are.
Where our performance goals have affected design.
Risk Management
The time, effort, and money we invest up front in performance modeling should be
proportional to project risk. For a project with significant risk, where performance is critical,
we may spend more time and energy up front developing our model. Our modeling
approach might be as simple as white-boarding our performance scenarios for a project where
performance is less of a concern.

Budget
Performance modeling is essentially a "budgeting" exercise. Budget represents our
constraints and enables us to specify how much we can spend (resource-wise) and how we
plan to spend it. Constraints govern our total spending, and then we can decide where to
spend to get to the total. We assign budget in terms of response time, throughput, latency, and
resource utilization.
Performance modeling does not need to involve a lot of up-front work. In fact, it should be
part of what we already do. We can even use a whiteboard to quickly capture the key
scenarios and break them down into component steps to get started.
If we know our goals, we can quickly assess whether our scenarios and steps are within range,
or whether we need to change our design to accommodate the budget. If we do not know our
goals (particularly for resource utilization), we need to define our baselines. Either way, it is not
long before we can start prototyping and measuring to get some data to work with.
What Must Be Known
Performance models are created in document form by using the tool of our choice (a simple
Word document works well). The document becomes a communication point for other team
members. The performance model contains a lot of key information, including goals, budgets
(time and resource utilization), scenarios, and workloads. Use the performance model to play
out possibilities and evaluate alternatives, before committing to a design or implementation
decision. We need to measure to know the cost of our tools. For example, how much will a
certain API cost us?
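To make this concrete, the sketch below (Python) shows one way to measure what a single call costs before assigning it a slice of the budget. The parse_order() function and its payload are hypothetical stand-ins for whatever API we are evaluating, not anything from the original text.

# A minimal sketch of measuring what a call "costs" before budgeting for it.
# parse_order() is a hypothetical stand-in for the API under evaluation.
import json
import timeit

SAMPLE = json.dumps({"id": 1, "items": [{"sku": "A-1", "qty": 2}] * 50})

def parse_order(payload: str) -> dict:
    """Hypothetical API under evaluation: deserialize an order document."""
    return json.loads(payload)

runs = 1_000
total = timeit.timeit(lambda: parse_order(SAMPLE), number=runs)
print(f"mean cost per call: {total / runs * 1e6:.1f} microseconds")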
Best Practices
We should consider the following best practices when creating performance models:
Determine response time and resource utilization budgets for our design.
Identify our target deployment environment.
Do not replace scenario-based load testing with performance modeling, for the
following reasons:
o Performance modeling suggests which areas should be worked on but cannot
predict the improvement caused by a change.

o Performance modeling informs the scenario-based load testing by providing
goals and useful measurements.
o Modeled performance may ignore many scenario-based load conditions that
can have an enormous impact on overall performance.
The Performance Model Information
The information in the performance model is divided into different areas. Each area focuses
on capturing one perspective. Each area has important attributes that help us execute the
process. Table 1 shows the key information in the performance model.
CATEGORY DESCRIPTION
Application description The design of the application in terms of its layers and its target
infrastructure.
Scenarios Critical and significant use cases, sequence diagrams, and user
stories relevant to performance.
Performance Objectives Response time, throughput, resource utilization.
Budgets Constraints we set on the execution of use cases, such as
maximum execution time and resource utilization levels,
including CPU, memory, disk I/O, and network I/O.
Measurements Actual performance metrics from running tests, in terms of
resource costs and performance issues.
Workload Goals Goals for the number of users, concurrent users, data volumes,
and information about the desired use of the application.
Baseline Hardware Description of the hardware on which tests will be run in terms
of network topology, bandwidth, CPU, memory, disk, and so
on.
Table 1: Information in the Performance Model





Other elements of information we might need include those shown in Table 2.
CATEGORY DESCRIPTION
Quality-of-Service (QoS)
Requirements
QoS requirements, such as security, maintainability, and
interoperability, may impact our performance. We should have
an agreement across software and infrastructure teams about
QoS restrictions and requirements.
Workload Requirements Total number of users, concurrent users, data volumes, and
information about the expected use of the application.
Table 2: Other Information you might need
Inputs
A number of inputs are required for the performance modeling process. These include initial
(maybe even tentative) information about the following:
Application design and target infrastructure and any constraints imposed by the
infrastructure.
Scenarios and design documentation about critical and significant use cases.
QoS requirements and infrastructure constraints, including service level agreements
(SLAs).
Workload requirements derived from marketing data on prospective customers.
Outputs
The output from performance modeling is the following:
A performance model document.
Test cases with goals.
Performance Model Document
The performance model document may contain the following:
Performance objectives.
Budgets.
Workloads.
Itemized scenarios with goals.
Test cases with goals.

An itemized scenario is a scenario that we have broken down into processing steps. For
example, an order scenario might include authentication, order input validation, business
rules validation, and orders being committed to the database. The itemized scenarios include
assigned budgets and performance objectives for each step in the scenario.
Test Cases with Goals
We use test cases to generate performance metrics. They help to validate our application
against performance objectives. Test cases help us to determine whether we are trending
toward or away from our performance objectives.
Process
The performance modeling process model is summarized in Table 3.
Performance Modeling Process
1. Identify Key Scenarios
2. Identify Workloads
3. Identify Performance Objectives
4. Identify Budget
5. Identify Processing Steps
6. Allocate Budget
7. Evaluate
8. Validate
Table 3: Eight Step Performance Model
The performance modeling process involves the following steps:
1. Identify Key Scenarios
We are to identify scenarios where performance is important and scenarios that pose the most
risk to our performance objectives.
2. Identify Workload
We are to identify how many users and concurrent users our system needs to support.
3. Identify Performance Objectives
Define performance objectives for each of our key scenarios. Performance objectives reflect
business requirements.


4. Identify Budget
We are to identify our budget or constraints. This includes the maximum execution time in
which an operation must be completed and resource utilization constraints, such as CPU,
memory, disk I/O, and network I/O.
5. Identify Processing Steps
Break down our key scenarios into component processing steps.
6. Allocate Budget
Spread our budget (determined in Step 4) across our processing steps (determined in Step 5)
to meet our performance objectives (defined in Step 3).
7. Evaluate
Evaluate our design against objectives and budget. We may need to modify our design or
spread our response time and resource utilization budget differently to meet our performance
objectives.
8. Validate
We need to validate our model and estimates. This is an ongoing activity and includes
prototyping, assessing, and measuring.
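As an illustration of steps 4 to 7, the following sketch (Python) spreads an assumed response-time budget for one itemized scenario across its processing steps and compares it with assumed measurements. All step names and numbers are hypothetical, not values from the text.

# A minimal sketch of allocating a response-time budget across the processing
# steps of one itemized scenario and evaluating measurements against it.

SCENARIO_OBJECTIVE_MS = 2000  # performance objective for the "place order" scenario

budget_ms = {                 # step 6: spread the budget over the processing steps
    "authenticate user": 200,
    "validate order input": 150,
    "apply business rules": 350,
    "commit order to database": 800,
    "render confirmation": 500,
}

measured_ms = {               # numbers that would come from prototyping and tests
    "authenticate user": 180,
    "validate order input": 120,
    "apply business rules": 600,
    "commit order to database": 750,
    "render confirmation": 400,
}

assert sum(budget_ms.values()) <= SCENARIO_OBJECTIVE_MS, "budget exceeds objective"

for step, allowed in budget_ms.items():
    actual = measured_ms[step]
    status = "OK" if actual <= allowed else "OVER BUDGET"
    print(f"{step:28s} budget={allowed:4d}ms measured={actual:4d}ms {status}")

print(f"scenario total: {sum(measured_ms.values())}ms "
      f"(objective {SCENARIO_OBJECTIVE_MS}ms)")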
Summary
Beginning performance modeling early exposes key issues to us, allows us to quickly see
places to make tradeoffs in design, and helps us to identify where to spend our efforts. A
practical step in the right direction is to simply capture our key scenarios and break them
down into logical operations or steps. Most importantly, we identify our performance goals
such as response time, throughput, and resource utilization with each scenario.
We need to know our budgets in terms of how much CPU, memory, disk I/O, and network
I/O our application is allowed to consume and be prepared to make tradeoffs at design time,
such as using an alternative technology or remote communication mechanism.

By adopting a proactive approach to performance management and adopting a performance
modeling process, we are able to address the following:
Performance becomes a feature of our development process and not an afterthought.
We evaluate our tradeoffs earlier in the life cycle based on measurements.
Test cases show whether we are on or off track from the performance objectives,
throughout our application life cycle.





















References
1. www.google.com
2. Microsoft Developer Network: Performance Modeling
3. Performance Modeling: Understanding the Past and Predicting the Future, by David H. Bailey and Allan Snavely


CSC524
ANALYSISOFASINGLE
SERVERQUEUEANDQUEUE
NETWORKS

NAME:ADEYEMIMONSURATADEOLA
MATRICNO:100805008

MAY2014
Analysis of a Single Server Queue.

Queuing analysis is one of the most important tools for those involved with computer and
network analysis. It can be used to provide approximate answers to a host of questions, such
as:
What happens to file retrieval time when disk I/O utilization goes up?
Does response time change if both processor speed and the number of users on the
system are doubled?
How many lines should a time-sharing system have on a dial-in rotary?
How many terminals are needed in an online inquiry center, and how much idle time
will the operators have?
The number of questions that can be addressed with a queuing analysis is endless and
touches on virtually every area in computer science. The ability to make such an analysis is
an essential tool for those involved in this field.
Although the theory of queuing is mathematically complex, the application of queuing
theory to the analysis of performance is, in many cases, remarkably straightforward. A
knowledge of elementary statistical concepts (means and standard deviations) and a basic
understanding of the applicability of queuing theory is all that is required. Armed with
these, the analyst can often make a queuing analysis on the back of an envelope using
readily available queuing tables, or with the use of simple computer programs that occupy
only a few lines of code.

The Single Server Queue
The simplest queuing model is one that has only one queue. Such a model can be used to
analyse individual resources in computer systems. The central element of the system is a
server, which provides some service to items. Items from some population of items arrive
at the system to be served. If the server is idle, an item is served immediately. Otherwise,
an arriving item joins a waiting line. When the server has completed serving an item, the
item departs. If there are items waiting in the queue, one is immediately dispatched to the
server. The server in this model can represent anything that performs some function or
service for a collection of items. For example, if all jobs waiting for the CPU in a system are
kept in one queue, the CPU can be modelled using results that apply to single queues.
Assumptions on the Single Server Queue
1. Item population: Typically, we assume an infinite population. This means that the arrival
rate is not altered by the loss of population. If the population is finite, then the population
available for arrival is reduced by the number of items currently in the system; this would
typically reduce the arrival rate proportionally.

2. Queue size: Typically, we assume an infinite queue size. Thus, the waiting line can grow
without bound. With a finite queue, it is possible for items to be lost from the system. In
practice, any queue is finite. In many cases, this will make no substantive difference to the
analysis.

3. Dispatching discipline: When the server becomes free, and if there is more than one
item waiting, a decision must be made as to which item to dispatch next. The simplest

approach is first-in, first-out; this discipline is what is normally implied when the term queue
is used.
Another possibility is last-in, first-out. One that you might encounter in practice is a
dispatching discipline based on service time. For example, a packet-switching node may
choose to dispatch packets on the basis of shortest first (to generate the most outgoing
packets) or longest first (to minimize processing time relative to transmission time).
Unfortunately, a discipline based on service time is very difficult to model analytically.




A convenient notation has been developed for summarizing the principal assumptions that
are made in developing a queuing model. The notation is X/Y/N, where X refers to the
distribution of the interarrival times, Y refers to the distribution of service times, and N
refers to the number of servers. The most common distributions are denoted as follows:
G = general independent arrivals or service times
M = negative exponential distribution
D = deterministic arrivals or fixed length service.
Thus, M/M/1 refers to a single-server queuing model with Poisson arrivals and exponential
service times.

Type of Stochastic Process in a Single Server Queue
Birth-Death Process
A birth-death process is useful in modelling systems in which jobs arrive one at a time (and
not as a batch). The state of such a system can be represented by the number of jobs k in
the system.
The arrival of a new job changes the state to k + 1. This is called a birth. Similarly, the departure
of a job changes the system state to k − 1. This is called a death. The number of jobs in such
a system can therefore be modelled as a birth-death process.



Fig 2: State transition diagram for a Birth-Death Process
The state transition diagram of a birth-death process is shown in Figure 2. When the system
is in state k, it has k jobs in it. New arrivals take place at a rate λ_k, and the service rate is
μ_k. We assume that both the interarrival times and service times are exponentially
distributed.
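A minimal sketch (Python) of how the steady-state probabilities of such a birth-death process can be computed from the rates λ_k and μ_k. Truncating the state space at an assumed maximum is my own simplification; with constant rates the result reduces to the familiar M/M/1 geometric distribution.

# Steady-state probabilities of a birth-death process:
#   p_k = p_0 * prod_{i=0..k-1} lambda_i / mu_{i+1}
# The truncation at max_k is an assumption so the normalizing sum stays finite.

def birth_death_probs(lam, mu, max_k=50):
    """lam(k): birth (arrival) rate in state k; mu(k): death (service) rate in state k."""
    unnormalized = [1.0]
    for k in range(1, max_k + 1):
        unnormalized.append(unnormalized[-1] * lam(k - 1) / mu(k))
    total = sum(unnormalized)
    return [p / total for p in unnormalized]

# With constant rates this is the M/M/1 geometric distribution (rho = 0.8 here).
probs = birth_death_probs(lam=lambda k: 4.0, mu=lambda k: 5.0)
print("P(0 jobs) ~", round(probs[0], 3))                                  # ~ 1 - rho = 0.2
print("mean jobs ~", round(sum(k * p for k, p in enumerate(probs)), 2))   # ~ rho/(1-rho) = 4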

Formulas of a Single Server Queue
Table 3a provides some equations for single server queues that follow the M/G/1 model.
That is, the arrival rate is Poisson and the service time is general. Making use of a scaling
factor, A, the equations for some of the key output variables are straightforward. Note that
the key factor in the scaling parameter is the ratio of the standard deviation of service time
to the mean. No other information about the service time is needed. Two special cases are
of some interest. When the standard deviation is equal to the mean, the service time
distribution is exponential (M/M/1).
This is the simplest case and the easiest one for calculating results. Table 3b shows the
simplified versions of equations for the standard deviation of r and Tr, plus some other
parameters of interest. The other interesting case is a standard deviation of service time
equal to zero, that is, a constant service time (M/D/1). The corresponding equations are
shown in Table 3c.
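Since the tables themselves are not reproduced here, the sketch below (Python) computes the corresponding quantities from the Pollaczek–Khinchine result for M/G/1; M/M/1 and M/D/1 fall out as the special cases where the coefficient of variation of service time is 1 and 0. The utilizations and service time used are illustrative assumptions.

# Single-server queue formulas based on the Pollaczek-Khinchine result for M/G/1.
# Cs = sigma_Ts / Ts is the coefficient of variation of the service time.

def mg1(rho, Ts, Cs):
    """Return (mean residence time Tr, mean number in system r) for an M/G/1 queue."""
    assert 0 <= rho < 1, "utilization must be below 1 for a steady state"
    A = (1 + Cs ** 2) / 2                   # scaling factor driven by service-time variability
    Tr = Ts + (rho * Ts * A) / (1 - rho)    # waiting time plus service time
    r = rho + (rho ** 2) * A / (1 - rho)    # mean number in system (Little's law applied to Tr)
    return Tr, r

Ts = 1.0                                    # mean service time (illustrative)
for rho in (0.5, 0.8):
    for label, Cs in (("M/M/1", 1.0), ("M/D/1", 0.0)):
        Tr, r = mg1(rho, Ts, Cs)
        print(f"{label} rho={rho}: Tr={Tr:.2f}, r={r:.2f}")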


Figures 4 and 5 plot values of average queue size and residence time versus utilization for
three values of σTs/Ts. This latter quantity is known as the coefficient of variation, and
gives a normalized measure of variability. Note that the poorest performance is exhibited
by the exponential service time, and the best by a constant service time. Usually, one can
consider the exponential service time to be a worst case. An analysis based on this
assumption will give conservative results. This is nice, because tables are available for the
M/M/1 case and values can be looked up quickly.
What value of σTs/Ts is one likely to encounter? We can consider four regions:
- Zero: This is the rare case of constant service time. For example, if all transmitted
messages are of the same length, they would fit this category.




- Ratio less than 1: Because this ratio is better than the exponential case, using
M/M/1 tables will give queue sizes and times that are slightly larger than they should
be. Using the M/M/1 model would give answers on the safe side. An example of this
category might be a data entry application for a particular form.
- Ratio close to 1: This is a common occurrence and corresponds to exponential
service time. That is, service times are essentially random. Consider message lengths
to a computer terminal: a full screen might be 1920 characters, with message sizes
varying over the full range. Airline reservations, file lookups on inquiries, shared LANs,
and packet-switching networks are examples of systems that often fit this category.
- Ratio greater than 1: If you observe this, you need to use the M/G/1 model and not
rely on the M/M/1 model. A common occurrence of this is a bimodal distribution,
with a wide spread between the peaks. An example is a system that experiences
many short messages, many long messages, and few in between.
The same consideration applies to the arrival rate. For a Poisson arrival rate, the interarrival
times are exponential, and the ratio of standard deviation to mean is 1. If the observed ratio
is much less than 1, then arrivals tend to be evenly spaced (not much variability), and the
Poisson assumption will overestimate queue sizes and delays. On the other hand, if the ratio
is greater than 1, then arrivals tend to cluster and congestion becomes more acute.

Queue Networks
A queueing system describes the system as a single resource, while a queueing network describes
the system as a set of interacting resources.
Queueing networks can also be defined as a model in which jobs departing from one queue
arrive at another queue (or possibly the same queue).

Classification of Queueing Network
Unlike single queues, there is no easy notation for specifying the type of queueing network.
The simplest way to classify a queueing network is either open or closed.
1. OPEN QUEUEING NETWORKS: It has external arrivals and departures, as shown in
the diagram below. The jobs enter the system at In and exit at Out. The number
of jobs in the system varies with time. In analyzing an open system, we assume that
the throughput is known (to be equal to the arrival rate), and the goal is to
characterize the distribution of number of jobs in the system.
An Open Queueing Network.


2. CLOSED QUEUEING NETWORKS: It has no external arrivals or departures. As shown
in the diagram below, the jobs in the system keep circulating from one queue to the
next. The total number of jobs in the system is constant. It is possible to view a
closed system as a system where the Out is connected back to the In. The jobs exiting
the system immediately re-enter the system. The flow of jobs in the Out-to-In link
defines the throughput of the closed system. In analyzing a closed system, we assume
that the number of jobs is given, and we attempt to determine the throughput (or
the job completion rate).

A Closed Queueing Network

It is also possible to have MIXED QUEUEING NETWORKS that are open for some workloads
and closed for others. The system is closed for interactive jobs and is open for batch jobs.
The term class refers to types of jobs. All jobs of a single class have the same service
demands and transition probabilities. Within each class, the jobs are indistinguishable.
A Mixed queueing Network

Product Form Network
The simplest queueing network is a series of M single-server queues with exponential service
time and Poisson arrivals. The jobs leaving a queue immediately join the next queue. It can
be shown that each individual queue in this series can be analyzed independently of other
queues. Each queue has an arrival as well as a departure rate of λ. If μ_i is the service rate
for the ith server, then:

Utilization of the ith server: ρ_i = λ / μ_i
Probability of n_i jobs in the ith queue: P(n_i) = (1 − ρ_i) ρ_i^{n_i}

and the joint probability of the queue lengths of the M queues is the product of these
individual probabilities.

This queueing network is therefore a product form network. In general, the term applies
to any queueing network in which the expression for the equilibrium probability has the form

P(n_1, n_2, …, n_M) = (1 / G(N)) · ∏_{i=1}^{M} f_i(n_i)

where f_i(n_i) is some function of the number of jobs at the ith facility, and G(N) is a normalizing
constant and a function of the total number of jobs in the system.
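A minimal sketch (Python) of this product-form calculation for a series of single-server exponential queues: the joint queue-length probability is simply the product of the individual M/M/1 terms. The arrival rate and service rates below are illustrative assumptions.

# Product-form joint probability for a series of M single-server exponential queues
# with Poisson arrivals: P(n1,...,nM) = prod_i (1 - rho_i) * rho_i**n_i.

lam = 3.0                      # arrival (and departure) rate of every queue in the series
mu = [5.0, 4.0, 6.0]           # service rate of each server (illustrative)
rho = [lam / m for m in mu]    # per-server utilization rho_i = lambda / mu_i

def joint_probability(n):
    """Probability of seeing n = (n1, ..., nM) jobs at the M queues."""
    p = 1.0
    for n_i, rho_i in zip(n, rho):
        p *= (1 - rho_i) * rho_i ** n_i
    return p

print("P(0, 0, 0) =", round(joint_probability((0, 0, 0)), 4))
print("P(2, 1, 0) =", round(joint_probability((2, 1, 0)), 4))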

Product form networks are easier to analyze than non-product form networks. The set of
networks that have a product form solution is continuously being extended by the
researchers. First among these was Jackson (1963), who showed that the above method of
computing joint probability is valid for any arbitrary open network of m-server queues with
exponentially distributed service times.
In particular, if there is any feedback in the network, so that jobs can return to previously
visited service centers, the internal flows are not Poisson. It is surprising that even though
the flows are not Poisson, the queues are separable and can be analyzed as if they were
independent M/M/m queues.
Jackson's results were later extended to closed networks by Gordon and Newell (1967).
They showed that any arbitrary closed networks of m-server queues with exponentially
distributed service times also have a product form solution.



Queueing Network Model for Computer Systems
Two of the earliest queueing models of computer systems are the machine repairman
model and the central server model shown in Figures 6 and 7, respectively. The machine
repairman model, as the name implies, was originally developed for modeling machine
repair shops. It has a number of working machines and a repair facility with one or more
servers (repairmen). Whenever a machine breaks down, it is put in the queue for repair and
serviced as soon as a repairman is available. Scherr (1967) used this model to represent a
timesharing system with n terminals. Users sitting at the terminals generate requests (jobs)
that are serviced by the system, which serves as a repairman. After a job is done, it waits
at the user terminal for a random think-time interval before cycling again.
The central server model shown in Figure 7 was introduced by Buzen (1973). The CPU in
the model is the central server that schedules visits to other devices. After service at the
I/O devices, the jobs return to the CPU for further processing and leave it when the next
I/O is encountered or when the job is completed.



FIG 6: A Machine Repairman Model
FIG 7: A Central Server Model
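A minimal sketch (Python) of how such closed models are commonly evaluated, using exact Mean Value Analysis for a population of users with a think time cycling through a few queueing devices. The visit counts, service times, population and think time are illustrative assumptions, not values taken from the text.

# Exact Mean Value Analysis (MVA) for a closed network in the spirit of the
# machine repairman / central server models: N users with think time Z visit
# a set of queueing devices; visits[i] * service[i] is the service demand D_i.

def mva(N, Z, visits, service):
    """Return (throughput X, response time R) for a population of N users."""
    K = len(visits)
    Q = [0.0] * K                          # mean queue length at each device
    X, R = 0.0, 0.0
    for n in range(1, N + 1):
        Rk = [visits[k] * service[k] * (1 + Q[k]) for k in range(K)]  # per-device residence
        R = sum(Rk)                        # system response time (excluding think time)
        X = n / (Z + R)                    # interactive response time law, rearranged
        Q = [X * Rk[k] for k in range(K)]  # Little's law applied per device
    return X, R

# CPU plus two disks, 20 users, 5 s think time (illustrative numbers).
X, R = mva(N=20, Z=5.0, visits=[1.0, 0.6, 0.4], service=[0.05, 0.08, 0.06])
print(f"throughput ~ {X:.2f} jobs/s, response time ~ {R:.2f} s")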



Types of Service Centers
In computer systems modelling, we encounter three kinds of devices. Most devices have a
single server whose service time does not depend upon the number of jobs in the device.
Such devices are called fixed-capacity service centers. For example, the CPU in a system
may be modelled as a fixed-capacity service center. Then there are devices that have no
queueing, and jobs spend the same amount of time in the device regardless of the number
of jobs in it. Such devices can be modelled as a center with infinite servers and are called
delay centers or IS (infinite server). A group of dedicated terminals is usually modelled as
a delay center. Finally, the remaining devices are called load-dependent service centers
since their service rates may depend upon the load or the number of jobs in the device. An
M/M/m queue (with m ≥ 2) is an example of a load-dependent service center. Its total
service rate increases as more and more servers are used. A group of parallel links between
two nodes in a computer network is an example of a load-dependent service center.



MODELING OF COMPUTER SYSTEM NETWORKS
CSC 524 Assignment
100805026 Diejomaoh Chinelo
When we describe the temporal behavior of some system, our main goal is to evaluate the time
needed by any entity to cross the system. This time has two main components: the strict time
needed for its execution in the different hardware components and the time spent waiting either
to use some resource because it is used by another entity or the arrival of some other entities to
some synchronization points. Modelling techniques try to tackle these two sources of delay.
Modelling may be an operational abstraction implemented as a simulation or an abstract
mathematical representation of the system behavior, frequently in steady state. The main existing
mathematical techniques are based on the following formalisms: Queuing networks, Petri nets
and Process algebras, with some variants or closely related formalisms like Stochastic Automata
Networks.
QUEUING NETWORKS
The network of a computer system can be modelled using Queuing Networks.
A queuing network can be described as a model in which jobs depart from one queue and arrive
at another queue or at the same queue. This is represented by connecting the output of one queue
to the input of another queue.
[Figure: a queueing network of five numbered nodes, with the output of one queue feeding the input of another]


A single queue can receive input from the output of other queues, and the output of a queue can
also form input streams for several other queues. Each queue forms a NODE of the network;
nodes are usually numbered 1 to N, where N is the number of nodes in the network. We can also
define the routing probability of a queuing network: the ROUTING PROBABILITY is the probability
that a job departing one node (say node 1) immediately proceeds to another node (say node 2).
A network of queues can be used to model an interaction between devices. For instance, let us
consider a simple computer system which consists of a CPU and a Disk. This can be modeled by
a 2 queue network as shown below
[Figure: a two-queue network - jobs arrive at the network, visit the CPU queue and the Disk queue, and leave the network]

TYPES OF QUEUING NETWORKS
There are two types of queuing networks. We have:
Open Queuing Networks
Closed Queuing Networks

OPEN QUEUING NETWORKS
An open queuing network is one in which jobs enter the system at a particular point and exit the
system at a different point, i.e. it has external arrivals and departures. Here, jobs arrive at some of
the nodes from an external source and can leave the network from other nodes. The source has an
indefinite number of jobs. The number of jobs in the system is not constant; it varies with time.
When analyzing open systems, we assume that the throughput is known (equal to the arrival rate),
and our goal is to characterize the distribution of the number of jobs within the system.
The open queuing network is illustrated by the diagram below









[Figure: an open queueing network - jobs enter at IN, pass through the CPU queue and queues 1 and 2, and exit at OUT]
CLOSED QUEUING NETWORKS
In a closed queuing network, a fixed number of jobs circulate within the nodes of the network
(from one queue to the other). No new job enters or leaves the network system i.e. it has no
external arrivals or departures. The number of jobs in the system is constant. We can also view it
as a closed system where the out (exit) is connected back to the in (entrance), i.e. the jobs leaving
the system immediately re-enter the system. The throughput of a closed system can be defined
by the flow of jobs in the out-to-in link.
The closed queuing network is illustrated by the diagram below
[Figure: a closed queueing network - OUT is connected back to IN so that a fixed set of jobs keeps circulating among the queues]







Closed networks have been practically successful in modeling computer system networks.
There are two models that are widely used to illustrate this
Central Server model
Machine Repairman Model
Central Server Model

[Figure: central server model - queue 1 (the CPU) routes jobs to I/O queues 2 through N with routing probabilities q12, q13, ..., q1N; jobs return to the CPU after I/O service]
Illustrating with the diagram above, queue 1 represents the CPU queue and queues 2 through N
represent the system input/output devices, which are usually disk controllers. Jobs receive service
at the CPU and are then routed, each with some probability, to an I/O device. After being serviced
by the I/O device, the job returns to the CPU. The processing of a job by a computer system is
represented by the several visits of the job to the CPU and I/O devices. The path from the CPU
back to itself (taken with the remaining probability) represents the completion of one job in the
system and the immediate commencement of another.
The central server network is especially useful for modeling the inner resources (CPU and disks)
of multi programmed computer systems. In these systems the level of multiprogramming
(number of jobs in memory simultaneously) is often limited to a small number of jobs in the
interest of system efficiency. Under moderate to heavy loads the system multiprogramming level
is usually at or near this limit. Thus the assumption that the number of jobs in the network is
constant is a good approximation to the real system.
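A minimal sketch (Python) of how the operational laws can be used to reason about such a central server model: the service demand of each device is its visit ratio times its service time per visit, utilization follows from the throughput, and the device with the largest demand bounds the achievable throughput. All numbers below are illustrative assumptions.

# Operational-law reasoning for a central server model:
#   demand D_i = V_i * S_i, utilization U_i = X * D_i, X_max <= 1 / max_i D_i.

devices = {               # device: (visit ratio, service time per visit in seconds)
    "CPU":   (10.0, 0.010),
    "disk1": (6.0,  0.030),
    "disk2": (3.0,  0.025),
}

demand = {name: v * s for name, (v, s) in devices.items()}   # D_i = V_i * S_i
bottleneck = max(demand, key=demand.get)

X = 4.0                   # assumed system throughput, jobs per second
for name, d in demand.items():
    print(f"{name}: demand={d:.3f}s, utilization at X={X}: {X * d:.0%}")

print(f"bottleneck: {bottleneck}, max throughput <= {1 / demand[bottleneck]:.2f} jobs/s")
print(f"minimum response time (no queueing) = {sum(demand.values()):.2f} s")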

Machine Repairman Model


[Figure: machine repairman model - a multi-server Terminals node connected to system queues 1 through N]
The network as shown above has one multi-server node which represents the user terminals. The
other (system) nodes represent the other system resources. These may be arbitrarily connected,
although a variant of the central server network is often used. Jobs begin at the terminal node and
then enter the system, where they circulate among the system nodes before returning to the
terminal node. The time from when the job leaves the terminal node until it returns is the system
response time, and the time it spends at the terminal node is called the user think time. The
number of jobs in the network is the same as the number of servers at the terminal node. Hence
there is no waiting for a job to begin to receive service at this queue, and its service time is the
same as the think time.

PETRI NETS
Petri nets are a formalism for the description of the concurrency and synchronization inherent in
computer (and other interacting) systems. Petri nets are directed graphs with two types of nodes,
places and transitions, and unidirectional arcs between them. In general terms, tokens move
between places according to the firing rules imposed by the transitions. A transition can fire
when each of its input places holds at least one token. When it fires, the transition removes a
token from each of its input places and deposits a token in each of its output places. Petri nets
have proved to be very useful for studying qualitative or logical
properties of computer systems exhibiting concurrent and asynchronous behavior. However, for
quantitative performance evaluation, the concept of time must be included in the definition of
Petri nets. There are several types of Timed Petri Nets depending on where the time consumption
is located.
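A minimal sketch (Python) of the token-game firing rule just described, for a small hypothetical producer/consumer net; it illustrates only untimed firing, not any of the timed extensions.

# A tiny Petri net: a transition is enabled when every input place holds at least
# one token; firing removes one token per input place and adds one per output place.

marking = {"ready": 1, "buffer": 0, "consumed": 0}      # tokens per place

transitions = {
    "produce": {"inputs": ["ready"], "outputs": ["buffer", "ready"]},
    "consume": {"inputs": ["buffer"], "outputs": ["consumed"]},
}

def enabled(t):
    return all(marking[p] >= 1 for p in transitions[t]["inputs"])

def fire(t):
    assert enabled(t), f"transition {t} is not enabled"
    for p in transitions[t]["inputs"]:
        marking[p] -= 1
    for p in transitions[t]["outputs"]:
        marking[p] += 1

for t in ("produce", "produce", "consume"):
    fire(t)
    print(f"after {t}: {marking}")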

PROCESS ALGEBRAS
Process algebras are abstract languages which have been introduced for the specification and
understanding of complex systems with concurrent phenomena. These mathematical theories
provide apparatus for reasoning about the structure and behavior of the model, as qualitative
system properties. Examples include the Calculus of Communicating Systems (CCS),
Communicating Sequential Processes (CSP) and the Algebra of Communicating Processes
(ACP). In these algebras composition is an essential part of the language, and ancillary theory,
such as equivalence relations, is developed in such a way as to be able to exploit the
compositionality. Interest in using process algebra as a performance modelling formalism has led
to the development of stochastic process algebras (SPA). In these formalisms activities generally
have an integrated delay of uncertain length, which is represented by a random variable. The
compositionality of the process algebra can be exploited in all stages of performance modelling,
including model construction, model simplification and model solution.



VERIFICATION AND VALIDATION OF SIMULATION MODELS
BY SUN FAPOHUNDA
100805032



Simulation models are increasingly being used in problem solving and in decision making. The
developers and users of these models, and the decision makers using information derived from the
results of the models, are all concerned with whether a model and its results are "correct" and with
knowing whether the model implementation is built right. This concern is addressed through model
verification and validation.

Model Validation is usually defined to mean that a computerized model possesses a range of
accuracy consistent with the intended application of the model; in other words, validation is the
process of determining the degree to which a model and its results accurately represent the real
system from the perspective of the intended uses of the model.

Model Verification is often defined as ensuring that the computer program of the computerized
model and its implementation are correct. In other words, verification is the process of determining
that a model implementation accurately represents the developer's conceptual description of the
model and the solution to the model.


Validation
Conceptual model validity is determining that the theories and assumptions underlying the
conceptual model are correct; it asks the question "are we building the right model?". The theories
and assumptions underlying the model should be tested using mathematical analysis and statistical
methods on problem entity data. Examples of applicable statistical methods are fitting distributions
to data and estimating parameter values from the data. Additionally, the theories applied should be
carefully reviewed to ensure they are correct.
If the model has sub-models, the sub-models and the overall model must be evaluated to know if
they are reasonable for the intended purpose of the system. The primary validation techniques used
for these evaluations are face validation and traces. Face validation has experts on the problem entity
check the conceptual model to see if it is correct and reasonable for its purpose. The use of traces is
the tracking of entities through each sub-model and the overall model to see if the logic is correct and
if the necessary accuracy is maintained.
Validation ensures building the right model.
Validation Techniques
Animation: The model's operational behaviour is shown with the aid of graphics as the model
progresses through time.
Comparison to other models: Outputs of the simulation model being validated are compared to the
results of other valid models.
Degenerate Tests: The degeneracy of the model's behaviour is tested by appropriate selection of
values of the input and internal parameters. For example, does the average number in the queue of a
single server continue to increase over time when the arrival rate is larger than the service rate?
Event Validity: The events of occurrences of the simulation model are compared to those of the real
system to determine if they are similar. For example, compare the number of cars assembled per
day in a car assembly plant to the actual number of cars.
Extreme Condition Tests: The model structure and outputs should be plausible for any extreme and
unlikely combination of levels of factors in the system.
Face Validity: Individuals that have expertise about the system are asked whether the model and/or
its behaviour are reasonable.
Historical Methods: The three historical methods of validation are rationalism, empiricism, and
positive economics.
(i) Rationalism assumes that everybody knows whether the clearly stated underlying
assumptions of a model are true.
(ii) Empiricism requires every assumption and outcome to be empirically validated.
(iii) Positive economics requires only that the model be able to predict the future and is not
concerned with a model's assumptions or structure.
Internal Validity: Several replications of a stochastic model are made to determine the amount of
stochastic variability in the model. A large amount of variability (lack of consistency) may cause the
model's results to be questionable.
Operational Graphics: Values of various performance measures, e.g. the number in queue and
percentage of servers busy, are shown graphically as the simulation model runs through time to ensure
they behave correctly.
Parameter Variability/Sensitivity Analysis: This technique consists of changing the values of the input
and internal parameters of a model to determine the effect upon the model's behaviour or output.
Predictive Validation: The model is used to predict the system's behaviour, and then comparisons are
made between the system's behaviour and the model's prediction to determine if they are the
same.
Traces: The behaviours of different types of specific entities in the model are followed through the
model to determine if the model's logic is correct and if the necessary accuracy is obtained.
Turing Test: Individuals who are knowledgeable about the operations of the system being modelled
are asked if they can distinguish between system and model outputs.
To perform validation, the following steps should be taken:
1. Develop a model with high face validity:
The objective of this step is to develop a model that, on the surface, seems
reasonable to people who are familiar with the system under study.
This step can be achieved through discussions with system experts, observing the
system, or the use of intuition.
It is important for the modeller to interact with the client on a regular basis
throughout the process.
It is important for the modeller to perform a structured walk-through of the
conceptual model before key people to ensure the correctness of the model's
assumptions.
2. Test the assumptions of the model empirically:
In this step, the assumptions made in the initial stages of model development are
tested quantitatively. For example, if a theoretical distribution has been fitted to
some observed data, graphical methods and goodness-of-fit tests are used to test
the adequacy of the fit.
Sensitivity analysis can be used to determine if the output of the model significantly
changes when an input distribution or the value of an input variable is
changed. If the output is sensitive to some aspect of the model, that aspect of the
model must be modelled very carefully.
3. Determine how representative the simulation output data are:
The most definitive test of a model's validity is determining how closely the
simulation output resembles the output from the real system.
The Turing test can be used to compare the simulation output with the output from
the real system. The output data from the simulation can be presented to people
knowledgeable about the system in the same exact format as the system data. If
the experts can differentiate between the simulation and the system outputs, their
explanation of how they did so should improve the model.
Statistical methods are available for comparing the output from the simulation
model with those from the real-world system.

Verification
Verification ensures building the model right.
Conceptual model verification ensures that the computer programming and implementation of the
conceptual model are correct. The main factor affecting verification is whether a simulation language
or a higher-level programming language such as Java is used.
The use of a simulation language will result in easier implementation, reduced programming time and
fewer errors; when a simulation language is used, verification is concerned with ensuring that there
are no errors in the simulation.
The primary techniques used to determine that the model has been programmed correctly are
walkthroughs and traces.
There are two approaches to testing simulation software: static and dynamic testing.
(i) Static Testing: In this approach, the computer program is analysed to see if it is correct by
using correctness proofs and walkthroughs.
(ii) Dynamic Testing: In this approach, the computer program is tested under different
conditions and the results recorded; these are used to determine if the computer program is
correct. Techniques used in dynamic testing are traces, probing of input-output
relations and internal consistency checks.

Verification Techniques
1. Use good programming practice:
Write and debug the computer program in modules or subprograms.
In general, it is always better to start with a moderately detailed model, and later
embellish it, if needed.
2. Use a structured walk-through:
Have more than one person read the computer program.
3. Use a trace:
The analyst may use a trace to print out some intermediate results and compare
them with hand calculations to see if the program is operating as intended.

4. Check simulation output for reasonableness:
Run the simulation model for a variety of input scenarios and check to see if the
output is reasonable.
In some instances, certain measures of performance can be computed exactly and
used for comparison.
5. Animate:
Using animation, the users see dynamic displays (moving pictures) of the simulated
system.
Since the users are familiar with the real system, they can detect programming and
conceptual errors.
6. Compare final simulation output with analytical results:
We may verify the simulation response by running a simplified version of the simulation
program with a known analytical result. If the results of the simulation do not
deviate significantly from the known mean response, the true distributions can then
be used.
For example, for a queuing simulation model, queuing theory can be used to
estimate steady-state responses (e.g., mean time in queue, average utilization).
These formulas, however, assume exponential interarrival and service times with n
servers (M/M/n).
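A minimal sketch (Python) of technique 6: the mean waiting time of an M/M/1 queue is simulated with the Lindley recursion and compared against the analytical value Wq = ρTs/(1 − ρ). The arrival and service rates used are illustrative assumptions.

# Compare a simplified simulation with a known analytical result (M/M/1 waiting time).
# Lindley recursion: W_{n+1} = max(0, W_n + S_n - A_{n+1}).
import random

random.seed(1)
lam, mu = 4.0, 5.0                     # arrival and service rates (illustrative)
rho, Ts = lam / mu, 1.0 / mu

W, total, n_jobs = 0.0, 0.0, 200_000
for _ in range(n_jobs):
    service = random.expovariate(mu)
    interarrival = random.expovariate(lam)
    W = max(0.0, W + service - interarrival)   # waiting time of the next arrival
    total += W

simulated = total / n_jobs
analytical = rho * Ts / (1 - rho)
print(f"simulated Wq ~ {simulated:.3f}, analytical Wq = {analytical:.3f}")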





The expected outcome of the model validation and verification process is the quantified level of
agreement between experimental data and model prediction, as well as the predictive accuracy of
the model.


















References:
Verification and Validation of Simulation Models - Robert G. Sargent
AYEOMONI OLATUNDE EMMANUEL
080805022

DISCRETE EVENT SIMULATION (DES)


WHAT IS DISCRETE EVENT SIMULATION
Firstly, a brief description of the terminologies that are relevant to the
comprehension of what discrete event simulation is all about.
SYSTEM: it is a collection of objects called entities that have certain properties
called attributes that acts together towards the accomplishment of some logical end
A system can either be discrete or continuous. We are concerned with the discrete
system
DISCRETE SYSTEM: in discrete system, state variables change instantaneously at
separated point in time, e.g. a bank, since the state variables- number of customers,
change only when a customer arrives or when a customer finishes being served and
departs.
STATE: it is a collection of attributes or state variables that represent the entities
of the system
EVENT: it is an instantaneous occurrence in time that may alter the state of a
system, e.g. the arrival of a customer at a bank or the departure of a customer from
the bank.
SIMULATION: it is the process of designing a model of a real system and
conducting experiments with this model for either of the purpose of understanding
the behavior of the system or of evaluating various strategies( within the limits
imposed by a criterion or set of criteria) for the operation of a system.


WHEN IS SIMULATION SUITABLE?
Many systems are highly complex, precluding the possibility of analytical
solution
The analytical solutions are extraordinarily complex, requiring vast
computing resources
Thus, such systems should be studied by means of simulation


DISCRETE EVENT SIMULATION: discrete events simulation models a
system whose state may change only at discrete point in time.
Each event occurs at a particular instant in time and marks a change of state in the
system. Between consecutive events, no change is assumed to occur, thus the
simulation can directly jump in time from one event to the next

DISCRETE EVENT SIMULATION(DES) is stochastic ,dynamic and
discrete

Stochastic meaning it is probabilistic
-inter-arrival times and service times are random variables
-have cumulative distribution functions

Discrete(instantaneous events are separated by intervals of time)
-The state variables change instantaneously at separate points in time
-These points in time are the ones at which an event occurs

Dynamic( changes over time)
-simulation clock: keeps track of the current value of simulated time as the
simulation proceeds

ADVANCEMENT OF SIMULATION TIME
All models contain a variable called the internal clock, or the simulation clock.
Time may be modeled in a variety of ways within the simulation
Simulated time can be advanced by
Time as linked events (next-event time advance)
Time divided into equal increments (fixed-increment time advance)

TIME AS LINKED EVENTS
State changes occur only at event times for a discrete event simulation
model
Periods of inactivity are skipped over by jumping the clock from event time
to event time





















COMPONENTS AND ORGANIZATION OF DISCRETE
SIMULATION MODEL








System states: the collection of state variables necessary to describe the
system at a particular time
Simulation clock: a variable giving the current value of simulated time
Event list: a list containing the next time when each type of event will occur
Statistical counters: variables used for storing statistical information about the
performance of the system
Initialization routine: a subprogram that initializes the simulation model at time 0
Timing routine: a subprogram that determines the next event from the event list
and then advances the simulation clock to the time when that event is to occur
Report generator: a subprogram that computes estimates(from the statistical
counters) of the desired measures of performance and produces a report
when the simulation ends.
Event routine: a subprogram that updates the system state when a particular
type of event occur(note: there is one event routine for each event type)
Library routines : a set of subprogram used to generate random observations
from probability distributions that were determined as part of the simulation
model
Main program: a subprogram that invokes the timing routine (to determine the
next event), transfers control to the corresponding event routine (to update the
system state appropriately) and checks for termination (invoking the report
generator when the simulation is over).




EXAMPLE OF DISCRETE EVENT SIMULATION
An example of Discrete Event Simulation (DES) is to model a queue, for
instance, customers arriving at a bank to be served by a teller. In this
example the system entities are the customer queue and the tellers.
The system events are customer arrival and customer departure.
The system states, which are changed by these events, are the number of
customers in the queue (an integer from 0 to n)
and the teller status (busy or idle).
The random variables that need to be characterized to model this system
stochastically are the customer interarrival time and the teller service time.











SAMPLE DESIGN FOR EVENT SCHEDULING
Main(executive routine):
1. Set clock=0
2. Set cumulative statistics to 0
3. Define initial system state(queue empty, server idle)
4. Generate the occurrence time of the first arrival and place in event
list
5. Select the next event on event list(arrival or departure event)
6. Advance simulation clock to time of next event
7. Process this event( execute the corresponding event routine)
8. If not end-of-simulation, goto step 5

DESIGN OF EVENT LIST
Events are chronologically ordered in time
Event list: it is sometimes called the pending event set
because it lists events that are pending. It contains all
scheduled events arranged in chronological time order.
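A minimal sketch (Python) of the executive routine and pending event set described above, applied to the single-teller bank example: a heap keeps the scheduled events in chronological order, and the arrival and departure event routines update the state. The rates and run length are illustrative assumptions.

# Event-scheduling simulation of a single-teller queue using a heap as the
# pending event set (exponential interarrival and service times).
import heapq
import random

random.seed(42)
ARRIVAL_RATE, SERVICE_RATE, END_TIME = 1.0, 1.25, 10_000.0

clock, queue_len, busy, served = 0.0, 0, False, 0
events = []                                    # the pending event set (a min-heap)
heapq.heappush(events, (random.expovariate(ARRIVAL_RATE), "arrival"))

while events:
    clock, kind = heapq.heappop(events)        # steps 5-6: select next event, advance clock
    if clock > END_TIME:
        break
    if kind == "arrival":                      # arrival event routine
        heapq.heappush(events, (clock + random.expovariate(ARRIVAL_RATE), "arrival"))
        if busy:
            queue_len += 1
        else:
            busy = True
            heapq.heappush(events, (clock + random.expovariate(SERVICE_RATE), "departure"))
    else:                                      # departure event routine
        served += 1
        if queue_len > 0:
            queue_len -= 1
            heapq.heappush(events, (clock + random.expovariate(SERVICE_RATE), "departure"))
        else:
            busy = False

print(f"simulated until t={clock:.1f}, customers served: {served}")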





NWOSU IKECHUKWU
080805063
CSC 524
Performance modelling is a structured and repeatable approach to modelling the
performance of your software. It begins during the early phases of your
application design and continues throughout the application life cycle.
Generally this does not require the use of a load test environment. This is
typically much cheaper than performance testing and can produce accurate
results. Performance Modelling is also used to validate design decisions and
infrastructure investment decisions at development stage.
This definition, though, is deliberately broad and is provided for simplicity. When looked
at through the eyes of a Practical Performance Analyst, Performance Modelling is a
discipline that clearly fits into the proactive performance paradigm and can be applied
across the Software Development Life Cycle. The phases where performance modelling can
be applied are:
Performance Modelling at Requirements Gathering
Performance Modelling at Design
Performance Modelling at Performance Test
Performance Modelling at Go Live
Performance Modelling when in Production
As a Practical Performance Analyst you should be looking to use Performance
Modelling techniques to validate decisions and predict potential impact on
applications and infrastructure performance in a proactive manner.
Why would I use performance modelling?
Performance Modelling offers a set of modelling techniques that should be
used across the Software Delivery Cycle. Unfortunately so far, most of this has
been rocket science and the few tools out there that claim to provide the
modelling capability are themselves quite challenging to learn and implement.
Performance Modelling at Requirements Gathering: At the requirements
gathering stage, performing the role of the Practical Performance Analyst,
your focus is to determine the overall Non Functional Requirements
across the application. As part of this role you'll focus on identifying key
business and infrastructure workload across the various tiers and obtain
an understanding of the user workload that drives utilization, and hence
application performance, across the various application tiers. At this stage
you have the opportunity to use Analytical and Simulation
Modelling techniques to validate your Non Functional Requirements. At
this stage you don't have infrastructure specifications for your
applications yet, but you should be in a position where you can start
looking at feasible options and recommend sizing guidelines based on
your understanding of the Non Functional Requirements, business
workload, infrastructure workload, operational SLAs and your
experience managing performance across clients.
Performance Modelling at Design: At design stage, you have high level
and detailed design specifications now made available to you. This is a
wonderful opportunity to get hold of the design specifications, application
architecture, and deployment architecture to validate the various
decisions. You can use a combination of Analytical and Simulation
Modelling techniques at this stage to validate the ability of the application
architecture and underlying infrastructure to meet the overall Non
Functional Requirements.
Performance Modelling at Performance Test: At Performance Test
you would have started generating good amounts of statistical data
through the various Performance Testing runs. As a Practical
Performance Analyst you should use this as an opportunity to use a
combination of different Statistical, Analytical and Simulation Modelling
techniques to validate the scalability of the application. Your ability to
use Statistical Modelling techniques at this stage is a key advantage in
predicting application performance for growth of key workload drivers
while being able to call out potential breaches in operational SLAs.
Statistical Modelling techniques tend to be easier to use, require you to
make fewer assumptions and are easy to build and implement. Statistical
Modelling techniques are also very useful when you have gaps between
the size of your performance testing and production environment and
need to extrapolate the performance and scalability of your systems based
on performance testing results.
Performance Modelling at Go Live: At go live you could use Statistical
Modelling techniques to understand performance of the application and
predict potential breach in operational SLAs for a given combination of
infrastructure and software configuration. Statistical
Modelling techniques are easy to apply and using data generated from
Performance Testing you should be in a position where you can forecast
application performance and infrastructure utilization for growth in
business workload. You can also use a host of Analytical and Simulation
Modelling techniques to understand changes in application performance
for growth of different business workload drivers.
Performance Modelling when in Production: When in production you
have started generating large amounts of data (assuming you have
instrumented your applications well and have started collecting
infrastructure workload and business workload data) and should be able
to use Statistical Modelling techniques to identify relationships between
your business workload drivers and infrastructure workload drivers for
the business critical applications with the objective of forecasting
potential impacts to application performance. You could also use a
combination of Analytical and Simulation Modelling techniques to
validate what-if scenarios for changes in infrastructure specifications or
application configuration. Statistical Modelling techniques can be used to
predict change in application performance when your baseline hasn't
changed, i.e. when your software configuration, OS configuration,
hardware configuration, etc. are constant. For what-if
analysis in scenarios where you need to understand application
performance for different hardware and software configurations, you
would need to use a combination of Analytical and Simulation Modelling
techniques.


What aspects of performance could I model?
You should be keen to understand the following performance metrics for
change in business workload, infrastructure specifications and software
configuration -
Transactional Response times
Utilization levels across the different tiers
Queue length across the various different tiers
Wait time across the different tiers
Service times across the different tiers
Performance modelling should be ideally performed on a continuous basis
across the Software Development Life Cycle with the objective of identifying
changes in application behaviour due to key design, infrastructure or
configuration changes. Performance Modelling offers a set of techniques that
give you the Performance Analyst the opportunity to understand change in
application performance for variation in business workload including the ability
to predict application performance based on key design and infrastructure
decisions.
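As a small illustration of the statistical-modelling idea described above, the sketch below (Python, using NumPy) fits a simple regression to assumed performance-test observations and extrapolates response time to higher, untested workload levels; real response-time curves are usually non-linear near saturation, so such extrapolations should be treated with care.

# Fit a simple linear regression to performance-test data and extrapolate.
# The data points are illustrative assumptions, not real measurements.
import numpy as np

concurrent_users = np.array([50, 100, 150, 200, 250])        # tested workload levels
response_time_ms = np.array([210, 240, 285, 350, 430])       # observed mean response times

slope, intercept = np.polyfit(concurrent_users, response_time_ms, deg=1)
for target in (300, 400):
    predicted = slope * target + intercept
    print(f"predicted response time at {target} users: ~{predicted:.0f} ms")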

Why performance modelling is important:
Understand end user performance at design
Validate design decisions
Provide guidelines for infrastructure sizing
Validate infrastructure specifications
Extrapolate application performance using data from performance test
Forecast application performance using data from production
environments
Predict change in application performance for growth in business
workload
Determine infrastructure impacts due to growth of business workload
Predict change in application performance due to change in software
configuration
Predict change in application performance due to change in infrastructure
specifications

Activities involved in performance modelling:
Determine Non Functional Requirements
Understand Business Objectives & Goals
Understand Application Architecture
Understand Deployment Architecture (Infrastructure Platform, designs,
etc.)
Determine Business Workload
Create Workload models (use workload modelling techniques)
Obtain Performance data from existing applications in production (if
possible)
Extract data from production environments (if older versions of
applications exist)
Determine which modelling techniques are applicable
Use a combination of modelling techniques and validate your results
Present findings and use the findings to review your design decisions and
infrastructure investment decisions





Challenges involved in Performance Modelling:
Lack of understanding of Non Functional Requirements
Lack of details around Application Architecture due to poor
documentation
Lack of details around Infrastructure Specifications (you'll be surprised
how many support teams really don't know what hardware they are using
in production)
Lack of buy in from various stake holders across the teams
Lack of understanding of Performance Modelling techniques across the
various stake holders
Lack of software to automate Performance Modelling
Lack of monitoring tools in production environment, hence inability to
collect performance metrics
Inability to extract business and infrastructure workload data from
production environments
Lack of tools to ETL business and infrastructure workload data for
purpose of analysis
Lack of analytics tools to analyse and visualize data extracted from
performance testing and production environments.

Tools for performance modelling:
Performance modelling can be performed with various objectives in mind; this
could include validation of the solution approach or solution architecture,
validation of infrastructure requirements for a given approach and possibly
impact analysis for change in business workload volumes.
Application Performance Modelling can be performed using a combination of
statistical, analytical and simulation modelling techniques. Unfortunately there
arent any standard commercial or Open Source tools out there for performance
modelling. The list below suggests applications with their areas of focus.
AnyLogic (Discrete Event Simulation Modelling)
IBM SPSS (Statistical Modelling)
JMT (Queueing Networks, Mean Value Analysis, Markov Chains)
Minitab (Statistical Modelling)
R Project (Statistical Modelling & Data Visualization)
StatSoft (Statistical Modelling & Data Visualization)
SimPy (Discrete Event Simulation Modelling)
Simul8 (Discrete Event Simulation Modelling)
TIBCO Spotfire S+ (Statistical Modelling)

Deliverables when performing performance modelling:
At Requirements Gathering
o A report covering the following:
Realistic Non Functional Requirements
Infrastructure recommendations
Design recommendations
At Design
o A report covering the following:
Performance models that highlight performance constraints
Validation of Application Architecture in meeting NFRs
Validation of Infrastructure specifications
Validation of Design
At Performance Test
o A report covering the following:
Forecasted performance for given business workload
Utilization levels for forecasted performance
Infrastructure impacts for given business workload
Impacts to customer experience for given business workload
Performance extrapolated for production environments
At Go Live
o A report covering the following:
Forecasted performance for given business workload
Utilization levels for forecasted performance
Infrastructure impacts for given business workload
Impacts to customer experience for given business workload
In Production
o A report covering the following:
Forecasted performance for given business workload
Utilization levels for forecasted performance
Infrastructure impacts for given business workload
Impacts to customer experience for given business workload
