DISCSIM REVIEWER

CHAPTER 1: Discrete Event Simulation


Simulation
- Tool for understanding system behavior
1) Imitation of a real-world system over time.
2) Generation of an artificial history.
3) A numerical method of solving problems that cannot be handled analytically.
What does it do for us?
1) Allows us to draw inferences concerning the
operating characteristics of the real system.
2) Shows the behavior of a system as it evolves
through time.
When can we use Simulation?
1) When no analytical solution is possible.
2) When the system involves too many random
factors.
3) When experimentation is needed.
4) When performing validation or testing of a
solution.
Advantages
1) Changes can be explored without disrupting the
system.
2) No investment in acquisition is required.
3) Time can be compressed or expanded.
4) Provides insight into variables and how their
interactions affect system performance.
5) Bottleneck analysis can be performed.
Disadvantages of Simulation
1) Model building requires special training.
2) Simulation results may be difficult to interpret.
3) Modeling and analysis can be time consuming
and expensive.
4) Tendency to be used instead of an analytical
solution.
Applications of Simulation
1) Manufacturing Systems
2) Public Systems
3) Transportation Systems
4) Construction Systems
5) Restaurant and Entertainment System
6) Business Process Reengineering
7) Food Processing
8) Computer Performance Systems
Components of a System
1. Entity - object of interest in the system
2. Attribute - property of an entity
3. Activity - time period of specified length
4. State - collection of variables that describes a
system
5. Event - occurrence that changes the state of a
system. (exogenous or endogenous)

Discrete vs Continuous Systems


1) Discrete System -state variables change only at
a discrete set of time points
2) Continuous System -state variables change
continuously over time.
How do you represent a system?
1. Model - a representation of a system for the purpose of studying it
   - contains only the components that are relevant
   - sufficiently detailed to permit valid conclusions
2. Classes of Simulation Models
   - static or dynamic
   - deterministic or stochastic
   - discrete or continuous
Discrete-Event System Simulation
Analyzed by numerical methods rather than analytical methods.
Steps:
1. Problem Formulation
2. Objective Setting
3. Model Conceptualization and Data Collection
4. Model Translation
5. Verification and Validation
6. Experimental Design, Model Runs, and Analysis
7. Documentation, Reporting, and Implementation
CHAPTER 2: Manual Simulation
-Simulation without the aid of a computer
Steps in Manual Simulation
1. Determine characteristics of each of the input
to the simulation. They are normally expressed
in the form of a distribution.
2. Construct a simulation table.
3. For each repetition, generate a value for each of
the inputs and evaluate the responses.
A Simple Queueing System
1. Components are Calling Population, Waiting
Line, and a Server.
2. Generate flowcharts for Departure and Arrival
Events.
3. Generate table for Queue and Server status
when entity enters and leaves the system.
How do we simulate?
Maintain:
1) Event List - contains the times at which the different types of events occur for each entity in the system.
2) Simulation Clock - tracks the current simulated time.
Inject randomness by using random numbers to represent the uncertainty of events.
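To make the steps above concrete, below is a minimal sketch (in Python, not part of the original reviewer) of how a manual simulation table for a single-server queue could be filled in row by row; the inter-arrival and service-time ranges are assumed for illustration only.

import random

# Assumed inputs: inter-arrival times of 1-4 min and service times of 1-3 min,
# each value equally likely, as in a hand-filled simulation table.
random.seed(42)

arrival = 0
prev_departure = 0
print("Cust  Arrive  Start  Wait  Depart")
for customer in range(1, 11):
    inter_arrival = random.randint(1, 4)          # generated input value
    service = random.randint(1, 3)                # generated input value
    arrival += inter_arrival                      # arrival time of this customer
    start = max(arrival, prev_departure)          # service begins when the server is free
    wait = start - arrival                        # time spent in the waiting line
    prev_departure = start + service              # departure time
    print(f"{customer:4d}  {arrival:6d}  {start:5d}  {wait:4d}  {prev_departure:6d}")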

CHAPTER 3: General Principles of Discrete Event Simulation
Terminologies
1) System
2) Model
3) System State
4) Entity
5) Attributes
6) List
7) Event
8) Event Notice
9) Event List
10) Activity
11) Delay
12) Clock
What is Discrete Event Simulation?
DES is used for systems whose state changes at discrete points in time.
The system state can change at only a countable number of points in time.
Makes use of a Time-Advance Algorithm.
Time-Advance Algorithm (Event Scheduling)
This method requires an event list.
The simulation clock advances only to the time at which the next event is expected to occur.
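Below is a minimal sketch (not from the original notes) of the event-scheduling time-advance mechanism for a single-server queue: a future event list is kept as a priority queue and the clock always jumps to the most imminent event. The exponential arrival and service rates are assumed for illustration.

import heapq
import random

random.seed(1)
event_list = []                    # future event list: (event_time, event_type)
clock = 0.0
server_busy = False
queue_length = 0

# Schedule the first arrival (assumed arrival rate 1.0, service rate 1.25).
heapq.heappush(event_list, (random.expovariate(1.0), "arrival"))

while event_list and clock < 20.0:
    clock, event = heapq.heappop(event_list)      # advance the clock to the next event
    if event == "arrival":
        heapq.heappush(event_list, (clock + random.expovariate(1.0), "arrival"))
        if server_busy:
            queue_length += 1
        else:
            server_busy = True
            heapq.heappush(event_list, (clock + random.expovariate(1.25), "departure"))
    else:                                         # departure event
        if queue_length > 0:
            queue_length -= 1
            heapq.heappush(event_list, (clock + random.expovariate(1.25), "departure"))
        else:
            server_busy = False
    print(f"t = {clock:6.2f}  {event:9s}  in queue = {queue_length}")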

World Views
The orientation or perspective that the modeler adopts when modeling.
1) Event Scheduling - the analyst concentrates on events and their effect on the system state.
2) Process Interaction - the analyst thinks in terms of processes.
The process is the life cycle of the entity inside the system.
The life cycle consists of various events and activities.
These processes force the entities to interact (for example, queue and wait).
A process is a time-sequenced list of events, activities, and delays that defines the life cycle of one entity as it moves through the system.
3) Activity Scanning - can lead to slow run times due to repeated scanning of the activity list to check whether an activity can begin.
CHAPTER 4: Modeling Variability
Random numbers
- Main ingredient of a simulation model
- A sequence of numbers that appear in random order
- Follow the properties of uniformity and independence
- Represented as real numbers from 0 to 1, which can be converted to random integers
Generating Random Numbers
- RAND function in Excel
- Pseudo-random numbers are generated to represent true random numbers
Linear Congruential Method
A pseudo-random number generator based on the recurrence
Xi+1 = (a·Xi + c) mod m
Where:
Xi = stream of pseudo-random integers from the interval [0, m-1]
a = multiplier constant
c = additive constant (increment)
m = modulus
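A minimal sketch of the linear congruential method in Python (not from the notes); the parameter values below are small illustrative choices, not recommended generator constants.

def lcg(seed, a, c, m, n):
    """Linear congruential method: X_{i+1} = (a*X_i + c) mod m, R_i = X_i / m."""
    x = seed
    numbers = []
    for _ in range(n):
        x = (a * x + c) % m         # next pseudo-random integer in [0, m-1]
        numbers.append(x / m)       # scaled to a random number in [0, 1)
    return numbers

# Illustrative parameters only; practical generators use much larger, carefully chosen constants.
print(lcg(seed=27, a=17, c=43, m=100, n=5))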
Using Random Numbers to Generate Events
Example: a service-time frequency table mapped to two-digit random numbers.
If RN = 47, then Service Time = 3 minutes.
Random Variates - randomly sampled values that are used as inputs to the simulation model.
Random numbers are used together with empirical or statistical distributions to generate random variates.
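The lookup can be sketched as follows (the cumulative frequency bands below are assumed, chosen only so that RN = 47 maps to a 3-minute service time as in the example; the actual class data is not reproduced here).

import random

# Assumed cumulative lookup table: (upper bound of the two-digit RN band, service time in minutes).
service_table = [(15, 1), (40, 2), (75, 3), (100, 4)]

def service_time(rn):
    """Map a random number in 1-100 to a service time via the cumulative table."""
    for upper, minutes in service_table:
        if rn <= upper:
            return minutes

print(service_time(47))             # -> 3 minutes, as in the example above
random.seed(7)
print([service_time(random.randint(1, 100)) for _ in range(5)])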
Conceptual Modeling
Most critical part of the simulation modeling
process.
Model design can impact the following:
1) Data requirements
2) Speed and ease of model development
3) Validity of the model
4) Speed of experimentation
5) Level of confidence in the simulation results
It requires the modeler to have a thorough understanding of the operations of the system being modeled.
It is often the least understood part of the modeling process and the one most often skipped.
It is considered an art due to the lack of defined methods and procedures.
What is conceptual modeling?
A non-software specific description of the
simulation model that is to be developed
(Robinson, 2004)
It describes the input, output, content,
assumptions, and simplifications of the model in
relation to the system problem and objectives
Elements
Problem and Objectives - purpose of the model and the simulation project
Inputs - elements that can be altered to create an improvement (experimental factors)
Outputs - results of the simulation model
Content - components in the model and their interactions
Assumptions - uncertainties or beliefs about the real world being modeled
Simplifications - ways of reducing the complexity of the model
Representing the Conceptual Model
System Component List - description of the components in the model
Ex. Supermarket Payment System
Entities = customers (with inter-arrival times)
Attribute = paying customer
Activity = coding of purchased items, payment, and packing
Process Flow Diagram - process map of the flow of the entities across processes
Logic Flow Diagram - process map involving logical decisions across the process flow
Framework for Conceptual Modeling
Methods of Model Simplification
Model simplification - a way of handling the complexity of the model.
Done by:
Removing components and interconnections that have little effect on model accuracy
Representing the components and interconnections more simply while maintaining a satisfactory level of model accuracy
Simplification Approaches
Aggregation of components
Black-box modeling
Grouping of entities
Excluding component details
Replacing components with random variables
Excluding infrequent events
Reducing the rule set
Splitting models
Guidelines for simplification
Use judgment on whether a simplification will have a significant effect on model accuracy, and get agreement from the client. The aim is faster development time.
Compare model performance with and without the simplification. This gives better certainty about the simplification, but a longer development time.
Simplification should not compromise transparency or result in a loss of confidence by the client or decision maker.
Data Collection
Uses of data in the simulation modeling process:
Preliminary or contextual data - qualitative information leading to an understanding of the problem and its situation
Model realization data - quantitative data for developing and running the model
Model validation data - quantitative and qualitative data about the real-world system, for comparison with the output of the simulation model
Types of Data
Data that is readily available - layout, throughput, staffing levels, schedules, service times
Data that is NOT available but collectible - arrival patterns, machine failure rates, repair times, nature of decision making
Data that is NOT available and NOT collectible - rare failure times, availability of personnel for data collection, machine failures, lost transactions
Dealing with Unobtainable Data
Data may be estimated from other sources. Use surrogate data from similar systems. Example: predetermined time and motion information, standard times, etc.
Consider the data as an experimental factor. Example: if the machine failure rate is not available, ask what failure rate would be acceptable in order to achieve the desired throughput.
Other Data Issues
Data Accuracy - historical data is not necessarily a good indicator of future patterns. Example: historical breakdown patterns and arrival patterns may not occur in the future.
Data Format - the contextual meaning of the data should be explicit. Example: time between failures (TBF).
Data Representing Unpredictable Variability (Randomness)
Traces - a stream of data based on the actual sequence of events of the real-world system
Empirical Distributions - trace data summarized and converted into a frequency distribution
Statistical Distributions - known probability density functions
Bootstrapping - re-sampling from a small data set
Verification and Validation
Verification - the process of ensuring that the conceptual model has been successfully transformed into a computer model
Validation - the process of ensuring that the model is sufficiently accurate to represent the real-world system being modeled
CHAPTER 7: RANDOM NUMBER GENERATION
Properties of Random Numbers
Uniformity
Independence
Characteristics
Continuous uniform distribution between 0 and 1.
If the interval (0,1) is divided into n classes of equal length, the expected number of observations in each class is N/n, where N is the total number of observations.
The probability of observing a particular value is independent of the previously generated numbers.
Pseudo-Random Numbers:
A sequence of numbers between 0 and 1 which simulates the ideal properties of uniform distribution and independence as closely as possible.
Possible errors of Pseudo-Random Numbers
Generated numbers may not be uniformly distributed
Numbers may be discrete-valued instead of continuous
Mean and variance may be too high or too low
Cyclic variations may occur, such as autocorrelation between numbers, runs of successively higher or lower numbers, or groupings of numbers

Characteristics of Routines (Generators):


Fast
Portable
Long cycle
Replicable
Imitate randomness
Techniques for Generating Pseudo-Random Numbers
o Mid-square method
o Mid-product method
o Linear Congruential method
o Combined Congruential method
Linear Congruential method
o Proposed by Lehmer (1951)
o Produces a sequence of integers between 0 and m-1 through the recurrence Xi+1 = (a·Xi + c) mod m, i = 0, 1, 2, ...
o If c = 0, it is a multiplicative congruential method
o If c ≠ 0, it is a mixed congruential method
Where:
X0 = seed
Xi = random integer
c = increment
a = constant multiplier
m = modulus
Ri = random number = Xi / m
Combined Congruential method
combination of two or more multiplicative
congruential generators
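A sketch of one way to combine two multiplicative congruential generators, in the style of L'Ecuyer (1988), by differencing their outputs; the moduli and multipliers below are the commonly published values and are included here only for illustration.

# Two multiplicative congruential generators combined by differencing their states.
M1, A1 = 2147483563, 40014
M2, A2 = 2147483399, 40692

def combined_generator(seed1, seed2, n):
    x1, x2 = seed1, seed2
    out = []
    for _ in range(n):
        x1 = (A1 * x1) % M1                  # first generator
        x2 = (A2 * x2) % M2                  # second generator
        z = (x1 - x2) % (M1 - 1)             # combined integer stream
        out.append(z / M1 if z > 0 else (M1 - 1) / M1)
    return out

print(combined_generator(seed1=12345, seed2=67890, n=3))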
Hypothesis Tests for Random Numbers
Hypothesis when testing for uniformity:
H0: the Ri are uniformly distributed on [0,1]
H1: the Ri are not uniformly distributed on [0,1]
Hypothesis when testing for independence:
H0: the Ri are independent
H1: the Ri are not independent
Test for Random Numbers:
a. Frequency test
i. Kolmogorov-Smirnov test
ii. Chi-square test
b. Runs test

i. Runs up
ii. Runs down
iii. Runs above the mean
iv. Runs below the mean
c. Autocorrelation test
d. Gap test
e. Poker test
f. Other Tests
i. Good's Serial Test [1953, 1967]
ii. Median-Spectrum Test [Cox and Lewis, 1966; Durbin, 1967]
iii. A Variance Heterogeneity Test [Cox and Lewis, 1966]
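As a sketch of how the frequency tests in item (a) above could be run in practice (assuming SciPy is available; the stream being tested here is just Python's built-in generator):

import random
import numpy as np
from scipy import stats

random.seed(0)
numbers = [random.random() for _ in range(500)]        # stream of numbers to be tested

# Kolmogorov-Smirnov frequency test against the uniform(0,1) distribution.
ks_stat, ks_p = stats.kstest(numbers, "uniform")
print(f"K-S: D = {ks_stat:.4f}, p-value = {ks_p:.4f}")

# Chi-square frequency test: 10 equal classes, expected count N/n in each class.
observed, _ = np.histogram(numbers, bins=10, range=(0, 1))
chi_stat, chi_p = stats.chisquare(observed)            # equal expected frequencies by default
print(f"Chi-square: X2 = {chi_stat:.4f}, p-value = {chi_p:.4f}")

# H0 (uniformity) is not rejected when the p-values exceed the chosen significance level.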
CHAPTER 9: RANDOM VARIATE GENERATION
Random Variate - a value sampled from a specified distribution of an input variable.
Ex. inter-arrival time and service time.
RV generators - techniques used to generate random variates.
Techniques in Generating Random Variates
1) Inverse Transform Technique
This technique is used to sample from distributions such as the exponential, Weibull, triangular, and empirical distributions. It is the most straightforward technique, but not always the most efficient.
Steps in the Inverse Transform Technique
1. Compute the cdf F(x) of the desired random variable X.
2. Set F(X) = R on the range of X.
3. Solve the equation F(X) = R for X in terms of R: X = F^-1(R).
4. Generate uniform random numbers Ri and compute the desired random variates by Xi = F^-1(Ri).
Derivation of the RV generator for an exponential distribution
Ex. Given the exponential distribution
f(x) = λe^(-λx) for x ≥ 0, and 0 otherwise,
the cdf is F(x) = 1 - e^(-λx), so setting F(X) = R and solving for X gives the random variate generator
Xi = -(1/λ) ln(1 - Ri)
or, since 1 - R is also uniformly distributed on (0,1),
Xi = -(1/λ) ln(Ri)
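A minimal sketch of this generator in Python (the rate λ = 2 is an illustrative assumption):

import math
import random

def exponential_variate(lam, rn):
    """Inverse transform for the exponential distribution: X = -(1/lambda) ln(1 - R)."""
    return -math.log(1.0 - rn) / lam

random.seed(3)
samples = [exponential_variate(lam=2.0, rn=random.random()) for _ in range(1000)]
print(sum(samples) / len(samples))      # sample mean approaches 1/lambda = 0.5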
2) Uniform Distribution
f(x) = 1/(b - a) for a ≤ x ≤ b, and 0 otherwise
Thus, the RVG is
Xi = a + (b - a)Ri
3) Triangular Distribution
f(x) = x for a ≤ x ≤ b, c - x for b < x ≤ c, and 0 otherwise
Thus, the RVG is (if a = 0, b = 1, and c = 2)
X = √(2R) for 0 ≤ R ≤ 1/2
X = 2 - √(2(1 - R)) for 1/2 < R ≤ 1
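A sketch of the piecewise generator above for the triangular(0, 1, 2) case:

import math
import random

def triangular_variate(rn):
    """Inverse transform for the triangular distribution with a = 0, b = 1, c = 2."""
    if rn <= 0.5:
        return math.sqrt(2.0 * rn)                 # X = sqrt(2R) for 0 <= R <= 1/2
    return 2.0 - math.sqrt(2.0 * (1.0 - rn))       # X = 2 - sqrt(2(1 - R)) otherwise

random.seed(4)
print([round(triangular_variate(random.random()), 3) for _ in range(5)])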
4) Direct Transformation for the Normal Distribution
The normal cdf is
F(x) = (1/√(2π)) ∫ from -∞ to x of e^(-t²/2) dt, for -∞ < x < ∞,
which has no closed-form inverse, so a direct transformation is used instead.
Two standard normal variables, plotted as a point and represented in polar coordinates, can be generated from two uniform random numbers R1 and R2 as:
Z1 = (-2 ln R1)^(1/2) cos(2πR2)
Z2 = (-2 ln R1)^(1/2) sin(2πR2)
and then Xi = μ + σZi.
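A sketch of the direct (Box-Muller) transformation; the mean and standard deviation passed in are illustrative assumptions.

import math
import random

def box_muller(mu=0.0, sigma=1.0):
    """Direct transformation: two uniform random numbers -> two normal variates."""
    r1 = 1.0 - random.random()                 # in (0, 1], avoids log(0)
    r2 = random.random()
    z1 = math.sqrt(-2.0 * math.log(r1)) * math.cos(2.0 * math.pi * r2)
    z2 = math.sqrt(-2.0 * math.log(r1)) * math.sin(2.0 * math.pi * r2)
    return mu + sigma * z1, mu + sigma * z2

random.seed(5)
print(box_muller(mu=10.0, sigma=2.0))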
5) Convolution Method
The probability distribution of a sum of two or
more independent random variables is called a
convolution of the distributions of the original
variables.
The convolution method refers to adding together two or more random variables to obtain a new random variable with the desired distribution. This technique is useful for Erlang and binomial variates.
For the Erlang distribution: an Erlang variate with k exponential phases, each with rate λ, is the sum of k independent exponential(λ) variates, which reduces to X = -(1/λ) ln(R1·R2···Rk).
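A sketch of the convolution method for an Erlang variate (the number of phases k and the rate λ are illustrative assumptions):

import math
import random

def erlang_variate(k, lam):
    """Convolution method: sum of k independent exponential(lambda) variates,
    equivalent to X = -(1/lambda) * ln(R1 * R2 * ... * Rk)."""
    return sum(-math.log(1.0 - random.random()) / lam for _ in range(k))

random.seed(6)
samples = [erlang_variate(k=3, lam=2.0) for _ in range(1000)]
print(sum(samples) / len(samples))        # should approach k/lambda = 1.5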
Acceptance and Rejection Technique
The efficiency of the technique depends on being able to minimize the number of rejections.
Example of the Acceptance-Rejection Technique
Generate uniformly distributed random variates on [1/4, 1]:
STEP 1: Generate a random number RN.
STEP 2: If RN ≥ 1/4, accept and let X = RN. If RN < 1/4, reject and return to Step 1.
STEP 3: If another uniform random variate on [1/4, 1] is needed, go to Step 1.
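A sketch of the three steps above in Python:

import random

def uniform_quarter_to_one():
    """Acceptance-rejection: keep drawing until RN >= 1/4, then accept X = RN."""
    while True:
        rn = random.random()          # Step 1: generate RN
        if rn >= 0.25:                # Step 2: accept
            return rn
        # otherwise reject and return to Step 1

random.seed(8)
print([round(uniform_quarter_to_one(), 3) for _ in range(5)])   # Step 3: repeat as needed
# On average one draw in four is rejected, which sets the efficiency of the technique.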
Analysis of Simulation Data
Data Collection
Identification of Distribution of Data
Parameter Estimation
Goodness of fit Test
CHAPTER 9: Input Modeling
4 Steps to Input Modeling
1. Collect data from real system
Requires substantial time and resources
When data is unavailable (due to time limits or no existing process):
Use expert opinion
Make an educated guess based on knowledge of the process
2. Identify probability distribution to represent
input process
Develop frequency distribution or
histogram
Choose a family of distributions
3. Choose the parameters of the distribution
family.
These parameters are estimated from the
data.
4. Evaluate the chosen distribution and its
parameters.
Goodness of fit test : chi-square or KS
test.
This is an iterative process of selecting and rejecting different distributions until a suitable fit is found (see the sketch after this list).
If none is found, create an empirical
distribution.
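A sketch of steps 2-4 using SciPy, with synthetic exponential inter-arrival data standing in for collected data (the family, parameters, and sample size are assumptions):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.exponential(scale=4.0, size=200)      # stand-in for collected inter-arrival times

# Step 3: estimate the parameters of the candidate family from the data.
loc, scale = stats.expon.fit(data, floc=0)       # location fixed at 0, mean estimated

# Step 4: goodness-of-fit test of the fitted distribution (K-S test here).
ks_stat, p_value = stats.kstest(data, "expon", args=(loc, scale))
print(f"fitted mean = {scale:.2f}, K-S D = {ks_stat:.3f}, p = {p_value:.3f}")
# A small p-value rejects this family; try another distribution or fall back to an empirical one.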

Data Collection Problems
a. Inter-arrival times that are not homogeneous
b. Service times that are dependent on other factors
c. Service time termination
d. Machine breakdowns
Data Problems with Simulation
a. No Data
b. Old Data
c. Missing Data
d. Guesstimates
e. Erroneous Data
f. No resource
g. Simplifying assumptions
h. Using Averages
i. Outliers
j. Optimistic Data
k. Politics
Consequence of Data Problems
a. Bad data equals bad models
b. The Best models fail under bad data
c. Successful simulation is unlikely with bad
data
Guidelines in Data Collection
a. Always question the data
b. Electronic data does not necessarily mean good data
c. Know the source
d. Allocate sufficient time to collect and analyze data
Suggestions to facilitate data collection:
1. Plan.
Collect data while pre-observing.
Create forms and be prepared to modify them when needed.
Videotape if possible and extract the data later.
2. Analyze.
Determine if the data is adequate.
Do not collect superfluous data.
3. Try to combine homogeneous data sets.
Use a two-sample t-test.
4. Be wary of data censoring.
5. Look for relationships between variables using a scatter plot.
6. Be aware of autocorrelation within a sequence of observations.

CHAPTER 10: Validation and Verification
Adapted from Jerry Banks
Verification
Concerned with building the model right
Comparison of conceptual model and computer
representation
Is the model implemented correctly in the
computer?
Are the inputs and logical parameters
represented properly?
Validation
Concerned with building the right model
Accurate representation of the real system
This is achieved through the calibration of the
model
Iterative process until accuracy is acceptable

Model Building, Verification, and Validation


Common sense suggestions for verification
a. Have someone check the computerized model
b. Make a flow diagram (with logical actions for
each possible event)
c. Examine model output for reasonableness
d. Print the input parameters at the end of the
simulation
e. Make the computerized representation as self-documenting as possible
f. If the model is animated, verify what is seen
g. Use an interactive run controller (IRC) or debugger
h. Use a graphical interface
Three Classes of Techniques for Verification
a. Common sense techniques
b. Thorough documentation
c. Traces
Calibration and Validation
Validation is the overall process of comparing
the model and its behavior to the real system
and its behavior
Calibration is the iterative process of comparing
the model to the real system and making
adjustments to the model, and so on.
Iterative Process of Calibration
3-Step Approach by Naylor and Finger (1967)
1. Build a model with high face validity.
2. Validate model assumptions.
3. Compare the model's input-output transformations to the corresponding input-output transformations of the real system.
Possible validation techniques in order of
increasing cost-value ratio by Van Horn (1971)
a. High face validity. Use previous
research/studies/observation/experience
b. Conduct statistical test for data
homogeneity, randomness, and goodness
of fit test
c. Conduct Turing test. Have a group of
experts compare model output versus
system output and detect the difference
d. Compare model output to system output
using statistical tests
e. After model development, collect new data
and apply previous 3 tests
f. Build a new system or redesign the old one
based on simulation results and use this
data to validate the model
g. Do little or no validation; implement results without validating.
CHAPTER 11: Output Analysis of a Single Model
Purpose
Analysis of data generated by a simulation.
To predict and compare performance of a
model
Simulation output exhibits randomness, so it is necessary to estimate:
the performance measure of the model, θ, and
the precision of its point estimator, measured by the standard error (variance).
Types of simulation w/ respect to output analysis:
Terminating/transient simulation
Nonterminating/steady state simulation
Terminating/transient simulation
One that runs for some duration TE, where E is a
specified event (or set of events) which will stop
the simulation.
Such a model opens at time 0 under specified
conditions and closes at TE.
Nonterminating/Steady state simulation
Simulation whose objective is to study the long
run, or steady state, behavior of nonterminating
systems.
The system opens at time 0 under initial conditions defined by the analyst, and runs for some analyst-specified period TE.
Measures of Performance and their estimation:
Point estimation of the performance values from the model.
Two types:
A. Within replication.
B. Among replications.
Interval estimation.
For Terminating Simulations:
Compute confidence intervals with a fixed number of replications, using the usual formulas with n = R (a sketch follows below).
Compute confidence intervals with specified precision using the half-width criterion.
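A sketch of the across-replication confidence interval for a terminating simulation (the replication averages below are placeholder values, not real output):

import math
from scipy import stats

# Assumed across-replication averages of the performance measure, one per replication.
rep_means = [4.2, 3.8, 4.5, 4.1, 3.9, 4.4, 4.0, 4.3]
R = len(rep_means)

mean = sum(rep_means) / R
s = math.sqrt(sum((y - mean) ** 2 for y in rep_means) / (R - 1))   # sample standard deviation
half_width = stats.t.ppf(0.975, df=R - 1) * s / math.sqrt(R)       # 95% half-width, n = R

print(f"point estimate = {mean:.2f}, 95% CI = ({mean - half_width:.2f}, {mean + half_width:.2f})")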
For Steady State Simulations:
Choose the run length with the following
considerations:
A.) Bias in the point estimator due to
artificial or arbitrary initial condition.
B.) Bias can be severe if run length is too
short, but decreases as run length is increased.
C.) Precision of the estimator is measured
based on an estimate of point-estimator
variability.
D.) Budget constraints on computer
resources.
Initialization bias can be minimized by:
Intelligent initialization - initialize based on the expected long-run state.
A.) Use existing data from the system as a basis (if the system exists)
B.) Use results from a simplified model (if the system does not exist)
Deletion - reduce the impact of initial conditions by dividing a run into two phases: a first phase from t = 0 to t = T0, followed by a second phase from T0 to T0 + TE, so that the simulation stops at time t = T0 + TE.
Deletion can be done in the following ways:
Ensemble averages - plot the mean and confidence limits across replications; the intervals can be used to judge whether the plot is precise enough to conclude that the bias has diminished. This is the preferred method (a sketch follows below).
Cumulative averages - useful in situations where only a single replication is possible.
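A sketch of deletion with ensemble averages, using synthetic output whose initialization bias decays over time (the number of replications, run length, and deletion point T0 are assumptions):

import numpy as np

rng = np.random.default_rng(1)
n_reps, run_length, t0 = 10, 200, 50              # t0 = assumed deletion (warm-up) point

# Synthetic replication output: true mean 10, plus a decaying initialization bias and noise.
bias = 3.0 * np.exp(-np.arange(run_length) / 20.0)
output = 10.0 + bias + rng.normal(0.0, 1.0, size=(n_reps, run_length))

ensemble_avg = output.mean(axis=0)                # average across replications at each time index
steady_estimate = ensemble_avg[t0:].mean()        # use only the second phase, from T0 onward
print(f"with deletion: {steady_estimate:.2f}, without deletion: {ensemble_avg.mean():.2f}")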
Computing the sample size in a steady-state simulation:
1. Solve for R, based on an initial sample of R0 replications.
2. Compute the ratio R/R0.
3. Scale the first phase to (R/R0)·T0 and the total run length to (R/R0)·(T0 + TE).
CHAPTER 12: Comparison and Evaluation of Alternative Designs
2 statistical techniques used in comparing systems:
Independent Sampling - each system is simulated with its own independent random number streams, so the outputs of the two systems are uncorrelated.
Correlated Sampling or Common Random Numbers - both systems are driven by the same random number streams, so that they respond in the same direction for each input random variate (monotonically). This is common for certain simple queueing problems, although responses moving in opposite directions (negative correlation) have been observed in some inventory problems.
Comparing 2 Systems:
It is necessary to make use of confidence intervals when comparing two systems.
The confidence interval is constructed on the difference between the two systems' performance measures.
Three possible scenarios when computing the confidence interval of the differences (at 90%, 95%, or 99%):
When using independent sampling:
Independent sampling with equal variances
Independent sampling with unequal variances
When using correlated sampling:
Dedicate a random number stream to a specific purpose; use as many streams as needed.
Use the attributes of an entity to consistently apply the same service times, order quantities, etc. (which are dependent on the entity).
Use a specific stream for activities with cycles, for example changes in shifts.
Synchronize if possible; otherwise, use independent random numbers (a sketch of a paired-difference interval follows below).
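A sketch of the correlated-sampling case: with common random numbers the replications are paired, so the interval is built on the paired differences (the replication results below are placeholder values):

import math
from scipy import stats

# Assumed replication results for two designs, paired by common random numbers.
system_1 = [12.3, 11.8, 13.1, 12.6, 12.0, 12.9]
system_2 = [11.1, 10.9, 12.2, 11.7, 11.3, 12.0]

diffs = [a - b for a, b in zip(system_1, system_2)]
R = len(diffs)
d_bar = sum(diffs) / R
s_d = math.sqrt(sum((d - d_bar) ** 2 for d in diffs) / (R - 1))
half_width = stats.t.ppf(0.975, df=R - 1) * s_d / math.sqrt(R)     # 95% CI on the difference

print(f"difference CI: ({d_bar - half_width:.2f}, {d_bar + half_width:.2f})")
# If the interval excludes zero, the two systems differ significantly at the 5% level.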
Comparison of Several Designs
Possible Goals of an Analyst:
Estimate each performance measure
Compare each performance measure to a
present system (control)
All possible comparison
Selection of the best
Using the Bonferroni Approach for Comparison:
When making statements about several alternatives, an analyst would like to be confident that all statements are true simultaneously.
This method can be used in three ways:
Individual C.I.s of a single system with multiple performance measures. By the Bonferroni inequality, the overall error probability is at most the sum of the individual alphas used for the statements.
Comparison to a present system (control). Construct a (1 - αj) confidence interval for each comparison, with the individual αj summing to the desired overall error.
All possible comparisons. Use the same Bonferroni bound on the sum of the alphas; this assumes that correlated sampling was used.
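As a worked example of the Bonferroni idea (the numbers here are assumed for illustration): with an overall error of αE = 0.05 spread over c = 5 confidence statements, each individual interval can be built with αj = 0.05/5 = 0.01, i.e., at the 99% level. The Bonferroni inequality then guarantees that all five statements hold simultaneously with confidence of at least 1 - 0.05 = 95%.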
Selecting the Best
Determining the best of the alternatives
Determining how much better the best is relative to the rest of the alternatives
NOTE: It may be that selecting the second best is more practical, less costly, or more feasible, while still being only insignificantly different from the best.
Understanding the Effect of the Design Alternatives:
Use the power of design of experiments (DOE).
Some of the tools under DOE are:
Factorial designs (useful for understanding the effects of the alternatives)
Screening designs, such as fractional factorial and Plackett-Burman (useful for trimming down the unimportant alternatives)
Response surface designs, such as Central Composite and Box-Behnken (useful for identifying the optimal setup within an alternative)
Metamodeling
Constructing a relationship between the performance measure, Y, and the design variables, X.
Some common relationships are: simple linear regression, nonlinear relationships, and multiple linear regression.
To verify whether these relationships are reliable for predicting the effect on the performance measure, it is necessary to test the significance of the regression (ANOVA is used). A sketch of a simple regression metamodel follows below.
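A sketch of a simple linear regression metamodel, with the regression's significance read from its p-value; the design-variable and response values are placeholders:

from scipy import stats

# Assumed experiment results: design variable X (e.g., number of servers) and
# observed performance measure Y (e.g., average waiting time).
x = [1, 2, 3, 4, 5, 6]
y = [9.8, 7.1, 5.6, 4.9, 4.2, 3.9]

fit = stats.linregress(x, y)
print(f"Y = {fit.intercept:.2f} + {fit.slope:.2f}*X, R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.4f}")
# A small p-value indicates the linear relationship is statistically significant.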
