
• Closely examine model output under various inputs to see if it appears correct

– Plug in values where you know exactly what the output should be
– Plug in extreme values or rare events to “stress test” the model
• The goal of validation is to develop a model that can be used as a substitute for the real system
for the purposes of experimentation
– If the model is “valid”, then it can be used to make decisions about the system similar to
those that would be made if it were feasible to experiment with the system itself
– By validating the model of the system “as is”, we give credibility to its validity for models
of system configurations that do not yet exist
• No such thing as a 100% valid model
• Should view validation from a cost-benefit perspective
– More detail and time will generally lead to better validity, but at what cost, and how much
time? Sensitivity analysis is very helpful here.
• Validation is performed throughout the project timeline (not just something done at the end)

• Involves interaction with project managers, analysts, system experts throughout model
development
– Is the representation of the conceptual model reasonable for its intended purpose?
– Are the underlying theories and assumptions correct?
– Do the outputs of the operational model seem reasonable?

• Structural assumptions
– Related to how the system operates, and usually involve simplifications and abstractions
of reality
– E.g., in a queueing system, is there one line feeding multiple servers, or one line per
server? What is the queue discipline? Do customers jump lines if one is shorter? Can
customers behind others jump ahead of them to the other lines?

• Data assumptions
– Based on collection of data and correct statistical analysis
– We’ve seen much of this already:
• Do data appear IID?
• Identify a family of probability distributions for the data
• Estimate the parameters of the distribution
• Validate these choices by Q-Q plots, goodness-of-fit tests, etc.
• Are the assumptions required for a Poisson process satisfied?
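The input-modeling steps above can be sketched in a few lines. This is a minimal illustration using synthetic data (a stand-in for collected service times) and assumed parameters; it fits an exponential family and checks the fit with a Kolmogorov-Smirnov test:

```python
# Sketch: fitting and validating a candidate input distribution.
# The data here are synthetic placeholders for real observations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=500)  # stand-in for observed service times

# Estimate parameters of the candidate family (exponential, location fixed at 0)
loc, scale = stats.expon.fit(data, floc=0)

# Goodness-of-fit: KS test of the data against the fitted distribution
ks_stat, p_value = stats.kstest(data, "expon", args=(loc, scale))
print(f"fitted mean = {scale:.3f}, KS p-value = {p_value:.3f}")
```

A Q-Q plot of the data against the fitted quantiles would complement the formal test; note that the KS p-value is optimistic when parameters are estimated from the same data.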
• Might think of comparing these two via formal hypothesis test (e.g., does the mean of the
model = mean of the actual system?)
– Better to avoid hypothesis testing and focus on confidence intervals (CIs)
– Move away from asking whether there is a statistically significant difference toward
whether there is a practically significant difference
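One way to act on this advice is a confidence interval on the difference between the model mean and the real-system mean (here via a Welch-style interval). The data below are illustrative placeholders, not real measurements:

```python
# Sketch: CI on (model mean - real mean) instead of a pass/fail hypothesis test.
# Both samples are synthetic stand-ins, e.g., average waiting times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
real = rng.normal(10.0, 2.0, size=40)    # observed system output
model = rng.normal(10.5, 2.0, size=40)   # simulation output

diff = model.mean() - real.mean()
v_m = model.var(ddof=1) / len(model)
v_r = real.var(ddof=1) / len(real)
se = np.sqrt(v_m + v_r)

# Welch-Satterthwaite degrees of freedom
df = (v_m + v_r) ** 2 / (v_m**2 / (len(model) - 1) + v_r**2 / (len(real) - 1))
t = stats.t.ppf(0.975, df)
ci = (diff - t * se, diff + t * se)
print(f"95% CI for mean difference: ({ci[0]:.2f}, {ci[1]:.2f})")
```

The practical-significance question then becomes: does the entire interval lie within a tolerance the decision-maker considers negligible?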
• Graphical displays are quite helpful
– E.g., compare boxplots of simulation output with real output
– Compare counts, averages, etc. using barcharts
– Dynamic displays (e.g., variables, plots) and animation can also be useful
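A side-by-side boxplot of simulated versus real output is one of the simplest such displays. A minimal sketch with illustrative gamma-distributed data (and a hypothetical output file name):

```python
# Sketch: side-by-side boxplots of real vs. simulated output.
# Data and file name are illustrative placeholders.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted use
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
real = rng.gamma(shape=2.0, scale=3.0, size=60)  # stand-in for observed times
sim = rng.gamma(shape=2.0, scale=3.2, size=60)   # stand-in for model output

fig, ax = plt.subplots()
ax.boxplot([real, sim])
ax.set_xticklabels(["Real system", "Simulation"])
ax.set_ylabel("Time in system")
fig.savefig("validation_boxplots.png")
```

If the medians, spreads, and outlier patterns line up visually, that is informal but persuasive evidence of validity; large visual discrepancies point to where the model needs work.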

• How do outputs change over range of input changes?


• How does optimal decision change over range of inputs?
• Do outputs change in the direction you expect as inputs change (e.g., speed up the arrival
rate, slow down the arrival rate)?
• Stress testing: change inputs to very large values and very small values
• What happens if you eliminate variability (e.g., set interarrival and service times to constant
durations)?
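These checks can be sketched against a model with a known answer. Here the "model" is the analytic M/M/1 mean time in system, W = 1/(mu - lambda), with rates chosen purely for illustration; a simulation model would be probed the same way:

```python
# Sketch: directional, stress, and zero-variability checks on an M/M/1 queue.
# Rates are illustrative; W = 1/(mu - lambda) is the analytic mean time in system.
def mean_time_in_system(lam, mu):
    assert lam < mu, "queue must be stable (lambda < mu)"
    return 1.0 / (mu - lam)

mu = 10.0  # service rate

# Directional check: speeding up arrivals should increase time in system
waits = [mean_time_in_system(lam, mu) for lam in (2.0, 5.0, 8.0, 9.5)]
assert all(a < b for a, b in zip(waits, waits[1:]))

# Stress test: push lambda toward mu and watch the output blow up
print(mean_time_in_system(9.99, mu))

# Zero variability: with constant interarrival (1/lam) and service (1/mu)
# times and lam < mu, no queue forms, so time in system is just 1/mu
print(1.0 / mu)
```

If a simulation model's outputs move in the opposite direction from such known results, that is a strong signal of a conceptual or coding error.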
