
journal homepage: www.elsevier.com/locate/simpat

experiments in exploratory modelling

Enayat A. Moallemi⁎, Sondoss Elsawah, Michael J. Ryan

Capability Systems Centre, School of Engineering and Information Technology, The University of New South Wales, Canberra, Australia

Keywords: Exploratory modelling; Uncertainty; Simulation; Design of experiments; Scenario; Decision support

ABSTRACT

Exploratory modelling is an approach for modelling under uncertainty based on the generation and analysis of computational experiments. The results of exploratory modelling are sensitive to the way that experiments are designed, such as the way that the uncertainty space is delineated. This article introduces an agent-monitored framework—i.e. a design metaphor of the interactions among modellers and stakeholders and the simulation process—for controlling the design of experiments based on monitoring model behaviour in the output space. To demonstrate the benefits of the suggested framework in the exploratory modelling process, the article shows how the use of the framework with an output-oriented approach informs the delineation of an appropriate uncertainty space with an illustrative example in the decision-making context. The article concludes that the design of experiments based on feedback from the output space can be a useful approach: to control simulations in exploratory modelling; to build more confidence in final results; and to inform the design of other aspects of experiments, such as selecting policy levers and sampling method.

1. Introduction

Exploratory modelling is a computational approach to modelling—in decision making, theory development, and other applications—under various forms of uncertainty in model input and model structure [2,3,32,40,44]. This approach deals with uncertainties by running many computer simulations and by analysing the implications of a variety of parametric and non-parametric assumptions on final results. At the core of exploratory modelling is the use of simulation models—models being conceptualised either as fully parameterised sets of interconnected functions (e.g. model definition in [2]) or as a single model file (e.g. model use in [33])—for the generation of an extensive database of computational experiments. Each experiment corresponds to one simulation run which is set up based on one assumption from the input space (such as the model file and input parameters) and represents the results of the simulation in the output space (such as solutions in a decision problem or performance measures in a performance assessment). Exploratory modelling uses the generated experiments to assess the possible impacts of a variety of assumptions from an input space on an output space using a range of search strategies and analytical methods [33].
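The input-to-output mapping described above (one assumption from the input space, one simulation run, one record in the output space) can be sketched in a few lines of Python. The toy model, its parameters, and their ranges are hypothetical stand-ins for illustration, not anything used in this article:

```python
import random

# A hypothetical one-line "model file": together with its input
# parameters it constitutes one assumption from the input space.
def toy_model(growth_rate, initial_stock):
    # Returns an outcome of interest: the stock after 10 periods.
    stock = initial_stock
    for _ in range(10):
        stock *= (1 + growth_rate)
    return {"final_stock": stock}

def run_experiment(assumption):
    """One experiment: a single simulation run set up from one
    assumption, recorded together with its results in the output space."""
    outputs = toy_model(**assumption)
    return {"inputs": assumption, "outputs": outputs}

# Build a small database (ensemble) of computational experiments.
random.seed(42)
ensemble = [
    run_experiment({"growth_rate": random.uniform(-0.05, 0.15),
                    "initial_stock": random.uniform(50, 150)})
    for _ in range(100)
]
```

The ensemble can then be searched and analysed with whatever strategy suits the goal of the modelling.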

The ways that experiments—in relation to their input and output spaces—are designed and then generated have been discussed in previous studies [8,33,34,37,53]. These studies have discussed the design of experiments in terms of: the selection of outcomes of interest to analyse in the output space; the list of policy levers from the input space whose impacts need to be investigated; the simulation model(s) (defined as a single model file or a group of model files in the input space) with which to generate experiments; a list of critical uncertainty factors and an appropriate uncertainty space (i.e. the range of variation for each uncertainty factor) over which policy levers are investigated; and a sampling method and sample size to choose from the uncertainty space. An appropriate design of experiments is critical as it affects the confidence in results and their interpretation [15]. The uncertainties caused by inconsistent methods, assumptions, and boundaries can lead to confusion in the understanding of available information [46]. Different initial setups can lead to different ensembles of experiments and various model behaviours in the output space, and therefore to divergent (modelling) insights and (decision) conclusions [54,56]. An appropriate design of experiments is also critical as it affects the computational burden and simulation time [38].

⁎ Corresponding author. E-mail address: e.moallemi@unsw.edu.au (E.A. Moallemi).
https://doi.org/10.1016/j.simpat.2018.09.008
Received 12 February 2018; Received in revised form 7 September 2018; Accepted 14 September 2018. Available online 15 September 2018.
1569-190X/ © 2018 Elsevier B.V. All rights reserved.
E.A. Moallemi et al., Simulation Modelling Practice and Theory 89 (2018) 48–63

Several decision-making and planning frameworks which adopt exploratory modelling are based on the generation and analysis of computational experiments which need to be designed. Among them are several potential applications of exploratory modelling in Robust Decision Making [43], Adaptive Policymaking [23], Dynamic Adaptive Policy Pathways [36], Multi-objective Robust Decision Making [28], and a participatory exploratory modelling approach [53]. One common feature of these exploratory modelling applications is that the analysis of experiments within these frameworks is ‘goal-oriented’, in the sense that the ensemble of generated experiments is analysed to serve a specific goal (or answer a specific question), i.e. to explain, to test, or to manage output(s) of interest. Despite the goal-oriented process of exploratory modelling, experiments are often designed with an input-oriented approach focusing on sampling from the input uncertainty space, with limited knowledge about which areas of that uncertainty space could lead to more interesting model behaviour and better serve the goal of modelling. An input-oriented approach can miss exceptional model responses in models with non-linear behaviours when only a specific area of the input uncertainty space can generate a wide spectrum of behaviours in the output space [26]. Further, the input-oriented approach can impose an extra computational burden when endeavouring to cover an unnecessarily (but safely) wide uncertainty space without considering the relevance of the generated outcomes.

Experiments are also often designed through the modeller's interactions between the output space and input space—a back-and-forth process in which a number of experiments are performed before finally settling on the set of experiments to use for analysis. These interactions for enhancing the modeller's insights usually take place in an ad hoc manner, often implicitly and not reported in the discussion of the published results, which inhibits the reproducibility of results and affects the model's credibility [11]. This implies the lack of a systematic approach to learning from feedback between input and output spaces for informing the design of experiments as an explicit step within the process of exploratory modelling.

This article addresses the challenge of the input-oriented approach by developing an ‘agent-monitored framework’ to inform the design of experiments in exploratory modelling. The agent-monitored framework adds to previous exploratory modelling research by guiding and structuring the design of experiments in the modelling process through agent-monitored simulation: a modelling and simulation concept in which the deliberation, learning, and interaction of an agent with the simulation results are used as a design metaphor to control the simulation process [60,66]. The agent is defined as a human (e.g. a modeller), an autonomous computer entity, or an interaction of both. This concept focuses on ‘agents for simulation’ as opposed to the prevalent use of ‘simulation of agents’ in Agent-Based Modelling.

This agent can benefit from the computational discovery ability of the computer and/or the cognitive capabilities of the modeller to inform the design of experiments through an ‘output-oriented’ knowledge processing of model results. The output-oriented approach is one in which the modeller controls and sets up simulations by screening the model behaviour and learning from the output space over time (see e.g. Yilmaz [66], Watson and Kasprzyk [64], and Islam and Pruyt [26]). The agent-monitored framework enables co-evolution and symbiotic adaptation of the input space based on monitoring model behaviour (the output space). This leads to designing experiments which are fit for the purpose of exploratory modelling and facilitates a more in-depth investigation of the model behaviour with a more efficient computational process.

This idea of an agent controlling the simulation process builds on studies in the context of robust decision making by the RAND Corporation (e.g. [44]), which place the human in the loop of the exploratory modelling process and emphasise combining machine and human capabilities interactively. These studies argue that model users should also be responsible for modelling and should control the computational process when they observe counterintuitive results, based on their shared visions of the problem and model behaviour. However, this discussion has remained mainly conceptual and has not been implemented so far.

We focus on the design of experiments in exploratory modelling for the purposes of decision making and policy analysis, where the normative direction of the output space (i.e. which model behaviour is desired and what direction should be pursued) is meant to be clearly stated in the problem formulation. We also demonstrate the application of this framework to the design of experiments in terms of ‘delineating an appropriate uncertainty space’. However, the structure of the suggested agent-monitored framework is generic and can be extended and equipped with other methods to cope with other aspects of the design of experiments (such as choosing the sample size). This article briefly discusses how an output-oriented approach can inform the design of these other aspects.


The rest of the article is structured as follows. Section 2 gives the background needed for this research with examples from the existing literature on the design of experiments. Section 3 introduces the agent-monitored framework conceptually. Section 4 illustrates the implementation of the framework for choosing an appropriate uncertainty space in a decision-making example. Section 5 discusses the benefits of the output-oriented approach for other aspects of experiments. Section 6 briefly discusses some future research directions.

2. Background

This section reviews the design of experiments with examples from previous studies explicitly dealing with it. Researchers using exploratory modelling in decision-making frameworks need to design experiments by organising the available relevant knowledge. This knowledge can be classified into different aspects (such as choosing outcomes of interest and specifying policy levers), as stated in Lempert et al. [44]. These aspects of experimental design are summarised as follows.

Outcomes of interest are used in the exploratory modelling process as indicators for measuring a model behaviour of interest. This behaviour can be in the form of a specific system performance represented in boxplots and Kernel Density Estimates (see e.g. [51]); a minimum performance threshold expected from a system (see e.g. [10]); or the fulfilment of decision objective(s) in a decision-making problem (see e.g. [28,52]). Outcomes can be used as a single measure or as collective measures, and with scalar or time-series values (noting that the use of time-series values has been less common in the exploratory modelling literature, except for some system dynamics works, e.g. [34,51]). Choosing fit-for-purpose outcomes of interest is important in the design of experiments, as the analysis of the selected outcomes conveys information regarding the goal of modelling and will be the basis for decision making.
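These uses of outcomes of interest (summarising a distribution of system performance, or checking a minimum performance threshold) can be illustrated with a synthetic output space; the outcome distribution and threshold below are hypothetical:

```python
import random
import statistics

random.seed(1)
# Hypothetical output space: one scalar outcome of interest per experiment.
outcomes = [random.gauss(100, 15) for _ in range(500)]

# (a) A summary of the behaviour, as might feed a boxplot.
q1, _, q3 = statistics.quantiles(outcomes, n=4)
summary = {"median": statistics.median(outcomes), "q1": q1, "q3": q3}

# (b) A minimum performance threshold expected from the system:
# the fraction of experiments that satisfy it.
THRESHOLD = 80.0
satisfied = sum(o >= THRESHOLD for o in outcomes) / len(outcomes)
```

Time-series outcomes would replace each scalar with a trajectory, but the same summarise-and-test logic applies per time step.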

Policy levers are potential decision alternatives available at the present time which shape the long-term future. These decision alternatives can be represented as different sets of model files (e.g. two model files with feed-in tariff or emissions trading system policy mechanisms in an energy transition case) or as different values for key model variables (e.g. various rates of carbon tax in an energy model). Earlier works on policy analysis often assumed a shortened list of policy levers and conducted the analysis by assessing the impact of these pre-selected levers in the future. The selection of appropriate policy levers in the design of experiments is important, since not all initially chosen policy levers would lead to a robust and desired model behaviour.
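The two representations of policy levers can be sketched as follows; the stand-in model functions and carbon-tax values are invented for illustration:

```python
# (1) Levers as alternative model structures (here, stand-in functions
#     playing the role of two model files with different policy mechanisms).
def model_feed_in_tariff(carbon_tax):
    return 100 - 50 * carbon_tax      # hypothetical outcome of interest

def model_emissions_trading(carbon_tax):
    return 90 - 30 * carbon_tax       # hypothetical outcome of interest

structural_levers = {"feed-in tariff": model_feed_in_tariff,
                     "emissions trading": model_emissions_trading}

# (2) Levers as alternative values of a key model variable.
parametric_levers = [0.0, 0.1, 0.2]   # candidate carbon-tax rates

# Crossing the two kinds of levers enumerates the decision alternatives
# whose impacts the experiments would need to investigate.
alternatives = [(name, tax) for name in structural_levers
                for tax in parametric_levers]
results = {alt: structural_levers[alt[0]](alt[1]) for alt in alternatives}
```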

Exploratory modelling in the decision-making context uses a computational model to explore plausible futures and to assess the impacts of policy actions in what-if scenarios. The model should be capable of representing the system in sufficient detail for the particular goal of modelling. The model also needs to be fast and efficient in running many simulations in a short time. Haasnoot et al. [21] argued that a fit-for-purpose model in exploratory modelling needs to integrate knowledge and methods from different disciplines to represent the various dimensions of a system. They argued that the model also needs to be agile in the sense that it can run many simulations under various assumptions quickly and with a low computational burden. Developing (or choosing) a fit-for-purpose simulation model in the design of experiments is significant, as a model that is too abstract would not generate useful information for addressing the goal of modelling, while a model structure that is too detailed imposes a high computational burden on simulations.

Exploratory modelling can cope with uncertainty factors that are exogenous or endogenous to the system, continuous or discrete, and parametric or non-parametric (such as variation in model structure) [39,62]. These uncertainty factors can represent different techno-economic, social, cultural, and political characteristics. However, the inclusion of many uncertainty factors increases the computational burden of the exploratory modelling process, which necessitates the selection of critical uncertainty factors. Critical uncertainty factors are usually selected among those factors which are outside the control of decision makers and highly unpredictable, and those whose variation significantly influences the model behaviour in the output space.

The uncertainty space, from which samples are drawn to run simulations in the exploratory modelling process, is hypothesised as a multi-dimensional space based on the combination of the identified critical uncertainty factors, each with an initial range of variation. Delineating an appropriate uncertainty space is critical in the design of experiments because: (a) an uncertainty space that is too broad imposes a high computational burden and generates very broad model behaviours which do not necessarily convey a useful message to the modeller; and (b) an uncertainty space that is too narrow increases the risk of missing some future possibilities and of drawing conclusions that are vulnerable to some unforeseen (future) circumstance(s).
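Such a hypothesised multi-dimensional space can be sketched as a box of factor ranges from which samples are drawn; the factors and ranges below are invented for illustration:

```python
import random

# The uncertainty space: each critical uncertainty factor with an
# initial range of variation (all values here are illustrative).
uncertainty_space = {
    "demand_growth": (-0.02, 0.08),
    "fuel_price":    (40.0, 120.0),
    "learning_rate": (0.05, 0.25),
}

def draw_sample(space, rng):
    """One point in the multi-dimensional uncertainty space."""
    return {factor: rng.uniform(lo, hi)
            for factor, (lo, hi) in space.items()}

rng = random.Random(0)
sample = draw_sample(uncertainty_space, rng)
```

Widening or narrowing any of the ranges directly changes which model behaviours the generated ensemble can contain, which is the trade-off points (a) and (b) describe.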


The sampling method defines the strategy for selecting random samples from the uncertainty space to run the simulation model and to generate an ensemble of experiments for exploratory modelling. The sample size is the number of simulation runs ideally needed to generate the desired diversity of model behaviour, including any exceptional behaviour of interest. Choosing an efficient sampling method is a critical task, since an inefficient sampling method will not contain samples from the interesting areas of the uncertainty space, areas which could give rise to new, unforeseen model behaviour. For example, sampling uniformly from the input uncertainty space of a non-linear model—where the dynamic, complex, and uncertain nature of exploratory modelling problems can create non-linear and unpredictable model responses—may only generate a limited range of model behaviours and miss a certain behaviour of interest in the output space [26]. Choosing a sufficient number of samples (the sample size) is also an important task, as a high number of experiments can increase the computational burden, while a low number of experiments leads to an inadequate resolution in the output space and, subsequently, to less reliable conclusions.
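Space-filling designs such as Latin hypercube sampling are one common way to cover the uncertainty space more evenly than plain uniform random draws; a minimal stdlib sketch, not tied to any particular tool from this literature:

```python
import random

def latin_hypercube(n, dims, rng):
    """Minimal Latin hypercube sampler: each dimension is split into
    n equal strata and every stratum is used exactly once, so the n
    points spread across the whole range of every factor."""
    columns = []
    for _ in range(dims):
        strata = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)
        columns.append(strata)
    return list(zip(*columns))  # n points in the unit hypercube [0, 1)^dims

rng = random.Random(7)
lhs_points = latin_hypercube(10, 2, rng)
```

With only 10 points, every decile of each dimension is visited once; 10 plain uniform draws give no such guarantee, which is one way interesting areas of the space get missed.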

The ways to design experiments have been discussed in different areas, including the Design of Experiments [30,31,55], experiment description languages [15], model-driven engineering guidelines [60], Active Non-Linear Testing [49], and agent-monitored simulation [66,67]. Sensitivity analysis (see e.g. Borgonovo and Plischke [8] and Pianosi et al. [56]) and uncertainty estimation (see e.g. Beven and Binley [4]) are other areas that discuss methods which can be used in designing experiments, for example for identifying critical uncertainty factors, the uncertainty space, and the sample size. Apart from these broad areas, there are previous studies which discussed the design of experiments in the context of exploratory modelling for planning and decision making—more closely related to the focus of this article. In Table 1, we identify some of these studies, which considered potential feedback from the output space for informing the initial design. The following two conclusions can be drawn from a review of the examples in Table 1:

• The examples conducted the design of experiments in the early step(s) of the modelling process and then used feedback from the later step(s) (e.g. the analysis of the results) to modify this initial design. This can be seen as a ‘process thinking’ approach to the design of experiments which mainly considers linear interaction between the early and later modelling steps. A different way of looking at this interaction—as used in the suggested framework of this article—is through the concept of agent-monitored simulation [66,67], where ‘systematic interactions’ between the initial design and outcomes are advocated using computational tools and controlling agents.

• The examples used different ways to inform the design of experiments, such as sensitivity analysis for identifying critical uncertainties and optimisation for searching for policy levers (i.e. policy discovery) and for searching over the uncertainty space (i.e. extreme-case scenario discovery). A generic framework with an output-oriented approach—as pursued in this article—can guide the mixed use of these various ways of informing the design of experiments, enhancing computational discovery with shared human visions and human perception with quantitative evidence.

3. The agent-monitored framework

We initially explain the generic structure of our suggested agent-monitored framework, which is applicable to designing different aspects of experiments. We then discuss one explicit application of the framework to the delineation of the uncertainty space, on which we focus in this article.

3.1. The generic structure of the framework

A commonly-used process for exploratory modelling in the context of planning and decision making can be found in the Robust Decision Making framework [45]. It starts with problem formulation to characterise the design of experiments. It continues with experiment generation, where various model behaviours are generated by performing computational experiments using a simulation model. These various model behaviours are explored, and/or a particular behaviour of interest is further investigated, by analysing the ensemble of generated experiments in a computational exploration and discovery process. The results of the analysis are subsequently used in trade-off analysis, adaptation and deliberation to address the goal of the modelling, such as a trade-off between candidate policies and the development of an adaptive plan. See Walker et al. [61] for alternatives to this process in other decision-making frameworks.

This subsection develops a framework for monitoring the output space and controlling the design of experiments in interaction with this typical exploratory modelling process for decision making. We remain at a high level of description of the framework in this subsection to keep the idea applicable to designing different aspects of experiments. The next subsection, however, explains the implementation of the framework for the delineation of the uncertainty space in detail.

Table 1. Examples of previous exploratory modelling studies in the context of decision making where feedback from the output space informs the way experiments are designed.

- Lempert et al. [43]. Aspect(s) of experiments to be designed: exogenous uncertainties (X), near-term policy levers (L), performance measures (M), and relationship(s) (R) which link uncertainties to measures. Way of designing experiments: the design of experiments is discussed in the initial steps of the analysis based on the XLRM framework [44]; aspects are specified mainly through engagement with stakeholders. Feedback from the output space: the initial design of experiments, in terms of policy levers, can be enhanced in an iterative manner based on the feedback from the analysis of vulnerabilities and trade-offs among initial policies.
- Kwakkel et al. [36]. Aspect(s): model, list of uncertainties, policy actions, outcomes of interest, sampling method and sample size. Way of designing: the design is discussed in the initial step of the modelling and for the formulation of an optimisation problem; the methods used are not explicitly discussed, although the involvement of decision makers in defining uncertainties, policies, and outcomes is mentioned. Feedback: the results of the optimisation search and the promising sequences of actions inform the initial choice of policy actions in the design of experiments.
- Eker and van Daalen [13]. Aspect(s): value systems (objectives and preferences) of stakeholders (W), outcome indicators to represent performance (O), policy variables (P), external factors/uncertainties (X), and system model (R). Way of designing: aspects of the design of experiments are discussed in the problem formulation step of modelling, using a model-based policy analysis framework [62] which supports interaction with stakeholders and policymakers in defining each aspect. Feedback: the analysis of the output space in terms of the identified failure scenarios (obtained from scenario discovery) and the robustly optimal values of policy variables (obtained from a multi-objective robust optimisation) can enable decision makers to look at a narrower decision space in the initial design of experiments.
- Haasnoot et al. [21]. Aspect(s): the study focuses on developing a fast and integrated model for simulation (among other aspects of experiments). Way of designing: the study explains steps for the development of a fit-for-purpose model, known as an Integrated Assessment Metamodel (IAMM), which can describe the complete cause-effect chain using theory-motivated meta-models. Feedback: incorporating the purpose of the exploratory analysis, in terms of scenarios, policy actions, and outcome indicators, into the model development process results in a model fit to the purpose of analysis in strategic decision making.
- Halim et al. [22]. Aspect(s): uncertainty factors and their ranges, simulation model, outcomes, sampling method, and sample size. Way of designing: the study uses brainstorming, discussion among experts, and assumptions to identify key uncertainties and their ranges; it does not explicitly discuss methods for the initial selection of the model, outcomes, sampling method and sample size. Feedback: the study performs worst-case scenario discovery using optimisation to search over the uncertainty space, which can inform the delineation of the uncertainty space in the initial design.
- Islam and Pruyt [26]. Aspect(s): the study focuses on how to select an appropriate sampling method for sampling from the input uncertainty space. Way of designing: an adaptive output-oriented sampling approach is introduced, which iteratively samples from the uncertainty space to cover the gaps in the spectrum of model behaviour in the output space. Feedback: the initial design of experiments is adapted through the identification of regions of the input space that can generate particular/interesting behaviours in a wide spectrum of model behaviour revealed by the adaptive output-oriented sampling approach.
- Watson and Kasprzyk [64]. Aspect(s): model and decision levers, performance measures, and uncertainty factors. Way of designing: the initial design of experiments is specified using the XLRM framework [44], which supports interactions with stakeholders. Feedback: the identification of failure scenarios with scenario discovery informs the initial selection of scenarios, and running multiple multi-objective searches within the modified future scenarios enhances the robustness of the resulting candidate solutions.

We conceptualise this agent as a simulation coordinator module overseeing the modelling process (see Fig. 1). This concept of agent is not a simulation entity as it is in agent-based modelling; instead, the agent represents the knowledge perception and deliberation resulting from human abilities (i.e. stakeholder or modeller opinion) and computer capabilities (i.e. statistical and machine learning methods) to interact with the exploratory modelling process for decision making. Through these interactions, the agent monitors the model behaviour (i.e. in computational exploration and discovery and in trade-off analysis, adaptation and deliberation) over time and receives feedback from the output space. The agent then controls the input space (i.e. in problem formulation and experiment generation) to modify the design of experiments accordingly. Monitoring interactions facilitate active learning from experiments and inform the initial design. Controlling interactions give the exploratory modelling process an adaptation capability to modify the design of experiments according to the output space. We structure the way that the agent-monitored framework interacts with this exploratory modelling process to inform the design of experiments in a hypothesis-experiment-learning iteration as follows:

1 Hypothesis: The agent of the framework interacts with problem formulation in the exploratory modelling process and analyses initial assumptions regarding existing conditions and the goal of modelling. The agent generates hypotheses for the different aspects of experiments to be designed, in interaction with stakeholders, as initial settings of the model. For example, in the context of decision making, the exploratory modelling goal can be to develop adaptive robust solutions for a multi-objective decision problem. The agent then develops initial hypotheses regarding critical uncertainty factors and outcomes of interest by, respectively, identifying those significant uncertainties affecting the decision objectives and selecting those indicators reflecting the fulfilment of the decision objectives.

2 Experiment: The framework interacts with experiment generation in the exploratory modelling process. The agent tests the implications of hypotheses by performing a limited (test-only) number of computational experiments using the simulation model and by generating a temporary output space. The generated output space is used to evaluate the relevance and significance of the assumed design of experiments (the hypothesis) for the exploratory modelling process. For example, five hundred simulation runs are executed to produce the model behaviour and to test how significant the impact of the initial uncertainty space is on the output space. If it is significant, the modification of the uncertainty space in the initial design becomes worthy of further investigation.


Fig. 1. An overview of the agent-monitored framework and its interactions with a typical modelling process in robust decision making.

3 Learning: The agent of the framework interacts with computational exploration and discovery and with trade-off analysis, adaptation and deliberation in the exploratory modelling process. The agent screens the generated output space and performs goal-directed deliberative knowledge processing to modify the design of experiments according to the observed areas of interest in the output space. For example, the output space is analysed to identify which areas of the uncertainty space lead to model behaviour in the output space that could not be feasible under real-world conditions, and that can therefore be removed from, or replaced by, other areas of uncertainty in the modified design of experiments.
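The hypothesis-experiment-learning iteration above can be sketched as a loop in Python. The saturating toy model and the rule for shrinking the uncertainty range are illustrative assumptions standing in for a real simulation model and the agent's deliberation:

```python
import random

def hypothesis():
    """Hypothesis: an initial, feasibly widest uncertainty range."""
    return (0.0, 1.0)

def experiment(space, n, rng):
    """Experiment: a limited (test-only) ensemble drawn from the space.
    Hypothetical model: behaviour saturates above x = 0.5, so that part
    of the space adds computational burden but no new information."""
    samples = [rng.uniform(*space) for _ in range(n)]
    return [(x, min(x, 0.5)) for x in samples]

def learning(space, ensemble):
    """Learning: narrow the space to the area that still produces
    distinct (interesting) behaviour in the output space."""
    informative = [x for x, y in ensemble if y < 0.5]
    return (min(informative), max(informative))

rng = random.Random(3)
space = hypothesis()
for _ in range(3):                 # a few monitored iterations
    ensemble = experiment(space, 200, rng)
    space = learning(space, ensemble)
```

After a few iterations the delineated range excludes the saturated region, which is the kind of feedback-driven modification the Learning step describes.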

3.2. The application of the agent-monitored framework to the delineation of the uncertainty space

The agent-monitored framework can interact with the exploratory modelling process using the hypothesis-experiment-learning iteration to specify an appropriate delineation of the uncertainty space in the design of experiments. The feedback interactions are based on narrowing down the uncertainty space to only those areas which can create an interesting (in that it relates to the objective(s) of the analysis) model behaviour in the output space. In the following two subsections, we show one implementation of the framework for the delineation of the uncertainty space. Fig. 2 shows a summary of the steps, functions, and ways to perform the steps in a workflow to guide this implementation of the framework. We also acknowledge that there are other possible implementations of the framework for the delineation of the uncertainty space, for example by factor prioritisation, which can be used to reduce the dimensionality of the uncertainty space.

3.2.1. Steps

In accordance with the generic structure in Section 3.1, the following steps are taken:

• Hypothesis: The function of this step is to make an initial hypothesis regarding the delineation of the uncertainty space.
○ The agent initially sets up the experiments by identifying the outcomes of interest, policy levers, simulation model, critical uncertainty factors, sampling method and sample size.
○ The agent then specifies the (feasibly) widest range of variation for each critical uncertainty factor, considering the physical limitations of the case and the modeller's understanding of the model's sensitivity to input parameters. The selection of a wide range is to make sure no interesting model behaviour (for the aim of the analysis) is missed from the experiments through the selection of an unnecessarily narrow uncertainty space. This initial hypothesis can be enhanced by interactions with stakeholders in real case studies.

Fig. 2. The workflow for the implementation of the agent-monitored framework for the delineation of the uncertainty space.

• Experiment: The function of this step is to generate experiments based on the initial hypothesis about the uncertainty space.

○ The agent chooses different areas (i.e. different ranges of variation) of the delineated uncertainty space (in Hypothesis) in order to test whether changes in the uncertainty space create statistically significantly different experiments in the Learning step.

○ The agent generates an ensemble of experiments by sampling from each area. It uses the selected simulation model (in

Hypothesis) for generating experiments. The agent samples from each area of the uncertainty space using the selected sampling

method, based on the chosen sample size (in Hypothesis). The agent then sets up the model parameters based on the chosen samples from the uncertainty space and executes a model simulation to generate one experiment per sample. Each experiment contains information regarding the initial values of the parameters (the chosen sample from the uncertainty space) and the model behaviour in terms of the selected outcomes of interest (in Hypothesis).

• Learning: The function of this step is to modify the initial delineation of the uncertainty space to suit the goal of modelling (e.g. to include an interesting model behaviour), based on learning/feedback from the output space.

○ The agent tests whether changes in the uncertainty space (based on the samples from different areas in Experiment) create statistically different experiments, by comparing the spectrum of model behaviour in terms of the selected outcomes of interest (in Hypothesis) for each ensemble of experiments. If the test shows a statistically significant difference, then the agent needs to modify the initial delineation of the uncertainty space based on the feedback from the output space.

○ To modify the uncertainty space, the agent first chooses the ensemble of experiments generated from the initial (wide) uncertainty space (chosen in Hypothesis) to observe the spectrum of model behaviour in the output space. The agent then clusters similar model behaviours to show what classes of model behaviour can be expected.

○ Stakeholder opinion is used as a heuristic to identify which generated cluster represents the relevant model behaviour in the output space, in light of the aim of the research.

○ The area of the uncertainty space responsible for the generation of experiments in the relevant cluster is identified. Sampling from this specific area of the uncertainty space (instead of the one in Hypothesis) will more likely lead to experiments within the relevant range of the model behaviour in the output space.
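The hypothesis-experiment-learning iteration described above can be sketched as a simple feedback loop. The following is a minimal illustrative sketch, not the paper's implementation: `run_model`, the criterion for "interesting" behaviour (outcomes at or above the ensemble median) and the parameter names are all assumptions.

```python
import random

def run_model(sample):
    # Stand-in for the simulation model: maps a sampled point in the
    # uncertainty space to an outcome of interest (purely illustrative).
    return sum(sample.values())

def iterate(uncertainty_space, n_samples=100, seed=1):
    """One hypothesis-experiment-learning pass over a candidate
    delineation of the uncertainty space (a dict of parameter ranges)."""
    rng = random.Random(seed)
    # Experiment: sample the hypothesised space and run the model.
    ensemble = []
    for _ in range(n_samples):
        sample = {name: rng.uniform(lo, hi)
                  for name, (lo, hi) in uncertainty_space.items()}
        ensemble.append((sample, run_model(sample)))
    # Learning: keep the samples whose behaviour is "interesting"
    # (here: outcome at or above the ensemble median, a placeholder
    # criterion) and narrow each range to the region that produced them.
    outcomes = sorted(out for _, out in ensemble)
    median = outcomes[len(outcomes) // 2]
    kept = [s for s, out in ensemble if out >= median]
    return {name: (min(s[name] for s in kept), max(s[name] for s in kept))
            for name in uncertainty_space}

# Hypothesis: two of the ranges from Table 2 as the initial wide space.
space = {"flying_hours": (12.0, 200.0), "time_in_dm": (5.0, 25.0)}
narrowed = iterate(space)
```

In a real application, the narrowed ranges would be fed back as the next hypothesis and the loop repeated until the learning step finds no significant difference.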

3.2.2. Ways

We choose a mix of ways for implementing the agent-monitored framework for the delineation of the uncertainty space and for serving each step's function. We also use modeller and stakeholder knowledge to fulfil the functions of some steps—which we do not explain separately below.

• The XLRM framework: This is a generic framework from Robust Decision Making [44] for formulating a decision problem by specifying four aspects: exogenous factors (X), which represent the uncertainty space; policy levers (L), which are the decision alternatives to be tested; measures (M), which are the model outcomes of interest; and relationships (R), which represent the mapping from exogenous factors and policy levers to measures. The XLRM framework is used to fulfil the function of Hypothesis.

• The Exploratory Modelling (EM) workbench: The EM workbench is an open-source Python library for exploring the implications of various decision assumptions, from an input space to an output space, through performing a series of computational experiments [33]. The EM workbench is used for supporting the generation and execution of experiments to fulfil the function of Experiment. It is also used for supporting the visualisation and analysis of the experiments to fulfil the function of Learning.

• ANOVA: ANalysis Of VAriance (ANOVA) is a statistical method to test the statistical significance of the difference between the means of multiple groups [55]. It assumes a null hypothesis of no significant difference among the means. The null hypothesis is rejected if ANOVA concludes that an observed difference among group means did not happen by chance. To reach this conclusion, ANOVA generates the F-statistic (i.e. a ratio of two variances) and compares its associated probability of occurrence (p-value) with a threshold (significance) level. The null hypothesis is rejected when the p-value is below the threshold. See [27] for further explanation of ANOVA. ANOVA is used for testing the statistical significance of differences in model behaviour under various areas of the uncertainty space, to fulfil the function of Learning.
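As a concrete illustration of the F-statistic on which ANOVA is built, the sketch below computes it from scratch for two groups of outcomes. The input data are invented, and in practice the associated p-value would be read from the F-distribution (e.g. via a statistics library), which is omitted here.

```python
def f_statistic(groups):
    """One-way ANOVA F-statistic: between-group mean square divided
    by within-group mean square."""
    k = len(groups)                           # number of groups
    n = sum(len(g) for g in groups)           # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical outcome samples from two ensembles: similar means give a
# small F; clearly separated means give a large F.
f_similar = f_statistic([[1.0, 2.0, 3.0], [1.1, 2.1, 3.1]])
f_distinct = f_statistic([[1.0, 2.0, 3.0], [11.0, 12.0, 13.0]])
```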

• Multi-dimensional clustering: Multi-dimensional clustering is a method used to cluster generated model results based on the similarity of their behaviour [17]. The method fits a mixture of Gaussian distributions to estimate the distribution of model results for a selected number of clusters. The appropriate number of clusters is selected based on two metrics: the Bayesian Information Criterion (BIC) and Akaike's Information Criterion (AIC). See [47] for an explanation of each metric. Multi-dimensional clustering is used for identifying clusters of similar model behaviours based on samples from the uncertainty space, to fulfil the function of Learning.
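The BIC/AIC model-selection rule can be illustrated with a small calculation. The candidate log-likelihoods and parameter counts below are invented stand-ins for Gaussian-mixture fits with different numbers of clusters; both criteria are minimised.

```python
import math

def aic(log_likelihood, n_params):
    # Akaike's Information Criterion: 2k - 2 ln L
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood, n_params, n_obs):
    # Bayesian Information Criterion: k ln n - 2 ln L
    return n_params * math.log(n_obs) - 2 * log_likelihood

# Hypothetical fits: (number of clusters, free parameters, log-likelihood).
fits = [(1, 5, -1450.0), (2, 11, -1320.0), (3, 17, -1295.0), (4, 23, -1292.0)]
n_obs = 500  # number of experiments in the ensemble (assumed)

best_by_bic = min(fits, key=lambda f: bic(f[2], f[1], n_obs))[0]
best_by_aic = min(fits, key=lambda f: aic(f[2], f[1]))[0]
```

With these made-up likelihoods, both criteria stop rewarding extra clusters once the gain in fit no longer pays for the added parameters.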

• Scenario discovery: Scenario discovery is a method in Robust Decision Making [44] for identifying the areas of the uncertainty space which generate similar classes of model behaviour, using machine learning methods [10,19]. Scenario discovery uses an ensemble of computational experiments as input and distinguishes similar classes of behaviour among the experiments. It then chooses among alternative subsets of the uncertainty space which can explain the classes of behaviour based on two measures of quality—coverage and density—and a p-value. Coverage and density vary between zero and one and measure how universal (i.e. covering all experiments from the same class of behaviour) and pure (i.e. excluding experiments resulting in other classes of behaviour) a selected subset is. The p-value measures the statistical significance of the identified relationship between the subset and the class of behaviour. Scenario discovery has been implemented [33] using a number of algorithms [41], such as Classification and Regression Trees (CART) [9] and the Patient Rule Induction Method (PRIM) [16], in different case studies (see e.g. [42,43,51]). Scenario discovery is used for identifying the area of the uncertainty space related to a selected cluster of experiments with a behaviour of interest, to fulfil the function of Learning.
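The two quality measures can be sketched directly. The toy experiments, behaviour class, and candidate box below are illustrative assumptions, not the case study's data or the PRIM/CART search itself.

```python
def coverage_and_density(experiments, in_class, in_box):
    """Coverage: fraction of the experiments in the behaviour class that
    the candidate box captures. Density: fraction of the experiments in
    the box that belong to the class."""
    class_pts = [e for e in experiments if in_class(e)]
    box_pts = [e for e in experiments if in_box(e)]
    captured = [e for e in class_pts if in_box(e)]
    coverage = len(captured) / len(class_pts) if class_pts else 0.0
    density = len(captured) / len(box_pts) if box_pts else 0.0
    return coverage, density

# Toy 1-D example: each experiment is (inputs, outcome); the class of
# interest is outcome >= 5 and the candidate box restricts x to [0.4, 0.8].
experiments = [({"x": x / 10}, x) for x in range(11)]
cov, dens = coverage_and_density(
    experiments,
    in_class=lambda e: e[1] >= 5,
    in_box=lambda e: 0.4 <= e[0]["x"] <= 0.8,
)
```

Algorithms such as PRIM trade these two measures off against each other while peeling the box down from the full uncertainty space.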

4. An illustrative case study

To illustrate how the proposed framework works, we implement it for the delineation of the uncertainty space in the following subsections. We use the case of the capability acquisition and maintenance management of aircraft fleets. This is a relevant case because the life cycle of aircraft fleets is usually longer than 30 years, which necessitates the use of robust planning—which can consider different future strategic and operational uncertainties—to achieve competitive advantages. We use our agent-monitored framework to assist the robust planning of this case by delineating the uncertainty space.

4.1. Context

The capability acquisition and maintenance management case supports decision making in choosing an appropriate trade-off between the number of new aircraft acquisitions and the capacity of deep and operational maintenance services. We use a model to simulate the performance of the fleet (e.g. availability, waiting time, and cost) under different acquisition and maintenance strategies [14]. The decision objectives are to maximise the average flying hours of aircraft and to minimise the total acquisition and maintenance costs


Table 2

Critical uncertainty factors and their range of variation.

Critical uncertainty factor Range of variation

Lifetime of aircraft 37,440 – 336,690 (hour)

Total required ﬂying hours 12 – 200 (hour/week)

Expected time spent by an aircraft in Capability Assurance Program (CAP) 8 – 45 (week)

Time between CAP events 16 – 40 (week)

Expected time spent by an aircraft in DM (Time in DM) 5 – 25 (week)

Time (ﬂying hours) between DM events 200 – 1800 (hour)

Expected time spent by an aircraft in OM (Time in OM) 3 – 15 (week)

Time between OM events 50 – 450 (hour)

CAP available capacity 1 – 7 (aircraft)

Number of purchased aircraft 1 – 7 (aircraft)

OM available capacity 1 – 7 (aircraft)

DM available capacity 1 – 7 (aircraft)

over one hundred weeks. The case study is used for illustrative purposes and the input data is hypothetical.

4.2. Implementation

The agent-monitored framework is implemented in accordance with the hypothesis-experiment-learning iteration suggested in

Section 3.

4.2.1. Hypothesis

We focus on the delineation of the uncertainty space as the hypothesis to be tested through experiments. However, we still need to specify the different aspects of experiments (mentioned in Section 2) to set up the exploratory modelling process in our illustrative case. The initial design of experiments, in accordance with the XLRM framework, is specified as follows:

• Outcomes of interest: the average number of available aircraft for service (in-service aircraft) and the total acquisition and

maintenance costs of aircraft (total costs) are outcomes to be used as two decision objectives to maximise and to minimise,

respectively.

• Policy levers: the variation of the number of purchased aircraft, available capacity for Operational Maintenance (OM) and

available capacity for Deep Maintenance (DM) creates policy levers which can inﬂuence the fulﬁlment of decision objectives.

• Simulation model: the model is an object-oriented model developed based on a hybrid (system dynamics - discrete event)

modelling approach in AnyLogic 8.0. It is capable of simulating the performance of the aircraft ﬂeet [14].

• Critical uncertainty factors: we include 13 uncertainties (see Table 2) with inﬂuence on the performance of aircraft operations.

• Sampling method and sample size: we use Monte Carlo sampling in AnyLogic with an initial sample of size of 500 experiments.

For the identified critical uncertainty factors, we choose fixed continuous ranges of variation (see Table 2) to represent the initial uncertainty space (the hypothesis). As explained in Section 3.2, the ranges are chosen based on our general perception of the case and the modeller's understanding of the model's sensitivity to input parameters.

4.2.2. Experiment

We ran experiments based on the initial design set in Section 4.2.1. We generated an ensemble of 500 experiments by sampling from the full uncertainty space (Table 2) using the EM workbench. We also generated two other ensembles of 500 experiments by sampling from the first and fourth quartiles of the full ranges of uncertainty specified in Table 2, as two alternative areas from which to sample. The output space in each ensemble is represented by the state of two outcomes of interest: in-service aircraft and total costs.
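The quartile-based ensembles in this step can be sketched as follows. The two ranges are taken from Table 2, while the parameter names and sample handling are placeholders; a real run would feed each sample to the simulation model via the EM workbench.

```python
import random

def quartile(lo, hi, q):
    """The q-th quartile sub-range (q = 1..4) of the range [lo, hi]."""
    width = (hi - lo) / 4.0
    return lo + (q - 1) * width, lo + q * width

def sample_ensemble(ranges, n, seed=0):
    """Monte Carlo sample of n points from a dict of parameter ranges."""
    rng = random.Random(seed)
    return [{name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}
            for _ in range(n)]

# Two of the ranges from Table 2; the remaining factors are omitted here.
full = {"time_in_dm": (5.0, 25.0), "time_in_om": (3.0, 15.0)}
first_q = {k: quartile(lo, hi, 1) for k, (lo, hi) in full.items()}
fourth_q = {k: quartile(lo, hi, 4) for k, (lo, hi) in full.items()}
ensembles = {label: sample_ensemble(r, 500)
             for label, r in [("full", full), ("q1", first_q), ("q4", fourth_q)]}
```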

4.2.3. Learning

To test the significance of the effect of the way we delineated the uncertainty space (the hypothesis) on the outcomes of interest, we compare the resulting output space of each ensemble visually, with Kernel Density Estimate (KDE) diagrams, and statistically, using ANOVA. Fig. 3 shows the distribution of the output space in each ensemble. It is visually evident that the delineation of the uncertainty space in each ensemble leads to a different distribution of the output space. This observation confirms the sensitivity of the final results to sampling from different areas of the uncertainty space. We also performed ANOVA to verify this visual observation statistically. The results of ANOVA in Table 3 reject the similarity of the means of the distributions (the null hypothesis) and verify that the output space is statistically sensitive to the way the uncertainty space was delineated.

We further investigated the eﬀect of sampling from diﬀerent areas of the uncertainty space by comparing the spectrum of model

behaviour revealed in the output space for each ensemble of experiments using a scatterplot. According to Fig. 4, the full uncertainty

space results in a wide spectrum of model behaviour in the output space. For example, the red dots (full range) show a wide spectrum


Fig. 3. KDEs for (a) total costs and (b) in-service aircraft for the full uncertainty space, ﬁrst quartile and fourth quartile of the hypothesised

uncertainty space.

Table 3
ANOVA (5% significance level) for (a) total costs and (b) in-service aircraft across the ensembles of experiments.

(a) Total costs
Ensemble          Average    Variance
First quartile    308.946    13,644.183
Fourth quartile   238.402    11,610.105

(b) In-service aircraft
Ensemble          Average    Variance
First quartile    2.363      1.015
Fourth quartile   0.792      0.099

Fig. 4. Clusters of experiments with similar behaviour regarding in-service aircraft and total costs based on the full uncertainty space (red), ﬁrst

quartile (blue), and fourth quartile (green) of uncertain parameters. (For interpretation of the references to color in this ﬁgure legend, the reader is

referred to the web version of this article.)

of model behaviour with respect to in-service aircraft (0 to 8) under the full uncertainty space, compared to blue dots (first quartile) and green dots (fourth quartile), whose results are associated with truncated uncertainty spaces. However, the full uncertainty space (red dots) shows only a low-resolution view of any particular behaviour of interest, as opposed to a truncated uncertainty space, which

can result in a high-resolution picture of the output space in one particular area (in terms of the density of experiments in that area).

Fig. 5. Discovering the area of the uncertainty space related to a specific cluster of the model behaviour in the output space using scenario discovery.

For example, blue dots and green dots show a narrower but high-resolution area of the output space associated with the model behaviour

with medium-to-high (1 to 7) and low (0 to 2) in-service aircraft, respectively. A wide spectrum of model behaviour with high resolution can be captured in the output space by considering the full uncertainty space and increasing the sample size together. However, this also increases the computational burden of the exploratory modelling process.

Since sampling from different areas of the uncertainty space has a statistically significant effect on the selected outcomes of interest, the framework needs to modify the initial delineation of the uncertainty space based on the feedback from the output space. We use the initially-set uncertainty space for the purpose of observing the spectrum of model behaviour in the output space. The model

behaviour in the output space (in terms of in-service aircraft and total costs), which was generated in Section 4.2.2, is projected in a

scatterplot (see Fig. 5), and similar model behaviours are clustered using multi-dimensional clustering (see Section 3.2.2). The

stakeholder knowledge is used as a heuristic to identify a relevant model behaviour in the output space. We assumed that stakeholders choose Cluster 1 as an area relevant to the aim of the research, since they prefer at least medium in-service aircraft and at most

medium total costs. The next task is to identify the areas of the uncertainty space responsible for this relevant model behaviour. The

area of the uncertainty space, responsible for the generation of experiments in Cluster 1, is identiﬁed using scenario discovery (see

Section 3.2.2). It is observed that experiments in Cluster 1 are more likely to be generated under a truncated range of uncertainty for

required ﬂying hours (13 – 61 (hour/week)) and DM capacity (13 – 61 (aircraft)) while the rest of the uncertainty space can remain

unchanged (see the table in Fig. 5). Sampling from this modiﬁed uncertainty space (instead of the initial hypothesis in Section 4.2.1)

would result in experiments closer to the relevant model behaviour in Cluster 1. The results of this test run subsequently modify the delineation of the uncertainty space. The modified uncertainty space can then be used in the exploratory modelling process, with a larger sample size, to better represent the diversity of model behaviours.

4.3. Discussion

We suggested a modiﬁed uncertainty space based on the cluster of experiments with the relevant model behaviour of at least

medium in-service aircraft and at most medium total costs. But how can this informed delineation of uncertainty space be useful and

enhance the conﬁdence of exploratory modelling results as claimed earlier in Section 1? We investigate the conﬁdence of the results

of sampling from this modiﬁed uncertainty space in terms of their robustness in a multi-objective decision-making problem. We show

how the informed delineation of the uncertainty space obtained in Section 4.2.3 can enhance the robustness of the results in terms of generating outcomes closer to the desired decision objectives.

The decision problem is to ﬁnd Pareto optimal solutions—each solution specifying the number of purchased aircraft, OM capacity

and DM capacity—to maximise in-service aircraft and to minimise total costs. We ran new experiments under the area of the

uncertainty space associated with the relevant model behaviour in Cluster 1. We searched for Pareto solutions using an epsilon non-

dominated sort algorithm in Python [65] and plotted the results in Fig. 6(a). We also identified and plotted Pareto solutions in the previously generated experiments under the first quartile and the full range of the uncertainty space for comparison (see Fig. 6(b) and (c)). The Pareto solutions in each ensemble of experiments are contrasted with all feasible solutions using bright and shaded colours.


Fig. 6. Pareto optimal solutions based on sampling from (a) Cluster 1, (b) the first quartile, and (c) the full uncertainty space.

Solutions are also represented based on the state of their decision levers (number of purchased aircraft, OM capacity, DM capacity)

and the associated fulﬁlment of their decision objectives (in-service aircraft and total costs) in parallel coordinate plots.
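The non-dominated search referred to above can be illustrated with a plain Pareto filter (without the epsilon-box bookkeeping of the algorithm used in the paper [65]). The candidate solutions below, given as (in-service aircraft, total costs) pairs, are invented.

```python
def dominates(a, b):
    """a dominates b for (maximise in-service aircraft, minimise total
    costs): a is at least as good on both objectives and strictly
    better on at least one."""
    in_a, cost_a = a
    in_b, cost_b = b
    return (in_a >= in_b and cost_a <= cost_b) and (in_a > in_b or cost_a < cost_b)

def pareto_front(solutions):
    """Keep the solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical (in-service aircraft, total costs) outcomes.
solutions = [(6, 400.0), (5, 250.0), (3, 240.0), (6, 500.0), (2, 260.0)]
front = pareto_front(solutions)
```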

We compare the generated Pareto solutions in each plot based on a robustness measure from Kwakkel et al. [35]. The robustness measure assesses the mean and the undesirable deviation from a threshold for each decision objective, as in Eq. (1):

f_i(x) = { Min( −μ_i , Σ_{k=1}^{K} (x_k − threshold)^2 [x_k > threshold] )
         { Max(  μ_i , −Σ_{k=1}^{K} (x_k − threshold)^2 [x_k < threshold] )          (1)

where k = 1…K indexes the individual solutions in the set of Pareto solutions and x_k is the state of the decision objective in the kth solution. The negative mean and the sum of squared deviations above the threshold should be minimised for the total costs objective, where deviation above the threshold is undesirable. The mean and the negative sum of squared deviations below the threshold should be maximised for the in-service aircraft objective, where deviation below the threshold is undesirable. In our example, based on the earlier

assumption that stakeholders prefer at least medium in-service aircraft and at most medium total costs, we choose 3 in-service aircraft

and 300 total costs as thresholds. The comparison of Pareto solutions based on this robustness measure in Table 4 shows that

although the means of in-service aircraft and total costs in Cluster 1 are not the maximum and minimum (respectively), both decision objectives in Cluster 1 are fulfilled with smaller undesirable deviations from the specified thresholds, compared with the deviations in the other two areas of the uncertainty space.


Table 4

The components of the robustness measure.

Cluster 1 First quartile Full range

Undesirable deviation from the threshold (in-service aircraft) 16.320 17.430 49.479

Mean (total costs) 183.375 179.444 152.778

Undesirable deviation from the threshold (total costs) 141,903 194,765 293,827
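The components of Eq. (1) reported in Table 4 can be sketched as follows. The cost values are invented; the threshold of 300 for total costs follows the example in the text, with deviation above the threshold being undesirable.

```python
def undesirable_deviation(values, threshold, above_is_bad):
    """Sum of squared deviations on the undesirable side of the
    threshold (the second component of Eq. (1))."""
    if above_is_bad:
        return sum((x - threshold) ** 2 for x in values if x > threshold)
    return sum((x - threshold) ** 2 for x in values if x < threshold)

def robustness(values, threshold, above_is_bad):
    """(mean, undesirable deviation) pair per Eq. (1). The mean is
    maximised (and the deviation minimised) for in-service aircraft,
    and vice versa for total costs."""
    mean = sum(values) / len(values)
    return mean, undesirable_deviation(values, threshold, above_is_bad)

# Hypothetical total-cost outcomes across a Pareto set, threshold 300.
costs = [250.0, 290.0, 310.0, 350.0]
mean_cost, dev_cost = robustness(costs, 300.0, above_is_bad=True)
```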

5. The application of the output-oriented approach to other aspects of experiments

This section briefly discusses how the output-oriented approach that we adopted in the agent-monitored framework can be applied to the other aspects of experiments, based on previous experience. We do not cover how to choose outcomes of interest in this section, as they are often chosen based on robustness measures which have been discussed extensively in other works (see e.g. Giuliani & Castelletti [18] and McPhail et al. [48]).

Revising the initial list of policy levers based on feedback from the output space can enhance the robustness of ﬁnal decision

insights. This feedback interaction has been implemented in some studies (see e.g. [28]) using optimisation methods to search

through the space of possible policy levers, to analyse the impact of various levers on the output space, and to ﬁnd the most promising

policy levers which could lead to the robust desired performance in the output space. An output-oriented approach can also lead to an

understanding of the eﬀectiveness and validity of initial policy levers, which can subsequently inform the design of long-term policy

pathways with some short-term actions and milestones for switching between them (see e.g. [36]). This can be realised by setting a performance threshold in the output space and then monitoring when the levers fail to meet it. This identifies the vulnerabilities of the initial policy levers, informs the design of protective measures for coping with these vulnerabilities, and can update the initial policy levers to enhance their robustness.

Model development is an iterative process of problem articulation, conceptualisation (dynamic hypothesis), formulation, testing,

and evaluation [59]. An output-oriented approach to the design of experiments can inform the modelling process from one iteration

to another and can yield insights leading to revisions in earlier steps. First, analysing the output space can enable modeller to monitor

the fulﬁlment of the goal and intended use of modelling and inform the conceptualisation and formulation of a simulation model—ﬁt

for the purpose of analysis. The fulﬁlment of goal and intended use of modelling inﬂuences how the problem at hand is con-

ceptualised, inﬂuences how the conceptual model is developed (e.g. the boundary of the system to model), and aﬀects how the model

is specified (e.g. the level of abstraction and aggregation). For example, a case-specific system model which aims to suggest policy recommendations for a certain context (see e.g. [50]) would need extensive empirical data and participation of stakeholders and should be able to reproduce the state of different variables to draw relevant case-specific insights. However, another model, which

aims at providing explanation for a potential phenomenon or to test certain theoretical assumptions, can keep only a high-level of

abstraction and maintain a loose link to empirical data while being generic and applicable to diﬀerent cases (see e.g. [12]). Second,

analysing the output space can also inform the evaluation of the model; an evaluation to test whether the model can serve the goal of

modelling rather than only quantifying the model details accurately (see [20,21,24]).

Monitoring the output space can reveal information about the criticality of uncertainty factors; i.e. the degree of variation that they can create in final outcomes. This output-oriented approach can be realised through expert-informed approaches (such as a participatory process) or computational approaches (such as sensitivity analysis). The former relies on qualitative (informal) assumptions regarding the criticality of various uncertainties in the model. These assumptions can be extracted in a participatory process, in interaction with stakeholders, or from the modeller's understanding of the specific features of the context of the study,

performed using, for example, sensitivity analysis [58]. As two examples from the case use of sensitivity analysis in this context, we

can identify uncertainty factors with negligible inﬂuence on outcomes using Factor Prioritisation (such as Correlation analysis [25]).

We can also rank uncertainty factors based on their impact on outcomes using Factor Fixing (e.g. variance-based [7] and density-

based [1]). A joint use of both participatory and computational approaches can be also used for the identiﬁcation of critical un-

certainties [53], where ﬁrst the areas of uncertainties are identiﬁed in a participatory process and then the criticality of uncertainty

factors within each area is ranked using standard sensitivity analysis.


An output-oriented approach to the sampling method and the sample size can inform the design of experiments. For sampling, some previous studies have used adaptive sampling to sample from the areas of the uncertainty space responsible for undiscovered model behaviours, or the areas responsible for high variation of the model behaviour (e.g. [26]). Adaptive sampling enables an efficient search of the uncertainty space based on the estimated likelihood of its areas [6,29]. Adaptive sampling particularly considers the complexity of the likelihood surface of the uncertainty space [4]; this complexity results from the interaction of multiple dimensions in the uncertainty space or from the model structure [5]. Adaptive sampling partitions the uncertainty space and takes samples from the areas of higher likelihood. Regarding an output-oriented selection of sample size, convergence analysis [63] and robustness analysis [57] are among the methods which verify a posteriori the appropriateness of the sample size considering the state of

the output space. In convergence analysis, the degree of independence between the model behaviour in the output space and the

sample size is assessed by analysing whether the same model behaviour would be achieved under smaller sub-samples or not. In

robustness analysis, the degree of independence between the output space and samples is assessed by taking new samples from the

uncertainty space with the same size and by analysing the similarity of the output space.
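A minimal convergence check of the kind described above can be sketched as follows. The synthetic outcome stream, the split into sub-samples, and the 5% relative tolerance are all arbitrary illustrative choices.

```python
import random

def converged(outcomes, n_splits=4, tolerance=0.05):
    """Crude convergence check: the mean of each equal-sized sub-sample
    should stay within `tolerance` (relative) of the full-sample mean."""
    full_mean = sum(outcomes) / len(outcomes)
    size = len(outcomes) // n_splits
    for i in range(n_splits):
        sub = outcomes[i * size:(i + 1) * size]
        sub_mean = sum(sub) / len(sub)
        if abs(sub_mean - full_mean) > tolerance * abs(full_mean):
            return False
    return True

# A stable outcome stream converges; a trending one does not.
rng = random.Random(42)
stable = [rng.uniform(100.0, 200.0) for _ in range(2000)]
trending = [float(i) for i in range(100)]
```

A failed check suggests that the sample size (or the sampling scheme) is not yet adequate for the delineated uncertainty space.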

• The current framework monitors and controls the design of experiments oﬄine, in the sense that a set of experiments is run, the

results are analysed, and the design is modiﬁed. An extension of this framework could work in real time, in the sense that the

framework monitors and analyses the output space and modifies the design of experiments accordingly while the model is running. At the same time, it monitors the impact of this modified design in the output space to further adjust the design of experiments.

• The current framework also uses the original simulation model to create the output space for controlling the design of experi-

ments. An alternative approach would be to develop a simple form of the original simulation model to work as a control model

relating the output space to the input space for the purpose of modifying the design of experiments. The beneﬁt of this control

model is in its simple structure which can be run faster, and therefore can better suit the real-time monitoring and controlling of

the exploratory modelling process. A future research eﬀort can develop this control model in the exploratory modelling process

and compare its performance with the agent-monitored framework suggested in the current work.

• This work implemented the suggested agent-monitored framework for one aspect of experiments. However, ways of designing other aspects of experiments were introduced in Section 5. Future research can incorporate the suggested ways into the agent-monitored framework and investigate their implementation for other aspects, such as specifying policy levers, in practice. The current work used robustness measures in a multi-objective optimisation problem to show the higher confidence of the results based on an informed uncertainty space—obtained by the implementation of the suggested framework—compared to an initial space of uncertainty. Future research can take another approach and show the superiority of the suggested framework by comparing its results with those from an alternative approach—for example, a conventional input-oriented approach to the design of experiments. The current work also used a combination of ANOVA and KDE for testing the proposed hypothesis regarding the delineation of the uncertainty space. An alternative testing approach for future work would be to use the Kolmogorov–Smirnov test to compare the distribution of outcomes as a whole.
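The two-sample Kolmogorov–Smirnov statistic mentioned here compares whole distributions rather than means. A bare-bones version (without the p-value, which a statistics library such as scipy.stats.ks_2samp would provide) looks like this:

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # Fraction of the sample with values <= x.
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    # The ECDFs only change at observed values, so checking those suffices.
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

# Identical samples give D = 0; fully separated samples give D = 1.
d_same = ks_statistic([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
d_far = ks_statistic([1.0, 2.0, 3.0, 4.0], [11.0, 12.0, 13.0, 14.0])
```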

Acknowledgements

We are thankful to the anonymous reviewers for their constructive comments on the earlier version of this article.

References

[1] B. Anderson, E. Borgonovo, M. Galeotti, R. Roson, Uncertainty in climate change modeling: can global sensitivity analysis be of help? Risk Anal. 34 (2) (2014)

271–293.

[2] S. Bankes, Exploratory modeling for policy analysis, Oper. Res. 41 (3) (1993) 435–449.

[3] S.C. Bankes, R.J. Lempert, S.W. Popper, Computer-assisted reasoning, Comput. Sci. Eng. 3 (2) (2001) 71–77, https://doi.org/10.1109/5992.909006.

[4] K. Beven, A. Binley, The future of distributed models: model calibration and uncertainty prediction, Hydrol. Process. 6 (3) (1992) 279–298.

[5] K. Beven, A. Binley, GLUE: 20 years on, Hydrol. Process. 28 (24) (2013) 5897–5918.

[6] R.-S. Blasone, H. Madsen, D. Rosbjerg, Uncertainty assessment of integrated distributed hydrological models using GLUE with Markov chain Monte Carlo

sampling, J. Hydrol. 353 (1) (2008) 18–32.

[7] E. Borgonovo, A new uncertainty importance measure, Reliab. Eng. Syst. Saf. 92 (6) (2007) 771–784.

[8] E. Borgonovo, E. Plischke, Sensitivity analysis: a review of recent advances, Eur. J. Oper. Res. 248 (3) (2016) 869–887, https://doi.org/10.1016/j.ejor.2015.06.032.

[9] L. Breiman, J. Friedman, C.J. Stone, R.A. Olshen, Classification and Regression Trees, CRC Press, 1984.

[10] B.P. Bryant, R.J. Lempert, Thinking inside the box: A participatory, computer-assisted approach to scenario discovery, Technol. Forecast. Soc. Change 77 (1)

(2010) 34–49.

[11] S. Chakladar, A model driven engineering framework for simulation experiment management, Auburn University, 2016.

[12] F.J. de Haan, B.C. Rogers, R.R. Brown, A. Deletic, Many roads to Rome: The emergence of pathways from patterns of change through exploratory modelling of

sustainability transitions, Environ. Model. Softw. 85 (2016) 279–292 http://dx.doi.org/10.1016/j.envsoft.2016.05.019.

[13] S. Eker, E. van Daalen, A model-based analysis of biomethane production in the Netherlands and the eﬀectiveness of the subsidization policy under uncertainty,

Energy Policy 82 (2015) 178–196 http://dx.doi.org/10.1016/j.enpol.2015.03.019.

[14] S. El Sawah, M. Ryan, A modular approach to dynamic modelling for capability planning, Proceedings of the Australasian Simulation Congress (SimTect), Melbourne, 26–29 September 2016.

[15] R. Ewald, A.M. Uhrmacher, SESSL: a domain-specific language for simulation experiments, ACM Trans. Model. Comput. Simul. (TOMACS) 24 (2) (2014) 11.


E.A. Moallemi et al. Simulation Modelling Practice and Theory 89 (2018) 48–63

[16] J.H. Friedman, N.I. Fisher, Bump hunting in high-dimensional data, Stat. Comput. 9 (2) (1999) 123–143, http://dx.doi.org/10.1023/a:1008894516817.
[17] M.D. Gerst, P. Wang, M.E. Borsuk, Discovering plausible energy and economic futures under global change using multidimensional scenario discovery, Environ. Model. Softw. 44 (2013) 76–86, http://dx.doi.org/10.1016/j.envsoft.2012.09.001.
[18] M. Giuliani, A. Castelletti, Is robustness really robust? How different definitions of robustness impact decision-making under climate change, Clim. Change 135 (3) (2016) 409–424.
[19] D.G. Groves, R.J. Lempert, A new analytic method for finding policy-relevant scenarios, Global Environ. Change 17 (1) (2007) 73–85.
[20] J. Guillaume, A. Jakeman, Providing scientific certainty in predictive decision support: the role of closed questions, Proceedings of the Sixth International Congress on Environmental Modelling and Software (iEMSs), Leipzig, Germany, July 1–5, 2012.
[21] M. Haasnoot, W.P.A. van Deursen, J.H.A. Guillaume, J.H. Kwakkel, E. van Beek, H. Middelkoop, Fit for purpose? Building and evaluating a fast, integrated model for exploring water policy pathways, Environ. Model. Softw. 60 (2014) 99–120, https://doi.org/10.1016/j.envsoft.2014.05.020.
[22] R.A. Halim, J.H. Kwakkel, L.A. Tavasszy, A scenario discovery study of the impact of uncertainties in the global container transport system on European ports, Futures (2018), in press, http://dx.doi.org/10.1016/j.futures.2015.09.004.
[23] C. Hamarat, J.H. Kwakkel, E. Pruyt, Adaptive Robust Design under deep uncertainty, Technol. Forecast. Soc. Change 80 (3) (2013) 408–418.
[24] M. Hofmann, Simulation-based exploratory data generation and analysis (data farming): a critical reflection on its validity and methodology, J. Def. Model. Simul. 10 (4) (2013) 381–393.
[25] R.L. Iman, J.C. Helton, An investigation of uncertainty and sensitivity analysis techniques for computer models, Risk Anal. 8 (1) (1988) 71–90.
[26] T. Islam, E. Pruyt, Scenario generation using adaptive sampling: the case of resource scarcity, Environ. Model. Softw. 79 (2016) 285–299, https://doi.org/10.1016/j.envsoft.2015.09.014.
[27] G.R. Iverson, H. Norpoth, Analysis of Variance, Sage Publications, Thousand Oaks, CA, 1987.
[28] J.R. Kasprzyk, S. Nataraj, P.M. Reed, R.J. Lempert, Many objective robust decision making for complex environmental systems undergoing change, Environ. Model. Softw. 42 (2013) 55–71, http://dx.doi.org/10.1016/j.envsoft.2012.12.007.
[29] S.-T. Khu, M.G. Werner, Reduction of Monte-Carlo simulation runs for uncertainty estimation in hydrological modelling, Hydrol. Earth Syst. Sci. Discuss. 7 (5) (2003) 680–692.
[30] J.P. Kleijnen, Design and Analysis of Simulation Experiments, vol. 20, Springer, 2008.
[31] J.P. Kleijnen, S.M. Sanchez, T.W. Lucas, T.M. Cioppa, State-of-the-art review: a user's guide to the brave new world of designing simulation experiments, INFORMS J. Comput. 17 (3) (2005) 263–289.

[32] J. Kwakkel, W. Walker, M. Haasnoot, Coping with the wickedness of public policy problems: approaches for decision making under deep uncertainty, J. Water Resour. Plann. Manage. (2016) 01816001, https://doi.org/10.1061/(ASCE)WR.1943-5452.0000626.

[33] J.H. Kwakkel, The Exploratory Modeling Workbench: an open source toolkit for exploratory modeling, scenario discovery, and (multi-objective) robust decision making, Environ. Model. Softw. 96 (2017) 239–250, http://dx.doi.org/10.1016/j.envsoft.2017.06.054.
[34] J.H. Kwakkel, W.L. Auping, E. Pruyt, Dynamic scenario discovery under deep uncertainty: the future of copper, Technol. Forecast. Soc. Change 80 (4) (2013) 789–800, http://dx.doi.org/10.1016/j.techfore.2012.09.012.
[35] J.H. Kwakkel, S. Eker, E. Pruyt, How robust is a robust policy? Comparing alternative robustness metrics for robust decision-making, in: M. Doumpos, C. Zopounidis, E. Grigoroudis (Eds.), Robustness Analysis in Decision Aiding, Optimization, and Analytics, Springer International Publishing, Cham, 2016, pp. 221–237.
[36] J. Kwakkel, M. Haasnoot, W. Walker, Developing dynamic adaptive policy pathways: a computer-assisted approach for developing adaptive strategies for a deeply uncertain world, Clim. Change 132 (2015) 373.

[37] J.H. Kwakkel, E. Pruyt, Using system dynamics for grand challenges: the ESDMA approach, Syst. Res. Behav. Sci. (2013), https://doi.org/10.1002/sres.2225.

[38] J.H. Kwakkel, E. Pruyt, Using system dynamics for grand challenges: the ESDMA approach, Syst. Res. Behav. Sci. 32 (3) (2015) 358–375, https://doi.org/10.1002/sres.2225.
[39] J.H. Kwakkel, W.E. Walker, V.A. Marchau, Classifying and communicating uncertainties in model-based policy analysis, Int. J. Technol. Policy Manage. 10 (4) (2010) 299–315.
[40] R.J. Lempert, A new decision sciences for complex systems, Proc. Natl. Acad. Sci. 99 (suppl 3) (2002) 7309–7313, https://doi.org/10.1073/pnas.082081699.
[41] R.J. Lempert, B.P. Bryant, S.C. Bankes, Comparing Algorithms for Scenario Discovery, RAND, Santa Monica, CA, 2008.
[42] R.J. Lempert, D.G. Groves, Identifying and evaluating robust adaptive policy responses to climate change for water management agencies in the American west, Technol. Forecast. Soc. Change 77 (2010) 960.
[43] R.J. Lempert, N. Kalra, S. Peyraud, Z. Mao, S.B. Tan, D. Cira, A. Lotsch, Ensuring Robust Flood Risk Management in Ho Chi Minh City, Policy Research Working Paper 6465, World Bank, 2013.
[44] R.J. Lempert, S.W. Popper, S.C. Bankes, Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis, RAND Corporation, 2003.
[45] R.J. Lempert, D.G. Groves, S.W. Popper, S.C. Bankes, A general, analytic method for generating robust strategies and narrative scenarios, Manag. Sci. 52 (4) (2006) 514–528.
[46] K. Madani, S. Khatami, Water for energy: inconsistent assessment standards and inability to judge properly, Curr. Sustain. Renew. Energy Rep. 2 (1) (2015) 10–16.

[47] G. McLachlan, D. Peel, Finite Mixture Models, John Wiley & Sons, 2004.
[48] C. McPhail, H.R. Maier, J.H. Kwakkel, M. Giuliani, A. Castelletti, S. Westra, Robustness metrics: how are they calculated, when should they be used and why do they give different results? Earth's Future (2018), https://doi.org/10.1002/2017EF000649.
[49] J.H. Miller, Active nonlinear tests (ANTs) of complex simulation models, Manag. Sci. 44 (6) (1998) 820–830.
[50] E.A. Moallemi, L. Aye, F.J. de Haan, J.M. Webb, A dual narrative-modelling approach for evaluating socio-technical transitions in electricity sectors, J. Cleaner Prod. 162 (2017) 1210–1224, https://doi.org/10.1016/j.jclepro.2017.06.118.
[51] E.A. Moallemi, F. de Haan, J. Kwakkel, L. Aye, Narrative-informed exploratory analysis of energy transition pathways: a case study of India's electricity sector, Energy Policy 110 (2017) 271–287, https://doi.org/10.1016/j.enpol.2017.08.019.
[52] E.A. Moallemi, S. Elsawah, M. Ryan, Model-based multi-objective decision making under deep uncertainty from a multi-method design lens, Simul. Model. Pract. Theory 84 (2018) 232–250, https://doi.org/10.1016/j.simpat.2018.02.009.
[53] E.A. Moallemi, S. Malekpour, A participatory exploratory modelling approach for long-term planning in energy transitions, Energy Res. Soc. Sci. 35 (2018) 205–216, https://doi.org/10.1016/j.erss.2017.10.022.
[54] E.A. Moallemi, S. Elsawah, M.J. Ryan, Informing the delineation of input uncertainty space in exploratory modelling using a heuristic approach, Proceedings of the INCOSE International Symposium, 28, 2018, pp. 553–565, https://doi.org/10.1002/j.2334-5837.2018.00499.x.
[55] D.C. Montgomery, Design and Analysis of Experiments, 5th ed., Wiley, New York, 2001.
[56] F. Pianosi, K. Beven, J. Freer, J.W. Hall, J. Rougier, D.B. Stephenson, T. Wagener, Sensitivity analysis of environmental models: a systematic review with practical workflow, Environ. Model. Softw. 79 (2016) 214–232, http://dx.doi.org/10.1016/j.envsoft.2016.02.008.
[57] J.P. Romano, A.M. Shaikh, On the uniform asymptotic validity of subsampling and the bootstrap, Ann. Stat. 40 (6) (2012) 2798–2822.
[58] A. Saltelli, M. Ratto, T. Andres, F. Campolongo, J. Cariboni, D. Gatelli, S. Tarantola, Global Sensitivity Analysis, John Wiley and Sons, Chichester, England, 2008.
[59] J. Sterman, Business Dynamics: Systems Thinking and Modeling for a Complex World, Irwin/McGraw-Hill, USA, 2000.

[60] A. Teran-Somohano, A.E. Smith, J. Ledet, L. Yilmaz, H. Oğuztüzün, A model-driven engineering approach to simulation experiment design and execution, Proceedings of the 2015 Winter Simulation Conference, 2015.

[61] W.E. Walker, M. Haasnoot, J.H. Kwakkel, Adapt or perish: a review of planning approaches for adaptation under deep uncertainty, Sustainability 5 (3) (2013) 955–979.


[62] W.E. Walker, V.A. Marchau, J.H. Kwakkel, Uncertainty in the framework of policy analysis, in: W.A.H. Thissen, W.E. Walker (Eds.), Public Policy Analysis, Springer, Berlin, 2013, pp. 215–261.
[63] J. Wang, X. Li, L. Lu, F. Fang, Parameter sensitivity analysis of crop growth models based on the extended Fourier Amplitude Sensitivity Test method, Environ. Model. Softw. 48 (2013) 171–182.
[64] A.A. Watson, J.R. Kasprzyk, Incorporating deeply uncertain factors into the many objective search process, Environ. Model. Softw. 89 (2017) 159–171, http://dx.doi.org/10.1016/j.envsoft.2016.12.001.
[65] M. Woodruff, J. Herman, Nondominated sorting for multi-objective problems, 2013, retrieved 2017 from https://github.com/matthewjwoodruff/pareto.py.
[66] L. Yilmaz, Toward agent-supported and agent-monitored model-driven simulation engineering, in: Concepts and Methodologies for Modeling and Simulation, Springer, 2015, pp. 3–18.
[67] L. Yilmaz, S. Chakladar, K. Doud, The Goal-Hypothesis-Experiment framework: a generative cognitive domain architecture for simulation experiment management, Proceedings of the Winter Simulation Conference (WSC), 11–14 Dec 2016.

