A DYNAMIC MODEL

FOR STRUCTURING DECISION PROBLEMS


James Corner*
Department of Management Systems
Waikato Management School
University of Waikato
Private Bag 3105
Hamilton, New Zealand
jcorner@waikato.ac.nz
phone: +64 7 8384563
fax: +64 7 8384270

John Buchanan
Department of Management Systems
Waikato Management School
University of Waikato
New Zealand
jtb@waikato.ac.nz

Mordecai Henig
Faculty of Management
Tel Aviv University
Tel Aviv, Israel
henig@post.tau.ac.il

* Corresponding Author
Keywords
Problem Structuring; Descriptive Decision Making; Multi-Criteria

ABSTRACT
This paper develops a new model of decision problem structuring which synthesises a
number of models and approaches cited in the decision making literature in general and the
multi-criteria literature in particular. The model advocates a dynamic interaction between
criteria and alternatives as a decision maker understands his preferences and expands the
set of alternatives. This model endeavours to bridge the gap between prescriptive and
descriptive approaches. It is prescriptive in its orientation, recommending an approach
based on earlier prescriptive work. The model, however, is also validated empirically,
based on the descriptive decision making literature and reported case studies of actual
decision making.

INTRODUCTION
The structuring of decision problems is arguably the most important activity in the decision
making process (Mintzberg et al., 1976; Abualsamh et al., 1990; Korhonen, 1997; Perry and
Moffat, 1997). According to Nutt (1998a), the failure rate for strategic decisions in general lies
at about 50%, and the implementation rate for multicriteria decision-making efforts is even
worse (Kasanen et al., 2000). This suggests to us that existing decision problem structuring
approaches are not entirely adequate to suit the needs of decision makers or, if they are, then
decision makers are not using them. Practitioners and academics are calling for better decision
problem structuring (Korhonen, 1997; Taket and White, 1997; Wright and Goodwin, 1999, to
name a few), recognizing that, in general, improved decision structuring will improve the quality
of the decision outcome. It is this call that motivates the work presented here.

In this paper we develop a conceptual model for multi-criteria decision problem structuring. Our
starting position is consistent with that of Simon (1955), where decision makers are assumed to
be intendedly rational in their decision-making. This perspective of bounded rationality
accommodates the persistent departure of decision makers' choice behaviour from strict
economic rationality as a result of behavioural limitations, and is well documented (see, for
example, Kleindorfer et al., 1993 or Bazerman, 1990). Also, this approach brings together our
earlier work on process concepts in multicriteria decision making (Henig and Buchanan, 1996;
Buchanan et al., 1998; Buchanan et al., 1999) and that of other authors, notably Wright and
Goodwin (1999).

Such rational approaches to problem structuring typically involve the determination of alternatives, criteria, and attributes to measure the criteria. The goal is then to develop ways in
which these components can be generated and considered relative to each other. Previous efforts
at decision problem structuring (which include von Winterfeldt, 1980; Keller and Ho, 1988;
Henig and Buchanan, 1996 and Wright and Goodwin, 1999) address the creation of criteria and
alternatives, and present arguments about their inter-relationships in a static way. The model we
present here involves the dynamic interaction of criteria and alternatives, which helps explain
where these components come from and how they can influence each other in the structuring
process. We offer this model as a descriptive model of problem structuring behaviour, in that it
accommodates a number of empirically validated models of decision making. We further offer
this model as a prescriptive model, in that it follows from the earlier prescription of Henig and
Buchanan (1996), and others cited above.

The distinction, and the gap, between descriptive and prescriptive models is important and
reflects the different perspectives adopted by the Psychology and Management Science research
communities, respectively. The descriptive perspective focuses on how we actually make
decisions and the underlying reasons for such behaviour. The prescriptive perspective considers
how we should make decisions with a view to improving the quality of decision-making, both in
terms of process and outcome.

This paper, then, bridges the gap between prescriptive and descriptive models of decision
making. Prescriptive models lack empirical validity and often require a standard of rationality
that is theoretically satisfying but practically unobtainable. Descriptive models, while rich in their description of real behaviour, are often context-dependent (in respect of both decision
making context and decision maker cognition), thereby limiting their general applicability for
prescription. One principle underlying our approach here is to draw heavily from both
perspectives and synthesise them into an approach that has a well-reasoned foundation, clear
empirical support and which potentially can be used in practice.

The paper is organized as follows. We first discuss several definitions of problem structuring
found in the literature, then present and explain our proposed dynamic model. We next perform
several validation tests of our model, which demonstrate the suitability of the model as both a
descriptive and prescriptive model of decision problem structuring. Finally, we discuss several
implications for the use of the model in multicriteria decision-making, and end the paper with a
summary.

DEFINING PROBLEM STRUCTURING


Simon (1960) proposed a three-phase decision process that continues to be the foundation for
many decision making methodologies. The three phases of this well-known process model are
intelligence, design and choice. Intelligence involves determining if a problem that has occurred
actually requires a decision, which in turn involves some information gathering activities. Once
the decision has been identified, the design phase begins, which we refer to as decision problem structuring, where alternatives, criteria and attributes are identified and considered. The final
phase is choice which, based on identified criteria, describes the activity of selecting the most
appropriate course of action (alternative). Since our emphasis is on multiple criteria, the
identification and role of criteria in the design stage is of immediate and particular importance.

Simon later added, but rarely discussed, a fourth stage entitled review activity, which involved
reviewing past choices.

Despite the development of many models that describe and prescribe the structuring of decision
problems, it remains as much art as it does science. There is no single agreed upon definition of
this important aspect of the decision process, but examples include:
- a search for underlying structure (Rivett, 1968), or facts (Thierauf, 1978),
- an intellectual process during which a problem situation is made specific (Majone, 1980),
- a process for dealing with issues (Taket and White, 1997),
- the process by which a problem situation is translated into a form enabling choice (Dillon, 2000), and
- the specification of states of nature, options, and attributes for evaluating options (Keller and Ho, 1988).
These examples provide general definitions of problem structuring, except for that of Keller and
Ho, who state a definition most often applied in multicriteria decision making and used by us in
this paper.

Woolley and Pidd (1981) propose a scheme for categorising definitions and approaches to
problem structuring. Although their review of problem structuring may appear dated, their
categorisation captures the philosophical differences of these approaches. Their four streams
are:
- A Checklist Stream, where problems are seen as failures or breakdowns, so a step-by-step procedure is sufficient to correct the problem.
- A Definition Stream, where problems are seen as a collection of elements such as criteria and alternatives, and the goal is to put these elements, or variables, into some sort of (decision) model.
- A Science Research Stream, where quantitative data is gathered in an effort to uncover the real problem and its underlying structure.
- A People Stream, where problems are individual and/or social constructions of differing realities, so problems do not exist as concrete entities but emerge from different perceptions.

We give particular attention to the definition and science research streams as most approaches
for structuring decision problems fall into one of these. The checklist stream addresses simple,
routine decision problems, such as diagnostic checking. Soft Systems Methodology (Checkland,
1981), where multiple perspectives are considered, is one example of the last stream. In terms of
streams two and three, an important distinction should be made. Problem structuring approaches
that assume a particular structure a priori belong to the definition stream. Most decision problem
structuring (including our proposed approach) falls into this stream. For example, Decision
Analysis (Keeney and Raiffa, 1976) and AHP (Saaty, 1980) assume a particular structure of
criteria (that is, an objectives hierarchy) and alternatives and seek to formulate the decision
problem according to that structure. Stream three, however, is more consistent with traditional
Operations Research (OR) modelling. Here, data are uncovered about the problem and, based on
this data, an appropriate structure is used, typically some type of OR model. While a range of
models potentially exists, the particular choice of model is determined from scientific research of
the problem data; that is, it comes from the data rather than being a priori assumed. In addition
to OR modelling, stream three is well exemplified by Gordon et al.'s (1975) nine generic problem structures, which include new product, distribution channel, acquisition, divestment,
capital expenditure, lease-buy, make-buy, pricing, and manpower planning problem structures.
Our proposed approach is firmly grounded in the Definition stream.

We note that while Woolley and Pidd's People stream seems conceptually different from the
others, one can argue that the other three streams might be occurring within the differing realities
of the many individuals involved in a particular decision problem. That is, the first three streams
might capture the different structuring approaches available, while the fourth stream recognises
that not all decision makers structure a given decision problem in the same way. As we point out
later, this latter statement seems to be true.

Most decision modelling/structuring, and especially multicriteria decision making, is based on a conceptualisation in terms of criteria and alternatives (Keeney and Raiffa, 1976 and Saaty,
1980). Criteria and alternatives are mutually defined, and are the fundamental components of
any multicriteria decision problem. Criteria reflect the values of a decision maker and are the
means by which alternatives can be discriminated. Alternatives are courses of action which can
be pursued and which will have outcomes measured in terms of the criteria. Henig and
Buchanan (1996) and Buchanan et al. (1998) present a conceptualisation of the multicriteria
decision problem structure in terms of criteria and alternatives, with attributes as the bridge
between them. More precisely, attributes are the objectively measurable features of the
alternatives. Therefore, the decision problem was structured so as to separate the subjective
components (criteria, values, preferences) from the objective components (alternatives and
attributes), with a view to improving the decision process. This model is presented in Figure 1.

[Figure 1 - Mapping between criteria and attributes, attributes and alternatives (Buchanan et al., 1998). Criteria are linked to attributes by a subjective mapping; alternatives are linked to attributes by an objective mapping.]

Figure 1 is a static conceptualisation, or model, of a decision problem consistent with Woolley and Pidd's (1981) definition stream. It presents a framework comprising the components of
criteria, alternatives, attributes and their interrelationships, and proposes a partition into
subjective and objective aspects. In their application paper, Henig and Katz (1996) use aspects
of the Figure 1 model in the context of evaluating research and development projects.
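
For readers who find a concrete rendering helpful, the separation in Figure 1 can be sketched as a simple data structure. The sketch below is ours and is purely illustrative: the entity names, attribute values and preference functions are hypothetical and are not drawn from Henig and Buchanan (1996) or Buchanan et al. (1998); only the objective/subjective split is taken from the model.

```python
# Illustrative sketch of the Figure 1 structure (all data hypothetical).
# The objective mapping records the measurable attributes of each
# alternative; the subjective mapping translates attribute levels into
# criterion scores that reflect the decision maker's preferences.

# Objective mapping: alternative -> objectively measurable attributes.
alternatives = {
    "car_A": {"price": 18000, "accel_0_100_s": 9.5, "length_m": 4.3},
    "car_B": {"price": 24000, "accel_0_100_s": 7.8, "length_m": 4.6},
}

# Subjective mapping: criterion -> preference function over attributes
# (crude 0-1 satisfaction scores, purely for illustration).
criteria = {
    "affordability": lambda a: max(0.0, 1.0 - a["price"] / 30000),
    "performance": lambda a: max(0.0, 1.0 - (a["accel_0_100_s"] - 6.0) / 6.0),
}

def evaluate(name):
    """Score one alternative on every criterion via its attributes."""
    attrs = alternatives[name]
    return {c: round(f(attrs), 2) for c, f in criteria.items()}

for name in alternatives:
    print(name, evaluate(name))
```

The point of the sketch is simply that the subjective components (the preference functions) can be changed without touching the objective components (the attribute data), which is the partition the model advocates.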

Recent discussions have occurred in the literature regarding the creation of, and inter-relationships between, all these structuring elements. Keeney (1992) provides a way of thinking
whereby criteria are identified first in the decision making process, then alternatives are
creatively determined in light of such criteria and choice is then made. Known as value-focused
thinking (VFT), this has been advocated as a proactive approach wherein explicit consideration
of values leads to the creation of opportunities, rather than the need to solve problems. However,

Wright and Goodwin (1999) suggest that values are not sufficiently well formed in the early
stages of decision making to be able to do this. They agree with March (1988) that values and
criteria are formed out of experience with alternatives and suggest decision simulations as a way
to discover hidden values and, subsequently, promising alternatives. Subsequent comments by
noted researchers in the same issue as their paper appear to agree with them, but argue that such
simulations might have difficulties in practice.

We call the process of first determining alternatives, then applying value and preference
information to them in order to make a choice, alternative-focused thinking (AFT), as do many
others. It is clear from the descriptive decision making literature that AFT is easily the more
dominant procedure (for example, Nutt, 1993). This is also the case when one considers the
prescriptive multicriteria decision making literature. The majority of these prescriptive models
explore decision maker values in the context of an already fixed and well-defined set of
alternatives. This imbalance is not surprising. In Figure 1, alternatives are objective, and in the
experience of most decision makers, they are usually concrete and explicit; they are what
decision makers most often confront. In contrast, criteria are subjective, abstract and implicit,
and their consideration requires hard thought. Therefore, it makes some sense descriptively to
start a decision process with what is obvious and measurable, such as alternatives. This
discussion gives rise to the conceptual model of problem structuring presented in the next
section.

A DYNAMIC MODEL
The model presented in Figure 1 merely specifies problem structuring elements and does not give rise to a process of problem structuring. It does not specifically address the process of how
preferences can be better understood or how the set of alternatives can be expanded. How are
the elements of such a structure (criteria, attributes and alternatives) generated to begin with?
That is, where do alternatives come from? And where do criteria come from?

In order to address these questions, we build upon the model in Figure 1. It is difficult to
determine which should or does come first, criteria or alternatives, since both are vitally
important to the decision problem structuring process. However, our own casual observations of
decision-making activity suggest that both generate the other interactively. This is supported by
the literature. For example March (1988), as cited by Wright and Goodwin (1999), argues that
experience with and choice among alternatives reveals values and by implication, criteria. This
is the thrust of Wright and Goodwin. Decision makers need to "test drive", or undertake a "decision simulation" of, the alternatives in order to find out what they really want.

Henig and Buchanan (1996) proposed that attributes form the bridge between criteria and
alternatives and act as a translation mechanism. That is, as we soon illustrate by example, a
decision maker moves from considering criteria to attributes to alternatives. This idea, together
with the above discussion, gives rise to a new model of decision problem structuring involving
these components. Our proposed model is presented simply in Figure 2, without attributes. The
model shows criteria and alternatives joined in a sort of causal loop, with entry into the loop
either using AFT (dotted line) or VFT (solid line). This interactive model states that thinking
about alternatives helps generate criteria, and vice-versa. That is, neither of these two
structuring elements can be thought of alone, independent of the other.

[Figure 2 - A dynamic model of problem structuring, relating alternatives and criteria. Alternatives and criteria are joined in a loop, entered either through AFT (alternatives first) or VFT (criteria first).]
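
Read as pseudocode, the loop of Figure 2 can be rendered as the following sketch. It is a hypothetical illustration of the iteration we advocate, not an algorithm proposed in the literature: the helper functions stand in for whatever creative or simulation-based techniques a decision maker actually uses, and the car-related values echo the example developed below.

```python
# Hypothetical sketch of the structuring loop in Figure 2. Only the
# shape of the iteration is taken from the model; the helper logic and
# data are illustrative placeholders.

def surface_criteria_from(alternatives, criteria):
    """Experience with alternatives can reveal new criteria."""
    new = set(criteria)
    if any("tinny" in a for a in alternatives):
        new.add("safety")  # e.g. test drives surface a safety concern
    return new

def generate_alternatives_from(criteria, alternatives):
    """New criteria can prompt a wider search for alternatives."""
    new = set(alternatives)
    if "safety" in criteria:
        new.add("sturdier brand Y")  # a brand not considered before
    return new

def structure_problem(alternatives, criteria, iterations=2):
    # At least two passes around the loop, rather than stopping after a
    # single, satisficing pass (see the Discussion section).
    for _ in range(iterations):
        criteria = surface_criteria_from(alternatives, criteria)
        alternatives = generate_alternatives_from(criteria, alternatives)
    return alternatives, criteria

# AFT entry point: begin with a rough set of alternatives in mind.
alts, crits = structure_problem({"current car", "tinny brand X"},
                                {"price", "acceleration"})
print(sorted(alts), sorted(crits))
```

Entry via VFT would simply mean starting the same loop from a rough set of criteria instead; the essential feature is that each pass lets one element enlarge the other.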

As an illustration of the dynamic model of Figure 2, consider a car purchase decision. While
driving down the road in your car, you decide it is time to replace it. Perhaps you think
that the car is old and costs a lot to repair, and it does not have the acceleration it used to have.
Thus, you have begun to identify a gap in performance, between what you have and what you
want. In Simon's terminology, identification of a gap belongs in the intelligence phase; Nutt
(1993), in his description of decision-making cases, suggests that all decision structuring begins
with a gap. The gap in this illustration is a gap in values or criteria, which suggests that the
process began with VFT, since you are loosely thinking about criteria (or, at least, discrepancies
in criteria). But where did these thoughts about performance, about criteria, come from? They
most likely originated from some consideration of alternatives; perhaps comparing your present
car with a recollection of your car when it was new. It is therefore not entirely clear whether you
necessarily started with VFT or AFT, and continuing the backward regression could justify
starting with either. Perhaps where you started does not matter. However, you do start on the decision journey and you initially give your attention to either criteria or alternatives. Let us
just say that in this illustration, your initial focus is on criteria.

However, as you drive your car, you begin to notice that certain other cars also look appealing,
say Brand X, since they appear quick to accelerate, reasonable in price, and are about the size of
your current car. In terms of our model, then, you now have moved in your thinking from the
criteria oval to the alternatives oval. So, you go to the closest Brand X dealership to investigate
further the Brand X cars you have observed while driving. You quickly notice that their prices
generally are very reasonable and you test-drive a few of them; that is, you experience first-hand some of the alternatives and learn more about their attributes. You discover that while they
are quick and about the same size as your current car, they also seem more tinny than your
sturdy old car. You thus start thinking about safety issues and how the modern highway is full of
sport utility vehicles (which have high profiles). You deem that your small compact is no longer
appropriate on the same road as such vehicles, so safety becomes a salient issue in this decision
problem. By considering these particular attributes of the alternatives, you now have surfaced a
new and different objective (that is, you have moved back over to the criteria oval in Figure 2).

Next you start to investigate other car brands (alternatives), and realise that there is a range of other manufacturers that address your previously stated criteria, as well as this new criterion of safety. Test-driving these new models then leads to further new criteria (such as a "roominess" or "salvage value" criterion), and so the iteration between criteria and alternatives continues.
How choice ultimately is made is not our concern in this paper, but as outlined here, structuring
is shown as a dynamic interaction between criteria and alternatives. It demonstrates one problem with just VFT, as implied in a quote by Nutt (1993, page 247), who paraphrased Wildavsky (1979): "managers do not know what they want until they can see what they can get". It also
highlights the main problem with just AFT. AFT would require a decision maker to generate all
alternatives a priori, perhaps only Brand X and a few others, then apply preferences to choose
from among this set. No opportunity would have been available in such a procedure for
discovering what is truly wanted in the decision problem, since values and criteria would have
been considered too late in the decision problem structuring process.

As discussed earlier, how a decision maker enters into this loop may not matter. What does
matter is that the previously static, unidirectional models of decision problem structuring (AFT
and VFT) have been made dynamic. Consideration of alternatives helps identify and define
criteria, while criteria are shown to help identify and define alternatives. If such interaction is
allowed to occur, divergent thinking and creativity are explicitly incorporated in the decision making process, which is considered desirable for the process (Volkema, 1983; Abualsamh et al., 1990; Taket and White, 1997). While convergent thinking might be more efficient,
divergent thinking can lead to more effective decision making.

This dynamic model has, thus far, been motivated both prescriptively (as a recommendation to
decision makers) and descriptively (through the car purchase illustration). The model was first
argued from a theoretical position (belonging to the Definition stream of Woolley and Pidd
(1981), where a structure has been a priori assumed). Further, it was implicitly presumed that
structuring the decision problem is, in fact, desirable and the use of this decision structuring
model by a decision maker will aid solution. However, arguing from a theoretical position has shortcomings; although the logic may be impeccable, real decision makers generally find
prescriptive models to be unworkable, usually because the foundational assumptions or axioms
have little empirical validity. In other words, it is a nice idea that does not work in practice. As we and others have noted, the multicriteria landscape is littered with many such models,
consigned to obscurity because they are not relevant to real world decision making. They lack
empirical validity and do not explain decision making behaviour.

The car illustration leads us into the issue of descriptive or empirical validation. There are at
least two issues to be considered. First, how do descriptive decision making theory and data fit
with the dynamic model we have proposed? And second, is any of this behaviour actually
successful when it comes to decision outcomes? We endeavour to demonstrate that the dynamic
model has empirical validity in that it accommodates recognised models and data in the
descriptive literature. We also show that the descriptive decision making models closest in spirit
to the dynamic model are successful in practice.

VALIDATING THE MODEL DESCRIPTIVELY


We now consider the empirical or descriptive validity of the dynamic model. This section presents several validation tests, each discussed separately below.

Known Theories and Models


The first test determines whether or not our model accommodates current descriptive theories
and models of decision making. There are many descriptive theories of decision making (for
taxonomies of descriptive models see Schoemaker, 1980; Lipshitz, 1994; Perry and Moffat, 1997; Dillon, 1998). Many descriptive models deal with the entirety of the decision making
process (Mintzberg, et al., 1976; Nutt, 1984) or just with the choice phase (Einhorn, 1970;
Tversky 1972). However, we pay particular attention here to those decision making models that
on the surface concentrate mainly on decision problem structuring. Consequently, we consider
three of the more enduring and valid models in the descriptive literature, relative to our dynamic
model: Recognition-Primed Decisions (Klein, 1989), Image Theory (Beach and Mitchell, 1990)
and the Garbage Can Model (Cohen et al., 1976).

The first step in the Recognition Primed Decisions model (Klein, 1989) is to recognize that the
current decision situation has some similarity with past decision situations. Thus, prior
experience is drawn upon to guide the current decision making procedure. The model then states
that the decision maker examines goals, cues, expectations and actions in order to gauge the
current situation relative to past situations. This is done through a mental simulation process,
which attempts to project past experience with alternatives into the future to see how this
experience might be used in the current situation. This involves a sequential, dynamic, mentally
simulated generation and evaluation of alternatives. During this simulation exercise, alternatives
can be modified, and goals and expectations can become revised, in order to make the current
situation match with some useful experience, thus enabling a direction in which to proceed with
the decision problem.

Image Theory (Beach and Mitchell, 1990) is similar to Recognition Primed Decisions in that it
draws on the decision maker's experience with decision making when trying to decide what to do in the present situation. In this model, the decision maker's view is represented by three different images (or criteria) in order to evaluate alternatives. That is, an alternative-focused process is begun and an alternative is assessed to determine its fit with the decision maker's value, trajectory, and strategic images. The fit is determined through comparing the attributes of the alternatives with the decision maker's images. If the alternative fits, that is, it exceeds
certain threshold values for the attributes, then it is adopted. Otherwise it is rejected. Finally,
any appropriate choice procedure is then used to choose among the alternatives that fit. Without
going into the complexities of this theory, initially the images are considered as fixed. If the
outcome is that all alternatives are rejected, then the thresholds may be revised or, perhaps,
revised images are developed. However, this further consideration of images, or criteria, only
occurs when all alternatives are initially rejected. If acceptable alternatives are found that fit
with the images, then little or no effort is directed toward considering new criteria.
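
A minimal sketch of this screening step, as we read it, is given below. It simplifies Image Theory to a single compatibility test against fixed thresholds; the attribute names, threshold values and candidate alternatives are hypothetical and are not taken from Beach and Mitchell (1990).

```python
# Much-simplified sketch of Image Theory's screening (compatibility)
# test: an alternative "fits" if every relevant attribute meets the
# decision maker's threshold. All values below are hypothetical.

thresholds = {"safety_rating": 4, "fuel_economy_km_per_l": 12}

candidates = {
    "brand_X": {"safety_rating": 3, "fuel_economy_km_per_l": 15},
    "brand_Y": {"safety_rating": 5, "fuel_economy_km_per_l": 13},
}

def fits(attributes, thresholds):
    """Compatibility test: all attributes must reach their thresholds."""
    return all(attributes[k] >= t for k, t in thresholds.items())

survivors = [name for name, attrs in candidates.items()
             if fits(attrs, thresholds)]

if survivors:
    # Any suitable choice procedure is then applied among the survivors.
    print("screened-in alternatives:", survivors)
else:
    # Only when every alternative is rejected does Image Theory revisit
    # the thresholds or, ultimately, the images (criteria) themselves.
    print("no alternative fits; revise thresholds or images")
```

The sketch makes the point in the text explicit: criteria (images) are revisited only on the failure branch, so the interaction with alternatives is limited.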

Both Recognition Primed Decisions and Image Theory contain aspects of the dynamic model of
Figure 2, with some interaction of alternatives and criteria (and goals, expectations, and actions).
Clear attention is given to criteria and alternatives, and one is shown to influence the
development of the other. In the case of Recognition Primed Decisions, alternatives are seen to
influence goals, and perhaps new alternatives are surfaced as a result. In Image Theory, if all
alternatives are rejected, new images/criteria may surface, which in turn, might lead to a new set
of alternatives. Both of these models appear to be alternative focused, and the interaction
between alternatives and criteria seems to be limited to a single iteration between the two.

The Garbage Can Model (Cohen et al., 1976) is a model of organizational decision-making,
although it also applies at the level of the individual. This model says that a rational search for alternatives does not occur, since organizations are really full of anarchy and chaos. Rather,
solutions are randomly identified, and then one is matched to an appropriate and compatible
decision problem in need of a solution. No explicit mention is made of where criteria and
attributes might fit into this scheme, and it remains classified by us as being an alternative-focused approach to decision making. This well-known model of decision-making has little
correspondence with our model in Figure 2.

To summarize this sub-section, we feel that at least two of the currently popular descriptive
models of decision-making (Recognition Primed Decisions and Image Theory) provide empirical
support for our model. That is, there are elements of these two models that indicate that decision
makers develop some sort of interaction between criteria and alternatives, which forms the basic
concept underpinning our model. This interaction, however, is not necessarily valid for all
descriptive models of the decision making process. One might argue that the Garbage Can
Model has no real structuring phase at all, so no current model of problem structuring would
seem to fit this model.

Empirical Evidence
Our second validation test determines if empirical evidence seems to be explained by our model.
We use two previously published large sample studies of decision cases involving real managers,
both by Nutt (1993, 1999).

The first study considers problem formulation processes involving real decision cases and
managers (Nutt, 1993). The author defines formulation as a process that begins when

16

signalsare recognized by key people and ends when one or more options have been targeted
for development (page 227). Based on previously stated definitions, we consider this to be
nearly equivalent to problem structuring. By examining traces of 163 decisions, four types of
formulation process were generated: Issue, Idea, Objective-Directed, and Reframing. These
four processes are briefly summarized below.

The least successful process, as measured by a multidimensional metric measuring each decision's adoption, merit and duration, is the Issue process. This process is present when an
issue is identified and considered by the decision maker(s). Decision makers become problem
solvers as they suggest solutions to concerns and difficulties. Used in 26% of the 163 cases, this
is similar to Woolley and Pidd's (1981) Checklist stream of problem structuring.

Third most successful and used in 33% of cases is the Idea process. Here, an initial idea, usually a solution, is presented by a decision maker, which provides direction for the decision making process. Formulation proceeds as the idea's virtues are discussed and refined. Second
most successful and used in 29% of cases is the Objective-Directed process. In this formulation
process, direction is obtained by setting objectives, goals, or aims early in the process, along with
a freedom to subsequently search for new alternatives.

Finally, most successful and used in only 12% of all cases is the Reframing process. This
process focuses on solutions, problems, or new norms or practices in order to justify the need to
act, thus avoiding Raiffa's (1968) errors of the third kind. That is, the decision problem is re-formulated into a problem that the decision makers are confident they can handle, based on some
new direction.

In comparing our model of Figure 2 to Nutt's four types of formulation process, some
observations can be made. The Issue and Idea processes, which comprise 60% of the decision
cases examined by Nutt, appear to be alternative-focused with no dynamic interaction and little
explicit consideration of criteria. Together they are the most used but least successful. The
Objective-Directed process appears to be value-focused, including explicit consideration of goals
and criteria and search for alternatives. It also performs better than the previous two processes.
However, a better descriptive process is available.

The Reframing process, least used but most successful, incorporates more aspects of our model
than the other three, when viewed closely. The details of the Reframing process show that while
solutions to apparent problems are presented early in the process, new norms and criteria grow
out of thinking about these solutions and their underlying problems. Based on this revised set of
norms and criteria, other alternatives are then sought. This approach to problem formulation is
thought to "open the decision process to possibilities before the narrowing that occurs during problem solving" (Nutt, 1993, p. 245). This divergent and creative approach to problem
formulation seems close in principle to our conceptual model of dynamic interaction between
criteria and alternatives.

Reframing can be further illustrated as we extend our car example. Assume that thinking about
the large size of the sport utility vehicles led you to reflect upon even larger vehicles, say, city buses. You then start to realise that what you really are after is just a way to commute to work.
Thus, your thinking process has led to a reframing of the problem, not as a car selection problem,
but as a transportation problem. The real issue was the best choice of transport to and from
work; this reframing greatly enlarged the set of alternatives (to mass transportation) and
introduced other criteria (comfort, time and cost saved by commuting, etc.).

As discussed earlier in this car purchase/transportation illustration, it remains unclear as to whether one begins with criteria or alternatives. Once started, thinking only about alternatives
early in the process, as found in the Issue and Idea processes, appears to lead to relatively
unsuccessful decisions, while thinking about objectives and criteria early (Objective-Directed
process) is more successful. However, the most successful process (Reframing) also considers
alternatives early, and then uses them to identify new norms and criteria. It appears that the
starting point matters less than the need to ensure some iteration between criteria and
alternatives.

Our second validation test in this sub-section involves Nutt's (1999) study of 340
strategic level decisions from as many organizations (a sample of public, private, and third sector
U.S. and Canadian organizations). In this study, tactics used to uncover alternatives were
determined, as was the success of the decision-making effort, given such tactics. Also, the type
of decision was considered (that is, technology, marketing, personnel, financial, etc., decisions,
to name four of the eight found in the study) in light of the success of the different tactics. Thus,
this second study of Nutt concentrates on aspects of decision problem structuring that are related to our proposed model. The empirical evidence presented in this study can be used to determine
the extent to which our model fits decision makers' behaviour descriptively.

Nutt's study found six distinctly different tactics used to uncover alternatives, namely: Cyclical
Search (used in 3% of the 340 decisions studied), Integrated Benchmarking (6%), Benchmarking
(19%), Search (20%), Design (24%), and Idea (28%) tactics. Cyclical Search involves multiple
searches and redefined needs, dependent on what is found during the search. The two
Benchmarking tactics involve adaptation of existing outside practices and Search involves a
single request-for-proposal type search, based on a pre-set statement of needs. Design involves
generating a custom-made solution, and Idea involves the development of pre-existing ideas
within the organization.

In comparing these six tactics with our model, we note that Integrated Benchmarking,
Benchmarking, and Idea tactics all seem to be alternative-focused processes while Search and
Design seem to be value-focused. Furthermore, the Idea tactic resembles the Garbage Can
Model presented earlier, as Nutt himself points out (Nutt, 1999, page 15). Interestingly, Cyclical
Search seems to fit surprisingly well with our concept that a dynamic relationship should (or
does) exist between criteria and alternatives. As stated by Nutt (page 17), "Decision makers who applied a cyclical search tactic set out to learn about possibilities and to apply this knowledge to fashion more sophisticated searches. Each new search incorporated what has been learned in past searches." Upon close inspection, this tactic also displayed elements similar to Image Theory in
its application.

What is more interesting is the success of each tactic. Using the same multidimensional success
metric as he used in his 1993 formulation study reported above, Nutt found Cyclical Search and
Integrated Benchmarking to be the most successful of the six tactics, with the former more
successful than the latter. His explanation for the success of these two is based on the fact that
they generated more alternatives than the other four tactics, they were not subjected to hidden
motives as often, and time was not wasted trying to adapt sub-optimal solutions. Furthermore,
these two successful tactics encouraged learning and did not lead to a premature selection of a
course of action. The same might be said for a decision process utilizing our problem structuring
model. Learning about what a decision maker values in a decision problem should stem from
knowledge and experience with alternatives, and learning about creative alternatives should
occur as one considers ones values and criteria. Taking the time to let each of these structuring
elements inform the other also should keep the decision process from settling prematurely.

In summary, this section provides empirical support for Figure 2 as a descriptive model of the
problem structuring process for some decision makers. That is, Nutt's (1993) Reframing formulation process and Nutt's (1999) Cyclical Search alternative-uncovering process each seem to be similar to our model. However, we note that each of these processes was the least used in Nutt's two studies. This is not surprising. It is well known that when using iterative processes, a decision maker's tendency toward satisficing often results in premature termination; this is what satisficing is. In practice, decision makers seem to stop early and consequently do not gain the
benefits from that extra iteration.

Also, these two models of Reframing and Cyclical Search were the most successful in Nutt's studies, which suggests that Figure 2 has potential as a good prescriptive model for the decision problem structuring process.

DISCUSSION
Our discussion is motivated by the following quote:
A manager should work on integrating his formal models of rational decision
making with his intuitive, judgmental, common sense manner of solving choice
problems and seek to adapt the former to fit the latter, rather than submit to a
bastardization of his intuition in the name of some modern mathematical
technique. (Soelberg, 1967, page 28)

Watson (1992) and Dando and Bennett (1981), among others, reject the supremacy of the
rational model of decision making as being impractical and at odds with actual decision making
behavior. The tendency for decision makers to discard values and even attributes is well
documented. French (1984) and Kasanen et al. (2000) argue that the assumptions made by
many methods may not agree with the practical findings of behavioral science. It should not be
surprising, then, that some non-quantitative methods, apparently without any rational basis,
attract the attention of decision makers. This suggests that there is a demand for decision making
tools, provided they are tailored to actual decision maker behavior. Furthermore, Henig and Buchanan (1996) conclude that the decision making process should not focus on selection, which is at the heart of the rational decision making paradigm, but rather involve understanding and expanding. They advocate a shift in the rational model of decision making and not a total rejection of it. The goal of finding the right answer is not what decision making should be all
about.

The reader may now justifiably ask, "so which method do you recommend for the decision
maker who needs advice?" Our answer is: clearly, any method that endeavors to understand and
expand - in this case, as applied to alternatives and criteria. Figure 2 shows how this is done;
criteria not only evaluate the alternatives but also generate them. Alternatives are not only to be
ranked and selected by criteria, but they also reveal criteria. This paper does not suggest a
method for doing this in detail, but we have outlined here the theoretical, empirically grounded,
foundation for decision problem structuring. This shifts the focus of methods away from
techniques of selection, which are too technical, and should attract at least a few decision
makers.

We reserve the development of a method for doing this for future research. Certainly, the
decision simulation ideas mentioned by Wright and Goodwin (1999) could be part of this
development, and the literature on creativity should contribute as well (for example, Altshuller,
1985). However, we anticipate that decision makers should iterate between alternatives and
criteria, at least twice, regardless of where they enter the loop. Certainly, more experienced
decision makers might have an established history of such iterations, and they may not need as
many new iterations as those less experienced. As Simon (1956) has pointed out, the observed
behavior of decision makers is to satisfice. In terms of our model, this means they perform only
one iteration and prematurely stop the decision making process. The entire point of our model is
that they can do better in their decision making if they perform two or more iterations.

We have restricted our discussion to the field of multi-criteria decision making, since most real-life problems
involve more than a single criterion. We note that our model, which attempts to allow decision
makers to understand and expand their thoughts regarding criteria and alternatives, is similar to
ideas mentioned by others in this field. Vanderpooten (1992) argues that in any multi-criteria
procedure, iteration should, or perhaps does, occur between two phases of the decision making
process: learning and search. He states that decision makers do not have well formed
preferences when facing a decision problem. Preference structures become better formed as
decision makers relate past experiences and present explorations of alternatives to the decision
problem at hand. He further states that the decision process is an iterative sequence of such
activity, which appears conceptually similar to what we propose in this paper. Also, Kasanen et al. (2000) discuss how decision alternatives are not well developed during most decision
processes and therefore need to be developed further using creative methods. They further
discuss how not all decision-relevant criteria are known by the decision maker at the start and
need to be discovered during the decision process. We think that our model helps address these
problems in a practical way.

SUMMARY
This paper offers a new model of decision problem structuring that highlights the interactive and
dynamic nature of criteria and alternatives. It is offered as both a prescriptive and descriptive
model. It is validated as a descriptive model of structuring behaviour based on well-accepted
theories of decision making behaviour, as well as on large sample empirical studies of the
problem structuring process. The model is also prescriptive, based on the success of similar
structuring behaviours found in the large sample empirical studies, as well as having extended Henig and Buchanan's (1996) prescription involving understanding and expanding in the
decision process. New methods for multi-criteria decision making should consider building in mechanisms that
allow for criteria to generate new alternatives, and in turn, new alternatives to help re-define the
set of criteria.

We caution, however, as does Watson (1992), that there probably is no single best problem
structuring method for all decisions. The influence of contextual factors on decision process is
significant (Dillon, 1998; Nutt, 1999), as is the initial frame taken by the decision maker (Nutt, 1998b). These and other influences should be studied further as they impact the problem
structuring process. In a world where the environment is chaotic, wants and desires are
constantly changing, and alternatives appear before decision makers perhaps randomly, problem
structuring methodologies need to be made more dynamic.

Acknowledgements
This project was financed by grants from The Leon Recanati Graduate School of Business
Administration and the Waikato Management School. We also acknowledge Arizona State
University, where the first author was on sabbatical leave during much of the writing of this
paper.

REFERENCES
Abualsamh RA, Carlin B, McDaniel Jr RR. 1990. Problem structuring heuristics in strategic
decision making. Organizational Behavior and Human Decision Processes 45: 159-174.
Altshuller GS. 1985. Creativity as an Exact Science; New York: Gordon and Breach.
Bazerman M. 1990. Judgement in Managerial Decision Making; New York: John Wiley.
Beach LR, Mitchell TR. 1990. Image theory: A behavioral theory of decisions in organizations.
In Research in Organization Behavior (Vol. 12). Staw BM, Cummings LL (eds). Greenwich,
CT; JAI Press; 1-41.
Buchanan J, Henig M, Corner J. 1999. Comment on "Rethinking value elicitation for personal consequential decisions" by G. Wright and P. Goodwin. Journal of Multi-criteria Decision
Analysis, 8(1): 15-17.
Buchanan JT, Henig EJ, Henig MI. 1998. Objectivity and subjectivity in the decision making
process. Annals of Operations Research 80: 333-345.
Checkland PB. 1981. Systems Thinking, Systems Practice. Chichester: Wiley.
Cohen MD, March JG, Olsen JP. 1976. A garbage can model of organizational choice.
Administrative Science Quarterly 17: 1-25.
Dando MR, Bennett PG. 1981. A Kuhnian crisis in Management Science? Journal of the
Operational Research Society 32: 91-103.
Dillon SM. 1998. Descriptive decision making: Comparing theory with practice, Proceedings of
the 33rd Annual Conference of the New Zealand Operational Research Society, August 31 - September 1; Waikato University, Hamilton: 99-108.
Dillon SM. 2000. Defining problem structuring. Unpublished Working Paper; Department of
Management Systems, University of Waikato, New Zealand.
Einhorn HJ. 1970. The use of nonlinear, noncompensatory models in decision making. British
Psychological Society Bulletin 73(3): 221-230.
French S. 1984. Interactive multi-objective programming: Its aims, applications and demands.
Journal of the Operational Research Society 35(9): 827-834.
Gordon LA, Miller DM, Mintzberg H. 1975. Normative Models in Managerial Decision-Making.
New York: National Association of Accountants.
Henig M, Buchanan J. 1996. Solving MCDM problems: Process concepts. Journal of Multi-Criteria Decision Analysis 5: 3-21.

Henig M, Katz H. 1996. R&D project selection: A decision process approach. Journal of Multi-Criteria Decision Analysis 5(3): 169-177.
Kasanen E, Wallenius H, Wallenius J, Zionts S. 2000. A study of high-level managerial decision
processes, with implications for MCDM research. European Journal of Operational Research
120(3): 496-510.
Keeney R. 1992. Value Focused Thinking. MA: Harvard University Press.
Keeney R, Raiffa, H. 1976. Decisions with Multiple Objectives. New York: Wiley.
Keller LR, Ho JL. 1988. Decision problem structuring: Generating options. IEEE Transactions
on Systems, Man, and Cybernetics 18(5): 715-728.
Klein GA. 1989. Recognition primed decisions. In Advances in Man-Machine Systems Research.
Rouse WB (ed). Greenwich, CT: JAI Press; 47-92.
Kleindorfer P, Kunreuther H, Schoemaker P. 1993. Decision Sciences: An Integrative
Perspective. Cambridge: Cambridge University Press.
Korhonen P. 1997. Some thoughts on the future of MCDM and our schools. Newsletter,
European Working Group "Multicriteria Aid for Decisions" 2(10): 1-2.
Lipshitz R. 1994. Decision making in three modes. Journal for the Theory of Social Behavior
24(1): 47-65.
Majone G. 1980. An anatomy of pitfalls. In Pitfalls of Analysis, IIASA International Series on
Applied Systems Analysis. Majone G, Quade E (eds). Chichester: Wiley; 7-22.
March JG. 1988. The technology of foolishness. Civiløkonomen (Copenhagen) 18, 1971.
Reprinted in March JG. Decisions and Organisations. Oxford: Blackwell.
Mintzberg H, Raisinghani D, Theoret A. 1976. The structure of unstructured decision processes.
Administrative Science Quarterly 21(2): 246-275.
Nutt P. 1984. Types of organizational decision processes. Administrative Science Quarterly,
29(3): 414-450.
Nutt P. 1993. The formulation processes and tactics used in organizational decision making.
Organization Science 4(2): 226-251.
Nutt P. 1998a. Surprising but true: Fifty percent of strategic decisions fail. Academy of
Management Executive, In Press.
Nutt P. 1998b. Framing strategic decisions. Organization Science 9(2): 195-216.

Nutt P. 1999. A taxonomy of strategic decisions and tactics for uncovering alternatives.
Unpublished Working Paper. Ohio State University, Columbus, Ohio, USA.
Perry W, Moffat J. 1997. Developing models of decision making. Journal of the Operational
Research Society 48: 457-470.
Raiffa H. 1968. Decision Analysis. Reading, MA: Addison-Wesley.
Rivett BHP. 1968. Concepts of Operational Research. New Thinkers Library.
Saaty TL. 1980. The Analytic Hierarchy Process. New York: McGraw-Hill.
Schoemaker PJH. 1980. Experiments on Decisions Under Risk: The Expected Utility Theorem.
Boston: Martinus Nijhoff Publishing.
Simon HA. 1955. A behavioral model of rational choice. Quarterly Journal of Economics 69:
99-118.
Simon HA. 1956. Rational choice and the structure of the environment. Psychological Review
63: 129-152.
Simon HA. 1960. The New Science of Management Decision. New Jersey: Prentice Hall.
Soelberg P. 1967. Unprogrammed decision making. Industrial Management Review 8(2): 19-29.
Taket A, White L. 1997. Wanted: Dead or alive - ways of using problem-structuring methods in
Community OR. International Transactions in Operational Research 4(2): 99-108.
Thierauf RJ. 1978. An Introductory Approach to Operations Research. New York: Wiley.
Tversky A. 1972. Elimination by aspects: A theory of choice. Psychological Review 79(4): 281-299.
Vanderpooten D. 1992. Three basic conceptions underlying multiple criteria interactive
procedures. In Multiple Criteria Decision Making, Proceedings of the Ninth International
Conference: Theory and Applications in Business, Industry, and Government. Goicoechea A,
Duckstein L, Zionts S (eds). Berlin: Springer-Verlag; 441-448.
Volkema R. 1983. Problem formulation in planning and design. Management Science 29: 639-652.
von Winterfeldt D. 1980. Structuring decision problems for decision analysis. Acta Psychologica
45: 71-93.
Watson SR. 1992. The presumptions of prescription. Acta Psychologica 80: 7-31.

Wildavsky A. 1979. Speaking Truth to Power, Chapter 5, Between Planning and Politics:
Intellect vs. Interaction as Analysis. Boston, MA: Little Brown.
Wright G, Goodwin P. 1999. Value elicitation for personal consequential decisions. Journal of
Multi-criteria Decision Analysis 8(1): 3-10.
Woolley RN, Pidd M. 1981. Problem structuring: A literature review. Journal of the
Operational Research Society 32: 197-206.
