1. INTRODUCTION
The need for such ranking arises in a variety of
situations. For example: thousands of military and
civilian sites have been identified as contaminated
with toxic substances; myriad risk scenarios are
commonly identified during the development of
software-intensive engineering systems; and thousands of mechanical and electronic components of
the Space Shuttle are placed on a critical item list
(CIL) in an effort to reveal significant contributions
to program risk. In all such risk identification
procedures we must then prioritize a large number
of risk scenarios according to their individual contributions to the overall system risk. A dependable
and efficient ranking and filtering of identified risk
elements can be an important aid toward systematic
risk control and reduction.
Infrastructure operation and protection highlights the challenges to risk filtering, ranking, and
management in large-scale systems. Our manmade engineered systems are becoming increasingly vulnerable to natural and willful hazards; these include telecommunications, electric power, gas and oil, transportation, water-treatment plants, water-distribution networks, dams, and levees. Fundamentally, such systems have a large number of components and subsystems. Most water-distribution systems, for example, must be addressed
within a framework of large-scale systems, where a
hierarchy of institutional and organizational decision-making structures (e.g., federal, state, county,
and city) is often involved in their management
(Haimes et al. 1997). Coupling exists among the
subsystems (e.g., the overall budget constraint
imposed on the overall system), and this further
complicates their management. A better understanding of the interrelationship among natural,
willful, and accidental hazards is a logical step in
helping to improve the protection of critical national
infrastructures. Such efforts should build on the
experience gained over the years from the recovery
and survival of infrastructures assailed by natural
and human hazards. Furthermore, it is imperative to
model critical infrastructures as dynamic systems in
which current decisions have impacts on future
consequences and options.
Within the activity known as total risk management of a system (Haimes 1991), the term risk
assessment means identifying the risk scenarios,
i.e., determining what can go wrong in the system
and all the associated consequences and likelihoods.
The next steps are to generate mitigation options,
evaluate each in terms of its cost, benefit, and risk
tradeoffs, and then decide which options to implement and in what order. Filtering and ranking aid
this decision process by focusing attention on those
scenarios that contribute the most to the risk.
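The bookkeeping implied by this definition of risk assessment can be sketched as a small data structure. The field names, scenarios, and numbers below are illustrative placeholders, not from the article; the expected-loss proxy is one simple way to order scenarios by their contribution to overall risk.

```python
from dataclasses import dataclass

@dataclass
class RiskScenario:
    """One answer to 'What can go wrong?' (illustrative fields only)."""
    description: str
    likelihood: float   # chance the scenario occurs
    consequence: float  # severity of its outcome, on a common scale

    def contribution(self) -> float:
        # Expected-loss proxy for the scenario's contribution to system risk.
        return self.likelihood * self.consequence

# Ranking by contribution focuses attention on the worst few scenarios.
scenarios = [
    RiskScenario("contaminated site", 0.20, 8.0),
    RiskScenario("software defect", 0.50, 2.0),
    RiskScenario("component failure", 0.05, 9.0),
]
ranked = sorted(scenarios, key=lambda s: s.contribution(), reverse=True)
print([s.description for s in ranked])
```

A real assessment would replace the scalar consequence with the article's multi-criteria ratings, but the ranking step has the same shape.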
This article presents a methodological framework to identify, prioritize, assess, and manage
scenarios of risk to a large-scale system from
multiple overlapping perspectives. The organization
of the article is as follows. After reviewing earlier
efforts in risk filtering and ranking, we discuss
hierarchical holographic modeling as a method for identifying risk scenarios.
Hierarchical holographic modeling has been
extensively and successfully used for identifying
the risk scenarios in numerous projects (Haimes
1981, 1998; Lambert et al. 2001). The HHM framework was developed because it is impractical to
represent within a single model all the important
and critical aspects of complex systems. HHM offers
multiple visions and perspectives, which add
strength to a risk analysis. It has been extensively
and successfully deployed to study risks for government agencies such as the President's Commission
on Critical Infrastructure Protection (PCCIP), the
FBI, NASA, the Virginia Department of Transportation (VDOT), and the National Ground Intelligence Center, among others. The HHM
methodology/philosophy is grounded on the premise
that in the process of modeling large-scale and
complex systems, more than one mathematical or
conceptual model is likely to emerge. Each of these
models may adopt a specific point of view, yet all
may be regarded as acceptable representations of
the infrastructure system. Through HHM, multiple
models can be developed and coordinated to capture
the essence of the many dimensions, visions, and
perspectives of infrastructure systems.
Perhaps one of the most valuable and critical
aspects of hierarchical holographic modeling is its
ability to facilitate the evaluation of the subsystem
risks and their corresponding contributions to the
risks in the total system. In the planning, design, or
operational mode, the ability to model and quantify
the risks contributed by each subsystem markedly
facilitates identifying, quantifying, and evaluating
risk. In particular, HHM has the ability to model the
intricate relationships among the various subsystems
and to account for all relevant and important
elements of risk and uncertainty. This makes for a
more tractable modeling process and results in a
more representative and encompassing risk assessment process.
As pointed out by Kaplan et al. (2001), HHM
can be regarded as a general method for identifying
the set of risk scenarios. It has turned out to be
particularly useful in modeling large-scale, complex,
and hierarchical systems such as defense and civilian
infrastructure systems. To understand HHM in this
way, we first remind ourselves of the principle that
the process of identifying the risk scenarios for a
system of any kind should begin by laying out a diagram that represents the "success," or "as-planned," scenario of the system. In the HHM method this diagram takes the form of a master
chart showing different perspectives on the system requirements (for an example, see Fig. 1).
Perspectives are portrayed by columns in the chart,
each with a "head topic." In Fig. 1, head topics include technological, organizational, legal, time-horizon, user-demands, and socioeconomic. Each perspective
in the chart is then broken down into boxes or
subtopics. Each subtopic box can then be thought of as representing a set of "success criteria," i.e., actions or results that are supposed to occur as part of the definition of the system's success. Consider
now the set of such criteria represented by the jth
box in the ith perspective. For each such box we can
then generate a set of risk scenarios by asking: "What can go wrong with respect to this class of success criteria?" i.e., "How could it happen that we would fail to achieve this set of success criteria?" (More pointedly, if we wanted to identify or anticipate terrorism-type scenarios, we might ask: "If I wanted to make something go wrong with respect to this class of success criteria, how could I do it?" (Kaplan et al. 1999).)
By answering these questions we generate a set
of risk scenarios associated with the jth subtopic box
of the ith perspective, and it is now natural to think of
this box as a "source of risk." The union of these
sets of risk scenarios, over all the boxes, should now
yield a complete set of risk scenarios for the system
or operation as a whole.
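The union just described can be sketched directly. The perspectives, subtopic boxes, and scenarios below are invented placeholders standing in for a real HHM master chart:

```python
# Hypothetical HHM master chart: perspective -> subtopic box -> the risk
# scenarios generated by asking "What can go wrong with these success criteria?"
hhm = {
    "technological": {
        "hardware": {"component wear", "single-point failure"},
        "software": {"specification error", "single-point failure"},
    },
    "organizational": {
        "training": {"operator error"},
        "budget":   {"deferred maintenance", "component wear"},
    },
}

# Union over every subtopic box yields the complete scenario set ...
complete = set().union(*(box for boxes in hhm.values() for box in boxes.values()))

# ... while any single perspective yields only a subset (an approximation).
technological_only = set().union(*hhm["technological"].values())

assert technological_only <= complete
print(len(complete), len(technological_only))
```

The subset check makes the point of the next paragraph concrete: one perspective alone misses scenarios that another perspective surfaces.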
Taking the union only over the boxes in one perspective would typically yield a subset (an approximation) of the complete set of risk scenarios. Similarly, the union of the sets of success criteria corresponding to one perspective yields a subset (an approximation) of the total set of success criteria of the system as a whole. No one
perspective, typically, is adequate on its own to
consider the welfare of all current and future
stakeholders. Multiple perspectives of success are
useful for developing an inclusive set of answers to "What can go wrong?"
The nature and capability of HHM is thus to identify a comprehensive, and therefore large, set of
risk scenarios. It does this by presenting multiple,
complementary perspectives of the success scenario
requirements. To deal with this large set we need a
systematic process that filters and ranks these
identified scenarios so that we can prioritize risk
mitigation activities. The first purpose of this
article is to assemble and discuss a number of
published approaches toward such a systematic
process.
Fig. 1. Excerpt from a hierarchical holographic model developed to identify sources of risk to operations other than war (Dombroski et al.
2002).
Undetectability refers to the absence of modes by which the initial events of a scenario can be discovered before harm occurs.
Uncontrollability refers to the absence of control modes that would make it possible to take action or make an adjustment to prevent harm.
Multiple paths to failure indicates that there are multiple and possibly unknown ways for the events of a scenario to harm the system, such as by circumventing safety devices.
Irreversibility indicates a scenario in which the adverse condition cannot be returned to the initial, operational (pre-event) condition.
Duration of effects indicates a scenario that would have a long duration of adverse consequences.
Cascading effects indicates a scenario where the effects of an adverse condition readily propagate to other systems or subsystems, i.e.,
cannot be contained.
Operating environment indicates a scenario that results from external stressors.
Wear and tear indicates a scenario that results from use, leading to degraded performance.
HW/SW/HU/OR (Hardware, Software, Human, and Organizational) interfaces indicates a scenario in which the adverse outcome is
magnified by interfaces among diverse subsystems (e.g., human and hardware).
Complexity/emergent behaviors indicates a scenario in which there is a potential for system-level behaviors that are not anticipated from a
knowledge of the components and the laws of their interactions.
Design immaturity indicates a scenario in which the adverse consequences are related to the newness of the system design or some other lack of proof of concept.
Scale levels for each criterion (High, Medium, Low, or Not applicable):

Undetectability: High = unknown or undetectable; Medium = late detection; Low = early detection; Not applicable.
Uncontrollability: High = unknown or uncontrollable; Medium = imperfect control; Low = easily controlled; Not applicable.
Multiple paths to failure: High = unknown or many paths to failure; Medium = few paths to failure; Low = single path to failure; Not applicable.
Irreversibility: High = unknown or no reversibility; Medium = partial reversibility; Low = reversible; Not applicable.
Duration of effects: High = unknown or long duration; Medium = medium duration; Low = short duration; Not applicable.
Cascading effects: High = unknown or many cascading effects; Medium = few cascading effects; Low = no cascading effects; Not applicable.
Operating environment: High = unknown sensitivity or very sensitive to operating environment; Medium = sensitive to operating environment; Not applicable.
Wear and tear: High = unknown or much wear and tear; Not applicable.
HW/SW/HU/OR interfaces: High = unknown sensitivity or very sensitive to interfaces; Medium = sensitive to interfaces; Low = no sensitivity to interfaces; Not applicable.
Complexity/emergent behaviors: High = unknown or high degree of complexity; Medium = medium complexity; Low = low complexity; Not applicable.
Design immaturity: High = unknown or highly immature design; Medium = immature design; Low = mature design; Not applicable.
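For downstream filtering it is convenient to make such qualitative ratings comparable. The ordinal encoding below is one illustrative convention, not part of the published framework:

```python
# Illustrative ordinal encoding of the qualitative rating scale;
# "Not applicable" sorts below every substantive rating.
SCALE = {"High": 3, "Med": 2, "Low": 1, "Not applicable": 0}

def worst_rating(ratings):
    """Return the most severe rating among a scenario's criterion ratings."""
    return max(ratings, key=SCALE.__getitem__)

def at_least(rating, threshold):
    """True when a rating meets or exceeds a severity threshold."""
    return SCALE[rating] >= SCALE[threshold]

print(worst_rating(["Low", "Med", "High", "Med"]))  # -> High
```

Any monotone encoding would serve; the only requirement is that comparisons between ratings become well defined.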
Country HHM;
U.S. HHM;
Alliance HHM; and
Coordination HHM.
Table III. List of 11 Scenarios to be Filtered in Phase II

Subtopic  Risk Scenario
1.1       Telephone
1.2       Cellular
1.3       Radio
1.4       Television
1.5       Technology
2.        Cable
3.1       Computer Information Systems (CIS)
3.2       Management Information Systems (MIS)
4.        Satellite
5.        International
6.        Regulation
Telephone
Cellular
Radio
Television
Cable
Computer Information Systems (CIS)
Management Information Systems (MIS)
Satellite
International
Risk Scenario
1.1  Telephone
1.2  Cellular
2.   Cable
3.1  Computer Information Systems (CIS)
4.   Satellite
5.   International

Fig. 5. Qualitative severity scale matrix.
Table VII. Scoring of Subtopics for OOTW Using the Criteria Hierarchy

Criteria                                 1.1 Telephone  1.2 Cellular  2. Cable  3.1 CIS  4. Satellite  5. International
Undetectability                          Low            Low           Med       High     Low           High
Uncontrollability                        Med            Med           High      High     Med           High
Multiple Paths to Failure                High           Med           High      High     Med           High
Irreversibility                          Med            High          Med       High     High          Low
Duration of Effects                      High           High          High      High     High          High
Cascading Effects                        Med            Med           Low       Low      High          High
Operating Environment                    High           High          High      High     Med           High
Wear and Tear                            Med            High          Low       High     Med           High
Hardware/Software/Human/Organizational   High           High          Med       High     High          High
Complexity and Emergent Behaviors        Med            High          Low       High     High          High
Design Immaturity                        Med            High          Med       High     High          Med
likelihood of 0.15 to this scenario. Even if it did
occur, its effects may be somewhat reversible within
six hours.
Assuming that we filter out all subtopics (risk
scenarios) attaining a risk valuation of moderate or
low risk, CIS is filtered out. Therefore, the remaining five critical risk scenarios are: Telephone,
Cellular, Cable, Satellite, and International Communications. Based on the assessments shown above
and in Fig. 6, planners of the operation would surely
want to concentrate resources and personnel on
ensuring that the cellular, cable, satellite, telephone,
and international communications networks are well
protected and guarded.
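The filtering step just described, discarding scenarios whose overall valuation is moderate or low risk, can be sketched as follows. The aggregation rule and the criterion ratings below are invented stand-ins for the article's severity matrix and scoring tables:

```python
# Hypothetical valuation rule: a scenario rated "High" on at least half of its
# criteria is treated as high risk; any "High" at all makes it moderate.
def valuation(ratings):
    highs = sum(r == "High" for r in ratings)
    if highs >= len(ratings) // 2:
        return "high"
    return "moderate" if highs > 0 else "low"

scores = {  # invented criterion ratings per scenario
    "Telephone":     ["Low", "Med", "High", "Med", "High"],
    "CIS":           ["Med", "Low", "Low", "Med", "Low"],
    "International": ["High", "High", "High", "Low", "High"],
}

# Keep only the scenarios that survive the moderate-or-low filter.
remaining = [name for name, r in scores.items() if valuation(r) == "high"]
print(remaining)
```

In the article the valuation also folds in an assessed likelihood (as in the CIS example above), which a fuller sketch would add as a second input to the rule.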