
Port Security Risk

1.0 OVERVIEW
The reflections collected in this article arose in the context of the preparation of a relatively
ambitious project: the SAFEPORT Project [1], under NATO's Defense Against Terrorism PoW
on the Protection of Harbours and Ports. Along with other purposes, its main goal is the
development of a Decision Support System (DSS), capable of producing, for virtually any port
and any particular operations setting, recommendations for the configurations of resources that
provide adequate surveillance and protection over the area of interest (AoI). The main focus is
in trying to find the best mix of sensors, platforms and personnel, and their locations or default
trajectories, rather than the types of effectors and their best usage.
The scope of this work is to provide scientific support to planning decisions about what resources
should be allocated, and where and how they should operate, for adequate protection of ports and
harbours from asymmetric threats originating from the waterside, below or above the sea
surface. For now, this is approached only from a strategic point of view (what and where),
without entering into considerations on task planning, or on tactics (how).
Among other possibilities, terrorist threats can consist of intruder platforms (divers, small
boats, AUVs/UUVs) carrying improvised explosive devices, targeting vessels, energy
systems, or other critical infrastructures.
Especially since the 9/11 terrorist attack, many proposals have been made in the literature, and
methodologies have been developed and applied to support similar planning decisions for the
protection of critical systems and areas. Most of the scientific contributions focus on the design
of sensor networks, on signal classification and data fusion, and occasionally on the planning of
courses of action. In the context of port protection, such courses of action refer mostly to the
movements required to intercept suspects, with fewer works addressing other types of response
measures, including the usage of effectors [2].
Many relevant scientific contributions can be clustered into two broad and different
approaches:
- Probabilistic risk assessment/analysis: this has been extensively applied in the
context of safety assessment [7-9], and has also become the standard approach to support
strategic security decisions [10-13]; and,
- Adversarial risk analysis, and defender-attacker modelling: these are being proposed
to explicitly account for the intelligence and attitudes of terrorists, and may be better
suited for task planning than for strategic planning goals [14-17].
Many researchers position themselves on one of these two sides, and lately there has been
some exchange of arguments, with criticisms also being raised by independent researchers
[10,18-25]. Most of those criticisms, even if partial, are relevant. In most cases, it is
unreasonable to try to guesstimate threat-related probabilities, or to assume that terrorists have
perfect intelligence about the defense system or that they choose rationally from possible
courses of action. Both views are therefore limited by what is feasible or reasonable to include
in a mathematical model, either because of lack of data or because of the daunting effort put into
solving the problem by considering a large number of possible states of nature or of possible
courses of action for attackers and defenders. Nevertheless, each worldview contributes with a
useful key concept:
- Defense decisions should be supported by some form of risk assessment and estimation,
before and after implementation of those decisions;
- Terrorists, as much as the defense forces, should be modelled as intelligent agents that
dynamically adapt to circumstances in order to reach their goals.
In fact, both views can be used complementarily, and several researchers have presented ways
to combine them [26-28].
From our point of view, a decision support system can produce reasonably cost-effective
strategic solutions and recommendations for port protection by using a more reliable way to
estimate security risk. On the other hand, such solutions should be validated, or revised, through
some form of simulation experimentation, either through war gaming and red teaming [29]
or, in a fully automated way, through data farming [30,31].

2.0 RISK
After several revisions over the past decade, the most common way, nowadays, to
quantitatively estimate security risk is through the product of three components:
Risk = Threat × Vulnerability × Consequence
This TVC analysis methodology is adopted by the US Department of Homeland Security and
the US Coast Guard, among others. Willis [32-34] advanced the following interpretation for
those components, in order to define terrorism risk:
Risk = P[attack occurs]
       × P[attack results in damage | attack occurs]
       × E[damage | attack occurs and results in damage]
Therefore, three estimates must be obtained:
- An educated guesstimate of the probability of an attack occurring;
- A relatively more objective estimate of the ability of the defense system to cope with an
attack attempt;
- An educated guesstimate of the global impact of a successful attack.
Although some studies try to express the third item in monetary units, as a requirement for cost-
benefit analysis [35], it is very disputable or unreasonable to try to do so when there may be
lives lost, environmental damages, or the disruption of different types of systems, civilian or
military.
In general, T, V and C are very uncertain quantities. Suppose one is sure that
10^-4 ≤ T ≤ 10^-3 (within some time interval), 10^-2 ≤ V ≤ 10^-1, and
10^6 ≤ C ≤ 10^7 (say, monetary units); then,
risk would be estimated as anything between 1 and 1000 MU. Despite problems like this,
several risk assessment methodologies rely heavily on the judgment of experts and on the TVC
formula for risk estimation [36,37], and we nevertheless agree that this approach is still a useful
one for the evaluation of some specific risks.
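As a small illustration of how interval uncertainty in T, V and C propagates into the TVC product, the numeric example above can be sketched as follows (the function name and the corner-product approach are ours, not part of the cited methodologies):

```python
from itertools import product

def risk_bounds(t_bounds, v_bounds, c_bounds):
    """Interval for Risk = T x V x C when each factor is only known to
    lie within an interval: since the product is monotone in each
    positive factor, the extremes occur at the interval corners."""
    corners = [t * v * c for t, v, c
               in product(t_bounds, v_bounds, c_bounds)]
    return min(corners), max(corners)

# The example from the text: 1e-4 <= T <= 1e-3, 1e-2 <= V <= 1e-1,
# and 1e6 <= C <= 1e7 (monetary units).
low, high = risk_bounds((1e-4, 1e-3), (1e-2, 1e-1), (1e6, 1e7))
```

With these bounds, low evaluates to 1 MU and high to 1000 MU, reproducing the three-orders-of-magnitude spread discussed above.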
However, we need to find a way of computing risks associated to decisions of locating sensors
in space. Different possible configurations of the surveillance system will lead to different
values of an appropriate security risk figure of merit. For that purpose, we propose the
evaluation of 2D risk maps.
Risks have to be estimated for any relevant scenario and, before that, for any relevant operations
setting. Here, we are making an explicit distinction between the concept of scenario and the
concept of operations setting [38; see Annex I]. Both concepts subsume the inclusion of a
description of the background geographical and socio-political context. A setting focusses on
the definition of the environmental conditions (meteorological, maritime traffic, sea currents,
etc) and the type of threat, while a scenario further includes the definition of the defence assets.
Therefore, a setting may refer to a scenario where no protective measures are yet implemented.
In the context of port and harbour protection, and in many cases, two 2D risk maps may be
sufficient for each setting: one corresponding to threats at the sea surface level, and another for
underwater threats. Naturally, this may be extended to more layers, to encompass aerial threats
or to account for large depths in the waterside, considering range limitations of the detection
capabilities of some underwater sensors.
In Section 5, we formally propose the concept of spatial security risk (SSR), based on the
concepts of criticality, (in)effectiveness, and susceptibility. The definition of SSR indices
resembles that of TVC analysis, but several appropriate modifications were introduced. The
following quoted definitions are taken from [39], a formidable glossary whose value is not
restricted to the realm of military health systems:
- Criticality can be defined as "a relative measure of the consequences of a failure
mode". The word "relative" stresses the fact that a criticality index seeks to encompass,
in a single figure, the perceived degree of loss or damage resulting from a successful
attack, from multiple points of view; criticality will be further discussed in Section 5;
- Effectiveness can be defined as "how well a [defense] system will work under
operational conditions to accomplish a defined mission"; this will be further discussed
in the following Section;
- Susceptibility can be defined as "the degree to which a (...) system is open to effective
attack due to one or more inherent weaknesses". The word "inherent" stresses the fact
that vulnerability is not only due to the possible ineffectiveness of the defense system.
Different zones of a port are more or less exposed to intruders, due to geographical
characteristics, the presence of breakwaters, possible exclusion zones, the density of maritime
traffic (higher traffic favours concealment), sea currents, etc. While a susceptibility index can be
estimated from objective, quantified information, it may also be used to incorporate subjective
perceptions of the natural vulnerability of different parts of the port. It may even be used to
induce solutions of a certain type, such as the definition of layers in the defense configuration,
namely through sensor barriers and barrier patrols.
This is especially important, since we are considering only a high-level approach to modelling
risk, from a strategic, or planning point of view, without considering the dynamics of actual
operations, i.e., the movements of threats and of the defense mobile platforms. Therefore, it is
through this susceptibility index concept that one may also enforce the principle that the earlier a
threat is detected, the more time and better chances defense has to produce an effective
response. On the other hand, the introduction of cost considerations (see Section 7) will
demote solutions where the assets are put too far away from the harbour.
3.0 VULNERABILITY
The concept of security risk (as much as safety risk or financial risk) has been defined in
the literature in quite disparate ways, but, in most cases, it includes the concept of vulnerability,
which has, by itself, been defined in many, often contradictory, ways [40]. Although in a
different context, vulnerability was once defined as "the degree to which a system is susceptible
to, or unable to cope with, adverse effects (...)" [41; emphasis added]. This definition is
especially useful when we seek to assess the benefits of installing a protection system.
Following this interpretation, in the context of defense [discussing the value of a protection
system], vulnerability should be evaluated for each possible setting, and decomposed into two
parts:
- Susceptibility: the natural level of exposure to potential attacks, not taking into account
the protection system to be installed; and,
- Ineffectiveness: the probability of the defense system failing to cope with an attack
attempt.
The concept of susceptibility was already addressed in the previous Section, and we now
discuss the estimation of the effectiveness of the defense system. Since, in this article, we focus
on surveillance systems, comprised of sensors of different types, the effectiveness of the system
will be measured in terms of the probability of detecting potential threats.
Suppose one has n fixed sensors located at positions (c_1, ..., c_n). Each sensor has a detection
probability function, D_i(x), evaluated for all points x. Some studies oversimplify, by
considering circular ranges of guaranteed detection,

D_i(x) = 1, if ||x - c_i|| ≤ r; = 0, otherwise;

but it is more reasonable to consider probabilistic sensor models, where 0 < D_i(x) < 1 are
probabilities that degrade with the distance, e.g., the Poisson scan and lambda-sigma models
[42].
In either case, the global probability of detection of an object at position x, by at least one of
the sensors, is then given by:

D(x) = 1 - ∏_{i=1}^{n} [1 - D_i(x)]
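A minimal sketch of this combination rule, assuming independent sensors and a simple exponential-decay per-sensor model (the decay parameter lam is a hypothetical stand-in for a fitted model such as the Poisson scan):

```python
import math

def detection_probability(distances, lam=0.05):
    """Global probability that at least one sensor detects an object,
    D(x) = 1 - prod_i (1 - D_i(x)), assuming independent sensors.

    distances holds the distance from the object to each sensor; each
    D_i decays exponentially with distance (an illustrative model,
    with lam a hypothetical decay parameter)."""
    miss_all = 1.0
    for d in distances:
        d_i = math.exp(-lam * d)   # per-sensor detection probability
        miss_all *= (1.0 - d_i)    # independent misses multiply
    return 1.0 - miss_all
```

Adding sensors can only increase D(x), since every extra factor (1 - D_i) shrinks the joint miss probability.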
One possible goal would be to find the best locations of the n sensors so as to maximize the
expected detection performance, averaged in space:

find (c_1, ..., c_n) : max E[D(x)]
Alternatively, one may consider a maximin objective: maximizing the minimum value of D(x).
Yet another interesting alternative is to find the number of (identical) sensors that are
necessary to provide a desired detection performance.
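For identical, independent sensors covering the same point with per-sensor probability p, that last question has a closed form: the smallest n with 1 - (1 - p)^n ≥ p_target. A sketch under that simplifying assumption (function name ours):

```python
import math

def sensors_needed(p_single, p_target):
    """Smallest number n of identical, independent sensors such that
    the joint detection probability 1 - (1 - p_single)**n reaches
    p_target (an illustrative simplification ignoring geometry)."""
    if not (0.0 < p_single < 1.0 and 0.0 < p_target < 1.0):
        raise ValueError("probabilities must lie strictly in (0, 1)")
    return math.ceil(math.log(1.0 - p_target) / math.log(1.0 - p_single))
```

For instance, sensors with p_single = 0.5 need four units to reach a 0.9 joint detection probability.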
This approach is not restricted to sensors fixed at specified positions; it may be adapted to
mobile sensors as well: a group of mobile sensors (say, UUVs or patrol boats) may have
the task of surveying a prescribed area, collectively providing some expected performance.
In terms of a surveillance problem, ineffectiveness will then be defined as the probability

I(x) = 1 - D(x)
If, beyond the simple ability of detecting a possible threat, we also consider the ability to
correctly classify the contact as a true positive (i.e., a meaningful potential threat) or as a true
negative (a safe contact), and to successfully apply the subsequent neutralization measures,
effectiveness will be assessed by a stricter equation, involving all relevant conditional
probabilities:

Effectiveness = P[Detection] × P[Correct classification | Detection] × ...

However, this may be quite difficult to detail: among other more or less complex issues,
classification may or may not require interception, and further (binary) classification may be
needed to decide upon the application of softer or harder response measures.
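The chain of conditional probabilities can be sketched as a plain product, here assuming (our assumption) that the chain ends with a neutralization step:

```python
def effectiveness(p_detect, p_classify, p_neutralize):
    """Overall effectiveness as the product of conditional
    probabilities along the detect -> classify -> neutralize chain.
    The neutralization term is our assumed continuation of the
    ellipsis in the text; further stages would multiply in the
    same way."""
    return p_detect * p_classify * p_neutralize
```

Because every factor lies in (0, 1), each extra stage in the chain can only lower the overall effectiveness, which is why a stricter definition than detection alone yields smaller values.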
4.0 CRITICALITY
We now discuss another component of risk, criticality, also to be estimated at any point of the
AoI. A criticality index value represents, somehow, the danger of having a threat present at a
given location, considering the distance to likely targets, such as critical infrastructures, and the
expected impact of attacks on those targets. The following describes a convenient practical
approach for the definition of those criticality indices, where a decision-maker is required to
provide only the most essential input information.
One should estimate or declare the degree of importance of any critical node (an asset,
infrastructure or zone) in a port. Something is interpreted as critical if it is judged to have a
higher likelihood of being targeted by a terrorist attack, and if that attack can result in
significant loss or damage. Critical nodes should be defined even if the potential targets are not
permanently present at those specific locations (e.g., LNG and cruise terminals).
Firstly, we note that the criticality indices to be assigned to the main potential targets should be
judgmentally synthesized from some kind of multi-attribute utility function (for instance, the
additive aggregation model), where all relevant factors for the assessment of the
consequences of a successful attack are considered: possible number of casualties, economic
and environmental impacts, etc. Even if these expectable consequences may not be bounded, we
propose that at least the resulting criticality values be defined on a bounded scale, say, 0 to 10,
linearly translatable to 0 to 1, for the sake of normalization.
One needs to define a suitable criticality function over all the AoI, 0 < C(x) < 1. Ideally, this
should be generated from a limited set of input information, including:
- the set of local maxima, called critical nodes, {g_i}, i = 1, ..., m;
- the corresponding criticality indices, 0 < κ_i < 1.
A radial basis function is assigned to each node, 0 < C_i(x) ≤ κ_i, with values decaying with
the distance of a point (the possible position of a threat) to the node. One simple example of
such a function is:

C_i(x) = κ_i / (1 + λ ||x - g_i||),

where the common parameter λ helps to regulate the decay rate of all basis functions.
The criticality function then results from the combination of these individual functions:

C*(x) = 1 - ∏_{i=1}^{m} [1 - C_i(x)],
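A sketch of this criticality map, assuming a simple hyperbolic-decay basis function C_i(x) = kappa_i / (1 + lam * ||x - g_i||) (our reading of the example above; node positions, indices and the decay parameter lam are illustrative):

```python
import math

def criticality(x, nodes, kappas, lam=0.1):
    """Combined criticality C*(x) = 1 - prod_i (1 - C_i(x)), where
    node g_i with index kappa_i contributes the basis function
    C_i(x) = kappa_i / (1 + lam * ||x - g_i||), decaying with the
    distance from point x to the node."""
    miss = 1.0
    for g, kappa in zip(nodes, kappas):
        dist = math.hypot(x[0] - g[0], x[1] - g[1])
        c_i = kappa / (1.0 + lam * dist)
        miss *= (1.0 - c_i)        # combine nodes like independent factors
    return 1.0 - miss
```

At a critical node itself the map attains at least that node's index (plus any contribution from nearby nodes), and it decays smoothly away from all nodes while staying within (0, 1).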
The above formulation can be adapted to those cases where critical infrastructures, or other
likely targets, do not reduce almost to a point in space, but rather cover a relatively large area. In
the latter case, it is just a matter of specifying the radius r_i of that area, so that
C_i(x) = κ_i wherever ||x - g_i|| ≤ r_i. Critical infrastructures or areas may have other shapes,
or may even extend along lines, such as pipelines. The adaptation of the above approach to
those cases is possible, although not as straightforward.
A further difficulty might be posed by the fact that distance is not the only relevant factor. Due
to breakwaters or other obstacles, and also sea currents, a straight line is not always the shortest
or easiest path for a threat approaching a target. However, such considerations can and should
be dealt with through the susceptibility component of risk.
5.0 RISK MAPS
A risk map can be estimated, for all points x ∈ AoI, and for a given scenario s, as a collection
of spatial security risk indices, which we will define as:

SSRI(x, s) = Susceptibility × Ineffectiveness × Criticality
           = S(x, s) × I(x, s) × C(x, s),

where all factors are measured in the interval (0,1).
Note that both the geographic features and the environmental conditions have an influence on
sensor performance, and also on the behaviour of threats.
A relatively high value for the SSRI is attained when all three factors simultaneously have
relatively high values. Conversely, if a point has a low value for susceptibility, for criticality,
or for ineffectiveness, its risk index will be relatively low.
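As a sketch, the SSRI over a discretized AoI is just the elementwise product of the three component grids (the grid representation is ours):

```python
def ssri_map(S, I, C):
    """Pointwise SSRI over a grid: S, I and C are equally shaped 2D
    lists holding susceptibility, ineffectiveness and criticality
    values in (0, 1); returns their elementwise product."""
    return [[s * i * c for s, i, c in zip(row_s, row_i, row_c)]
            for row_s, row_i, row_c in zip(S, I, C)]
```

Because each factor lies in (0, 1), the product is dominated by its smallest factor: lowering any one of susceptibility, ineffectiveness or criticality at a point drives the index down there.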
As stated before, we consider 2D risk maps but, for the sake of convenience, in Fig.1 we
illustrate cut views for a very simple, pseudo-realistic case. This may help better understand the
concept of prior risk, that the defense solution seeks to compensate for, in order to reduce the
overall (posterior, or residual) risk. For better interpretation of how risk is distributed along
space, prior risk is shown as the geometric mean of susceptibility and criticality. In case one
wishes to pay attention only to threats originating from the waterside, the landside risk is
ignored.






Figure 1: Cut views of the surfaces associated to the spatial security risk components
and their main combinations; landside depicted on the left, waterside on the right.
The residual risk is shown in Fig.1(f), also as a geometric mean:

Residual risk = SSRI^(1/3)
However, note that all values of SSRI in this example are less than 0.05. This is not a
probability, nor an estimate of the expected conditional loss or damage. SSRI is defined in the
interval (0,1) but expressed in abstract measuring units, combining hard probability estimates
with judgmental estimates. It is just a practical, operational concept, useful for the purpose of
describing how risk is distributed in space, as needed for optimization. It may also be useful to
compare proportions between:
- The basic prior risk within a port, ignoring any current or future protective measures;
- The current risk, considering only the current protective measures;
- The residual risk, expected after the implementation of an alternative defense solution.
There are several possible criteria that may be considered to globally reduce the SSRI in space.
Instead of, for instance, minimizing the average value of the SSRI, it may be more appropriate to
minimize its maximum value. This is numerically safe, since the scale on which this index is
evaluated is bounded; otherwise, the presence of outliers (say, extreme values for the
estimated consequences of an attack) might give rise to awkward solutions, with all defense
resources concentrated around a single point.
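Under this minimax criterion, comparing candidate defense configurations reduces to picking the one whose worst point is best; a sketch (configuration names and values are hypothetical):

```python
def best_configuration(ssri_by_config):
    """Choose the configuration minimizing the maximum SSRI over the
    AoI, following the minimax criterion discussed above.

    ssri_by_config maps a configuration name to the flat list of
    SSRI values it yields over the area of interest."""
    return min(ssri_by_config, key=lambda name: max(ssri_by_config[name]))
```

A configuration with a moderate but flat risk profile beats one that leaves a single high-risk hotspot, which matches the intent of avoiding solutions concentrated around one point.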
10.0 REFERENCES
[1] Martins, M., R.P. Casimiro, S. Gonçalves, J. Calado, M. Manso, J. Lopes, A. Rodrigues,
M.E. Captivo, J.C. Freitas, M.A. Abreu, G. Gonçalves, J. Sousa, M. Bezzeghoud, R.
Salgado (2010). The SAFE-PORT Project: An approach to port surveillance and
protection. In Proceedings of WSS 2010, 2nd International Conference on Waterside
Security, Marina di Carrara (Italy), Nov 3-5, 2010.
[2] Kessel, R. (2007). Protection in ports: Countering underwater intruders. Proc. of
Underwater Defence Technology (UDT) Europe 2007, Naples, June 2007. Reprint:
NURC-PR-2007-005.
[3] Wang, B. (2010). Coverage Control in Sensor Networks. Springer-Verlag, London.
[4] Wu, Q., N.S.V. Rao, X. Du, S.S. Iyengar, V.K. Vaishnavi (2007). On efficient deployment
of sensors on planar grid. Computer Communications 30, 2721-2734.
[5] Caiti, A., A. Munafò, R. Viviani (2007). Adaptive on-line planning of environmental
sampling missions with a team of cooperating autonomous underwater vehicles.
International Journal of Control, 80(7), 1151-1168.
[6] Borges de Sousa, J., J. Estrela da Silva, F.L. Pereira (2009). New problems of optimal path
coordination for multi-vehicle systems. In Proceedings of the 10th European Control
Conference, Budapest, Hungary, August 2009.
[7] Trbojevic, V.M., B.J. Carr (2000). Risk based methodology for safety improvements in
ports. Journal of Hazardous Materials, 71(1-3), 467-480.
[8] Bedford, T.M., R.M. Cooke (2001). Probabilistic Risk Analysis: Foundations and Method.
Cambridge University Press, Cambridge, UK.
[9] Ulusçu, Ö.S., B. Özbaş, T. Altıok, İ. Or (2009). Risk analysis of the vessel traffic in the
Strait of Istanbul. Risk Analysis, 29(10), 1454-1472.
[10] Greenberg, M.R. (2011). Risk analysis and port security: some contextual observations
and considerations. Annals of Operations Research, 187(1), 121-136.
[11] Linacre, N.A., M.J. Cohen, B. Koo, R. Birner (2010). The use of Threat, Vulnerability,
and Consequence (TVC) Analysis for decision making on the deployment of limited
security resources. In Wiley Handbook of Science and Technology for Homeland Security,
J.G. Voeller (ed.), John Wiley & Sons, 1613-1621.
[12] Norman, T.L. (2010). Risk Analysis and Security Countermeasure Selection. CRC Press.
[13] Apostolakis, G.E. (2010). Probabilistic risk assessment (PRA). In Wiley Handbook of
Science and Technology for Homeland Security, J.G. Voeller (ed.), John Wiley & Sons,
162-185.
[14] Brown, G.G., W.M. Carlyle, J. Salmerón, K. Wood (2005). Analyzing the vulnerability of
critical infrastructure to attack, and planning defenses. In Tutorials in Operations
Research, INFORMS, 102-123.
[15] Rios Insua, D., J. Rios, D. Banks (2009). Adversarial risk analysis. Journal of the
American Statistical Association, 104(486), 841-854.
[16] Jakob, M., O. Vaněk, Š. Urban, P. Benda, M. Pěchouček (2010). AgentC: Agent-
based testbed for adversarial modeling and reasoning in the maritime domain. In Proc. of
the 9th Int. Conf. on Autonomous Agents and Multiagent Systems: Volume 1
(AAMAS 2010), van der Hoek et al (eds.), International Foundation for Autonomous
Agents and Multiagent Systems, Richland, SC, 1641-1642.
[17] Zhuang, J., V.M. Bier, O. Alagoz (2010). Modeling secrecy and deception in a multiple-
period attacker-defender signaling game. European Journal of Operational Research, 203,
409-418.
[18] Cox, Jr., L.A. (2008). Some limitations of "Risk = Threat x Vulnerability x Consequence"
for risk analysis of terrorist attacks. Risk Analysis, 28(6), 1749-1761.
[19] Dillon, R., R. Liebe, T. Bestafka (2009). Risk-based decision-making for terrorism
applications. Risk Analysis, 29(3), 321-335.
[20] Cox, L.A., Jr. (2009). Improving risk-based decision-making for terrorism applications.
Risk Analysis, 29(3), 336-341.
[21] Dillon, R., R. Liebe (2009). Invited response to Cox's comment: Improving risk-based
decision-making for terrorism applications. Risk Analysis, 29(3), 342-343.
[22] Hall, Jr., J.R. (2009). The elephant in the room is called game theory. Risk Analysis, 29(8),
1061.
[23] Cox, Jr., L.A. (2009). Game theory and risk analysis. Risk Analysis, 29(8), 1062-1068.
[24] Brown, G.G., L.A. Cox, Jr. (2011). How probabilistic risk assessment can mislead
terrorism risk analysts. Risk Analysis, 31(2), 196-204.
[25] Engel, R.S. (2011). Game Theory, Probabilistic Risk, and Randomized Strategy: The
Rulebook Revisited with Emphasis on Coast Guard Mission Space. MSc thesis, Naval
Postgraduate School, Monterey, CA.
[26] Paté-Cornell, M.E., S. Guikema (2002). Probabilistic modeling of terrorist threats: A
systems analysis approach to setting priorities among countermeasures. Military
Operations Research, 7(4), 5-20.
[27] Parnell, G.S., C.M. Smith, F.I. Moxley (2010). Intelligent adversary risk analysis: A
bioterrorism risk management model. Risk Analysis, 30(1), 32-48.
[28] Merrick, J., G.S. Parnell (2011). A comparative analysis of PRA and intelligent adversary
methods for counterterrorism risk management. Risk Analysis, 31(9), 1488-1510.
[29] UK Ministry of Defence (2010). A Guide to Red Teaming. DCDC, Ministry of Defence, UK, February 2010.
[30] Horne, G.E., T.E. Meyer (2004). Data Farming: Discovering surprise. In Proceedings of
the 2004 Winter Simulation Conference, R.G. Ingalls et al (eds.), IEEE, 807-813.
[31] Chua, C.L., W.C. Sim, C.S. Choo, V. Tray (2008). Automated red teaming: An objective-
based data farming approach for red teaming. In Proceedings of the 2008 Winter
Simulation Conference, S.J. Mason et al (eds.), IEEE, 1456-1462.
[32] Willis, H.H., A.R. Morral, T.K. Kelly, J.J. Medby (2005). Estimating Terrorism Risk. MG-
388-RC, RAND Corporation, Santa Monica, CA.
[33] Willis, H.H. (2007). Guiding resource allocations based on terrorism risk. Risk Analysis,
27(3), 597-606.
[34] Willis, H.H., T. LaTourrette, T.K. Kelly, S. Hickey, S. Neill (2007). Terrorism Risk
Modeling for Intelligence Analysis and Infrastructure Protection. RAND Corporation,
Santa Monica, CA.
[35] Stewart, M.G. (2010). Risk-informed decision support for assessing the costs and benefits
of counter-terrorism protective measures for infrastructure. International Journal of
Critical Infrastructure Protection 3, 29-40.
[36] Scouras, J., G.S. Parnell, B.M. Ayyub, R.M. Liebe (2010). Risk analysis frameworks for
counterterrorism. In Wiley Handbook of Science and Technology for Homeland Security,
J.G. Voeller (ed.), John Wiley & Sons, 75-92.
[37] Brashear, J.P., J.W. Jones (2010). Risk analysis and management for critical asset
protection. In Wiley Handbook of Science and Technology for Homeland Security, J.G.
Voeller (ed.), John Wiley & Sons, 93-106.
[38] RTO Studies, Analysis and Simulation Panel (2003). Handbook on Long Term Defence
Planning. RTO-TR-069, NATO Research and Technology Organization.
[39] OCIO Military Health System (2009). Glossary of Acronyms and Terms. Department
of Defense, US.
[40] De León, J.C.V. (2006). Vulnerability: A Conceptual and Methodological Review. United
Nations University Institute for Environment and Human Security, Bonn, Germany.
[41] McCarthy, J.J., O.F. Canziani, N.A. Leary, D.J. Dokken, K.S. White (eds.) (2001).
Climate Change 2001: Impacts, Adaptation, and Vulnerability. Cambridge University
Press.
[42] Lee, S.H., K. Kim (2012). Approximating the Poisson Scan and (λ-σ) acoustic detection
model with a Random Search formula. Computers & Industrial Engineering, 62(3), 777-
783.
