
A THEORY OF DECEPTIVE CYBERSECURITY

Richard Baskerville
Georgia State University, USA
Curtin University, Australia
baskerville@acm.org

Pengcheng Wang
Georgia State University, USA
pwang12@student.gsu.edu

ABSTRACT
This paper describes a framework for evaluating the specific application of deceptive
cybersecurity devices in particular design settings. There is an innate asymmetry in the
relationship between the advantages of an attacker and the disadvantages of the defender.
The essential goal of cybersecurity is to increase the security of local information and
information systems. One way of achieving this goal can be by increasing the amount of
work required on the part of an attacker while decreasing the amount of work on the part
of the defender. New cybersecurity devices based on deceptive technologies aim to
achieve this adjustment to the asymmetry. The framework embodies a theory that
explains the principles that deceptive cybersecurity aims to achieve. Using probability of
compromise as an indicator of the amount of work required on the part of an attacker, we
evaluate the underlying mechanism of deceptive cybersecurity. The degree of security
provided by deceptive cybersecurity to a network cannot be evaluated only by
considering the cybersecurity alone. Intruder characteristics must be included in the
evaluation system, and by modeling their behavior and incentives, we can derive further
qualitative and quantitative characteristics that help to objectively evaluate the
effectiveness of deceptive cybersecurity configurations and devices.

INTRODUCTION
“All warfare is based on deception”
Sun Tzu, The Art of War, written about two thousand five hundred years ago (1983, p. 17).

Cybersecurity design is of paramount importance in safely acquiring economic benefits from Internet connections (Hunt, 1998; Soomro et al., 2016). However, in practice, applying cybersecurity is not a simple matter of opening the box, installing the device, and operating it. Cybersecurity designs at the application level need to be highly unique because
of the widely varying security requirements for different organizational settings and
applications (Oppliger, 1997; Pfleeger & Cunningham, 2010). The population and
complexity of these installations are rising dramatically because of the increasing use of
bigger and bigger “fat pipes” to the Internet (Ohlhausen, 2000) and the increasing
sophistication of the intruder population (Friedberg et al., 2015). Consequently,
cybersecurity elements represent an increasing share of the cost of electronic commerce
(Yasin, 2000).

Honeypots and honeynets have been available in cybersecurity applications for many
years (e.g., Sherif & Ayers, 2003; Weiler, 2002). These are false system operating
environments that simulate attractive hacking targets. Traditionally, these devices detect

Proceedings of 2018 IFIP 8.11/11.13 Dewald Roode Information Security Research Workshop 1
intruder activity and alert cybersecurity operations of the threat before any genuine
resources are compromised. While the past effectiveness of these devices has been
questioned (Krawetz, 2004; McCarty, 2003), they are currently resurging as central
components in a new approach to cybersecurity known as deceptive security or reverse
deception (Bodmer et al., 2012). Is deceptive technology a paradigm shift in cybersecurity, or is it merely old wine in new bottles?

In this paper, we explore the possibility that deceptive cybersecurity responds to a novel
theoretical paradigm even if its origins spring from early work in honeypots. The core
aspect of this theoretical contribution arises because cybersecurity engagement has
previously been asymmetric. Deceptive cybersecurity holds promise for reversing this
asymmetry. The asymmetry exists because, on the system complexity side, the defender
must discover all system vulnerabilities in order to correct them, while the attackers need
to discover only a single vulnerability in order to breach the system. While this
asymmetry is immediately obvious in practice, it has received little attention in the
research literature. For example, the asymmetry is notable when considering network
security through the lens of game theory (Li et al., 2011).

On the task complexity side, the defender needs to continuously ensure the information systems’ confidentiality, integrity, and availability (C.I.A.) to help the organization achieve and maintain information superiority (Stair & Reynolds, 2013). In contrast, the motivations of intruders have become increasingly unpredictable, ranging from individual financial gain to state- or nation-sponsored cybercrime. An intruder need compromise only a single dimension of C.I.A., and the consequent damage can be tremendous and irreversible. Lastly, the asymmetry also exists in cybersecurity tool availability. Evaluating information security is hard for multiple reasons, including complex and dynamic security requirements, changing environments and contexts, interdependent information systems, etc. (Pfleeger & Cunningham, 2010). Most cybersecurity effectiveness measurements rely on penetration testing of cybersecurity products rather than formal modeling for cybersecurity application design and configuration (Harris & Hunt, 1999; Tang, 2014). On the intruder side, effectiveness can be measured simply by whether the intruder breaches the network, and there is no restriction on the hacking methods. Therefore, achieving cybersecurity success is a continuous process, while achieving success in breaching the network is a binary event. Asymmetric cybersecurity engagement means that the question is not whether the system can be compromised, but when and how.

Therefore, our research question is: how can cybersecurity asymmetry be reversed?

If we cannot control whether a breach will happen, it is wise to shift the focus to controlling how and when the breach will happen. Deception-based cybersecurity technology may help us reverse the security engagement asymmetry.

The proposed theory of deceptive cybersecurity shows that cybersecurity designs based on principles of skilled programming alone cannot provide adequate security against a large population of potential intruders. Instead, the cybersecurity design must also be based on the amount of time and effort required by a skilled intruder to effect an intrusion and by the system defender to respond to the incident. A good cybersecurity design reverses the asymmetry: the intruders need to spend more time and effort on intrusion, while system defenders have better control over when and how the system can be “hacked.”

BACKGROUND LITERATURE

In the paradigm of information warfare, there are two basic activities: defensive
information operations (activities taken to protect information or information systems)
and offensive information operations (activities taken to affect an adversary’s information
or information systems) (Denning, 1999). Under the laws of warfare, an attack may be a
legitimate operation. However, in a setting governed by criminal and civil law (such as a
commercial operation), even a counterattack is typically regarded as an illegal assault.
Typically, preventative, defensive operations are the only allowable technical operations,
and successful criminal attacks are only assuaged by legal remedies (such as damages
awarded in lawsuits and criminal prosecutions).

However, this situation may change when the information operations are entirely underway within the information systems owned by the defender. In other words, when the attacker is projecting offensive information operations within the systems of the defender, those particular operations become a legitimate target for both defensive and offensive information operations. While we can employ only defensive information operations against an intruder who may be, or is, attempting to exploit a vulnerability, we can bring both defensive and offensive information operations (such as machine-learning-based honeypots and malicious decoy data) to bear on any attacker’s operations within our own systems after a vulnerability has been exploited. In other words, once our system is breached, we can attack elements operating within our own system. We have many more options when we are on our own home ground.

Proposition 1: Within the protected system, the diversity of available defenses increases
because offensive information operations become accessible to defenders.

Sun Tzu: “Knowing the place and the time of the coming battle, we may concentrate from
the greatest distances in order to fight.” (1983, p. 39)

CYBERSECURITY
Cybersecurity is defined by the Merriam-Webster dictionary as "measures taken to protect a computer or computer system (as on the Internet) against unauthorized access or attack" ("Cybersecurity," 2018). The term cybersecurity also implies a number of policies, system arrangements, technical controls and procedures that secure a
subnetwork (or subnet) from its external network environment (Wack & Carnahan,
1994). Network cybersecurity devices are typically some combination of firewalls,
screening routers, application gateways, intrusion detection and access control.

The need for cybersecurity arises from the increasing requirement for organizations to interconnect their private computer networks with more-or-less public computer networks.
These networks may include special EDI (electronic data interchange) networks or the
Internet. An essential advantage of such interconnections is the improved access to data
services. This includes the organization's access to outside data services and the public
access to the organization's data services. Access is a key motivator for internetworking,
and access is improved by openness and minimized restrictions. Consequently,
internetwork access channels are easily abused to gain illegitimate access to computing
resources.

Cybersecurity devices often present a conflict between open internetwork access and security. Essentially, cybersecurity devices shut off certain forms of access to certain segments of organizational computing machinery. The more secure a cybersecurity design becomes, the more constrained the access paths between organizational computers and internetworked computers become. Cybersecurity design and operation become a careful balancing act between access and safety.

Many organizations maintain fairly insecure internal computer networks. This situation arises from the need to make networks highly flexible and easy to use. Without care, configuring for ease of use may sacrifice the effectiveness of the security profile. In addition, end-user computing has resulted in many desktop network nodes that have virtually no security installed. To a large degree, cybersecurity devices create a two-layer cybersecurity requirements structure: relatively low security requirements for access from inside the organizational network, and relatively high security requirements for access from outside the organizational network.

CYBERSECURITY NETWORK ELEMENTS


Network security devices usually are built around specially programmed routers and gateways that limit illegitimate traffic between the public internet and the private organizational network. Routers normally decide whether or not to retransmit a message on another subnet based on simple entries in the routing tables. A screening router, sometimes called a packet filter, will examine messages and make retransmission decisions based on other policies. Screening routers may retransmit only packets that come from recognized IP addresses; they may also examine TCP and UDP socket addresses and apply certain filtering policies. For example, TCP port 23 datagrams may not be retransmitted, which effectively screens out access to normal telnet services through the router. In addition, intrusion detection systems provide logging and monitoring to detect and identify unusual behavior on the network that could indicate an intruder.

Application gateways are security devices, also known as proxy servers or gatekeepers, which operate at a higher protocol level than routers, interpreting application protocols rather than simply rejecting whole types of traffic. For example, a telnet session originating from a host within the organization may be interpreted by the gateway and forwarded with the gateway's own socket information. The host-to-host sockets are established between the foreign host and the gateway. No host-to-host socket is established between a foreign black-network host and an internal red-network host. The application gateway can apply more sophisticated policies about the kind and content of network applications that cross the security perimeter. Application gateways are called proxy servers because they establish a server relationship with any client wishing to communicate across the perimeter, and a corresponding client relationship with the intended server (if the security policy permits). Thus the gateway acts as a proxy for application servers.
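
The following is a sketch of this proxying relationship (ours, simplified to a single connection; a production gateway would also parse the application protocol and apply policy before forwarding):

```python
# Minimal TCP relay sketch of an application gateway (proxy). The external
# client's socket terminates at the gateway; the gateway opens its own socket
# to the internal server, so no host-to-host connection ever spans the black
# and red networks. Addresses and ports are illustrative.
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source closes."""
    while data := src.recv(4096):
        dst.sendall(data)

def gateway(listen_port: int, internal_host: str, internal_port: int) -> None:
    listener = socket.socket()
    listener.bind(("0.0.0.0", listen_port))
    listener.listen()
    client, _ = listener.accept()          # black-network client <-> gateway
    server = socket.create_connection(
        (internal_host, internal_port))    # gateway <-> red-network server
    threading.Thread(target=pipe, args=(client, server), daemon=True).start()
    pipe(server, client)                   # relay replies back to the client

# gateway(8080, "10.0.0.5", 80)            # hypothetical internal web server
```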

Special Network Security Hosts

In addition to the network devices, there are a variety of network hosts that can perform certain network security functions. These are not exclusive categories, because one node may serve several functions in several of these categories. In particular, application gateways may be combined in creative ways with the functions of these security hosts. Examples of security hosts include honeypots and honeynets.

Honeypots are security hosts that appear to be poorly protected and are designed to be as easily accessible as possible. They are often carefully backed up, with the assumption that they will be regularly attacked and compromised. When the compromise of the honeypot host is discovered, it is simply taken offline and restored to its original state from the backup. Typically, honeypot hosts operate low-priority public servers, such as world wide web servers that need to be openly accessible and are not critical to the organization. They provide, intentionally, excellent hacker targets.

Honeynets are simulated networks or subnetworks composed entirely of honeypots. Honeynets occupy an intruder’s time and resources by providing an innocent-looking network search space and very many hosts to plunder.

ASYMMETRIC NETWORK SECURITY ENGAGEMENT


In this study, we argue that risk and effort are the two fundamental factors that cause cybersecurity engagement asymmetry. For this paper, we define risk as the possibility that the desired outcome cannot be achieved. Effort refers to the resources (including time and labor) that are needed to achieve the desired outcome.

The life cycle of a cybersecurity incident can be represented by two stages: the prevention stage (before the incident happens) and the response stage (after the incident happens) (Baskerville, Spagnoletti, et al., 2014). In the prevention stage, the system’s defenders need to continuously expend time and effort on protecting the system’s confidentiality, integrity and availability. Even though the system defenders may have spent a great deal of effort in protecting the system, the asymmetry still exists because the intruders can spend much less effort in discovering and compromising any weak point remaining: any of the myriad weaknesses the defenders may easily have overlooked. Attackers need compromise only one weak point in the system. They do not have to compromise all of the entry points. In the response stage, the situation can be even worse once the intruder gains illegitimate access to the system. The system defender must detect every single action the intruder makes while inside the network, while the intruder can bury their malicious actions among myriad other legitimate system activities.

The system defenders suffer higher risk in both stages. The risk is manageable only when it can be identified. The nature of the internet allows intruders to attack the system without spatial or temporal limitations. With advanced persistent threats (APTs), the defender may have no advance knowledge of how and when the incident will happen or who will be responsible for the attacks. In contrast, the intruders may have a virtually unlimited range of tools and methods with which to start their attack, while the system defender has only a few tools to evaluate the state of their system security. For example, most cybersecurity effectiveness measurements rely on penetration testing of cybersecurity products (rather than formal modeling) for cybersecurity application design and configuration (Harris & Hunt, 1999; Tang, 2014). Such formal modeling is needed if defenders hope to discover all system vulnerabilities.

Further, intruders face only the risk of being detected before they can collect sufficient data and erase their traces, while defenders must stop the intruders before they collect any data. If defenders fail, the consequences can be very severe. For example, the new European General Data Protection Regulation (GDPR) places organizations in a situation where any single data breach can cause massive losses scaled by global revenue.

Proposition 2: Cybersecurity asymmetry is created because defenders must protect numerous unknown points of weakness (software vulnerabilities), while attackers need only discover and exploit one of these.

Sun Tzu: “The general is skillful in attack whose opponent does not know what to
defend…” (1983, p. 37)

As mentioned earlier, Proposition 2 is well known in practice, but poorly explored by researchers. Better theoretical work is needed in this area.

DECEPTIVE NETWORK SECURITY

Network security devices usually divide the organization’s network into one or more
security zones relative to the external internetwork. The two basic security zones are
called "black networks" and "red networks."

The black network zone is considered to carry uncontrolled traffic. That is, this network area is unprotected by any network security devices. The messages on this network may be sent from or to any of the nodes on the public internet. Consequently, the messages are presumed to carry both legitimate and illegitimate communication. White symbols represent black network devices in Figure 1.

The red network zone is also called the "blue" network. This network is protected from illegitimate message traffic by the network security arrangement. Since illegitimate message traffic has been removed, this network zone is considered trustworthy. This trustworthiness usually extends to the veracity of the source, destination and general type of each message on the network. However, hosts on this network still need normal application-level protection, such as passwords.

Figure 1. Black-Red Network Security Design.

DECEPTION TECHNOLOGY - DECEPTIVE HONEYNETS AND HONEYPOTS

Deceptive honeynets and honeypots are generated by the deception technology when the system detects suspicious behavior. The system proactively generates the deceptive honeypots to attract network abuse. Honeypots have enticing node names (an example might be strategy.navy.mil) and may be designed with a few seemingly vulnerable accounts (an example might be an account named "strategy" with a password "strategy"). The purposes of the deceptive honeynets and honeypots are to attract intruders and to encourage them to keep searching for "valuable" data and additional apparent "vulnerabilities".

Honeynets and honeypots have been available for years. However, the deception used in these devices has typically been easily detected and their purpose thereby defeated. For example, the Sebek data capture tool, developed by the Honeynet Project, had early setbacks (McCarty, 2003). Since this early period, however, the issues that defeated the early, cruder deception have been blunted. Commercial products have entered the market that do not openly share their technology.1 The introduction of artificial intelligence into these products has scaled up the “arms race” between attackers and defenders (McCarty, 2003, p. 79). We distinguish the earlier, less sophisticated forms of honeynets and honeypots from the more recent intelligent versions, which we call deceptive honeynets and honeypots. These latter products more completely fulfill the propositions of deceptive cybersecurity theory.

1 Examples include Attivo Networks, Illusive Networks and TrapX.

The basic operation of deceptive honeynets and honeypots is described next.

Few security devices have as their main purpose attracting intruders and occupying them with relatively harmless activities. Honeypots usually have some functions, such as a web server, but are so isolated and of such low value that no protection, other than a data backup system for the purposes of recovery after compromise, is provided. As described earlier, deceptive honeynets distract, but also have the purpose of carefully logging every detail about their sessions for purposes of warning, programming other network security elements, or apprehending and prosecuting the intruder.

However, high-quality deception can provide an excellent sink for the intruder’s time, and thus reduce the intruder’s time ready to spend (TRS) on more valuable targets. Time is a very important element in our framework. The more time intruders spend trying to hack and trying to gather valuable assets, the less willing they are to continue performing hacking activities, given that the final hacking outcome is unknown. Under this assumption, intruders are prepared to spend only a certain limited amount of time, because they have other offline jobs, hacking activities, social activities, etc. as “distractions.” Let us make two further assumptions (expressed as a short decision sketch after the list):

1. Before starting, the intruder has an idea as to how much time the “job” (the intrusion) will take. He or she would not start the "job" if the estimated time is too forbidding.

2. If, in the process of doing the "job," an intruder discovers that it will take significantly longer than expected, then he or she will drop the job. For example, if an intruder spends practically all of her or his available time breaking through one security perimeter, and then discovers that there is a second one behind it, then she or he will break off the attempt and move on to other, perhaps easier, victims.
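
These two assumptions can be expressed as a small decision sketch (ours; the tolerance factor and the example numbers are illustrative assumptions, not parameters from the theory):

```python
# Sketch of the two intruder assumptions: (1) a job is never started if the
# estimated time exceeds the intruder's time ready to spend (TRS); (2) a job
# is dropped if it turns out to take significantly longer than estimated,
# e.g. when a second, unexpected security layer is discovered.
def pursues_job(trs_hours: float, estimated_hours: float,
                revealed_hours: float, tolerance: float = 2.0) -> bool:
    if estimated_hours > trs_hours:
        return False                      # assumption 1: too forbidding to start
    if revealed_hours > tolerance * estimated_hours:
        return False                      # assumption 2: drop and seek easier victims
    return revealed_hours <= trs_hours    # otherwise keep going within the TRS budget

print(pursues_job(trs_hours=10, estimated_hours=4, revealed_hours=5))   # True
print(pursues_job(trs_hours=10, estimated_hours=4, revealed_hours=12))  # False: second perimeter found
```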

If the valuable network resources, security devices included, exist as a small portion of a large population of worthless network resources, then intruders face a large time sink in separating the valuable network resources from the worthless ones. In this way, distracter devices operate like other security devices that require significant break-in time.

DECEPTION TECHNOLOGY - UNDERLYING MECHANISM

Deception in a communications setting can be defined as “a message knowingly transmitted by a sender to foster a false belief or conclusion by the receiver. More specifically, deception occurs when communicators control the information contained in their messages to convey a meaning that departs from the truth as they know it” (Buller & Burgoon, 1996, p. 205). Deceptive security is used to reverse the asymmetric engagement in cybersecurity. The engagement is usually asymmetric because the defender must discover all vulnerabilities, while the attackers need discover only one. Deception can belabor attackers with the task of searching and examining all network resources in a network, while the defender can generate a virtually infinite number of these.

We will use a simple honeypot/honeynet example to illustrate the usefulness of this framework. We will focus on developing deceptive security devices that are deceptive for the purpose of significantly decreasing the TRS for the intruder population. We will use the “herd survival law” to illustrate this mechanism:

The law of survival in the herd: It is not necessary to outrun the wolf. It is only necessary to outrun at least one of the other sheep.

Consider the modification of the Figure 1 black-red network illustrated in Figure 2. A dotted circle delineates the original red network. The large population of worthless, but equally interesting-looking, sub-networks and nodes represents a herd of distracters invoking the law of survival in a herd. If many other nodes seem “nearer and tastier,” then the real network will attract far fewer “wolves.”

Figure 2. Green Hole as a simulated network.

It is not necessary to actually construct such a large network with thousands of nodes. It is only necessary to realistically simulate such a network. In other words, the real network might consist of the configuration illustrated in Figure 3: a single router and network node programmed in such a way that it can simulate thousands of other routers and network nodes, both in terms of host activity and simulated network traffic. Artificial intelligence has reduced the cost and improved the effectiveness of such simulation technologies (McCarty, 2003). In effect, this technology becomes a “green hole” in which virtually all intruders would disappear, hacking away at simulated sub-networks, security devices, and host nodes that do not exist in reality. The green hole essentially simulates a herd of slow sheep into which the real network resources are projected as fast sheep. The green hole qualifies as a device on the network that is designed to make the break-in procedure simple for the intruder, but an improbable success. The green hole envelopes the real network resources within a large herd of false network resources. On the one hand, the effect of this simulated herd is a dramatic reduction in the probability of compromising the real network resources. On the other hand, the intruder cannot identify legitimate data even if the system is compromised.

Another important concept is that all sheep should look alike. The fast sheep can be easily identified if all the other sheep are running slowly. There are two approaches. The first is for the system to simulate a green network that looks and functions the same as the red (real) network. The second is for the system to make the red network itself look and function like the “slow sheep” (a decoy), so that the red network is hidden in the intruder’s perceived dead zone: the intruder overlooks the red network, undetectable within an ocean of “too good to be true” fake data.

Figure 3. Green hole as a physical network.

Prior to the use of deception (such as the green hole), the defender had many unknown vulnerabilities to protect, while the attacker needed only to discover one to attack. Once deception is present, the attacker has many targets of unknown quality, while the defender has only one difficult-to-discover asset to protect.

PROBABILITY ANALYSIS OF THE THEORY OF DECEPTIVE NETWORK SECURITY

The essential goal of network security is to increase the security of a local network. This goal can traditionally be expressed as a criterion in terms of the probability of compromising the red network. Given infinite time, one should assume that, sooner or later, the red network will be compromised, and thus the probability will always equal one. Therefore, the criterion is meaningless under the assumption of infinite time. For a meaningful criterion, some time limit must be defined. This time limit represents a period during which the criterion will hold and permits the possibility of a criterion value less than one. This time limit is defined as the interval between initializing the network security and the expiration of the criterion. In this paper, we call this limit a protection interval (PI).

We can formulate the criterion for security measures in two different ways:

1. to minimize the probability of compromise (we shall call it PCmin) of the red network during the PI;
2. to maximize the PI (we shall call it PImax) while accepting an upper limit on the PC.

The choice between these two is a matter of planning horizons. If the owner of the red network has a policy of periodic reviews of the security procedures (e.g. once a year), then criterion formulation 1 is appropriate. If the owner is more concerned with long-term solutions, then criterion formulation 2 reflects this situation better.

If we consider the number of break-ins during the PI as a random variable, then we have here a classic example of a Poisson process (DeGroot, 1986); for simplicity, we assume the numbers of break-ins are independent. If we subdivide the PI into small intervals of time, then the three conditions of a Poisson process are satisfied as follows:

1. the numbers of break-ins in any two disjoint intervals of time are independent of each other;
2. the probability of a break-in during any particular short interval of time is approximately proportional to the length of that interval;
3. the probability that there will be two or more break-ins in any particular short interval has a smaller order of magnitude than the probability that there will be just one occurrence (as these probabilities depend on the length of the short interval, we can always choose the length to satisfy this condition).

If the events being considered occur in accordance with a Poisson process, then both the
waiting time until the event occurs and the period of time between two successive events
will have exponential distributions (DeGroot, 1986). In other words, the time before the
first break-in and between break-ins has an exponential distribution. This exponential
distribution follows directly from the fact that the number of break-ins during PI has a
Poisson distribution.

The formulae that represent these assumptions, useful for our purposes, are:

f(t) = b·e^(−b·t)
Pr(X ≤ t) = 1 − e^(−b·t)
E(X) = 1/b
Var(X) = 1/b²

WHERE:
f(t) – the density of the exponential distribution
X – the variable, e.g. the time before the first break-in happens
t – the criterion value in protection interval time, any positive number
b – the parameter of the Poisson distribution
E(X) – expected value of X
Var(X) – variance of X

In a practical sense, the use of this distribution informs you that if your red network in recent years was compromised on average six times a year and you did not improve anything in your information security, then the next break-in will happen by February with probability .63, by April with probability .86, and almost certainly (Pr(X ≤ t) = .95) by June.
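
As a check, these figures follow directly from the exponential CDF (a short sketch; b = 6 per year, time measured in years from the start of January):

```python
# Reproduce the break-in probabilities: Pr(X <= t) = 1 - exp(-b*t), b = 6/year.
from math import exp

b = 6.0                                          # six compromises per year
for months, label in [(2, "February"), (4, "April"), (6, "June")]:
    t = months / 12                              # elapsed time in years
    print(f"by end of {label}: {1 - exp(-b * t):.2f}")
# by end of February: 0.63, by end of April: 0.86, by end of June: 0.95
```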

How would we apply this criterion to our network security problem? The use of the exponential distribution makes it possible to evaluate an intrusion by a population of multiple hackers (DeGroot, 1986, p. 291). Let Xi be the elapsed time before the first break-in by hacker i. Suppose that the variables X1, … , XN form a random sample from an exponential distribution with parameter b. Then the distribution of Y = min [X1, … , XN] will be an exponential distribution with parameter Nb. This means that, in the case of multiple hackers, the criterion is also exponential, but the parameter of the exponential is significantly greater.
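
A short computation of this effect, reproducing Table 1 below (a sketch; b = 0.5 per month for a single intruder, so ten independent intruders give the parameter Nb = 5):

```python
# Table 1 values: Pr(compromise within n months) = 1 - exp(-N*b*n).
from math import exp

b = 0.5                                     # single-intruder rate per month
for n in range(1, 9):
    p1 = 1 - exp(-b * n)                    # one intruder
    p10 = 1 - exp(-10 * b * n)              # ten intruders: exponential with Nb
    print(f"{n} month(s): {p1:.4f}  {p10:.4f}")
```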

For example, let us assume that an intruder is attracted to a particular red network and compromises it regularly six times per year. Then the average period of uncompromised network operation will be 1/6 year, or 2 months, and the monthly rate parameter b will equal 0.5. However, we must acknowledge that there is still a probability of 0.14 that the uncompromised operation will extend to 4 months. Alternatively, with multiple intruders the parameter becomes Nb, which sharply shortens the expected period of uncompromised operation. In other words, if 10 intruders become attracted to our red network, it will almost certainly be compromised in the first month (see Table 1).

Table 1: Comparative probabilities of compromising the red network by only one intruder versus by 10 independent intruders (b = .5).

N of months | Probability, 1 intruder | Probability, 10 intruders
1 | 0.3935 | 0.9933
2 | 0.6321 | 1.0000
3 | 0.7769 | 1.0000
4 | 0.8647 | 1.0000
5 | 0.9179 | 1.0000
6 | 0.9502 | 1.0000
7 | 0.9698 | 1.0000
8 | 0.9817 | 1.0000

We can see how the criterion introduced in this section links the security goals of the network with the number of intruders who are willing and able to compromise it. Both criteria (the probability of compromise and the number of intruders) can be used. In the following sections, we shall see how network security devices decrease the probability of compromise (or increase the required number of intruders). In a practical sense, these criteria and their distribution are of interest because they allow relative evaluations. For example, these criteria reveal the degree of safety in a red network with a security device, compared to the degree of safety without the security device. Since it is unlikely that absolute estimates can be derived, these relative estimates become the best means of network security evaluation.

MODELING INTERACTION BETWEEN NETWORK SECURITY AND INTRUDERS

The degree of security provided by the network security to a red network cannot be
evaluated only by considering the network security alone. Such a monolithic view of
network security assumes that the possible intruder is a "red box", fixed in capability and
dedication. Every intruder is assumed to be willing and able to penetrate any security
obstacle, raising the stakes in design and evaluation of any security measures. Under this
assumption, the addition of any security measure is very much like giving another
charade to a child who likes to crack charades. We suggest that, by including intruder
characteristics in the evaluation system and by modeling their behavior and incentives,
we can derive further qualitative and quantitative characteristics that may help to
objectively evaluate network security configurations and devices.

Assumption: An intruder is ready to spend on a task only a limited amount of time.

Within a large population of these intruders, each will have a certain amount of “time ready to spend” (TRS) on an intrusion job. TRS represents the commitment of the attackers: their persistence and their impatience. For each there will exist a TRS ratio. The TRS ratio is the ratio of the time that each intruder is willing to dedicate to the intrusion to the total amount of time available for all of their activities (e.g., schoolwork, employment, family, other hacking activities, etc.). The TRS ratio in the population of intruders will vary widely, but the value can reasonably be expected to be < 1 in almost all cases. We may safely assume that TRS will be well described by an exponential distribution. This assumption agrees with the logic of an exponential distribution, where there are many intruders who are willing to undertake a short job, but only a few willing to undertake a long one. Typical examples described by the exponential distribution show this analogy: the time to service a customer (the time a customer spends with the server once service has started), the serviceable lifespan of technical devices, etc.

Although it is not simple to define the parameters of such a distribution, some properties
of the distribution may be very useful in making a network security design decision. For
example, if we double the time necessary to break in, then the ratio of the population of
intruders who are ready to spend this amount of time to the total population of intruders
becomes equal to the square of the initial ratio (which is significantly less).
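
This squaring property follows from the exponential survival function: the fraction of intruders ready to spend more than time t is Pr(TRS > t) = e^(−λt), so Pr(TRS > 2t) = e^(−2λt) = (e^(−λt))². A quick numerical check (the rate is illustrative):

```python
# Doubling the required break-in time squares the fraction of intruders
# willing to spend it: Pr(TRS > 2t) = exp(-lam*2*t) = Pr(TRS > t)**2.
from math import exp

lam, t = 0.1, 10.0                         # illustrative rate and break-in time
frac_t = exp(-lam * t)                     # fraction ready to spend t:  ~0.368
frac_2t = exp(-lam * 2 * t)                # fraction ready to spend 2t: ~0.135
assert abs(frac_2t - frac_t ** 2) < 1e-12  # the squaring property
print(f"{frac_t:.3f} -> {frac_2t:.3f}")
```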

Let us assume that the real network resources consist of twelve network devices (routers and host nodes). Let us further assume that the green hole effectively simulates additional devices, for a total of twelve hundred apparent devices, and projects this simulated forest around the real resources. The effect of this hundred-fold increase in perceived network resources is a simple hundred-fold decrease in the probability of compromising real resources. This significantly reduces the probability of break-in (criterion PCmin). The PImax criterion shows similar results. For example, in Table 4, we illustrate the effect of these false nodes in a network protected by a security device. The probability of an intrusion with an excellent design, for example, can be reduced from 0.022542 to 0.000225.

Firewall design | Probability of break-in (without deceptive network) | Probability of break-in (with deceptive network)
Average | 0.393469 | 0.003935
Good | 0.146748 | 0.001467
Excellent | 0.022542 | 0.000225
Outstanding | 0.001299 | 0.000013

Table 4. Continuation of the example in Table 3: probability of break-in of a real node within a hundred-fold population of other (false) network nodes.
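
The right-hand column of Table 4 is simply the left-hand probability diluted by the ratio of real to apparent nodes (a sketch under the assumption, implicit above, that intruders choose targets uniformly among real and simulated nodes):

```python
# Hundred-fold dilution: 12 real devices hidden among 1,200 apparent devices.
n_real, n_apparent = 12, 1200

for design, p_break in [("Average", 0.393469), ("Good", 0.146748),
                        ("Excellent", 0.022542), ("Outstanding", 0.001299)]:
    p_real = p_break * n_real / n_apparent  # chance the break-in hits a real node
    print(f"{design}: {p_real:.6f}")
# Average: 0.003935, Good: 0.001467, Excellent: 0.000225, Outstanding: 0.000013
```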

But current deceptive security appliances are usually more sophisticated than simply wrapping the red network within a virtualized green network. Nodes in the green hole can use artificial intelligence to evaluate an intruder in terms of their skills, techniques, and TRS. Nodes in the green network can then be regenerated to appeal to the intruder, putting a further drain on the TRS (Labs, 2017).

In this example, we can see the usefulness of the probability-theory framework for
evaluating deceptive network security in a simple honeypot/honeynet setting. It
illustrates that network security designs based on principles of skilled programming alone
cannot provide adequate security for a large population of potential intruders. New
devices and good network security designs can be shown within the framework to
improve network security dramatically.

DISCUSSION

We have illustrated deception-based cybersecurity with a simple honeypot/honeynet example. Deception-based cybersecurity promises to be considerably more expansive. Honeypots and honeynets deliver attractive targets to intruders. These targets may exhibit known flaws such as unpatched software. These targets may also be used to reveal zero-day attacks: vulnerabilities heretofore unknown to security experts.

The attraction of these targets necessarily involves deceptive content as well as deceptive
devices. The honeynets and honeypots must seem completely real, with network traffic
and host processes that are as convincing as the operational systems. Deceptive content
is important. For example, well-constructed, convincing, fictional customer data, credit
card data, etc. might flood the dark web market with worthless data. Intellectual property
can seem convincing, but is worthless when usage is actually attempted. When deceptive
cybersecurity delivers convincing, but worthless content, intruders will invest more
resources in distributing the booty. Such results can discredit the intruders. Such content
makes intrusion very risky because the intruders may get useless deceptive data while
bearing the same, or even higher, opportunity risk.

Compared with prevention-focused information security technologies, deception-based security systems assume that intruders are already present inside the organizational network; the focus thus shifts from blocking intruders outside the organizational network to controlling intruders inside the network.

Figure 4: Security engagement asymmetry illustration (risk versus time/effort around the incident, shown for the intruder and for the defender).
The security engagement asymmetry is illustrated in Figure 4. The incident is defined as the event in which intruders successfully enter the network. Before the incident happens, the network defenders suffer more risk and have to spend more effort to keep the intruder out, because they have to discover and remove all possible system vulnerabilities. However, the asymmetry can be reversed when the network defenders can engage the intruders in an environment, the inside of the network, with which they are more familiar and over which they have more control. In contrast, the intruder comes under much more pressure and faces more risk. For instance, there is no doubt that the hacking activities will be discovered sooner or later, so intruders have to collect all possible data and finish the task as soon as possible.

This effect leads us to our final three propositions:

Proposition 3: An attack in the presence of defense deception is more likely to fail because the defender’s knowledge of the attacker’s operations increases.

Sun Tzu: “By discovering the enemy's dispositions and remaining invisible ourselves, we
can keep our forces concentrated, while the enemy's must be divided.” (1983, p. 38)

Proposition 4: An attack in the presence of defense deception is more likely to fail because the attacker’s knowledge of the defender’s operations decreases.

Sun Tzu: “A general is skillful … in defense whose opponent does not know what to
attack.” (1983, p. 37)

Proposition 5: The asymmetry between cybersecurity offense and defense is reversed in
the presence of defense deception because the burden of knowledge acquisition shifts
from the defender to the attacker.

Sun Tzu: “By discovering the enemy's dispositions and remaining invisible ourselves, we
can keep our forces concentrated, while the enemy's must be divided.” (1983, p. 38)

Figure 4 represents Propositions 3 (right side) and 4 (left side). These two propositions
represent the asymmetry inversion described in Proposition 5.

Deception-based information technology can reverse the asymmetry for the following reasons. First, it proactively distinguishes legitimate users from intruders. Intruders find it hard to hide inside the network, because they have different motivations and behave differently compared with legitimate users. Second, the system defenders have more resources and are more familiar with the network. The details of the network remain unknown to the intruders. The system defenders can utilize all possible resources they have (i.e., computing power, authority rights, human power, etc.) to “attack” and control the intruders, while the intruders have to spend extra effort to remain undiscovered and to gain additional system rights, so the resource distribution can be reversed. Lastly, deception-based information technology can drive the intruders to spend more effort inside the decoy network while offering the network defenders more response time.

Deceptive strategies are tightly related to the creation of misplaced trust in the system. By creating deception, the defenders mislead the attackers into a belief that fictive systems are real. Attackers trust network nodes and systems that are, in fact, untrustworthy. When attackers detect the deception, they can reverse the mistrust effects. Actions intended to mislead the defenders can provide false indicators of attack vectors. Such misleading information can increase defender workload and exacerbate the asymmetry. Trust, or rather mistrust, is wielded as a double-edged sword that can deflect to injure the defenders.

Deception-based cybersecurity has other issues. Unsophisticated deceptions are easily detected and, once revealed, will quickly become widely known (Krawetz, 2004; McCarty, 2003). Artificial intelligence is necessary to create usable deceptions that are both difficult to detect and inexpensive to generate. There are also potential legal traps. We assume that monitoring intruder activities does not violate wiretapping laws (Bringer et al., 2012). If deceptive content is intentionally flawed, and subsequently relied on in usage, can the organization that created this content ultimately be held accountable?

Deception itself assumes at least a small degree of underlying foreknowledge of attack. Setting up deceptive technologies can be difficult in practice because APT attacks are unpredictable in detail; the exact motivation of the attack and the exact attack targets are unknown. Consequently, deceptive technologies can never be omnipotent. Further, deceptive technologies assume that systems can distinguish between the activities of legitimate users and the activities of intruders. There are issues of false positives in detecting, and then misleading, intruders. If legitimate users are incorrectly identified as intruders, and then misled into deceptive systems, the legitimate users can produce or harvest misleading results. The result could be severe damage to the organizational missions.

Cybersecurity based on deception strategies has several benefits. First, it can reduce the frequency of false security alerts. Once suspected users “touch” the honeypots, which contain “sensitive” and “valuable” information, they can be accurately identified as intruders. Second, it can dramatically reduce the detection time and offer the network defenders more response time. When intruders are attracted by the honeypots, an alarm is generated to warn the network managers that network abuse is occurring, so security on the remainder of the network can be tightened. Third, cybersecurity based on deception strategies takes advantage of intruder cognitive weaknesses: when intruders compromise one node of the network, they are likely to perform reconnaissance to search for high-value assets and for more network vulnerabilities. Similar phenomena are well explained by prospect theory, which holds that people tend to make decisions based on potential gains (i.e., the data inside the network appear valuable) or losses (i.e., the hacking activities will be detected sooner or later) rather than the final outcome (Kahneman & Tversky, 1979). Also, individuals tend to continue committing to a failing course of action, such as continuing to search for perceived valuable information, because of previous experience, invested effort, and potential payoff (Keil, 1995). Lastly, once the intruders are attracted by the trap, their hacking activities can be heavily monitored and controlled in the simulated environment. The main purpose of such hosts is to attract intruders and occupy them with relatively harmless activities while carefully logging every detail about their sessions. Login, activity, packet, message and session logs from the honeypots and honeynets inform policies on screening routers and application gateways. That is, IP packets and TCP sockets matching those noted on the honeynets can be specially handled. For example, the proxy server can simply route every rogue client message to the honeynets, making it appear that the rogue is hacking many different machines and harvesting valuable assets when, in fact, the abuse has never left the deception technologies, and the system damage is well controlled. In addition, the forensic information may lead to the identification and prosecution of network abusers.
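
A minimal sketch of this feedback loop (ours; the host names and addresses are hypothetical): addresses logged on a honeypot are thereafter confined to the honeynet by the proxy.

```python
# Honeynet logs feed the routing policy: once a source address is observed
# abusing a honeypot, all of its later traffic is routed back into the
# honeynet, so the abuse never reaches real machines.
honeynet_offenders: set[str] = set()

def log_honeypot_session(src_ip: str) -> None:
    honeynet_offenders.add(src_ip)        # alarm and policy update in one step

def route(src_ip: str, destination: str) -> str:
    if src_ip in honeynet_offenders:
        return "honeynet"                 # rogue traffic stays in the deception
    return destination                    # legitimate traffic proceeds normally

log_honeypot_session("203.0.113.7")
print(route("203.0.113.7", "crm-server"))   # -> honeynet
print(route("198.51.100.2", "crm-server"))  # -> crm-server
```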

Future research is needed to develop alternatives to deception as a means to offset the asymmetry between attacker and defender. In terms of redressing this imbalance, we currently have only one tool in the toolbox. We believe approaching the asymmetry problem theoretically may help improve the problems that are well known in practice. Such future research could also address the cost issues for attackers and defenders in terms other than time-ready-to-spend. The increasing consumerization of information technologies may have decreased the cost of information technology for organizations, but it has also increased the equality of technology costs and assets between attackers and defenders.

CONCLUSION

Deception-based cybersecurity technology can be fruitful in many ways, and it changes our security thinking from purely defense-based to defense and attack at the same time. In this study, we show that we can reverse the cybersecurity asymmetry by using deceptive honeynets to better control how and when the attack occurs inside the network. It is worth noting that the deceptive honeynet is just one example of deception-based cybersecurity; the key is that we should invent more proactive cybersecurity technologies that make intrusion not only harder but also riskier. This framework shows that network security designs based only on principles of skilled programming lead to inadequate security in a setting that includes a large population of potential intruders. Good network security design must incorporate principles derived from the time required by a skilled intruder to effect an intrusion. Such designs are enhanced by mechanisms like deceptive networks that increase the intrusion time with specific regard for the skills of the intruders.

ACKNOWLEDGMENT

The authors respectfully acknowledge the work of the late Victor Portougal, who
developed an earlier version of the time-ready-to-spend risk analysis for an unpublished
manuscript with Richard Baskerville.

REFERENCES

Bodmer, S., Kilger, M., Carpenter, G., & Jones, J. (2012). Reverse deception: organized
cyber threat counter-exploitation. New York: McGraw Hill Professional.
Bringer, M. L., Chelmecki, C. A., & Fujinoki, H. (2012). A survey: Recent advances and
future trends in honeypot research. International Journal of Computer Network
and Information Security, 4(10), 63.
Buller, D. B., & Burgoon, J. K. (1996). Interpersonal Deception Theory. Communication
Theory, 6(3), 203-242.
"Cybersecurity". (2018). Merriam-Webster, accessed 6 Mar. 2018.
DeGroot, M. H. (1986). Probability and Statistics (Second ed.). Reading, Mass.: Addison-Wesley.
Denning, D. E. R. (1999). Information Warfare and Security. New York: ACM Press; Reading, Mass.: Addison-Wesley.
Friedberg, I., Skopik, F., Settanni, G., & Fiedler, R. (2015). Combating advanced
persistent threats: From network event correlation to incident detection.
Computers & Security, 48, 35-57.
Harris, B., & Hunt, R. (1999). Firewall certification. Computers & Security, 18(2), 165-
178.
Hunt, R. (1998). Internet/Intranet firewall security - policy, architecture and transaction
services. Computer Communications, 21(13), 1107-1123.
Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under
Risk. Econometrica, 47(2), 263-291.
Keil, M. (1995). Pulling the Plug: Software Project Management and the Problem of
Project Escalation. MIS Quarterly, 19(4), 421-447.
Krawetz, N. (2004). Anti-honeypot technology. IEEE Security & Privacy, 2(1), 76-79.
Labs, T. R. (2017). Deception in Depth: The Architecture of Choice for Deception Technology. TrapX Security, Inc.
Li, H., Yang, X., & Qu, L. (2011). On the offense and defense game in the network
honeypot. In G. Lee (Ed.), Advances in Automation and Robotics, Vol. 2 (pp. 239-
246): Springer.
McCarty, B. (2003). The honeynet arms race. IEEE Security & Privacy, 1(6), 79-82.
Ohlhausen, P. (2000). Fat pipe perils. Security, 37(4), 59.
Oppliger, R. (1997). Internet security: Firewalls and beyond. Communications of the
ACM, 40(5), 92-102.
Pfleeger, S. L., & Cunningham, R. K. (2010). Why Measuring Security Is Hard. IEEE
Security & Privacy Magazine, 8(4), 46.
Sherif, J. S., & Ayers, R. (2003). Intrusion detection: methods and systems. Part II.
Information management & computer security, 11(5), 222-229.
Soomro, Z. A., Shah, M. H., & Ahmed, J. (2016). Information security management
needs more holistic approach: A literature review. International Journal of
Information Management, 36(2), 215-225.
Stair, R. M., & Reynolds, G. W. (2013). Fundamentals of Information Systems (Seventh ed.). Boston: Thomson/Course Technology.
Sun Tzu. (1983). The art of war. New York: Delacorte Press.
Tang, A. (2014). Feature: A guide to penetration testing. Network Security, 2014, 8-11.
Wack, J., & Carnahan, L. (1994). Keeping Your Site Comfortably Secure: An Introduction to Internet Firewalls (NIST Special Publication 800-10). Washington: U.S. Department of Commerce, National Institute of Standards and Technology.
Weiler, N. (2002). Honeypots for distributed denial-of-service attacks. Paper presented at the Eleventh IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WET ICE 2002).
Yasin, R. (2000). The Cost Of Security -- Protective Measures Can Cause Their Own Problems. Internetweek, (801), 1, 12.

