
FEATURE

The challenge of measuring cyber-dependent crimes

Steven Furnell, Plymouth University

David Emm, Kaspersky Lab

Maria Papadaki, Plymouth University

It has become commonplace to see big numbers quoted in relation to cybercrime – be it the claimed cost to the global economy, the losses within individual countries, or the impact of specific categories of attack – and such headline figures are regularly used to convey the scale of the problem.1,2,3
However, even attempts to measure the
same thing can result in dramatically
varied numbers, highlighting the difficulty of trying to establish the scale, cost
and impact of attacks. Indeed, measuring
the problem is not straightforward, and
while there are many sources that seek to
provide related indications, there is little standardisation in terms of approach.
Nonetheless, it is desirable to get an
overall measure of the problem, and it is
therefore relevant to consider appropriate
options (and sources) for achieving this.
Relying on media reporting is unlikely
to reveal a clear picture, and while part
of this is clearly down to the nature of
reporting itself, the lack of recognised
measures does not help.
This article examines some of the
underlying challenges in measuring cyber-dependent crime, considering the nature
of information that is typically available
from current published sources, including general security surveys and threat
reports from specific vendors, as well as
data that may be directly collectable from
the sources to which security vendors
have access. The focus then moves to
consideration of which measures would
be the most useful in practice, supported
by a direct survey of relevant experts.
The material is based on a report arising from a UK Home Office-funded study
into understanding the scale, trends and
measurement of cyber-dependent crime,
although it should be noted that any
views expressed are those of the authors
and are not necessarily the views or
policy of the Home Office (or of the UK
Government more widely).

Understanding categories
Based on the definitions provided
by the Serious and Organised Crime
Strategy, cyber-dependent crimes are
distinguished from other forms of
cybercrime as follows:4
• Cyber-dependent crimes are offences that can only be committed by using a computer, computer networks or other form of ICT. These
acts include the spread of viruses
and other malicious software, hacking and distributed denial of service
(DDoS) attacks – ie, the flooding
of Internet servers to take down
network infrastructure or websites.
Cyber-dependent crimes are primarily acts directed against computers
or network resources, although there
may be secondary outcomes from the
attacks, such as fraud.

• Cyber-enabled crimes are traditional
crimes that are increased in their
scale or reach by the use of computers, computer networks or other ICT.
Unlike cyber-dependent crimes, they
can still be committed without the
use of ICT. Examples can include
fraud (including phishing and other
online scams), theft and sexual
offending against children.

Part of the challenge of measuring
cyber-dependent crimes (and indeed
other security breaches) is that the
domain itself gives rise to variations in
the use of terminology. This in turn can
lead to confusion over what should be
counted, and the potential for resultant
misrepresentation has existed for some
time. Indeed, discussion and reporting
of the topic will frequently involve the
use of terms such as attacks, risks and
threats, without a clear sense that they
are being used appropriately. In fact,
even the notion of discussing cyber-dependent crime is only relevant if the activities under consideration can actually be classified as criminal acts (leading
to potential complications in relation to
the existence, scope, and jurisdiction of
relevant legislation).
Given that the basic terminology can
cause confusion, it is perhaps unsurprising
to find that things are not any more clearly
defined when it comes to the interpretation of the threats (which in turn directly
complicates the measurement of scale, as
similar incidents can consequently be categorised in an inconsistent manner). Even
taking a look at just the subset of threats
relating to malware serves to reveal the
challenge of the situation. For example,
some terms (eg, virus, worm and trojan)
are technical definitions insofar as they
group programs according to how the code
functions. In this respect, at least, there is
common agreement across the industry.
However, sometimes other, non-technical
terms are used to describe malware and
malware-related programs. The term 'spyware' is one such example of this. As the
name suggests, it refers to software that
monitors activity on a computer, but this could include programs that are malicious and those that are not.

Existing measurement
There is no shortage of security surveys,
threat reports and other similar publications that seek to present a view of the
situation for those seeking to gain an
understanding. However, there is again
considerable variation in the underlying
data sources and in the ways in which
the related issues have been examined.
Existing studies that seek to measure
the scale of cyber-security threats include
the Eurobarometer Cyber Security
Survey (published by the European
Commission) and the ENISA Threat
Landscape (ETL) report.5,6 They include
a good deal of information, but the
variation in metrics and approach of
the sources they draw on means that
they do not offer a clear mechanism for
quantifying cyber-related crime. Looking
more specifically, there are a number of
long-established and widely cited survey series, all of which give attention to cyber-dependent crimes alongside other categories of security incident. Although they are by no means definitive sources (and make no claim to be), it is relevant to consider their treatment of the issue, as they are often used as a point of reference, with the findings at risk of being perceived to be more representative than they actually are. Three specific studies are considered here as examples:
• The Information Security Breaches Survey series, which has been run in the UK in some form since 1994 (under the auspices of the Department of Trade & Industry). The report is now commissioned by the Department for Business, Innovation and Skills (BIS) and conducted by PwC.
• The Computer Security Institute (CSI) Computer Crime and Security Survey, a US-based annual survey series dating back to 1995, and originally published jointly with the FBI. It became a CSI-only report in 2007, and continued through to a final edition in 2011.
• The Global Information Security Survey, conducted and published by Ernst & Young. The survey is performed annually, and dates back over a decade. As the name suggests, the sample base in this case is global, with designated Ernst & Young professionals in different countries being used to administer the survey.

2010/11 Computer Crime and Security Survey7
- No. of respondents: 351
- Respondent types: security practitioners (but respondent job titles included CEOs, CIOs and system administrators alongside security-specific ones)
- Collection method(s): post and email
- Cyber-dependent crime categories: malware infection; bots/zombies within the organisation; password sniffing; denial of service; website defacement; other exploit of public-facing website; exploit of wireless network; exploit of DNS server; exploit of client web browser; exploit of user's social network profile; instant messaging abuse; insider abuse of Internet access or email (ie, pornography, pirated software, etc); unauthorised access or privilege escalation by insider; system penetration by outsider

Global Information Security Survey 20148
- No. of respondents: 1,825
- Respondent types: CIOs, CISOs, CFOs, CEOs and other information security executives
- Collection method(s): majority via face-to-face interviews, and others via online questionnaire
- Cyber-dependent crime categories: cyber-attacks to disrupt or deface the organisation; cyber-attacks to steal financial information (credit card numbers, bank information, etc); cyber-attacks to steal intellectual property or data; internal attacks (eg, by disgruntled employees); malware (eg, viruses, worms and trojans); zero-day attacks

Information Security Breaches Survey 20159
- No. of respondents: 664
- Respondent types: IT professionals, business managers, executives, non-executive directors
- Collection method(s): online questionnaire
- Cyber-dependent crime categories: infection by viruses or malicious software; actual penetration into the organisation's network; denial of service attack; attack on Internet or telecommunications traffic

Table 1: Cyber-dependent crime categorisations from leading surveys.


F-Secure Threat Report
- Source(s) of information: collated and cited from various external sources (including media reporting), and F-Secure's blog
- Frequency: biannual
- Cyber-dependent threats covered: malware (PC, mobile, Mac); web-based attacks
- Key measures: detection volume of top 10 threats (overall and by geographic region)

HP Cyber Risk Report
- Source(s) of information: HP security products and services, plus external sources (eg, National Vulnerability Database)
- Frequency: annual
- Cyber-dependent threats covered: malware (Windows/.NET, ATM, Linux, Android, PoS)
- Key measures: top exploits; top malware samples; malware volume; platform-specific malware statistics

McAfee Labs Threat Report
- Source(s) of information: threat statistics presented from McAfee Labs' own sources
- Frequency: quarterly
- Cyber-dependent threats covered: malicious signed binaries; malware (general, mobile, rootkit); ransomware
- Key measures: volume (new and total) for several threat categories; top attack categories; top threat locations (region or country)

Symantec Internet Security Threat Report
- Source(s) of information: Symantec Global Intelligence Network (>41.5 million attack sensors, monitoring threat activity in over 157 countries and territories)
- Frequency: annual
- Cyber-dependent threats covered: bots; malware (Android, Mac OS X, Windows); ransomware; web attacks
- Key measures: threat volume and prevalence (overall, and in specific contexts such as mobile, email and web)

TrendLabs Security Roundup
- Source(s) of information: Trend Micro Smart Protection Network
- Frequency: quarterly
- Cyber-dependent threats covered: adware; exploit kits; malicious sites; malware (Android, online banking, PoS); ransomware
- Key measures: volume of malware and adware; volume of mobile threats; volume and location of visits to malicious sites; volume and location of connections to C&C servers from infected machines; measure of the number of threats blocked per second

Table 2: Examples of cyber-dependent crime coverage in vendor reports.

In order to gauge the value of these sources in the context of measuring cyber-dependent crime, it is relevant to examine the specific aspects that they report on, as well as the approaches taken in order to obtain the findings. A summary view, based on the most recent editions of each survey, is presented in Table 1. In each case, the lists exclude any non-cyber-dependent incident categories that the surveys may have reported on, such as frauds, device thefts, identity theft (including phishing), accidental damage and natural disasters. In some cases, however, it is not always clear what the categories might actually mean. For example, in the Ernst & Young set, the category 'cyber-attacks to steal intellectual property or data' might refer to system penetration and data theft via hacking (ie, cyber-dependent), or might equally be based on a phishing-style exploit (ie, cyber-enabled).

Methods involved
While the presentation of the key results
in such surveys will naturally be framed
around the incident categories, it is relevant to consider how these sit against the
participants and data collection methods
involved. For example, are the results
ultimately based on facts or opinions?
Are the responses offered based on objective records or subjective recollections?
The likelihood is that most responses
will tend towards the latter. However,
there is a clear risk that the consequent
results end up being viewed and interpreted as more concrete findings.
In addition, there is the consideration
of whose recollections are being used to
inform the results. Looking at the different types of participant, it is clear that some could have significantly different
levels of insight into the problems than
others. For example, looking at the
granularity of the attack categories utilised within the CSI 2010/11 survey and
comparing it to the types of participant
that responded, one is left to wonder
about the extent to which those from
higher-level roles would have been in
a position to offer informed comment
around some of the very specific types
of incident – eg, it is hard to imagine CEOs having knowledge around the various 'exploit of' categories. Furthermore,
the quality of information may vary
depending on whether it was collected
during face-to-face interviews (where the
participants would conceivably have an
opportunity to seek clarification on the
questions, as well as qualify and explain
their answers) or the online questionnaire (where responses would potentially
be offered based on a misinterpretation
of the question etc). As such, one might
wonder whether the responses collected
via a face-to-face discussion with the
CISO might be more informed and reliable than those arising from the online
responses of a CEO. It is notable that
the presentation of the findings does
not differentiate between the underlying
sources when reporting the results.

Vendor reports
Looking beyond the general surveys,
there is an increasing tendency for security vendors to produce threat reports
and the like, with contributions variously covering the general landscape or focusing on specific threat categories. In some cases, such reports are very
much based around data and intelligence
gathered by the companies themselves,
whereas others may take a wider view.
Considering their potential utility as
a means of tracking the scale and extent
of the problem, it is relevant to consider
what they are measuring, how often, and
where from. Table 2 lists a variety of the
key cyber-dependent threat categories
from a sample of popular reports published at the time the study was made,
and it is immediately clear that they do
not all treat things in the same way. For
example, while all of the reports devote
some coverage to malware, there are variations in the extent to which they examine and discuss sub-categories within
this, or the platforms affected (the table
attempts to give a sense of the latter
by listing the main categories that the
reports themselves have used). Similarly,
the studies also differ in terms of the
level of granularity to which they categorise the threats and where they position
things. For example, the McAfee Labs Q4 2014 report refers to a number of
distinct categories of malware, including
trojans, ransomware, bots and viruses. It
then also introduces a further category
called Potentially Unwanted Programs
(PUPs), which includes a variety of other
categories including adware, remote
administration tools and tracking tools
such as spyware. By contrast, if we look
at how something like adware is referenced in the other reports, TrendLabs treats it as a distinct category of problem, Symantec's report only makes reference to it as a category of mobile threat,
the HP study only mentions the term in
passing, and F-Secure makes no mention
of it at all (and none of the other reports
makes use of the PUP terminology).

There are also issues of consistency
and focus for anyone wishing to regard the reports as a basis for identifying and
prioritising the key trends. Given that
all of the reports sampled are reflecting a
similar underlying timeframe, one might
arguably expect them to be highlighting
a similar set of issues. However, it can be
observed that while some are ostensibly
covering the same themes, they are often
not directly comparable. For example,
while McAfee and Symantec both present
measures relating to mobile malware,
McAfee does so in terms of new and
total volume of collected samples, while
Symantec presents more specific figures
including the total number of Android
malware families, the total number of variants, and the average variants per family.
In view of the above, it would clearly
be unwise, and potentially misleading,
to take a single report in isolation and
consider it as a source of truth. At the
same time, the variations in reporting
again confirm the challenge of getting
relevant and reliable measures of the
cyber-dependent crime problem. In terms
of improving on this situation, it is clearly
desirable to have reporting sources that
can be based on facts rather than the feelings or instincts of the people involved.
With this in mind, it is relevant to consider what can potentially be gathered from the data that security vendors have access to, and which some already use to
inform their threat reports.
Various providers now have cloud-based threat intelligence infrastructures,
with illustrative examples including the
Kaspersky Security Network, McAfee
Global Threat Intelligence, the Symantec
Global Intelligence Network and the
Trend Micro Smart Protection Network.
While these differ in terms of the specific sensors and analytics in use, they share
the characteristic of giving the associated
vendors a deeper level of insight, based
on data collected directly at source.
The cloud-based infrastructure lets
Internet security vendors analyse the programs running on computers protected
by their software. In effect, each protected
device becomes a 'listener' in a global 'neighbourhood watch' approach, feeding
data into the overall system. As a result,
vendors now have more accurate data
on what is happening in the wild than
they did in the past. Nevertheless, an important factor is that Internet security
vendors are only able to measure what is
detected on a protected device. By definition, therefore, they measure malware
that is blocked and they do not measure
anything that escapes detection. Although
this does not invalidate the data collected,
it is important to understand that estimates based on these data will still underestimate the scale of the problem.
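As a simple illustration of this lower-bound effect, the sketch below infers a 'true' volume from blocked detections under an assumed detection rate. Both figures are entirely hypothetical, and in practice the detection rate is itself unknown – which is exactly why such telemetry yields a floor rather than a measure.

# Illustrative only: telemetry built from blocked detections understates scale.
# Both figures below are hypothetical assumptions, not vendor data.
blocked_detections = 1_200_000   # attacks the product saw and blocked
assumed_detection_rate = 0.85    # fraction of attacks caught (unknowable in practice)

# Telemetry alone can only report the blocked volume...
print(f"Measured (blocked) volume: {blocked_detections:,}")

# ...whereas the true volume also includes whatever escaped detection.
estimated_true_volume = blocked_detections / assumed_detection_rate
missed = estimated_true_volume - blocked_detections
print(f"Estimated true volume at {assumed_detection_rate:.0%} detection: "
      f"{estimated_true_volume:,.0f} (~{missed:,.0f} unmeasured)")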

Relevant measures
Having reviewed the style of existing
measurements and metrics from the
literature, further opinions were specifically sought from practitioners and
other relevant respondents with direct knowledge of cyber-dependent crime and/or access to related information through their workplace. A total
of 25 experts were approached via targeted mailing, and via additional routes
provided by TechUK and the National
Crime Agency (NCA). Twelve responses
to an associated questionnaire were ultimately received, with the vast majority
coming from Internet security vendors
and IT companies, alongside some individual practitioners with direct experience and access to threat data.
The starting point was to determine
the extent to which the respondents felt
the problem was appropriately understood. Half felt that our current understanding is best characterised as confused, while a further significant proportion considered it to be understated.
As such, the vast majority felt that the
current view is in some way inaccurate.
The key reasons for confusion were centred on a lack of understanding of what
we are trying to track, alongside inconsistency in what it ends up being called.
"I think cybercrime is understated both in terms of scale and cost purely because we honestly have no idea of the true depth of how bad it actually is; we are unfortunately in a reactive environment the majority of the time."
In terms of more specific understanding
of the crimes, malware was commonly
felt to be the most understood (with half
of the respondents citing this), on the
basis that the industry itself is more mature and the fact that statistics of some
nature are regularly collected. Equally, a
significant proportion of respondents also
observed that malware is often less visible
than other types of crime, and so measuring it may not be viable unless an infection has actually been detected.
As observed earlier, the usage of terminology in relation to cyber-dependent
crimes is far from consistent, and as such
this was one of the themes explored with
the respondents.
"We tend to use terminology interchangeably – using 'risk' when we mean 'threat' or 'vulnerability' and (even more commonly) using 'threat' when we mean 'vulnerability'."
Although the majority were in agreement that there is currently a lack of clarity
and consistency in this respect, opinion
was somewhat divided over the prospects
of actually improving matters. While half
of the respondents explicitly suggested
the need for some form of standardisation, others considered that there was little
potential for standardisation to succeed ("Many tried. Everyone failed"). One
respondent did, however, draw attention
to the Vocabulary for Event Recording and
Incident Sharing (VERIS) initiative, which
has established a set of metrics designed to
provide a common language for describing security incidents in a structured and
repeatable manner, enabling organisations
to collect incident data and anonymously share it with others in order to better inform the community as a whole.10
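To give a flavour of the approach, the fragment below sketches a VERIS-style record in Python. The field names echo the publicly documented VERIS 'A4' structure (actor, action, asset, attribute), but the record is a simplified illustration rather than a schema-validated VERIS document, and the identifier is invented.

import json

# A simplified, illustrative VERIS-style incident record (not schema-validated).
incident = {
    "incident_id": "example-0001",  # hypothetical identifier
    "actor": {"external": {"variety": ["Organized crime"]}},
    "action": {
        "malware": {"variety": ["Ransomware"], "vector": ["Email attachment"]},
        "hacking": {"variety": ["Use of stolen creds"]},
    },
    "asset": {"assets": [{"variety": "S - Database"}]},
    "attribute": {"confidentiality": {"data": [{"variety": "Payment"}]}},
    "impact": {"overall_rating": "Damaging"},
}

print(json.dumps(incident, indent=2))

# Because the vocabulary is fixed, records from different organisations can be
# tallied consistently - eg, by top-level action category:
print("Action categories:", ", ".join(sorted(incident["action"])))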
"Most vocabulary seems to come from vendors' marketing teams – as new vendors think of better ways of dealing with security they need to change the view of security professionals to fit in with their paradigm."

Dynamic domain
One of the factors contributing to
the inconsistency of vocabulary is the
dynamic nature of the domain. The
appearance of new threats leads to
new names being introduced, and (as
observed in the quote) some further
potential for confusion arises from the
industry itself seeking to differentiate its
product and service offerings. This again
is ultimately difficult to control, and
with the exception of some well-accepted, top-level labels, it seems unlikely that
attempts to further define a taxonomy will
lead to consistent naming behaviour.

"The term 'cyber-dependent crime' is also somewhat unhelpful – it focuses on particular attack vectors rather than recognising that there is normally an intent behind such crime."

"Sometimes it feels like we are focusing on the means not the ends – akin to categorising whether someone has forced a window, broken a door or defeated an alarm system, rather than the end objective, which is burglary or sometimes malicious damage."

Indeed, evidence of the difficulty in achieving an agreed naming was that a
significant proportion of the respondents
actually took issue with the classification of crimes as 'cyber-dependent'. The
above comment is illustrative of several
that considered the distinction between
'cyber-dependent' and 'cyber-enabled'
to be focusing on the means of attack
rather than the motivation and intended
outcome. This is in many ways a fair
observation, and it is important to consider how much currency the distinction
actually has in the field – ie, is it aligned
with how individuals and businesses
think of cybercrime? Indeed, if we consider the distinction between the means
and the end result, it becomes apparent
that a single attack (when viewed in
terms of its outcome – eg, disruption, espionage, fraud, theft, etc) may actually
involve a variety of methods working in
combination. For example:
• A bit of malware may lead to the establishment of a botnet, which in turn may lead to DDoS – the consequence of which is disruption to business and loss of revenue.
• A phishing attack leading to the unauthorised acquisition of credentials could in turn lead to a hacking incident that results in data theft from the victim organisation (with the data itself potentially going on to be used in other ways – eg, to commit fraud).
As such, tracking and measuring each
method in isolation would consequently
lead to multiple reporting of cybercrime
activities, and would serve to inflate the
statistics as a result. In addition, the second
example is notable in that it intermixes
cyber-dependent and cyber-enabled methods within a single attack. The upshot is
that while the classification of crimes as
cyber-dependent and cyber-enabled can be
valid as an academic or conceptual distinction, it is often less meaningful in practical
terms (and thereby not necessarily the most
useful way in which to try to partition
the topic). If a distinction is to be made
between dependent and enabled categories, then it is perhaps more meaningfully applied to the underlying methods, leaving
the crimes to be considered as cyber-related,
regardless of how they happen.
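A small worked tally makes the inflation effect concrete. In the hypothetical records below, the second incident chains a cyber-enabled method (phishing) with cyber-dependent ones, mirroring the second example above; counting methods reports five crimes where counting incidents by outcome reports two. The labels are illustrative, not drawn from any standard taxonomy.

from collections import Counter

# Hypothetical incidents: an outcome plus the chain of methods that produced it.
incidents = [
    {"outcome": "disruption", "methods": ["malware", "botnet", "ddos"]},
    {"outcome": "data theft", "methods": ["phishing", "hacking"]},
]

# Counting every method in isolation multiple-counts each attack...
by_method = Counter(m for i in incidents for m in i["methods"])
print("By method:", sum(by_method.values()), "reports ->", dict(by_method))

# ...whereas counting by outcome reflects the number of actual crimes.
by_outcome = Counter(i["outcome"] for i in incidents)
print("By outcome:", sum(by_outcome.values()), "incidents ->", dict(by_outcome))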

Means available
The remaining questions focused around
the means available to measure the different categories of cyber-dependent crimes,
their credibility and usefulness, and the
things that it would be most useful to get
information about. To set the scene, it is
already clear that different metrics will give
different levels of insight. Top-level measures (eg, volume-based metrics to count
occurrences) could be generally consistent
across a range of threat types, while other
measures will be threat-specific – eg, lower-level measures for DDoS could include
target addresses, traffic volume and the
duration of downtime, whereas for malware it would be possible to form metrics
around aspects such as infection vectors
and payload effects. However, looking further, it becomes clear that even for a simple measure of volume one may be left with several options of what to count. For instance, in the case of malware, the earlier reports differed in terms of whether they tracked samples collected, families and variants, or detections. These become progressively more difficult to determine, as the following points (and the sketch after them) illustrate:
• Tracking malware samples requires them to be seen and tallied.
• Tracking families and variants requires an agreed classification (naming) to be in place.
• Tracking detections requires the threats to be encountered in the field.
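The divergence between these counting bases is easy to see on toy data. In the sketch below every record is invented; real telemetry is vastly larger, and family attribution is itself a point of disagreement between vendors.

# Toy telemetry: (sample_hash, assigned_family, field_detections) triples.
# All values are invented purely to show the three counting bases diverging.
telemetry = [
    ("a1f3", "CryptoLock", 9_400),   # one sample seen on many machines
    ("b7c2", "CryptoLock", 2_100),   # a variant of the same family
    ("c9d8", "AdClicker", 55_000),
    ("d4e5", "AdClicker", 0),        # sample collected, never seen in the wild
]

samples = len({sample for sample, _, _ in telemetry})
families = len({family for _, family, _ in telemetry})
detections = sum(count for _, _, count in telemetry)

# The same data yields three very different 'volume' figures:
print("Samples collected:", samples)          # 4
print("Families represented:", families)      # 2
print(f"Detections in the field: {detections:,}")  # 66,500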
Yet even the volume of malware detections does not in itself represent the
scale of a problem, because the systems
concerned have generally been protected – eg, to quote Trend Micro's TrendLabs Security Roundup, detections refer to "instances when threats were found on users' computers and subsequently blocked by any Trend Micro security software". This links to one of the strongest messages emerging from the consultation with experts: there is a clear disconnect between what we can measure and what we would like to measure.
"Most security metrics are operational in nature. For example, numbers of viruses detected at the gateway may (or may not) be a good indicator of the effectiveness of your AV system – but the figure will be completely meaningless to the business."

"I don't really care about the scale – what I do care about is how much I have to spend to deal with it, and how much the successful attempts are costing me to remediate."

Keywords appearing in responses around what is currently measured were 'detections', 'duration', 'frequency' and 'incidents', essentially reflecting statistical metrics that can somehow be counted in terms of volume. By contrast, the keywords that repeatedly appeared in response to what we should ideally be measuring were: 'cost', 'impact', 'loss', 'outcome' and 'severity'. Indeed, the vast
majority of respondents made reference to
one or more of these aspects. Moreover, a
significant proportion made specific reference to caring about the success of an
attack in terms of achieving its intended
result. For example, even if a system is
successfully infected by malware, it does
not denote a successful attack if the actions that the malware then attempts to perform – eg, acting as a DDoS bot – are
still blocked at another level.
"I'm not sure measuring is of much value – disruption/stopping seems to be where investment would be better spent as measuring is a good stat but doesn't change anything."

"Attempts vs successful attacks is completely useless in the discussion of any breach scenario … Not only is this information not relevant, it distracts from the most important metric. How was the attack conducted?"

"We have got to focus on business impacts, that means focusing on attempted attacks that have been successful … No one counts how many burglars don't burgle their house, they just count how many times they've been burgled."
These views were reflected in several
responses, and again suggest potentially
limited value to be gained from the things
that can be measured most easily. So,
while scale does say something in isolation,
it is ultimately hard to divorce it from
the impacts on the organisation, and it is
these factors that are more desirable
to understand. Indeed, over a decade ago
Garfink and Landesman argued that
global statistics are not even meaningful
for individual businesses.11 They suggested that it makes more sense for a business
to look at the impact of actual attacks, or
attempted attacks, on the business and to
assess the financial impact on that business. However, it can be observed that
even in these respects, a term such as
'loss' could have varying interpretations,
including direct and indirect financial
loss, loss of reputation, loss of shareholder
value, etc. As such, they are more difficult
to collect, and rely on both the recognition of an incident by an affected organisation and the willingness to disclose it.
"I don't believe you can accurately measure the impact and costs without the collection and assessment of specific local data."
There is indeed a need to collect specific
local data in order to properly measure the
impact and associated cost of cybercrime
incidents. Current threat metrics describe the
threat, but do little to probe the impact or
true cost.
"The wide variety of targets and attacks can mean global or even national statistics seem irrelevant – to be relevant and useful then parameters around the target must also be collected so that businesses can compare their metrics and results with industry standards."

Local data
When considering the level at which metrics could be most appropriately collected,
there was a clear view that high-level
statistics would have limited value. Local
data collection was felt to be the only
level at which impacts could really be
understood, whereas higher level information would have more value if organisations could still use it in some way, such
as linking it to specific business sectors
(such that organisations can at least
understand the threats affecting their peer
group, and get a sense of their risk in a
more specific context). Even here, however, the nature of impacts (or lack thereof)
would potentially be more informative
than raw information relating to scale. In
particular, it could be relevant to establish
why, in the face of the same threat, one
organisation fell victim and another did
not, and then determine the
differences in the protection practices
underlying these experiences.
There was ultimately recognition of
a clear obstacle to obtaining the data
that would be most useful; namely
that organisations are not inclined to
share them. A significant proportion of
responses made some sort of reference to
the need to increase reporting, whether
by voluntary means or legal obligation.
"Downstream we would like more voluntary reporting of cybercrime in all of its forms to inform the resources we assign to crime prevention and investigation."
"It's impossible to measure the cost & impact from this today – without any legal obligation in the EU today to notify there is no source to read and track."
Another related observation was that it
would also be valuable to encourage (or
demand) disclosure of the security controls in use, so that intelligence could be
shared regarding those that are effective
(and indeed those that are not).
"If we continue to go down the road of never disclosing or identifying the security components that failed or the components that were not in place when a breach happens, we will never make any progress."
In summary, the vast majority of
experts surveyed shared very similar
opinions about current understanding
of cyber-dependent crime and the value
of measuring it. While the group as a whole was able to suggest a range of
metrics that might be collected in order
to track scale and trends, they were also
unconvinced about the value of doing
so. The general belief was that greater
value would be gained by understanding the impacts of incidents (and how
to prevent them), but there was also
recognition that this would pose a different set of challenges in terms of data
availability. As such, it is this area that
emerges as a key route for further consideration as an outcome of this study.

Conclusions
As the cybercrime landscape broadens, so too do the dimensions that are
relevant to measure in getting a clear
picture of the scale and impact of the
problem. However, while big numbers
have considerable dramatic effect, they
are not necessarily meaningful for businesses. In fact, it is worth considering
whether the whole notion of measuring the scale of cyber-dependent crime
risks fixing our attention on the wrong
thing. It is arguably sufficient to know
that significant threats exist in volume, and that they are widespread. It
makes more sense for an organisation
to look at the impact of actual attacks,
or attempted attacks, on the business
and to assess the financial impact in
terms of down-time, real or potential
loss of stolen data and clean-up costs.
This offers a more realistic view of how
a specific organisation has been, or may
be, affected by cybercrime.
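Expressed as simple arithmetic, such an organisation-centric view is just a sum of locally measured cost components. Every figure in the sketch below is a placeholder that would come from the organisation's own records.

# Hypothetical, organisation-local impact estimate for a single incident.
# Every figure is a placeholder for locally collected data.
downtime_hours = 18
revenue_per_hour = 4_000     # trade lost while systems were down
records_exposed = 12_500
cost_per_record = 3.20       # notification, support calls, monitoring, etc
cleanup_costs = 22_000       # forensics, rebuilds, staff overtime

incident_cost = (downtime_hours * revenue_per_hour
                 + records_exposed * cost_per_record
                 + cleanup_costs)
print(f"Estimated impact of this incident: £{incident_cost:,.2f}")
# -> Estimated impact of this incident: £134,000.00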
Consistent terminology and clarity
over data collection methodologies are
necessary to enable data from different
sources to be utilised in a more informed
manner. But even then data is only
meaningful when it has a context, and
what is meaningful here is related to
managing risk (which may be specific to
an organisation, or type of organisation),
and the results of not doing so. As such,
attention is also needed in terms of enabling potential victims to relate this to
their own situations.

Current measurements are often guesstimates at some level, particularly when it comes to understanding incidents rather
than prevalence of threats. So we need
to find ways to improve our visibility of
incidents, which in turn means finding a
means of further encouraging (and perhaps obliging) the disclosure of incidentrelated information.

There is also value in trying to motivate
those that have not been breached to share
details of successful protection mechanisms. General sharing of intelligence on
best practice (perhaps by business sector)
could add considerable value in harmonising protection and reducing the potential
for cyber-dependent crimes to succeed.

About the authors


Steven Furnell is a professor of information systems security and leads the Centre
for Security, Communications & Network
Research at Plymouth University. He is also
an adjunct professor with Edith Cowan
University in Western Australia and an
honorary professor with Nelson Mandela
Metropolitan University in South Africa.
His research interests include usability of
security and privacy technologies, security
management and culture, and technologies
for user authentication and intrusion detection. He has authored over 250 papers in
refereed international journals and conference proceedings, as well as books including
Cybercrime: Vandalizing the Information
Society (2001) and Computer Insecurity:
Risking the System (2005). Furnell is the
BCS representative to Technical Committee
11 (security and privacy) within the
International Federation for Information
Processing, and is a member of related
working groups on security management,

Computer Fraud & Security

11

FEATURE
security education, and human aspects
of security. He is also a board member
of the Institute of Information Security
Professionals, and chairs the academic partnership committee and southwest branch.
David Emm is principal security
researcher at Kaspersky Lab, a provider of
security and threat management solutions.
He has been with Kaspersky Lab since
2004 and is a member of the company's
Global Research and Analysis Team. He
has worked in the anti-malware industry
since 1990 in a variety of roles, including
that of senior technology consultant at Dr
Solomon's Software, and systems engineer
and product manager at McAfee. In his
current role, Emm regularly delivers presentations on malware and other IT security threats at exhibitions and events, highlighting what organisations and consumers
can do to stay safe online. He also provides
comment to broadcast and print media.
Emm has a strong interest in malware, ID
theft and the human aspects of security.
Dr Maria Papadaki received her PhD
in 2004 from the University of Plymouth.
Prior to joining academia in 2006, she
was working as a security analyst for
Symantec EMEA. Her research interests
include incident response, insider threats,
intrusion prevention and detection, security information and event management,
security assessment, social engineering, security usability and security education. Her
research outputs include 19 journal and
30 international peer-reviewed conference
papers. Papadaki holds GCIA, GPEN and
CEH certifications and is a member of the
GIAC Advisory Board, as well as the BCS,
IISP, and ISACA. Further details can be
found at www.cscan.org/papadaki/.

References
1. Whittaker, Z. Cybercrime costs
$338bn to global economy; More
lucrative than drugs trade. ZDNet,
7 Sep 2011. www.zdnet.com/article/
cybercrime-costs-338bn-to-global-economy-more-lucrative-than-drugs-trade/.
2. The Cost of Cybercrime. Detica, 17
Feb 2011. www.gov.uk/government/
publications/the-cost-of-cybercrime-joint-government-and-industry-report.
3. Harris, D. The real costs of cybercrime. GIGAOM, 16 Nov 2011.
http://gigaom.com/2011/11/16/the-real-costs-of-cybercrime-infographic/.
4. Serious and Organised Crime
Strategy. HM Government,
Cm 8715, October 2013. ISBN
9780101871525.

5. Cyber-security Report, Special Eurobarometer 423. European
Commission, EB82.2, February
2015.
6. ENISA Threat Landscape 2014.
ENISA, 27 Jan 2015. www.enisa.
europa.eu/activities/risk-management/
evolving-threat-environment/enisa-threat-landscape/enisa-threat-landscape-2014.
7. Richardson, R. 15th Annual
2010/2011 Computer Crime and
Security Survey. Computer Security
Institute.
8. Get ahead of cybercrime – EY's
Global Information Security Survey
2014. Ernst & Young, October 2014.
EYG no. AU2698.
9. 2015 Information Security Breaches
Survey Technical Report.
Department for Business, Innovation
and Skills, HM Government. URN
BIS/15/302.
10. VERIS, home page. Accessed Sep
2015. http://veriscommunity.net.
11. Garfink, S; Landesman, M. Lies,
damn lies and computer virus costs....
VB2004 Conference, Chicago, US,
29 Sep–1 Oct 2004. www.virusbtn.
com/conference/vb2004/abstracts/
sgarfink.xml.
