
Section VII

Critical Issues
(7.1-7.10)

This section addresses conceptual and theoretical issues related to the field of ubiquitous and pervasive computing. Within these chapters, the reader is presented with analyses of the most current and relevant conceptual inquiries within this growing field of study. Particular chapters discuss ethical issues in pervasive computing, privacy issues, and quality of experience. Overall, contributions within this section ask unique, often theoretical questions related to the study of ubiquitous and pervasive computing and, more often than not, conclude that solutions are both numerous and contradictory.

Chapter 7.1
The Ethical Debate Surrounding RFID
Stephanie Etter
Mount Aloysius College, USA

Patricia G. Phillips
Duquesne University, USA

Ashli M. Molinero
Robert Morris University, USA

Susan J. Nestor
Robert Morris University, USA

Keith LeDonne
Robert Morris University, USA

RFID TECHNOLOGY

Radio frequency identification (RFID) is a generic term that is used to describe a system that transmits the identity of an object or person wirelessly using radio waves (RFID Journal, 2005). It falls under the broad category of automatic identification technologies. RFID tags, in the simplest of terms, are “intelligent chips that can be embedded in or attached to a product to transmit descriptive data” (Gelinas, Sutton, & Fedorowicz, 2004, p. 6). According to the online RFID Journal (2005), there are several methods of identifying objects using RFID, including the most common of storing a serial number that identifies a product on a microchip that is attached to an RFID tag.

RFID is not a new technology, but it has only recently been in the spotlight as more businesses are receiving press for putting the technology to work in their supply chains.
RFID tag technology is sometimes associated with the term electronic product code (EPC). An EPC uniquely identifies objects in a supply chain. According to EPCGlobal, “EPC is divided into numbers that identify the manufacturer and product type. The EPC uses an extra set of digits, a serial number, to identify unique items.” The EPC number is placed on a tag composed of a silicon chip and an antenna, which is then attached to an item. Using RFID, a tag communicates its number to a reader (EPCGlobal, 2005).
In broad terms,
RFID tags are placed into one of two
categories: active or passive. According to the
Association for Automatic Identification and
Mobility (AIM,
2005), active RFID tags are powered by an
internal battery and are typically designated as
read-write tags. When a tag has read-write
capabilities, the tag data can be modified.
Passive tags, according to AIM, operate without
a power source and obtain operating power
from the tag reader. Passive tags are typically
read-only tags, having only read-only memory.
Active tags generally have a longer read range
than passive tags.
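Read against the EPCGlobal description above, the following minimal Python sketch illustrates the idea of one EPC number carrying a manufacturer part, a product-type part, and a per-item serial number. The field widths and the function name are illustrative assumptions, not the actual EPCGlobal encoding.

def parse_epc(epc: str) -> dict:
    """Split a 24-digit illustrative EPC into its three parts."""
    return {
        "manufacturer": epc[0:7],    # identifies who made the item
        "product_type": epc[7:13],   # identifies the kind of product
        "serial":       epc[13:24],  # distinguishes this individual item
    }

# Two cans of the same soda share the manufacturer and product-type
# digits but differ in the serial number:
print(parse_epc("001234567890110000000001"))
print(parse_epc("001234567890110000000002"))

It is precisely the serial part that turns a class-level bar code into a unique item identifier, which is why EPC figures prominently in the privacy debate discussed below.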
RFID development dates back, according
to some accounts, to the 1940s work of Harry
Stockman who discussed the possibility of
communication by means of reflected power. Stockman at that point was early in the
exploration and “admitted that more needed to
be done in solving the basic problems of
reflected-power communication before the
application could be useful” (Landt & Catlin,
2001). According to the RFID Journal, RFID’s
early applications can be found during World
War II when it was used by the military in
airplanes, through the assistance of radar, to
identify friend or foe (IFF).
Two decades later the first commercial use
of RFID-related technology was electronic
article surveillance (EAS), which was designed
to help in theft prevention. These systems often
used 1-bit tags that could be produced cheaply.
Only the presence or absence of the tag could
be detected, which provided effective anti-theft
measures (Landt & Catlin, 2001).
Commercial applications expanded in the
1980s across the world, although not everyone
had the same RFID applications in mind. The
United States found the greatest applications for
RFID to be in the areas of transportation,
personnel access, and to a lesser extent, animal
tracking. “In Europe, the greatest interests were
for short-range systems for animals, industrial
and business applications, though toll roads in Italy, France, Spain, Portugal, and Norway were equipped with RFID” (Landt & Catlin, 2001).
Today we see RFID in use in toll collection,
tracing livestock movements, and tracking
freight (Jones, Clarke-Hill, Comfort, Hillier, &
Shears,
2005). While not a new technology, the use of
RFID is slowly gaining momentum for
widespread application, with RFID technology
being used in industries such as retail, banking,
transportation, manufacturing, and healthcare.

PRIVACY DEBATE

The two main controversies regarding the use of RFID are privacy and security. While advances in technology can address the security issues related to RFID, the ethical debate surrounding privacy is not as easily solved. As RFID technology becomes mainstream, its privacy protection challenges are becoming the topic of debate between technologists, consumer activists, academics, and government agencies. Yoshida (2005) reports that there is a “polarizing force tugging at the technology: the government and industry groups advocating RFID’s adoptions, and the civil libertarians concerned about its potential abuse.” The main question is, will this technology lead to situations where confidential information can be improperly disclosed? A representative from the UK’s Department of Trade and Industry warned, “RFID tags could be used to monitor people as well as merchandise. As the use of RFID spreads, privacy issues must be weighed in the context of societal consent” (Yoshida, 2005).
RFID is not the first technology to spur a
privacy debate. While technologies like RFID
are not necessary for the invasion of privacy,
they have made new privacy threats possible
and old privacy threats more powerful. Based
on IT ethics literature, there are three key
aspects to privacy that computer technology
tends to threaten (Baase, 2003):
1. freedom from intrusion,
2. control of personal information, and
3. freedom from surveillance.

RFID has the potential to impact all three, especially in terms of invisible information gathering. Gunther and Speikermann (2005) argue that RFID has added a “new dimension to the traditional e-privacy debate because much more information can potentially be collected about individuals” (p. 74).
While many understand the origin of RFID
as being related to inventory tracking, privacy
advocates argue that RFID can be used to track
items after the item is out of the supply chain
and in the hands of the consumer. RFID has the
potential to allow anyone with an RFID
scanner, either business or individual, to see the
contents of shopping bags, purses, or other
personal items, a process known as skimming.
The RFID privacy concerns then are three-fold:
pre-sales activities, sales transaction activities,
and post-sales uses (Peslak, 2005).
While some believe that privacy advocates who argue against the use of RFID are being overly cautious and unreasonable, it is important to note that several businesses may already have plans to use RFID for the purposes of marketing, advertising, and tracking. For example, IBM filed a patent application in 2001 which offers the potential to use RFID “to track people as they roam through shopping malls, airports, train stations, bus stations, elevators, trains, airplanes, rest rooms, sports arenas, libraries, theaters, museums, etc.” (Bray, 2005). Unique item identification made possible through RFID has the potential to lead to a degree of personal attribution and surveillance never before possible (Gunther & Speikermann, 2005).
Global RFID standards are non-existent. Active RFID tags can often be read outside of the supply chain, are difficult for consumers to remove, can be read without consumer knowledge, and in the future may be able to uniquely identify items so that each item is traceable back to a
credit account. According to Gunther and
Speikermann (2005), “Consumers feel helpless toward the RFID environment” (p. 74) and “even though the potential advantages of RFID are well understood by a solid majority of consumers, fear seems to override most of these positive sentiments” (p. 76). There is some development in the area of privacy-enhancing technologies (PETs), technology designed to enable privacy while still using RFID, but as Gunther and Speikermann (2005) report, consumers still feel helpless (p. 74).
Although the ethical debate surrounding
RFID does focus on privacy, it is important to
note that much of the privacy debate itself can
be connected to the other main controversy with
RFID: security. Yoshida (2005) reports that
Elliot Maxwell of The Pennsylvania State
University’s E-Business Research Center
argues, “Fair information practices are
designed for centralized control and personal
verification, but what is emerging from RFID is
surveillance without conscious action.” He
further argues that with RFID, “every object is
a data collector and is always on. There are no
obvious monitoring cues. Data can be broadly
shared, and data that [are] communicated can
be intercepted.” When this information is stored in databases for later use or sale, potential security risks arise. If it is
intercepted during transport (electronic or
otherwise), or accessed by an unauthorized
party, the information now becomes more than
just a concern about privacy related to which
products consumers buy or which books they
read, but it then becomes an opportunity for
identity theft.

Our recent study of German consumers found they feared losing privacy due to the introduction of RFID technology. Even though the potential advantages of RFID (such as enhanced after-sales services) are well understood by a solid majority of consumers, fear seems to override most of these positive sentiments. (Gunther & Speikermann, 2005, p. 76)

Those in the RFID industry have responded
to concerns about privacy by developing EPC
tags that can be equipped with a kill function.
Tags that are killed are totally inoperable after
being sold to a consumer. Without global
standards it is difficult to predict whether kill
functions will be used widely in an attempt to
protect consumer privacy.
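A minimal Python sketch, assuming a simple tag object, of the kill behavior just described; the class and method names are hypothetical and do not reproduce the actual EPC command set.

class EPCTag:
    def __init__(self, epc: str):
        self.epc = epc
        self.killed = False

    def kill(self) -> None:
        """Permanently disable the tag (e.g., at the checkout counter)."""
        self.killed = True

    def respond_to_reader(self):
        # A killed tag is totally inoperable: it no longer answers.
        return None if self.killed else self.epc

tag = EPCTag("001234567890110000000001")
print(tag.respond_to_reader())  # readable while in the supply chain
tag.kill()                      # executed when the item is sold
print(tag.respond_to_reader())  # None: skimming is no longer possible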

INDUSTRY IMPACT

Healthcare

Health information management is a critical concern for any medical facility. RFID technology offers a comprehensive solution to managing patient data, staff, equipment, pharmaceuticals, supplies, or instruments because it can be used for both object identification and tracking. While the application of this technology offers many opportunities for hospital administrations to improve service delivery and patient care, ethical concerns also prevail. As with any new technology, there is a delicate balance between getting the most out of RFID without infringing upon patients’ rights. The ethical debate over electronic supervision with RFID technology in the healthcare industry comes down to weighing the benefits of patient safety against the risks of patient confidentiality.

The Benefits

Applications of RFID in healthcare facilities vary. In 2005, Correa, Alvarez Gil, and Redin identified five areas where the technology has been implemented in healthcare, including “workflow improvements, maintenance of medical equipment, patients’ identification, drug’s procurement and administration, and inventory management” (p. 3). Each area can benefit from the technology in different ways; several examples were identified in the literature.
Preventing medical and medication errors is one reason for supporting RFID in healthcare (FDA, 2003). An example of how this technology can prevent such errors is the Surgichip Tag Surgical Marker System. This tagging system marks parts of a patient’s body for surgery, minimizing the likelihood of a procedure being performed on the wrong side or wrong patient (FDA, 2005). Similarly, medication errors such as wrong dosage or wrong drug can also be prevented (FDA, 2003).
In addition to error prevention, another facet
of patient safety that can benefit from RFID
technology is physical security. Because
strategically placed readers can track people,
equipment, staff, or instruments, many
maternity wards frequently use RFID as a
security measure. The technology helps to
ensure the right baby stays with and goes home
with the right parents, and deters would-be
abductors (Correa et al., 2005). Similarly, older
patients in assisted-living facilities can be
monitored, particularly those with memory
impairments who may be prone to wandering
out of the facility and unable to find their way
back (Baard, 2004). Some argue that while this
provides a sense of security for the older
patient’s family, it is a violation of the patient’s
right to privacy. The facility has to gain
permission from the patient to monitor him or
her, but the patient may not be mentally
competent enough to know what it is they are
signing (Baard, 2004).

The Challenges

Ironically, the area that can benefit the most from RFID technology also has the most risk associated with it: patient safety. Confidentiality, personal privacy, and data security are the main challenges hospitals face when employing RFID technology (Borriello, 2005). The biggest
challenge in healthcare information
management today stems from the fact that so
much of it is still on paper (National Committee
on Vital and Health Statistics, 2000). In
addition to being a
time-consuming task, recording patient information by hand leaves room for error in transcribing. Elimination or reduction of handwritten clinical data in healthcare facilities via the implementation of electronic health records could alleviate a lot of the margin of error, and attributes of RFID make it an attractive technology solution for this problem (Terry, 2004). A tag could store information about what medications and procedures were given when, as well as the patient’s location in the facility at any given time.
However, securing the information and controlling who has access to read or write to the patient’s tag is a threat to the wide adoption of RFID in hospitals. Data accumulation, or how it will be used and who will have access to it, is a critical concern for hospitals (Neild, Heatley, Kalawsky, & Bowman, 2004). The industry is making progress towards a universal electronic health record, but none has been able to ensure an appropriate level of data security. Implementing a system that provides the level of data security necessary can be cost prohibitive for many facilities (Terry, 2004).

Retail

Retailers like Wal-Mart, Prada, Target, and Walgreens are also embracing the use of RFID technology, primarily for inventory tracking. Retailers are constantly looking for ways of improving the balance between inventory supply and consumer demand. They want to make sure there are enough products on the shelves to meet demand, but not so much that they are sitting in a warehouse taking up costly inventory space. The use of RFID technology is viewed as one of the more promising tools to improve visibility of inventory almost instantly.

The Benefits

Retailers can see benefits of using RFID in the form of reduction in cost and increase in revenue. RFID provides benefits to retailers to aid with the reduction in theft, to improve the matching of supply to demand for a product, and to improve the speed of distribution (Jones et al., 2005). Current research suggests that RFID is being used to improve customer service; improve freshness of products; track products for warranty and recalls; improve efficiency of the supply chain; reduce shrinkage, theft, and counterfeit goods; and track the sales of products. By leaving tags active, retailers can offer enhanced after-sales services for warranties and recalls. Companies can be proactive and notify customers of a potential defect or recall.

The Challenges

From a consumer standpoint, the privacy threat comes when RFID tags remain active once the consumer leaves the store. Consumers are concerned with the use of data collected by retailers. Even though many consumers use loyalty cards, RFID adds a greater threat because each tag can contain more information. In addition, RFID tags can be attached to goods without the consumers’ knowledge. Consumers are concerned that retailers are collecting data on and conducting surveillance of consumers’ shopping behaviors. There is a concern that retailers will collect the data on the customer, build profiles, and in turn create different pricing strategies or different levels of service for customers based on these profiles (Jones, Clarke-Hill, Comfort, Hillier, & Shears, 2004). Consumers are also concerned that data collected by retailers will be sold to other retailers. In addition, there is a concern of RFID misuse by criminals who have access to RFID scanners. According to Eckfeldt (2005):

Major companies worldwide have scrapped RFID programs following consumer backlash, and several U.S. states, including California and Massachusetts, are considering whether to implement RFID-specific privacy policies. (p. 77)
Retailers who want to embrace RFID technology should attempt to develop ways to gain the customer’s confidence. Millar (2004) provides four privacy concepts that should be considered when implementing EPC and RFID. These privacy concepts include:

1. Collection of personal data must be fair and lawful.
2. Consumers have the right to be informed about the use of the tags.
3. Personal data must be maintained in a secure fashion.
4. Consumers must be provided with reasonable access to their personal information.

To gain the customer’s confidence, retailers should ensure that customers are aware that data are being collected, have knowledge of what type of data are being collected, and understand how the data will be used. If the consumer understands how the data are being used and the consumer obtains a benefit with the use of RFID, they may be more willing to accept RFID data collection practices by retail organizations.
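As a rough illustration of how a retailer might operationalize Millar's four concepts, the Python sketch below gates data collection on all four conditions; the record fields are assumed names for illustration, not a published schema.

def may_collect(record: dict) -> bool:
    """Allow collection only if all four privacy concepts are met."""
    return (record.get("collection_fair_and_lawful", False)    # concept 1
            and record.get("consumer_informed", False)         # concept 2
            and record.get("stored_securely", False)           # concept 3
            and record.get("consumer_access_provided", False)) # concept 4

purchase = {
    "collection_fair_and_lawful": True,
    "consumer_informed": True,         # shelf signage discloses RFID use
    "stored_securely": True,
    "consumer_access_provided": False, # no way yet to review one's data
}
print(may_collect(purchase))  # False: collection should not proceed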

FUTURE PROSPECTS

Writing about RFID technology in general, Borriello (2005) notes:

As RFID technologists, application developers and consumers, we must be vigilant as to how these systems are designed not only for the sake of efficiency and cost but also to safeguard consumers’ privacy and instill trust in the technology.

This statement is even more significant when speaking of RFID in terms of its application in healthcare. If healthcare facilities are careful about their implementation strategies and ensure the right amount of data security, this technology can be incredibly useful in increasing security and efficiency at the same time.
Since many organizations are in the early stages of RFID usage, Cap Gemini Ernst and Young surveyed 1,000 consumers in February 2004 to obtain their perceptions of RFID and their willingness to purchase RFID products (Jones et al., 2004). The results from the survey indicated few customers are even aware of this technology (23% of those surveyed); of those aware of it, 42% had a favorable view, 10% had an unfavorable view, and 48% had no opinion. Those surveyed saw the benefits as “faster recovery of stolen items; better car anti-theft capabilities; savings from reduced product cost; better prescription security; and faster, more reliable product recall” (Jones et al., 2004, p. 52). In addition, those surveyed indicated what might lead them to purchase RFID products: “lower costs, conveniences, improved security, a better shopping experience and privacy assurances” (Jones et al., 2004, p. 52). As this survey indicated, customers are concerned with the use of data by a third party, being targeted by direct marketing, health issues, and environmental impact. Consumers stressed the need for better education so that they could know what was most important to them in regards to RFID.
Research indicates that retailers and companies investing in RFID technology should govern themselves to protect consumers, because there is little legislation or policy to protect the consumer from the misuse of data collected through this technology. They should provide notice to consumers that RFID tags are present. In addition, consumers should have a choice to disable the RFID tags on purchases.

CONCLUSION

Consumers will be more open to accept RFID in their lives if they can see the benefits for themselves. Organizations, such as those in the healthcare and retail industries, must take the initiative to educate the consumer regarding RFID capabilities. There needs to be open communication about the advantages of RFID and the potential risks associated with it.
Organizations should provide privacy policies to explain what data is being collected, how it is being used, and how it will be destroyed. Privacy policies should address data from cradle to grave.
In addition, organizations must tie the presence of tags directly to consumer benefit in terms of better service, reduced prices, better products, or quicker checkout (Eckfeldt, 2005). Companies must lower the risk to consumers of losing their personal data and privacy, and increase the benefit to consumers in terms of a more convenient shopping experience, lower prices, and quicker checkout (Eckfeldt, 2005).

REFERENCES

AIM (Association for Automatic Identification and Mobility). (2005). What is radio frequency identification (RFID)? Retrieved May 18, 2006, from www.aimglobal.org

Baard, M. (2004, March). RFID keeps track of seniors. Wired News. Retrieved January 27, 2006, from http://www.wired.com/news/medtech/0,1286,62723,00.html

Baase, S. (2003). The gift of fire: Social, legal and ethical issues for computers and the Internet. Upper Saddle River, NJ: Pearson Education.

Borriello, G. (Ed.). (2005). RFID: Tagging the world. Communications of the ACM, 48(9), 34-37. Retrieved January 27, 2006, from http://delivery.acm.org/10.1145/1090000/1082017/p34borriello.html

Bray, H. (2005). You need not be paranoid to fear RFID. Boston Globe. Retrieved December 1, 2005, from http://www.Bostonglobe.com

Correa, F.A., Alvarez Gil, M.J., & Redin, L.B. (2005, July). Benefits of connecting RFID and lean principles in health care. Retrieved January 27, 2006, from http://docubib.uc3m.es/WORKING-PAPERS/WB/wb054410.pdf
Eckfeldt, B. (2005). What does RFID do for the consumer? Communications of the ACM, 48(9), 77-79.

EPCGlobal. (2005). Frequently asked questions. Retrieved May 20, 2005, from http://www.epcglobalinc.org/about/faqs.html#6

FDA. (2003, March). Bar code label requirements for human drug products and blood. Proposed Rule Federal Register, 68, 50. Retrieved April 13, 2005, from http://www.fda.gov/OHRMS/dockets/98fr/03-5205.html

FDA. (2005). Technology for safer surgery. FDA Consumer, 39(1). Retrieved August 25, 2005, from http://www.findarticles.com/p/articles/mi_m1370/is_1_39/ai_n8694482

Gelinas, U., Sutton, S., & Fedorowicz, J. (2004). Business processes and information technology. Boston: Thompson South-Western.

Gunther, O., & Speikermann, S. (2005). RFID and the perception of control: The consumer’s view. Communications of the ACM, 48(9), 73-76.

Jones, P., Clarke-Hill, C., Comfort, D., Hillier, D., & Shears, P. (2005). Radio frequency identification and food retailing in the UK. British Food Journal, 107, 356-360.

Jones, P., Clarke-Hill, C., Comfort, D., Hillier, D., & Shears, P. (2004). Radio frequency identification in retailing and privacy and public policy issues. Management Research News, 27, 46-56.

Landt, J., & Catlin, B. (2001). Shrouds of time: The history of RFID. Retrieved from www.aimglobal.org/technologies/rfid/resources/shrouds_of_time.pdf

Millar, S.A. (2004). RFID & EPC systems. Paper, Film and Foil Converter, 78(11), 16.
National Committee on Vital and Health Statistics. (2000). Report to the secretary of the U.S. Department of Health and Human Services on uniform standards for patient medical record information. Retrieved April 13, 2005, from http://www.ncvhs.hhs.gov/hipaa00006.pdf

Neild, I., Heatley, D.J., Kalawsky, R.S., & Bowman, P.A. (2004). Sensor networks for continuous health monitoring. BT Technology Journal, 22(3). Retrieved January 27, 2006, from http://www.springerlink.com/media

Peslak, A. (2005). An ethical exploration of privacy and radio frequency identification. Journal of Business Ethics, 59, 327.

RFID Journal. (2005). What is RFID? Retrieved December 1, 2005, from http://www.rfidjournal.com/article/articleview/1339/2/129/

Terry, N.P. (2004). Electronic health records: International, structural and legal perspectives. Retrieved January 27, 2006, from http://law.slu.edu/nicolasterry/NTProf/ALM_Final.pdf

Yoshida, J. (2005). RFID policy seeks identity: Global economic body debates controversial technology. Electronic Engineering Times. Retrieved January 31, 2006, from Lexis Nexis Academic Universe Database.

KEY TERMS

Active Tag: A type of RFID tag that is powered by an internal battery and is typically designated as a read-write tag. Active tags generally have longer read ranges than passive tags.

Electronic Product Code (EPC): A number designed and used to uniquely identify a specific item in the supply chain. The number is placed on a chip and read with RFID technology.

Invisible Information Gathering: The collection of personal information without a person’s knowledge.

Kill Function: Disables the functionality of an RFID tag after consumers purchase a product.

Passive Tag: A type of RFID tag that operates without a power source and is typically designated as a read-only tag.

Privacy Enhancing Technology (PET): Hardware and software designed to protect an individual's privacy while using technology.

Read-Only Tag: A tag that only has read-only memory. When manufactured, this tag is pre-programmed with a unique and/or randomly assigned identification code.

Read-Write Tag: A tag that allows for full read-write capacity. A user can update information stored in a tag as often as necessary.

Skimming: When someone other than the intended party reads an RFID tag, usually without the owner’s knowledge.

This work was previously published in Encyclopedia of Information Ethics and Security, edited by M. Quigley, pp. 214-220, copyright 2007 by Information Science Reference (an imprint of IGI Global).


Chapter 7.2
Privacy Issues of Applying RFID in Retail Industry
Haifei Li
Union University, USA

Patrick C. K. Hung
University of Ontario Institute of Technology, Canada

Jia Zhang
Northern Illinois University, USA

David Ahn
Nyack College, USA

EXECUTIVE SUMMARY

Retail industry poses typical enterprise computing challenges, since a retailer normally deals with multiple parties that belong to different organizations (i.e., suppliers, manufacturers, distributors, end consumers). Capable of enabling retailers to effectively and efficiently manage merchandise transferring among various parties, Radio Frequency Identification (RFID) is an emerging technology that potentially could revolutionize the way retailers do business. With the dramatic price drop of RFID tags, it is possible that RFID could be applied to each item sold by a retailer. However, RFID technology poses critical privacy challenges. If not properly used, the data stored in RFID could be abused and, thus, cause privacy concerns for end consumers. In this article, we first analyze the potential privacy issue of RFID utilization. Then we propose a privacy authorization model that aims to precisely define comprehensive RFID privacy policies. Extended from the role-based access control model, our privacy authorization model ensures the special needs of RFID-related privacy protection. These policies are designed from the perspective of end consumers, whose privacy rights potentially could be violated. Finally, we explore the feasibility of applying Enterprise Privacy Authorization Language (EPAL) as the vehicle for specifying RFID-related privacy rules.
INTRODUCTION

In present retail industry, retailers are under tremendous pressure to improve efficiency. One way to increase productivity is through the adoption of new technologies. History has revealed that retailers are the early adopters of Electronic Data Interchange (EDI) and Business-to-Business (B2B) e-commerce. The benefits of adopting these new technologies are obvious: reduced time to market and reduced cost associated with office and manufacturing floor automation. In recent years, Radio Frequency Identification (RFID) has caught significant attention in the retail industry. RFID is a generic term for the technologies that use radio waves to automatically identify individual items wirelessly. RFID is capable of enabling retailers to effectively and efficiently track the entire circulation process of items from suppliers to end users. It can identify, orientate, and trace objects directly and continuously. In addition, RFID is able to deliver information at real time. As a result, RFID is considered an emerging technology that potentially could revolutionize the way retailers do business. Among other examples, Wal-Mart mandated its top 100 suppliers to use RFID by January 2005 (Vijayan & Brewin, 2003); the U.S. Department of Defense also made a similar request to its military suppliers (U.S. Department of Defense, 2003).

Although it seems like RFID is a boon to e-commerce, the actual adoption of RFID in retail industry is quite slow (Bradner, 2005). Retail industry poses typical enterprise computing (Neogi & Ghosal, 2004) challenges, as a retailer normally deals with multiple parties belonging to different organizations (i.e., suppliers, manufacturers, distributors, end consumers). Nowadays, the focus of enterprise computing efforts of retailers mainly aims at suppliers. To date, there has been little work conducted on how to provide enterprise-level computing capability for individual customers. In addition to the security issue, one such capability we have identified is consumer privacy protection. There is a growing concern for data privacy among businesses and consumers due to the possible unwanted revelation of confidential or personal data stored within RFID devices.

Privacy is a state or condition of limited access to a person (Schoeman, 1984). In particular, information privacy refers to an individual’s right to determine how, when, and to what extent personal information will be released to people or to organizations (Westin, 1967). To date, information privacy mainly aims to ensure the confidentiality of sensitive information. In other words, one major objective of enforcing privacy is to protect personally identifiable information (PII). Many authorization technologies can be applied to protect PII. However, information privacy contains other privacy concepts, such as purpose and obligation (Fischer-Hubner, 2001). In more detail, authorization focuses on preventing unauthorized users from accessing sensitive information, while privacy focuses on managing authorized users to use information effectively and to achieve an organization’s strategy within necessary constraints (Bucker et al., 2003).

In addition, privacy control does not focus on individual subjects. A subject releases his or her data to the custody of an enterprise with an agreement to the set of purposes for which the data may be used. In the U.S., the Privacy Act of 1974 requires that federal agencies grant individuals access to their identifiable records that are maintained by the agency, ensure the accuracy and timeliness of existing information, and limit the collection of unnecessary information and the disclosure of identifiable information to third parties. In summary, the U.S. mostly relies on self-regulation and limited legislation. However, federal agencies circumvent these constraints by subscribing to commercial surrogates, who collect and store the same data with no constraints. Because of the identity thefts that occurred at ChoicePoint and LexisNexis in early 2005, U.S. lawmakers have pushed for more aggressive data privacy legislation (Gross, 2005).

In contrast, the European Union (EU) Data Protection Directive (Steinke, 2002) contains two statements that contradict the U.S. act. The first statement requires that an organization must inform individuals why it collects and uses information, how to contact the organization, and the types of third parties to which it discloses the information. The second statement requires that personal data on EU citizens only may be transferred to countries outside the 15-nation bloc that adopt these rules or are deemed to provide adequate protection for the data. As a result, these two statements imply that no information of any EU citizen can be transferred to the U.S. due to the conflicts between the two privacy acts. Consequently, these policies create obstacles for conducting business activities between the EU and the U.S. To solve the problem, the U.S. government has made a voluntary scheme called Safe Harbor to provide an adequate level of data protection that safeguards transfers of personal data to the U.S. from the EU. U.S. companies conducting business in the EU must certify to the U.S. Department of Commerce that they will follow the regulations of the EU directive. Any violation is subject to prosecution by the Federal Trade Commission (FTC) for deceptive business practices. Furthermore, based on a recent survey, bank officers said that they had ongoing concerns, mostly procedural, about how to handle the anticipated privacy regulations of the U.S. The Gramm-Leach-Bliley Act (GLB) requires financial institutions regularly to communicate privacy policies to customers and to provide adequate opportunities for opting out of personal information disclosure to non-affiliated third parties (Hinde, 2002). All of this information indicates that privacy is currently a critical topic.

Therefore, in our opinion, the traditional view of an authorization model should be extended with an enterprise-wide privacy policy in order to manage and enforce individual privacy preferences. In this article, we aim to propose a privacy authorization model and to explore its implementation issues, focusing on language specification. The remainder of the article is organized as follows. In the second section, we briefly introduce the background of RFID technology and introduce the problem domain. In the third section, we discuss related work. In the fourth section, we propose our RFID-oriented privacy authorization model. In the fifth section, we discuss the design of the model. In the sixth section, we discuss the implementation of the model. In the seventh section, we perform self-assessments. In the eighth section, we make conclusions and discuss future work.

BACKGROUND AND PROBLEM DOMAIN

Due to its potential to dramatically increase productivities, many organizations have shown strong interest in the area of applying RFID technology to retail industry. Among others, IEEE has played an essential role in the rise of wireless ID by sponsoring conferences and publishing papers in this area (Leventon, 2005); IBM has been active in pursuing business opportunities for several years (IBM, 2004); researchers and engineers at HP have developed an RFID-based solution for tracking IT assets (Schwartz, 2005). In this section, we will briefly introduce the basic concept of RFID and then discuss the problem domain to be addressed.
A Typical RFID System

Radio Frequency Identification (RFID) is a generic term for the technologies that use radio waves to automatically identify individual items. A typical RFID system contains three components: an RFID tag, an RFID reader, and a computer network (see Figure 1). An RFID tag is actually a microchip with a coiled antenna. When an RFID tag receives electromagnetic waves from the reader, it sends stored data to the reader. An RFID reader can read and write data, depending on the types of RFID tags with which it interacts. A reader also can send data to the associated computer network. A computer network can receive data from the reader and perform further processing on the data collected. Potentially, computers also can send data to readers. Figure 1 illustrates the main components of an RFID system as well as the interactions between them.

Figure 1. Main components of an RFID system (a tag holds data in a microchip and transmits them through its antenna to a reader; the reader uses radio waves to read the tag and sends the data to back-end computers that collect and process the data; potentially, a reader can write data back to the RFID tag, and the computers can send data to a reader)

Requirements Analysis

The RFID system illustrated in Figure 1 is a typical scenario used by a retailer. For example, big retail companies like Wal-Mart with large database infrastructures can use such an RFID system to keep track of where individual cartons of goods are in their supply chain or, perhaps someday in the future, what products are in a shopper’s physical cart. However, although RFID is a boon to retail industry, it comes at the high price of shaky security and privacy. For example, a retailer must deal with different categories of users that may have access to the data stored in RFID tags. These users could be professional buyers, cashiers, store managers, warehouse keepers, warehouse managers, and so forth. The data contained in RFID tags may or may not be PII. If it is PII, people naturally have big concerns for possible privacy invasion when various users have access to RFID tags. Even if the information is not personally identifiable, there is still some concern, because the information subject potentially could be identified. That is why a new role came into place called privacy policy enforcers, which may have alternative names such as CPO (Chief Privacy Officer).

At present, interest remains high enough in implementing RFID that the lack of security and privacy is not a bottleneck to retail industry’s adoption of this new business model. But why wait to find a solution? This is the momentum of this research that aims at investigating the security and privacy issues of an RFID system and exploring
possible solutions. In this article, we focus on the privacy enforcement issue.

Information privacy usually is concerned with the confidentiality of sensitive information. In general, information privacy is enforced by privacy policies that describe an organization’s data practices concerning (1) what information it collects from individuals (subjects), (2) for what purpose the information (objects) will be used, (3) whether the organization exposes access to the information, (4) who are the recipients of any result generated from the information, (5) how long the information will be retained, and (6) who will be informed in the circumstances of dispute.

One of the most significant objectives of enforcing privacy is to protect PII. Traditionally, many authorization technologies were applied directly to protect PII. However, traditional coarse-grained authorization techniques merely focus on preventing unauthorized users from accessing sensitive information. Mostly, authorization privileges are associated with specific roles and are immutable after granted. In an enterprise retail RFID system, however, authorization alone cannot satisfy privacy requirements. Such a system requires granting authorized users the ability to use information effectively and to achieve an organization’s strategy within necessary constraints. Therefore, much more complicated authorization mechanisms need to be explored. Various merchandise may enforce different access rights for different roles at different stages. In addition, privacy encompasses broader concepts than authorization, such as purpose and obligation. Therefore, in our opinion, the traditional view of an authorization model should be extended with an enterprise-wide privacy policy for managing and enforcing individual privacy preferences. In more detail, such a privacy authorization model needs to exhibit the following attributes:

• Interoperability: A privacy authorization model should be able to interpret and use credentials issued by any other issuing authorities.
• Expressiveness: Credentials should contain not only an individual identity but also other useful information, such as purpose and obligation.
• Extensibility: The credential system should be flexible enough to register new individuals and organizations with new types of information.
• Anonymity: An individual identifier should not be revealed under any circumstance.
• Scalability: Credential systems should be robust enough to handle the increasing number of users, service providers, and issuing authorities. Scalability here means that an application can be used more frequently or by more users without the loss of performance.

In order to fulfill the challenging requirements of such an enterprise-level RFID-oriented privacy authorization model, a powerful and flexible description language for extensible privacy policies is imperative. Therefore, our strategy in this research is to explore an enterprise RFID-oriented privacy authorization model and to further explore the effective and efficient description of privacy policies. In addition, we indicated earlier that an enterprise retailer needs to deal with different categories of users that may have access to the data stored in RFID tags. As we will explore further in the following discussions, these different roles may require different authorization models. Since each RFID tag uniquely identifies specific merchandise, we will use the terms RFID tag and the specific merchandise interchangeably in the remainder of the article.
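The six data practices enumerated above map naturally onto a policy record. The following Python sketch is an assumed illustration of such a record, not the authors' schema.

from dataclasses import dataclass
from typing import List

@dataclass
class PrivacyPolicy:
    collected_items: List[str]   # (1) what is collected from subjects
    purposes: List[str]          # (2) what the information is used for
    access_exposed: bool         # (3) whether access to it is exposed
    recipients: List[str]        # (4) who receives derived results
    retention_days: int          # (5) how long it is retained
    dispute_contacts: List[str]  # (6) who is informed in a dispute

policy = PrivacyPolicy(
    collected_items=["EPC", "purchase timestamp"],
    purposes=["inventory tracking", "product recall"],
    access_exposed=False,
    recipients=["store manager"],
    retention_days=365,
    dispute_contacts=["Chief Privacy Officer"],
)
print(policy.purposes)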
RELATED WORK

Global privacy advocates such as Electronic Frontier Foundation (http://www.eff.org) and Electronic Privacy Information Center (http://www.epic.org) have been active in raising the public’s awareness of potential privacy violations in RFID implementations. In an RFID position statement (Privacy and Civil Liberties Organizations, 2003) issued and endorsed by more than 40 organizations and individuals, five potential threats to privacy and civil liberties were identified:

• Hidden Placement of Tags. RFID tags may be embedded into/onto objects and documents without the knowledge of the individuals who obtain those items.
• Unique Identifiers for All Objects Worldwide. The Electronic Product Code (EPC) potentially enables every object on earth to have its own unique ID.
• Massive Data Aggregation. RFID deployment requires the creation of massive databases containing unique tag data. These records could be linked with personal identifiable data, especially as computer memory and processing capacities expand.
• Hidden Readers. Tags can be read from a distance, not restricted to line of sight, by readers that may be incorporated invisibly into nearly any environment where human beings or items congregate.
• Individual Tracking and Profiling. If personal identity were linked with unique RFID tag numbers, individuals could be profiled and tracked without their knowledge or consent.

Even though RFID may cause privacy concerns, the technology does not invade information privacy. It is the people who use the technology that invade privacy. If properly used, RFID is a useful and practical technology that can greatly benefit the economy and society. Some critics (Cline, 2003) believe that the risk of RFID privacy invasion was exaggerated. In fact, in the same report released by a coalition of privacy advocates, the following examples of acceptable uses of RFID were provided:

• Tracking pharmaceuticals from the point of manufacturing to the point of dispensing.
• Tracking manufactured goods from the point of manufacturing to the location where they will be shelved for sale.
• Detection of items containing toxic substances.

Generic privacy authorization technologies have been investigated for a long period of time (Powers, Ashley & Schunter, 2002). In order to address the growing need for standard and uniform privacy authorization languages, many companies and organizations have been marketing various privacy tools in the past few years (Senicar, Jerman-Blazic & Klobucar, 2003). For example, the Privacy Preferences Project (P3P) working group at World Wide Web Consortium (W3C) developed a P3P specification to enable Web sites to express their privacy practices in a standard and machine-readable XML format. P3P user agents allow users to choose to be informed of site practices and to automate decision-making processes based on the Web site’s privacy practices. P3P also provides a language called P3P Preference Exchange Language 1.0 (APPEL), which is used to express the user’s preferences for making automated or semi-automated decisions regarding the acceptability of machine-readable privacy policies from P3P-enabled Web sites (http://www.w3.org/TR/P3P-preferences). P3P provides a base schema for data collection and a vocabulary to express purposes, recipients, and retention policies. However, P3P efforts cannot be applied directly to RFID-oriented retail industry. Although the P3P mechanisms capture common elements of privacy policies, Web sites may have to provide further explanation in human-readable policies. It is also obvious that P3P and APPEL are applicable only to the extent of the Web server and browser. Furthermore, P3P does not provide authorization mechanisms to check a given access request against a stated privacy policy.

Another example is the eXtensible rights Markup Language (XrML) (Wang et al., 2002), which can be used to describe the rights and conditions for owning or distributing digital media. XrML concepts include license, grant, principal, right, resource, and condition. Based on the specification of licenses, an XrML agent can determine whether to grant a certain right on a certain resource to a certain principal. However, XrML does not consider the privacy entities in its access control model.

RFID-ORIENTED PRIVACY AUTHORIZATION MODEL

In this section, we propose an enterprise-level RFID-oriented privacy authorization model. The model is composed of two parts: a role-based retailer enterprise boundary framework and an RFID access control framework.

Role-Based Retailer Enterprise Boundary Framework

A virtual enterprise (i.e., retailer) boundary is introduced to ensure authorization-based privacy. As shown in Figure 2, all users that have access to RFID tags are divided into three categories: suppliers, role players, and end users. A supplier is a provider of an RFID tag, who can be either the original manufacturer or another retailer. A role player that serves in an enterprise retail store can be a cashier, a store manager, and so forth. An end user takes an RFID tag out of an enterprise retailer’s boundary, which can be either a professional buyer or another retail store. It should be noted that an entity can play different roles under different circumstances. For example, a retail store can act as a supplier for another retail store or an end user of yet another retail store. A person can act as a role player in one retail store and an end user of merchandise, if he or she actually pays for it.

Figure 2. An enterprise (retailer)’s boundary: original data from the supplier are stored in the RFID tag; role players inside the boundary (Role Player 1: cashier; Role Player 2: store manager; …; Role Player n: tag keeper) read from and may write to the tag, which passes through a GateKeeper before reaching the end user
In other words, the journey of merchandise can start from an original supplier and flow through multiple enterprise retailers before finally reaching a customer’s hand. Figure 2 also shows that the circulation of an RFID tag is a directional flow from a supplier to one or more enterprise retailers and then to an end user.

As illustrated in Figure 2 at the left-hand side, suppliers provide retailers with the original data from the manufacturing facility. The data are embedded and stored into an RFID tag. After the RFID tag enters the enterprise’s boundary, different role players (e.g., cashiers, store managers, etc.) can interact with it. A solid line from the tag to a role player refers to an action of reading the tag data, while a dotted line from the role player to the tag refers to a possible action of writing data to the tag. Before the tag leaves the enterprise boundary, such as a retailer store (i.e., purchased by an end user), as shown in Figure 2, the RFID tag needs to pass through a GateKeeper, an automated program to ensure that the customer’s privacy will be properly protected. For example, it will examine whether some role players (e.g., cashiers) intentionally or unintentionally write unauthorized data into the tag.

In short, our role-based retailer enterprise boundary framework provides fundamental specification and context to perform role-based access control.

RFID Access Control Framework

Figure 3. RFID-oriented access control framework (a request, together with its purpose, passes through RFID access control, which extends role-based access control and outputs a permission with any attached obligations)

Based on the previous specification, an access control system should enforce the policy stated by the enterprise. Under this circumstance, an information access control mechanism also should be embedded with privacy-enhancing technologies. All these evidences show the importance of integrating privacy concepts into access control mechanisms in order to resolve the RFID privacy problems.

Let us take a quick review of the traditional access control mechanism. The family of Role-Based Access Control (RBAC) is commonly referred to as the RBAC96 model, which focuses on security control using roles and organizations. RBAC96 presents a conceptual model to describe different approaches such as base model, role hierarchies, constraint model, and consolidated model. In particular, the National Institute of Standards and Technology (NIST) conducted
market analysis for identifying RBAC features into two layouts: the RBAC Reference Model and the RBAC Functional Specification. The RBAC Reference Model describes a common vocabulary of RBAC element sets and relations for specifying requirements and the scope of the RBAC features included in the standard. The RBAC Functional Specification describes the requirements of administrative operations for creating and managing RBAC element sets and relations and system functions for creating and managing RBAC attributes on user sessions and making access control decisions.

In order to integrate privacy concepts into the access control mechanism to resolve the RFID privacy problems, we propose an RBAC-extended framework. Our proposed model is based on the core RBAC model discussed in OASIS (2003), with privacy-based extensions tailored for RFID. As shown in Figure 3, RFID-oriented access control is regulated by an access control boundary that is actually the retailer enterprise boundary discussed in the previous section. When a request arrives at the access control boundary, the core RBAC is enhanced with the privacy-based extension (e.g., purpose and obligation). Once a decision is made, the access control permission to the subject either can be granted or denied.

In order to facilitate enterprise-level RFID privacy control, it is imperative to automatically verify and validate privacy information. Therefore, we formalize here our model using the ad hoc standard privacy description language eXtensible Access Control Markup Language (XACML) from OASIS. Our model can be notated as a tuple that includes mainly the following entities:

• USERS, ROLES, OPS, and OBS (users, roles, operations, and objects, respectively)
• UA ⊆ USERS × ROLES, a many-to-many mapping between users and roles (user-to-role assignment relation)
• assigned_users(r:ROLES) → 2^USERS, the mapping of role r onto a set of users. Formally, assigned_users(r) = {u ∈ USERS | (u,r) ∈ UA}
• PRMS = 2^(OPS × OBS), the set of permissions
• PA ⊆ PRMS × ROLES, a many-to-many mapping between permissions and roles (role-permission assignment relation)
• assigned_permissions(r:ROLES) → 2^PRMS, the mapping of role r onto a set of permissions. Formally, assigned_permissions(r) = {p ∈ PRMS | (p,r) ∈ PA}
• SUBJECTS, the set of subjects
• subject_user(s:SUBJECTS) → USERS, the mapping of subject s onto the subject’s associated user
• subject_roles(s:SUBJECTS) → 2^ROLES, the mapping of subject s onto a set of roles. Formally, subject_roles(si) ⊆ {r ∈ ROLES | (subject_user(si), r) ∈ UA}

The following are the sets of privacy-based entities (e.g., purpose and obligation) proposed to the core RBAC, as shown in Figure 3:

• PURPOSES = {pp1, pp2, …, ppn} is the set of n purposes. In the RFID scenario, an entity has to describe its purpose(s) of a request operation.
• OBLIGATIONS = {obl1, obl2, …, obln} is the set of n obligations that may be taken after the decision of permission is made. In general, an obligation is opaque and is returned after the permission is granted. The obligations describe what promises a subject must make after gaining permission.

In RBAC, a subject can never have an active role that is not authorized for its user. With all these privacy-based extensions (purposes and obligations), the role authorization in the core RBAC model is revised as follows:
∀ s:SUBJECTS, u:USERS, r:ROLES, op:OPS, {o1, o2, …, oi} ⊆ OBS, {pp1, pp2, …, ppj} ⊆ PURPOSES, {obl1, obl2, …, obll} ⊆ OBLIGATIONS:

r ∈ subject_roles(s) ∧ u ∈ subject_user(s) ⇒ u ∈ assigned_users(r)

• access: SUBJECTS × OPS × OBS × PURPOSES × OBLIGATIONS → BOOLEAN
access(s, op, {o1, o2, …, oi}, {pp1, pp2, …, ppj}, {obl1, obl2, …, obll}) = 1 if subject s can access any object in {o1, o2, …, oi} using operation op for any purpose in {pp1, pp2, …, ppj} with a set of obligations {obl1, obl2, …, obll}, and (0, ∅, ∅) otherwise.

DESIGN

In order to realize our proposed model, we need to consider two core issues: authorization design and privacy policy specification.

Design of Authorization

Authorization in our model implies two layers of control: authorization over users and authorization over data. Authorization over users is the prerequisite for authorization over data.

Before accessing an RFID tag, a user has to be authorized into a specific retailer enterprise boundary. Furthermore, all role players (e.g., cashiers, store managers, etc.) have to use on-site equipment to pass the authorization process (e.g., using unique login ID and password pairs). As shown in Figure 2, these role players are delimited by an enterprise boundary. An enterprise boundary is defined for a specific retailer. Different retailers define their respective boundaries, and they may not share the access of their boundaries. Meanwhile, since retailers may have hierarchical relationships among them (e.g., parent and child retailer companies), boundaries may overlap with each other. Privacy protection upon overlapped boundaries may become highly complex. For simplicity, in this article, we do not consider authorization across overlapped boundaries.

After a user is authorized to enter a retailer enterprise boundary, the second layer of authorization control is applied over data. Different roles of authorized users have distinct levels of access to RFID data. For example, not all kinds of information can be embedded and stored into an RFID tag, even from an authorized role player. In a more detailed scenario, a cashier may store the last four digits of a customer’s credit card number into an RFID tag but may not be allowed to input all the digits. In other words, authorization over data is ruled by the access rights associated with each role predefined by each specific retailer. A retailer customizes roles that have access to the enterprise together with relative access rights. In this article, we focus on the privacy policies that can be defined in access rights.

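A small, purely hypothetical sketch of the two layers described above follows; the retailer name, credentials, and role rights are invented for illustration and are not part of the proposed model's implementation.

# Layer 1 admits a user into a retailer's enterprise boundary (e.g., a
# login on on-site equipment); layer 2 checks role-specific access
# rights over RFID data. All names and structures are assumptions.
credentials = {("acme_retail", "alice"): "s3cret"}       # boundary logins
role_rights = {                                          # per-retailer role rights
    ("acme_retail", "cashier"): {("write", "cc_last4"), ("read", "price")},
}

def enter_boundary(retailer, user, password):
    """Layer 1: authorization over users."""
    return credentials.get((retailer, user)) == password

def may_access(retailer, role, operation, field):
    """Layer 2: authorization over data, applied only inside a boundary."""
    return (operation, field) in role_rights.get((retailer, role), set())

if enter_boundary("acme_retail", "alice", "s3cret"):
    print(may_access("acme_retail", "cashier", "write", "cc_last4"))  # True
    print(may_access("acme_retail", "cashier", "write", "cc_full"))   # False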
Design of Privacy Policy Specification

In order to guard and ensure privacy authorization, one core issue is that we have to explore approaches to precisely describe and define privacy policies. In this article, a privacy policy is generally defined as a set of rules and practices that specify or regulate how a system or organization provides services to protected information. A privacy assertion is typically scrutinized in the context of a privacy policy. In more detail, the engineering of a privacy policy starts with risk analysis and ends with a set of privacy assertions that is ready for integration into the system architecture of a subject. Risk analysis identifies privacy threats in a business process and then forms a set of privacy assertions that refer to rules and practices to regulate how sensitive data or activity information will be managed and protected within a loosely coupled execution environment. A privacy policy is often formalized or semi-formalized in an authorization model that provides a basis for a formal analysis of privacy properties. A powerful and extensible description language is thus in demand.
Instead of introducing yet another policy description language into the already overcrowded description language family, our strategy was first to examine the feasibility of utilizing existing related languages for the purpose of enterprise-level RFID privacy control. Our procedure was to examine every candidate policy description language to verify whether it was sufficient to realize the core privacy policies in our RFID-oriented privacy authorization model. Through this process, we finally found a suitable candidate called the Enterprise Privacy Authorization Language (EPAL) (Ashley, Hada, Karjoth, Powers & Schunter, 2003). EPAL is an interoperability language for defining enterprise privacy policies. It is used to govern data-handling practices in the context of fine-grained positive and negative authorization rights. The detailed realization of our RFID-oriented privacy policy specification using EPAL will be discussed in the next section. Here, we first briefly discuss how to utilize the basic terminology of EPAL to define fundamental concepts of RFID-oriented privacy policies.
EPAL includes lists of hierarchies of data categories, user categories, and purposes, as well as sets of actions, obligations, and conditions. Data categories can be used to define categories of collected data handled differently from a privacy perspective (e.g., RFID information). User categories can be used to describe the users or groups (e.g., cashiers) that can access collected RFID data. Purposes can be used to model the intended service for which data are used (e.g., cash register payment). Actions can be used to model how data are used, such as reading, printing, and storing RFID information in computer systems. Obligations can be used to define actions that must be taken by the environment of EPAL. One example of an obligation is that a retailer must delete the collected data after one year of storage. Overall, these elements can be used to specify privacy authorization rules, which allow or deny actions on data categories by user categories for certain purposes under certain conditions while mandating certain obligations. Furthermore, privacy authorization rules are sorted by descending precedence in EPAL policies.
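The following sketch suggests how such precedence-ordered rules might be evaluated: rules are kept highest-precedence first, and the first rule whose fields match a request decides, returning its ruling and any obligations. The dictionary encoding is our own illustrative simplification, not EPAL syntax.

RULES = [  # highest precedence first
    {"ruling": "deny",  "user": "cashier", "data": "RFID",
     "purpose": "tracking", "action": "write", "obligations": []},
    {"ruling": "allow", "user": "cashier", "data": "RFID",
     "purpose": "payment", "action": "read",
     "obligations": ["delete_after_one_year"]},
]

def evaluate(user, data, purpose, action):
    for rule in RULES:  # scan in descending precedence
        if (rule["user"], rule["data"], rule["purpose"], rule["action"]) == \
           (user, data, purpose, action):
            return rule["ruling"], rule["obligations"]
    return "deny", []   # assumed default when no rule applies

print(evaluate("cashier", "RFID", "payment", "read"))
# ('allow', ['delete_after_one_year'])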
From a technical perspective, there are two major components in EPAL: Vocabulary and Privacy Policy. Enterprises usually use a single vocabulary to exchange policy data with other enterprises. Thus, the concept of vocabulary is somewhat similar to the concept of ontology. Since the recent version of the EPAL vocabulary only provides a basic structure for describing data categories, the semantic Web may help to provide explicit meaning to information available on the Web for automatic processing and information integration based on the concept of ontology.
After carefully examining EPAL, we decided to adopt EPAL as the tool to implement the privacy authorization model of RFID. Beyond the fact that EPAL is one of the most promising privacy authorization languages in the market, we have successfully utilized the basic building blocks of EPAL to define various comprehensive RFID-related privacy policies, which will be reported in detail in the next section.
REALIZATION

In this section, we discuss how we realize the core privacy policies in our RFID-oriented privacy authorization model utilizing essential EPAL constructs. The major RFID privacy policies are divided into two categories: (1) privacy authorization related to data collection; and (2) privacy authorization related to data processing.
Privacy Authorization Model Related to Data Collection

Privacy Policies on Cashiers' Handling of Private Information

Let us start from a simplified example, where an RFID tag is manufactured as read only. In the following example, the retailer specifies that the cashier can read any data stored in an RFID tag for payment purposes. This is a very basic policy for the task that a cashier normally conducts. It says that a cashier is able to read the data in order to get the price information (potentially stored in a computer, not in the RFID itself) by matching the data in the RFID.

<ALLOW
user-category = "cashier"
data-category = "RFID"
purpose = "payment"
operation = "read"
condition = "TRUE"/>

A more complicated situation is that some RFID tags are manufactured as read/write. Writing data into an RFID may cause privacy concerns. For example, if the cashier is able to write data into the RFID tags, the cashier may write some PII. This may happen if the cashier has malicious intentions, for example. Later on, when the same RFID tag is read by other readers, this PII may be unintentionally revealed to a third party. Therefore, a privacy policy needs to be defined in order to specify that writing PII into an RFID will be denied. The following is the essential part of the rule that says that cashiers are denied the right to write any PII into an RFID tag.

<DENY
user-category = "cashier"
data-category = "RFID"
purpose = "tracking"
operation = "write"
condition = "/CustomerRecord/Data/DataType=PII"/>

Guarantee That the Same Tag Cannot Be Read by Other Types of RFID Readers

While interoperability among heterogeneous information systems is highly appreciated, it is not a good idea to allow RFID readers to read all the information stored in an RFID tag. One privacy concern for the RFID technology is that data stored in one type of tag (e.g., tags used exclusively by Wal-Mart) should not be able to be read by other types of readers (e.g., readers from Target). The reason is obvious: it is unreasonable to design a system that encourages the sharing of information among competitors.

The following policy says that if the types of RFID tags do not match, the reading of the RFID data is denied. This type of privacy policy also helps to monitor the behavior of dishonest employees who may leak confidential information to competitors.

<DENY
user-category = "other"
data-category = "RFID"
purpose = "collect"
operation = "read"
condition = "/RFID/TagReaderType != /TagReader/TagReaderType"/>

Privacy Authorization Related to Data Processing

Guarantee That the RFID Tag Can Be Destroyed

The ultimate solution to prevent privacy violation in the RFID is to destroy the RFID tag permanently. One privacy policy for the retailer would be that the store manager (not the cashier) is allowed to destroy the RFID tag permanently after the customer has purchased the products. It may be a rarely used privacy policy, but it is desirable to provide this type of service to customers so that customers will be assured that their privacy is fully protected. Of course, the store manager has to get the customer's consent before destroying the RFID tag. In the following, we realize this policy by equipping a customer record with information regarding whether the destruction of the tag is needed.

<ALLOW
user-category = "store_manager"
data-category = "RFID"
purpose = "PreventViolation"
operation = "destroy"
condition = "/CustomerRecord/DestructionConsent = TRUE"/>

It is very important for customers to understand the consequences if they choose this option. Furthermore, customers may have to guarantee that they will not return the products. The reason is that if the RFID is destroyed, it will be difficult for the retailer to resell it. An alternative may be that the retailer can charge a small fee for the RFID disposal in order to compensate for the installation of a new RFID.

Guarantee That the RFID Tag Can Be Blocked

Recently, RSA Security developed a tag called the RSA Blocker Tag, which can effectively block the reading of data in an RFID tag. When a Blocker is in proximity to ordinary RFID tags, they benefit from its shielding behavior; when the Blocker tag is removed, ordinary RFID tags may be used normally. We realize this feature through a rule which indicates that the customer can get a free blocker tag if approval from the store manager has been granted. The core part of the rule is shown as follows.

<ALLOW
user-category = "store_manager"
data-category = "RFID"
purpose = "block"
operation = "Insert"
condition = "/CustomerRecord/BlockingRequest = TRUE"/>

Guarantee That the Collected Data Will Not Be Sold to the Third Party

In most cases, the customer may not be willing to reveal PII to a third party. Therefore, the retailer should define a policy rule to describe this. The following is the essential part of the definition.

<DENY
user-category = "store_manager"
data-category = "RFID"
purpose = "Marketing"
operation = "sell"
condition = "/CustomerRecord/ThirdPartyConsent = FALSE"/>
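To see how rules of this shape could be checked against a customer record, here is a hypothetical Python encoding of a few of the policies above; the condition lambdas stand in for the XPath-like conditions and the evaluation logic is our assumption, not the EPAL engine.

rules = [
    {"ruling": "deny",  "user": "cashier",       "purpose": "tracking",
     "op": "write",   "cond": lambda rec: rec["Data"]["DataType"] == "PII"},
    {"ruling": "allow", "user": "store_manager", "purpose": "PreventViolation",
     "op": "destroy", "cond": lambda rec: rec["DestructionConsent"]},
    {"ruling": "allow", "user": "store_manager", "purpose": "block",
     "op": "Insert",  "cond": lambda rec: rec["BlockingRequest"]},
    {"ruling": "deny",  "user": "store_manager", "purpose": "Marketing",
     "op": "sell",    "cond": lambda rec: not rec["ThirdPartyConsent"]},
]

record = {"Data": {"DataType": "PII"}, "DestructionConsent": True,
          "BlockingRequest": False, "ThirdPartyConsent": False}

def decide(user, purpose, op, rec):
    for r in rules:  # first matching rule with a true condition decides
        if (r["user"], r["purpose"], r["op"]) == (user, purpose, op) and r["cond"](rec):
            return r["ruling"]
    return "deny"  # assumed closed-world default

print(decide("store_manager", "PreventViolation", "destroy", record))  # allow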
ASSESSMENTS

We have just presented our privacy authorization model tailored for enterprise-level RFID usage. The advantages of our introduction of an enterprise (retailer) boundary are threefold: (1) it enables authorized users to access information effectively and to achieve a retailer's strategy within its boundary; (2) it prevents sensitive information from leaking out of the corresponding boundary; and (3) interfaces built upon enterprise boundaries can potentially provide flexible and friendly interaction paths between suppliers and retailers, retailers and retailers, and retailers and customers.

Our introduction of a two-layered authorization model over users and data establishes stronger protection of privacy control. We believe that our model will not only provide a flexible and powerful privacy protection model for the retail industry that wishes to adopt RFID technology but also strengthen the confidence of customers who will be involved with RFID technology.

Our model is associated with the capability of precisely defining comprehensive privacy policies. The policy description is realized partially based on EPAL, a language for specifying enterprise privacy policy. Our research proved that EPAL can be adopted to specify RFID-related privacy policy. The advantages of proving the effectiveness of an existing description language are threefold: (1) we save the time of developing yet another language for an already overcrowded language set; (2) the learning curve is largely shortened as well; and (3) using a proven and tested existing language adds confidence for stakeholders of the retail industry.

However, we do realize that our work in this article is just a starting point for further investigation and exploration of RFID-specific privacy specification and protection. Our adoption of the EPAL language is based upon our experiments, which prove that RFID-related privacy policies can be defined using EPAL constructs. We need to collect all possible privacy policies from different enterprise retailers in order to validate and complete the library of RFID-oriented privacy policies. This information could also help us to evaluate the usage of the EPAL language. In addition, we have not explored deeply the realization of a GateKeeper to guarantee that a privacy policy defined using our implementation will not be violated. In order to provide a comprehensive RFID-oriented privacy model, there is still serious research work ahead.

CONCLUSION AND FUTURE WORK

In this article, we have investigated the privacy design issues for adopting RFID in the retail industry. An enterprise RFID-oriented privacy authorization model is proposed. Since the retail industry has yet to widely use RFID for identifying each individual item to be sold, the work presented here may serve as a good starting point for more elaborate work in the future. However, we firmly believe that it is only a matter of time before we can actually use item-level RFIDs in retail stores. When RFID technologies become mainstream, which could be soon, privacy will become the bottleneck to their extensive adoption. In our view, it is never too early for researchers to consider the privacy implications of adopting RFID in the retail industry. The rest of the work will follow logically from that.

The following are the tasks we plan to conduct in the future. First, we want to gather best practices of RFID privacy. Since scenarios for applying RFID in the retail industry are futuristic in nature, we may have to depend mainly on the Internet to conduct the research. Second, we want to investigate the usage of P3P (Agre, 1997; Cranor, 2002) outside the domain of personal data collection for Web sites. Third, we want to conduct direct or indirect interviews with representatives of industrial companies and privacy advocates.

ACKNOWLEDGMENT

This research is partly funded by a discovery grant (NSERC PIN: 290666) from the Natural Science and Engineering Research Council (NSERC) of Canada.
REFERENCES

Agre, P. (1997). Technology and privacy: The new landscape. Cambridge, MA: MIT Press.

Ashley, P., Hada, S., Karjoth, G., Powers, C., & Schunter, M. (2003). Enterprise privacy authorization language (EPAL 1.2). Retrieved from http://www.w3.org/Submission/2003/SUBM-EPAL-20031110

Bradner, S. (2005). Security issues swamp RFID. Retrieved from http://www.techworld.com/mobility/features/index.cfm?featureid=1178

Bucker, A. et al. (2003). IBM Tivoli privacy manager solution design and best practices. IBM Redbooks. Retrieved from http://www.redbooks.ibm.com/redbooks/pdfs/sg246999.pdf

Cline, J. (2003). RFID privacy scare is overblown. Retrieved from http://www.computerworld.com/securitytopics/security/privacy/story/0,10801,87286,00.html

Cranor, L.F. (2002). Web privacy with P3P. New York: O'Reilly.

Fischer-Hubner, S. (2001). IT-security and privacy. In Lecture Notes in Computer Science (Vol. 1958). Springer-Verlag.

Gross, G. (2005). U.S. lawmakers push for data privacy legislation. Retrieved from http://www.computerworld.com/governmenttopics/government/legislation/story/0,10801,100405,00.html

Hinde, S. (2002). The perils of privacy. Orlando, FL: Elsevier Science.

IBM. (2004). Item-level RFID technology redefines retail operations with real-time, collaborative capabilities. Retrieved from http://www-1.ibm.com/industries/wireless/doc/content/bin/RFID_eBrief_Final_2a.pdf

Leventon, W. (2005, March 8). RFID tags take hold - IEEE plays role in the rise of wireless ID. The Institute.

Neosazgi, A., & Ghosal, S. (2004). Enterprise computing in the on demand era. In Proceedings of the 10th IEEE International Workshop on Future Trends of Distributed Computing Systems (FTDCS'04), Suzhou, China.

OASIS. (2003). eXtensible access control markup language (XACML), Version 1.0 (OASIS Standard). Retrieved from http://www.oasis-open.org/committees/xacml/repository/oasis-xacml-1.0.pdf

Powers, C.S., Ashley, P., & Schunter, M. (2002). Privacy promises, access control, and privacy management: Enforcing privacy throughout an enterprise by extending access control. In Proceedings of the 3rd International Symposium on Electronic Commerce.

Privacy and Civil Liberties Organizations. (2003). RFID position statement. Retrieved from http://www.privacyrights.org/ar/RFIDposition.htm

Schoeman, E.D. (1984). Philosophical dimensions of privacy: An anthology. New York: Cambridge University Press.

Schwartz, E. (2005). HP labs unveils RFID solutions. Computerworld.

Senicar, V., Jerman-Blazic, B., & Klobucar, T. (2003). Privacy-enhancing technologies: Approaches and development. Computer Standards & Interfaces, 25, 147-158.

Steinke, G. (2002). Data privacy approaches from US and EU perspectives. Telematics and Informatics, 19, 193-200.

U.S. Department of Defense. (2003). DoD announces radio frequency identification policy. Retrieved from http://www.dod.mil/releases/2003/nr20031023-0568.html

Vijayan, J., & Brewin, B. (2003). Wal-Mart backs RFID technology. Retrieved from http://www.computerworld.com/softwaretopics/erp/story/0,10801,82155,00.html

Wang, X. et al. (2002). XrML: eXtensible rights markup language. In Proceedings of the 2002 ACM Workshop on XML Security, Fairfax, Virginia.

Westin, A. (1967). Privacy and freedom. New York: Atheneum.

ENDNOTES

1. The third author is also a Guest Scientist of the National Institute of Standards and Technology (NIST).
2. This research is partly funded by a discovery grant (NSERC PIN: 290666) from the Natural Science and Engineering Research Council (NSERC) of Canada.
Chapter 7.3
An Evaluation of the RFID Security Benefits of the APF System: Hospital Patient Data Protection

John Ayoade
American University of Nigeria, Nigeria

Judith Symonds
Auckland University of Technology, New Zealand
ABSTRACT

The main features of RFID are the ability to identify objects without a line of sight between reader and tag, read/write capability, and the ability of readers to read many tags at the same time. The read/write capability allows information to be stored in the tags embedded in the objects as they travel through a system. Some applications require information to be stored in the tag and be retrieved by the readers. This paper discusses the security and privacy challenges involved in such applications and how the proposed and implemented prototype system, the Authentication Processing Framework (APF), would be a solution to protect hospital patient data. The deployment of the APF provides mutual authentication for both tags and readers, and the mutual authentication process in the APF provides security for the information stored in the tags. A prototype solution for hospital patient data protection for information stored on RFID bracelets is offered.

INTRODUCTION

Radio Frequency Identification (RFID) refers to an Auto-Identification system comprised of RFID tags, RFID readers and the requisite RFID middleware that interprets tag information and communicates it to the application software. RFID tags contain specific object information in their memory, accessed via the radio signal of an RFID reader. RFID tags contain a microchip capable of holding stored information, plus a small coiled antenna or transponder (Psion, 2004).
In the APF (Authentication Processing Framework) implementation (Ayoade 2005), an Omron RFID tag "V720S-D13P01" was used. It is a passive tag that has read and write tag memory capability. The memory capacity of this tag is 112 bytes (user area). This means it has EEPROM/RAM memory capability. The reader used was manufactured by FEIG electronic (ID ISC MR 100). It has a frequency of 13.56 MHz. This type of RFID system was used because its frequency has the widest application scope and it is the most widely available high frequency tag worldwide. Its typical read range is approximately 1 m.

APF is a system that allows many readers to read from and write to RFID tags, while preventing unauthorized readers from reading information from the tags without the tags' knowledge.

In a nutshell, the APF prevents privacy violation of information in the RFID system. The APF system was developed based on the existing typical RFID system and will therefore work with the existing system.
In the RFID system, many proposals have been presented to solve common privacy and security problems; however, these proposals each face one disadvantage or another, making them insufficient to completely address the problems in question. We agree that a simple approach for dealing with the problem of privacy is to prevent readers from receiving data coming from tags (Avoine, 2004). However, as mentioned earlier, all the propositions to date have one disadvantage or another.
RFID technology can be used to collect a lot of data related to persons, objects or animals; thus there are data protection concerns. The first type of risk arises when the deployment of RFID is used to collect information that is directly or indirectly linked to personal data. In a digital world, collecting and analyzing personal data is a task that computers and agents can do diligently. This is an issue connected to ICT in general, rather than to RFID specifically. It is mainly the widespread use of RFID, and its use in mobile situations accompanying persons, that could lead to unpredictable situations – and thus unpredictable threats (ECISM 2006).
A second type of privacy implication arises where personal data is stored in RFID tags. Examples of this type of use can be found in passports or RFID-based health records. The relative openness of the area where the application is deployed will greatly influence the options to illicitly access the data (ECISM 2006).

A third type of data protection implication arises from uses of RFID technology which entail individual tracking. As soon as an RFID profile is known (because the tags are linked to personal data), the comings and goings of people could be followed. This is possible for company-level applications (e.g., by using access cards), but could theoretically also be used to track where you are. This could be via your car (if the car or clothes are tagged, as also indicated in the example), or in person, in public locations (ECISM, 2006). This could have implications for people who could come to harm if their health records were to be accessed, such as in the case of HIV/AIDS, mental illness, past medical history or even pregnancy.
THE PROPOSED CONCEPT OF THE AUTHENTICATION PROCESSING FRAMEWORK

A framework that authenticates readers before they can access the information stored in tags was proposed in (Ayoade, 2004). The proposed procedure is called the Authentication Processing Framework (APF). The main concept of this framework is that tags and readers register with the APF database, which authenticates readers prior to their reading data stored on RFID tags. Implementing this kind of framework in the RFID system will alleviate security and privacy concerns.
Overview of the APF System

The APF was proposed to deter the data security problem in the RFID system. The APF is a framework that makes it compulsory for readers to authenticate themselves with the APF database before they can read the information in the registered tags.

Figure 1 shows that the APF system comprises four application segments:

i. The Tag Writer's Application (writer application) is the part of the APF that encrypts the information in the tag and produces the decryption key, which is submitted, along with the tag's identification number, to the APF database.
ii. The Reader's Application queries the tag and registers readers' identification numbers with the APF database. This is also the part of the system that uses the decryption key to decrypt the information after the reader has been authenticated by the APF database.
iii. The Authentication's Application is the part of the system that integrates both the reader application and the APF database maintenance application.
iv. The Maintenance's Application is the part of the system that maintains the APF database.

APF System Operation and Methods

The tag writer (writer application) subsystem reads tags in its vicinity and then generates a randomized encryption key. The next step is to input and encrypt the information into the tag for security purposes. The next paragraph explains how the authentic tag reader (reader application) subsystem reads the encrypted information in the tag.

The reader subsystem sends a "challenge" command to the tag in its vicinity (just as any typical RFID reader will read the information in the tag within its vicinity) and the tag responds with its unique identification and the content of the information in it. However, in the case of the APF system, the content of the information stored in the tags is encrypted. This means the reader cannot decrypt the information in the tag without the decryption key, which is kept in the APF database system.

The next stage of the operation is that the reader submits its ID to the APF database subsystem. Then, the APF key inquiry subsystem checks whether or not the reader is authorised to be granted the decryption key to have access to a particular tag. If it is authorised, the decryption key will be granted and the reader will be granted access; if not, the decryption key will be denied and the reader will not be able to decrypt the information stored in the tag.
Figure 1. The functional diagram of the APF (0: readers register their Reader-IDs, and the tag writer makes an access key, writes the encrypted data, and registers the Tag-ID and decryption key with the APF database; 1: read tags; 2: read the Tag ID (serial no.) and encrypted data; 3: query the APF with Reader-ID and Tag-ID; 4: get the access key from the APF; 5: decode the encrypted data)
Figure 2. The flowchart of the APF

The Methodology of the APF System

Figure 2 is a step-by-step representation of the APF. Initially, tags register their identification numbers and decryption keys with the APF database. Also, readers register their identification numbers with the APF database. Normally, readers send a "challenge" command in order to access the information in the tags. However, with the APF protocol, tags send a "response" command consisting of the tags' identification numbers and the encrypted data to the readers. The response message from the tag instructs the reader to get the decryption key from the APF database in order to decrypt and read the data in the tag. Since authenticating readers have registered with the APF database, only authenticating readers will be given the decryption key to decrypt the encrypted data in the tags.

In order to prevent illegal access to the information stored in the tags, there should be a procedure for access control to that information. As shown in Figure 3, and discussed above, each tag registers its unique ID and decryption key with the APF database. This is necessary for the protection of tags from unscrupulous readers that may have ulterior intentions. Once a tag registers its unique identity and decryption key with the APF, it will be difficult for unregistered readers to have access to the data in the tag without possessing the decryption key for the tag. This means every registered reader will be authenticated prior to getting the decryption key to access stored data in the tag.

Figure 3. The registration of tags with the APF

In the next paragraphs we discuss how the authenticated reader gains access to stored data in the tag. Every reader registers its unique identification number with the APF in order to be authenticated before it requests the decryption key to access the data in the tag, and this registration is confirmed by the APF before the decryption key is released to the reader so that it can read the encrypted data in the specific tag.
Figure 4. The registration of readers with the APF

Figure 4 shows that every reader registers its unique identification number with the APF. Since both readers and tags register their identification numbers with the APF, this serves as mutual authentication, and it protects the information in the tags from malicious readers, which is one of the concerns users have. This means that unauthorized access to the tag will be almost impossible if the APF system is correctly implemented.

The previous paragraphs discussed the registration of the tags' unique IDs and decryption keys with the APF, as well as the registration of readers with the APF prior to accessing the information in the tags. When the reader sends a "read" command to the tag, the tag replies with its identification number and encrypted data. Because the data is encrypted, only a reader registered with the APF will be able to get the decryption key in order to decrypt the data. Once the key is received, the data in the tag becomes readable. In this framework there are two important processes: first, mutual authentication is carried out by the APF because it authenticates the reader and the tag; second, privacy is guaranteed because the data stored in the tag is protected from malicious readers. Since the information the reader obtains from the tag is encrypted, it can only be read after the decryption key needed to access the information is received from the APF.
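The registration and key-release flow described above can be sketched in a few lines of Python; here the Fernet recipe from the cryptography package stands in for the APF's randomized encryption, and all identifiers are invented for illustration rather than taken from the prototype.

from cryptography.fernet import Fernet

apf_keys = {}           # tag ID -> decryption key (registered by the tag writer)
apf_readers = {"R-01"}  # registered reader IDs

def write_tag(tag_id, plaintext):
    """Tag writer: generate a key, register it with the APF database,
    and return the ciphertext that would be stored on the tag."""
    key = Fernet.generate_key()
    apf_keys[tag_id] = key
    return Fernet(key).encrypt(plaintext)

def apf_release_key(reader_id, tag_id):
    """APF database: release the key only to a registered reader."""
    return apf_keys.get(tag_id) if reader_id in apf_readers else None

tag_data = write_tag("T-77", b"patient: J. Doe, ward 4")
key = apf_release_key("R-01", "T-77")
print(Fernet(key).decrypt(tag_data))   # registered reader succeeds
print(apf_release_key("R-99", "T-77")) # unregistered reader: None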
APPLICATION OF THE APF

The APF described in this paper has many applications. For example, it can be deployed in supply chain management, or in granting or restricting access to information for certain groups of people in a hospital. This has been a major concern in many large hospitals (RFIDGazette, 2004). The APF could help to control these security concerns. Take, for example, a hospital where the RFID system is used. The APF will guarantee total data security of the information in the tag from malicious readers, because every authentic reader registers its ID with the APF prior to reading the information in the tags, and all tags that will be read by those readers register with the APF. This means that there will be mutual authentication and the information in the tags will be secure.

One of the areas in which the APF could be deployed in a hospital is the protection of medical records. In (Patient Tracking, 2005), the implementation of an RFID system for hospital asset management was discussed. Such RFID systems may be used to track patients, doctors and expensive equipment in hospitals. The RFID tags can be attached to the ID bracelets of all patients, or just patients requiring special attention, so their location can be monitored continuously.

One of the benefits of the above mentioned system is the use of the patient's RFID tag to access a patient's information for review and update via hand-held computer or PDA (Patient Tracking, 2005). In such applications there is a tendency for unauthorized readers to access the information stored in a patient's tag. This is obviously of great concern to patients. In order to prevent this kind of problem, the APF offers secure solutions.

Figure 5. The registration/access control of readers to the APF/Tag (Note: O means access granted; X means access denied)
Hospital patient data is not easily protected by the currently available security measures already discussed. The Kill Command would not work because the data does not have a finite life as products on a shop shelf do. The Faraday Cage Approach would not be practical, as a metal mesh or foil container would make an RFID bracelet very difficult to work with and to wear. Active Jamming would be dangerous and would interfere with other systems in a hospital environment. Similarly, the Blocker tag method would interfere with other systems in the hospital environment. Therefore, hospital patient data is a good case study for the APF, because conventional privacy and security measures are not appropriate for the application and the problem is hindering the development of RFID patient care systems in hospitals.

The APF Case Study

This experimental case study was carried out to test the possibility of deploying the APF to deter illegal access by unauthorized readers to RFID tags containing medical records of patients. Figure 6 is a screenshot of the tag writer (writer application), and it shows the practical possibility of using RFID tags for storing the medical records of patients in the hospital. However, patients will not want their medical records accessed by an unauthorised person; they want their privacy protected from everyone except their doctor.

Figure 6. The tag-writer application

Moreover, with a typical RFID system, anybody who has a reader can access the information in a tag within its read or write vicinity. This means that any patient who has confidential information stored in a tag is prone to abuse and invasion of privacy. However, using the APF, the information stored in the tag is encrypted in order to secure it from unauthorized readers. This is the underlined text shown in Figure 6. As Figure 6 shows, the APF tag writer (writer application) subsystem reads the Tag ID in its vicinity, then generates a random encryption key. The encryption key is used to encrypt the plaintext information about the patient that is about to be written to the tag. After the encryption of the information, the encrypted text is written into the tag. This information will be secured from unauthorized readers, unlike in a typical RFID system.

Figure 7 shows that readers have to be registered; only readers registered in the APF database can access the information in the tag. In this case study it was demonstrated that readers unregistered in the APF database would not be able to access the medical records stored in the tag. Once an authenticating reader is opened, the tag reader (reader application) has to obtain the decryption key for the encrypted information stored in the tag. Prior to that, it needs to send its ID to the APF database, and the APF database will check whether or not it is an authenticating reader; once that is confirmed, the decryption key will be released for it to access the encrypted information stored in the tag. However, if it is not an authenticating reader, the reader will be denied access to the stored information. This is shown in Figure 8.

Figure 7. The reader application (Note: Readers need to declare their IDs prior to reading the information in the tag)

Figure 8. The reader application (Note: An authenticating reader declares its ID and accesses the decrypted information; an unregistered reader declares its ID and is denied access to the information)
In this case study, the authors assumed that the patient's doctor alone will be in control of the three application subsystems, that is: the tag writer (writer application), the tag reader (reader application), and the APF protected application software. Thus, the patient whose information is stored within the APF protected system can rest assured that the confidential medical information stored in their tag is secure from violation by unauthorized readers.

There are a number of well-established RFID security and privacy threats:
i. Sniffing: RFID tags are designed to be read by any compliant reading device. Tag reading may happen without the knowledge of the tag bearer or from far away from the bearer of the tag (Rieback, 2006). In the APF system, the sniffing threat is considered very difficult to carry out, because with the deployment of the APF system every reader that is part of the system registers and is mutually authenticated before it can be functional or carry out any operation. In essence, it will be impossible for just any reader to function within the APF system, and that makes the sniffing threat difficult.
ii. Tracking: RFID readers in strategic locations can record sightings of unique tag identifiers (or "constellations" of non-unique tag IDs), which are then associated with personal identities. The problem arises when individuals are tracked involuntarily. Subjects may be conscious of the unwanted tracking (i.e., school kids, senior citizens, and company employees), but that is not always necessarily the case (Rieback 2006). The purpose and goal of the architectural framework design of the APF system is to protect and secure the data content stored in the tag, and therefore the focus is not to protect against the tracking threat that is possible through readers associating personal identities with tag IDs.
iii. Spoofing: Attackers can create "authentic" RFID tags by writing properly formatted tag data on blank or rewritable RFID transponders. One notable spoofing attack was performed by researchers from Johns Hopkins University and RSA Security (Rieback 2006). Spoofing will be difficult or almost impossible because even if the tag ID is spoofed, the content of the tag is encrypted; that means that without the decryption key from the APF system the content of the tag will not be readable. Therefore, without the mutual authentication between the authenticated reader and the APF system, the content of the spoofed tag is useless.
iv. Replay attacks: Attackers can intercept and retransmit RFID queries using RFID replay devices. These transmissions can fool digital passport readers, contactless payment systems, and building access control stations. Fortunately, implementing challenge-response authentication between the RFID tags and back-end middleware improves the situation (Rieback 2006). A replay attack will be difficult with the deployment of the APF system because the APF system employs a mutual authentication process between the tags and the readers.
v. Denial of Service: occurs when RFID systems are prevented from functioning properly. Tag reading can be hindered by a Faraday cage or "signal jamming", both of which prevent radio waves from reaching RFID tagged objects. Denial of Service (DoS) can be disastrous in some situations, such as when trying to read medical data from VeriMed subdermal RFID chips in the trauma ward at the hospital (Rieback, 2006). DoS is a difficult threat to handle, especially when it is carried out through "signal jamming".
Furthermore, an attacker can spy out data by using his own reader to read data from the tags. The device can be installed in a hidden place, or it can be used in a mobile manner (Oertel 2004). In the case of the APF system, mutual authentication is required between the reader and the tag, and this makes it difficult for an attacker to falsify the identity of the reader, because every reader has its unique identity number. Also, an attacker can change the contents of a tag but not the ID (serial number) of an existing tag. This is only possible if the data associated with the ID are stored on the tags themselves (and not in the backend). In this kind of scenario, deployment of the APF secures the application from attackers because the information stored in the tag is encrypted.

We will briefly describe some of the proposed approaches and their adverse effects:
a. The Kill Command: The standard mode of operation proposed by the AutoID Center is for tags to be "killed" upon purchase of the tagged product. With their proposed tag design, a tag can be killed by sending it a special "kill" command. However, there are many environments in which simple measures like this are undesirable for privacy enforcement. For example, consumers may wish RFID tags to remain operative while in their possession (Liu 2003).
b. Faraday Cage Approach: An RFID tag may be shielded from scrutiny using what is known as a Faraday Cage - a container made of metal mesh or foil which is impenetrable to radio signals (of certain frequencies). There have been reports that some thieves have been using foil-lined bags in retail shops to defeat shoplifting-detection mechanisms (Liu 2003).
c. Active Jamming: An active jamming approach is a physical means of shielding tags from view. In this approach, the user could use a radio frequency device which actively sends radio signals so as to block the operation of any nearby RFID readers. However, this approach could be illegal; for example, if the broadcast power is too high it could disrupt all nearby RFID systems. It could also be dangerous and cause problems in restricted areas like hospitals (Juels 2003).
d. The Blocker Tag: The blocker tag is a tag that replies with simulated signals when queried by a reader, so that the reader cannot trust the received signals. Like active jamming, however, it may affect other legal tags (Juels 2003).

All these approaches could have been effective solutions to the privacy problem, but their disadvantages make them unacceptable. In this paper, we propose that a good authentication procedure is the best option to tackle this problem. The reason is that our proposed solution, the APF, provides solutions to the privacy problem and enhances the security of RFID systems. In this paper we have also identified the specific area of application in which the APF system could be used.

THE SCENARIO OF THE AUTHENTICATION PROCESSING FRAMEWORK IN THE HOSPITAL

According to (Parkinson, 2007), hospital patients are used to wearing wristbands, but now those bands have gone high-tech. At Birmingham Heartlands hospital, patients wear RFID wristbands with personal data embedded. When they arrive they have a digital photo taken and loaded on to an electronic tag contained in a wristband worn throughout their stay. Staff dealing with the tagged patients have access to PDAs with which they can scan the bands and also access patient details, via wifi, from a secure area on the hospital's central computer system. A 'traffic light' system flashes up when a patient is ready for their operation, and as they go through the theater's doors, a sensor reads the bar code on their wrist and their details are displayed on the theater's computer screen (Parkinson, 2007).
This is the type of RFID application scenario in the hospital which requires the APF system's implementation. The information stored in the wristbands of the patients needs to be protected from various potential security threats. The authors of this paper believe the deployment of the APF system will serve as a deterrent and countermeasure to such security vulnerabilities. In the above scenario, without necessary security countermeasures such as those the APF system would provide, anybody with a PDA that has a reader can read the information stored in the tags of the tagged patients within or around the hospital, without the consent or awareness of the patients. In the APF system every reader will be an authenticating reader, which means any other reader will not be able to access the data in the tags associated with the APF system.
CONCLUSION

The potential applications of the RFID system may be identified in virtually every sector of industry, commerce and services where data is to be managed. However, RFID systems have faced widespread resistance due to lack of privacy (Kumar, 2003). This calls for a prompt and concrete solution for the full realization of the RFID system's potential.

This research focuses on an experimental prototype system that uses fictitious data. Future research will seek to implement and test a small implementation with live data and applications as proof of concept. Rigorous testing could investigate whether this system can in fact stand up to the security and privacy threats established earlier in this paper, such as sniffing, tracking, spoofing and denial of service attacks.

Future development might also include expansion beyond hospital patient data; for example, RFID tags might be implanted into human flesh for everyday use to access property and vehicles. This would open up the RFID marketplace to longer range RFID tags and readers to replace the current short range (7-10 cm) equipment.

Regarding the issue of scalability, the APF system will register only tags and readers that are necessary for a particular application. It will not register tags and readers that are not related to that particular application.

Furthermore, the traffic flows of steps 5 and 6 in Figure 2 will be encrypted by a secure sockets layer in order to protect the information decrypted by the authenticating reader from being exploited maliciously.

The authors are convinced that the APF system will go a long way to defuse the fears and concerns that consumers have regarding the present lack of privacy in the RFID system. Moreover, in the prototype system the authors extended the research to employ Secure-HTTP and SSL protocols for the protection of the APF database, and they are furthering their research on how the APF database can be protected from various malicious attacks.

In summary, the application of the APF system for securing patients' medical records in hospitals will be a secure system that will prevent the invasion and abuse of a patient's confidential information. It is believed that the deployment of the RFID system for the management of medical records in hospitals will enhance the efficiency and accuracy of medical treatment. However, without employing effective privacy and security protection for the confidential information stored in the tag, privacy problems will negate the benefits that RFID offers. In conclusion, the authors believe that the application of the APF system is an effective solution to patients' privacy concerns regarding their stored confidential information, and that it has a wide range of other potential applications.
REFERENCES

(Avoine 2004) Avoine J., Oechslin P. "RFID Traceability: A Multilayer Problem." http://fc05.ifca.ai/p11.pdf (2004)

(Ayoade 2004) Ayoade J. "Security and Authentication in RFID." The 8th World Multi-Conference on Systemics, Cybernetics and Informatics, U.S.A. (2004)

(Ayoade 2005) Ayoade J., Takizawa O., Nakao K. "A Prototype System of the RFID Authentication Processing Framework." International Workshop on Wireless Security Technology (2005). http://iwwst.org.uk/Files/2005/Proceedings2005.pdf

(ECISM 2006) Consultation Initiatives on Radio Frequency Identification (RFID). "RFID Security, Data Protection and Privacy, Health and Safety Issues." http://www.rfidconsultation.eu/docs/ficheiros/Framework_paper_security_final_version.pdf

(Juels 2003) Juels A., Rivest R., Szydlo M. "The Blocker Tag: Selective Blocking of RFID Tags for Consumer Privacy." http://www.rsasecurity.com/rsalabs/staff/bios/ajuels/publications/blocker/blocker.pdf (2003)

(Juels 2005) Juels A., Molnar D., Wagner D. "Security and Privacy Issues in E-passports." http://eprint.iacr.org/2005/095.pdf (2005)

(Kumar 2003) Kumar R. "Interaction of RFID Technology and Public Policy." (2003). http://www.rfidprivacy.org/2003/papers/kumar-interaction.pdf

(Liu 2003) Liu D., Kobara K., Imai H. "Pretty-Simple Privacy Enhanced RFID and Its Application." (2003)

(Molnar 2006) Molnar D. "Security and Privacy in Two RFID Deployments, With New Methods for Private Authentication and RFID Pseudonyms." http://www.cs.berkeley.edu/~dmolnar/papers/masters-report.pdf (2006)

(Oertel 2004) Oertel B., Wolf M. "Security Aspects and Prospective Applications of RFID Systems." (2004)

(Parkinson 2007) Parkinson C. "Tagging Improves Patient Safety." BBC News. http://news.bbc.co.uk/2/hi/health/6358697.stm

(Patient Tracking 2005) "RFID in Hospitals – Patient Tracking." http://www.dassnagar.com/Software/AMgm/RF_products/it_RF_hospitals.htm (2005)

(Psion 2004) "Understanding RFID and Associated Applications." http://www.psionteklogix.com (the page is dynamic). May 2004

(RFIDGazette 2004) "RFID in the Hospital." http://www.rfidgazette.org/2004/07/rfid_in_the_hos.html (2004)

(Rieback 2006) Rieback M., Crispo B., Tanenbaum A. "Is Your Cat Infected with a Computer Virus?" http://www.rfidvirus.org/papers/percom.06.pdf (2006)
This work was previously published in International Journal of Advanced Pervasive and Ubiquitous Computing, Vol.1, Issue
1, edited by J. Symonds, pp. 44-59, copyright 2009 by IGI Publishing (an imprint of IGI Global).
Chapter 7.4
Security and Privacy in
RFID Based Wireless
Net works
Denis Trček
University of Ljubljana, Slovenia
ABSTRACT

Mass deployment of radio-frequency identification (RFID) technology is now becoming feasible for a wide variety of applications ranging from medical to supply chain and retail environments. Its main drawback until recently was high production costs, which are now becoming lower and acceptable. But due to inherent constraints of RFID technology (in terms of limited power and computational resources) these devices are the subject of intensive research on how to support and improve increasing demands for security and privacy. This chapter therefore focuses on security and privacy issues by giving a general overview of the field, the principles, the current state of the art, and future trends. An improvement in the field of security and privacy solutions for this kind of wireless communications is described as well.

INTRODUCTION

Radio-frequency identification (RFID) has its roots in WWII, when it was used for the first time to distinguish British from German aircraft. An aircraft was challenged to communicate a certain piece of information, and on this basis a decision was made on whether to attack it or not.

This principle is the core of contemporary RFID technology, although, of course, the implementation technology is significantly different. It is now based on low-cost integrated circuits (ICs) called tags. Due to the ability to currently store up to two kilobytes of data on these tags, they constitute a very attractive technology in many areas. These include manufacturing, supply chain management, inventory management, healthcare applications, air-transportation, and so forth.
All items (in containers) can be scanned together, while each item can be uniquely identified and traced. These properties give RFID technology significant advantages over existing bar-code systems that currently serve for low-level, operational acquisition of data in the above mentioned business environments.

These appealing properties also have drawbacks, many of them in the area of security and privacy. But as RFID is already finding its place in contemporary information systems (ISs), these issues need to be addressed seriously, which is the goal of this chapter. In the second section, the background of RFID technology is given. In the third section, threats are described and countermeasures are given. In the fourth section anticipated future trends are discussed. There is a conclusion in the fifth section, while the chapter ends with references and key definitions.

BACKGROUND OVERVIEW

Some definitions have to be given first. One basic definition in the area of computer (communications) security states that security means minimization of vulnerabilities of assets and resources (ISO, 1989). Wireless security thus means minimization of vulnerabilities of assets and resources when communicating information in electro-magnetic media through a free-space environment. Finally, RFID technology will be defined as wireless identification technology which operates on radio frequencies and deploys low-cost ICs.

A model of the RFID environment is described in Figure 1. It consists of tags (also called responders) and readers (also called transceivers). This is the front-end of RFID applications, which have their back-end in database management systems, where they are integrated with the rest of the IS (see Figure 1). It is generally assumed that RFID security and privacy is concerned with the front-end part (the left-hand side of the dashed vertical line in Figure 1). This is actually the part that is covered by the reader's signal; the tag's signal usually falls within its range.

Tags consist of a microchip and an antenna, both encapsulated in polymer material. The microchip has encoded data, called identification (ID),
Figure 1. A model of the RFID security and privacy environment (an RFID reader and tag form the front-end; the back-end information system resides within the secure environment, and the tag's range falls within the reader's range)
which typically includes the manufacturer, brand, model, and serial number.

Communication takes place on radio frequencies, for example, from 125 kHz to 134 kHz for security cards and from 800 MHz to 900 MHz for retail applications (Roussos, 2006). However, increasing the frequency means increased accumulation of the signal in bodies containing large quantities of water or in metal.
Communication is achieved by electromagnetic coupling between readers and tags. A reader transmits a signal, which induces a voltage in the tag's antenna. This coupling provides sufficient power for a tag to respond (after performing some calculations if required). If a tag is powered through this coupling, it is called a passive tag. However, if a tag has some source of energy, for example, a battery, it is called an active tag. Each type has certain advantages and disadvantages. Passive tags are cheap, but remain active until being explicitly destroyed. They have a low operating perimeter (typically 3 meters) with a relatively high error rate. In contrast, active tags have a greater operating perimeter (up to a few hundred meters), lower error rate, and cease functioning when the source of power is exhausted. However, they are significantly more expensive. Both kinds of tags can be read only, write once-read many, or rewritable.

The main barrier to mass-deployment of RFID tags is their price. A wish-price is limited by five cents, but depending on quantities and using current technologies, many application niches can already be covered. The total cost consists mainly of the cost of an antenna, which can be from €/US$ 0.01 to €/US$ 0.02, the cost of silicon, and IC production; silicon typically costs €/US$ 0.04/mm² (Weis, 2003), while IC production depends on the number of logical gates, that is, technology. But roughly, the cost ranges from €/US$ 0.025/mm² with 1,500 gates/mm² to €/US$ 0.08/mm² with 60,000 gates (Weis, 2003).
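A quick back-of-envelope check of these figures, assuming (purely for illustration) a one square millimeter chip in the 1,500 gates/mm² process:

# All figures are the €/US$ values quoted above; the 1 mm^2 chip area
# and the antenna-cost midpoint are assumptions for this example.
area_mm2 = 1.0
antenna = 0.015                      # midpoint of the 0.01-0.02 range
silicon = 0.04 * area_mm2            # silicon cost per mm^2
ic_production = 0.025 * area_mm2     # low-end (1,500 gates/mm^2) process
print(f"estimated tag cost: {antenna + silicon + ic_production:.3f}")  # ~0.080

At roughly eight cents, this sits above the five-cent wish-price quoted above, which is consistent with price being the main barrier.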
A typical communication channel with a passive RFID is asymmetric. This means that forward communication, that is, communication from a reader to a tag, has one order of magnitude larger range than backward communication, that is, from the tag to the reader. In the former case this is typically up to 100 meters, while in the latter case it is typically up to 3 meters. The reason, of course, is the power consumption constraint, which means that practical applications are limited to a range of up to 3 meters.
Thus, the cost factor dictates that a typical RFID, or a reference RFID implementation, is currently expected to have the following characteristics. It is passively powered and has 96 bits of read-only memory. These standardized bits serve to carry the tag's identity, which is unique for each tag (these IDs are stored in silicon by an imprinting process). A chip operates at 20,000 clock cycles, providing 200 read operations per second. An algorithm to respond to read primitives from a reader may be probabilistic (e.g., Aloha (Prasad & Rugierre, 2003)) or deterministic (e.g., a binary walking tree) (Juels, Rivest, & Szydlo, 2003). With such algorithms, a single tag can be identified and isolated. The related process is called singulation. Finally, the number of available gates that can be devoted to security operations is in the range of 400 to 4,000.
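As an illustration of deterministic singulation, the following sketch simulates a binary tree walk over toy 4-bit IDs: the reader extends an ID prefix bit by bit, querying which tags match, until single tags are isolated. The tag IDs and the query model are assumptions of this sketch, not a protocol specification.

TAGS = {"0110", "0111", "1010"}  # toy 4-bit IDs (real tags carry 96 bits)

def tags_matching(prefix):
    """Stand-in for a reader broadcast: how many tags answer a prefix?"""
    return sum(tag.startswith(prefix) for tag in TAGS)

def singulate(prefix=""):
    """Walk the binary ID tree, isolating tags one at a time."""
    if tags_matching(prefix) == 0:
        return []
    if len(prefix) == 4:             # full ID reached: one tag isolated
        return [prefix]
    return singulate(prefix + "0") + singulate(prefix + "1")

print(singulate())  # ['0110', '0111', '1010']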
The above estimates are based on figures from Weis (2003) by applying Moore's law, which states that for the same price the available processing power doubles every year and a half. It is therefore clear that processing resources to support security in RFID environments are very limited; lightweight cryptographic solutions thus provide an answer to this problem.
Moore’s law also implies that there is always
a point where “ordinary” cryptographic
algorithms become feasible for computationally
weak devices. An example of a thick RFID
implementation, which is based on AES to
provide authentication, can be found in the
work of Feldhofer, Dominikus, and
Wolkerstorfer (2004). Despite this, a perma-
nent need exists for lightweight cryptographic
protocols and also algorithms. One main reason
is the gap between ordinary devices where
space
and power consumption are not a serious This leads to a whole new research area (Juels,
concern (e.g., tag readers, desktop systems), 2004).
and weak devices with limited space and power
consump- tion (e.g., RFID tags, smart-cards). rfId threats and countermeasures
This gap means that increased processing
power affects both kinds of devices equally; in The very basic threat to each and every tag is
the case of a cryptographic algorithm, the key- that it remains active when it is no longer
length of this algorithm is extended. supposed to be active. To counter this problem,
As a consequence, weak devices are again RFID logic may implement kill operation,
less protected because they cannot deploy such which means that upon receipt of a certain
inten- sive computations with enlarged keys. communication primitive, the tag becomes
Further, if the above use of a cryptographic permanently inoperative by, for example,
algorithm can be seen as a kind of variable cost blowing a fuse in its circuitry. A more bullet-
(the longer the key, the higher the processing proof solution is exposure of RFID to mi-
overhead), cryp- tographic protocols can be crowave radiation that melts its metalized layer.
seen as a fixed cost. Note that cryptographic Risk management drives each and every
protocols are ordinary communication pro- vision of security and privacy in ISs. A
protocols that deploy crypto- typical process is depicted in Figure 2. It starts
graphic algorithms, and cryptographic with the
protocols
identification of assets A (A = {a , a , …, a })
and 1 2 n
are often referred to as security services, while threats T (T = {t1, t2 , …, tm }) to those assets. For
cryptography algorithms are referred to as each asset and threat, that is, Cartesian product
security A
mechanisms. Both kinds of costs contribute to ⋅ T = {(a1, t1), (a1, t2), …, (an, tm)}, related
the total processing power requirements, and vulner- abilities are identified together with the
have to be kept low while at the same time likelihood of a threat to get into interaction
enabling a comparable level of security to with the asset
weak devices. during a certain period of time. On this basis,
the

Figure 2. Risk management process


On this basis, the estimated damage D(ai, tj) caused by the interaction between asset ai and threat tj during this period is calculated. The result presents the upper bound for investment in safeguards. A certain degree of risk, called residual risk, is usually accepted and taken into account. This often makes sense economically. But in the majority of cases, a threat cannot be completely neutralized (Trček, 2006).
The challenging parts of this process are the identification of threats and the estimation of their probabilities. For the identification of threats in RFID environments, a comprehensive taxonomy from Garfinkel, Juels, and Pappu (2005) can be used. The first four threats are related to corporate security, and the rest to personal privacy:

• Corporate espionage threat: Tagged products may enable remote acquisition of supply chain details such as logistics details, volumes, and so forth.
• Competitive marketing threat: Tags may enable access to customers' preferences, and the gathered data may be used by competitors.
• Infrastructure attacks threat: Where RFID is central to a competitor's advantage, disruption of RFID operations becomes an important point of attack.
• Trust perimeter threat: Gathering additional volumes of data through RFID introduces new challenges related to sharing information in a trustworthy way.
• Action threat: Individuals' actions may be monitored.
• Association threat: When tagged products are associated with an individual's ID (e.g., in loyalty programs), these persons can be associated not only with the type of product, but with the exact product, due to its unique ID.
• Location threat: Tags can be triggered by covert readers at various locations to reveal a person's location.
• Preference threat: Tags disclose preferences of customers and help to identify, for example, more wealthy ones.
• Constellation and transaction threats: The constellation threat is similar to the location threat, but in this case the identity of the customer is not known. Despite this, a particular person can be spotted and traced. Further, by chaining one constellation threat to another, a whole chain of actions, or transactions, becomes traceable.
• Breadcrumb threat: When products are disposed of with their original tags, an attacker may pick them up and then be tracked under the falsified identity of the original owner. This is actually just another kind of identity theft.

On top of all this, a fundamental threat exists, called tag cloning, and such cloning has been successfully demonstrated (Bono, Green, Stubblefield, Juels, Rubin, & Szydlo, 2005).
What countermeasures are at our disposal? The basic option was mentioned at the beginning: the physical destruction of a tag (e.g., by exposure to microwaves) or the implementation of a logical kill command that makes the chip inoperable. But the fact is that the latter approach often has flaws in its implementations: logically killed tags may remain active or be reactivated (Roussos, 2006). In many situations, it might even be beneficial to keep these tags active; for example, tagged items may be used for smart-home applications or to help disabled people.
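As a rough illustration of the kill operation, the following sketch models a tag whose fuse is blown on receipt of the kill primitive. The command names are invented, and real air-interface protocols are considerably more involved.

# A schematic sketch of the kill operation described above: on receipt
# of the kill primitive the tag "blows a fuse" and becomes permanently
# inoperative. The command format is invented for the illustration.

class Tag:
    def __init__(self, tag_id: str):
        self.tag_id = tag_id
        self.fuse_blown = False

    def handle(self, primitive: str):
        if self.fuse_blown:
            return None                  # a killed tag never answers again
        if primitive == "KILL":
            self.fuse_blown = True       # irreversible, like a blown fuse
            return None
        if primitive == "READ":
            return self.tag_id
        return None

tag = Tag("TAG-42")
print(tag.handle("READ"))    # 'TAG-42'
tag.handle("KILL")
print(tag.handle("READ"))    # None: permanently inoperative

# The flaw noted above is that real implementations of the logical
# kill have sometimes left tags reactivatable, unlike this ideal model.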
The most common approach to security and privacy is to deploy cryptography. Using cryptographic mechanisms (e.g., symmetric and asymmetric cryptographic algorithms, strong one-way hash functions), the following cryptographic services can be implemented (ISO, 1995):

• Authentication: This ensures that the peer communicating entity is the one claimed.
• Confidentiality: This prevents unauthorized disclosure of data.
• Integrity: This ensures that any modification, insertion, or deletion of data is detected.
• Access control: This enables authorized use of resources.
• Nonrepudiation: This provides proof of origin and proof of delivery, such that false denial of the message content is prevented.
• Auditing: This enables detection of suspicious activities and analysis of successful breaches. It provides evidence when resolving legal disputes.

In the case of RFID tags, authentication, confidentiality, and access control can be applied to counter the threats described at the beginning of this section. But to make these security services operational, key management (i.e., the handling of cryptographic algorithms' keys) has to be resolved (Trček, 2005). This is a complex issue in open environments and has been known as such for almost two decades. Suffice it to say that only very simple key management schemes are acceptable for RFID environments.

With regard to security and privacy, it is required that authentication, and consequently access control, be provided only to legitimate readers. Further, rogue readers should not only be denied a tag's ID, but should also be prevented from tracing a tag, regardless of the inaccessibility of its ID. Put another way, when rogue readers interact with a tag, it should be practically impossible (i.e., computationally difficult) to link the multiple manifestations of a tag to this very tag.

An example of recent solutions that meet these requirements is the YA-TRAP protocol (Tsudik, 2006). However, to demonstrate a typical simple authentication RFID protocol, an example by Weis (2003) is given. In this case, the RFID tag contains a derivative of some key, from which it is computationally hard to obtain the real key. This derivative, called metaID, can be a cryptographically strong hash of the key. The tag authenticates the reader as follows:

1. The reader queries a tag to send its metaID.
2. After obtaining the metaID, the reader looks up its internal table to find the corresponding key, which is unique for each tag. It sends this key to the tag.
3. After obtaining the key, the tag hashes this key and compares the result with its metaID. If the values match, the tag provides its full functionality.
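The following minimal Python simulation of this hash-lock scheme may help to fix the three steps. SHA-256 stands in for the unspecified strong one-way hash, and the reader's key table is simply an in-memory dictionary; both are assumptions for the sketch.

# A minimal simulation of the metaID (hash-lock) authentication above.
# SHA-256 stands in for the unspecified one-way hash; the keys and the
# reader's table are invented for the example.

import hashlib
import os

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class Tag:
    def __init__(self, key: bytes):
        self._key = key
        self.meta_id = h(key)        # metaID = hash(key), stored on the tag
        self.unlocked = False

    def query_meta_id(self) -> bytes:            # step 1: tag reveals metaID
        return self.meta_id

    def receive_key(self, key: bytes) -> bool:   # step 3: tag checks hash(key)
        self.unlocked = h(key) == self.meta_id
        return self.unlocked

class Reader:
    def __init__(self):
        self.table = {}              # metaID -> key, one entry per tag

    def register(self, tag: Tag, key: bytes):
        self.table[tag.meta_id] = key

    def authenticate_to(self, tag: Tag) -> bool:
        meta_id = tag.query_meta_id()             # step 1
        key = self.table.get(meta_id)             # step 2: look up the key
        return key is not None and tag.receive_key(key)  # steps 2-3

key = os.urandom(16)
tag, reader = Tag(key), Reader()
reader.register(tag, key)
print(reader.authenticate_to(tag))   # True: tag now offers full functionality

# Note the weakness discussed next: both metaID and key cross the air
# in plain, so an eavesdropper can replay the key to the tag.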
It can readily be observed that confidentiality is not used in the above scenario. This means that the communicated data are exchanged in plain and can be read by an adversary: it suffices to intercept the key and afterwards falsely authenticate to the tag.

This is not the only threat to confidentiality. Confidential data are stored on the RFID tag, the most sensitive piece being its ID. Due to processing requirements and key-management problems, the tendency is to store data in plain. In this case, many other kinds of attacks that exploit a tag's limited tamper resistance can be applied, and these may consequently lead to the tag's cloning (Anderson & Kuhn, 1996). Such attacks can be prevented by careful circuitry design principles that are common in smart-card design (e.g., scrambling of memory addresses, proper positioning of the memory layer within the integrated circuit, and inclusion of dummy components).

But, as is always the case with security and privacy in ISs, one should not rely solely on cryptography. One alternative architectural approach is the use of tag pseudonyms (Garfinkel et al., 2005). In this approach, each tag is given a cycling sequence of pseudonyms from which it chooses its replies to reader requests. But by repeatedly scanning the same tag, the whole cycling sequence would be discovered. A solution is to throttle the queries, that is, to use each pseudonym for some extended period of time.

Another option is blocker tags, which are based on the already mentioned tree-walking algorithm. The singulation process deploying this algorithm uses a k-bit identifier represented as a binary tree. Each leaf in this tree is a tag's ID. The algorithm goes as follows:

1. The reader starts at the root (depth d=0). To decide whether to proceed along the subtree starting with 0 or the subtree starting with 1, it first emits 0.
2. If all tags broadcast 0, the reader proceeds by stepping one level down to the subtree beginning with 0 (depth d=1). If all tags broadcast 1, it steps one level down to the subtree beginning with 1 (depth d=1). If the reader receives both zeros and ones, it proceeds down both subtrees.
3. The procedure is repeated at each level (the total depth d of the tree equals the number of bits used for identifiers, that is, d=k), until only one singular tag is responding.

To spoof the reader, a blocker tag simply emits both bits, 0 and 1, which means traversing the complete tree. This reply actually forces the reader to do an exhaustive search on the whole set of 2⁹⁶ combinations, which is impossible. Such a tag would, of course, disrupt all operations. Thus, only a subtree of the whole ID space can be allocated to privacy-protecting bits (e.g., only the subtree that starts with bit 1). When a reader enters such a subtree, it ceases the process of further singulation.
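A rough simulation of the tree-walking search illustrates the blocker's effect. Identifiers are cut down to 4 bits (instead of 96) so the traversal stays readable, and the blocker is modeled simply as a device that answers on both branches; these are assumptions for the sketch.

# A rough simulation of tree-walking singulation. Identifiers are cut
# down to 4 bits so the traversal stays small; a blocker tag is modeled
# as a device that answers on both branches of every node.

K = 4  # identifier length in bits

def singulate(tag_ids, blocker=False, prefix=""):
    """Return the list of fully resolved IDs below the given prefix."""
    if len(prefix) == K:
        return [prefix]                      # a leaf = one singulated tag
    found = []
    for bit in "01":
        candidate = prefix + bit
        # Does any real tag (or the blocker) answer under this branch?
        real = any(tid.startswith(candidate) for tid in tag_ids)
        if real or blocker:
            found += singulate(tag_ids, blocker, candidate)
    return found

tags = ["0110", "1010"]
print(len(singulate(tags)))                # 2: each tag isolated in turn
print(len(singulate(tags, blocker=True)))  # 16 = 2**4: exhaustive search

# With 96-bit identifiers the blocked search would cover 2**96 leaves,
# which is infeasible -- exactly the effect described above. A "polite"
# blocker would answer only inside a reserved subtree (e.g., IDs
# starting with bit 1), leaving the rest of the ID space readable.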
Another option is to regulate the range of the signal emitted by a tag (Fishkin & Roy, 2003). Based on the strength of the incoming signal, a tag can calculate an estimated distance to the reader and adapt its emitting power to a level at which the signal-to-noise ratio just allows the reader to read the reply. Other devices that are outside this range are automatically prevented from tracing the response.
Finally, we propose a new hybrid technique called one-time pseudonyms. This technique deploys the idea of tag pseudonyms and the one-time password principle, which was proposed for the first time by L. Lamport (1981). The principle goes as follows. Take a seed value S0 and apply a strong one-way hash function to obtain the value S1 = hash(S0). Next, calculate S2 = hash(S1), S3 = hash(S2), and so on. Now, when a tag is to be identified by a reader, the reader sends an integer that denotes the index of the iteration to be done on the seed. Of course, in the case of RFIDs, the unique ID is assumed to be used as the seed.
From the point of view of background (secure environment) operations, these one-time pseudonyms introduce a workload comparable to that of ordinary pseudonyms. The advantage is that there is no need for a logic circuit to throttle the queries (i.e., to change pseudonyms only every few minutes). Further, the cycle of one-time pseudonyms is not deterministic. Thus it is much harder for an attacker to follow the tag and collect the complete sequence, as is the case with the basic tag pseudonyms technique. There is a slight drawback with this technique if a large number of iterations is required. A straightforward possibility would be to limit the highest index (the number of iterations), while another approach would be to store in an RFID, for example, every 20th iteration of its hashed ID. This would also significantly expand the range of applicable iterations, because the upper limit for this technique is set by the computational load, in terms of response time and power consumption, that a tag can afford.
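A minimal sketch of the proposed technique follows. SHA-256 again stands in for the unspecified hash, and the checkpoint interval of 20 iterations follows the suggestion above; everything else is an invented illustration.

# A minimal sketch of the one-time pseudonym idea: the reader sends an
# iteration index i, and the tag answers with hash^i(seed), where the
# seed is the tag's unique ID. SHA-256 stands in for the unspecified
# hash, and the precomputed range of 100 iterations is invented.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def chain(value: bytes, iterations: int) -> bytes:
    for _ in range(iterations):
        value = h(value)
    return value

class Tag:
    CHECKPOINT = 20                      # store each 20th iteration

    def __init__(self, unique_id: bytes):
        # Precomputed checkpoints reduce the per-query workload: for
        # index i the tag hashes onward from the nearest stored value.
        self.checkpoints = {0: unique_id}
        v = unique_id
        for i in range(1, 101):
            v = h(v)
            if i % self.CHECKPOINT == 0:
                self.checkpoints[i] = v

    def respond(self, index: int) -> bytes:
        base = (index // self.CHECKPOINT) * self.CHECKPOINT
        return chain(self.checkpoints[base], index - base)

tag_id = b"unique-tag-id"
tag = Tag(tag_id)

# The back end knows the ID, so it can verify any index it asked for.
assert tag.respond(57) == chain(tag_id, 57)
print("pseudonym for index 57:", tag.respond(57).hex()[:16], "...")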

Future Trends

With regard to cryptography, it is anticipated that encryption of RFID tags' content will become the norm for providing confidentiality. Basically, in many scenarios it is not necessary for a tag to be capable of performing strong encryption of its data. Once it has successfully authenticated a reader, changes to relevant data can be encrypted by the reader and written (back) to the tag. Permanent sensitive data can also be decrypted at the reader's side. But this requires efficient lightweight authentication protocols, and this area is likely to receive further attention from researchers.
However, cryptography is no panacea. One basic reason, described in the second section, is the gap between ordinary computational devices (devices without stringent limitations on semiconductor area and power consumption) and weak processing devices. Thus noncryptographic solutions, as described in the previous section, may become more important.
It should be mentioned that no IS security and privacy can be fully exercised without legal coverage. Although security- and privacy-related regulation is becoming broad and diverse, the RFID area has many specific issues that require tailored legislation. This topic, however, is outside the scope of this chapter.

Conclusion

The ubiquitous and pervasive computing paradigm is almost inherently tied to wireless communications. In addition, this paradigm rests on the massive deployment of computing devices, which therefore have to be cheap. RFID technology is a special kind of this paradigm. Due to its specific properties, it places further constraints on wireless security and privacy. These specific properties and the available existing solutions have been discussed in detail in this chapter.
The majority of solutions that support security and privacy are built on cryptography, while some promising attempts are emerging that are not tied to cryptography (e.g., blocker tags, antenna-energy analysis, tag pseudonyms). Further, a new hybrid technique for tag authentication has been proposed that deploys the principles of tag pseudonyms and one-time passwords. It compensates for the basic drawback of the pure tag pseudonyms approach, while the price to be paid (in terms of additional calculations and slightly prolonged response times) may be acceptable for many environments.
There is no doubt that RFID devices are about to become a mainstream wireless technology. Currently implemented mostly in supply chains and retail, their deployment possibilities are numerous, for example:

• Health care information systems can be improved for better handling of patients' data.
• Elderly and disabled people can benefit from new applications that are based on RFID.
• New applications for intelligent smart-homes can be built on top of RFID.
• Scientific research on certain species can be improved, and so forth.

As with every technology, RFID technology has its advantages and disadvantages. To properly ensure security- and privacy-related measures, a proper risk management strategy has to be adopted. Only by keeping the big picture of security and privacy in view can the benefits of this kind of wireless technology be assured.

References

Anderson, R., & Kuhn, M. (1996). Tamper resistance: A cautionary note. In Proceedings of the USENIX Workshop on Electronic Commerce (pp. 1-11). Berkeley: USENIX.

Bono, S., Green, M., Stubblefield, A., Juels, A., Rubin, A., & Szydlo, M. (2005). Security analysis of a cryptographically-enabled RFID device. In Proceedings of the 14th USENIX Security Symposium (pp. ?). Berkeley: USENIX.

Feldhofer, M., Dominikus, S., & Wolkerstorfer, J. (2004). Strong authentication for RFID systems using the AES algorithm. In Proceedings of the 6th International Workshop on Cryptographic Hardware and Embedded Systems (LNCS 3156, pp. 357-370). Heidelberg: Springer.

Fishkin, K.P., & Roy, S. (2003). Enhancing RFID privacy via antenna energy analysis (Memo IRS-TR-03-012, Intel Research). Santa Clara: Intel.
Garfinkel, S.L., Juels, A., & Pappu, R. (2005). RFID privacy: An overview of problems and proposed solutions. IEEE Security and Privacy, 3(3), 34-43. Los Alamitos: IEEE.

International Standards Organization (ISO). (1989). Information processing systems: Open systems interconnection - basic reference model, security architecture, part 2 (ISO 7498-2). Geneva: ISO.

International Standards Organization (ISO). (1995). IT, open systems interconnection: Security frameworks in open systems (IS 10181/1 thru 7). Geneva: ISO.

Juels, A. (2004). Minimalist cryptography for RFID tags. In C. Blundo & S. Cimato (Eds.), 4th Conference on Security in Communication Networks (pp. 149-164). Heidelberg: Springer Verlag.
Juels, A., Rivest, R., & Szydlo, M. (2003). The blocker tag: Selective blocking of RFID tags for consumer privacy. Paper presented at the 8th ACM Conference on Computer and Communications Security (pp. 103-111). New York: ACM.

Lamport, L. (1981). Password authentication with insecure communication. Communications of the ACM, 24(11), 770-772. New York: ACM.

Prasad, R., & Ruggieri, M. (2003). Technology trends in wireless communications. London: Artech House.

Roussos, G. (2006). Enabling RFID in retail. Computer, 39(3), 25-30. Los Alamitos: IEEE.
Trček, D. (2005). E-business systems security for intelligent enterprises. In M. Khosrow-Pour (Ed.), Encyclopedia of Information Science and Technology (Vol. 2, pp. 930-934). Hershey, PA: IGI Global.

Trček, D. (2006). Managing Information Systems Security and Privacy. Heidelberg/New York: Springer.

Tsudik, G. (2006). YA-TRAP: Yet another trivial RFID authentication protocol. Paper presented at the International Conference on Pervasive Computing and Communications PerCom 2006 (pp. ?-?). Los Alamitos: IEEE.

Weis, S.A. (2003). Security and privacy in radio-frequency identification devices. Unpublished master's thesis, Massachusetts Institute of Technology, Cambridge, MA.

Key Terms

Active Tag: A tag that has its own source of energy, for example, a battery.

Kill Operation: Upon receipt of a certain communication primitive, a tag becomes permanently blocked, for example, by blowing a fuse in its circuitry.

Passive Tag: A tag that is powered through electromagnetic coupling and obtains power from the reader.

Reader: A device that queries tags to obtain their IDs and that is connected to the back-end part of information systems.

RFID Technology: Wireless identification technology that operates on radio frequencies and deploys low-cost ICs.

Security Mechanism: A basis for a security service; using a particular security mechanism (e.g., a cryptographic algorithm) enables the implementation of a security service.

Security Service: A service provided by an entity to ensure adequate security of data or systems in terms of authentication, confidentiality, integrity, and nonrepudiation.

Singulation: A process (an algorithm) that enables isolation and identification of a single tag.

Tag: A small, low-cost IC with a unique ID and computational capabilities to support identification processes.

Tag's Identity (ID): A unique number for each tag that is stored in silicon with an imprinting process.

Wireless Security: Minimization of vulnerabilities of assets and resources when communicating information in electromagnetic media through a free-space environment.

This work was previously published in Handbook of Research on Wireless Security, edited by Y. Zhang, J. Zheng, and M.
Ma, pp. 723-731, copyright 2008 by Information Science Reference (an imprint of IGI Global).

Chapter 7.5
Humans and Emerging RFID Systems:
Evaluating Data Protection Law on the User Scenario Basis

Olli Pitkänen
Helsinki Institute for Information Technology (HIIT), Finland

Marketta Niemelä
VTT Technical Research Centre of Finland, Finland

Abstract

Radio Frequency Identification (RFID) technology offers a lot of promises. To redeem them, RFID applications have to respect privacy and they need to be supported by the legal system. The article evaluates how the current EU directives on data protection support emerging applications that are based on RFID tags. The evaluation is based on user scenarios that illustrate human needs in relation to technologies and applications. The article continues earlier analyses and uses more realistic and state-of-the-art applications and scenarios. We conclude by pointing out further research needs in the field of RFID and data protection.

Introduction

Radio Frequency Identification (RFID) is an important technology to enable the Internet of Things, ubiquitous computing (ubicomp), ambient intelligence (AmI), and other promising future platforms. In short, the main components of RFID technology are a tag and a reader. The tag has an electronic circuit storing data and an antenna to communicate using radio waves. The reader also has an antenna, and electronics to translate the incoming data to be processed by a computer. A reader may thus send a radio signal requesting tags to identify themselves, and tags reply by sending the information that is stored in them.

The simplest and the most inexpensive RFID tags are called passive tags. They do not have any internal power supply. Enough power for the tag

to power up and transmit a response is induced in the antenna by the incoming radio frequency signal. Passive tags are typically quite small, in
the size range of a stamp. Therefore, a passive
tag is relatively easy and cheap to place in
almost any object.
Active tags, in contrast, include internal
power supplies. They are able to communicate
further, and store and process more
information. Although active tags are more
versatile than passive tags, they can be much
more expensive, larger, and more difficult to
place.
While RFID tags become smaller and cheaper, reader technology is also developing. It is already possible to equip, for example, mobile phones with RFID readers. Thus not only tags but also readers are spreading widely, enabling an unforeseeable number of new services.
RFID technology is said to benefit not only businesses but also individuals and public organizations in many ways. It enables useful new services and applications. The benefits of RFID tags are apparent, but their exploitation has been retarded by notable obstacles. So far, there have been three main problems that have hindered the diffusion of RFID technology: First, the technology has not been mature enough. Second, there has been a lack of standards. Third, there have been severe concerns about the risks that RFID poses to end-users' privacy. In this article, we concentrate on the third problem. Especially, with the help of RFID tags, it is possible to collect and process personal information on human beings.
Many researchers have studied RFID
privacy issues in recent years. The following
brief list includes some of the important
studies related to this topic.

• Ohkubo, Suzuki, and Kinoshita (2003, 2005), Lahlou, Langheinrich, and Röcker (2005), Garfinkel, Juels, and Pappu (2005), Juels (2005), and Garfinkel (2006) have discussed various RFID-related threats and potential solutions to them.
• Langheinrich, Coroama, Bohn, and Mattern (2005) have presented some of the consequences of ubiquitous computing implied by several scenarios.
• Spiekermann and Ziekow (2006) have analyzed consumer fears associated with the introduction of RFID technology.
• Goel (2007) has outlined critical barriers in implementing RFID technologies, specifically for authentication and privacy, and provided a set of initial responses.
• Langheinrich (2007) has gathered a good overview of earlier studies in this field.
• From the legal viewpoint, Kardasiadou and Talidou (2006) have discussed the implications with an emphasis on data protection.
• Kosta and Dumortier (2008) have excellently analyzed European data protection legislation and its ambiguity in relation to RFID.

Also, some official reports have been published on RFID privacy issues. For example, in Europe, the advisory body called the Article 29 Working Party has published a working document, which aims to provide guidance to RFID deployers, manufacturers, and standardization bodies (Art 29 WP 105, 2005). In the USA, the Federal Trade Commission (FTC) has published a staff report on RFID Applications and Implications for Consumers (2005).
The above-mentioned working paper by the Article 29 Working Party was criticized on the grounds that its examples of RFID applications do not represent reality; societal benefits and a realistic appreciation of technical possibilities should be considered when judging RFID applications (Art 29 WP 111, 2005). Similar problems can be found in all the studies in this field. This is not surprising, since it is very difficult to predict what the actual applications will be. The examples are always somewhat limited. This article is not trying to exhaustively patch up the lack of realistic RFID applications. Yet, we aim at improving the picture on privacy challenges by studying several RFID, ubicomp, and AmI applications that have been developed in research projects. We are also evaluating the current European data protection law to find out how well it will suit future needs.
In the following, we first describe a few scenarios and forthcoming applications of RFID technology to illustrate potential privacy problems. In the next chapter, we depict sample technological solutions to those problems, and show that each of them has shortcomings. Thus, technology alone is not enough, but needs support from legal tools. In the following chapter, we introduce the European data protection law and evaluate its applicability to the RFID scenarios described earlier. In the last chapter, we conclude that even though the European system provides users with reasonable protection, it will be necessary to continuously follow the development to ensure that the law will not harm useful businesses and that it remains adequate for emerging technologies.

RFID Applications

Scenario methods can be used to evaluate possible future applications and services (Pitkänen, 2006a). In the following, we go through several RFID applications and scenarios to see what sort of privacy threats are likely to occur in the future.

MIMOSA

Microsystems Platform for Mobile Services and Applications (MIMOSA) was a European research project supported within the IST priority of the Sixth Framework Programme. It developed a set of scenarios to show how RFID technology could look and feel in different everyday situations. The scenarios were evaluated for credibility, acceptability, and technical feasibility. They therefore represent realistic and societally beneficial applications while also showing ambitious and guiding future possibilities (Niemelä et al., 2005).

Below, the two health care scenarios are presented in more detail, since they include interesting data protection issues. The other MIMOSA scenarios are quoted more superficially.
Health Care Scenarios

Travelling and taking care of diabetes (Niemelä et al., 2005). Ines is retired and travels a lot despite her diabetes. For diabetic persons, it is vitally important to frequently monitor their blood sugar (glucose) level. At home, Ines uses a quick blood test to monitor her blood sugar level and injects insulin regularly. However, when travelling, she feels that diabetic-special smart plasters are handier because of the irregular life during travel. The smart plaster can be worn for 24 hours at a time, and it monitors the glucose level of the blood as well as automatically adjusts the insulin dosage according to the user. Although quite expensive, the plaster is easy to use and wear.
The smart plaster analyses the glucose level of the blood sampled by a set of micro-needles. This information is sent via Bluetooth to a mobile phone. The mobile phone warns if the blood sugar level starts to deviate seriously. In addition, the information is sent to a server, which stores the data for later analysis. Based on the long-term information on glucose level variation, the diabetic, together with the supporting team, can evaluate whether the treatment has been effective.
On a four-week trip to China, Ines notices that her insulin will run out in a few days. Ines goes to the local pharmacy, in which all medicine labels are written in Chinese only. Ines uses her mobile phone to recognise equivalent insulin. The hand-held device indicates appropriate insulin with a light signal, and it also checks the compatibility of the medicine with the other medicines Ines is using, as well as with her allergies.
If all the relevant information is not
available in the database, or if the connection to
the database fails, the mobile phone suggests
contacting Ines’ family doctor or the local call
centre for advice.
The family doctor has access to Ines’ medical database on a server. When
history as agreed with Ines earlier, so the doctor
is able to follow Ines’ medical conditions on-
line whenever needed.
From a privacy viewpoint, the processing of medical information, as described in the scenario, represents a most sensitive case. One should be extremely sure that unauthorized access to that personal information is made impossible.
Another important privacy issue highlighted in this scenario is internationality: while travelling abroad, even on other continents, the individual should be able to be as confident with regard to the processing of personal information as in the home country.
Looking after Louis the toddler (Niemelä et al., 2005). Rosalita and Jorge have a 20-month-old son, Louis, suffering from several allergies and an often recurring flu. The parents have put small, lightweight wearable sensors on Louis' skin that continuously measure his skin temperature and sweating. In Louis' clothes, there are sensors that measure his heartbeat and breathing patterns. All sensor information is wirelessly sent to both Rosalita's and Jorge's mobile phones. If Louis is crying and bad-tempered with no obvious reason, the parents check Louis' condition on the mobile phone. If the sensor data values exceed certain threshold values, the mobile phone will alert its owner.
In addition, Rosalita and Jorge have installed a movement and activity monitoring system in their home and backyard. The system includes activity sensors in walls, floors, and furniture, as well as in the garden. The system monitors the lively Louis' activities when he is awake and asleep, whether he is inside the house or in the backyard. If Louis is trying to access dangerous places, for instance, to walk away from the backyard, the system alerts the parent at home by sounding an alarm on the mobile phone.
The sensor and activity data are continuously collected in the hand-held device's memory and regularly sent to a database on a server. When Louis is taken to the family doctor, either for his regular examination or because of alerting symptoms, the doctor is able to check his health condition history over the past year.
Considering the legal aspects of this scenario, the sensitive personal information on one's health is again central. Outsiders should never get access to that information. But in this case, it is also interesting who is an outsider and who is allowed to view the information. A small child needs the parents to take care of health issues, while adults do not necessarily want to share their own medical information even within the family. Therefore, it is necessary to define the group that may access the information case by case.

Other MIMOSA Scenarios

MIMOSA published some of the applications under the label of "Everyday Scenarios". 'Everyday' refers implicitly to normal usage for ordinary people in the European Union, i.e., the scenarios do not presume any special circumstances. Yet, the scenarios do not explicate cultural aspects, and thus it is possible that they do not represent everyday situations in other cultures. The everyday scenarios of MIMOSA illustrate the use of mobile-centric ambient intelligence in common situations, which often take place in public environments. The scenarios especially describe the use of tags and the physical selection of tags for interaction by touching them or pointing at them from a distance with the mobile phone. (Niemelä et al., 2005)
The sport and fitness scenarios demonstrate the use of sensor measurements to better understand performance in exercising and to maintain motivation. Several performance-related measures can be collected of a person over a long time, helping to follow progress and providing instant feedback on the user's personal mobile phone. (Niemelä et al., 2005)
The housing scenarios illustrate MIMOSA applications in housing as well as home and family contexts. The scenarios focus on the benefits received from remotely monitoring and controlling housing applications with a mobile phone, and on how this can be used to support the independent living of elderly people and ease the burden of their caretakers. (Niemelä et al., 2005)
These scenarios represent several privacy-related issues. In many cases, the technology is used in public spaces where others can easily access the data that is stored in RFID tags or transferred to a reader. The housing scenarios, on the other hand, represent situations in which most private information is to be managed.

MobiLife

MobiLife was an Integrated Project (IST-511607) in the European Union's 6th Framework Programme. It aimed to bring advances in mobile applications and services within the reach of users in their everyday life by innovating and deploying new applications and services based on the evolving capabilities of 3G systems and beyond. Enabling technologies include RFID, Bluetooth, sensors, and so on.

The project created and evaluated a set of scenarios. They illustrate the key user requirements of modern family life that can be supported by mobile services and applications, and identify the requirements on the underlying technologies that will enable the provision of these services. Selected scenarios were further developed into mock-ups and probes.
Also, the project developed a mobile service framework that identifies the essential functional blocks for the implementation of the new mobile services and applications (Räisänen et al., 2006). Especially the personalisation, adaptation, and context awareness building blocks of the framework introduce significant privacy concerns, which the privacy and trust building block tries to solve. However, technical solutions alone cannot solve the privacy issues. Therefore it is essential to assess the framework also from the legal viewpoint. It seems that MobiLife applications, like any similar mobile service systems, will be facing significant challenges with privacy and data protection. Lots of personal data will be processed and transferred. For example, the system will not only collect information on the end-users to personalize services, but also, using e.g. RFID tags and Bluetooth devices, information on the context, environment, and circumstances in which the end-users are, including information on the other people in proximity. (Pitkänen, 2006b)
The system as a whole can be distributed to a large extent. There are important legal cross-border issues related to a distributed system like those that implement the MobiLife architecture. If a system is distributed over several countries, all the applicable laws should be obeyed. For example, transferring personal information, even within the system but between organizations and/or countries, may violate data protection law. Similar problems arise if a MobiLife system is connected to other systems. So, both internal and external data processing should be legal. Also, the data protection directives are implemented in slightly different ways in different countries, and they are not applicable outside the EU. Thus there are differences, for example, in which information is to be provided to data subjects, i.e., to those whose personal data is processed. (Pitkänen, 2006b)

Technological Solutions to Privacy Problems

RFID developers are aware of the potential privacy problems that RFID applications may bring up. The engineers have also found several clever technical solutions to the problems; some of them are briefly introduced below. As discussed above in relation to the scenarios, RFID technology will be likely to process information that must not be accessed by unauthorized third parties. Also, the end-users should be able to control how the information is processed and who may access it.
Kill Tag
use them to identify goods within a logistics
chain and in the store. Many researchers have
pointed out that these RFID tags that replace
barcodes in packaging pose a threat to privacy
since the tags can be used to trace a customer.
(e.g. Garfinkel et al, 2005) The simplest way to
solve these privacy problems is to kill (i.e.
disable), or remove tags at the point-of-sale. If
the tag is no longer able to communicate, it will
not pose any privacy threats either. However, a
tag can be valuable also after sale. It is
presumed that there will be lots of use- ful
services that are based on the “Internet of
things.” If all objects around us were equipped
with RFID tags, it would enable services that
benefit individuals as well as businesses.
Today, it is hard to imagine all the innovations
that it might facilitate. Although the kill tag
approach is inexpensive and relatively secure
way to enhance privacy, it reduces these
possibilities. Also, the MIMOSA scenarios
above present RFID appli- cations in which
killing the tag is not an option. (Garfinkel et al,
2005; Goel, 2007; Ohkubo et al,
2003; Ohkubo et al, 2005)

RFID Blocker

A blocker tag prevents RFID tags from being read. RFID readers cannot read more than one tag at a time, because a reader is unable to decipher radio waves reflected back by two tags simultaneously. So vendors have developed anti-collision protocols to enable the reader to communicate with one tag at a time in rapid sequence. The blocker tag essentially confuses the reader by always responding, thereby preventing any tags from being read. (RSA, 2003; Garfinkel et al., 2005)
The blocker tag has some notable limitations. Although it does not disable RFID tags permanently, it temporarily blocks all RFID applications, including those that the person would like to use. Thus an RFID blocker significantly limits the possibilities that the technology offers. For example, the applications in the above scenarios would be much less useful if a blocker tag were used in their surroundings.
Also, Garfinkel et al. (2005) note that a sophisticated adversary might well be able to develop a reader that sometimes defeats blocker tags, and thus blocker tags provide nothing like foolproof protection.

Privacy Bit

An alternative is to set aside a logical bit on the RFID tag. This bit is initially off when items are in the shop. The bit is flipped to the on position to deactivate a tag at the point of sale. If RFID readers in shops refrain from scanning private tags, i.e., those tags whose privacy bit is turned on, then a good measure of consumer privacy will already be in place. Tags belonging to consumers will in this case be invisible to shops. At the same time, tags on items on shelves and in storage rooms, i.e., those that have not yet been purchased, will be perfectly visible. The privacy bit will not impact normal industrial use of RFID. (Juels, 2005)
Home appliances, on the other hand, should contain RFID readers capable of scanning private tags. RFID readers that scan tags for item returns in shops might likewise have this capability, if consumers want it. With proper RFID reader configuration, the privacy bit is an interesting compromise between privacy and utility. To ensure this balance, there is a need to enforce proper reader configuration and to defend against rogue readers used intentionally to infringe privacy. Thus the privacy bit is an excellent example of solutions that require both technological and legal components. (Juels, 2005)

Access Control (MIMOSA)

The MIMOSA project developed a platform that includes the following key building blocks: a personal mobile terminal device, wireless sensors exploiting the RFID technology, highly integrated readers/writers for RFID tags and sensors, low-power short-range radios, novel sensors for context sensitivity, and intuitive, user-friendly interfaces.
These building blocks are the enabling technology for mobile-centric ambient intelligence. The user is able to communicate with the surrounding environment by wirelessly reading local tags and sensors embedded in everyday objects with her personal mobile phone. In addition, the phone enables a wireless connection to the Internet. As the communication can be tied to a specific place, object, and time, this approach enables context-related information and services.
The overall MIMOSA architecture specification is an example of a highly sophisticated service architecture that uses RFID technology extensively. The architecture includes an access control component that resides on the application server side. The access control component is consulted in the case of an incoming acquisition request in order to determine the appropriate access rights for the particular application with respect to the particular data requested. (Lappeteläinen et al., 2005)
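As a toy illustration of such a component, the following sketch checks an incoming acquisition request against a simple policy before any data is released. The application names, data categories, and policy format are invented and do not reflect MIMOSA's actual interfaces.

# A toy illustration of the server-side access control component
# described above: an incoming acquisition request is checked against
# a policy before any data is released. Names and the policy format
# are invented, not MIMOSA's actual interfaces.

from dataclasses import dataclass

@dataclass
class AcquisitionRequest:
    application: str
    data_category: str

# policy: application -> set of data categories it may acquire
POLICY = {
    "family_doctor_portal": {"glucose_history", "medication_list"},
    "fitness_tracker": {"heart_rate"},
}

def access_control(request: AcquisitionRequest) -> bool:
    """Consulted on each incoming acquisition request."""
    allowed = POLICY.get(request.application, set())
    return request.data_category in allowed

print(access_control(AcquisitionRequest("family_doctor_portal",
                                        "glucose_history")))  # True
print(access_control(AcquisitionRequest("fitness_tracker",
                                        "glucose_history")))  # False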
A mechanism like this can provide an adequate privacy protection scheme for many kinds of emerging ambient intelligence services. However, sophisticated access control requires remarkable processing and data storage resources. Therefore, not all the tiny ubiquitous computing devices can be equipped with such technology for the foreseeable future. Especially, this kind of solution is usually too expensive for mass use. Consequently, such solutions will be important in certain types of services, but there will remain applications that cannot benefit from them. (Garfinkel et al., 2005; Ohkubo et al., 2003)

Legal Framework: Data Protection Directives

It is important to notice that all the technological solutions introduced so far to enhance privacy protection in RFID applications are far from perfect. Actually, it is very questionable whether any technological solution alone could completely protect privacy while simultaneously enabling all the desired applications. Some of the privacy protection technologies (e.g., the kill tag and all the expensive solutions) reduce the useful application area significantly. The rest of them, to be efficient, require strong support from legal or other non-technological systems (e.g., economic or social incentives). For example, Garfinkel's (2005 and 2006) "RFID Bill of Rights" works only as long as everybody voluntarily follows the model, unless there is a law or another strong incentive that forces them. Likewise, the privacy bit presented above does not work without proper legal support. Without good incentives or regulatory force, it is more tempting to ignore technical privacy protection solutions while developing the systems. Also, most solutions depend on people's trust in something (e.g., in technology that is said to protect privacy, in a company that claims to respect their customers' private data, and so on). A legal system that ensures reasonable protection could be the one that is trusted, and it would thus foster the technology and business. If the legal system included built-in support for adequate technical solutions, it would reduce remarkably the cost of implementing a working solution.
In the following, we briefly present the current legal framework and evaluate its ability to foster RFID applications.

The legal basis of data protection within the European Union is the EU directives on data protection, especially the general Directive 95/46/EC on the protection of personal data, but also the more specific Directive 2002/58/EC on the protection of personal data in the electronic communications sector (Kosta & Dumortier, 2008).
The Data Protection Directive applies to the processing of all personal data. Under the Directive, 'personal data' is defined very broadly and includes 'any information relating to an identified or identifiable natural person'. In assessing whether the collection of personal data through a specific application of RFID is covered by the Data Protection Directive, we must determine (a) the extent to which the data processed relates to an individual, and (b) whether such data concerns an individual who is identifiable or identified. (Art 29 WP 105, 2005; Directive 95/46/EC; Kosta & Dumortier, 2008)
Therefore, although not all the data processed in an ambient intelligence system is governed by data protection law, there will be many scenarios where personal information is collected through RFID technology. Especially if RFID technology entails individual tracking and obtaining access to personal data, data protection law is directly applicable; but also in cases where the information gathered through RFID technology is linked to personal data, or personal data is stored in RFID tags, it is likely that data protection law applies. (Art 29 WP 105, 2005; Kosta & Dumortier, 2008)
The processing of personal data is not illegal
in general. On the contrary, the data protection
law tries to enable useful processing of personal
data. However, the processing needs to be
carried out in accordance with the law.
Especially, the Data Protection Directive
(95/46/EC) requires that personal data must be

• processed fairly and lawfully;
• collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes;
• adequate, relevant and not excessive in relation to the purposes;
• accurate and, where necessary, kept up to date.

Personal data may be processed only if the data subject has given an unambiguous consent or there is another lawful basis for processing. The controller must provide the data subject with certain information, including the purposes of the processing for which the data are intended.
It is also important to note that disclosure by transmission, dissemination, or otherwise making data available to others is processing of personal data, and thus also needs consent or another lawful basis. Especially, transferring personal data outside the European Union is highly restricted.
There are some important restrictions to the applicability of data protection law. Usually, if a natural person processes personal data in the course of a purely personal or household activity, the data protection law is not applied. Furthermore, the data protection law applies only partially in journalistic and artistic contexts. Also, the law is not always applied to data processing that is related to, for example, national security, criminal investigation, or important national financial interests.
Completely automated individual decisions are restricted. The Directive sets strict limitations on decisions which produce legal effects concerning individuals and which are based solely on automated processing of data intended to evaluate the individual's personal aspects, such as performance at work, creditworthiness, or reliability.
Certain sensitive information should not be processed at all without special lawful reasons. These special categories of data include racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, data concerning health or sex life, and data relating to offences, criminal convictions, or security measures.

Evaluation

How are RFID tags and other AmI technologies going to affect data protection? Because devices that are able to exchange information on people are spreading, the quantity of privacy problems will rise. The scenarios above include a number of privacy issues. Although privacy problems are not that common today, it is predictable that they will become increasingly ordinary.
But will there also be something else? Will some qualitative changes occur as well?

First, current legislation, although it claims to be technology-neutral, is somewhat biased towards existing technical solutions, like personal computers, large displays, keyboards, and web pages. For example, according to the European Directive on privacy and electronic communications (2002/58/EC), services must continually provide the possibility, using a simple means and free of charge, of temporarily refusing the processing of certain personal data for each connection to the network or for each transmission of a communication. It would be quite easy to fulfil such requirements with a PC-based system, but very difficult with a tiny AmI device which has a minimal user interface.
Second, people’s notion on privacy is chang-
ing. We are already getting used to the idea that
while we are using for instance Internet
services, someone can be able to observe our
doings. While travelling abroad, we need to
frequently present our passports and other
documents, even though it makes it possible for
authorities to follow our paths. In the past, that
was not possible, but still most people are not
concerned about the change. Either they accept
the reduction of their privacy, because they
think it is necessary or that they get something
valuable instead, or they do not care. Anyway,
it seems that most people will not object the
gradual impairment of their privacy, and the
younger people have views on privacy that are
different from those of their parents (Acquisti
& Grossklags, 2004; Allen, 2001; Lehtinen,
2007). The expectations of privacy are very
much related to the surrounding culture and
social norms and as they slowly change, people
will also have a different notion on privacy.
For obvious reasons, medical scientists in particular have been interested in ethical and legal questions on privacy in families. For example, if they study a disease that appears to be inherited in some families, they want to collect information not only on the research subjects, but also on the whole pedigree. Based on his studies on medical pedigree research, Cook-Deegan (2001) has shown that studying a family does not reduce to studying a group of individuals one at a time. This opens the door to legal and moral concepts applied to collectives rather than individuals, which will be an increasingly important subject in scenarios such as those of MobiLife.
The MIMOSA scenarios, especially the health care scenarios, highlight the importance of data protection in relation to RFID technologies. Lots of sensitive information on data subjects' health is gathered by RFID tags, processed by mobile devices, as well as stored and further processed on a server. The scenarios clearly show how useful and valuable the technology can be for the end-user, but also how urgent it is to protect the data. As mentioned above, the processing of sensitive data is strictly restricted by the Directive. The MIMOSA Ines scenario is a good example to show the importance of this subject.
The Louis the Toddler scenario, on the other hand, is less dubious, since parents, as the legal guardians, have a right to get all the information on their children. Once again, however, it is necessary to make sure that outsiders are not able to access the sensitive health information.
Yet, recent studies have pointed out that especially teenagers consider privacy simply something that their parents and teachers do not know (Lehtinen, 2007). Therefore, it is not always the optimal solution that parents make privacy decisions on behalf of their underage children. Actually, in many cases the biggest privacy risk is not that a malicious attacker or unwanted commercial marketing accesses our private information, but that our friends and relatives find out something about us that we do not want them to know. The new technology enables people to embarrass themselves in ways that they are not aware of. As this topic is becoming increasingly common, the question of whether legal, technological, or other solutions are needed will soon require an answer.
The travelling scenarios, like the MIMOSA Ines scenario, also underline the international aspects: it is increasingly important to get an adequate level of protection also in countries which are not members of the EU and in which the EU legislation is not directly applicable.
It was noted above that many privacy protection technologies need support from the legal system to be effective. Currently, the legal systems hardly provide that support. On the other hand, if the laws support certain technologies, it becomes challenging to make technology-neutral laws, which are often considered desirable.

The sample scenarios and applications above suggest that it will require a lot of work to develop systems that comply with the data protection directives, but also to streamline the directives in a way that they do not unnecessarily harm societally beneficial services.

Conclusion

The examples presented in this article show the importance of privacy and data protection in relation to RFID and other ambient intelligence technologies. Because the usage of RFID tags and AmI technologies increases rapidly, the quantity of privacy problems will also rise. The European legal system provides individuals with reasonable privacy and data protection, but it should also be ensured that the legal system will not unnecessarily hinder the development of useful services and the information society, as sometimes seems to be the case in the above examples. Especially, the directives should be made more technology-neutral than they are today.
To identify research needs on the way from RFID towards the "Internet of Things" with respect to privacy and data protection, we conclude that it is necessary to continue studies on user needs and privacy expectations and on how well technologies and legal systems support them, as well as on what sort of new threats emerging technologies pose and how the legal system possibly hinders useful services.

References

Acquisti, A., & Grossklags, J. (2004). Privacy attitudes and privacy behavior: Losses, gains, and hyperbolic discounting. In J. Camp & S. Lewis (Eds.), The Economics of Information Security. Kluwer Academic Publishers.

Allen, A. L. (2001). Is privacy now possible? A brief history of an obsession. Social Research, 68(1).

Article 29 Data Protection Working Party (Art 29 WP 105, 2005). Working document on data protection issues related to RFID technology. 10107/05/EN, WP 105. Also available at http://ec.europa.eu/justice_home/fsj/privacy/docs/wpdocs/2005/wp105_en.pdf

Article 29 Data Protection Working Party (Art 29 WP 111, 2005). Results of the public consultation on Article 29 Working Document 105 on data protection issues related to RFID technology. 1670/05/EN, WP 111. Also available at http://ec.europa.eu/justice_home/fsj/privacy/docs/wpdocs/2005/wp111_en.pdf
Bhuptani, M., & Moradpour, S. (2005). RFID
Field Guide: Deploying Radio Frequency
Identification Systems. Prentice Hall.
Cook-Deegan, R. M. (2001). Privacy, families, and human subject protections: Some lessons from pedigree research. The Journal of Continuing Education in the Health Professions, 21.

Directive 95/46/EC of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. Also available at http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31995L0046:EN:HTML

Directive 2002/58/EC of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector. Also available at http://eur-lex.europa.eu/pri/en/oj/dat/2002/l_201/l_20120020731en00370047.pdf

Federal Trade Commission, FTC (2005). Radio Frequency IDentification: Applications and implications for consumers. A workshop report from the staff of the Federal Trade Commission, USA, March 2005. Also available at http://www.ftc.gov/bcp/workshops/rfid/
Garfinkel, S. L., Juels, A., & Pappu, R. (2005).
RFID Privacy: An Overview of Problems and
Proposed Solutions. IEEE Security & Privacy,
May/June, 2005.
Garfinkel, S. (2006). Privacy Protection and
RFID in G. Roussos (ed.): Ubiquitous and
Pervasive Commerce, New Frontiers for
Electronic Busi- ness, Springer.
Goel, R. (2007). Managing RFID Consumer
Pri- vacy and Implementation Barriers,
Information Systems Security, 16, 2007.
Juels, A. (2005). A Bit of Privacy. RFID Jour-
nal, May 2, 2005. Also available at http://www.
rfidjournal.com/
Kardasiadou, Z., & Talidou, Z. (2006). Legal
issues of RFID technology. LEGAL-IST, IST-2-
004252- SSA, D15. Also available at
ht t p://193.72.209.176/ projects/P1507/
D15%20Report%20on%20Ad- ditional
%20Legal%20Issues%20-%20final%20
version.pdf
Kosta, E., & Dumortier, J. (2008). Searching
the man behind the tag: privacy implications of
RFID technology. International Journal of
Intellectual Property Management (IJIPM),
Special Issue on: “Identity, Privacy and New
Technologies”,
2008.
Lahlou, S., Langheinrich, M., & Röcker, C.
(2005). Privacy and Trust Issues with Invisible
Langheinrich, M. (2007). RFID and privacy. In
M. Petkovic, & Jonker, W. (eds): Security, Pri-
vacy, and Trust in Modern Data Management.
Springer, 2007.
Langheinrich, M., Coroama, V., Bohn, J., &
Mattern, F. (2005). Living in a Smart Environ-
ment – Implications for the Coming Ubiquitous
Information Society, Telecommunications
Review, Vol. 15, No. 1, February 2005
Lappeteläinen, A., Nieminen, H., Vääräkangas,
M., Laine, H., Trossen, D., & Pavel, D. (2005).
Overall MIMOSA architecture specification
(OMAS). MIMOSA, IST-2002-507045,
D2.1(2). Also available at htt p://ww w.mimosa-
f p6.com / fileadmin/pdf/MIMOSA-WP2-
D2.1_2_.pdf
Lehtinen, V. (2007). Maintaining and
Extending Social Networks in IRCgalleria.
University of Helsinki, Faculty of Social
Sciences, Depart- ment of Social Psychology,
Master’s Thesis, May 2007. Also available at
https://oa.doria.fi/ handle/10024/7282
Niemelä, M., Ikonen, V., Kaasinen, E., & Välk-
kynen, P. (2005). MIMOSA updated Usage
Scenarios. MIMOSA, IST-2002-507045, D1.5.
Also available at ht t p://ww w.mimosa-f p6.com /
fileadmin/pdf/MIMOSA-WP1-D1.5.pdf
Ohkubo, M., Suzuki K., & Kinoshita, S.
(2003). Cryptographic Approach to “Privacy-
Friendly” Tags. RFID Privacy Workshop, MIT,
MA, USA,
2003.
Ohkubo, M., Suzuki K., & Kinoshita, S.
(2005). RFID Privacy Issues and Technical
Challenges. Comunications of the ACM, Vol.
48, No. 9, Sep- tember 2005.
Pitkänen, O. (2006a). Legal Challenges to
Future
Information Businesses. HIIT Publications
2006-
1, Helsinki Institute for Information
Technology HIIT. Also available at
htt p://lib.tkk.fi/Diss/2006/ isbn9512279983/
Pitkänen, O. (2006b). Legal and Regulation RSA Security Designs RFID Blocker (RSA
Framework Specification: Competence within 2003). RFID Journal Aug. 28, 2003. Also
Mobile Families and Ad-hoc Communities. IST- available at ht t p://ww w.r fidjour nal.com /
2004-511607 MobiLife, D11 (D1.6) v1.0. Also
available at ht t p://ww w.ist-mobilife.org/ Räisänen, V., Karasti, O., Steglich, S., Mrohs,
B., Räck, C., Del Rosso, C., Saridakis, T.,
Pitkänen, O., and Niemelä, M. (2007). Privacy Kellerer, W., Tarlano, A., Bataille, F., Mamelli,
and Data Protection in Emerging RFID- A., Bous- sard, M., Andreetto, A., Hölttä, P.,
Applications. Proceedings of the EU RFID D’Onofrio, G., Floreen, P., & Przybilski, M.
Forum 2007, Brus- sels, Belgium, 13-14 (2006). Basic Reference Model for Service
March, 2007. Provisioning and General Guidelines. IST-
2004-511607 MobiLife, D34b (D5.1b) 1.0.
Pitkänen, O., Virtanen, P., & Kemppinen, J.
(2008). Legal Research Topics in User-Centric Spiekermann, S., & Ziekow, H. (2006).
Services. IBM Systems Journal, Vol. 47, No. RFID:
1, a Systematic Analysis of Privacy Threats & A
2008. Also available at 7-point Plan to Adress Them. Journal of
ht t p://ww w.research.ibm. Informa- tion System Security, 1 (3) 2006.
com/journal/sj/471/pitkanen.pdf

This work was previously published in the International Journal of Technology and Human Interaction, Vol. 5, Issue 2,
edited by B. C. Stahl, pp. 85-95, copyright 2009 by IGI Publishing (an imprint of IGI Global).

Chapter 7.6
Privacy Factors for
Successful
Ubiquitous Computing
Linda Little
Northumbria University, UK

Pam Briggs
Northumbria University, UK

ABSTRACT

Certain privacy principles have been established by industry (e.g. USACM, 2006). Over the past two years, we have been trying to understand whether such principles reflect the concerns of the ordinary citizen. We have developed a method of enquiry which displays a rich context to the user in order to elicit more detailed information about those privacy factors that underpin our acceptance of ubiquitous computing. To investigate use and acceptance, Videotaped Activity Scenarios specifically related to the exchange of health, financial, shopping and e-voting information were used, together with a large-scale survey. We present a detailed analysis of user concerns, firstly in terms of a set of constructs that might reflect user-generated privacy principles; secondly, those factors likely to play a key role in an individual's cost-benefit analysis; and thirdly, longer-term concerns of the citizen in terms of the impact of new technologies on social engagement and human values.

INTRODUCTION

An individual has a right to determine how, when and to what extent information about the self will be released to another person – something commonly referred to as individual privacy (USACM, 2006). Not surprisingly, new developments in technology present challenges to the individual's rights in this respect (Price, Adam, & Nuseibeh, 2005), and so privacy issues are widely discussed by academics and designers alike (Kozlov, 2004; Dinev & Hart, 2004), most of whom respect the individual's right to control and protect their personal information (Nguyen & Truong, 2003).

Users are well aware of the need for informational privacy and frequently express concern
about their rights. E-commerce consumers, for example, have major concerns about who has access to their personal data (Cranor, Reagle, & Ackerman, 1999; Jackson, et al., 2003; Earp, et al., 2005), and show a reluctance to disclose information to commercial web services (Metzger, 2004).

However, even those consumers who hold privacy in high regard are able to recognise the benefits of disclosing information (Hinz, et al., 2007). We need to understand why it is that users uphold their right to privacy whilst simultaneously giving away sensitive personal information (Malhotra, Kim, & Agarwal, 2004). In other words, we need to better understand the cost-benefit trade-off in which e-consumers will trade personal information online in order to achieve an improved service (something referred to as the 'privacy-personalisation paradox' (Awad & Krishnan, 2006)).

The perceived costs and benefits in any transaction inevitably reflect personal beliefs. People differ with respect to the value they place on privacy – and these individual differences are reflected in scales which have been designed to measure the strength of individual feeling in this regard. These include the Concern for Information Privacy scale (Smith, Milberg & Burke, 1996) and the Internet Users Information Privacy Concerns scale (Malhotra, et al., 2004).

In keeping with the concept of some kind of individualised privacy setting, designers are increasingly allowing users to manage their own concerns by setting privacy preferences. On the Internet, at least, various architectures have been suggested that allow personalized settings (Kobsa, 2003). For example, the Platform for Privacy Preferences (P3P) allows users to set their own personal privacy preferences; if visited sites do not match these then warnings are shown – leaving responsibility ultimately with the individual user (Cranor, 2002). Guha, et al., (2008) propose a programme called 'none of your business' (NOYB) to protect privacy while online and have tested the system on social networking sites. NOYB provides fine-grained control over user privacy in online services while preserving much of the functionality provided by the service. They argue NOYB is a first step towards a 'new design paradigm of online services where the user plays an active role in performing the sensitive operations on data, while the service takes care of the rest' (p.53).
Such tools are useful, but they are not future-proof. Specifically, they could not cope with the kinds of seamless, anywhere, anyplace exchanges of personal information that are anticipated by designers of ubiquitous computing systems. Systems that collect, process and share personal information are prerequisites for the creation of intelligent environments that can anticipate users' needs and desires (Dritsas, Gritzalis, & Lambrinoudakis, 2006). Pervasive technologies are expected to be responsive to different contexts and to act on the user's behalf seamlessly – but will privacy violations inevitably ensue?
Researchers disagree. On the one hand, Olsen, Grudin, and Horvitz (2005) argue that tools could be constructed to capture quite complex privacy preferences – preferences that are tailored to the context of the exchange, the sensitivity of the enquiry and the disclosure preferences of the individual. Such tools – if feasible – would prevent privacy violations in the day-to-day exchanges of ubiquitous computing. On the other hand, Palen and Dourish (2003) argue that a-priori privacy configurations and static rules will not work, and insist that the disclosure of information needs to be controlled dynamically and needs, essentially, to be passed into the hands of software agents designed to uphold general privacy preferences.
This begs the question of just what kinds of assurances software agents might look for before agreeing to the release of personal data. As a clue to this we might start by looking at established principles underpinning the right to privacy (Kobsa, 2007). For example, the U.S. Public Policy Committee of the Association for Computing Machinery (USACM) has laid down the following principles for privacy management:
a. Minimization: Store and use only essential data and delete it once it is no longer required.
b. Consent: Provide simple opt-in and opt-out procedures that ensure consent to the storage and use of personal data is meaningful.
c. Openness: Ensure transparency in data collection and use – making salient the default procedures for the storage and use of data and being explicit about how it might be made available to others. Also ensure that privacy policies are communicated effectively.
d. Access: Provide the individual with the capacity to inspect their data and to determine how it has been made available to others, and also how to repair any violation of privacy rights.
e. Accuracy: Ensure that personal information is sufficiently accurate and up-to-date, and propagate corrections quickly to parties that have received or supplied inaccurate data.
f. Security: For all types of storage, maintain all personal information securely and protect it against unauthorized and inappropriate access or modification.
g. Accountability: Be accountable for data storage and proper adherence to privacy policies, ensuring that those responsible are trained, authorized, equipped, and motivated.
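A service could, in principle, treat these principles as an audit checklist. The sketch below is a hypothetical encoding: the principle names come from the USACM list above, but the summary strings and the audit function are our own illustration, not part of the USACM recommendations.

# Hypothetical encoding of the USACM principles as an audit checklist.
# The one-line summaries paraphrase the list above; the audit logic is
# illustrative only.

USACM_PRINCIPLES = {
    "minimization":   "only essential data stored; deleted when no longer required",
    "consent":        "meaningful opt-in/opt-out for storage and use",
    "openness":       "collection, use and onward disclosure made transparent",
    "access":         "individuals can inspect data and trace disclosures",
    "accuracy":       "data kept accurate; corrections propagated quickly",
    "security":       "protection against unauthorized access or modification",
    "accountability": "trained, authorized, motivated staff; adherence audited",
}

def audit(service_report: dict) -> list:
    """Return the principles a service cannot yet demonstrate compliance with."""
    return [p for p in USACM_PRINCIPLES if not service_report.get(p, False)]

# Example: a service that has documented only its security and consent handling.
print(audit({"security": True, "consent": True}))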

These are the kinds of privacy principles that have been established by the industry – but over the past two years we have been trying to understand whether such principles reflect the concerns of the ordinary citizen. Some of the key research questions we have been addressing are: What are users' key concerns regarding privacy management in a ubiquitous context, and do they reflect 'expert' privacy principles? Do these concerns vary as a function of context? Will users have enough confidence in privacy management procedures to hand over management and administration of their privacy preferences?
Motahari, et al., (2007) argue people do not have a complete understanding of the threats to their privacy. While users of ubicomp systems are aware of inappropriate use of their personal information, legal obligations and inadequate security, they are less aware of setting preferences for who has access, and of the social inferences that can be made through observation by other people. They further argue a holistic approach is needed, as traditional approaches and current investigations are not enough to address privacy threats in ubiquitous computing. Recognising – in line with a number of other researchers (Harper & Singleton, 2001; Paine, et al., 2007) – that privacy concerns are likely to be highly situation-dependent, we have developed a method of enquiry which displays a rich context to the user in order to elicit more detailed information about those privacy factors that underpin our acceptance of ubiquitous computing.

METHOD

To communicate the concept of ubiquitous computing (ubicomp) to the ordinary citizen we engaged with a number of key stakeholders to generate detailed scenarios that communicated something about pervasive technologies and the privacy and identity issues they evoke. The stakeholders included relevant user groups, researchers, developers, businesses and government departments with an interest in ubicomp development. Working in conjunction with relevant stakeholders we produced scenarios that were realistic and had high face validity.
Four scenarios were developed, related to health, e-voting, shopping and finance, each of which included facts about the device, the context of use, and the type of service or information the system would be used for.
The elicited scenarios were then professionally scripted and used to develop Videotaped Activity Scenarios (or VASc's). The VASc method is a tool for generating richly detailed and tightly focused group discussion and has been shown to be very effective in the elicitation of social rules (Little, Briggs, & Coventry, 2004). The VASc method allows individuals to discuss their own experiences and express their beliefs and expectations. This generates descriptions that are rich in detail and focused on the topic of interest.
For this research a media production
company based in the UK was employed to
recruit actors and videotape all scenarios. The
production was overseen by both the producer
and the research team to ensure correct
interpretation. British Sign Language (BSL)
and subtitles were also added to a master copy
of the VASc’s for use in groups where
participants had various visual or auditory
impairments.
The four scenarios are briefly described
below:

Health Scenario: Bob is in his office talking on his personal digital assistant (PDA) to a council planning officer with regard to an important application deadline. Built into his PDA are several personalised agents that pass information seamlessly to respective recipients. A calendar agent records and alerts Bob of deadlines, meetings, lunch appointments and important dates. As Bob is epileptic, his health agent monitors his health and can alert people if he needs help. An emergency management agent takes control in situations where a host of different information is needed; this agent has the most permissions and can contact anyone in Bob's contact list.

Bob is going to meet his friend Jim for lunch when he trips over a loose paving slab. He falls to the ground and loses consciousness. His health agent senses something is wrong and beeps; if Bob does not respond by pressing the appropriate key on the PDA, the agent immediately informs the emergency services. Within seconds the emergency services are informed of Bob's current situation and his medical history. An ambulance is on its way. Paramedics arrive, examine Bob and then inform the hospital of Bob's condition on their emergency device. The hospital staff are now aware of Bob's medical history and his present state, so on arrival he is taken straight to the x-ray department. A doctor receives the x-rays on her PDA. After examining Bob she confirms that he has a broken ankle and slight concussion and needs to stay in hospital overnight. After receiving treatment Bob is taken to a ward. His emergency management agent informs John (Bob's boss) of his circumstances. The emergency management agent transfers the planning application files to John's PDA so the company does not miss the deadline. The agent also informs his parents, letting them know his current state of health and exactly where he is so they can visit, and that his dog needs to be taken care of. As Bob is also head coach at a local running club, the agent informs the secretary that Bob will not be attending training the following week. The secretary only receives minimal information, through the permissions Bob has set.
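The scenario hinges on one mechanism: a single event is reported to several parties, each of whom receives only the fields that Bob's permissions allow. The sketch below illustrates that filtering step with invented field and contact names; it is not drawn from any actual agent platform.

# Illustrative sketch (not from the original study) of the permission model
# described in the health scenario: one event, several recipients, each
# receiving only the fields their permission level allows.

EVENT = {
    "status": "admitted to hospital, stable",
    "location": "City Hospital, ward 7",
    "medical_history": "epilepsy; allergic to aspirin",
    "work_files": "planning application documents",
    "errands": "dog needs care; absent from running club",
}

# Permissions Bob has set for each contact (all names are invented).
PERMISSIONS = {
    "emergency_services": {"status", "location", "medical_history"},
    "boss_john":          {"status", "work_files"},
    "parents":            {"status", "location", "errands"},
    "club_secretary":     {"status"},  # minimal information only
}

def notify(recipient: str) -> dict:
    """Build the filtered message a recipient is permitted to see."""
    allowed = PERMISSIONS.get(recipient, set())
    return {field: EVENT[field] for field in EVENT if field in allowed}

for who in PERMISSIONS:
    print(who, "->", notify(who))

Note how the club secretary receives only the status field, matching the scenario's point that minimal information flows to peripheral contacts.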
Shopping Scenario: Anita arrives at the local supermarket, grabs a trolley and slips her PDA into the holding device. A message appears on screen and asks her to place her finger in the biometric verification device attached to the supermarket trolley. Anita places her finger in the scanner and a personalised message appears welcoming her to the shop. She has used the system before and knows her personalised shopping list will appear next on the PDA screen. Anita's home is networked and radio frequency identification tags are installed everywhere. Her fridge, waste bin and cupboards monitor and communicate seamlessly with her PDA, creating a shopping list of items needed. The supermarket network is set so that it alerts Anita to special offers and works alongside her calendar agent to remind her of any important dates. As she wanders around the supermarket the screen shows her which items she needs in that particular aisle and their exact location. The device automatically records the price and ingredients of every item she puts into the trolley and deletes the information if any item is removed. When Anita is finished she presses a button on the PDA and the total cost of her shopping is calculated. Anita pays for the goods by placing her finger on the biometric device and her account is automatically debited, with no need to unpack the trolley or wait in a queue. The trolley is then cleared to leave the supermarket. Anita leaves the supermarket, walks to her car and places her shopping in the boot.
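The shopping-list behaviour described here reduces to a simple difference computation: tagged items observed at home are subtracted from the stock levels the user wants. A toy sketch, with invented item names and no real RFID middleware modelled, might look as follows.

# Toy sketch of the scenario's shopping-list mechanism: networked RFID
# readers report what is currently at home, and the list is the difference
# between desired stock and what the tags report. Illustrative only.

desired_stock = {"milk": 2, "eggs": 12, "coffee": 1}

def tags_seen_at_home():
    """Stand-in for fridge/cupboard RFID reads: item -> count present."""
    return {"milk": 1, "eggs": 0, "coffee": 1}

def shopping_list():
    present = tags_seen_at_home()
    return {item: need - present.get(item, 0)
            for item, need in desired_stock.items()
            if need - present.get(item, 0) > 0}

print(shopping_list())  # {'milk': 1, 'eggs': 12}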
E-voting Scenario: Natasha decides she wants to vote in the next election using the new on-line system. She goes on-line and requests electronic voting credentials. Shortly before polling day a polling card and a separate security card are delivered to Natasha's home. They arrive as two separate documents to reduce the risk of interception. Natasha picks up the two letters from the doormat and puts them in her pocket as she rushes out of the door to head for work. While travelling on the local underground railway system Natasha decides to cast her vote on her way to work. The letters have provided her with unique personal voting and candidate numbers which allow her to register a vote for her chosen candidate. She takes out her mobile phone and types her unique number into it. Her vote is cast by entering this unique number into her phone and sending it to a number indicated on the polling card. Her phone then shows a text message: THANK YOU FOR VOTING. YOU HAVE NOT BEEN CHARGED FOR THIS CALL. When Natasha arrives at work she logs on to the voting site to see if her vote has been registered. While she is at her computer with her polling cards on the desk in front of her, a colleague looks over her shoulder; she can see that Natasha is checking her vote but cannot see who she has voted for. Once the result of the election has been announced, Natasha checks that the correct candidate name is published next to her unique response number to ensure that the system has worked properly.
Financial Scenario: Dave is at home writing a 'to do' list on his PDA. The PDA is networked and linked to several services that Dave has authorised. While writing his list he receives a reminder from his bank that he needs to make an appointment with the manager for his yearly financial health check. He replies and makes an appointment for later that day. When he arrives at the bank he is greeted by the bank concierge system (an avatar presented on a large interface). The system is installed in the foyer of the bank, where most customers use the bank's facilities. The avatar tells Dave that the manager, Mr Brown, will be with him soon. The avatar notes that Dave has a photograph to print on his 'to do' list and asks if he would like to print it out at the bank, as they offer this service. The avatar also asks Dave to confirm a couple of recent transactions on his account prior to meeting Mr Brown.

Procedure

The four VASc’s were shown to thirty-eight


focus groups, with the number of participants
in each group ranging from four to twelve
people. Partici- pants were drawn from all
sectors of society in the Newcastle upon Tyne
area of the UK, including representative groups
from the elderly, the dis- abled and from
different ethnic sectors. Prior to attending one
of the group sessions participants were
informed about the aims and objectives of
the study. Demographic characteristics of all
participants were recorded related to: age,
gender, disability (if any), level of educational
achievement, ethnicity, and technical stance.
A decision was made to allocate participants to groups based on age, gender, level of education and technical stance, as this was seen as the best way possible for participants to feel at ease and to increase discussion. As this study was related to future technology it was considered important to classify participants as either technical or non-technical. This was used to investigate any differences that might occur due to existing knowledge of technological systems, and because heterogeneity of groups might have a negative impact on the social environment and upon group discussion due to incompatibility (Fern, 2001). Therefore participants were allocated to groups initially by technical classification, i.e. technical/non-technical, followed by gender, then level of educational achievement (high = university education or above versus low = college education or below), and finally age (young, middle, old). Overall this categorization process culminated in 24 main groups. 180 male and 145 female participants took part, with an age range of 16-89 years. For ethical and practical reasons only adults aged 16 or above took part in the study. Due to poor attendance at some group sessions these were run again at a later date. Although several participants with physical disabilities attended the main group sessions, a group session for people with visual and auditory impairments was carried out at the Disability Forum in Newcastle. The forum was considered to have easier access and dedicated facilities for people with such disabilities.
Participants were told they would be asked to watch four short videotaped scenarios showing people using ubicomp systems and to contribute to informal discussions on privacy and trust permissions for this type of technology. Once all the videos had been viewed, an overall discussion took place related to any advantages/disadvantages, issues or problems participants considered relevant to information exchange in a ubiquitous society. Participants' attitudes in general towards ubicomp systems were also noted. The duration of the sessions was approximately ninety minutes.

QUALITATIVE FINDINGS

A sentence-by-sentence analysis was applied to the transcribed data using the Atlas.ti™ qualitative software programme. Two members of the research team coded and compared the data for consistency; good inter-rater reliability was found. The data was open coded using qualitative techniques and several categories were identified. Categories that arose frequently and recurred across the majority of groups are reported in this article. Findings from all four scenarios culminated in similar categories; however, the data from the project is immense and we therefore provide only the findings from the health scenario in the qualitative section of this article.
For clarity and ease of interpretation the constructs were grouped into three categories. The first two were based on Herzberg, et al.'s (1959) Two Factor Theory of Motivation, in which hygiene factors (those factors indispensable to the acceptance or operation of a system) were divorced from motivating factors (those factors that played a more crucial role in assessing the costs or benefits of adoption). The first category we would describe in terms of principles that should underpin any technology, while the second simply describes the realities that are more likely to affect a user's decision to buy into a particular service. We also felt a number of constructs fell into a third category – related to the longer-term impact of new technologies on human values and society. These categories and their underlying constructs are shown in Table 1.

Hygiene factors

a. Credible: Participants discussed the ways in which source credibility would impact upon what information should or could be exchanged. In the health context, participants who visited their GP and/or hospital consultant on a regular basis discussed access and exchange of health information in terms of loyalty to a trusted physician and satisfaction with their performance over the years. However, participants raised concern over unknown stakeholders using ubiquitous systems to gather personal health information and then using this information to exploit people:
Table 1. Privacy constructs associated with use of a ubiquitous system and based on Hertzberg’s
(1959)
two factor theory of motivation
Motivators
Hygiene factors
Better healthcare
Longer-term Implications
Credible Convenience
Over-reliance
Secure
Dehumanisation
Reliable
Bystander apathy
Accurate
Reduced social interaction
Transparent
Enforced participation
Context aware Inflexibility
Health risks
Personalised Profile Abuse
Environmental issues
Easy to use Surveillance
Accessible
De-motivators

If you could do it through something like the BBC because it's typically British, you are going to trust the BBC, it's always been there, it's something tangible, but for a lot of older people, it's new and it's different, you know they don't trust it, whereas they trust their television because they have watched it all of their life.

b. Secure: Security of ubiquitous systems for exchanging and storing health information emerged as a key factor that would limit adoption and use. Fraudulent use, hacking, access by third parties, leakage and storage of information were all areas discussed. Participants agreed that being able to verify and access information stored on systems was needed.

How secure would the information be? It could be that you have got a specific condition. You could have a drink problem or whatever and that could get back to your employer or it could get back, you know what I mean, I would have serious concerns, not that I've got a drink problem, not yet anyway!

c. Reliable: Participants discussed the implications of system breakdown, recognising, for example, the problems inherent in a malfunctioning system that was effectively 'invisible'. How would the consequences of the breakdown be detected?

The greater worry I think is that because you have then got a health system taking care of Bob on the basis of the information held in the system, is how correct is that, is the veracity of that information, because if there was a mistake in that information, then things could go awfully wrong. So it says, I see that you are allergic to aspirin, but say actually I was allergic to something else. If that was wrong then, although she verified that, you could verify that I suppose, but you would worry that there were going to be pieces of information that might be false, that people are acting upon.

d. Accurate: Discussion highlighted human fallibility in keeping systems updated, entering the correct data and setting preferences for who has access to their health information. Data gathering and data mining by stakeholders would create profiles about a person that would contain false information.

So it is all about the information, is all this information accurate or will they make mistakes? You know will it be useful? Some of it maybe is good, and some of it not. So I don't know for other people or for myself if RFID would be accurate information. I don't think the information will be a hundred percent accurate.

e. Transparent: Participants commented systems needed to be transparent and accessible so information could be verified and changed. Participants acknowledged that this was already a problem, since many stakeholders hold personal data files that are difficult or impossible to access. A sense of losing control over personal information emerged.

I mean they don't really know where the information is going and what individuals are actually accessing it or is it just completely churned up by computers? I don't even know but the information is going somewhere and the customer, the consumer should actually have, be allowed to know where that information is going and it should be an open process, open to the consumer, if the consumer wants to know of course, some people might not want to know, but if the consumer wants to know how all that information is processed it should be open.

f. Context aware: Participants noted the dynamic and context-dependent nature of human behaviour, and questioned whether 'rules' for the disclosure of personal information could ever be sensitive enough. For example, a system programmed to alert parents to a minor accident would behave inappropriately if one of the parents was very ill or away on holiday. Participants agreed that the ordeal of changing and re-setting preferences would be tedious, time consuming and complex.

Because if it makes a decision for you and you think to yourself, I've changed my mind, I'm not in the mood for that and therefore you have mucked your system up on your computer thing and you have to go in and tell it I've changed my mind, I don't want to do this, I want to change that.

g. Personalised: Participants saw the benefits of a personalised service in certain contexts. For example, most agreed having a personalised electronic health record would bring benefits in terms of allergy alerts and reminders for people to take medication. The privacy-personalisation paradox was apparent with health information, where data sensitivity was high but the benefits were clear:

I do think the hospital should have access to your information so say, if I do have a weak heart, that should be able to convey to the hospital that plus your entire medical record.

Discussion revealed participants' concerns over whether systems would be truly sensitive to the circumstances under which health information could legitimately be exchanged. Leakage of sensitive information in inappropriate circumstances was seen as very problematic. Would the system only reveal what information was appropriate at that moment in time, or would boundaries be breached? For example, if a person was admitted to hospital with a broken foot, would a health professional have full access to a health record that revealed depression or a sexually transmitted disease?

h. Easy to use: Participants, in particular in the older age group, discussed concern over the complexity of ubiquitous systems. Comments related to the fact that existing technologies are difficult to use. Participants commented that setting preferences for who has access to information would be time consuming and complicated. Comments also related to the dynamic, complex nature of human behaviour and the fact that we are not always predictable. Participants questioned whether in reality we could actually set preferences for all types of information. Discussion also focused on age differences in technology use, experience and familiarity.

I think that is brilliant. To the younger generation they have been brought up with that technology. What about the minority groups, disabled, etc?

i. Accessibility: Participants commented widespread exclusion would occur if people had to adopt ubiquitous systems. Exclusion would occur due to age, anxiety, ability, disability and socio-economic status.

The thought of my Dad using that would cause more cognitive problems rather than solve them. It all depends on your technical ability to start off with.

Motivators

a. Better healthcare: The majority of participants discussed the concept of ubiquitous systems for exchanging health information as advantageous, in particular for people with existing medical conditions. Advantages for personal use related to convenience, allergy alerts and health professionals having immediate access to patient records when needed. Stakeholder benefits were discussed in terms of monitoring, immediate access to and updating of patient records, and marketing.

Participants agreed the type of information shared normally depends on who, what, where and why, but crucially is informed by the type of relationship they have with the other person. If their relationship is close, e.g. a hospital consultant, then the majority of information is shared quite freely. Participants agreed that electronic exchange of health information was beneficial and would create a more efficient service.

I'm just thinking about the benefits of it you know like, you know the way things work now, I mean the only benefit I would say now is electronic exchange of information that the doctor or hospital sees.

b. Convenience: All participants agreed the mobility of ubiquitous systems was advantageous and that, through diffusion, adoption would probably occur. Participants discussed ubiquitous systems in terms of convenience related to their own use and the stakeholder.

Yes, it was useful for him because he has epilepsy but if you don't have anything specific I don't know that it is that much use, that particular bit. For an elderly person who really wanted one, again you have somebody you trust, like a member of the family, to discuss what you want put in and if you don't want something put in, then you don't have it put in.

De-Motivators

a. Inflexibility: Participants commented the pressure to adopt ubiquitous systems would increase and have a negative impact on behaviour. Participants were concerned about access to health information by third parties. These concerns were discussed in terms of screening people for jobs and insurance. Participants were concerned ubiquitous systems would become tools for marketing by various stakeholders, e.g. advertising diets to people who are overweight.

I think people who join are going to be pressurised into it. You know when there are facilities there and it gets a little bit pushed and all their friends are doing it and all of their family is doing it. Look at the time here, I've got to do this, I've got to do that and package it all into one. Let's just get it all out the way in one go.
b. Profile abuse: Concerns were raised over the probability that stakeholders would collect personal information in an ad hoc manner without informing the person. The concept of profile abuse was a major concern for all participants. Participants believed profiling would lead to untold consequences. For example, a person might be refused insurance because his or her profile states he has high blood pressure.

I mean I do think that having all the information in one place and an exchange of information and the doctor and the hospital and maybe even the ambulance service being able to forward the information is good but I don't know whether I like it to that degree.

c. Surveillance: Participants commented that when using ubiquitous systems surveillance was a major issue. They discussed issues related to leakage of personal information in public settings and surveillance by others. Participants agreed surveillance would be beneficial for some people with certain medical conditions.

It could work against you like at work for checking what you are doing and everything. Will your boss know what you are doing outside of work?

In fact I wouldn't mind being tracked if I had epilepsy, you know if I was in certain circumstances or had a heart condition. In that situation I wouldn't mind in fact but generally, no.

Social implications

a. Over-reliance: Participants discussed relying too much on the system and/or themselves to exchange information, and the responsibility associated with this, as very problematic. Concern arose over trust in the information exchanged. For example, how would the user be assured that his or her health information was actually secure and free from interference from others? Participants agreed stakeholders would have to be very responsible when dealing with any electronic system that contained health data. Stakeholders should only be made aware of the relevant health information, with access and exchange therefore limited to pertinent others.

The other thing is if you actually hand over all responsibility to automated systems you know if they make a mistake in your calculation and you are not actually paying any attention, you are just trusting this, you know it is essentially dis-empowering you.

b. Dehumanisation: Participants found the concept of ubiquitous computing and the use of agent systems dehumanising (in the scenario used in this study agent systems were portrayed with human-like features). Participants commented they would not trust such systems and found the concept very impersonal.

It's all this de-humanisation is how I see it. Do his parents really want to know that he has had an accident? Why can they not wait until he can tell them himself? And alright he can't do his running club, but it's not the end of the world, they will realise something has happened, the message will get there somehow. Do we have to have everything working like clockwork?

c. Bystander Apathy: Participants discussed how existing technology has changed the way we behave and were concerned that ubiquitous systems would have a greater impact. Reference was made to ubiquitous systems making people lazy, decreasing human cognitive ability and reducing the workforce.

On the other hand, if you expected that everybody was like that and someone collapsed in the street, would it stop you going to help them, because you thought oh well the paramedics will be here in a minute, I'm not going to bother!

Participants discussed the possibility that ubiquitous systems would foster social isolation as less human-human interaction would take place; this was considered very problematic. For example, after being admitted to hospital, talking to a health professional about your symptoms and being reassured were considered beneficial. This type of interaction would be lost as there would be no need for personal contact or conversation. Participants also commented that in our social world we already leak information to others in the form of visual cues, e.g. a plaster on your foot, without any serious implications. In the physical world strangers knowing certain information about you is not problematic; however, people do not want to share the same information with friends or even family, e.g. your medical history.

Yes you are losing contact with people if you are going to be somebody sat in a room by themselves with a machine like that, talking to people on this internet kind of thing, but there's no substitute for human contact. It's wonderful discourse with human beings face to face rather than through a machine I think.

d. Reduced social interaction: Discussion highlighted how use of ubiquitous systems would result in less human-human interaction and this was considered very problematic.

We are so anti-social anyway, unless Andrew has his friends to the house and I must admit I mean I communicate with a lot of my friends now by text messages whereas before you would have called to them or you know send an email but I see less of people that I care about because it's more convenient to send them a text or an email and I hate it, I really do hate it and I think that's going to encourage more because then you're not even going to have to make the effort to send the text message, your machine is going to be sending them a text message because you're overdue writing to them.

e. Enforced participation: Participants commented little or even no choice would exist in a ubiquitous society. Comments suggested 'forced choice' would become the 'norm', making people use such systems for all forms of information exchange even if they did not want to. Participants expressed concern over the right not to reveal information having vast implications, leading to exclusion in some circumstances.

Participants were concerned about reliance on ubiquitous systems for exchanging health information reducing personal control. Discussions revealed ubiquitous systems would create 'Big Brother' societies that lacked control and choice. Concern was raised over how information would be controlled by stakeholders, i.e. storage and transmission.

You see all that information where is it going? And even if you say no I don't want you to pass my details on you never really know do you?

f. Health risks and environmental issues: Participants discussed concerns over health risks and environmental issues related to living in a ubiquitous society. Participants referred to problems with radiation from the systems and the global impact of such use. Comments related to the development and cost of ubiquitous systems and the realisation that in parts of the world people were starving; therefore, should we not focus resources on global problems?
Also we are in a time when we are starting to think more and more about the materials we use and the amount of energy we are using and whether we shouldn't be thinking as humans how we should use our energy to think better, write lists rather than use the technology there.

Older adults were concerned younger people would use ubicomp systems for exchanging information in an ad-hoc way, in particular if used for voting in political elections. Disabled participants discussed clear advantages in terms of independence and increased autonomy. Visually impaired participants commented they often had to ask others for help in social settings, e.g. the supermarket, and this can often lead to further problems; ubiquitous technologies were considered a way of having greater independence.

QUESTIONNAIRE DEVELOPMENT

From the qualitative findings in the first phase of this research a questionnaire was developed. The questionnaire was posted to all participants who took part in the focus group sessions and promoted on the Zoomerang.com website. The first section of the questionnaire addressed patterns of disclosure across different domains; these detailed disclosure patterns are not reported here.

The second part of the questionnaire was based in part on the qualitative findings above. Participants were asked to choose one of four contexts (health, lifestyle, finance and personal identity) and complete a number of questions regarding trust, privacy, usability and identity issues. Demographic variables were recorded related to age, gender, level of education, employment status and country of origin. When completing the questionnaire participants responded using 5-point Likert scales.

Participants

From an initial 1687 responses, the data set was cleaned and any incomplete questionnaires were removed. A total of 505 replies were removed from the set (mainly through incomplete answers), leaving a total of 1182 respondents: 431 health, 309 shopping, 191 finance and 281 personal identity. Of the respondents, 623 were males and 559 females. Respondents reported locations from all over the world. As might be expected, the vast majority (1013) were from the United States, 158 were from the UK and 11 were from other locations. This reflects the online population but also a bias, as the survey was placed on Zoomerang.com, a US site. Respondents reflected a wide range of ages, the majority falling in the 36-45 age group, though with a strong representation in all age groups from 18 to 65. Only the under-18 and over-75 groups showed any tailing off.

Materials

Sets of items related to trust, privacy, identity management and usability were constructed, based on findings from the qualitative phase of the project, i.e. hygiene factors, motivators and de-motivators, and longer-term implications (as outlined earlier), and on known predictor variables in the current literature (e.g. Sillence, et al., 2004). Participants were asked to indicate their responses on a 5-point scale. Items are detailed below:

Trust predictors: When disclosing personal information it is important to me that the person accessing my information is (1 not important to 5 very important): 1. Credible, 2. Reputable, 3. Knowledgeable, 4. Expert, 5. Offers personalized service, 6. Predictable and consistent, 7. Responsible, 8. Will not pass on information about me to others without my consent, 9. Reliability of the system.

Privacy predictors: When using a system that exchanged and monitored personal information I would worry about the following (1 not at all worried to 5 very worried): 10. Fraudulent use, 11. Leakage of information, 12. Hacking, 13. Surveillance, 14. Being tracked, 15. Increase in social isolation, 16. Invading my privacy, 17. Having less contact with others, 18. Talking less to others, 19. Lack of control, 20. Reduction in choice.

Privacy preferences: (1 not at all to 5 very): 21. How concerned are you about the threat to your personal privacy? 22. How important is personal privacy to you? 23. How concerned are you about the misuse of your personal information?

Identity management: When considering a system that exchanges and monitors personal information, how important are the following (1 not important to 5 very important): 24. Convenience, 25. Immediate access to information, 26. Ability to monitor funds, 27. Alert if account overdrawn, 28. Reminder to pay a bill, 29. Security of information, 30. No access by 3rd parties, 31. Secure storage of information, 32. I can access information stored about me, 33. I can verify information stored about me.

Usability: If a system existed that exchanged and monitored personal information, the following tasks would be difficult (1 not at all difficult to 5 very difficult): 34. Setting preferences for who has access, 35. Keeping the system up to date, 36. Entering correct information.

Setting preferences for a system that exchanged and monitored health information would be (1 not at all to 5 very): 37. Time consuming, 38. Tedious, 39. Difficult, 40. Dehumanising, 41. Impersonal way to communicate.

A Principal Component Analysis with Varimax rotation indicated that information exchange in ubicomp contexts was predicted by seven factors. They accounted for 68% of the total variance and each had an eigenvalue greater than 1.0. The interpretation of the factors was based on the grouping of variables from the original questionnaire.

Factor 1: Security of information, privacy (informational, physical, social) and surveillance (10, 11, 12, 13, 14, 15, 17, and 20)
Factor 2: Trust through credibility, responsibility and personalisation (1, 2, 3, 4, 5, 6, 7, 8, and 24)
Factor 3: Design of system in relation to usability, reliability and human values (9, 34, 35, 36, 37, 38, 39, 40, and 41)
Factor 4: Social concerns in relation to control and physical privacy (16, 18, and 19)
Factor 5: Benefits of using ubicomp systems (25, 26, 27, 28, and 29)
Factor 6: Data management in relation to verification and access to information (30, 31, 32, and 33)
Factor 7: Privacy preferences (21, 22, and 23)
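For readers who wish to reproduce this kind of analysis, the sketch below shows the standard sequence – principal components of the item correlation matrix, retention by the eigenvalue-greater-than-1.0 (Kaiser) criterion, then varimax rotation – on simulated Likert data. The data here are random, so the retained factor count and variance explained will not match the survey's figures.

# Sketch of PCA + varimax on simulated 5-point Likert responses.
# Simulated data only; results will differ from the published analysis.
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    """Varimax rotation of a loading matrix (standard SVD-based algorithm)."""
    L = loadings.copy()
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(n_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr**3 - Lr @ np.diag((Lr**2).sum(axis=0)) / p))
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):  # rotation criterion stopped improving
            break
        d = d_new
    return L @ R

# 1182 respondents x 41 items, mirroring the survey's dimensions.
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(1182, 41)).astype(float)

# Principal components of the item correlation matrix.
corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

keep = eigvals > 1.0                                  # Kaiser criterion
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])  # component loadings
rotated = varimax(loadings)                           # simple-structure rotation
print(f"{keep.sum()} components retained, "
      f"{eigvals[keep].sum() / eigvals.sum():.0%} of variance explained")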

Table 2. Stepwise regression analysis for health information exchange in ubicomp contexts

Predictor factor   r²     B      Std. error   β       t-value   p-value
Security           .042   .290   .048         .193    6.070     .000
Design             .056   -.197  .046         -.121   -4.231    .000
Trust              .064   -.274  .080         -.127   -3.416    .001
Data-management    .070   .256   .082         .116    3.114     .002
Benefit            .074   -.093  .042         -.068   -2.219    .027
Table 3. Stepwise regression analysis for financial information exchange in ubicomp contexts

Predictor factor   r²     B      Std. error   β       t-value   p-value
Security           .058   .318   .046         .214    6.854     .000
Data-management    .065   .336   .081         .155    4.146     .000
Benefit            .070   -.088  .041         -.065   -2.119    .034
Trust              .073   -.157  .079         -.074   -1.980    .048

Results

Stepwise regression analyses were conducted to establish those factors that predict information exchange within the four different contexts. The four dependent variables were health, finance, lifestyle and identity information, with security, trust, design, social concerns, benefit, data-management and privacy preferences as the independent variables.

Health Model

The stepwise regression for the health model produced a fit (r² = 27.1%) of the variance explained. Security (20.5%), design (3.3%), trust (1.4%), data-management (1.2%) and benefit (.7%) were all found to be predictive factors for exchanging health information in ubicomp contexts. The Analysis of Variance (ANOVA) revealed that the overall model was significant (F 5, 1176 = 18.66, p<0.001).

Finance Model

The stepwise regression for the finance model produced a fit (r² = 27.1%) of the variance explained. Security (24.1%), trust (.6%), data-management (1.4%) and benefit (1%) were all found to be predictive factors for exchanging financial information in ubicomp contexts. The Analysis of Variance (ANOVA) revealed that the overall model was significant (F 4, 1177 = 23.32, p<0.001).

Personal Identity Model

The stepwise regression for the personal identity model produced a fit (r² = 23.5%) of the variance explained. Security (18.9%), design (3%) and data-management (1.6%) were all found to be predictive factors for exchanging identity information in ubicomp contexts. The Analysis of Variance (ANOVA) revealed that the overall model was significant (F 3, 1178 = 22.91, p<0.001).

Lifestyle Model

The stepwise regression for the lifestyle model produced a fit (r² = 23.6%) of the variance explained. Social (16.3%), design (3.1%), security (1.7%) and trust (2.5%) were all found to be predictive factors for exchanging lifestyle information in ubicomp contexts. The Analysis of Variance (ANOVA) revealed that the overall model was significant (F 4, 1177 = 17.286, p<0.001).
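Stepwise regression of the kind reported above can be approximated by greedy forward selection: at each step the predictor that most improves r² is added, until the improvement becomes negligible. The sketch below illustrates the procedure on simulated factor scores; the entry criterion and data are invented, so the coefficients will not reproduce Tables 2-5.

# Forward stepwise regression sketch (simulated data; illustrative only).
import numpy as np

def forward_stepwise(X, y, names, min_r2_gain=0.005):
    """Greedily add the predictor that most improves r²; stop when the
    gain falls below a threshold. A simplified stand-in for the SPSS-style
    stepwise procedure reported in the article."""
    n = len(y)
    selected, remaining = [], list(range(X.shape[1]))
    best_r2 = 0.0
    while remaining:
        gains = []
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([np.ones(n), X[:, cols]])  # add intercept
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            gains.append((1 - resid.var() / y.var(), j))
        r2, j = max(gains)
        if r2 - best_r2 < min_r2_gain:
            break
        best_r2, selected = r2, selected + [j]
        remaining.remove(j)
        print(f"added {names[j]:<15} cumulative r2 = {best_r2:.3f}")
    return selected

# Simulated scores for the seven factors and a disclosure outcome.
rng = np.random.default_rng(1)
names = ["security", "trust", "design", "social",
         "benefit", "data-mgmt", "privacy-prefs"]
X = rng.standard_normal((1182, 7))
y = 0.2 * X[:, 0] - 0.12 * X[:, 2] + rng.standard_normal(1182)
forward_stepwise(X, y, names)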
Table 4. Stepwise regression analysis for personal identifiable information exchange in ubicomp contexts

Predictor factor   r²     B      Std. error   β       t-value   p-value
Security           .036   .257   .039         .207    6.588     .000
Design             .048   -.146  .039         -.109   -3.787    .000
Data-management    .055   -.169  .056         -.093   -3.015    .003

Table 5. Stepwise regression analysis for lifestyle information exchange in ubicomp contexts

Predictor factor   r²     B      Std. error   β       t-value   p-value
Social             .027   .085   .026         .103    3.210     .001
Design             .037   -.122  .038         -.092   -3.199    .001
Security           .044   .168   .042         .138    4.024     .000
Trust              .055   -.198  .053         -.113   -3.712    .000

DISCUSSION

Earlier we described three key research questions as follows: 1. What are users' key concerns regarding privacy management in a ubiquitous context, and do they reflect 'expert' privacy principles? 2. Do these concerns vary as a function of context? 3. Will users have enough confidence in privacy management procedures to hand over management and administration of their privacy preferences?

In response to 1, we have presented a detailed analysis of user concerns in terms of a modified Herzberg model that identifies firstly a set of constructs that might reflect user-generated privacy principles; secondly, those factors likely to play a key role in an individual's cost-benefit analysis; and thirdly, those that reflect longer-term concerns of the citizen in terms of the impact of new technologies on social engagement and human values. The hygiene factors listed in Table 1, and captured by the regression analyses above, do not differ greatly from the USACM principles described earlier, save for two important constructs. Firstly, trust is important, as we are now talking about the dynamics of releasing information to one party rather than another. Trust in this respect is based on credibility, responsibility and personalisation, which are therefore crucial elements in the adoption and use of ubiquitous systems. Secondly, usability emerges strongly (being part of the 'design' factor in three out of the four regression analyses). The fact that usability emerged is an important message for developers and researchers – ease of use is potentially being overlooked as a crucial component of privacy management.
In response to 2, we have good reason to believe that while there are a number of universal privacy concerns, there is a residual effect of context. In the qualitative data, people are more accepting of the seamless transmission of sensitive data in a health context because the benefits tend to outweigh the perceived costs. Also, the presence and influence of privacy factors differs across the four contexts we presented. Different privacy models emerged in the quantitative data, with, for example, the simplest model underpinning the management of personal identity and the most complex model underpinning health. These findings expand and support the work of Hong et al. (2004), who state that designers of ubicomp systems need to deploy a privacy risk analysis considering social and organisational context. This type of analysis considers: Who are the users? What kind of personal information is being shared? How is personal information collected?
Finally, we asked whether people are likely to have enough confidence in ubiquitous technologies to hand over management of privacy preferences. In answer to this we turn to our interview data, where it is clear that – providing the benefits are made apparent and the 'hygiene factors' are met – users would be willing to hand over to agent technologies in this way. One important point to note, however, is that members of the public are expressing concerns – about the de-humanising effects of new technologies – that are relatively rarely considered in the relevant literature. Such 'human values' issues are sometimes marginalized, but may form a key part of the ubiquitous computing agenda.

In conclusion, our findings provide evidence that the principles established by industry, e.g. USACM, are inadequate. Basing principles on individual privacy or providing services that allow users to manage their own privacy preferences is not enough. Motahari, et al., (2007) argue traditional approaches are not sufficient in addressing privacy threats in ubiquitous systems; we agree with this statement. More importantly, the development of privacy principles should incorporate situation, context and design.

REFERENCES

Awad, N.F., & Krishnan, M.S. (2006). The personalization privacy paradox: An empirical evaluation of information transparency and the willingness to be profiled online for personalization. MIS Quarterly, 30(1), 13-28.

Cranor, L. (2002). Web privacy with P3P. USA: O'Reilly & Associates.

Cranor, L.F., Reagle, J., & Ackerman, M.S. (1999). Beyond concern: Understanding net users' attitudes about online privacy. In I. Vogelsang & B. Compaine (Eds.), The Internet Upheaval: Raising Questions, Seeking Answers in Communications Policy (pp. 47-60). USA: MIT Press.

Dinev, T., & Hart, P. (2004). Internet privacy concerns and their antecedents: Measurement validity and a regression model. Behaviour & Information Technology, 23(6), 413-423.

Dritsas, S., Gritzalis, D., & Lambrinoudakis, C. (2006). Protecting privacy and anonymity in pervasive computing trends and perspectives. Telematics and Informatics, 23(3), 196-210.

Earp, J.B., Anton, A.I., Aiman-Smith, L., & Stufflebeam, W. (2005). Examining Internet privacy policies within the context of user privacy values. IEEE Transactions on Engineering Management, 52(2), 227-237.

Fern, E.F. (2001). Advanced Focus Group Research. London: Sage Publications.

Guha, S., Tang, K., & Francis, P. (2008). NOYB: Privacy in online social networks. WOSN'08, August 18, 2008, Seattle, Washington, USA, 49-54.

Harper, J., & Singleton, S. (2001). With a grain of salt: What consumer privacy surveys don't tell us. http://www.cei.org/PDFs/with_a_grain_of_salt.pdf

Herzberg, F., Mausner, B., & Snyderman, B.B. (1959). The Motivation to Work (2nd ed.). New York: John Wiley & Sons.

Hinz, O., Gertmeier, E., Tafreschi, O., Enzmann, M., & Schneider, M. (2007). Customer loyalty programs and privacy concerns. 20th Bled eConference eMergence: Merging and Emerging Technologies, Processes and Institutions, June 4-6, Bled, Slovenia.

Hong, J.I., Ng, J.D., Lederer, S., & Landay, J. (2004). Privacy risk models for designing privacy-sensitive ubiquitous computing systems. Proceedings of the 2004 Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, Cambridge, MA, USA.
Jackson, L., von Eye, A., Barbatsis, G., Biocca, F., Zhao, Y., & Fitzgerald, H.E. (2003). Internet attitudes and Internet use: Some surprising findings from the HomeNetToo project. International Journal of Human-Computer Studies, 59, 355-382.

Kobsa, A. (2007). Privacy-enhanced personalisation. Communications of the ACM, 50(8), 24-33.

Kobsa, A. (2003). Component architecture for dynamically managing privacy constraints in personalized web-based systems. In Proceedings of the Third Workshop on Privacy Enhancing Technologies, Dresden, Germany: Springer Verlag.

Kozlov, S. (2004). Achieving privacy in hyper-blogging communities: Privacy management for ambient technologies. http://www.sics.se/privacy/wholes2004/papers/kozlov.pdf

Little, L., Briggs, P., & Coventry, L. (2004). Videotaped activity scenarios and the elicitation of social rules for public interactions. BHCIG UK Conference, Leeds, September.

Maguire, M.C. (1998). A review of user-interface guidelines for public information kiosk systems. International Journal of Human-Computer Studies, 50, 263-286.

Malhotra, N.K., Kim, S., & Agarwal, J. (2004). Internet users' information privacy concerns (IUIPC): The construct, the scale and a causal model. Information Systems Research, 15, 336-355.

Metzger, M.J. (2004). Exploring the barriers to electronic commerce: Privacy, trust, and disclosure online. Journal of Computer-Mediated Communication, 9(4). http://jcmc.indiana.edu/vol9/issue4/metzger.html

Motahari, S., Manikopoulos, C., Hiltz, R., & Jones, Q. (2007). Seven privacy worries in ubiquitous social computing. Symposium on Usable Privacy and Security (SOUPS), 171-172.

Nguyen, D.H., & Truong, K.N. (2003). PHEmail: Designing a privacy honoring email system. Proceedings of CHI 2003 Extended Abstracts, Ft. Lauderdale, Florida.

Olsen, K., Grudin, J., & Horvitz, E. (2005). A study of preferences for sharing and privacy. CHI 2005 Extended Abstracts on Human Factors in Computing Systems.

Paine, C.B., Stieger, S., Reips, U.-R., Joinson, A.N., & Buchanan, T. (2007). Internet users' perceptions of 'privacy concerns' and 'privacy actions'. International Journal of Human-Computer Studies, 65(6), 526-536.

Palen, L., & Dourish, P. (2003). Unpacking privacy for a networked world. Proceedings of the ACM CHI 2003, 5(1), 129-135.

Price, B.A., Adam, K., & Nuseibeh, B. (2005). Keeping ubiquitous computing to yourself: A practical model for user control of privacy. International Journal of Human-Computer Studies, 63(1-2), 228-253.

Sillence, E., Briggs, P., Fishwick, L., & Harris, P. (2004). Trust and mistrust of online health sites. Proceedings of CHI 2004, April 24-29, 2004, Vienna, Austria, 663-670. ACM Press.

Smith, J.H., Milberg, S.J., & Burke, S.J. (1996). Information privacy: Measuring individuals' concerns about organizational practices. MIS Quarterly, 167-196.

Teltzrow, M., & Kobsa, A. (2003). Impacts of user privacy preferences on personalized systems: A comparative study. In Proceedings of CHI 2003.

USACM (2006). Policy Brief: USACM Policy Recommendations on Privacy, June 2006. http://usacm.acm.org/usacm/Issues/Privacy.htm

Westin, A. (1967). Privacy and Freedom. New York: Atheneum.

This work was previously published in the International Journal of E-Business Research, Vol. 5, Issue 2, edited by I. Lee, pp.
1-20, copyright 2009 by IGI Publishing (an imprint of IGI Global).

Chapter 7.7
Privacy Threats in Emerging Ubicomp Applications: Analysis and Safeguarding

Elena Vildjiounaite
VTT Technical Research Centre of Finland, Finland

Tapani Rantakokko
Finwe LTD, Finland

Petteri Alahuhta
VTT Technical Research Centre of Finland, Finland

Pasi Ahonen
VTT Technical Research Centre of Finland, Finland

David Wright
Trilateral Research and Consulting, UK

Michael Friedewald
Fraunhofer Institute Systems and Innovation Research, Germany

ABSTRACT

Realisation of the Ubicomp vision in the real world creates significant threats to personal privacy due to constant information collection by numerous tiny sensors, active information exchange over short and long distances, long-term storage of large quantities of data, and reasoning based on collected and stored data. An analysis of more than 100 Ubicomp scenarios, however, shows that applications are often proposed without considering privacy issues, whereas existing privacy-enhancing technologies mainly have been developed for networked applications and, thus, are not always applicable to emerging applications for smart spaces and personal devices, especially because the users and their data are not spatially separated in such applications. A partial solution to the problem of users' privacy protection could be to allow users to control how their personal data can be used. The authors' experience with mobile phone data collection, nevertheless, suggests that when users give their consent for the data collection, they don't fully understand the possible privacy implications. Thus, application developers should pay attention to privacy protection; otherwise, such problems could result in users not accepting Ubicomp applications. This chapter suggests guidelines for estimating threats to privacy, depending on real world application settings and the choice of technology, and guidelines for the choice and development of technological safeguards against privacy threats.

INTRODUCTION

After having read a large number of scenarios
of emerging Ubicomp applications (found in project deliverables and research publications which describe prototypes of smart spaces, smart personal devices, objects and their functionalities) and visionary future Ubicomp scenarios (found mainly in roadmaps), we concluded that most scenarios present a sunny, problem-free vision of our future. With the exception of the surveillance problem in some cases, most scenarios do not consider the privacy issues that the new technologies are likely to raise. For example, they do not discuss possible privacy problems due to conflicts between people's interests or personal curiosity.
The discovery that Ubicomp technologies raise privacy problems is not new, and research into privacy protection is actively going on, but after a state-of-the-art review of work on privacy protection, we have come to the conclusion that most of this work deals with privacy protection in such network applications as m-commerce, Web browsing, virtual meetings, location-based services, and so forth, where users can be physically separated from their personal data. Even in these applications, no scalable solutions fully applicable in real life exist, and this lack of protection allows large-scale eavesdropping, as we know from the news (Web site of the American Civil Liberties Union and the ACLU Foundation, 2006).
The work on privacy protection in smart spaces and in connection with personal devices is even less mature than that concerned with network applications, while visionary Ubicomp scenarios suggest many situations in which confidential data and secrets occasionally can be discovered. When reading Ubicomp scenarios, however, we rarely found any discussions about the possible implications of a new technology for privacy, and even fewer descriptions of privacy protection measures. M. Langheinrich has collected a list of excuses why privacy protection is rarely embedded in new applications (Langheinrich, 2006), but such a practice can lead to the danger that problems appear after an application has already been developed and installed, and then either the users are left to suffer from privacy violation problems, or application developers are faced with the negative reactions of the users and the need to update the application. One recent example is a bus ticketing application in Helsinki which was storing data about travellers' routes. The application received bad publicity (criticism in the newspaper Helsingin Sanomat (Koponen, 2002)), and updating an already installed application would obviously be a costly operation. In cases where users' criticism is directed against an already installed application which runs on non-reprogrammable microcontrollers (a common situation in the case of a commercial application), an application update can be very costly. Thus, embedding privacy protection in Ubicomp applications at the development stage would be beneficial for application developers.
The main emphasis in this chapter will be on possible problems rather than the benefits of new technologies and applications, because readers of Ubicomp papers usually encounter descriptions of benefits rather than descriptions of problems. The success of Ubicomp development also requires the
understanding of possible problems, however, and safeguarding against them, including safeguarding against possible privacy implications. There is no doubt that the notion of privacy alters with time, so that with the invention of phones (and especially mobile phones), for example, physical distance from other people can no longer guarantee privacy. Similarly, with the development of cameras (especially digital cameras, with their capability for recording more views than their owners can sort through carefully), people have become used to seeing more details of other people's lives than was ever possible before.
There are very important differences between past and future technologies, however, which could change our lives more quickly than we could possibly adapt our understanding of the world, human behaviour, ethics and laws to the new technologies: first, past technologies were largely controlled by a human, whereas future technologies will be capable of automatic actions. Since it is much easier to notice a human observer than a tiny sensor, it will be possible to collect much more data without people being aware of it. Second, large-scale accumulation of data in a digital form will no longer require manual (slow) human work in order to connect information from different sources, so that it may be easier to assemble the full life story of a person in the future than it was to find scattered pieces of information in the past. Third, modern devices are smaller in size, more reliable and move closer to the human body than was the case in the past, and it is proposed that these could be embedded into clothes, watches or jewelry. Consequently, it will become easier to have always-on mobile devices, but more difficult to switch them off. Our perception of the privacy aspect known as the "right to be left alone," for example, has changed significantly with the invention of stationary phones and especially mobile phones, but it has still been preserved by the possibility of switching the phone off or not hearing it ringing when taking a shower or walking in a noisy place (and it is not easy to check whether a person did not hear a phone call or was simply not in the mood to answer it). Will one still be able to avoid undesired conversation in the Ubicomp future of embedded connectivity, or will society change so that people will not be offended or angry when their children, relatives or subordinates do not answer a call that they have evidently heard? How society will adapt to the capabilities of new technologies is an open question, but we think that technology developers should not rely on human nature changing quickly, and the results of deploying new technologies in computer-supported collaborative work (Bellotti, 1993) support this opinion.
This chapter first summarises the views of different researchers on what privacy is, after which it will briefly describe how Ubicomp researchers see the world of the future and what possible implications for users' privacy may not be safeguarded in the scenarios. After that, the chapter will present the authors' experiences of mobile phone data collection and users' opinions regarding their privacy expectations before and after data collection, which suggest that the privacy implications were under-estimated before data collection. It will then present the state of the art in privacy-enhancing technologies and highlight the gaps that create privacy risks. After that it will suggest guidelines for estimating the threats to privacy, depending on real world application settings and on the choice of technology, as well as guidelines for the choice and development of technological safeguards against these threats.

PRIVACY EXPECTATIONS IN THE REAL WORLD

It is suggested in the work of Lahlou et al. (Lahlou, 2003) that privacy protection requires an understanding of how new technologies change the ways that have developed in the physical world, where personal privacy is protected by the following borders (Bohn, 2005):
• Natural Borders: physical borders of observability, such as walls, clothing, darkness, facial expression (a natural border protecting the true feelings of a person)
• Social Borders: expectations with regard to confidentiality in certain social groups, such as family members, doctors and lawyers, for example, the expectation that your colleagues will not read personal fax messages addressed to you
• Spatial or Temporal Borders: expectations by people that parts of their lives can exist in isolation from other parts, both temporally and spatially, for example, a previous wild adolescent phase should not have a lasting influence on the current life of a father of four, or a party with friends should not affect relations with colleagues
• Borders due to Ephemeral or Transitory Effects: expectations that certain actions or spontaneous utterances will soon be forgotten or simply unnoticed because of limitations on people's attention and memory

These borders are bi-directional; that is, people expect these borders not only to protect the person's feelings, appearance, actions, and so forth from the outside world, but also to protect the person from intrusions by the outside world. Physical borders are perhaps perceived as most reliable, as can be illustrated by how poker players control their faces, for instance, or by the custom of knocking on the closed door of somebody's private room or office. People also have a well-developed mental model of the limits of their own or others' ability to notice and remember details of what is going on around them. For example, people in a conference room usually expect that others' attention and memory will be devoted to the content of a presentation rather than to the auditory aspect. Concerning social and spatial borders, people perceive them as not so strong; for example, the likelihood of encountering the same people in different circumstances or of broken confidentiality is not negligible. In general terms, the stronger the personal belief that a certain border is reliable, the more difficult it will be to adapt to its violation by a new technology. Experiments in the research area of computer-supported collaborative work suggest one example of the adaptation difficulty. In order to facilitate awareness and communication between colleagues, video cameras were installed in the offices of participants. Although this awareness proved to be useful, the experiments showed that people often act according to the "old" mental model of being reliably hidden by office walls (Bellotti, 1993).

FUTURE VISION OF THE UBICOMP WORLD AND PROBLEMS WITH PRIVACY

A joint vision of Ubicomp researchers regarding the future world was formulated after reading more than 100 roadmap scenarios and research publications. This vision presents a world in which everything is connected and where any activity is possible in any place, supported by applications installed in the environment and in personal devices. Research activities have been devoted to supporting communications between family members and colleagues in different locations (e.g., between workplaces and homes (Aschmoneit, 2002; Ducatel, 2001; Jansson, 2001) and between moving people (Aschmoneit, 2002; Ducatel, 2001; ITEA, 2004), often via video links), and to supporting remote shopping (Ducatel, 2001), learning (Ducatel, 2001), and even remote health care (Bardram, 2004; ITEA, 2004). The future vision also pictures a very safe world, in which technologies ensure safe driving (ITEA, 2004; Masera, 2003) and safe control of home appliances (e.g., locking and unlocking of doors at home (Masera, 2003)), and help in finding keys (Orr, 1994) and toys (Ma, 2005). Technologies are also expected to help people to remember past
events, both personal (Gemmel, 2004; Healey, 1998) and work-related (Aschmoneit, 2002), and to give reminders regarding current and future activities and duties (Kim, 2004).
A typical vision of the Ubicomp future involves technology caring for a person, correctly identifying that person's wishes and environment, and reacting to them appropriately. An attractive example of such a vision can be found in the Flying Carpet "Daily Life" visionary scenarios of the Mobile IT forum (Kato, 2004). It is worth noting, however, that the Flying Carpet scenario differs from many others in the sense that its interaction is human-initiated, whereas many other scenarios are more privacy threatening because they suggest that technology will be able to anticipate a person's needs and to act on behalf of that person (e.g., Ducatel, 2001).

Figure 1. Mobile IT forum, part of a "Daily Life" scenario from FLYING CARPET, Version 2.00 (Kato, 2004), page 4

There are many positive sides to the Ubicomp vision of the future, but it does not seem realistic to assume that Ubicomp technologies will be problem-free. It is thus important to understand the possible problems and to safeguard against them whenever possible. In some cases, achieving safety is more important than protecting privacy. It has been observed, for example, that elderly people can trade their privacy for support and safety (Mynatt, 2000), and that the saving of people's lives through improvements in health care or detection of the location of emergency calls requires the storage of personal data which could potentially lead to violations of privacy. In such applications, the most important measure of privacy protection is first to store as little data as are needed for the application to function properly, and second, to ensure that the data cannot be easily accessed by unauthorised people. In many other applications, however, it is important not to trade off privacy for convenience in a blind fashion; for example, the personalisation of recommender systems or location-based services can be designed in more or less privacy-protecting ways. The main goal of this chapter is to point to certain important problems and to suggest methods for improving privacy protection in various Ubicomp applications, so that application developers can choose the most suitable methods.
For example, when reading Ubicomp scenarios one can rarely find a description of how access to personal data or to actuators can be controlled, whereas user-friendly access control is one of the crucial factors determining the success of the Ubicomp concept. The example of mobile phones shows that personal data in a phone (such as photos, an address book, or a calendar) will in practice be available to anybody who picks up the phone, due to the inconvenience of password authentication, which takes place once, when the phone is being switched on, after which the phone remains in the "on" state for many days
or weeks, unprotected. Although it has always been possible to look through somebody's address book, diary or photo albums in order to find the desired information, this has usually required visiting that person's room and searching through the items there, which may be difficult, at least for a person living in another place. Nowadays, personal mobile devices can store as much in the way of information and photos as several old-style address books, diaries, and photo albums (and will store even more when Personal Lifetime Store application scenarios (Gemmel, 2004) become a reality and when mobile payment logs can also be stored), but they are far less well protected, because they are not locked inside a house or a drawer. Personal mobile devices accompany their owners everywhere and can reveal large quantities of stored personal data, because the users often bypass the inconvenient security measures available for data protection (such as entering a password or rolling a finger across a fingerprint sensor). Since no convenient, user-friendly authentication has yet been developed, personal Ubicomp devices and non-personal smart spaces are likely to disclose their users' data and secrets, and we are now obliged to suggest how to reduce this risk.
The threats to privacy presented in this chapter are not really new, because the reasons for their existence (including conflicts of interest between people and organisations, human curiosity, envy, greed, and beliefs in one's own right to control others) are age-old problems. On the other hand, technology has changed the ways in which personal data can be disclosed.
The components of a typical Ubicomp application are shown in Figure 2. Each component can cause problems in its own way. Privacy problems essentially fall into three major groups, the best-known of which concerns problems associated with information flow from the user, that is, due to the acquisition, transmission, and storage of personal data in large quantities. Most privacy-enhancing technologies (PETs) are being developed for the protection of personal data in networked applications, but new Ubicomp applications present new challenges. It has often been proposed, for example, that awareness between family members and colleagues should be supported via the transmission of video data, which violates traditional personal expectations regarding the notion that "if I am hidden behind a wall, I am invisible."

Figure 2. A generic view of an Ubicomp application: the thin arrows indicate information collection, transmission and storage; the thick arrows indicate information push.

Memory aids (personal (Gemmel, 2004; Healey, 1998) and recordings
of work meetings (Aschmoneit, 2002)) imply the storage of raw video data, which violates personal expectations regarding the limits of humans' attention and memory. There are two reasons for these problems. First, as work in the computer-supported cooperative activity domain has shown (Bellotti, 1993), humans are not accustomed to environments full of sensors, and continue to behave according to their expectations regarding their privacy in the real world. The second reason is the blurring of boundaries between "traditional" application domains. For example, work-related communications from home can intrude into one's personal life, and conversations on private matters from smart workplaces can be recorded automatically along with work-related conversations. In addition, sensors which were traditionally used only in certain domains (e.g., physiological sensors associated with health care, video cameras for security purposes) have been suggested for use in other domains, such as entertainment. Since the traditional view of the entertainment domain assumes that its data are not very confidential (and consequently do not require strong protection measures), there is a danger of the disclosure of health problems detected by physiological sensors in the entertainment domain.
The second group of privacy problems concerns those caused by linkages between different kinds of data (mainly stored data). For example, it has been proposed that a personal memory aid should not record everything, but instead, it should measure the personal arousal level via skin conductivity sensors and other physiological sensors and record only the exciting scenes (Gemmel, 2004; Healey, 1998). Since none of the proposed memory aid prototypes has good access control over stored data, these would allow a young boy's parents, for example, to find out easily which girl their son is most interested in. Physiological sensors also have been proposed for measuring the degree of approval of TV programmes (Nasoz, 2003; Palmas, 2001). In this case, the linking of personal physiological responses to information on TV programmes can facilitate the surveillance of citizens from the point of view of whether they support government decisions or not. The linkability problem is in general acknowledged, and PETs in networked applications aim at protection from such data linkability. In such applications as smart spaces and personal devices, however, the problem of data linkability has received less attention, and privacy problems with memory aids, for example, are usually discussed from two points of view: first, how to achieve agreement with the people recorded; and second, whether the police could search through the recorded data or not. The problem of avoiding the curiosity of family members is usually ignored. The dangers of data linkages are in general underestimated by researchers, as we have observed in the example of our own data collection system (see the next section).
The third group of privacy problems comprises those caused by information flow towards the users, either because technology-initiated communication intrudes into personal life, because the content of the information can disclose private information, or because actuators fail (e.g., to open or close a door at home). Intrusions of technology into personal life can happen when an application interacts with people (e.g., reminds someone to do something) or when an application does not allow people to escape communication with others. Currently, it is easy for a person to say that he missed a phone call because the battery in his mobile phone was empty, or because of street noise, and so forth, but will it be as easy to avoid undesirable communications in the future, when communication is embedded in clothes and battery life is longer? Most parents have observed how their children miss phone calls or "forget" mobile phones at home when they want to escape from their parents' control; and although such situations are harmful for the parents' nerves, it seems that in most cases, it is necessary for children to make their own decisions and take risks.
The content of information can disclose personal data in two possible ways: if it is delivered in the presence of other people and they hear (or see) the message (e.g., if a movie recommender application suggests that the users should watch adult videos in the presence of their children), or if the information contains data about people other than the user (as one can notice more details during the playback of a memory aid than during a live conversation).
This group of problems is the least studied of all, and PETs dealing with these problems are almost non-existent. What is also important about this group of problems is that technology-initiated communications can reduce user acceptance (users do not always like it when the technology makes the decisions) or hinder personal development. As the work of Nissenbaum (2004) shows, "the right to be left alone" is very important for personal development because people need relative insularity to develop their goals, values, and self-conceptions. Furthermore, if technology cares about personal safety and comfort and relieves people from many responsibilities (such as remembering to take one's keys or to close a door), it becomes more difficult to develop responsibility in children. Children traditionally learn to be responsible for not losing keys, for doing their homework, for taking the right books to school, and for other small everyday tasks, but if all these responsibilities are shifted to Ubicomp technologies, what will replace them in growing children? To the best of our knowledge, the scenarios do not suggest any replacement. Instead, the role of children in many Ubicomp scenarios is limited to playing computer games. Research into computer-supported learning is an exception, but even there learning is mainly supported by augmented reality (Price, 2004), which is also a kind of game. One example of Ubicomp scenarios regarding children is the ITEA roadmap (ITEA, 2004) screenplay of "the Rousseaus' holiday": holidays spent by a family consisting of a mother, father, and two children (10 and 13 years old). The screenplay describes how the family goes to a summer cottage "at the seaside on Lonely Island off the Mediterranean coast of France" and that "the kids are unhappy to leave home … because of the high-end, virtual reality video and gaming entertainment equipment, which was recently installed … in their house" (p. 134). If the roadmap leads us to a world in which school children are not interested in Lonely Islands, will we want such a world?

AN EXAMPLE OF UNEXPECTED PRIVACY PROBLEMS IN MOBILE PHONE DATA COLLECTION

In our case study, the mobile phone usage data were collected with the goal of personalisation and context adaptation of mobile phone applications. The data gathered were a rough location estimate based on phone cell ID (the granularity of cell ID-based positioning in our case ranged from several hundred metres to several kilometres), and phone usage data comprised of the duration of incoming and user-made phone calls and usage of different phone applications such as SMS typing, games, a calendar, whether the keyboard was in use or not, and so forth. No phone numbers were logged, nor any SMS contents or application entries, just the start and end of phone calls, the opening and closing of an application, the keyboard being in use, and so on. In addition, logs of Bluetooth activity were collected; each phone that participated in data collection was recording the IDs of all Bluetooth devices in its communication range. All the data items were time stamped with absolute time. Data were collected with respect to five users around the clock for five to seven days.
It is important to note that the users who participated in the data collection were informed about what data were being collected and gave their consent, largely because they did not expect any privacy problems. The users did not want precise location tracking (by GPS, for example) and did not want the content of their actions to be logged, but they allowed logging of the simple facts of actions taking place. It is also important to note that all our users were application developers in the field of context-aware computing, with several years of experience in developing Ubicomp applications. Thus, one would expect them to give their consent to data collection with a much better understanding of the consequences than the average person.
After the data had been collected and analysed, however, we found that when all the seemingly harmless components were linked together, they revealed a lot about the users. They actually allowed us to figure out what kind of person each user was: how communicative they were; whether they usually initiated communications or just received calls and SMS from others and reacted to them; whether or not they had regular routines in their life; whether they were hard-working people; whether they had an active night life; and so on. After discovering that the data told us a lot about the users, we asked them whether they expected such a result, and whether they liked this result. For all the users but one, the result was quite surprising, and only one of them told us that he did not care whether other people could gain such information about him. Thus, the power and the unpleasant consequences of information linkage are largely under-estimated even by developers of Ubicomp applications.
We also asked the users to mark which parts of the information (not speculations regarding the user's personality, but lower-level data components) they could make available to their family members, which parts to colleagues, and which parts they would not like to become available to a stranger who happened to access the information accidentally. Most of the users did not care much whether their family members or colleagues could access the data or not (although all but one user marked some information as "if family members knew it, it might occasionally be unpleasant" and some other information as "if colleagues know it, the situation might occasionally be unpleasant"). Only one user would allow strangers to access this information, however, and none of them wanted it to appear on the Web.
After that we asked the users' opinion regarding where more personal secrets can be discovered: in public places, at home, or at work. Four users told us that a Ubicomp application installed in a home environment had high chances of discovering personal secrets, applications in a work environment and location tracking applications had medium chances, and applications installed in public places had the lowest chances. One user (the only person in our study who had an active night life) named location tracking as the most privacy-threatening, Ubicomp applications in public places (such as streets) and at home as moderately dangerous, and those at work as the least dangerous. We realize that five users in a study is not a significant number, but we think that the results are interesting because all the subjects were well acquainted with the Ubicomp concept.
Our own data analysis confirms the users' opinions, because most of our conclusions were made after processing the data acquired in a home environment. We also observed that if the time stamps had not revealed absolute times, it would have been more difficult to analyse the data. For example, if the time stamps and location stamps had been encrypted or made relative to certain application-dependent events, it would have been difficult to distinguish between the home and work environments and to make deductions about users' personalities. In our case location was shown only as a cell ID, so it was not very informative, but a very curious person would nevertheless be able to "decode" it.
The experiences with collecting the IDs of Bluetooth devices in the communication range were also very interesting, as it was possible to find out from the phone logs when the neighbours of a test subject came home and when they went to sleep, and to deduce something about their personalities.
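The mitigations mentioned above, relative rather than absolute time stamps and non-reversible identifiers, are easy to prototype. The following minimal sketch is our own illustration, not the logging software actually used in the study; the field names, the hash truncation and the record layout are arbitrary assumptions.

```python
import hashlib
import os
import time

# Illustrative only: a per-deployment secret salt. Discarding it after the
# study would make the pseudonyms permanently unlinkable to raw IDs.
SALT = os.urandom(16)

def pseudonymise(raw_id: str) -> str:
    """Replace a cell ID or Bluetooth address with a salted one-way hash."""
    return hashlib.sha256(SALT + raw_id.encode("utf-8")).hexdigest()[:12]

class PrivacyAwareLog:
    def __init__(self) -> None:
        self.session_start = time.monotonic()  # no absolute wall-clock time kept
        self.events = []

    def record(self, event: str, raw_id: str) -> None:
        # Relative stamps make it hard to separate home hours from work hours;
        # hashed IDs prevent casual linking of sightings to named neighbours.
        self.events.append({
            "t_rel_s": round(time.monotonic() - self.session_start, 1),
            "event": event,
            "id": pseudonymise(raw_id),
        })

log = PrivacyAwareLog()
log.record("call_start", "cell:244-05-1234")
log.record("bt_sighting", "bt:00:11:22:33:44:55")
print(log.events)
```

Even such a simple scheme would have blunted most of the deductions described above, at the cost of making the data somewhat less useful for context-adaptation research.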

THE STATE OF THE ART IN PRIVACY PROTECTION

The term "privacy enhancement" has been used for more than a decade to represent technologies concerned with various aspects of Internet security. Privacy protection in Internet applications should be based on the main principles of privacy protection as listed in the Common Criteria for Information Technology Security Evaluation (anonymity, pseudonymity, unlinkability, and unobservability). A survey of privacy-enhancing technologies in the HiSPEC report of 2002 stated, however, that more effort had been invested in protecting user identities than personal data in the previous years (HiSPEC, 2002). Similarly, the PISA project in 2003 concluded that previous research efforts had mainly been concerned with the protection of users' identities, but not very much with users' actions (Blarkom, 2003). Since then, research efforts regarding the protection of personal data and user actions have increased, but they have mainly been concentrated on Internet applications.
Nevertheless, the PRIME study on the state of the art regarding privacy protection in network applications, carried out in 2005, has pointed out many performance problems and security weaknesses, and reached the conclusion that even the most recent techniques and tools are still far from providing a holistic approach to usable and secure anonymizing networks (Camenisch, 2005). It is worth noting that the conclusion refers to the current technology settings, not to future technology settings such as smart environments and personal memory aids. The goal of the PRIME project is to develop a framework for privacy and identity management in electronic information networks given current settings, and the project has made significant efforts in the areas of access control (the term "access control" in PRIME stands mainly for the access, release and processing of data by software methods, unlike the more traditional understanding of the term as the granting of access rights to a person), cryptography, communication infrastructure, user-side identity management (allowing users to specify how their personal data can be used), and service-side identity management (management of obligations). Access control research is concerned with developing policies for access control and a language for their description, in order to allow users to control the use of their personal information and to allow negotiations between different counterparts without revealing sensitive information.
The PAW project is a continuation of the privacy protection research with regard to the use of software agents and is working on cryptographic techniques and licensing languages (a description of what one is allowed to do with data during processing and what not). Licensing languages and machine-readable privacy policies are an active research area in which most of the research is concerned with the privacy policies of Web sites. A recently developed platform for privacy preferences (P3P) (Cranor, 2003) allows Web sites to convey their policies in machine-readable form, so that they can be checked on the user side and compared with user preferences. P3P does not actually force Web sites to stick to their promises, however.
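The following sketch illustrates the kind of client-side check that P3P enables. It uses a simplified policy vocabulary of our own invention rather than the real P3P schema; `SITE_POLICY` and `USER_PREFERENCES` are illustrative placeholders, not artifacts of any actual P3P implementation.

```python
# A site declares its data practices in machine-readable form; the user agent
# compares them against the user's stated limits before releasing any data.

SITE_POLICY = {
    "purposes": {"personalisation", "site-administration"},
    "recipients": {"ours"},          # data is not shared with third parties
    "retention": "stated-purpose",   # kept only as long as needed
}

USER_PREFERENCES = {
    "purposes": {"personalisation", "site-administration", "development"},
    "recipients": {"ours"},
    "retention": {"no-retention", "stated-purpose"},
}

def policy_acceptable(policy: dict, prefs: dict) -> bool:
    """Accept only if every declared practice falls within the user's limits."""
    return (policy["purposes"] <= prefs["purposes"]
            and policy["recipients"] <= prefs["recipients"]
            and policy["retention"] in prefs["retention"])

print(policy_acceptable(SITE_POLICY, USER_PREFERENCES))  # True: proceed
```

Note that the comparison is purely one-sided: as stated above, nothing in the protocol prevents a site from behaving differently from what it declares.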
The goal of the FIDIS project (Bauer, 2005) is to develop privacy-preserving methods of identity management for mobile wireless applications in current technology settings. The project has proposed a privacy diamond model for these settings, the main components of which are user, device, location and action, and has suggested that user privacy should be protected by hiding some of the links between these components of the model. The FIDIS project has also presented a classification of identity management systems (IMS): first, systems for account management (pure IMS, where the main goal is authentication, authorization, and accounting); second, systems for personalized services, which need both user identity and profiles or log histories; and third, systems for pseudonym management, for example, in Web services. Good practices for these systems were proposed, including separate access to the user authentication data, user account data and personal data (addresses, etc.) in identity management systems of the first type; and an architecture was developed for a mobile device security tool for creating partial identities and using them in wireless and wired networks.
To summarize, the projects listed above, and some others, mainly deal with privacy protection in network applications and, to some extent, with protecting personal data stored in personal devices. It is mainly proposed that the data stored in personal devices should be protected by means of encryption, but the inconvenience of the related security measures creates "large holes in security and privacy" (Caloyannides, 2004, p. 85), which is very dangerous considering the huge increase in the amount of personal data stored in modern mobile phones. Security and privacy problems affecting personal devices constitute a very challenging problem in general terms; on the one hand, the limited computational capabilities, battery life and screen size of mobile devices pose problems for developers of security methods, while on the other hand, the main burden of configuring and updating security settings and anti-virus software is being placed on the owners of these personal devices, who often have neither the necessary special education, nor the time or enthusiasm to do that.
Research into privacy protection in such emerging domains as smart environments and smart cars is in its infancy, and only generic guidelines have been developed. The work of Langheinrich et al. (2001), for example, suggests how the fair information practices (listed in current data protection laws) can be applied to Ubicomp applications, and shows how difficult it might be to apply them. The fair information practices state, for instance, that the user must have access to the data about him that has been stored, and the right to change details that are wrong. In a Ubicomp future, however, it will not be easy for users to find all the items of data about them that are stored in the network and in the personal devices of surrounding people (let alone check them). Moreover, some data processing techniques (such as neural networks) store user models in a form that is difficult to interpret.
The work of Hong et al. (2004) proposes high-level privacy risk models based on two aspects: first, the social and organisational context in which an application is embedded (Who are the data sharers and observers? What kinds of personal information are shared? What is the value proposition for information sharing, its symmetry, etc.?), and second, the technological aspect (How is the collection, storage and retention of personal data organized? Who controls the system? Is there any possibility to opt out?). This is close to our understanding of privacy threats, but we suggest that other aspects should also be taken into account, especially the probability of accidental information flow (not intended by the designers). Furthermore, this work mainly suggests guidelines for risk estimation, not for safeguards.
The work of Lahlou et al. (2003) focuses "on the specific issues of the data collection phase" (Forward, p. 2) and proposes high-level guidelines. One of the most important guidelines is to minimize data collection. Such generic design guidelines as "think before doing" and "understand the way in which new technologies change the effects of classic issues" (i.e., existing solutions in the physical world) (p. 3) can be applied in other spheres as well as data collection, but these are very generic design guidelines.
To summarize, most of the research into privacy protection is concerned with protection of the information flow from users, whereas other privacy aspects have not received much attention from researchers.
GAPS IN PRIVACY ENHANCING TECHNOLOGIES

For most Ubicomp scenarios to work well, advanced privacy-protecting safeguards, which do not yet exist (although research into them has started), will be required. We suggest that the most important safeguards are the following:

• Intelligent Reasoning Capabilities: advanced artificial intelligence algorithms capable of recognizing sensitive data in order to avoid recording or publishing it, for example, algorithms capable of intelligent online summarizing of audio recordings (online conversion of a meeting audio stream into a text document, including only working discussions), algorithms capable of detecting that persons in a video or photo are naked or kissing, algorithms capable of adaptation to the user's ethics and culture (a photo of a Muslim woman with her head uncovered is private, while for the majority of Finnish women this would be nothing special), and so on. To some extent these capabilities can be implemented as common-sense rules, such as "if a person is alone, or if there are only two persons in a room, the probability of discovering confidential data is higher than if there are a larger number of people." In addition, algorithms for detecting unusual patterns of copying and processing of personal data are needed (e.g., if a new back-up is made soon after the previous back-up, it may indicate data theft, and an alarm should be given; a sketch of such a back-up anomaly check follows this list), because these would also be of help when a person authorized to work with the data is dishonest, unlike other access control methods, which work mainly against outsiders.
• User-Friendly Security: advanced access control and security methods, such as frequent unobtrusive context-aware authentication of users. Different user verification methods should be chosen, for example, depending on the application that a user wants to access, or on the user's location and behaviour; access to a calculator application should not require user effort (Stajano, 2004), whereas access to personal memory aid data should be allowed only to the data owner. Thus, the current "once and forever" password-based user verification on mobile phones, which facilitates unauthorised use when the owner is in another room, for instance, should be replaced with continuous unobtrusive user verification, for example, based on user behaviour or voice recognition, and on stronger authentication methods if unobtrusive authentication fails but access to sensitive data continues to be requested. In general terms, we suggest that security should be a fairly effortless matter for users (e.g., updates of anti-virus software should be system-initiated and happen at convenient times) and should be enforced. We suggest this by analogy with control over the technical condition of personal cars and the security enforced with regard to financial operations, because future Ubicomp scenarios envision personal devices that perform life-critical tasks (health monitoring and health care (Bardram, 2004; ITEA, 2004), financial tasks, and identity management (Ducatel, 2001)). A malfunctioning personal device could fail to notice a health crisis on the part of the device owner, or fail to communicate this to the doctors, for example, and it could also create threats to other people; for instance, if it sends a lot of spam and malware to surrounding personal devices, it can significantly slow down their operation and hinder their performing of the tasks required of them. Work on user-friendly authentication is an emerging research area. Current work is mainly concerned with biometrics,
which is not a perfect solution, because of the possibility of spoofing biometric sensors. Thus, we suggest that biometric modalities which carry a high danger of identity theft (e.g., fingerprint and iris) should be used cautiously and only with aliveness detection, and that a fusion of several not-so-privacy-threatening biometric modalities (such as voice, gait, or behaviour) should be used as a primary or complementary means of authentication.
• Communication Protocols which do not use Unique Device Identifiers: it is easy to link a device ID or a smart object ID to a user and to track the user's actions. Communication protocols which hide the very fact of communication would be an ideal case, because a lot of information can be acquired by tracking who communicates with whom. For example, it can be concluded from the fact that a person has started to visit the Web page of a certain bank that that person has opened an account at this bank, and a false request to update a recently created account in this particular bank has higher chances of succeeding than one sent to an established client of the bank, or to a client of another bank. This safeguard has the drawback that it would hinder the discovery of users with malicious intentions or with malfunctioning personal devices sending out spam and viruses, but this problem can be partially solved by means of good firewalls and anti-virus software, which would protect against malware (see the "user-friendly security" bullet). In cases where the detection of users' IDs is important, these communication protocols cannot be used (e.g., some applications can require the use of IDs in communication protocols), but their usage should not be a common practice, because large-scale logging of everybody's actions is not likely to improve security in society, as the famous security expert Bruce Schneier has pointed out (Schneier, 2005, 2007).
• Secure Ad-Hoc Communications: if a device owner enables ad-hoc Bluetooth communications, for example, the sending of a large number of requests to this device can slow down its operation or even exhaust the battery. This is not a direct threat to privacy, but it might endanger privacy if the encryption of personal data becomes delayed, and it is definitely a violation of the "right to be left alone" in cases where the user cannot simply ignore incoming spam because he is expecting an important message.
• Encryption for Untrustworthy Platforms: not all functions can be executed in encrypted form. Instead, it is common to decrypt the code and the data before execution, which allows spying.
• Unified, Concise User Interface Methods of maintaining user awareness about the functionality of the application and its privacy threats, possibly in graphical form (e.g., similar to road signs). A warning about video cameras is currently placed on the doors of shops (although with no information as to whether video data is stored or not, or for how long), but other Ubicomp technologies would require similar icons.
• More Detailed Transparency Tools for awareness in average, non-technical users regarding security risks, the correct usage of anti-virus and firewall applications, the dangers of data collection and the accepting of ad-hoc messages from unknown devices, and so on. Since users do not want to spend much time on security education, these tools and their user interfaces should be really intelligent and work "just in time." Currently, it is too easy for users to make mistakes (it is often too easy to ignore important questions that all look the same, and click yes without even reading a security-related question). For example, the current practice of asking users whether they agree to "accept temporarily, for this session," a security certificate regarding a certain Web site is not really helpful, because the same question is used for all Web sites, and it is not linked to other data regarding the Web site in question.
• Recovery Means: first, if somebody's personal data was compromised (e.g., a fingerprint was forged), it is necessary to switch quickly and easily to a new authentication procedure in all applications (home, work, smart car, banking, etc.), unlike the current situation, in which recovery from an identity theft requires significant efforts on the part of the victim and can harm that person's reputation. Second, if a personal device is lost, the personal data contained in it can be protected from strangers by security measures such as data encryption and strict access control. However, it is important that the user does not need to spend time customising and training a new device (so that denial of service does not occur). Instead, the new device should itself load user preferences, contacts, favourite music, and so forth, from a back-up service, probably a home server. We suggest that ways be developed to synchronize data in personal devices with a back-up server in a way that is secure and requires minimal effort from the user.
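As flagged under the first bullet, the back-up anomaly rule is simple enough to sketch. The example below is illustrative only; the 24-hour threshold and the log format are our assumptions, not values given in this chapter.

```python
from datetime import datetime, timedelta

# A new back-up made unusually soon after the previous one may indicate
# data theft; the threshold is an assumed, tunable parameter.
MIN_BACKUP_INTERVAL = timedelta(hours=24)

def suspicious_backups(backup_times):
    """Yield consecutive back-up pairs that occurred suspiciously close."""
    ordered = sorted(backup_times)
    for earlier, later in zip(ordered, ordered[1:]):
        if later - earlier < MIN_BACKUP_INTERVAL:
            yield earlier, later

backups = [
    datetime(2009, 5, 1, 23, 0),
    datetime(2009, 5, 2, 23, 0),
    datetime(2009, 5, 3, 2, 15),  # only ~3 hours after the previous one
]
for earlier, later in suspicious_backups(backups):
    print(f"ALARM: back-up at {later} came only {later - earlier} after {earlier}")
```

Unlike conventional access control, which mainly keeps outsiders out, a check of this kind can also flag an authorized but dishonest insider.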
chances that a married man’s personal data will
accidentally be accessed by his wife or children
dIMensIons of PrI VAcy are fairly high, even though secrets withheld
tHre Ats AnALysIs from family members are not unusual; for
example, parents often prefer to keep children
Privacy risks fall into two major groups, the unaware of the existence of adult videos at
first of which is application domain-dependent home, in order to prevent them from watching
risks, which depend on the personal or these while the parents are away. Personal
organizational activity being supported. Health activity is obviously an important dimension
data, for ex- ample, are considered sensitive, for privacy risk analysis because an activity
and designers of applications for hospitals are consumes and produces a flow of information;
obliged to follow corresponding privacy for instance, large quantities of financial data
protection regulations. Second, privacy risks are involved in paying bills, and health and
are caused by a mismatch between personal identity data are involved in a call to a doctor.
expectations regarding current privacy levels The environment is an important di- mension
and reality, which do not depend on (one which unfortunately is not always
considered) because people’s mental models of people
current privacy levels are based on traditional
perceptions of their environment (e.g., “now I
am alone in my office, so that nobody can see
me”) and people behave more or less freely
depending on their estimation of current
privacy levels. We suggest that applications
should take the follow- ing into account:

• traditional perceptions of the


environment (e.g., perception of the home
as a private environment; perception of a
wall as a non- transparent object,
perception of a street as a public place)
• common activities in the environment
(e.g., in an office people usually work)
• other probable activities in the environ-
ment (e.g., calling a doctor or flirting with
a colleague in an office environment).
Previ- ous guidelines for the estimation of
privacy threats (Hong, 2004) took account
of the activity dimension mainly in the
sense of the primary user activity
supported by the application, but
secondary activities are also very
important.

Privacy threats coming from real-world set-


tings can be roughly categorized as high,
medium, and low in intensity. We suggest that
application developers should always consider
the privacy risk to be high when the application
can run in the presence of children (which
concerns most home-domain applications).
Furthermore, we suggest that application
developers should not give parents unlimited
power to check and con- trol what their children
are doing. Instead, the children’s privacy
should be protected carefully, because they
need this privacy for their personal
development (Nissenbaum, 2004).
We suggest that high-intensity threats exist
in connection with activities dealing with health
care, finance, and communication between
family members and close friends. High threats appear in the home environment, first because people perceive it as private and behave freely, and second because the security of home computers and personal devices is to a large extent the responsibility of their users, whereas many people
(elderly people and children especially) do not
have the education, skills or in many cases the
desire to take care of the security of personal
Ubicomp applications, which makes them
vulnerable to all kinds of security faults. High-
intensity threats also exist in an office
environment, because on the one hand people
cannot avoid dealing with private issues at work
and are highly dependent on their work, and on
the other hand, they are not free to decide on the
environment in which they have to work,
whereas organizations invest a lot of money in
the development and installation of Ubicomp
applications in workplaces. It is, thus, quite
probable that Ubicomp applications will be
deployed in workplaces sooner than in homes.
Medium threats to privacy appear in connection with shopping (increasing competition
between retailers can lead to advertisements
that are targeted at personal preferences and to
hunting for personal data), learning and
mobility activities (by mobility we mean
travelling within a city as well as on holiday or
for one’s work), and relatively low-level threats
are associated with entertainment activities.
Our informal grading of the dimensions of
the privacy threats caused by real-life activities
and the environment is presented in Figure 3.

Figure 3. Guidelines for evaluation of real-world privacy threats caused by certain environments and activities

Technology Choice Dimensions

Information flows start from data collection performed by sensors. The most popular sensors in Ubicomp scenarios are audio, video, positioning, physiological, safety, and comfort sensors, together with those used for logging human-computer interactions.
Physiological sensors are the most dangerous from the privacy point of view, because they detect what is inside a person's body; that is, they "break into" the most private sphere. These sensors are the basis for building health care applications, where strict rules for the protection of health
data exist. Ubicomp scenarios, nevertheless,
suggest that these sensors could be used for
purposes other than health and fitness.
Detection of a person’s mood and emotions is
an active research area (Nasoz, 2003), and
suggested applications include the detection of
interesting scenes for automatic audio and video
capture for lifetime personal stores (Gemmel, 2004; Healey, 1998) and the estimation of a user's preferences for TV programmes (Palmas, 2001). If physiological data are linked to the content of TV programmes and to the presence of other people, however, personal feelings
become dangerously “naked” and can reveal to
parents such facts as who their child is in love
with, or else they can be used by governments
for monitoring the loyalty of citizens.
Physiological sensors can also detect health
problems, but such data will not be properly
protected, because the data protection
requirements in the domain of TV
personalization are not very strict.
Video and audio sensors violate natural privacy-protecting borders such as walls, and video cameras can reveal a lot more than audio sensors. In Ubicomp scenarios, they are suggested, first, for use in real-time communication between people and for helping parents to monitor their children, for instance by logging potentially dangerous situations (Ma, 2005). Second, such sensors have been suggested for memory augmentation, for example, the recording of work meetings (Aschmoneit, 2002) or personal memory aids (Gemmel, 2004; Healey, 1998). The first type
of application “breaks the walls,” while the
second type violates people’s belief in the
limits of others’ attention and memory.
Biometric sensors have mainly been
suggested for access control, and carry a danger
of identity theft. Safety and comfort sensors
(temperature, light, car acceleration etc.) can
reveal users’ per- sonalities and often initiate
information push; for example, they may issue
reminders to switch the stove off or employ
actuators to do it auto- matically. This is
beneficial for people suffering from dementia
or for families with babies, but if teenagers are
assumed to be as irresponsible in caring about
home safety as babies, there may be little
opportunity left for them to develop a sense of
responsibility.
The application control level denotes how
much technology does on behalf of its users.
An application that reminds its user to take
pills in the event of high blood pressure, for
example, has a high control level because it initiates the measuring of blood pressure and a dialogue with the user. Such a dialogue may annoy the individual or reveal personal health details if it happens at the wrong moment or in public. An application which filters shopping advertisements according to user preferences also has a high control level, because the user can never know about certain shopping alternatives if they are filtered out. (An important question for such applications is who sets the filtering rules and how they can be prevented from favouring a particular shop.)

Figure 4. Guidelines for evaluating privacy threats caused by technology choices
With more extensive information collection, transmission and storage capabilities and higher control levels, technology poses more privacy threats. Most Ubicomp scenarios involve application-dependent information storage and a lot of wireless communication (between objects, people, and organizations). We suggest that significant threats to privacy can arise if technology penetrates walls and the human body, for instance, by using physiological, video and/or audio sensors. Significant threats are also likely to be caused by high control levels (i.e., the capability of a technology to act on behalf of a person, e.g., to call an ambulance in an emergency) or by biometric sensors (due to the possibility of identity theft).
We also suggest that privacy threats should always be regarded as high when the linkage of data from several sources is possible, for example, when either a lot of data about one person can be aggregated (as in most personal devices) or certain data about a large number of people can be collected. We suggest that the dangers of information linkage are often under-estimated, as we have observed in the case of our data collection system.
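To make the danger of linkage concrete, the following minimal sketch (written in Python purely for illustration; the records, identifiers and names are invented, not taken from our data collection system) shows how two data sets that look harmless in isolation can be joined on time and place to re-identify a person:

    # Two "anonymous" data sets: a pseudonymous location trace and a payment
    # record with real names. Neither identifies anybody on its own.
    location_log = [
        ("2006-03-01T08", "cell-17", "user-A"),
        ("2006-03-01T08", "cell-17", "user-B"),
        ("2006-03-01T20", "cell-42", "user-A"),
    ]
    payments = [
        ("2006-03-01T20", "cell-42", "Alice"),
    ]

    # Joining on (hour, place) links the pseudonym "user-A" to the name "Alice".
    linked = {
        pseudonym: name
        for (t1, place1, pseudonym) in location_log
        for (t2, place2, name) in payments
        if (t1, place1) == (t2, place2)
    }
    print(linked)  # {'user-A': 'Alice'}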
Medium threats are associated with positioning sensors (without time stamps they provide location data, but not much activity data, whereas location plus time information is a much greater threat to privacy) and with a medium level of technology control (the capability to make proactive suggestions, e.g., to issue reminders). Fairly low threat levels are associated with a low level of control (e.g., ranking advertisements according to criteria explicitly set by the user) and with comfort sensors (lighting, heating, etc.).
We would like to emphasize that threats to personal privacy are very often caused by mismatches between the application control level and application intelligence, and particularly by the fact that the technology is already capable of storing and transmitting a lot of data, but is not capable of detecting which data it should not store or transmit (with the exception of predefined data categories such as health and finance). In order to ensure "the right to be left alone," however, and to prevent the accidental disclosure of confidential data, for example, via an audio reminder to take medicine when the user is in somebody's company, it is very important that the intelligence of an application should correspond to its level of control (in other words, to its level of autonomy: what technology can do on its own initiative). Another example can be found in Truong (2004), which presents scenarios of Ubicomp applications made by end users, where one of the users suggested automatic recordings of parties in his home. If such an application is deployed in a large home and records two persons discussing personal matters in a room without any other guests, for example, it can lead to privacy problems. These would not appear if the application were intelligent enough not to record such a scene.
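To illustrate what "intelligent enough not to record such a scene" could mean in practice, the sketch below (a simplified Python illustration; the function and its parameters are hypothetical, not part of any cited scenario) gates recording on a small amount of context reasoning instead of letting the application record autonomously:

    # A recording "gate": the application's control level is matched by a minimal
    # amount of reasoning, so a party-recording feature does not capture a private
    # two-person conversation by default.
    def may_record(people_present: int, consenting: int, party_mode_on: bool) -> bool:
        if people_present <= 2:           # a conversation in private company
            return False                  # refuse by default
        if consenting < people_present:   # someone present has not agreed
            return False
        return party_mode_on              # otherwise defer to the user's setting

    # When the check fails, the application asks instead of acting on its own.
    if not may_record(people_present=2, consenting=2, party_mode_on=True):
        print("Recording suspended: the scene looks private; asking the user first.")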

GUIDELINES FOR SAFEGUARDING AGAINST PRIVACY THREATS

Estimates of the threats to privacy created by the combining of real-world settings and
technology choices in certain popular Ubicomp
scenarios are presented in Figure 5. Since the
scenarios do not describe implementation
details, the estimates are only approximate. The
threats in the "safe driving" application scenario, for example, depend on data storage (e.g., whether a time-stamped log of speed, acceleration, etc. is stored or not) and data exchange (e.g., between cars driving one behind the other), but a high application control level is in any case a threat to privacy, because the technology might be wrong, and because users do not always accept its superiority. Similarly, "issuing reminders about the weather forecast for the destination when on a journey" presents privacy threats because it is a form of technology-initiated interaction. What if the reminder is given when the user is in the company of a person whom he would prefer to remain unaware of his journey?
When reading Ubicomp scenarios, we have not found any applications which do not have either high technology risks, or high real-world risks, or both. In fact, most scenarios fall into the category of high technology risks. We suggest that if an application implies high technology risks, these should be reduced by lowering the control level of the technology, choosing the sensors differently, reducing the linkability of the data, and by other applicable methods (see below).
By lowering the control level of technology,
we mean that applications should ask the user’s
permission before taking potentially privacy-
threatening actions, for instance, for video and
audio recording. By a different choice of sensors, we mean that the same kind of data can often be acquired in many ways, each of them
presenting different privacy threats. Movie
recommendation applications, for example,
need user feedback data, and the ways of
obtaining it include the use of physiological
sensors, the analysis of facial expressions,
speech recognition, monitoring of the noise
level in a room, and monitoring user actions
such as fast forward scrolling (which is the
safest in terms of privacy). Even if fast forward
scrolling and noise level monitoring might not
give as good results as physiological sensors
(which have not actually been tested), they
should be preferred because they pose less of a
threat to privacy. From our data collecting experiences, we would argue that what we can tell about a person through the linkage of different kinds of data is frequently under-estimated. We regard reducing data linkage as
very important, and suggest that absolute time
stamps should be avoided; that is, data should
be stamped with the time relative to the
application and as much real-time data
processing should be done as possible.
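A minimal sketch of such relative stamping (in Python; the class and method names are ours, for illustration only) keeps the absolute anchor time in memory and stores only offsets relative to an application event, as in the blood-pressure example given in the good practices below:

    import time

    class RelativeStampedLog:
        """Store samples stamped relative to an application event, never absolutely."""

        def __init__(self):
            self._anchor = None   # absolute time of the last event, kept in memory only
            self._samples = []    # (seconds_since_event, value) pairs

        def mark_event(self):
            """E.g., the moment of taking a pill."""
            self._anchor = time.time()

        def add_sample(self, value: float):
            if self._anchor is None:
                return  # nothing to relate the sample to: drop it rather than store raw time
            self._samples.append((time.time() - self._anchor, value))

        def mean_value(self) -> float:
            values = [v for (_, v) in self._samples]
            return sum(values) / len(values) if values else 0.0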
Furthermore, we suggest that since applications with both high threats due to real-world settings and high threats due to technology settings require advanced safeguards (such as intelligent reasoning capabilities or user-friendly security), which do not yet exist, such applications should be deployed only in domains with strict legal regulations, such as healthcare or banking, and then only with a fairly low level of technology control.
Figure 5. Examples of levels of privacy threats in popular scenarios

[Figure 5 plots popular scenarios along two axes (threats created by the real-world setting and threats created by the choice of technology). Scenario labels include: health care and security, safe driving, video-based surveillance of kids, physiological sensors in personal memory aids and TV personalization, home automation (door locks, stove safety, etc.), live video links or video recording, m-commerce, user-actions-based TV personalisation, shopping, RFID-tag-based finding of objects, Bluetooth-based ad-hoc advertising, and weather forecast reminders. Safeguard notes include: "be sure that only the user understands a message", "avoid location data logging", "avoid time and location stamps", "avoid time-stamped logs of location and/or activity data", "ensure anonymity and unlinkability", "provide an easy way to override technology settings and consider the possible presence of children", and "choose technology with access control to hardware IDs".]
In other domains we suggest that the deployment of such applications should be postponed until the technology becomes more intelligent. For example, we suggest that the use of physiological and video sensors and of data stamped with absolute times should be avoided unless it is critical for the preservation of life and security. The suggestions made above do not apply to cases where the technology performs its tasks reliably and the users do not perceive the privacy problems as being important; for example, elderly people may be willing to trade off privacy against the gaining of support in time, and babies do not care about privacy at all.
In addition, we suggest the following good practices:

• Real-time data processing: Select algorithms and hardware capable of processing data immediately in real time (performing real-time feature selection, or finding answers to predefined "pattern exists or not" queries), so that the storage of raw data (even temporarily) is avoided;
• Encrypted or relative location stamping and time stamping: For example, instead of investigating the dependence of high blood pressure on absolute time, an application should stamp the data relative to the moment of taking a pill or calculate the average time when the user's blood pressure was above a given threshold;
• Data deletion or editing after an application-dependent time: For example, when a user buys clothes, all information about the material, price, designer, and so forth, should be deleted from the clothes' RFID tags. For applications that require active RFID tags (such as finding lost objects (Orr, 1999)), the RFID tag should be changed so that no links are left between the shop database and the personal clothes. Similarly, the location of an emergency call does not require the storage of long-term location data, so this should be avoided;
• Data processing in a personal device instead of sending data to the environment: Instead of submitting a query with personal financial preferences to a shop in order to find suitable products, for example, the application should submit a more generic query, even at the cost of an increase in data filtering in personal devices, and anonymous payment procedures should be used whenever possible;
• Choice of communication technologies which do not use permanent hardware IDs in their protocols, or at least have control over access to these IDs, and which allow the communication range to be controlled: The current situation with Bluetooth communication, for example, is that if a device owner enables ad-hoc communication (in order to use the full range of possible applications), the device responds to each request with its ID, allowing user tracking even over walls, due to the fairly large communication range that is beyond user control;
• Detection of hardware removals and replacements: Users are currently not warned about the replacement or removal of attached sensors or memory cards when devices are in the "off" state, thus making physical tampering easier (Becher, 2006). Since personal devices will be monitoring a user's health in the future (Bardram, 2004; ITEA, 2004), unauthorized replacement of sensors could result in a death if they failed to detect a health crisis;
• Transparency tools: These are user-friendly ways to warn users about possible privacy violation problems which might result from the technologies deployed around them, and ways to configure technology settings easily. For example, users might prefer to sacrifice some of the benefits of an application for the sake of anonymity, to reduce the level of control of applications, or to adjust the way in which incoming advertisements are filtered (if advertisements which are considered uninteresting by the application are completely removed, this carries a danger that the user will never hear about some options). One solution could be to have several "privacy profiles" in devices, so that each profile defines which groups of applications and means of communication are enabled and which are not in different settings (a minimal sketch of such profile switching appears after this list). Users would then just need to switch between profiles instead of dealing with a bundle of options with the risk of forgetting some of them. Our own experiences with data collection have shown that since even Ubicomp application developers do not fully understand the possible consequences of their data collection, transparency tools should be designed really carefully;
• Means of disconnecting gracefully:
Users should be able to switch an
application or device off completely, or to
switch off some of its functionalities in
such a way that other people do not take it
as a desire by the user to hide, and in
such a way that the device is still usable
(e.g., users should be able to check
calendar data while having the communication functionality switched off).
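As mentioned under transparency tools above, one possible shape for the suggested "privacy profiles" is sketched below (in Python; the profile names and the settings they group are invented for illustration):

    # Each profile enables or disables whole groups of functions at once, so the
    # user switches one setting instead of a bundle of options that is easy to forget.
    PROFILES = {
        "home":   {"ad_hoc_bluetooth": True,  "location_sharing": True,  "audio_reminders": True},
        "work":   {"ad_hoc_bluetooth": False, "location_sharing": True,  "audio_reminders": False},
        "public": {"ad_hoc_bluetooth": False, "location_sharing": False, "audio_reminders": False},
    }

    class Device:
        def __init__(self):
            # Fail safe: start from the most restrictive profile.
            self.settings = dict(PROFILES["public"])

        def switch_profile(self, name: str):
            self.settings = dict(PROFILES[name])

    device = Device()
    device.switch_profile("work")
    print(device.settings["ad_hoc_bluetooth"])  # False: no ID broadcast at work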

CONCLUSION

We have presented here an analysis of Ubicomp scenarios from the point of view of possible implications regarding privacy and have considered the state of the art in research into privacy protection, which does not allow safeguards to be provided against all possible problems. Recent news reports suggest that large-scale surveillance by means of ubiquitous technologies (the Internet and phones) has already started (Web site of the American Civil Liberties Union and the ACLU Foundation, 2006). The analysis of Ubicomp scenarios does show, however, that privacy protection is not yet considered a necessary design requirement, which can lead to a lack of user acceptance.
A typical approach to privacy threat analysis
is to estimate the sensitivity of data that have
been collected and stored, which depends on
the application domain (e.g., health care data
are considered sensitive) and on the consumers
of the information (Hong, 2004). We suggest
that privacy protection should also depend on
which borders of real-life privacy are violated
by the technology, because the likelihood of
acquiring sensitive data accidentally is high if
the technology penetrates through supposedly
reliable physical borders. Furthermore, we
suggest that privacy protection should consider
not only information flows from users, but also
information flows towards users.
The design guidelines for the estimation of privacy threats and for privacy protection in emerging Ubicomp applications have been proposed after a thorough analysis of Ubicomp scenarios, observations made during long-term runs with Ubicomp applications in a work environment (Bellotti, 1993) and our own experiences. Our guidelines are intended to protect individuals both from regular leakage of confidential data (such as location tracking data) and from the accidental discovery of sensitive data, for example, the discovery that two guests at a party had a heated discussion on a balcony. The effectiveness of such guidelines is very difficult to evaluate, due to the rare occasions on which such events happen and the fact that attempts to "capture" them would be unethical. We are not aware of any work presenting results on how certain privacy-protecting guidelines actually protect or disclose real secrets.
Our experiment with phone data collection, nevertheless, convinced us that since it is difficult to over-estimate what kind of discoveries an application can make, developers should be very cautious and take care to protect users against infringement of their privacy by various categories of interested persons and organizations, ranging from the limited number of experienced hackers up to the large numbers of curious family members, relatives, colleagues, neighbours and so on, who luckily are most probably not endowed with such advanced computer skills. The guidelines and safeguards are proposed in order to help application developers decide which problems they should pay attention to and choose the most appropriate safeguards in relation to the application and the device capabilities.
Implementation of some of the proposed safeguards would require a significant increase in the computational capabilities of personal devices, but such notable hardware improvements have been achieved recently that it is likely to become possible in the near future to dedicate more memory to data processing algorithms instead of only to data storage. In cases where the capabilities of personal devices are insufficient for the desired safeguards, we suggest that users should be made aware of the possible problems (proper transparency tools should be developed for non-technical users) and allowed to choose a trade-off between the benefits and problems of the applications.
Ubicomp technologies can help to make life better if they are accepted by users, but this acceptance will be jeopardized if the problems created by the new technologies are not analysed and minimized. One of the important benefits of Ubicomp technologies will be to increase the security of individuals and society as a whole, for instance, making it possible to locate an emergency phone call, which could help to save users' lives, or to access descriptions of crimes in remote locations and to compare them, which could help to find criminals. Similarly, access to a patient's lifelong health record could help to reveal an allergy and save the person's life. In general, new technologies provide support for a safer and more convenient life and for communications, so
that it would be possible to access everybody and everything (family members, doctors, services, etc.) from any place and at any time. The current situation is, nevertheless, such that the benefits are emphasized more than the possible problems, and thus we would like to emphasize the problems in this paper. New technologies can have implications for privacy with respect to the surveillance of citizens by governments and surveillance between people, for example, control exercised by parents or spouses over the activities of their family members. Although in some cases surveillance is clearly undesirable (e.g., parents do not want their children to be able to discover the prices of their purchases easily, to know which videos they watch or to see all their photos), it is an open question whether the surveillance of citizens by a government and the surveillance of children by their parents can increase the safety of society as a whole. The surveillance of children can help to save them from abuse, traumas or drug addiction, but in many cases such surveillance is not likely to do any better than the old-style trust and love in a family. Thus, it may be better when developing new technologies to aim at detecting when children are in real danger rather than at simply providing their parents with the means to control all their actions. Although it is easier to develop technology by which parents can monitor their children, it might lead to their growing into irresponsible or helpless people.
Regarding whether the surveillance of citizens by a government can increase safety in society as a whole, we would like to cite the opinion of the famous security expert Bruce Schneier, who says in his essay "Why Data Mining Won't Stop Terror" (2005) that "we're not trading privacy for security; we're giving up privacy and getting no security in return." Schneier continues to debate the idea of trading off privacy for extra security in later articles. For example, in "On Police Security Cameras: Wholesale Surveillance" (2007), he says that "the effects of wholesale surveillance on privacy and civil liberties is profound; but unfortunately, the debate often gets mischaracterized as a question about how much privacy we need to give up in order to be secure. This is wrong." Schneier suggests that, although the police should be allowed to use new technologies to track suspects, data on people who are not currently under suspicion should not be stored. The decision of society regarding the bus ticketing application in Helsinki was in line with this opinion; that is, society decided against trading off privacy for security. In many cases, however, new technologies can help to increase safety without threatening privacy. The possibility to locate an emergency phone call can help to save users' lives, for example, but the threats to users' privacy can be minimized first by keeping only short-term, recent location data, and second by strict control over access to this data.
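A minimal sketch of these two safeguards (in Python; the retention period and the access check are invented for illustration, not prescribed by any cited source) combines a short retention window with strict access control:

    import time
    from collections import deque

    class ShortTermLocationStore:
        """Keep only recent location fixes and release them only for emergencies."""

        def __init__(self, ttl_seconds: float = 3600.0):
            self._ttl = ttl_seconds
            self._fixes = deque()  # (timestamp, latitude, longitude)

        def add_fix(self, lat: float, lon: float):
            self._fixes.append((time.time(), lat, lon))
            self._purge()

        def _purge(self):
            cutoff = time.time() - self._ttl
            while self._fixes and self._fixes[0][0] < cutoff:
                self._fixes.popleft()  # old fixes are deleted, not archived

        def latest_fix(self, caller_is_emergency_service: bool):
            self._purge()
            if not caller_is_emergency_service:
                raise PermissionError("location is released only to emergency services")
            return self._fixes[-1] if self._fixes else None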
We suggest that the most important safeguards are an appropriate balance between the level of technology control and its level of artificial intelligence (how advanced the reasoning is and how the access control methods are implemented), an appropriate choice of sensors (sensors with powerful capabilities for violating natural privacy-protecting borders should not be used wantonly) and other hardware (such as communication chips with a configurable communication range and access control to their IDs), the prevention of data linkability by avoiding absolute time stamps and location stamps, especially in applications which cannot provide user anonymity (such as smart spaces and personal devices), user-friendly security, and user-friendly system configuration methods. The list of the proposed safeguards is not exhaustive and could well change with the development of new technologies. Novel application scenarios or the unpredictable use of new technologies, for example, could introduce new threats to privacy, which could require more safeguards. On the other hand, if methods of reliable unobtrusive biometric recognition with aliveness detection can be developed for mobile devices in the near future, this will significantly improve the protection of personal data and make some of our recommendations outdated. However, since our analysis is based on application scenarios and roadmaps for Ubicomp technology development and for the development of privacy-enhancing technologies, we believe that our recommendations for the evaluation of privacy threats and safeguarding against them will be valid for as long as the scenarios analysed here are valid, and for as long as the gaps in privacy-enhancing technologies pointed out here continue to exist. Since one fairly common reason for privacy problems in these scenarios is an insufficient level of system intelligence for the complexity of the tasks, and since computer capabilities for data collection, storage and transmission are growing faster than the intelligence of data processing algorithms, protection against privacy violations is likely to remain an important problem in the future.

ACKNOWLEDGMENT

This article is based on research supported by the EU project SWAMI: Safeguards in a World of Ambient Intelligence (IST-2004-006507).

REFERENCES

Aschmoneit, P., & Höbig, M. (Eds.). (2002). Context-aware collaborative environments for next generation business networks: Scenario document (COCONET Deliverable D2.2). Telematica Institute.

Bardram, J. E. (2004). The personal medical unit - a Ubiquitous Computing infrastructure for personal pervasive healthcare. In T. Adlam, H. Wactlar, & I. Korhonen (Eds.), UbiHealth 2004 - The 3rd International Workshop on Ubiquitous Computing for Pervasive Healthcare Applications.

Bauer, M., Meints, M., & Hansen, M. (2005). Structured overview on prototypes and concepts of identity management systems (FIDIS Deliverable D3.1). Retrieved on March 15, 2006, from http://www.fidis.net/fileadmin/fidis/deliverables/fidis-wp3-del3.1.overview_on_IMS.final.pdf

Becher, A., Benenson, Z., & Dornseif, M. (2006). Tampering with motes: Real-world physical attacks on wireless sensor networks. In J. A. Clark, R. F. Paige et al. (Eds.), The Third International Conference on Security in Pervasive Computing (pp. 104-118).

Bellotti, V., & Sellen, A. (1993). Design for privacy in Ubiquitous Computing environments. In Proceedings of the Third European Conference on Computer Supported Cooperative Work (ECSCW'93) (pp. 77-92). Kluwer.

Blarkom, G. W. van, Borking, J. J., & Olk, J. G. E. (Eds.). (2003). Handbook of privacy and privacy-enhancing technologies: The case of intelligent software agents. TNO-FEL, The Hague.

Bohn, J., Coroama, V., Langheinrich, M., Mattern, F., & Rohs, M. (2005). Social, economic, and ethical implications of Ambient Intelligence and Ubiquitous Computing. In W. Weber, J. Rabaey, & E. Aarts (Eds.), Ambient Intelligence (pp. 5-29). London: Springer-Verlag.

Caloyannides, M. A. (2004). The cost of convenience: A Faustian deal. IEEE Security & Privacy, 2(2), 84-87.

Camenisch, J. (Ed.). (2005). First annual research report (PRIME Deliverable D16.1). Retrieved on March 12, 2006, from http://www.prime-project.eu.org/public/prime_products/deliverables/rsch/pub_del_D16.1.a_ec_wp16.1_V1_final.pdf

Cranor, L. F. (2003). P3P: Making privacy policies more useful. IEEE Security and Privacy, 1(6), 50-55.

Ducatel, K., Bogdanowicz, M., Scapolo, F., Leijten, J., & Burgelman, J.-C. (2001). Scenarios for Ambient Intelligence in 2010. Institute for Prospective Technological Studies (IPTS), EC-JRC, Sevilla.

Gemmel, J., Williams, L., Wood, K., Lueder, R., & Bell, G. (2004). Passive capture and ensuing issues for a personal lifetime store. In Proceedings of the First ACM Workshop on Continuous Archival and Retrieval of Personal Experiences (pp. 48-55).

Healey, J., & Picard, R. W. (1998). StartleCam: A cybernetic wearable camera. In The Second International Symposium on Wearable Computing (pp. 42-49).

HiSPEC project (2002). Privacy enhancing technologies: State of the art review, version 1. HiSPEC Report. Retrieved on March 1, 2006, from http://www.hispec.org.uk/public_documents/7_1PETreview3.pdf

Hong, J., Ng, J., Lederer, S., & Landay, J. (2004). Privacy risk models for designing privacy-sensitive Ubiquitous Computing systems. In Proceedings of the Conference on Designing Interactive Systems (pp. 91-100).

Information Technology for European Advancement (2004). ITEA Technology Roadmap for Software-Intensive Systems (2nd ed.). Retrieved on March 1, 2006, from www.itea-office.org

Jansson, C. G., Jonsson, M., Kilander, F. et al. (2001). Intrusion scenarios in meeting contexts (FEEL Deliverable D5.1). Royal Technical University. Retrieved on March 1, 2006, from http://dsv.su.se/FEEL/zurich/Item_3-Intrusion_scenarios_in_meeting_contexts.pdf

Kato, U., Hayashi, T., Umeda, N. et al. (Eds.). (2004). Flying Carpet: Towards the 4th Generation Mobile Communications Systems, version 2.00. 4th Generation Mobile Communications Committee. Retrieved on March 2, 2006, from http://www.mitf.org/public_e/archives/index.html

Kim, S. W., Kim, M. C., Park, S. H. et al. (2004). Gate reminder: A design case of a smart reminder. In D. Benyon, P. Moody et al. (Eds.), Conference on Designing Interactive Systems (pp. 81-90).

Koponen, K. (2002, September 19). Matkakorttien käytöstä syntyy valtava tietokanta matkustajista [The use of travel cards generates a huge database about passengers]. Helsingin Sanomat (Finnish newspaper).

Lahlou, S., & Jegou, F. (2003). European disappearing computer privacy design guidelines v1 (Ambient Agora Deliverable D15.4). Retrieved on March 2, 2006, from http://www.ambientagoras.org/downloads/D15%5B1%5D.4_-_Privacy_Design_Guidelines.pdf

Langheinrich, M. (2001). Privacy by design - principles of privacy-aware Ubiquitous Systems. In G. D. Abowd, B. Brumitt et al. (Eds.), Proceedings of the Third International Conference on Ubiquitous Computing (UbiComp 2001) (pp. 273-291). Springer-Verlag (Lecture Notes in Computer Science).

Langheinrich, M. (2006). Personal privacy in Ubiquitous Computing. Presentation at the UK-Ubinet Summer School 2004. Retrieved on March 2, 2006, from http://www.vs.inf.ethz.ch/publ/slides/ukubinet2004-langhein.pdf

Ma, J., Yang, L. T., Apduhan, B. O. et al. (2005). Towards a smart world and ubiquitous intelligence: A walkthrough from smart things to smart hyperspaces and UbicKids. International Journal of Pervasive Computing and Communications, 1(1), 53-68.

Masera, M., & Bloomfeld, R. (2003). A Dependability Roadmap for the Information Society in Europe (AMSD Deliverable D1.1). Retrieved on March 2, 2006, from https://rami.jrc.it/roadmaps/amsd

Mynatt, E., Essa, I., & Rogers, W. (2000). Increasing the opportunities for aging in place. In Proceedings of the ACM Conference on Universal Usability (pp. 65-71).

Nasoz, F., Alvarez, K., Lisetti, C., & Finkelstein, N. (2003). Emotion recognition from physiological signals for user modelling of affect. In Proceedings of the Third Workshop on Affective and Attitude User Modelling. Retrieved on March 3, 2006, from http://www.cs.ubc.ca/~conati/um03-affect/nasoz-final.pdf

Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79(1), 101-139.

Orr, R. J., Raymond, R., Berman, J., & Seay, F. (1999). A system for finding frequently lost objects in the home (Tech. Rep. 99-24). Graphics, Visualization, and Usability Center, Georgia Tech.

Palmas, G., Tsapatsoulis, N., Apolloni, B. et al. (2001). Generic artefacts specification and acceptance criteria (Oresteia Deliverable D01). Retrieved on March 2, 2006, from http://www.image.ntua.gr/oresteia/deliverables/ORESTEIA-IST-2000-26091-D01.pdf

Price, S., & Rogers, Y. (2004). Let's get physical: The learning benefits of interacting in digitally augmented physical spaces. Computers and Education, 43(1-2), 137-151.

Schneier, B. (2005). Why data mining won't stop terror. Wired News, March 9, 2005. Retrieved on April 19, 2007, from http://www.schneier.com/essay-108.html

Schneier, B. (2007). On police security cameras: Wholesale surveillance. San Francisco Chronicle, January 2007. Retrieved on April 19, 2007, from http://www.schneier.com/essay-147.html

Stajano, F. (2004). One user, many hats; and, sometimes, no hat - towards a secure yet usable PDA. In Proceedings of the Security Protocols Workshop 2004 (pp. 51-64).

Truong, K. N., Huang, E. M., Stevens, M. M., & Abowd, G. D. (2004). How do users think about Ubiquitous Computing. In Proceedings of CHI '04 extended abstracts on Human factors in computing systems (pp. 1317-1320).

Web site of the American Civil Liberties Union and the ACLU Foundation (2006). Eavesdropping 101: What can the NSA do? 31.01.2006. Retrieved on March 2, 2006, from http://www.aclu.org/safefree/nsaspying/23989res20060131.html

Wright, D., & Gutwirth, S. (2008). Safeguards in a world of ambient intelligence. Springer.

This work was previously published in Advances in Ubiquitous Computing: Future Paradigms and Directions, edited by S.
Mostefaoui, Z. Maamar, and G. Giaglis, pp. 316-347, copyright 2008 by IGI Publishing (an imprint of IGI Global).
Chapter 7.8
Deciphering Pervasive Computing:
A Study of Jurisdiction, E-Fraud and Privacy in Pervasive Computing Environment

Grace Li
University of Technology, Sydney, Australia

ABSTRACT

Pervasive computing and communications is emerging rapidly as an exciting new paradigm and discipline to provide computing and communication services all the time and everywhere. Its systems are now invading every aspect of life to the point that they are disappearing inside all sorts of appliances or can be worn unobtrusively as part of clothing and jewelry. This emergence is a natural outcome of research and technological advances in wireless networks, embedded systems, mobile computing, distributed computing, and agent technologies. At the same time, this emergence brings challenging issues to the legal framework surrounding it. As well recognized, law is a discipline that has direct relevance to human behaviour and its adjoining environment. Thus, a study of law can be a study of the living environment and the people who are in it. This surely brings difficulties for us to study the law in a future scenario such as the pervasive computing environment. Attempting to forecast the future of law, technology, and human behaviour is a very risky proposition; hence, it is impossible to fully discuss topics such as "legal aspects of pervasive computing". This chapter aims to provide a general observation of various legal issues connecting with pervasive computing technologies. To avoid a skeleton introduction piece, the main part of this chapter concentrates on three particular issues: jurisdiction and the choice of law, electronic fraud, and privacy. These three are unsettled issues in the current computing environment and are believed to become more complicated and controversial in the near future with a wider adoption of ubiquitous computing technology. In the end, this chapter suggests that, to serve the
future computing environment better, the legal and regulatory framework should focus on the improvement of internal monitoring of risks and vulnerabilities and greater information sharing about these risks and vulnerabilities. Moreover, the role of government should focus on education and training on the care and use of these technologies and better reporting of risks and responses. A fully embedded computing environment that is safe and sound to live in will need more collaboration between individuals, commercial organizations, and the government.

INTRODUCTION

Pervasive/ubiquitous computing refers to the ubiquitous presence of computing in both mobile and embedded environments, with the ability to access and update information anywhere, anyplace and anytime. At their core, all models of ubiquitous computing share a vision of small, inexpensive, robust networked processing devices, distributed at all scales throughout everyday life and generally turned to distinctly quotidian ends (Greenfield 2006). The term "pervasive computing" does not have any orthodox definition. People use this term to describe the kind of computing that will result from the trends of convergence in communications and information technology, particularly wireless technologies and the Internet. Put simply, pervasive computing is what happens when the Internet gets ubiquitous, embedded, and animated (Kang & Cuff 2005).
Although the phrase "ubiquitous computing" was coined by Mark Weiser1 about 20 years ago2, only in the past few years has it truly taken root, due to technology developments and commercial take-up. Despite the fact that smart devices have been used widely in the military for years (Nanomarket 2006), nowadays they are already used in many areas of our daily life, such as healthcare records, lab order entry and results reporting, billing and costs, as well as personnel scheduling (Acharyulu 2007). Other usages include using a cell phone at a vending machine or to pay for train tickets (Kilburn 2001). Also, it is not rare to see a PDA with wireless connections to the Web, a broker, a child's school, appointments and telephone numbers. Networked coffee shops are becoming more and more popular, such as Wi-Fi at StarBucks (StarBucks 2008).
Pervasive computing devices are not personal computers as we tend to think of them, but very tiny - even invisible - devices, either mobile or embedded in almost any type of object imaginable, including cars, tools, appliances, clothing and various consumer goods, all communicating through increasingly interconnected networks. According to Dan Russell, director of the User Sciences and Experience Group at IBM's Almaden Research Centre, by 2010 computing will have become so naturalised within the environment that people will not even realise that they are using computers. Russell and other researchers expect that in the future smart devices all around us will maintain current information about their locations, the contexts in which they are being used, and relevant data about the users (SearchNet 2008).
Technologies have a long history of being utilized to make our life easier and more interesting. In a way, pervasive computing is far more ambitious than all the other technologies we experienced in the past: it aims to provide us with an entirely new living environment. Although this new living environment is made up of different pieces of technologies, the final product (the ubiquitous computing environment to be created) is significant. As one expert explained, the Internet is going away in the same sense that electricity and plumbing did in the 20th century - out of sight and out of mind (Brenner 2006).
Application of all these technology-based innovations has already made our life much easier and more colorful; at the same time, however, our dependence on machines has increased to an extreme level and, as a consequence, the vulnerability of our living environment has become substantial. With the full embedment of pervasive computing technology in the near future, this vulnerability is bound to be raised to a more significant level.
While it can be difficult to predict precisely how technology will evolve, studying the history of the written letter to telegraphy, telegraphy to telephone, telephone to Internet, and mainframe to personal computer, it seems reasonable to note that in the not-too-distant future, interactive computing technology, in whatever form, will be an integral, invisible constituent of our lives. In the course of doing so, computing technology will also most definitely raise problems in relation to the legal frameworks that surround it. The following part of this chapter therefore identifies and analyses three major legal aspects connecting with the future embedded computing environment: jurisdiction, online fraud, and privacy.

LEGAL CONCERNS OF PERVASIVE COMPUTING

As explained, the pervasive computing environment has created and is still creating regulatory challenges in all aspects of the current legal framework. With the blurring line between the real world and the computing world, the traditional way of utilising laws to regulate human activities is changing. The following part demonstrates some of these changes in three traditional areas of law: jurisdiction, fraud, and privacy.

Jurisdiction and the Choice of Law

The Internet's attack on State jurisdiction advances an important technological determinism that is problematic for the relationship between law and technology. In general, the advocates of denying State jurisdiction would effectively transfer rule-making power to technologists and technologies. Sovereign States, however, have an obligation to protect their citizens and to assure that technologies empower rules of law rather than undermine the protection of citizens; States must be able to assure their citizens' rights within their national territories. As technology enables noxious behaviour online, States need ways to prevent and sanction Internet activities that violate their chosen rules of law. This means that States cannot allow technological attacks to defeat their citizens' politically chosen rights. In effect, the rule of law as expressed by sovereign States must be supreme over technological claims. The rule of law must take precedence over technological choices in establishing the boundaries that society imposes on noxious online behaviour. The supremacy of law, at the same time, must provide incentives for innovation and the development of technologies that can support public policy choices made by States (Reidenberg 2005).
Unfortunately, the current technology of the Internet creates ambiguity and challenge for both State jurisdiction and personal jurisdiction, because network boundaries intersect and transcend national borders. Jurisdiction over activities on the Internet has become a battleground for the struggle to establish the rule of law in the information society. In the pervasive computing environment, determining jurisdiction is set to become even more problematic.
At least three main trends can be seen concerning the jurisdiction issue in cyberspace. Firstly, in the early days of Internet development, cases seemed to follow a rule of location: seeking to deny jurisdiction, choice of law and enforcement to States where users and victims are located constitutes a type of 'denial of service' attack against the legal system, and in effect defenders tend to use technologically based arguments to deny the applicability of rules of law interdicting their behaviour (Boutin 2003). This type of argument always ties closely with the term 'physical location'; jurisdiction is hence decided according to 'location', either the location of the offence or the location of the offenders. For example, U.S. courts have looked to online targeting and to deleterious effects within the forum to determine if personal jurisdiction is appropriate. In Dow Jones & Co. v. Gutnick,3 the High Court of Australia subjected Dow Jones to suit in Australia for defamation in that country under Australian law arising from a Web posting on a U.S.-based server (Weinberg 2002). Nevertheless, with the popularisation of the Internet, early enthusiasm about the Internet rested partly on obliterating space as a relevant dimension, thereby undermining the significance of physical location and distance. Numerous commentators have examined the political and jurisdictional implications of this phenomenon. Pervasive computing ubiquity seems to contribute to the irrelevance of location: one can argue that if exchanging information is the only point, and information can be exchanged from anywhere to anywhere, then it scarcely matters where one is physically located. As modern mobile telephone usage demonstrates, one need not be co-present to be in the same conversation (Berman 2002). More recently, on the opposite side, many scholars have believed that the physical embeddedness of pervasive computing will, however, reintroduce the significance, and at times the primacy, of physical space. Due to pervasive computing's tight coupling with the physical world, it will increasingly matter where you are. Put another way, the functioning and experience of pervasive computing will not be space-neutral; instead, pervasive computing will pay close attention to an individual's location (Kang & Cuff 2005). As it was said, "Wi-Fi users are the most location-obsessed people on Earth." (Boutin 2003).
Based on these different views of physical location, there have been various precedents for making jurisdiction determinations. Firstly, sovereign authority can assert itself against Internet activists, as many recent cases show. In Twentieth Century Fox Film Corp. v. iCrave TV4, a film studio fought successfully to apply copyright law to streaming video on the Internet and obtained an injunction against a Canadian service that could legally stream video in Canada from servers in Canada. In France, the Yahoo! court determined that the French penal code applied to Yahoo!'s activities because the illegal content could be visualized in France (TGI 2000). The United Kingdom recently followed the same approach in a libel case, Lewis v. King5, finding the place of downloading dispositive for the choice of law.
Another method of determining the choice of law issue is through enactment. For example, the Children's Online Privacy Protection Act6 in the United States contains a choice of law provision in its definitions that applies the protections of the American statute to any Website, regardless of its place of origin, that collects personal information from children.7 The European Directive on data privacy contains a similarly extensive choice of law rule that purports to apply European substantive law to any organisation that uses means within the European Union to collect personal data.8
In addition, Professor Matwyshyn from Northwestern University promoted a new approach to jurisdictional determinations, a trusted systems approach (Matwyshyn 2004), which was believed to be able to provide a better solution to the issue of jurisdiction. To explain his approach, Professor Matwyshyn carefully applied some traditional network theories and examined in detail Internet jurisdiction in the context of intentional torts and intellectual property harms before claiming a number of benefits if this trusted system applies, including the avoidance of harms occurring in connection with network communications and balancing the need for intellectual entrepreneurship. Moreover, he claimed that this system will work with, instead of against, the structural realities of network communications while preserving technology neutrality, and it is alleged that this approach strives to reflect the fundamental changes in economic identity and corporate citizenship (Matwyshyn 2004).
More recently, a contractual-theory-based jurisdictional determination has also been promoted by many scholars from the US (Timofeeva 2005). Given the inadequacies of the Zippo passive versus active test (Slutsky, King & Spalding 1996), this approach seeks to identify a more effective standard for determining when it is appropriate to assert jurisdiction in cases involving predominantly Internet-based contacts. The solution is alleged to move toward a targeting-based analysis, which would seek to identify the intentions of the parties and to assess the steps taken to either enter or avoid a particular jurisdiction (Kumar 2006). Targeting would also lessen the reliance on effects-based analysis, the source of considerable uncertainty, since Internet-based activity can ordinarily be said to create some effects in most jurisdictions. As to the appropriate criteria for a targeting test, the core jurisdictional principle of foreseeability was used (Geist 2002).
Nevertheless, this chapter suggests that innovations in information technology will possibly undermine the technological assault on State jurisdiction as well as the issue of choice of law in cyberspace. To explain further, on the State jurisdiction side, innovation can create a counter-intuitive effect, because more sophisticated computing enlists the processing capabilities and power of users' computers. This interactivity can give the victim's State a greater nexus with offending acts and provides a direct relationship with the offender for purposes of personal jurisdiction and choice of law. As a matter of fact, more and more information can be collected by computers and systems as IT develops. In the pervasive computing environment, it is fair to predict that automatic data collection and information interaction will be more sophisticated and comprehensive. In addition, it has been argued that some of these same innovations also enable States to enforce their decisions electronically and consequently bypass the problems of foreign recognition and enforcement of judgments (Reidenberg, 2005). States can use filters and packet interceptors as well as hacker tools like viruses and worms to enforce decisions and sanction malfeasance. These electronic tools might establish electronic borders that prevent offending material and foreign wrongdoers from entering the state's electronic zone (on this point, China presents an interesting case with its firewall); electronic blockades that prevent offenders from transmitting outside the borders of the wrongdoer's state; or electronic sanctions such as a denial-of-service attack to take down an offender's site (Reidenberg, 2003).
In conclusion, because the rationality of extant legal structures is fundamentally rooted in territorial jurisdiction based on physical borders, the pervasive impact of computer networks must be regulated, if at all, through some territorial connection between courts and litigants (Zembek 1996). Different States may also want to choose or invent their own methods of protecting sovereignty in cyberspace, which will, however, need recognition from other global members. State governments are most likely to remain relevant in modern contexts if they recognise the growing needs for legal and regulatory uniformity in cyberspace. To remain viable, state and local authorities must focus more of their attention outward: understanding, assessing, and serving the interdependent interests of the nation and the world. Paradoxically, the less classically insular and independent State bodies become, the more likely they are to preserve their dominion. Prudent State governments will move cooperatively towards the effective use of model codes, standards, and assimilation to govern the Internet (Salbu 1998). Moreover, the legal and regulatory framework should focus on the improvement of internal monitoring of risks and vulnerabilities and greater information sharing about these risks and vulnerabilities.

E-Fraud

The technological shift to Web-based technologies and the coming pervasive computing environment have had a direct impact on the conduct of commercial transactions and all other related industries. The online environment has made some industries, such as the financial industry, extremely vulnerable, most obviously in the areas of e-banking and online securities trading.
On October 14, 2004, New York Federal Reserve President Timothy Geithner stated: "The increased risk of terrorist attacks and increased sophistication of cyber-attacks on electronic networks have added new dimensions to the traditional concerns of safety and soundness and operational resilience," he told a financial services conference in Atlanta. Geithner said the growing challenge of cyber-attacks will require a "major ongoing commitment of resources". "Beyond the direct financial losses from criminal activity, these threats pose a broader risk to confidence in the integrity of financial institutions, payments systems and, ultimately, the global payments network," he said (Geithner 2004). It is therefore rather obvious that the safety issue in today's online environment is critical, and it will be more so in the future with the pervasiveness of computing technology.
Electronic finance has been utilised widely around the globe, including in emerging markets such as Korea, China and India. With the coming of computer ubiquity, these trends will become more pressing. It is believed that e-finance will expand access, provide opportunities and effectively circumvent some of the constraints of traditional modes by using new delivery channels (Kellermann & McNevin 2005).
However, over time, it has become clear that online fraud rates for e-commerce are much higher than for those transactions completed via more traditional modes (Glaessner & McNevin 2003). For instance, the convergence of certain innovations in the securities market over the past decade is fostering a fertile environment for fraud, increasing operational risk and amplifying the potential for systemic failure. By utilising e-platforms, brokers can reduce costs and barriers to market participants but increase the probability of investor fraud. "Buyer beware" takes on significantly new meaning in the online, all-weather environment. E-brokering coupled with the use of unsecured wireless devices by traders and those in the pit is indicative of the new kinds of operational risks that should be understood, analysed and mitigated before they are incorporated into a business architecture. Certain costs and risks associated with the e-finance revolution have yet to be fully appreciated (Baylis 2007). Recently, technologists have published books which state that not all information or transactions belong in the online world. Decisions to put any information online should be made in a reasonably prudent manner, after a thorough risk-benefit analysis and awareness of the weaknesses and vulnerabilities inherent in the system (Kellermann & McNevin 2005).
In the meantime, analysis of computer crime statistics reveals that in recent years the number of hacking cases dropped, while electronic banking thefts and electronic fraud increased (Wong 2005), which naturally leads to concerns about e-fraud in a more advanced environment with the pervasiveness of computing technology.
Many countries have passed legislation in this field to deal specifically with e-fraud, such as the Computer Fraud and Abuse Act (CFAA) in the USA, which was originally enacted in 1984 as a criminal statute to address hacking and the growing problem of computer crime.9 Since the enactment, there have been several attempts to utilise the civil action provisions of the CFAA; these attempts have surprisingly succeeded in convincing US federal courts that hacking includes accessing and using the factual information a company has chosen to post on a publicly available Website (Galbraith 2004).
However, this approach is not generally followed by other countries, and there is no widely accepted rule for e-fraud. Although studies have been done on the setting up of a cyber court (Radhakrishna 2007), there is still a lacuna in this field.
Pervasive computing brings further complications to this issue. It has been said that pervasive computing will establish an environment in which computers and the Web are as important and natural to our lives as the oxygen in the air we breathe (MIT Oxygen Project 2004). If this is to be the reality, should we be worried about more online fraud activities?
Countries want to build an information-rich and knowledge-based economy for the twenty-first century and beyond. In constructing the information super-highway, all States are concerned with deviance and disorder in cyberspace. Many governments have established agencies to study the technological problems and legal issues involved in making cyberspace safe. Unfortunately, technological measures can only be pursued at an early stage of the pervasiveness of computing; regulatory measures have to keep pace. Nevertheless, how to keep pace with technological development has presented an unprecedented challenge to regulators and policy makers throughout the globe (Madison 2003, Hale 2005).
To face the new environment, this chapter argues that countries should adopt a comprehensive approach in formulating and implementing an online fraud policy. Governance in cyberspace is a matter of successfully managing the combination of laws, norms, the market, the computer architecture and ethics to achieve order. A comprehensive approach to fighting online fraud also calls for cooperation from all those who have a vested interest and can make a difference. In this regard, governments should take a stronger leading role in promoting public awareness and mobilising public support for computer security. The role of government should also focus on education and training on the care and use of these technologies and on better reporting of risks and responses. Fighting e-fraud will require joint effort from various governing regimes and assistance from different legal jurisdictions. Apart from a great need for more collaboration between individuals, commercial organisations and relevant government departments, States should also work with international institutions and overseas regulators in sharing information, developing best practices, and adopting international laws to facilitate cyberspace governance.

PRIVACY

As one of the most debated legal issues, privacy is a topic that cannot be left out of any technology-based discussion. It has been said that privacy is a distinctly modern product (Godkin 1890), reflecting not only attitudes towards the retention of information or data, but also the norms and values of different societies.
During the last decade, the rise of widespread Internet access has caused individuals to re-evaluate the importance of control over personal information within our society. In the mid-1990s, questions of data control and privacy began to gain momentum as issues of heightened legal and national importance internationally, both among consumers and corporate entities (Winn & Wrathall 2000). As consumers increasingly ventured online to engage in information-sensitive activities, such as checking bank balances or transmitting credit card information in connection with purchases, issues of data privacy and security entered their thoughts as a consequence of a medium that was novel to them (Fox 2000). Meanwhile, corporate entities had begun to place a premium on consumer information databases and to change the way consumer data is valued in corporate acquisitions (Winn & Wrathall 2000).
The privacy issue this chapter pursues is a two-fold notion. Privacy protection can be asserted against a government or information collector obtaining information without a warrant; it can also be asserted against illegal information collection or data leaking to an unauthorised party, which can be another individual, an organisation or even the general public.
Firstly, the concern is privacy protection against government obtaining information without a warrant. We now live in a world of pervasive, ubiquitous data collection and retention (Blau 2004). Modern computer technology permits us
to acquire and retain knowledge, communicate instantly and globally, purchase goods and services, engage in hobbies, and participate in politics and cultural affairs, all in less time and with less expense than once dreamed possible. One major effect of this revolution has been a serious reduction in an individual’s rights and expectations of privacy. It has become increasingly common for data about our transactions and ourselves to be collected and retained by third parties (collectors) who often disclose more intimate details of our lives and lifestyles than would have ever been imaginable or acceptable just a decade ago. In turn, this retention creates an unprecedented risk that a local, state or federal government can obtain, without the need for a warrant, data about individuals (consumers) to which it has never had access (Elon University, Pew Internet & American Life Project 2004). Should we be worried?
When we talk about privacy protection against illegal information collection or data leaking to an unauthorised party, there are many real-life examples. For instance, many Internet users now rely on third-party providers for the digital storage of private documents, correspondence (including e-mail), business and financial records, family photographs and hobby information. Do we lose our privacy interest in those materials when we entrust them to a third party? In the past, when information was disclosed to educational, religious and medical institutions, it was done either orally or in scattered paper documents. Now, such information is stored in a digital format, allowing that information to be collected, sorted and reported in ways never possible before. With the increasing computerisation of home services, from home security services and cable television to the “smart/aware home,”10 security information that was previously available only to family members is now communicated to databases managed by third parties. The increasing sophistication of remote sensing and database technology means that the amount of information available to providers of utility and telecommunications services has dramatically increased. Do we lose our privacy interest in that information because it is now more efficient to collect it in a database where it can be searched and sorted in a myriad of ways? (Brenner & Clarke 2006).
Now envision a future where we can “right click” on any object with our communicators and receive contextually relevant information. People already “Google” each other before going on dates or to interviews. Think about having the option of one-click “Googling” anyone you walk past, as you walk past. PDA-sized gadgets that provide this sort of datasense about fellow conference attendees have already rolled out. This process could become automated; no specific “request” to pull information will be required. Rather, software will manage our datasense and constantly seek out and filter information about nearby people (Kang & Cuff 2005). Consider how the privacy of people can possibly be protected in this emerging environment.
All these questions and concerns are existing issues in both the real and online worlds. More importantly, following closely behind technological developments, they will without doubt carry over into the pervasive computing age. We therefore need to ask whether or not current privacy protection can really be adapted to deal with a world in which technology is increasingly pervasive, a world of ubiquitous technology. Unfortunately, we have not found the answers to these questions yet.
Many of the current discussions of Internet privacy issues view the Internet as the only relevant variable. In a pervasive computing environment, this will not be the case. People will access information at any time, in any place, through any type of device, which means that information and data need to be collected for everything and used for everyone. Therefore, the traditional view of privacy will need to be reconsidered rather than simply re-applied to the new environment. More specifically, the traditional conceptions of the issue maintain that the Internet may affect privacy by increasing the quantity and ease of invasions of privacy, or by making forms of privacy invasion possible that were previously unimaginable. Yet privacy itself is, as many precedents have put it, a matter of “reasonable expectations” (Katz v. United States 1967, R. v. Wong 1990 & R. v. Duarte 1990), and the Internet and related developments may dramatically change our conception of what is public and what is private. Unless we realise that the Internet changes our expectations of privacy, and makes much of life more public, we will not be able to understand the way the Internet changes our conception of privacy as much as, if not more than, it changes the way in which privacy might be invaded. As it was once said, if you do not change your view of what is or is not private, people will probably say to you: “you have no privacy. Get over it.” (McNealy & Chronicle 2003).
The majority view is that the very foundation of privacy law, as a personal right to control the use of one’s data, needs to be reconsidered in the computerised age. Professor Paul M. Schwartz has promoted a so-called “constitutive privacy” model in his research, which elaborates information privacy as a constitutive value that helps both to form the society in which we live and to shape our individual identities; he argued that the State can improve the functioning of a privacy market and play a positive role in the development of privacy norms (Schwartz 2000). So far, this model has been widely accepted and supported by many academics (Cate 2000).
Another notable argument relating to Internet privacy, put forward by Swire, is called the “trustwrap” model (Swire 2003), which promotes the use of legally binding instruments to protect privacy. The idea of using the word trustwrap arose in thinking about the Tylenol scare in the early 1980s, when cyanide was injected into bottles of the medicine. Johnson & Johnson responded by re-engineering every sale of Tylenol. Today, every bottle of pills has a plastic wrap around the outside of the bottle. Every bottle has a foil seal inside the cap. Inside the bottle, the medicine exists in tamper-proof caplets or tablets, rather than the earlier capsules into which the malicious person had injected the poison. Swire proposed the term “trustwrap” to bring together the physical transactions of Tylenol and the virtual transactions of E-Commerce. In his argument, he also compares the effectiveness of using legally binding instruments to regulate privacy with that of self-regulation in the Web environment. The conclusion he reaches is that binding legal rules for Internet privacy may well spur E-Commerce and provide more effective “trustwrap” than self-regulatory alternatives (Swire 2003).
Moreover, Brenner and Clarke have termed future privacy protection “relation-based shared privacy” (Brenner & Clarke 2006). This model explains how the societal benefits of pervasive, ubiquitous technology can only be achieved if people recognise the privacy of certain stored transactional data. In the absence of a constitutional recognition of that privacy, the only alternatives are to forego utilisation of the technology or to resort to inefficient barriers to the exploitation of privacy (Brenner & Clarke 2006).
Nevertheless, one thing all these arguments seem to have in common is a proposed change to the foundation of traditional privacy protection. The definition and scope of privacy are to change with the pervasiveness of computing technology. The future is unknown; the ways in which technology may be abused to violate privacy are also unknown. Something that used to be called “personal” may not be personal anymore in the near future, with emerging environments such as the smart/aware home or office. People may therefore change their attitudes towards what constitutes their privacy. Just as Professor Cate noted, the definition’s exclusive focus on individual control has grown incomplete in a world in which most privacy concerns involve data which we inevitably generate in torrents as we go through our lives in an increasingly computerised, networked environment, and which can be collected and used by virtually anyone, usually without us knowing anything about it. Few of us have the awareness and expertise to consider trying to control all of the data we generate, few of us have the time or, frankly, even the incentive to attempt to do so, and the sheer volume of data, the variety of sites where they are collected and used, and the economic incentive for doing so would make the attempt laughably futile (Cate 2000).
Therefore, instead of inventing more legal instruments (if such instruments can ever be invented) or making further use of the current legal framework, this chapter argues that, in establishing better privacy protection, laws made by lawmakers or government can do little, because the government itself, or the laws it promotes, can sometimes be the real privacy offender. This is not to say that there is nothing they can do to help. The role of the government can focus more on educating people to understand and appreciate the emerging technologies and, at the same time, to be aware of the changing notion and boundaries of individual privacy, as well as the purposes and procedures of data collection and processing. One thing is for sure: people will still have the right to be left alone by either government or others. This right, however, needs to be based upon a better understanding of the new computerised world.

CONCLUSION

Ubiquitous technology presents us with the question of deciding how we want to apply existing laws in a world that is very different from the world from which they sprang. Hence, many traditional areas of law will be challenged by the environment of pervasive computing if they have not been challenged by the current online environment yet. The emergence of the new technologies also presents us with particular issues that were almost nonexistent in the old days when there were no computers, no copying machines, no credit card transactions, no telephones or other services provided by externalities, no insurance companies, no educational or employment records, none of the kinds of data we routinely generate in the course of our lives. Although it is true that people can sometimes avoid aspects of modern urban life like these, long-term resistance is pointless.
Therefore, many traditional approaches in law are no longer appropriate for dealing with technology-based disputes. The physical and informational barriers we once used to differentiate between our “real world” and “online world” are being eroded by technology, and the erosion is accelerating. If we persist in utilising the current format, we will very likely consign the future to a greater level of vulnerability. Thus, we must try to equip ourselves with comprehensive, updated laws before the time comes. As scholars have predicted, decentralised economic, social, and cultural forces will lead to the widespread and largely unavoidable adoption of pervasive computing in the next two decades; we have less time than that to plan (Kang & Cuff 2005).
As clearly stated at the beginning of this chapter, attempting to forecast the future of law, of technology and of human behaviour is a very risky proposition. However, this is not to say we cannot assume technology will become far more complex than it is today. We can even go a step further and say that the new technology will open up increased possibilities for misuse. Unfortunately, it is difficult for us to imagine how pervasive technology will be misused because the context is so alien to us. If this article were being written in the 1950s, it would assume mainframe technology and would therefore analyse how to control misuse by programmers and others with face-to-face access to mainframe computers. We are in a similar situation with regard to pervasive technology; we know enough to understand the kinds of things it is intended to do, but not enough, really, to be able to forecast how it can be misused (Brenner 2006). We respond, therefore, to what we see as “challenges”, which, for the purpose of this chapter, are (but are not limited to) the jurisdiction issue, e-fraud and privacy. This chapter has argued that, without knowing precisely what the future technology will lead us to, the role of government should focus on the improvement of internal monitoring of risks and vulnerabilities and on greater information sharing about these risks and vulnerabilities. Moreover, more efforts in education and training on the care and use of these technologies and better reporting of risks and responses are desirable.
Finally, the study of the legal aspects of pervasive computing is still in its infancy; there is much to be discovered by scholars, researchers and professionals in the field. It is crucial for us to act before pervasive computing rolls out and technology-based disputes reach crisis proportions, a price too high for us to pay. We are now standing at the beginning of yet another era in computing technology development. The battle lines are drawn; let the game begin.

FUTURE RESEARCH DIRECTIONS

The pervasive computing environment, like a brand new world, presents an enormous challenge to every aspect of traditional law. Although the pervasiveness of computing technology will be a gradual process, and the current online environment has already demanded many new ways of thinking in law, the changes needed to make our laws functional in the new computerised world are still far from sufficient, which opens up many great research opportunities for scholars, researchers and professionals in the field.
This chapter has identified and analysed three major concerns connected to computing ubiquity. As clearly stated in the body of this chapter, the analysis within is at a very general level, given that the idea is still alien to our lives and there is no precedent yet. In fact, each of these three areas of law is worth a separate in-depth study in the future. Moreover, the author believes that the following areas of law and regulation are also in urgent need of updating: contracting in the pervasive computing environment, updating convergent telecommunications regulation, and international collaboration in IP protection and trans-boundary crime management in the pervasive computing environment.
In addition, as this chapter has claimed, to be better equipped for this new environment, the legal and regulatory framework should focus on the improvement of internal monitoring of risks and vulnerabilities and on greater information sharing about these risks and vulnerabilities. This requires a better system for educating the public on the care and use of these technologies and better reporting of risks and responses. However, questions of who is responsible for this, whether this should be part of corporate social responsibilities, how to achieve it, what the appropriate monitoring system should be, who should be the monitor and who should be monitored…are in need of future studies.

REFERENCES

Acharyulu, G. V. R. K. (2007). The healthcare supply chain: Improving performance through greater visibility. The Icfaian Journal of Management Research, 6(11), 32-45.
Allen, G. (2006). The market for nano-enabled memory and storage: 2006 & beyond. VA, US: Nanomarkets.
Baylis, E. A. (2007). Parallel courts in post-conflict Kosovo. Yale Journal of International Law, 32(1), 1-59.
Berman, P. S. (2002). The globalization of jurisdiction. University of Pennsylvania Law Review, 151(1), 311-529.
Blau, J. (2004). German group studies ubiquitous computing. Data Privacy. Retrieved November 12, 2007, from http://www.nwfusion.com/news/2004/1222germagroup.html
Boutin, P. (2003). Putting the world into the Wide Web: An instant messenger that knows where you are. Retrieved from http://slate.msn.com/id/2083733
Brenner, S., & Clarke, L. (2006). Fourth amendment protection for shared privacy rights in stored transactional data. Journal of Law and Policy, 14, 211-280.
Brenner, S. W. (2006). Law in an era of pervasive technology. Widener Law Journal, 15, 768-784.
Cate, F. (2000). Principles of Internet privacy. Connecticut Law Review, 32, 889-891.
Elon University. (2004). Imagining the Internet. Pew Internet & American Life Project. Retrieved May 15, 2007, from http://www.elon.edu/predictions/q12.aspx
Fox, S. (2000). Trust and privacy online: Why Americans want to rewrite the rules. Pew Internet & American Life Project. Retrieved January 12, 2008, from http://www.pewInternet.org/reports/toc.asp?Report=19
Galbraith, C. D. (2004). Access denied: Improper use of the Computer Fraud and Abuse Act to control information on publicly accessible Internet Web sites. Maryland Law Review, 63, 315.
Geist, M. A. (2002). Is there a there there? Towards greater certainty for Internet jurisdiction. Berkeley Technology Law Journal, 65-66.
Geithner, T. (2004). Fed's Geithner warns of cyber-attack risk to banks. In REUTERS (Producer). US.
Glaessner, T., Kellermann, T., & McNevin, V. (2004). Electronic safety and soundness: Securing finance in a new age (26). Washington, DC: World Bank.
Godkin, E. L. (1890). The rights of the citizen IV: To his own reputation. Scribner's Magazine.
Greenfield, A. (2006). Everyware: The dawning age of ubiquitous computing. New York: New Riders Press.
Hale, R. V. (2005). Wi-Fi liability: Potential legal risks in accessing and operating wireless Internet. Santa Clara Computer and High Technology Law Journal, 21, 543.
Kang, J., & Cuff, D. (2005). Pervasive computing: Embedding the public sphere. Washington and Lee Law Review, 62(1), 93-146.
Kilburn, D. (2004). Vending and the mobile Internet. Retrieved December 12, 2007, from http://www2.gol.com/users/kilburn/mobvend.htm
Kumar, J. (2006). Determining jurisdiction in cyberspace. Retrieved April 1, 2007, from http://ssrn.com/abstract=919261
Madison, M. (2003). Rights of access and the shape of the Internet. Boston College Law Review, 44, 433-507.
Matwyshyn, A. (2004). Of nodes and power laws: A network theory approach to Internet jurisdiction through data privacy. Northwestern University Law Review, 98(2), 493.
McNealy, S., & Chronicle, F. (2003). On the record. Retrieved December 9, 2007, from http://www.sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2003/09/14/BU141353.DTL&type=business
MIT. (2004). Oxygen, the pervasive computing project. Retrieved January 14, 2008, from http://oxygen.csail.mit.edu/Overview.html
Paris, T. G. I. (2000). Retrieved October 12, 2007, from http://www.juriscom.net/txt/jurisfr/cti/tgi-paris20001120.pdf
Radhakrishna, G. (2007). Fraud in Internet banking: A Malaysian legal perspective. Icfai Journal of Bank Management, 6(1), 47-62.
Reidenberg, R. (2003). States and Internet enforcement. University of Ottawa Law & Technology Journal, 1(213), 225-229.
Reidenberg, R. (2005). Technology and Internet jurisdiction. University of Pennsylvania Law Review, 153, 1951.
Schwartz, P. (2000). Internet privacy and the state. Connecticut Law Review, 32, 815.
SearchNetworking. (2004, July 12). Pervasive computing. Retrieved September 30, 2007, from http://searchnetworking.techtarget.com/gDefinition/0,294236,sid7_gci759337,00.html
Slutsky, B. A. (1996). Jurisdiction over commerce on the Internet. The Data Law Report, 4(2).
StarBucks. (2007). Highspeed wireless Internet access. Retrieved September 12, 2007, from http://www.starbucks.com/retail/wireless.asp
Swire, P. (2003). Trustwrap: The importance of legal rules to electronic commerce and Internet privacy. Hastings Law Journal, 54, 847.
Timofeeva, Y. A. (2005). Worldwide prescriptive jurisdiction in Internet content controversies: A comparative analysis. Connecticut Journal of International Law, 20(199).
Weinberg, A. (2002). Australia to Dow Jones: Stay awhile. Retrieved December 12, 2007, from http://www.forbes.com/2002/12/10/cx_aw_1210dowjones.html
Winn, K. J., & Wrathall, J. (2000). Who owns the customer? The emerging law of commercial transactions in electronic customer data. Business Lawyer, 56, 213-233.
Wong, K. C. (2005). Computer crime and control in Hong Kong. Pacific Rim Law and Policy Journal, 14, 189.
Zembek, R. S. (1996). Jurisdiction and the Internet: Fundamental fairness in the networked world of cyberspace. Albany Law Journal of Science & Technology, 6, 339-343.

ADDITIONAL READING

Allen, A. (2007). The Virtuous Spy: Privacy as an Ethical Limit. University of Pennsylvania Law School Public Law, Research Paper No. 07-34.
Baylis, E. A. (2007). Parallel Courts in Post-Conflict Kosovo. Yale Journal of International Law, 32(1).
Callmann, R. (2007). The Law of Unfair Competition: Trademarks and Monopolies (4th ed.). Thomas West.
Cuff, D., Hansen, M., & Kang, J. (2007). Urban Sensing: Out of the Woods. UCLA School of Law Research Paper No. 08-02.
Estrin, D., Govindan, R., & Heidemann, J. (2000). Embedding the Internet. Communications of the ACM.
Henderson, S. (2007). Beyond the (Current) Fourth Amendment: Protecting Third-Party Information, Third Parties, and the Rest of Us Too. Pepperdine Law Review, 34.
Johnson, D., & Post, D. (1996). Law and Borders: The Rise of Law in Cyberspace. Stanford Law Review, 48, 1367.
Kang, J. (1998). Information Privacy in Cyberspace Transactions. Stanford Law Review, 50.
Kim, S. H. (2006). The Banality of Fraud: Re-Situating the Inside Counsel as Gatekeeper. Fordham Law Review, 74.
Penney, S. (2007). Reasonable Expectations of Privacy and Novel Search Technologies: An Economic Approach. Journal of Criminal Law and Criminology, 97(2).
Weiser, M. (1991). The Computer for the Twenty-First Century. Scientific American.
Weiser, M. (1993). Hot Topics: Ubiquitous Computing. IEEE Computer.
Weiser, M. (1993). Some Computer Science Problems in Ubiquitous Computing. Communications of the ACM.
Weiser, M. (1994). The World Is Not a Desktop. Interactions, 7-8.
Zick, T. (2006). Clouds, Cameras, and Computers: The First Amendment and Networked Public Places. St. John's Legal Studies Research Paper No. 06-0048.

KEY TERMS

Assault: In Australia, assault refers to an act that causes another to apprehend immediate and personal violence.
Choice of Law: Choice of law is a procedural stage in the litigation of a case involving the conflict of laws when it is necessary to reconcile the differences between the laws of different legal jurisdictions, such as states, federations or provinces. The outcome of this process is potentially to require the courts of one jurisdiction to apply the law of a different jurisdiction in lawsuits.
Defamation: In law, defamation is the communication of a statement that makes a false claim, expressly stated or implied to be factual, that may harm the reputation of an individual, business, product, group, government or nation.
Injunction: An injunction is an equitable remedy in the form of a court order, whereby a party is required to do, or to refrain from doing, certain acts.
Jurisdiction: In law, jurisdiction is the practical authority granted to a formally constituted legal body or to a political leader to deal with and make pronouncements on legal matters and, by implication, to administer justice within a defined area of responsibility.
Precedent: In common law legal systems, a precedent is a legal case establishing a principle or rule that a court or other judicial body adopts when deciding subsequent cases with similar issues or facts.
Substantive Law: Substantive law is the statutory or written law that governs the rights and obligations of those who are subject to it. Substantive law defines the legal relationship of people with other people or between them and the state. Substantive law stands in contrast to procedural law, which comprises the rules by which a court hears and determines what happens in civil or criminal proceedings.

ENDNOTES

1. Mark D. Weiser (July 23, 1952 – April 27, 1999) was a chief scientist at Xerox PARC. Weiser is widely considered to be the father of ubiquitous computing, a term he coined in 1988.
2. Weiser wrote some of the earliest papers on the subject, largely defining it and sketching out its major concerns. Recognising that the extension of processing power into everyday scenarios would necessitate understandings of social, cultural and psychological phenomena beyond its proper ambit, Weiser was influenced by many fields outside computer science, including “philosophy, phenomenology, anthropology, psychology, post-Modernism, sociology of science and feminist criticism.” He was explicit about “the humanistic origins of the ‘invisible ideal in post-modernist thought’”, referencing as well the ironically dystopian Philip K. Dick novel Ubik. MIT has also contributed significant research in this field, notably Hiroshi Ishii’s Things That Think consortium at the Media Lab and the CSAIL effort known as Project Oxygen.
3. [2002] HCA 56. Available at http://www.kentlaw.edu/perritt/courses/civpro/Dow%20Jones%20&%20Company%20Inc_%20v%20Gutnick%20%5B2002%5D%20HCA%2056%20(10%20December%202002).htm
4. (W.D. Pa. Feb. 8, 2000).
5. Lewis v. King, [2004] EWCA (Civ) 1329 (Eng. C.A.), available at http://www.courtservice.gov.uk/judgmentsfiles/j2844/lewis-v-king.htm
6. Pub. L. No. 105-277, 112 Stat. 2681 (1998) (codified at 15 U.S.C. 6501-6506).
7. 15 U.S.C. 6501(2) (2000).
8. See Council Directive 95/46/EC of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data, art. 4, § 1(c), 1995 O.J. (L 281) 31, 39; Reidenberg, J. R., & Schwartz, P. M., Data Protection Law and On-line Services: Regulatory Responses 28 (1998), available at http://europa.eu.int/comm/internal_market/privacy/docs/studies/regul_en.pdf (last visited 26 May 07).
9. It was amended in 1994, 1996 and in 2001 by the USA PATRIOT Act.
10. “Smart houses” (or “aware homes”) incorporate intelligent, embedded systems which interact with the occupants and with outside systems. See, e.g., Georgia Institute of Technology, The Aware Home, http://www.cc.gatech.edu/fce/ahri/; Philips Research, Ambient Intelligence: A New User Experience, http://www.research.philips.com/InformationCenter/Global/FArticleSummary.asp?lNodeId=712; see also, e.g., K. Ducatel et al., European Comm'n, IST Advisory Group, Scenarios for Ambient Intelligence in 2010, 4-7 (2001).

This work was previously published in Risk Assessment and Management in Pervasive Computing: Operational, Legal,
Ethical, and Financial Perspectives, edited by V. Godara, pp. 218-232, copyright 2009 by Information Science Reference
(an imprint of IGI Global).
1465

Chapter 7.9
Privacy Control Requirements for Context-Aware Mobile Services
Amr Ali Eldin
Accenture BV, The Netherlands

Zoran Stojanovic
IBM Nederland BV, The Netherlands

ABSTRACT

With the rapid developments of mobile telecommunications technology over the last two decades, a new computing paradigm known as ‘anywhere and anytime’ or ‘ubiquitous’ computing has evolved. Consequently, attention has been given not only to extending current Web service and mobile service models and architectures, but increasingly also to making these services context-aware. Privacy represents one of the hot topics that has questioned the success of these services. In this chapter, we discuss the different requirements of privacy control in context-aware services architectures. Further, we present the different functionalities needed to facilitate this control. The main objective of this control is to help end users make consent decisions regarding the collection of their private information under conditions of uncertainty. The proposed functionalities have been prototyped and integrated in a UMTS location-based mobile services testbed platform on a university campus. Users have experienced the services in real time. A survey of users’ responses on the privacy functionality has been carried out and analyzed as well. Users’ collected response on the privacy functionality was positive in most cases. Additionally, the results obtained reflected the feasibility and usability of this approach.

INTRODUCTION

Despite the expected benefits of ambient technology and the need for developing more and more context-aware applications, we argue that privacy represents a major challenge for the success and widespread adoption of these services. This is due to the collection of a huge amount of users’ contextual information, threatening their
Copyright © 2010, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.

privacy concerns. Controlling users’ information collection represents a logical way to let users become more acquainted with these context-aware services. Additionally, this control requires users to be able to make what are known as consent decisions, which face a high degree of uncertainty due to the nature of this environment and the lack of experience on the user side with information collectors’ privacy policies. Therefore, intelligent techniques are required in order to deal with this uncertainty.
Context-aware applications are applications
that collect user context and give content that is
adapted to it. There have been different
scenarios in the literature that describe how a
context-aware application would look. Mainly,
the idea is that the user’s environment is
populated with large numbers of sensors that
collect information about users in order to
provide useful content or services that are
adapted to his or her context. Although this
personalized functionality would be very
helpful for the user, it allows collecting parties
to know sensitive information about users that
can violate their privacy, unless these
applications have taken special measures and
practices to support their privacy needs.
Informed consent is one of the requirements
of the European Directive (2002). Accordingly,
a user should be asked to give his or her
informed consent before any context collection.
From a usability point of view, it will be difficult to let each user enter his or her response each time context is collected.
Increasingly, the type of collected data will
highly influence his or her privacy concerns.
The problem becomes even more complex when more than one party gets involved in collecting user information, for example third parties. Third parties of a certain information collector represent unknown parties to the user. Although the first information collector might list in its privacy policy that user information is being given to those third parties in one way or another, no means has yet been proposed in the literature (Hauser & Kabatnik, 2001) for the user to know which party collects which information. Thus uncertainty takes over when a user gets pushed information or services from unknown collectors and must decide whether to give them consent or not.

PROBLEM DESCRIPTION AND RELATED WORK

In this section, we discuss the motivation behind this work and the type of research
problem we are addressing. The problem
investigated in this work can be seen as a
multidisciplinary problem where legal, social,
and technical domains are concerned with
providing solutions. In this work, we focus on
the technological perspective, taking into
consideration requirements set by the other
domains.
There is a trade-off between users’ privacy needs and their motivation for giving private information away. Complete privacy is impossible in a society where a user has to interact with other members of the society such as colleagues, friends, or family members. Each flow of user information reveals some private information about him or her, at least to the other destination. Since this flow of information is needed and may be initiated by the user, he or she would have to make sure that the other party (the destination) is going to respect his or her privacy requirements. Privacy policies and legal contracts help users and service providers reach an agreement on the type of privacy users would have. However, these contracts do not provide enough flexibility for users in choosing the type of privacy they need. Nor do they guarantee that users’ privacy will not be violated; rather, they guarantee that the user would have the right to sue if these agreed-upon contracts were violated.
contracts were violated.
Privacy-enhancing technologies (PETs) are assumed to help reduce privacy threats. Privacy threats emerge as a result of the linkage between identities and users’ contextual data. Therefore, most literature has focused on the separation between both types of information: whether to control users’ identities, by deterring identity capturing through anonymity solutions (Camenisch & Herreweghen, 2002; Chaum, 1985; Lysyanskaya, Rivest, Sahai, & Wolf, 1999); or to control private information perception, such as watermarking techniques as in Agrawal and Kiernan (2002), distributing and encrypting data packets in Clifton, Kantarcioglu, Vaidya, Lin, and Zhu (2002), and physical security through limiting data access within a specified area (Langheinrich, 2001). Most of the previous efforts lack the involvement of users. Stated differently, user control of privacy has not been taken seriously as a requirement for the design of context-aware services in previous efforts. Instead, a lot of effort has concentrated on developing sophisticated encryption mechanisms that prohibit unauthorized access to private information when stored locally on a database server managed by the information collector or by a trusted third party. We argue that not only user identity information, but also other information with different degrees of confidentiality, can represent a private matter as well, especially when user context is associated with it. Therefore, controlling users’ contextual information collection could represent a more realistic approach in such context-aware systems. Controlling users’ contextual information perception implies making decisions on whether or not to allow contextual entities to be collected by a certain party, in what are known as user consent decisions.
The Platform for Privacy Preferences (P3P), submitted by the World Wide Web Consortium (W3C), provides a mechanism to ensure that users can better understand service providers’ privacy policies before they submit their personal information, but it does not provide a technical mechanism to enforce privacy protection and to make sure that organizations work according to their stated policies (Cranor, Langheinrich, Marchiori, Presler-Marshall, & Reagle, 2004). However, there have been some efforts to extend P3P to the mobile environment to provide users with control over their location data. Most of these efforts focused on the technical facilitation of this extension to suit mobile devices and communication protocols, such as Langheinrich (2002) and Nilsson, Lindskog, and Fischer-Hübner (2001). More work is required before the P3P protocol becomes widely applicable to the mobile environment, due to its limitations in automating the expression and evaluation of privacy policies and user preferences, given the limited capabilities of context-aware mobile devices, the dynamically changing context of users, and the large number of context information collectors.
In P3P (Cranor et al., 2004) and APPEL (Cranor, Langheinrich, & Marchiori, 2002), it is assumed that users’ consent should be given as a single entity covering all collected information. In privacy-threatening situations, APPEL evaluation will block only identifying information from being transferred to the collector side. In one effort to design a privacy control architecture, Rodden, Friday, Henk, and Dix (2002) propose a minimal asymmetry approach to control personal location information. A trusted party keeps location information structured in such a way that other parties cannot have full access privileges until they have reached a service agreement. Moreover, user identities are replaced with pseudonyms when other parties collect the location information. Although this approach gives users more control capabilities, it does not provide a means of reducing the intensive involvement of users.
Although a lot of effort on privacy protection has been exerted in the literature (Ackerman, Darrell, & Weitzner, 2001; Camenisch & Herreweghen, 2002; Casal, 2001), not many have realized the option that privacy can be negotiable. A user might be willing to share his or her information with information collectors in order to get a cheaper service or a better offer. What makes it complex is that users’ privacy concerns can be influenced not only by well-known factors such as culture and age, but also by their context or situation when the information is requested.
This influence of context becomes noticeable in environments where the users’ context is expected to change.
In the following section, we give a brief overview of the best-known privacy principles.

PRIVACY PRINCIPLES AND REQUIREMENTS

Privacy architectures try to meet the fair information practices (FIP) principles developed since the 1970s. The most well-known principles were set in 1980 by the Organization for Economic Cooperation and Development (OECD) in the form of the Guidelines on the Protection and Transborder Flows of Personal Data. We briefly present these principles (OECD, 2003):

• Collection limitation principle: This principle states that there should be limits to personal data collection, and that it should be obtained by lawful means and with the consent of the user.
• Data quality principle: Data collection should be relevant to the purposes for which it was collected.
• Purpose specification principle: Purposes should be specified before the collection of the data and not after.
• Use limitation principle: Personal data should not be made available or otherwise used for purposes other than those specified, except by the consent of the user or by the authority of law.
• Security safeguards principle: Personal data should be protected by reasonable security safeguards against such risks as loss or unauthorized access, destruction, use, modification, or disclosure of data.
• Openness principle: There should be a general policy of openness about developments, practices, and policies with respect to personal data.
• Individual participation principle: An individual should have the right to control his or her data after it has been collected, by erasing, completing, or amending it, and should be able to communicate with the data collector about the type of data being collected.
• Accountability principle: The data collector should be accountable for complying with measures that give effect to the principles stated above.

Most privacy laws and self-regulatory frameworks basically follow the above-mentioned principles. In order to meet the first two principles, users should be notified of what information is being collected, which parties are using the information, for which purpose, and how long it will be used (Ackerman, Darrell, & Weitzner, 2001; Casal, 2001). This notification is mainly done through defining what are called data practices. These practices are usually expressed in privacy policies. A privacy policy consists of a number of statements that represent how an information collector is going to deal with the collected information. Most Web sites currently notify users using privacy policies. However, most of these policies are so long that users do not read or understand them completely (Cranor, Guduru, & Arjula, 2006). This leads to the fact that this requirement is not always met, and thus users can lose one of their rights in having control of their private information. Secondly, users should be able to select among different options. The mostly adopted approach, however, “take it or leave it,” should no longer be applied.
Service providers are asked to give users a number of alternatives to choose from that provide them a flexible way of controlling the way their information is being used. After notifying users and allowing them to select among different choices, it is required that the user explicitly declares his or her acceptance of this type of usage. Most Web sites ask for the user’s consent, once and for all, after the user reads the privacy policies (if he or she does): a user will have to accept or object as in the previously mentioned “take it or leave it” option, and if he or she accepts the policy, then he or she is not allowed to change consent even if his or her preferences change, unless the user stops using the service offered by that Web site.

A PRIVACY CONTROL FUNCTIONAL ARCHITECTURE

With respect to the above-mentioned requirements, we argue that the following functionalities, as shown in Figure 1 in the form of a high-level domain architecture, must be taken into consideration to automate privacy management. These functionalities can be implemented using middleware technology acting as a trusted third party.

System Architecture

When designing a solution for effective privacy management, specifying a proper architecture to provide a basis for implementation is of crucial importance. The standard ANSI/IEEE 1471-2000, which gives recommended practices for describing the architecture of software-intensive systems, defines architecture as the fundamental organization of a system embodied in its components, their relationships, and the environment, and the principles governing its design and evolution (ANSI, 2000). Similarly, in the Rational Unified

Figure 1. High-level domain perspective architecture
Process (Kruchten, 2003), an architecture is the set of significant decisions about the organization of a software system, the selection
of the structural elements, and their interfaces by which the system is composed, together with their behavior as specified in the collaborations among those elements, the composition of these structural and behavioral elements into progressively larger subsystems, and the architectural style that guides this organization, these elements and their interfaces, their collaborations, and their composition (Kruchten, 2003). Therefore, for the purpose of designing an architecture, the main components of the solution should be identified and specified. The Rational Unified Process defines a component as a non-trivial, nearly independent, and replaceable part of a system solution that fulfils a clear function in the context of a well-defined architecture (Kruchten, 2003). A component interface is an essential element of any component since it is often the only way the consumer of the component knows the function of the component. In UML, an interface is defined as a named collection of operations that are used to specify a service of the component. The interface provides an explicit separation between the outside and the inside of the component, answering the question what (What useful services are provided by the particular component to the context of its existence?), but not the question how (How are these services actually realized?). A precisely defined component interface allows the component services to be used without knowing how they are actually realized. A component interior is hidden and not important for its environment as long as the component provides services and follows constraints defined by its contractual interface.
Two kinds of interfaces can be distinguished: provided and required interfaces (OMG-UML2, 2004). A component can provide and require interfaces to and from its environment, which can include other components. A provided interface represents the services and operations that the component provides to its environment according to its role in it. A required interface specifies the services and operations that the component requires from its environment to perform its role, that is, to provide its services. The main elements of an interface as defined in Heineman and Council (2001) are listed below; a minimal code sketch follows the list:

• Names of semantically related operations,
• Their parameters, and
• Valid parameter types.
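To make this concrete, the following is a minimal sketch of how such a contractual interface could be written down in code. It is our illustration only, not part of the architecture specification; the component name, operation, parameters and types are all assumptions.

from typing import Protocol

class PrivacyManagerInterface(Protocol):
    """Illustrative provided interface: named operations, parameters, types."""

    def consent_check(self, user_id: str, practices: dict) -> bool:
        """Return a consent decision for the given data practices."""
        ...

# Any class implementing consent_check with this signature satisfies the
# interface without inheriting from it: the component interior stays hidden,
# and only the contract is visible to the environment.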

In the sequel, in defining and creating the different architecture component functionalities and their description and specification, we followed the UML 2.0 standard specification as a modeling method. This version of the UML significantly improves the ability to represent components not only as implementation-level artifacts, but also as design-level concepts that, together with classes, represent the main mechanisms used to build a logical system architecture (OMG-UML2, 2004).

Privacy Control Functional Architecture and Components

For the purpose of creating an effective privacy control, a user profile can be defined to consist of user personal information, contextual information, and user preferences. It is expected that every information collector can create local user profiles. In order to ensure the consistency and correctness of the information between the different profiles, an information collector will try to approach the user to update his or her profile on a regular basis, and hence privacy must be rechecked. In order to assign these different functionalities to loosely coupled and highly cohesive functional units (i.e., components), we define five main components of the privacy control solution: user agent, service provider (SP) agent, privacy manager, context manager, and privacy policy, as shown in Figure
2. In what follows, the components will be specified in more detail, and the interfaces between them will be defined.

User Agent (Client/Server)

A user agent is needed to handle incoming requests for user information. When it receives a request from an information collector, it should fetch the information collector’s privacy policies. The user agent should send these policies to the privacy manager to be processed and to generate a consent decision. After getting a consent decision from the privacy manager, the user agent should contact the context manager (see Figure 2), asking for the required data to be released and then propagated back to the service provider agent. Most user agents are implemented locally on the user machine, such as any Internet browser where most of the information requests are made when a user navigates through the Internet. But for a context-aware or mobile environment, most of the requests will be made when the user gets in the range of another registered network using the wireless connectivity in his or her device. Thus, a user agent will be realized as a client/server software program or a service which is partly on the user’s device and/or the network, and the user will communicate with his or her user agent through a customized user interface on his or her device.

Figure 2. Relationships and functionalities model

In this context, the user agent should define the following interfaces with other components in the architecture (see Figure 2); a minimal sketch of these operations follows the list:

• Get_practice: This is an operation provided on the interface of the SP agent component and is a required interface operation for the user agent component since, by using this operation, the user agent component communicates with the SP agent component to fetch the asked practices of a service provider.
• Consent_check: This is an operation provided on the interface of the privacy manager component and is a required interface operation for the user agent component. By using this operation, the user agent component communicates with the privacy manager component to get the required consent type. This operation further uses the Get_practice operation.
• Context_access: This is an operation provided on the interface of the context manager component and is a required interface operation for the user agent component. By using this operation, the user agent component communicates with the context manager component to define the type of context allowed to be accessed.
• Send_context: This is an operation provided on the interface of the user agent component and is a required interface operation for the SP agent component. Using this operation, the SP agent sends back to the user agent component a response about the type of context to be sent.
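As an illustration only, the four operations above can be tied together in a few lines of code. This is a minimal sketch under our own naming assumptions (the collaborating component objects and their methods are hypothetical), not the chapter's implementation.

class UserAgent:
    """Sketch of the user agent's control flow over its required interfaces."""

    def __init__(self, sp_agent, privacy_manager, context_manager):
        self.sp_agent = sp_agent
        self.privacy_manager = privacy_manager
        self.context_manager = context_manager

    def handle_request(self, collector_id, user_id, context_type):
        # Get_practice: fetch the collector's stated data practices.
        practices = self.sp_agent.get_practice(collector_id)
        # Consent_check: ask the privacy manager for a consent decision.
        if not self.privacy_manager.consent_check(user_id, practices):
            return None
        # Context_access: determine the type of context allowed to be accessed.
        allowed = self.context_manager.context_access(user_id, context_type)
        # Send_context: propagate the released context back to the SP agent.
        return self.sp_agent.send_context(user_id, allowed)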

Information Collector Agent (SP_Agent)

Each information collector by nature should have a collector agent, SP_agent (a representative), that handles all the outgoing requests for information. This functionality can be implemented as a service. It should be responsible for propagating user data to the specified components within the SP_agent organization. This functional component contacts user agent components with all the information required, such as a privacy policy that organizes how it deals with users’ private information. A single instance of the SP_agent component can be engaged with many instances of the user agent components. The SP_agent component has the following interfaces with the different components of the architecture:

• Context_request: This is an operation provided on the interface of the user agent and is a required interface operation for the SP agent components. By using this operation, the SP agent component communicates with the user agent component with respect to the type of information to be requested.
• Practices: This is an operation provided on the interface of the information collector privacy policy components. By using this operation, the SP agent component fetches the asked data practices from the information collector privacy policy component.

Context Manager

The context manager should be responsible for all aspects related to managing users’ contextual information, such as sensing context, controlling access privileges to users’ contextual information, and maintaining databases. An example of a context manager can be a distributed database management system where different types of data objects are related to the user and are stored in different databases located at different places, such as network operators or user-side devices. Usually, this component is implemented at the service provider’s side, and users are asked only at the beginning of signing a contract to give their consent, an issue that makes it difficult to enforce any user preferences. With the increasing awareness of privacy among users, and the greater anxiety expected when using such systems, many third parties, such as Microsoft Passport and Liberty Alliance, are acting as intermediaries in the form of a middleware or a trusted third party between users and service providers in order to guarantee users’ privacy. Nevertheless, in addition to having to be trusted by users, these intermediaries do not provide control capabilities to users; rather, users are forced to accept the providers’ ways of dealing with their privacy (the “take it or leave it” option).
The context manager component implements the interfaces context_access and send_context with the different components of the user agent. Additionally, it implements the operation query on the interfaces of the data object component, as sketched below.

Privacy Manager

A privacy manager component is needed to make consent decisions on behalf of users. We assume that user consent in context-aware services architectures will be dynamic and therefore should be requested before any collection of contextual information. Increasingly, this request for mobile users' consent should be carried out in an autonomous, flexible, and user-friendly way. It is expected that when minimizing user interactions while at the same time taking user requirements into consideration, uncertainty will take over. In other words, uncertainty represents an obstacle to a good consent decision. Uncertainty is caused by the unknown interaction taking place between the (undefined) different factors that influence users' willingness to share their private information, and by the lack of knowledge about the behavior of the users themselves when trying to predict their consent decisions. Thus, intelligent techniques are needed to deal with this uncertainty limitation. To deal with this uncertainty in making consent decisions, we can adopt the mechanism developed by Ali Eldin, van den Berg, and Wagenaar (2004), which is based on fuzzy logic. The main objective of their work is to provide a consent decision-making mechanism that minimizes users' interactions and meets their privacy concerns at the same time. This consent decision-making mechanism can be implemented in an operation called consent_decider, as shown by Figure 2.
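The fuzzy reasoning itself is specified in Ali Eldin et al. (2004) and is not reproduced here. The following Python sketch only illustrates the general shape of such a consent_decider operation; the factor names, weights, and thresholds are invented for the example.

# Toy illustration of an uncertainty-aware consent decision; the actual
# fuzzy mechanism is defined in Ali Eldin, van den Berg, and Wagenaar (2004).

def consent_decider(factors):
    """Map uncertain influence factors (each in 0..1) to a consent decision."""
    # Weighted blend of factors influencing willingness to share.
    weights = {"requester_trust": 0.5, "data_sensitivity": -0.4,
               "situation_urgency": 0.3}
    score = 0.5 + sum(weights[k] * (v - 0.5) for k, v in factors.items())
    if score >= 0.65:
        return "grant"
    if score <= 0.35:
        return "deny"
    return "ask_user"          # too uncertain: fall back to user interaction

print(consent_decider({"requester_trust": 0.9,
                       "data_sensitivity": 0.2,
                       "situation_urgency": 0.6}))   # -> 'grant'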
Another obstacle that challenges this consent decision making is the effective description of user privacy preferences, one that matches users' privacy requirements and can easily be evaluated in a machine-readable way to support the automatic consent decision. This description should be able to model the dynamic features of context-aware environments. The Platform for Privacy Preferences (Cranor et al., 2004), introduced earlier, has defined a number of data practices that together constitute a P3P privacy policy. A P3P privacy policy is a step towards automating and simplifying users' assessments of an information collector through the use of user agents and APPEL. APPEL, a P3P preference exchange language (Cranor et al., 2002), was proposed as the language for expressing user preferences. APPEL is a machine-readable specification of user preferences that can be programmatically compared against a privacy policy. This comparison mechanism can be modeled in the form of an operation called comp_pref, as shown by Figure 2.
The final step is to combine both outputs of consent_decider and comp_pref into one Boolean consent output. The evaluator operation takes responsibility for this process. The evaluator rules must be based on the assumption that consent decisions should be given as carefully as possible, because people tend to be rather conservative when it comes to their privacy (Ali Eldin, 2006). For example, if both outputs are equal, then the final consent output should be the same. But when the two outputs differ, we should go for the stricter output.
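A minimal sketch of these two steps, with invented data structures standing in for real APPEL and P3P documents, could look as follows; note how the evaluator resolves disagreement in favor of the stricter (negative) output.

# Hypothetical comp_pref-style comparison plus the evaluator rule above.

def comp_pref(preference, policy):
    """True iff every practice in the policy is allowed by the preference."""
    return all(policy.get(field) in allowed
               for field, allowed in preference.items())

def evaluator(consent_ok, pref_ok):
    """Combine consent_decider and comp_pref outputs conservatively:
    if the two Boolean outputs disagree, the stricter one (deny) wins."""
    return consent_ok and pref_ok

policy = {"purpose": "contact", "retention": "session"}
pref = {"purpose": {"contact"}, "retention": {"session", "none"}}
print(evaluator(consent_ok=True, pref_ok=comp_pref(pref, policy)))  # True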

Experimental Work

The proposed functional architecture has been prototyped and integrated in the development of one of the location-based mobile services on a UMTS testbed offered by MIES (Kar, 2004), and experimented with by real users. In the following we present the different features of the implemented service.
The aim was to have a 'privacy-aware finding people service' designed in such a way that it enables users to specify their allowed information practices and privacy attributes in order to evaluate consent decisions on any incoming request for users' data. The Finding People service (see Figures 3 and 4) was developed as one of the services offered by MIES (Kar, 2004). MIES offers GIS location-based services and tourist information to university campus visitors; it also enables users to locate and contact each other. We have implemented the proposed functional architecture for this service to allow users to control the collection of their private information by other users of MIES.
A service provider agent was implemented in the form of a "search people" script coded in ASP.NET, which is navigated to from the MIES menu on the user's iPAQ. In this experiment, information collectors were other users of the service who are looking for others. The information collector (requesting user) must specify the purpose for which he or she is requesting the information, known as the "purpose of search." Additionally, the requesting user specifies search criteria based on which the user agent is going to find him or her the requested user. The user agent, in the form of an ASP script, manages requesting users' requests and propagates them to the context manager, which is considered here to be the MS Access database. User privacy preferences are stored in the user databases and are taken from Web forms. The privacy manager implemented was the consent decider mechanism presented in Ali Eldin et al. (2004).

Figure 3. Finding People menus

Figure 4. Finding People service architecture

Evaluation Tests

The participants were asked whether they agreed or disagreed with the statements listed in the questionnaire (see Appendix A). The scores could range from "1: highly agree" to "7: highly disagree." From these scores, an average score per participant for each statement was derived, in addition to an overall average of users' responses. Users were asked about the overall performance of the system in terms of their satisfaction with privacy protection, the type of control modes they preferred, and whether they needed to add more preferences to the predefined ones. Most of the collected responses were positive; however, some users were neutral and very few reacted negatively.

Future Trends

Recently, we have noticed a growing research interest in context-aware environments, ad hoc and personal networks, and location-based mobile services. Attention has been given to designing and developing context-aware applications and services platforms that are not only theoretically possible but also operational. Among these calls for next-generation mobile services platforms, research and development on privacy has mainly concentrated on authentication, along with access control mechanisms, biometric solutions, cryptographic mechanisms, and so forth. Most of these privacy solutions were intended for interconnected organizations, where protecting privacy generally means avoiding intrusive access to private information which is stored somewhere,
where authorized access is known beforehand and is granted based on different criteria such as roles or activities within these organizations. This chapter, however, sheds light on a new approach to dealing with privacy, one based on user or customer involvement in the consent decision making through the processing of their preferences. Additionally, this chapter gives designers of context-aware services guidelines on how user privacy can be managed and maintained in a way that keeps users aware of their rights and concerns.
A next step will be to prototype the functional architecture in different mobile service scenarios, and to check whether this automatic privacy control can be completely accepted by users and in which situations user interactions will be necessary.
During the last few years, we have witnessed the further evolution of the Internet in the form of Web services (Kaye, 2003). Web services have been introduced as a promising way to integrate information systems effectively inside and across enterprises. They are defined as self-contained and self-describing business-driven functional units that can be plugged in or invoked across the Internet to provide flexible enterprise application integration within the Service-Oriented Architecture (SOA) (Kaye, 2003). Using advanced Web service technology such as XML, WSDL, SOAP, and UDDI, the Internet, once solely a repository of various kinds of information, is now evolving into a provider of a variety of business services and applications (Newcomer, 2002). In this manner, Web services technology and SOA are increasingly becoming a business issue, based on the new technology's ability to deliver strategic business value (Berry, 2003).
The basic elements of an SOA are a service provider, a service consumer, and a service broker, as shown in Figure 5. The service provider makes the service available and advertises it on the service broker by issuing the service contract. The service consumer finds the service that matches its needs in a service repository of the service broker using the published service contract. The service consumer and the service provider then interact in terms of providing/using the service. It is important to note that the communication between service provider, service consumer, and service broker is performed using the same set of interoperable, technology-independent standards, such as XML, SOAP, and WSDL.

Figure 5. Basic elements of an SOA

The service in the form of a Web service represents a contractual agreement between provider and consumer.
Besides the common interface that defines operation signatures, the service also has attributes of its own, such as service-level agreements, policies, dependencies, security rules, privacy constraints, and so forth.
All these properties of Web services become increasingly important when services are chained in a cascade or organized and orchestrated in a more complex way to provide a higher-level business value. In this case, we potentially have more participants in the collaboration to provide a more valuable business service; for example, an Internet-based traveling agency might use the services of a hotel reservation system, a route information system, a rental car system, and a city tourist guide system to provide a complete service to its customers. Such situations put more emphasis on the privacy of the end user, which is spread over multiple service providers and still needs to be effectively maintained.
Therefore, one of the main challenges and further directions of our work is placing our framework of privacy control and maintenance into a fully service-oriented environment.
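To make the provider/broker/consumer interplay concrete, the following Python sketch mimics the three SOA roles. In a real SOA the contract would be a WSDL document and the invocation would go over SOAP; all names here are illustrative.

# Minimal sketch of the SOA roles described above (names invented).

class ServiceBroker:
    def __init__(self):
        self.registry = {}                      # contract name -> provider

    def publish(self, contract, provider):      # the provider advertises itself
        self.registry[contract] = provider

    def find(self, contract):                   # the consumer looks a service up
        return self.registry[contract]

class HotelReservationService:
    def book(self, guest):                      # the advertised operation
        return f"room booked for {guest}"

broker = ServiceBroker()
broker.publish("hotel-reservation", HotelReservationService())

# The consumer (e.g., the traveling agency composing a larger service)
# discovers the provider via the broker and then invokes it directly.
service = broker.find("hotel-reservation")
print(service.book("alice"))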

Conclusion

In this chapter, we discussed the different requirements on privacy that are triggered by the nature of the context-aware environment. As a proposition, we presented a privacy functional architecture that maintains these requirements. The main objective of these functionalities is to help automate the process of getting users' explicit consent. We argue that a consent decision-making mechanism should be able to make recommendations to users about their consent decisions dynamically and automatically. Flexibility means that the recommendation allows users to change their preferences in response to changing circumstances. Users will have different functionalities for dealing with their privacy. User friendliness here refers to the minimization of the requirements for users to interact, and of their involvement.
We have prototyped the proposed
architecture functionalities in a real-time UMTS personal services platform and experimented with visitors to a university campus. The prototype showed the technical feasibility of the approach in mobile services. Users' reactions showed satisfaction regarding the meeting of their privacy needs, together with a call for a combination of manual and automatic control capabilities according to their context. At the end, we presented future trends in the field and marked privacy control in the service-oriented environment as one of our main future research directions.

Acknowledgment

The conceptual development and the experimental work presented here were conducted during the authors' employment at Delft University of Technology, The Netherlands.

References

Ackerman, M., Darrell, T., & Weitzner, D.J. (2001). Privacy in context. HCI, 16(2), 167–179.

Agrawal, R., & Kiernan, J. (2002). Watermarking relational databases. Proceedings of the 28th VLDB Conference, Hong Kong, China.

Ali Eldin, A. (2006). Private information sharing under uncertainty. Delft: Amr Ali Eldin.

Ali Eldin, A., van den Berg, J., & Wagenaar, R. (2004). A fuzzy reasoning scheme for context sharing decision making. Proceedings of the 6th International Conference on Electronic Commerce (pp. 371–375), Delft, The Netherlands. Retrieved from http://doi.acm.org/10.1145/1052220.1052267

ANSI. (2000). ANSI/IEEE 1471-2000 recommended practice for architectural description of software-intensive systems. Retrieved from http://shop.ieee.org/store/

Camenisch, J., & Herreweghen, E.V. (2002). Design and implementation of the Idemix anonymous credential system. Zurich: IBM Zurich Research Laboratory.

Casal, C.R. (2001). Privacy protection for location based mobile services in Europe. Proceedings of the 5th World Multi-Conference on Systems, Cybernetics, and Informatics (SCI2001), Orlando, FL.

Chaum, D. (1985). Security without identification: Transaction systems to make big brother obsolete. Communications of the ACM, 28(10), 1034–1044.

Clifton, C., Kantarcioglu, M., Vaidya, J., Lin, X., & Zhu, M.Y. (2002). Tools for privacy preserving distributed data mining. ACM SIGKDD Explorations, 4(2), 28–34.

Cranor, L., Langheinrich, M., & Marchiori, M. (2002). A P3P preference exchange language 1.0 (APPEL1.0). Working draft, W3C.

Cranor, L., Langheinrich, M., Marchiori, M., Presler-Marshall, M., & Reagle, J. (2004). The Platform for Privacy Preferences 1.1 (P3P1.1) specification. Working draft, W3C.

Cranor, L.F., Guduru, P., & Arjula, M. (2006). User interfaces for privacy agents. ACM Transactions on Human Computer Interaction.

European Directive. (2002). Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002, electronic communications sector (directive on privacy and electronic communications). Official Journal of the European Communities, L 201, 37–47.

Hauser, C., & Kabatnik, M. (2001). Towards privacy support in a global location service. Proceedings of the IFIP Workshop on IP and ATM Traffic Management (WATM/EUNICE 2001), Paris.

Heineman, G.T., & Councill, W.T. (2001). Component-based software engineering: Putting the pieces together. Boston: Addison-Wesley Longman.

Kar, E.A.M. (2004). Designing mobile information services: An approach for organisations in a value network. Unpublished doctoral dissertation, Delft University of Technology, The Netherlands.

Kaye, D. (2003). Loosely coupled: The missing pieces of Web services (1st ed.). RDS Associates.

Kruchten, P. (2003). The Rational Unified Process: An introduction (3rd ed.). Boston: Addison-Wesley.

Langheinrich, M. (2001). Privacy by design: Principles of privacy-aware ubiquitous systems. Proceedings of the 3rd International Conference on Ubiquitous Computing (UbiComp2001).

Langheinrich, M. (2002). A privacy awareness system for ubiquitous computing environments. Proceedings of the 4th International Conference on Ubiquitous Computing (UbiComp2002).

Lysyanskaya, A., Rivest, R.L., Sahai, A., & Wolf, S. (1999). Pseudonym systems. Proceedings of the 6th Annual Workshop on Selected Areas in Cryptography (SAC'99).

Nilsson, M., Lindskog, H., & Fischer-Hübner, S. (2001). Privacy enhancements in the mobile Internet. Proceedings of the IFIP WG 9.6/11.7 Working Conference on Security and Control of IT in Society, Bratislava.

OECD. (2003). Privacy online. In OECD (Ed.), OECD guidance on policy and practice (p. 40). Paris: OECD.

OMG-UML2. (2004). Unified Modeling Language version 2.0. Retrieved from http://www.uml.org

Rodden, T., Friday, A., Muller, H., & Dix, A. (2002). A lightweight approach to managing privacy in location-based services (No. Equator-02-058). University of Nottingham and Lancaster, UK.

Stojanovic, Z. (2005). A method for component-based and service-oriented software systems engineering. Unpublished PhD thesis, Delft University of Technology, The Netherlands.
Appendix A: Questionnaire

Participants rated each statement on a seven-point scale from 1 ("highly agree") to 7 ("highly disagree"), with intermediate labels including "almost agree," "neutral," "almost disagree," and "disagree."

Statements:
• I could easily define my privacy settings.
• People could contact me, though I did not ask for that.
• My privacy [manager] should always automatically handle requests for my info.
• I should always be asked before letting others contact me.
• I need to add new privacy preferences.
• I was able to self-control my privacy.
• My privacy was guaranteed.

This work was previously published in Personalized Information Retrieval and Access: Concepts, Methods, and Practices, edited by R. González, N. Chen, and A. Dahanayake, pp. 151-166, copyright 2008 by Information Science Reference (an imprint of IGI Global).

1481

Chapter 7.10
Access Control in Mobile and Ubiquitous Environments
Laurent Gomez
SAP Research, France

Annett Laube
SAP Research, France

Alessandro Sorniotti
SAP Research, France

Abstract

Access control is the process of granting permissions in accordance with an authorization policy. Mobile and ubiquitous environments challenge classical access control solutions like role-based access control. The use of context information during policy definition and access control enforcement offers the adaptability and flexibility needed for these environments. When it comes to low-power devices, such as wireless sensor networks, access control enforcement is normally too heavy for such resource-constrained devices. Lightweight cryptography allows encrypting the data right from its production, so that access to it is intrinsically restricted. In addition, all access control mechanisms require an authenticated user. Traditionally, user authentication is performed by means of a combination of authentication factors, statically specified in the access control policy of the authorization service. Within ubiquitous and mobile environments, there is a clear need for flexible user authentication using the available authentication factors. In this chapter, different new techniques to ensure access control are discussed and compared to the state-of-the-art.

Introduction

Ubiquitous computing is the computing paradigm that refers to scenarios in which computing is omnipresent, and particularly in which devices that are traditionally perceived as dumb are endowed with computing capability (Stajano, 2002). The use of context information represents a significant benefit for applications in the highly dynamic

Copyright © 2010, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.

environments addressed by ubiquitous computing.


The deployment of collaborative mobile applications in ubiquitous environments is accompanied by an increasing demand on security. In addition to technical challenges, ubiquitous environments raise security issues such as access control for resources shared between mobile applications. Access control represents a real challenge due to the highly dynamic nature of communications, where formerly unknown partners communicate in an ad-hoc way.
Access control is a standard security technique to control the access to resources in a system. It consists of a set of mechanisms and processes that allow the definition of access control rules (the authorization policy) and the enforcement of these rules (Samarati, 2001). Access control is the process of granting permissions in accordance with an authorization policy. An authorization policy states "who can do what to what": the "who" is a subject, the first "what" is an action, and the other "what" is a resource. In a context-aware authorization policy, the context is taken into account as an additional constraint, and the statement can be extended as follows: "who can do what to what under which circumstances." The circumstances correspond to the context of the application.
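A minimal sketch of such an extended rule, assuming rules are modeled as (subject role, action, resource) triples plus a context predicate (all names invented), could look as follows:

# "who can do what to what under which circumstances" as data + predicate.

def is_permitted(rules, role, action, resource, context):
    """Grant if some rule matches subject, action, and resource, and its
    context predicate holds under the current circumstances."""
    return any(r_role == role and r_action == action and r_res == resource
               and holds(context)
               for (r_role, r_action, r_res, holds) in rules)

rules = [
    # A physician may read a medical record when close to the patient
    # and an emergency has been detected (hypothetical thresholds).
    ("physician", "read", "medical-record",
     lambda ctx: ctx["distance_m"] < 50 and ctx["emergency"]),
]

print(is_permitted(rules, "physician", "read", "medical-record",
                   {"distance_m": 12, "emergency": True}))   # True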
The availability of context information allows reconfiguration and enhancement of system and application security, depending on the changing context. Context-aware security is defined as a dynamic adaptation of security policies according to the context. For instance, context information can be used to automatically reconfigure security mechanisms in order to provide a predefined level of security and, at the same time, to optimize the use of resources. As a concrete example, email messages sent by mobile workers using a public WLAN hotspot as an access point for their PDA can be automatically encrypted, whereas the same messages could be sent in plain text when the workers connect to a secured access point in the office.
Scenarios

In the following section, the challenges of access control in ubiquitous and mobile environments are highlighted in two different scenarios.

Scenario 1: Remote Healthcare Monitoring

The use of context-aware security techniques is illustrated in the following e-health scenario: an application constantly monitors the health and well-being of elderly people at home. The elderly wear body sensors, which register several measurements related to their physical condition, like heart rate, oximetry (SpO2), blood glucose level, and body temperature. The homes of the aged are equipped with ambient sensors, delivering additional information about the activities of the monitored subject and about the environment. All measurements are forwarded to a backend application and stored permanently there as part of the personal medical records. Since these medical records contain sensitive data, access to the data has to be controlled.
In the e-health example, medical records can be accessed by people in different roles, such as general practitioners, gerontologists (specialists for diseases and problems specific to old people), and nurses. But often the role concept alone is not sufficient to control access to the medical data. Additional criteria, like the relationship between patient and doctor, or context information such as health status or location, have to be considered. Normally, only family doctors can access the entire personal medical record of a patient. But in an emergency situation, any physician who is close to the patient can get access to the data. An emergency situation can be described as a complex type of context information derived from the body sensor readings. The proximity of two individuals also represents complex context information, calculated from the positions gained, for example, by GPS sensors.
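As an illustration, the proximity context mentioned above could be derived from two raw GPS readings along the following lines (haversine great-circle distance; the 50-meter threshold is invented):

# Deriving the complex "proximity" context from raw GPS positions.

from math import asin, cos, radians, sin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))          # Earth radius in meters

def in_proximity(pos_a, pos_b, threshold_m=50):
    """Complex context information computed from two raw positions."""
    return distance_m(*pos_a, *pos_b) <= threshold_m

print(in_proximity((52.0, 4.35), (52.0002, 4.3503)))   # True: ~30 m apart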

The personal medical record of a person contains in general more than the recorded sensor readings from body and ambient sensors. Medical data may include the clinical history, medication history, hospital stays, activity records, and personal data. Looking at the variety of roles defined in the e-health scenario, it is not always necessary to disclose the entire medical record: a nurse need not know all the information that a doctor in turn needs to access to perform a task. In most situations, it is sufficient to grant access to the smallest subset of information needed.
In addition, a doctor should have the right to book a room for a patient in a hospital or to plan a surgery only when he is physically in the hospital or in his office, and not on vacation. An emergency team member can get access to all medical information about a victim only if he is close to the patient and the patient is unconscious, whereas access to private medical information is normally restricted to the assigned doctors (general practitioner, specialists, etc.).
An additional concept is represented by context-aware delegation of rights. For example, when a manager is out of office, he can delegate some of his rights to his secretary until he is back. Depending on the urgency of certain tasks, the context-aware system can decide whether delegation is allowed or not.

Scenario 2: E-Insurance

The use of context-aware security techniques is not restricted to the e-health business domain. Car accident management is an e-insurance scenario from the automotive business domain. In this scenario, a car is uniquely identified with a car ID. This information allows authorized third parties (e.g. police, insurance) to authenticate a car and to map it to information such as the car owner's driving license or the vehicle registration. In addition to the car ID, speed information, fuel consumption, and brake usage information can be distributed.
Nevertheless, the broadcast of such information raises a major security issue: information confidentiality. In a car accident, for example, all involved cars send information such as GPS position and insurance contract ID to their insurance company. This information is used to automatically fill in the accident report and supports the insurance companies in finding an agreement. In the case of car theft, the car ID and GPS location of the car are sent to the police in order to track the car.
In the same way as in the e-health scenario, access control to the data exchanged within this ubiquitous environment is required. In the car accident scenario, one of the security threats is the interception of the GPS location by fake garage owners: pretending to be certified garage owners, they could steal the damaged car.

Context-Aware Access Control

State-of-the-Art

A classical approach to tackle the challenge of protecting data access is role-based access control (RBAC), introduced in (Sandhu, 1996). RBAC associates to each user one or more roles. Permissions on objects (resources) are defined for each role. However powerful, simple RBAC reaches its limits when the access control policy becomes more complex, and additional information has to be integrated in the policy enforcement process. Organization-based access control (OrBAC) (Kalam et al., 2003) is a solution for this kind of access control policies. As the name suggests, OrBAC uses contextual rules related to specific organizational structures. Other extensions of the RBAC model, like generalized RBAC (GRBAC) or dynamic role-based access control (DRBAC), also consider the use of context information to extend the standard RBAC model.
Table 1 provides a set of approaches for access control.
Table 1. Access control families

• Mandatory access control: Subject permissions on a target are defined and enforced by the operating system itself. Example: Biba (Biba, 1975).
• Discretionary access control: Each owner of a resource defines the triple (Subject, Object, Permission); a matrix therefore maps Objects to Subjects' permissions. Example: BLP (Bell, 1973).
• Role-based access control: A set of rules is defined as (Subject, Resource, Action), where a Subject can perform an Action on a Resource. Example: XACML (OASIS, 2007).
• Task-based access control: A task is considered a logical unit of work in an application and may consist of subtasks. At each task within a workflow process, the security model dynamically evaluates and checks the user's authorization to perform this task. Example: TBAC (Thomas, 2004).
• Context-aware access control: An extension of the RBAC model which includes contextual information (e.g. the Subject's GPS location) in addition to the Subject's role. Examples: OrBAC (El Kalam, 2003), GRBAC (Covington, 2000), proximity-based access control (Gupta, 2006), encounter-based access control (Thomas, 2004), DRBAC (Zhang, 2004).

Extended with contextual information, several architectures have been developed for context-aware access control.
Proximity-based access control defines a set of security rules based on the proximity of entities or groups (Gupta, 2006). Temporal-RBAC (Bertino, 2001) supports periodic role enabling and disabling, and temporal dependencies among permissions, by introducing time into the access control infrastructure. Encounter-based access control (Thomas, 2004) is used to define special policies related to the occurrence of defined situations or events. Covington et al. (Covington, 2002) propose a uniform access control framework for environmental roles, named generalized RBAC (GRBAC), which is an extension of the role-based access control model. In an administrative domain, a role can be, for example, an employee or a manager; a role determines the user's position or ability in an administrative domain. An environmental role is a role that captures environmental conditions. Unlike the RBAC model, which is only subject-oriented, GRBAC allows the definition of access control policies based on subject, object, or environment. Dynamic role-based access control (DRBAC) (Zhang, 2004) extends traditional RBAC to use dynamic context information during the decision process. DRBAC addresses two key requirements: (1) a user's access privileges must change when the user's context changes; and (2) a resource must adjust its access permissions when its system information changes. (Roman, 2002) defines a generic context-based software architecture for physical spaces, so-called Gaia. A physical space is a geographic region with limited and well-defined boundaries, containing physical objects, heterogeneous networked devices, and users performing a range of activities. Derived from the physical space concept, the Active Space system provides the user with a computing representation of the physical space and helps the user to interact with it. Cerberus is a framework for context-aware identification, authentication, access control, and reasoning about context, based on the Kerberos (Neuman, 1994) authentication and on Gaia. Cerberus focuses on user identification via the user's context information, such as fingerprint, voice, and face recognition. The context-aware authorization architecture proposed in (Wullems, 2004) is based on Kerberos authentication and makes it possible to activate or deactivate roles assigned to a user depending on the context. In (Hu, 2004), the authors propose a dynamic, context-aware access control especially suited for distributed healthcare applications; permissions are associated with context-related constraints that are dynamically evaluated.

Challenges

Context-aware access control can be seen from two different perspectives:

• Adaptation of security policies based on the context: As defined in (Dey, 1999), context is any kind of information which can be used to characterize the state of an entity. An entity might be any kind of asset of a system, such as a user, software, hardware, media storage, or data. Context-aware access control can then be seen as an extension of common access control that takes context information into account to perform the decision.
• Secure acquisition of context: In order to be used, context information must be acquired in the system; this additional acquisition potentially opens up new threats and creates new challenges.

The first perspective refers to the use of context information within the definition and enforcement of security policies. A system is considered context-aware if it uses context information before or during service provisioning. The smart floor infrastructure is a good illustration of a context-aware system (Orr, 2000). The smart floor is a device equipped with force-measuring sensors, so that it can detect users walking on it. The smart floor is connected to a backend application which maps users' identities to their walking patterns. The backend system in charge of user authentication is the context-aware system; the context information is the pressure measured by the smart floor; and the latter acts as the context information provider.
For the second, in order to enforce context-aware security policies, context information has to be securely integrated in the system, since clearly any forgery or modification of contextual information could compromise the enforcement of security policies.

Solutions

A context-aware authorization service must enforce authorization policies featuring rules based on contextual information. Raw contextual data, such as location or heart rate, is gathered from sensors and further processed in order to derive complex information such as proximity or health status. The authorization process thus relies on actual circumstances in addition to the common role-based access control model.
Figure 1 outlines an approach for context-aware access control presented in (Laube, 2007). This architecture has been designed and implemented in the scope of the MOSQUITO project (MOSQUITO, 2006). It provides a security framework for mobile applications based on web services.
All SOAP messages between the application (web service client) and the web service itself have to pass intermediaries, to enforce the configured security policies on the message (SOAP) level. Intermediaries are a pipeline of message filters (proxies) which support WS-Security (OASIS, 2006). The client-side intermediary adds encryption, integrity checks, and credentials. The server-side intermediary decrypts, verifies the integrity, and checks the credentials' validity. It offers the same interface as the web service plugged behind it and is therefore completely transparent. The Security Token Service (STS) (OASIS, 2005) on the client device generates signed context information retrieved from the client's Context Information Providers (CIP). The CIP is in charge of collecting contextual information, such as the patient's pulse, which characterizes his health condition.
Figure 1. Context-aware access control

The server-side authorization filter extracts the credentials from the incoming SOAP request and verifies the signature. The credentials and the target are provided to the STS. The STS provides the credentials to the Authorization Service, which is in charge of authorizing or denying access based on the credentials, the SOAP message, and the defined authorization policies. The Policy Decision Point (PDP) then enforces the access control policy. If access is granted, the original request from the web service client is passed to the web service and processed there; otherwise, an exception is sent to the client application. The response of the web service is likewise passed through the security proxy with its message pipeline to take care of encryption/decryption and integrity.
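The following Python sketch caricatures the server-side flow just described. The real MOSQUITO intermediaries operate on SOAP messages with WS-Security; the filter, STS, and PDP interfaces below are simplified stand-ins.

# Simplified stand-in for the server-side authorization filter.

class AuthorizationFilter:
    def __init__(self, sts, pdp, web_service):
        self.sts, self.pdp, self.ws = sts, pdp, web_service

    def handle(self, message):
        credentials = message["credentials"]
        if not self.sts.verify_signature(credentials):
            raise PermissionError("invalid context token signature")
        # The PDP decides based on credentials, target, and policy.
        if not self.pdp.decide(credentials, message["target"]):
            raise PermissionError("access denied by authorization policy")
        return self.ws.process(message["body"])   # forward original request

class StubSTS:
    def verify_signature(self, creds): return creds.get("signed", False)

class StubPDP:
    def decide(self, creds, target): return creds.get("role") == "physician"

class StubService:
    def process(self, body): return f"result for {body}"

f = AuthorizationFilter(StubSTS(), StubPDP(), StubService())
print(f.handle({"credentials": {"signed": True, "role": "physician"},
                "target": "getPatientRecord", "body": "patient-42"}))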
The access control policies are defined in XACML, which supports RBAC authorization policies as well as context-aware access control (see an example in Figure 2). The policy enforcement relies on verifying attribute values distributed over four categories, related to the subject, the resource, the action, and the environment. To support the evaluation of context information, the existing implementation was extended by defining new primitive attribute types that offer a higher level of abstraction for data representation.
The architecture to enforce context-aware access control policies for web service based applications is highly applicable to mobile and ubiquitous environments. Following the service-oriented approach (SOA), based on the loose coupling of services provided by different parties, the enforcement of flexible access control policies is highly demanded. The approach shows how standard role-based access control can be made more flexible by using any kind of context information available in the system. Access control can now dynamically adapt to the current situation.
In a SOA-based system, web service operations are considered as resources for which permission is granted or denied. Web services, especially enterprise services which expose business objects to the outside world, can return complex results. Not in all cases, as depicted in the e-health scenario, is it desirable to grant access to the entire returned object. A solution is the use of resource hierarchies in combination with context-aware access control, as proposed in (Laube, 2007).
The granularity used during access control should not only be defined by the (web) services but should also be more abstract and related to the application using the service. Resource hierarchies allow the definition of the resource granularity outside of the service itself; the authorization policy can then contain access control rules for parts of the resources.
Figure 2. Authorization policy example

<Policy PolicyId="EmergencyPolicy">
<Target>...</Target>
<Rule RuleId="MixedLocalisationRule" Effect="Permit">
<Target>
<Resources>...</Resources>
<Actions>...</Actions>
</Target>
<Condition FunctionId="function:string-equal">
<Apply FunctionId="function:string-one-and-only">
<SubjectAttributeDesignator DataType=string AttributeId="role"/>
</Apply>
<AttributeValue DataType="string">physician</AttributeValue>
</Condition>
<Condition FunctionId="function:and">
<Apply FunctionId="coolFunction#CloseTo">
<Apply FunctionId="coolFunction#findLocation">
<SubjectAttributeDesignator DataType=cool#GPSLocation
AttributeId="SubjectLocation"/>
</Apply>
<Apply FunctionId="coolFunction#findLocation">
<SubjectAttributeDesignator DataType="cool#GPSLocation"
AttributeId="ObjectLocation"/>
</Apply>
<AttributeValue DataType="integer">50</AttributeValue>
</Apply>
<Apply FunctionId="coolFunction#IsEmergency">
<Apply FunctionId="coolFunction#findEmergency">
<SubjectAttributeDesignator DataType="cool#Emergency"
AttributeId="ObjectEmergency"/>
</Apply>
</Apply>
</Condition>
</Rule>
</Policy>

A resource hierarchy can be described as a directed acyclic graph over a finite set of nodes, built from a resource and all its direct descendants at any depth. The definition of the node set in the hierarchy and of the relations between the nodes is highly dependent on the application and the authorization policy.
An example is a business application that exposes a service to access an object BusinessPartner. This object maintains the information on partners of the company, such as its personnel or external companies like customers and suppliers. Several concrete classes are derived from the abstract class BusinessPartner; see the class diagram in Figure 3. Each derivation adds specific attributes that are only relevant for this type of object. For example, an employee has a salary grade, a private bank account, and a private and a business address. A customer has marketing data, a shipping address, and an invoice address.
The use of the object BusinessPartner as the interface for the application allows identical manipulation of the different classes in the hierarchy. The following authorization rules shall, for instance, be applied to the retrieve method of the Employee object:
Figure 3. BusinessPartner class diagram

• Only a member of HR can access all data of an employee.
• An employee has access to the business address of all employees and to his own private address and bank account. He has no write access to his salary.
• A manager has access to the data of his/her employees, except for sensitive personal data, like the private address or bank account.

RBAC roles would be powerful enough to differentiate between an HR accountant and a normal employee. However, RBAC either grants permission to a resource or denies it completely (the all-or-nothing paradigm). In order to implement the authorization rules described above, it would be necessary to implement a much more detailed and fine-grained service interface. In this case, there are two possibilities: the first is to implement an interface specialized for each role, which creates a very high dependency on the authorization policy. The second option is to have many smaller services that allow retrieving parts (sub-resources) of the BusinessPartner, like the name or bank account. This would have a big impact on performance: instead of only one service call to get all data, multiple calls become necessary.
With the use of resource hierarchies, no change of the service interface according to the needs of the authorization policy is necessary. The originally defined interface (BusinessPartner in our previous example) can be used and, at the same time, it is possible to enforce access control with a finer granularity. In addition, the concept facilitates the integration of all kinds of context information into the policy enforcement process and also into the process of defining the resource hierarchy. The result is the highly fine-grained, dynamically adaptable authorization policies needed for mobile applications in ubiquitous environments.
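A sketch of how such rules could be enforced through a resource hierarchy, with the service keeping a single interface while a filter prunes the returned Employee sub-resources per role (the field names and role views below are invented), could look as follows:

# Role-dependent filtering of Employee sub-resources (illustrative only).

EMPLOYEE_VIEW = {
    "hr":       {"name", "business_address", "private_address",
                 "bank_account", "salary_grade"},
    "employee": {"name", "business_address"},       # plus own private data
    "manager":  {"name", "business_address", "salary_grade"},
}

def retrieve_employee(record, caller_role, caller_id=None):
    allowed = set(EMPLOYEE_VIEW[caller_role])
    if caller_role == "employee" and caller_id == record["id"]:
        allowed |= {"private_address", "bank_account"}   # own data readable
    return {k: v for k, v in record.items() if k in allowed or k == "id"}

emp = {"id": 7, "name": "Bob", "business_address": "Bldg 1",
       "private_address": "Elm St 3", "bank_account": "...",
       "salary_grade": "S2"}
print(retrieve_employee(emp, "manager"))   # no private address/bank account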
Access Control to Wireless Sensor Data

Challenges

Under a standard access control scenario, entities that wish to benefit from the produced information have to authenticate themselves, receive a credential, produce the credential to the data source, and receive a specialized stream of information that contains just the information the requester received an authorization for. Many solutions exist for this problem; however, most of them are unsuitable for low-power ubiquitous environments such as wireless sensor networks, given the technological constraints of the nodes. Such devices, with limited memory, CPU, and battery power, are rarely capable of evaluating complex access control policies.

Solution

Sensor nodes produce, on a broadcast medium, highly diverse data, which is often very sensitive. Sensor listeners may be numerous and diverse, and have different access rights to sensor data. The problem of multiple resources/multiple accesses is usually solved using the standard access control approach described above, which is unsuitable for WSN scenarios; in addition, nodes produce data in real time, hence the generation of multiple per-consumer streams is difficult. In (Sorniotti, 2008), the authors present a possible solution to the problem of access control to data produced by wireless sensor networks, relying on cryptography: right from its production, data can be encrypted, and therefore its access is intrinsically restricted. This way, sensors can encrypt data and publish it regardless of the present consumer(s). The related access control policies are enforced in a centralized authorization module. If a user or application provides sufficient credentials to get access to a certain authorization class, he gets the associated key to decrypt the data: knowledge of the cryptographic key used to encrypt data belonging to a given level in the defined hierarchy allows proper decryption, and therefore access, to data belonging to that level. Conversely, consumers who do not have the proper decryption key cannot access the encrypted data.
In scenarios such as the e-health one, the sensed data is often highly sensitive. Moreover, it often has very different levels of sensitivity: the mere information on the room occupancy of a hospital is not highly sensitive, whereas the ECG of a given patient is very private information, since it could possibly reveal information about the health status of the person. There can also be several consumers of wireless sensor data, belonging to a heterogeneous population and having intrinsically different data access rights: within a healthcare scenario, patients, social workers, nurses, relatives, generic physicians, and specialists naturally form a hierarchy of entities that are interested in the data delivered by a healthcare WSN. Data consumers can therefore be conveniently organized in hierarchies. Low levels in the hierarchy can access only data with a low level of sensitivity, whilst higher levels can also access more sensitive data.
To satisfy the hierarchical requirement, the idea is to map each distinct sensor data type to an authorization level. Data whose disclosure does not raise high privacy issues is mapped to low authorization levels. Similarly, highly private data
will be mapped to high authorization levels. The resulting mapping expresses the security preferences of a central access control policy point. The hierarchy of authorization levels is then mapped to keys in a hierarchical structure, whereby low-level keys can be derived from high-level ones.
The hierarchy of authorization levels can be modeled as a tree. The adoption of encryption as a way to enforce access control reduces the problem of granting, denying, and revoking access rights to a problem of key management. The scheme assumes the presence of a central access control manager (ACM) which, after evaluation of data consumers' (from now on also referred to as users) credentials, takes care of granting, denying, and revoking access rights. Granting a user a given authorization level means giving her the key to decrypt all data units mapped to that level and to descendant ones. Denying access simply implies not providing the decryption key(s). Finally, revocation of access rights is based on rekeying: changing the keys used at a given point forces data consumers to re-contact the ACM in order to receive the new keys. Consumers whose access rights have been revoked do not receive the new keys, which accomplishes the revocation. This approach achieves the desirable property of no specific interactions between data producers (the sensor nodes) and data consumers, other than data publishing.
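The key-management idea can be illustrated with one-way hashing, in the spirit of the hash-based schemes discussed below (the derivation rule and labels here are invented): a child key is computed as H(parent key || child label), so holders of a high-level key can derive all descendant keys but not the reverse.

# Hash-based hierarchical key derivation (illustrative sketch).

import hashlib

def derive_key(parent_key: bytes, child_label: str) -> bytes:
    return hashlib.sha256(parent_key + child_label.encode()).digest()

root = b"\x00" * 32                        # held by the ACM / top level
k_vitals = derive_key(root, "vitals")      # e.g., ECG: high sensitivity
k_rooms = derive_key(k_vitals, "rooms")    # descendant: room occupancy

# A nurse granted k_rooms cannot recover k_vitals (one-wayness of SHA-256),
# while a physician holding k_vitals derives k_rooms locally:
assert derive_key(k_vitals, "rooms") == k_rooms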

State-of-the-Art

The seminal work of Akl and Taylor (Akl, 1983) first proposes a solution for data access control based on cryptography. Access-controlled resources (data), users, and cryptographic keys are mapped to a hierarchy of classes, represented by a directed acyclic graph. Data belonging to a given class is encrypted with the key associated to that class. The key generation scheme uses the homomorphic properties of modular exponentiation. It assures that a user who is given the decrypting key of a class can generate the keys of that class' descendants, and therefore access data mapped to descendant classes as well. On the contrary, the inverse (generating the key of a parent class) is unfeasible. However, the expensive operations used in the scheme (modular exponentiation) make it unsuitable for a WSN environment.
In (Chien, 2004), Chien proposes a much lighter key generation scheme, based on one-way hash functions instead of modular exponentiation. In addition, the author places a time bound on keys, introducing time periods: during each time period, a new key for each class of data is derived. However, this scheme suffers from a few drawbacks. First of all, it requires tamper-resistant devices, in order to store the secret material used to derive keys. Second, similarly to Akl's scheme, it is impossible to revoke a user's access right to a lower class in the hierarchy. Finally, in (Yi, 2005), Yi showed an attack where, despite the tamper-resistance requirement, a coalition of three users can access some secret class keys that they should not know according to Chien's scheme.
In (Tzeng, 2002), Tzeng proposes a time-bounded key assignment scheme for hierarchies. The computation of the keys, however, involves particularly expensive Lucas function computations. This scheme is not suitable for resource-constrained WSN nodes due to the high-cost operations required for the computation of keys.
In (Shehab, 2005), Shehab et al. propose a mechanism to generate and distribute hierarchical keys. Although efficient and very well suited for WSNs, this scheme has no time bound on keys, and therefore it is not ready to represent a fully-fledged access control solution.
In (Atallah, 2007), Atallah and colleagues propose a general and efficient scheme to incorporate time bounds in existing key management schemes. In addition, they show how to create a full-fledged hierarchical access control scheme with time capabilities. The scheme is elegant and efficient and relies just on one-way hash functions, but, seen from a WSN viewpoint, it requires too large an amount of public information to allow for efficient key derivation.

User Authentication
Access control mechanisms require the access requester to authenticate himself. After establishing a more flexible access control framework better responding to the challenges of mobile and ubiquitous environments, additional flexibility is needed also for the authentication phase of the access control model. Users of a ubiquitous computing system should be able to authenticate themselves with the means at their disposal. For instance, a physician should be able to authenticate with a login-password mechanism, with a certificate stored on his private smart card or PDA, with biometric information like fingerprints, or with a combination of two mechanisms.

Challenges

In order to gain access to a resource protected by an authorization service, users are required to authenticate. User authentication is traditionally performed by producing a combination of authentication factors (e.g. two-factor authentication) statically specified in the access control policy of the authorization service. An authentication factor is any piece of information used to assess the identity of a user. Depending on the context, the user may have access to different authentication services. The flexibility of user authentication can be enhanced by allowing users to authenticate using the different authentication factors at their disposal. In order to achieve that, the authorization service specifies an authentication level to be reached in order to get access to a resource. The resource owner's authentication preferences are thus comprised in an authentication level policy. The user is bound to reach a pre-defined authentication level with the factors he owns. The problem is to weigh the different factors, assigning a metric to each of them.

State-of-the-Art

In the literature, several researchers have already proposed models for an authentication factor metric. In (Reither, 1999), the authors propose a set of principles for designing a metric for authentication factors; nevertheless, they only focus on issuers of authentication factors and not on the supported authentication mechanisms. In (Burr, 2006), an assurance level on authentication factors is defined in an arbitrary manner. It consists basically of a categorization of authentication mechanisms; moreover, the authors do not propose any solution for combining authentication factors in order to achieve a better authentication level. (Al-Muhtadi, 2005) is closer to the authentication-level approach by introducing the notion of confidence values for authentication mechanisms. The authors use the Gaia authentication framework, which calculates the net confidence value of the available Gaia authentication modules. It implies that the user has to authenticate by means of all available authentication mechanisms. Moreover, the authors do not consider the use of heuristics for combining authentication mechanisms; in addition, the confidence in the service implementing the authentication mechanisms is not considered as a criterion on authentication mechanisms. To combine confidence values, the authors finally suggest using the consensus operator from subjective logic. In (M. Covington, 2004), the authors also propose to abstract authentication factors to subjective logic opinions; in order to calculate the confidence in a combination of authentication factors, they likewise use the consensus operator from subjective logic.
Liberty Alliance (Liberty Alliance, 2005) introduces the notion of an identity provider which is in charge of federating user identities. When users want to consume a service, they authenticate to their identity provider by means of an authentication context encapsulated
in SAML assertions, where the circumstances of the authentication (e.g. mechanism used, service) are described. With this additional information, the service provider can evaluate its trust during user authentication. Moreover, the identity provider can still combine different authentication contexts. Nevertheless, the service provider still requires the user to authenticate using statically defined authentication factors.

Solution

In (Gomez, 2007), the following authentication process (see Figure 4) is introduced:

• A user wants to gain access to a resource protected by an authorization service. The authorization service responds to the user with an obligation stating an authentication level to be reached.
• The user attempts to reach the expected authentication level by combining authentication factors, using the available authentication services at his disposal.
• Then, the user forwards the chosen combination of authentication factors to the authorization service, which checks whether they meet the required authentication level.

Figure 4. User authentication flexibility

In order to simplify user authentication, three objectives are defined:

• The authentication level specification is done by resource owners.
• The authentication level specified is met by legitimate users.
• The enforcement of access control can be done based on a specified authentication level, reached by combining different authentication factors.

The approach defines a metric for authentication levels based on subjective logic. The definition of confidence values for authentication mechanisms at a fine-grained level makes it possible to distinguish between, for example, a password of length 4 and one of length 10. The confidence values assigned to authentication factors and their combinations allow going beyond the models described in the literature (Schneier, 2005). The approach capitalizes on subjective logic in order to define a trust metric for the authentication level. A new operator on subjective logic for mitigating opinions on combinations of authentication factors was defined. Figure 5 depicts the evolution of opinion combination. This combination of two opinions, ω_a and ω_b, fulfills the two following requirements:
• It must always result in an increase of opinion: it tends to reward the combination of authentication factors, which is considered a stronger authentication factor than a single one. Combining an X.509 certificate with a basic password authentication is more trustworthy than an X.509 certificate or a password authentication alone.
• It must be proportional to |ω_a − ω_b| and to max(ω_a, ω_b): it tends to reward the combination of strong opinions on authentication factors rather than weak opinions. The goal is to avoid a combination of multiple weak authentication factors reaching a high level of confidence.

Numerous operators are already available in the subjective logic framework; nevertheless, none of them fulfills those two requirements. A new operator for subjective logic, ω_combine (see Figure 5), has been defined in (Gomez, 2007). Each authentication factor is associated with an authentication level. The latter is an abstraction of an authentication factor to a confidence value. Resource owners may specify their preferences in authentication factors by means of an authentication level. Moreover, a resource requester should be able to combine available authentication factors in order to reach the expected authentication level.
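The actual ω_combine operator is defined on subjective-logic opinions in (Gomez, 2007) and is not reproduced here. The following toy function only mimics the two stated requirements on plain confidence values in [0, 1]:

# Toy stand-in for the combine operator (not the operator from Gomez, 2007).

def combine(a: float, b: float) -> float:
    m, diff = max(a, b), abs(a - b)
    # The reward grows with the stronger opinion and with their difference,
    # the result always exceeds max(a, b) when m > 0 (requirement 1), and
    # two weak factors cannot be boosted into high confidence (requirement 2).
    return m + (1.0 - m) * m * (0.2 + 0.8 * diff)

print(combine(0.9, 0.5))   # strong + medium: well above 0.9
print(combine(0.2, 0.2))   # two weak factors: barely above 0.2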

Figure 5. Evolution of combine operator


Future Trends

In ubiquitous environments, compromised context providers (e.g. a malicious sensor node within a wireless sensor network) represent a big threat to the context-aware security approaches previously discussed. The challenge arises from the fact that sensor nodes often need to be low-cost to justify their deployment, which makes it very hard to satisfy tamper-resistance requirements. An attacker could take control of a sensor node in a fraudulent way in order to maliciously craft data or to alter the data processing. Once a node is compromised, the key material contained within is completely exposed and usable by the attacker. In order to cope with such threats, a few trust frameworks have been proposed in the literature to detect bogus sensor data. This implies a trust evaluation of sensor data at acquisition and
aggregation time: trust refers to the reliability
and accuracy of sensed information and it is
related to the quality of the delivered sensor
data. Computing the distance between the
delivered context information and the real
context and evaluating the trustworthi- ness of
delivered context information may be
approached in several ways: (i) context provider
failure detection, (ii) reputation systems, or (iii)
trust based framework. (i) refers to the failure
detection (e.g. crash, omission, timing, value
and arbitrary (Tanenbaum, 2001) of context
providers such as sensor nodes. (ii) aims at
determining the reputation of context providers
(Ganeriwal, 2004). Reputation is defined as the
perception that an entity has of another’s
intentions, based on past experiences with a
given entity. At the contrary, (iii) enables trust
to encompass objective and sub- jective
characteristic of an entity. The goal of (iii) the
trust based framework proposed in (Zhang,
2006) for wireless sensor networks is to
establish trust in all sensor nodes based on the
expectation that they will deliver non-
compromised data.
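For approach (ii), a common instantiation in the sensor network literature is a beta reputation system, in which every cooperative or non-cooperative observation updates one of two counters. The sketch below illustrates that general idea; it is not the exact scheme of (Ganeriwal, 2004).

class BetaReputation:
    """Generic beta reputation for a context provider (illustrative)."""

    def __init__(self):
        self.alpha = 1.0  # prior pseudo-count of good observations
        self.beta = 1.0   # prior pseudo-count of bad observations

    def record(self, consistent: bool):
        # Each sensed value judged consistent/inconsistent with its
        # neighbours updates the corresponding counter.
        if consistent:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def reputation(self) -> float:
        # Expected value of the Beta(alpha, beta) distribution.
        return self.alpha / (self.alpha + self.beta)

node = BetaReputation()
for outcome in [True, True, False, True]:
    node.record(outcome)
print(round(node.reputation, 2))  # 0.67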
From the viewpoint of access control to sensor data, the increasing interest in lightweight access control schemes for WSNs shows how important the problem is for both the academic and the enterprise environment. (Atallah, 2007) represents the state-of-the-art approach to hierarchical data access control with time capabilities. It is foreseeable that, in the near future, new schemes will improve on the latter to make it suitable for resource-constrained environments such as WSNs.

conclusion

Access control in ubiquitous and mobile environments is a challenging task. The use of context information related to the communicating partners, or to the technical infrastructure used as communication channel, offers new possibilities towards adaptable security mechanisms. A first step is the use of (static) context information in access control policies to define fine-grained rules for access to the available resources; context information can extend the widespread RBAC model and make it more flexible. The dynamic enforcement of context-aware access control policies is the next step towards adaptive security. During the enforcement process, context information is retrieved and processed: raw data obtained from physical or logical sensors is aggregated into high-level context information. Many questions related to the trust and dependability of context information thereby remain subjects of research.
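As a toy illustration of this first step, the check below extends a plain role test with a (static) context condition; the action, role, and location predicate are invented for the example:

# Hypothetical context-extended RBAC rule: the role alone is not
# enough, the (static) context condition must hold as well.
POLICY = {
    "read_patient_record": {
        "role": "nurse",
        "context": lambda ctx: ctx.get("location") == "ward",
    }
}

def is_permitted(user_roles, action, ctx):
    rule = POLICY.get(action)
    if rule is None:
        return False
    return rule["role"] in user_roles and rule["context"](ctx)

print(is_permitted({"nurse"}, "read_patient_record", {"location": "ward"}))       # True
print(is_permitted({"nurse"}, "read_patient_record", {"location": "cafeteria"}))  # False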
The use of context information in access control policies, together with their dynamic enforcement, facilitates the definition of a flexible access control that adapts automatically to the current situation. Context-aware access control is often combined with the concept of separating security and application logic. Based on the SOA principle and implemented with web services, access control policies are defined and enforced in a security framework instead of in the (web) service exposing the resources. This increases the reusability of the services and avoids entangling security and application logic during development and implementation.
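In code, this separation often takes the shape of an interceptor placed in front of the service. The hypothetical decorator below, reusing is_permitted() from the previous sketch, keeps the policy check out of the service body:

def enforce(action):
    """Decorator playing the role of the external security framework."""
    def wrap(service):
        def guarded(user_roles, ctx, *args, **kwargs):
            if not is_permitted(user_roles, action, ctx):
                raise PermissionError(action)
            return service(*args, **kwargs)
        return guarded
    return wrap

@enforce("read_patient_record")
def get_record(record_id):
    # Pure application logic: no security code inside the service.
    return {"id": record_id}

print(get_record({"nurse"}, {"location": "ward"}, 42))  # {'id': 42}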
Building on this separation of security, the use of resource filters to define the granularity of resources outside the implementing service is another step towards more flexible and adaptable security mechanisms.
Access control is only one link in the chain of security means needed to protect resources. Access to resources is in general only granted to authenticated users, and authentication in ubiquitous and mobile environments has to fulfill the same requirements regarding flexibility and adaptability as access control. In mobile applications, users employ different technical devices and communication infrastructures. Combining the authentication factors assigned to the different means at a user's disposal requires a metric for the authentication level. The approach based on subjective logic is very promising, as it combines not only the authentication factors but also the authentication services, and it allows a fine-grained characterization of authentication factors and means. In addition, subjective logic allows subjective aspects (e.g., the reputation of the authentication service) to be distinguished from concrete aspects (e.g., the type of authentication mechanism, or the quality of service). The subjective aspects are based on past experience with a given authentication mechanism, while the concrete aspects are derived from measurable elements which characterize an authentication means. Additionally, the combination of authentication means benefits from the subjective logic operators for combining opinions; besides providing a set of logical operators for this purpose, the subjective logic framework allows the definition of new operators.
Especially in ubiquitous environments, where all kinds of devices can produce context information, it is often difficult to enforce access control. On resource-restricted or ubiquitous devices, the use of a standard access control mechanism such as RBAC, or of context-aware access control, is often not possible. An approach for access control based on lightweight cryptography can easily be extended to ubiquitous devices: the sensors or devices produce encrypted context information, and only authenticated users with a sufficient authorization level are able to obtain the key to decrypt the data. In the simplest case, the calculation of the authorization level is based on the user's credentials, but it can also include the evaluation of any kind of context information.
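A minimal sketch of this pattern follows; it assumes the third-party cryptography package for the symmetric cipher, and both the threshold and the way the authorization level is obtained are invented for the example.

from cryptography.fernet import Fernet  # pip install cryptography

KEY = Fernet.generate_key()  # shared between the sensor and the key server
sensor_cipher = Fernet(KEY)

# The sensor publishes only ciphertext.
reading = sensor_cipher.encrypt(b"temperature=37.2")

def release_key(authorization_level: float):
    # The key server hands out the key only above a threshold; the level
    # itself could be computed from credentials and context information.
    return KEY if authorization_level >= 0.8 else None

key = release_key(0.9)
if key is not None:
    print(Fernet(key).decrypt(reading))  # b'temperature=37.2'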
The use of context information for authentication and access control in ubiquitous and mobile environments is a way to reach a higher level of flexibility and adaptability in the systems' security. But the process of obtaining trusted and reliable context information introduces new challenges, which will have to be addressed in the near future.
references
Akl, S. G., & Taylor, P. D. (1983). Cryptographic solution to a problem of access control in a hierarchy. ACM Transactions on Computer Systems, 1(3), 239–248.

Al-Muhtadi, J. (2005). An Intelligent Authentication Infrastructure for Ubiquitous Computing Environments.

Atallah, M. J., Blanton, M., & Frikken, K. B. (2007). Incorporating temporal capabilities in existing key management schemes. Cryptology ePrint Archive.

Bell, D., & La Padula, L. (1973). Secure Computer Systems: Mathematical Foundations. Technical Report MTR-2547, Vol. 1. MITRE Corporation.

Bertino, E., Bonatti, P. A., & Ferrari, E. (2001). TRBAC: A Temporal Role-Based Access Control Model. ACM Transactions on Information and System Security, 4(3), 191–223.

Biba, K. (1975). Integrity Considerations for Secure Computer Systems. Technical Report MTR-3153. MITRE Corporation.

Burr, W. E., Dodson, D. F., & Polk, W. T. (2006). Electronic authentication guideline. NIST Special Publication 800-63. National Institute of Standards and Technology.

Chien, H.-Y. (2004). Efficient time-bound hierarchical key assignment scheme. IEEE Transactions on Knowledge and Data Engineering, 16(10), 1301–1304.

Wullems, C., Looi, M., & Clark, A. (2004). Toward context-aware security: An authorization architecture for intranet environments. Second IEEE Annual Conference on Pervasive Computing and Communications Workshops (PerCom 2004).

Covington, M. J., Moyer, M. J., & Ahamad, M. (2000). Generalized role-based access control for securing future applications. 23rd National Information Systems Security Conference.
