Critical Issues (7.1-7.10)
This section addresses conceptual and theoretical issues related to the field of ubiquitous and pervasive computing. Within these chapters, the reader is presented with analyses of the most current and relevant conceptual inquiries within this growing field of study. Particular chapters discuss ethical issues in pervasive computing, privacy issues, and quality of experience. Overall, contributions within this section ask unique, often theoretical questions related to the study of ubiquitous and pervasive computing and, more often than not, conclude that solutions are both numerous and contradictory.
Chapter 7.1
The Ethical Debate Surrounding RFID
Stephanie Etter
Mount Aloysius College, USA
Patricia G. Phillips
Duquesne University, USA
Ashli M. Molinero
Robert Morris University, USA
Susan J. Nestor
Robert Morris University, USA
Keith LeDonne
Robert Morris University, USA
to a reader (EPCGlobal, 2005). In broad terms, RFID tags are placed into one of two categories: active or passive. According to the Association for Automatic Identification and Mobility (AIM, 2005), active RFID tags are powered by an internal battery and are typically designated as read-write tags. When a tag has read-write capabilities, the tag data can be modified. Passive tags, according to AIM, operate without a power source and obtain operating power from the tag reader. Passive tags are typically read-only tags, having only read-only memory. Active tags generally have a longer read range than passive tags.
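As a rough illustration of the two categories, here is a minimal Python sketch; the class layout and field values are invented for this example and are not taken from AIM's definitions:

```python
from dataclasses import dataclass

# Hypothetical sketch: the two broad tag categories described above.

@dataclass
class RFIDTag:
    tag_id: str
    data: str
    writable: bool         # read-write (typically active) vs. read-only (typically passive)
    battery_powered: bool  # active tags carry an internal battery

    def write(self, new_data: str) -> bool:
        """Modify tag data only if the tag has read-write capability."""
        if not self.writable:
            return False  # read-only memory: write is rejected
        self.data = new_data
        return True

active_tag = RFIDTag("A-001", "pallet 42", writable=True, battery_powered=True)
passive_tag = RFIDTag("P-001", "item 7", writable=False, battery_powered=False)

assert active_tag.write("pallet 43")      # read-write tag accepts updates
assert not passive_tag.write("tampered")  # read-only tag rejects them
print(passive_tag.data)                   # data unchanged
```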
RFID development dates back, according to some accounts, to the 1940s work of Harry Stockman, who discussed the possibility of communication by means of reflected power. Stockman at that point was early in the exploration and "admitted that more needed to be done in solving the basic problems of reflected-power communication before the application could be useful" (Landt & Catlin, 2001). According to the RFID Journal, RFID's early applications can be found during World War II, when it was used by the military in airplanes, through the assistance of radar, to identify friend or foe (IFF).
Two decades later the first commercial use
of RFID-related technology was electronic
article surveillance (EAS), which was designed
to help in theft prevention. These systems often
used 1-bit tags that could be produced cheaply.
Only the presence or absence of the tag could
be detected, which provided effective anti-theft
measures (Landt & Catlin, 2001).
Commercial applications expanded in the
1980s across the world, although not everyone
had the same RFID applications in mind. The
United States found the greatest applications for
RFID to be in the areas of transportation,
personnel access, and to a lesser extent, animal
tracking. "In Europe, the greatest interests were for short-range systems for animals, industrial and business applications, though toll roads in Italy, France, Spain, Portugal, and Norway were equipped with RFID" (Landt & Catlin, 2001).
Today we see RFID in use in toll collection,
tracing livestock movements, and tracking
freight (Jones, Clarke-Hill, Comfort, Hillier, & Shears, 2005). While not a new technology, the use of
RFID is slowly gaining momentum for
widespread application, with RFID technology
being used in industries such as retail, banking,
transportation, manufacturing, and healthcare.
items so that each item is traceable back to a credit account. According to Gunther and Spiekermann (2005), "Consumers feel helpless toward the RFID environment" (p. 74) and "even though the potential advantages of RFID are well understood by a solid majority of consumers, fear seems to override most of these positive sentiments" (p. 76). There is some development in the area of privacy-enhancing technologies (PETs), technology designed to enable privacy while still using RFID, but as Gunther and Spiekermann (2005) report, consumers still feel helpless (p. 74).
Although the ethical debate surrounding
RFID does focus on privacy, it is important to
note that much of the privacy debate itself can
be connected to the other main controversy with
RFID: security. Yoshida (2005) reports that
Elliot Maxwell of The Pennsylvania State
University’s E-Business Research Center
argues, “Fair information prac- tices are
designed for centralized control and personal
verification, but what is emerging from RFID is
surveillance without conscious action.” He
further argues that with RFID, “every object is
a data collector and is always on. There are no
obvious monitoring cues. Data can be broadly
shared, and data that [are] communicated can
be intercepted.” While this information is
stored in databases for later use or sale, there
are potential security risks that arise. If it is
intercepted during transport (electronic or
otherwise), or accessed by an unauthorized
party, the information now becomes more than
just a concern about privacy related to which
products consumers buy or which books they
read, but it then becomes an opportunity for
identity theft.
Those in the RFID industry have responded
to concerns about privacy by developing EPC
tags that can be equipped with a kill function.
Tags that are killed are totally inoperable after
being sold to a consumer. Without global
standards it is difficult to predict whether kill
functions will be used widely in an attempt to
protect consumer privacy.
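The kill idea can be sketched in a few lines. The password check below is an assumption made for illustration (the text only says EPC tags "can be equipped with a kill function"); the essential behavior is that a killed tag simply stops responding:

```python
# Illustrative sketch of the "kill" idea: a killed tag is totally inoperable.

class KillableTag:
    def __init__(self, epc: str, kill_password: str):
        self.epc = epc
        self._kill_password = kill_password  # hypothetical: a password-protected kill
        self.killed = False

    def respond(self):
        """A killed tag returns nothing to any reader."""
        return None if self.killed else self.epc

    def kill(self, password: str) -> bool:
        """Permanently disable the tag, e.g. at the point of sale."""
        if password == self._kill_password:
            self.killed = True
        return self.killed

tag = KillableTag("urn:epc:id:sgtin:0614141.107346.2017", "s3cret")
assert tag.respond() is not None   # tag answers readers before sale
tag.kill("s3cret")                 # checkout kills the tag
assert tag.respond() is None       # tag no longer answers any reader
```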
Industry Impact

Healthcare

The Benefits

The Challenges

Future Prospects

Conclusion

References
RFID Journal. (2005). What is RFID? Retrieved December 1, 2005, from http://www.rfidjournal.com/article/articleview/1339/2/129/

Terry, N.P. (2004). Electronic health records: International, structural and legal perspectives. Retrieved January 27, 2006, from http://law.slu.edu/nicolasterry/NTProf/ALM_Final.pdf

Yoshida, J. (2005). RFID policy seeks identity: Global economic body debates controversial technology. Electronic Engineering Times. Retrieved January 31, 2006, from Lexis Nexis Academic Universe Database.

Key Terms

Passive Tag: A type of RFID tag that operates without a power source and is typically designated as a read-only tag.

Privacy Enhancing Technology (PET): Hardware and software designed to protect an individual's privacy while using technology.

Read-Only Tag: A tag that only has read-only memory. When manufactured, this tag is pre-programmed with a unique and/or randomly assigned identification code.

Read-Write Tag: A tag that allows for full read-write capacity. A user can update information stored in a tag as often as necessary.
This work was previously published in Encyclopedia of Information Ethics and Security, edited by M. Quigley, pp. 214-220, copyright 2007 by Information Science Reference (an imprint of IGI Global).
Chapter 7.2
Privacy Issues of Applying RFID in Retail Industry

Haifei Li
Union University, USA
Patrick C. K. Hung
University of Ontario Institute of Technology, Canada
Jia Zhang
Northern Illinois University, USA
David Ahn
Nyack College, USA
Copyright © 2010, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
parties. In summary, the U.S. mostly relies on self-regulation and limited legislation. However, federal agencies circumvent these constraints by subscribing to commercial surrogates, who collect and store the same data with no constraints. Because of the identity thefts that occurred at ChoicePoint and LexisNexis in early 2005, U.S. lawmakers have pushed for more aggressive data privacy legislation (Gross, 2005).

In contrast, the European Union (EU) Data Protection Directive (Steinke, 2002) contains two statements that contradict the U.S. act. The first statement requires that an organization must inform individuals why it collects and uses information, how to contact the organization, and the types of third parties to which it discloses the information. The second statement requires that personal data on EU citizens may only be transferred to countries outside the 15-nation bloc that adopt these rules or are deemed to provide adequate protection for the data. As a result, these two statements imply that no information of any EU citizen can be transferred to the U.S. due to the conflicts between the two privacy acts. Consequently, these policies create obstacles for conducting business activities between the EU and the U.S. To solve the problem, the U.S. government has made a voluntary scheme called Safe Harbor to provide an adequate level of data protection that safeguards transfers of personal data to the U.S. from the EU. U.S. companies conducting business in the EU must certify to the U.S. Department of Commerce that they will follow the regulations of the EU directive. Any violation is subject to prosecution by the Federal Trade Commission (FTC) for deceptive business practices.

Furthermore, based on a recent survey, bank officers said that they had ongoing concerns, mostly procedural, about how to handle the anticipated privacy regulations of the U.S. The Gramm-Leach-Bliley Act (GLB) requires financial institutions regularly to communicate privacy policies to customers and to provide adequate opportunities for opting out of personal information disclosure to non-affiliated third parties (Hinde, 2002). All of this information indicates that privacy is currently a critical topic.

Therefore, in our opinion, the traditional view of an authorization model should be extended with an enterprise-wide privacy policy in order to manage and enforce individual privacy preferences. In this article, we aim to propose a privacy authorization model and to explore its implementation issues, focusing on language specification. The remainder of the article is organized as follows. In the second section, we briefly introduce the background of RFID technology and introduce the problem domain. In the third section, we discuss related work. In the fourth section, we propose our RFID-oriented privacy authorization model. In the fifth section, we discuss the design of the model. In the sixth section, we discuss the implementation of the model. In the seventh section, we perform self-assessments. In the eighth section, we make conclusions and discuss future work.

Background and Problem Domain

Due to its potential to dramatically increase productivity, many organizations have shown strong interest in applying RFID technology to the retail industry. Among others, IEEE has played an essential role in the rise of wireless ID by sponsoring conferences and publishing papers in this area (Leventon, 2005); IBM has been active in pursuing business opportunities for several years (IBM, 2004); researchers and engineers at HP have developed an RFID-based solution for tracking IT assets (Schwartz, 2005). In this section, we will briefly introduce the basic concept of RFID and then discuss the problem domain to be addressed.
A Typical RFID System

Radio Frequency Identification (RFID) is a generic term for the technologies that use radio waves to automatically identify individual items. A typical RFID system contains three components: an RFID tag, an RFID reader, and a computer network (see Figure 1). An RFID tag is actually a microchip with a coiled antenna. When an RFID tag receives electromagnetic waves from the reader, it sends stored data to the reader. An RFID reader can read and write data, depending on the types of RFID tags with which it interacts. A reader also can send data to the associated computer network. A computer network can receive data from the reader and perform further processing on the data collected. Potentially, computers also can send data to readers. Figure 1 illustrates the main components of an RFID system as well as the interactions between them.

Requirements Analysis

The RFID system illustrated in Figure 1 is a typical scenario used by a retailer. For example, big retail companies like Wal-Mart with large database infrastructures can use such an RFID system to keep track of where individual cartons of goods are in their supply chain or, perhaps someday in the future, what products are in a shopper's physical cart. However, although RFID is a boon to the retail industry, it comes at the high price of shaky security and privacy. For example, a retailer must deal with different categories of users that may have access to the data stored in RFID tags. These users could be professional buyers, cashiers, store managers, warehouse keepers, warehouse managers, and so forth. The data contained in RFID tags may or may not be PII. If it is PII, people naturally have big concerns for possible privacy invasion when various users have access to RFID tags. Even if the information is not personally identifiable, there is still some concern, because the information subject potentially could be identified. That is why a new role came into place called privacy policy enforcers, which may have alternative names such as CPO (Chief Privacy Officer).

At present, interest remains high enough in implementing RFID that the lack of security and privacy is not a bottleneck to the retail industry's adoption of this new business model. But why wait to find a solution? This is the momentum of this research that aims at investigating the security and privacy issues of an RFID system and exploring
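The tag-reader-network pipeline of a typical RFID system can be sketched as a toy message flow; the class names and the event format below are illustrative assumptions, not part of the article:

```python
# Minimal sketch of the three components of a typical RFID system:
# tag, reader, and the associated computer network.

class Tag:
    def __init__(self, tag_id, data):
        self.tag_id, self.data = tag_id, data

    def respond(self):
        # An energized tag sends its stored data back to the reader.
        return {"tag_id": self.tag_id, "data": self.data}

class Network:
    def __init__(self):
        self.events = []

    def receive(self, event):
        # Further processing would happen here (e.g. supply-chain tracking).
        self.events.append(event)

class Reader:
    def __init__(self, network):
        self.network = network  # the associated computer network

    def scan(self, tag):
        reply = tag.respond()        # interrogate the tag
        self.network.receive(reply)  # forward the read to the back end
        return reply

net = Network()
reader = Reader(net)
reader.scan(Tag("carton-0001", "case of 24 units"))
print(len(net.events))  # -> 1
```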
Figure 2. The retailer enterprise boundary: original data from the supplier is stored in the RFID tag (e.g., EPC 01-000169DC-E09); role players (Role Player 1: Cashier, Role Player 2: Store Manager, ..., Role Player n) read from and write to the tag, which passes the RFID GateKeeper before reaching the end user
it. In other words, the journey of merchandise can start from an original supplier and flow through multiple enterprise retailers before finally reaching a customer's hand. Figure 2 also shows that the circulation of an RFID tag is a directional flow from a supplier to one or more enterprise retailers and then to an end user.

As illustrated in Figure 2 at the left-hand side, suppliers provide retailers with the original data from the manufacturing facility. The data are embedded and stored into an RFID tag. After the RFID tag enters the enterprise's boundary, different role players (e.g., cashiers, store managers, etc.) can interact with it. A solid line from the tag to a role player refers to an action of reading the tag data, while a dotted line from the role player to the tag refers to a possible action of writing data to the tag. Before the tag leaves the enterprise boundary, such as a retailer store (i.e., purchased by an end user, as shown in Figure 2), the RFID tag needs to pass through a GateKeeper, an automated program to ensure that the customer's privacy will be properly protected. For example, it will examine whether some role players (e.g., cashiers) intentionally or unintentionally write unauthorized data into the tag.

In short, our role-based retailer enterprise boundary framework provides fundamental specification and context to perform role-based access control.

RFID Access Control Framework

Based on the previous specification, an access control system should enforce the policy stated by the enterprise. Under this circumstance, an information access control mechanism also should be embedded with privacy-enhancing technologies. All this evidence shows the importance of integrating privacy concepts into access control mechanisms in order to resolve the RFID privacy problems.

Let us take a quick review of the traditional access control mechanism. The family of Role-Based Access Control (RBAC) is commonly referred to as the RBAC96 model, which focuses on security control using roles and organizations. RBAC96 presents a conceptual model to describe different approaches such as base model, role hierarchies, constraint model, and consolidated model. In particular, the National Institute of Standards and Technology (NIST) conducted
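The direction this section points at, classic role-based access control extended with an enterprise-wide privacy policy, can be illustrated with a toy decision function. The roles, permissions, and purpose list below are hypothetical and are not the authors' actual model:

```python
# Toy sketch: RBAC decision extended with a privacy gate for PII.
# All role names, permissions, and purposes are invented for illustration.

ROLE_PERMISSIONS = {
    "cashier":       {("tag_data", "read")},
    "store_manager": {("tag_data", "read"), ("tag_data", "write")},
}

# Enterprise-wide privacy policy: which purposes justify touching PII.
PII_ALLOWED_PURPOSES = {"checkout", "recall"}

def authorize(role, obj, action, is_pii, purpose):
    """Classic RBAC check first, then an extra privacy check for PII."""
    if (obj, action) not in ROLE_PERMISSIONS.get(role, set()):
        return False  # role check fails
    if is_pii and purpose not in PII_ALLOWED_PURPOSES:
        return False  # role is fine, but the purpose violates privacy policy
    return True

assert authorize("cashier", "tag_data", "read", is_pii=False, purpose="inventory")
assert not authorize("cashier", "tag_data", "write", is_pii=False, purpose="checkout")
assert not authorize("store_manager", "tag_data", "read", is_pii=True, purpose="marketing")
assert authorize("store_manager", "tag_data", "read", is_pii=True, purpose="recall")
```

The design point is that the privacy check is orthogonal to the role check: a user can hold a permission and still be denied when the stated purpose falls outside the enterprise privacy policy.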
Realization
Chapter 7.3
An Evaluation of the RFID Security Benefits of the APF System: Hospital Patient Data Protection

John Ayoade
American University of Nigeria, Nigeria
Judith Symonds
Auckland University of Technology, New Zealand
Overview of the APF System

The APF was proposed to deter the data security problem in the RFID system. APF is a framework that makes it compulsory for readers to authenticate themselves with the APF database before they can read the information in the registered tags. Figure 1 shows that the APF system comprises four application segments:

i. The Tag Writer (writer application) is the part of the APF that encrypts the information in the tag and produces the decryption key, which will be submitted, along with its identification number, to the APF database.
ii. The Reader's Application queries the tag and registers readers' identification numbers with the APF database. This is also the part of the system that uses the decryption key to decrypt the information after it has been authenticated by the APF database.
iii. The Authentication's Application is the part of the system that integrates both the reader application and the APF database maintenance application.
iv. The Maintenance's Application is the part of the system that maintains the APF database.

APF System Operation and Methods

The tag writer (writer application) subsystem reads tags in its vicinity and then generates a randomized encryption key. The next step is to input and encrypt the information into the tag for security purposes. The next paragraph explains how the authentic tag reader (reader application) subsystem reads the encrypted information in the tag.

The reader subsystem sends a "challenge" command to the tag in its vicinity (just as any typical RFID reader will read the information in the tag within its vicinity) and the tag responds with its unique identification and the content of the information in it. However, in the case of the APF system, the content of the information stored in the tags is encrypted. This means the reader cannot decrypt the information in the tag without the decryption key, which is kept in the APF database system.

The next stage of the operation is that the reader will submit its ID to the APF database subsystem. Then, the APF key inquiry subsystem will check whether or not the reader is authorised
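The registration and key-inquiry steps described in this section can be sketched as a small in-memory stand-in for the APF database; the class and method names are assumptions for illustration, not the APF's actual interfaces:

```python
# Hypothetical sketch of the APF database's registration and key-inquiry
# subsystems: tags register IDs and decryption keys, readers register IDs,
# and keys are released only to registered readers.

class APFDatabase:
    def __init__(self):
        self.readers = set()  # registered reader IDs
        self.tag_keys = {}    # tag ID -> decryption key

    def register_reader(self, reader_id):
        self.readers.add(reader_id)

    def register_tag(self, tag_id, decryption_key):
        self.tag_keys[tag_id] = decryption_key

    def key_inquiry(self, reader_id, tag_id):
        """Release the tag's decryption key only to an authorised reader."""
        if reader_id not in self.readers:
            return None  # unregistered reader: key denied
        return self.tag_keys.get(tag_id)

apf = APFDatabase()
apf.register_tag("tag-17", decryption_key="k-0xDEAD")
apf.register_reader("reader-A")

assert apf.key_inquiry("reader-A", "tag-17") == "k-0xDEAD"  # authorised
assert apf.key_inquiry("reader-X", "tag-17") is None        # denied
```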
Figure 1. The APF system: the authentication application (Web system) and the APF data maintenance application (Web system) around the APF database; readers register their reader IDs, and tags register their tag IDs and decryption keys, with the APF database
Figure 2. The flowchart of the APF
to be granted the decryption key to have access to a particular tag. If it is authorised, the decryption key will be granted and the reader will be granted access; if not, the decryption key will be denied and the reader will not be able to decrypt the information stored in the tag.

The Methodology of the APF System

Figure 2 is the step-by-step representation of the APF. Initially, tags will register their identification numbers and the decryption keys with the APF database. Also, readers will register their identification numbers with the APF database. Normally, readers will send a "challenge" command in order to access the information in the tags. However, with the APF protocol, tags will send a "response" command consisting of the tags' identification numbers and the encrypted data to the readers. The response message from the tag will instruct the reader to get the decryption key from the APF database in order to decrypt and read the data in the tag. Since authenticating readers would have registered with the APF database, only authenticating readers would be given the decryption key to decrypt the encrypted data in the tags.

In order to prevent illegal access to the information stored in the tags, there should be a procedure for access control to the information stored in the tags. As shown in Figure 3, and discussed above, each tag will register its unique ID and decryption key with the APF database. This is necessary for the protection of tags from unscrupulous readers that may have ulterior intentions. Once a tag registers its unique identity and decryption key with the APF, it will be difficult for unregistered readers to have access to the data in the tag without possessing the decryption key to the tag. This means every registered reader will be authenticated prior to getting the decryption key to access stored data in the tag.

In the next paragraph we discuss how the authenticated reader would have access to stored data in the tag. Every reader will register its identification number with the APF in order for it to be authenticated prior to the time the reader will request the decryption key to access the data in the tag. In a nutshell, every reader will register its unique identification number with the APF and this will
Figure 3. The registration of tags with the APF
be confirmed by the APF before releasing the decryption key to the reader in order to read the encrypted data in the specific tag.

Figure 4 shows that every reader registers its unique identification number with the APF. However, since both readers and tags register their identification numbers with the APF, this serves as mutual authentication, and it protects the information in the tags from malicious readers, which is one of the concerns users have. This means that unauthorized access to the tag will be almost impossible if the APF system is correctly implemented. In the next paragraph the authors of this paper discuss the registration and access control of readers to the APF.

In the previous paragraphs, the authors of this paper discuss the registration of the tags' unique IDs and the decryption keys with the APF. Also, we discuss the registration of readers with the APF prior to accessing the information in the tags. When the reader sends a "read" command to the tag, it replies with its identification number and encrypted data. In this case the data is encrypted, and the reader registered with the APF will be able to get the decryption key in order to decrypt the data. Once the key is received, the data in the tag will be readable. In this framework there are two important processes: first, mutual authentication is carried out by the APF because it authenticates the reader and the tag; secondly, privacy is guaranteed because the data stored in the tag is protected from malicious readers. Since the information the reader obtained from the tag is encrypted, it can only be read after the decryption key needed to access the information is received from the APF.
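The whole round trip, a challenge answered with encrypted data, then a key release to a registered reader, can be sketched end to end. The hash-based XOR keystream below is a stand-in cipher chosen only to keep the example self-contained; the chapter does not specify which cipher the APF actually uses:

```python
import hashlib

# End-to-end sketch of the APF read protocol described above.
# The cipher is a toy stand-in; APF_DB and all names are illustrative.

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a hash-derived keystream.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

KEY = b"tag-17-secret-key"
APF_DB = {"readers": {"reader-A"}, "keys": {"tag-17": KEY}}

def tag_response(tag_id, plaintext):
    # The tag answers a "challenge" with its ID and the encrypted content.
    return tag_id, keystream_xor(APF_DB["keys"][tag_id], plaintext)

def reader_read(reader_id, tag_id, ciphertext):
    # The reader submits its ID; the APF releases the key only if registered.
    if reader_id not in APF_DB["readers"]:
        return None
    return keystream_xor(APF_DB["keys"][tag_id], ciphertext)

tid, blob = tag_response("tag-17", b"blood type: O+")
assert reader_read("reader-A", tid, blob) == b"blood type: O+"  # registered reader
assert reader_read("reader-X", tid, blob) is None               # malicious reader
```

Note that an unregistered reader still receives the ciphertext from the tag; what the APF withholds is the key, which is exactly the property the authors rely on.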
Application of the APF

The APF described in this paper has many applications. For example, it can be deployed in supply chain management, or in granting or restricting access to information for certain groups of people in a hospital. This has been a major concern in many large hospitals (RFIDGazette, 2004). The APF could help to control these security concerns. Take, for example, a hospital where the RFID system is used. The APF will guarantee total data security of the information in the tag from malicious readers, because every authentic reader will register its ID with the APF prior to reading the information in the tags, and all tags that will be read by those readers will register with the APF. This means that there will be mutual authentication and the information in the tags will be secure.

One of the areas in which the APF could be deployed in a hospital is for the protection of medical records. In (Patient Tracking, 2005), the implementation of an RFID system for hospital asset management was discussed. Such RFID systems may be used to track patients, doctors and expensive equipment in hospitals. The RFID tags can be attached to the ID bracelets of all patients, or just patients requiring special attention, so their location can be monitored continuously.

One of the benefits of the above mentioned system is the use of the patient's RFID tag to access a patient's information for review and update via hand-held computer or PDA (Patient Tracking, 2005). In such applications there is a tendency for unauthorized readers to access the information stored in a patient's tag. This is obviously of great concern to patients. In order
to prevent this kind of problem, the APF would offer secure solutions.

Hospital patient data is not easily protected by the currently available security measures already discussed. The Kill Command would not work because the data does not have a finite life as products on a shop shelf do. The Faraday Cage Approach would not be practical, as a metal mesh or foil container would make an RFID bracelet very difficult to work with and to wear. Active Jamming would be dangerous and would interfere with other systems in a hospital environment. Similarly, the Blocker Tag method would interfere with other systems in the hospital environment. Therefore, hospital patient data is a good case study for the APF, because conventional privacy and security measures are not appropriate for the application and the problem is hindering the development of RFID patient care systems in hospitals.

The APF Case Study

This experimental case study was carried out to test the possibility of deploying the APF to deter illegal access by unauthorized readers to RFID tags containing medical records of patients. Figure 6 is a screenshot of the tag writer (writer application) and it shows the practical possibility of using RFID tags for storing the medical records of patients in the hospital. However, patients will not want their medical record accessed by an unauthorised person, because they want their privacy protected from others except their doctor.

Moreover, with a typical RFID system anybody who has a reader can access the information in the tag within its read or write vicinity. This means that any patient that has their confidential information stored in the tag is prone to abuse and invasion of privacy. However, using the APF, the information stored in the tag will be encrypted in order to secure it from unauthorized readers. This is the underlined text shown in Figure 6. As Figure 6 shows, the APF tag writer (writer application) subsystem reads the Tag ID in its vicinity, then generates a random encryption key. The encryption key is used to encrypt the plaintext information about the patient about to be written to the tag. After the encryption of the information, the encrypted text will be written into the tag. This information will be secured from unauthorized readers, unlike a typical RFID system.

Figure 7 shows that readers have to be registered. This means that only readers registered in
the APF database can access the information in the tag. In this case study it was demonstrated that readers unregistered in the APF database would not be able to access the medical records stored in the tag. Once an authenticating reader is opened, the tag reader (reader application) has to obtain the decryption key of the encrypted information stored in the tag. However, prior to that it needs to send its ID to the APF database, and the APF database will check whether or not it is an authenticating reader; once that is confirmed, the decryption key will be released for it to access the encrypted information stored in the tag, provided it is an authentic reader. However, if it
Note: Readers need to declare their IDs prior to reading the information in the tag
Note: Authenticating reader declares its ID and accesses the decrypted information. Also, an unregistered reader declares its ID and is denied access to the information
is not an authenticating reader, the reader will be denied access to the stored information. This is shown in Figure 8.
In this case study, the authors assumed that the patient's doctor alone will be in control of the three application subsystems, that is: the tag writer (writer application), the tag reader (reader application), and the APF protected application software.

Thus, the patient whose information is stored within the APF protected system can rest assured that their confidential medical information stored in their tag is secure from violation by unauthorized readers.
There are a number of well-established RFID security and privacy threats:
This work was previously published in International Journal of Advanced Pervasive and Ubiquitous Computing, Vol. 1, Issue 1, edited by J. Symonds, pp. 44-59, copyright 2009 by IGI Publishing (an imprint of IGI Global).
Chapter 7.4
Security and Privacy in RFID Based Wireless Networks

Denis Trček
University of Ljubljana, Slovenia
Abstract

items (in containers) can be scanned together, while each item can be uniquely identified and traced. These properties give RFID technology significant advantages over existing bar-code systems that currently serve for low-level, operational acquisition of data in the above mentioned business environments. These appealing properties also have drawbacks, many of them in the area of security and privacy. But as RFID is already finding its place in contemporary information systems (ISs), these issues need to be addressed seriously, which is the goal of this chapter. In the second section, the background of RFID technology is given. In the third section, threats are described and countermeasures are given. In the fourth section anticipated future trends are discussed. There is a conclusion in the fifth section, while the chapter ends with references and key definitions.

Introduction

Background Overview

Some definitions have to be given first. One basic definition in the area of computer (communications) security states that security means minimization of vulnerabilities of assets and resources (ISO, 1989). Wireless security thus means minimization of vulnerabilities of assets and resources when communicating information in electromagnetic media through a free-space environment. Finally, RFID technology will be defined as wireless identification technology which operates on radio frequencies and deploys low-cost ICs.

A model of the RFID environment is described in Figure 1. It consists of tags (also called responders) and readers (also called transceivers). This is the front-end of RFID applications, which have their back-end in database management systems, where they are integrated with the rest of the IS (see Figure 1). It is generally assumed that RFID security and privacy is concerned with the front-end part (the left-hand side of the dashed vertical line in Figure 1). This is actually the part that is covered by the reader's signal; the tag's signal usually falls within its range.

Tags consist of a microchip and an antenna, both encapsulated in polymer material. The microchip has encoded data, called identification (ID),
Figure 1. A model of the RFID environment: tags and readers form the front-end, connected to the back-end information system; the tag's range falls within the reader's range, and the back-end resides in a secure environment
which typically include the manufacturer, 2003).
brand, model, and serial number. A typical communication channel with a
Communication takes place on radio- pas- sive RFID is asymmetric. This means that
frequencies, for example, from 125 kHz to 134 forward communication, that is,
kHz for security cards and from 800 communication from a
MHz to 900 MHz for retail applications
(Roussos,
2006). However, increasing the frequency
means increased accumulation of signal in
bodies con- taining large quantities of water or
in metal.
Communication is achieved by electromag-
netic coupling between readers and tags. A
reader transmits a signal, which induces a
voltage in the tag’s antenna. This coupling
provides sufficient power for a tag to respond
(after performing some calculations if
required). If a tag is powered through this
coupling, it is called a passive tag. However, if
a tag has some source of energy, for example, a
battery, it is called an active tag. Each type has
certain advantages and disadvantages. Passive tags are cheap, but remain active until being explicitly destroyed. They have a low operating perimeter (typically 3 meters) with a relatively
high error rate. In contrast, active tags have a
greater operating perimeter (up to a few
hundred meters), lower error rate, and cease
functioning when the source of power is
exhausted. However, they are significantly more
expensive. Both kinds of tags can be read-only, write-once/read-many, or rewritable.
The main barrier to mass deployment of RFID tags is their price. The target price is capped at around five cents but, depending on quantities and using current technologies, many application niches can already be covered. The total cost consists mainly of the cost of the antenna, which can be from €/US$ 0.01 to €/US$ 0.02, the cost of silicon, and IC production. Silicon typically costs €/US$ 0.04/mm² (Weis, 2003), while IC production depends on the number of logical gates, that is, on the technology. But roughly, the cost ranges from €/US$ 0.025/mm² with 1,500 gates/mm² to
€/US$ 0.08/mm² with 60,000 gates/mm² (Weis, 2003).

A typical communication channel with a passive RFID is asymmetric. This means that forward communication, that is, communication from a reader to a tag, has a range one order of magnitude larger than backward communication, that is, from the tag to the reader. In the former case the range is typically up to 100 meters, while in the latter case it is typically up to 3 meters. The reason, of course, is the power-consumption constraint, which means that practical applications are limited to a range of up to 3 meters.
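As a rough worked example of the cost figures above: take the antenna at the midpoint of the quoted range and combine it with the per-mm² silicon and IC-production costs. The 0.3 mm² die area used here is an assumed value for illustration, not a number from the text:

```python
# Back-of-the-envelope per-tag cost from the figures quoted above.
# The 0.3 mm^2 die area is an assumed value for illustration only.
antenna = 0.015                  # EUR/USD, midpoint of the 0.01-0.02 range
silicon_per_mm2 = 0.04           # raw silicon cost per mm^2 (Weis, 2003)
ic_low, ic_high = 0.025, 0.08    # IC production per mm^2 (1,500 vs 60,000 gates/mm^2)
die_area = 0.3                   # assumed die area in mm^2

low = antenna + die_area * (silicon_per_mm2 + ic_low)
high = antenna + die_area * (silicon_per_mm2 + ic_high)
print(f"per-tag cost roughly {low:.4f} to {high:.4f}")
```

Even this crude estimate lands near the five-cent target mentioned above, which is why the gate budget, and hence the cryptographic capability, of a tag is so constrained.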
Thus, the cost factor dictates that a typical
RFID, or a reference RFID implementation, is
currently expected to have the following characteristics. It is passively powered and has
96 bits of read-only memory. These
standardized bits serve to carry the tag’s
identity, which is unique for each tag (these IDs
are stored in silicon by an imprinting process).
A chip operates at 20,000 clock cycles per second, providing 200 read operations per second. An algorithm to respond to read primitives from a reader may be probabilistic (e.g., Aloha (Prasad & Rugierre, 2003)) or deterministic (e.g., binary tree walking (Juels, Rivest, & Szydlo, 2003)). With such algorithms, a single tag can be
identified and isolated. The related process is
called singulation. Finally, the number of
available gates that can be devoted to security
operations is in the range of 400 to 4,000.
The above estimates are based on figures
from Weis (2003) by applying Moore's law,
which states that for the same price the
available processing power doubles every year
and a half. It is therefore clear that processing
resources to support security in RFID
environments are very limited and lightweight
cryptographic solutions thus provide an answer
to this problem.
Moore’s law also implies that there is always
a point where “ordinary” cryptographic
algorithms become feasible for computationally
weak devices. An example of a thick RFID
implementation, which is based on AES to
provide authentication, can be found in the
work of Feldhofer, Dominikus, and
Wolkerstorfer (2004). Despite this, a permanent need exists for lightweight cryptographic protocols and also algorithms. One main reason is the gap between ordinary devices, where space
and power consumption are not a serious concern (e.g., tag readers, desktop systems), and weak devices with limited space and power consumption (e.g., RFID tags, smart-cards). This gap means that increased processing power affects both kinds of devices equally; in the case of a cryptographic algorithm, the key-length of this algorithm is extended. As a consequence, weak devices are again less protected, because they cannot deploy such intensive computations with enlarged keys. Further, if the above use of a cryptographic algorithm can be seen as a kind of variable cost (the longer the key, the higher the processing overhead), cryptographic protocols can be seen as a fixed cost. Note that cryptographic protocols are ordinary communication protocols that deploy cryptographic algorithms; cryptographic protocols are often referred to as security services, while cryptographic algorithms are referred to as security mechanisms. Both kinds of costs contribute to the total processing power requirements, and have to be kept low while at the same time enabling a comparable level of security for weak devices. This leads to a whole new research area (Juels, 2004).

RFID THREATS AND COUNTERMEASURES

The very basic threat to each and every tag is that it remains active when it is no longer supposed to be active. To counter this problem, RFID logic may implement a kill operation, which means that upon receipt of a certain communication primitive the tag becomes permanently inoperative by, for example, blowing a fuse in its circuitry. A more bullet-proof solution is exposure of the RFID to microwave radiation that melts its metalized layer.

Risk management drives each and every provision of security and privacy in ISs. A typical process is depicted in Figure 2. It starts with the identification of assets A (A = {a1, a2, …, an}) and threats T (T = {t1, t2, …, tm}) to those assets. For each asset and threat, that is, for the Cartesian product A × T = {(a1, t1), (a1, t2), …, (an, tm)}, related vulnerabilities are identified, together with the likelihood of a threat interacting with the asset during a certain period of time. On this basis, the
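The enumeration over the Cartesian product A × T can be sketched in a few lines. The asset and threat names, likelihoods and impact values below are invented for the illustration; in practice they come out of the vulnerability analysis:

```python
from itertools import product

# Toy risk-management pass over the Cartesian product A x T.
assets = ["tag ID", "back-end DB"]
threats = ["eavesdropping", "rogue kill command"]

likelihood = {   # chance of the threat interacting with the asset per period
    ("tag ID", "eavesdropping"): 0.6,
    ("tag ID", "rogue kill command"): 0.2,
    ("back-end DB", "eavesdropping"): 0.1,
    ("back-end DB", "rogue kill command"): 0.05,
}
impact = {       # loss if the threat succeeds (arbitrary units)
    ("tag ID", "eavesdropping"): 10,
    ("tag ID", "rogue kill command"): 40,
    ("back-end DB", "eavesdropping"): 100,
    ("back-end DB", "rogue kill command"): 80,
}

# risk = likelihood x impact for every (asset, threat) pair, highest first
risks = sorted(
    ((a, t, likelihood[a, t] * impact[a, t]) for a, t in product(assets, threats)),
    key=lambda r: r[2],
    reverse=True,
)
for a, t, r in risks:
    print(f"{a} / {t}: risk {r:g}")
```

Ranking the pairs by likelihood times impact is what lets the subsequent steps of the process concentrate countermeasures on the highest risks first.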
FUTURE TRENDS

CONCLUSION

REFERENCES

KEY TERMS
This work was previously published in Handbook of Research on Wireless Security, edited by Y. Zhang, J. Zheng, and M.
Ma, pp. 723-731, copyright 2008 by Information Science Reference (an imprint of IGI Global).
Chapter 7.5
Humans and Emerging RFID Systems:
Evaluating Data Protection Law on the User Scenario Basis

Olli Pitkänen
Helsinki Institute for Information Technology (HIIT), Finland

Marketta Niemelä
VTT Technical Research Centre of Finland, Finland
ABSTRACT

INTRODUCTION

…to power up and transmit a response is induced in the antenna by the incoming radio frequency signal. Passive tags are typically quite small, in the size range of a stamp. Therefore, a passive tag is relatively easy and cheap to place in almost any object.

Active tags, in contrast, include internal power supplies. They are able to communicate further, and to store and process more information. Although active tags are more versatile than passive tags, they can be much more expensive, larger, and more difficult to place.

While RFID tags become smaller and cheaper, reader technology is also developing. It is already possible to equip, for example, mobile phones with RFID readers. Thus not only tags but also readers are spreading widely, enabling an unforeseeable number of new services.

RFID technology is said to benefit not only businesses but also individuals and public organizations in many ways. It enables useful new services and applications. The benefits of RFID tags are apparent, but their exploitation has been slowed by notable obstacles. So far, there have been three main problems that have hindered the diffusion of RFID technology: First, the technology has not been mature enough. Second, there has been a lack of standards. Third, there have been severe concerns about the risks that RFID poses to end-users' privacy. In this article, we concentrate on the third problem. In particular, with the help of RFID tags, it is possible to collect and process personal information on human beings.

Many researchers have studied RFID privacy issues in recent years. The following brief list includes some of the important studies related to this topic. … (2006) have discussed various RFID-related threats and potential solutions to them.
in research projects. We are also evaluating the current European data protection law to find out how well it will suit future needs.

In the following, we first describe a few scenarios and forthcoming applications of RFID technology to illustrate potential privacy problems. In the next chapter, we depict sample technological solutions to those problems and show that each of them has shortcomings. Thus, technology alone is not enough, but needs support from legal tools. In the following chapter, we introduce the European data protection law and evaluate its applicability to the RFID scenarios described earlier. In the last chapter, we conclude that even though the European system provides users with reasonable protection, it will be necessary to continuously follow the development to ensure that the law will not harm useful businesses and that the law remains adequate to emerging technologies.

MIMOSA

… future possibilities. (Niemelä et al., 2005)

MobiLife
TECHNOLOGICAL SOLUTIONS TO PRIVACY PROBLEMS

RFID Blocker

Privacy Bit

LEGAL FRAMEWORK: DATA PROTECTION DIRECTIVES
protection in RFID applications are far from perfect. Actually, it is very questionable whether any technological solution alone could completely protect privacy while simultaneously enabling all the desired applications. Some of the privacy protection technologies (e.g., the kill tag and all the expensive solutions) reduce the useful application area significantly. The rest of them, to be efficient, require strong support from legal or other non-technological systems (e.g., economic or social incentives). For example, Garfinkel's (2005 and 2006) "RFID Bill of Rights" works only as long as everybody voluntarily follows the model, unless there is a law or another strong incentive that forces them to. Likewise, the Privacy Bit presented above does not work without proper legal support. Without good incentives or regulatory force, it is more tempting to ignore technical privacy protection solutions while developing the systems. Also, most solutions depend on people's trust in something (e.g., in technology that is said to protect privacy, in a company that claims to respect its customers' private data, and so on). The legal system that ensures reasonable protection could be the one that is trusted and thus fosters the technology and business. If the legal system included built-in support for adequate technical solutions, it would remarkably reduce the cost of implementing a working solution.
In the following, we briefly present the
current legal framework and evaluate its ability
to foster RFID applications.
The legal basis of data protection within the
European Union is the EU Directives on data protection, especially the general Directive
95/46/EC on the protection of personal data, but
also the more specific Directive 2002/58/EC on
the protection of personal data in the electronic
communications sector. (Kosta & Dumortier,
2008)
The Data Protection Directive applies to the processing of all personal data. Under the Directive, 'personal data' is very broadly defined and includes 'any information relating to an identified or identifiable natural person'. In assessing whether the collection of personal data through a specific application of RFID is covered by the data protection Directive, we must determine (a) the extent to which the data processed relates to an individual and (b) whether such data concerns an individual who is identifiable or identified. (Art 29 WP 105, 2005; Directive 95/46/EC; Kosta & Dumortier, 2008)
Therefore, although not all the data processed in an ambient intelligence system is governed by data protection law, there will be many scenarios where personal information is collected through RFID technology. In particular, if RFID technology entails individual tracking and obtaining access to personal data, data protection law is directly applicable; but also in cases where the information gathered through RFID technology is linked to personal data, or personal data is stored in RFID tags, it is likely that data protection law applies. (Art 29 WP 105, 2005; Kosta & Dumortier, 2008)
The processing of personal data is not illegal
in general. On the contrary, the data protection
law tries to enable useful processing of personal
data. However, the processing needs to be
carried out in accordance with the law.
Especially, the Data Protection Directive
(95/46/EC) requires that personal data must be
EVALUATION

CONCLUSION

REFERENCES
This work was previously published in the International Journal of Technology and Human Interaction, Vol. 5, Issue 2,
edited by B. C. Stahl, pp. 85-95, copyright 2009 by IGI Publishing (an imprint of IGI Global).
Chapter 7.6
Privacy Factors for Successful Ubiquitous Computing
Linda Little
Northumbria University, UK
Pam Briggs
Northumbria University, UK
about their rights. E-commerce consumers, for example, have major concerns about who has access to their personal data (Cranor, Reagle, &
Ackerman, 1999; Jackson, et al., 2003; Earp, et
al., 2005); and show a reluctance to disclose information to commercial web services
(Metzger,
2004).
However, even those consumers who hold
privacy in high regard are able to recognise the
benefits of disclosing information (Hinz, et al.,
2007). We need to understand why it is that
users uphold their right to privacy whilst
simultane- ously giving away sensitive personal
information (Malhotra, Kim, & Agarwal,
2004). In other words, we need to better
understand the cost-benefit trade-off in which
e-consumers will trade personal information
online in order to achieve an improved service
(something referred to as the ‘privacy-
personalisation paradox’ (Awad & Krishnan,
2006)).
The perceived costs and benefits in any
transaction inevitably reflect personal beliefs.
People differ with respect to the value they
place on privacy – and these individual
differences are reflected in scales which have
been designed to measure the strength of
individual feeling in this regard. These include
the Concern for Informa- tion Privacy (Smith,
Milberg & Burke, 1996) and the Internet Users
Information Privacy Concerns (Malhotra, et al.,
2004).
In keeping with the concept of some kind
of individualised privacy setting, designers are
increasingly allowing users to manage their
own concerns by setting privacy preferences.
On the Internet, at least, various architectures
have been suggested that allow personalized
settings (Kobsa, 2003). For example the
Platform for Privacy Preferences (P3P) allows
users to set their own personal privacy
preferences and if visited sites do not match
these then warnings are shown – leaving
responsibility ultimately with the individual
user (Cranor, 2002). Guha et al. (2008) propose a programme called 'none of your business' (NOYB) to protect privacy while online, and have tested the system on social networking
sites. NOYB provides fine-grained control over
user privacy in online services while preserving
much of the functionality provided by the
service. They argue NOYB is a first step
towards a ‘new design paradigm of online
services where the user plays an active role in
performing the sensitive operations on data,
while the service takes care of the rest’ (p.53).
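The matching logic behind such preference tools can be sketched as follows. This is a deliberately minimal illustration: the two preference fields are invented for the example, whereas real P3P policies are XML documents with a much richer vocabulary:

```python
# A user's limits versus a site's declared policy (fields are illustrative).
user_prefs = {"share_with_third_parties": False, "retention_days": 30}
site_policy = {"share_with_third_parties": True, "retention_days": 365}

def mismatches(prefs, policy):
    """Return the declared practices that exceed what the user allows."""
    warnings = []
    if policy["share_with_third_parties"] and not prefs["share_with_third_parties"]:
        warnings.append("shares data with third parties")
    if policy["retention_days"] > prefs["retention_days"]:
        warnings.append("retains data longer than allowed")
    return warnings

for w in mismatches(user_prefs, site_policy):
    print("Warning:", w)   # the decision is still left with the user
```

Note that the tool only warns; as the text observes, responsibility for acting on the mismatch remains with the individual user.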
Such tools are useful, but they are not
future-proof. Specifically, they could not cope
with the kinds of seamless, anywhere, anyplace
exchanges of personal information that are
anticipated by designers of ubiquitous
computing systems. Systems that collect,
process and share personal information are
prerequisites for the creation of intelligent
environments that can anticipate users' needs
and desires (Dritsas, Gritzalis, &
Lambrinoudakis, 2006). Pervasive technologies
are expected to be responsive to different
contexts and to act on the user’s behalf
seamlessly – but will privacy violations
inevitably ensue?
Researchers disagree. On the one hand, Olsen, Grudin, and Horvitz (2005) argue that tools could be constructed to capture quite complex privacy preferences, preferences that are tailored to the context of the exchange, the sensitivity of the enquiry and the disclosure preferences of the individual. Such tools, if feasible, would prevent privacy violations in the day-to-day exchanges of ubiquitous computing. On the other hand, Palen and Dourish (2003) argue that a priori privacy configurations and static rules will not work, and insist that the disclosure of information needs to be controlled dynamically and needs, essentially, to be passed into the hands of software agents designed to uphold general privacy preferences.
This begs the question of just what kinds of
assurances software agents might look for
before agreeing to the release of personal data.
As a clue to this we might start by looking at
established principles underpinning the right to
privacy (Kobsa, 2007). For example, the U.S.
Public Policy Committee of the Association
for Computing
130
Machinery (USACM) has laid down the principles? Do these concerns vary as a
following principles for privacy management: function of context? Will users have enough
confidence in privacy management procedures
a. Minimization: Store and use only
essential data and delete it once no longer
required.
b. Consent: Provide simple opt-in and opt-out procedures that ensure consent to the storage and use of personal data is meaningful.
c. Openness: Ensure transparency in data collection and use – making salient the
default procedures for the storage and use
of data and being explicit about how it
might be made available to others. Also
ensure that privacy policies are
communicated effectively.
d. Access: Provide the individual with the capacity to inspect their data and to
determine how it has been made available
to others, also how to repair any violation
of privacy rights.
e. Accuracy: Ensure that personal
information is sufficiently accurate and
up-to-date and propagate corrections
quickly to parties that have received or
supplied inaccurate data.
f. Security: For all types of storage,
maintain all personal information securely
and protect it against unauthorized and
inappropriate access or modification.
g. Accountability: Be accountable for data
storage and proper adherence to privacy
policies, ensuring that those responsible
are trained, authorized, equipped, and motivated.
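As a concrete illustration of principle (a), minimization can be operationalised as a retention sweep that deletes records once they are no longer required. The record fields and the 90-day window below are assumptions for the sketch, not part of the USACM text:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)   # assumed retention window for the example

records = [
    {"user": "anita", "purpose": "order fulfilment", "stored": datetime(2009, 1, 5)},
    {"user": "dave", "purpose": "order fulfilment", "stored": datetime(2009, 6, 1)},
]

def sweep(records, now):
    """Keep only the records still inside the retention window."""
    return [r for r in records if now - r["stored"] <= RETENTION]

kept = sweep(records, datetime(2009, 6, 15))
print([r["user"] for r in kept])   # the stale record has been deleted
```

Running such a sweep periodically is one way a system can demonstrate that data is stored and used "only as long as required".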
METHOD
she needs in that particular aisle and their
exact location. The device automatically
records the price and ingredients of every item
she puts into the trolley and deletes the
information if any item is removed. When
Anita is finished she presses a button on the
PDA and the total cost of her shopping is
calculated. Anita pays for the goods by placing
her finger on the biometric device and her
account is automatically debited, no need to
unpack the trolley or wait in a queue. The
trolley is then cleared to leave the supermarket.
Anita leaves the supermarket, walks to her car
and places her shopping in the boot.
E-voting Scenario: Natasha decides she
wants to vote in the next election using the new
on-line system. She goes on-line and requests
electronic voting credentials. Shortly before
polling day a polling card and separate
security card are delivered to Natasha’s home.
They arrive as two separate documents to
reduce the risk of interception. Natasha picks
up two of the letters from the doormat and puts
the letters in her pocket as she rushes out of the
door to head for work. While travelling on the
local underground railway system Natasha
decides to cast her vote on her way to work.
The letters have provided her with a unique personal voting number and candidate numbers, which allow her to register a vote for her chosen
candidate. She takes out her mobile phone and
types her unique number into it. Her vote is
cast by entering this unique number into her
phone and sending it to a number indicated on
the polling card. Her phone then shows a text
message: THANK YOU FOR VOTING. YOU
HAVE NOT BEEN CHARGED FOR THIS
CALL. When Natasha arrives at work she logs
on to the voting site to see if her vote has been
registered. While at her computer with her
polling cards on the desk in front of her a
colleague looks over her shoulder, she can see
that Natasha is checking her vote but can’t see
who she has voted for. Once the result of the election has been announced, Natasha checks that the correct candidate name is published next to her unique response number to ensure that the system has worked properly.
Financial Scenario: Dave is at home
writing a ‘to do’ list on his PDA. The PDA is
networked and linked to several services that
Dave has authorised. While writing his list he
receives a reminder from his bank that he
needs to make an appointment with the
manager related to his yearly financial health
check. He replies and makes an appointment
for later that day. When he arrives at the bank
he is greeted by the bank concierge system (an
avatar presented on a large interface). The
system is installed in the foyer of the bank
where most customers use the bank's facilities.
The avatar tells Dave the manager, Mr Brown,
will be with him soon. The avatar notes that
Dave has a photograph to print on his ‘to do’
list and asks if he would like to print it out at
the bank as they offer this service. The avatar
also asks Dave to confirm a couple of recent
transactions on his account prior to meeting
Mr Brown.
Procedure
Hygiene factors
If you could do it through something like the
BBC because it’s typically British, you are
going to trust the BBC, it’s always been there,
it’s something tangible, but for a lot of older
people, it’s new and it’s different, you know
they don’t trust it, whereas they trust their
television because they have watched it all of
their life.
I mean they don't really know where the information is going and what individuals are actually accessing it or is it just completely churned up by computers? I don't even know but the information is going somewhere and the customer, the consumer, should actually have, be allowed to know where that information is going and it should be an open process, open to the consumer, if the consumer wants to know of course, some people might not want to know, but if the consumer wants to know how all that information is processed it should be open.

f. Context aware: Participants noted the dynamic and context-dependent nature of human behaviour, and questioned whether 'rules' for the disclosure of personal information could ever be sensitive enough. For example, a system programmed to alert parents to a minor accident would behave inappropriately if one of the parents was very ill or away on holiday. Participants agreed that the ordeal of changing and re-setting preferences would be tedious, time consuming and complex.

Because if it makes a decision for you and you think to yourself, I've changed my mind, I'm not in the mood for that and therefore you have mucked …

I do think the hospital should have access to your information so say, if I do have a weak heart, that should be able to convey to the hospital that plus your entire medical record.

Discussion revealed participants' concerns over systems being truly sensitive to circumstances under which health information could legitimately be exchanged. Leakage of sensitive information in inappropriate circumstances was seen as very problematic. Would the system only reveal what information was appropriate at that moment in time, or would boundaries be breached? For example, if a person was admitted to hospital with a broken foot, would a health professional have full access to a health record that revealed depression or a sexually transmitted disease?

h. Easy to use: Participants, in particular in the older age group, discussed concern over the complexity of ubiquitous systems. Comments related to the fact that existing technologies are difficult to use. Participants commented that setting preferences for who has access to information is time consuming and complicated. Comments related to the dynamic, complex nature of human behaviour and the fact that we are not always predictable. Participants questioned whether in reality we could actually set preferences for all types of information. Discussion also focused on age differences in technology use, experience and familiarity.

Motivators

… beneficial and would create a more efficient service.

I mean I do think that having all the information in one place and an exchange of information and the doctor and the hospital and maybe even the ambulance service being able to forward the information is good but I don't know whether I like it to that degree.

De-Motivators

The other thing is if you actually hand over all responsibility to automated systems you know if they make a mistake in your calculation and you are not actually paying any attention, you are just trusting this, you know it is essentially dis-empowering you.
Table 2. Stepwise regression analysis for health information exchange in ubicomp contexts

Predictor factor   r²      B       Std. error   β       t-value   p-value
Security           .042    .290    .048         .193     6.070    .000
Design             .056   -.197    .046        -.121    -4.231    .000
Trust              .064   -.274    .080        -.127    -3.416    .001
Data-management    .070    .256    .082         .116     3.114    .002
Benefit            .074   -.093    .042        -.068    -2.219    .027

Table 3. Stepwise regression analysis for financial information exchange in ubicomp contexts

Predictor factor   r²      B       Std. error   β       t-value   p-value
Security           .058    .318    .046         .214     6.854    .000
Data-management    .065    .336    .081         .155     4.146    .000
Benefit            .070   -.088    .041        -.065    -2.119    .034
Trust              .073   -.157    .079        -.074    -1.980    .048
Stepwise regression analyses were conducted to establish those factors that predict information exchange within the four different contexts. The four dependent variables were health, finance, lifestyle and identity information, with security, trust, design, social concerns, benefit, data-management and privacy preferences as the independent variables.

The stepwise regression for the health model produced a fit (R² = 27.1%) of the variance explained. Security (20.5%), design (3.3%), trust (1.4%), data-management (1.2%) and benefit (.7%) were all found to be predictive factors for exchanging health information in ubicomp contexts. The Analysis of Variance (ANOVA) revealed that the overall model was significant (F 5, 1176 = 18.66, p<0.001).

The stepwise regression for the personal identity model produced a fit (R² = 23.5%) of the variance explained. Security (18.9%), design (3%) and data-management (1.6%) were all found to be predictive factors for exchanging identity information in ubicomp contexts. The Analysis of Variance (ANOVA) revealed that the overall model was significant (F 3, 1178 = 22.91, p<0.001).

The stepwise regression for the lifestyle model produced a fit (R² = 23.6%) of the variance explained. Social (16.3%), design (3.1%), security (1.7%) and trust (2.5%) were all found to be predictive factors for exchanging lifestyle information in ubicomp contexts. The Analysis of Variance (ANOVA) revealed that the overall model was significant (F 4, 1177 = 17.286, p<0.001).
Finance Model

The stepwise regression for the finance model produced a fit (R² = 27.1%) of the variance explained. Security (24.1%), trust (.6%), data-management (1.4%) and benefit (1%) were all found to be predictive factors for exchanging financial information in ubicomp contexts. The Analysis of Variance (ANOVA) revealed that the overall model was significant (F 4, 1177 = 23.32, p<0.001).

DISCUSSION

Earlier we described three key research questions as follows: 1. What are users' key concerns regarding privacy management in a ubiquitous context, and do they reflect 'expert' privacy principles? 2. Do these concerns vary as a function of context? 3. Will users have enough confidence in privacy management procedures to hand over
Table 4. Stepwise regression analysis for personal identifiable information exchange in ubicomp contexts

Predictor factor   r²      B       Std. error   β       t-value   p-value
Security           .036    .257    .039         .207     6.588    .000
Design             .048   -.146    .039        -.109    -3.787    .000
Data-management    .055   -.169    .056        -.093    -3.015    .003

Table 5. Stepwise regression analysis for lifestyle information exchange in ubicomp contexts

Predictor factor   r²      B       Std. error   β       t-value   p-value
Social             .027    .085    .026         .103     3.210    .001
Design             .037   -.122    .038        -.092    -3.199    .001
Security           .044    .168    .042         .138     4.024    .000
Trust              .055   -.198    .053        -.113    -3.712    .000
REFERENCES
This work was previously published in the International Journal of E-Business Research, Vol. 5, Issue 2, edited by I. Lee, pp.
1-20, copyright 2009 by IGI Publishing (an imprint of IGI Global).
Chapter 7.7
Privacy Threats in Emerging Ubicomp Applications:
Analysis and Safeguarding
Elena Vildjiounaite
VTT Technical Research Centre of Finland, Finland
Tapani Rantakokko
Finwe LTD, Finland
Petteri Alahuhta
VTT Technical Research Centre of Finland, Finland
Pasi Ahonen
VTT Technical Research Centre of Finland, Finland
David Wright
Trilateral Research and Consulting, UK
Michael Friedewald
Fraunhofer Institute Systems and Innovation Research, Germany
Copyright © 2010, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
to the problem of users' privacy protection could be to allow users to control how their personal data can be used. The authors' experience with mobile phone data collection nevertheless suggests that when users give their consent for data collection, they don't fully understand the possible privacy implications. Thus, application developers should pay attention to privacy protection; otherwise, such problems could result in users not accepting Ubicomp applications. This chapter suggests guidelines for estimating threats to privacy, depending on real-world application settings and the choice of technology, and guidelines for the choice and development of technological safeguards against privacy threats.
INTRODUCTION
understanding of possible problems, however, and safeguarding against them, including safeguarding against possible privacy
implications. There is no doubt that the notion
of privacy alters with time, so that with the
invention of phones (and especially mobile
phones), for example, physical distance from
other people can no longer guarantee privacy.
Similarly, with the development of cameras
(especially digital cameras, with their
capability for recording more views than their
owners can sort through carefully), people have
become used to seeing more details of other
people’s lives than was ever possible before.
There are very important differences
between past and future technologies, however,
which could change our lives more quickly than
we could possibly adapt our understanding of
the world, human behaviour, ethics and laws to
the new technologies: first, past technologies
were largely controlled by a human, whereas
future technologies will be capable of automatic
actions. Since it is much easier to notice a
human observer than a tiny sensor, it will be
possible to collect much more data without
people being aware of it. Second, large-scale
accumulation of data in a digital form will no
longer require manual (slow) human work in
order to connect information from different
sources, so that it may be easier to assemble
the full life story of a person in the future than
it was to find scattered pieces of information in
the past. Third, modern devices are smaller in
size, more reliable and move closer to the
human body than was the case in the past, and it
is proposed that these could be embedded into
clothes, watches or jewelry. Consequently, it
will become easier to have always-on mobile
devices, but more difficult to switch them off.
Our perception of the privacy aspect known as
the “right to be left alone,” for example, has
changed significantly with the invention of
stationary phones and especially mobile
phones, but it has still been preserved by the
possibility for switching the phone off or not
hearing it ringing when taking a shower or
whether a person did not hear a phone call or
was simply not in the mood to answer it). Will
one still be able to avoid undesired conversation
in the Ubicomp future of embedded
connectivity, or will society change so that
people will not be offended or angry when their
children, relatives or subordinates do not
answer a call that they have evidently heard?
How society will adapt to the capabilities of
new technologies is an open question, but we
think that technology developers should not rely
on human nature changing quickly, and the
results of deploying new technologies in
computer-supported collaborative work
(Bellotti,
1993) support this opinion.
This chapter first summarises the views of different researchers on what privacy is, after which it will briefly describe how Ubicomp researchers see the world of the future and what possible implications for users' privacy may not be safeguarded in the scenarios. After that, the chapter will present the authors' experiences of mobile phone data collection and users' opinions regarding their privacy expectations before and after data collection, which suggest that the privacy implications were underestimated before data collection. It will then present the state of the art in privacy-enhancing technologies and highlight the gaps that create privacy risks. After that it will suggest guidelines for estimating the threats to privacy, depending on real-world application settings and on the choice of technology, as well as guidelines for the choice and development of technological safeguards against these threats.
Figure 1. Mobile IT forum, part of a "Daily Life" scenario from FLYING CARPET, Version 2.00 (Kato, 2004), page 4
or weeks, unprotected. Although it has always been possible to look through somebody's address book, diary or photo albums in order to find the desired information, this has usually required visiting that person's room and searching through the items there, which may be difficult, at least for a person living in another place. Nowadays, personal mobile devices can store as much in the way of information and photos as several old-style address books, diaries, and photo albums (and will store even more when Personal Lifetime Store application scenarios (Gemmel, 2004) become a reality and when mobile payment logs can also be stored), but they are far less well protected, because they are not locked inside a house or a drawer. Personal mobile devices accompany their owners everywhere and can reveal large quantities of stored personal data, because the users often bypass the inconvenient security measures available for data protection (such as entering a password or rolling a finger across a fingerprint sensor). Since no convenient, user-friendly authentication has yet been developed, personal Ubicomp devices and non-personal smart spaces are likely to disclose their users' data and secrets, and we are now obliged to suggest how to reduce this risk.

The threats to privacy presented in this chapter are not really new, because the reasons for their existence (including conflicts of interest between people and organisations, human curiosity, envy, greed, and beliefs in one's own right to control others) are age-old problems. On the other hand, technology has changed the ways in which personal data can be disclosed.

The components of a typical Ubicomp application are shown in Figure 2. Each component can cause problems in its own way. Privacy problems essentially fall into three major groups, the best-known of which concerns problems associated with information flow from the user, that is, due to the acquisition, transmission, and storage of personal data in large quantities. Most privacy-enhancing technologies (PETs) are being developed for the protection of personal data in networked applications, but new Ubicomp applications present new challenges. It has often been proposed, for example, that awareness between family members and colleagues should be supported via the transmission of video data, which violates traditional personal expectations regarding the notion that "if I am hidden behind a wall, I am invisible."

Figure 2. A generic view of an Ubicomp application: the thin arrows indicate information collection, transmission and storage; the thick arrows indicate information push.

Memory aids (personal memory aids (Gemmel, 2004; Healey, 1998) and recordings of work meetings (Aschmoneit, 2002)) imply the storage of raw video data, which violates personal expectations regarding the limits of human attention and memory. There are two reasons for these problems. First, as work in the computer-supported cooperative activity domain has shown (Bellotti, 1993), humans are not accustomed to environments full of sensors, and continue to behave according to their expectations regarding their privacy in the real world. The second reason is the blurring of boundaries between "traditional" application domains. For example, work-related communications from home can intrude into one's personal life, and conversations on private matters from smart workplaces can be recorded automatically along with work-related conversations. In addition, sensors which were traditionally used only in certain domains (e.g., physiological sensors associated with health care, video cameras for security purposes) have been suggested for use in other domains, such as entertainment. Since the traditional view of the entertainment domain assumes that its data are not very confidential (and consequently do not require strong protection measures), there is a danger of the disclosure of health problems detected by physiological sensors in the entertainment domain.

The second group of privacy problems concerns those caused by linkages between different kinds of data (mainly stored data). For example, it has been proposed that a personal memory aid should not record everything, but instead, it should measure the personal arousal level via skin conductivity sensors and other physiological sensors and record only the exciting scenes (Gemmel, 2004; Healey, 1998). Since none of the proposed memory aid prototypes has good access control over stored data, these would allow a young boy's parents, for example, to find out easily which girl their son is most interested in. Physiological sensors also have been proposed for measuring the degree of approval of TV programmes (Nasoz, 2003; Palmas, 2001). In this case, the linking of personal physiological responses to information on TV programmes can facilitate the surveillance of citizens from the point of view of whether they support government decisions or not. The linkability problem is in general acknowledged, and PETs in networked applications aim at protection from such data linkability. In such applications as smart spaces and personal devices, however, the problem of data linkability has received less attention, and privacy problems with memory aids, for example, are usually discussed from two points of view: first, how to achieve agreement with the people recorded; and second, whether the police could search through the recorded data or not. The problem of avoiding the curiosity of family members is usually ignored. The dangers of data linkages are in general underestimated by researchers, as we have observed in the example of our own data collection system (see next chapter).
The third group of privacy problems comprises those caused by information flow towards the users, either because technology-initiated communication intrudes into personal life, because the content of the information can disclose private information, or because actuators fail (e.g., to open or close a door at home). Intrusions of technology into personal life can happen when an application interacts with people (e.g., reminds someone to do something) or when an application does not allow people to escape communication with others. Currently, it is easy for a person to say that he missed a phone call because the battery in his mobile phone was empty, or because of street noise, and so forth, but will it be as easy to avoid undesirable communications in the future, when communication is embedded in clothes and battery life is longer? Most parents have observed how their children miss phone calls or "forget" mobile phones at home when they want to escape from their parents' control; and although such situations are harmful for the parents' nerves, it seems that in most cases, it is necessary for children to make their own decisions and take risks.
The content of information can disclose personal data in two possible ways: if it is delivered in the presence of other people and they hear (or see) the message (e.g., if a movie recommender application suggests that the users should watch adult videos in the presence of their children), or if the information contains data about people other than the user (as one can notice more details during the playback of a memory aid than during a live conversation).

This group of problems is the least studied of all, and PETs dealing with these problems are almost non-existent. What is also important about this group of problems is that technology-initiated communications can reduce user acceptance (users do not always like it when the technology makes the decisions) or hinder personal development. As the work of Nissenbaum (2004) shows, "the right to be left alone" is very important for personal development because people need relative insularity to develop their goals, values, and self-conceptions. Furthermore, if technology cares about personal safety and comfort and relieves people from many responsibilities (such as remembering to take one's keys or to close a door), it becomes more difficult to develop responsibility in children. Children traditionally learn to be responsible for not losing keys, for doing their homework, for taking the right books to school, and for other small everyday tasks, but if all these responsibilities are shifted to Ubicomp technologies, what will replace them in growing children? To the best of our knowledge, the scenarios do not suggest any replacement. Instead, the role of children in many Ubicomp scenarios is limited to playing computer games. Research into computer-supported learning is an exception, but even there learning is mainly supported by augmented reality (Price, 2004), which is also a kind of game. One example of Ubicomp scenarios regarding children is the ITEA roadmap (ITEA, 2004) screenplay of "the Rousseaus' holiday": holidays spent by a family consisting of a mother, father, and two children (10 and 13 years old). The screenplay describes how the family goes to a summer cottage "at the seaside on Lonely Island off the Mediterranean coast of France" and that "the kids are unhappy to leave home … because of the high-end, virtual reality video and gaming entertainment equipment, which was recently installed … in their house" (p. 134). If the roadmap leads us to a world in which school children are not interested in Lonely Islands, will we want such a world?
AN EXAMPLE OF UNEXPECTED PRIVACY PROBLEMS IN MOBILE PHONE DATA COLLECTION
user. Such a dialogue may annoy the individual or reveal personal health details if it happens at the wrong moment or in public. An application which filters shopping advertisements according to user preferences also has a high control level, because the user can never know about certain shopping alternatives if they are filtered out. (An important question for such applications is who sets the filtering rules and how they can be prevented from favouring a particular shop.)

With more extensive information collection, transmission and storage capabilities and higher control levels, technology poses more privacy threats. Most Ubicomp scenarios involve application-dependent information storage and a lot of wireless communication (between objects, people, and organizations). We suggest that significant threats to privacy can arise if technology penetrates walls and the human body, for instance, by using physiological, video and/or audio sensors. Significant threats are also likely to be caused by high control levels (i.e., the capability of a technology to act on behalf of a person, e.g., to call an ambulance in an emergency) or by biometric sensors (due to the possibility of identity theft).

We also suggest that privacy threats should always be regarded as high when the linkage of data from several sources is possible: for example, when either a lot of data about one person can be aggregated (as in most personal devices), or certain data about a large number of people can be linked. We suggest that the dangers of information linkage are often under-estimated, as we have observed in the case of our data collection system.
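The aggregation and linkage risk described above can be made concrete with a small sketch. All identifiers, field names, and records below are hypothetical, invented only to illustrate how two individually innocuous logs become revealing once joined on a shared identifier:

```python
# Sketch: two individually harmless data sets become revealing once linked.
# All identifiers and records here are hypothetical.

location_log = [
    {"device_id": "dev42", "time": "2007-03-01T19:30", "place": "cardiology clinic"},
    {"device_id": "dev42", "time": "2007-03-02T08:10", "place": "office"},
]

purchase_log = [
    {"device_id": "dev42", "time": "2007-03-01T20:05", "item": "beta blockers"},
]

def link_by_device(locations, purchases):
    """Join two logs on the shared device identifier."""
    profile = {}
    for rec in locations + purchases:
        profile.setdefault(rec["device_id"], []).append(rec)
    return profile

profile = link_by_device(location_log, purchase_log)
# dev42 now links a clinic visit to a medication purchase --
# a health inference that neither log supports on its own.
print(len(profile["dev42"]))  # 3 linked records
```

Neither log alone mentions a health condition; the join does, which is exactly why linkable identifiers carry a high threat level.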
Medium threats are associated with positioning sensors (without time stamps they provide location data, but not much activity data, whereas location plus time information is a much greater threat to privacy) and with a medium level of technology control (the capability to make proactive suggestions, e.g., to issue reminders). Fairly low threat levels are associated with a low level of control (e.g., ranking advertisements according to criteria explicitly set by the user) and with comfort sensors (lighting, heating, etc.).
We would like to emphasize that threats to personal privacy are very often caused by mismatches between the application control level and application intelligence, and particularly by the fact that the technology is already capable of storing and transmitting a lot of data, but is not capable of detecting which data it should not store or transmit (with the exception of predefined data categories such as health and finance). In order to ensure "the right to be left alone," however, and to prevent the accidental disclosure of confidential
data, for example, via an audio reminder to take medicine when the user is in somebody's company, it is very important that the intelligence of an application should correspond to its level of control (in other words, to its level of autonomy: what the technology can do on its own initiative). Another example can be found in (Truong, 2004), which presents scenarios of Ubicomp applications made by end users, where one of the users suggested automatic recordings of parties in his home. If such an application is deployed in a large home and records two persons discussing personal matters in a room without any other guests, for example, it can lead to privacy problems. These would not appear if the application were intelligent enough not to record such a scene.
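The idea that such problems "would not appear if the application were intelligent enough not to record such a scene" can be sketched as a simple context gate placed in front of the recorder. The field names and the "at least three people in a shared room" rule below are our own illustrative assumptions, not taken from the chapter:

```python
# Sketch of a context gate that decides whether a party-recording
# application may store a scene. Field names and thresholds are
# illustrative assumptions only.

def may_record(scene):
    """Record only scenes that look like the party itself,
    not a private two-person conversation in a side room."""
    if scene["people_count"] < 3:
        return False          # likely a private conversation
    if not scene["room_is_shared"]:
        return False          # side rooms are off limits
    return True

print(may_record({"people_count": 8, "room_is_shared": True}))    # True
print(may_record({"people_count": 2, "room_is_shared": False}))   # False
```

A real application would need far richer context sensing, but even a crude gate like this illustrates matching application intelligence to its control level.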
[Figure: examples of high-threat applications and corresponding safeguards. High real-world threats: health care and safe driving, security, video-based surveillance of kids. Technology examples: "take a pill" and other reminders; physiological sensors in personal memory aids and TV personalization; home automation (door locks, stove safety, etc.); time-stamped log of location and/or activity; live video link or video recording. Safeguard advice includes: avoid location data logging; be sure that only the user understands a message.]
In other domains we suggest that the deployment of such applications should be postponed until the technology becomes more intelligent. For example, we suggest that the use of physiological and video sensors and data stamped with absolute times should be avoided unless it is critical for the preservation of life and security. The suggestions made above do not apply to cases where the technology performs its tasks reliably and the users do not perceive the privacy problems as being important; for example, elderly people may be willing to trade off privacy against the gaining of support in time, and babies do not care about privacy at all.

In addition, we suggest the following good practices:

• Real-time data processing: Select algorithms and hardware capable of processing data immediately in real time (performing real-time feature selection, or finding answers to predefined "pattern exists or not" queries), so that the storage of raw data (even temporarily) is avoided;
• Encrypted or relative location stamping and time stamping: For example, instead of investigating the dependence of high blood pressure on absolute time, an application should stamp the data relative to the moment of taking a pill or calculate the average time when the user's blood pressure was above a given threshold;
• Data deletion or editing after an application-dependent time: For example, when a user buys clothes, all information about the material, price, designer, and so forth, should be deleted from the clothes' RFID tags. For applications that require active RFID tags (such as finding lost objects (Orr, 1999)), the RFID tag should be changed so that no links are left between the shop database and the personal clothes. Similarly, the location of an emergency call does not require the storage of long-term location data, so that this should be avoided;
• Data processing in a personal device instead of sending data to the environment: Instead of submitting a query with personal financial preferences to a shop in order to find suitable products, for example, the application should submit a more generic query, even at the cost of an increase in data filtering in personal devices, and anonymous payment procedures should be used whenever possible.
• Choice of communication technologies which do not use permanent hardware IDs in their protocols, or at least have control over access to these IDs, and which allow the communication range to be controlled: The current situation with Bluetooth communication, for example, is that if a device owner enables ad-hoc communication (in order to use the full range of possible applications), the device responds to each request with its ID, allowing user tracking even over walls, due to the fairly large communication range that is beyond user control.
• Detection of hardware removals and replacements: Users are currently not warned about replacements or removal of attached sensors or memory cards when devices are in the "off" state, thus making physical tampering easier (Becher, 2006). Since personal devices will be monitoring a user's health in the future (Bardram, 2004; ITEA, 2004), unauthorized replacement of sensors could result in a death if they failed to detect a health crisis.
• Transparency tools: These are user-friendly ways to warn users about possible privacy violation problems which might result from the technologies deployed around him/her, and ways to configure technology settings easily. For example, users might prefer to sacrifice some of the benefits of an application for the sake of anonymity, to reduce the level of control of applications, or to adjust the way in which incoming advertisements are filtered (if advertisements which are considered uninteresting by the application are completely removed, this carries a danger that the user will never hear about some options). One solution could be to have several "privacy profiles" in devices, so that each profile defines which groups of applications and means of communication are enabled and which are not in different settings. Users would then just need to switch between profiles instead of dealing with a bundle of options with the risk of forgetting some of them. Our own experiences with data collection have shown that since even Ubicomp application developers do not fully understand the possible consequences of their data collection, transparency tools should be really carefully designed.
• Means of disconnecting gracefully: Users should be able to switch an application or device off completely, or to switch off some of its functionalities, in such a way that other people do not take it as a desire by the user to hide, and in such a way that the device is still usable (e.g., users should be able to check calendar data while having the communication functionality switched off).
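The "encrypted or relative location stamping and time stamping" practice in the list above can be sketched as follows. The pill-taking reference event comes from the blood-pressure example in the list; the function and field names are our own illustrative choices:

```python
# Sketch: store physiological readings relative to a reference event
# (here, the moment of taking a pill) instead of with absolute times,
# so that stored records carry no absolute timestamps at all.
from datetime import datetime

def relative_stamp(reading_time, pill_time):
    """Replace an absolute timestamp with minutes since the pill."""
    return (reading_time - pill_time).total_seconds() / 60.0

pill = datetime(2007, 3, 1, 8, 0)
raw_readings = [(datetime(2007, 3, 1, 8, 30), 142),
                (datetime(2007, 3, 1, 9, 0), 128)]

# Only the relative offsets and values are persisted.
stored = [(relative_stamp(t, pill), bp) for t, bp in raw_readings]
print(stored)  # [(30.0, 142), (60.0, 128)]
```

The stored records still support the medically relevant query ("how does blood pressure change after medication?") while revealing nothing about when, in absolute terms, the readings were taken.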
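The "privacy profiles" idea raised under transparency tools might look like the following in code. The profile names, application groups, and communication means are invented for illustration only:

```python
# Sketch: named privacy profiles, each enabling groups of applications
# and means of communication; the user switches profiles instead of
# juggling many individual options. All names are illustrative.

PROFILES = {
    "home":      {"apps": {"reminders", "memory_aid"}, "comms": {"wifi", "bluetooth"}},
    "meeting":   {"apps": {"calendar"},                "comms": {"wifi"}},
    "anonymous": {"apps": set(),                       "comms": set()},
}

class Device:
    def __init__(self):
        self.profile = "home"

    def switch(self, name):
        # One action reconfigures every application and radio at once.
        self.profile = name

    def allowed(self, app, comm):
        p = PROFILES[self.profile]
        return app in p["apps"] and comm in p["comms"]

d = Device()
d.switch("meeting")
print(d.allowed("calendar", "wifi"))         # True
print(d.allowed("memory_aid", "bluetooth"))  # False
```

A single profile switch replaces a bundle of per-application settings, which addresses the "risk of forgetting some of them" noted above.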
CONCLUSION

ACKNOWLEDGMENT

REFERENCES
This work was previously published in Advances in Ubiquitous Computing: Future Paradigms and Directions, edited by S.
Mostefaoui, Z. Maamar, and G. Giaglis, pp. 316-347, copyright 2008 by IGI Publishing (an imprint of IGI Global).
Chapter 7.8
Deciphering Pervasive Computing:
A Study of Jurisdiction, E-Fraud and Privacy in Pervasive Computing Environment
Grace Li
University of Technology, Sydney, Australia
Copyright © 2010, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
future computing environment better, the legal and regulatory framework should focus on the improvement of internal monitoring of risks and vulnerabilities and greater information sharing about these risks and vulnerabilities. Moreover, the role of government should focus on education and training on the care and use of these technologies and better reporting of risks and responses. A fully embedded computing environment that is safe and sound to live in will need more collaboration between individuals, commercial organizations, and the government.
INTRODUCTION
substantial. With a full embedment of pervasive computing technology in the near future, the vulnerability is bound to be raised to a more significant level.

While it can be difficult to predict precisely how technology will evolve, studying the history of the written letter to telegraphy, telegraphy to telephone, telephone to Internet, and mainframe to personal computer, it seems reasonable to note that in the not-too-distant future, interactive computing technology, in whatever form, will be an integral, invisible constituent of our lives. In the course of doing so, the computing technology will also most definitely raise problems in relation to the legal frameworks that surround it. The following part of this chapter therefore identifies and analyses three major legal aspects connected with the future embedded computing environment: the jurisdiction issue, online fraud, and privacy.
E-FRAUD

PRIVACY

CONCLUSION

REFERENCES

ADDITIONAL READING

KEY TERMS

ENDNOTES
1
Mark D. Weiser (July 23, 1952 – April 27,
1999) was a chief scientist at Xerox
PARC. Weiser is widely considered to be
the father of ubiquitous computing, a term
he coined in 1988.
2
Weiser wrote some of the earliest chapters on the subject, largely defining it and sketching out its major concerns. Recognising that the extension of processing power into everyday scenarios would necessitate understandings of social, cultural and psychological phenomena beyond its proper ambit, Weiser was influenced by many fields outside computer science, including "philosophy, phenomenology, anthropology, psychology, post-Modernism, sociology of science and feminist criticism." He was explicit about "the humanistic origins of the 'invisible ideal
in post-modernist thought'", referencing as well the ironically dystopian Philip K. Dick novel Ubik. MIT has also contributed significant research in this field, notably Hiroshi Ishii's Things That Think consortium at the Media Lab and the CSAIL effort known as Project Oxygen.
3
[2002] HCA 56, available at http://www.kentlaw.edu/perritt/courses/civpro/Dow%20Jones%20&%20Company%20Inc_%20v%20Gutnick%20%5B2002%5D%20HCA%2056%20(10%20December%202002).htm
4
(W.D. Pa. Feb. 8, 2000).
5
Lewis v. King, [2004] EWCA (Civ) 1329 (Eng. C.A.), available at http://www.courtservice.gov.uk/judgmentsfiles/j2844/lewis-v-king.htm
6
Pub. L. No. 105-277, 112 Stat. 2681 (1998) (codified at 15 U.S.C. 6501-6506).
7
15 U.S.C. 6501(2) (2000).
8
See Council Directive 95/46/EC of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data, art. 4, § 1(c), 1995 O.J. (L 281) 31, 39; Reidenberg, J. R. & Schwartz, P. M., Data Protection Law and On-line Services: Regulatory Responses 28 (1998), available at http://europa.eu.int/comm/internal_market/privacy/docs/studies/regul_en.pdf (last Website visit 26 May 07).
9
It was amended in 1994, 1996 and in 2001 by the USA PATRIOT Act.
10
"Smart houses" (or "aware homes") incorporate intelligent, embedded systems which interact with the occupants and with outside systems. See, e.g., Georgia Institute of Technology, The Aware Home, http://www.cc.gatech.edu/fce/ahri/; Philips Research, Ambient Intelligence: A New User Experience, http://www.research.philips.com/InformationCenter/Global/FArticleSummary.asp?lNodeId=712; see also, e.g., K. Ducatel et al., European Comm'n, IST Advisory Group, Scenarios for Ambient Intelligence in 2010, 4-7, (2001)
This work was previously published in Risk Assessment and Management in Pervasive Computing: Operational, Legal,
Ethical, and Financial Perspectives, edited by V. Godara, pp. 218-232, copyright 2009 by Information Science Reference
(an imprint of IGI Global).
Chapter 7.9
Privacy Control Requirements for Context-Aware Mobile Services
Amr Ali Eldin
Accenture BV, The Netherlands
Zoran Stojanovic
IBM Nederland BV, The Netherlands
between both types of information: whether to control users' identities, by deterring identity capturing through anonymity solutions (Camenisch & Herreweghen, 2002; Chaum, 1985; Lysyanskayal, Rivest, Sahai, & Wolf, 1999); or to control private information perception, such as watermarking techniques as in Agrawal and Kiernan (2002), distributing and encrypting of data packets in Clifton, Kantarcioglu, Vaidya, Lin, and Zhu (2002), and physical security through limiting data access within a specified area (Langheinrich, 2001). Most of the previous efforts lack the involvement of users. Stated differently, user control of their privacy has not been taken seriously as a requirement for the design of context-aware services in previous efforts. Instead, a lot of effort has concentrated on developing sophisticated encryption mechanisms that prohibit unauthorized access to private information when stored locally on a database server managed by the information collector or by a trusted third party. We argue that not only user identity information, but also other information with different degrees of confidentiality, can represent a private matter as well, especially when user context is associated with it. Therefore, controlling user contextual information collection could represent a more realistic approach in such context-aware systems. Controlling users' contextual information perception implies making decisions on whether to allow contextual entities to be collected by a certain party or not, in what is known as user consent decisions.
The Platform for Privacy Preferences (P3P),
submitted by the World Wide Web Consortium
(W3C), provides a mechanism to ensure that
users can better understand service providers’
privacy policies before they submit their
personal information, but it does not provide a
technical mechanism to enforce privacy
protection and to make sure that organizations
work according to their stated policies (Cranor,
Langheinrich, Marchiori, Presler-Marshall, & Reagle, 2004). However, there have been some efforts to extend P3P to the mobile environment to provide users with control over their location data. Most of
these efforts focused more on the technical facilitation of this extension to suit mobile devices and communication protocols, such as Langheinrich (2002) and Nilsson, Lindskog, and Fischer-Hübner (2001). More work is required before the P3P protocol becomes widely applicable for the mobile environment, due to its limitations in automating the expression and evaluation of privacy policies and user preferences, given the limited capabilities of context-aware mobile devices, the dynamically changing context of users, and the large number of context information collectors.
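The gap the authors point to, namely that a policy language can express data practices but a constrained device still has to evaluate them automatically against user preferences, can be illustrated with a deliberately simplified matcher. This is not real P3P or APPEL syntax; the statement and preference fields are illustrative assumptions:

```python
# Deliberately simplified sketch of matching a provider's declared data
# practices against a user's preferences. NOT real P3P/APPEL syntax;
# all fields are illustrative assumptions.

policy = [  # what the provider declares it will do
    {"data": "location", "purpose": "service",   "retention_days": 1},
    {"data": "location", "purpose": "marketing", "retention_days": 365},
]

preferences = {  # (data, purpose) -> maximum acceptable retention in days
    ("location", "service"): 7,
}

def acceptable(statement, prefs):
    """A statement is acceptable only if the user covers this
    data/purpose pair and the retention is within the user's limit."""
    limit = prefs.get((statement["data"], statement["purpose"]))
    return limit is not None and statement["retention_days"] <= limit

violations = [s for s in policy if not acceptable(s, preferences)]
print(len(violations))  # 1: the marketing statement is not covered
```

Even this toy version shows why automation is hard on mobile devices: every context change can alter the preference table, and every new collector brings a new policy to evaluate.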
In P3P (Cranor et al., 2004) and APPEL (Cranor, Langheinrich, & Marchiori, 2002), it is assumed that users' consent should be given as one entity for all collected information. In privacy-threatening situations, APPEL evaluation will block only identifying information from being transferred to the collector side. In one effort to design a privacy control architecture, Rodden, Friday, Henk, and Dix (2002) propose a minimal asymmetry approach to control personal location information. A trusted party keeps location information structured in such a way that other parties cannot have full access privileges until they have reached a service agreement. Moreover, user identities are replaced with pseudonyms when other parties collect the location information. Although this approach gives users more control capabilities, it does not provide a means of reducing the intensive involvement of users.
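The pseudonym step in Rodden et al.'s approach, where identities are replaced before other parties see the location data, might be sketched as follows. The record fields are hypothetical, and a real trusted party would also have to manage the identity-to-pseudonym mapping securely:

```python
# Sketch: a trusted party releases location records with pseudonyms
# substituted for real identities. Record fields are illustrative.
import secrets

class TrustedParty:
    def __init__(self):
        self._pseudonyms = {}  # real identity -> stable pseudonym

    def pseudonym_for(self, identity):
        if identity not in self._pseudonyms:
            self._pseudonyms[identity] = secrets.token_hex(4)
        return self._pseudonyms[identity]

    def release(self, record):
        """Return a copy of the record safe to share with a collector."""
        shared = dict(record)
        shared["who"] = self.pseudonym_for(record["who"])
        return shared

tp = TrustedParty()
rec = {"who": "alice", "where": "cafe", "when": "12:05"}
out = tp.release(rec)
assert out["who"] != "alice" and out["where"] == "cafe"
```

The pseudonym is stable, so a collector can still correlate a single user's records; this is the asymmetry the authors accept, and it leaves open the problem they note, that the user must still be involved in every agreement.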
Although a lot of efforts on privacy protection have been exerted in the literature (Ackerman, Darrell, & Weitzner, 2001; Camenisch & Herreweghen, 2002; Casal, 2001), not many have realized the option that privacy can be negotiable. A user might be willing to share his or her information with information collectors in order to get some cheaper service or a better offer. What makes it complex is that users' privacy concerns can be influenced not only by mostly known factors such as culture and age, but also by their context or situation when the information is requested.
This influence of context becomes noticeable in environments where the users' context is expected to change.

In the following section, we give a brief overview of the most famous privacy principles.

PRIVACY PRINCIPLES AND REQUIREMENTS

Privacy architectures try to meet the fair information practices (FIP) principles developed since the 1970s. The most well-known principles were set in 1980 by the Organization for Economic Cooperation and Development (OECD) in the form of the Guidelines on the Protection and Transborder Flows of Personal Data. We briefly present these principles (OECD, 2003):

• Collection limitation principle: This principle states that there should be limits to personal data collection, and that it should be obtained by lawful means and with the consent of the user.
• Data quality principle: Data collection should be relevant to the purposes for which it was collected.
• Purpose specification principle: Purposes should be specified before the collection of the data and not after.
• Use limitation principle: Personal data should not be made available or otherwise used for purposes other than those specified, except by the consent of the user or by the authority of law.
• Security safeguards principle: Personal data should be protected by reasonable security safeguards against such risks as loss or unauthorized access, destruction, use, modification, or disclosure of data.
• Openness principle: There should be a general policy of openness about developments, practices, and policies with respect to personal data.
• Individual participation principle: An individual should have the right to control his or her data after being collected by erasing, completing, or amending it, and should be able to communicate with the data collector about the type of data being collected.
• Accountability principle: The data collector should be accountable for complying with measures that give effect to the principles stated above.

Most privacy laws and self-regulatory frameworks basically follow the above-mentioned principles. In order to meet the first two principles, users should be notified of what information is being collected, which parties are using the information, for which purpose, and how long it will be used (Ackerman, Darrell, & Weitzner, 2001; Casal, 2001). This notification is mainly done through defining what are called data practices. These practices are usually expressed in privacy policies. A privacy policy consists of a number of statements that represent how an information collector is going to deal with the collected information. Most Web sites currently notify users using privacy policies. However, most of these policies are so long that users do not read or understand them completely (Cranor, Guduru, & Arjula, 2006). This leads to the fact that this requirement is not always met, and thus users can lose one of their rights in having control of their private information. Secondly, users should be able to select among different options. The mostly adopted approach, however, "take it or leave it," should no longer be applied.

Service providers are asked to give the users a number of alternatives to choose from that provide them a flexible way of controlling the way their information is being used. After notifying users and allowing them to select among different choices, it is required that the user explicitly declares his or her acceptance of this type of usage. Most Web sites ask for the user's consent, once and for all, after the user reads the privacy policies (if he or
she does): a user will have to accept or object as functionalities can be implemented using
the previously mentioned option “take it or middle- ware technology acting as a trusted
leave it,” and if he or she accepts the policy, third party.
then he or she is not allowed to change consent
even if his or her preferences change, unless the system Architecture
user stops using the service offered by that Web
site. When designing a solution for effective privacy
management, specifying a proper architecture
to provide a basis for implementation is of
A PrI VAcy controL crucial importance. The standard ANSI/IEEE
functIonAL ArcHItecture 1471-2000 that gives recommended practices
for describing the architecture of software-
With respect to the above-mentioned require- intensive systems defines architecture as the
ments, we argue that the following fundamental organiza- tion of a system
functionalities, as shown in Figure 1 in the embodied in its components, their relationships,
form of high-level domain architecture, must be and the environmentand the principles
taken into consid- eration to automate privacy governing its design and evolution (ANSI,
management. These 2000). Similarly, in the Rational Unified
Figure 1. Privacy control high-level domain architecture (components: user, personal data, privacy profiles, service provider, service provider agent, and a middleware layer hosting a context manager and a privacy manager on hosting servers)
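As a minimal sketch of the kind of decision a Privacy Manager component such as the one in Figure 1 might automate, the check below compares a service provider's declared data practice against a user's privacy profile. This is not the chapter's implementation; the class names, fields, and decision logic are illustrative assumptions based on the notice/choice/consent requirements discussed above.

```python
# Hypothetical sketch of a privacy manager's consent check: compare a declared
# data practice (notice) against the user's privacy profile (choice).
from dataclasses import dataclass

@dataclass
class DataPractice:          # what the service provider declares
    data_type: str
    purpose: str
    retention_days: int

@dataclass
class PrivacyPreference:     # what the user's privacy profile allows
    data_type: str
    allowed_purposes: set
    max_retention_days: int

def consent_decision(practice: DataPractice, preference: PrivacyPreference) -> bool:
    """Permit only if the declared practice fits the user's stated preference."""
    if practice.data_type != preference.data_type:
        return False
    if practice.purpose not in preference.allowed_purposes:
        return False
    return practice.retention_days <= preference.max_retention_days

profile = PrivacyPreference("location", {"billing", "support"}, 90)
print(consent_decision(DataPractice("location", "billing", 30), profile))    # True
print(consent_decision(DataPractice("location", "marketing", 30), profile))  # False
```

A real middleware would additionally log the decision and let the user amend the profile later, in line with the individual participation principle.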
CONCLUSION

ACKNOWLEDGMENT

REFERENCES
Evaluation statements, rated on a seven-point scale from "Highly disagree" to "Highly agree":
• I could easily define my privacy settings.
• People could contact me, though I did not ask for that.
This work was previously published in Personalized Information Retrieval and Access: Concepts, Methods, and Practices, edited by R. González, N. Chen, and A. Dahanayake, pp. 151-166, copyright 2008 by Information Science Reference (an imprint of IGI Global).
1481
Chapter 7.10
Access Control in Mobile and Ubiquitous Environments
Laurent Gomez
SAP Research, France
Annett Laube
SAP Research, France
Alessandro Sorniotti
SAP Research, France
Copyright © 2010, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
The personal medical record of a person contains in general more than the recorded sensor readings from body and ambient sensors. Medical data may include the clinical history, medication history, hospital stays, activity records, and personal data. Looking at the variety of roles defined in the e-health scenario, it is not always necessary to disclose the entire medical record: a nurse need not know all the information that a doctor, in turn, needs to access to perform a task. In most situations, it is sufficient to grant access to the smallest subset of information needed.

In addition, a doctor should have the right to book a room for a patient in a hospital or to plan a surgery only when he is physically in the hospital or in his office, and not on vacation. An emergency team member can get access to all medical information about a victim only if he is close to the patient and the patient is unconscious, whereas access to the private medical information is normally restricted to the assigned doctors (general practitioner, specialists, etc.).

An additional concept is context-aware delegation of rights. For example, when a manager is out of office, he can delegate some of his rights to his secretary until he is back. Depending on the urgency of certain tasks, the context-aware system can decide whether delegation is allowed or not.

Scenario 2: e-Insurance

The use of context-aware security techniques is not restricted to the e-health business domain. Car accident management is an e-insurance scenario from the automotive business domain. In this scenario, a car is uniquely identified with a car ID. This information allows authorized third parties (e.g. police, insurance) to authenticate a car and to map it to information such as the car owner's driving license and vehicle registration. In addition to the car ID, speed, fuel consumption, and brake usage information can be distributed.

Nevertheless, the broadcast of such information raises a major security issue: information confidentiality. In a car accident, for example, all involved cars send information such as GPS position and insurance contract ID to their insurance company. This information is used to automatically fill in the accident report and supports the insurance companies in finding an agreement. In the case of car theft, the car ID and GPS location of the car are sent to the police in order to track the car.

As in the e-health scenario, access control to the data exchanged within this ubiquitous environment is required. In the car accident scenario, one of the security threats is the interception of the GPS location by fake garage owners: pretending to be certified garage owners, they could steal the damaged car.

CONTEXT-AWARE ACCESS CONTROL

State-of-the-Art

A classical approach to tackle the challenge of protecting data access is role-based access control (RBAC), introduced in (Sandhu, 1996). RBAC associates to each user one or more roles, and permissions to objects (resources) are defined for each role. However powerful, simple RBAC reaches its limits when the access control policy becomes more complex and additional information has to be integrated in the policy enforcement process. Organization-based access control (OrBAC) (Kalam et al., 2003) is a solution for this kind of access control policy. As the name suggests, OrBAC uses contextual rules related to specific organizational structures. Other extensions of the RBAC model, like generalized RBAC (GRBAC) or dynamic role-based access control (DRBAC), also consider the use of context information to extend the standard RBAC model.
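Plain RBAC, as described above, can be sketched in a few lines: users are assigned roles, roles hold (resource, action) permissions, and a request is permitted if any of the requester's roles holds the permission. The role names and resources below are illustrative, chosen to echo the e-health scenario; they are not taken from any cited system.

```python
# Minimal RBAC sketch: role -> set of (resource, action) permissions.
roles = {
    "nurse":  {("medical_record.vitals", "read")},
    "doctor": {("medical_record.vitals", "read"),
               ("medical_record.history", "read"),
               ("medical_record.history", "write")},
}
user_roles = {"alice": {"doctor"}, "bob": {"nurse"}}

def check_access(user: str, resource: str, action: str) -> bool:
    """Permit if any of the user's roles holds the (resource, action) pair."""
    return any((resource, action) in roles[r]
               for r in user_roles.get(user, set()))

print(check_access("bob", "medical_record.history", "read"))    # False
print(check_access("alice", "medical_record.history", "write")) # True
```

Note that the decision depends only on the subject's roles; nothing about time, location, or other context enters the evaluation, which is exactly the limitation the extensions discussed next try to remove.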
Table 1 provides a set of approaches for access control.

Table 1. Access control families

Extended with contextual information, several architectures have been developed for context-aware access control. Proximity-based access control defines a set of security rules based on the proximity of entities or groups (Gupta, 2006). Temporal-RBAC (Bertino, 2001) supports periodic role enabling and disabling and temporal dependencies among permissions by introducing time into the access control infrastructure. Encounter-based access control (Thomas, 2004) is used to define special policies related to the occurrence of defined situations or events. Covington et al. (Covington, 2002) propose a uniform access control framework for environmental roles, named generalized RBAC (GRBAC), which is an extension of the role-based access control model. In an administrative domain, a role can be, for example, an employee or a manager; a role determines the user's position or ability in an administrative domain. An environmental role is a role that captures environmental conditions. Unlike the RBAC model, which is only subject-oriented, GRBAC allows the definition of access control policies based on subject, object, or environment. Dynamic role-based access control (DRBAC) (Zhang, 2004) extends traditional RBAC to use dynamic context information during the decision process. DRBAC addresses two key requirements: (1) a user's access privileges must change when the user's context changes, and (2) a resource must adjust its access permissions when its system information changes. (Roman, 2002) defines a generic context-based software architecture for physical spaces, called Gaia. A physical space is a geographic region with limited and well-defined boundaries, containing physical objects, heterogeneous networked devices, and users performing a range of activities. Derived from the physical space concept, the Active Space system provides the user with a computing representation of the physical space and helps the user to interact with it. Cerberus is a framework for context-aware identification, authentication, access control, and reasoning about context, based on the Kerberos (Neuman, 1994) authentication and on Gaia. Cerberus focuses on user identification via the user's context information, such as fingerprint, voice, and face recognition; the backend system in charge of users' authentication is context-aware. The context-aware authorization architecture proposed in (Wullems, 2004) is based on the Kerberos authentication and makes it possible to activate or deactivate roles assigned to a user depending on the context. In (Hu, 2004), the authors propose a dynamic, context-aware access control especially suited for distributed healthcare applications, in which permissions are associated with context-related constraints that are dynamically evaluated.
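The common idea behind these dynamic models can be illustrated by attaching a context predicate to each permission and evaluating it at decision time. The sketch below is in the spirit of DRBAC and of (Hu, 2004), but it is our own toy construction: the roles, actions, and context keys are invented for illustration and mirror the e-health examples given earlier.

```python
# Sketch of context-constrained permissions: each permission carries a
# predicate over the current context, evaluated at decision time.
permissions = {
    "doctor": [
        ("book_room", lambda ctx: ctx["location"] in {"hospital", "office"}),
        ("read_full_record", lambda ctx: True),
    ],
    "emergency_team": [
        ("read_full_record",
         lambda ctx: ctx["distance_to_patient_m"] < 10
                     and ctx["patient_unconscious"]),
    ],
}

def decide(role: str, action: str, ctx: dict) -> bool:
    """Permit only if the role holds the action AND its constraint holds now."""
    return any(a == action and constraint(ctx)
               for a, constraint in permissions.get(role, []))

ctx = {"location": "vacation", "distance_to_patient_m": 3,
       "patient_unconscious": True}
print(decide("doctor", "book_room", ctx))                 # False: on vacation
print(decide("emergency_team", "read_full_record", ctx))  # True: close, unconscious
```

Changing the context dictionary changes the decision without touching role assignments, which is the property DRBAC-style models aim for.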
Challenges

Solutions
<Policy PolicyId="EmergencyPolicy">
  <Target>...</Target>
  <Rule RuleId="MixedLocalisationRule" Effect="Permit">
    <Target>
      <Resources>...</Resources>
      <Actions>...</Actions>
    </Target>
    <Condition FunctionId="function:string-equal">
      <Apply FunctionId="function:string-one-and-only">
        <SubjectAttributeDesignator DataType="string" AttributeId="role"/>
      </Apply>
      <AttributeValue DataType="string">physician</AttributeValue>
    </Condition>
    <Condition FunctionId="function:and">
      <Apply FunctionId="coolFunction#CloseTo">
        <Apply FunctionId="coolFunction#findLocation">
          <SubjectAttributeDesignator DataType="cool#GPSLocation"
                                      AttributeId="SubjectLocation"/>
        </Apply>
        <Apply FunctionId="coolFunction#findLocation">
          <SubjectAttributeDesignator DataType="cool#GPSLocation"
                                      AttributeId="ObjectLocation"/>
        </Apply>
        <AttributeValue DataType="integer">50</AttributeValue>
      </Apply>
      <Apply FunctionId="coolFunction#IsEmergency">
        <Apply FunctionId="coolFunction#findEmergency">
          <SubjectAttributeDesignator DataType="cool#Emergency"
                                      AttributeId="ObjectEmergency"/>
        </Apply>
      </Apply>
    </Condition>
  </Rule>
</Policy>
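The policy above relies on the custom functions CloseTo and findLocation with a 50-unit threshold. A plausible implementation of what such a CloseTo function might compute, assuming the threshold is in metres and the locations are GPS fixes, is a great-circle distance comparison. The function names and coordinates below are our assumptions, not part of the policy language itself.

```python
# Sketch of a "CloseTo" evaluation: haversine distance between two GPS fixes,
# compared against a threshold in metres (the policy above uses 50).
import math

def distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (haversine) distance between two WGS84 points, in metres."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def close_to(subject_loc, object_loc, threshold_m: float = 50) -> bool:
    return distance_m(*subject_loc, *object_loc) <= threshold_m

physician = (48.8567, 2.3510)
patient   = (48.8569, 2.3512)   # a few tens of metres away
print(close_to(physician, patient))   # True for nearby points
```

A policy decision point would plug such a function into the Condition evaluation, permitting the rule only when the physician is physically near the patient during an emergency.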
outside of the service itself. The authorization policy can then contain access control rules for parts of the resources.

A resource hierarchy can be described as a directed acyclic graph over a finite set of nodes, built from a resource and all its direct descendants at any depth. The definition of the node set in the hierarchy and the relations between the nodes are highly dependent on the application and the authorization policy.

An example is a business application that exposes a service to access an object BusinessPartner. This object maintains the information of the partners of the company, such as its personnel or external companies like customers and suppliers. Several concrete classes are derived from the abstract class BusinessPartner; see the class diagram in Figure 3. Each derivation adds specific attributes that are only relevant for this type of object. For example, an employee has a salary grade, a private bank account, and a private and a business address; a customer has marketing data, a shipping address, and an invoice address.

The use of the object BusinessPartner as the interface for the application allows an identical manipulation of the different classes in the hierarchy. The following authorization rules shall, for instance, be applied to the retrieve method of the Employee object:
Figure 3. BusinessPartner class diagram
• Only a member of HR can access all data of an employee.
• An employee has access to the business address of all employees and to his own private address and bank account. He has no write access to his salary.
• A manager has access to the data of his/her employees, except for sensitive personal data, like the private address or bank account.

RBAC roles would be powerful enough to differentiate between an HR accountant and a normal employee. However, RBAC either grants permission to a resource or denies it completely (an all-or-nothing paradigm). In order to implement the authorization rules described above, it would be necessary to implement a much more detailed and fine-grained service interface. In this case, there are two possibilities. The first is to implement an interface specialized for each role, which creates a very high dependency on the authorization policy. The second option is to have many smaller services that allow retrieving parts (sub-resources) of the BusinessPartner, like the name or the bank account. This would have a big impact on performance: instead of only one service call to get all data, multiple calls are necessary.

With the use of resource hierarchies, there is no need to change the service interface according to the needs of the authorization policy. The originally defined interface (BusinessPartner in our previous example) can be used and, at the same time, it is possible to enforce access control with a finer granularity. Furthermore, the concept facilitates the integration of all kinds of context information into the policy enforcement process and also into the process of defining the resource hierarchy. This results in the highly fine-grained, dynamically adaptable authorization policies needed for mobile applications in ubiquitous environments.

ACCESS CONTROL TO WIRELESS SENSOR DATA

Challenges

Under a standard access control scenario, entities that wish to benefit from the produced information have to authenticate themselves, receive a credential, produce the credential to the data source, and receive a specialized stream of information that contains just the information the requester received an authorization for. Many solutions exist for this problem; however, most of them are unsuitable for low-power ubiquitous environments such as wireless sensor networks, given the technological constraints of the nodes. Such devices, with limited memory, CPU, and battery power, are rarely capable of evaluating complex access control policies. In addition, nodes produce data in real time, hence the generation of multiple streams is difficult.

Solution

Sensor nodes produce, on a broadcast medium, highly diverse data, which is often very sensitive. Sensor listeners may be numerous and diverse and may have different access rights to sensor data. This multiple-resources/multiple-accesses problem is usually solved using access control but, as discussed above, standard solutions are unsuitable for WSN scenarios.

In (Sorniotti, 2008), the authors present a possible solution to the problem of access control to data produced by a wireless sensor network, relying on cryptography: right from its production, data can be encrypted, and therefore its access is intrinsically restricted. This way, sensors can encrypt data and publish it regardless of the present consumer(s). The related access control policies are enforced in a centralized authorization module. If a user or application provides sufficient credentials to get access to a certain authorization class, he gets the associated key to decrypt the data: knowledge of the cryptographic key used to encrypt data belonging to a given level in the defined hierarchy allows proper decryption, and therefore access, to data belonging to that level. Conversely, it is impossible for consumers who do not have the proper decryption key to access the encrypted data.

In scenarios such as the e-health one, the sensed data is often highly sensitive. Moreover, the sensed data often has very different levels of sensitivity: the mere information on the room occupancy of a hospital is not highly sensitive, whereas the ECG of a given patient is indeed very private information, since it could possibly reveal information about the health status of the person.

There can also be several consumers of wireless sensor data, belonging to a heterogeneous population and having intrinsically different data access rights: within a healthcare scenario, patients, social workers, nurses, relatives, generic physicians, and specialists naturally form a hierarchy of entities that are interested in the data delivered by a healthcare WSN. Data consumers can therefore be conveniently organized in hierarchies. Low levels in the hierarchy can access only data with a low level of sensitivity, whilst higher levels can also access more sensitive data.

To satisfy the hierarchical requirement, the idea is to map each distinct sensor data type to an authorization level. Data whose disclosure does not raise high privacy issues is mapped to low authorization levels. Similarly, highly private data
will be mapped to high authorization levels. The resulting mapping expresses the security preferences of a central access control policy point. The hierarchy of authorization levels is then mapped to keys in a hierarchical structure, whereby low-level keys can be derived from high-level ones: a user who is given the decrypting key of a class can generate the keys of its descendant classes.

The hierarchy of authorization levels can be modeled as a tree. The adoption of encryption as a way to enforce access control reduces the problem of granting, denying, and revoking access rights to a problem of key management. The scheme assumes the presence of a central access control manager (ACM) which, after evaluating the credentials of data consumers (from now on also referred to as users), takes care of granting, denying, and revoking access rights. Granting a user a given authorization level means giving her the key to decrypt all data units mapped to that level and to descendant ones. Denying access simply implies not providing the decryption key(s). Finally, revocation of access rights is based on rekeying: changing the keys used at a given point forces data consumers to re-contact the ACM in order to receive the new keys. Consumers whose access rights have been revoked do not receive the new keys, which accomplishes the revocation. This approach achieves the desirable property of requiring no specific interactions between data producers (the sensor nodes) and data consumers, other than data publishing.
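One common way to realize such a key hierarchy, sketched below under our own assumptions (the labels, tree shape, and use of SHA-256 are illustrative and not taken from the cited scheme), is to derive each level's key from its parent's with a one-way hash: holders of a high-level key can recompute all descendant keys locally, but cannot invert the hash to climb upwards.

```python
# Sketch of hierarchical key derivation for authorization levels:
# child_key = H(parent_key || child_label), a one-way step.
import hashlib

def child_key(parent_key: bytes, child_label: str) -> bytes:
    """Derive a descendant level's key from its parent's key and a label."""
    return hashlib.sha256(parent_key + child_label.encode()).digest()

root = hashlib.sha256(b"ACM master secret").digest()       # held by the ACM only
ecg_key  = child_key(root, "level:ecg")                    # high sensitivity
room_key = child_key(ecg_key, "level:room_occupancy")      # low-sensitivity child

# A specialist holding ecg_key can re-derive room_key locally...
assert child_key(ecg_key, "level:room_occupancy") == room_key
# ...but a holder of room_key alone cannot invert SHA-256 to obtain ecg_key.
```

Rekeying for revocation then amounts to replacing a node's key (e.g. with a fresh label or salt) and redistributing it only to the consumers whose rights remain.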
USER AUTHENTICATION

Access control mechanisms require the access requester to authenticate himself. After establishing a more flexible access control framework that better responds to the challenges of mobile and ubiquitous environments, additional flexibility is needed also for the authentication phase of the access control model. Users of a ubiquitous computing system should be able to authenticate themselves with the means at their disposal. For instance, a physician should be able to authenticate with a login-password mechanism, with a certificate stored on his private smart card or PDA, with biometric information like fingerprints, or with a combination of two mechanisms.

Challenges

In order to gain access to a resource protected by an authorization service, users are required to authenticate. User authentication is traditionally performed by producing a combination of authentication factors (e.g. two-factor authentication) statically specified in the access control policy of the authorization service. An authentication factor is any piece of information used to assess the identity of a user. Depending on the context, the user may have access to different authentication services. The flexibility of user authentication can be enhanced by allowing users to authenticate using the different authentication factors at their disposal. To achieve that, the authorization service specifies an authentication level to be reached in order to get access to a resource. The resource owner's authentication preferences are thus comprised in an authentication level policy, and the user is bound to reach the pre-defined authentication level with the factors he owns.

State-of-the-Art

In the literature, several researchers have already proposed models for an authentication factor metric. In (Reither, 1999), the authors propose a set of principles for designing a metric for authentication factors. Nevertheless, they only focus on issuers of authentication factors and not on the supported authentication mechanisms. In (Burr, 2006), an assurance level on authentication factors is defined in an arbitrary manner; it consists basically of a categorization of authentication mechanisms. Moreover, the authors do not propose any solution for combining authentication factors in order to achieve a better authentication level. (Al-Muhtadi, 2005) is closer to the authentication-level approach by introducing the notion of confidence values for authentication mechanisms. The authors use the Gaia authentication framework, which calculates the net confidence value of the available Gaia authentication modules; this implies that the user has to authenticate by means of all available authentication mechanisms. Moreover, the authors do not consider the use of heuristics for combining authentication mechanisms, and the confidence in the service implementing the authentication mechanisms is not considered as a criterion on authentication mechanisms. To combine confidence values, the authors finally suggest using the consensus operator from subjective logic. In (M. Covington, 2004), the authors also propose to abstract authentication factors to subjective logic opinions; to calculate the confidence in a combination of authentication factors, the author likewise uses the consensus operator from subjective logic.

Liberty Alliance (Liberty Alliance, 2005) introduces the notion of an identity provider, which is in charge of federating user identities. When the users want to consume a service, they authenticate to their identity provider by means of an authentication context encapsulated
in SAML assertions, in which the circumstances of the authentication (e.g. mechanism used, service) are described. With this additional information, the service provider can evaluate its trust during the user's authentication. Moreover, the identity provider can combine different authentication contexts. Nevertheless, the service provider still requires the user to authenticate using statically defined authentication factors.

Solution

In (Gomez, 2007), the following authentication process (see Figure 4) is introduced:

• A user wants to gain access to a resource protected by an authorization service. The authorization service responds to the user with an obligation stating an authentication level to be reached.
• The user attempts to reach the expected authentication level by combining authentication factors, using the authentication services at his disposal.
• Then, the user forwards the chosen combination of authentication factors to the authorization service, which checks whether they meet the required authentication level.

In order to simplify the user's authentication, three objectives are defined:

• The authentication level specification is done by resource owners.
• The specified authentication level is met by legitimate users.
• The enforcement of access control can be done based on a specified authentication level, reached by combining different authentication factors.

The approach defines a metric for authentication levels based on subjective logic. The definition of confidence values for authentication mechanisms at a fine-grained level makes it possible to distinguish between, for example, a password of 4 characters and one of 10 characters. The confidence values assigned to authentication factors and their combinations allow going beyond the models described in the literature (Schneier, 2005). The approach capitalizes on subjective logic in order to define a trust metric for the authentication level. A new subjective logic operator for mitigating opinions on combinations of authentication factors was defined. Figure 5 depicts the evolution of opinion combination. This combination of two opinions, ωa and ωb, fulfills the two following requirements:
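Setting aside the formal subjective-logic operator, the basic idea of reaching a required authentication level by combining factors can be illustrated with a deliberately naive independence-based combination. The confidence values below are invented for illustration; this is not the operator defined in (Gomez, 2007).

```python
# Naive illustration of combining per-factor confidence values to reach a
# required authentication level: combined = 1 - prod(1 - c_i).
# NOT the subjective-logic operator the chapter defines; numbers are made up.
factor_confidence = {
    "password_4_chars": 0.40,
    "password_10_chars": 0.70,
    "fingerprint": 0.90,
    "smart_card_certificate": 0.85,
}

def combined_confidence(factors) -> float:
    """Confidence that at least one presented factor correctly binds the identity,
    assuming (unrealistically) independent factors."""
    remaining_doubt = 1.0
    for f in factors:
        remaining_doubt *= 1.0 - factor_confidence[f]
    return 1.0 - remaining_doubt

required_level = 0.95
chosen = ["password_10_chars", "smart_card_certificate"]
print(combined_confidence(chosen) >= required_level)  # 0.955 >= 0.95 -> True
```

A subjective-logic treatment would replace the scalar confidences with full opinions (belief, disbelief, uncertainty) and combine them with a dedicated operator, precisely to avoid the unrealistic independence assumption made here.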
CONCLUSION