
Participatory Personal Data: An Emerging Research Challenge for the Information Sciences

Katie Shilton
College of Information Studies

University of Maryland, Room 4105 Hornbake Building, South Wing, College Park, MD 20742
Tel: (301) 405-3777
kshilton@umd.edu

This is a preprint of an article accepted for publication in the Journal of the American Society for Information Science and Technology, copyright 2012 (American Society for Information Science and Technology).

Abstract

Individuals can increasingly collect data about their habits, routines, and environment using ubiquitous technologies. Running specialized software, personal devices such as phones and tablets can capture and transmit users' location, images, motion, and text input. The data collected by these devices are both personal (identifying of an individual) and participatory (accessible by that individual for aggregation, analysis, and sharing). Such participatory personal data provide a new area of inquiry for the information sciences. This paper presents a review of literature from diverse fields, including information science, technology studies, surveillance studies, and participatory research traditions, to explore how participatory personal data relate to existing personal data collections created by both research and surveillance. It applies three information perspectives (information policy, information access and equity, and data curation and preservation) to illustrate social impacts and concerns engendered by this new form of data collection. These perspectives suggest a set of research challenges for information science posed by participatory personal data.

Introduction

Self-quantification, mobile health, bio-hacking, self-surveillance, participatory sensing: under a variety of names, practices, and motivations, these growing movements are encouraging people to collect data about themselves (Dembosky, 2011; Estrin & Sim, 2010; Hill, 2011). Ubiquitous digital tools increasingly enable individuals to collect very granular data about their habits, routines, and environments. Although forms of self-tracking have always existed, ubiquitous technologies such as the mobile phone enable a new scope and scale for these activities. These always-on, always-present devices carried by billions can capture and transmit users' location, images, motion, and text input. Mobile health developers are creating applications to track health behaviors, symptoms and side effects, and treatment adherence and effectiveness (Estrin & Sim, 2010).

Citizen science enthusiasts banded together to test the "Butter Mind" experiment, volunteering to eat half a stick of butter a day and to record diverse metrics to test its impact on mental performance (Gentry, 2010). Office workers, or their bosses, can use new software to track computer usage and analyze productivity. Athletes use a range of portable devices to monitor physiological factors that affect performance. And curious individuals track correlations between variables as diverse as stress, mood, food, sex, and sleep (Hill, 2011). Some hail the era of easy self-tracking and the resulting detailed stores of data for their potential to unlock patterns and new knowledge. Others raise privacy, accessibility, and other social concerns. As John Perry Barlow was recently quoted: "Everything you do in life leads to a digital slime trail" (Boudreau, 2011).

This new form of personal data invokes challenges and value judgments at the heart of information science, including best practices for data organization and management; the rights, power, and agency of data collectors and aggregators; the nature of user participation in data collection and new knowledge creation; and what should, or will, be documented, shared, and remembered. This paper draws upon literatures from information science, technology studies, and surveillance studies to investigate granular personal data, collected by and for individuals, as an information problem. It uses this literature to explore two questions: how do these new forms of personal data depart from existing personal data? And how do information perspectives help us analyze the social values, impacts, and concerns engendered by these new data? These questions suggest new research challenges for privacy and information policy, information access and equity, and data management, curation, and preservation.

The paper begins by defining participatory personal data and suggesting how these data differ from existing research and surveillance data. It then explores literature from three information perspectives key to understanding participatory personal data: information policy, information access, and data management and preservation. It uses these perspectives to suggest next steps to investigate participatory personal data as an information science problem.

Defining participatory personal data

This paper focuses on a new class of data about people and their environments, generated by an emerging network of embedded devices and specialized software. Software for mobile phones and embedded devices enables individuals to track and upload a diverse range of data, including images, text, location, and motion. Because using ubiquitous digital tools for data capture is a relatively new activity, the terminology used to describe this research is varied, going under names including mobile health or mHealth (Estrin & Sim, 2010), self-quantifying (Wolf, 2010), self-surveillance (Hill, 2011; Kang, Shilton, Burke, Estrin, & Hansen, 2012), participatory sensing (Burke et al., 2006; Estrin, 2010), and urban sensing (Cuff, Hansen, & Kang, 2008).

For example, Your Flowing Data (http://yourflowingdata.com) asks participants to use their mobile phones to send short messages recording data points (e.g., weight, exercise accomplished, mood, or food eaten) throughout the day. After these data are aggregated by the service, the software provides users with visualizations to explore patterns in their daily habits and learn from their data. A different example is the Personal Environmental Impact Report (PEIR) (http://peir.cens.ucla.edu), an application that uses participants' mobile phones to record their location every thirty seconds. Location is uploaded to the PEIR servers, which use this time-location series to infer how much a participant drives each day, giving participants a daily calculation of their carbon footprint and exposure to air pollution.
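
To make this kind of time-location inference concrete, the sketch below shows one simple way a service might estimate daily driving distance from a 30-second location trace. This is a hedged illustration only, not PEIR's actual pipeline; the haversine helper, the 7 m/s speed threshold, and the trace format are assumptions introduced for the example.

```python
import math
from datetime import datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def estimate_driving_m(trace, speed_threshold_mps=7.0):
    """Sum the distance covered in segments fast enough to suggest driving.

    `trace` is a list of (timestamp, lat, lon) tuples sampled every ~30 s.
    A real system (PEIR among them) uses much richer activity classification;
    a single speed threshold is only a stand-in for the idea.
    """
    total = 0.0
    for (t1, lat1, lon1), (t2, lat2, lon2) in zip(trace, trace[1:]):
        seconds = (t2 - t1).total_seconds()
        if seconds <= 0:
            continue
        meters = haversine_m(lat1, lon1, lat2, lon2)
        if meters / seconds >= speed_threshold_mps:  # faster than walking or cycling
            total += meters
    return total

trace = [
    (datetime(2012, 1, 5, 8, 0, 0), 34.0689, -118.4452),
    (datetime(2012, 1, 5, 8, 0, 30), 34.0712, -118.4420),
    (datetime(2012, 1, 5, 8, 1, 0), 34.0735, -118.4389),
]
print(f"Estimated driving: {estimate_driving_m(trace) / 1000:.2f} km")
```

Even this toy version makes the tradeoff visible: a per-day distance estimate is one multiplication away from a carbon footprint, while the raw trace it depends on reveals exactly where the participant has been.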

What unifies these projects is the data they collect: participatory personal data. Participatory personal data are any representation recorded by an individual, about an individual, using a mediating technology (which could be as simple as a spreadsheet or web entry form, but which is commonly a mobile device). "Participatory" signals that these new data are accessible to the data subjects. This contrasts with both traditional research and surveillance data, which are typically obscured or hidden from data subjects (Shilton, 2010). Personal data are authored by an individual, describe an individual, or can be mapped to an individual (Kang, 1998). Participatory personal data often meet all three of these criteria. The individual uses a device to collect data, which are often descriptive of a person's life, routine, or environment. And the data can quite literally be mapped to a person. Geotagged photos or a GPS log of a person's movements throughout a day can be used to identify an individual, even if no names or identifiers are directly attached to the data (Anthony, Kotz, & Henderson, 2007; Iachello, Smith, Consolvo, Chen, & Abowd, 2005). Participatory personal data, then, refer to aggregations of representations or measurements collected by people, about people. These data are part of a coordinated activity; they are not only captured, but processed, analyzed, displayed, and shared through a technological infrastructure. This article uses "participatory personal data project" as an umbrella term for the activity of data capture, and "participatory personal data" to refer to the resulting information resources.

Data collection technologies

Participatory personal data are dependent on a technical infrastructure consisting of devices, software, data storage, and user interfaces. The devices are digital, networked, and embedded in human environments. In practice these are often mobile telephones, due to the ubiquity and accessibility of these devices (Estrin, 2010). But devices could also include tablets, networked body sensors, or instrumented "smart" homes or buildings (Hayes et al., 2007; Kim, Schmid, Charbiwala, & Srivastava, 2009). Software coordinates data collection by triggering samples (often using variables such as time of day, location, or battery life), storing data locally, and deciding when to upload them to central servers for processing (Froehlich, Chen, Consolvo, Harrison, & Landay, 2007). Storage is largely cloud-based, although researchers concerned with privacy have also suggested personal home storage for added data security (Lam, 2009).
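
As a rough sketch of this coordination role (and not any particular project's implementation), the logic below decides whether to take a sample based on time of day and battery level, buffers samples locally, and uploads them in batches. All thresholds, names, and the record format are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SamplingPolicy:
    # All values are illustrative defaults, not drawn from a real system.
    start_hour: int = 7          # only sample during waking hours
    end_hour: int = 22
    min_battery: float = 0.20    # stop sampling below 20% battery
    upload_batch_size: int = 50  # buffer locally, upload in batches

@dataclass
class Sampler:
    policy: SamplingPolicy
    buffer: list = field(default_factory=list)

    def should_sample(self, hour: int, battery: float) -> bool:
        """Gate capture on time of day and remaining battery life."""
        in_window = self.policy.start_hour <= hour < self.policy.end_hour
        return in_window and battery >= self.policy.min_battery

    def record(self, sample: dict) -> None:
        """Store locally; flush to the server only when a batch accumulates."""
        self.buffer.append(sample)
        if len(self.buffer) >= self.policy.upload_batch_size:
            self.upload(self.buffer)
            self.buffer = []

    def upload(self, batch: list) -> None:
        # Placeholder for a network call to the project's central servers.
        print(f"uploading {len(batch)} samples")

sampler = Sampler(SamplingPolicy())
if sampler.should_sample(hour=9, battery=0.65):
    sampler.record({"lat": 34.07, "lon": -118.44, "t": "2012-01-05T09:00:00"})
```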

User interfaces include both web- and phone-based charts, maps, and graphs, which provide aggregation and analysis services, helping users see and interpret patterns in their data. The combination of devices, software, storage, and interfaces forms technological platforms (Gillespie, 2010): infrastructures marshaled for new goals and purposes. Excluded from participatory personal data collection platforms are systems which do not reveal their data to the data subject. For example, instrumented homes that report energy use to a utility company, but not the homeowner, do not generate participatory data (though they may generate personal data). Opportunistic sensing research, which uses mobile phones to sense macro-scale human activity without the consent or knowledge of individual phone users (Campbell, Eisenman, Lane, Miluzzo, & Peterson, 2006), also does not collect participatory personal data.

Data collection projects

The technological platform is only one part of participatory personal data projects. The actors and institutions involved in designing and deploying these platforms are quite diverse. Established corporations and start-ups, academics, community groups, nonprofits, and international non-governmental organizations are all stakeholders in participatory personal data projects. Research centers at institutions such as UCLA, Dartmouth, MIT, Intel Research, and Microsoft Research are all developing participatory personal data platforms for use in the social, environmental, and health sciences. Corporations such as exercise-tracking company FitBit (http://www.fitbit.com/) and online-activity tracking service RescueTime (https://www.rescuetime.com/) have developed self-tracking into a business model. Governments, telecommunications providers, health insurers, and advertisers also have an interest in these data (Hill, 2011). The variety of stakeholders collecting participatory personal data will impact the social and political economy of collection, access, and preservation of these data.

Characteristics of participatory personal data

Data collection platforms and projects lend particular characteristics to participatory personal data. Common data types are currently bounded by the technical limitations of the devices used for capture, although these may shift over time as mobile devices become more advanced. Off-the-shelf mobile phones and tablets, by far the most accessible devices for collecting participatory personal data, are currently limited to a handful of on-board sensors. Phones can sense sound (using microphones), images (using cameras), location (using GPS or cell tower information), co-location (of other phones or devices, using Bluetooth), and motion (using accelerometers). However, when accessibility or market penetration are not concerns, these on-board capabilities can be almost infinitely extended, as phones can interface with off-board sensors using Bluetooth or other communications protocols.

What unites and also defines these collections of sound, images, locations, and motion is that they are participatory: they are accessible to the subjects of the data themselves. This is a departure from traditional forms of personal data collection. Because granular data collection was expensive and time-consuming, it has historically been conducted by governments or corporations with both the resources to collect data and the social and economic motivations to do so (Marx, 2002). Though fair information practices have long mandated that personal data be made available to individuals upon request (Waldo, Lin, & Millett, 2007), participatory personal data upend this tradition by making individual access to personal data an integral part of collection. This change has been enabled by the proliferation of mobile and computing devices which enable individual capture, processing, and analysis. The participatory nature of such data collection marks a departure from traditional data archives, social science research data, and corporate and government surveillance.

How participatory personal data depart from existing data

In a focus on collecting data about people, participatory personal data projects echo two familiar kinds of data: scientific and social science research data, and surveillance data. But because participatory personal data collection is increasingly performed by the data subjects themselves, it also contrasts with previous understandings of both kinds of data. Traditionally, data about individuals might have been collected by researchers, governments, or corporations. But by enabling dispersed data capture and sharing, participatory personal data projects collapse the roles of data collectors and data subjects. This raises the issue of participation in data collection, and how participation alters the data landscape.

Surveillance data

Traditionally, granular personal data collection by corporations or governments has been labeled surveillance. Surveillance studies research suggests that an important element in this data collection is control (Lyon, 2001; Monahan, 2006b). Surveillance may protect people from danger, but it may also be used to prevent undesirable behaviors (Lyon, 2001). As surveillance scholars from Foucault (1979) to Vaz and Bruno (2003) have explained, data collection is often used to normalize and discipline individuals. Foucault's influential work suggests that surveilled populations will supervise and discipline themselves. Indeed, self-discipline is a goal embraced by participatory personal data platforms focused on health interventions or worker productivity. Similar disciplinary effects are seen when communities organize to collect data. The project Nation of Neighbors (http://www.nationofneighbors.com) uses mobile devices to expand a community watch model of crime reporting, and civic data project CitySourced (http://www.citysourced.com) defines and reports predefined categories of community problems such as "homeless nuisance".

As this last example suggests, a pernicious effect of surveillance is its potential for uneven application to marginalized and disenfranchised groups. Broad-scale recording of purchase habits, location, and movements enables data sorting and subsequent social profiling by governments and corporations (Curry, Phillips, & Regan, 2004). As Monahan describes it: "what is being secured are social relations, institutional structures, and cultural dispositions that, more often than not, aggravate existing social inequalities and establish rationales for increased, invasive surveillance of marginalized groups" (Monahan, 2006b, p. ix).

The social relations and institutional structures secured by participatory personal data are still forming. The capture and control of participatory personal data are distributed, and people from marginalized social positions may use the power to collect and analyze data to confront the powerful. Mobile phones have been used for cop watching and counter-surveillance (Huey, Walby, & Doyle, 2006; Institute for Applied Autonomy, 2006; Monahan, 2006a), peacekeeping and economic development (Donner, Verclas, & Toyama, 2008), and self-exploration and play (Albrechtslund, 2008). In these uses, participatory personal data collection technologies may fit into a tradition of counter-surveillance or sousveillance: the subversion of observation technologies by the less powerful to hold authorities accountable.

Research data

Data about people have long been important to health, environmental, and social sciences research (Borgman, 2007). When systematically collected for discovery or new knowledge creation, participatory personal data are research data. Such research data about people have traditionally been regulated by federal guidelines such as the Belmont Report (Office of the Secretary of The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979) and Title 45 Code of Federal Regulations, Part 46 (Office for Protection of Research Subjects, 2007).

These codes emphasize respect for human subjects, beneficence, and justice. A critical component of respect, beneficence, and justice is informed consent, which helps to differentiate research data from surveillance data. Traditions such as participatory research (PR) go farther than informed consent by arguing that the data capture, sorting, and use performed as part of knowledge discovery can be empowering if it is conducted by the people most affected by the data: research subjects themselves (Cargo & Mercer, 2008). PR practitioners argue that data subjects should be participants not just in data capture, but in analysis of, and meaning-making from, the data. Involvement with every stage of the research process empowers users and helps justify tradeoffs between new knowledge production and research risks for participants (Horowitz, Robinson, & Seifer, 2009). But while participatory ethics foster a stronger notion of consent, they may also complicate design, data capture, aggregation, and analysis practices. Participatory research traditions have been criticized for gathering inaccurate data or incorporating bias. Participants who purposefully withhold sensitive data from research projects may create problems for the reliability and accuracy of results. As a variant of participatory research, participatory personal data projects will need to grapple with all of these challenges.

Information perspectives on participatory personal data

Three areas of traditional expertise in the information sciences (privacy, information accessibility and equity, and information management and preservation) can provide insight into shaping participatory personal data systems that enable participation, new knowledge, and discovery, rather than control.

These information perspectives provide theoretical frameworks that help to unpack and assess the social impacts and consequences of these emerging data.

Privacy and information policy

One traditional answer to the challenge of surveillance is found in the information science, computer science, legal, and ethics literatures focused on privacy. Participatory personal data projects gather, store, and process large amounts of information, creating massive databases of individuals' locations, movements, images, sound clips, text annotations, and even health data. Location information can reveal habits and routines which are socially sensitive (would you want such information shared with a boss or friend?) and may be linked with, or have an impact on, legally protected medical information.

There are three major branches of literature that address privacy problems relevant to participatory personal data. The first is behavioral studies of users' interactions with personal data and data collection systems. The second is conceptual, ethical, and legal investigations into privacy as a human interest. The third is design and technical methods for ensuring privacy protections.

Much of our understanding of current privacy norms stems from work that analyzes people's privacy preferences and behaviors (Altman, 1977; Capurro, 2005; Palen & Dourish, 2003). Westin's (1970) foundational research benchmarked American public opinion on privacy. Over several decades, Westin used large surveys to confirm a spectrum from privacy fundamentalists (very concerned) to pragmatists (sometimes concerned) to the unconcerned. More recently, Sheehan (2002) confirmed a similar spectrum among internet users. The Pew Internet & American Life Project has produced several reports of privacy preferences based upon large U.S. surveys of adults (Madden, Fox, Smith, & Vitak, 2007) and teenagers (Lenhart & Madden, 2007). These reports continue to find privacy concerns, even among teens (whom popular wisdom assumes have abandoned privacy as a value).

Pew finds, for example, that teens practice privacy-preserving behaviors such as limiting online information and taking an active part in identity management. A number of information science studies have attempted to describe such behaviors using more detailed scales for online privacy preferences. For example, both Yao et al. (2007) and Buchanan et al. (2007) suggest factors by which to measure online privacy concern. Yao et al. (2007) focus on psychological variables, while Buchanan et al. (2007) incorporate different social aspects of privacy such as accessibility, physical privacy, and the benefits of surrendering privacy.

A persistent problem in such surveys of privacy preferences, however, is that individuals frequently report preferences that they don't act upon in practice. There is evidence that many privacy studies prime respondents to think about privacy violations, making them more likely to report privacy concerns (John, Acquisti, & Loewenstein, 2009). These studies also make problematic assumptions that people act on a rational privacy interest (Acquisti & Grossklags, 2008). Studies that observe people's real-world use of systems attempt to correct for these problems. Raento and Oulasvirta (2008), for example, present results from field trials of social awareness software that used smart phones to show contacts' locations, length of stay, and activities. The authors found that users were worried not so much about losing their privacy as about presenting themselves appropriately according to situationally arising demands (Raento & Oulasvirta, 2008, p. 529). This demonstrated concern for contextual privacy and identity management has been reiterated in both theoretical (Nissenbaum, 2009) and descriptive research (boyd & Hargittai, 2010). Nissenbaum (2004) labels this concern for fluid and variable disclosure contextual privacy, and argues that its absence not only leads to exposure, but also to decreasing individual autonomy and freedom, damage to human relationships, and eventually, degradation of democracy. Nissenbaum (2009) suggests that individuals' sense of appropriate disclosure, as well as understanding of information flow developed by experience within a space, contribute to individual discretion.

Contextual privacy suggests that individuals may be willing to disclose highly personal information on social networking sites because they believe they understand the information flow of those sites (Lange, 2007).

Because of complexities and inconsistencies in individual privacy behaviors, policy and legal researchers have sought to move away from user choices and individual risks, and towards new regulations to encourage social protections for privacy (J. E. Cohen, 2008; Rule, 2004; Swarthout, 1967; Waldo et al., 2007). U.S. law does not interpret personal data to be owned by the subject of those data. Instead, legal regimes give control of, and responsibility for, personal data to the institution that collected the data (Waldo et al., 2007). Fair information practices are the ethical standard for collection and sharing that those institutions are asked to follow. Originally codified in the 1970s, these practices are still considered "the gold standard for privacy protection" (Waldo et al., 2007, p. 48), and they have been voluntarily adopted by other nations as well as private entities. Fair information practices such as notice and awareness, choice and consent, access and participation, integrity and security, and enforcement and redress can certainly apply to participatory personal data. But if privacy concerns expand to include processes of enforcing personal boundaries (Shapiro, 1998), negotiating social situations (Camp & Connelly, 2008; Palen & Dourish, 2003), and portraying fluid identities (Phillips, 2002, 2005), fair information practices formulated for protecting corporate and government data may be insufficient for personal collections.

Other researchers suggest that concerns about data capture extend beyond the protection of individual privacy. Curry, Phillips, and Regan (2004) write that data capture makes places and populations increasingly visible or legible. Increasing knowledge about the actions of people and their movements can lead to function creep. For example, collections of demographic data can enable social discrimination through practices such as price gouging or delivery of unequal services. Could participatory personal data gathered to track an individual's health concern or document a community's assets be repurposed to deny health insurance or set higher prices for goods and services?

All of this cross-disciplinary attention points to the fact that building participatory personal data systems that protect privacy remains a challenge. Human-computer interaction research considers ways that systems might notify or interact with users to help them understand privacy risks (Anthony et al., 2007; Bellotti, 1998; Nguyen & Mynatt, 2002). Computer science and engineering research innovates methods to obscure, hide, or anonymize data in order to give users privacy options (Ackerman & Cranor, 1999; Agrawal & Srikant, 2000; Fienberg, 2006; Frikken & Atallah, 2004; Ganti, Pham, Tsai, & Abdelzaher, 2008; Iachello & Hong, 2007). Anonymization of data, in particular, is a hotly debated issue in the privacy literature. Many scholars argue that possibilities for re-identification of data make anonymization insufficient for privacy protection (Narayanan & Shmatikov, 2008; Ohm, 2009). Other researchers are pursuing new anonymization techniques to respond to these concerns (Benitez & Malin, 2010; Malin & Sweeney, 2004).

Privacy approaches for participatory personal data systems draw on a number of these developments (Christin, Reinhardt, Kanhere, & Hollick, 2011). These include limiting sensing by granularity, time of day, location, or social surroundings; providing capture and sharing options to match diverse user preferences; methods to collect and contribute data without revealing identifying information; data retention and deletion plans; and access control mechanisms. All of these methods focus on privacy by design: building features into systems to help users manage their personal data and sharing decisions (Spiekermann & Cranor, 2009). Privacy by design is a promising avenue of research for participatory personal data, and advocacy organizations such as the Center for Democracy & Technology are currently pushing mobile application developers to take responsibility for privacy in their design practices (Center for Democracy & Technology, 2011).
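
One of the simplest of these mechanisms, limiting sensing by granularity, can be sketched as coordinate coarsening: the stored location is snapped to a grid cell so that only neighborhood-level movement is revealed. The grid size and interface below are assumptions for illustration, not a vetted anonymization scheme, and, per the re-identification literature cited above, coarsening alone is no guarantee of privacy.

```python
def coarsen_location(lat: float, lon: float, grid_degrees: float = 0.01):
    """Snap a coordinate to the center of a ~1 km grid cell (at mid-latitudes).

    0.01 degrees of latitude is roughly 1.1 km; the right cell size is a
    policy decision, and coarsening by itself does not guarantee anonymity.
    """
    def snap(value: float) -> float:
        cell = round(value / grid_degrees)
        return cell * grid_degrees + grid_degrees / 2

    return snap(lat), snap(lon)

# A precise home location becomes a neighborhood-scale cell center.
print(coarsen_location(34.06893, -118.44518))  # -> (34.075, -118.445)
```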


Privacy, of course, is only a relative value, and can frustrate other social goods. As Kang (1998) points out, commerce can suffer from strong privacy rights, as there is less information for both producers and consumers in the marketplace. Perhaps worse, truthfulness, openness, and accountability can suffer at the hands of strict privacy protections (Allen, 2003). Research using participatory personal data directly confronts this tradeoff between privacy, truthfulness, and accuracy. For example, researchers are developing algorithms for participatory personal data collection that allow users to replace sensitive location data with believable but fake data, effectively lying within the system (Ganti et al., 2008; Mun et al., 2009), as sketched below. What is good for privacy may not always be good for accuracy or accountability.

Investigating privacy and policy for participatory personal data will include weighing these tradeoffs. It will also include integrating elements from contextual privacy and usable system design to present a range of appropriate privacy-preserving choices without unduly burdening participants. And finally, it will mean crafting new policy, institutional as well as national, to protect participants from function creep or discrimination based on their data.
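
A minimal sketch of the "believable but fake" idea, under assumed inputs: points that fall inside a user-declared sensitive zone are replaced with a jittered stand-in point rather than dropped, so the trace stays superficially plausible. This illustrates only the substitution concept; it is far simpler than the route-synthesis techniques in the cited work, and every name and constant here is an assumption.

```python
import math
import random

def mask_trace(trace, zone_center, zone_radius_m, decoy_point, jitter_m=50.0):
    """Replace points inside a sensitive zone with jittered decoy points.

    `trace` is a list of (lat, lon) pairs. Real systems (e.g., Mun et al.,
    2009) synthesize believable routes rather than reusing a single decoy
    anchor; this only shows the substitution idea.
    """
    deg_per_m = 1 / 111_000  # rough degrees-per-meter at mid-latitudes
    masked = []
    for lat, lon in trace:
        dist_m = math.hypot(lat - zone_center[0], lon - zone_center[1]) / deg_per_m
        if dist_m <= zone_radius_m:
            jitter = lambda: random.uniform(-jitter_m, jitter_m) * deg_per_m
            masked.append((decoy_point[0] + jitter(), decoy_point[1] + jitter()))
        else:
            masked.append((lat, lon))
    return masked

home = (34.0689, -118.4452)   # sensitive zone: the user's home
decoy = (34.0722, -118.4441)  # declared public anchor, e.g., a nearby library
print(mask_trace([home, (34.10, -118.40)], home, 200, decoy))
```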

Information access and equity

Privacy is not the only research tradition in information science that can inform a discussion about participatory personal data. These data also raise challenges for information access and equity. Who controls data capture, analysis, and presentation? Who instigates projects and sets research goals? Who owns the data or benefits from project data? Accumulating and manipulating information is a form of power in a global information economy (Castells, 1999; Lievrouw & Farb, 2003). Participatory personal data projects invoke this power by enabling previously impossible data gathering and interpretation. How do participatory personal data project designers, clients, and users decide in whose hands this power will reside?

The relationship between information, power, and equity has long been a topic of interest in the information studies literature (Lievrouw & Farb, 2003). A large literature on the digital divide has focused on access to information, and ways that social demographics limit or enhance information access (Bertot, 2003). Participatory personal data evoke these basic questions of accessibility. Anecdotal evidence from popular news reports suggests that hobbyist self-quantifiers are largely American and European, white, and upper-middle class (Dembosky, 2011). Participants in health or urban planning data projects, however, may span a much greater socio-economic range. Indeed, mobile devices are some of the most accessible information technologies on earth (Kinkade & Verclas, 2008), spanning national, ethnic, and socio-economic groups. But there are challenges beyond accessibility as well. Lievrouw and Farb (2003) suggest that researchers concerned with information equity take a different approach, emphasizing the subjective and context-dependent nature of information needs and access, even among members of one social group. How do participatory personal data answer context-specific information needs?

When individuals are generating the information in question, equity comes to hinge on who benefits from this information capture. Will it be individuals and informal communities, or more organized corporations and governments? Sociologists have proposed that loosely organized publics help to balance power held by formal organizations and governments (Fish, Murillo, Nguyen, Panofsky, & Kelty, 2011). But the rise of participatory culture has challenged this traditional understanding, organizing publics and intermeshing them with organizations. For example, participatory personal data projects exhibit elements of both organizations and publics. Research organizations such as UCLA's Center for Embedded Networked Sensing (CENS, http://urban.cens.ucla.edu/) partner with community groups to actively recruit informal groups of participants into participatory personal data projects.

Examples include health projects which recruited young mothers not only for data collection, but for focus groups about research design, as well as a community documentation project which engaged neighbors in Los Angeles' Boyle Heights community. Will organizations like CENS hold the power that design, data, categories, and social sorting can bring, or can it be distributed back to the publics who collect the data? Because data collection methods using mobile devices can range from participatory to opportunistic, it is unclear how much control individuals will have over what data are collected, how they are stored, and what inferences are drawn. It is important to note that increasing measures for participation does not solve problems of power and equity. As Kreiss et al. (2011) point out, there are limits on the emancipatory potential of peer production. And participatory projects have been criticized for a range of failures, from struggling to create true participation (Elwood, 2006) to being outright disingenuous in their approach and goals (Cooke & Kothari, 2001).

The intersection of information systems, values, and culture is also important to consider. Cultural expectations and norms are deeply embedded in the design of information systems, shaping everything from the representation of relationships within databases (Srinivasan, 2004, 2007) to the explanations drawn from data (Byrne & Alexander, 2006; Corburn, 2003). The design process is never value-neutral, and questions of what, and whose, values are embodied by software and system architecture have been controversial for decades (Friedman, 1997). Affordances built into a technology may privilege some uses (and users) while marginalizing others. Design areas where bias can become particularly embedded include user interfaces (Friedman & Nissenbaum, 1997), access and input/output devices (Perry, Macken, Scott, & McKinley, 1997), and sorting and categorization mechanisms (Bowker & Star, 2000; Suchman, 1997). The intersections between culture, meaning, and information systems have spurred researchers to experiment with culturally-specific databases, media archives, and information systems for indigenous, diasporic, and marginalized communities (Boast, Bravo, & Srinivasan, 2007; Monash University School of Information Management and Systems, 2006; Srinivasan, 2007; Srinivasan & Shilton, 2006).

Such alternative design projects seek to investigate, expose, redirect, or even eliminate biases that arise in mainstream design projects (Nieusma, 2004). Participatory personal data projects, however, often adopt a universal rather than relativist vision, taking "everyone" as their intended users. What does it mean to design for everyone? As Suchman (1997) points out, designing technology is the process of designing not just artifacts, but also the practices that will be associated with those artifacts. What do designers, implicitly or explicitly, intend the practices associated with participatory personal data projects to be? And how will such practices fit into, clash against, or potentially even reshape diverse cultural contexts?

Management, curation and preservation

Privacy, access, and equity challenges are all affected by an overarching information concern: how participatory personal data projects are managed, curated, and preserved. Metadata creation and ongoing management are necessary to ensure the access control, filtering, and security necessary to maintain privacy for participatory personal data. Accessibility and interpretability of the data, by individuals as well as governments and corporations, will be dependent upon their organization, retrieval, and visualization. And whether and how data are preserved or forgotten will be dependent upon curation mechanisms heavily reliant on metadata and data structures (Borgman, 2007).

Participatory personal data echo many of the same management concerns found in large scientific datasets (D. Cohen, Fraistat, Kirschenbaum, & Scheinfeldt, 2009; Gray et al., 2005; Hey & Trefethen, 2005). Participatory personal data may consist of large quantities of samples recorded every second or minute for days or months. The data are frequently quantitative measurements dependent upon machine processing and descriptive metadata for human comprehension. These characteristics require new techniques for organization and management. Developing such methods for organizing, analyzing, and extracting meaning from large, diverse, and largely quantitative datasets is an emerging challenge for the information sciences (Borgman, Wallis, Mayernik, & Pepe, 2007).
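
To suggest what descriptive metadata for human comprehension might mean at the level of a single sample, here is a hypothetical record structure; every field name and vocabulary choice is an assumption for illustration, not a proposed standard.

```python
from dataclasses import dataclass, asdict

@dataclass
class Sample:
    """One participatory personal data point, plus the metadata that keeps it
    interpretable, filterable, and governable long after capture."""
    value: float        # the raw measurement itself
    unit: str           # e.g., "micrograms/m^3"
    phenomenon: str     # what was measured, in human terms
    captured_at: str    # ISO 8601 timestamp
    device: str         # capture instrument and software version
    subject_id: str     # pseudonymous link to the data subject
    sharing_policy: str # who may see this sample
    retention: str      # when this sample should be deleted

sample = Sample(
    value=12.4,
    unit="micrograms/m^3",
    phenomenon="estimated particulate exposure",
    captured_at="2012-01-05T09:00:00-08:00",
    device="phone-gps-v1.2",
    subject_id="participant-0042",
    sharing_policy="self-only",
    retention="delete-after-90-days",
)
print(asdict(sample))
```

Note how much of the record is not the measurement at all: the privacy, access, and preservation questions discussed in this section live almost entirely in the metadata fields.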

Always-on, sensitive data capture brings up a number of theoretical and normative questions about whether and how these data should persist over time. Where and how will these data be curated and preserved? What are the benefits of preserving people's movements, habits, and routines? And what problems might the ubiquitous nature of this memory raise?

Participatory personal data present an institutional and logistical challenge for preservation. Like all digital material, methods for long-term preservation are costly and labor-intensive (Galloway, 2004). The distribution of these data across multiple stakeholders, including individuals, research organizations, corporations, and governments, also challenges traditional preservation models based upon clearly defined collecting institutions (Abraham, 1991; Lee & Tibbo, 2007). It is also unclear what institutions will be responsible for preserving data held by individuals and loosely-organized publics. Determining who is responsible for authoring, managing, and curating data distributed among individuals will challenge our existing notions of institutional data repositories and professional data management.

Perhaps more difficult is the question of whether we should preserve participatory personal data at all. Historically, a major role of archival institutions was selecting records, keeping only a tiny portion of records deemed historically valuable (Boles, 1991; Cook, 1991). But the explosion of data generation paired with cheap storage and cloud computing raises the possibility of saving much more evidence of daily life. This possibility has become a subject of both celebration (Bell & Gemmell, 2007) and debate (Blanchette & Johnson, 2002).

Collections of granular personal data have been invoked to promise everything from improved health care (Hayes et al., 2008, 2007) to memory banks that allow one to vividly relive an event with sounds and images, enhancing personal reflection (Bell & Gemmell, 2007, p. 58). And new kinds of personal documentation could help to counteract the power structures that control current archival and memory practices, in which the narratives of powerful groups and people are reified while others are marginalized (Ketelaar, 2002; McKemmish, Gilliland-Swetland, & Ketelaar, 2005; Shilton & Srinivasan, 2007).

But as more data are collected and indefinitely retained, there may be pernicious social consequences. Blanchette and Johnson (2002) point out that U.S. law has instituted a number of social structures to aid in social forgetting or enabling a "clean slate". These include bankruptcy law, credit reports, and the clearing of records of juvenile offenders. As information systems increasingly banish forgetting, we may face the unintended loss of the fresh start. Drawing on this argument, Bannon (2006) suggests that building systems that forget might encourage new forms of creativity. He argues that an emphasis on augmenting one human capacity, memory, has obscured an equally important capacity: that of forgetting. He proposes that designers think about ways that information systems might serve as "forgetting support technologies" (2006, p. 5). Mayer-Schoenberger (2007) presents a similar argument, advocating for a combination of policies and forgetful technologies that would allow for the decay of digital data.
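
A forgetting support technology in Bannon's sense could be as mundane as a retention rule enforced at the data store. The sketch below expires samples whose retention window has lapsed; the field names and defaults are hypothetical, and real decay policies would of course be richer and auditable.

```python
from datetime import datetime, timedelta

def purge_expired(samples, now=None, default_ttl_days=90):
    """Drop samples older than their retention window: forgetting by design.

    Each sample is a dict with a `captured_at` ISO 8601 timestamp and an
    optional `ttl_days`. The defaults here are illustrative, not
    recommendations.
    """
    now = now or datetime.now()
    kept = []
    for s in samples:
        age = now - datetime.fromisoformat(s["captured_at"])
        if age <= timedelta(days=s.get("ttl_days", default_ttl_days)):
            kept.append(s)
    return kept

store = [
    {"captured_at": "2011-01-05T09:00:00", "ttl_days": 30},  # long expired
    {"captured_at": "2012-01-04T09:00:00"},                  # still fresh
]
print(purge_expired(store, now=datetime(2012, 1, 5, 12, 0)))
```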

Information professionals interested in questions of data preservation will find difficult challenges for appraisal and curation in participatory personal data. Determining the nature and organization of the cyberinfrastructure that will support participatory personal data will affect many of these questions about privacy, equity, and memory.

Next Steps

Participatory personal data, as objects of inquiry, present a range of interesting questions for the information sciences. Participatory personal data coexist with broad data surveillance by corporations and governments, and may be used towards pernicious ends. But with their emphasis on participation and targeted capture, participatory personal data may simultaneously give people their own ways to use data collection tools and platforms. Much of the social impact of participatory personal data will depend on how data are captured and organized; who has access; whether individuals consent and participate; and how (or whether) data are curated and preserved. These issues are ripe for investigation by IS researchers.

Scholars focused on privacy might design and field test privacy-friendly participatory data systems. They could conduct user studies to evaluate how participatory personal data project participants understand and use their privacy choices. IS researchers could investigate the sensitivity of various participatory personal data, or study how combining multiple data collections might complicate privacy concerns. Or they might use conceptual analysis to answer the challenge of how to incorporate contextual privacy principles into pervasive systems that span social contexts.

Researchers focused on access and equity can analyze power and participation in existing and emerging participatory personal data. They might collect demographic information on the populations participating in, and affected by, participatory personal data projects. IS researchers could interview stakeholders and understand the mix of organizational and informal publics involved in data collection projects. They could question and critique the usefulness of participatory personal data, or they might establish guidelines for making data collection efforts truly participatory.


Investigators in the areas of data management, curation, and preservation should undertake the difficult cyberinfrastructure questions raised by participatory personal data. Ethnographies can reveal the conditions under which participatory personal data project organizers choose open source or proprietary designs. Social network analysis might help to understand the ways that management techniques or curation mechanisms spread. And philosophical inquiry into memory and forgetting can help to answer the normative questions of which data should be actively curated, and which data are better left to digital obscurity.

Finally, these areas of inquiry may proceed simultaneously, but as is clear from the intersections of law, policy, social theory, and data management, these questions span diverse areas within our field and between fields. Continued interdisciplinary conversation can enable findings from conceptual research to impact system design; empirical findings to affect data management; and experience from data management and curation to influence social theory. Conversations in journals such as JASIST are one part of this equation; so too are interdisciplinary conferences and continued funding collaborations. By querying and shaping how participatory personal data are organized and managed; how privacy, consent, and participation are handled in pervasive systems; how participatory personal data affect the balance of power in an information economy; and how such data impact social and institutional memory and forgetting, information scholars can help to shape this emerging information landscape through building systems, constructing information policy, and shaping values in participatory personal data system design.

Acknowledgements

This work is based on material compiled for my doctoral dissertation, "Building Values into the Design of Pervasive Mobile Technologies." Many thanks to my committee: Jeffrey Burke, Deborah Estrin, Christopher Kelty, Ramesh Srinivasan, and chair Christine Borgman. Their ideas, feedback, and guidance have shaped this work immensely.

Thanks also to Jillian Wallis, Laura Wynholds, and Jes Koepfler for comments and suggestions on drafts, and to the anonymous reviewers for their helpful feedback. This work was funded by the National Science Foundation under grant number 0832873.

References

Abraham, T. (1991). Collection policy or documentation strategy: Theory and practice. American Archivist, 54(Winter), 44-52.
Ackerman, M. S., & Cranor, L. F. (1999). Privacy critics: UI components to safeguard users' privacy. Conference on Human Factors in Computing Systems CHI '99 (pp. 258-259). ACM Publications.
Acquisti, A., & Grossklags, J. (2008). What can behavioral economics teach us about privacy? Digital Privacy: Theory, Technologies, and Practices (pp. 363-377). New York and London: Auerbach Publications.
Agrawal, R., & Srikant, R. (2000). Privacy-preserving data mining. 2000 ACM SIGMOD International Conference on Management of Data (pp. 439-450).
Albrechtslund, A. (2008). Online social networking as participatory surveillance. First Monday, 13(3).
Allen, A. L. (2003). Why privacy isn't everything: Feminist reflections on personal accountability. Lanham, Boulder, New York and Oxford: Rowman & Littlefield Publishers, Inc.
Altman, I. (1977). Privacy regulation: Culturally universal or culturally specific? Journal of Social Issues, 33(3), 66-84.
Anthony, D., Kotz, D., & Henderson, T. (2007). Privacy in location-aware computing environments. Pervasive Computing, 6(4), 64-72.
Bannon, L. (2006). Forgetting as a feature, not a bug: The duality of memory and implications for ubiquitous computing. CoDesign, 2(1), 3-15.

Bell, G., & Gemmell, J. (2007). A digital life. Scientific American, (March), 58-65.
Bellotti, V. (1998). Design for privacy in multimedia computing and communications environments. Technology and privacy: The new landscape (pp. 63-98). Cambridge, MA and London: The MIT Press.
Benitez, K., & Malin, B. (2010). Evaluating re-identification risks with respect to the HIPAA privacy rule. Journal of the American Medical Informatics Association, 17(2), 169-177.
Bertot, J. C. (2003). The multiple dimensions of the digital divide: More than the technology "haves" and "have nots". Government Information Quarterly, 20(2), 185-191.
Blanchette, J.-F., & Johnson, D. G. (2002). Data retention and the panoptic society: The social benefits of forgetfulness. The Information Society, 18, 33-45.
Boast, R., Bravo, M., & Srinivasan, R. (2007). Return to Babel: Emergent diversity, digital resources, and local knowledge. The Information Society, 23(5), 395-403.
Boles, F. (1991). Archival Appraisal. New York and London: Neal-Schuman Publishers, Inc.
Borgman, C. L. (2007). Scholarship in the digital age: Information, infrastructure, and the internet. Cambridge, MA and London: The MIT Press.
Borgman, C. L., Wallis, J. C., Mayernik, M. S., & Pepe, A. (2007). Drowning in data: Digital library architecture to support scientific use of embedded sensor networks. ACM/IEEE Joint Conference on Digital Libraries 2007. Vancouver, BC.
Boudreau, J. (2011, July 7). Pondering effects of the data deluge. The Los Angeles Times. Los Angeles, CA. Retrieved from http://www.latimes.com/business/la-fi-big-data20110707,0,7170770.story


Bowker, G. C., & Star, S. L. (2000). Sorting Things Out: Classification and Its Consequences. Cambridge, MA and London: The MIT Press.
boyd, danah, & Hargittai, E. (2010). Facebook privacy settings: Who cares? First Monday, 15(8). Retrieved from http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/3086/2589
Buchanan, T., Paine, C., Joinson, A. N., & Reips, U.-D. (2007). Development of measures of online privacy concern and protection for use on the Internet. Journal of the American Society for Information Science and Technology, 58(2), 157-165.
Burke, J., Estrin, D., Hansen, M., Parker, A., Ramanathan, N., Reddy, S., & Srivastava, M. B. (2006). Participatory sensing. World Sensor Web Workshop. Boulder, CO: ACM.
Byrne, E., & Alexander, P. M. (2006). Questions of ethics: Participatory information systems research in community settings. Proceedings of the 2006 annual research conference of the South African institute of computer scientists and information technologists on IT research in developing countries (pp. 117-126). Somerset West, South Africa: South African Institute for Computer Scientists and Information Technologists.
Camp, L. J., & Connelly, K. (2008). Beyond consent: Privacy in ubiquitous computing (Ubicomp). Digital Privacy: Theory, Technologies, and Practices (pp. 327-343). New York and London: Auerbach Publications.


Campbell, A. T., Eisenman, S. B., Lane, N. D., Miluzzo, E., & Peterson, R. A. (2006). People-centric urban sensing. Proceedings of the 2nd annual international workshop on Wireless internet. Boston, MA: ACM.
Capurro, R. (2005). Privacy. An intercultural perspective. Ethics and Information Technology, 7, 37-47.
Cargo, M., & Mercer, S. L. (2008). The value and challenges of participatory research: Strengthening its practice. Annual Review of Public Health, 29, 325-350.
Castells, M. (1999). Flows, networks, and identities: A critical theory of the informational society. Critical education in the new information age. Lanham, MD: Rowman & Littlefield Publishers, Inc.
Center for Democracy & Technology. (2011). Best Practices for Mobile Applications Developers. Washington, D.C.: Center for Democracy & Technology. Retrieved from http://www.cdt.org/blogs/2112best-practices-mobile-applications-developers
Christin, D., Reinhardt, A., Kanhere, S. S., & Hollick, M. (2011). A survey on privacy in mobile participatory sensing applications. Journal of Systems and Software, 84(11), 1928-1946.
Cohen, D., Fraistat, N., Kirschenbaum, M., & Scheinfeldt, T. (2009). Tools for data-driven scholarship: Past, present, future. Ellicott City, MD: Maryland Institute for Technology and the Humanities. Retrieved from http://mith.umd.edu/tools/?page_id=60
Cohen, J. E. (2008). Privacy, visibility, transparency, and exposure. University of Chicago Law Review, 75(1), 181-201.
Cook, T. (1991). The archival appraisal of records containing personal information: A RAMP study with guidelines. Paris: General Information Programme, United Nations Educational, Scientific and Cultural Organization.
Cooke, B., & Kothari, U. (2001). Participation: The New Tyranny? London and New York: Zed Books.


Corburn, J. (2003). Bringing local knowledge into environmental decision making: Improving urban planning for communities at risk. Journal of Planning Education and Research, 22, 120-133.
Cuff, D., Hansen, M., & Kang, J. (2008). Urban sensing: Out of the woods. Communications of the ACM, 51(3), 24-33.
Curry, M. R., Phillips, D. J., & Regan, P. M. (2004). Emergency response systems and the creeping legibility of people and places. The Information Society, 20, 357-369.
Dembosky, A. (2011, June 10). Invasion of the body hackers. FT Magazine. Retrieved from http://www.ft.com/cms/s/2/3ccb11a0-923b-11e0-9e0000144feab49a.html#axzz1RxISmSku
Donner, J., Verclas, K., & Toyama, K. (2008). Reflections on MobileActive 2008 and the M4D landscape. MobileActive.org and Microsoft Research India. Retrieved from http://mobileactive.org/files/DVT_M4D_choices_final.pdf
Elwood, S. (2006). Critical issues in participatory GIS: Deconstructions, reconstructions, and new research directions. Transactions in GIS, 10(5), 693-708.
Estrin, D. (2010). Participatory sensing: Applications and architecture [Internet predictions]. Internet Computing, IEEE, 14(1), 12-42.
Estrin, D., & Sim, I. (2010). Open mHealth architecture: An engine for health care innovation. Science, 330(6005), 759-760.
Fienberg, S. E. (2006). Privacy and confidentiality in an e-commerce world: Data mining, data warehousing, matching and disclosure limitation. Statistical Science, 21(2), 143-154.
Fish, A., Murillo, L. F. R., Nguyen, L., Panofsky, A., & Kelty, C. M. (2011). Birds of the internet: Towards a field guide to the organization and governance of participation. Journal of Cultural Economy, 4(2), 157-187.

Foucault, M. (1979). Discipline and Punish: The Birth of the Prison. (A. Sheridan, Trans.). New York: Vintage Books.
Friedman, B. (Ed.). (1997). Human values and the design of computer technology. CSLI Lecture Notes. Cambridge and New York: Cambridge University Press.
Friedman, B., & Nissenbaum, H. (1997). Bias in computer systems. In B. Friedman (Ed.), Human values and the design of computer technology (pp. 21-40). Cambridge and New York: Cambridge University Press.
Frikken, K. B., & Atallah, M. J. (2004). Privacy preserving route planning. Proceedings of the 2004 ACM workshop on Privacy in the electronic society (pp. 8-15). Washington, DC, USA: ACM. Retrieved from http://portal.acm.org/citation.cfm?id=1029182
Froehlich, J., Chen, M. Y., Consolvo, S., Harrison, B., & Landay, J. A. (2007). MyExperience: A system for in situ tracing and capturing of user feedback on mobile phones. Proceedings of the 5th international conference on Mobile systems, applications and services, MobiSys '07 (pp. 57-70). New York, NY, USA: ACM. Retrieved from http://doi.acm.org/10.1145/1247660.1247670
Galloway, P. (2004). Preservation of digital objects. Annual Review of Information Science and Technology, 38, 549-590.
Ganti, R. K., Pham, N., Tsai, Y.-E., & Abdelzaher, T. F. (2008). PoolView: Stream privacy for grassroots participatory sensing. Proceedings of the 6th ACM conference on embedded network sensor systems (pp. 281-294). Raleigh, NC, USA: ACM. Retrieved from http://portal.acm.org/citation.cfm?id=1460412.1460440


Gentry, E. (2010, October 13). Will butter make you smarter? Introducing Butter Mind... and Coconut Mind. Quantified Self. Weblog. Retrieved July 9, 2011, from http://quantifiedself.com/2010/10/will-butter-make-you-smarter-i/
Gillespie, T. L. (2010). The politics of platforms. New Media & Society, 12(3).
Gray, J., Liu, D. T., Nieto-Santisteban, M., Szalay, A., DeWitt, D. J., & Heber, G. (2005). Scientific data management in the coming decade. SIGMOD Rec., 34(4), 34-41.
Hayes, G. R., Abowd, G., Davis, J., Blount, M., Ebling, M., & Mynatt, E. (2008). Opportunities for pervasive computing in chronic cancer care. Pervasive Computing 2008, Lecture Notes in Computer Science (Vol. 5013, pp. 262-279). Heidelberg: Springer-Verlag.
Hayes, G. R., Poole, E. S., Iachello, G., Patel, S. N., Grimes, A., Abowd, G., & Truong, K. N. (2007). Physical, social and experiential knowledge in pervasive computing environments. Pervasive Computing, 6(4), 56-63.
Hey, T., & Trefethen, A. E. (2005). Cyberinfrastructure for e-Science. Science, 308(5723), 817-821.
Hill, K. (2011, February 25). Adventures in self-surveillance: Fitbit, tracking my movement and sleep. Forbes. Retrieved from http://blogs.forbes.com/kashmirhill/2011/02/25/adventuresin-self-surveillance-fitbit-tracking-my-movement-and-sleep/
Horowitz, C. R., Robinson, M., & Seifer, S. (2009). Community-based participatory research from the margin to the mainstream: Are researchers prepared? Circulation, 119, 2633-2642.
Huey, L., Walby, K., & Doyle, A. (2006). Cop watching in the downtown Eastside. Surveillance and Security: Technological Politics and Power in Everyday Life (pp. 149-165). New York and London: Routledge.
Iachello, G., & Hong, J. (2007). End-user privacy in human-computer interaction. Foundations and Trends in Human-Computer Interaction, 1(1), 1-137.

Iachello, G., Smith, I., Consolvo, S., Chen, M., & Abowd, G. (2005). Developing privacy guidelines for social location disclosure applications and services. Proceedings of the 2005 symposium on Usable privacy and security (pp. 65-76). Pittsburgh, Pennsylvania: ACM. Retrieved from http://portal.acm.org/citation.cfm?id=1073001.1073008
Institute for Applied Autonomy. (2006). Defensive surveillance: Lessons from the Republican National Convention. Surveillance and security: Technological politics and power in everyday life (pp. 167-174). New York and London: Routledge.
John, L. K., Acquisti, A., & Loewenstein, G. (2009). The best of strangers: Context dependent willingness to divulge personal information (Working Paper). Pittsburgh, PA: Carnegie Mellon University. Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1430482
Kang, J. (1998). Privacy in cyberspace transactions. Stanford Law Review, 50, 1193-1294.
Kang, J., Shilton, K., Burke, J., Estrin, D., & Hansen, M. (2012). Self-surveillance privacy. Iowa Law Review, 97.
Ketelaar, E. (2002). Archival temples, archival prisons: Modes of power and protection. Archival Science, 2, 221-238.
Kim, Y., Schmid, T., Charbiwala, Z. M., & Srivastava, M. B. (2009). ViridiScope: Design and implementation of a fine grained power monitoring system for homes. Proceedings of the 11th international conference on Ubiquitous computing (pp. 245-254). Orlando, Florida, USA: ACM. Retrieved from http://portal.acm.org/citation.cfm?id=1620545.1620582
Kinkade, S., & Verclas, K. (2008). Wireless Technology for Social Change: Trends in Mobile Use by NGOs. Washington, DC and Berkshire, UK: The UN Foundation - Vodafone Group Foundation Partnership.

Kreiss, D., Finn, M., & Turner, F. (2011). The limits of peer production: Some reminders from Max Weber for the network society. New Media & Society, 13(2), 243-259.
Lam, M. (2009, April 14). Building a social networking future without Big Brother. Presented at the POMI 2020 Workshop, Palo Alto, CA. Retrieved from http://suif.stanford.edu/%7Elam/lam-pomi-ws09.pdf
Lange, P. G. (2007). Publicly private and privately public: Social networking on YouTube. Journal of Computer-Mediated Communication, 13(1).
Lee, C. A., & Tibbo, H. R. (2007). Digital curation and trusted repositories: Steps toward success. Journal of Digital Information, 8(2). Retrieved from http://journals.tdl.org/jodi/article/viewArticle/229/183
Lenhart, A., & Madden, M. (2007). Teens, privacy and SNS. Washington, DC: Pew Internet & American Life Project.
Lievrouw, L. A., & Farb, S. E. (2003). Information and social equity. Annual Review of Information Science and Technology, 37, 499-540.
Lyon, D. (2001). Surveillance society (1st ed.). Buckingham and Philadelphia: Open University Press.
Madden, M., Fox, S., Smith, A., & Vitak, J. (2007). Digital footprints: Online identity management and search in the age of transparency. Washington, DC: Pew Internet & American Life Project.
Malin, B., & Sweeney, L. (2004). How (not) to protect genomic data privacy in a distributed network: Using trail re-identification to evaluate and design anonymity protection systems. Journal of Biomedical Informatics, 37(3), 179-192.
Marx, G. T. (2002). What's new about the new surveillance? Classifying for change and continuity. Surveillance & Society, 1(1), 9-29.

Mayer-Schoenberger, V. (2007). Useful void: The art of forgetting in the age of ubiquitous computing (Working Paper No. RWP07-022). Cambridge, MA: Harvard University.
McKemmish, S., Gilliland-Swetland, A., & Ketelaar, E. (2005). Communities of memory: Pluralising archival research and education agendas. Archives & Manuscripts, 33(1), 146-174.
Monahan, T. (2006a). Counter-surveillance as political intervention? Social Semiotics, 16(4), 515-534.
Monahan, T. (Ed.). (2006b). Surveillance and security: Technological politics and power in everyday life. New York and London: Routledge.
Monash University School of Information Management and Systems. (2006, June 16). Trust and Technology Project: Building archival systems for Indigenous oral memory. Retrieved from http://www.sims.monash.edu.au/research/eirg/trust
Mun, M., Reddy, S., Shilton, K., Yau, N., Boda, P., Burke, J., Estrin, D., et al. (2009). PEIR, the personal environmental impact report, as a platform for participatory sensing systems research. Proceedings of the International Conference on Mobile Systems, Applications, and Services. Krakow, Poland.
Narayanan, A., & Shmatikov, V. (2008). Robust de-anonymization of large sparse datasets. Proceedings of the 2008 IEEE Symposium on Security and Privacy (pp. 111-125). IEEE.
Nguyen, D. H., & Mynatt, E. (2002). Privacy mirrors: Understanding and shaping socio-technical ubiquitous computing systems. Georgia Institute of Technology.
Nieusma, D. (2004). Alternative design scholarship: Working toward appropriate design. Design Issues, 20(3), 13-24.
Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79(1), 119-158.

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford, CA: Stanford Law Books.
Office for Protection of Research Subjects. (2007, March 30). UCLA investigator's manual for the protection of human subjects. Retrieved from http://www.oprs.ucla.edu/human/manual/TOC
Office of the Secretary of The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research. Department of Health, Education, and Welfare.
Ohm, P. (2009). Broken promises of privacy: Responding to the surprising failure of anonymization (Research Paper No. 09-12). Boulder, CO: University of Colorado Law School. Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1450006
Palen, L., & Dourish, P. (2003). Unpacking privacy for a networked world. Proceedings of CHI 2003 (Vol. 5, pp. 129-136). Ft. Lauderdale, FL: ACM.
Perry, J., Macken, E., Scott, N., & McKinley, J. L. (1997). Disability, inability and cyberspace. In B. Friedman (Ed.), Human values and the design of computer technology (pp. 65-89). Cambridge and New York: Cambridge University Press.
Phillips, D. J. (2002). Negotiating the digital closet: Online pseudonyms and the politics of sexual identity. Information, Communication & Society, 5(3).
Phillips, D. J. (2005). From privacy to visibility: Context, identity, and power in ubiquitous computing environments. Social Text, 23(2), 95-108.
Raento, M., & Oulasvirta, A. (2008). Designing for privacy and self-presentation in social awareness. Personal and Ubiquitous Computing, 12, 527-542.
Rule, J. B. (2004). Toward strong privacy. University of Toronto Law Journal, 54(2), 183-225.

Shapiro, S. (1998). Places and spaces: The historical interaction of technology, home, and privacy. The Information Society, 14(4), 275.
Sheehan, K. B. (2002). Toward a typology of internet users and online privacy concerns. The Information Society, 18(1), 21-32.
Shilton, K. (2010). Participatory sensing: Building empowering surveillance. Surveillance & Society, 8(2), 131-150.
Shilton, K., & Srinivasan, R. (2007). Participatory appraisal and arrangement for multicultural archival collections. Archivaria, 63, 87-101.
Spiekermann, S., & Cranor, L. F. (2009). Engineering privacy. IEEE Transactions on Software Engineering, 35(1), 67-82.
Srinivasan, R. (2004). Knowledge architectures for cultural narratives. Journal of Knowledge Management, 8(4), 65-74.
Srinivasan, R. (2007). Ethnomethodological architectures: Information systems driven by cultural and community visions. Journal of the American Society for Information Science and Technology, 58(5), 723-733.
Srinivasan, R., & Shilton, K. (2006). The South Asian web: An emerging community information system in the South Asian diaspora. Ninth Conference on Participatory Design: Expanding Boundaries in Design (Vol. 1, pp. 125-133). Trento, Italy: ACM.
Suchman, L. (1997). Do categories have politics? The language/action perspective reconsidered. In B. Friedman (Ed.), Human values and the design of computer technology (pp. 91-105). Cambridge and New York: Cambridge University Press.
Swarthout, A. M. (1967). Eavesdropping as violating right of privacy. American Law Reports 3d, 11, 1296. West Group.

Vaz, P., & Bruno, F. (2003). Types of self-surveillance: From abnormality to individuals at risk. Surveillance & Society, 1(3), 272-291.
Waldo, J., Lin, H. S., & Millett, L. I. (2007). Engaging privacy and information technology in a digital age. Washington, DC: The National Academies Press.
Westin, A. F. (1970). Privacy and freedom. New York: Atheneum.
Wolf, G. (2010, April 26). The data-driven life. The New York Times. Retrieved from http://www.nytimes.com/2010/05/02/magazine/02self-measurementt.html?hp=&pagewanted=print
Yao, M. Z., Rice, R. E., & Wallis, K. (2007). Predicting user concerns about online privacy. Journal of the American Society for Information Science and Technology, 58(5), 710-722.
